Laparoscopic Cholecystectomy in Children: The Experience of Two Centers Focusing on Indications and Timing in the Era of "New Technologies"

Background: In children, laparoscopic cholecystectomy (LC) is now considered the gold standard for gallbladder (GB) removal. In the past, hemolytic disorders associated with cholelithiasis represented the most frequent conditions requiring LC; these are being overtaken by cholelithiasis and biliary conditions in overweight or ex-premature children. Aims: This study aims to describe current indications and timing for LC in pediatric patients. Methods: Retrospective study. Data on previous medical therapy, ultrasound, pre- and intraoperative aspects, and histology were collected for patients treated in 2020-2023. Results: In total, 45 patients were enrolled: 15 who underwent urgent surgery and 30 elective cases. The groups differed in terms of obesity rate, symptoms, ultrasound features, and intraoperative status. The most relevant risk factors for surgical complexity were age and pubertal stage, elevated cholestasis indexes, and gallbladder wall thickness >3 mm at ultrasound. GB wall thickening ≥3 mm, ultrasound (US) Murphy sign, fluid collections, and gallbladder distention on ultrasound correlated with high surgical scores. Conclusions: Indications for laparoscopic cholecystectomy in children appear to be evolving because of the changing characteristics of the pediatric population. Patients with overweight/obesity may develop more complex GB diseases. Asymptomatic patients should be considered for surgery after a period of observation, taking age and/or pubertal maturation into account when other risk factors are absent.

Introduction

Laparoscopic cholecystectomy (LC) in children is now considered the gold standard for gallbladder (GB) removal [1,2]. In the past, hemolytic disorders represented the most frequent condition requiring LC. Pediatric cholelithiasis has been progressively increasing in the past decades and, to date, the trend is shifting towards cholelithiasis, biliary pancreatitis, cholecystitis, cholangitis, and, less commonly, biliary dyskinesia [1,3-5]. This trend is probably related to multiple factors, including the spread of childhood obesity and overweight and the survival of critically ill neonates and infants who received long-term medical care (parenteral nutrition and antibiotics) and who present sequelae of congenital malformations (e.g., duodenal atresia, biliary malformations) or severe conditions (e.g., short bowel syndrome) [6-9].

Despite the increased use of LC, many issues still need to be clarified, especially in children. Consensus is lacking regarding surgical timing for symptomatic and asymptomatic patients, and more data on outcomes should be made available. Specific pediatric risk scores are required, since those used for adults have proven unreliable [1,7,10]. The best indications for surgical timing have been previously outlined [7]: a long duration of symptoms, systemic inflammatory signs, previous lithotherapy, and wall thickening ≥3 mm have been described as the major indications for immediate surgery. However, specific timing criteria were unavailable, and clinical and auxological data needed to be included. A recent study shows that the mean age at surgery has increased from 11 to 15.5 years and the mean BMI from 19.2 kg/m² to 23.0 kg/m². Hereditary spherocytosis decreased from 63.6% to 11.8% of indications for cholecystectomy, while the proportion of cholesterol stones increased from 27.3% to 70.6% [8].
In this study, the scoring system proposed by Pelizzo et al. [7] was retrospectively applied to patients with cholecystic disease in order to evaluate its application in clinical practice and to further explore the issue of surgical timing. We also highlight the benefits added by the application of new technologies, such as preoperative virtual reality (VR) three-dimensional (3D) models and indocyanine green (ICG) fluorescent cholangiography, in support of the aim of the study.

Patients

From June 2020 to January 2023, patients admitted to two departments of pediatric surgery (V. Buzzi Children's Hospital, Milan, and ARNAS Civico-Di Cristina-Benfratelli, Palermo) with signs and symptoms of GB disease were enrolled in the study. Clinical data, imaging details, surgical procedures, histological results, and outcomes were recorded. An adaptation of the scoring system previously described [7] was applied, and each patient's severity was determined based on the overall score obtained. The detailed scoring system can be found in the Supplementary Material.

Only data of pediatric patients (age < 18 years) undergoing LC were considered for the analysis. Urgent laparoscopic surgery (ULS) was performed in patients with complicated cholelithiasis, no symptom resolution, and no improvement in biochemical markers of inflammation (according to the 2018 Tokyo Guidelines) within seven days of symptom onset [11]. This group included patients with right upper quadrant mass/pain/tenderness, Murphy's sign, and systemic signs such as fever, elevated C-reactive protein, and elevated white blood cell count. Patients who did not meet these urgency criteria were electively scheduled (ELS, elective laparoscopic surgery) after 3-6 months of conservative treatment with ursodeoxycholic acid (20 mg/kg/day), with clinical examination and US evaluation approximately once a month. Associated hematological disease and multiple stones detected on US were considered indications for surgery in asymptomatic patients.

Data were retrospectively evaluated according to the principles of the Declaration of Helsinki as revised in 2008. Ethical committee approval was not requested because the General Authorization to Process Personal Data for Scientific Research Purposes (Authorization no. 9/2014) states that ethical approval is not needed for retrospective archive studies that use ID codes, preventing the data from being traced back directly to the data subject. The confidentiality of the collected information was ensured according to Regulation (EU) 2016/679 (GDPR) and Legislative Decree no. 101/18.

The primary outcome of this study was to evaluate the application of the Pelizzo scores [7] in patients with cholecystic disease and surgical indications. In support of this primary aim, the secondary outcome included a critical analysis of surgical timing and of the benefits of applying new technologies.
Clinical Data

Data on epidemiology (age, gender, ethnicity), medical and family history, perinatal data (gestational age and birth weight), associated medical conditions, previous medical/surgical therapy, and the onset, type, and duration of symptoms were collected. Weight was evaluated with the patient standing upright in the center of the scale platform (Seca, Hamburg, Germany) [12]. Height was measured using a Harpenden stadiometer with a fixed vertical backboard [12]. BMI was calculated as body weight (kilograms) divided by height (meters) squared. According to the WHO classification, children aged 5-19 years are classified as overweight when the body mass index (BMI) for age and sex is at or above the 85th percentile and below the 97th percentile, and as having obesity when it is above the 97th percentile [13]. Pubertal stages were recorded at the time of surgery, classified according to Marshall and Tanner [14], and grouped as follows: prepubertal/early puberty (PRE/EA-Puberty) = Tanner stages 1-2; middle/late puberty (MI/LA-Puberty) = Tanner stages 3-5.

Radiological Data

All patients underwent sonographic examination for the detection of specific items:
- presence, maximum diameter, location, and mobility of gallstones;
- appearance, volume, and diameters of the gallbladder (GB; e.g., wall thickening, pericholecystic fluid);
- features of the biliary tree (e.g., dilatations and presence of calculi);
- presence of hepatomegaly/splenomegaly or hepatic steatosis;
- presence of lymph nodes in the hepatic pedicle and the sonographic Murphy sign (maximal tenderness from US probe pressure over the GB).

In complicated cases, MRI with cholangiographic sequences was performed before surgery to precisely assess biliary and vascular anatomy. The radiological images were processed to obtain 3D models that could be zoomed, viewed from many viewpoints, and hidden or shown in transparency, allowing focus on specific structures. Free, open-source software was used for image segmentation (https://www.slicer.org, accessed on 23 August 2022), and the 3D models were loaded into a head-mounted display (HMD) (Oculus Quest v.1, META Inc., Menlo Park, CA, USA) [15,16].

Surgical LC

LC was performed with the standard four-trocar technique as previously described [7], by operators with more than five years of experience in the two centers. Surgical details were recorded: the presence of adhesions (> or <50%), the aspect of the gallbladder (distended/contracted, unable to be grasped), the presence of impacted stones, the presence of inflammation signs, and the time to identify the cystic artery/duct (> or <90 min).

In cases of suspected or certain choledocholithiasis, endoscopic retrograde cholangiopancreatography (ERCP) or intraoperative cholangiography with laparoscopic common bile duct exploration (LCBDE) was planned before or during the operation, respectively. ICG fluorescent cholangiography using RUBINA™ technology (KARL STORZ SE & Co. KG, Tuttlingen, Germany) has recently been adopted as surgical guidance to define the extrahepatic biliary anatomy. Patients receive an intravenous ICG injection (0.35 mg/kg) the day before surgery and during surgery, and the ICG near-infrared fluorescence (NIRF) image allows real-time fluorescent visualization of the extrahepatic biliary tree to guide the surgical dissection.
Histological Examination

Pathologists with pediatric experience examined all the removed gallbladders. The specimens were fixed in formalin and embedded in paraffin, and 3-micron sections were cut and stained with hematoxylin and eosin. The histological parameters analyzed were ulcers/erosions, inflammatory cell infiltration, fibrosis, adenomyosis, reactive epithelial hyperplasia, epithelial atrophy, parietal atrophy, intramural micro-lithiasis, and intestinal metaplasia. As for US and surgery, a histopathological severity score was obtained (one or zero points were assigned for the presence or absence of each of the abovementioned histological features, respectively).

Statistical Analysis

The normality of the distribution of the variables was tested with the Shapiro-Wilk test. Categorical variables were described as frequencies and percentages, and continuous variables were expressed as the mean (±standard deviation, SD) or the median and IQR (interquartile range), as appropriate. We used Fisher's exact test to analyze categorical variables, and Student's t-test or the Wilcoxon rank-sum test for continuous variables, as appropriate. Surgical time and surgical risk were considered as dependent variables for two respective univariate linear regression models using the following as predictive variables: age, sex, prematurity, family history of cholelithiasis, obesity, symptoms, cholestasis, hematological diseases, gallbladder wall >3 mm, distended gallbladder on US, US Murphy sign, stone diameter, and gallbladder fluid collections on US. The dependent variables for the three respective logistic regression models were the presence of adhesions >50%, intraoperative inflammation signs, and gallbladder appearance (contraction vs. distension). Univariate variable selection (likelihood-ratio test) using p < 0.25 to select candidates for the multivariable model was performed on the same variables as the linear regression. In building the final model, p < 0.05 was considered statistically significant. Data were analyzed with Stata 18.0 BE (Stata Corporation, College Station, TX, USA).

The mean age at surgery was 12.3 ± 3.3 years (age range 2-16 years). Overall, 29 patients (64.4%) were MI/LA-Puberty; three were born premature and seven had a BMI >85th percentile. Most MI/LA-Puberty patients had multiple stones detected on US (23/28 cases) and US signs such as GB wall thickness >3 mm (11/28 cases) and pericholecystic collections (9/28 cases). During surgery, 17/28 patients had adhesions >50% and 5/28 had impacted stones; 19/28 showed signs of infection, and in 5/28 the operative time to identify the cystic artery/duct was >90 min.

Sixteen patients (35.5%) received surgery at the PRE/EA-Puberty stage. No differences between the two puberty groups were evident in terms of symptoms (p = 0.11), complicated cases needing ERCP (p = 0.39), or postoperative complications (p = 1.00), as shown in Table 1. Seven patients (15.5%) had a BMI for age and sex above the 85th percentile, and most of them were at the MI/LA-Puberty stage (p = 0.032). All overweight patients received ULS, a significant difference compared with the ELS group (p < 0.001). Nevertheless, overweight/obesity was related neither to the need for ERCP (p = 0.30) nor to the development of postoperative complications (p = 0.41), as shown in Table 1. Biochemical parameters were similar in both groups, without differences for any of the considered variables (p > 0.05).
Considering sonographic features, the US Murphy sign (p = 0.009) and gallbladder wall >3 mm (p = 0.016) were more common in the ULS group (Table 2). At the same time, there were no differences regarding the number of stones (p = 0.90), stone diameter (p = 0.13), or gallbladder distension (p = 0.53); the latter was a more frequent sign in children with overweight/obesity compared to normal-weight patients (p = 0.016, Table 2). The univariate analysis showed a highly significant association between fluid collections on US and the surgical risk score (p < 0.001).

A 3D reconstruction of the preoperative MRI was performed in two patients. One of them had undergone surgery in the neonatal period for duodenal atresia and, years later, developed symptomatic pancreato-biliary tree stones. The 3D models clearly showed the stones' disposition and the peculiar anatomy (the choledochal channel ends in the upper duodenal stump with dilatation, and the pancreatic channel drains into the lower duodenal stump). The other case was a boy with stones and genetically based chronic pancreatitis who required multiple endoscopic procedures before LC to obtain biliopancreatic drainage and symptom relief after acute and severe attacks of abdominal pain.

ERCP was attempted before surgery in seven cases (15.5%); it was successful in six cases and technically unfeasible, owing to difficulty reaching the papilla, in one patient, who immediately underwent LC and anterograde cholangiography with papilla dilatation. Intraoperative cholangiography with or without LCBDE was performed as a primary procedure in 13 patients (28.8%). Adhesions >50% were more common in the ULS group (p = 0.05). Other surgical macroscopic features did not differ between ULS and ELS (gallbladder distension/contraction p = 0.46, impacted stones p = 0.41, signs of inflammation p = 0.33, time to identify the cystic artery/duct >90 min p = 0.41). Body weight did not influence the surgical findings. No significant correlation was found between the surgical score and timing (urgent vs. elective), nor between the surgical score and BMI (normal weight vs. overweight/obesity; Table 3 and Figure 1).

Discussion

We applied the Pelizzo scores [7] to patients with cholecystic disease managed in two pediatric surgical centers in order to describe current indications and timing for LC in pediatric patients. Our results show that the main surgical indication was symptomatic cholelithiasis and that a considerable percentage of patients required an urgent approach. Patients with overweight/obesity showed a prevalence of the MI/LA-Puberty stage and mostly received ULS, without differences in complication rates. On the other hand, their histological examinations revealed higher rates of adenomyosis and reactive epithelial hyperplasia. Considering the pubertal stages, MI/LA seems associated with sonographic and surgical severity elements, with no implications regarding symptoms and complications. GB wall >3 mm on US, US Murphy sign, US fluid collections, cholestasis, older age at surgery, and blood diseases can be considered surgical risk factors.

LC is a well-established approach for gallbladder removal in the adult population, and its application in children has been increasing in recent decades [9,17], with cholelithiasis as its most common indication [18-20].
Clinical manifestations of cholelithiasis are highly variable, from completely asymptomatic children (80%) to patients with nonspecific mild symptoms or severe clinical cholecystitis, cholangitis, and pancreatitis [7,21]. This variability can make it difficult to indicate surgery and define its timing [22].

ELS is recommended, without specific urgency, in patients with hemolytic anemia [23]. It is usually performed simultaneously with splenectomy, but it offers advantages even when performed afterward [23]. The association of cholecystectomy and splenectomy probably explains the younger age of the spherocytosis patients in our series. Compared with older patients, the similar rate of complications confirms the procedure's safety.

Surgery should also be scheduled for symptomatic patients who do not respond to medical therapy or in case of complications, whereas a period of conservative management can be proposed for asymptomatic patients. The rationale for waiting is based on the possibility that these patients remain asymptomatic as they grow or that the stones disappear [22,24]. However, they could develop chronic and complicated disease without an apparent clinical picture. A recent study of 22,257 adult patients with asymptomatic gallstones showed that symptoms develop at a rate of approximately 2% per year, especially in the presence of the following risk factors: female gender, younger age, multiple stones, GB polyps, large stones, and chronic hemolytic anemia [25]. Children seem to be subject to the effects of recurrent mild or subclinical episodes of inflammation leading to severe gallbladder damage and pre-cancerous conditions [7]. Still, the definition of timing can be complex [7,9,22].

We should consider that the severity of the pathology is the main driver of therapeutic choices, affecting the results [26-28]. In adults, radiological and clinical parameters define the severity and, thus, the surgical timing [29,30], but they do not apply to children [7]. A recent paper by our coauthors showed that a long duration of symptoms, systemic inflammatory signs, previous lithotherapy, and wall thickening ≥3 mm should be considered indicators of severe forms, whereas age, sex, and history of abdominal surgery are not useful [7]. Our results are consistent with these data. Considering the sonographic signs, we found that GB wall thickening ≥3 mm, US Murphy sign, and fluid collections are associated with more severe forms. In contrast, the characteristics of the stones (diameter and number) seem to be irrelevant. In a previous study, small stones (at least one <5 mm in diameter) increased the risk of developing complicated GB disease, but this association was not confirmed by Kirsaclioglu et al., who identified older age, independent of stone size and etiology, as a risk factor [23,31]. We also found that older age at surgery, cholestasis, hematological diseases, GB wall thickening >3 mm, and GB distension on US are risk factors for a more complex surgical procedure.

Obesity is a common risk factor for cholelithiasis, with rising rates in children [23]. The prevalence of cholelithiasis rises from 0.13-0.3% in the general pediatric population to 2-6.1% in children and adolescents with obesity [23]. Greer et al.
addressed the phenomenon as a "facet of the obesity epidemic" [32]. In our series, patients with overweight/obesity more often required ULS and presented multiple sonographic risk signs. We should take this as a warning to consider overweight/obesity as a complex, multifaceted disease requiring the involvement of many pediatric health practitioners.

As previously demonstrated, LC appears to be a safe approach for both emergency and elective surgery [23]. The presence of multiple adhesions in ULS, as reported, could complicate the procedure and represent a risk factor for developing complications. Our work could not demonstrate correlations between the surgical regimen (elective versus urgent) and surgical times or complication rates, as might have been expected, possibly because of the relatively small cohort size.

In case of complicated GB disease (choledocholithiasis, common bile duct dilatation, gallstone pancreatitis), ERCP is indicated to drain the biliary tree or to remove stones after sphincterotomy [33]. Recent literature suggests performing LCBDE instead of ERCP to provide definitive treatment in a single procedure and to reduce the complications associated with the endoscopic operative approach (the ERCP complication risk is 5-10%) [34-36]. Although previous reports conflict, the pediatric DUCT criteria (common bile duct dilation, US choledocholithiasis, and total bilirubin ≥1.8 mg/dL) seem to estimate the risk of choledochal involvement with high accuracy (>76%), specificity (>78%), and negative predictive value (>79%) [10].

Histological examination of the specimen is essential in caring for children with GB pathologies, in particular to detect metaplasia [7]. Although elective and emergency procedures did not differ in histological parameters, we identified features that were more frequent in overweight/obese patients (adenomyosis and reactive epithelial hyperplasia), confirming that BMI is a risk factor for developing microscopic changes.
Although our experience is limited, recent technological and educational innovations promise to extend treatment options for children with complex GB pathologies. Three-dimensional models used for preoperative surgical simulation help in understanding the complexity of the anatomy [15]. ICG fluorescent cholangiography is a real-time surgical aid to target the anatomy, quickly obtaining the "critical view of safety" [28,37]. A better comprehension of biliary, duodenal, and pancreatic anatomy requires pre- and intraoperative imaging support to reduce the intraoperative risk and improve the patient's outcome. Children with congenital malformations or a suspicion of pancreatic disease and/or malformation should benefit from these technologies.

Some limitations should be acknowledged. First, this study's retrospective nature may introduce biases into the statistical analysis and reduce the amount and type of information available for each patient. Prospective and multicenter studies would improve the quality of the results and support this study's validity. Second, surgical timing was not randomized, and management was based on clinical decisions made by pediatric surgeons, reflecting the real decision-making process but introducing a potential limitation. The effects of overweight/obesity and the correlation with stones should be investigated in a broader range of premature infants to better determine each variable's role. The joint effort must continue toward the validation of clinical and ultrasound scores and the development of new technologies. Identifying clinical and/or radiological risk factors may help better define surgical timing and indications for asymptomatic patients.

Conclusions

Surgical indications for pediatric patients with cholelithiasis include complicated cases and hematologic diseases. Asymptomatic patients may be considered for surgery after an adequate observation period, but the need for an operation in these children has yet to be confirmed. Overweight and obesity may have negative repercussions on surgical procedures and histological results. These data underline the importance of their early management by a dedicated multidisciplinary team and the possible need for early surgery.

Table 1. Clinical features in our series and the four subgroups (PRE/EA-Puberty and MI/LA-Puberty; normal weight and overweight/obese).
Mixed Use of Analytical Derivatives and Algorithmic Differentiation for NMPC of Robot Manipulators

In the context of nonlinear model predictive control (NMPC) for robot manipulators, we address the problem of enabling the mixed and transparent use of algorithmic differentiation (AD) and efficient analytical derivatives of rigid-body dynamics (RBD) to decrease the solution time of the underlying optimal control problem (OCP). Efficient functions for RBD and their analytical derivatives are made available to the numerical optimization framework CasADi by overloading the operators in the implementations made by the RBD library Pinocchio and adding a derivative-overloading feature to CasADi. A comparison between analytical derivatives and AD is made based on their influence on the solution time of the OCP, showing the benefits of using analytical derivatives for RBD in optimal control of robot manipulators.

INTRODUCTION

The computational cost of solving nonconvex optimal control problems (OCP) restricts the real-time implementation of nonlinear model predictive controllers (NMPC). Algorithms that converge to solutions of the underlying OCP in a reduced number of iterations still face the common issue of evaluating expensive, nonlinear functions of the system dynamics and, in general, their derivatives. In NMPC of robot manipulators, this evaluation represents a larger bottleneck than the algebra of the OCP solver, since manipulators are subject to highly nonlinear rigid-body dynamics (RBD) and operating constraints. Optimal control of robot manipulators is a key component in a wide range of industrial applications, e.g. space manipulation (Giordano et al. (2019)) and robotic surgery (Su et al. (2020)). These applications drive a need for fast numerical optimal control, and hence the need for efficient functions for RBD and their derivatives arises. This paper illustrates the potential of using computationally efficient RBD functions and derivatives to reduce the solution time of OCPs arising in NMPC of robot manipulators, without losing flexibility regarding the types of OCPs that can be solved.

⋆ The authors would like to thank Flanders Make SBO MULTIROB: "Rigorous approach for programming and optimal control of multi-robot systems", FWO project G0A6917N of the Research Foundation - Flanders (FWO - Flanders), and KU Leuven-BOF PFV/10/002 Centre of Excellence: Optimization in Engineering (OPTEC) for supporting this research.

Rigid-body dynamics libraries (RBDL), including Robotran (Docquier et al. (2013)), Drake (Tedrake (2019)), RobCoGen (Giftthaler et al. (2017)) and Pinocchio (Carpentier et al. (2019)), implement efficient RBD algorithms. Such libraries usually depend on algorithmic differentiation (AD) tools to obtain derivatives for numerical optimization. Pinocchio, in contrast, implements its own efficient algorithms for analytical derivatives of RBD. Crocoddyl, introduced in Mastalli et al. (2020), is a framework for optimal control of robots that exploits such differentiation in Pinocchio. It implements differential dynamic programming but does not handle constraints in the unified way of sequential quadratic programming (SQP). Other numerical optimization frameworks, such as CasADi (see Andersson et al. (2019)), automatically apply AD to supply a numerical solver with any derivatives it needs. One can easily build OCP frameworks on top of such a generic foundation, e.g. OpenOCL (Koenemann et al. (2019)) and Rockit (Gillis et al. (2020)).
Adding tailored RBD derivatives to the core of a generic framework like CasADi would nevertheless have a limited impact, since they are application specific. Current practice involves either adding AD support to RBDL (see Tedrake (2019); Giftthaler et al. (2017)) or implementing RBDL on top of AD frameworks (see Gjerde Johannessen et al. (2019); Millard et al. (2020)), rather than combining the power of generic AD and tailored derivative routines in RBDL libraries. However, there is still uncertainty about how such a combination contributes to reducing the solution time of nonconvex OCPs.

The contributions of this paper are twofold. It explores and enables the mixed use of AD and analytical derivatives in a particular optimization framework that can emit C code for efficient evaluation. Moreover, it demonstrates the significant contribution of analytical derivatives of RBD to reducing the solution time of OCPs for robotic manipulators. To the best of the authors' knowledge, this is the first implementation that allows the use of both analytical derivatives and AD within a numerical optimization framework in the context of NMPC for robot manipulators. This paper is organized as follows. Notation and preliminary concepts on rigid-body dynamics and optimal control are presented in Section 2. Section 3 explains the differentiation of rigid-body dynamics. Next, the software framework is presented in Section 4. Experiments and results are discussed in Section 5. We close the paper with concluding remarks.

PRELIMINARIES

This section introduces some notation and preliminary concepts on rigid-body dynamics and numerical optimal control. We then motivate the need for efficient expressions for rigid-body dynamics and their derivatives in the context of optimal control problems.

Notation

We define $\nabla f(w) := \left(\frac{\partial f}{\partial w}(w)\right)^\top$ as the gradient, $\nabla_{w_1} f(w) := \frac{\partial f}{\partial w_1}(w)$ as the directional derivative of $f(w)$ along $w_1$, and $H_f(w) := \nabla^2 f(w)$ as the Hessian. The superscripts $c$ and $a$ indicate differentiation via algorithmic or analytical differentiation, respectively.

Rigid-Body Dynamics

Robot manipulators can be represented as kinematic chains, i.e. chains of rigid bodies connected by joints. Joints impose constraints on how a rigid body moves with respect to neighbouring bodies in the chain. For fully actuated systems, RBD can be expressed using the Lagrangian formalism (Murray et al. (1994))

$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \tau + J_c(q)^\top f_{\mathrm{ext}}, \qquad (1)$$

where $q$ stands for the joint position vector, $\dot{q}$ and $\ddot{q}$ are its first- and second-order time derivatives, respectively, $\tau$ is the generalized joint torque, $M$ is the joint-space inertia matrix, $C$ is the Coriolis matrix, $G$ encloses gravity effects, $J_c$ is the contact Jacobian, and $f_{\mathrm{ext}}$ is the stack of external forces. Dependency on $(q, \dot{q}, \ddot{q})$ is dropped henceforth to increase readability.
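As an illustration of how the terms of Eq. (1) are typically evaluated in practice, the following is a minimal sketch using Pinocchio's Python bindings; the sample-model helper and random inputs are illustrative assumptions, and the algorithms named in the comments (RNEA, ABA, CRBA) are introduced in the next paragraph:

```python
# Minimal sketch of evaluating the terms of Eq. (1) with Pinocchio's
# Python bindings; the sample model and random inputs are illustrative.
import numpy as np
import pinocchio as pin

model = pin.buildSampleModelManipulator()   # generic serial manipulator
data = model.createData()

q = pin.randomConfiguration(model)   # joint positions q
v = np.random.rand(model.nv)         # joint velocities (q-dot)
a = np.random.rand(model.nv)         # joint accelerations (q-ddot)

tau = pin.rnea(model, data, q, v, a)    # inverse dynamics (RNEA), O(n_b)
a_fd = pin.aba(model, data, q, v, tau)  # forward dynamics (ABA), O(n_b)
M = pin.crba(model, data, q)            # joint-space inertia (CRBA), O(n_b^2)
# Note: depending on the Pinocchio version, crba may fill only the
# upper triangular part of M.

assert np.allclose(a, a_fd)  # ABA inverts RNEA for consistent inputs
```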
Let us briefly introduce the algorithms used to compute inverse dynamics, forward dynamics and the joint-space inertia matrix. Inverse dynamics ($ID$) computes the torque $\tau$ needed to produce a certain acceleration $\ddot{q}$ for a rigid-body system with given kinematics $(q, \dot{q})$ and subject to $f_{\mathrm{ext}}$. The most efficient algorithm to evaluate the $ID$ equation is the recursive Newton-Euler algorithm (RNEA), see Featherstone (2008). This algorithm avoids the explicit computation of $M$ and exploits the sparsity induced by the kinematic chain. The RNEA has a computational complexity of $O(n_b)$, where $n_b$ is the number of bodies composing the rigid-body system. Analogous to $ID$, forward dynamics ($FD$) computes the acceleration $\ddot{q}$ of a rigid-body system; from (1),

$$\ddot{q} = FD(q,\dot{q},\tau) = M(q)^{-1}\left(\tau + J_c(q)^\top f_{\mathrm{ext}} - C(q,\dot{q})\dot{q} - G(q)\right).$$

The articulated-body algorithm (ABA), generalized in Featherstone (2008), is one of the most efficient algorithms to compute $FD$. It avoids the explicit computation of $M^{-1}$ and has a complexity of $O(n_b)$. The joint-space inertia matrix $M$ is a positive-definite matrix of special interest when determining the ill-conditioning of the RBD or computing $\nabla_q ID$, for instance. It can be computed with the composite-rigid-body algorithm (CRBA), see Walker and Orin (1982), with a complexity of $O(n_b^2)$. Its inverse $M^{-1}$ is relevant when computing $\nabla_\tau FD$ or solving $FD$ naively, and can be computed directly with ABA as shown in Carpentier and Mansard (2018).

Numerical Optimal Control

In this paper we are interested in solving OCPs of the multiple-shooting form

$$\begin{aligned} \min_{x_0,\dots,x_N,\,u_0,\dots,u_{N-1}} \;& \sum_{k=0}^{N-1} \ell(x_k,u_k) + \ell_N(x_N) && (4a)\\ \text{s.t. } \; & x_0 = p, && (4b)\\ & x_{k+1} = \xi(x_k,u_k), \quad k = 0,\dots,N-1, && (4c)\\ & \underline{r} \le r(x_k,u_k) \le \bar{r}, && (4d)\\ & g_N(x_N) \le 0, && (4e)\\ & g(x_k,u_k) \le 0, && (4f) \end{aligned}$$

where $g$ and $g_N$ are path and terminal constraints, respectively. Constraints (4c) arise from the shooting intervals introduced by the multiple-shooting method. In direct optimal control, (4) is converted into an equivalent nonlinear program (NLP) with objective function $f(w)$, equality constraints $h(w) = 0$ and inequality constraints $g(w) \le 0$. Such an NLP can be solved using derivative-based optimization, with first-order methods, i.e. methods requiring the evaluation of up to first-order derivatives, or second-order methods, i.e. methods requiring the evaluation of up to second-order derivatives. A well-known second-order method is the SQP method. This method approximates an optimal solution $w^*$ by iterating on the decision variable as $w_{k+1} = w_k + d_k$, where $d_k$ is the solution of a quadratic program (QP). For every SQP iteration, at least one QP is solved, requiring the evaluation of $\nabla f$, $h$, $\nabla h$, $g$, $\nabla g$ and the Hessian $H_L$ of the Lagrangian of the NLP.
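To make the transcription concrete, here is a hedged sketch of a multiple-shooting OCP of the form (4) in CasADi's Opti interface; the dimensions, toy double-integrator dynamics, bounds and quadratic cost are illustrative assumptions rather than the paper's actual problem:

```python
# Schematic multiple-shooting transcription of an OCP like (4) in CasADi.
import casadi as ca

N, dt = 16, 0.05        # horizon and step length (assumed values)
nx, nu = 4, 2           # toy state and control dimensions

opti = ca.Opti()
X = opti.variable(nx, N + 1)   # states x_0 .. x_N
U = opti.variable(nu, N)       # controls u_0 .. u_{N-1}
p = opti.parameter(nx)         # parameter p: current state estimate

def xi(x, u):
    # placeholder discrete dynamics xi(x_k, u_k): explicit Euler on a
    # double integrator (positions x[:2], velocities x[2:])
    return x + dt * ca.vertcat(x[2:], u)

opti.subject_to(X[:, 0] == p)                              # (4b)
cost = 0
for k in range(N):
    opti.subject_to(X[:, k + 1] == xi(X[:, k], U[:, k]))   # (4c)
    opti.subject_to(opti.bounded(-1, U[:, k], 1))          # (4d)-like bounds
    cost += ca.sumsqr(X[:, k]) + ca.sumsqr(U[:, k])
opti.minimize(cost + ca.sumsqr(X[:, N]))

# SQP-type solver: every iteration solves at least one QP
opti.solver('sqpmethod', dict(qpsol='qrqp'))
opti.set_value(p, [0.5, -0.2, 0.0, 0.0])
sol = opti.solve()
```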
Bottleneck in Dynamics Evaluation

Within the framework of MPC, OCP (4) is solved for a new value of $p$ at every sampling instant. Therefore, the solution time of (4) must be less than a sampling time $T_s$ to allow real-time implementation. For robot manipulators, the state vector is usually defined as $x_k := [q^\top, \dot{q}^\top]^\top$. Common choices for $u_k$ are $u_k := \tau$ and $u_k := \ddot{q}$. When the former is chosen, $\xi(x_k, u_k)$ in (4c) becomes a discretization of $FD$, while $r(x_k, u_k)$ in (4d) is a linear mapping from $u_k$ to $\tau$. Conversely, when $u_k := \ddot{q}$ and assuming the robot manipulator is fully actuated, the differential flatness property can be exploited. Hence, the multiple-shooting constraints are governed by the dynamics of a double integrator, which are cheap to evaluate, while $r(x_k, u_k)$ becomes $ID$. Constraints (4f) and (4e) define path and terminal constraints on the forward kinematics of the robot, which are cheaper to compute than the RBD. Note that, whether $u_k := \tau$ or $u_k := \ddot{q}$, the evaluation of $FD$ or $ID$ and their derivatives may well become a bottleneck. The reason is that the evaluation of such highly nonlinear functions is computationally expensive and is required to compute $h$ and $\nabla h$ for $u_k := \tau$, or $g$ and $\nabla g$ for $u_k := \ddot{q}$, while solving the NLP equivalent to OCP (4).

JACOBIAN OF RIGID-BODY DYNAMICS EXPRESSIONS FOR NUMERICAL OPTIMIZATION

The problem of computing the derivatives of RBD for their use in OCPs is discussed in this section. The first subsection presents the commonly used method of AD. We then give details on the efficient computation of analytical derivatives of RBD and on the complexity of higher-order derivatives.

Algorithmic Differentiation

Algorithmic differentiation (AD) is a method to compute the derivatives of functions by applying Leibniz's chain rule to expression-graph representations of such functions. There are two basic approaches to AD: the forward mode and the reverse mode. For a function $\vartheta(x) : \mathbb{R}^{n_{in}} \to \mathbb{R}^{n_{out}}$, the forward mode computes a Jacobian-times-vector product, i.e. a directional derivative, $\hat{y} := J_\vartheta \hat{x}$ with a seed $\hat{x}$, at a computational cost comparable to evaluating $\vartheta(x)$. The reverse mode computes a Jacobian-transposed-times-vector product $\bar{x} := J_\vartheta^\top \bar{y}$ with a seed $\bar{y}$, also at a cost comparable to evaluating $\vartheta(x)$. By choosing the seeds as slices of a unit matrix, each sweep of the forward mode computes one column of the Jacobian $J_\vartheta \in \mathbb{R}^{n_{out} \times n_{in}}$, while a sweep of the reverse mode computes one row of $J_\vartheta$. Therefore, the computation of $J_\vartheta$ requires $n_{in}$ sweeps of the forward mode or $n_{out}$ sweeps of the reverse mode.

Accurate first-order derivatives of $FD$ or $ID$ can be computed with AD. Both $FD$ and $ID$ are functions mapping from $\mathbb{R}^{3n_b}$ to $\mathbb{R}^{n_b}$, assuming that there are no external forces $f_{\mathrm{ext}}$ acting on the rigid-body system and that $q$ has the same size as its tangent vector $\dot{q}$, e.g. $q$ is not a quaternion. Since $n_{in} = 3n_b > n_{out} = n_b$, the reverse mode of AD is used to differentiate the RBD. Let us define $J^c_{FD}$ and $J^c_{ID}$ as the Jacobians of $FD$ and $ID$, respectively, computed with the reverse mode. The computation of such Jacobians has a complexity of $O(n_b^2)$, due to the $O(n_b)$ complexity of both $FD$ and $ID$ and the need for $n_b$ sweeps of the reverse mode to compute a Jacobian.
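As a small illustration of forward versus reverse sweeps, the following hedged CasADi sketch builds both directional-derivative products for an arbitrary toy function; `jtimes` is CasADi's symbolic Jacobian-times-vector helper:

```python
# Forward- vs reverse-mode directional derivatives in CasADi on a toy
# function (illustrative); jtimes builds Jacobian-times-vector products.
import casadi as ca

x = ca.SX.sym('x', 3)                          # n_in = 3
y = ca.vertcat(ca.sin(x[0]) * x[1], x[2]**2)   # n_out = 2

xhat = ca.SX.sym('xhat', 3)   # forward seed
ybar = ca.SX.sym('ybar', 2)   # reverse seed

fwd = ca.jtimes(y, x, xhat)          # J @ xhat   (one forward sweep)
rev = ca.jtimes(y, x, ybar, True)    # J.T @ ybar (one reverse sweep)

# The full Jacobian costs n_in forward sweeps or n_out reverse sweeps;
# since n_out < n_in here, reverse mode is the cheaper route to J.
J = ca.jacobian(y, x)
demo = ca.Function('demo', [x, xhat, ybar], [fwd, rev, J])
```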
Analytical Derivatives

Contrary to AD, which evaluates both the values and their derivatives for an expression graph built from atomic expressions, the analytical differentiation of RBD operates directly at the level of the spatial operations that describe the dynamics of rigid-body systems (Carpentier and Mansard (2018)). For instance, the time derivative of the rotation matrix $R$ associated with a frame rotating at a speed $\omega$ is given by $\dot{R} = R[\omega]_\times$, where $[\cdot]_\times$ is the skew operator. While from a mechanical point of view this relation seems basic, AD would have to recover it by differentiating each individual element of $R$. Therefore, analytical derivatives are able to exploit the inherent sparsity of the spatial operations to compute the derivatives of both $FD$ and $ID$, or any other related quantities. In other words, the granularity of the operations is shifted from atomic expressions to spatial expressions. In addition, as shown in Carpentier and Mansard (2018), the evaluation complexity of RBD algorithms can also be lowered by exploiting simplifications which appear from the recursion of the dynamic equations themselves, or by exploiting the intimate relations between forward and inverse dynamics derivatives, for instance to skip the evaluation of complex tensorial quantities.

Let us define the Jacobian of $FD$ (respectively $ID$) computed with analytical derivatives as $J^a_{FD}$ ($J^a_{ID}$). The computational complexity of evaluating the partial derivatives of $FD$ ($ID$), as in Carpentier and Mansard (2018), is $O(n_b n_d)$, where $n_d$ is the depth of the kinematic tree. Note that for serial robot manipulators $n_d = n_b$. Consequently, the computational complexity of $J^a_{FD}$ and $J^a_{ID}$ for serial robot manipulators is $O(n_b^2)$. This is the same complexity as that of the Jacobians $J^c_{FD}$ and $J^c_{ID}$ computed with AD. However, analytical derivatives exploit the sparsity in the function by preserving the structured sparsity of the RNEA and by directly differentiating the spatial operators, thus reducing the number of atomic operations in the Jacobian evaluations, and thereby reducing the evaluation time of $J^a_{ID}$ and $J^a_{FD}$.

Higher-order Derivatives

A family of second-order methods requires the computation of second-order derivatives to populate Hessians in the NLP solution algorithm. The computation of $H_L$ requires, for instance, computing the Hessian $H^c_{FD}$ of $\gamma := \lambda^\top FD$, where $\lambda$ are the Lagrange multipliers corresponding to the equality constraints in the NLP. The Hessian of a scalar-valued function can be computed as the Jacobian of the gradient $\nabla\gamma$. Thus, the reverse mode of AD is applied recursively: first to directly compute $\nabla\gamma = J_{FD}^\top \lambda$ as a Jacobian-transposed-times-vector product, and next to compute $J_{\nabla\gamma}$, with a total complexity of $O(n_b)$. Contrarily, if $J^a_{FD}$ has already been computed via analytical derivatives, the Hessian of $\gamma$ is computed as the Jacobian of $\nabla\gamma = (\lambda^\top J^a_{FD})^\top$, where the product $(\lambda^\top J^a_{FD})^\top$ needs to be formed before applying the reverse mode of AD to $\nabla\gamma$. In this case, computing $\nabla\gamma$ has a complexity of $O(n_b^2)$ due to matrix multiplication and transposition, while computing $J_{\nabla\gamma}$ has a complexity of $O(n_b)$.

SOFTWARE FRAMEWORK

Having discussed how to compute derivatives of RBD and their importance in the context of numerical optimal control, let us now consider the software implementation of an interface developed to close the gap between (i) a general framework for numerical optimization and AD and (ii) the efficient implementation of analytical derivatives tailored for application-specific numerical evaluation.

Numerical Optimization Framework - CasADi

CasADi is an open tool for numerical optimization and AD. It is based on a symbolic framework which constructs expression graphs from functions and algorithms. Every node in the graph represents an atomic operation. These graphs are automatically differentiable by performing AD on them. As a numerical optimization framework, CasADi implements algorithms to solve (non)convex OCPs and interfaces other numerical optimization solvers. CasADi also features native support for code generation.
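For instance, the following hedged sketch shows CasADi's code-generation workflow of the kind used in the benchmarks below; the function and file names are illustrative, and the compile command, which mirrors the paper's -O3 setup, runs outside Python:

```python
# Generating and reloading C code for a CasADi function (names assumed).
import casadi as ca

x = ca.SX.sym('x', 7)                        # e.g. a 7-DoF joint vector
f = ca.Function('f', [x], [ca.sin(x) + x**2])

f.generate('f_gen.c')                        # emit self-contained C source
# Outside Python, compile e.g. with:
#   gcc -O3 -march=native -shared -fPIC f_gen.c -o f_gen.so
f_fast = ca.external('f', './f_gen.so')      # load the compiled function
print(f_fast([0.1] * 7))
```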
Rigid Body Dynamics Library - Pinocchio

Pinocchio is an open-source RBDL which features efficient implementations of RBD algorithms such as the RNEA, ABA, and CRBA. It also implements the analytical derivatives of RBD mentioned in Section 3. The algorithms implemented in Pinocchio exploit sparsity, with specific spatial operators for different types of joints, and static polymorphism to reduce their evaluation time, outperforming other RBDLs (Carpentier et al. (2019); Neuman et al. (2019)).

Development of an Interface Framework

CasADi, being a general optimization and AD framework, is not meant to include RBD features in its core. Moreover, the algorithms in Pinocchio are not tailored to be evaluated symbolically. Hence, we built a C++ interface that generates CasADi expression graphs from the RBD algorithms in Pinocchio by using operator overloading (Phipps and Pawlowski (2012)). The interface includes methods to compute analytical derivatives of RBD and to export them as serialized CasADi functions, which can then be imported by any software tool that is compatible with CasADi, such as MPC tools in Python (Lucia et al. (2017)), in MATLAB (Chen et al. (2019)), or in C (Verschueren et al. (2019)). A new feature that allows the user to overload derivatives with custom user-defined derivative expressions was added to CasADi. The feature is implemented as an option called custom_jacobian, settable by the user for any CasADi function. Hereby, CasADi is set to use analytical derivatives for RBD function calls, while the remainder of the constraint computation (e.g. the discretization step) still uses regular AD. This hybrid setup works transparently, with regular CasADi features such as C code generation still fully functional.

RESULTS

Having discussed how to generate computationally efficient RBD derivatives and how to use them within a numerical optimization framework, this section assesses such expressions with respect to AD and their contribution to the solution time of a nonconvex OCP. Each test case in this section is executed under Ubuntu 18.04 on a laptop with an Intel Core i7-8850H CPU. The reported evaluation times are the averages over 100,000 executions of each test case. Each function is code-generated and then compiled using LLVM 9.0.0 with compilation flags -O3 and -march=native, unless stated otherwise.

Benchmark on Dynamics Expressions and Derivatives

We first evaluate the performance of the RBD derivatives on five robot models in terms of the number of atomic operations and the evaluation time. The assessment in terms of atomic operations is highlighted in Table 1. The results show a consistent reduction in atomic operations for most of the Jacobians computed by analytical derivatives with respect to AD. This reduction is mainly due to the sparsity handling in Pinocchio's implementations. Moreover, there is a reduction in the number of expensive operations like sine, cosine, and division in $J^a_{FD}$ and $J^a_{ID}$. For instance, the number of sin and cos operations in both $J^a_{FD}$ and $J^a_{ID}$ is $n_b$, while in $J^c_{FD}$ and $J^c_{ID}$ it is $2n_b$ for all the evaluated robots. Similarly, the number of divisions is reduced for all cases of $J^a_{FD}$ compared to those in $J^c_{FD}$, from a reduction of 25.0% for the double pendulum up to 93.3% for Atlas. Jacobians of $ID$ have no division operations. For detailed information on the count of operation types, see the supplementary material.
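Returning to the custom_jacobian feature described in the interface section above, a hedged sketch of how a user-supplied Jacobian might be attached to a CasADi function follows; the naming convention for the Jacobian function ('jac_' prefix; inputs are the nondifferentiated inputs plus the nominal outputs) reflects our understanding of CasADi's convention and may vary between versions, and the toy function stands in for an RBD quantity from Pinocchio:

```python
# Hedged sketch: overriding a CasADi function's Jacobian via the
# 'custom_jacobian' option (toy function; conventions may vary by version).
import casadi as ca

x = ca.SX.sym('x', 2)
y = x[0] * x[1]                 # stand-in for an RBD output, e.g. ID or FD

# A hand-supplied Jacobian; in the paper's setup this expression would come
# from Pinocchio's analytical derivatives rather than from ca.jacobian.
J = ca.jacobian(y, x)
jac_f = ca.Function('jac_f', [x, ca.SX.sym('out_y')], [J],
                    ['x', 'out_y'], ['jac_y_x'])

f = ca.Function('f', [x], [y], ['x'], ['y'],
                dict(custom_jacobian=jac_f, jac_penalty=0))

# Downstream requests for the Jacobian of f (e.g. inside an OCP
# transcription) are now served by jac_f, while the remainder of the
# expression graph keeps using regular AD.
```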
If we now turn to the comparison in terms of evaluation time, Fig. 1 shows that, for all code-generated cases, analytical derivatives outperform AD in the computation of Jacobians of RBD. $FD$ and $ID$ are included in the comparison as a reference for the evaluation time of their Jacobians. For code-generated functions, the Jacobian $J^a_{FD}$ had, on average, an evaluation time 57.99% lower than $J^c_{FD}$, while the evaluation time of $J^a_{ID}$ was on average 21.20% lower than that of $J^c_{ID}$. Note that the evaluation times of $J^a_{FD}$ and $J^a_{ID}$ from native Pinocchio, i.e. without code generation, are on average 2.35 (respectively 1.42) times slower than their code-generated versions. This highlights the potential for speed-up when combining code generation and appropriate compiler flags. Two unexpected findings stand out from Fig. 1: (i) for the double pendulum, the evaluation time of $J^a_{FD}$ (and $J^a_{ID}$) was lower than that of $J^c_{FD}$ (and $J^c_{ID}$) despite a larger number of operations (see Table 1), and (ii) for robots with a small number of bodies (i.e. $n_b \le 7$) the evaluation time of $J^a_{FD}$ (and $J^a_{ID}$) is comparable to that of evaluating $FD$ (respectively $ID$). We suspect an explanation would involve the exact nature and complexity of each atomic operation (see the supplementary material).

Contour-following on a 7-DoF Robot Manipulator

To assess whether and how analytical derivatives contribute to the solution time of OCPs arising from NMPC of robot manipulators, we present a test case of a contour-following task for a 7-DoF Kinova Gen3 robot. For this test case, an NMPC with an underlying OCP of the form (4) is executed, first without overloading the RBD derivatives, i.e. using AD, and then overloading the RBD derivatives with analytical derivatives. The derivatives in the rest of the OCP are computed with AD. Following the approach in Van Duijkeren (2019), the functions in (4) are defined in terms of a path parameter variable $s$, subject to double-integrator dynamics that augment the state vector as $x := [q^\top, \dot{q}^\top, s, \dot{s}]^\top$; $e_p(q, s)$ is a function for the end-effector's position error, $\rho = 0.01$ is an upper bound on $e_p$, and the prediction horizon is $N = 16$. The speed at which the end-effector follows the reference path is governed by $\dot{s}^{ref}_k$. Recall from Section 2.4 that the selection of $u$ as $\ddot{q}$ or $\tau$ determines the expressions for $\xi$ in (4c) and $r$ in (4d). If $u := [\ddot{q}^\top, \ddot{s}]^\top$ (OCP-ID), then $\xi$ is a discretized representation of a double integrator with appropriate dimensions, while $r$ is $ID$. Contrarily, if $u := [\tau^\top, \ddot{s}]^\top$ (OCP-FD), $\xi$ is a discretized representation of $FD$ stacked with the double-integrator dynamics of $s$, and $r$ is a linear mapping from $u$ to $\tau$. Note that $u$ is augmented with $\ddot{s}$, and the discretized representations in $\xi$ are obtained by applying an explicit Runge-Kutta method. OCP-FD (respectively OCP-ID) is solved with a variation of the SQP method called the sequential convex quadratic programming (SCQP) method (Verschueren et al. (2016)). This method exploits the convexity of so-called convex-over-nonlinear functions (i.e. the composition of an outer convex function with an inner nonlinear function) in the objective and constraints of the OCP. It avoids computing the exact Hessian $H_L$ and instead creates an approximation based on the Jacobian of the inner nonlinear functions and the Hessian of the outer convex function, which is scalar for this problem. Thereby, there is no need to compute second-order derivatives when solving OCP-FD or OCP-ID with SCQP. The interested reader is referred to Verschueren et al. (2016) for more information on the selection of these functions.
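A hedged sketch of the Gauss-Newton-like Hessian approximation that SCQP builds for a convex-over-nonlinear term $\phi(F(w))$ is given below; the toy inner map and the sum-of-squares outer function are illustrative assumptions:

```python
# Hedged sketch of the Gauss-Newton-like Hessian approximation used by
# SCQP for a convex-over-nonlinear term phi(F(w)); toy functions assumed.
import casadi as ca

w = ca.SX.sym('w', 3)
F = ca.vertcat(ca.sin(w[0]) - w[1], w[1] * w[2])   # inner nonlinear map
phi = ca.sumsqr(F)                                 # outer convex function

J = ca.jacobian(F, w)                 # only first-order derivatives needed
H_outer = 2 * ca.SX.eye(F.numel())    # exact Hessian of the outer function
B = J.T @ H_outer @ J                 # SCQP Hessian approximation of phi(F)

# B is positive semidefinite by construction, whereas the exact Hessian
# used by plain SQP also needs second-order derivatives of F.
H_exact = ca.hessian(phi, w)[0]
```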
We use QRQP (Andersson et al. (2019)) to solve the inner QP subproblems arising from the SCQP method. Fig. 2 presents the comparison, in terms of evaluation time, of OCP-FD and OCP-ID solved with and without analytical derivatives of RBD overloading their AD counterparts. The figure shows a clear trend: the OCP solutions using analytical derivatives of RBD are faster than those depending fully on AD. In fact, the solution of OCP-FD with analytical derivatives of RBD is 1.29 times faster than OCP-FD with AD on RBD, while OCP-ID with analytical derivatives of RBD is only 1.09 times faster than its AD counterpart. These results are consistent with those from Fig. 1, where the evaluation time of $J^a_{FD}$ differs from that of $J^c_{FD}$ in a greater proportion than the evaluation time of $J^a_{ID}$ differs from that of $J^c_{ID}$.

We also compare the computation of the Hessians of $\gamma_{FD} := \lambda^\top FD$ and $\gamma_{ID} := \lambda^\top ID$, as required in the solution of OCP-FD and OCP-ID with SQP. Unlike SCQP, SQP does require the computation of second-order derivatives due to the exact Hessian computation of $H_L$. The comparison of $H^a_{FD}$, $H^c_{FD}$, $H^a_{ID}$ and $H^c_{ID}$, in terms of both evaluation time and number of atomic operations, is shown in Table 2. As Table 2 shows, $H^c_{FD}$ and $H^c_{ID}$, computed with AD, have fewer atomic operations and a faster evaluation than $H^a_{FD}$ and $H^a_{ID}$, whose first-order derivatives are obtained with analytical derivatives. This result is expected, since the computation of $H^a_{FD}$ and $H^a_{ID}$ based on the precomputed Jacobian $J^a_{FD}$ ($J^a_{ID}$) has a complexity of $O(n_b^2)$, while the computation of $H^c_{FD}$ and $H^c_{ID}$ based solely on AD has a complexity of $O(n_b)$, as shown in Section 3.3.

CONCLUSION AND FUTURE WORK

In this paper we have shown the benefits of using computationally efficient functions for both RBD and their derivatives when aiming to reduce the solution time of OCPs involving robot manipulators. Relevant state-of-the-art implementations from a numerical optimization framework and an RBDL were combined and enhanced to allow the transparent use of analytical derivatives of RBD in OCP solution algorithms, without excluding the use of AD for the remainder of the functions in the algorithms. The results of this study indicate that using tailored, analytical derivatives of RBD contributes substantially to reducing the solution time of OCPs arising in the context of NMPC of robot manipulators. We showed, however, that computing second-order derivatives of RBD by recursively applying AD leads to more efficient functions compared to applying AD to first-order analytical derivatives. Future work should focus on the implementation of efficient second-order analytical derivatives of RBD, a quantitative comparison between the presented framework and other optimal control frameworks using RBD, and benchmarks of the effect of analytical derivatives within different derivative-based optimization algorithms.

Fig. 2. Comparison of the evaluation time of OCP-FD and OCP-ID with and without using analytical derivatives on RBD.

Table 1. Number of atomic operations in forward and inverse dynamics functions and their Jacobians computed both with AD ($J^c_{FD}$, $J^c_{ID}$) and analytical derivatives ($J^a_{FD}$, $J^a_{ID}$).

Table 2. Evaluation time and number of atomic operations of the Hessians of $\gamma_{FD}$ and $\gamma_{ID}$ computed both with AD ($H^c_{FD}$, $H^c_{ID}$) and analytical derivatives ($H^a_{FD}$, $H^a_{ID}$).
Kidney Cancer and Potential Use of Urinary Extracellular Vesicles

Kidney cancer is the 14th most common cancer globally. The 5-year relative survival rate of kidney cancer at a localized stage is 92.9%, and it declines to 17.4% in the metastatic stage. Currently, the most accurate method of diagnosis is tissue biopsy. However, the invasive and costly nature of biopsies makes them undesirable in many patients. Therefore, novel biomarkers for diagnosis and prognosis should be explored. Urinary extracellular vesicles (uEVs) are small vesicles (50-200 nm) in urine carrying nucleic acids, proteins and lipids as their cargos. These uEV cargos can provide a non-invasive alternative for monitoring kidney health. In this review, we summarize recent studies investigating the potential use of uEV cargos as biomarkers in kidney cancer for diagnosis, prognosis and therapeutic intervention.

INTRODUCTION

Kidney cancer is the 14th most common cancer globally and one of the top ten most common cancers in males. According to GLOBOCAN 2020, there were 431,288 new cases of kidney cancer and 179,386 deaths. The incidence and mortality rates of kidney cancer (per 100,000) are 6.1 and 2.5 in males, and 3.2 and 1.2 in females, respectively [1]. Notably, the 5-year relative survival rate of kidney cancer at a localized stage is 92.9% and declines sharply to 17.4% in the metastatic stage [2]. Different types of kidney cancer are classified based on histology and require different targeted therapies. Therefore, novel biomarkers for diagnosis and prognosis should be investigated to improve the survival rate of kidney cancer.

Extracellular vesicles (EVs) are lipid-bilayer membrane-bound particles which contain abundant biological information (nucleic acids, proteins, metabolites and lipids). They come in various sizes, including exosomes (50-200 nm), ectosomes (100-1,000 nm) and apoptotic bodies (50-5,000 nm). Body fluids such as blood, plasma, serum and urine, as well as cell culture media, are rich sources of EVs [3]. Recent studies have reported that EVs can be taken up from donor cells by recipient cells and are considered a new tool for intercellular communication [4]. Owing to the membrane of EVs, their cargos are protected from degradation by proteases and other enzymes. This protection enables the cargos to be delivered to the recipient cell or organ. Plasma-derived exosomal protein profiles exhibit unique patterns of cargos that allow primary tumors to be classified. These unique patterns of plasma-derived EVs may be utilized to predict the tumor origin in patients [5]. Furthermore, EVs contain biomarkers for predicting future sites of metastasis. Therapeutic approaches can include targeting EVs and inhibiting their uptake by specific organs, targeting EV-induced changes in future metastatic sites, and using EVs as a drug delivery system [6]. The emerging potential role of EVs in diagnosis and therapy with high sensitivity has led to increased interest in their investigation.
EXTRACELLULAR VESICLES (EVs) IN KIDNEY CANCER

Renal cell carcinoma (RCC) is the most common type of kidney cancer in adults. It ranks as the third most common urological cancer, following prostate and bladder cancer. RCC starts in the renal tubules that clean the blood and produce urine. In addition, RCC in the later stages frequently disseminates to other organs, i.e., bones, lungs, or brain. Histopathologically, the most common subtypes of RCC are clear cell (75%-85%), papillary (10%-15%), and chromophobe (5%-10%) renal cell carcinoma. Clear cell renal cell carcinoma (ccRCC) has the lowest survival rate among these prevalent subtypes [7]. The common metastatic sites of ccRCC are the lungs (54%), bone (18%), lymph nodes (16%) and liver (6%) [8].

Kidney surgery is the gold-standard intervention to manage localized kidney cancer. This includes partial nephrectomy, which removes only the cancerous portion of the kidney, and radical nephrectomy, which removes the entire kidney [9]. Further treatments for kidney cancer comprise radiation therapy, chemotherapy, targeted medicines, cryoablation, radiofrequency ablation and microwave ablation [10-12]. Moreover, in an effort to gain insight into targeted therapy, engineered EVs have shown potential as effective vehicles against RCC. TRAIL (TNF-related apoptosis-inducing ligand)-engineered MSC-derived EVs showed a significant effect on TRAIL-resistant renal cancer cell lines, e.g., RCC10 and HA7-RCC [13]. Mesenchymal stem cell-derived EVs have a mild effect on renal cancer, enhancing apoptosis and preventing proliferation [14]. Currently, the diagnosis of kidney cancer comprises physical examination, urine tests, blood tests, intravenous pyelogram, CT scan, ultrasound and biopsy [9]. RCC raises great concern due to its high metastatic rate, mortality rate, increasing incidence and therapeutic resistance. Diagnosing a solid tumor becomes challenging in circumstances of unconventional tumor cell patterns or limited tissue samples [15].

Several pioneering studies have shown the potential of EVs in RCC diagnosis. The notable markers CA9, CD70 and CD147, which are expressed in ccRCC tumor tissues, are also identified in secreted EVs. Expression of these proteins in EVs validates their origin from the primary kidney tissue, and they can be reliable biomarkers for less invasive and tumor-specific diagnostic methods [16]. The cargos of EVs derived from clear cell RCC, papillary RCC (pRCC) and benign kidney cell lines have unique signatures, so they can be used to discriminate not only RCC subtypes but also RCC from benign renal cells. Twenty and thirty-four exosomal proteins are exclusively enriched in EVs released from ccRCC and pRCC, respectively. Exosomal mRNAs of EPCAM, PRKCZ, PXDN, CXADR, EPS8L1, HOXA7, LAD1, MYO1D, ROCK2, and SLC35A3 are unique to EVs of benign renal cells, but not ccRCC [17]. In contrast, the epithelial tumor cell marker EpCAM is heterogeneously expressed in both normal tubular and ccRCC samples [16]. Moreover, CDH2, COL7A1, FGFR2, BMPR1B, HDHD3, ICAM1, KIAA1462, and PFKFB4 mRNAs are found only in ccRCC-derived EVs [17].
Moreover, exosomal miR-210 is upregulated in ccRCC patients compared to healthy controls; in particular, high expression of this miRNA is significantly associated with patients at T3/T4 tumor stage, Fuhrman grade III/IV and metastasis [24]. In addition, exosomal miR-210 is significantly elevated in the renal cell lines HK-2, 786-O and SN12-PM6 under hypoxic conditions induced by CoCl2. miR-210 has also proven to be a good prognostic biomarker for monitoring recurrence after primary tumor resection. Indeed, miR-210-3p, which is upregulated in RCC tissue, is present at high levels in the serum and urine of RCC patients, and decreases significantly in post-operative patients' urine within a month [24-26]. Nakada et al. have demonstrated that HIF1α protein accumulation induces miR-210 expression, which subsequently suppresses E2F transcription factor 3 and causes centrosome amplification and aneuploidy in ccRCC cell lines [27]. Another study also showed that miR-210 silencing in metastatic RCC cells deregulates the HIF1α protein [28]. Furthermore, miR-210-5p is a downstream target of exosomal circular RNA_400068, which is isolated from Caki-1 and Caki-2 cell-derived EVs (ccRCC cell lines) and acts as a tumor suppressor in RCC [29].

Long non-coding RNAs such as exosomal lncARSR and lncRNA IGFL2-AS1 facilitate sunitinib resistance in RCC cells. Both of these lncRNAs also transform sunitinib-sensitive cells into resistant cells. Hence, EVs are an effective delivery package that disseminates drug resistance in advanced RCC. These lncRNAs might serve as prognostic indicators and as potential therapeutic targets in chemotherapeutic resistance [30, 31].

URINARY EXTRACELLULAR VESICLES (UEVS) IN KIDNEY CANCER

Urinary extracellular vesicles (uEVs), which originate from the bladder, prostate and kidney, have attracted intense investigation since uEVs reflect the pathology of the kidney [32, 33]. First- and mid-stream urine is collected as an appropriate source for EV analysis [34]. uEVs are isolated by several methods such as ultracentrifugation, chemical precipitation, size exclusion chromatography and ultrafiltration [35]. Tamm-Horsfall protein (THP) is abundant in urine and can trap uEVs. Treatments such as dithiothreitol (DTT)/urea, which release trapped EVs, enhance the yield of uEVs [36, 37]. Transmission electron microscopy (TEM) and nanoparticle tracking analysis (NTA) are utilized to identify the morphology and size of EVs. NTA is preferable to TEM for determining the size distribution of EVs, because EVs usually coagulate and form bundles on the carbon-coated copper grid [16]. EV markers are characterized by immunoblotting. CD63 was found to be a representative exosomal marker for RCC cell lines, e.g., 786-O, 769-P, ACHN, Caki-2, Caki-1 and RCC53, due to its stable expression compared with the other exosomal markers CD9 and CD81, which show variable expression among RCC cell lines [16]. Thus, anti-CD63 nanobodies have been applied for efficient isolation of EVs from urine with high purity [38]. CANX was identified as a negative EV marker for RCC cell lines by spatial proteomics analysis [16, 17]. Indeed, human renal cancer tissue-derived EVs are enriched in CD63, CD81 and flotillin-1 [3]. Notably, clinical urine samples also contain bacteria. These bacteria are a known source of bacterial EVs and can interfere with analysis results. Furthermore, bacterial EVs induce cytokine secretion by renal cells [39]. This implies that the storage of urine samples should be rigorously considered.
In kidney cancer, examining uEVs is non-invasive compared with tissue biopsy and allows longitudinal monitoring of the condition of the disease (Table 1). The contents of uEVs are also identified in the tissue of origin [16]. Studies have shown that, compared to serum miRNAs, urinary miRNAs provided a stronger signature for acute kidney injury caused by oxalic acid poisoning [48]. Secreted EVs are comparable between human urine and various immortalized human kidney cell lines, e.g., podocytes, glomerular endothelial, mesangial and proximal tubular cells. This suggests that in vitro experiments may imitate the in vivo condition [49].

Additionally, lncARSR enhances sunitinib resistance by competitively binding miR-34 and miR-449, which facilitates upregulation of AXL/c-MET and the activation of STAT3, AKT and ERK signaling in resistant RCC cells [31]. Low levels of the exosomal shuttle RNAs GSTA1, CEBPA and PCBD1 relative to healthy controls are well defined in ccRCC patients, while these three genes are highly expressed in non-ccRCC. One month after nephrectomy in ccRCC patients, these exosomal shuttle RNA levels recover [34].

uEV-derived miR-204-5p is detected at high levels in both 20- and 40-week-old Xp11 translocation RCC (tRCC) mice relative to control mice. This upregulated miR-204-5p is additionally observed in human Xp11 tRCC cell lines compared to normal cells, which is caused by overexpression of the PRCC-TFE3 fusion gene; the comparable level of miR-204-5p at 20 and 40 weeks of age suggests that uEVs can serve as biomarkers for early diagnosis of patients with Xp11.2 tRCC [43]. miR-224-5p is significantly upregulated in both uEVs and tissue from RCC patients compared to healthy controls. miR-224-5p stabilizes PD-L1 (programmed death-ligand 1) expression via directly suppressing the gene encoding cyclin D1 (CCND1). The study elucidated the mechanism by which miR-224-5p promotes resistance to T cell-dependent toxicity and metastasis via EV transmission between RCC cells [44]. Cancer metastasis is the major cause of death of cancer patients and is considered a hallmark of tumor progression. To invade, resist apoptosis and disseminate, carcinoma cells must lose their epithelial phenotypes and detach from epithelial sheets while gaining mesenchymal characteristics. This reversible process, called the epithelial-mesenchymal transition (EMT), is involved in wound healing, embryogenesis and inflammation [53]. Podocytes and the proximal tubular cell line HK-2 develop EMT under renal damage conditions. In addition, these cells specifically exhibit elevated levels of miR-145 and miR-126 in EVs, in accordance with uEVs from diabetic nephropathy patients, leading to EMT progression [42].

Small RNA sequencing of uEVs of ccRCC patients shows a significantly lower level of miR-30c-5p in ccRCC compared to healthy individuals. Indeed, miR-30c-5p is a specific biomarker for RCC owing to its differential expression between RCC patients and healthy controls, whereas its expression is not altered in bladder or prostate cancer. The AUC, sensitivity and specificity of miR-30c-5p in the diagnosis of ccRCC are 0.8192 (95% confidence interval 0.7388-0.8996, p < 0.01), 68.57% and 100%, respectively. Indeed, miR-30c-5p directly binds and suppresses the heat shock protein HSPA5, which promotes ccRCC progression [45].
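Diagnostic figures like those quoted for miR-30c-5p (AUC, sensitivity at a fixed specificity) come from a receiver operating characteristic analysis of the biomarker's expression values. As a minimal illustration of how such numbers are computed, the following Python sketch (not from the review; the expression values are synthetic and all names are hypothetical) builds a ROC curve for a down-regulated marker and reads off sensitivity at 100% specificity:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(1)
# hypothetical expression values: the marker is lower in ccRCC than in controls
controls = rng.normal(loc=1.0, scale=0.3, size=40)
patients = rng.normal(loc=0.6, scale=0.3, size=35)

y = np.r_[np.zeros(40), np.ones(35)]   # 1 = ccRCC
score = -np.r_[controls, patients]     # lower expression -> higher risk score

fpr, tpr, thresholds = roc_curve(y, score)
print("AUC =", auc(fpr, tpr))
# sensitivity at 100% specificity (false positive rate exactly zero)
print("sensitivity @ spec = 100%:", tpr[fpr == 0].max())
```

The reported confidence interval for the AUC would in practice be obtained by bootstrapping or by a normal approximation on top of such a computation.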
DISCUSSION

Urine diagnostics has limitations due to contamination by many factors and the short-term stability of nucleic acids, but urine EVs and their contents retain high integrity across different storage temperatures [39, 56-58]. Since EVs produced by cells are membranous, the information they carry is protected and accurate, which facilitates the application of uEVs in kidney cancer diagnosis and prognosis. To achieve better outcomes, combining EV contents with other information would improve the sensitivity and specificity of discrimination between cancer patients and healthy participants. Even though a large amount of research has identified many potential markers, these biomarkers still need to be validated for clinical application. Further evaluation is required for the specificity of EVs related to kidney cancer, since experimental models or sample sizes are limited. Other concerns for optimizing uEV utilization in biomarker discovery for kidney cancer are normalization, quantification and characterization in spot urine. There are several normalization approaches to compare uEV biomarkers among individuals, such as urine creatinine, nephron mass or uEV excretion rate, total urine protein and albumin [37, 58, 59]. Despite these limitations, uEVs are a promising and applicable biomarker resource and could revolutionize clinical diagnosis, prognosis and treatment of kidney cancer patients in the future.

TABLE 1 | Studies of urinary extracellular vesicles' cargos in kidney cancer. (a Differential expression or modulation of the cargos in uEVs of kidney cancer compared to healthy controls. b Receiver Operating Characteristic (ROC) curve.)
2024-05-26T15:35:31.669Z
2024-05-23T00:00:00.000
{ "year": 2024, "sha1": "9c2dd4f4d2c598b5dcdb90745e3c3e11734db010", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/or.2024.1410450/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7380f92bc59986c871f0500b290f00a69d3446fb", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
7559483
pes2o/s2orc
v3-fos-license
Tail Bounds for the Stable Marriage of Poisson and Lebesgue

Let \Xi be a discrete set in R^d. Call the elements of \Xi centers. The well-known Voronoi tessellation partitions R^d into polyhedral regions (of varying volumes) by allocating each site of R^d to the closest center. Here we study allocations of R^d to \Xi in which each center attempts to claim a region of equal volume \alpha. We focus on the case where \Xi arises from a Poisson process of unit intensity. It was proved in math.PR/0505668 that there is a unique allocation which is stable in the sense of the Gale-Shapley marriage problem. We study the distance X from a typical site to its allocated center in the stable allocation. The model exhibits a phase transition in the appetite \alpha. In the critical case \alpha=1 we prove a power law upper bound on X in dimension d=1. It is an open problem to prove any upper bound in d\geq 2. (Power law lower bounds were proved in math.PR/0505668 for all d). In the non-critical cases \alpha<1 and \alpha>1 we prove exponential upper bounds on X.

Introduction

The following model was studied in [3]. Let d ≥ 1. We call the elements of R^d sites. We write |·| for the Euclidean norm and L for Lebesgue measure or volume on R^d. Let Ξ ⊂ R^d be a discrete set. We call the elements of Ξ centers. Let α ∈ [0, ∞] be a parameter, called the appetite. An allocation (of R^d to Ξ with appetite α) is a measurable function ψ : R^d → Ξ ∪ {∞, ∆} such that Lψ^{-1}(∆) = 0 and Lψ^{-1}(ξ) ≤ α for all ξ ∈ Ξ. We call ψ^{-1}(ξ) the territory of the center ξ. We say that ξ is sated if Lψ^{-1}(ξ) = α, and unsated otherwise. We say that a site x is claimed if ψ(x) ∈ Ξ, and unclaimed if ψ(x) = ∞. The following definition is an adaptation of that introduced by Gale and Shapley [2].

Definition of stability. Let ξ be a center and let x be a site with ψ(x) ∉ {ξ, ∆}. We say that x desires ξ if |x − ξ| < |x − ψ(x)| or x is unclaimed. We say that ξ covets x if ξ is unsated, or if |x − ξ| < |x′ − ξ| for a set of sites x′ ∈ ψ^{-1}(ξ) of positive volume. We say that a site-center pair (x, ξ) is unstable for the allocation ψ if x desires ξ and ξ covets x. An allocation is stable if there are no unstable pairs. Note that no stable allocation may have both unclaimed sites and unsated centers. ✸

Now let Π be a translation-invariant, ergodic, simple point process on R^d, with intensity λ ∈ (0, ∞) and law P. Our main focus will be on the case when Π is a Poisson process of intensity λ = 1. The support of Π is the random set [Π] = {z ∈ R^d : Π({z}) = 1}. We consider stable allocations of the random set of centers Ξ = [Π]. In [3] it was proved that for any ergodic point process Π with intensity λ ∈ (0, ∞) and any appetite α ∈ (0, ∞) there is an L-a.e. unique stable allocation Ψ = Ψ_Π from R^d to [Π]. Furthermore we have the following phase transition phenomenon. (i) If λα < 1 (subcritical) then a.s. all centers are sated but there is an infinite volume of unclaimed sites. (ii) If λα = 1 (critical) then a.s. all centers are sated and L-a.a. sites are claimed. (iii) If λα > 1 (supercritical) then a.s. not all centers are sated but L-a.a. sites are claimed. See Figure 1 for an illustration. For further information and more pictures see [3]. The critical model was applied in [4] to the construction of certain shift-couplings.
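Because sites rank centers by distance and centers rank sites by distance, the stable allocation of a finite configuration can be computed by Gale-Shapley deferred acceptance. The following Python sketch is not from the paper: it is a toy discretization on an interval with hypothetical parameters, ignoring boundary effects, in which each grid cell plays the role of a site of volume dx and each center receives a capacity of α/dx cells:

```python
import numpy as np
import heapq

def stable_allocation(centers, cells, capacity):
    """Deferred acceptance: cells propose to centers in order of distance;
    each center keeps its `capacity` closest proposers, rejecting the rest."""
    # preference order of centers for each cell (nearest first)
    order = np.argsort(np.abs(cells[:, None] - centers[None, :]), axis=1)
    nxt = np.zeros(len(cells), dtype=int)      # next preference index per cell
    held = [[] for _ in centers]               # per-center heap of (-distance, cell)
    assign = -np.ones(len(cells), dtype=int)   # -1 means unclaimed
    free = list(range(len(cells)))
    while free:
        c = free.pop()
        if nxt[c] == len(centers):
            continue                           # rejected everywhere: stays unclaimed
        j = order[c, nxt[c]]
        nxt[c] += 1
        heapq.heappush(held[j], (-abs(cells[c] - centers[j]), c))
        assign[c] = j
        if len(held[j]) > capacity:            # center over capacity:
            _, worst = heapq.heappop(held[j])  # reject its farthest held cell
            assign[worst] = -1
            free.append(worst)
    return assign

# toy critical example: intensity lam, appetite alpha = 1/lam
rng = np.random.default_rng(0)
lam, n_cells = 50, 5000
centers = rng.uniform(0.0, 1.0, size=rng.poisson(lam))
cells = (np.arange(n_cells) + 0.5) / n_cells
capacity = round((1.0 / lam) * n_cells)        # appetite in cell units
a = stable_allocation(centers, cells, capacity)
claimed = a >= 0
X = np.abs(cells[claimed] - centers[a[claimed]])  # site-to-center distances
print(f"claimed fraction {claimed.mean():.3f}, max distance {X.max():.4f}")
```

In the critical example nearly every cell ends up claimed, matching case (ii) of the phase transition; taking capacity below or above the critical value reproduces the unclaimed-sites and unsated-centers regimes of cases (i) and (iii).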
While the results in (i)-(iii) above suggest the subcritical / critical / supercritical terminology, the typical signature of a critical phenomenon in statistical physics is exponential decay (of correlations, cluster sizes, or large deviation probabilities) in the subcritical and supercritical regimes, and subexponential decay (usually given by a power law) at criticality. We will establish such a phenomenon for the stable allocation model when the centers are distributed as a Poisson process. One natural quantity to consider is the distance from the origin to its center:

X = X_Π := |Ψ_Π(0)|,

where we take X = ∞ if 0 is unclaimed. Another natural quantity is the radius of the territory Ψ^{-1}(ξ):

R(ξ) = R_Ψ(ξ) = ess sup_{x ∈ Ψ^{-1}(ξ)} |ξ − x|.

Suppose Π is a Poisson process. We introduce the point process Π* with law P* obtained from Π by adding an extra center at the origin:

[Π*] := [Π] ∪ {0}.   (1)

Define the radius for a typical center thus:

R* := R_{Ψ_{Π*}}(0).

In the subcritical and critical phases, the conditional law of X given that it is finite is dominated by the law of R*; see Lemma 4 in the remarks below.

Theorem 1 (critical power law) Let d = 1 and α = 1, and let Π be a Poisson process with intensity λ = 1. Then for some C < ∞ and all r > 1 we have P*(R* > r) ≤ C r^{−1/17.6}.

Theorem 2 (non-critical exponential bounds) Let Π be a Poisson process with intensity λ = 1. (i) For all d and α > 1 we have E e^{cX^d} < ∞; (ii) for all d and α < 1 we have E* e^{c(R*)^d} < ∞; in both cases for some c = c(d, α) > 0.

We shall also prove the following, which answers a question posed by Lincoln Chayes (personal communication).

Theorem 3 (supercritical rigidity) Let Π be a Poisson process with intensity λ = 1, and consider the stable allocation to the process Π*. As α ↓ 1 we have P*(0 is an unsated center) → 0.

Remarks. In the case when Π is a Poisson process, the process Π* defined by (1) is the Palm process associated with Π; it may be thought of as Π conditioned to have a center at 0. The center at 0 may be thought of as playing the role of a "typical" center in the original process Π. (The Palm process Π* may also be defined for general point processes, but (1) is no longer a correct description; see [3], [5] for more information). The following simple result relates the random variables X and R*.

Lemma 4 (site to center comparison) Let Π be a Poisson process of intensity λ and suppose λα ≤ 1. Then for all r ∈ [0, ∞) we have

P(X > r | X < ∞) ≤ P*(R* > r).   (2)

Thus, in the subcritical and critical phases, upper bounds for R* yield corresponding upper bounds for X. In particular, applying Lemma 4 to Theorem 1 we obtain the following. Let d = 1 and α = 1, and let Π be a Poisson process with intensity λ = 1. Then P(X > r) ≤ C r^{−1/17.6} for all r > 1.

It is immediate from Theorem 5(i) of [3] that R* < ∞ a.s. in all dimensions, but we have been unable to prove any quantitative upper bound on R* or X in the critical case in dimensions d ≥ 2. The following lower bounds for the critical phase were proved in [3]; by Lemma 4 they imply the analogous lower bounds for R*. Let Π be a Poisson process with intensity λ = 1. Applying Lemma 4 to Theorem 2(ii) we obtain the following. Let Π be a Poisson process with intensity λ = 1. For all d and α < 1 we have E(e^{cX^d}; X < ∞) < ∞.

We conjecture that (R*)^d has a finite exponential moment in the supercritical case α > 1 also. It is straightforward to check that the exponential bounds obtained are tight up to the value of c. Indeed, denoting the ball B(x, r) := {y ∈ R^d : |y − x| < r}, consider the event that B(0, r) contains centers lying approximately on a densely-packed lattice, while B(0, 2r) \ B(0, r) contains no centers. Such an event has probability decaying at most exponentially in r^d (for any α), and it guarantees that X > r and R* > r.

Our proof of Theorem 2 does not in general yield any explicit bound on the exponential decay constant c(d, α). However, such a bound is available in each of the following cases: (i) α > 2^d; (ii) α < 2^{−d}; (iii) d = 1 and α ≠ 1.
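Returning briefly to the tightness claim above: the exponential-in-r^d cost of the "empty annulus" part of that event can be made explicit (a short check, not quoted verbatim from the paper). For a Poisson process of intensity 1,

P(Π(B(0, 2r) \ B(0, r)) = 0) = exp(−L(B(0, 2r) \ B(0, r))) = exp(−ω_d (2^d − 1) r^d),

and the densely-packed-lattice part inside B(0, r) costs a further factor of only e^{−Θ(r^d)} (a product of Θ(r^d) small-ball probabilities). Hence P(X > r) ≥ e^{−c′r^d} for some c′ = c′(d, α), matching the upper bounds of Theorem 2 up to the constant in the exponent.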
For the precise statements see Propositions 11 and 12. The proofs of these results are considerably simpler than that of Theorem 2, and are based on standard large deviation bounds for the Poisson process.

To what extent are stable allocations robust to changes in the parameters? There are several natural ways to formulate such a question precisely. We shall prove one such formulation, Theorem 5 below, which roughly speaking states that if we change the set of centers Ξ far away from the origin, then near the origin the stable allocation ψ changes only on a small volume. This result will be a key ingredient in the proofs of Theorems 2 and 3. In order to state Theorem 5 precisely, we need the following conventions (to be used only in Sections 5 and 9). We will work with various sets of centers, and we want to ensure that they have various almost sure properties enjoyed by point processes. We call an allocation ψ to a set of centers Ξ canonical if, for any z ∈ R^d and ζ ∈ Ξ ∪ {∞}, whenever L[B(z, r) \ ψ^{-1}(ζ)] = 0 for some r > 0 then ψ(z) = ζ. We call a set of centers Ξ benign if it satisfies (i) Ξ has an L-a.e. unique stable allocation, and (ii) Ξ has a unique canonical allocation, which we denote ψ_Ξ. By Theorems 1, 3 and 24 of [3], for any ergodic point process Π we know that [Π] is almost surely a benign set. (But it appears hard to describe simple properties of [Π] which ensure that it is benign). If Ξ is benign then ψ_Ξ has all territories open and the unclaimed set open. Furthermore it is the unique minimizer of the set ψ^{-1}(∆) in the class of stable allocations ψ of Ξ with those properties. For sets of centers Ξ_1, Ξ_2, ... and Ξ we write Ξ_n ⇒ Ξ if for any compact K ⊆ R^d there exists N such that for n > N we have Ξ_n ∩ K = Ξ ∩ K. For allocations ψ_1, ψ_2, ... and ψ we write ψ_n → ψ a.e. if for L-a.e. x ∈ R^d we have ψ_n(x) → ψ(x) in the one-point compactification R^d ∪ {∞}.

Theorem 5 (continuity) Fix α. Let Ξ_1, Ξ_2, ... and Ξ be benign sets of centers, and write ψ_n = ψ_{Ξ_n} and ψ = ψ_Ξ for their canonical allocations. If Ξ_n ⇒ Ξ then ψ_n → ψ a.e.

We shall refer extensively to results from the companion article [3]. We adopt the convention that "Theorem I-x" refers to Theorem x of [3].

Site to Center Comparison

Proof of Lemma 4. Our proof applies in the more general context when Π is an ergodic point process of intensity λ ∈ (0, ∞), and Π* is the Palm process (see [3], [5] for more details). First note that by Theorem I-4, λα ≤ 1 implies that all centers are sated a.s. We shall use the mass-transport principle (Lemma I-17). For z ∈ Z^d write Q_z := z + [0, 1)^d, and for u, v ∈ Z^d let

m(u, v) := E L{x ∈ Q_u : Ψ(x) ∈ Q_v and |x − Ψ(x)| > r}.

Using Fubini's Theorem and translation invariance we have

Σ_{v ∈ Z^d} m(0, v) = E L{x ∈ Q_0 : r < |x − Ψ(x)| < ∞} = P(r < X < ∞).

On the other hand, since all centers are sated, and by a standard property of the Palm process,

Σ_{u ∈ Z^d} m(u, 0) = E Σ_{ξ ∈ [Π] ∩ Q_0} L{x : Ψ(x) = ξ, |x − ξ| > r} ≤ αλ P*(R* > r).

Lemma I-17 states that Σ_{v ∈ Z^d} m(0, v) = Σ_{u ∈ Z^d} m(u, 0), and by Proposition I-20 we have P(X < ∞) = αλ, so (2) follows. ✷

One Dimensional Critical Bound

In this section we deduce Theorem 1 from a more general result. Let Π be a stationary renewal process, and let Π* be its Palm version. Write the support [Π*] = {ξ_j : j ∈ Z}, where (ξ_j) is an increasing sequence. Thus (ξ_j) is a two-sided random walk, with ξ_0 = 0. We assume that the i.i.d. increments ξ_j − ξ_{j−1} have mean 1 and finite variance σ². In the (critical) stable allocation with α = 1, our goal is to prove a power law tail bound for R* = R_{Ψ_{Π*}}(0).

Theorem 6 With the assumptions above, there exists a constant C < ∞ that depends on the law of ξ_j − ξ_{j−1}, such that for all r > 1,

P(R* > r) ≤ C r^{−1/17.6}.

Proof of Theorem 1. This is immediate from Theorem 6. ✷
We introduce the function F : R → R defined by

F(x) := Ξ((0, x]) − x for x ≥ 0, and F(x) := −Ξ((x, 0]) − x for x < 0,

where Ξ(·) denotes the number of centers in a set. See Figure 2 for an illustration. Note that for all ξ ∈ Ξ we have F(ξ+) = F(ξ) = F(ξ−) + 1. We will prove that if F has certain properties then R* cannot be too large. On the other hand, we can analyze the behavior of F using the technology of random walks.

Proposition 7 (measure-preserving map) Let Ξ ⊆ R be a discrete set of centers and let ψ be a stable allocation to Ξ with appetite α = 1 in which all centers are sated. For each center ξ, the restriction of F to ψ^{-1}(ξ) is a measure-preserving map into [F(ξ−), F(ξ)] (where on both sides, the measure is L).

We will prove the above proposition from the following.

Lemma 8 Let Ξ ⊆ R be a discrete set of centers and let ψ be a stable allocation to Ξ with appetite α = 1.

Proof. By symmetry, it suffices to prove (i). Suppose on the contrary that there exist a center η ∈ (ξ, t) and a set of positive length of sites s ∉ (ξ, t] with ψ(s) = η. If s > t or η − s > t − η then (t, η) is an unstable pair, so we must have s ∈ [2η − t, ξ) for a.e. such s. But then (s, ξ) is an unstable pair. Next suppose that there exist a center η and a set of positive length of sites s ∈ (ξ, t) with ψ(s) = η. If η < ξ or η − s > s − ξ, then (s, ξ) is an unstable pair, so we must have η ∈ [t, 2s − ξ) for a.e. such s. But then (t, η) is an unstable pair. ✷

Proof of Proposition 7.

Lemma 9 Under the assumptions of Proposition 7, suppose that 0 ∈ Ξ and that x > 0 is such that

Consider two cases. Case I: There exists a site t ∈ D such that ψ(t) < 0 or ψ(t) > 2x. In this case, since there is a center at 0, stability of the pair (t, 0) implies that 0 must be sated by distance r. By Proposition 7, for every center ξ ∈ [0, r] we have

and the identity in (3) implies that for each ξ the above inequality must be an equality. In particular, for ξ = 0 this shows that

The following random walk lemma will provide the tail estimate needed to prove Theorem 6.

Lemma 10 (random walk estimate) Let {X_j}_{j≥1} be i.i.d. random variables, with mean zero and variance σ² < ∞. Suppose X_j ≥ −1 a.s. Write S_k = Σ_{j=1}^{k} X_j and denote

Fix m > 1, and denote M := 2^{3m}. Consider the event

We will first show that

It clearly suffices to show this when D_m on the right-hand side is replaced by D_m ∩ {S_M < 0}. To do so, let τ be the largest integer j ≤ M such that S_j ≥ 0. Denote by τ* the index of the last maximum for the walk in (τ, M], so that S_i ≤ S_{τ*} for i ∈ (τ, τ*], and S_{τ*} > S_j for j ∈ (τ*, M]. Note that S_{τ*} ≥ −1, since all X_j ≥ −1. We will derive (7) from the uniform estimate

Observe that conditional on τ* = ℓ, the sequence {S_{ℓ+i} − S_ℓ}_{i≥0} has the same law as the sequence {S_i}_{i≥0} conditioned to stay negative for the interval i ∈ [1, M − ℓ], and this also applies when we condition further on D_m and on the value of S_ℓ. By [1] Chapter XII formula (8.8), as k → ∞,

and furthermore the probability is non-zero for all k ≥ 1. Therefore, uniformly in ℓ as M = 2^{3m} → ∞. This proves (8) and hence (7). Next, we show that

Indeed by the strong Markov property, it suffices to show that

The latter follows from the arcsine law for the last zero of Brownian motion on an interval ([5] Theorem 13.16). In conclusion, we obtain (5), with any θ such that 1 − θ < (

whence R(0) < M/2 by Lemma 9. Therefore

So far, we have only considered the centers on the positive axis, and our estimates hold uniformly over the positions of centers on the negative axis. By considering the symmetrical events on the negative axis, we obtain

Given any r > 1, we can choose m maximal so that M/2 = 2^{3m−1} ≤ r. Since C_0 M^{−1/17.6} ≤ C r^{−1/17.6}
for a suitable C, the theorem follows. ✷

Explicit Exponential Bounds

In this section we prove exponential upper bounds involving explicit constants in several cases. Denote

q(λ) := (λ − 1 − log λ)/λ,

and note that q(λ) > 0 for λ ≠ 1. Write ω_d for the volume of the unit ball in R^d.

Proposition 11 (explicit bounds for extreme α) Let Π be a Poisson process on R^d with intensity 1. (i) If α > 2^d then P(X > r) decays at least exponentially in r^d as r → ∞; (ii) if α < 2^{−d} then P*(R* > r) decays at least exponentially in r^d as r → ∞.

Proposition 12 (explicit bounds in one dimension) Let d = 1 and let Π be a Poisson process on R with intensity 1. For every α ≠ 1, the tails P(r < X < ∞) and P*(R* > r) decay at least exponentially in r as r → ∞.

(In fact, the proofs of Propositions 11 and 12 give explicit upper bounds on the tail probabilities P(X > r) and P*(R* > r).)

Proof of Proposition 11. We first note a standard large deviation estimate. If Z is a Poisson random variable with mean γ we have:

P(Z ≥ b) ≤ e^{−γ} (eγ/b)^b for b ≥ γ,   (11)
P(Z ≤ a) ≤ e^{−γ} (eγ/a)^a for a ≤ γ.   (12)

Indeed, (11) follows from setting s = b/γ in s^b P(Z ≥ b) ≤ E(s^Z) = e^{γ(s−1)}, and (12) follows similarly from (a/γ)^a P(Z ≤ a) ≤ E((a/γ)^Z). See e.g. [5] Chapter 27. For (i), fix α > 2^d and let Z be the number of centers in [Π] ∩ B(0, r). Then Z is Poisson with mean ω_d r^d. On the event that Z > ω_d r^d 2^d / α, there must be at least one center ξ in [Π] ∩ B(0, r) which is not sated within B(0, 2r). Stability of the pair (0, ξ) then implies that 0 must be allocated to some center no farther than ξ, whence X ≤ |ξ| < r. Thus P(X > r) ≤ P(Z ≤ ω_d r^d 2^d / α); an application of (12) completes the proof. For (ii), fix α < 2^{−d} and let Z′ be the number of centers in [Π*] ∩ B(0, 2r). Then Z′ − 1 is Poisson with mean ω_d 2^d r^d. On the event that Z′ < ω_d r^d / α, there must be (a positive volume of) sites x in B(0, r) that are not allocated to any center in [Π] ∩ B(0, 2r). Stability of such a site x and the center 0 implies that 0 must be sated within the closed ball B(0, |x|), whence R* ≤ |x| < r. Thus P*(R* > r) ≤ P(Z′ − 1 ≥ ω_d r^d / α − 1); an application of (11) completes the proof. ✷

In order to prove Proposition 12, it will be convenient to work with α = 1 and arbitrary intensity λ, and then rescale. Recall the definition of the function F from Section 3. The following states that sites are allocated to centers on the same level of F.

Lemma 13 Let Ξ ⊆ R be a discrete set of centers and let ψ be a stable allocation to Ξ with appetite α = 1. Then for every center ξ and L-a.e. x ∈ ψ^{-1}(ξ) we have F(x) ∈ [F(ξ−), F(ξ)].

Proof. The result is immediate from Lemma 8, since for any interval

Proof of Proposition 12. We start by noting the following standard large deviation estimates. If Π is a Poisson process with intensity λ on R, then for any r, a ≥ 0 we have:

P(Π((0, t]) ≤ t + a for some t ≥ r) ≤ λ^{a} e^{−q(λ)λr} if λ > 1,   (13)
P(Π((0, t]) ≥ t − a for some t ≥ r) ≤ λ^{−a} e^{−q(λ)λr} if λ < 1.   (14)

To prove the above facts, consider the martingale

M(t) := λ^{−Π((0,t])} e^{(λ−1)t}.

If λ > 1, consider the stopping time τ = inf{t ≥ r : Π((0, t]) ≤ t + a}. On the event τ = t, where t < ∞, we have M(τ) ≥ e^{t(λ−1)} λ^{−(t+a)} = λ^{−a} e^{q(λ)λt}. Hence applying the optional stopping theorem to τ ∧ N yields 1 = E M(τ ∧ N) ≥ P(τ < N) λ^{−a} e^{q(λ)λr}, and taking N → ∞ yields (13). For (14) we apply similar reasoning to τ′ = inf{t ≥ r : Π((0, t]) ≥ t − a}.

Now we prove exponential bounds on X and R* in the case when d = 1, α = 1 and λ is arbitrary; then we will rescale R. Firstly, let Ξ = [Π]. By Lemma 13, on the event that r < X < ∞ there exists some center ξ ∈ [Π] \ [−r, r] with F(ξ) ∈ [0, 1]. Recalling the definition of F, taking t = |ξ| and using (13), (14) we therefore obtain an upper bound on P(r < X < ∞) decaying exponentially in r at rate q(λ)λ. Secondly, let Ξ = [Π*]. By Lemma 13, on the event that R* > r there exists x ∈ R \ [−r, r] with F(x) ∈ [0, 1], so we obtain a similar exponential bound on P*(R* > r). Finally, rescaling R by a factor of λ changes the intensity to 1 and the appetite to λ, while scaling X and R* by a factor of λ. Thus we obtain the desired results. ✷

Continuity

Recall the continuity result, Theorem 5, stated in the introduction. In this section we deduce some consequences which will be used in the proofs of Theorems 2 and 3.
The proof of Theorem 5 is deferred until the end of the article. We shall apply Theorem 5 as follows. Roughly speaking, given an almost sure local property of stable allocations, we may find some large box such that with high probability the property holds throughout the box, whatever the configuration of centers outside. More precisely, we apply this to the notions of replete sets and decisive sets as described below. In what follows we take α = 1, and take Π to be a Poisson process of intensity λ, with associated probability measure and expectation P_λ, E_λ. Lemma 14 and Corollary 15 below apply to the critical and subcritical models, that is, to λ ≤ 1. The critical case will be used to prove Theorem 3 and the subcritical case will be used to prove Theorem 2(ii). Recall that ψ_Ξ denotes the canonical allocation of the benign set of centers Ξ. Given a benign set Ξ ⊆ R^d and a measurable A ⊆ R^d, let Ξ′_A be a random set of centers which is the union of Ξ ∩ A and an independent Poisson process of intensity λ on R^d \ A. Define the box Q(L) = [−L, L)^d.

Corollary 15 (replete boxes) Let α = 1 and let Π be a Poisson process of intensity λ ≤ 1. For any ε > 0 there exists M such that

Now given benign Ξ, we say that a measurable set A is Ξ-decisive for a site x if ψ_{Ξ′_A}(x) = ψ_Ξ(x) (so that ψ_Ξ(x) can be determined by looking only at Ξ ∩ A). Note that if A is Ξ-decisive for x then ψ_Ξ(x) cannot be a center outside A. The supercritical case below will be used to prove Theorem 2(i).

Corollary 17 (decisive boxes) Let α = 1 and let Π be a Poisson process of intensity λ ≥ 1. For any ε > 0 there exists M < ∞ such that

Next we turn to the proofs of the four results above.

Lemma 18 Suppose Ξ_n ⇒ Ξ and ψ_n → ψ a.e. are as in Theorem 5. If there is a set A of positive volume such that every z ∈ A desires ξ under ψ, then for n sufficiently large, ξ is sated in ψ_n, and lim sup_{n→∞} R_{ψ_n}(ξ) ≤ ess inf_{z∈A} |z − ξ| (< ∞).

Proof. As the set A has positive volume, Theorem 5 implies that there exists z ∈ A such that ψ_n(z) → ψ(z). Thus for n sufficiently large, z desires ξ under ψ_n. By stability ξ does not covet z, and the result follows.

In order to prove Lemma 16 we need the following enhancement of Theorem 5, in which we (partially) specify the set on which a.e. convergence occurs. The proof is deferred until the end of the article. which is less than ε(2M)^d if M is sufficiently large. ✷

Supercritical Rigidity

In this section we prove Theorem 3. So in particular and therefore since Π* is the Palm process,

Supercritical Bound

In this section we prove Theorem 2(i). Let α = 1 and let Π be a Poisson process of rate λ with law P_λ.

Theorem 21 Let α = 1 and let Π be a Poisson process of intensity λ. For any λ > 1 there exist C, c ∈ (0, ∞) such that for all r > 0, P(X > r) ≤ C e^{−cr^d}.

Proof of Theorem 2(i). By rescaling R^d, the required result is equivalent to the same statement with α = 1 and λ > 1, and this is immediate from Theorem 21. This is because ξ must covet some z ∉ B(0, 2r), so |ξ − z| > r; but |0 − ξ| < r, so (0, ξ) would be unstable if X > r. So it is enough to show that the probability that (18) fails decays exponentially in r^d. Given λ let

Also consider the events

We claim that if E and G occur and (19) holds then (18) is satisfied. To verify this claim, note that given those assumptions,

(Here the third inequality holds because by the choice of ε we have 4ε2^d + 2ε < 10ε2^d ≤ λ − 1). Then recalling that α = 1 we see that (18) must indeed hold. Finally, we must show that P(E^c), P(G^c) each decay at least exponentially in r^d as r → ∞.
For E^c this is a standard large deviations bound since Π(B(0, r)) is Poisson with mean λ L B(0, r) = Θ(r^d) as r → ∞. Turning to G^c, note that the random variables (Y_z)_{z∈I} are i.i.d. with mean less than ε(2M)^d by Corollary 17. We have #I = Θ(r^d), while

Furthermore, each random variable Y_z is bounded by (2M)^d. Therefore by the Chernoff bound ([5] Corollary 27.4), P(G^c) decays exponentially in r^d. ✷

Subcritical Bound

In this section we prove Theorem 2(ii), via the following.

Theorem 22 Let α = 1 and let Π be a Poisson process of intensity λ. For any λ < 1 there exist C, c > 0 such that for all r > 0, the probability that some center ξ ∈ [Π] ∩ B(0, 1) has R(ξ) > r is at most C e^{−cr^d}.

Proof of Theorem 2(ii). First note that by rescaling R^d, it suffices to prove the same statement for α = 1 and λ < 1. Let C, c be as in Theorem 22. Let Y be the number of centers ξ ∈ [Π] ∩ B(0, 1) with R(ξ) > r, and note that by a standard property of the Palm process, E(Y) = λ L B(0, 1) P*(R* > r), so it is enough to prove that E(Y) decays exponentially in r^d. Let u = e^{cr^d/2}. Then note that

E(Y) = E(Y; Y ≤ u) + E(Y; Y > u) ≤ u P(Y > 0) + E(Y²)/u.

From Theorem 22 we have P(Y > 0) ≤ C e^{−cr^d}, while the second term is bounded above by E(Π(B(0, 1))²)/u. Thus both terms decay exponentially in r^d, hence so does E(Y). This is because otherwise we would have |y − ξ| < r + 1 and |y − Ψ(y)| > r + 1, and so (y, ξ) would be unstable. So it is enough to show that the probability that (20) fails decays exponentially in r^d. Let ε = (1 − λ)/(10 · 2^d), and let M = M(λ, ε) be as in Corollary 15. Note that ε and M do not depend on r. Now for any r > 0 we tile the shell B(0, 2r + 1) \ B(0, r) with disjoint copies of the box Q(M). For z ∈ Z^d write Q_z = Q(M) + 2Mz, and define the random variable

We claim that if E, F and G all occur then (20) is satisfied. To verify this claim, recall that α = 1, so that on E we have

Therefore since B(0, 2r

establishing the claim. Finally, we must show that P(E^c), P(F^c), P(G^c) each decay at least exponentially in r^d as r → ∞. For E^c this is a standard large deviations bound since #([Π] ∩ B(0, r)) is Poisson with mean λ L B(0, r) = Θ(r^d). For F^c it also follows from the standard large deviations bound on noting that L S < (ε/2) L B(0, r) for r sufficiently large. Turning to G^c, note that the random variables (W_z)_{z∈I} are i.i.d. with mean less than ε(2M)^d by Corollary 15. We have #I = Θ(r^d), while

(#I)(2M)^d ≤ L B(0, 2r + 1) ≤ 2 · 2^d L B(0, r).

Furthermore, we have W_z ≤ Π(Q_z), so each random variable W_z has exponentially decaying tails. Therefore by the Chernoff bound ([5] Corollary 27.4), P(G^c) decays exponentially in r^d. ✷

Proofs of Continuity Results

Proof of Theorem 5. We can find a countable dense set X ⊆ R^d such that ψ_n(x) ≠ ∆ for each x ∈ X and for all n. We can choose a subsequence (n_j) such that ψ_{n_j}(x) converges in the compact space Ξ ∪ {∞} for all x ∈ X. We define the map ψ_∞ by

ψ_∞(z) := lim_{j→∞} ψ_{n_j}(z)

for all z where the limit exists. Thus ψ_∞ exists on X and perhaps elsewhere. We define

where the first and second unions are over all centers and all pairs of centers in Ξ respectively. And let

The sets Z and D are L-null a.s. For z ∈ R^d let S(z) = {ξ ∈ Ξ ∪ {∞} : ∃ x_1, x_2, ... ∈ X such that x_j → z and ψ_∞(x_j) → ξ}. By the compactness of Ξ ∪ {∞}, for any z the set S(z) is not empty. We claim the following. To prove this, we take z ∉ Z ∪ D and consider two cases. Case I. Suppose there exists ξ ∈ S(z) ∩ Ξ. Hence we can pick x ∈ ψ_∞^{-1}(ξ) ∩ X such that |x − ξ| > |z − ξ|. Since ψ_∞(x) exists, there is N such that ψ_{n_j}(x) = ξ for all n_j > N, so ξ covets z under ψ_{n_j}.
Since (z, ξ) is stable for ψ_{n_j} we deduce that |ψ_{n_j}(z) − z| ≤ |z − ξ| for all n_j > N. We will show that for L-a.e. y ∈ B(z, r) we have ψ_{n_j}(y) → ξ. Let L = min{i : R_∞(ξ_i) > |z − ξ_i|}. We first show that ψ_{n_j}(y) → ξ_L for all y ∈ B(z, r) \ D. By the definition of r there exist w ∈ X and N such that for all n_j > N we have ψ_{n_j}(w) = ξ_L and |z − ξ_L| + r < |w − ξ_L|. For n_j > N and for every y ∈ B(z, r) with ψ_{n_j}(y) ≠ ∆, from the stability of (y, ξ_L) under ψ_{n_j} and by (22) we have

Therefore for all y ∈ B(z, r) \ D we have ψ_{n_j}(y) = ξ_{i_j} for n_j > N, where i_j = i_j(y) ≤ L. Our next task is to show that in fact i_j < L is impossible for j sufficiently large. Suppose on the contrary that there exist I < L, a subsequence (n_{j_k}) and sites (y_{j_k}) such that for all k

ψ_{n_{j_k}}(y_{j_k}) = ξ_I and |y_{j_k} − ξ_I| > |z − ξ_I| − r.

Then there exists u ∈ X ∩ B(z, r) such that for all k we have |u − ξ_I| < |y_{j_k} − ξ_I|. Since u ∈ X, the sequence ψ_{n_{j_k}}(u) converges to some ξ_i. By stability of (u, ξ_I) under ψ_{n_{j_k}} and by the choice of r we must have i ≤ I < L. Thus

By the choice of r the previous line implies R_∞(ξ_i) ≥ |z − ξ_i| + r. This contradicts the definition of L, so there is no I < L as described. We have shown that for all y ∈ B(z, r) \ D the sequence ψ_{n_j}(y) converges to the same center ξ_L, and since ξ ∈ S(z), this center must be ξ. Since z ∉ D we have that ψ_∞(z) = ξ. Hence we have proved claim (21) in Case I.

Case II. Suppose S(z) ∩ Ξ = ∅; then S(z) = {∞}, and we want to show that ψ_∞(z) = ∞. We work by contradiction. Suppose there exist ξ ∈ Ξ and a subsequence (n_{j_k}) such that ψ_{n_{j_k}}(z) → ξ. Then there exists r > 0 such that for all x ∈ X ∩ B(z, r)

As z ∉ Ξ we may further choose x ∈ X ∩ B(z, r) such that

Then there exists j_k such that (x, ξ) is an unstable pair for ψ_{n_{j_k}}. Hence we have proved claim (21) in Case II also.

We have proved that ψ_∞ is defined almost everywhere. It is straightforward to show that if Ξ_n ⇒ Ξ and ψ_n → ψ_∞ then ψ_∞ is a stable allocation to Ξ (the main step is to show that ψ^{-1}(ξ) = lim inf ψ_n^{-1}(ξ) = lim sup ψ_n^{-1}(ξ) a.e.). Since Ξ is benign it has an a.e. unique stable allocation, so ψ_∞ must agree with ψ a.e. Thus we have ψ_{n_j} → ψ a.e. Finally we prove convergence of the entire sequence. We claim that for

we have ψ_n(z) → ψ(z). Suppose this does not hold for some z where ψ(z) = ζ ∈ Ξ ∪ {∞}, say. Then there exists (n_j) such that ψ_{n_j}(z) ≠ ζ for all n_j. Also since ψ is a canonical allocation we have ψ(y) = ζ for all y in a neighborhood of z. Thus for all j_k we have ψ_{n_{j_k}}(z) = ζ, which contradicts (24). Finally suppose ζ = ∞. If ψ_n(z) does not converge to ∞ then there exist a subsequence ψ_{n_j} and a center ξ such that ψ_{n_j}(z) = ξ for all j. By (25) and the subsequential convergence proved earlier there exist x and a further subsequence n_{j_k} such that |x − ξ| < |z − ξ| and ψ_{n_{j_k}}(x) → ∞. Thus for k large enough we have |ψ_{n_{j_k}}(x) − x| > |ξ − x|. By stability, for these k we have ψ_{n_{j_k}}(z) ≠ ξ. This is a contradiction.

Claim. For all n > N and all y ∈ B(z, r′) we have that ψ_n(y) = ξ or ψ_n(y) = ∆. Suppose that the claim does not hold for some y and n > N. If ψ_n(y) = ∞ or if ψ_n(y) = η ∈ Ξ \ {ξ_1, ..., ξ_ℓ} then (y, ξ) would be unstable by (ii) and (iii) above. On the other hand, if ψ_n(y) = ξ_i where i < ℓ then by (i) and (ii) (ξ_i, y_i) would be an unstable pair. Thus the claim is established.
As for every n the set ψ_n^{-1}(ξ) is open and Lψ_n^{-1}(∆) = 0, we deduce from the claim and the fact that ψ is a canonical allocation that ψ_n(y) = ξ for all y ∈ B(z, r′) and for all n > N. Thus ψ_n(z) → ξ. ✷

Open Problems

(i) Critical behavior in dimension two and higher. What is the tail behavior of X or R* for the critical Poisson model? In particular, give any quantitative upper bound on P(X > r) as r → ∞ for d ≥ 2.

(ii) Critical behavior in one dimension. Can the critical model be analyzed exactly in the case d = 1? Which moments of X are finite? The variant model in which each site is only allowed to be allocated to a center to its right can be analyzed exactly via the function F from Section 3. The method may be found in [6], in a slightly different context. For this model, E X^ν < ∞ if and only if ν < 1/2.

(iii) Explicit non-critical bounds. Give explicit bounds on the exponential decay rates for the subcritical and supercritical models for general appetite and dimension.

(iv) Supercritical radius. Does (R*)^d have an exponentially decaying tail for the supercritical model in dimension d ≥ 2?
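For problem (ii), one can at least probe the critical tail numerically. The following Python sketch (not from the paper; it reuses the toy `stable_allocation` routine given after the Introduction, with hypothetical parameters and ignoring boundary effects) estimates the empirical tail of X for the critical one-dimensional model:

```python
import numpy as np
# assumes stable_allocation() from the earlier sketch is in scope

rng = np.random.default_rng(1)
lam, n_cells, trials = 50, 20000, 20
tail_r = np.array([0.02, 0.05, 0.1, 0.2])   # distances in box units (mean gap = 1/lam)
counts = np.zeros_like(tail_r)
total = 0
for _ in range(trials):
    centers = rng.uniform(0.0, 1.0, size=rng.poisson(lam))
    cells = (np.arange(n_cells) + 0.5) / n_cells
    a = stable_allocation(centers, cells, round(n_cells / lam))  # critical appetite 1/lam
    claimed = a >= 0
    X = np.abs(cells[claimed] - centers[a[claimed]])
    counts += np.array([(X > r).sum() for r in tail_r])
    total += claimed.sum()
print(dict(zip(tail_r.tolist(), (counts / total).tolist())))  # empirical P(X > r)
```

Such an experiment only suggests the shape of the tail; it cannot distinguish between nearby power-law exponents, which is precisely the difficulty behind problems (i) and (ii).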
2014-10-01T00:00:00.000Z
2005-07-18T00:00:00.000
{ "year": 2005, "sha1": "7a781d9e97267fd1c492adbb3c5b12999bc09ddc", "oa_license": null, "oa_url": "http://arxiv.org/abs/math/0507324", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "52c1dc816af714b152cc2bf225621e466c5ffa73", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
251749918
pes2o/s2orc
v3-fos-license
Soil texture is a stronger driver of the maize rhizosphere microbiome and extracellular enzyme activities than soil depth or the presence of root hairs

Different drivers are known to shape rhizosphere microbiome assembly. How soil texture (Texture) and the presence or lack of root hairs (Root Hair) affect rhizosphere microbiome assembly and soil potential extracellular enzyme activities (EEA) at defined rooting depths (Depth) is still a knowledge gap. We investigated the effects of these drivers on microbial assembly in the rhizosphere and on potential EEA in root-affected soil of maize. Samples were taken from three depths of the root hair defective mutant rth3 and wild-type WT maize planted on loam and sand in soil columns after 22 days. Rhizosphere bacterial, archaeal, fungal and cercozoan communities were analysed by sequencing of 16S rRNA gene, ITS and 18S rRNA gene fragments. Soil potential EEA of β-glucosidase, acid phosphatase and chitinase were estimated using fluorogenic substrates. The bacterial, archaeal and cercozoan alpha- and beta-diversities were significantly and strongly altered by Texture, followed by Depth and Root Hair. Texture and Depth had a small impact on fungal assembly, and only fungal beta-diversity was significantly affected. Significant impacts of Depth and Root Hair on beta-diversity and on relative abundances at different taxonomic levels of bacteria, archaea, fungi and cercozoa were dependent on Texture. Likewise, the patterns of potential EEA followed the trends of the microbial communities, and the potential EEA correlated with the relative abundances of several taxa. Texture was the strongest driver of the rhizosphere microbiome and of soil potential EEA, followed by Depth and Root Hair, similarly to findings in maize root architecture and plant gene expression studies.

Introduction

The rhizosphere is defined as the soil influenced by plant roots; it represents a zone of complex and dynamic interactions among plant roots, soil and biota (Hartmann et al. 2008, 2009). These interactions are modulated by physicochemical conditions such as pH, organic carbon and altered moisture. A wide variety of organisms inhabits the rhizosphere, including bacteria, archaea, fungi, cercozoa, nematodes, algae, viruses and arthropods (Mendes et al. 2013). Rhizosphere microorganisms are attracted from the bulk soil to the plant roots via root exudates that consist of numerous primary and secondary metabolites serving as energy or carbon sources for their growth (Kawasaki et al. 2016; Sasse et al. 2018; Canarini et al. 2019; Cotton et al. 2019). In turn, rhizosphere microorganisms alter plant root exudation through systemic microbe-root signalling mechanisms (Korenblum et al. 2020), and contribute significantly to plant growth and health by enhancing nutrient acquisition, pathogen resistance and stress tolerance (Mendes et al. 2013; Berg et al. 2017; Mohanram and Kumar 2019). Rhizosphere-inhabiting bacteria, archaea, fungi and cercozoa (hereafter termed the rhizosphere microbiome) contribute to soil quality and health. Extracellular enzymes produced by members of the soil microbiome accelerate the breakdown of complex organic substances for gaining energy and nutrients from soil, and their activities (EEA) reflect these processes (Nannipieri et al. 2002; Kompała-Bąba et al. 2021). Thus, they regulate the availability of nutrients for plants (Nannipieri et al. 2003; Dick 1997).
Phosphomonoesterases or acid phosphatases are enzymes that catalyse the hydrolysis of organic monophosphoesters, releasing phosphate for plant and microbial uptake (Nannipieri et al. 2011). Other enzymes like β-glucosidases generally play a role in the last stage of cellulose degradation. They hydrolyse cellobiose residues, resulting in the release of sugar monomers, which serve as a source of energy for microbial growth and activity (Gil-Sotres et al. 2005; Merino et al. 2016). N-acetyl-β-d-glucosaminidases or chitinases are involved in the degradation of chitin (a fungal cell wall component) and peptidoglycan (a bacterial cell wall component) (Ekenler and Tabatabai 2004). Soil enzyme activities depend on the abundance and diversity of the microbiome and its metabolic activity (Zhang et al. 2017; Kompała-Bąba et al. 2021). However, our understanding of the relationship between different types of soil enzymes, their activities and the corresponding microbial abundance and diversity is still limited.

Soils differing in texture differ in pore size distribution, pore connectivity and the rooting space of plants, as observed for the two soils, loam and sand, used as substrates to grow maize plants. Smaller pore sizes can increase bacterial diversity in soil due to reduced connectivity of microsites (Carson et al. 2010; Hemkemeyer et al. 2018; Seaton et al. 2020) and reduced access of predators (Rutherford and Juma 1992). Some microbial taxa show a preference for different particle sizes of the soil; i.e., clay and silt were shown to be preferred over sand particles as substrates by specific bacteria due to their higher cation exchange capacity and nutrient content, with consequences for the composition of the soil microbiome (Hemkemeyer et al. 2018; Seaton et al. 2020).

Most studies to date investigated rhizosphere microbiome assembly using composite samples of the whole root system of plants (Li et al. 2014; Silva et al. 2017; Gomes et al. 2018; Walters et al. 2018; Renoud et al. 2020; Kusstatscher et al. 2020), despite the great heterogeneity of root system morphology and architecture. Although several studies reported microbial assembly at different scales of the plant root, e.g., at different root zones along the primary root axis (Rüger et al. 2021), at different root types (DeAngelis et al. 2009; Kawasaki et al. 2016; Pervaiz et al. 2020), root sizes (Zai et al. 2021) and root ages (Wei et al. 2021), there is still a lack of information regarding microbial assembly in the rhizosphere along defined soil and rooting depths.

Root hairs were shown to play a significant role in the uptake of water and nutrients, especially in soils with low phosphate (P) contents (Leitner et al. 2010; Klamer et al. 2019). Under P limitation and water shortage, root hairs are known to significantly enhance P uptake compared to a root hairless mutant of barley (Ruiz et al. 2020). Organic acids in particular are important rhizodeposits of root hairs that help in the acquisition of P (Pantigoso et al. 2020), and may influence the rhizosphere microbiome (Robertson-Albertyn et al. 2017). Further, the presence of root hairs also increases the formation of a rhizosheath (Burak et al. 2021). The role of root hairs in microbial colonization was previously examined in barley by Robertson-Albertyn et al. (2017), who reported that root hairless barley mutants had a less complex bacterial community with lower richness and diversity in the rhizosphere than the wild type.
However, the impact of root hairs on microbial assembly remains poorly understood. The two root hairless mutant lines of barley also showed significantly distinct microbial community compositions and diversities (Robertson-Albertyn et al. 2017). Maize (Z. mays) is a plant species with a wealth of data on root traits and rhizosphere microorganisms (reviewed recently by Bonkowski et al. 2021), and thus serves as an excellent model to address specific questions of rhizosphere biology (Rüger et al. 2021).

In this study, we aimed to investigate spatial microbial colonization patterns along the roots of maize grown in soil columns (Fig. 1) in relation to 1) soil texture (Texture: loam and sand), 2) rooting depth (Depth: D1, D2 and D3) and 3) absence or presence of root hairs (Root Hair: root hair defective mutant rth3 and the corresponding wild-type WT). Further, we related soil potential EEA to microbial abundance, diversity and dominant taxa in order to understand the mechanisms of plant-microbial interactions in the rhizosphere. Based on root gene expression and root system architecture analyses at D1, D2 and D3 of the maize plants, we also learned that increasing rooting depth means an increasing share of young roots (≤ 7 days).

Fig. 1: 22-day-old maize plant cultivated in a soil column (WT in loam, left) under controlled conditions, and maize root system architectures from different treatments obtained from X-ray computed tomography (right). The indicated depths were used for rhizosphere and root-affected soil sampling. L, loam; S, sand.

The following hypotheses were assessed: 1) loam as Texture, with its higher content of organic matter and sorption capacity but lower porosity, shows a higher microbial abundance and diversity and higher potential EEA (V_max and K_m) than sand; 2) particularly large differences in microbial community structure are expected between the uppermost and lowest rooting depths; this stratification depends on the porosity and water-holding capacity of the Texture, which means that a bigger difference is expected in sand; and 3) the effect of Root Hair on microbial community assembly in the rhizosphere of maize, and on potential EEA in the root-affected soil, is more subtle compared to Texture and Depth.

Soil column experiment and collection of samples

To address our hypotheses, we took samples from a soil column experiment (see Ganther et al. 2021) using a three-factorial design: Texture, Depth and Root Hair. Briefly, soil columns (25 cm height and 7 cm inner diameter) were packed with one of two soils of contrasting texture: loam (33.2% sand, 47.7% silt, 19.1% clay, 0.84% C org, 0.084% Nt and pH 6.21) or sand. A mixture of 16.7% loam and 83.3% quartz sand was prepared for the Texture sand (88.6% sand, 8.1% silt, 3.3% clay, 0.14% C org, 0.014% Nt and pH 6.25). Loam and sand (≤ 1 mm mesh size) were fertilized differently to achieve a similar content of plant-available nutrients for plant growth, based on pre-trials. Two maize genotypes differing in root hair formation were used: B73 wild-type (WT) and the root hair defective mutant rth3 (Hochholdinger et al. 2008). Planted columns (six replicates per treatment) were grown in a growth chamber for 22 days under controlled conditions (12 h at 22 °C / 12 h at 18 °C for day/night, 65% relative humidity and 350 μmol m−2 s−1 photosynthetically active radiation). A volumetric water content of 22 and 18% (v/v) was maintained for loam and sand, respectively.
Rhizospheres and root-affected soil were sampled at three depths (D1: 4.5-6.1, D2: 9.0-10.6 and D3: 13.5-15.1 cm from the soil surface, Fig. 1). A soil slice with a thickness of approximately 1.6 cm was taken at each depth. Root segments from each depth were gently shaken to remove excessive soil before being briefly submerged and shaken in a 15 mL centrifuge tube containing sterile 0.3% NaCl. The soil obtained from shaking off the roots is defined as root-affected soil, while the rhizosphere was obtained by centrifugation of the soil resuspension at 5000 × g for 30 min at 4 °C. The rhizosphere pellet was stored at −20 °C until the total community (TC-) DNA was extracted.

TC-DNA extraction and amplification of 16S rRNA gene fragments and ITS regions for real-time PCR analysis

The TC-DNA was isolated from about 0.52 ± 0.01 g per sample using the FastDNA Spin Kit and the Geneclean Spin Kit for soil following the manufacturer's instructions (MP Biomedicals, Heidelberg, Germany).

Data analysis of amplicon sequencing

Demultiplexed sequences of the 16S rRNA gene and ITS region fragments were processed as previously described (Ganther et al. 2020; Yim et al. 2020). Briefly, raw sequence reads of the bacterial and archaeal communities were trimmed for primers using cutadapt v.2.3 (Martin 2011). Primer-trimmed sequence reads were error-corrected and merged, and amplicon sequence variants (ASVs, 100% identity) were identified using DADA2 v.1.10.0 (Callahan et al. 2016) within QIIME2 (Bolyen et al. 2018). Each ASV was given a taxonomic annotation using the q2-feature-classifier classify-sklearn module trained with the SILVA SSU rel. 132 database (Quast et al. 2013). For the fungal community, the automated ITS pipeline PIPITS (PIPITS_PREP, PIPITS_FUNITS and PIPITS_PROCESS) was run with default parameters for the ITS2 region according to Gweon et al. (2015). Read pairs from the Illumina MiSeq ITS files were merged into a single file and quality-filtered by PIPITS_PREP. Then PIPITS_FUNITS identified the ITS sub-regions using HMMER3 according to Mistry et al. (2013). PIPITS_PROCESS generated ASVs, calculated their read abundances and produced RDP taxonomic assignments using the UNITE fungal ITS reference data set. Regarding the cercozoan community, sequence reads of the 18S rRNA gene fragments were processed using a customized MOTHUR pipeline v.1.39.5 (Schloss et al. 2009). Paired-end reads were merged, allowing no mismatches in primer or barcode sequences and a maximum of two mismatches and one ambiguity in the target sequence. Assembled sequences with an overlap lower than 200 bp were removed. Merged contigs were demultiplexed, and primer and tag sequences were trimmed. The remaining reads were clustered into operational taxonomic units (OTUs) using VSEARCH (Rognes et al. 2016) according to the abundance-based greedy clustering algorithm (agc) with a similarity threshold of 97%. Clusters represented by fewer than 350 reads were removed as likely to represent amplification or sequencing noise (Fiore-Donno et al. 2018). OTUs were assigned to taxa using BLAST+ (Camacho et al. 2009) with an e-value of 1e-50 and the PR2 database (Guillou et al. 2013), keeping only the best hit. Sequences were aligned with the template provided by Fiore-Donno et al. (2018), allowing gaps of a maximum of five nucleotides, and cleaned of chimeras using UCHIME (Edgar et al. 2011) and of non-cercozoan sequences.
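The ASV/OTU count tables produced by these pipelines feed the diversity analyses described next. As a minimal illustration of the two core operations, rarefying a sample to a fixed read depth and computing the Shannon index H' = −Σ p_i ln p_i, the following Python sketch uses hypothetical counts, not data from this study:

```python
import numpy as np

def shannon(counts):
    # Shannon diversity H' = -sum(p_i * ln p_i) over taxa with counts > 0
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def rarefy(counts, depth, rng):
    # subsample `depth` reads without replacement from a count vector
    reads = np.repeat(np.arange(counts.size), counts)
    sub = rng.choice(reads, size=depth, replace=False)
    return np.bincount(sub, minlength=counts.size)

rng = np.random.default_rng(0)
asv_counts = rng.poisson(5.0, size=300)   # one hypothetical sample of 300 ASVs
print(shannon(rarefy(asv_counts, 1000, rng)))
```

Rarefying before computing richness and Shannon indices, as done in this study, prevents samples with more sequencing reads from appearing artificially more diverse.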
Sequence contingency tables showing taxonomic identifications and read abundances (ASV or OTU tables) were exported for subsequent analyses. For 14 samples, fewer than 4,000 reads were obtained after removing reads associated with plant material such as chloroplast and mitochondrial DNA, and these samples were excluded from the analyses. Rarefaction analyses for the 16S rRNA gene fragments (bacteria and archaea), ITS regions (fungi) and 18S rRNA gene fragments (cercozoa) were performed, and they showed that the sequences covered the diversity in the analysed samples (Fig. S1).

Soil potential extracellular enzyme activity (potential EEA)

Soil potential EEA (V_max and K_m) analyses in the root-affected soil were performed using fluorogenically labelled substrates (Marx et al. 2005; German et al. 2011) based on 4-methylumbelliferone (MUF): 4-MUF-D-glucoside for β-glucosidase (BG), 4-MUF-phosphate for acid phosphatase (AP) and 4-MUF-N-acetyl-β-D-glucosaminide for chitinase (NG). Resuspensions of root-affected soil (50 mL) were prepared using low-energy sonication (40 J s−1 output energy) for 2 min. Thereafter, 50 μL of soil suspension, 100 μL of substrate solution (2.5, 5, 10, 20, 50, 100 μM) and 50 μL of buffer (MES) were transferred into a 96-well microplate (Tian et al. 2020). Fluorescence was measured at 30, 60 and 120 min at 360 nm excitation and 465 nm emission wavelengths and at a slit width of 35 nm with a plate reader (TECAN Infinite F200 Pro). Calibration curves were included in every series of enzyme measurements. Enzyme activities were expressed as MUF release in nM g−1 dry soil h−1. The Michaelis-Menten equation was used to determine the enzyme kinetic parameters V_max and K_m (Eq. 1):

v = V_max · S / (K_m + S)   (Eq. 1)

where v is the rate of the enzyme-mediated reaction, S is the substrate concentration and K_m is an affinity constant equal to the substrate concentration at half of the maximum reaction rate V_max (an illustrative fitting example is given after this section).

Statistical analysis

Amplicon sequencing data, ASV or OTU species richness and the Shannon diversity index (Shannon 1948) were evaluated using rarefied reads of the 16S rRNA gene fragments (4,121), ITS regions (4,993) and relative abundance reads of the 18S rRNA gene fragments. Microbial abundances (copy numbers of 16S rRNA gene and ITS region fragments) and soil potential EEA were fitted onto the bacterial, archaeal, fungal and cercozoan beta-diversities as environmental variables, applying Redundancy Analysis (RDA) with scaling = 1 (vegan package, Oksanen et al. 2020). The 30 ASVs or OTUs with the highest relative abundances (bacteria, archaea, fungi and cercozoa) were used to generate heatmaps (presenting the average relative abundance of ASVs or OTUs per sample). Further, Spearman's rank correlation coefficient analysis was performed between the relative abundances of those 30 ASVs or OTUs and the soil potential EEA (BG, AP or NG), applying p-value correction based on Benjamini and Hochberg (1995). The Levene test was applied to check the variance homogeneity of the data. Three-way (Texture × Depth × Root Hair) ANOVA was performed to test the effects of Texture, Depth and Root Hair on microbial assembly and potential EEA in the rhizosphere and root-affected soil of maize, respectively, at p < 0.05. Two-way ANOVA (Depth × Root Hair) was applied per Texture (loam or sand). When the two-way ANOVA indicated significant differences between treatments, multiple comparisons followed in loam or sand using the Tukey test, applying p-value correction (at p < 0.05) based on Benjamini and Hochberg (1995).
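To illustrate how V_max and K_m are obtained from the measured MUF release rates, the following Python sketch fits Eq. 1 by non-linear least squares. It is illustrative only: the rate values are hypothetical, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    # Eq. 1: v = Vmax * S / (Km + S)
    return Vmax * S / (Km + S)

# substrate concentrations used in the assay (uM) and hypothetical
# MUF release rates (nM g^-1 dry soil h^-1) for one sample
S = np.array([2.5, 5.0, 10.0, 20.0, 50.0, 100.0])
v = np.array([12.1, 21.5, 35.0, 52.3, 74.8, 88.9])

popt, pcov = curve_fit(michaelis_menten, S, v, p0=[v.max(), 10.0])
Vmax, Km = popt
se = np.sqrt(np.diag(pcov))  # standard errors of the estimates
print(f"Vmax = {Vmax:.1f} +/- {se[0]:.1f}, Km = {Km:.1f} +/- {se[1]:.1f}")
```

Fitting the full saturation curve across the six substrate concentrations, rather than a single-concentration reading, is what allows activity (V_max) and substrate affinity (K_m) to be reported separately.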
Results

Microbial abundance and diversity in the rhizosphere of maize affected by Texture, Depth and Root Hair

Copy numbers of the 16S rRNA gene determined by qPCR were more than three orders of magnitude higher than the copy numbers of ITS fragments (Table S1). Three-way ANOVA of 16S rRNA gene copy numbers revealed a significant effect of Texture and Depth, while the ITS copy numbers did not differ significantly (Tables S1 & S2). The effects of the Texture:Depth and Texture:Root Hair interactions on 16S rRNA gene and ITS fragment copy numbers were significant, but they differed between loam and sand (Table S2). After sequence processing, the reads clustered into 28,590 bacterial and archaeal ASVs, 2,694 fungal ASVs and 409 cercozoan OTUs. Texture significantly affected bacterial and archaeal ASV richness and Shannon indices, with a higher diversity in loam than in sand, as shown in the box plots and by three-way ANOVA (Table 1; Fig. 2A). Depth showed a significant effect on bacterial and archaeal Shannon indices, dependent on Texture (a significant Texture:Depth interaction, Table 1). The effect of Root Hair on ASV richness and Shannon indices was dependent on Depth (a significant Depth:Root Hair interaction, Table 1). Pairwise comparisons revealed that the Shannon index in the rth3 rhizosphere was lower at D3 compared to D1 in both loam and sand (Fig. 2A). This pattern was not observed for the WT rhizosphere. Further, significantly fewer ASVs and a lower Shannon index were recorded in loam at D3 in the rth3 compared to the WT rhizosphere (Fig. 2A).

The RDA showed that bacterial, archaeal, fungal and cercozoan beta-diversities differed between Textures (Fig. 3A, B & C) and Depths (Figs. S2, S3 & S4), and the significance was confirmed by PERMANOVA (Table 2). The significant effects of Depth and Root Hair on bacterial, archaeal, fungal and cercozoan beta-diversity were dependent on Texture (significant Texture:Depth and Texture:Root Hair interactions, Table 2). Depth explained a higher proportion of the variation in bacterial, archaeal, fungal and cercozoan beta-diversities in sand than in loam (PERMANOVA, Table 3). For bacteria and archaea, the greatest differences were observed between D1 vs. D3 for both rth3 and WT grown in loam or in sand (Table 4). Significant differences in fungal beta-diversity between D1 vs. D2 and D1 vs. D3 were only detected for rth3 grown in sand (Table 4). Further, for WT grown in loam, significant differences in fungal beta-diversity were found between D1 vs. D3. For cercozoa, the greatest differences in beta-diversity were shown between D2 vs. D3 in the rhizosphere of WT (in loam) and rth3 (in loam or sand) (Table 4). Overall, the PERMANOVA tests revealed Texture as the strongest driver of bacterial, archaeal, fungal and cercozoan assembly, followed by Depth and Root Hair (Table 2).

Bacterial and archaeal ASVs were affiliated to 35 phyla, but 25 of them had a relative abundance below 1% and were thus grouped together and assigned as "Other". The phylum Proteobacteria was most dominant in sand, while in loam the dominant phylum was Firmicutes (Fig. 4A). Acidobacteria and Thaumarchaeota were far more abundant in loam than in sand (Fig. 4A). Changes in the relative abundances of bacteria and archaea at the phylum level along the examined depths were also highly modulated by Texture. For instance, the relative abundance of Proteobacteria was significantly lower at D1 than at D3 in the rhizosphere of both WT and rth3 in sand, and the opposite was observed in loam (Fig. 4A).
The effects of Depth on Thaumarchaeota were only observed in the rhizosphere of maize grown in loam (Fig. 4A). Overall, Root Hair had a subtle effect on the relative abundances at the phylum level (Fig. 4A). For fungi, all ASVs were affiliated to eight phyla (four of them were grouped together and renamed "Other" due to low relative abundances of ≤1%). The fungal phyla Ascomycota and Basidiomycota were dominant in both loam and sand (Fig. 4B). Effects of Depth on fungal relative abundances at the phylum level were only observed for Mortierellomycota, which displayed significantly lower relative abundances at D3 than at D1 for WT grown in loam (Fig. 4B). Overall, no effect of Root Hair on fungal relative abundances was detected at the phylum level in the rhizosphere of maize grown in loam or sand (Fig. 4B). Cercozoan OTUs were affiliated to 19 orders; 11 of them were grouped together and renamed "Other", as each represented less than 1% of the total number of reads. The orders Glissomonadida and Cercomonadida dominated in the rhizosphere of maize grown in loam or sand and at all three depths (Fig. 4C). In loam, no clear effect of Depth or Root Hair was detected, while in sand an increase in the relative abundance of the order Glissomonadida was found with increasing depth, but only in the rth3 rhizosphere. Cryomonadida were mainly detected in sand, and their relative abundances were higher at D1 than at D2 or D3 in the WT and rth3 rhizospheres (Fig. 4C). To increase the level of resolution, effects of Texture, Depth and Root Hair on the rhizosphere microbiome were also analysed at the ASV or OTU level. The 30 bacterial or archaeal ASVs with the highest relative abundances are depicted in Fig. 5A. ASVs belonging to the archaeal family Nitrososphaeraceae (ASV_2 & _10) and the bacterial genera Terrimonas (ASV_19) and Bacillus (ASV_11, _12 & _13) were dominant in loam, while ASVs affiliated to the bacterial genera Dyella (ASV_41 & _45), Massilia (ASV_1, _4 & _14) and Streptomyces (ASV_30) were more abundant in sand (Fig. 5A). The relative abundance of Bacillus (ASV_13) was higher at D3 than at D1 for rth3 grown in loam (at p < 0.05). In sand, the relative abundance of Massilia (ASV_1 & _14) was significantly higher at D3 than at D1 for the WT or rth3 rhizosphere. In contrast, the relative abundance of ASV_30, belonging to Streptomyces, was significantly higher at D1 than at D2 or D3 for the WT or rth3 rhizosphere in sand. Root Hair affected the distribution of several bacterial ASVs in the rhizosphere, depending on Texture and Depth. For instance, ASVs affiliated to Dyella (ASV_41 & _45) in the rhizosphere of rth3 grown in sand had significantly higher relative abundances at D1 than in the WT rhizosphere (Fig. 5A). It is important to note that not all ASVs affiliated to a certain genus or family showed identical Depth- and Root Hair-dependent relative abundances. The majority of the 30 most abundant fungal ASVs did not show Texture-, Depth- or Root Hair-dependent differences in their relative abundances (Fig. 6A). However, ASV_5, affiliated to the genus Trichophaea, and ASV_7 (unclassified Ascomycota) were dominant in loam and in sand, respectively. A similar pattern was observed for the 30 most abundant cercozoan OTUs (Fig. 7A). Sequences belonging to the cercozoan families Allapsidae (OTU_15) and Sandonidae (OTU_8) were dominant in loam, while two other unclassified cercozoan OTUs, again of the families Allapsidae (OTU_4) and Sandonidae (OTU_11), were more abundant in sand (Fig. 7A).
The relative abundance of the OTUs of the families Sandonidae (OTU_11) and Allapsidae (OTU_4) was highest at D2 and at D3, respectively, in the rhizosphere of both WT and rth3 grown in sand. In loam, the OTU assigned to the Allapsidae (OTU_15) showed the lowest relative abundance at D2 for both the WT and rth3 rhizospheres.

Soil potential extracellular enzyme activities (EEA) affected by Texture, Depth and Root Hair

In general, the soil potential EEA (V_max, abbreviated V) were significantly higher in loam than in sand, by a factor of approximately 2.5-7 (Table 5); the significant effect of Texture was confirmed by three-way ANOVA (Table S3). Effects of Depth on the soil potential EEA were Texture-dependent, and a significant interaction of Texture:Depth was indicated (Table S3). The β-glucosidase activity (BG_V) was not affected by Depth in loam or in sand. Acid phosphatase (AP_V) showed lower activity at D3 compared to D1 or D2 for rth3 and WT in loam, but the opposite pattern was observed in sand (Table 5). Chitinase (NG_V) showed higher activity at D1 than at D2 for rth3 (loam), and at D1 than at D2 and D3 for WT (loam). Root Hair affected only BG_V, at D1 in sand, where WT showed more than two-fold higher activity than rth3. Regarding the soil potential EEA affinity constants (K_m, abbreviated K), BG_K differed between D2 vs. D3 in loam for WT (Table 5). For rth3, AP_K was significantly higher at D3 in loam and at D2 in sand than at the other depths. NG_K indicated a similar enzyme affinity at all depths. Overall, the enzyme kinetics also revealed Texture as the strongest driver, followed by Depth and Root Hair. Data on soil potential EEA (V_max and K_m) were incorporated into the microbial beta-diversity RDA plots to examine their linkage to the bacteria and archaea (Fig. 3A), fungi (Fig. 3B) and cercozoa (Fig. 3C) of both textures, and to increase the resolution per Texture (Figs. S2, S3 & S4). The fitted vectors showed that the bacterial and archaeal communities were associated with significantly higher activities of BG_V and AP_V in loam, while in sand a significantly higher value of the enzyme affinity constant BG_K was observed (Fig. 3A). Spearman's rank correlation coefficients were calculated to discover potential interactions between the relative abundances of the 30 dominant bacterial and archaeal ASVs and the soil potential EEA (Fig. 5B). We found that several bacterial ASVs, affiliated to Flavisolibacter (ASV_32) in loam and to Paenibacillus (ASV_47) and Massilia (ASV_1, _4 & _14) in sand, were significantly and positively correlated with AP_V. The fungal communities had higher AP_V, BG_V, NG_V and NG_K in loam than in sand, but the differences were not significant (Fig. 3B). Spearman's rank correlation coefficients indicated that several ASVs were positively, though not significantly, correlated with AP_V, BG_V and NG_V (Fig. 6B). The cercozoan communities had significantly higher NG_V in loam than in sand (Fig. 3C). The relative abundances of OTUs affiliated to Allapsidae_OTU004 and Eocercomonas_OTU006 were significantly and positively correlated with AP_V, while Paracercomonas_OTU007 was correlated with the activity and affinity of NG (NG_V and NG_K, respectively) in sand (Fig. 7B). Overall, the copy numbers of 16S rRNA gene and ITS region fragments were strongly linked with AP_V (Fig. 3A, B & C). The RDA plots including the soil potential EEA and microbial communities per Texture (Figs. S2, S3 & S4) revealed that the bacterial, archaeal, fungal and cercozoan communities had higher AP_V at D2/D3 than at D1 in sand.
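The correlation screen described here can be sketched as follows (hypothetical data, not the study's pipeline): Spearman correlations of each of the 30 dominant ASVs against one enzyme parameter, with Benjamini-Hochberg correction across the 30 tests.

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_samples, n_asvs = 24, 30
asv_rel_abund = rng.dirichlet(np.ones(n_asvs), size=n_samples)  # placeholder table
ap_v = rng.normal(50, 10, size=n_samples)                       # placeholder AP_V

r_vals, p_vals = zip(*(spearmanr(asv_rel_abund[:, j], ap_v) for j in range(n_asvs)))
reject, p_adj, _, _ = multipletests(p_vals, alpha=0.05, method="fdr_bh")
for j in np.flatnonzero(reject):
    print(f"ASV_{j}: r = {r_vals[j]:.2f}, adjusted p = {p_adj[j]:.3f}")
```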
In loam, by contrast, NG_V was higher at D1 for the bacterial, archaeal and cercozoan communities.

Discussion

Effects of Texture on the maize rhizosphere microbiome

The substrates of different texture (sand and loam) used for growing maize in the present work differed per se in pore size distribution and in maize root-induced changes in porosity in the rhizosphere. Due to its higher organic matter, silt and clay contents, loam provides more sorption sites than sand, causing lower nutrient mobility and leaching in loam. The dilution of loam with 83.3% quartz sand (for the Texture sand) indeed resulted in a decrease in abundance (copy numbers of 16S rRNA gene fragments) and alpha-diversity (observed ASVs/OTUs and Shannon indices) of bacteria, archaea and cercozoa (Tables 1, S1 & S2; Fig. 2). The higher pore connectivity of sand has already been reported in several studies to decrease bacterial diversity in soil due to easier migration of microbial cells between pores (Carson et al. 2010; Hemkemeyer et al. 2018; Seaton et al. 2020). In the present study, changes in cercozoan richness and diversity displayed a similar pattern to those observed for bacteria and archaea (Table 1; Fig. 2). This is likely due to the feeding traits of cercozoa. Most cercozoan taxa mainly feed on bacteria, and these bacterivorous cercozoan taxa affect bacteria and archaea in soil due to their specificity in feeding patterns (Kreuzer et al. 2006; Rosenberg et al. 2009; Flues et al. 2017; Henkes et al. 2018). Subsequently, cercozoa are bottom-up controlled by bacteria, as the cercozoan community assembly is affected by bacterial defense mechanisms (Jousset 2012), and their growth depends on the availability of their major food source. In addition, the differences in Texture of the two soils might have changed these predator-prey interactions. Loam, with a higher content of fine soil pores, might have restricted the access of protists to their bacterial prey (Rutherford and Juma 1992).

Table 2 Global PERMANOVA analysis (R² values) to reveal effects of Texture, Depth and Root Hair on bacterial and archaeal (16S rRNA gene fragments), fungal (ITS regions) and cercozoan (18S rRNA gene fragments) beta-diversities in the rhizosphere of maize. * significant differences at p < 0.05 (in bold). n = 6, except for L_rth3_D3, S_WT_D1 and S_rth3_D3 (n = 4) and S_WT_D3 (n = 5) for bacteria and archaea; S_WT_D2 and S_WT_D3 (n = 5), S_rth3_D3 (n = 4) and S_rth3_D2 (n = 3) for fungi. For cercozoa, n = 6, except for S_WT_D3 (n = 5) and S_rth3_D3 (n = 3). L, loam; S, sand.

Table 3 PERMANOVA analysis (R² values) to reveal effects of Texture, Depth and Root Hair on bacterial and archaeal (16S rRNA gene fragments), fungal (ITS regions) and cercozoan (18S rRNA gene fragments) beta-diversities in the rhizosphere of maize, in loam or sand. * significant differences at p < 0.05 (in bold). Sample sizes as in Table 2. L, loam; S, sand.

Table 4 PERMANOVA analysis (R² values) to reveal effects of Depth on bacterial and archaeal (16S rRNA gene fragments), fungal (ITS regions) and cercozoan (18S rRNA gene fragments) beta-diversities in the rhizosphere of maize. * significant differences at p < 0.05 (in bold). n = 6, except for L_rth3_D3, S_WT_D1 and S_rth3_D3 (n = 4) and S_WT_D3 (n = 5) for bacteria and archaea; S_WT_D2 and S_WT_D3 (n = 5), S_rth3_D3 (n = 4) and S_rth3_D2 (n = 3) for fungi. For cercozoa, n = 6, except for S_WT_D3 (n = 5) and S_rth3_D3 (n = 3). L, loam; S, sand.
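A PERMANOVA of the kind summarized in Tables 2-4 can be sketched as follows. This uses scikit-bio on hypothetical data; the exact tooling of the original analysis may differ.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio.stats.distance import DistanceMatrix, permanova

rng = np.random.default_rng(7)
counts = rng.poisson(20, size=(12, 200)).astype(float)   # 12 samples x 200 ASVs
rel = counts / counts.sum(axis=1, keepdims=True)         # relative abundances

ids = [f"sample_{i}" for i in range(12)]
texture = ["loam"] * 6 + ["sand"] * 6                    # grouping factor

dm = DistanceMatrix(squareform(pdist(rel, metric="braycurtis")), ids)
result = permanova(dm, grouping=texture, permutations=999)
print(result)   # pseudo-F test statistic and permutation p-value
```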
Compared to bacteria, archaea and cercozoa, Texture had weaker effects on fungal abundance (copy numbers of ITS regions; Tables S1 & S2), alpha-diversity (Table 1) and beta-diversity (Table 2), as previously observed for other soils (Hartmann et al. 2014; Yim et al. 2017; Seaton et al. 2020). Fungal hyphae can easily bridge pore spaces (Ritz and Young 2004), while single-celled organisms depend on the connectivity of the water film for dispersal. Using maize roots obtained from the three depth layers of the soil column experiments of the present study, Ganther et al. (2021) found higher expression levels in sand compared to loam of genes for aquaporins involved in passive water transport, genes related to plant immunity function, e.g., pathogenesis-related protein 5 (PR5), chitinase and the ethylene-insensitive 3 transcription factor, and genes related to secondary metabolite production. Genes coding for aquaporins (Marulanda et al. 2010), chitinases (Shoresh and Harman 2010), PR5 (Anisimova et al. 2021) and metabolites (Cotton et al. 2019; Murphy et al. 2021) that were differentially expressed between loam and sand were previously shown to be involved in plant-microbe interactions, and might be influenced by the Texture-dependent microbiome composition.

Effects of Depth on the maize rhizosphere microbiome

In an identical experiment, we also observed that the share of young roots increased with increasing rooting depth (Fig. 1). Young or fine roots play active roles in respiration, transport and absorption of water and nutrients, and they also release much more carbon as exudates for microbial cells than older roots (Nikolova et al. 2020). Thus, they might favor r-strategists (King et al. 2021; Wei et al. 2021), as observed in the present study for Massilia and Bacillus. The assembly of bacterial, archaeal and cercozoan communities in different root zones, i.e., at different distances to the root tips of the primary root of nine-day-old WT maize plants grown in loam, was previously investigated by Rüger et al. (2021). They reported that along the primary root axis, the alpha-diversity was higher at root tips compared to older root regions. In our study, a mixture of roots per depth layer was used for obtaining the rhizosphere microbial pellet, but as the proportion of young roots increased with depth, we observed a slightly increased alpha-diversity of bacteria and archaea in the rhizosphere of 22-day-old WT maize grown in loam at D3 compared to D1 (Fig. 2A), confirming the findings by Rüger et al. (2021). The soil water content showed a depth gradient due to gravity in the soil column. As the plants developed, differences in root length density with depth evolved and might have introduced additional gradients in nutrient distribution. Therefore, differences in water and O₂ availability and in nutrient resources contributed to differences in microbial abundance and diversity at the three depth levels analysed, as already reported in earlier studies (Schlüter et al. 2019; Schimel et al. 1999; Schlatter et al. 2018; Li et al. 2020). Further, microbe-microbe interactions in terms of competition and facilitation of niche occupancy (Sasse et al. 2018; Cotton et al. 2019)
and top-down control by microbial predators, i.e., protists (Bonkowski et al. 2021), also cause heterogeneity along rooting depth, and thus shape the rhizosphere microbiome. Larger differences in bacterial and archaeal beta-diversity were observed between D1 vs. D3 than between D1 vs. D2 or D2 vs. D3 (Table 4), and similar findings were reported for plant root gene expression analysis using roots from the same experiment and depths. Depth also affected the structure of acdS gene-carrying bacteria in TC-DNAs of the same rhizosphere samples investigated in the present study (Gebauer et al. 2021). Depth-related effects on fungal gene copy numbers and alpha-diversity were not observed in the present study (at 4.5-15.1 cm depth), likely because relatively short soil columns were used.

Effects of Root Hair on the maize rhizosphere microbiome

The lack of root hairs had little effect on microbial assembly compared to Texture and Depth in the present study, though the effect was significant for bacteria, archaea and cercozoa (Table 2). Only in loam did Root Hair also have an effect on fungal beta-diversity. This suggests that there might be a link to the findings by Lippold et al. (2021) that total P uptake was significantly lower for rth3 than for WT in loam. The small effect of Root Hair was in line with root gene expression and with the acdS gene-carrying bacterial community (Gebauer et al. 2021) observed for the same experiment. Root hairs of the WT used in this study were relatively short, with a length of about 0.24 mm (Phalempin et al. 2021). This resulted in small differences between WT and rth3 in the extent of the rhizosphere zone and in their roles in water and nutrient uptake, which might also explain the rather small effects on the rhizosphere microbiome observed. Further, the subtle effects of Root Hair on microbial assembly might be attributed to the fertilization of the soils and to the soils being well irrigated. In previous work, the root-hairless rth2 maize accumulated less biomass and P than WT plants, but only under water stress (Klamer et al. 2019), suggesting that testing the genotypes under different P and soil moisture levels could reveal stronger effects of Root Hair on the rhizosphere microbiome. Further, the effects of Root Hair on the rhizosphere microbiome might be larger for plant species with longer root hairs, e.g., barley, and for plants growing under water deficiency or drought (Marin et al. 2021).

Relative abundance of microbial taxa affected by Texture, Depth and Root Hair

The dominant phyla in loam were Firmicutes followed by Proteobacteria (Fig. 4), as also reported previously by Ganther et al. (2020) for the same Texture and WT maize, using the same soil column experimental set-up to investigate the effects of X-ray computed tomography on soil bacterial communities. In contrast to loam, Proteobacteria were dominant in the rhizosphere of WT and rth3 grown in sand. In particular, the genus Massilia was remarkably dominant in the Texture sand, which was obtained by mixing loam with quartz sand. Texture-dependent changes in root exudation patterns or in the soil physicochemical characteristics might have facilitated the successful rhizosphere assembly of the three ASVs affiliated to Massilia. Acidobacteria and also Thaumarchaeota were mainly detected in loam, indicating a preference of these phyla for the soil conditions in loam, e.g., particle size preferences, as recently reported by Hemkemeyer et al. (2018).
Wei et al. (2021) revealed that most of the taxa enriched at younger roots or at lower depths were r-strategists: they proliferated and responded quickly to available nutrients. Indeed, our findings revealed significantly higher relative abundances of ASVs affiliated to the genera Bacillus (in loam) and Massilia (in sand) in the rhizosphere of roots sampled from D3 compared to D1. Isolates affiliated to these genera are also known as r-strategists (Ofek et al. 2012; Wei et al. 2021). Further, strains from both genera were reported to have catalase activity (Yuan et al. 2021), and thus their increased abundance in the maize rhizosphere might reduce the H₂O₂ concentration and thereby H₂O₂ stress. Root gene expression analyses of maize showed that stress- or defense-related genes such as peroxidases were more highly expressed at lower than at upper depths. Slight decreases in oxygen availability, which coincide with the increase in water content with depth, might explain differences in the relative abundances of ASVs affiliated to Bacillus. Bacillus isolates can adapt to a lower oxygen availability (Hartmann et al. 2014), which might support the present finding that their relative abundances were enriched at D3 in loam (Fig. 5A). Fungal relative abundances were not affected by Depth or Root Hair at the phylum level or for most of the 30 most abundant ASVs observed in the present work (Figs. 4B & 6A). This was likely because the observations were made using short soil columns under growth chamber conditions. ASVs affiliated to the genus Dyella, which were detected in significantly higher relative abundances in the rhizosphere of rth3 than of WT grown in sand (Fig. 5A), are known to be involved in nitrogen fixation (Swarnalakshmi et al. 2020). Based on plant root and rhizosphere microbiome feedback, strains belonging to Dyella might have been recruited to assist the rth3 plants with N uptake in sand. Interestingly, root genes coding for nitrate transport were shown to be less expressed in rth3 compared to WT.

Fig. 4 Effects of Texture, Depth and Root Hair on relative abundances of A) bacterial and archaeal (16S rRNA gene fragments), B) fungal (ITS regions) and C) cercozoan (18S rRNA gene fragments) communities at the phylum or order level in the rhizosphere of maize (>1%). Letters indicate significant differences of each bacterial, archaeal, fungal and cercozoan phylum or order in loam or sand (Tukey test applying "BH" p-value correction, p < 0.05; Benjamini and Hochberg 1995). n = 6, except for L_rth3_D3, S_WT_D1 and S_rth3_D3 (n = 4) and S_WT_D3 (n = 5) for bacteria and archaea; S_WT_D2 and S_WT_D3 (n = 5), S_rth3_D3 (n = 4) and S_rth3_D2 (n = 3) for fungi. n = 6 for cercozoa, except for S_WT_D3 (n = 5) and S_rth3_D3 (n = 3). L, loam; S, sand.

Linkage between bacteria, archaea, fungi, cercozoa and soil potential extracellular enzyme activities (EEA)

Both plant roots and microbial cells are able to release enzymes into soil (Cabugao et al. 2017), but we assume that the fraction of plant-originated enzymes decreases with the distance from the root; as we used root-affected soil for the EEA analyses, the detected potential EEA were assumed to be of microbial origin. In line with the bacterial and archaeal abundance and alpha-diversity, higher potential EEA (V_max) were recorded in loam than in sand (Tables 5 & S1). This is explained by a greater soil organic carbon content in loam (Feng et al. 2019; Ren et al. 2018).
In soil studies, K_m (K) is generally termed an apparent affinity constant, as it is affected to a certain extent by soil physical properties. The two- to three-fold higher K_m for acid phosphatase (AP_K) in sand compared to loam (Table 5) indicated the presence of different enzyme systems with a lower affinity, resulting in a decline in the overall enzyme function under substrate limitation in loam (German et al. 2012). Further, as discussed above, due to the higher sorption capacity of loam, the affinity of AP_K was strongly reduced compared to sand (Table 5). Variations in K_m values at different depths were more pronounced for AP, followed by NG (chitinase) and BG (β-glucosidase). They revealed changes in the functional traits of microorganisms, i.e., changes in the metabolic activity and microbial community composition (Blagodatskaya et al. 2021).

Fig. 5 Effects of Texture, Depth and Root Hair on the 30 bacterial and archaeal (16S rRNA gene fragments) ASVs with highest relative abundances in the rhizosphere of maize (A) and their Spearman's rank correlation coefficient (r) to soil potential microbial extracellular enzyme activities (B). Different letters indicate significant differences between treatments of each ASV in loam or sand (Tukey test applying "BH" p-value correction, p < 0.05; Benjamini and Hochberg 1995). BG, β-glucosidase; AP, acid phosphatase; NG, chitinase; V, V_max; K, K_m. Blank, no significant correlation; *, significant correlation. n = 6, except for L_rth3_D3, S_WT_D1 and S_rth3_D3 (n = 4) and S_WT_D3 (n = 5). L, loam; S, sand.

The enzyme activity and affinity were fitted as environmental variables of the bacterial, archaeal, fungal and cercozoan communities (Fig. 3). Higher activities of BG_V, AP_V and NG_V were shown in loam than in sand, demonstrating different abilities of the dominant bacteria, archaea, fungi or cercozoa to produce or release the enzymes investigated. The correlation of AP_V with multiple ASVs or OTUs indicated functional redundancy and possible mutualistic interactions in P acquisition and in cellulose degradation within the communities (Banerjee et al. 2016). The positive correlation of AP_V with ASVs affiliated to Massilia and with the cercozoan taxa Allapsidae and Eocercomonas in sand possibly indicates different taxa specialized in the production of specific enzymes. The bacterial, archaeal and cercozoan communities showed significantly higher AP_V at lower depths in sand. A higher enzyme affinity in sand (Fig. 3) for the decomposed compounds indicated that different taxa produced distinctly different enzyme systems (Fontaine and Barot 2005; Blagodatskaya et al. 2009; Blagodatskaya and Kuzyakov 2013). The stronger linkage of the enzyme parameters with bacteria and archaea than with fungi observed in the present study might be due to the lower abundance of fungi, but possibly indicates a broader functional diversity within the bacterial and archaeal communities, which are able to decompose plant and microbial residues as well as to hydrolyze organic P compounds, while fungi invested more resources in P acquisition (Smith et al. 2011; Chiba et al. 2021). Although the potential EEA were analysed in soil taken at a larger distance from the root surface, we did find correlations to the microbiome in the rhizosphere (Figs. 5, 6 & 7).
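To make the role of K_m concrete, consider a worked example with illustrative numbers (not measured values): two enzyme pools with equal V_max but a three-fold difference in K_m, assayed at a low substrate concentration of S = 5 µM.

```latex
% Illustrative only: Eq. 1 evaluated at S = 5 uM for K_m = 10 vs. 30 uM.
\[
  v_{K_m=10} = \frac{V_{max}\cdot 5}{10+5} \approx 0.33\,V_{max},
  \qquad
  v_{K_m=30} = \frac{V_{max}\cdot 5}{30+5} \approx 0.14\,V_{max}
\]
```

Under substrate limitation, the pool with the higher K_m thus operates at less than half the rate despite an identical V_max, which is why activity (V) and affinity (K) are interpreted separately throughout.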
Fig. 6 Effects of Texture, Depth and Root Hair on the 30 fungal (ITS regions) ASVs with highest relative abundances in the rhizosphere of maize (A) and their Spearman's rank correlation coefficient (r) to soil potential microbial extracellular enzyme activities (B). Different letters indicate significant differences between treatments of each ASV in loam or sand (Tukey test applying "BH" p-value correction, p < 0.05; Benjamini and Hochberg 1995). BG, β-glucosidase; AP, acid phosphatase; NG, chitinase; V, V_max; K, K_m. Blank, no significant correlation; *, significant correlation. n = 6, except for S_WT_D2 and S_WT_D3 (n = 5), S_rth3_D3 (n = 4) and S_rth3_D2 (n = 3). L, loam; S, sand.

Fig. 7 Effects of Texture, Depth and Root Hair on the 30 cercozoan (18S rRNA gene fragments) OTUs with highest relative abundances in the rhizosphere of maize (A) and their Spearman's rank correlation coefficient (r) to soil potential microbial extracellular enzyme activities (B). Different letters indicate significant differences of each OTU in loam or sand (Tukey test applying "BH" p-value correction, p < 0.05; Benjamini and Hochberg 1995). BG, β-glucosidase; AP, acid phosphatase; NG, chitinase; V, V_max; K, K_m. Blank, no significant correlation; *, significant correlation. n = 6, except for S_WT_D3 (n = 5) and S_rth3_D3 (n = 3). L, loam; S, sand.

Table 5 Soil potential microbial extracellular enzyme activities affected by Texture, Depth and Root Hair (nM g⁻¹ h⁻¹ for V/V_max and μM for K/K_m). The data are presented as mean ± standard error of the mean. Different letters indicate significant differences between treatments in loam or sand (Tukey test applying "BH" p-value correction, p < 0.05; Benjamini and Hochberg 1995). BG, β-glucosidase; AP, acid phosphatase; NG, chitinase; V, V_max; K, K_m. n = 6, except for L_rth3_D3 (BG, AP and NG), L_WT_D3 (AP), S_rth3_D1 (BG and AP) and S_WT_D1/D2 (AP), n = 5; L_WT_D3 (NG), S_rth3_D2 (BG and AP) and S_WT_D3 (NG), n = 4; S_rth3_D3 (BG) and S_WT_D3 (BG), n = 3. n.a., not analysed due to lack of sample material.

Conclusion

Our research on the effects of Texture, Depth and Root Hair has three general implications. First, we have shown that Texture was the strongest driver of rhizosphere microbial assembly and of potential EEA for both WT and rth3 plants. Second, we have demonstrated that Depth was another driver of the rhizosphere microbiome, suggesting that the abiotic environment may differ between the different layers of the column. Third, the small impact of Root Hair on the rhizosphere microbiome and on potential EEA in the root-affected soil of maize raises questions about the importance of root hair length and about the low plasticity of the root hair-defective mutant under limited P availability, as reported by Lippold et al. (2021). Overall, our hypotheses were only partly confirmed, which might be due to the resolution level of the amplicon sequence analyses, but also to the experimental design of the column experiment. Most excitingly, the results of the present study, as well as the previously published data on the plant side, e.g., plant gene expression and root system architecture from the same and identical column experiments (Lippold et al. 2021), showed the same drivers and highlighted the close linkage between the plant, its rhizosphere microbiome and the potential EEA.

Acknowledgements This project was carried out in the framework of the priority program 2089 "Rhizosphere spatiotemporal organization - a key to rhizosphere functions" funded by DFG (Deutsche Forschungsgemeinschaft), project numbers 403637238, 403640293, 403641192, 403664478 and 403635931.
Seeds of the maize WT and rth3 were provided by Caroline Marcon and Frank Hochholdinger (University of Bonn). We would also like to acknowledge all participants involved in the sampling at UFZ Halle, and Dr. Doreen Babin and Ilse-Marie Jungkurth for reading the manuscript.

Author contributions MG and MT designed and carried out the experiments. ZI, LR, MG and BY collected the samples. BY prepared rhizosphere DNA and performed the amplicon sequencing analyses of 16S rRNA gene fragments and ITS regions. LM prepared the amplicon libraries of 16S rRNA gene and ITS region fragments, performed the sequencing and produced the ASV tables. LR performed the amplicon sequencing of cercozoa. ZI analysed the soil microbial extracellular enzyme activities. AHB contributed to the data analyses. BY, ZI, LR, MT, DV, MB, EB and KS wrote the manuscript with contributions from all authors.

Funding Open Access funding enabled and organized by Projekt DEAL. This research was conducted within the research program "Rhizosphere Spatiotemporal Organization - a Key to Rhizosphere Functions" of the German Science Foundation, funded by the Deutsche Forschungsgemeinschaft (project numbers 403637238, 403640293, 403641192, 403664478 and 403635931). AHB was supported by the German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, funded by the Deutsche Forschungsgemeinschaft (DFG-FZT 118, 202548816).

Data availability All raw sequences for bacteria, archaea and fungi were deposited at NCBI within the Sequence Read Archive under accession PRJNA677863. Raw sequences of cercozoa are available at the ENA under the accession number PRJEB49274.

Conflict of interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Partial Order Reduction for Deep Bug Finding in Synchronous Hardware

Symbolic model checking has become an important part of the verification flow in industrial hardware design. However, its use is still limited due to scaling issues. One way to address this is to exploit the large amounts of symmetry present in many real-world designs. In this paper, we adapt partial order reduction for bounded model checking of synchronous hardware and introduce a novel technique that makes partial order reduction practical in this new domain. These approaches are largely automatic, requiring only minimal manual effort. We evaluate our technique on open-source and commercial packet mover circuits: designs containing FIFOs and arbiters.

Introduction

Modern society relies increasingly on electronic systems, powered by hardware components that continue to grow in complexity and variety. Ensuring the functional correctness of these components is essential, as bugs and errors can have consequences ranging from undermining a company's reputation to jeopardizing human safety [1,22,25,32,33]. Most electronic designs must therefore include a significant verification effort, and this effort often consumes more time and resources than all other aspects of the design process [17,34]. Formal methods such as symbolic model checking have become a crucial part of the verification effort because of their strong guarantees and automation [24]. However, due to the state space explosion problem [14], model checking typically only works well for small- to medium-sized circuits with primarily control logic, limiting its potential for addressing industry verification challenges. One approach for combating the state space explosion problem is partial order reduction [14]. While symbolic partial order reduction has been successfully applied for the verification of asynchronous systems [37], its use in synchronous systems has been limited. In this paper, we introduce a novel approach for adapting symbolic partial order reduction to model checking of synchronous hardware and demonstrate dramatic reductions in the time to reach deep bugs on certain classes of synchronous circuits. Moreover, the technique requires only an interface-level annotation of the circuit and, when fully automated approaches fail, can be guided by the user. The paper makes the following contributions:

1. We adapt partial order reduction for synchronous hardware verification.
2. We introduce a novel technique for reducing the possible inputs to a circuit at a single time step, which is crucial for the practical application of partial order reduction to synchronous hardware.
3. We provide a set of sufficient conditions which, if proven, guarantee that the proposed techniques maintain the reachable states.
4. We introduce conservative proof techniques for verifying these conditions, which empirically work well on packet movers.
5. We evaluate our techniques on a set of open-source and commercial packet mover circuits, demonstrating dramatic speed-ups with minimal manual effort.

The rest of the paper is organized as follows. We first provide a motivating example, below. Then, in Section 2, we cover relevant background material and notation. We explain our partial order reduction in Section 3 and our interface simplification technique in Section 4. We provide an experimental evaluation in Section 5. Section 6 covers related work, and Section 7 concludes.

Motivating Example

Throughout this paper we use the running example shown in Code Snippet 1.
We chose this example because: i) it is easy to understand; ii) it resembles real-world packet mover circuits; and iii) it contains a difficult-to-reach bug. The system has a synchronizing clock and takes two 1-bit inputs: inc_x and inc_y. The 6-bit registers (state elements) x and y index the valid vector and are initialized to 0. The 64-bit registers valid and data start at 0 and 1, respectively. The 64×64-bit memory is uninitialized. If inc_x and en_x are true, the system increments the value of x. When inc_y is true, the system increments y, sets the valid bit at index y, writes data to the memory at location y, and rotates the data vector to the left. Notice that the en_x signal ensures that x never surpasses y (until all bits in valid are set). This incrementing-pointer logic is similar to that found in a circular pointer FIFO. To ensure the asserted property, the code attempts to maintain the invariant: data = 1 << y. At first, it appears that the asserted property should hold based on this invariant, but it does not. There is a bug that can first occur at cycle 65: the overflow check in the data update uses integers, which are assumed to be 32 bits. Since y is zero-extended to 32 bits, y+1 can never be equal to 0. Thus, when y has the value 63 and is incremented, data, which is supposed to be one-hot, is set to 0. Although the system is small, this is a surprisingly difficult bug to reach using model checking. We believe this is due in part to the non-determinism in the inputs. Specifically, all but two of the model checker configurations we tried timed out at 2 hours before reaching the bug. Since bounded model checking (BMC) is one of the best approaches for bug-finding, we focus on improvements to BMC that help reach this bug. We introduce automated, best-effort techniques that reduce the time to hit this bug from over 1000 seconds to 46 seconds by safely adding temporal symmetry-breaking constraints to the system.

Background

Before explaining our algorithm, we adapt the standard notion of synchronous transition systems and review fundamental model checking concepts below. For a more thorough introduction to model checking, we refer the reader to [14,15].

Definition 1. A Synchronous Transition System (STS) is a tuple ⟨S, Init, A, En, D, T⟩, where:
- S: a set of states
- Init ⊆ S: a set of initial states
- A: a finite set of atomic actions, i.e., logically distinct operations of the system
- En = {en_a | a ∈ A}, where en_a : S → 𝔹 is a state predicate that holds iff action a is enabled in a given state
- D: a set of data inputs to the system
- T ⊆ S × (P(A) × D) × S: the state transition relation, where P denotes power set

For our purposes, an STS instruction can perform multiple atomic actions simultaneously. We define the system's instruction set (i.e., the set of actions that the system can perform in one transition) as I := P(A). We then define the set of inputs of an STS as Input := I × D. Thus, the transition relation T is a subset of S × Input × S. We denote the cardinality of an instruction i as |i|. For s, s′ ∈ S and in ∈ Input, T(s, in, s′) holds iff it is possible to reach s′ from s by applying input in. It is often convenient to reason about sequences using vector notation. Let in ∈ Inputⁿ and s ∈ Sⁿ⁺¹, with n > 0. We use subscripts to name individual elements of vectors, e.g., s := ⟨s₀, s₁, ...⟩. We use the notation T(in, s) to denote ⋀₀≤i<n T(sᵢ, inᵢ, sᵢ₊₁). The length of a vector is given by |·|, e.g., |s| = n + 1, and prepending is represented as · : ·,
e.g., s = s₀ : s′ for some s′ ∈ Sⁿ. With some abuse of notation, we allow prepending both sequences and single elements. For k > 0, we say that s ∈ Sᵏ is reachable if ∃ n ∈ ℕ, s′ ∈ Sⁿ⁺¹, in ∈ Inputⁿ⁺ᵏ. Init(s′₀) ∧ T(in, s′ : s). The set of enabledness predicates En constrains the valid states in which an action can occur. For an instruction i ∈ I and s ∈ S, let en_i(s) := ⋀_{a∈i} en_a(s). In the remainder of the paper, we only consider transition relations T that respect the enabledness conditions. That is, we assume ∀ s, i. (en_i(s) ↔ ∃ s′, d. T(s, ⟨i, d⟩, s′)). Depending on the context, this can be checked with a model checker or added as an environmental assumption. We also assume that the existence of a transition does not depend on the data input, that is, ∀ s, i. (∃ d, s′. T(s, ⟨i, d⟩, s′) ⟹ ∀ d. ∃ s′. T(s, ⟨i, d⟩, s′)).

Example 1. We can define an STS for the motivating example. Let BV_k denote the set of all bitvectors of width k. Because there is only a single clock with no negative-edge behavior, we model the system without the clock, where every transition corresponds to a clock cycle. Define an STS ⟨S, Init, A, En, D, T⟩, where:
- S is the set of valuations of the state elements ⟨x, y, valid, data, mem⟩, i.e., BV₆ × BV₆ × BV₆₄ × BV₆₄ × M, where M is the set of 64-entry memories of 64-bit words
- Init is the set containing all states where x = 0, y = 0, valid = 0 and data = 1
- A = {inc_x, inc_y}
- En = {en_inc_x := valid[x] = 1, en_inc_y := true}
- D = {nil} (here, nil is just a dummy placeholder used to ensure that T is not empty)
- T is the relation describing the next-state updates in Code Snippet 1.

Model Checking. Given an STS S, let a safety property P ⊆ S be a set containing acceptable states. The model checking problem is to determine whether the system stays within this acceptable set for all possible execution traces. Formally, we want to check whether the following holds:

∀ n ∈ ℕ, in ∈ Inputⁿ, s ∈ Sⁿ⁺¹. (Init(s₀) ∧ T(in, s)) ⟹ P(sₙ)    (1)

When equation (1) holds, we say that P is an invariant of S. A number of techniques exist for solving this problem, including Binary Decision Diagram (BDD)-based [12] approaches, interpolant-based [27] approaches, and IC3/PDR (property directed reachability) techniques [10,16]. We refer the interested reader to [15] for a more complete survey of model checking algorithms. In this paper, we will focus on bounded model checking (BMC). In BMC, instead of proving (1) for all n, we prove it for all n less than some finite bound k. Though it typically cannot be used to prove properties, BMC can be quite effective at finding bugs [6] and is especially useful when full model checking is infeasible.

Symmetry. Early on in the development of model checking, researchers recognized the importance of symmetry reduction to combat the state explosion problem [13]. Existing approaches in the hardware domain perform data symmetry reduction and data type reduction through the use of bit-width reduction preprocessing passes or syntactic restrictions such as scalarsets [8,20,28]. There have also been abstraction-refinement loop algorithms proposed to handle memory symmetries [9]. All of these approaches are focused on symmetries present in the transition system description, such as the presence of large data types. We refer to these types of symmetries as data symmetries. Most of these techniques are intended to speed up proofs of true properties rather than to accelerate bug-finding. Model checking of asynchronous systems such as concurrent programs faces an orthogonal issue due to the many possible redundant interleavings of independent processes. Throughout this paper, we refer to this as path symmetry.
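Since Code Snippet 1 is not reproduced here, the following Python sketch (ours, with the memory omitted and hypothetical variable names) models the described update logic and exercises the bug: the wrap check on y is computed at 32-bit width, so it never fires and the one-hot data register collapses to 0 once y wraps.

```python
MASK64 = (1 << 64) - 1

def step(state, inc_x, inc_y):
    # One clock cycle of the (buggy) update logic described above.
    x, y, valid, data = state
    if inc_x and ((valid >> x) & 1):     # en_x: x only advances over valid entries
        x = (x + 1) % 64                 # 6-bit register wraps
    if inc_y:
        valid |= 1 << y
        # BUG: y + 1 is evaluated as a 32-bit integer (y zero-extended), so the
        # wrap branch is dead code and the shift drops the one-hot bit at y = 63.
        data = 1 if (y + 1) % (1 << 32) == 0 else (data << 1) & MASK64
        y = (y + 1) % 64
    return (x, y, valid, data)

state = (0, 0, 0, 1)
for cycle in range(1, 66):
    state = step(state, inc_x=0, inc_y=1)
    x, y, valid, data = state
    if data != 1 << y:                   # intended invariant: data = 1 << y
        print(f"cycle {cycle}: data = {data:#x}, expected {1 << y:#x}")
        break
```

Driving inc_y every cycle, the invariant breaks as soon as y wraps from 63 to 0, consistent with the paper's observation that the assertion can first fail at cycle 65.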
Path symmetry is a temporal symmetry: it relates to executions of a system rather than just its size. Path symmetries occur when there are many distinct ways of reaching the same state in a system execution. Exploring all such paths can result in exponential case splitting. This paper provides evidence that path symmetry can also severely hurt model checking performance in synchronous systems. One of the first techniques proposed to handle path symmetry was partial order reduction.

Partial Order Reduction. Partial order reduction was first developed in the explicit-state model checking context but was later extended to symbolic model checking [37]. The approach is named "partial order reduction" for historical reasons, but Clarke noted in [14] that "model checking using representatives" [30,31] may have been a more appropriate name. In particular, partial order reduction attempts to develop equivalence classes of behaviors so that only one representative from each class needs to be considered during model checking. Note that partial order reductions are sound only for checking state invariants. If the property of interest is temporal, the reduction could disallow input sequences that trigger the property. This can be avoided by first instantiating a monitor [15] and, if necessary, converting liveness properties to safety [5]. Partial order reduction is less natural in the synchronous setting, because synchronous transition systems do not have easily expressible independent actions. Nevertheless, these systems can still benefit from partial order reduction. Consider our motivating example: despite the huge number of system execution paths to consider, many of them are redundant. Observe that if both inputs are zero, then the state does not change. Furthermore, there is a temporal symmetry in the system execution: from any state where en_x is true, driving only inc_x followed by only inc_y results in the same state as driving them in the opposite order. Thus, this system has a large number of redundant interleavings, much like a multi-threaded program. To address this problem, we introduce a partial order reduction for synchronous hardware. Our goal is to remove redundant interleavings by adding constraints to the system. To maintain soundness, we provide a set of conditions which must pass before we can add these constraints.

Synchronous Partial Order Reduction

In order to apply partial order reduction to a synchronous transition system, we are interested in identifying pairs of instructions that can be reordered without affecting the resulting state. More generally, we also want to be able to find pairs that can only be reordered under certain conditions. To formalize these notions, we adapt the notation and representation of guarded independence relations from [37].

Definition 2. Given an STS ⟨S, Init, A, En, D, T⟩ with instruction set I, let G := P(S) be the set of predicates over the states. Let ⟨i₀, i₁, g⟩ be a guarded independence tuple iff for all d₀, d₁ ∈ D and reachable s ∈ S³, the following condition holds:

(g(s₀) ∧ en_i₀(s₀) ∧ T(⟨⟨i₁, d₁⟩, ⟨i₀, d₀⟩⟩, s)) ⟹ ∃ s′. T(⟨⟨i₀, d₀⟩, ⟨i₁, d₁⟩⟩, ⟨s₀, s′, s₂⟩)

According to this definition, if we can prove that ⟨i₀, i₁, g⟩ is a guarded independence tuple, then we can reorder ⟨i₁, i₀⟩ instruction sequences as long as: i) i₀ is enabled in the first state; ii) g holds in the first state; and iii) we also reorder the corresponding data inputs. We check only the enabledness of i₀ because ⟨i₀, i₁⟩ is the representative order, and we only need to be able to reorder to the representative, not from it. The guard allows us to consider partial order reductions that only hold for a subset of the reachable states. To avoid trivially overconstraining the system with conflicting reorderings, we will only consider one ordering for each pair of instructions. The condition in Definition 2 is difficult to check automatically because of the existential quantifier. We instead check two slightly weaker conditions, adapted from [14,37].
The guard allows us to consider partial order reductions that only hold for a subset of the reachable states. To avoid trivially overconstraining the system with conflicting reorderings, we will only consider one ordering for each pair of instructions. The condition in Definition 2 is difficult to check automatically because of the existential quantifier. We instead check two slightly weaker conditions that [14,37]. The first condition states that instruction i 0 cannot disable i 1 under guard g: Intuitively, this condition ensures that we do not remove reachable states by disabling instructions. The second condition is that executing the instructions in either order leads to the same final state: When applying partial order reduction to concurrent programs, the standard approach is to check conservative syntactic properties which guarantee conditions (2) and (3). Synchronous systems do not typically have these syntactic properties, because there is no notion of distinct processes. Instead, we must check these conditions directly. In real circuits, it is unlikely that (2) will hold over arbitrary states. However, it is sufficient to prove that it holds for all reachable states. This can be done with a model checker. To prove (3), we could encode it as an LTL property or build a monitor automaton and use a model checker. Alternatively, we have found that we can often use a straightforward commuting-diagram approach starting from a symbolic initial state, depicted in Fig. 1. We duplicate the system, unroll it twice, then start both copies in the same symbolic state and check that applying the instructions in either order results in the same final state. This simple approach has the disadvantage that a symbolic initial state ignores reachability which could lead to spurious counterexamples. However, notice that the initial state is constrained by enabledness assumptions. To apply an instruction it must be enabled, so both instructions must be enabled in the initial state. We have found that these enabledness assumptions often constrain the initial state enough to rule out spurious counterexamples. If both conditions pass, then we can choose a representative order and disallow the opposite ordering for that pair of instructions. If the proof of condition (3) fails, it provides a counterexample which should either convince the user that partial order reduction does not apply for that pair of instructions (a real counterexample), or serve as a guide for the user to write guards that would remove the spurious counterexample. Other invariants of the system, either obtained automatically or manually guessed by the user, could also remove spurious counterexamples. We can now state the first theorem of synchronous partial order reduction: that these conditions guarantee guarded independence over all reachable states. (2) and (3) hold for instructions i o , i 1 ∈ I, and guard g ∈ P(S), then i 0 , i 1 , g is a guarded independence tuple. Proof. Assume conditions (2) and (3) and that for some d 0 , d 1 ∈ D and reachable s ∈ S 3 , we have: Because en i0 (s 0 ), we have ∃s , d . T (s 0 , i 0 , d , s ) because of our enabledness assumption. Furthermore, by the data-input independence property of transition relations, it follows that for some s 1 , T (s 0 , i 0 , d 0 , s 1 ) Now, because one of our assumptions is a transition from s 0 using i 1 , en i1 (s 0 ) must be true. Condition (2) implies that en i1 (s 1 ), thus ∃ s , d . T ( i 0 , d 0 , i 1 , d , s 0 , s 1 , s ). 
Let a guarded independence relation, R ⊆ I × I × G, be a set of guarded independence tuples. We now describe how to apply partial order reductions, given some R. For each ⟨i₀, i₁, g⟩ ∈ R and for every s ∈ S², d₁ ∈ D, whenever T(s₀, ⟨i₁, d₁⟩, s₁) ∧ en_i₀(s₀) ∧ g(s₀) holds, we remove from T every transition of the form ⟨s₁, ⟨i₀, d′⟩, s′⟩ (for any d′ and s′). Let T_R be the result. To apply this reduction in practice, we add a constraint to the BMC encoding: (g(s₀) ∧ en_i₀(s₀) ∧ i₁) ⟹ ¬next(i₀). This makes it impossible for the STS to ever execute an instruction i₀ immediately after an instruction i₁ when starting from a state where i₀ is enabled and g holds. This effectively gives preference to i₀ as long as it is enabled. The effect of partial order reduction on a pair of instructions in a synchronous system is depicted in Fig. 2. Red X's show removed transitions, and for simplicity, we assume a trivial guard of true. Notice that all states are still reachable via some path from the initial state in the bottom left corner.

Theorem 2. Given S := ⟨S, Init, A, En, D, T⟩, let R be a guarded independence relation and let S_R be the reduced STS obtained by replacing T with T_R in S. Then, if a property P is an invariant for S_R, it is also an invariant for S.

Proof. It suffices to show that S_R can reach all the same states as S. We prove this by contradiction. Assume there are some in, s such that Init(s₀) ∧ T(in, s), and 0 ≤ j ≤ |s| − 1 such that s_j is the first state that is unreachable in S_R. The value of j cannot be 0 or 1, because S and S_R have the same initial states and T_R only excludes sequences of length 2. Then, by the definition of T_R, ⟨in_{j−2}, in_{j−1}⟩ must be a sequence excluded by T_R. Conditions (2) and (3) guarantee that permuting in_{j−2} and in_{j−1} results in an enabled sequence that ends in the same state, s_j, which contradicts the assumption. Thus, there cannot be a state which is reachable in S but not in S_R.

Reduced Instruction Sets

Now that we can apply partial order reduction to synchronous systems, our main goal is to identify a maximal guarded independence relation, R. Recall that we defined instructions as sets of atomic actions. We call an instruction containing at most one action atomic (this includes the instruction with no actions). Non-atomic instructions are complex. Instructions thus reflect the parallelism of synchronous hardware, and lead to natural candidates for R: pairs of atomic instructions. Furthermore, notice that the number of instructions is exponential in the number of actions. Thus, it could be prohibitively expensive to check every pair of instructions for guarded independence. In contrast, the number of atomic instructions is equal to the number of actions (plus one). Furthermore, it is likely that many complex instruction pairs will not have a guarded independence relationship because they contain common actions. Our goal in this section is to disallow as many complex instructions as possible without losing any reachable states, thereby reducing the number of pairs of instructions we need to check while also making it more likely for the checks to succeed. Note that, in isolation, removing instructions might be problematic, because it could extend the bound needed to reach a property violation.
However, as we will demonstrate in the experimental section, this disadvantage is more than compensated for when instruction removal is applied in combination with partial order reduction. Given an STS with instruction set I, we seek a reduced instruction set, I_r ⊆ I, which preserves the reachable states of the system. Let Input_r be the set of inputs which only use instructions from I_r. Given an input in ∈ Input, our goal is to prove the existence of a witness w(in) ∈ Input_rⁿ (for some n > 0) that simulates the behavior of in using only reduced instructions. Formally, the witness function w should satisfy:

∀ in ∈ Input, s, s′ ∈ S. T(s, in, s′) ⟹ ∃ v ∈ Sⁿ⁻¹. T(w(in), s : v : s′), where n = |w(in)|    (4)

In other words, we need to show that for every instruction in the original instruction set, there exists a sequence of inputs, using only instructions from the reduced instruction set (RIS), that results in the same final state. Notice that a witness function that also depended on the state would be more general, but for our purposes, it is sufficient for the witness function to depend only on the input.

Atomic instruction sets

The condition in (4) is quite general and does not provide any intuition on how to choose w. Here, we focus on a specific case where w is easy to construct: we choose I_r to be an atomic instruction set, defined as an instruction set containing only atomic instructions. We then must prove that the set of reachable states is not affected by restricting the instructions to those in I_r. It is sufficient to prove that for each complex instruction, we can remove one of its actions and perform that action in the next step, with the same result. For some complex instruction i containing action a and some data input d, let w_a(⟨i, d⟩) be ⟨⟨i−{a}, d⟩, ⟨{a}, d⟩⟩. We must show that for each input in containing a complex instruction, there exists some a where w_a(in) has the equivalent effect on the system as in. Formally, the requirement is:

∀ i ∈ I∖I_r, d ∈ D, s, s′ ∈ S. T(s, ⟨i, d⟩, s′) ⟹ ∃ a ∈ i, s″ ∈ S. T(⟨⟨i−{a}, d⟩, ⟨{a}, d⟩⟩, ⟨s, s″, s′⟩)    (5)

Condition (5) is still difficult to prove because of the existential quantifier. One conservative approach is to replace the existential quantifier with a universal quantifier and attempt to prove that stronger condition. For real systems, this is unlikely to hold. Instead, we propose a counterexample-blocking procedure which, if it succeeds, guarantees (5). We introduce symbolic values for i, d, and a and then iteratively add constraints over them until the proof succeeds or we have enumerated all possibilities. This algorithm is a specialized ∀∃ decision procedure that exploits the structure of (5) and additional domain knowledge about the proof goal. We use a constraint solver as an oracle.

Algorithm 1 ProveRIS(S)
1: S′ ← copy(S)
2: i, d, a ← symbolic instruction, data input and action for S
3: i′ ← symbolic instruction for S′
4: s₀, s₁ ← symbolic states of S; s′₀, s′₁, s′₂ ← symbolic states of S′
5: add_constraint(s₀ = s′₀ ∧ a ∈ i ∧ i′ = i − {a})
6: add_constraint(T(s₀, ⟨i, d⟩, s₁) ∧ T(s′₀, ⟨i′, d⟩, s′₁) ∧ T(s′₁, ⟨{a}, d⟩, s′₂))
7: for c ← 2 to |A| do
8:   while check_sat(|i| = c ∧ s₁ ≠ s′₂) do
9:     µ ← get_model()
10:    i_µ ← assignment(µ, i)
11:    a_µ ← assignment(µ, a)
12:    add_constraint(i_µ ⊆ i ⟹ a ≠ a_µ)
13:    if ¬check_sat(i = i_µ) then
14:      return false // exhausted all possible decompositions for this instruction
15:    end if
16:  end while
17: end for
18: return true // every instruction can be decomposed

Algorithm 1 takes an STS, S := ⟨S, Init, A, En, D, T⟩, and returns true if the instruction set can be decomposed into an atomic instruction set by delaying a single action from each instruction. For simplicity, the algorithm assumes (and we check this assumption separately) that if a complex instruction i is enabled, then for each a ∈ i, executing i−{a} results in a state where a is enabled. Formally:
∀ i ∈ I∖I_r, d ∈ D, s ∈ S², a ∈ i. en_i(s₀) ∧ T(s₀, ⟨i−{a}, d⟩, s₁) ⟹ en_a(s₁)    (6)

Note that this is only a slight generalization of the property that atomic instructions do not disable each other, a condition that we will need anyway in order to apply partial order reduction to the atomic instruction set (see condition (2)). The algorithm first creates an identical copy of the STS in line 1. Lines 2-4 set up symbolic variables for the instructions, data, and states of each system. Line 5 adds constraints to the solver enforcing that both systems start in the same state, use the same data, and that i′ is i but with the symbolic action a dropped. Line 6 adds the transition relation constraint for each STS. The initial symbolic set-up is depicted in Fig. 3. The outer loop at line 7 iterates over all possible complex instruction cardinalities. The inner loop starting at line 8 attempts to show that, for each cardinality c, instructions of that cardinality can be decomposed by delaying one action (symbolically represented by a). If all instructions of cardinality c have been decomposed, then the while-loop condition is false and the outer loop continues. Otherwise, the algorithm gets variable assignments from the constraint solver in lines 9-11 and learns a constraint at line 12 that prevents this particular action, a_µ, from being chosen for decomposition again. To ensure that we have not blocked all possible actions, there is an additional check at lines 13-14, which returns false in the case that no action can be delayed for the current instruction. Importantly, the algorithm assumes that if the delay of action a_µ does not create a valid witness sequence for a given complex instruction i_µ, then the same is true whenever the instruction i includes i_µ. We call this a monotonicity assumption, and it typically holds when actions are somewhat independent. The monotonicity assumption motivated the current structure of the algorithm and can significantly reduce the number of iterations of the algorithm. We can remove this assumption by changing i_µ ⊆ i to i_µ = i in the antecedent of the constraint learned at line 12. Note that the monotonicity assumption does not make the algorithm unsound: if it returns true, then (as we prove below) condition (5) holds. However, if the algorithm returns false, then it may be that the version without the assumption would return true. For each of our experiments, we were able to get a true result with the monotonicity assumption. Because the algorithm does not consider state reachability and looks for a witness function that only depends on inputs, it can still return false when an equivalent sequence might exist for reachable states. In such cases, users can examine the constraint solver models and attempt to remove some of them by proving other invariants. If Algorithm 1 returns true, we replace T with T_r, where T_r is the result of removing from T all transitions ⟨s, ⟨i, d⟩, s′⟩ where |i| > 1. Practically, this is achieved by adding a disjunctive constraint over the possible atomic actions. We can now state the main results for reduced instruction sets.

Theorem 3. Let S := ⟨S, Init, A, En, D, T⟩ be an STS for which condition (6) holds. If ProveRIS(S) returns true, then condition (5) holds.

Proof. We maintain the loop invariant at line 8 that for every instruction i′ with |i′| = c, there is some action a′ such that check_sat(|i| = c ∧ i = i′ ∧ a = a′) is true. It is true initially for each c by condition (6). Afterwards, the check at lines 13-14 ensures that it is maintained. Furthermore, the condition at line 8 ensures that when the while loop is exited, any satisfying assignment for check_sat(|i| = c) is such that s₁ = s′₂. Together, these conditions guarantee that (5) holds.
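As a concrete rendering of the counterexample-blocking loop, the sketch below (ours, in z3; the paper's implementation lives inside CoSA) runs Algorithm 1's inner loop for a hypothetical two-action system where action a0 increments x and a1 increments y; the instruction is encoded by the Booleans i0, i1 and the delayed action by a0_delayed.

```python
from z3 import BitVec, Bools, Bool, If, And, Or, Not, Implies, Solver, sat

def step(x, y, do0, do1):
    # Transition function: each enabled action updates its counter.
    return (If(do0, x + 1, x), If(do1, y + 1, y))

x, y = BitVec('x', 8), BitVec('y', 8)
i0, i1 = Bools('i0 i1')        # symbolic instruction i
a0 = Bool('a0_delayed')        # symbolic delayed action a (True means a0)

# Copy 1 runs i in one step; copy 2 runs i - {a}, then {a} (lines 1-6).
x1, y1 = step(x, y, i0, i1)
xm, ym = step(x, y, And(i0, Not(a0)), And(i1, a0))
x2, y2 = step(xm, ym, And(i0, a0), And(i1, Not(a0)))

s = Solver()
s.add(Implies(a0, i0), Implies(Not(a0), i1))   # a must be an action of i
card2 = And(i0, i1)                            # |i| = 2: the only complex case

while s.check(card2, Or(x1 != x2, y1 != y2)) == sat:   # line 8
    m = s.model()                                      # lines 9-11
    bad = m.eval(a0, model_completion=True)
    s.add(Implies(card2, a0 != bad))                   # line 12: block this choice
    if s.check(card2) != sat:                          # lines 13-14
        print("false: {a0, a1} cannot be decomposed")
        break
else:
    print("true: every complex instruction can be decomposed")
```

For this toy system both delay choices commute, so the first check at line 8 is already unsatisfiable and the loop immediately reports true; a system where some action must not be delayed would instead drive the blocking constraint at line 12.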
Theorem 4. Let S := ⟨S, Init, A, En, D, T⟩ be an STS such that condition (6) holds and ProveRIS(S) returns true, and let T_r be the transition relation for the reduced instruction set. Let S_r be the reduced STS obtained by replacing T with T_r in S. Then a safety property P over S is an invariant for S_r if and only if it is also an invariant for S.

Proof. It suffices to show that the reachable states of S and S_r are identical. Init does not change, so the initial states cannot differ. Furthermore, since T_r is obtained by removing transitions from T, we know that S_r cannot add any reachable states. To show that it also does not remove any reachable states, consider an arbitrary trace Init(s_0) ∧ T(in, s) with |s| = n; we must show that there exist in′, m, and s′ ∈ S^m such that Init(s′_0) ∧ T_r(in′, s′) ∧ s_{n−1} = s′_{m−1}. We prove this by showing by induction that it holds whenever in contains instructions of cardinality at most c. In the base case, c = 1, so all instructions are of size one or less. All of these are already atomic, and thus we can take in′ = in and s′ = s by the definition of T_r. For the inductive step, suppose that the claim holds for cardinalities up to c − 1, and assume Init(s_0) ∧ T(in, s) with |s| = n. Let in_j = ⟨i, d⟩ be an input containing an instruction of size at most c. If |i| < c, there is nothing to be done, so we only consider the case where |i| = c. We know that T(s_j, in_j, s_{j+1}) holds. By Theorem 3 and condition (5), it follows that T(s_j, ⟨i − {a}, d⟩, s*) ∧ T(s*, ⟨{a}, d⟩, s_{j+1}) holds for some action a and intermediate state s*. We can thus replace in_j in in by ⟨i − {a}, d⟩ followed by ⟨{a}, d⟩ to obtain an input sequence in_c, and insert s* between s_j and s_{j+1} in s to obtain s_c with final state s_{n−1} such that Init(s_0) ∧ T(in_c, s_c). Repeating this process for each input containing an instruction of size c yields a final in_c in which the maximum cardinality of any instruction is c − 1. The property then holds by the inductive hypothesis.

Note that if there is some instruction i ∈ I which cannot be decomposed into atomic instructions, we could always keep this instruction in I_r and still benefit from removing the other complex instructions. In many cases, we can also remove the empty instruction, i_e = ∅. If applying i_e cannot change the state of the system, regardless of the data input, then it is considered a stutter step [14]. It is straightforward to check whether i_e can be removed by comparing the state before and after applying i_e.

Experimental Results

We developed a prototype flow for proving the POR and RIS conditions and applying the necessary constraints. We use the IC3/PDR implementation in ABC [11], pdr, to prove condition (6) (which implies condition (2)). This requires manually writing a Verilog property for each atomic instruction. We implemented the ProveRIS algorithm in our SMT-based model checker, CoSA [26], configured with boolector [29] on the smtcomp19 branch, using CaDiCaL [4] as the underlying SAT solver. We check the commuting diagram for condition (3) in CoSA as well. It tries the trivial guard true by default and allows the user to provide additional candidate guards if necessary. The setup for proofs in CoSA is automated based on user-provided annotations for the actions and enable conditions. In what follows we show our best results, which used an encoding leveraging the SMT theory of arrays to represent memories when proving the conditions, and a pure bitvector encoding for bounded model checking.
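For the commuting-diagram check of condition (3), the query has a similar shape. The sketch below (again Z3, standing in for the CoSA check, with invented actions and enabledness predicates) asks whether a pair of atomic actions commutes from every state in which both are enabled and a candidate guard holds; it omits the enabledness-preservation aspect, which condition (6) covers.

```python
# Commuting-diagram query for one pair of toy actions (illustrative only).
from z3 import BitVec, BoolVal, ULT, UGT, And, Not, Solver, unsat

W = 8
x, y, d = BitVec('x', W), BitVec('y', W), BitVec('d', W)

def act_a(x, y): return x + d, y      # invented "push-like" update
def act_b(x, y): return x, y - d      # invented "pop-like" update

en_a, en_b = ULT(x, 200), UGT(y, 0)   # toy enabledness predicates
guard = BoolVal(True)                 # the flow tries the trivial guard first

xa, ya = act_a(x, y); xab, yab = act_b(xa, ya)   # a then b
xb, yb = act_b(x, y); xba, yba = act_a(xb, yb)   # b then a

s = Solver()
s.add(en_a, en_b, guard)                         # both enabled, guard holds
s.add(Not(And(xab == xba, yab == yba)))          # search for non-commutation
print("commute" if s.check() == unsat else "need a stronger guard")
```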
Our flow applies the following steps: i) read in a system description in Verilog using Yosys [38] and generate AIGER [7] for ABC (or BTOR2 [29] for other tools); ii) check condition (6) for each atomic instruction; iii) run the ProveRIS algorithm and, if it returns true, add constraints to rule out all but atomic instructions; and iv) check POR condition (3) for each pair of atomic instructions and add constraints for each passing pair of instructions with the associated guard. Each step depends on the previous step passing successfully. In each of our experiments described below, we successfully completed every step of this flow, though in some cases guards were required in step (iv). For POR and RIS runtimes, we always include the time to check the conditions. We tried running with POR alone, but it resulted in negligible improvements in runtime, and thus we omit these results; this demonstrates the importance of RIS. We ran all experiments on a 3.5GHz Intel Xeon CPU with 16GB of RAM.

Motivating Example

First, we return to our motivating example. We compare the time to reach the bug using the SAT-based ABC [11] engines pdr and bmc, and SMT-based bounded model checking using btormc [29] and CoSA. We ran the SMT-based model checkers both with and without the SMT theory of arrays for the encoding of the memory. Both btormc and CoSA without the array encoding were able to reach the bug, in 1230s and 1437s, respectively, but all other approaches timed out at two hours. In particular, pdr times out at 2 hours on the property but can prove condition (6) for every atomic instruction in less than a second. Intuitively, this makes sense because the enabledness conditions do not involve data or mem: none of the datapath falls in the cone of influence, leaving only control logic for IC3 to reason about. The remaining conditions, (3) and (5), are proven in less than three seconds. Since all the conditions pass, we apply the POR and RIS constraints, which reduces the time to hit the bug from 1437s to 46s in CoSA, including the time to check the conditions.

Packet Movers

We now evaluate our approach on data integrity properties for a variety of packet-mover circuits. Data integrity is a safety property that ensures no packets are dropped or corrupted. In practice, data integrity is often checked by instantiating a monitor, called a scoreboard, which provides the necessary infrastructure for formal verification. In our case, it non-deterministically tags a magic packet and checks that this packet exits the system when it should. Crucially, the scoreboard is a reusable module which can check data integrity of arbitrary packet movers. Notice that existing symmetry reduction techniques will not be very effective for this scoreboard setup. For example, consider a circular pointer FIFO which maintains two incrementing pointers that index a memory for reading and writing, respectively. We cannot use scalarsets to break symmetries in the memory addresses, because the pointers index the memory and are involved in arithmetic, breaking the syntactic requirements for scalarsets [28]. Furthermore, sequential memory abstraction [9] could reduce the size of the memory, but does not address the path symmetry. In addition, both of these symmetry reduction techniques are focused on proofs, not bug finding. We evaluate our approach on two commercial library components from a major hardware company. We also implemented simpler, open-source versions of these designs.
Our open-source benchmarks include: i) a circular pointer FIFO which assumes power-of-two depth but is instantiated with a non-power-of-two depth (one greater than the provided parameter); ii) a shift register FIFO which does not properly add data to the last register in the pipeline; and iii) 2-5 correct circular pointer FIFOs in parallel with a non-deterministic arbiter and credit counters for managing data flow. The reset state of the credit counter has one too many credits, so data can be pushed to a full FIFO. The single FIFOs have two actions each: one for pushing data and one for popping data. For the arbitrated circuits, there is a separate action for pushing data onto each FIFO, as well as a single request action which is enabled whenever any FIFO is non-empty.

There is an inherent symmetry in all of these designs. Consider any of the FIFOs. There are two main actions: pushing data (which is enabled if the FIFO is not full) and popping data (which is enabled if the FIFO is not empty). In a state where both are enabled, pushing data followed by popping results in the same state as popping and then pushing the same data. Furthermore, the actions can be performed simultaneously, but requiring that they are performed separately should not change the reachable states (depending on the implementation), so RIS is applicable.
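The push/pop symmetry described above can be seen concretely on a toy circular-pointer FIFO model; the Python class below is illustrative only, not the benchmark RTL or the commercial component.

```python
# Toy circular-pointer FIFO used to illustrate push/pop commutation.
class CircFifo:
    def __init__(self, depth):
        self.mem = [None] * depth
        self.rptr = self.wptr = self.count = 0
    def push(self, v):
        assert self.count < len(self.mem)     # enabled iff not full
        self.mem[self.wptr] = v
        self.wptr = (self.wptr + 1) % len(self.mem)
        self.count += 1
    def pop(self):
        assert self.count > 0                 # enabled iff not empty
        v = self.mem[self.rptr]
        self.rptr = (self.rptr + 1) % len(self.mem)
        self.count -= 1
        return v
    def state(self):
        return (tuple(self.mem), self.rptr, self.wptr, self.count)

# From a state where both actions are enabled, push-then-pop and
# pop-then-push of the same data reach the same state.
f1, f2 = CircFifo(4), CircFifo(4)
for f in (f1, f2):
    f.push(7)            # one stored element: both actions now enabled
f1.push(9); f1.pop()
f2.pop(); f2.push(9)
assert f1.state() == f2.state()
```

This is exactly the independence that POR exploits on these designs.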
Our experiments vary both the parameterizable data width and depth of the packet movers, sweeping all powers of two between 2 and 128. All benchmarks contain injected bugs and reach the bug at a deep bound relative to the depth. We used a timeout of 4 hours. We use our prototype flow for checking the conditions and CoSA for bounded model checking (CoSA's bounded model checking performance is comparable to that of commercial model checkers on these benchmarks). For condition (3), we had to write one guard, which is true whenever the scoreboard counter is greater than zero, to handle an edge case. This same guard was used for every design, but an appropriate invariant relating the scoreboard counter to the internal state of the system being verified would also have worked. The open-source shift register FIFO required one more guard about the number of stored elements. We obtained both guards by observing counterexamples.

Table 1 compares the number of solved instances (49 total per row) within the timeout and the average runtime of commonly solved instances in seconds. Columns marked "PR" used the POR and RIS constraints. We additionally use the following abbreviations: "com" for commercial, "cp" for circular pointer, "sr" for shift register, and "arb" for arbitrated. In Fig. 4 we plot the actual runtime on a log scale for all the benchmarks with and without POR and RIS. The dotted lines show 10x and 100x improvements.

Analysis. There is a cluster of points in the bottom left of Fig. 4 which are solved extremely quickly by both approaches, but slightly faster without POR and RIS. These are results on benchmarks with very small parameter values, where the bug occurs at a low depth, so the POR and RIS results are dominated by the time taken to check the conditions. However, as the parameter sizes, and runtimes, increase, it is clear that POR and RIS can result in exponential speedups. Recall that one concern is that RIS could extend the bound needed to reach the bug. In the shift register and arbitrated FIFO systems, it extended the bound by a few steps; for the bug in the open-source circular pointer FIFO, it doubled the bound needed to reach the bug. Regardless, this was more than compensated for by the symmetry-breaking of POR, as evidenced by the faster times to reach the bug. The deepest bound was 260, which occurred at FIFO depth 129. It is interesting to note that encoding the transition systems to SMT using the theory of arrays was always slower for bounded model checking, but was noticeably faster for checking the RIS and POR conditions. Perhaps this is because the state comparison is easier for the solver to reason about using array extensionality [23]. We have demonstrated that these techniques work well for packet movers. In part, this is because packet movers are often well-constrained by their environmental assumptions, and their behavior is largely independent of incoming data values. Furthermore, we typically expect the POR and RIS conditions to hold for a correct packet-mover implementation, so a failure in a condition could identify a bug.

Related Work

Various techniques have been employed to accelerate bounded model checking. The authors of [19] use BDDs to accelerate BMC, and the techniques introduced in [35,36] exploit the structure of BMC queries to help the SAT solver. The authors of [18] take advantage of structural information with an SMT framework tailored for BMC. Our technique is similar in that we speed up bounded model checking by adding constraints to the transition system, but we obtain the constraints through partial order reduction analysis. Wang et al. [37] pioneered partial order reduction for symbolic software model checking, guaranteeing optimal reduction for two threads. Their follow-up paper [21] extended this framework to find the optimal reduction for any number of threads. We adapted their symbolic POR technique for synchronous hardware model checking and developed reduced instruction sets to improve the efficacy of POR in this new domain. Bhattacharya et al. used a SAT solver to directly check guarded independence conditions (as opposed to checking syntactic properties) for asynchronous rule-based languages [3]. We also check conditions directly, but in a synchronous setting. The techniques developed by McMillan, temporal case splitting and path splitting [28], provide a framework for splitting on possible values at a given timestep. These approaches deal with system executions but still rely on breaking data symmetries for performance. In contrast, our techniques focus on mitigating path symmetries. The work of Bengtsson et al. [2] extended POR to timed automata using a local-time desynchronization of clocks, followed by resynchronization with an added global clock. Similarly, our techniques adapt POR by modifying the system. However, our approach targets a different domain and only modifies the original system by adding constraints.

Conclusion

We have presented a set of conservative conditions over transition systems and automated techniques for proving these conditions. If the conditions can be proved, then constraints can be added to the system that break path symmetries. We evaluated our approach on parameterized open-source and commercial packet-mover circuits and demonstrated significant improvements in bounded model checking performance.
Some potential future work includes improvements to the ProveRIS procedure, extending the approach beyond packet movers, developing more targeted condition proofs by associating actions with particular data inputs, and building an interactive tool which helps the user identify and manage reduced instruction sets and partial order reductions.

Data Availability Statement

The experimental results and the necessary software for reproducing the results in a standard Ubuntu 18.04 installation are available in the Figshare repository: https://doi.org/10.6084/m9.figshare.11874687.
2020-04-21T13:10:36.591Z
2020-03-13T00:00:00.000
{ "year": 2020, "sha1": "db15e406dcdced521d6f4c1494c4548d90e180e3", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-45190-5_20.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "59553dff42f69f6f05e0d8a69576b9f4085b352b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
52130278
pes2o/s2orc
v3-fos-license
Declining water depth delayed the breeding time of Fulica atra, not human disturbance

Disturbances by tourists have been considered to delay the breeding time of coots. In this study, we investigated the common coot (Fulica atra) from April to June in 2008, 2009 and 2012 around the Anbanghe Nature Reserve and Daqing Longfeng wetland of Heilongjiang Province. We evaluated the correlations of four habitat factors (water depth under coots' nests, distance of nests to banks, distance of nests to human disturbance, and nest coverage) with breeding time to assess the impacts of those factors on the breeding time of the coots. The water depth under the nest was significantly correlated with the coots' breeding stages in the Anbanghe wetland. In addition, we investigated the breeding dates of 56 pairs of coots and found the dates were significantly negatively correlated with the water level under the nest in both wetlands. However, the breeding time (breeding stages and dates) of coots was not significantly related to the distance of the nest to disturbance, the distance to the bank of the lake, or the nest coverage. The LME models and GAMs that related breeding time to water level received the greatest support. For the GAM, in the group with a clear breeding date, water level was the most influential variable; in the group for which only breeding stages could be recognized, nest coverage combined with water level had a lower AICc value than water depth itself. In conclusion, we found no clear evidence to indicate that disturbances from tourism delayed the breeding time of the coots; however, the water level had a clear influence on the breeding time. We inferred that reproduction was delayed in order to wait for the improvement of habitat conditions (such as food resources and concealment). Neither water level nor disturbance impacted the reproductive output of the coots, as these variables showed no clear relationships with the clutch size.

Introduction

Human activities can cause disturbance events, which can have long-term or short-term influences on the behavior, physiology and breeding of wild animals [1][2][3]. Disturbances by humans are widely expected to reduce the reproductive fitness of nesting birds [4][5][6], threaten habitat suitability and further reduce the sustainability of local populations [7,8], restrict the feeding and breeding opportunities of wild animals, potentially affecting population size rather than simply behavior [9][10][11], and aggravate the regional extinction of wild species [12][13][14][15]. Disturbance events can also potentially affect the timing of breeding in some birds [16,17]. Human activities are a complex disturbance that can affect biodiversity and ecological processes and can vary in frequency, intensity and duration [18]. However, the effects of human activities on wetland-obligated birds can produce contrasting results [19], and the influence of disturbance is typically measured through effects on population size rather than simply on behavior [9,10]. In addition to human activities, waterbirds are also sensitive to changes in water conditions such as depth, water level fluctuations, or the size of the available water area. Waterbird responses to water level can cause fluctuations in population size [20]. To date, the existing literature on the impacts of water changes on waterbirds has primarily focused on the abundance or diversity of species, as well as habitat use [21][22][23][24].
Water depth variations influence the species composition and the abundance of emergent and submersed vegetation in wetlands [25][26][27][28]. The variations also influence the amount of available food, nesting sites and thermal cover for waterfowl [25,29]. The common coot (Fulica atra) is a widely distributed waterbird and is found in large numbers in the wetlands of China [30,31]; however, its numbers have recently declined markedly in some areas. For example, in the Zhalong Nature Reserve, a national wetland nature reserve in NE China and a major breeding ground for endangered red-crowned cranes (Grus japonensis), coots were the dominant species and were found in large numbers in the 1980s [32]. In recent years, however, there has been a sharp decline, and coots are now considered a rare species in this area [32]. The Zhalong Nature Reserve started its ecotourism development activities in the 1990s [17], and a growing number of tourists visit the reserve area. Apart from tourism, the lack of water has also been a key problem in the Zhalong wetland in recent years [33][34][35]. Tourism development and/or water factors may be responsible for the decline in the coot population. Coots in areas open to tourists breed later than those in the tourist-free core areas [17]. This study was carried out in the Anbanghe and Longfeng wetlands in Heilongjiang Province in NE China (see Fig 1). We documented the influence of human disturbance, water level and nest position on the breeding time (including breeding stages and breeding date) of common coots. In addition, we determined the most appropriate management techniques required to minimize these impacts and prevent a population decline of common coots.

Factors impacting the breeding time of coots

A total of 218 nests were recorded at the Anbanghe and Longfeng wetlands from 2008 to 2012. The water depth ranged from 16 to 114 cm over the three years (Table 1). In the Longfeng wetland, the breeding dates were not correlated with the distance of nests to the bank (r = 0.114, df = 24, P = 0.586) (as shown in Fig 2). However, the breeding dates were significantly correlated with the water depth under the nest (r = −0.468, df = 24, P = 0.018). Breeding dates and water level were significantly correlated in the two study areas, but there was no significant relationship between breeding time and the distance from the banks (Table 2, Fig 2). An ordinal regression supported a significant correlation of only water level with the breeding stages of coots (r = 0.566, df = 53, P < 0.001). There was a significant positive impact of the water depth under the nest on the coots' breeding stages, but there were no clear correlations between the breeding stages and the distance of nests to the disturbance or to the bank of the ponds, or the nest coverage. Of the three tested factors, only water depth showed a significant correlation with the breeding dates (Table 2). Breeding tended to be earlier where the water level was higher. Models containing disturbance as a variable showed greater AICc values than models in which water level was considered separately, and removing the variable of water depth under the nest resulted in a large increase in AICc, suggesting that this was the most influential variable.
In the model test results for breeding stages, only nest coverage combined with water level had a smaller AICc than the model considering water depth alone; however, in the mixed-factor models of the breeding date of coots, the models that contained both disturbance (based on the distance from nests to the bank of ponds) and water level had lower AICc values than the model in which water level was considered alone (Table 2).

Factors impacting the productivity of the coots

We also checked whether the productivity of the coots was influenced by the four studied environmental factors. The results did not support any relationship of the studied factors with the clutch size of the coots (see Table 3, Fig 3). The clutch size was only significantly negatively correlated with the breeding date in the Longfeng wetland (see Fig 4, r = −0.420, n = 17, p = 0.012). When we tested whether the clutch size of coots was related to all the variables using generalized additive models, the results still showed no correlations (see Table 3).

Discussion

Disturbance significantly delayed the timing of first broods in heather-dominated territories of the Dartford Warbler (Sylvia undata) [16]. A similar conclusion was made regarding coots [17], but this hypothesis was not supported by our research. The breeding time was correlated with water depth at both sites, whether there was disturbance from visitors or not. Coots laid later when the water was shallow. The coots showed a certain tolerance of direct human disturbance (e.g., visitors and vehicles in the Anbanghe wetland) but were quite sensitive to water level changes [36]. The coots abandoned their nests in the early breeding period when the water level dropped below 10 cm. We also found that some breeding pairs left their original territory, which might have been caused by the decline in the water level. The tested correlations between the breeding time and the distance from the banks at both sites were nonsignificantly negative. The contrasting results might be influenced by the water level, which was significantly correlated with the distance of the nest to the banks in the Anbanghe wetland but not in the Longfeng wetland (see Table 3). Nevertheless, our results confirmed the influence of water level on the breeding time of coots. Similar to American coots [37], the clutch initiation of the common coots was delayed as the water depth below the nest decreased. Watermilfoil (Myriophyllum verticillatum) and hornwort (Ceratophyllum demersum) are the main foods of coots in the breeding season [38,39]. The growth and biomass of these submerged plants are influenced by water depth [40]. The pecking rates of coots are lower in deeper water than in shallow water and muddy soils [41,42], and the birds choose an appropriate water depth with more food and a lower energy cost for diving. The water level can indicate food abundance, as well as habitat quality, to some degree. It appears that a delay in the breeding time of coots can be an adaptive adjustment: the birds wait for the food conditions to improve. We found insufficient evidence of an impact of the distance to the bank on breeding time and clutch size (see Fig 3). However, in addition to experiencing more disturbance from human activities, breeding pairs inhabiting areas close to the pond margins are always in poorer-quality habitats.
Delaying breeding and waiting for the habitat to improve are positive strategies to improve breeding success and the survival rate of fledglings. However, the breeding time and reproductive output of coots are controlled by the birds' own condition to some degree. Coots are territorial birds [43][44][45]. Pairs breeding for the first time generally produced eggs later than other pairs [46]. Weaker birds, or those breeding for the first time, might be at a disadvantage in the competition for territory and, as a result, select poorer habitats for nesting, thus resulting in a later breeding time. The productivity of coots showed a decreasing trend with later breeding dates; however, this trend was significantly supported only in the Longfeng wetland. Similar results were found for American coots [37]. Factors affecting the clutch size of coots are complex and include age, habitat quality (food limitation), laying date, and parasite load [46,47,48]. The coots' clutch size increased with the distance from the nest to the edge of the lake [49]. Fledging success of coots is causally related to the timing of breeding [50], and the survival rate of the fledglings is influenced by the clutch size [51]. In the Anbanghe wetland and Longfeng wetland, intraspecific brood parasitism of coots was severe [48]. In addition, the investigation period was short and insufficient to reveal the impact of water depth on the reproduction of coots over the whole breeding season. Water depth could potentially influence territory quality through variations in plant mass and food abundance. Deeper water could also be a better barrier against predation by terrestrial mammals. A delay in the breeding date will reduce the chance of a second brood and lead to a later migration from the breeding ground. In summary, we found that tourism activities had limited impacts on the breeding performance of coots in the Anbanghe wetland. Existing studies have shown that the density of American coots (F. americana) was positively related to water depth [52], and coots likely prefer high-water wetlands, as they require deep water for foraging and escaping [53,46]. Therefore, we consider that a decrease in water level, as well as a reduction in water surface area, could be the reason behind the decline in the coot population. The annual precipitation in the Zhalong wetland has decreased over recent years [32,36], and the construction of a water control project for agricultural irrigation has also limited the water supply in the area. Water shortages might cause a reduction in the number of coots in the Zhalong wetland. The disturbance caused by tourism activities can result in behavioral changes in birds, and tourism activities can also have an adverse impact on bird populations in the long term, particularly in areas where breeding is concentrated. However, the most important consideration here is the management of water control measures to protect the waterbirds.

Study areas

We investigated the breeding populations of common coots at two sites in Heilongjiang Province: the Anbanghe Nature Reserve (131°06′12″–131°32′24″E, 46°53′07″–47°03′54″N) and Longfeng (125°07′–125°15′E, 46°28′–46°32′N). The Anbanghe Nature Reserve is in the north-east part of Heilongjiang Province at the lower reach of the Anbanghe River, with a total area of 10,295 ha; the reserve is part of the Sanjiang Plain wetland and has a continental monsoon climate in the temperate zone.
The reserve is located in the lower part of a river floodplain wetland and mainly comprises reed swamp habitat. Since 2004, the outer area of the reserve has been developed for ecotourism and provided with recreational water activities, such as boating. This study was permitted by the Anbanghe Nature Reserve administration. The Longfeng wetland was open to researchers and visitors without entry restrictions, as it was not a nature reserve. Even so, we obtained permission from the Wildlife Conservation and Nature Reserve Management Department of the Heilongjiang Forestry Department to carry out field work in both wetlands.

Study methods

All of the work was done in the field without direct contact with the study animals: we only observed them, counted the eggs, and measured the habitat factors at the nest sites. As pond banks are often used by walking tourists as visitor routes, and boating has been developed on the extensive open waters in the Anbanghe wetland, we chose the distance from the nest to the pond bank or to the open water surface used for boating as the indicator of the degree of disturbance. The water level was human-controlled for boating but still fluctuated with rainfall. To test the influence of water level on the breeding performance of coots, we checked their breeding status and recorded the date when the nests were checked, in three short time segments (each within approximately one week; the differences in nest-checking date would not change the breeding stage of the breeding coots) from May to June in the Anbanghe wetland in 2008. As new nests were difficult to find during the short investigation period, resulting in too few new nests to be analyzed, we checked and measured all nests for which the breeding stage could be confirmed, to increase the sample size. We classified the breeding stages into five types: (a) nest-building period, indicated by a new nest without eggs; (b) egg-laying period, when the birds had started to lay but not finished (fewer than 6 eggs); (c) incubation period, with all eggs intact; (d) hatching period, with some eggs hatched but others not; and (e) hatch completion period, indicated by an empty, deep nest without eggs. We used a GPS to plot the positions of the nests, and the bank and the open water surface used for boating were also mapped by GPS, so the distances from the coots' nests to the open water surface and to the pond bank could be obtained by measuring them in MapSource software version 6.5. We recorded the breeding dates of 31 pairs of coots in the Anbanghe wetland in 2008 and of 25 nests in the Longfeng wetland in 2009. We estimated the percent coverage of grass (both live and dead) within a 1 × 1 m quadrat centered on the nest as the nest coverage. The distance of the nest to disturbance (roads, boating routes, and other human activity facilities) and to the bank, as well as the water depth under the nest, were measured when the nest was found [54], and the clutch sizes of 136 nests at both sites were recorded in 2008, 2009 and 2012. To evaluate the influence of disturbance on the breeding time of the common coots, the Longfeng wetland was investigated as a control region where tourism was absent in 2009.
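As a side note to the GPS measurements above, a comparable distance computation from raw coordinate fixes is the haversine great-circle distance; the coordinates in the sketch below are invented, and the study itself measured distances in MapSource rather than with this formula.

```python
# Great-circle distance between two GPS fixes (WGS84 sphere approximation).
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in metres between two latitude/longitude points."""
    R = 6371000.0                       # mean Earth radius in metres
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * R * asin(sqrt(a))

nest = (46.95, 131.20)                  # hypothetical nest fix
bank = (46.951, 131.2015)               # hypothetical nearest bank point
print(f"nest-to-bank distance: {haversine_m(*nest, *bank):.0f} m")
```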
Statistical analysis

Following a normality test, data consistent with a normal distribution were subjected to t-tests. Correlation analysis was used to evaluate the relationship between breeding time and the water depth and distance to the bank; a correlation was considered significant if the P value was less than 0.05. However, tourists always walked along the banks in the Anbanghe wetland, and the water depth becomes shallower close to the pond bank, so when checking their impacts on breeding performance we also examined the interactions between the water depth, distance to the bank, and distance to disturbance using correlation analysis. The statistical significance level was set at P < 0.05, and degrees of freedom (df) and P values from these models are presented. Means are presented as back-transformed parameter estimates, with the upper and lower 95% confidence limits. To analyze the impact of the environmental factors (water depth under the nest, distance of the nest to the bank of the ponds, distance of the nest to disturbance sources, and nest coverage) on the breeding time (breeding date: data collected from nests for which we could confirm the initiation day of egg laying; breeding stages: data collected from nests without a clear laying date but for which the breeding stage could be recognized) and clutch size of coots, we used linear mixed effect (LME) models with a Poisson error structure and a log link function using the lme package, and built GAMs with the mgcv package in R (version 3.0.2). We set year as a fixed factor, site as a random factor, and the variables we wanted to test as additional predictors. Ordinal regression was used to test the variables' influence on the breeding stage of coots, which is a qualitative variable, and a generalized additive model (GAM) (in the R package mgcv) [55] and a POLR model were used to relate the breeding time (breeding date and stages) and reproductive output (clutch size) of coots to the four habitat factors above. AIC, in the form of AICc values, was used to test the impact of the parameters on the breeding time and clutch size of the coots [56][57][58].
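As a rough illustration of the AICc-based model comparison, the sketch below fits simplified fixed-effects analogues of the candidate models on synthetic data (the study fitted LMEs and GAMs in R; the variable names and values here are invented) and ranks them by AICc = AIC + 2k(k + 1)/(n − k − 1).

```python
# Sketch: rank simplified fixed-effects models of breeding date by AICc.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 56
df = pd.DataFrame({
    "water": rng.uniform(16, 114, n),       # water depth under nest (cm)
    "dist_bank": rng.uniform(1, 50, n),     # distance to bank (m)
    "dist_dist": rng.uniform(5, 100, n),    # distance to disturbance (m)
    "cover": rng.uniform(0, 100, n),        # nest coverage (%)
})
# deeper water -> earlier breeding date (the negative correlation observed)
df["date"] = 160 - 0.3 * df["water"] + rng.normal(0, 5, n)

def aicc(fit):
    """AICc = AIC + 2k(k+1)/(n - k - 1), with k counting the intercept."""
    k, nobs = fit.df_model + 1, fit.nobs
    return fit.aic + 2 * k * (k + 1) / (nobs - k - 1)

for f in ["date ~ water", "date ~ dist_bank", "date ~ dist_dist",
          "date ~ cover", "date ~ water + cover"]:
    print(f, round(aicc(smf.ols(f, df).fit()), 1))
```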
2018-09-15T22:01:10.911Z
2018-08-29T00:00:00.000
{ "year": 2018, "sha1": "1ae2e893ed29c00326c2640d6c531c5458cf4dd7", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0202684", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1ae2e893ed29c00326c2640d6c531c5458cf4dd7", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
252533782
pes2o/s2orc
v3-fos-license
Machine learning reveals two heterogeneous subtypes to assist immune therapy based on lipid metabolism in lung adenocarcinoma

Background: Lipid metabolism pivotally contributes to the incidence and development of lung adenocarcinoma (LUAD). The interaction between lipid metabolism and the tumor microenvironment (TME) has become a new research direction.

Methods: Using 1107 LUAD records from the Cancer Genome Atlas (TCGA) and Gene Expression Omnibus (GEO) databases, a comprehensive exploration was performed on heterogeneous lipid metabolism subtypes based on lipid metabolism genes (LMGs) and immune-related genes (IRGs). The clinical significance, functional status, TME interaction and genomic changes of the different subtypes were further studied. A new scoring system, the lipid-immune score (LIS), was developed and validated.

Results: Two heterogeneous subtypes were defined. The subtype expressing more LMGs and showing the characteristics of tumor metabolism and proliferation was defined as the lipid metabolism phenotype; its prognosis is poor, and it is more common in patients with tumor progression. The subtype expressing more IRGs, with enrichment of immunoactive pathways and infiltration of effector immune cells, was defined as the immunoactive phenotype; it has a better prognosis and stronger anti-tumor immunity and is more sensitive to immunotherapy. In addition, KEAP1 is a driver mutant gene in the lipid metabolism subtype. Finally, the LIS was developed and confirmed to be a robust predictor of overall survival (OS) and immunotherapy response in LUAD patients.

Conclusion: Two heterogeneous subtypes of LUAD (a lipid metabolism subtype and an immune activity subtype) were identified to evaluate prognosis and immunotherapy sensitivity. Our research promotes the understanding of the interaction between lipid metabolism and the TME and offers a novel direction for the clinical management and precision therapy of LUAD patients.

Introduction

As the most frequent malignancy, lung cancer causes the highest number of cancer-related deaths around the world (1). Lung cancer can appear in different histological types, among which non-small cell lung cancer (NSCLC) is the most common, accounting for about 85% of all lung cancer patients (2). Lung adenocarcinoma (LUAD) is the most abundant subtype of NSCLC, accounting for about 55% (3). LUAD is a heterogeneous disease with varying clinical prognoses and drug responses. It is worth noting that, despite the great progress in clinical diagnostic methods and multimodal treatment approaches, the 5-year overall survival (OS) rate of patients with advanced lung cancer has remained very low (4). Therefore, LUAD patients are still in urgent need of new early diagnosis and clinical intervention methods. During cancer occurrence and progression, the immune system and the tumor cells are in complex interaction. On the one hand, the demand for local nutrients and oxygen increases sharply owing to the fast proliferation of tumor cells. On the other hand, the same process causes poor local vascularization, resulting in acidosis and hypoxia in the tumor microenvironment (TME) as well as a local glucose deficit (5)(6)(7). Eventually, lipids existing in the TME begin to be used mainly as an alternative source of energy in both tumor tissues and immune cells to compensate for the energy shortage (8). Lipids also contribute to membrane formation, supply biomass production, and mediate complex signaling pathways contributing to the growth and migration of cancer cells (9).
In addition, the affinity of cancerous cells for lipids and cholesterol increases, directly leading to lipid accumulation in the TME and the development of malignancy in tumor tissues (10). However, lipid metabolic reprogramming and dysfunction, as well as their dual impact on the TME and the immune response to tumors, are not yet fully understood. Such further elaboration is essential for developing specific treatments based on antitumor immune responses. The present study aimed to survey the crosstalk between lipid metabolism and the tumor immune response in LUAD patients, and identified two heterogeneous subtypes (a lipid metabolism subtype and an immune activity subtype). These two subtypes show specific differences in clinical outcomes, biological functions, immune infiltration and genomic variation. In addition, a lipid-immune score (LIS) was developed and validated, which shows significant advantages in predicting prognosis and immunotherapy response. In conclusion, our work strengthens the understanding of the complex role of lipid metabolism in the immune system in LUAD and provides a new perspective and reference for the accurate prediction and immunotherapy of LUAD patients.

Data extraction

Transcriptome RNA-seq data (HT-seq FPKM), mutation data (MuTect2), copy number variation (CNV) data, and the corresponding clinical information (TCGA-LUAD cohort) were obtained from the Cancer Genome Atlas (https://portal.gdc.cancer.gov/repository). After excluding patients with missing follow-up and clinical information, 492 LUAD samples were collected. These data were used as the discovery cohort after transcripts-per-million (TPM) standardization. In addition, three independent data sets from the GEO database were collected: GSE30219 (GPL570 platform), GSE42127 (GPL6884 platform), and GSE72094 (GPL15048 platform). To prevent batch effects between chips, the three GEO data sets were combined through the ComBat function of the 'sva' package, and the data were log2 standardized (11). In total, a GEO meta cohort of 615 LUAD samples with complete clinical information was used as the external validation cohort. Finally, two immunotherapy cohorts were collected to verify the model's prognostic power: the NSCLC cohort GSE135222, treated with anti-Programmed Death-1 (PD-1) therapy, including 27 patients (12), and IMvigor210, a cohort of advanced urothelial carcinoma cases undergoing anti-Programmed Cell Death-Ligand 1 (PD-L1) immunotherapy, including 298 patients (13).

Identification of lipid and immune subtypes of LUAD

A detailed list of the lipid metabolism genes and immune-related genes used is provided in Table S1. First, LMGs and IRGs with independent prognostic efficacy were evaluated by univariate Cox regression analysis, and candidate genes were identified according to the threshold of p < 0.05. Based on the transcriptional profiles of the candidate genes, consensus clustering was conducted in the discovery cohort and validation cohort with the ConsensusClusterPlus package (16). The PAM unsupervised clustering algorithm was adopted, and 1000 iterations were carried out based on Euclidean distance, with eighty percent of the samples randomly selected in each iteration. The number of clusters was set to 2-5, and the optimal cluster number was jointly determined using the consensus matrix and the cumulative distribution function (CDF).
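For illustration, the following is a compact sketch of the consensus-clustering procedure just described, on synthetic data. The study used R's ConsensusClusterPlus with PAM; here scikit-learn's k-means stands in for PAM to keep the sketch dependency-light, and the 1000 iterations are reduced to 100.

```python
# Sketch of consensus clustering: resample 80% of samples, cluster,
# and record how often each pair of samples lands in the same cluster.
import numpy as np
from sklearn.cluster import KMeans

def consensus_matrix(expr, k, iters=100, frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    n = expr.shape[0]
    together = np.zeros((n, n))   # times pair (i, j) co-clustered
    sampled = np.zeros((n, n))    # times pair (i, j) co-sampled
    for _ in range(iters):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=int(rng.integers(10**6))).fit_predict(expr[idx])
        for a in range(len(idx)):
            for b in range(a + 1, len(idx)):
                i, j = idx[a], idx[b]
                sampled[i, j] += 1
                together[i, j] += labels[a] == labels[b]
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(sampled > 0, together / sampled, 0.0)

# Two well-separated synthetic groups of 30 samples x 20 genes each:
rng = np.random.default_rng(1)
expr = np.vstack([rng.normal(0, 1, (30, 20)), rng.normal(3, 1, (30, 20))])
M = consensus_matrix(expr, k=2)
print(M[:3, 30:33].round(2))   # cross-group pairs: consensus near 0
```

A crisp consensus matrix (values near 0 or 1) at a given k is what the CDF-based criterion in the paper formalizes.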
Functional enrichment and immune infiltration analysis

Significant differentially expressed genes (DEGs) between the subgroups were identified with the 'limma' package in R according to the thresholds of false discovery rate (FDR) < 0.05 and fold change (FC) > 2. The functional enrichment of DEGs was assessed using the Metascape database (www.metascape.org/). Gene Set Enrichment Analysis (GSEA) was conducted between the subgroups, and significantly altered Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways were selected by p < 0.05. Based on previously published molecular markers, ssGSEA analysis was performed with the 'gsva' package in R to evaluate the biological pathway activity of the samples, including angiogenesis, epithelial-mesenchymal transition (EMT), myeloid inflammation, and other immune-related pathway signatures (17)(18)(19)(20). Molecular markers of hypoxia were collected from MSigDB (14). The detailed pathway gene markers are listed in the Supplementary Material.

Analysis of the genome variation map between subgroups

The mutation data were processed with the 'maftools' package in R. First, the total number of mutations in each sample was measured, and then the genes with a minimum mutation count > 30 were identified. Differences in the mutation frequency of these high-frequency mutated genes between the two subgroups were compared using the chi-square test and visualized with maftools (24). CNV data were processed with GISTIC 2.0. Based on a threshold of 0.2, significantly amplified and deleted chromosome segments were identified, and CNV differences on the chromosome arms were evaluated. The CNV results were visualized with the 'ggplot2' R package.

Constructing the lipid-immune score

From the DEGs identified between the two subtypes, those present in all cohorts were selected for further analysis. Univariate Cox regression analysis revealed the prognostic value of these genes. Subsequently, genes with statistical significance (p < 0.05) were incorporated into a Cox proportional hazards model with the least absolute shrinkage and selection operator (LASSO) penalty, and 300 iterative searches were carried out to find the most robust model. To prevent overfitting, five-fold cross-validation was used. The model with the highest frequency across the 300 iterations was used as the final prognostic model, and the lipid-immune score (LIS) was generated according to the formula: LIS = Σ_i Coefficient(mRNA_i) × Expression(mRNA_i). The 'survcomp' package in R was used to calculate the concordance (C) index and evaluate the prognostic value of the risk score in the training and validation sets; a higher C index indicates more accurate prognostic power (25). Patients were divided into high-risk and low-risk groups based on the median LIS, and the prognostic value of the risk model was assessed using Kaplan-Meier (KM) survival curves, univariate and multivariate Cox regression, time-dependent ROC (tROC) curves, and subgroup analyses.
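A minimal sketch of applying the LIS formula above and stratifying patients at the median follows; the gene names, coefficients, expression values and follow-up data are all invented, and lifelines' log-rank test stands in for the R survival tooling used in the paper.

```python
# Sketch: compute LIS = sum_i coef_i * expr_i, split at the median,
# and compare survival between the two groups with a log-rank test.
import numpy as np
import pandas as pd
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
coefs = pd.Series({"GENE_A": 0.42, "GENE_B": -0.17, "GENE_C": 0.08})  # hypothetical
expr = pd.DataFrame(rng.normal(size=(200, 3)), columns=coefs.index)

lis = expr.mul(coefs, axis=1).sum(axis=1)      # the LIS formula above
high = (lis > lis.median()).to_numpy()

# synthetic follow-up: the high-LIS group is drawn with shorter survival
time = rng.exponential(scale=np.where(high, 20, 40))
event = rng.random(200) < 0.7                  # ~70% observed events

res = logrank_test(time[high], time[~high],
                   event_observed_A=event[high], event_observed_B=event[~high])
print(f"log-rank p = {res.p_value:.3g}")
```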
Predicting immunotherapy response

The immunophenoscore (IPS) of each sample was calculated as previously described. Briefly, the IPS is calculated from transcriptomic data for representative genes of different immunophenotypes, with the final result normalized to 0-10. Samples were weighted positively according to effective immune cells and negatively according to suppressive immune cells, and the weighted values were then averaged as Z-scores; a Z-score ≥ 3 was defined as IPS10 and a Z-score ≤ 0 as IPS0. The higher the IPS, the better the immunotherapy response (26). The Tumor Immune Dysfunction and Exclusion (TIDE) algorithm (http://tide.dfci.harvard.edu) was applied to predict the patients' response to anti-PD-1 and anti-CTLA-4 treatment (27)(28)(29)(30). Finally, the predictive power of the LIS was evaluated in two external immunotherapy cohorts (GSE135222 and IMvigor210).

Statistical analysis

Pearson chi-square or Fisher exact tests were applied to compare categorical variables. Continuous variables were compared between the two groups by the Wilcoxon rank-sum test. KM curves were drawn with the 'survminer' package, and tROC analysis was carried out with the 'survivalROC' package, both in R. Univariate and multivariate Cox regression was performed with the 'survival' package in R. The 'rms' package in R was used to draw nomograms and calibration curves, and decision curve analysis (DCA) was carried out with the DCA package (31). The ROC curves used to predict immunotherapy response were generated with the 'pROC' package. Two-tailed p < 0.05 was considered statistically significant unless otherwise specified.

Parsing LMGs and IRGs in LUAD

The design of our study is shown in Figure S1. Univariate Cox regression analysis identified 155 LMGs and IRGs with prognostic value (p < 0.05). The forest plot shows the top 15 prognostic candidate genes (Figure 1A); detailed Cox results are provided in Table S3. Figure 1B summarizes the mutations of the top 15 candidate genes. Specifically, the predominant mutation type is single-nucleotide mutation, and the genes with the highest mutation frequencies are VEGFC (24%) and TNFRSF11A (10%). The waterfall plot shows their mutation map in the TCGA-LUAD cohort (Figure 1C). The histogram summarizes the CNVs of the top 15 candidate genes in TCGA-LUAD; the results show that they undergo a wide range of CNV events, with LPGAT1 the most amplified gene and RAET1E the most deleted (Figure 1D). The circle plot shows the overall CNV of the top 15 candidate genes on the chromosomes (Figure 1E). Finally, the interactions of the top 15 candidate genes were analyzed, and the correlation network showed that they were highly positively correlated (Figure 1F).

Identification of lipid and immune subtypes

Consensus clustering was performed on the TCGA-LUAD discovery cohort and the GEO meta cohort with ConsensusClusterPlus. According to the CDF curve of the consensus score, k = 2 was the best choice (Figure 2A, Figure S2A), and the consensus matrix confirmed this result (Figure 2B, Figure S2B). Based on the transcriptional profiles of the candidate LMGs and IRGs, a lipid metabolism subtype and an immune activity subtype were defined (Figure 2C, Figure S2C). IRGs were significantly increased in the immunoactive subtype, while LMGs were significantly increased in the lipid metabolism subtype. According to the survival analysis, the prognosis of the lipid metabolism subtype in the discovery cohort was significantly worse than that of the immune activity subtype (p = 0.001, Figure 2D), and the worse clinical outcome of the lipid metabolism subtype was confirmed in the validation cohort (p < 0.001, Figure S2D). In addition, the TCGA-LUAD cohort had more detailed clinical follow-up information, and patients with disease progression were significantly more frequent in the lipid metabolism subtype (Figures 2E, F).

Biological function differences between the two subtypes

First, the DEGs between the two subtypes were identified with the limma package.
According to the thresholds of FDR < 0.05 and FC > 2, a total of 1597 DEGs were identified, of which 1233 were up-regulated in the immunoactive subtype and 362 were up-regulated in the lipid metabolism subtype; detailed results are provided in Table S4. Based on the functional enrichment analysis, the genes up-regulated in the immunoactive subtype mainly regulate cell activation, the inflammatory response, cell adhesion and lymphocyte migration (Figure 3A), and Figure 3C shows the functional interaction network of the immunoactive subtype. The genes up-regulated in the lipid metabolism subtype mainly regulate biological oxidation, epithelial cell differentiation and glucose homeostasis (Figure 3B); Figure 3D shows the functional interaction network of the lipid metabolism subtype. GSEA showed that the pathways enriched in the immunoactive subtype were mainly the B-cell receptor, T-cell receptor and Toll-like receptor signaling pathways and NK-cell killing activity (Figure 3E). The pathways enriched in the lipid metabolism subtype were fatty acid metabolism, protein secretion and the TCA cycle (Figure 3F). In conclusion, these results confirm that the immunoactive subtype has stronger antitumor immune activity, while the tumor cells of the lipid metabolism subtype have stronger metabolic and proliferative activity, which may underlie the difference in prognosis between the two.

Differences in immune infiltration between the two subtypes

The degree of immune infiltration was systematically compared between the two subtypes. First, the ESTIMATE algorithm showed that the immune activity subtype had a higher immune score, while the lipid metabolism subtype had a higher tumor purity (Figure 4A), which was confirmed in the validation cohort (Figure S3A). The expression differences of five classical immune checkpoint and therapeutic target genes (PD-L1, CD8A, CTLA-4, LAG-3, PD-1) were then examined. The results showed that all five were significantly up-regulated in the immunoactive subtype (Figure 4B), as confirmed in the validation cohort (Figure S3B). Through the ssGSEA algorithm, we found that, except for myeloid inflammation, the immune-related pathways were up-regulated in the immune activity subgroup; notably, the activity of the EMT pathway was also up-regulated in the immunoactive subtype (Figure 4C). Similar results were observed in the validation cohort. It is worth noting that the activity of the angiogenesis pathway in the lipid metabolism subtype was up-regulated in the validation cohort (Figure S3C). Finally, CIBERSORT results showed that NK cells, plasma cells and naive B cells were increased in the immunoactive subtype, while Tregs were increased in the lipid metabolism subtype (Figure 4D); the higher Treg level in the lipid metabolism subtype was also confirmed in the validation cohort (Figure S3D). In conclusion, these results indicate that the immunoactive subtype has more antitumor immune activity and effector immune cells, while the lipid metabolism subtype is inhibited by higher Treg infiltration.

[Figure 4. Immune infiltration analysis of the different subtypes. (A) Difference in ESTIMATE scores between the two subtypes. (B) Differences in the expression of the five typical immune checkpoint and therapeutic target genes (PD-L1, CD8A, CTLA-4, LAG-3, PD-1). (C) Differences in the activity of immune-related pathways. (D) Differences in immune cell infiltration. *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001; ns, p > 0.05.]

Analysis of genome changes among subtypes

The original mutation data were processed with the maftools package. The chi-square test showed that the mutation frequencies of KEAP1, KRAS and SPTA1 were increased in the lipid metabolism subtype, especially KEAP1 (Figure 5A). The waterfall plot shows the differences in the mutation maps of a total of 32 high-frequency mutated genes between the two subtypes (Figure 5B).
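The per-gene mutation-frequency comparison reduces to a chi-square test on a 2x2 table of mutated versus wild-type counts per subtype; the counts in the sketch below are invented for illustration, not the TCGA-LUAD values.

```python
# Sketch of the subtype mutation-frequency comparison per gene.
import numpy as np
from scipy.stats import chi2_contingency

# rows: [mutated, wild-type]; columns: [lipid subtype, immune subtype]
tables = {
    "KEAP1": np.array([[60, 20], [180, 230]]),
    "KRAS":  np.array([[75, 55], [165, 195]]),
}
for gene, tab in tables.items():
    chi2, p, _, _ = chi2_contingency(tab)
    print(f"{gene}: chi2 = {chi2:.1f}, p = {p:.2e}")
```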
The TMB of each patient was calculated, and the results showed that the lipid metabolism subtype had a higher TMB, although the difference between the two subtypes was not significant (Figure 5C). CNVs alter the chromosomes in another way. We then evaluated the CNV differences between the subtypes and found that the amplification and deletion levels of the immunoactive subtype were significantly higher at the chromosome arm level (Figure 5D). The box plot showed no significant difference in the total number of chromosome amplifications between the two subtypes (Figure 5E), while the number of chromosome deletions in the lipid metabolism subtype was significantly increased (Figure 5F).

Immunoactive subtypes are more sensitive to immunotherapy

The functional differences and immune landscape of the subgroups suggest that patients with the immunoactive subtype may have a better immunotherapy response. According to the literature, better immunotherapeutic efficacy is closely related to an increased number of neoantigens (32, 33). Therefore, we first evaluated the difference in the number of neoantigens between the two subtypes, and the results showed that the immunoactive subtype had more SNV neoantigens and indel neoantigens (Figures 6A, B). Recent studies have shown that the MSI score is expected to become a new predictor of immunotherapy response (34); however, there was no significant difference in MSI score between the two subtypes (Figure 6C). The IPS can systematically evaluate the activity of effector immune cells and the immunotherapy response of patients. In the discovery cohort, the IPS of the immunoactive subtype was significantly higher than that of the lipid metabolism subtype (Figure 6D), and the response rate of the immunoactive subtype to immunotherapy predicted by the TIDE algorithm was higher than that of the lipid metabolism subtype (Figure 6E). Although there was no significant difference in IPS between the two subtypes in the validation cohort, the immunoactive subtype in the validation cohort also had a higher response to immunotherapy (Figures 6E, F). In conclusion, our results suggest that the immunoactive subtype is more sensitive to immunotherapy.

Constructing and validating the LIS

First, the 1597 DEGs were analyzed by univariate Cox regression to identify prognostically valuable DEGs. According to the threshold of p < 0.001, a total of 88 DEGs with prognostic significance were identified. These 88 DEGs were then entered into LASSO regression to simplify the model. After 300 iterations, the model with 22 DEGs was the most stable, showing suitable efficacy in both the training cohort and the validation cohort (C index > 0.6, Figure 7A). The final model was constructed using the best λ (0.02631); detailed gene coefficients are provided in Table S5. According to the survival analysis, patients with high LIS showed a significantly lower survival rate than patients with low LIS (p < 0.001, Figure 7C), which was confirmed in the validation cohort (p < 0.001, Figure S4A).
Based on the ROC analysis, the AUC values of the model at 1, 3, and 5 years were 0.792, 0.714, and 0.711, respectively (Figure 7D). In the external validation cohort, the LIS also had satisfactory prediction efficiency: 0.68 at 1 year, 0.69 at 3 years, 0.69 at 5 years, and 0.71 at 8 years (Figure S4B). Figure 7E shows that the survival status of patients with high LIS was significantly worse than that of patients with low LIS, and similar results were observed in the validation cohort (Figure S4C). tROC analysis showed that the LIS was the best predictor of OS (Figure 7F), and the effectiveness of the LIS and stage was equivalent in the validation cohort (Figure S4D). Finally, univariate Cox regression confirmed that the LIS was an independent prognostic indicator in both the training and validation sets (p < 0.0001, Figure 7G), and multivariate Cox regression showed that the LIS remained an independent prognosticator for OS after correcting for other factors (p < 0.0001, Figure 7H).

Quantifying the risk of individual LUAD patients

Subgroup analysis showed that the LIS in the training cohort had excellent predictive ability in the different clinical subgroups except for patients in stage 3 and stage 4 (p < 0.001, Figure 8A). In the validation cohort, the LIS was able to distinguish patients with poor survival except for patients in stages 2-4 (p < 0.05, Figure S4E). These results suggest that the LIS performs better in predicting early-stage LUAD patients. To better quantify the death risk of individual LUAD patients, nomograms were constructed based on the LIS (Figure 8B). The nomogram calibration curve showed that the nomogram model had good stability and accuracy at 1, 3 and 5 years (Figure 8C). tROC analysis showed that, compared with the clinical characteristics, the nomogram model was the best predictor (Figure 8D). DCA was then performed to calculate the decision-making benefits of the nomogram model; the results showed that the nomogram was suitable for risk assessment of LUAD patients at 1, 3 and 5 years (Figure 8E).

LIS in predicting immunotherapy

First, TIDE was used to evaluate the difference in immunotherapy response between patients with high LIS and patients with low LIS. According to the results, patients with low LIS appeared to benefit more from immunotherapy (Figure 9A, Figure S5A). Five widely used immunotherapy biomarkers were then calculated: MDSC, MSI score, IFNG, CD8 and CD274. In both the training cohort and the validation cohort, the LIS provided higher accuracy in predicting immunotherapy response (Figure 9B, Figure S5B). Two immunotherapy cohorts were then included to further study whether the LIS could predict patients' response to immunotherapy. Consistent with the above, patients with high LIS showed worse survival in these two immunotherapy cohorts (Figures 9C, D). Finally, the relationship between the LIS and neoantigens and TMB in the IMvigor210 cohort was evaluated. The results showed that the LIS had no strong correlation with neoantigens or TMB; however, patients with low LIS had higher neoantigen loads (Figures 9E, F).

Discussion

Lung cancer is the main cause of cancer-related death, and LUAD is the most common histological subtype, with most patients at an advanced stage at initial diagnosis (35,36). Although a variety of targeted therapies and new chemotherapeutic drugs have been approved, the OS of advanced patients is still not ideal (4).
Lipid metabolism has long been reported to be a main energy source of cancer cells and is involved in the incidence and development of cancer (8). Recently, the dual regulation that lipid metabolism exerts on the immune response in the TME has attracted extensive attention and has become a promising target for targeted therapy (6, 10). Our study identified and verified two heterogeneous subtypes in LUAD. One, characterized by higher expression of IRGs, enrichment of immunoactive pathways, and a high abundance of effector immune cells, was defined as the immunoactive subtype. The other, expressing more LMGs, with a high abundance of suppressive immune cells and showing the characteristics of tumor metabolism and proliferation, was defined as the lipid metabolism subtype. We verified the stability and reproducibility of the two subtypes in a GEO meta-cohort. These two subtypes also showed heterogeneity in genome-driven events, clinical outcomes, and immunotherapy responses. In addition, a robust prognostic signature, LIS, was proposed based on these two subtypes. Further analysis showed that LIS has a leading advantage in predicting the immunotherapy response of LUAD patients. These results advance the understanding of the interaction between lipid metabolism and the TME and offer a new direction for the clinical management and precision treatment of LUAD patients.

These two subtypes showed different clinical characteristics. The survival of the immunoactive phenotype was significantly better than that of the lipid metabolism phenotype, and patients with more disease progression fell in the lipid metabolism phenotype. Functional enrichment indicated that metabolism-related and cell-cycle-related pathways were enriched in the lipid metabolism phenotype, while effector immune cell receptor signaling pathways and immune-related pathways were enriched in the immunoactive phenotype. In addition, immune infiltration analysis suggested higher effector immune cell infiltration in the immunoactive phenotype, and more tumor cells and inhibitory immune cells in the lipid metabolism subtype. These results point to hypermetabolism and proliferation of tumors in the lipid metabolism subtype and explain the worse survival and tumor progression of patients with this subtype (37). The more abundant effector immune cells and stronger immune activity in the immunoactive phenotype play an anti-tumor role, resulting in better survival and tumor remission (38).

Next, to elaborate the molecular characteristics of the two subtypes, their genomic alterations were compared. In general, there was a higher TMB in the lipid metabolism subtype. It is worth noting that the mutation frequency of the KEAP1 gene was significantly increased in the lipid metabolism subtype compared with the immunoactive phenotype. KEAP1 is an essential regulator of cell homeostasis and the response to oxidative stress (39). Studies have reported that this mutation is common in NSCLC and closely correlated with higher tumor growth and invasiveness (40). Additionally, preclinical and clinical studies have reported that tumors bearing KEAP1 pathway mutations are more resistant to traditional treatments such as chemotherapy, radiotherapy, and targeted therapy (41-43). In addition, we found that the amplification and deletion levels of the immunoactive subtype were significantly higher at the chromosome arm level, while the deletion levels of the lipid metabolism subtype were higher overall.
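Subtype comparisons like the TMB difference and the KEAP1 mutation-frequency difference are usually assessed with a rank-sum test and a contingency-table test, respectively. A small sketch is below, with the input variables assumed for illustration rather than taken from the paper.

# Sketch: TMB difference (Mann-Whitney U) and KEAP1 mutation frequency
# (Fisher's exact test) between the two subtypes.
from scipy.stats import mannwhitneyu, fisher_exact

# tmb_lipid / tmb_immune: per-patient TMB values in each subtype.
_, p_tmb = mannwhitneyu(tmb_lipid, tmb_immune)

# 2x2 table: rows = subtype, columns = (KEAP1 mutated, KEAP1 wild-type).
table = [[keap1_mut_lipid, keap1_wt_lipid],
         [keap1_mut_immune, keap1_wt_immune]]
odds_ratio, p_keap1 = fisher_exact(table)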
These contradictory results suggest that CNV does not play a pivotal role in regulating the differences between the subtypes. In general, the genomic changes of these two subtypes are mainly mediated by gene mutations, especially in KEAP1, which may contribute to the heterogeneous response of the subtypes to tumor treatment and lead to different clinical outcomes. In addition, KEAP1 may also be a new target for drug development and the clinical treatment of LUAD.

Finally, a prognostic signature called LIS was developed and validated in the TCGA cohort, the GEO meta-cohort, and two external immunotherapy cohorts. High LIS is an independent negative prognostic factor for OS, and subgroup analysis showed that LIS performs more strongly in early-stage LUAD patients. Considering the heterogeneity of the subtypes in immunotherapy, we also evaluated the effectiveness of LIS in predicting immunotherapy response. The results showed that LIS maintained high accuracy in the immunotherapy cohorts. In addition, LIS showed better accuracy than commonly used biomarkers (MDSC, MSI score, IFNG, CD8, and CD274). Finally, we found that patients with low LIS may have more neoantigens, which may underlie the stronger immunotherapy sensitivity of patients with low LIS. In conclusion, our results suggest that LIS is not only a robust prognostic marker but also a promising predictive marker of immunotherapy response.

We acknowledge that our study has some limitations. First, we only used bulk RNA-seq data, which does not capture the heterogeneity between cells. Second, the sequenced samples came from tumor tissue, so LIS may not be applicable to peripheral blood samples, which limits its clinical application. Finally, although we used established algorithms and mature immunotherapy cohorts to evaluate the sensitivity of the two subtypes to immunotherapy, prospective clinical cohorts are still needed for validation.

In conclusion, our work identified and validated a heterogeneous lipid metabolism subtype and an immunoactive subtype in LUAD, which showed heterogeneity in clinical outcomes, biological functions, immune infiltration, and genome-driven events. In addition, we developed a signature called LIS, which can be used as a reliable prognostic biomarker for predicting OS and immunotherapy response. These results advance the understanding of the interaction between lipid metabolism and the TME and offer a new direction for the clinical management and precision therapy of LUAD patients.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
2022-09-27T13:32:13.317Z
2022-09-27T00:00:00.000
{ "year": 2022, "sha1": "9a048b2a423976a05680233fce270b75a4d63680", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "9a048b2a423976a05680233fce270b75a4d63680", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
119622907
pes2o/s2orc
v3-fos-license
Geometry of generating functions and Lagrangian spectral invariants

Partially motivated by the study of topological Hamiltonian dynamics, we prove various $C^0$-aspects of the Lagrangian spectral invariants and of the basic phase functions $f_H$, that is, the natural graph selectors constructed by Lagrangian Floer homology of $H$ (relative to the zero section $o_N$). In particular, we prove that $$ \gamma^{lag}(\phi_H^1(o_N)) := \rho^{lag}(H;1) - \rho^{lag}(H;[pt]^\#) \to 0 $$ as $\phi_H^1 \to id$, \emph{provided} the $H$'s satisfy $\supp X_H \subset D^R(T^*N) \setminus o_B$ for some $R>0$ and a closed subset $B \subset N$ with nonempty interior. We also study the relationship between $f_H$ and $\rho^{lag}(H;1)$ and prove a structure theorem for the micro-support of the singular locus $\Sing(\sigma_H)$ of the function $f_H$. Based on this structure theorem and a classification theorem of generic Lagrangian singularities in $\dim N = 2$ obtained by Arnold's school, we define the notion of cliff-wall surgery when $\dim N = 2$: the surgery replaces the multi-valued Lagrangian graph $\phi_H^1(o_N)$ by a piecewise-smooth Lagrangian cycle that is canonically constructed out of the single-valued branch $\Sigma_H := \Graph df_H \subset \phi_H^1(o_N)$ defined on an open dense subset of $N$, the complement of $\Sing(\sigma_H)$, which has codimension 1.

Contents (partial): 3.2. The basic phase function f_H and its Lagrangian selector. 4. Singular locus of the basic phase function and cliff-wall surgery. 5. Lagrangian Floer homology and spectral invariants. 5.1. Definition of Lagrangian spectral invariants. 5.2. Comparison of two Cauchy-Riemann equations. 5.3. Triangle inequality for Lagrangian spectral invariants. 5.4. Assigning spectral invariants to Lagrangian submanifolds. 6. Comparison theorem of f_H and ρ^lag(H; 1). 6.1. Anti-symplectic reflection and basic phase function. 6.2. Analysis of Example 9.4 [Oh2]. 6.3. Proof of comparison result on ρ^lag(H; 1) and f_H. 7. A Hamiltonian C^0 continuity of spectral Lagrangian capacity. 7.1. ε-shifting of the zero section by the differential of a function. 7.2. Lagrangian capacity versus Hamiltonian C^0-fluctuation. References.

Introduction

We always assume throughout the entire paper that the ambient manifolds M and N are connected.

1.1. Weak Hamiltonian topology of Ham(M, ω). In [OM], Müller and the author introduced the notion of Hamiltonian topology on the subset of the space P(Homeo(M), id) of continuous paths on Homeo(M) consisting of the Hamiltonian paths λ : [0, 1] → Symp(M, ω) with λ(t) = φ^t_H for some time-dependent Hamiltonian H. We denote this subset by P^ham(Symp(M, ω), id). We would like to emphasize that we do not assume that H is normalized unless explicitly said otherwise. This is because we need to consider both compactly supported and mean-normalized Hamiltonians and to suitably transform one into the other in the course of the proofs of the various theorems of this paper. In this subsection, we first recall from [OM] the definition of the Hamiltonian topology, mostly restricted to the open manifold T*N. While [OM] considers the strong Hamiltonian topology, except in Remark 3.27 therein, the more relevant topology in the present paper will be the weak Hamiltonian topology. We first recall its definition. For a given continuous function h : M → R, we denote osc(h) = max h − min h.

1.2. Hamiltonian C^0-topology on Iso_B(o_N; T*N). Let N be a closed smooth manifold.
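For the reader's orientation, the standard quantities underlying the definitions above can be recorded as follows; this is a reconstruction from the usual conventions in [OM] and the $L^{(1,\infty)}$ norm referred to in Remark 2.1, not a verbatim quotation of the original displayed formulas:

$$ \operatorname{osc}(h) = \max_M h - \min_M h, \qquad \|H\| := \int_0^1 \operatorname{osc}(H_t)\, dt = \int_0^1 \Big( \max_x H(t,x) - \min_x H(t,x) \Big)\, dt. $$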
We equip the cotangent bundle T*N with the Liouville one-form θ defined by θ_x(ξ_x) = p(dπ(ξ_x)), x = (q, p) ∈ T*N. The canonical symplectic form ω_0 on T*N is then defined by ω_0 = −dθ = Σ_{i=1}^n dq^i ∧ dp_i, where (q^1, ..., q^n, p_1, ..., p_n) are the canonical coordinates of T*N associated to the coordinates (q^1, ..., q^n) of N. Consider a Hamiltonian H = H(t, x) such that H_t is asymptotically constant, i.e., one whose Hamiltonian vector field X_H is compactly supported. We define supp_asc H = supp X_H := ∪_{t∈[0,1]} supp X_{H_t}. For each given K, R ∈ R_+, we define, following the terminology of [W], the classes appearing in (1.6). Now we equip Iso(o_N; T*N) with a topology. One needs to pay some attention to finding the correct definition of the topology suitable for the study of the Hamiltonian geometry of the set Iso(o_N; T*N). For this purpose, we introduce the following measurement of the C^0-fluctuation of the Hamiltonian diffeomorphism φ^1_F along the zero section o_N ⊂ T*N. Using this, we introduce the following restricted C^0-distance, and then define the distance function which induces the metric topology thereon. We equip Iso(o_N; T*N) with the direct limit topology of Iso_K(o_N; D^R(T*N)) as R, K → ∞ and call it the Hamiltonian C^0-topology of Iso(o_N; T*N). For the main theorems proved in the present paper, we will also need to consider the following subset of Hamiltonian functions H. Here we would like to emphasize that the support condition on T ⊃ o_B is imposed only on the time-one map φ^1_H, not on the whole path φ_H. This indicates the relevance of the following discussion to the weak Hamiltonian topology described above. Similarly to PC^∞_{R,K} above, we define PC^∞_{R,K;T}. We define Iso_B(o_N; T*N) to be the corresponding subset; this carries a filtration. Unravelling the definition, we can rephrase the meaning of the convergence. (1) We refer to the proof of Lemma 7.5 and Remark 7.2 for the reason for taking the particular support hypotheses (2), (3) imposed in our definition of the Hamiltonian C^0-topology of Iso_B(o_N; T*N). This topology may be regarded as the Lagrangian analog of the above-mentioned weak Hamiltonian topology and seems to be the weakest possible topology with respect to which one can prove the C^0-continuity of the spectral capacity γ^lag stated in Theorem 1.1 below. (Here ε is so small that the relevant graph is contained in a Weinstein neighborhood of the diagonal; such a graph will automatically satisfy the required condition.)

1.3. Lagrangian spectral invariants. For any given time-dependent Hamiltonian H = H(t, x), we consider the classical action functional on the space of paths in T*N. We define the subset P(T*N; o_N) ⊂ P(T*N) to consist of the paths γ with γ(0) ∈ o_N. The assignment γ ↦ π(γ(1)) defines a fibration whose fiber at q ∈ N is given by the paths γ ∈ P(T*N; o_N) with π(γ(1)) = q. For given x ∈ L_H, we denote by z^H_x the Hamiltonian trajectory whose final point is x; by definition, it satisfies (1.10). We denote L_H = φ^1_H(o_N) and by i_H : L_H ↪ T*N the inclusion map. Motivated by Weinstein's observation that the action functional A^cl_H : P(T*N; o_N) → R can be interpreted as the canonical generating function of L_H, the present author constructed a family of spectral invariants of L_H by performing a mini-max theory via the chain-level Floer homology theory in [Oh2, Oh3]. Indeed, the function h_H so obtained is a canonical generating function of L_H in the sense of (1.12). We call h_H the basic generating function of L_H. As a function on N, not on L_H, it is multi-valued. Similarly, one may regard N → φ^1_H(o_N) as a multi-valued section of T*N.
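For orientation, the classical action functional and the basic generating function discussed above can be written out as follows; this is a reconstruction consistent with the conventions recorded later in the paper ($\omega_0 = -d\theta$ and $i^*\theta = dh_H$ on $L_H$), not a verbatim quotation:

$$ \mathcal{A}^{cl}_H(\gamma) = \int \gamma^*\theta - \int_0^1 H(t, \gamma(t))\, dt, \qquad h_H\big(z^q_H(1)\big) := \mathcal{A}^{cl}_H\big(z^q_H\big), $$

where $z^q_H$ denotes the Hamiltonian trajectory with $z^q_H(0) = q \in o_N$. With this definition $h_H$ is single-valued as a function on $L_H$ and satisfies $i_H^*\theta = dh_H$, while it becomes multi-valued when regarded as a function of the base point $\pi(z^q_H(1)) \in N$.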
By considering the moduli space of solutions of the perturbed Cauchy-Riemann equation (1.13) and applying a chain-level Floer mini-max theory, the author [Oh3] defined a homologically essential critical value, denoted by ρ(H; a) associated to each cohomology class a ∈ H * (N ). (A similar construction using the generating function method was earlier given by Viterbo [V1] and it is shown in [M, MO] that both invariants coincide modulo a normalization constant.) The number ρ(H; a) depends on H, not just on L H = φ 1 H (o N ). 1.4. Statement of main results. We will be particularly interested in the two spectral invariants ρ lag (F ; 1), ρ lag (F ; [pt] # ) and their difference ρ lag (F ; 1)−ρ lag (F ; [pt] # ). This difference does not depend on the choice of normalization mentioned above. Therefore we can define a function . We call this function the spectral capacity of L (relative to the zero section o N ). (See [V1], [Oh3]. ) We denote by γ lag B the restriction of γ lag to the subset Iso B (o N ; T * N ). The following Hamiltonian continuity result is the Lagrangian analog to Corollary 1.2 of [Sey1]. Theorem 1.1 (Theorem 7.2). Let N be a closed manifold. Then the function γ lag B is continuous on Iso B (o N ; T * N ) with respect to the Hamiltonian C 0 -topology defined above. The following is a very interesting open question on the Hamiltonian C 0 -topology. Question 1.5. Is the full function γ lag : Iso(o N ; T * N ) → R continuous (without restricting to Iso B (o N ; T * N ) with B having non-empty interior)? The question seems to be an important matter to understand in C 0 symplectic topology. Indeed the affirmative answer to the question is a key ingredient in relation to Viterbo's symplectic homogenization program [V3]. The quesiton is sometimes called Viterbo's conjecture. We refer to Theorem 7.6 for the more precise statement on the relationship between the Hamiltonian C 0 -distance d ham C 0 and the spectral capacity γ lag B (φ 1 F (o N )) and the support conditions (2), (3) of the Hamiltonian path φ F given in Definition 1.3. To properly handle the individual number ρ lag (F ; 1), not just the difference of ρ lag (F ; 1) and ρ lag (F ; [pt] # ), and relate it to the Lagrangian submanifold L F = φ 1 F (o N ) itself, not to the function F , we need to put an additional normalization condition relative to L F . In this regard, it is useful to take the point of view of weighted Lagrangian submanifolds (L, ρ N ) introduced in [W], where ρ N is a probability density on N . Using this ρ N , we can put a normalization condition with respect to the chosen measure which is the Lagrangian analog to the meannormalization of Hamiltonians The next result concerns an enhancement of the construction of basic phase function f H carried out in [Oh2] in the level of topological Lagrangian embedding. This is a graph selector constructed via Lagrangian Floer homology. Then the map . Then (σ Fi , f Fi ) converges uniformly in J 1 (N ), whose limit defines a single-valued continuous section of J 1 (M ) on N \ Sing(σ F ). Here we define and call it the singular locus of f F . It follows from definition that Sing(σ F ) is a subset of the so called Maxwell set of the Lagrangian projection φ 1 A3,ZR] for detailed study of the Maxwell set. ) We first note that for a generic choice of F , Sing(σ F ) is decomposed into the union of smooth manifolds where S k (σ F ) is the stratum of codimension k in N . Along each connected component of the codimension one strata S 1 (σ F ), Σ F has two branches. 
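The defining formula of the spectral capacity introduced above is recorded in the abstract and can be restated here for convenience:

$$ \gamma^{lag}(L) := \rho^{lag}(F; 1) - \rho^{lag}(F; [pt]^\#), \qquad L = \phi_F^1(o_N), $$

which is independent of the choice of normalization of $F$ since the normalization constant cancels in the difference.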
We denote by f ± F the restrictions of f F in a neighborhood of the component in each branch respectively. The next theorem concerns the structure of Sing(σ F ) in the micro-local level. Theorem 1.3 (Theorem 4.1). Let q ∈ S 1 (F ). Then In dimension 2, a complete description of generic singularities of the Lagrangian projection is available (see [G1,A3,ZR] for the precise statement). Based on this generic description of the singulariies, we can precisely define the notion of cliff-wall surgery in dimension 2, which replaces the multi-valued graph φ 1 F (o N ) by a rectifiable Lagrangian cycle. A finer structure theorem is needed to perform similar surgery in higher dimension which will be studied elsewhere. It appears to the author that these results seem to carry some significance in relation to C 0 -symplectic topology and Hamiltonian dynamics, which may be worthwhile to pursue further in the future. Finally we prove the following inequality between the basic phase function and the Lagrangian spectral invariants. The inequality stated in this theorem is closely related to Proposition 5.1 of [V1], whose statement and proof were formulated in terms of the generating function. Theorem 1.4 (Theorem 6.1). For any Hamiltonian F = F (t, x), we have The proof of the second inequality uses a judicious usage of the triangle product in Lagrangian Floer homology [Oh3,Se,FOOO1] after a careful consideration of normalization problem in section 5.4. We would like to emphasize that the issue of normalization problem concerning ρ lag (F ; 1) is a delicate one when one would like to regard ρ lag (F ; 1) as an invariant attached to the Lagrangian submanifold itself, not just to the Hamiltonian F . Once the second inequality is established, the first one easily follows from this and the behavior of spectral invariants ρ lag (·; {q}) under the duality map F → F r (t, x) = −F (t, r(x)) induced by the anti-symplectic reflection r : T * N → T * N, r(q, p) = (q, −p) for x = (q, p) similarly as done in [Oh3] for the duality map F → F (t, x) = −F (1 − t, x). (We thank Seyfaddini for pointing out to us [Sey2] that the first inequality should also hold in the presence of the second inequality in Theorem 6.1.) See also [V1] for the similar consideration of this reflection map in the context of generating function techniques. The research performed in this paper is partially motivated by the study of topological Hamiltonian dynamics and its applications to the problem of simpleness question on the area-preserving homeomorphism group of the 2-disc. We anticipate that these studies play some important role in the study of homotopy invariance of Hamiltonian spectral invariant function φ F → ρ(F ; a) for a topological Hamiltonian path φ F in the sense of [OM,Oh7] on any closed symplectic manifolds (M, ω). It should also be regarded as a natural continuation of the author's study of Lagrangian spectral invariants performed in [Oh2,Oh3]. We thank F. Zapolsky for attracting our attention to the preprint [MVZ] from which we have learned the Lagrangian version of the optimal triangle inequality, and S. Seyfaddini for sending us his very interesting preprint [Sey1] before its publication, which greatly helps us in proving the Hamiltonian continuity of Lagrangian spectral capacity. We also thank A. Givental for many enlightening e-mail communications concerning the structure of Maxwell set, Proposition 4.2 and the cliff-wall surgery. 
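The displayed inequalities of Theorem 1.4 stated above are missing. Judging from the proof outline in Section 6, which establishes $\max f_H \leq \rho^{lag}(H;1)$ directly and derives the companion inequality via the reflection $F \mapsto F^r$, the statement should plausibly read (a reconstruction, not a quotation):

$$ \rho^{lag}(F; [pt]^\#) \leq \min_N f_F \leq \max_N f_F \leq \rho^{lag}(F; 1). $$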
Notations and Conventions We follow the conventions of [Oh6,Oh7] for the definition of Hamiltonian vector fields and action functional, and others appearing in the Hamiltonian Floer theory and in the construction of spectral invariants on general closed symplectic manifold. They are different from e.g., those used in [Po, EP] one way or the other, but coincide with those used in [Sey1]. (1) We usually use the letter M to denote a symplectic manifold and N to denote a general smooth manifold. (2) The Hamiltonian vector field X H is defined by dH = ω(X H , ·). (3) The flow of X H is denoted by φ H : t → φ t H and its time-one map by φ 1 H ∈ Ham (M, ω). (4) We denote by z q H (t) = φ t H (q) the Hamiltonian trajectory associated to the initial point q. ) the Hamiltonian trajectory associated to the final point x. The canonical symplectic form on the cotangent bundle T * N is denoted by ω 0 = −dθ where θ is the Liouville one-form which is given by θ = i p i dq i in the canonical coordinates (q 1 , · · · , q n , p 1 , · · · , p n ). (8) The classical Hamilton's action functional on the space of paths in T * N is given by H(t, γ(t)) dt. (9) We denote by o N the zero section of T * N . (10) We denote ρ lag (H; a) the Lagrangian spectral invariant on T * N (relative to the zero section o N ) defined in [Oh2] for asymptotically constant Hamiltonian H on T * N . (11) We denote by f H the basic phase function and its associated Lagrangian selector by σ H : N → T * N given by σ H (q) = df H (q) at which df H (q) exists. Basic generating function h H of Lagrangian submanifold In this section, we recall the definition of basic generating function. Let H = H(t, x) be a Hamiltonian on T * N which is asymptotically constant i.e., one whose Hamiltonian vector field X H is compactly supported. Denote by PC ∞ asc (T * N, R) be the set of such a family of functions. We denote Recall the classical action functional is defined as on the space P(T * N ) of paths γ : [0, 1] → T * N , and its first variation formula is given by which is a Hamiltonian trajectory such that which specifies the initial point q ∈ o N . (We remark that the notation here is slightly different from that of [Oh2,Oh3] in that z q H therein denotes z H q in this paper. We adopt the current notation to be consistent with that of [Oh8] and other recent papers of the author.) We define the function h H : call it the space-time (or parametric) basic generating function in the fixed frame. The following basic lemma follows immediately from (2.1) whose proof we omit. It turns out that the following form of Hamiltonian trajectories are also useful, which specifies the final point of the trajectory instead of the initial point as specified in the trajectory z q H . Then we define in the moving frame. Now consider the Lagrangian submanifold φ 1 H (o N ). We would like to point out that the function We call h H the basic generating function in the moving frame. We denote the corresponding Legendrian submanifold by R H . However, as a function on N , h H is multi-valued, while h H is a well-defined single-valued function. In general, the projection R → R × N of any Legendrian submanifold R ⊂ J 1 (N, R) = R × T * N is called the wave front [El] of the Legendrian submanifold R. We denote by W R ⊂ R × N by the front of R. We also define the (Lagrangian) action spectrum of H on T * N by which also coincides with the set of critical values of h H . It follows that Spec(H; N ) is a compact subset of R of measure zero. Remark 2.1. 
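The first variation formula quoted after the definition of the classical action functional in Section 2 is also missing. With the conventions fixed in this paper ($\omega_0 = -d\theta$ and $dH = \omega_0(X_H, \cdot)$), a direct computation yields the following reconstruction (the placement of signs depends on these conventions and should be taken with that caveat):

$$ d\mathcal{A}^{cl}_H(\gamma)(\xi) = \int_0^1 \omega_0\big(\dot\gamma(t) - X_H(t,\gamma(t)), \xi(t)\big)\, dt + \theta(\xi(1)) - \theta(\xi(0)). $$

Since $\theta$ vanishes both on the zero section and on the fibers, the boundary terms drop out for the boundary conditions used in the paper (e.g. $\gamma(0) \in o_N$ and $\gamma(1) \in T^*_qN$), so that the critical points are precisely the Hamiltonian trajectories described in the text.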
We would like to note that we have no a priori control of C 0 bound for the functions h H (or equivalently h H ), even when H is bounded in L (1,∞) norm. Getting this C 0 -bound is equivalent to getting the bound for the actions of the relevant Hamiltonian chords. Indeed understanding the precise relationship between the action bound, the norm H and the C 0 -distance of the time-one map φ 1 H is a heart of the matter in C 0 symplectic topology. In section 3, we recall construction of basic phase function f H from [Oh2] which is a particular single valued selection of the multivalued function h H on N that has particularly nice properties in relation to the study of spectral invariants of the present paper. This function was constructed via the Floer mini-max arguments similarly as the spectral invariants ρ ham (H; a) is defined in [Oh2], and its C 0 -norm is bounded by H . Basic phase function and its associated Lagrangian selector In this section, we first recall the definition of basic phase function constructed in [Oh2]. Then we introduce a crucial measurable map ϕ H : N → N , which is defined by a selection of of a single valued branch of the multivalued section We call this map the mass transfer map associated to the Hamiltonian H. It is interesting to note that such a selection process was studied e.g., in the theory of multi-valued functions, or Q-valued functions, in the sense of Almgren [Al] in geometric measure theory. In particular, in [DGT], existence of such a single valued branch is studied in the general abstract setting of metric spaces and a finite group action of isometries. It would be interesting to see whether there would be any other significant intrusion of the theory of multivalued functions into the study of symplectic topology. 3.1. Graph selector of wave fronts. The following theorem was proved in [Cha] and in [Oh2] by the generating function method and by the Floer theory respectively. (According to [PPS], the proof of this theorem was first outlined by Sikorav in Chaperon's seminar.) Theorem 3.1 (Sikorav, Chaperon [Cha], Oh [Oh2]). Let L ⊂ T * N be a Hamiltonian deformation of the zero section o N . Then there exists a Lipschitz continuous The choice of f is unique modulo the shift by a constant. The details of the proof of Lipschitz continuity of f is given in [PPS]. We denote by Sing f the set of non-differentiable points of f . Then by definition is a subset of full measure and f is differentiable thereon. We call such a function f a graph selector in general following the terminology of [PPS] and denote the corresponding graph part of the front of the Legendrian submanifold R by By construction, the projection π R : G f → N restricts to a one-one correspondence and the function f : Reg f → R continuously extends to Reg f = N . By definition, ) and the norm |p(x)| is measured by any given Riemannian metric on N . In [Oh2], a canonical choice of f is constructed via the chain level Floer theory, provided the generating Hamiltonian H of L is given. The author called the corresponding graph selector f the basic phase function of L = φ 1 H (o N ) and denoted it by f H . We give a quick outline of the construction referring the readers to [Oh2] for the full details of the construction. 3.2. The basic phase function f H and its Lagrangian selector. 
Another construction in [Oh2] is given by considering the Lagrangian pair is provided by the moduli space of solutions of the perturbed Cauchy-Riemann equation We denote the level of the chain α by The resulting invariant ρ lag (H; {q}) is to be defined by the mini-max value intersects T q N * transversely but can be extended to non-transversal q's by continuity. By varying q ∈ N , this defines a function f H : N → R which is precisely the one called the basic phase function in [Oh2]. (A similar construction of such a function using the generating function method was earlier given by Sikorav and Chaperon [Cha].) We call the associated graph part G fH the basic branch of the front W RH of R H . Oh2,Oh6]). There exists a solution z : We summarize the main properties of f H established in [Oh2]. An immediate corollary of Theorem is Based on this corollary, we will just denote the limit continuous function by [LS] for its proof) and so π −1 We introduce the following general definition ). Recall that the graph G fH is a subset of the front W RH of R H and for a generic choice of H the set Sing f H ⊂ N consists of the crossing points of the two different branches and the cusp points of the front of W RH . Therefore it is a set of measure zero in N . (See [El], [PPS], for example.) Once the graph selector f H of L H is picked out, it provides a natural Lagrangian selector defined by whenever df H (q) is defined. We call this particular Lagrangian selector of L H the basic Lagrangian selector and the pair (σ H , f H ) the basic wave front of the Lagrangian submanifold φ 1 H (o N ). The general structure theorem of the wave front (see [El], [PPS] for example) proves that the section σ H is a differentiable map on a set of full measure for a generic choice of H which is, however, not necessarily continuous: This is because as long as q ∈ N \Sing f H , we can choose a small open neighborhood of Then we define the mass transfer map ϕ H : N → N by The map ϕ H is measurable, but not necessarily continuous, which is however differentiable on a set of full measure for a generic choice of H. And from its definition, it is surjective if and only if the Lagrangian submanifold φ 1 H (o N ) is a graph of an exact one-form. On the other hand, the map ϕ H may not be continuous along the subset Sing f H ⊂ N which is a set of measure zero. By definition, we have This relationship between f H and h H is the reason why we introduce the transfer map ϕ H . The following lemma is obvious from the definition of ϕ H . We note Singular locus of the basic phase function and cliff-wall surgery We first recall two important properties of the Liouville one-form θ: (1) θ identically vanishes on any conormal variety. (See [Oh2,KO1] for the explanation on the importance of this fact in relation to the Lagrangian Floer theory on the cotangent bundle.) (2) For any one form α on N , we have α * θ = α where α : N → T * N is the section map associated to the one-form α as a section of T * N . In particular, we have σ * F θ = df F on N \ Sing(σ F ) and on each stratum of Sing(σ F ). We note that the singular locus S(σ F ) ⊂ ∆ is a subset of the bifurcation diagram S k (σ F ), S k (σ F ) = Sing k (σ F ), n = dim N (see [A1,El,G1] e.g., for such a result) so that its conormal variety ν * S(σ F ) can be defined as a finite union of conormals of the corresponding strata. Each stratum Sing k (σ F ) has codimension k in ∆. The stratum for some k could be empty. (See [KS]. 
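The elided mini-max value defining $\rho^{lag}(H; \{q\})$ above admits the following standard chain-level formulation, where $\lambda_H(\alpha)$ denotes the level of a Floer chain $\alpha$ as introduced in the text (a reconstruction under the paper's conventions, not a quotation):

$$ f_H(q) := \rho^{lag}(H; \{q\}) = \inf \big\{ \lambda_H(\alpha) : \alpha \text{ a Floer cycle of } CF(H; o_N, T_q^*N) \text{ representing the generator of } HF(H; o_N, T_q^*N) \cong \mathbb{Z} \big\}. $$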
See also [Ka,KO2], [NZ, N] for the usages of such conormal varieties in relation to Lagrangian Floer theory.) In dim N = 2, there are two strata to consider, one S 1 (σ F ) and the other S 2 (σ F ). For k = 1, each given point q ∈ S 1 (σ F ) has a neighborhood A(q) ⊂ N such that A(q) \ S 1 (σ F ) has two components. We also note that Σ F carries a natural orientation induced from N by projection when N is orientable and so defines an integral current in the sense of geometric measure theory [Fe]. When N is oriented, S 1 (F ) is also orientable as a finite union of smooth hypersurface. We fix any orientation on S 1 (F ). We denote by A ± (q) the closure of each component of A(q) \ S 1 (σ F ) in A(q) respectively. Here we denote by A + (q) the component whose boundary orientation on ∂A + (q) coincides with that of the given orientation on S 1 (F ) and by ∂A − (q) the other one. Then each of A ± (q) is an open-closed domain with the same boundary obtained by taking the limit on A ± (q) respectively. The limits are well-defined from the definition of σ F since Im We now prove the following theorem. We refer to [G1], [ZR] for a related statement. Let v ∈ T q S 1 (σ F ) be any given tangent vector. Choose a smooth curve γ : (−ε, ε) → S 1 (σ F ) with γ(0) = q. For any given sufficiently small δ ≥ 0, we define a family of δ-shifted curves γ ± δ (t) = exp γ(t) (±δ n(t)), where exp is the normal exponential map of S 1 (σ F ) in N and n(t) is the unit normal vector thereof at γ(t) towards the domain A + (q). Then γ + δ is mapped into Int A + (q) and γ − δ into Int A − (q) for all sufficiently small δ > 0. Note . Furthermore since f F is smooth up to the boundary on each of A ± (q) and df F is uniformly differentiable up to the boundary of A ± (q) for either of ±, where |O(|t| 2 )| ≤ C|t| 2 for a constant C > 0 uniformly over δ ≥ 0 and t ∈ (−ε, ε). which is nothing but the covariant derivative of the Jacobi field along the geodesic t → exp p (tv) with the initial vector n at p. (See [K] for an elegant exposition on the detailed study of exponential maps.) By letting δ → 0 and using the uniformity of the constant C and the continuity of f F , we obtain Then by taking the difference of two equations for ± and dividing by t, utilizing the convergence (γ ± δ ) ′ (0) → γ ′ (0) as δ → 0 and then evaluating at t = 0, we obtain Recall that γ(0) = p and γ ± δ (0) → p, and D exp p (±δ n(0)) converges to D exp p ( 0) as δ → 0, which is nothing but the identity map on ν q S 1 (σ F ) by the standard fact on the exponential map (see [K]). Therefore from this last equality, we derive Since this holds for all v ∈ T q S 1 (σ F ), the proposition for k = 1 is proved. The boundary orientations of the two components arising from that of Σ F , which in turn is induced from that of N via π 1 have opposite orientations. We call the one whose projection to S 1 (σ F ) under π 1 coinciding with the given orientation the upper branch and the one with the opposite one the lower branch and denote them by respectively. Now let L q be the line segment connecting the two vectors df ± F (q), i.e., This is an affine line that is parallel to the conormal space ν * q S 1 (σ F ). Therefore the union is contained in the translated conormal Here the bracket [−+] stands for the line segment L q , and ν * [S 1 (σ F ); N ] is the conormal bundle of S 1 (σ F ) in N . We would like to point out that since df + for all q ∈ S 1 (σ F ). Therefore we can simply write (4.4) as (4.5) unambiguously. Definition 4.1 (Basic Lagrangian selector chain). 
We denote by σ F the chain whose support is given by supp(σ F ) := Σ F (4.6) with the orientation given as above, and define its micro-support by (4.7) imitating the notation from [KS]. The two components of ∂σ F associated to each connected component of S 1 (σ F ) are the graphs of df ± F for the functions f ± F near S 1 (σ F ). Note that each connected component of S 1 (σ F ) gives rise to two components of ∂σ F ;[−+] ∩ σ F . We can bridge the 'cliff' between the two branches of ∂σ F over each connected component of S 1 (σ F ) and Definition 4.2 (Cliff wall chain). We define a 'cliff wall' chain σ F ;[−+] whose support is given by the union Then we define the chain σ F ;[−+] similarly as we define σ F by taking its closure in T * N . We emphasize that σ F ;[−+] lies outside the Lagrangian submanifold φ 1 F (o N ). By definition, its tangent space at x = (q, u) has natural identification with . Therefore Σ F ;[−+] carries a natural orientation and defines a current. Under the natural identification of T q N with T * q N by the dual pairing, which induces an identification ν * q S 1 (σ F ) ⊕ T q S 1 (σ F ) ∼ = ν q S 1 (σ F ) ⊕ T q S 1 (σ F ) as an oriented vector space. Then we have the relation Remark 4.3. (1) We would like to note that the singular locus S(σ F ) ⊂ ∆ is a subset of the bifurcation diagram of the Lagrangian submanifold φ 1 F (o N ): The bifurcation diagram is the union of the caustic and the Maxwell set where the latter is the set of points of which merge the different branches of the generating function h. (See section 4 [G1] for the definition of bifurcation diagram of Lagrangian submanifold L ⊂ T * N in general.) But this detailed structure does not play any role in our proof except the one described. (2) However we would like to note that each fiber of SS(σ F ) is an affine space df F (q) + ν * q [S 1 (Σ F ); N ] at q ∈ S 1 (Σ F ), not a linear space. In fact, if we incorporate the orientation into consideration, one can refine this definition further to the 'half space' instead of the full affine space. We denote this refinement by SS + (σ F ). Then at a point q in the lower dimensional strata, it will be a 'wedge domain', i.e., the intersection of several space of this type. (See [KO1,KO2] for a usage of such domains in their quantization program of Eilenberg-Steenrod axiom.) We will come back to further discussion on the detailed structure of singularities elsewhere. Next we consider the case of S 2 (σ F ) and its relationship with σ F and S 1 (σ F ). Note that for a generic choice of F , S 2 (σ F ) consists of a finite number of points in N consisting of either a caustic point or a triple intersection point of the Maxwell set (see [A1], section 4 [G1] and 7.1 [ZR]). The following proposition can be also derived from the general structure theorem of generic singularities of Lagrangian maps. We restrict the proposition to dim N = 2 here postponing the precise statement for the high dimensional cases elsewhere. Proposition 4.2. Assume dim N = 2. For a generic choice of F , the boundary of σ F + σ F ;[−+] is a finite union of triangles each of which is formed by the three line segments L q given in (4.2) associated to a triple intersection point q of S(σ F ) contained in S 2 (σ F ). Furthermore each triangle is the boundary of a 2-simplex contained in the fiber T * q N . Proof. This is an immediate consequence of the classification theorem of generic singularities in dimension 2 of Lagrangian maps originally proved by Arnold [A1]. (See also p. 
55 and Figure 43 [A3], section 4 [G1] and section 7.1 [ZR].) Now we define σ F ;∆ 2 to be the union of these 2 simplices, and set σ add Then by construction, σ add F forms a mod-2 cycle. This finishes the description of the basic Lagrangian cycle. A similar description can be given in the higher dimensional cases, which we will study elsewhere. This enables us to define the following important Lagrangian cycle. Remark 4.5. (1) We also refer to [KO1,Ka,KO2] for a usage of the general conormal variety of an open-closed domain with boundary and corners, which also naturally occurs in micro-local analysis and in stratified Morse theory [KS]. (2) The basic Lagrangian cycle seems to be a good replacement of non-graph type Lagrangian submanifold φ 1 F (o N ) in general for the study of various questions arising in Hamiltonian dynamics and symplectic topology. We hope to elaborate this point elsewhere. Remark 4.6. We believe that this surgery will play an important role in the study of homotopy invariance of spectral invariants for the topological Hamiltonian paths [Oh7], which we hope to address elsewhere. Lagrangian Floer homology and spectral invariants In this section, we first briefly recall the construction of Lagrangian spectral invariants ρ lag (H; a) for L H = φ 1 H (o N ) performed by the author in [Oh3]. A priori, this invariant may depend on H, not just on L H itself. In [Oh3], we prove that One important result is the following basic property, called spectrality in [Oh6], which is not explicitly stated in [Oh2] but can be easily derived by a compactness argument. (See the proof in [Oh6] given in the Hamiltonian context.) 5.2. Comparison of two Cauchy-Riemann equations. So far we have looked at the Hamiltonian-perturbed Cauchy-Riemann equation (5.2), which we call the dynamical version as in [Oh2]. On the other hand, one can also consider the genuine Cauchy-Riemann equation We call this version the geometric version. We now describe the geometric version of the Floer homology in some more details. We refer readers to [Oh2] for the discussion on the further comparison of the two versions in the point of moduli spaces and others. The upshot is that there is a filtration preserving isomorphisms between the dynamical version and the geometric version of the Lagrangian Floer theories. We [Oh2,Oh3].) The following is a straightforward to check but is a crucial lemma. (2) The map a → Φ H (a) also defines a one-one correspondence from the set of solutions of (5.2) and that of ∂v (5.7) This latter deformation preserves the filtration of the associated Floer complexes [Oh2]. A big advantage of considering this equation is that it enables us to study the behavior of spectral invariants for a sequence of L i converging to o N in weak Hamiltonian topology. The following proposition provides the action functional associated to the equation (5.6), (5.7), which will give a natural filtration associated Floer homology HF (L, o N ). Then dA eff (γ)(ξ) = 1 0 ω(ξ(t),γ(t)) dt. In particular, We would like to highlight the presence of the 'boundary contribution' h H (γ(0)) in the definition of the effective action functional above: This addition is needed to make the Cauchy-Riemann equation (5.5) or (5.7) into a gradient trajectory equation of the relevant action functional. We refer readers to section 2.4 [Oh2] and Definition 3.1 [KO1] and the discussion around it for the upshot of considering the effective action functional and its role in the study of Cauchy-Riemann equation. 
Triangle inequality for Lagrangian spectral invariants. We recall from, [Sc], [Oh6] that the triangle inequality of the Hamiltonian spectral invariants ρ ham (H#F ; a · b) ≤ ρ ham (H; a) + ρ ham (F ; b) for the product Hamiltonian H#F relies on the homotopy invariance property of spectral invariants which in turn relies on the existence of canonical normalization procedure of Hamiltonians on closed (M, ω) which is nothing but the mean normalization. On the other hand, one can directly prove more easily for the concatenated Hamiltonian. (See e.g., [FOOO4] for the proof.) Once we have the latter inequality, we can derive the former from the latter again by the homotopy invariance property of ρ ham (·; a) for the mean-normalized Hamiltonians. When one attempts to assign an invariant of Lagrangian submanifold φ 1 H (o N ) itself out of the spectral invariant ρ lag (H; a), one has to choose a normalization of the Hamiltonian relative to the Lagrangian submanifold. Since there is no canonical normalization unlike the Hamiltonian case, the invariance property of Lagrangian spectral invariants and so the triangle inequality is somewhat more nontrivial than the case of Hamiltonian spectral invariants. In this subsection, we clarify these issues of invariance property and of the triangle inequality. We first recall the following triangle inequality which was essentially proved in [Oh3]. (See Theorem 6.4 and Lemma 6.5 [Oh3]. In [Oh3], the cohomological version of the Floer complex was considered and hence the opposite inequality is stated. Other than this, the same proof can be applied here.) Proposition 5.5. Let H, F ∈ PC ∞ asc (T * N ; R), and assume F is autonomous. Then we have ρ lag (H#F ; ab) ≤ ρ lag (H; a) + ρ lag (F ; b). (5.9) Monzner, Vichery, and Zapolsky [MVZ] proved the following form of the triangle inequality which uses the concatenated Hamiltonian H * F instead of the product Hamiltonian H#F . In particular, this proposition applies to all pairs H, F which are compactly supported and boundary flat. Remark 5.2. We suspect that (5.9) holds even for the non-autonomous F as in the Hamiltonian case but we did not check this, since it is not needed in the present paper. holds for all H, F ∈ PC ∞ ass;B . As the notation suggests, the class depends on the subset B ⊂ N . We start with the following proposition. The proof closely follows that of Lemma 2.6 [MVZ] which uses Proposition 5.6 in a significant way. We need to modify their proof to obtain a somewhat stronger statement, which replaces the condition "φ 1 H = φ 1 F " used in [MVZ] by the conditions put in this proposition. Proposition 5.7 (Compare with Lemma 2.6 [MVZ]). Let H, F ∈ PC ∞ asc (T * N ; R) be boundary-flat. Suppose in addition H, F satisfy the following: ( Then ρ lag (H; a) = ρ lag (F ; a) holds for all a ∈ H * (N, Z) without ambiguity of constant. Proof. We consider the Hamiltonian path φ G : t → φ t G with G = F * H with F (t, x) = −F (1 − t, x). This defines a loop of Lagrangian submanifold We claim ρ lag (G; a) = 0 for all 0 = a ∈ H * (N ). This will be an immediate consequence of the following lemma and the spectrality of numbers ρ lag (G; a). But a straightforward computation using the first variation formula (2.1) implies For the second statement, we have only to consider the constant path z ≡ c q ∈ B for which This proves the lemma. Once we have the lemma, we can apply the triangle inequality (5.10) ρ lag (H; a) ≤ ρ lag (F ; a) + ρ lag (G; 1) = ρ lag (F ; a) for any given a ∈ H * (N ). 
By changing the role of H and F in the proof of the above lemma, we also obtain ρ lag ( G; 1) = 0 and then obtain ρ lag (F ; a) ≤ ρ lag (H; a) by triangle inequality. This finishes the proof of the proposition. This proposition motivates us to introduce the following definitions Definition 5.3. For each given B ⊂ N , we define When a function c : [0, 1] → R is given in addition, we define With these definitions, the proposition enables us to unambiguously define the following spectral invariant attached to L. defined by ι(γ)(t) = γ(t) with γ(t) = γ(1 − t) and the action functional identity . We refer to [Oh3] for the details of the duality argument in the Floer theory used in the derivation of (5.12). On the other hand, by definition, since H ∈ PC ∞ (B;e) . This finishes the proof. 6. Comparison theorem of f H and ρ lag (H; 1) We first remark that both ρ lag (H; 1) and f H remain unchanged under the change of H outside a neighborhood of t∈[0,1] φ t H (o N ). The main theorem we prove in this section is the following which is closely related to Proposition 5.1 [V1]. For the purpose of studying comparison result given in the next section, we start with this section by adding the following additional symmetry property of f H and ρ lag under the reflection r : T * N → T * N defined by r(q, p) = (q, −p). Such a reflection argument was used by Viterbo [V1] in the proof of similar identities in the context of generating function method. 6.1. Anti-symplectic reflection and basic phase function. Proposition 6.2. Consider the canonical reflection map r : T * N → T * N given by r(q, p) = (q, −p) and define the Hamiltonian H r to be H r (t, x) = −H(t, r(x)) for x = (q, p). Then Proof. We observe that the map satisfies r * θ = −θ and in particular is antisymplectic. It also preserves the zero section and each individual fibers of T * N and so induces the corresponding reflection map on the path space We then consider J' satisfying r * J = −J. For example, the standard Sasakian almost complex structure J g associated any Riemmanian metric g on N [Fl3] is such an almost complex struture. Therefore the set of such J's is non-empty. It is also not difficult show that the set is a contractible infinite dimensional manifold. (See Lemma 4.1 [FOOO3] for its proof.) Then a straightforward computation shows that this reflection map induces oneone correspondence u → u ′ ; u ′ (τ, t) := r(u(−τ, t)) between the set of solutions of the Floer equation (3.3) and those associated to Furthermore all the generic transversality statements are equivalent for u and u ′ for J's satisfying r * J = −J via the transformation of the Hamiltonian H → H r . Therefore r induces canonical isomorphism . We also recall the canonical isomorphism established for arbitrary generic H in [Oh2] HF * (H; o N , T * q N ) ∼ = H * ({pt}) ∼ = Z which has rank 1. Therefore (r) * ([pt] H ) = ±[pt] H r . The first equality then follows from these observations and (6.1) by the general construction of spectral invariants ρ lag (H; {q}) given in section 3, especially (the Lagrangian version of) Conformality Axiom [Oh6]. A similar consideration based on (6.1) with the boundary condition gives rise to the second identity by the same kind of duality argument as done to prove (5.12) in [Oh3]. We omit the details by referring readers thereto for the details. This finishes the proof. 6.2. Analysis of Example 9.4 [Oh2]. Before giving the proof of Theorem 6.1, we illustrate the inequalities by a concrete example, which is a continuation of Example 9.4 [Oh2]. 
Example 6.1. Consider the Lagrangian submanifold L in T * S 1 pictured as in Figure 1 whose coordinates we denote by (q, p). One can check that the wave front projection of L, i.e., the graph of the multi-valued function h H of the associated Hamiltonian H such that L = φ 1 H (o S 1 ) can be drawn as in Figure 2 in S 1 × R whose coordinates we denote by (q, a). Here we denote by z i = (q i , 0) below for i = 0, · · · , 3 the intersections of L with the zero section, and by x i i = 1, 2 the caustics and by y the point at which the two regions between the graph and the dotted line have the same area in Figure 1. Note that the points z i 's are the critical points of the multi-valued generating function h H (or correspond to critical points of the action functional), x i 's to the cusp points of the wave front and y is the crossing point of two different branches of the wave front projection. Using the continuity of the basic phase function f H where L = φ 1 H (o N ), one can easily see that the graph of f H is the one bold-lined in Figure 2. We would like to note that the value min q∈N f H (q) is not a critical value of A cl H , and the branch of the wave front containing the point (q 1 , a 1 ) associated to the critical point z 1 of h H is eliminated from the graph of the basic phase function f H . Therefore the class 1 is realized by the Floer cycle z 0 + z 2 (or any other class of the form z 0 + z 2 + ∂(α)) and the class [pt] # is realized by the Floer cycle of the form z 1 It is interesting to observe two peculiar phenomena in this example: (1) the minimum of f H is realized at a non-smooth point y ∈ N of the function f H , and (2) the value ρ(H; [pt] # ) is realized by the 'local maximum' of the branch of h H containing the point (q 1 , a 1 ) ∈ S 1 × R where q 1 = π(z 1 ) and a 1 = A H (z H 1 ). 6.3. Proof of comparison result on ρ lag (H; 1) and f H . We now go back to the proof of Theorem 6.1. We first remark that the second inequality in Theorem 6.1 immediately follows by applying the first inequality to the Hamiltonian H r and combining Proposition 6.2. Therefore it remains to prove the inequality max f H ≤ ρ(H; 1). which will occupy the rest of this section. We first recall the definition of the triangle product described in [Oh3], [FO] and put it into a more modern context in the general Lagrangian Floer theory such as in [FOOO1] and in other more recent literatures. Let q ∈ N be given. Consider the Hamiltonians H : [0, 1] × T * N → R such that L H intersects transversely both o N and T * q N . We consider the Floer complexes each of which carries filtration induced from the effective action function given in Proposition 5.3. We denote by v(α) the level of the chain α in any of these complexes. More precisely, CF (L H , o N ) is filtered by the effective functional and CF (L H , T * q N ) by respectively. We recall the readers that h H is the potential of L H and the zero function the potentials of o N , T * q N . We now consider the triangle product in the chain level, which we denote by following the general notation from [FOOO1], [Se]. This product is defined by considering all triples the polygonal Maslov index µ(x 1 , x 2 ; x 0 ) whose associated analytical index, or the virtual dimension of the moduli space M 3 (D 2 ; x 1 , x 2 ; x 0 ) := M 3 (D 2 ; x 1 , x 2 ; x 0 )/P SL(2, R) of J-holomorphic triangles, becomes zero and counting the number of elements thereof. The precise formula of the index is irrelevant to our discussion which, however, can be found in [Se], [FOOO2]. 
Definition 6.2. Let J = J(z) be a domain-dependent family of compatible almost complex structures with z ∈ D 2 . We define the space M 3 (D 2 ; x 1 , x 2 ; x 0 ) by the pairs (w, (z 0 , z 1 , z 2 )) that satisfy the following: the marked points {z 0 , z 1 , z 2 } ⊂ ∂D 2 with counter-clockwise cyclic order, (3) w(z 1 ) = x 1 , w(z 2 ) = x 2 and w(z 0 ) = x 0 , (4) the map w satisfies the Lagrangian boundary condition The general construction is by now well-known and e.g., given in [FOOO1]. In the current context of exact Lagrangian submanifolds, the detailed construction is also given in [Oh3] and [Se]. One important ingredient in relation to the study of the effect on the level of Floer chains under the product is the following (topological) energy identity where the choice of the effective action functional plays a crucial role. For readers' convenience, we give its proof here. Proposition 6.3. Suppose w : D 2 → T * N be any smooth map with finite energy that satisfy all the conditions given in 6.2, but not necessarily J-holomorphic. We denote by c x : [0, 1] → T * N the constant path with its value x ∈ T * N . Then we have Proof. Recall ω 0 = −dθ and i * θ = dh H on L H and i * θ = 0 on o N and T * q N where i's are the associated inclusion maps of L H , o N , T * q N ⊂ T * N respectively. Therefore Here the last equality comes since A (2) (c x2 ) = c * x2 θ = 0. This finishes the proof. An immediate corollary of this proposition from the definition of m 2 is that the map (6.2) restricts to . It is straightforward to check that this map satisfies ∂(m 2 (x, y)) = m 2 (∂(x), y) ± m 2 (x, ∂(y) and in turn induces the product map * F : in homology. This is because if w is J-holomorphic w * ω ≥ 0. (We refer to [Oh3] and [FO] for the general construction of product map m 2 and to [Oh3], [MVZ] for the study of filtration. Similar study of filtration is also performed in [Sc], [Oh6] in the Hamiltonian Floer homology setting.) With these preparations, we are ready to wrap-up the proof of Theorem 6.1: Proof of Theorem 6.1. We consider a Floer cycle α representing the fundamental class 1 ♭ = [M ] ∈ HF (L H , o N ) and β = {q} representing the unique generator of Then its product cycle m 2 (α, β) ∈ CF (L H , T * q N ) represents the homology class [q] ∈ CF (L H , T * q N ) ∼ = Z and so v(m 2 (α, β)) ≥ ρ lag (H; {q}) = f H (q) by definition of the latter. Applying the triangle inequality, we obtain Therefore we have derived v(α) ≥ f H (q) for all cycle α ∈ CF (L H , o N ) representing [M ]. By definition of ρ lag (H; 1), this proves ρ lag (H; 1) ≥ f H (q). and in other literature such as [EP], [U]. In the Lagrangian context here, the εshiftable domain is realized as the graph of df of a function f having no critical points on the corresponding domain. In this regard, it appears to the author that the notion of ε-shiftability becomes more geometric and intuitive in the Lagrangian context than in the Hamiltonian context. 7.1. ε-shifting of the zero section by the differential of function. Fix a Riemannian metric g and the Levi-Civita connection on N . They naturally induces a metric on T * N . Denote the latter metric on T * N by g and the corresponding distance function by d(x, y) for x, y ∈ T * N . We denote by D r (T * N ) the disc bundle of T * N of radius r. The following is the well-known fact on this metric g, which can be easily checked. Lemma 7.3. The metric g carries following properties: (1) g is invariant under the reflection (q, p) → (q, −p) and in particular o N is totally geodesic. 
(2) There exists a sufficiently small r = r(N, g) > 0 depending only on (N, g) such that From now on, we will drop 'tilde' from d and just denote by d even for the distance function of g on T * N which should not confuse readers. Consider the subset Proof. In the proof, we will denote p ∈ N and the corresponding point in the zero section of T * N by o p for the notational consistency. Obviously we have Crit f = L f ∩ o B ⊂ φ 1 H (L f ) ∩ o N since we assume φ 1 H ≡ id on a neighborhood, T , of o B ⊃ Crit f . We will now prove the opposite inclusion φ 1 H (L f ) ∩ o N . Then we have (φ 1 H ) −1 (o p ) ∈ L f . Consider first the case p ∈ B. In this case since we assume φ 1 H = id on a neighborhood of o B , it in particular implies o p = (φ 1 H ) −1 (o p ) for all i and hence o p ∈ o B ∩ L f ∼ = Crit f . Remark 7.3. In fact all the discussion in this subsection can be generalized by replacing the differential df by any closed one form α and Crit f by the zero set of α. But we restrict to the exact case since the discussion in the next subsection seems to require the exactness of the form. 7.2. Lagrangian capacity versus Hamiltonian C 0 -fluctuation. In fact, Theorem 7.2 is an immediate consequence of the following comparison result between the Lagrangian capacity γ lag B (L) = ρ lag (H; 1) − ρ lag (H; [pt] # and the Hamiltonian C 0 -fluctuation osc C 0 (φ 1 H ; o N ) for L = φ 1 H (o N ) for H ∈ P ∞ asc;B , which itself has some independent interest in its own right. for L = φ 1 H (o N ). We would like to mention that the right hand side of (7.11) does not depend on the scale change of f to δ t for δ > 0. The following question seems to be an interesting question to ask in regard to the precise estimate of the upper bound in this theorem and Question 1.5. Question 7.4. For given H satisfying the condition in Theorem 7.6, what is an optimal estimate of the constant 2 oscf C (f ;B,T ) in terms of B, T and H? For example, can we obtain an upper bound independent of B or T ? The rest of the section is occupied by the proof of Theorem 7.6. The following proposition is a crucial ingredient of the proof, which is a variation of Proposition 2.6 [Os], Proposition 3.3 [EP], Proposition 3.1 [U] and Proposition 2.3 [Sey1]. The following lemma is the analogue of Lemma 5.1 [Os]. Therefore we can replace f by δf for a sufficiently small δ > 0, if necessary, so that min For example, we can choose any ε > 0 so that 0 < ε < d(N \ B, Crit f ) min p∈N \B |df (p)| . Since this holds for all ε > 0 satisfying (7.18), it follows letting ε → 0. This finishes the proof of Theorem 7.6.
MSR-RCNN: A Multi-Class Crop Pest Detection Network Based on a Multi-Scale Super-Resolution Feature Enhancement Module
Pest disasters severely reduce crop yield, and recognizing pests remains a challenging research topic. Existing methods have not fully considered the characteristics of pest disasters, including object distribution and position requirements, leading to unsatisfactory performance. To address this issue, we propose a robust pest detection network built on two customized core designs: a multi-scale super-resolution (MSR) feature enhancement module and a Soft-IoU (SI) mechanism. The MSR (a plug-and-play module) is employed to improve the detection performance on small-size, multi-scale, and highly similar pests. It enhances the feature expression ability by using a super-resolution component, a feature fusion mechanism, and a feature weighting mechanism. The SI aims to emphasize the position-based detection requirement by distinguishing the performance of different predictions with the same Intersection over Union (IoU). In addition, to promote the development of agricultural pest detection, we contribute a large-scale light-trap pest dataset (named LLPD-26), which contains 26 pest classes and 18,585 images with high-quality pest detection and classification annotations. Extensive experimental results over multi-class pests demonstrate that our proposed method achieves the best performance, 67.4% mAP on the LLPD-26, a gain of 15.0% and 2.7% over the state-of-the-art pest detectors AF-RCNN and HGLA, respectively. Ablation studies verify the effectiveness of the proposed components.
INTRODUCTION
Pest disasters are considered the main reason for crop yield reduction, so recognizing pests is necessary to guarantee crop yield. Manual pest recognition and location are time-consuming and laborious. Traditional pest recognition methods prefer to design feature vectors to identify specific pest species, which lack generalization ability (Qing et al., 2012; Wang et al., 2012; Yaakob and Jain, 2012; Wen et al., 2015; Deng et al., 2018). In contrast, deep learning-based methods that use generic object detection as a ready-to-use approach deliver unsatisfactory performance because of the large gap between pest detection and generic object detection, which can be summarized as differences in object characteristics and detection requirements. The gaps in object characteristics comprise small size, multiple scales, and high similarity. Small size is the property that most distinguishes pests from the objects of general detection. Taking the PASCAL VOC dataset (Everingham et al., 2010) and the LLPD-26 dataset we build as an example, the average size of pest bounding boxes is only 1.58% of that of general object bounding boxes. Existing methods fail to pay close attention to small-size pests, which leads to insufficient recognition accuracy. The multi-scale property is another difference between pest detection and general object detection. The object size distribution is wide in pest detection tasks (e.g., Gryllotalpa orientalis Burmeister is 32 times larger than Nilaparvata lugens Stal in our LLPD-26 dataset). Existing pest detection methods usually use feature fusion of adjacent layers to solve the multi-scale problem, but this fusion is not sufficient to fully integrate information from different feature layers. High inter-class similarity is also a crucial challenge (e.g., Mythimna separata versus Helicoverpa armigera).
Because existing methods discriminate highly similar pests poorly, their performance is unsuitable for practical application and remains to be improved. Furthermore, compared with general object detection, position accuracy is more crucial for pest detection than a high Intersection over Union (IoU) value. Different prediction bounding boxes with the same IoU value perform differently, as shown in Figure 1. All the predicted bounding boxes (red boxes) in Figure 1 have the same IoU value, but the pest detection results are clearly more accurate than the general object detection results because fewer irrelevant pixels of other categories are enclosed (as shown in Figure 1D). The result in Figure 1A is more accurate than that in Figure 1B because Figure 1A contains all of the pest pixels. Therefore, detection bounding boxes with low IoU hardly cause trouble for pest detection as long as they exclude pixels of other classes (a toy IoU computation at the end of this section makes the point concrete). Existing methods usually adopt a hard IoU threshold to determine positive and negative samples, which can cause some high-quality bounding boxes to be treated as negative samples.
In summary, this study focuses on reducing the gaps between general object detection and pest detection in two dimensions (pest bounding box characteristics and detection target) to improve the performance of pest detection. In the pest bounding box dimension: (1) Existing pest detection methods and general object detectors usually utilize FPN (Lin et al., 2017a) to improve multi-scale feature extraction with a top-to-down adjacent feature fusion method, but the incomplete fusion limits detector performance. (2) Highly similar objects are recognized using channel attention (Hu et al., 2018) in the general detection field, but single-dimension attention is insufficient for pest detection. (3) The pattern of 5-layer feature maps is employed to detect objects, in which the top layer recognizes large objects and the bottom layer recognizes small objects, but pests are far smaller than general objects (such as dogs and cats), so their features gradually disappear through successive convolution operations. In the pest detection target dimension, pest detection pays more attention to position than to high IoU values; existing methods use a hard IoU threshold to distinguish positive and negative samples, resulting in inadequate detection performance.
To solve the defects of existing pest detection methods, we propose an MSR-RCNN to improve the detection performance on small-size, multi-scale, and highly similar pests. The MSR module, the highlight of MSR-RCNN, is a plug-and-play component and can improve the performance of common detectors. We first use the super-resolution method to enhance small-size features. Multi-level features are fused at once by the feature full fusion mechanism to promote information transmission, and highly similar pests are adequately recognized by the feature full weighting mechanism to enhance feature expression ability. In addition, the SI is a new design to distinguish different predicted bounding boxes with the same IoU value and make networks more suitable for pest detection. Furthermore, to promote the development of pest detection and verify the feasibility of our methods, we construct a large-scale light-trap pest dataset (named LLPD-26) including 18,585 images and 26 classes.
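To make this concrete, here is a small self-contained sketch (ours, not taken from the paper; all box coordinates are hypothetical) showing that a prediction enclosing every pest pixel and a shifted prediction cutting off half the pest can have exactly the same IoU:

    def iou(a, b):
        # Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2).
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    gt = (10, 10, 20, 20)            # hypothetical ground-truth pest box
    pred_enclose = (5, 10, 35, 20)   # encloses every pest pixel
    pred_cutoff = (15, 10, 25, 20)   # cuts off half the pest
    print(iou(gt, pred_enclose))     # 0.333...
    print(iou(gt, pred_cutoff))      # 0.333...

A hard IoU-50 threshold treats both predictions identically, even though the first is far more useful for pest monitoring; this is the asymmetry that the Soft-IoU described later is designed to capture.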
Extensive experiments on the LLPD-26 show that our methods can effectively detect multi-class pests and attain state-of-the-art (SOTA) performance. The main contributions are listed as follows:
• We propose a novel pest detection network (named MSR-RCNN) to address the defect that existing methods lack targeted improvements for pest objects in three dimensions: small size, multiple scales, and high similarity. The highlight of our MSR-RCNN is the multi-scale super-resolution (MSR) feature enhancement module, which can improve the performance of common detectors in a plug-and-play fashion. The MSR module consists of the super-resolution component, the feature full fusion mechanism, and the feature full weighting mechanism; the three parts respectively target the performance on small-size, multi-scale, and highly similar pests.
• Since pest detection focuses on position rather than high IoU values, we design the Soft-IoU (SI) to differentiate the performance of different prediction results with the same IoU. The SI generates high-quality bounding boxes for network training and admits suitable results at test time. By using the Soft-IoU, our MSR-RCNN is better fitted to pest detection tasks, and the performance of the network is improved without extra cost.
• To more accurately monitor and detect multi-class crop pests, we construct a large-scale light-trap pest dataset (named LLPD-26) including 18,585 images and 26 classes. Having the most species and the largest number of images of its kind, LLPD-26 provides the conditions for accurate pest detection. In addition, extensive experiments on the LLPD-26 verify that our MSR-RCNN outperforms other SOTA methods.
Deep Learning-Based Object Detection
Pest detection is a specific task within general object detection. In recent years, Convolutional Neural Networks (CNNs) have been widely applied in object detection. Deep learning-based object detection networks divide into one-stage and two-stage networks. As one of the most famous one-stage networks, YOLO (Redmon et al., 2016) takes the whole image as input and directly obtains the prediction result using 24 convolutional layers and 2 fully connected layers. Subsequently, enhanced versions of YOLO were proposed one after another (Redmon and Farhadi, 2017, 2018; Bochkovskiy et al., 2020). Lin et al. designed RetinaNet to solve the problem of positive/negative sample imbalance with the Focal Loss, thus improving detection accuracy (Lin et al., 2017b). FCOS avoids the anchor mechanism with a point regression pattern, reducing the number of hyperparameters; meanwhile, low-quality predictions are filtered out through its proposed center-ness branch (Tian et al., 2019). Two-stage networks first require selective search (Uijlings et al., 2013) or a region proposal network (RPN) to generate region proposals, and then an R-CNN network (Girshick et al., 2014) is used to refine the proposal boxes (Girshick, 2015). Faster R-CNN (Ren et al., 2017) introduced the RPN on top of Fast R-CNN and established the baseline of two-stage detectors. Cai and Vasconcelos designed the Cascade R-CNN network to continuously optimize detection results by gradually increasing the IoU threshold (Cai and Vasconcelos, 2018). Libra R-CNN (Pang et al.) used concatenation to merge feature layers, but the essence of its feature fusion method was reducing video memory for the non-local mechanism.
FPN (Lin et al., 2017a) and PANet (Liu et al., 2018) used feature fusion of adjacent layers to solve the multi-scale problem, but this incomplete fusion method does not meet the requirements of pest detection. TridentNet used dilated convolution (Yu and Koltun, 2015) to improve the capability of multi-scale feature extraction. ThunderNet used a Context Enhancement Module (CEM) to integrate multi-scale information and adopted a Spatial Attention Module (SAM) to enhance feature representation (Qin et al., 2019). OHEM (Shrivastava et al., 2016) and SNIP/SNIPER improved network performance by using selective backpropagation. We use the two-stage framework as our baseline because two-stage methods are usually more accurate than one-stage methods, especially for small-size object detection. Jiao et al. (2020) proposed an anchor-free network (AF-RCNN) to identify and locate pests of 24 types. Liu et al. (2020) used global and local activation features to detect a 16-class pest dataset. The above methods ignore the gaps between object detection and pest detection and bring insufficient improvements for pest detection, which leads to unsatisfactory performance. Therefore, we design an MSR-RCNN to improve the performance of pest detection.
Data Collection
We use light-trap devices to automatically collect pest images in different periods. The data collection devices are from the Intelligent Machines Institute, Chinese Academy of Sciences, and are distributed in the field environment of Anhui Province. The dataset includes 18,585 JPEG images with a resolution of 2,592 × 1,944 annotated by agricultural experts. Each pest object corresponds to a unique category and bounding box coordinate, and each image may contain multiple pests. To ensure effectiveness, we divide the data into a training set of 14,868 images and a test set of 3,717 images.
MSR-RCNN Pest Detection Network
To accurately detect 26-class pests, we design an MSR-RCNN network including a backbone network (ResNet50), the MSR feature enhancement module, and the RPN and R-CNN stages of the two-stage baseline.
MSR Feature Enhancement Module
Given the small-size, multi-scale, and highly similar characteristics of pests, we design the MSR feature enhancement module to improve detection performance using a super-resolution component, a feature full fusion mechanism, and a feature full weighting mechanism. The super-resolution component of the MSR module produces the six-layer feature maps for recognizing small objects. Then, the feature full fusion mechanism integrates all features at once for recognizing multi-scale objects. Since highly similar pests in the LLPD-26 dataset are difficult to identify, we design the feature full weighting mechanism in the MSR module to enhance fine-grained expression ability. The red part of the MSR-RCNN diagram marks this module.
FIGURE 4 | The feature full fusion mechanism.
FIGURE 5 | The feature full weighting mechanism.
Figure 2 shows the overall framework of the MSR module we devised.
Super-Resolution Feature Enhancement Component
The feature pyramid network (FPN) (Lin et al., 2017a) uses 5-layer feature maps to recognize objects, in which the top-level features carry semantic information to detect large objects and the low-level features carry texture information to detect small objects. However, small-size pest features gradually disappear during convolution operations, resulting in misleading information transfer in the top-to-down feature fusion.
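For reference, the adjacent-layer top-down fusion being critiqued here can be sketched as follows (a minimal PyTorch-style sketch of the standard FPN pattern, not the paper's own code; the 256-channel width follows the text):

    import torch.nn.functional as F
    from torch import nn

    class TopDownFusion(nn.Module):
        # Standard FPN-style fusion: 1x1 lateral convs, then upsample-and-add
        # between adjacent levels only, i.e. the "incomplete fusion" discussed above.
        def __init__(self, in_channels, out_channels=256):
            super().__init__()
            self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
            self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                        for _ in in_channels)

        def forward(self, feats):
            # feats are ordered from high resolution (C2) to low resolution (C5)
            laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
            for i in range(len(laterals) - 2, -1, -1):   # top-down pass
                laterals[i] = laterals[i] + F.interpolate(
                    laterals[i + 1], size=laterals[i].shape[-2:],
                    mode="bilinear", align_corners=False)
            return [sm(p) for sm, p in zip(self.smooth, laterals)]

Each level only ever receives information from the level directly above it; the MSR module described next replaces this with an all-at-once scheme.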
Inspired by zooming in to identify pests during the manual annotation process, we design the super-resolution feature enhancement component to improve small-size feature extraction by using deconvolution to obtain fine-grained pest features. To ensure full utilization of the features, we select the feature maps after each ResNet50 block (four in total) as the input of the super-resolution component. We use 1 × 1 convolution kernels on each feature layer to change the number of channels to 256. Because pest objects are small, we deconvolve the feature map after the first block of the ResNet50 network to enhance texture information, mirroring the way people zoom in on images to recognize small objects. In this way, we obtain 5-layer feature maps: four layers from the feature extraction network and one layer from the deconvolution operation. We upsample the upper-layer features by bilinear interpolation and add them to the lower-layer features to carry out adjacent-layer feature fusion. A 3 × 3 convolution kernel is utilized to enhance the feature representation capability, and a max-pooling operation is carried out on the top-layer feature to enhance semantic information. After the above process, we have 6-layer feature maps, in which the top-layer feature obtained by max pooling has sufficient semantic information and the bottom-layer feature obtained by deconvolution has rich texture information. Figure 3 shows the super-resolution feature enhancement component designed in this study.
Feature Full Fusion
The feature full fusion mechanism is used to improve the performance of multi-scale pest detection. By fusing the information of different feature layers at once, it avoids the defects of existing methods, which only combine adjacent layers or use a single feature layer to detect pests (Jiao et al., 2020; Liu et al., 2020). The inspiration for our design comes from the way people look at images: people often treat an image as a single 2D picture because the human eye processes multiple channels (usually RGB, 3 channels) at once. Similarly, the feature full fusion mechanism combines the 6-layer features from the super-resolution component at once. We fuse the 6-layer feature maps into five layers to improve network efficiency. Specifically, for each of the 6-layer feature maps, we use bilinear interpolation to resize them to five sizes, with resolutions of 200 × 272, 100 × 136, 50 × 68, 25 × 34, and 13 × 17, respectively. We stack features of the same size and use a 1 × 1 convolution to unify the channels to 256. The stacked feature maps are added to the C1-C5 feature maps of the original feature maps. It is important to note that, although superficially similar, our feature full fusion is substantially different from a fully connected layer, because our feature full fusion module preserves the translation invariance of the pixels. This also leaves enough information for the next feature full weighting module. Figure 4 shows the feature full fusion mechanism; a sketch follows below.
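A minimal sketch of this all-at-once fusion, under our reading of the text (every map resized to every target size, concatenated, reduced by a 1 × 1 convolution, and added to the original map); the class name and concatenation order are our assumptions:

    import torch
    import torch.nn.functional as F
    from torch import nn

    class FeatureFullFusion(nn.Module):
        # Fuse every level into every level at once: each of the six maps is
        # resized to each of the five target sizes, concatenated along channels,
        # reduced back to 256 channels by a 1x1 conv, and added to the original map.
        def __init__(self, num_levels=6, channels=256, num_outputs=5):
            super().__init__()
            self.reduce = nn.ModuleList(
                nn.Conv2d(num_levels * channels, channels, 1) for _ in range(num_outputs))

        def forward(self, feats, originals):
            # feats: six maps of shape (B, 256, h_i, w_i); originals: the maps C1-C5
            outs = []
            for reduce, base in zip(self.reduce, originals):
                size = base.shape[-2:]
                resized = [F.interpolate(f, size=size, mode="bilinear",
                                         align_corners=False) for f in feats]
                outs.append(base + reduce(torch.cat(resized, dim=1)))
            return outs

Unlike a fully connected layer, every operation here is convolutional or a pure resize, which is what preserves the translation invariance the text emphasizes.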
Feature Full Weighting
Because of the highly similar pests in the LLPD-26 (e.g., Cnaphalocrocis medinalis and Pyrausta nubilalis, or Mamestra brassicae Linnaeus and Scotogramma trifolii Rottemberg), fine-grained identification is required to improve detection performance. We design the feature full weighting for feature reinforcement learning; it can optimize the detection performance on similar pests along two dimensions (depth and location). For the feature map (W, H, C) of each layer, our weighting method weights the channels C and the points (x, y) of the feature map, where W is the width, H is the height, and C is the number of channels. We use Formula (1) to describe our weighting method, where π_L(·) represents the local weighting function, π_C(·) represents the channel weighting function, X represents the feature map, W(X) represents the weighted feature map, and α is the scale factor. Formulas (2) and (3) give the specific forms of π_L(·) and π_C(·), respectively. Among them, x_j represents a point on the feature map other than the point x_i, θ(·) and φ(·) represent learnable functions of the feature X, and avg(·) and max(·) represent global average pooling and global maximum pooling, respectively. To guarantee the end-to-end pattern, we use convolution operations to carry out the feature full weighting, as shown in Figure 5.
Soft-IoU
In general object detection (such as PASCAL VOC), IoU50 is used as the threshold to determine positive and negative samples. However, for pest detection, different bounding boxes with the same IoU value perform differently. Therefore, we design the SI with a position suppression method to optimize the training and test processes. Specifically, the calculation of the SI is shown in Formula (4), where E(·) represents the Euclidean distance, A_center and B_center represent the center points of bounding boxes A and B, respectively, A_diagonal and B_diagonal represent the diagonal lengths of bounding boxes A and B, respectively, Max(·) represents the maximum function, and β is the scaling factor. To ensure stability, we adjust the IoU by no more than 0.1 times the original IoU. Because high-quality positive samples help train the network finely, β is set to 0.9 during training; in the test phase, β = 1.1, because we expect a bounding box such as the one shown in Figure 1A to be output as a positive sample.
Experiment Settings
We use backpropagation and Stochastic Gradient Descent (SGD) to train our MSR-RCNN (LeCun et al., 1989). For the training of MSR-RCNN, each SGD mini-batch is constructed from a single pest image and contains 256 samples; negative and positive samples are randomly selected at a ratio of 1:1 in each mini-batch. A Gaussian distribution with a mean of 0 and an SD of 0.01 is used to initialize the parameters of the classification/regression layers. In each SGD iteration, we use the RPN to generate 1,000 candidate regions that are sent to the R-CNN for learning. We train for a total of 12 epochs with a momentum of 0.9; the first 8 epochs use a learning rate of 0.0025 and the last 4 epochs use 0.00025. Our experiments are deployed on a Dell 750 server with an NVIDIA Titan RTX GPU (24 GB memory) using the MMDetection 2.0.0 (Chen et al., 2019) framework and Python 3.8. Unless otherwise stated, all comparison models in this study use their default parameters. Since the Smooth L1 loss function is differentiable at zero, we use it to train the R-CNN network for more stable performance; because the L1 loss is non-differentiable at zero, we apply it in RPN training to improve robustness.
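Formula (4) itself is not reproduced in this excerpt, so the following is only one plausible reading that combines the ingredients named above: the Euclidean distance between box centers, normalization by the larger diagonal, the scale factor β, and the cap of 0.1 × IoU on the adjustment. The exact functional form is our assumption, and iou() is the helper from the earlier sketch:

    import math

    def soft_iou(a, b, beta=0.9):
        # One plausible reading of Soft-IoU; the exact form of Formula (4)
        # is an assumption, not the paper's verbatim definition.
        base = iou(a, b)
        ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2    # center of box A
        bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2    # center of box B
        center_dist = math.hypot(ax - bx, ay - by)       # E(A_center, B_center)
        diag = max(math.hypot(a[2] - a[0], a[3] - a[1]),
                   math.hypot(b[2] - b[0], b[3] - b[1])) # Max(A_diag, B_diag)
        position = 1.0 - center_dist / diag              # 1 when the centers coincide
        adjust = (beta * position - 1.0) * base          # signed change to the plain IoU
        adjust = max(-0.1 * base, min(0.1 * base, adjust))  # |SI - IoU| <= 0.1 * IoU
        return base + adjust

With β = 0.9 the adjustment is never positive, so only well-centred, high-quality boxes survive as training positives; with β = 1.1 at test time a well-centred box such as the one in Figure 1A is nudged back above the output threshold.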
Performance on Our LLPD-26
We compare the performance of our method with Faster R-CNN (Ren et al., 2017), Cascade R-CNN (Cai and Vasconcelos, 2018), Libra R-CNN, FCOS (Tian et al., 2019), RetinaNet (Lin et al., 2017b), AF-RCNN (Jiao et al., 2020), and HGLA (Liu et al., 2020), as shown in Table 1. In Table 1, AF-RCNN and HGLA are existing deep learning-based pest detection methods, MSR represents the MSR feature enhancement module proposed by us, SI represents the Soft-IoU mechanism, AP50 represents the Average Precision (AP) with an IoU threshold of 50%, and AP represents the mean AP over IoU thresholds of 50, 75, and 95%. The FPN (Lin et al., 2017a) is used in all comparison methods. Our MSR module is slightly inferior to Libra R-CNN in AP75 performance, owing to the high-quality training boxes provided by the balanced sampling approach of Libra R-CNN. In addition, since pest detection focuses more on point-location performance than on bounding-box IoU performance, AP50 is more valuable than AP75. With the SI training method, MSR-RCNN outperforms the other methods. To compare the performance of the proposed method in detail, the AP50 results for each category are given in Table 2; the best results for each class are emphasized in bold, and * represents methods reproduced using MMDetection. It can be found that our network outperforms the other methods. Figure 6 shows the performance improvement of our MSR-RCNN compared with Faster R-CNN; the blue bar chart represents the size of the pest, and the line chart describes the performance improvement over Faster R-CNN. Our methods (MSR and SI) mainly improve the detection performance on small-size objects; for medium-size pests, the improvement from Soft-IoU is significant.
The Training Loss and AP
To explain the improvement of our network in more detail, we present the training-loss diagram of MSR-RCNN, Faster R-CNN, FCOS, and HGLA in Figure 7. Faster R-CNN represents two-stage methods, FCOS represents one-stage methods, and HGLA represents pest detection methods. Following the parameter settings of MMDetection, the batch size of FCOS is 4 samples, so its loss iterations are only half those of the other methods. It is clear that, compared with the other networks, our MSR-RCNN has stronger data-fitting ability and is capable of more complex work. In addition, MSR-RCNN converges fastest.
The Beta Value
For the β in Formula (4), an ablation study is performed and the results are shown in Figure 8. When β is less than 0.9, detector performance suffers because a large number of positive samples turn into negative samples, resulting in an imbalance between positive and negative samples. When β is greater than 0.9, the training of the model is misled by the addition of too many low-quality detection boxes.
The Backbone of Our MSR Pest Detection Network
After a detailed comparison of common backbone networks, we choose ResNet50 as the backbone of MSR-RCNN. Table 3 shows the performance of our MSR-RCNN with different backbone networks. Why is the result of ResNet50 better than that of ResNet101? The reason is that object sizes are generally small in our dataset; as the network deepens, the features of small objects gradually disappear in successive convolution operations, and the top-to-down feature fusion transmits blurry semantic information, decreasing performance. To be fair, ResNet50 is used as the backbone extraction network for all comparative experiments in this study, unless otherwise stated.
MSR Module With Various Networks
We compare the performance of our MSR module combined with Faster R-CNN, Cascade R-CNN, FCOS, and RetinaNet, as shown in Table 4.
Generalization Capacity
We compare performance on general object detection datasets (PASCAL VOC and COCO), as shown in Table 5, where * represents results that we reproduced with MMDetection under the same parameter settings. Because the Soft-IoU is designed for pest detection, we only present the performance of MSR-RCNN with the MSR module. Since MSR-RCNN is a small-size detection network for pest detection, we do not evaluate AP_l. The training set of PASCAL VOC 0712 is used to train the networks and the test set of PASCAL VOC 2007 is used to verify the results. The experimental results show that our method can significantly improve performance at IoU50 and on small-size objects, which is highly consistent with the original intention of our MSR module. In addition, Figure 9 shows the performance comparison between our method and Faster R-CNN on different datasets, where the blue bar chart represents the normalized relative average size of the objects in several datasets and the yellow bar chart shows the normalized relative AP improvement of our MSR-RCNN over Faster R-CNN. With the increase of the average object size, the improvement becomes more and more obvious.
Qualitative Results
To visually assess accuracy, we visualize the detection results of Faster R-CNN, AF-RCNN, HGLA, and MSR-RCNN (ours) in Figure 10. The first column shows densely distributed pest images, the second and fourth columns show sparsely distributed pest images, and the third column shows detection results when the camera carries water mist caused by temperature changes. The visualization shows that HGLA produces many overlapping bounding boxes, while AF-RCNN and Faster R-CNN mainly exhibit missed bounding boxes and false results (Figure 10, columns 1 and 2). For column 3 in Figure 10 (low-quality images caused by equipment issues), all detection results degrade, but our MSR-RCNN is the least weakened, owing to our feature super-resolution module. Although MSR-RCNN wrongly identifies the rice planthopper in the fourth-column images (class 1 is identified as class 14), the other methods did not find the minimum-sized pests at all (Figure 10, column 4). The visualization results show that our MSR-RCNN outperforms the other methods.
CONCLUSION
This study aims to bridge the gap between generic object detection and pest detection, where the challenges lie in object characteristics and IoU adaptation. We propose MSR-RCNN, targeted at detecting agricultural pests of 26 categories. Specifically, we build a large-scale light-trap pest dataset, LLPD-26. To tackle the detection difficulty of small-size, multi-scale, and highly similar pests, MSR-RCNN adopts an MSR module that includes a super-resolution component, a feature fusion mechanism, and a feature weighting mechanism. In addition, motivated by the higher importance of pest positions, we propose an SI strategy to improve the adaptability of the network. The experimental results show that the proposed method can effectively detect multiple classes of pests, and ablation experiments verify that the MSR module can improve the performance of other detectors in plug-and-play form. Future study will focus on few-shot pest detection research and real-world application deployment.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary materials; further inquiries can be directed to the corresponding author/s.
AUTHOR CONTRIBUTIONS
YT contributed to the conception and design of the software, analysis of the data, and writing and revising the manuscript. SD and SZ carried out the compared methods using the AF-RCNN and HGLA detectors in the experimental part. JZ and LL contributed to writing and revising the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
This study is supported in part by the National Natural Science Foundation of China (no. 31671586) and the demonstration of intelligent management and control technology for the whole cycle of agricultural production (no. KFJ-STS-QYZD-167-02).
Calcium/Calmodulin-dependent Protein Kinase II Phosphorylation Drives Synapse-associated Protein 97 into Spines*
Synapse-associated protein 97 (SAP97) has been implicated in the correct delivery and clustering of ionotropic glutamate receptors to the postsynaptic compartment. Here we demonstrate that synaptic trafficking of SAP97 itself is modulated by calcium/calmodulin-dependent protein kinase II (CaMKII) in cultured hippocampal neurons. CaMKII activation led to increased targeting of SAP97 into dendritic spines, whereas CaMKII inhibition was responsible for high SAP97 colocalization in the cell soma with the endoplasmic reticulum protein disulfide-isomerase. No effect was detected for other members of the membrane-associated guanylate kinase protein family, such as SAP102 and PSD-95. Transfection of activated αCaMKII T286D dramatically increased the concentration of both endogenous and transfected SAP97 at postsynaptic terminals. In vitro CaMKII phosphorylation of the SAP97 N-terminal fusion protein and metabolic labeling of transfected COS7 cells indicated SAP97-Ser-39 as a CaMKII phosphosite in the SAP97 protein sequence. Moreover, transfection in hippocampal neurons of SAP97 mutants that blocked or mimicked Ser-39 phosphorylation had effects similar to those observed upon inhibiting or constitutively activating CaMKII. Further, CaMKII-dependent SAP97-Ser-39 phosphorylation determined the delivery of GluR1-containing AMPA receptors to the postsynaptic compartment.
The correct recruitment of ionotropic glutamate receptor (iGluR) subunits into the postsynaptic compartment is a highly regulated process that requires the concerted action of diverse intracellular elements modulating the association/dissociation of key protein complexes within multiple intracellular compartments. Among the molecular associations regulating subcellular targeting of iGluR subunits, interaction with members of the membrane-associated guanylate kinase (MAGUK) protein family has been proposed (1). In fact, MAGUK protein family members have been addressed as organizing elements in excitatory neurons (2). MAGUKs are characterized by a common multimodular structure including three PDZ domains, a Src homology 3 domain, and a guanylate kinase-like domain. Members of this family act as molecular scaffolds for iGluRs, mainly by direct interaction with an SXV motif on the cytoplasmic termini of their binding proteins. Recently, members of this family have been proposed to be involved in iGluR trafficking (3). Among MAGUKs, SAP97 is the rat homologue of the Drosophila (Dlg) and human (hDlg) discs large tumor suppressor protein.
In the mammalian central nervous system, SAP97 has been described as enriched both at pre- and postsynaptic compartments, where it has been implicated in the processing of the GluR1 subunit of the AMPA receptor (4), and an interaction with NR2-type NMDA subunits has been put forward (5, 6). SAP97 interacts with GluR1 early in the biosynthetic pathway of GluR1-containing AMPA receptors (7, 8), confirming that PDZ protein interactions at the level of the endoplasmic reticulum-cis-Golgi play an important part in the distribution and surface expression of ion channels in neurons. In contrast, few synaptic AMPA receptors associate with SAP97 (8), suggesting that SAP97 dissociates from the receptor complex at the plasma membrane. These data imply the existence of a fine-tuning of iGluR interactions with SAP97 mediated by a still unknown partner. Recently, different members of the MAGUK family have been identified as possible new targets for CaMKII (6, 9-10), raising the possibility of a new mechanism regulating MAGUK function in the postsynaptic neuron. In fact, although several CaMKII substrates have been identified in the last few years (10), little is known about the functional role of specific CaMKII-dependent phosphorylation processes that take place in vivo in the postsynaptic compartment. Recently, studies from our group (6) showed that CaMKII-dependent phosphorylation of SAP97-Ser-232 within the PDZ1 domain disrupts SAP97 interaction with NR2A, thereby regulating synaptic targeting of this NMDA receptor subunit. Here we show that CaMKII-mediated phosphorylation of an additional SAP97 site, Ser-39, within the L27 domain is necessary and sufficient to drive SAP97 to the postsynaptic compartment in cultured hippocampal neurons. In addition, SAP97-Ser-39 phosphorylation represents a key step governing GluR1-containing AMPA receptor delivery to the postsynaptic complex, thus suggesting SAP97 as a multimodular element in which distinct domains play differential roles in organizing the glutamatergic synapse.
EXPERIMENTAL PROCEDURES
Neuronal Cultures, Transfection, and Immunofluorescence Labeling-Low density or high density hippocampal neuronal cultures were prepared from E18-E19 rat hippocampi as previously described, with minor modifications (11). Neurons were transfected using the calcium phosphate precipitation method at 10 days in vitro. Mutated products were obtained by using the QuikChange™ site-directed mutagenesis kit (Stratagene, La Jolla, CA). Hippocampal neurons were fixed in 100% methanol at −20°C for 15 min. Transfected cells were used 4 days after transfection. Primary and secondary antibodies were applied in GDB buffer (30 mM phosphate buffer (pH 7.4) containing 0.2% gelatin, 0.5% Triton X-100, and 0.8 M NaCl). Fluorescence images were acquired using a Bio-Rad Radiance 2100 confocal microscope. Pharmacological agents were used at the following concentrations: 50 μM NMDA (Sigma), 10
TIF Preparation-Triton-insoluble fractions (TIF) were isolated from neurons harvested at 10-14 DIV as previously described (6).
Metabolic Labeling-COS-7 cells were transfected with empty vector, GFP-SAP97wt + αCaMKII-T286D, GFP-SAP97(S39A,S232A) + αCaMKII-T286D, or GFP-SAP97(S39A) + αCaMKII-T286D. 48 h after transfection, cells were preincubated for 2 h in phosphate-free minimal essential medium. The medium was aspirated and replaced with fresh phosphate-free minimal essential medium containing [32P]orthophosphate (500 μCi/ml).
After 2 h, okadaic acid (0.2 μM) was added to the medium and cells were incubated for an additional hour. Cells were harvested into 0.5 ml of ice-cold solubilization buffer containing a complete set of protease inhibitors (Complete™; Roche Diagnostics). Each solubilized sample was then incubated overnight at 4°C with anti-SAP97 polyclonal antibody. Protein A-Sepharose beads washed in the same buffer were added, and incubation continued for 2 h. Western blots were performed on all immunoprecipitated samples to verify that equal amounts of SAP97 protein were being precipitated under all conditions.
Image Acquisition and Quantification-Confocal images were obtained using a Nikon ×60 objective with sequential acquisition settings at 1024 × 1024 pixel resolution. Each image was a z-series projection of approximately 8-12 images taken at 0.5-1-μm depth intervals. Pharmacologically treated and transfected neurons were chosen randomly for quantification from two to five coverslips from three to five independent experiments for each construct. Quantification of confocal experiments was performed using Bio-Rad Laserpix software. Both image acquisition and quantification of the fluorescence signal were performed by investigators who were "blind" to the experimental condition. Quantification of Western blot analysis and autoradiography was performed by means of computer-assisted imaging (Quantity-One® system; Bio-Rad), and statistical evaluations were performed by one-way analysis of variance followed by Bonferroni as a post hoc comparison test.
Cloning, Expression, and Purification of GST Fusion Proteins-SAP97 fragments were subcloned downstream of glutathione S-transferase (GST) in the BamHI and HindIII sites of the expression plasmid pGEX-KG by PCR using Pfu polymerase (Promega). The inserts were fully sequenced with the ABI Prism 310 genetic analyzer (ABI Prism). SAP97·GST fusion proteins were expressed in Escherichia coli, purified on glutathione-agarose beads (Sigma), and eluted as previously described (12).
Fusion Protein Phosphorylation-GST·SAP97 purified fusion proteins were incubated with αCaMKII(1-325) (New England Biolabs, Beverly, MA) at 37°C in the presence of 20 mM HEPES (pH 7.4), 10 mM MgCl2, 1 mM dithiothreitol, 2.4 μM calmodulin, and 2 mM CaCl2 with 100 μM ATP ([γ-32P]ATP, 2 μCi/tube, 3000 Ci/mmol; Amersham Biosciences). The reaction was initiated by the addition of the kinase solution to the reaction mixture, carried out for 5 min, and stopped by the addition of SDS sample buffer.
Surface Expression Assays-Chymotrypsin (Sigma) treatments and cross-linking experiments by means of BS3 (Pierce) were performed as previously described (13).
RESULTS
Differential Distribution of MAGUK Family Members in Hippocampal Neurons-We examined by confocal labeling the distribution pattern of SAP97 and SAP102 in cultured hippocampal neurons (Fig. 1, A-J); PSD-95 was used as a marker of postsynaptic structures. SAP97 displayed diffuse labeling in the somatic cytoplasm; immunoreactivity was also present in dendrites, where both diffuse and punctate staining were present. A moderate colocalization pattern with PSD-95 was observed, indicating the presence of endogenous SAP97 at the postsynaptic side of excitatory synapses (Fig. 1, A-D). SAP102 labeling revealed more intense staining in the soma than in the dendrites (Fig. 1E). No punctate SAP102 labeling or colocalization with PSD-95 was detectable, suggesting the absence of SAP102 from dendritic spines.
Neuronal activation by means of a 15-min treatment with NMDA (50 μM) led to increased punctate staining of SAP97 in the dendritic compartment, leading to a higher degree of colocalization with PSD-95 (Fig. 1, F-I). Quantification of SAP97 punctate staining revealed a significant increase of SAP97 immunoreactivity in PSD-95-positive dendritic spines (Fig. 1K; *, p < 0.01, NMDA versus control). On the other hand, no modification of SAP102 immunostaining was induced by NMDA treatment (Fig. 1J). To confirm that endogenous SAP97 was expressed in the postsynaptic compartment of cultured neurons and to confirm its trafficking after NMDA treatment, a biochemical approach was also used. TIF was obtained from control and NMDA-treated neurons (see Ref. 6) and protein levels were measured in the homogenate and the TIF. The same amount of protein from homogenate and TIF was loaded on SDS-PAGE for Western blot analysis. As shown in Fig. 1L, SAP97 is barely detectable in the TIF of untreated neurons; NMDA treatment, confirming the confocal experiments, significantly increases SAP97 immunostaining in the TIF without affecting the total SAP97 protein level in the homogenate (*, p < 0.01; +57.2 ± 8.1%, NMDA versus control, expressed as the SAP97 TIF/homogenate ratio). No alteration of PSD-95 immunostaining in either homogenate or TIF was observed after NMDA treatment. No SAP102 signal was present in the TIF. As expected, treatment of hippocampal cultures with NMDA leads to stronger staining of αCaMKII in the TIF (14).
CaMKII Inhibition Affects SAP97 Distribution in Hippocampal Neurons-Previous data from our laboratory demonstrated that NMDA exposure, leading to maximal CaMKII activation, produced in vivo SAP97 phosphorylation in cultured hippocampal neurons; the CaMKII inhibitor KN-93 reduced SAP97 phosphorylation to basal levels, indicating a key role for this kinase in SAP97 phosphorylation (6). These data and other recent evidence (10) identified SAP97 as a substrate for CaMKII. These observations led us to test whether CaMKII-dependent SAP97 phosphorylation is involved in SAP97 trafficking from intracellular compartments into dendritic shafts and spines. To this purpose, hippocampal cultures were exposed to NMDA in the absence or presence of specific CaMKII inhibitors. Two different strategies were used to inhibit CaMKII: i) incubation with the competitive inhibitor KN-93, and ii) incubation with the autoinhibitory peptide (AIP-2) fused to the antennapedia peptide (Ant). Both KN-93 (Fig. 2, D-F) and Ant-AIP-2 (Fig. 2, G-I) were able to modify SAP97 distribution, leading to higher SAP97 immunoreactivity within the soma and a parallel decrease of SAP97 staining in the proximal and distal dendrites when compared with untreated neurons (Fig. 2, A-C). Quantification of the dendritic versus cell soma SAP97 immunostaining by measuring the relative fluorescence intensity revealed a significant decrease of the SAP97 fluorescent signal in dendritic structures as a consequence of CaMKII inhibition (Fig. 2N; *, p < 0.001 KN-93 versus control; **, p < 0.0005 Ant-AIP-2 versus control). In addition, colocalization between SAP97 and PSD-95 was lost because the SAP97 punctate signal in dendritic spines was absent. NMDA treatment was not able to counteract the effect of CaMKII inhibitors on SAP97 localization (Fig. 2, J-N; **, p < 0.0005, NMDA + Ant-AIP-2 versus control). Moreover, the CaMKII-dependent effect on SAP97 localization was specific for this member of the MAGUK protein family, because other members of the family such as PSD-95
(Fig. 2, B, E, H, and K) and SAP102 (Fig. 2M) were not affected. In fact, no alterations of PSD-95 punctate staining or of the SAP102 somatodendritic pattern were induced by CaMKII inhibitors. As described above (see Fig. 1L), a biochemical approach was again used to confirm the modulation of SAP97 distribution by CaMKII inhibitors: co-treatment with Ant-AIP-2 + NMDA, confirming the confocal experiments, significantly decreased SAP97 immunostaining in the TIF (Fig. 2O; *, p < 0.005, +97.2 ± 6.7%, NMDA versus control; *, p < 0.01, −44.5 ± 4.5%, NMDA + Ant-AIP-2 versus control) without affecting PSD-95 immunostaining. To confirm that SAP97 trafficking in hippocampal neurons was specifically modulated by CaMKII, inhibitors of tyrosine kinases (1 μM PP2), PKA (10 μM H89), and PKC (0.1 μM GF109203X) were used. SAP97 staining in the presence of all these inhibitors was comparable with the SAP97 immunoreactivity observed in control neurons (data not shown). Because a pronounced intracellular SAP97 staining was observed in the cell soma of control untreated neurons, we performed colocalization experiments with an endoplasmic reticulum-specific marker, protein disulfide-isomerase; SAP97 showed good colocalization with DSI in untreated cultures (Fig. 3, A-C; see Ref. 4). NMDA treatment led to decreased colocalization between SAP97 and the ER marker in the cell soma (Fig. 3, D-F). Of relevance, coincubation with the CaMKII inhibitor Ant-AIP-2 determined a high degree of colocalization between SAP97 and the ER marker DSI (Fig. 3, G-I). It has been shown that calcium-induced calcium release from the ER, usually associated with activation of ryanodine receptors (RyRs), is triggered by entry of calcium through NMDA receptor channels (15). Because it is known that CaMKII can be associated with RyRs (16, 17), we tested whether CaMKII-dependent modulation of SAP97 trafficking was correlated with activation of RyRs by examining the distribution pattern of SAP97 in neurons exposed to caffeine (Fig. 4). Caffeine treatment induced SAP97 trafficking to "spine-like" structures (as confirmed by double labeling of neurons with the postsynaptic protein marker PSD-95; data not shown), both in the absence and in the presence of D-2-amino-5-phosphonopentanoic acid (Fig. 4A). In addition, blocking RyRs with high concentrations of ryanodine antagonized the effects of NMDA on SAP97 distribution (Fig. 4A), suggesting that calcium-induced calcium release from RyRs is necessary to trigger SAP97 trafficking.
FIG. 1. L, Western blot analysis of the homogenate or TIF obtained from control or NMDA-treated high density hippocampal cultures. The same amount of protein was loaded in each lane. NMDA treatment leads to higher SAP97 localization in the TIF, leaving the total amount of SAP97 unaltered. CaMKII, but not PSD-95, immunostaining was more intense in the TIF of NMDA-treated cultures. Notably, SAP102 levels were undetectable in the TIF.
No effect on SAP102 immunostaining was observed as a consequence of the treatments indicated above acting on ER stores (data not shown). Also under these experimental conditions, coincubation with the CaMKII inhibitor Ant-AIP-2 was able to block any effect of caffeine on SAP97 trafficking, as tested both by confocal analysis
(Fig. 4C; *, p < 0.0005 caffeine versus control; **, p < 0.001 caffeine + Ant-AIP-2 versus control, expressed as the ratio of dendrite to cell soma average fluorescence) and by Western blotting of the TIF (Fig. 4D; *, p < 0.01, +67.0 ± 8.3%, caffeine versus control; *, p < 0.005, −69.5 ± 4.3%, caffeine + Ant-AIP-2 versus control), confirming the central role of CaMKII in these events.
CaMKII-dependent Phosphorylation of SAP97-Ser-39 Affects SAP97 Trafficking into Spines-Modulation of SAP97 trafficking by CaMKII was further studied in hippocampal neurons cotransfected with GFP-SAP97wt and αCaMKIIwt or active αCaMKII T286D (Fig. 5A). Four days after transfection, neurons were fixed and confocal analysis was performed. Several CaMKII phosphorylation consensus domains are distributed along the SAP97 sequence (6, 9). Previous data from our laboratory addressed Ser-232 within the PDZ1 domain as a CaMKII in vivo phosphorylation site in SAP97; CaMKII-dependent phosphorylation of this site is responsible for the dynamic modulation of the SAP97-NR2A complex (6). Inspection of the SAP97 sequence reveals the presence of another domain containing a serine residue (Ser-39 in the L27 N-terminal domain) that could represent a putative phosphate acceptor site for CaMKII (Fig. 6A). In the in vitro phosphorylation assays, a phosphoband corresponding to autophosphorylated αCaMKII(1-325) was also detectable (lower arrow). To further investigate the molecular mechanism governing CaMKII-dependent trafficking of SAP97, we examined the spatial localization of transfected SAP97 constructs carrying point mutations in the previously identified CaMKII consensus sites in hippocampal neurons (Fig. 7A). The single SAP97(S232D) mutation, mimicking phosphorylation, did not produce any significant effect on SAP97 localization in dendritic spines when compared with the wild type protein, suggesting that CaMKII phosphorylation of this site is not involved in SAP97 trafficking but only in the modulation of the SAP97 interaction with NR2A, as previously described (6). Similar results were obtained with the SAP97(S39A) construct, whereas the SAP97(S39D) mutation led to increased staining of SAP97 in "spine-like" structures (as confirmed by double labeling of neurons with the postsynaptic protein marker PSD-95; data not shown). Quantification of the different SAP97 mutation constructs revealed a significant increase of spine-like clusters in SAP97(S39D)-transfected neurons compared with all other constructs (Fig. 7B). Colocalization experiments performed with synaptophysin indicated a specific postsynaptic immunoreactivity of the SAP97(S39D) construct in close opposition to the characteristic presynaptic synaptophysin staining (Fig. 7C). Moreover, double transfection of SAP97(S39A) + active T286D αCaMKII showed that abrogation of the Ser-39 phosphosite is sufficient to block any effect of the active form of the kinase on SAP97 enrichment in punctate structures (Fig. 7D).
FIG. 4. Caffeine administration results in a redistribution of SAP97 into spine-like clusters; D-2-amino-5-phosphonopentanoic acid treatment is not able to oppose this process. Blocking calcium release from intracellular stores using high concentrations of ryanodine interferes with NMDA-mediated SAP97 trafficking toward spines. Scale bar, 4 μm. B, caffeine treatment is not able to drive SAP97 localization into spines after CaMKII inhibition. Hippocampal cultures were left untreated (control) or treated with caffeine (10 mM) alone or plus Ant-AIP-2 (10 μM), fixed, and immunolabeled for SAP97. Scale bar, 10 μm.
C, quantification of the experiments shown in panel B. The ratio of dendrite to cell soma average fluorescence was computed and averaged. *, p < 0.0005 caffeine versus control; **, p < 0.001 caffeine + Ant-AIP-2 versus control; analysis of variance. D, representative Western blot analysis of the TIF fractions purified from high density hippocampal cultures left untreated (control) or treated with caffeine (10 mM, 15 min) with or without CaMKII inhibition mediated by Ant-AIP-2. The SAP97 subcellular distribution in this fraction following these pharmacological treatments confirms the subcellular localization observed by confocal microscopy.
Abrogation of the SAP97-Ser-39 Phosphosite Impairs AMPA Receptor Subunit GluR1 Spine Delivery-SAP97 has been shown to associate with the C-terminal PDZ binding domain of ionotropic glutamate receptor subunits in neurons (7). More recently, synaptic targeting of SAP97 has been demonstrated to lead to an increase in synaptic AMPA receptors, spine enlargement, and an increase in miniature excitatory postsynaptic current frequency (4), suggesting that SAP97 can affect the synaptic recruitment of AMPA receptors. We therefore asked whether the CaMKII-dependent trafficking of endogenous and/or overexpressed SAP97 could affect synaptic targeting of endogenous GluR1-containing AMPA receptors. To this aim, hippocampal neurons were transfected with GFP-SAP97 constructs and labeled with GluR1 antibody to identify the subcellular distribution of GluR1-containing AMPA receptors (Fig. 8). Neurons transfected with GFP-SAP97(S39D) (Fig. 8B) showed characteristic punctate GluR1 staining in spine-like structures, with an apparent increase in the size of GluR1 clusters but a distribution very similar to that of neighboring untransfected neurons (Fig. 8A). Similar data were obtained with GFP-SAP97wt (data not shown), in agreement with previous observations indicating an increase in the size of GluR1 clusters in GFP-SAP97-expressing neurons (4). On the other hand, transfection of GFP-SAP97(S39A) (Fig. 8C) resulted in a dramatic redistribution of the GluR1 signal, with intense labeling in the cell soma and diffuse dendritic staining, suggesting that CaMKII phosphorylation of the SAP97-Ser-39 phosphosite can also be necessary for the synaptic trafficking of SAP97-interacting proteins, i.e., GluR1. Quantification of spine versus dendritic shaft GluR1 immunostaining by measuring the relative fluorescence intensity revealed a significant decrease of the GluR1 fluorescent signal in spine structures (p < 0.001, −48.7 ± 7.2%, GFP-SAP97(S39A) versus controls). In addition, experiments performed in untransfected neurons showed that inhibition of CaMKII by means of Ant-AIP-2 was responsible for a redistribution of endogenous GluR1, leading to higher immunoreactivity within the soma and a parallel decrease of staining in the distal dendrites and spine-like structures (Fig. 8D). Quantification revealed a significant decrease of GluR1 in spine structures as a consequence of CaMKII inhibition (p < 0.0005, −65.8 ± 6.8%, Ant-AIP-2 versus controls). We next examined the effects of CaMKII inhibition by Ant-AIP-2 treatment on GluR1 surface expression. Control and Ant-AIP-2-treated hippocampal cultures were treated with the cross-linker BS3 or with chymotrypsin (Fig. 8E). A significant increase in the GluR1 intracellular pool was observed with both experimental approaches (BS3: p < 0.01, +69.7 ± 10.3%, Ant-AIP-2 versus control; chymotrypsin: p < 0.01, +51.5 ± 8.9%, Ant-AIP-2 versus control).
DISCUSSION
The identification of the molecular events governing the correct assembly of the different components of the glutamatergic synapse has emerged as a fundamental issue in the understanding of synaptic activity, plasticity, and neurodegenerative processes. In the last few years, several lines of evidence have indicated CaMKII activation as a key event in the regulation of glutamatergic synapses (18).
FIG. 5. Effect of αCaMKII overexpression on SAP97 subcellular localization. A, neurons were cotransfected with GFP-SAP97 (green) plus αCaMKII wild type (red, upper panels) or plus constitutively active αCaMKII T286D (red, lower panels), fixed, and stained for the transfected proteins. The active form of the kinase is able to induce GFP-SAP97 localization into spine-like structures, whereas the wild type CaMKII does not influence GFP-SAP97 subcellular distribution. Merge data are shown on the right. B, effects of αCaMKII overexpression on endogenous SAP97. Neurons were transfected either with αCaMKII wild type (upper panels) or with constitutively active αCaMKII T286D (lower panels) and subsequently stained for the transfected proteins (red) and endogenous SAP97 (green). Merge data are shown on the right. αCaMKII wild type overexpression does not interfere with the endogenous SAP97 somatodendritic distribution pattern, leaving it similar to control cultures. Notably, αCaMKII T286D is not only able to induce GFP-SAP97 distribution into spines but is also able to exert the same effect on endogenous SAP97.
In this report, we identify an additional role for CaMKII in modulating the postsynaptic trafficking of SAP97, a member of the MAGUK protein family. In fact, our data show that CaMKII-mediated phosphorylation of SAP97-Ser-39 is indeed necessary and sufficient to drive SAP97 to the postsynaptic compartment in cultured hippocampal neurons. Recently, studies from our group demonstrated that CaMKII-dependent phosphorylation of SAP97-Ser-232 within the PDZ1 domain modulates the association/dissociation of the SAP97-NR2A complex (6), raising the possibility of novel strategies regulating SAP97 function in hippocampal neurons. Here we confirm and expand the concept that CaMKII activation or inhibition results in a direct regulation of SAP97 function; indeed, phosphorylation/dephosphorylation of Ser-39 entails changes in SAP97 distribution and, consequently, in SAP97 synaptic localization. Transfection in hippocampal neurons of SAP97 mutants that blocked or mimicked Ser-39 phosphorylation has effects similar to those observed upon inhibiting or constitutively activating CaMKII, thus clearly addressing SAP97-Ser-39 as necessary and sufficient for the modulation of SAP97 trafficking by CaMKII. No effect is obtained when the wild type enzyme is transfected, suggesting that it is not αCaMKII per se but rather kinase activation/autophosphorylation that is the key molecular event in the modulation of SAP97 distribution. In addition, the double S39A,S232A mutation abolished CaMKII-dependent phosphorylation of SAP97 in metabolic labeling experiments in transfected COS-7 cells. These data indicate the presence of two serine residues, in the L27 and PDZ1 domains, respectively, as the major in vivo phosphosites in SAP97, supporting the idea that SAP97 acts as a multimodular element in which distinct domains play differential roles in the correct delivery of excitatory glutamate receptors.
Together, these results demonstrate that the translocation of SAP97 to the postsynaptic compartment is regulated by CaMKII-dependent phosphorylation and consequently suggest a novel mechanism for the regulation of synaptic delivery of SAP97-interacting proteins, i.e. glutamate receptors. In fact, members of the MAGUK protein family have been implicated as major players in the targeting and clustering of glutamate receptor subunits (1,2). SAP97 interacts with GluR1 early in the biosynthetic pathway of GluR1-containing AMPA receptors; therefore, it may play a role in the maturation of receptor complexes in the endoplasmic reticulum-cis-Golgi and the delivery of receptors to synapses, but not in anchoring AMPA receptors at synapses (7,8). In the last five years, several studies have focused on understanding the multiple mechanisms by which AMPA receptor-mediated transmission is strengthened during long term potentiation, and the specific role of CaMKII in these mechanisms has been pointed out. Increased CaMKII activity, but not direct CaMKII phosphorylation of GluR1, was responsible for GluR1 delivery to the synapse (19,20). This process requires interactions between GluR1 and PDZ domain-containing proteins. All these data suggest that some protein(s) other than GluR1 must be substrate(s) of CaMKII and participate in the regulated synaptic delivery of GluR1-containing AMPA receptors. Data presented here confirm and expand this hypothesis, identifying SAP97 phosphorylation by CaMKII as a key step in the complex and finely tuned mechanism governing SAP97 and GluR1-containing AMPA receptor delivery to the postsynaptic complex. Previous observations suggested that MAGUKs contain diverse signals within their N termini for postsynaptic targeting (21).

FIG. 7. CaMKII-dependent phosphorylation of SAP97 on Ser-39 regulates SAP97 synaptic targeting. A, dissociated hippocampal neurons were transfected with wild type GFP-SAP97 or with mutant GFP-SAP97 constructs (as indicated in each panel). SAP97(S232D) and SAP97(S39A) do not differ from the SAP97wt distribution pattern, whereas SAP97(S39D) is more localized into synaptic clusters. The upper panels are a magnification of the lower panels. Scale bar, 15 μm. B, quantification of GFP-SAP97-positive spine density (number of spines/50-μm dendrite length) in neurons transfected as in panel A (at least 10 neurons were examined for each construct). The graph shows a significant increase in SAP97(S39D)-transfected neurons compared with all other constructs (histogram shows mean ± S.D.; *, p < 0.01). C, SAP97(S39D) shows a distinct postsynaptic distribution pattern. Hippocampal neurons were transfected with GFP-SAP97(S39A) (left panels) or GFP-SAP97(S39D) (right panels), fixed, and immunolabeled for the transfected proteins (green) and synaptophysin (red). Merge data (lower panels) indicate a specific postsynaptic immunoreactivity of the SAP97(S39D) construct in close apposition to the characteristic presynaptic synaptophysin staining. Scale bar, 4 μm. D, Ser-39 is necessary for CaMKII-mediated SAP97 synaptic trafficking. Hippocampal neurons were cotransfected with GFP-SAP97(S39A) (green) and active αCaMKII T286D (red). Merge data are shown on the right. Abrogation of Ser-39 prevents the effect induced by αCaMKII T286D on SAP97 subcellular localization. Scale bar, 10 μm.
PSD-95 and PSD-93 are highly concentrated at the PSD, whereas SAP97 and SAP102, although found at synaptic densities, are abundant in the cytoplasm and associated with intracellular membranes (8). In the present study, we found that SAP97 immunoreactivity is primarily somatodendritic, with enrichment in perinuclear and proximal dendritic regions in untreated cultures, although a moderate colocalization with PSD-95 in spine-like structures can also be found. This is in agreement with recent data on the subcellular localization of SAP97 in primary hippocampal neurons (4,8) and also with our recent biochemical studies showing an enrichment of SAP97 in the postsynaptic compartment (6). Of relevance, we show here that both NMDA and caffeine treatments lead to a modification of SAP97, but not of PSD-95 or SAP102, distribution, with a significant increase of SAP97 immunostaining in PSD-95-positive synaptic clusters. Biochemical fractionation experiments confirm the confocal data, showing a specific enrichment of SAP97 in a Triton-insoluble "PSD-like" fraction after NMDA activation when compared with the corresponding SAP97 level in the TIF of untreated neurons. Co-treatment with different kinase inhibitors shows that SAP97 redistribution to spine-like structures is strictly dependent on CaMKII activation and specifically blocked by CaMKII inhibitors; in particular, treatment with CaMKII inhibitors leads to a strong colocalization pattern of SAP97 with the ER marker, DSI. Recent observations indicate that activation of RyRs in the hippocampus can play a role in synaptic plasticity events through the elevation of αCaMKII activity (16,17), suggesting that CaMKII might represent a potential enzymatic target of the calcium-induced calcium release from ER ryanodine stores. Our data, showing that coincubation with the CaMKII inhibitor Ant-AIP-2 blocks any effect of caffeine on SAP97 trafficking, not only confirm a close relationship between calcium-induced calcium release and CaMKII function but also indicate the presence of different pools of CaMKII in hippocampal neurons activated upon different physiological stimuli. Our data show that CaMKII affects SAP97 targeting by direct phosphorylation of SAP97-Ser-39. This phosphosite is located within the well described L27 N-terminal motif of SAP97 (22-24). It has been demonstrated that the L27 domain of SAP97 binds to Hrs, an endosomal ATPase that regulates protein sorting and has been implicated in vesicular endocytosis and exocytosis (22). In fact, the SAP97 (1-65) N-terminal domain, which is absent from PSD-95 and SAP102, has been shown to be responsible for subcellular membrane targeting of SAP97 in epithelial cells (24,25), suggesting that the N terminus of SAP97 may also be involved in neuronal targeting. Our data strengthen the role of the SAP97 N-terminal domain in the modulation of SAP97 localization, identifying inside the L27 motif a specific CaMKII phosphosite that is not conserved in PSD-95 and SAP102. However, our results do not exclude the possibility that other domains can affect subcellular targeting of SAP97 in neurons. Indeed, recent data showed that subcellular targeting of SAP97 to synaptic sites in primary dissociated hippocampal neurons is dependent on an alternatively spliced region between the SH3 and GK domains called the I3 region, a known protein 4.1 binding site (4). All these results suggest that SAP97 trafficking is a highly regulated mechanism of critical relevance for the correct delivery of excitatory glutamate receptors.
Together, our results confirm that CaMKII plays a central role in excitatory neurons and that activation of CaMKII, with its consequent autophosphorylation, is crucial not only at the synaptic site but also in extrasynaptic compartments to initiate the biochemical cascade that potentiates synaptic transmission. In particular, our data show that CaMKII-dependent SAP97-Ser-39 phosphorylation regulates the association of SAP97 with the postsynaptic complex, thus providing a fine molecular mechanism responsible for the synaptic delivery of SAP97-interacting proteins, i.e. ionotropic glutamate receptor subunits.
Prevalence of SARS-CoV-2 antibodies during phased access to vaccination: results from a population-based survey in New York City, September 2020–March 2021

Repeated serosurveys are an important tool for understanding trends in severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and vaccination. During 1 September 2020–20 March 2021, the NYC Health Department conducted a population-based SARS-CoV-2 antibody prevalence survey of 2096 NYC adults who either provided a blood specimen or self-reported the results of a previous antibody test. The serosurvey, the second in a series of surveys conducted by the NYC Health Department, aimed to estimate SARS-CoV-2 antibody prevalence across the city and for different groups at higher risk for adverse health outcomes. Weighted citywide prevalence was 23.5% overall (95% confidence interval (CI) 20.1–27.4) and increased from 19.2% (95% CI 14.7–24.6) before coronavirus disease 2019 vaccines were available to 31.3% (95% CI 24.5–39.0) during the early phases of vaccine roll-out. We found no differences in antibody prevalence by age, race/ethnicity, borough, education, marital status, sex, health insurance coverage, self-reported general health or neighbourhood poverty. These results show an overall increase in population-level seropositivity in NYC following the introduction of SARS-CoV-2 vaccines and highlight the importance of repeated serosurveys in understanding the pandemic's progression.

Repeated serological surveys of antibody prevalence can improve our understanding of the trajectory of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic by providing insight into the antibody response generated by prior and recent infections (including those that are asymptomatic), and vaccination against SARS-CoV-2. Antibodies from SARS-CoV-2 infections wane over time, and vaccination induces antibodies against the virus irrespective of prior infection, complicating the interpretation of prevalence surveys of SARS-CoV-2 antibodies and cumulative estimates of infection and immunity [1]. Nonetheless, repeated prevalence surveys of SARS-CoV-2 antibodies can provide information on the distribution of infection and vaccination within a population. They can also provide valuable details for determining risk factors for infection and seroconversion, informing public health preparedness plans for future SARS-CoV-2 epidemic waves and vaccine prioritisation strategies. For example, epidemiological surveillance and survey data in NYC and across the country have helped identify disparities in the burden of disease from coronavirus disease 2019 (COVID-19). During 1 June–9 October 2020, the NYC Department of Health and Mental Hygiene (DOHMH) conducted a population-based serology survey for SARS-CoV-2 (NYC SARS-CoV-2 antibody prevalence survey) in NYC and found that nearly one in three Black and Latino adult residents had evidence of SARS-CoV-2 infection by October 2020, confirming early estimates from serosurveys done using convenience-based sampling [2-4]. Estimates from the NYC SARS-CoV-2 antibody prevalence survey also showed differences in antibody prevalence by borough of residence, language of interview and neighbourhood poverty, highlighting multiple factors linked to increased exposure risk in a population [2].
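The weighted citywide prevalence quoted above (23.5%, 95% CI 20.1-27.4) was produced under a complex survey design analysed in SAS and SUDAAN, as the methods below describe. As a simplified, non-authoritative illustration of the underlying computation, the sketch below estimates a weighted proportion with a normal-approximation 95% CI; the serostatus values and weights are simulated, and the Kish effective-sample-size shortcut deliberately ignores stratification and clustering.

```python
# Simplified sketch of a design-weighted prevalence estimate with a 95% CI.
# The real analysis (SAS EG / SUDAAN) also accounts for stratification and
# clustering; this normal-approximation version omits those design effects.
import numpy as np

def weighted_prevalence(serostatus, weights, z=1.96):
    serostatus = np.asarray(serostatus, float)  # 1 = antibody positive, 0 = negative
    weights = np.asarray(weights, float)        # individual survey weights
    p = np.sum(weights * serostatus) / np.sum(weights)
    # Kish approximation to the effective sample size under unequal weighting.
    n_eff = np.sum(weights) ** 2 / np.sum(weights ** 2)
    se = np.sqrt(p * (1.0 - p) / n_eff)
    return p, (p - z * se, p + z * se)

rng = np.random.default_rng(0)
status = rng.binomial(1, 0.235, size=2096)          # simulated serostatus
w = rng.lognormal(mean=0.0, sigma=0.5, size=2096)   # simulated weights
p, (lo, hi) = weighted_prevalence(status, w)
print(f"weighted prevalence: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```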
Improved understanding of these inequities led to targeted public health campaigns aimed at improving testing and health service utilisation among populations at high risk for SARS-CoV-2 exposures and severe outcomes due to COVID-19 and helped guide the development of a phased approach to SARS-CoV-2 vaccination in the winter of 2020/2021 [5]. In NYC, efforts to provide detailed population-based seroprevalence estimates continued with a second citywide serological survey implemented by the NYC DOHMH from 1 September 2020 to 20 March 2021. In addition to estimates stratified by participant demographics, results from this serosurvey also provide representative temporal estimates of SARS-CoV-2 antibody prevalence to assess seroprevalence during the first months after COVID-19 vaccines became available in NYC. Participants were recruited from Healthy NYC, a population-representative, probability-based panel of approximately 13,000 NYC adults ≥18 years old, managed by the NYC DOHMH Division of Epidemiology. Panellists were recruited from address-based samples, supplemented with individuals who had completed other probability-based surveys and had agreed to be recontacted for future research. From August 2020 to February 2021, monthly, cross-sectional surveys were conducted with Healthy NYC panellists. For each survey, a stratified random sample of approximately 2000 panellists were invited by mail, email and/or text with up to five reminders for non-respondents. Surveys could be completed online, with alternative options for non-Internet users. Surveys were available in English, Spanish, Russian or Chinese (phone or mailed paper survey). Participants could either provide a self-reported antibody test result, a blood specimen for serological testing or both. Of the 7629 people who were invited to take the Healthy NYC COVID-19 surveys, 1935 agreed to be contacted to have their blood drawn; 1201 were reached by phone, 853 scheduled an appointment and 763 completed the blood draw. An additional 1333 provided self-reported serology results for a total of 2096 antibody test results. For each consenting participant, 5 ml of whole blood was collected and transported at 4°C to the NYC Public Health Laboratory, where serum was separated from the specimen and tested for SARS-CoV-2 immunoglobulin G (IgG) antibodies against spike protein using the DiaSorin LIAISON® SARS-CoV-2 S1/S2 IgG assay as previously described [2]. We generated univariate prevalence estimates and 95% confidence intervals (CI) for combined antibody test results and self-reported test results to estimate citywide and stratified prevalence. For those who provided both a blood specimen and a self-reported test result, we used only serosurvey specimens tested by DOHMH. SAS EG v7.15 and SUDAAN 11.0.1 were used to account for weights and complex survey design. The t tests were used to compare antibody prevalence by sex, age, race/ethnicity, borough of residence, place of birth, language of interview, neighbourhood poverty and health insurance status. Two-sided P values ≤0.05 were considered statistically significant. Data were grouped into three time periods corresponding to COVID-19 vaccine eligibility groups defined by the phased COVID-19 vaccine rollout in New York State (NYS): no access, limited access and expanded access [5]. Respondents were classified as having no access to the vaccine if they gave a blood specimen or reported a previous antibody test before 14 December 2020.
Once a vaccine became available, vaccine priority groups were established by the New York State Department of Health based on exposure risk, and early priority for vaccine administration was given to front-line health care staff, high risk long-term care facility patients and those working in essential services [5]. Respondents who provided a blood specimen or reported a previous antibody test between 14 December 2020 and 1 February 2021 were considered to have limited access to the vaccine, since access was limited to the professional categories outlined above, along with New Yorkers aged 75 years and older. Respondents who provided a blood specimen or reported a previous antibody test between 2 February 2021 and the last day of specimen collection, 20 March 2021, were classified as having expanded access to the vaccine. However, data collection was completed before vaccine eligibility expanded to all adults living or working in NYC. Individual weights were developed to account for unequal probability of selection, nonresponse and potential overlap in sampling frames. These weights were further trimmed and raked using population control totals from the 2015-2019 and 2019 American Community Survey. In addition, three period-specific weights were generated to make the vaccine period-specific estimates representative of the NYC non-institutionalised adult population by repeating the weighting method. The NYC DOHMH Institutional Review Board determined this activity to be public health surveillance. Written consent was obtained from participants before specimen collection. From combined self-reported data and blood specimens collected between 1 September 2020 and 20 March 2021, we estimate that at least 23% of NYC residents had antibodies to SARS-CoV-2. This is similar to the prevalence (23.4%) we reported for the first NYC SARS-CoV-2 antibody prevalence survey conducted during June through October 2020 [2]. Compared with the 2020 study, we found a lower proportion of Black and Latino residents with antibodies to SARS-CoV-2, while more White New Yorkers had antibodies. Our temporal estimates of seropositivity show a steady increase in citywide seroprevalence from September 2020 to March 2021. Two changes during the pandemic contributed to the observed increase in citywide seropositivity over the survey period. First, the survey implementation timeline roughly corresponds to the introduction of COVID-19 vaccines. By the time the last specimen was collected, almost 70,000 doses of COVID-19 vaccine were being administered daily in NYC, with more than 1.6 million doses administered since December 2020 [6]. Additionally, the second NYC SARS-CoV-2 antibody prevalence survey was implemented during a time when NYC, like much of the USA, was experiencing heightened COVID-19 transmission. During the survey implementation period alone, the city recorded more than 450,000 new COVID-19 cases [6]. Given the high prevalence of COVID-19 transmission, along with the introduction of vaccines which trigger immune responses detectable through the assay used for this serosurvey, we expected to see an increase in citywide seropositivity for the full survey period when compared to the first survey implemented in mid-2020. However, no increase was found when comparing combined specimens collected during the first and second rounds of NYC's SARS-CoV-2 antibody prevalence survey, suggesting a potential waning in population-level coverage of antibodies acquired from natural infection. Estimates prepared by the U.S.
Centers for Disease Control and Prevention (CDC) using residual blood specimens collected from participating commercial laboratories suggest similar patterns across New York State. In August 2020, the CDC estimated that approximately 23% of New York State residents had antibodies that target the nucleocapsid proteins of the SARS-CoV-2 virus, indicating likely recent infection. By November, this estimate fell to 13% [7]. A similar assessment of spike and nucleocapsid antibodies found in donated blood showed that while nationally seropositivity for any SARS-CoV-2 antibodies increased between December 2020 and June 2021, this increase was driven by vaccination [8]. Evidence of waning immunity from other population-based serological surveys is limited. While several other serosurveys were implemented in the spring and summer of 2020 in NYC and found results similar to the first NYC SARS-CoV-2 antibody prevalence survey [3,4], this is the first report of seroprevalence estimates from the winter of 2020-2021 in NYC and one of the few serosurveys, globally, to report on repeated population-based testing of residents [9]. While these seroprevalence estimates are helpful in understanding the potential susceptibility of NYC residents to future SARS-CoV-2 infection, there are some limitations. The assay used for this serosurvey only identifies antibodies to the spike (S) protein and does not differentiate between antibodies developed in response to natural infection and those developed following vaccination. The assay also does not provide information about the neutralising capabilities of detected antibodies. Additionally, this serosurvey did not collect the date of the self-reported antibody tests, and it is possible that tests took place prior to September 2020. Seroprevalence estimates based on blood specimens alone are higher than estimates using the combined sample, around 31%, but the blood specimen sample is too small to provide reliable estimates for subgroups or vaccine access periods. Although interpretation is limited by the small sample size, this higher prevalence may be a function of a smaller sample that includes a higher proportion of vaccinated individuals: 80% of the blood specimens were drawn after a vaccine became available, compared to 52% of self-reports received before vaccine availability. Finally, while this serosurvey provides a snapshot of population-level immunity to SARS-CoV-2 infection in the winter of 2020-2021, the implementation period included only the beginning of a major vaccination campaign. As a result, we expect the seropositivity in NYC after March 2021 to be higher than that observed during the survey period, due primarily to vaccination. A third serosurvey has been implemented from April to October 2021 and is expected to help illuminate the extent of population-level seroprevalence resulting from NYC's vaccination campaign. Finally, we caution against over-interpretation of these temporal trends in seropositivity, which show, in contrast to the first serosurvey [2], similar seropositivity between Black, Latino and White NYC residents. These findings likely reflect the combination of inequities in SARS-CoV-2 infection as well as inequities in vaccination [6,10]. In late January 2021, Black and Latino New Yorkers accounted for only 11% and 15%, respectively, of COVID-19 vaccinations while accounting for 24% and 29% of the city's population.
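The vaccination figures just quoted can be summarized as a representation ratio (share of vaccinations divided by share of population, with 1.0 indicating parity). The short sketch below simply reproduces that arithmetic; the variable names are illustrative.

```python
# Representation ratio: share of vaccinations relative to share of population.
# Values below 1.0 indicate under-representation among vaccine recipients.
late_jan_2021 = {
    "Black New Yorkers": (0.11, 0.24),   # (share of vaccinations, share of population)
    "Latino New Yorkers": (0.15, 0.29),
}
for group, (vax_share, pop_share) in late_jan_2021.items():
    print(f"{group}: representation ratio = {vax_share / pop_share:.2f}")
# Output: 0.46 for Black and 0.52 for Latino New Yorkers, both well below parity.
```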
Gaps in vaccination rates between White and Latino New Yorkers have diminished since data were collected for this survey, but Black New Yorkers continue to be vaccinated at lower rates and have experienced a higher burden of disease, hospitalisation and death in recent months as a result [6]. Repeated surveys of SARS-CoV-2 antibody prevalence are important tools in the city's and the nation's response to the ongoing COVID-19 pandemic. Our understanding of these inequities and our ability to address them can be further advanced by use of multiple assays that distinguish between population-level prevalence of antibodies developed as a response to immunisation and recent natural infection. Continued temporal monitoring will be crucial to ensuring that the public health response, including vaccine distribution plans and prioritisation strategies, addresses issues of inequity.
The whole-genome expression analysis of peripheral blood mononuclear cells from aspirin sensitive asthmatics versus aspirin tolerant patients and healthy donors after in vitro aspirin challenge

Background: Up to 30% of adults with severe asthma are hypersensitive to aspirin, and no unambiguous theory exists which provides a satisfactory explanation for the occurrence of aspirin-induced asthma (AIA) in some asthmatic patients. Therefore, the aim of this study was to compare the AIA expression profile against aspirin tolerant asthma (ATA) and healthy volunteer (HV) profiles in peripheral blood mononuclear cells (PBMCs) after in vitro aspirin challenge in a Caucasian population. Methods: PBMCs were separated from the blood of three groups of subjects (11 AIA, 7 ATA and 15 HV) and then stimulated by either 2 μM lysine aspirin or 20 μM lysine as a control. Subsequently, RNA was isolated, transcribed into cDNA and subjected to microarray and qPCR studies. Simultaneously, protein was extracted from PBMCs and used in further immunoblotting analysis. Results: The validation of results at the mRNA level showed only three genes whose expression was significantly altered between the compared groups. mRNA expression of CNPY3 in PBMCs in AIA was significantly lower (-0.41 ± 2.67) than in HV (1.04 ± 2.69) (p = 0.02); mRNA expression of FOSL1 in PBMCs in AIA was also significantly decreased (-0.66 ± 2.97) as opposed to HV (0.31 ± 4.83) (p = 0.02), while mRNA expression of ERAS in PBMCs was increased (1.15 ± 0.23) in AIA in comparison to HV (-1.32 ± 0.41) (p = 0.03). At the protein level the changed expression of one protein was confirmed: protein expression of FOSL1 in PBMCs in AIA (-0.86 ± 0.08) was significantly lower than in both ATA (0.39 ± 0.42) (p = 0.046) and HV (0.9 ± 0.27) (p = 0.007). Conclusions: This pilot study implies a positive association between CNPY3, ERAS, FOSL1 and aspirin-intolerant asthma, suggesting that these findings would be useful for further investigations of the NSAID mechanism. Electronic supplementary material: The online version of this article (doi:10.1186/s12931-015-0305-4) contains supplementary material, which is available to authorized users.

Background

Aspirin-exacerbated respiratory disease (AERD) is a distinct asthma phenotype mainly characterized by chronic eosinophilic inflammation of the upper and lower airways with symptoms that are exacerbated by aspirin and other nonsteroidal anti-inflammatory drugs (NSAIDs) [1-4]. It is estimated that 0.6-2.5% of the total population [5,6], 5-10% of asthmatic adults [7-9], almost 30% of adults with severe asthma [10] and about 40% of asthmatic adults with refractory chronic hyperplastic sinusitis [11] are hypersensitive to aspirin (ASA). Notably, more than 15% of asthmatic patients are quite unaware of suffering from this intolerance [12], and only provocation tests may reveal AIA. A higher incidence of AIA has also been reported in women, in whom symptoms start earlier and the disease course is more rapid and severe [13]. Although the exact pathomechanism of AIA still remains unknown, pathognomonic reactions to COX-1 active drugs can be attenuated by inhibitors of 5-lipoxygenase (5-LOX) and the type 1 receptor for cysteinyl leukotrienes (cysLTR1) [14], and by drugs that block mast cell (MC) activation [15,16]. Moreover, inhaled prostaglandin E2 (PGE2) inhibits aspirin-induced bronchoconstriction and cysLT production in subjects with AERD [17].
PGE2 is formed from COX-dependent conversion of arachidonic acid to PGH2, which is metabolized to PGE2 by three PGE2 synthases (PGESs) [18]: cytosolic PGES and the microsomal PGESs (mPGES-1 and mPGES-2) [19,20]. Absence of mPGES-1 impairs the up-regulation of PGE2 production in mice [21]. Additionally, PGES−/− mice develop marked eosinophil-dominated bronchovascular cellular infiltrates with lesser numbers of neutrophils [22,23], and lysine aspirin (Lys-ASA) challenge additively caused release of two markers of MC activation, histamine and mMCP-1, and of cysLTs [21]. The marked depletion of residual PGE2 by Lys-ASA in the PGES−/− mice suggests that mPGES-1 sustains PGE2 generation in the face of COX-1 inhibition [21]. It has also been demonstrated that platelet-adherent eosinophils and neutrophils are more frequent in the peripheral blood and sinonasal tissues from patients with AERD than in samples from aspirin tolerant controls [24]. Adherence to platelets primes granulocyte integrin function [25] and chemotaxis [26] and increases susceptibility to inflammation [21]. It is probable that TP receptors are essential for platelet-adherent granulocytes to generate cysLTs by facilitating crosstalk between platelets and granulocytes [21]. The residual local PGE2 derives principally from COX-1, which may explain why only COX-1-active drugs provoke clinical reactions [27]. It is also known that the production of 15-hydroxyeicosatetraenoic acid (15-HETE) in AIA patients is 3.6-fold higher than in ATA patients [28]. The substantial source of 15-HETE in this reaction seems to be 15-lipoxygenase, which is controlled by COX-1 [29]. Thus, inhibition of COX-1 and dysregulation of PGE2 production by aspirin result in activation of 15-LOX and 15-HETE production [28]. Overproduction of 15-HETE in aspirin sensitive asthmatics contributes, inter alia, to the induction of mucous glycoprotein secretion by human airways [30] and contraction of bronchial smooth muscle [31]. On the basis of these results, an in vitro test (the ASPItest) is available that measures ASA-induced 15-HETE in peripheral blood. The ASPItest does not require special expertise or equipment and seems to be highly sensitive and specific for confirming a history of aspirin sensitivity in asthmatic patients [29]. So far, the literature also contains much data concerning genetic mechanisms, suggesting the involvement of various candidate genes in the pathogenesis of AIA. Unfortunately, the majority of these results are not consistent between various populations, indicating environmental factors which may predispose to the development of AIA. Moreover, the likelihood that AIA is acquired in adulthood implies potential epigenetic modifications of the relevant mediator systems. Hence, it has been demonstrated that the PGE2 synthase gene in nasal polyps from subjects with AERD is hypermethylated in comparison to nasal polyps from aspirin-tolerant controls [32]. The aim of this study was to explore the possible differences among the aspirin-induced asthma (AIA), aspirin tolerant asthma (ATA) and healthy volunteer (HV) genetic profiles in PBMCs in a Caucasian population by means of a whole genome scan after in vitro aspirin challenge.

Study subjects

Subjects of Caucasian origin were recruited from the Department of Internal Medicine, Asthma and Allergy, Medical University of Lodz, Poland. The diagnosis of bronchial asthma was based on a patient's history, physical examination and pulmonary function tests according to Global Initiative for Asthma (GINA) 2014 guidelines.
Asthmatic patients were included in the study if they met the following criteria: a clinical diagnosis of asthma confirmed by bronchial hyperreactivity assessed by a positive bronchodilator or methacholine test, the incidence of asthmatic attacks, and no other respiratory disorders. Patients were asked to refrain from short acting bronchodilators for at least six hours before challenge. Aspirin-sensitive asthmatic subjects were included in the study if they had had a positive oral provocation test with aspirin during the last 6 months, performed outside the context of the study. Patients with aspirin-tolerant asthma and healthy subjects were involved in the project if they had a negative history of aspirin or other NSAID hypersensitivity and had been exposed to these medicaments during at least the last six months without any adverse events before the study. The clinical profiles of asthma patients and healthy control subjects are summarized in Table 1. The study protocol was approved by the Ethics Committee of the Medical University of Lodz (permission no. RNN/107/08/KE, RNN/103/11/KE) and written consent was obtained from every subject prior to the study.

PBMC isolation and incubation with lysine aspirin/lysine

Peripheral venous blood was collected before aspirin challenge. PBMCs were separated using Histopaque® 1077 solution (Sigma Aldrich, Saint Louis, MO) according to the manufacturer's protocol and washed three times in PBS. Afterwards, the PBMCs were incubated with either lysine aspirin (2 μM) or lysine (20 μM) for 30 min at 37°C. Incubation conditions for the cells were selected on the basis of previous, unpublished pilot studies. PBMC counts were not statistically different between groups before and after incubation with lysine aspirin or lysine.

Microarray procedures

Microarray flip-dye experiments were performed with the Human OneArray® Whole Genome Microarray v5.1 (Phalanx Biotech, San Diego, CA), containing 30,255 oligonucleotide probes (29,187 human genome probes and 1,088 experimental control probes), which was used for gene expression analysis. Each sample was hybridized against Universal Human Reference RNA (Stratagene, La Jolla, CA, USA), which provided a common denominator for accurate and reproducible comparisons of gene expression data. Synthesis of target cDNA probes and hybridization were performed according to protocol. The preparation of a slide for hybridization included a pre-wash in ethanol and pre-hybridization according to the manufacturer's protocol. Hybridization was performed in a humidity chamber filled with 2× SSPE buffer at 42°C for 16-18 h. Post-hybridization washes were performed with the following buffers: 1× SSPE/0.03% SDS (2 min, 42°C), 1× SSPE (2 min, RT) and 0.1× SSPE (rinsed several times, RT).

qPCR for candidate genes

cDNA was subjected to qPCR using the kits of primers and probes designed for the selected genes and GAPDH as a qPCR reference (Life Technologies, Carlsbad, CA). Assay IDs and context sequences used in this study are shown in Table 2. Each sample was measured in duplicate using a TaqMan 7900 analyzer (Life Technologies, Carlsbad, CA). Using the 2^-ΔΔCt method, data are presented as a fold change in gene expression normalized to the endogenous reference gene GAPDH and relative to a control (lysine-treated PBMCs). The fold change of mRNA expression in each patient was calculated by comparing RQ (2^-ΔΔCt) values.
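The 2^-ΔΔCt calculation described above can be sketched as follows. The Ct values are hypothetical placeholders; GAPDH is the endogenous reference and lysine-treated PBMCs are the calibrator, as in the study.

```python
# Sketch of the 2^-ddCt relative quantification used above.
# Ct values are hypothetical; GAPDH is the reference gene and
# lysine-treated PBMCs serve as the calibrator condition.
def relative_quantity(ct_gene_treated, ct_gapdh_treated,
                      ct_gene_control, ct_gapdh_control):
    d_ct_treated = ct_gene_treated - ct_gapdh_treated  # normalize to GAPDH
    d_ct_control = ct_gene_control - ct_gapdh_control
    dd_ct = d_ct_treated - d_ct_control                # relative to calibrator
    return 2 ** (-dd_ct)

# Example: one gene in lysine-aspirin-treated vs. lysine-treated PBMCs.
rq = relative_quantity(ct_gene_treated=27.1, ct_gapdh_treated=18.4,
                       ct_gene_control=26.2, ct_gapdh_control=18.5)
print(f"RQ (2^-ddCt) = {rq:.2f}")  # values > 1 indicate up-regulation after aspirin
```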
Protein isolation and immunoblotting analysis

Total protein was isolated utilizing RIPA lysis buffer (Sigma, Saint Louis, MO) with the addition of Protease Inhibitor Cocktail (Sigma, Saint Louis, MO) according to the manufacturer's protocol and analyzed by immunoblotting to detect selected proteins (Table 3), using 10 μg total protein per sample. A detailed immunoblotting protocol is provided in Additional file 1.

Statistical analysis

Microarray studies: For microarray studies, detection of p values and normalization were performed for the extracted values. Statistical significance of the microarray data was calculated by Student's t test (standard two-sample t-statistics with pooled variance). Additional statistical analysis was performed using the false discovery rate (FDR) to correct for multiple comparisons in multiple hypothesis testing. The FDR of a test was defined as the expected proportion of false positives among the declared significant results [33,34], as it is a more convenient scale to work on than the p-value scale [35]; it is not too conservative for microarray studies and does not lead to low sensitivity [35]. For the diagnostic value of gene expression in the discrimination of AIA from ATA and healthy subjects, we selected candidate genes that satisfied the criteria of p < 0.05 and exhibited a change in expression greater than a twofold difference between the two chosen groups. For microarray analysis, background-corrected values for each probe on the oligonucleotide array were extracted using MeV software (TM4, Boston, MA).

qPCR and immunoblotting analysis: For qPCR and immunoblotting results, the distribution of the log2 data and the equality of variances were checked by the Shapiro-Wilk and Levene's tests, respectively. The results are presented as mean ± SEM when data in groups were normally distributed; differences between groups were examined for statistical significance by ANOVA with the appropriate post-hoc test. If the Kruskal-Wallis test (with multiple comparisons), the non-parametric equivalent of ANOVA, was used, the results are presented as median ± range. A p value < 0.05 was considered statistically significant. The data from the study were analyzed utilizing the STATISTICA software package (Statsoft, Tulsa, OK).

Power analysis

Sample size was calculated based on the number of aspirin sensitive patients counted per the total population of Poland [6]. Based on the Daniel formula for calculating sample size [29], this gave a calculated AIA sample size of approximately 9 patients. However, a higher number was targeted in qPCR in order to account for possible exclusions, dropouts and the need to carry out subgroup analyses.

Results

Comparison of gene expression profiles between AIA versus ATA and AIA versus healthy volunteers

A gene expression microarray consisting of 30,255 featured oligonucleotide probes was applied to cDNA samples obtained from AIA (n = 5), ATA (n = 3) and healthy volunteers (n = 4). To evaluate the overall difference in gene expression levels in PBMCs among AIA, ATA and healthy volunteers, we examined the gene expression levels using volcano plots (Figs. 1 and 2). Volcano plots of p values against fold change values for each gene revealed that the expression levels were slightly different between AIA versus ATA and AIA versus healthy subjects.
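The candidate-gene selection described in the statistical analysis above (per-gene t tests with pooled variance, an FDR adjustment for multiple testing, and a p < 0.05 plus twofold-change filter) can be illustrated with the following sketch. The expression matrix is simulated, and the Benjamini-Hochberg procedure from statsmodels is used as one common FDR implementation; neither reproduces the study's actual data or software (MeV, STATISTICA).

```python
# Sketch of the microarray candidate-gene selection pipeline described above:
# per-gene pooled-variance t tests, FDR adjustment, and a twofold-change filter.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_genes = 1000
aia = rng.normal(0.0, 1.0, size=(n_genes, 5))  # simulated log2 expression, AIA (n = 5)
ata = rng.normal(0.0, 1.0, size=(n_genes, 3))  # simulated log2 expression, ATA (n = 3)

# Two-sample t test with pooled variance, computed per gene (per row).
t_stat, p_values = ttest_ind(aia, ata, axis=1, equal_var=True)

# Benjamini-Hochberg false discovery rate correction across all genes.
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

# Fold change on the log2 scale: a difference of group means > 1 means > twofold.
log2_fold_change = aia.mean(axis=1) - ata.mean(axis=1)

candidates = np.where((p_values < 0.05) & (np.abs(log2_fold_change) > 1.0))[0]
print(f"{candidates.size} candidate genes pass p < 0.05 and |fold change| > 2")
print(f"{reject.sum()} genes remain significant after FDR control at 0.05")
```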
We identified 325 genes with significantly different expression between AIA and ATA (253 genes that showed a significant increase in gene expression and 72 genes that showed a significant decrease) and 376 genes with significantly changed expression between AIA and healthy volunteers: 196 genes turned out to be significantly increased and 180 genes showed a statistically significant decrease in expression (Figs. 3 and 4). For the next step of the analysis, we selected the genes DPP9, RXRG and FOSL1, with a p value of <0.05 and a mean difference in fold change value >2 between the two chosen groups (Fig. 5). Differences in gene expression obtained in the whole genome scan using cDNA microarrays are shown in Table 4. The role of the selected genes in inflammation or asthma had been confirmed in the literature before. The upregulated and downregulated genes were clearly classified by the hierarchical clustering method.

Verification of gene expression with quantitative measurement of mRNA using qPCR

We validated the three previously selected genes (DPP9, RXRG and FOSL1) using qPCR to measure their mRNA levels in PBMCs obtained from AIA (n = 11), ATA (n = 7) and healthy volunteers (n = 15). qPCR was therefore analyzed for the original microarray patient group, and additional patients were added to form a confirmatory cohort.

Discussion

Considering the genetic background of AIA, more than 100 genetic association studies have attempted to discover the numerous genetic variants related to the development of AIA. However, the majority of these results have not been replicated in other, independent studies. Moreover, to the best of our knowledge, only two published papers based on both a microarray study and qPCR confirmation reveal the involvement of individual genes in the pathogenesis of AIA, and neither finding was confirmed in other studies and populations. The first, whole-genome study [36] demonstrated that galectin-10 mRNA is overexpressed in peripheral blood cells of AIA compared to ATA patients and controls. Galectin-10 had been previously implicated in mucosal inflammatory processes including cell adhesion [37], chemoattraction [38] and cell activation [39]. The second study [40] showed two genes, CNKSR3 and SPTBN2, whose expression in PBMCs differentiates between AIA and ATA, but neither CNKSR3 nor SPTBN2 has a described relationship with asthma and aspirin. As in previous whole genome studies, the main aim of our investigation was to compare the AIA genetic profile against ATA and HV in PBMCs by microarray studies and then confirm it at the protein level. The verification on two molecular levels was necessary because mRNA levels cannot be utilized as surrogates for the corresponding protein levels. Although RNAs are primordial molecules, proteins are the molecules of life, and it is estimated that less than 40% of cellular protein levels can be predicted from mRNA measurements [41]. The best known, presumable reasons for the poor correlations reported in the literature between the levels of mRNA and protein are: (a) the many complicated and varied post-transcriptional mechanisms involved in turning mRNA into protein (the cell can control the levels of a gene at the transcriptional and/or translational level [42]); (b) differences in the half-lives of proteins as the result of varied protein synthesis and degradation depending on a number of different conditions; and (c) a significant amount of error in mRNA/protein studies [43,44].
Intriguingly, genes with certain combinations of mRNA and protein half-lives share common functions, indicating that they evolved under similar constraints, such as an abrupt response to stimulus [41,45-47]. Most mRNAs and especially proteins are stable unless genes need to respond quickly to a stimulus [41]. However, measurements performed at the mRNA and protein levels are complementary, and both are necessary for a complete understanding of how the cell works [48]. On the basis of the obtained results, we identified three genes whose expression profiles significantly differed between AIA vs. ATA and/or AIA vs. healthy subjects in PBMCs of a Caucasian population. We demonstrated a significant decrease in expression of FOSL1 (encoding FRA1) at both the mRNA and protein level in patients diagnosed with AIA in comparison to ATA and controls. FOSL1 is a part of the AP-1 transcription factor that regulates target gene expression in response to various pro-oxidants, inflammatory cytokines including TGF-β1 [49,50], environmental toxicants, carcinogens and pathogens. These gene products mediate oxidative stress and inflammatory responses, as well as cell growth and tumorigenesis [51]. Additionally, the TGF-β1 promoter (509C/T) polymorphism has been reported to contribute to the development of AIA with rhinosinusitis by increasing TGF-β production in the nasal mucosa and/or polyp tissues of patients with AIA [52]. Tang et al. showed that aspirin-treated bone marrow cells have significantly improved immunomodulatory function, as indicated by upregulation of regulatory T cells and downregulation of Th17 cells via, inter alia, the TGF-β1 pathway [53]. Moreover, FOSL1 regulates the expression of genes controlling tissue/cell remodeling, mainly at the transcriptional level [54-56]. Rajasekaran et al. [57] have recently shown that FRA1−/− mice are more susceptible than wild-type mice to bleomycin-induced fibrosis, suggesting that this transcription factor is involved in pulmonary protection. Supporting this hypothesis, downregulation of FOSL1 was also observed in malignant human bronchial epithelial cells [55] and non-small-cell lung cancer [58] compared to normal bronchial epithelium. Comparison of the genetic profiles between AIA and healthy controls also demonstrated significantly increased expression of ERAS in AIA. So far, the described role of this gene is restricted to the tumor-like growth properties of embryonic stem cells [59] and chemotherapy resistance [60]. However, ERAS belongs to the GTPase Ras protein family, which is engaged in airway smooth muscle growth and bronchoconstriction of airways in response to stimuli [61]. Among all proteins that belong to the Ras superfamily, Rho kinase has emerged as a potential target for the treatment of airway hyperresponsiveness in asthma [62]. Additionally, arachidonic acid (AA) can activate Rho kinase by binding to the C-terminal part of the coiled-coil domain of Rho kinase, which acts as an auto-inhibitory domain [63-65]. Rho kinase may also be involved in eotaxin and cytokine (IL-5, IL-13) production [66] and in secretion of matrix metalloproteinase-9 (MMP-9), tightly associated with fibrosis in asthma and chronic obstructive pulmonary disease (COPD) [67,68]. It is worth mentioning that the extent of Ras activation in T cells appears to drive Th2-dependent eosinophilic airway inflammation and allergen-induced airway hyperresponsiveness [69].

Fig. 6. mRNA expression levels of CNPY3 (a), FOSL1 (b) and ERAS (c) genes in PBMCs measured by qPCR between AIA (n = 11), ATA (n = 7) and healthy volunteers (n = 15). PBMCs were stimulated by lysine-aspirin or lysine as a control. The gene expression presented was analyzed utilizing Real-Time PCR. *, statistically significant (p < 0.05).

Much evidence also indicates that Ras GTPases appear to regulate reactive oxygen species (ROS) production and that oxidants function as effector molecules for the small GTPases [70-73]. Rac1 has been demonstrated to act upstream of AA-metabolizing enzymes, such as PLA2 [74,75], 5-LOX [76-78] and COX-2 [79], and some reports thus show that AA metabolism modulates NADPH oxidase and mitochondrial ROS production [80]. The misregulation of the redox signaling of Ras with its downstream cascades has also been linked to various disorders of the immune system [81]. According to Wells et al. [82], the Ras-dependent Raf-MEK1/2-ERK1/2 pathway takes part in postnatal modulation of a host's defenses and the inflammation of T lymphocytes. In a mouse allergic asthma model, the activation of Ras in T cells controls the development of Th2-dependent eosinophilic airway inflammation and airway hyperresponsiveness. Specific inhibitors focusing on Ras-mediated signaling pathways would thus be helpful in the treatment of asthma [69]. Although ERAS was one of the genes indicating an association with aspirin-induced asthma in our study, only limited data support its role. Nevertheless, Park et al. [83] have recently shown a strong association between two SNPs (14444 T > G and 41170 C > G) within RAB1A (a Ras protein subfamily member) and the aspirin-induced decrease in FEV1. The authors also indicate that genetic alteration of this member of the RAS oncogene family may be related to the development of asthma and ASA hypersensitivity through the modulation of intracellular protein trafficking. Multiple points of overproduction or underproduction of critical inflammatory mediators may be determined by metabolism through the Ras family GTPase pathway.

Fig. 7. Box plots of mRNA expression levels of ALOX5 (a), ALOX15 (b), DOCK9 (c), MARVELD1 (d), PARVG (e), TLR7 (f), BMP2 (g), CSF1 (h), CXCL11 (i), DPP9 (j), GAB3 (k) and TRIP6 (l) genes in PBMCs measured by qPCR between AIA (n = 11), ATA (n = 7) and healthy volunteers (n = 15). PBMCs were stimulated by lysine-aspirin or lysine as a control. The gene expression presented was analyzed utilizing Real-Time PCR.

The release of specific granules from platelets, eosinophils, and neutrophils depends on the phosphorylation of Ras family proteins [81], but the detailed mechanism associated with aspirin-induced asthma needs to be evaluated. The significantly reduced expression of CNPY3 at the mRNA level in AIA in comparison to healthy controls may indicate a profound defect in stimulus responsiveness. CNPY3 is an endoplasmic reticulum-resident chaperone that is required for the maturation/glucosylation and surface trafficking of TLR4 [84]. Activated TLR4 can directly or indirectly affect the function of regulatory T cells, thus influencing the Th1/Th2 imbalance and reducing inflammatory responses [85-87]. It is well known that TLR4 is an important component of the innate immune response to lipopolysaccharide (LPS) of gram-negative bacteria and the fusion protein of respiratory syncytial virus (RSV) [88]. Accordingly, CNPY3 knockdown led to a significant defect in RSV and LPS responsiveness and limited innate immune responses [84,89]. By contrast, patients with AIA suffer from virus infections much more frequently [90], and RSV is probably one of the triggers predisposing to aspirin hypersensitivity [91].
TLR4 is activated following binding of LPS, and a series of downstream phosphorylation and dephosphorylation events eventually leads to the activation of transcription factors that regulate inflammatory factors including interferon and tumor necrosis factor; it also induces antigen-presenting cell maturation and promotes a Th0 to Th1 shift [85,92]. According to Steinke et al. [93], high levels of IFN-γ distinguish AERD (aspirin-exacerbated respiratory disease) from aspirin tolerant asthma and underlie the robust constitutive and aspirin-induced secretion of cysLTs that characterize this disorder, as AERD is associated with eosinophils maturing locally in a high interferon (IFN)-γ environment. To better understand the contribution of TLR4 to aspirin-induced asthma pathogenesis, additional studies are needed to determine the contribution of CNPY3 in aspirin-induced asthma. Our data also demonstrate that similar microarray scores for different genes do not necessarily mean that similar qPCR scores will be obtained. This finding presumably reflects the different hybridization kinetics of the probe sets for each gene. Furthermore, varied priming methods and increased distance between the locations of the PCR primers and microarray probes on a given gene can also affect the results of qPCR and microarray experiments. In addition, data normalization fundamentally differs between microarray analysis and qPCR, the former requiring global normalization, while the latter generally utilizes the expression of one reference gene against which all other gene expression is calibrated. Therefore, on the basis of the qPCR data that we obtained, it is generally not feasible to predict the true expression level of one gene based on the microarray expression score of another.

Fig. 8 (caption fragment). ...(e) and TRIP6 (f) genes in PBMCs measured by qPCR between AIA (n = 11), ATA (n = 7) and healthy volunteers (n = 15). Data are presented as the fold change of optical density (OD) compared with the lysine-treated cells.

Conclusions

To sum up, altered expression of three genes, ERAS, CNPY3 and FOSL1, has been reported at the mRNA level in PBMCs of Caucasian aspirin-sensitive asthmatics as opposed to healthy volunteers. In the case of FOSL1, this difference was also confirmed at the protein level, both between AIA vs. ATA and AIA vs. HV. To our knowledge, this is the first whole-genome study for AIA that points out the positive association between ERAS, CNPY3, FOSL1 and NSAID metabolism. Moreover, some previous studies have indicated the participation of these genes in pathways significant for the pathomechanism of AIA, resulting in tissue/cell remodeling and airway hyperresponsiveness. Although our study included a small number of patients, it allowed statistical analysis to be performed. Undoubtedly, further studies in a larger number of cases and in other ethnicities are necessary to establish an exact functional link between the detected alterations in expression of CNPY3, ERAS and FOSL1 and the pathology of AIA.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

Dr Wieczfinska takes responsibility for the integrity of the data and the accuracy of the data analysis. Dr Kacprzak and Dr Wieczfinska designed the study and contributed to clinical data collection, RT-PCR analysis, Western blot analysis, data collection from all methods, statistical analysis and writing of the manuscript. Dr Pospiech performed the microarray experiment and contributed to the statistical analysis of the microarray data.
Dr Sokolowska designed the study and contributed to clinical data collection. Dr Nowakowska contributed to performing the microarray experiment. Dr Pniewska contributed to the Western blot analysis. Prof. Bednarek contributed to the statistical analysis of the microarray data. Dr Kuprys-Lipinska contributed to the recruitment of patients and clinical data collection. Prof. Kuna contributed to the recruitment of patients and clinical data collection. Prof. Pawliczak designed the study and contributed to clinical data collection and critical review of the manuscript. All authors read and approved the final manuscript.
A Case of Fentanyl Intoxication and Delayed Hypoxic Leukoencephalopathy Caused by Incidental Use of Fentanyl Patch in a Healthy Elderly Man

Background: Fentanyl intoxication has been reported occasionally since the fentanyl patch became available. Delayed hypoxic leukoencephalopathy is recognized as a complication of hypoxic events; however, its neuropsychiatric symptoms can be delayed. Case Report: An 85-year-old male presented with mental deterioration after attaching a fentanyl patch. A hypoxic condition was detected. Neurological examination revealed semi-coma, pupil miosis, a sluggish light reflex, and no response to deep pain. He recovered gradually but showed delayed neuropsychiatric symptoms 40 days after the hypoxic event. These symptoms improved steadily, and he could talk and walk after 3 months. Conclusion: The case of acute fentanyl intoxication and delayed hypoxic leukoencephalopathy reported here was caused by incidental use of a fentanyl patch in a healthy elderly man. When patients are admitted with mental deterioration, respiratory depression, and pupil miosis, physicians should consider opioid intoxication, including the fentanyl patch. In addition, it is important to understand the clinical course of leukoencephalopathy as a delayed complication after recovery from acute fentanyl intoxication. J Neurocrit Care 2015;8(1):35-38

INTRODUCTION

Fentanyl is a selective µ-receptor agonist and a synthetic narcotic analgesic with a strong analgesic effect. Due to its rapid onset and short duration of action in the body, it has been used to control acute pain. 1 The concentration of a fentanyl patch in the body generally levels off after 12-24 hours and remains relatively constant for up to 3 days through transdermal passive diffusion after adherence to the skin. 2 Because of these pharmacokinetic properties, the transdermal fentanyl patch should be used to control chronic pain in patients who do not respond to less potent analgesic drugs. However, the fentanyl patch has many side effects, such as deterioration of the central nervous system, respiratory depression, hypothermia, cold wet skin, flaccid muscles, bradycardia, and hypotension. 3 Fatal fentanyl intoxication has been reported occasionally since the fentanyl patch became available, because its noninvasiveness and simple method of administration result in abuse for managing acute pain, such as muscle pains and strains. Lethal side effects, such as respiratory depression and hypotension, have also sometimes been reported with use of the fentanyl patch for controlling postoperative pain. 4 Delayed hypoxic leukoencephalopathy is a rare disease caused by carbon monoxide poisoning, inhalation of toxic gas, cardiac arrest, and overdose of opiates or benzodiazepines. 5,6 Symptoms of delayed hypoxic leukoencephalopathy, such as cognitive impairment, gait disorder, parkinsonism, and akinetic mutism, appear at different times, from a few days to weeks after the patient regains consciousness from hypoxia. 7 We report a case of delayed hypoxic leukoencephalopathy following fentanyl patch intoxication.

CASE REPORT

An 85-year-old male patient was transferred to the emergency room by ambulance due to reduced consciousness. Rescue personnel ventilated the patient artificially with a bag-valve-mask connected to a line with 100% oxygen because the oxygen saturation level was 55% and the pulse rate was 110/min in the ambulance. His vital signs at admission were: blood pressure, 60/40 mmHg; respiratory rate, 9/min; pulse rate, 112/min; and body temperature, 37.8°C.
An arterial blood gas analysis (ABGA) revealed pH, 7.29; PaCO2, 60 mmHg; PaO2, 77 mmHg (normal range, 83-108 mmHg); HCO3−, 28.9 mmol/L; base excess, 0.9 mmol/L; and blood oxygen saturation level, 90%. Cyanosis was not found, but bradypnea and shallow respiratory movements were seen during a physical examination of the chest, so intravascular vasopressors, intubation, and mechanical ventilation were applied as soon as possible. The vital signs recovered (blood pressure, 120/70 mmHg; respiratory rate, 12/min; pulse rate, 110/min; and body temperature, 37.5°C), and all ABGA levels returned to normal. He had a semi-comatose mentality, pupil miosis, a sluggish pupillary light reflex, and no response to deep pain in any limb on neurological examination. No nuchal rigidity, abnormal deep tendon reflexes, or other lateralizing signs were detected. A complete blood count showed WBC, 22,400/µL; neutrophils, 91%; and other values within the normal range. A routine chemistry analysis revealed Na, 130 mmol/L; K, 4.6 mmol/L; Cl, 94 mmol/L; BUN, 20.1 mg/dL; Cr, 1.49 mg/dL; T-protein, 5.933 g/dL; albumin, 3.605 g/dL; CRP, 0.881 mg/dL; glucose, 168 mg/dL; s-Osm, 289 mosm/kg; CK, 128 U/L; ammonia, 48 µg/dL; and lactate, 0.6 mmol/L. Cardiac markers showed CK-MB, 3.5 ng/mL, and troponin-I, 0.231 pg/mL. The next day several laboratory levels were increased (CK, 280 U/L; CK-MB, 6.7 ng/mL; and troponin-I, 0.526 pg/mL). In addition, no abnormal findings were seen on brain computed tomography (CT) or chest X-ray conducted in the emergency room.

No specific events were found in his medical history, and the patient had had no particular problem until he went to sleep the night before admission. We found a fentanyl patch attached to his right wrist and learned that he had applied the patch, which had been prescribed to a friend with cancer pain, to his painful right wrist and had gone to bed the day before admission. The next morning he was found unconscious. After conservative treatment, such as mechanical ventilation and hydration with normal saline mixed with naloxone, in the intensive care unit (ICU), he regained consciousness. No abnormalities were detected on an electroencephalogram (EEG) or brain magnetic resonance imaging (MRI) (Fig. 1A). The patient was transferred to Respiratory Medicine because of aggravated aspiration pneumonia that had occurred on admission. He exhibited abnormal behaviors, such as shouting, disconnecting phone lines, and urinating on the bed, during the 20 days after the hypoxic event, and was then discharged from Respiratory Medicine.

The patient was readmitted to the neurology ward 20 days after discharge (40 days after the hypoxic event) due to poor activities of daily living, such as impaired ambulation, mutism, not using the toilet, and overall memory loss including short- and long-term memory. The neurological examination revealed an alert mentality without awareness and with akinetic mutism, but no other focal neurological abnormalities were observed. Brain MRI taken 45 days after the hypoxic event showed bilaterally symmetrical high signal intensity in the white matter on a fluid attenuated inversion recovery (FLAIR) image (Fig. 1B). After delayed hypoxic leukoencephalopathy induced by the fentanyl patch was diagnosed, walking and speaking improved through conservative and rehabilitative therapies, from early bedside physical training to simple gait. Intravenous methylprednisolone was started as a possible anti-inflammatory and neuroprotective drug at a dose of 1 g daily for five days, and oral methylprednisolone was then given and tapered over the next five days. Neurotonics such as choline alfoscerate and donepezil were administered consistently. After 1 month, he was discharged and went to a neighboring geriatric hospital. The patient did not use the toilet 3 months after discharge and developed a sleep disorder and aggressiveness. He died of aspiration pneumonia in the geriatric hospital 5 months after the second discharge.

DISCUSSION

In our case, intoxication symptoms and signs including hypoxia occurred after a fentanyl patch was attached incidentally, and the neurological deficits of gait impairment, akinetic mutism, and memory loss were detected 40 days after the hypoxic event. Delayed hypoxic leukoencephalopathy was diagnosed on a brain MRI scan 45 days after the intoxication and hypoxic event. Respiratory depression with central nervous system deterioration occurs more often in patients using opiates for the first time than in those using them long term for chronic pain, because of tolerance to opioid drugs. 8 In our case the patient had no history of past fentanyl exposure.
The reason that symptoms of delayed hypoxic leukoencephalopathy can be confusing is clinicians cannot quickly conclude these symptoms to be effects of delayed hypoxic leukoencephalopathy or other new neuropsychological diseases because the time interval between hypoxia and the delayed neurological symptoms can be a few days to weeks.Brain MRI is most helpful to diagnose delayed hypoxic leukoencephalopathy. 9 Sometimes the arylsulfatase enzyme, which is related to metachromatic leukodystrophy, decreased in patients with delayed hypoxic leukoencephalopathy, so measuring serum arylsulfatase can be helpful for the diagnosis. 6e mechanism of delayed hypoxic leukoencephalopathy is that the activity of the myelin-producing ATPdependent enzymatic pathway that forms cerebral white matter is inhibited by hypoxia and delayed demyelination is caused. 6No particular treatment is known, except rehabilitation and conservative treatment but akinetic mutism in a patient with delayed hypoxic leukoencephalopathy improves rapidly after magnesium sulfate is administered intravenously. 10The neurological sequelae in our cases were relatively mild compared with reported cases of carbon monoxide intoxication.The neuroprotective effect of fentanyl may have contributed to the better prognosis. 11 conclusion, we report a case of acute fentanyl intoxi- 29; PaCO 2 , 60 mmHg; PaO 2 , 77 mmHg (normal range, 83-108 mmHg); HCO 3 − , 28.9 mmol/L; base excess, 0.9 mmol/L; and blood oxygen saturation level, 90%.The presentations of cyanosis were not discovered but bradypnea and shal-low respiratory movements were seen during a physical examination of the chest, so intravascular vasopressors, intubation, and mechanical ventilation were applied as soon as possible.The vital signs recovered (blood pressure, 120/70 mmHg; respiratory rate, 12/min; pulse rate, 110/ min; and body temperature, 37.5°C), and all ABGA levels were normal.He had a semi-comatose mentality, pupil miosis, a sluggish pupillary light reflex, and no response to deep pain in any limb on a neurological examination.No nuchal rigidity, abnormal deep tendon reflex, or any other lateralizing signs were detected.A complete blood count showed WBC, 22,400/uL; neutrophils, 91% and others within the normal range.A routine chemistry analysis revealed Na, 130 mmol/L; K, 4.6 mmol/L; Cl, 94 mmol/ L; BUN, 20.1 mg/dL; Cr, 1.49 mg/dL; T-protein, 5.933 g/ dL; albumin, 3.605 g/dL; CRP, 0.881 mg/dL; glucose, 168 mg/dL; s-Osm, 289 mosm/kg; CK, 128 U/L; ammonia, 48 µg/dL; and lactate, 0.6 mmol/L.A cardiac marker showed Figure 1 . Figure 1.Brain magnetic resonance images.Fluid attenuated inversion recovery (FLAIR) images of the patient on admission day (A) and following FLAIR images 40 days after the exposure of a fentanyl patch (B) show bilaterally symmetrical high signal intensities on deep and periventricular white matter without involvement of gray matter, which were not observed in previous study. 
cation and delayed hypoxic leukoencephalopathy caused by incidental use of a Fentanyl patch in a healthy elderly man.This case suggests that physicians working at emergency department of a hospital should consider opioid intoxication including fentanyl patch through history taking and physical examinations, when a patient is admitted with chief complaints of mental deterioration, respiratory depression, and pupil miosis.If the fentanyl patch is on the patient's skin, it must be removed as soon as possible.Rapidly administered conservative treatments, such as artificial ventilation and naloxone injections, should provide a good prognosis.In addition, it is very important to understand clinical course of delayed hypoxic leukoencephalopathy as possible delayed complications of the patient recovered from acute opioid intoxication.If the patient is discharged only after treatments of opioid intoxication, clinicians should warn a patient and his family of capability of delayed complications.And if neuropsychological symptoms considered as delayed hypoxic leukoencephalopathy are showed, adequate imaging studies such as brain MRI and detailed history taking including opioid exposures should be performed.Through these investigations, delayed hypoxic leukoencephalopathy should be diagnosed or differentiated.
Plasma endotoxin activity in kangaroos with oral necrobacillosis (lumpy jaw disease) using an automated handheld testing system

The aim of the present study was to evaluate the reliability and effectiveness of directly determining endotoxin activity in plasma samples from kangaroos with lumpy jaw disease (LJD, n=15) and healthy controls (n=12). Prior to the present study, the ability of the commercially available automated handheld portable test system (PTS™) to detect endotoxin activity in kangaroo plasma was compared with that of the traditional LAL kinetic turbidimetric (KT) assay. Plasma samples, which were obtained from endotoxin-challenged cattle, were diluted 1:20 in endotoxin-free water and heated to 80°C for 10 min. The performance of the PTS™ was not significantly different from that of the traditional LAL-based assay. The data obtained using the PTS™ correlated with those using KT (r² = 0.963, P<0.001). These findings indicated that the PTS™ is applicable as a simplified system to assess endotoxin activity in macropods. In the present study, we demonstrated the diagnostic value of plasma endotoxin activity in kangaroos with systemic inflammation caused by oral necrobacillosis and identified plasma endotoxin activity as a sensitive marker of systemic inflammation in kangaroos with LJD. Based on ROC curves, we proposed a diagnostic cut-off point for endotoxin activity of >0.22 EU/ml for the identification of LJD. Our results indicate that the assessment of plasma endotoxin activity is a promising diagnostic tool for determining the outcome of LJD in captive macropods.

Oral necrobacillosis, commonly referred to as "lumpy jaw disease" (LJD), is a term used to describe progressive pyogranulomatous osteomyelitis involving the mandible or maxilla of human beings, wild sheep [15] and captive macropods [3,10,14,17,21]. LJD is one of the most significant causes of illness and death in these macropods [17]. As shown in Fig. 1, LJD commences as periodontitis with invasion of the mucosa by saprophytic bacteria, such as Fusobacterium necrophorum, Corynebacterium pyogenes and Dichelobacter nodosus, and infection frequently extends into adjacent bones, resulting in osteomyelitis [17]. The primary cause of LJD in the kangaroo is F. necrophorum [2,4], whereas the agents commonly associated with this condition in cattle and humans are members of the genus Actinomyces [22]. Numerous species of predominantly Gram-negative anaerobic bacteria have been isolated from lesions, with most members of the normal oral bacterial flora being present; however, the disease has only been reproduced experimentally by injecting F. necrophorum into the gingival mucosa [3]. F. necrophorum, a Gram-negative, non-spore-forming anaerobe, is a normal inhabitant of the alimentary tracts of animals and humans. The pathogenic mechanism of F. necrophorum is complex and has not yet been elucidated in detail. Several toxins or secreted products, including leukotoxin, endotoxin, hemolysin, hemagglutinin, proteases and adhesin, have been implicated as virulence factors [20]. Periapical lesions that begin as F. necrophorum infection in the dental pulp subsequently lead to inflammatory bone resorption [16]. The systemic complications and deleterious outcomes associated with Gram-negative infections have been attributed to the exaggerated inflammatory responses largely elicited by a highly pro-inflammatory component of the Gram-negative bacterial envelope known as endotoxin, or bacterial lipopolysaccharide [9].
The accumulation of bacterial components, such as endotoxin, in an infected area may stimulate the release of pro-inflammatory cytokines from neutrophils and monocytes/macrophages. Endotoxin is the primary virulence factor of Gram-negative bacteria, is responsible for damage to animals and is released from bacteria at the time of cell death, thereby initiating an inflammatory response [5]. Endotoxin released from an infected root canal has been shown to trigger the synthesis of interleukin-1 alpha and TNF-alpha from macrophages [16]. These pro-inflammatory cytokines up-regulate the production of matrix metalloproteinase by macrophages in order to promote periapical bone resorption. Endotoxin plays a major role in the pathophysiology of Gram-negative bacterial sepsis; therefore, attempts have been made to detect and quantify it, with conflicting findings, in various states of infection. Since Levin and Bang [19] discovered the role of endotoxin in the coagulation of horseshoe crab blood in 1964, numerous methods incorporating limulus amebocyte lysate (LAL) have been developed for the detection of endotoxin and endotoxin testing of parenteral drugs [7,8]. However, these assays are very complex and, thus, inadequate for field use [23]. Charles River (Charleston, SC, U.S.A.) recently introduced an automated handheld testing system, named the Endosafe® portable test system (PTS™), to detect endotoxin. This automated, miniaturized, kinetic chromogenic LAL-based assay delivers results in 15 min [6,12,13]. Unlike the PTS™, the traditional toxinometer [24] and LAL kinetic turbidimetric (KT) assays [13] require 75% to 85% longer processing times. The PTS™ is also advantageous when time-sensitive treatments are needed, because it is an automated handheld portable machine that is applied as a simple test. To the best of our knowledge, comparative studies on the relationship between plasma endotoxin activity and LJD have not yet been performed in macropods. Therefore, the aim of the present study was to determine plasma endotoxin activity in kangaroos with LJD using a commercially available PTS™, an automated handheld testing system. A receiver operating characteristic (ROC) curve was constructed in order to describe the performance of plasma endotoxin activity in kangaroos with LJD. Furthermore, endotoxin activities detected in plasma samples obtained from kangaroos with or without LJD by the commercially available PTS™ and by a traditional microplate LAL-based assay, which determined activities using a kinetic turbidimetric (KT) assay, were compared.

MATERIALS AND METHODS

All procedures were reviewed and approved by the Institutional Animal Care and Use Committee of the School of Veterinary Medicine, Rakuno Gakuen University (Japan). Fifteen Eastern grey kangaroos (Macropus giganteus) with LJD aged (mean ± SD) 3.5 ± 2.2 years and with a body weight of 15.5 ± 6.3 kg were examined in this study. The definitive diagnosis of LJD was made based on clinical findings, such as facial swelling, weight loss, excessive salivation and flicking of the tongue [17]. Twelve Eastern grey kangaroos aged 3.6 ± 2.3 years and with a body weight of 20.5 ± 11.7 kg were used as the control group. The health status of the control animals was determined on the basis of a physical examination and serum biochemical analysis by zoo veterinarians.
All animals were kept at Hibiki Animal World (Fukuoka, Japan), consumed concentrated pellets (ZC Pellets, Oriental Yeast Co., Ltd., Tokyo, Japan) for herbivores in accordance with the manufacturer's guidelines, and had ad libitum access to hay (timothy and alfalfa), vegetables (including carrots, cabbage and potatoes), apples and water. Four milliliters of whole blood was collected via jugular venipuncture into heparinized tubes for the endotoxin analysis and then centrifuged for 10 min at 3,000 × g at room temperature within 1 hr of collection. Approximately 1.8 ml of plasma was harvested and stored in sampling tubes (Cryo-Tube™ vials, Nunc, Roskilde, Denmark) at −30°C for later analyses. Immediately prior to testing, plasma samples were diluted 20-fold in endotoxin-free water (Otsuka distilled water, Otsuka Pharmaceutical Co., Ltd., Tokyo, Japan) and agitated in a vortex for 10 sec. Specimens were then heated for 10 min at 80°C in order to inactivate interfering substances, such as proteases. Endotoxin-free water was used as the blank in all tests. The USP endotoxin reference standard (RSE, USP Endotoxin Reference Standard Lot G, the United States Pharmacopeial Convention, Inc., Rockville, MD, U.S.A.), which contained 10,000 endotoxin units (EU) per vial, was used as the positive control. The LAL reagent for the LAL KT assay (Endosafe® KTA2, Charles River) was reconstituted with Endotoxin-Specific Buffer Solution (Charles River) in order to eliminate any interference from β-glucans. The traditional LAL-based assay was performed on a 96-well microplate (Endosafe® 96-well, flat-bottom microplate M9001, Charles River), and endotoxin activity was determined using a microplate reader (Sunrise™, Tecan Group Ltd., Männedorf, Switzerland) and EndoScan-V™ endotoxin-measuring software (Charles River). The range covered by the standard curve (0.003 to 3.0 EU/ml) was established according to the package insert of the LAL product. The lower limit of quantitation for this assay was 0.027 EU/ml. All samples tested with the PTS system used 1–0.001 EU/ml sensitivity cartridges (Fig. 2). The PTS system, which comprised a spectrophotometer, reader and LAL reagent cartridge, was used in the present study. The reagent cartridges (Lot# 3183249) were prototypes that did not react with β-glucan, and were provided by Charles River Laboratories. Precise amounts of LAL reagents, buffer components, oligosaccharides as a β-glucan blocker, chromogenic substrates and control standard endotoxin were dried onto the channels of the cartridges. The reagent cartridges were potency tested, spike recovery was performed, and the calibration code was then determined. The calibration code (Cal# 419065608093) contained the reagent cartridge test parameters that were determined during potency testing, as well as the archived curve for that batch of cartridges. The cartridges contained 2 sample channels and 2 spiked channels. The analyst loaded 25-µl samples into the cartridge sample reservoirs, and the reader drew, mixed and incubated the samples at different time intervals after the assay was started. The product endotoxin concentration (endotoxin activity), product positive control with a known endotoxin concentration, percentage sample coefficient of variation, percentage endotoxin spike coefficient of variation and percentage recovery of the product positive control were automatically calculated using software for research use involving an extrapolation function.
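To make the cartridge's automatic calculations concrete, the minimal Python sketch below reproduces the quantities just listed — the dilution-corrected endotoxin activity, the percentage CV of the duplicate channels, and the percentage spike recovery — together with the acceptance criteria cited in the statistical analysis that follows (CV within 25%; spike recovery between 50% and 200%). The function names and all numeric readings are illustrative assumptions for this sketch, not the vendor's software or the study's measured data.

```python
# Minimal sketch (not vendor software) of the cartridge-style calculations:
# duplicate sample channels, duplicate spiked channels, a dilution-factor
# correction, %CV, and % spike recovery with the USP-style acceptance
# criteria cited in this study. All numbers are illustrative.
from statistics import mean, stdev

def percent_cv(values):
    """Coefficient of variation of replicate channel readings, in percent."""
    return 100.0 * stdev(values) / mean(values)

def evaluate_cartridge(sample_eu, spiked_eu, spike_added_eu, dilution_factor):
    """Return dilution-corrected endotoxin activity, % spike recovery, validity.

    sample_eu / spiked_eu: duplicate channel results in EU/ml (as read).
    spike_added_eu: known endotoxin spike added to the spiked channels.
    dilution_factor: e.g. 20 for the 1:20 plasma dilution used here.
    """
    activity = mean(sample_eu) * dilution_factor  # result reported to the user
    recovery = 100.0 * (mean(spiked_eu) - mean(sample_eu)) / spike_added_eu

    valid = (
        percent_cv(sample_eu) <= 25.0      # sample %CV criterion
        and percent_cv(spiked_eu) <= 25.0  # spike %CV criterion
        and 50.0 <= recovery <= 200.0      # USP recovery window
    )
    return activity, recovery, valid

# Example: 1:20-diluted plasma read at ~0.016 EU/ml per channel.
activity, recovery, valid = evaluate_cartridge(
    sample_eu=[0.016, 0.017],
    spiked_eu=[0.115, 0.108],
    spike_added_eu=0.10,
    dilution_factor=20,
)
print(f"endotoxin activity: {activity:.3f} EU/ml, "
      f"spike recovery: {recovery:.0f}%, valid: {valid}")
```

With the study's 1:20 dilution, duplicate channel readings near 0.016 EU/ml would correspond to roughly 0.33 EU/ml in undiluted plasma, the order of magnitude later reported for animals with LJD.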
In the present study, 20-fold diluted plasma, which had been heated for 10 min at 80°C, was used to measure endotoxin activity. Results were automatically multiplied by the dilution factor entered into the system. The lower limit of quantitation for this assay was 0.050 EU/ml. A detailed description of the PTS™ is provided elsewhere [6,18].

Statistical analysis: A test result was considered valid based on the percentage spike recovery and percentage coefficient of variation (CV) parameters falling within the acceptance criteria (25%) established by the PTS system and KT assay. Spike recovery values were considered valid if the results were between 50% and 200%, according to the Bacterial Endotoxin Test in the US Pharmacopeia [25]. The absolute value of the correlation coefficient of the standard curve generated using reference standard endotoxin was greater than or equal to 0.980 for the range of endotoxin concentrations established, according to the Bacterial Endotoxin Test in the US Pharmacopeia [25]. Sample endotoxin activities were statistically analyzed using the SPSS software program (ver. 21, IBM Japan, Tokyo, Japan). The results of the PTS™ and KT assay were compared using the Friedman test, a non-parametric statistical test used to detect differences across multiple test attempts [11]. Pearson's product-moment correlation coefficients were calculated to evaluate relationships between any two continuous variables. A linear regression model analysis was also performed in order to obtain the equation. The median values for endotoxin activity obtained by the PTS™ method were compared with those of the healthy controls, and the Mann-Whitney U test was employed for comparisons between groups. ROC curves were used to characterize the sensitivity and specificity of each parameter with respect to changes associated with LJD. The optimal cut-off point for a test was calculated by the Youden index [1]. The Youden index (J) is defined as the maximum vertical distance between the ROC curve and the diagonal (chance) line and is calculated as J = max(sensitivity + specificity − 1). The cut-off point on the ROC curve that corresponds to J was regarded as the optimal cut-off point [1]. The significance level was P<0.05.

RESULTS

The KT test effectively recovered endotoxin from plasma (82.4%; range, 73.2%–88.5%) over the range of concentrations tested. The linearity of the standard curve was also satisfactory for the KT assay (r² = 0.984) over the range of concentrations tested. The CV for endotoxin activity in the KT assay was 10.9% (range, 0.2%–18.5%). The PTS™ effectively recovered endotoxin from plasma (98.3%; range, 50%–180%) over the range of concentrations tested. The CV for endotoxin activity in the PTS™ was 11.1% (range, 2.4%–24.4%). Each of the assays (KT and the PTS™) effectively recovered endotoxin from the plasma of kangaroos with or without LJD. The median endotoxin activities detected by the KT assay and the PTS™ were 0.206 EU/ml (range, 0.027–1.06) and 0.222 EU/ml (range, 0.050–1.43), respectively. As shown in Fig. 3, the results obtained from the PTS™ correlated well with those from the KT assay (r² = 0.915, P<0.001). Based on the results of the Friedman test, the ability of the PTS™ to recover endotoxin from plasma was not significantly different from that of the KT assay (P>0.05). Figure 4 shows the relationship between plasma endotoxin activity and LJD in kangaroos.
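As a concrete illustration of the cut-off selection described in the statistical analysis above, the short Python sketch below computes a ROC curve, its area under the curve, and the Youden-index cut-off, here using scikit-learn. The group labels and endotoxin activities are invented for illustration; they are not the study's measured values.

```python
# A minimal sketch of ROC / Youden-index cut-off selection, assuming
# scikit-learn is available. The data below are invented for illustration.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# 1 = LJD, 0 = healthy control; scores are plasma endotoxin activities (EU/ml)
y_true = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
activity = np.array([0.33, 0.50, 1.56, 0.28, 0.24, 0.05, 0.41, 0.90,
                     0.10, 0.05, 0.12, 0.74, 0.08, 0.15])

fpr, tpr, thresholds = roc_curve(y_true, activity)
auc = roc_auc_score(y_true, activity)

# Youden index: J = max(sensitivity + specificity - 1) = max(TPR - FPR);
# the threshold attaining the maximum is the proposed diagnostic cut-off.
j = tpr - fpr
best = np.argmax(j)
print(f"AUC = {auc:.3f}")
print(f"optimal cut-off > {thresholds[best]:.2f} EU/ml "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")
```

Applied to the study's actual measurements, this procedure yielded the >0.22 EU/ml cut-off with the 80.0% sensitivity and specificity reported below.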
Plasma endotoxin activities were higher in kangaroos with LJD (median, 0.326 EU/ml; range, 0.05–1.56 EU/ml) than in those without LJD (median, 0.100 EU/ml; range, 0.05–0.74 EU/ml; P<0.05). The area under the ROC curve for plasma endotoxin activity was 0.793 (Fig. 5, P<0.05). The proposed diagnostic cut-off point for plasma endotoxin activity for identifying kangaroos with LJD, based on analyses of ROC curves, was set at >0.22 EU/ml. The sensitivity and specificity of the proposed diagnostic cut-off for plasma endotoxin activity were both 80.0%.

DISCUSSION

We investigated the relationship between oral necrobacillosis, commonly referred to as LJD, in captive kangaroos and plasma endotoxin activity using an automated handheld endotoxin testing system, the PTS™. In the present study, positive control recoveries in the traditional KT assay and the PTS™ were rarely outside the acceptable range. The PTS™ effectively detects plasma endotoxin activity in the kangaroo and is practical and easy to use for assessing endotoxin activity in plasma. It offers several advantages over the microplate kinetic LAL assays currently in use by diagnostic laboratories: it is small and portable, requires only small quantities of specimens, and rapidly provides results [6,18]. Plasma endotoxin activity was significantly higher in kangaroos with LJD than in the healthy control group. Therefore, the proposed diagnostic cut-off for plasma endotoxin activity based on the ROC curve analysis to detect LJD was set at >0.22 EU/ml, with a sensitivity and specificity of 80.0% each. USP chapter 85 [25], which addresses photometric bacterial endotoxin test methods, allows for a wide recovery range for the positive control, between 50% and 200%, because small discrepancies in test conditions and cartridge flaws contribute to variable recovery values for the positive control [6,12,13,18]. An out-of-specification percentage recovery for the positive control was previously associated with a calculated product endotoxin concentration that expressed interference, such as inhibition or enhancement [12]. When any criterion, mainly the percentage recovery of the positive control, was not within the acceptable range, the test was not considered valid [12]. In the present study, positive control recoveries in the traditional KT assay and the PTS™ were rarely outside the acceptable range. The photometric PTS™ represents a rapid, simple and accurate technique using the quantitative kinetic chromogenic LAL method to assess plasma endotoxin activity in kangaroos and met all the requirements for endotoxin testing, including the percentage CV and the recovery of the positive control. Furthermore, the results of the PTS™, which used plasma diluted 1:20 in endotoxin-free water and heated to 80°C for 10 min, correlated with those obtained by the traditional KT assay. Therefore, the results of the present study confirmed that the PTS™ is practical for simple and easy use in assessing endotoxin activity in plasma. Our results showed that plasma endotoxin activity was higher in kangaroos with LJD than in controls, and based on ROC curves, we proposed a diagnostic cut-off point for endotoxin activity of >0.22 EU/ml for the identification of LJD.
Endotoxin, which is released from bacteria, including F. necrophorum, at the time of cell death and initiates an inflammatory response [20], refers to the lipopolysaccharide of the Gram-negative bacterial cell wall and is the primary virulence factor of Gram-negative bacteria responsible for damage to the kangaroo. Endotoxin is known to be responsible for many of the pathophysiological signs observed during Gram-negative bacterial infections in mammals, such as fever, leukopenia, complement activation, the activation of macrophages and changes in the plasma levels of metabolites, minerals, acute phase reactants and hormones. In conclusion, we investigated the diagnostic value of plasma endotoxin activity in kangaroos with systemic inflammation caused by oral necrobacillosis and identified plasma endotoxin activity as a sensitive marker of systemic inflammation in kangaroos with LJD. Based on ROC curves, we proposed a diagnostic cut-off point for endotoxin activity of >0.22 EU/ml for the identification of LJD. Our results indicate that the assessment of plasma endotoxin activity is a promising diagnostic tool for determining the outcome of LJD in captive macropods. In addition, the photometric PTS™ represents a rapid, simple and accurate technique, which uses a quantitative kinetic chromogenic LAL method for the assessment of plasma endotoxin activity in kangaroos. Therefore, the results of the present study confirmed that the PTS™ is appropriate for assessing endotoxin activity in plasma.

ACKNOWLEDGMENTS. This study was supported by a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology of Japan (nos. 21580393, 26450431 and 26460513), and by a Grant-in-Aid from the Asahi Group Foundation.
Postgraduate mathematics education students' experiences of using digital platforms for learning within the COVID-19 pandemic era

Introduction

Technology is increasingly used by society in education, business and general daily activities (Bell, 2011; Qurat-ul et al., 2019). Also, as we embrace the fourth industrial revolution (4IR), there are various debates on how existing educational contexts can be adapted to support and incorporate the use of technology-based tools. Technology-based tools in this study refer to electronic, digital or any other teaching tools that are supported by technology and are used within educational contexts to facilitate learning (Ertmer & Ottenbreit-Leftwich, 2012). Within the context of this study, digital platforms provided the participants and the lecturer with a diverse array of technology-based tools for communicating and attaining new knowledge and skills to enhance and guide the online learning process. Also, within this study, digital platforms and web-based resources were used with computers or mobile gadgets which supported pedagogy using text, audio and video (Peachey, 2017). Thus, digital platforms assisted the online community of practice (participating postgraduate mathematics education students) in using digital resources and engaging with learning materials online. These platforms are combined software solutions which supported online learning within this study. Digital education settings support the effective integration of technology-based tools (for example, computers and mobile devices) for teaching and learning (Buzzard, Crittenden, Crittenden, & McCarty, 2011). This study aimed to respond to the main research question: what are postgraduate mathematics education students' experiences of using digital platforms for learning within the COVID-19 pandemic era?

Fourth industrial revolution

The 4IR comprises various methods of integrating technology within societies and human bodies (Schwab, 2016). Within the 4IR era, technology-based tools and digital platforms are transforming the way we conduct our lives.
This transformation is unsettling society since the 4IR has changed the way nations subsist and is epitomised by the blending of the virtual and physical domains (Schwab, 2016), shaping a comprehensively linked and progressive society. Students may be at different levels of readiness for progressing within the 4IR: they may be digital natives or digital immigrants. Digital natives are acquainted with the use of technology-based tools and digital platforms (for example, the Internet, computers and other online tools or platforms). The term is generally used to characterise students who are familiar with technology-based tools (Helsper & Enyon, 2009). These students will probably succeed when using technology-based tools and digital platforms for learning. In contrast, digital immigrants are students who may acquire the knowledge of how to use technology-based tools, but rather than working online initially, they may examine printed information first before referring to the Internet for support (Helsper & Enyon, 2009). These students may need added assistance when using technology-based tools and digital platforms. As is evident, lecturers need to be aware of their students' familiarity with using technology-based tools and digital platforms for learning. This knowledge will support the lecturer when presenting the notions of the 4IR within the education environment. Also, education institutions need to adjust to prepare students sufficiently to thrive within these circumstances (Butler-Adam, 2018; Mensch, 2017; Schieffer, 2016; Thieman, 2008). The 4IR impacts the role that higher education institutions play in preparing students for succeeding within our technologically advanced society. To take advantage of 4IR opportunities, more so within the era of the COVID-19 pandemic, we need to transform our pedagogy to include the successful use of technology-based tools and digital platforms.

Using digital platforms and technology in mathematics teaching and learning

The digital world has changed education environments, and teaching and learning are being transformed through the use of technology-based teaching tools (Grand-Clement, Devaux, Belanger, & Manville, 2017; Jeffrey, Milne, Suddaby, & Higgins, 2014; Lazarus & Roulet, 2013). Integrating technology in mathematics pedagogy supports teaching and learning and has positive effects on student performance (Cheung & Slavin, 2013; Mlotshwa & Chigona, 2018). However, within South Africa, mathematics teachers' use of technology-based tools for teaching and learning is negatively affected by their lack of computer skills. Thus, many mathematics teachers are hesitant to transform their pedagogy to incorporate the use of technology within their classrooms (Stols et al., 2015). Nevertheless, there are many online mathematics teaching and learning websites and several educational applications that can influence students' learning and achievement (Pope & Mayorga, 2019). This online teaching and learning support is available for mathematics teachers to engage with, since student performance in mathematics has been improved through using digital tools for teaching (Cheung & Slavin, 2013). In the study conducted by Cheung and Slavin (2013), the authors focused on researching the link between mathematics achievement and the use of technology. Also, that study focused on exploring computer-managed learning and is therefore related to this study.
Computer-managed learning is similar to Moodle, which is a learning management system (LMS) used at the participating university in the study under focus. The use of Moodle in the teaching and learning of mathematics within higher education supports students' learning (Lopes, Babo, & Azevedo, 2008) by stimulating students' interest, which results in a positive effect on student performance (Handayanto, Supandi, & Ariyanto, 2018). Moreover, the use of digital platforms, for example Moodle, results in an improvement in students' performance in mathematics (Jayashree & Tiwari, 2016). Also, both successful mathematics students and those that have challenges may benefit from the use of digital tools (Bruce, 2012), since the use of technology-based tools in mathematics has shown a positive link with student achievement (Cheung & Slavin, 2013). Furthermore, the implementation of digital platforms for the learning of mathematics encourages supportive relationships to be developed within communities of practice (Mlotshwa & Chigona, 2018). However, it is also important to note that the success of using digital tools also depends on the design of the digital tools and platforms as well as the time allocated for completing specific content within the curriculum (Drijvers, 2013; Sahal & Ozdemir, 2020). Moreover, the use of digital tools in teaching and learning can lead to students becoming easily distracted, and as a result they may not complete their academic work timeously (Mbukusa, 2018). Therefore, it is up to the mathematics teacher to select the most helpful digital tool to achieve the full potential of digital pedagogy to support students' learning. Thus, it is the teacher's responsibility to enable progression in education (Montrieux, Vanderlinde, Schellens, & De Marez, 2015). Especially within the current COVID-19 pandemic era, the use of digital platforms to support pedagogy within the 4IR is essential. Some of these digital platforms used at the participating university include Zoom¹, WhatsApp² and Moodle³. Thus, the digital world has entered education spheres, with digital devices and platforms being used to deliver education (Grand-Clement et al., 2017). Given the ubiquity of smartphone technology and the ease with which it is used in daily life, excluding smartphones from higher education courses confines prospects for these courses to be contemporary and realistic (Schuck, 2016). To ensure that courses and pedagogy are current and realistic, many higher education institutions are advancing to integrate digital devices and digital platforms (for example tablet computers, laptops, smartphones, netbooks, social media, Zoom, Google Meet, Microsoft Teams, Google Drive and Google Classroom) within pedagogy.

Theoretical framing: Communities of Practice

This study was framed using Wenger's (1998) Communities of Practice (COP) theory, a theory of learning that has its own set of norms and emphasis, in which the fundamental unit of analysis is the COP (Graven & Lerman, 2003). A COP revolves around events that are of interest to members of that community (Wenger, 1998). A COP may create an environment of thinking, insight and responses that supports the connection between researchers and practitioners such that the information generated is more valued and substantial (Hearn & White, 2009). Wenger's (1998, p.
4) COP theory is founded on four principles: individuals are social beings, information concerns 'valued enterprises', knowing is about participation and experiences in the world, and meaning is what learning produces. This theory maintains that a COP is formed by people who partake in the activity of communal learning within a public space. Thus, a COP comprises groups of people who share an interest in something, and through cooperation they learn how to advance what they adopt (Wenger & Wenger-Trayner, 2015). Furthermore, within the COP theory, learning is identified as being made up of four elements: practice, meaning, identity and community (Wenger, 1998). The relationship between this study and the COP theory is clarified as follows: a COP shares an interest; within the scope of this research, the shared area of interest was the use of digital platforms for the learning of mathematics. Within the ambits of COP, members of the community partake in the activity of communal learning. Since technology can surpass location and time, virtual COPs, which rely primarily on technology to connect the online COP, are becoming increasingly popular (Dubé, Bourhis, & Jacob, 2006). Within the domains of this study, shared learning within the virtual COP took place via three interactive online workshops and two discussion forums. Thus, within the ambits of a virtual COP, members of the COP in this study were the postgraduate mathematics education students and the mathematics lecturer. The postgraduate mathematics education students in this study were also practising mathematics teachers at school level. The researcher was the mathematics lecturer who taught these participants at the research site. A virtual COP may use traditional tools, for example a telephone, and more advanced technological tools, such as emails, cell phones, virtual meeting spaces, digital platforms and websites, to create a shared virtual collaborative space (Dubé et al., 2006). The virtual COP under study supported each other as they interacted within the interactive workshops and online discussion forums; they shared resources online and discussed challenges with the mathematics content being discussed. The learning process was collaborative and interactive (Osterholt & Barratt, 2010), involving all members of the virtual COP. Thus, this theoretical framework provided the framing for this study, which focused on one main research question: what are postgraduate mathematics education students' experiences of using digital platforms for learning within the COVID-19 pandemic era?

¹ Zoom is a video conferencing software application that allows users to interact virtually with friends, colleagues, students and family when face-to-face communication is not possible. Zoom is a digital platform used for teaching and learning at the participating university during the COVID-19 pandemic.

² WhatsApp is a free messenger application that uses the Internet to send and receive messages and calls. WhatsApp was used unofficially at the participating university by lecturers and students as a digital platform to communicate via messages, images, audio and video files during the COVID-19 pandemic.

³ Moodle is a learning management system used at the participating university. It is an open-source e-learning digital platform.

General background

This qualitative study, which aimed to explore postgraduate mathematics education students' experiences of using digital platforms for learning, was located within an interpretive paradigm.
Data were generated via three interactive online workshops and two discussion forums at one teacher education institution in KwaZulu-Natal, South Africa. The participating university's research committee approved gatekeeper access and ethical clearance. The study incorporated three interactive online workshops and two online discussion forums with participants. The participants were purposively selected for convenience, since the researcher taught these participants.

Participants

Participants were emailed an informed consent sheet outlining the purpose and process of the study; the participants' right to leave the study without prejudice was also noted on the informed consent sheet. Pseudonyms were used to protect the anonymity and confidentiality of participants. Forty-two postgraduate mathematics education students were invited to participate in the study, and 37 agreed to participate (23 male and 14 female). Six participants were selected at random to join the pilot study. Hence, the remaining 31 postgraduate mathematics education students, who were also practising mathematics school teachers, participated in the main study.

Pilot study

The pilot study was conducted with six participants who were part of the postgraduate mathematics education class. These participants were randomly selected and were not included in the sample for the main study. All interactive workshops were piloted; during the pilot study, Internet connections were limited, sluggish and unsteady since the workshops were conducted at peak times, when Internet service providers were supplying numerous customers, which led to slow network speed and unstable networks. Hence, the workshops took longer than anticipated. Also, during the pilot, individual participants were unclear about what was required of them for the discussion forum questions. To avoid similar issues during the main study, through negotiation with the participating postgraduate mathematics education students, the online workshops were held at off-peak times to ensure the speed and steadiness of Internet networks. Also, the online discussion forums were piloted to explore the learning of mathematics by using digital platforms as experienced by the participants of the pilot study. As a result of conducting the pilot study, the questions used during the workshops and discussion forums were revised to enhance the trustworthiness of the research instruments and process. Thus, by conducting the pilot workshops and piloting the discussion forums, the consistency and dependability of the study were maintained. This was important for ensuring that the results were due to the study itself and not a consequence of any other peripheral factors.

Main study

Thirty-one postgraduate mathematics education students agreed to participate in the main study. Data were generated via three interactive online workshops and two online discussion forums. Although 31 participants were involved in the workshops, due to various reasons, only 15 participants (eight male and seven female) were available to take part in the discussion forums. To assure the participants of their anonymity, pseudonyms were used. Table 1 presents the pseudonyms used for the participants in the discussion forums.

Online workshops

The purpose of the interactive workshops was to explore postgraduate mathematics education students' experiences of using digital platforms within the COVID-19 pandemic era.
A sequence of online mathematics workshops (N = 3) via Zoom (a digital platform used at the participating university) was held. These workshops were facilitated by the lecturer, who is also the researcher. These workshops were mandatory and were part of a postgraduate Mathematics Education module for which the participants were registered. All participants were provided with notes, PowerPoint presentations of case studies, and examples of online assessments focusing on using digital tools and free online websites within a digital mathematics education environment. Within this environment, the lecturer presented the module content and shared resources with the participants online via email, Zoom chats and Moodle. The content for the first workshop focused on academic writing, and the second workshop concentrated on assessing and providing feedback for algebra. The third workshop was dedicated to identifying school learners' misconceptions in geometry. At the end of the third workshop, all participants were invited to participate in two discussion forums (using the WhatsApp and Moodle digital platforms). The researcher presented content for all three workshops, and the postgraduate mathematics education students engaged with and discussed the content that was introduced with members of the virtual COP. The discussion forums were not compulsory.

Online discussion forums

The purpose of the two discussion forums was to explore postgraduate mathematics education students' experiences of using digital platforms for the learning of mathematics. The discussion forums were conducted online using digital platforms, for example Moodle and WhatsApp chats. Questions were placed on these digital platforms, and the participants had one week to respond to the questions posed. If further probing was required for clarification purposes, the researcher would ask specific participants follow-up questions in each discussion forum. Each discussion forum began with a few general questions to place the participants at ease. The discussion forum then progressed to specific items focusing on the participating postgraduate mathematics education students' experiences of using digital platforms for learning about mathematics academic writing, learning how to assess and provide learners with feedback for mathematics problems in algebra, as well as how to identify learner misconceptions in geometry. During the explanation of the research process, participants were asked to respond at least once to each question posed during the discussion forums. The discussion forums focused on the following key questions:

• What were the postgraduate mathematics education students' experiences of using digital platforms for learning?
• What were the strengths of using digital platforms for the learning of mathematics?
• What were the challenges of using digital platforms for the learning of mathematics?

The discussion forums were recorded; transcriptions of the discussion forums were sent to each participant for perusal to ensure the correctness of the information captured within these forums.

Data analysis

Data analysis in the form of coding and categorising of themes was based on the conceptual framework of the study, that is, COP theory. The relationship between the data generated and the notions of practice, meaning, identity and community (Wenger, 1998) was examined.
Data analysis included the following steps: firstly, open coding was used to become familiar with the data and to classify codes after inspecting the qualitative data; secondly, all data were perused, and codes were processed into themes. The data that were related to each other were grouped into themes. Thirdly, the themes were scrutinised to ensure that all codes within each theme revealed a connection. Finally, the similarities and differences between the participants' responses were compared, and subthemes were identified. Also, to confirm the accuracy of results and to provide participants with the chance to clarify their responses, member checking was undertaken.

Ethical considerations

Ethical clearance for this study was obtained from the Ethics Committee of the participating university. The participants were assured that pseudonyms would be used to ensure and protect their confidentiality and anonymity when using all generated data. Participants provided informed consent to participate in the study, noting that they allowed the use of their responses to the online workshops and discussion forums for research purposes. Ethical clearance number: HSS/1562/016.

Results

Postgraduate mathematics education students had various experiences when using digital platforms for learning. These experiences are presented in the themes and subthemes that follow.

Strengths of using digital platforms

The postgraduate mathematics education students indicated that there were strengths in using digital platforms for the learning of mathematics. Their sentiments are presented in the subsequent subthemes.

Experiences of using WhatsApp discussion forums

The majority of the participants had positive experiences of using WhatsApp for the discussion forums, as reflected in their discussion forum excerpts. Based on the evidence, the participants agreed that WhatsApp assisted in providing immediate feedback and offered a means of communicating instant social messages. These participants were participating interactively within their virtual COP using digital platforms. Mathematics worksheets and resources were easy and quick to distribute to the virtual COP using WhatsApp. This engagement via digital platforms is important for enhancing academic success and development in mathematics within the era of the 4IR.

Digital platforms provide unlimited access to module material and resources

The participants valued the notion of creating a mathematics repository, uploading videos, resources and recordings of the workshops and discussion forums, as reflected in the excerpts that follow.

Xolani: …the uploaded content helped…I could learn at any time…the data bank we developed also assisted with checking misconceptions and providing feedback in maths…I could study anytime and anywhere I even could learn in my car…

As was evident, the majority of the participants appreciated the uploaded workshop videos and recordings. In addition, the virtual COP developed an online data bank with mathematics examples of assessing and providing feedback as well as resources on identifying misconceptions in geometry. These resources were valued by the participants and provided them with new learning experiences as they discovered, for example, other misconceptions that learners may acquire in geometry. They felt at ease that they could view the uploaded material at any time that suited them. It was evident that the participants were prepared to engage with online material.
In addition, with the uploading of the online data bank developed by the virtual COP, the participants had access to mathematics support at any time. This is an important step for achieving academic success in mathematics within the era of the 4IR.

Support received within the digital community

The virtual COP formed by using digital platforms created a support mechanism for the participants, as reflected in the subsequent excerpts. Based on the preceding excerpts, creating a supportive virtual COP was beneficial for the participants. They valued the online support they received concerning the mathematics being discussed as well as the online social support they received during the COVID-19 pandemic. The participants' willingness to embrace the online platforms and support other members of their COP while they learnt about mathematics academic writing, assessing and providing feedback in algebra and identifying learner misconceptions in geometry was encouraging. In addition, the development of the online repository with mathematics resources and mathematics examples supported active online interaction and engagement. This robust engagement and interaction via the Internet using digital platforms is supported within contemporary 4IR lecture rooms.

Collaboration is encouraged within the digital community

Collaboration and the sharing of ideas were essential to the participants, as reflected in the excerpts that follow.

Delani: …worked well in the group…shared problem-solving and how to give feedback to learners…discussed answers…we placed examples in the data bank to help each other…the lecturer assisted and explained concepts further…it was good to belong to the online group… learnt new assessments methods from resources shared online…

Devi: …discussed how to teach learners who had misunderstandings in geometry…shared advice about how to help learners with understanding geometry better…the group placed their examples in the online data bank. We helped each other as a group…could complete the task through this help…realised we needed to do our work on our own first and then discuss and share our answers…

John: …if someone needed help we gave support. We looked at each other's answers first…then we gave advice about what worked for us in the class…we asked the lecturer to assist or we needed more explanation…could check on each other at any time using WhatsApp… working in the group was helpful…

Nomsa: …no extra effort was required…convenient and was quick to respond to questions and share mathematics resources that work for us…got others' feedback quickly…I could also look at the data bank…I could complete my work with this help…

Patrick: …some of us had the same problems…for example the geometry problems were confusing…we could share advice…shared resources online…we worked together…it was useful…helped each other…

To take advantage of 4IR opportunities, more so within the era of the COVID-19 pandemic, the lecturer in this study transformed pedagogy to include the use of technology-based tools and digital platforms. It was evident from the discussion forums that members of the virtual COP in this study engaged with each other online and shared resources and examples online using a data bank.
For example, based on the evidence provided, if the participating postgraduate mathematics education students within the virtual COP needed further assistance when solving the mathematics problems under discussion, they sought help from the lecturer, who was also a member of the virtual COP, and shared resources with each other online. These resources included examples of assessments in mathematics, how to provide feedback in mathematics, as well as common misconceptions in geometry. These examples and resources were uploaded into a data bank created by the virtual COP. As was evident, while the participants mentioned that collaboration was of value when using digital platforms, they valued their interaction within their virtual COP. However, they also indicated that they experienced challenges when using digital platforms. These challenges are discussed in detail in the following section.

Challenges of using digital platforms

The participants indicated that there were challenges in using digital platforms for the learning of mathematics. Their views are presented in the subsequent subthemes.

Need training to use digital platforms effectively

The participants did not generally use Zoom and were only exposed to this digital platform during the COVID-19 pandemic. They needed to practise and prepare for using this digital platform, as is reflected in the following discussion forum excerpts.

Devi: …was difficult at first…got easier as I experimented and tried out Zoom …I think we need enough training before using Zoom…

John: …Zoom is something different…needed to learn how to work with Zoom…then only was I ok in the workshop… still had problems raising my hand…the buttons on the Zoom system changed….

Siya: …the workshops were a bit difficult to follow…my Zoom link was not stable…wish we could have face-face workshops…needed to learn quickly how to use Zoom…

Xolani: …had to learn how to use Zoom…I studied the guide… but still asked my friends and my child for help…I felt I was missing out…needed extra time to learn…

As was evident, Zoom workshops were a new form of learning for the participants. This new learning method required practice and collaborative engagement within the virtual COP for the participants to navigate this digital platform successfully. These participants exhibited characteristics of digital immigrants, and it was evident within this virtual COP that the 4IR impacts the role that higher education institutions play in preparing students for succeeding within a technologically advanced society. Also, during the period of the workshops, the Zoom platform and functions were revised to a minor extent; this resulted in the participants having to relearn and practise how to use specific revised techniques. This took place during the second workshop. As a result, considerable teaching and learning time was lost due to addressing technical issues during the second workshop.

Devices, data and resources for working within digital platforms are expensive

To use digital platforms, the participants required access to digital devices and data. Some participants experienced difficulties with this, as reflected in the following excerpts. Participants revealed that devices and data were expensive to purchase, especially as this was unforeseen at the beginning of the 2020 academic year.
The participating university offers only the contact mode of learning under normal circumstances; however, due to the COVID-19 pandemic, the university was required to provide lectures online or remotely via the use of digital platforms. As was evident, it was important for lecturers to be aware of their students' challenges concerning the availability of technology-based tools to engage with digital platforms when learning. This knowledge supported the lecturer when presenting the notions of the 4IR within the education environment. Through engaging within the virtual COP, all members were made aware of challenges experienced by members within the COP. The creation of the online data bank supported students with examples and resources for mathematics. Since these were free mathematics resources, this was of benefit to the virtual COP during this challenging pandemic. Based on the evidence provided, both the participating postgraduate mathematics education students and the lecturer needed to adapt quickly to this new pedagogical approach to achieve success within the contemporary 4IR lecture context.

Using digital platforms within the COVID-19 pandemic era is socially complex

The COVID-19 pandemic era resulted in many people working from home with their families around them; this created difficulties for the participating postgraduate mathematics education students, as captured in the following excerpts. Working and studying from home, especially during the COVID-19 lockdown conditions, created many challenges. Often family responsibilities needed to be addressed first before the participating postgraduate mathematics education students could start with academic work. As was evident, members of the virtual COP within this study experienced challenges attributed to the COVID-19 pandemic. Fortunately, the digital world has transformed education contexts that embrace notions of the 4IR. Teaching and learning were transformed using technology-based tools within the education context under study. The participating postgraduate mathematics education students and the lecturer did not have to engage within the virtual COP at the same time. Teaching and learning could occur successfully at any time using digital platforms. While members of the virtual COP within this study may not have been able to attend the online workshops, they were provided with the opportunity to view the uploaded content, workshops and videos at any time. In addition, the members of the COP could consult the resources in the online repository if they needed more assistance. After that, the members of the virtual COP within this study could engage with other members of the online community through WhatsApp and the discussion forums on Moodle to seek further guidance and assistance.

Using digital platforms for learning mathematics is time-consuming and uncomfortable

Learning how to use digital platforms was difficult and often took much time, as reflected in the following excerpts.

Anne: …took long for me to log onto to Moodle and follow the conversation…also when I posted a question…it took long for me to get an answer…

Nancy: …I used Moodle in my undergrad days…but uncomfortable to use.
It is hard to follow the conversations since there are so many and everything is sent to your email…I am looking at emails and Moodle… need to catch up… Nomsa: …waited long for the class to respond on Moodle…I needed help immediately…so I had to use WhatsApp to get help…this is when we started sharing maths resources with each other…this was useful… Rani: …Moodle is ok…but sometimes people take long to respond…need to wait until someone replies…it is limiting me… Siya: …it was tricky using Moodle…took time for me to find the conversation and then I did not know how to reply…this wasted much time… Xolani: …had a problem accessing Moodle…forgot my password…needed to set up a new one…so I had to catch up with the discussion…was stressful…I felt I was left behind…lucky for the WhatsApp chats…kept me informed… As was evident, the participants mentioned the challenges of using digital platforms. While the participants had used Moodle previously, they still experienced a few issues. When they needed a quick response, Moodle was tricky and uncomfortable to use and did not provide rapid feedback. It was evident in this study that the participants needed to engage within their virtual COP for support and guidance with the mathematics problems under study. As a result, the participants used the digital platforms available to them to communicate and engage with other members of the virtual COP quickly and easily. In addition, the online repository with resources and examples that was developed by the COP was useful when the participants needed added support. Lecturers who embrace the notions of the 4IR ensure that their students have access to various technology-based tools and digital platforms to enhance and support teaching and learning within the era of the 4IR. To further interrogate the results and to exhibit the significance of this study, a comprehensive discussion is included in the following section. Discussion The qualitative results provide evidence of postgraduate mathematics education students' experiences of using digital platforms during the COVID-19 pandemic era for the learning of mathematics. Firstly, the participants generally had positive experiences of using WhatsApp chats; however, it was evident that they experienced difficulties when using Zoom and Moodle. Still, it is important to note that the WhatsApp platform may create challenges for students when balancing online activities and academic planning, and may also distract students from finishing their assessments timeously (Mbukusa, 2018). However, based on the evidence, it was apparent within the virtual COP under focus that through collaboration and shared interest (Wenger, 1998), the participants managed to complete the mathematics academic writing, algebra and geometry tasks as discussed during workshops. Thus, these participants embraced the notions of a COP by focusing on sharing ideas and providing support to other online community members. The participating postgraduate mathematics education students within the virtual COP also developed an online repository with resources and examples that provided information regarding identifying learner misconceptions in geometry and assessing and providing feedback in algebra. This repository was available for any member of the virtual COP to access at any time.
Thus, communication and the sharing and discussion of ideas are essential to mathematics pedagogy, and it is important for mathematics teachers to become actively involved in adding to the development of knowledge (Lazarus & Roulet, 2013). In this study, the use of digital platforms allowed participants to engage actively within the virtual COP, and the community members shared ideas through individual and collaborative attempts (Mlotshwa & Chigona, 2018; Osterholt & Barratt, 2010). This was evident during the academic writing, algebra and geometry workshops, where the participants showed evidence of practising the mathematics tasks independently first to promote meaning and understanding. Only when the participants encountered challenges did they seek assistance from the virtual COP via the discussion forums or WhatsApp. The discussion forums on Moodle and WhatsApp created an added layer of mathematics support for members of the virtual COP. Thus, WhatsApp is a novel teaching method that can appeal to students and provide them with prospects for further learning (Mbukusa, 2018). The actions of the participants in this study exhibit the fundamental elements of learning: practice, meaning, identity and community within a COP (Wenger, 1998). The participants shared experiences of good practice and discussed their own meaning-making of the resources that were shared in the online data bank. This sharing of ideas and resources created an empowered online community which helped shape the identity of each member of the virtual COP. Similarly, Wenger and Wenger-Trayner (2015) proposed that members of a COP need to collaborate around ideas related to the content under study, which was evident in this study. The participants showed evidence of networking with their virtual COP members to learn mathematics together. The participants also exhibited evidence of embracing notions of the 4IR by engaging with technology-based tools and digital platforms, in addition to developing an online repository to learn mathematics. Similarly, Ertmer and Ottenbreit-Leftwich (2012) suggested that the effective incorporation of technology-based tools within education contexts would support learning. In this study, mathematics learning was supported and enhanced using digital platforms. Secondly, the participants indicated that the uploaded lectures, videos, recordings and online data bank were beneficial in that they could access information at any time. In this way, the participants were not compelled to engage with content, other members of the online COP and resources during a specific period or while they were in a certain location. This is important to consider since research (Jeffrey et al., 2014) has maintained that online pedagogy transcends the confines of time and place and removes obstacles, while empowering active collaboration between members within the COP under focus. Within the ambit of a COP, collaboration is supported so that learning is enriched by sharing and discussion (Wenger, 1998; Wenger & Wenger-Trayner, 2015). As was evident in this study, the participants embraced the notions of the 4IR by using digital platforms. The participants collaborated and discussed solutions for the algebra and geometry tasks within the virtual mathematics COP. They further collaborated to develop an online repository for learning mathematics. The lecturer formed part of the virtual COP by presenting course content and guiding the learning process.
It was evident that the implementation of digital platforms for learning mathematics fostered and developed supportive relationships within the virtual COP under focus (Mlotshwa & Chigona, 2018). Thirdly, while the participants mentioned that there were strengths in using digital platforms, they also indicated that they needed to practise and required training and time to prepare for using these digital platforms efficiently. They believed that practising with these platforms before being expected to engage within them formally (for example, the Zoom digital platform) would assist them in feeling more comfortable with participating in the online workshops. Moreover, practising with the online platforms would support and equip students to engage with each other and with the content being discussed. Similarly, research (Thieman, 2008) maintains that for students to achieve success with digital pedagogy, they need to be prepared adequately, since the use of technology within the 4IR education context is important (Boholano, 2017). Students need to know the nuances of the online class and what is expected of them within the online course. To assist with achieving preparedness in this respect, an orientation session needs to be included before engaging with online pedagogy (Mensch, 2017). Fourthly, in this study, it was evident that limited access to digital devices and data for the Internet influenced the participants' use of online learning and digital platforms (Klopfer et al., 2006). The participants believed that they needed to have the basic requirements for digital pedagogy (for example, digital devices and access to data and the Internet) to achieve success with digital learning. Furthermore, the postgraduate mathematics education students indicated that the virtual COP fostered active interaction and supported the acquisition of new knowledge using various digital platforms. This result resonates with research by Jeffrey et al. (2014), which maintained that students need to be encouraged to reflect on their learning so that they are provided with diverse new learning experiences and resources. As was evident in this research, using digital platforms, the participants were encouraged to become responsible for their mathematics learning. For example, members of the virtual COP within this study came up with the idea of sharing mathematics resources online to support their learning of mathematics. This idea escalated into the creation of an online mathematics data bank. This showed evidence of the participants independently taking responsibility for their own mathematics learning. This independence promoted their ability to learn mathematics at any time and place. Similarly, the connection between students and online platforms does not necessarily take place in a physical education context; it is ubiquitous owing to their access to the Internet (Bell, 2011). Also, to promote student success within the 4IR, education institutions need to adjust traditional pedagogy to prepare students suitably (Butler-Adam, 2018). Finally, the challenge of working from home during the COVID-19 pandemic era was also a significant difficulty. It led to many social and academic issues for the participating postgraduate mathematics education students. The advantages of collaboration and communication offered by using digital platforms within the context of COVID-19 were significant.
Lecturers need to note that students are sociable people and require feedback and support from peers and the lecturer during these extraordinary circumstances. Beyond communication about the specific mathematics learning challenges being discussed, students under lockdown conditions within the COVID-19 pandemic may not have other people who can relate to what they are experiencing within their contexts. A virtual COP builds feelings of trust and camaraderie, and supports the online community in curbing the perception of social isolation (Schieffer, 2016). Thus, for emotional, health and social benefits, students need to discuss and interact with their peers and lecturer during this unprecedented era. Conclusion This study was conducted to respond to the question: what are postgraduate mathematics education students' experiences of using digital platforms for learning within the COVID-19 pandemic era? The article concludes with possible suggestions for lecturers who wish to use digital platforms for the learning of mathematics. These suggestions are based on the experiences of the participants as uncovered in this study. Firstly, the difficulties of using digital platforms within the era of COVID-19 are vital for lecturers to note. To alleviate this issue, it is important for the lecturer to ensure that students have access to digital devices and data for using the Internet. Lecturers need to establish that the resources being used are easy to use, readily available, inexpensive and data efficient. Also, students need to be provided with orientation sessions on the use of digital platforms to better prepare them for using these platforms effectively and successfully. Secondly, the challenges experienced by the participants in this study are important to note. The participants indicated that they needed space and time to practise and engage with the online mathematics examples first before interacting within the online community. Lecturers need to consider providing students with the content and material for discussion in advance of the online lecture or workshop. This allows students to study the content independently first. Also, the participants in this study valued the collaborative online discussions, which, for example, focused on identifying learner misconceptions in geometry. The participants reflected on and discussed their experiences with their virtual COP, and through this reflection on experiences, collaborative ideas revolving around teaching to avoid misconceptions in geometry were circulated online. Also, the participants shared examples and resources for assessing algebra, providing feedback in algebra and identifying learner misconceptions in geometry with their virtual COP. Through this sharing of ideas and resources, the participants developed an online mathematics data bank for identifying learner misconceptions in geometry and assessing and offering feedback in algebra to mathematics learners. This online mathematics data bank was hosted on Moodle and was accessible to any participant within the virtual COP. Thus, as we embrace the 4IR and as we engage within digital platforms, lecturers need to create similar virtual COPs to engage with each other and to develop data banks and repositories that support mathematics teachers.
These online repositories provide an added layer of support to address mathematics teaching and learning challenges, and due to the nature of digital platforms, these online repositories and data banks are accessible at any time. Thirdly, when preparing for online pedagogy during COVID-19 pandemic conditions, lecturers need to acknowledge that students have family responsibilities to deal with during these extraordinary circumstances. Lecturers need to recognise that students are sociable people and need support, encouragement and feedback from other members of the online community. The advantages of the collaboration inspired by using digital platforms within the era of the pandemic are significant. Students may need to use these digital platforms for social and emotional support during this unprecedented time to deal with family and academic issues. Finally, this study is not without limitations. The study was located within one teacher education context during the COVID-19 pandemic era. It was conducted to explore postgraduate mathematics education students' experiences of using digital platforms for learning within the COVID-19 pandemic era. Given the importance of the study, further systematic studies based on other teacher education institutions, nationally and globally, could complement it and provide additional insights into the topic. New empirical research could also test the strength of the research process and instruments, and may assist in determining further noteworthy experiences of postgraduate mathematics education students on the use of digital platforms for the learning of mathematics. The experiences of postgraduate mathematics education students, as discussed in this article, focus on their use of digital platforms for learning within the COVID-19 pandemic era. The implications, results and limitations as discussed in this article add new knowledge to the field. Also, this new knowledge is of benefit to lecturers globally as we embrace the use of digital platforms for the learning of mathematics, especially within the context of the COVID-19 pandemic era.
Analysis of the Effects of Arts and Crafts in Public Mental Health Education Based on Artificial Intelligence Technology Arts and crafts, with their widely varying styles shaped by factors such as era, region, technology, culture and nationality, have developed over an extremely long period, and only through continuous accumulation, development, and innovation have they gradually reached their present form. Public mental health education is currently the main way to promote the psychological health of the public in colleges and universities. A sound personality and good self-awareness are among the important standards of psychological health and one of the important tasks of mental health education. As an effective psychological testing and treatment method, arts and crafts analysis is an important part of mental health education; it plays a role in improving the level of self-awareness and promoting the integration of personality. Arts and crafts analysis has advantages in mental health education and group counseling that cannot be replaced by other verbal methods and activities, so it can be used in mental health education courses, for example in teaching self-awareness. In order to combine the development of arts and crafts with the development concept and promotion ideas of public mental health education, this article proposes an analysis of the role of arts and crafts in public mental health education based on artificial intelligence computing, to enhance the development of arts and crafts from a new perspective and to seek the inheritance and innovation of arts and crafts and public mental health education in the new historical period. The validity of the proposed method is demonstrated on the relevant dataset. Introduction Arts and crafts are a unique art discipline that has a long history in China and has been explored and developed over a long period of time to become a relatively mature and complete discipline in the art field. Art and innovation have always been complementary to each other, and it is the continuous innovation and optimization from simple to complex and from rough to fine that has led to the development of arts and crafts design, which has evolved into an important factor affecting the daily production and life of human beings. In the case of arts and crafts design, its development process is the process of innovation [1][2][3]. Only by keeping pace with the times and synchronizing with human aesthetic concepts can arts and crafts design achieve sustainable development. First, innovation in arts and crafts design is conducive to strengthening the infectious power of artworks. In the design of arts and crafts works, designers should not stick to the rules and design according to fixed templates. Innovative arts and crafts design should strengthen the artistic charm and infectious power of arts and crafts works, in order to win the favor of most audience groups [4][5][6][7]. Secondly, the innovation of arts and crafts design is conducive to meeting people's spiritual needs. At present, arts and crafts works are popular among people, which makes designers deeply feel and realize the huge potential of arts and crafts design work, prompting them to complete the design work independently and actively.
However, this change in thinking has also directly increased the pressure of designers' work, and people's requirements for arts and crafts design have gradually begun to shift towards deep spiritual needs. In this regard, designers must pursue targeted innovation and improvement to meet people's spiritual needs to the greatest extent. Again, the innovation of arts and crafts design is conducive to optimizing the characteristics of artworks. In the process of arts and crafts design, the introduction of innovative thinking can prompt designers to analyze the value of arts and crafts works from different perspectives and levels of deep consideration, to ensure that arts and crafts works have both aesthetic and functional characteristics. The role of arts and crafts in public mental health education is shown in Figure 1. Art education can cultivate people with a sense of beauty, allowing them to see beauty in the most ordinary things; it also enables them to know how to use ordinary things around them to create beauty, giving them a positive and happy attitude towards life, and it enables them to face the hardships of life with a sense of beauty. As an important component of art education, arts and crafts serve precisely the purpose of self-healing and nurturing the healthy growth of the mind through the transfer and transformation of creativity and aesthetic experience, using the perception of beauty in their unique nonverbal expression [8][9][10]. Therefore, we should actively explore the psychological healing function of arts and crafts, that is, the healthcare effect of arts and crafts on the healthy psychology of the subject. The origin of art therapy can be traced back to prehistoric times, when humans felt fear and panic about many unknown phenomena of nature and of man himself, so they left many murals in caves to express their awe and relieve their inner pressure. Today, modern medical psychologists have shown through their research that art has a significant therapeutic and healthcare effect in regulating psychological anxiety and emotional disorders in modern people. They believe that art and its educational activities are a kind of panacea for maintaining and improving physical and mental health, and a quite effective form of nonverbal psychotherapy. Through arts and crafts, calligraphy, seal engraving, sculpture, architecture, etc., art and its educational activities explore, express, and create beauty, so that the subject can feel beauty, appreciate beauty and love beauty in the process of education, cultivate beautiful ideas, and use the psychological suggestion of beautiful ideas to create a lively and pleasant spiritual realm, so that life is full of health and vitality, and the mind is calm and peaceful [11][12][13][14]. The suggestiveness and creativity of this good intention can fully mobilize the psychological potential of individuals, so that their physiological functions show a good emotional response, positively stimulating the cerebral cortex and the central nervous system and promoting a more pleasant and robust body and mind. As a special group of young people carrying high expectations from family and society, the contemporary public is facing more opportunities and at the same time is under greater psychological pressure and challenges. In this sense, the public is a high-risk group for mental health problems. According to a national sample survey, 23% of the public has different degrees of mental health disorders or psychological abnormalities.
Growing adults are more likely to experience anxiety and frustration due to their unstable state of mental activity, incomplete cognitive structure, lack of synchronization between physiological and psychological maturity, and lack of identification with society and family, and are therefore more likely to have psychological problems. If temporary psychological barriers are not eliminated in time, they will produce adverse reactions, affect the healthy development of the psyche in the future, and may even lead to psychological disorders that are difficult to remedy later [15]. Judging from the development of arts and crafts in recent years, the use of artificial intelligence in the field of arts and crafts has gradually become widespread. Especially in the process of arts and crafts design education and teaching, the use of artificial intelligence helps to improve the aesthetic level and the quality of teaching. Arts and crafts built on artificial intelligence can present static knowledge in a dynamic mode, helping the public to understand art- and design-related knowledge more intuitively. In addition, arts and crafts design has humanistic and artistic characteristics, and the effective combination of big data, AI technology, and VR technology in the age of artificial intelligence can promote the cultivation of public aesthetic consciousness, broaden public thinking, and guide people to establish correct mental health concepts. Therefore, in the era of artificial intelligence, arts and crafts design should change its own concept, integrate the various technologies and advantages involved in artificial intelligence into the process of arts and crafts design, optimize teaching resources, innovate the teaching mode, create a good artistic atmosphere for arts and crafts, guide the public to feel and experience the beauty of art and design, and realize the innovation and reform of art and design teaching in colleges and universities [16]. The main contributions of this article are as follows. Firstly, it argues that in arts and crafts design, the healthcare role of arts and crafts for public mental health should be actively explored; through effective aesthetic penetration and aesthetic deepening, education and self-education that integrate knowledge, emotion, intention, and action should be implemented, and rich arts and crafts activities should be carried out to cultivate the healthy aesthetic consciousness, aesthetic emotion, and aesthetic behavior of the subject, so as to develop a sound psychology and personality. Secondly, this article proposes a model for analyzing the role of arts and crafts in public mental health education using artificial intelligence technology. The deep learning neural network model can analyze arts and crafts for public mental health education quickly and accurately. The experiments demonstrate the effectiveness of the proposed method and provide a feasible solution for analyzing the role of arts and crafts in public mental health education rapidly and in batches. Arts and Crafts Design. The arts and crafts industry, with its profound cultural heritage and exquisite craftsmanship, has opened a new research direction and creative field for the current cultural and creative industries. With a far-sighted view on the inheritance and innovation of traditional arts and crafts, we bring life and vitality to the development of arts and crafts by using the classics as the basis of inheritance and innovation.
How to rejuvenate traditional arts and crafts is a topic that has been explored in the past few years. Designing arts and crafts that meet the needs of contemporary people, match the aesthetic values of the public, and have practical functions has become a necessary condition for the transformation and upgrading of arts and crafts. The innovation of arts and crafts requires several complementary aspects, such as the improvement of the environment for the development of arts and crafts, the change of design concept, the cultivation of the innovative spirit of arts and crafts inheritors, the integration of arts and crafts with science and technology, and the cross-border cooperation of arts and crafts [17]. These changes are urgent, and only the collision of innovative thinking can provide new vitality, so that our life can be splendid because of innovation, and arts and crafts can be inherited because of innovation. The improvement of the environment for the development of arts and crafts is the external condition for the survival of arts and crafts, as the social environment and the economic environment interpenetrate each other. If we can change our perspective and our existing habitual way of thinking, we may be able to open new horizons. Unlike pure art, which is confined to the upper class and the literati, arts and crafts are directly related to society and the people and are closely related to the social environment. A prosperous socioeconomic environment is naturally conducive to the development of arts and crafts and provides a material basis for their development. Only arts and crafts that penetrate deeply into people's lives can develop harmoniously in the social environment, so that their artistic personality can gain vitality and win people's consensus [18][19][20][21]. Nowadays, Chinese elements have become a hot spot of attention on the world stage, and Chinese arts and crafts, with their profound cultural heritage, rich historical connotations, and regional characteristics, are favored by people from all over the world; arts and crafts, as one of these elements, perform their due function of cultural dissemination. The last decade has been a golden decade for the development of the arts and crafts industry. Consumers' demand for arts and crafts products has become diversified and multileveled, thus driving the rapid development of the industry and providing a wide space for China's arts and crafts industry to achieve higher economic and cultural value. No matter what industry a designer belongs to, deep cultural cultivation is necessary, and the designer's taste directly affects his works. It is important for designers to open their eyes; traveling abroad, attending industry exhibitions, and using the Internet are useful ways to do so [22,23]. An excellent designer should learn a wide range of knowledge, consciously accumulate knowledge, and develop the ability to feel beauty, so as to draw on it when designing. A designer is different from a craftsman or an artist. The creation of an artist is to a large extent a personal act, and he can create works expressing his own ideas at will. The designer's creation is a social act: holding the concept of human-oriented design and putting the needs of consumers first, his works are accepted by the public in daily life after becoming products. In addition, the designer should also have a sufficient understanding of the whole process of the product from design and manufacturing to the market.
At present, corporate designers are subject to greater constraints because companies pursue economic benefits first and personalization later. Public Mental Health Education. Clinical psychology research shows that emotions dominate health. Maintaining good moods and stable emotions is one of the forces most helpful to physical and mental health in the human body. As an important part of arts and crafts education, the emotional factor is always present throughout aesthetic and creative activities, and it can be said that it is difficult to produce true beauty without emotion. Healthy and noble emotions can balance many aspects of an individual's mental activity, enabling them to treat all kinds of pressure with a calm and friendly attitude, express their emotions reasonably, and gain inner peace and positive motivation for life [24][25][26]. Nourishing the heart with beauty. Perceiving and appreciating beauty is the foundation and key to aesthetic education, its very core. Without this, emotions will remain indifferent to any thing of beauty. The intake of profound aesthetic feelings requires going deep into the object of beauty, or the environment it is in, to observe, experience, and comprehend. Therefore, the public is often led into nature to observe, describe, write, collage, or imagine, gaining firsthand experience and then savoring it, guiding the aesthetic mind to gain insight into nature, feel the spirit of nature, touch the truth of nature, perceive the beauty of all life, reflect on the ugliness and evil in the world, and nourish a heart of beauty, so as to nurture the love of beauty. Arts and crafts education is a kind of emotional education, which mobilizes the various psychological functions of the subject through beautiful things and sublimates emotions, so that through rich inner experience, the subject is psychologically moved, emotionally resonant, and temperamentally cultivated. When many people look at a picture, if it is an ink-painted landscape, they will feel a sense of magnificence in their hearts; if it is a gold-blue flower and butterfly picture, they will feel a sense of beauty. Therefore, if a landscape picture is hung in a hall, people in the hall will feel more solemn and respectful; if a flower and bird picture is hung in a room, people in the room will feel happier. Plato also thought that we should look for competent artists to portray the beautiful aspects of nature, so that our young people, like those living in a windy and warm area where everything around them is good for their health, will be exposed to beautiful works every day, like breathing a breeze from a secluded realm, absorbing their good influence, so that they will unconsciously be cultivated from childhood. For the love of beauty, cultivate the habit of integrating beauty into the mind. Excellent arts and crafts works can inspire the viewer's empathy through the sensual and changing forms of artistic composition, infecting the viewer, thus generating a longing and aspiration for beauty, a spurning and despising of ugliness, and a gradual entry into the realm of truth, goodness, and beauty. This sublimation of emotion can make people feel beautiful things freely and happily according to their inner desire, accept the baptism of beauty, and develop the love of beauty [25]. Enriching activities for a healthy personality. One of the most important tasks of upbringing is to make people governed by form even in their purely natural state of life.
The moral man can only develop from the aesthetic man and cannot arise from the state of nature. Immortal artistic creation reflects the author's conception of art, beauty, and love for life. The process of appreciating and creating beautiful things often sublimates human emotions, and this sublimation will play a subtle role in regulating human behavior and prompting it to become moral. Therefore, in arts and crafts education, we should make good use of all the resources of beauty, organically penetrate the inner world and real life of people, form a conscious rational force, and strengthen aesthetic experience and aesthetic training through various forms of arts and crafts activities, so as to shape the healthy and complete personality of the subject. Practicing the eye for beauty. Beauty is everywhere; for our eyes, it is not a lack of beauty, but a lack of discovery. But a pair of eyes good at finding beauty needs to be honed for a long time. Only when the visual object causes physiological and psychological pleasure does perception become associated with aesthetics. Arts and crafts education trains the perceiving subject to select different perspectives in specific aesthetic activities, to use symmetry, balance, rhythm, rhyme, and other laws of beauty to observe and analyze objects, to raise daily perception to aesthetic perception, and to develop a pair of "aesthetic eyes," that is, to move from unconscious viewing of nature to conscious, active, and selective observation. Observing things with the "aesthetic eye" can fully mobilize the subject's imagination, association, emotion, and other psychological factors, and consciously apply the laws of beauty to test and feel the beauty of things, so as to grasp the original nature of objects. Artificial Intelligence Technology. The technology for recognizing the style and mental health state of arts and crafts emerged around the end of the 20th century, mainly realizing style migration through image texture generation techniques. Research on image texture required researchers to build models manually, the core idea being to generate texture through statistics of local image features; without manual construction, models could not be built at all, and one model could only handle one style or scene [26][27][28]. In addition, computer computing power was not strong at that time, so the development of arts and crafts style and mental health state recognition technology was very slow. The predecessor of the convolutional neural network is the visual cortical map created by Hubel and Wiesel by recording the brain feedback formed by stimulating a cat in a specific mode. LeNet-5 formed the prototype of the contemporary convolutional neural network, and building on the LeNet-5 model, the convolutional neural network has acquired a more systematic definition and precise structure through the work of researchers. Convolutional neural networks are a class of feedforward neural networks with deep structure and convolutional computation, built after the biological visual perception mechanism, which are widely used in image recognition, behavioral cognition, pose estimation, and natural language processing because of their ability to learn from data stably.
Convolutional neural network-based arts and crafts style and mental health state recognition, in particular arts and crafts style migration, is a special application of the convolutional neural network (CNN) in the field of computer vision. It fully demonstrates the representation learning ability of convolutional neural networks, which can learn features and the process of extracting them, avoiding the trouble of manual feature extraction. A CNN consists of an input layer, convolutional layers, excitation (activation) functions, pooling layers, and fully connected layers. The convolutional layer is an important part of the CNN and is used to extract feature values. Different convolutional kernels can extract different features [29,30]. The lower convolutional layers can only extract low-level features such as edges, lines, and corners, while the higher layers can use the lower features to obtain more complex features. The pooling layer performs a down-sampling operation after feature extraction by the convolution kernel; it is mainly used to perform feature dimensionality reduction and improve computational speed by compressing the number of data and parameters, and it can control overfitting and improve the robustness of the network. Based on the published structural model, the algorithm is optimized for both style and content by introducing the target content image based on texture synthesis and modifying the loss function to combine the style of any one image with the given content, forming an image with the characteristics of an artistic style. The visual processing is carried out by training a multilayer convolutional neural network so that the computer discriminates and learns the artistic style. However, it is evident from the generated images that some of the image content is distorted and details are lost; moreover, training the convolutional network is time-consuming and the degree of migration cannot be controlled. Subsequently, the control of details in arts and crafts style migration was enhanced, but there was still no control over the image content. After that, the Fast Neural Style approach improved on the long training time of the original style migration; once each style model is trained, the GPU usually only needs to run for a few seconds to generate the corresponding style migration result, but the quality of the generated images was still not improved. With the continuous development of technology, arts and crafts style and mental health state recognition has become more and more mature, but the problems of image distortion and loss of details still exist, and the main breakthrough point for future convolutional neural network-based arts and crafts style and mental health state recognition is to obtain synthetic images with the best matching degree and lower loss. Model Architecture. The region-difference stylization model used in this article is shown in Figure 2. After the content image passes through the DeepLabV3 semantic segmentation network, a segmentation map with n semantic regions is generated, and a different style is adopted for stylization in each of these n regions. Based on the neural style conversion algorithm, a pretrained VGG-16 neural network is used to calculate the style loss and content loss to ensure the stylization effect while reducing the computational burden.
In the loss function part, the content features are represented by the high-level features of the VGG network, which are used to retain the spatial structure information of the content images. Image Semantic Content Representation. The network is generated based on a VGG network, which is used for image object recognition and localization. The structure of the feature extraction model is shown in Figure 3. A feature space extracted from a normalized 19-layer VGG network (consisting of 16 convolutional layers and 5 pooling layers) is used. The network is normalized by weight scaling so that the average activation of each convolutional filter over images and positions is equal to 1. This allows rescaling the VGG network without changing its output, because VGG contains only rectified linear activation functions and neither normalizes nor pools over the feature maps. Let $p$ and $x$ be the original image and the generated image, respectively, and let $P^l_{ij}$ and $F^l_{ij}$ represent their feature activations in layer $l$. Then, the squared error loss between the two feature representations is defined as follows:

$$\mathcal{L}_{\mathrm{content}}(p, x, l) = \frac{1}{2} \sum_{i,j} \left( F^l_{ij} - P^l_{ij} \right)^2 \qquad (1)$$

With respect to the activations in layer $l$, the derivative of the loss function is given as

$$\frac{\partial \mathcal{L}_{\mathrm{content}}}{\partial F^l_{ij}} = \begin{cases} \left( F^l - P^l \right)_{ij} & \text{if } F^l_{ij} > 0 \\ 0 & \text{if } F^l_{ij} < 0 \end{cases} \qquad (2)$$

The gradient with respect to the image $x$ can then be computed using standard error backpropagation. Thus, the initially random image $x$ can be changed until it generates the same response in a particular layer of the convolutional neural network as the original image $p$. When convolutional neural networks are trained for object recognition, they form image representations in which the object information becomes increasingly explicit along the processing hierarchy; the feature responses in the deeper layers are therefore referred to as the content representation. Artistic Style Representation. To obtain a stylized representation of the input image, a feature space is used to capture the texture information of the image. This feature space can be built on top of the filter responses in any layer and consists of the correlations between the different filter responses. These feature correlations are represented by the Gram matrix $G^l$, where $G^l_{ij}$ is the inner product between the vectorized feature maps $i$ and $j$ in the $l$th layer:

$$G^l_{ij} = \sum_k F^l_{ik} F^l_{jk} \qquad (3)$$

A stable, multiscale representation of the input image is obtained by considering the feature correlations of multiple layers. The representation captures the texture information of the image, but not the global arrangement. The information captured by these style feature spaces constructed on different layers of the network can then be visualized by constructing images that match the style representation of a given input image. This is achieved by using gradient descent from a white noise image to minimize the mean squared distance between the Gram matrix of the original image and the Gram matrix of the image to be generated. Let $a$ and $x$ be the original image and the generated image, and let $A^l$ and $G^l$ denote their style representations in layer $l$, respectively. Then, with $N_l$ the number of feature maps in layer $l$ and $M_l$ their size, the loss contribution of layer $l$ can be expressed as

$$E_l = \frac{1}{4 N_l^2 M_l^2} \sum_{i,j} \left( G^l_{ij} - A^l_{ij} \right)^2 \qquad (4)$$

and the total style loss function has the following form:

$$\mathcal{L}_{\mathrm{style}}(a, x) = \sum_l w_l E_l \qquad (5)$$

where $w_l$ is the weighting factor of layer $l$. With respect to the activations in layer $l$, the derivative of $E_l$ can be expressed as

$$\frac{\partial E_l}{\partial F^l_{ij}} = \begin{cases} \dfrac{1}{N_l^2 M_l^2} \left( (F^l)^{\mathsf{T}} \left( G^l - A^l \right) \right)_{ji} & \text{if } F^l_{ij} > 0 \\ 0 & \text{if } F^l_{ij} < 0 \end{cases} \qquad (6)$$

The gradient of $E_l$ with respect to the pixel values $x$ can be easily calculated using standard error backpropagation.
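To make the loss construction above concrete, the following minimal NumPy sketch computes the Gram matrix of Equation (3) and the per-layer content and style losses of Equations (1) and (4) for already-extracted feature maps; the array shapes and function names are illustrative assumptions, not part of the original implementation.

```python
import numpy as np

def gram_matrix(F):
    """Gram matrix G[i, j] = sum_k F[i, k] * F[j, k] for vectorized
    feature maps F of shape (n_filters, map_size), as in Equation (3)."""
    return F @ F.T

def content_loss(F, P):
    """Squared-error content loss of Equation (1) between generated
    features F and original features P at one layer."""
    return 0.5 * np.sum((F - P) ** 2)

def style_layer_loss(F, A):
    """Per-layer style loss E_l of Equation (4), comparing the Gram
    matrices of the generated features F and the style features A."""
    n_l, m_l = F.shape                        # number of filters, map size
    return np.sum((gram_matrix(F) - gram_matrix(A)) ** 2) / (4.0 * n_l**2 * m_l**2)

# Illustrative check with random "feature maps" of 64 filters on a
# 32x32 map; in practice F, P and A would come from VGG activations.
rng = np.random.default_rng(0)
F = rng.standard_normal((64, 32 * 32))
P = rng.standard_normal((64, 32 * 32))
print(content_loss(F, P), style_layer_loss(F, P))
```

The full objective is then formed by weighting these terms with the factors α, β, and w_l, as described next.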
Style Recognition. To transfer the style of the artwork to the photograph, a new image is synthesized that matches both the content representation of the photograph $p$ and the style representation of the artwork $a$. Therefore, the distance of the feature representation of a white noise image from the content representation of the photograph in one layer, and from the style representation of the artwork defined over multiple layers of the convolutional neural network, is jointly minimized. The minimized loss function is

$$\mathcal{L}_{\mathrm{total}}(p, a, x) = \alpha \, \mathcal{L}_{\mathrm{content}}(p, x) + \beta \, \mathcal{L}_{\mathrm{style}}(a, x) \qquad (7)$$

where $\alpha$ and $\beta$ are the weighting factors for content and style reconstruction, respectively. The gradient with respect to the pixel values can be used as input to a numerical optimization strategy; L-BFGS is employed here. To extract image information in comparable proportions, the style image is always resized to the same size as the content image before computing the style features. Mental State Recognition. In the interaction layer, $n$ auxiliary $k$-dimensional vectors are initialized for the input layer $x$. In the embedding layer, these vectors are multiplied by their corresponding features to obtain $v_i$; the $n$ sets of $v_i$ are then fed into the interaction layer, whose output $I$ is obtained by the interaction mode calculation. The number of neurons $w$ in the interaction layer is determined by four interaction calculation modes; the number of interactions is $I_{DC}$, and in one of these modes the interaction layer multiplies the corresponding positions of the vectors $v_{ik}$ and sums the resulting products to form the output. The model studied in this article takes as input the features selected by the image style transformation, that is, $x = (x_1, x_2, \ldots, x_i, \ldots, x_n)$. These features are then concatenated with the output of the FM feature-combination module and fed into the hidden layers. The hidden part consists of 5 layers with 128 neurons per layer and the ReLU activation function; fully connected layers are used between the layers. Since the use of many fully connected layers and the large number of parameters between them make the model complex, dropout is used when training the model. Dropout randomly turns off some neurons so that individual features are not overlearned by any single neuron, thus improving the robustness of the model; the fraction of neurons turned off by dropout in FM-DNN is 0.5. For the 4-class classification task in this study, the number of neurons in the output layer is set to 4, and the mental state recognition result $\hat{y}$ is obtained through the softmax layer. FM-DNN uses the cross-entropy loss

$$L = - \sum_i y_i \log \hat{y}_i \qquad (8)$$

where $y_i$ is the one-hot encoded label of the sample and $\hat{y}_i$ is the predicted probability of category $i$ from the network. Stochastic gradient descent is used to train the parameters of the network, with the batch size set to 128, the maximum number of training rounds to 10, and the learning rate to 0.001. To prevent overfitting, early stopping is used; that is, training ends when the accuracy on the validation set has not risen for two rounds.
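As a rough illustration of the DNN branch just described (raw features concatenated with the FM module output, five hidden layers of 128 ReLU units with dropout 0.5, a 4-neuron softmax output, cross-entropy loss, SGD with learning rate 0.001, and early stopping after two stagnant rounds), a minimal Keras sketch might look as follows. The FM interaction output is treated here as a precomputed input, since the four interaction formulas are not fully reproduced in the text; all variable names are illustrative assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_fm_dnn(n_features, n_fm_outputs):
    """Sketch of the DNN branch of the FM-DNN classifier described above."""
    raw = keras.Input(shape=(n_features,), name="raw_features")
    fm = keras.Input(shape=(n_fm_outputs,), name="fm_interactions")  # assumed precomputed FM output
    x = layers.Concatenate()([raw, fm])
    for _ in range(5):                        # five hidden layers of 128 ReLU units
        x = layers.Dense(128, activation="relu")(x)
        x = layers.Dropout(0.5)(x)            # dropout fraction 0.5
    out = layers.Dense(4, activation="softmax")(x)  # 4-class mental state output
    model = keras.Model([raw, fm], out)
    model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.001),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Early stopping: end training when validation accuracy has not risen for
# two rounds; batch size 128 and at most 10 epochs, as stated in the text.
early_stop = keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=2)
# model.fit([X_raw, X_fm], y_onehot, validation_split=0.1,
#           batch_size=128, epochs=10, callbacks=[early_stop])
```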
Experiment Setup. The algorithms in this article were evaluated on a nonpublic dataset from a domestic arts and crafts research institute in China. The proposed model is used to analyze the role of arts and crafts in public mental health education, and experiments are conducted on a variety of styles of arts and crafts to identify the symbolic features in the works and to classify the images. The training set contained 4878 images and the test set 2500. In order to enlarge the training set and improve the training of the model, the training set is expanded fourfold by rotating each arts and crafts image in the training set by 90, 180, and 270 degrees. The Adam optimizer is used, the batch size is set to 16, the number of training rounds is 20, and the learning rate is 0.0001 in the first 10 training rounds, decaying linearly from 0.0001 to 0 in the second 10 rounds. The experimental environment is shown in Table 1. Keras is a higher-order application programming interface built on top of a symbolic mathematical library that supports tensor computation. Keras can be used to rapidly build and train networks on either the CPU or the GPU; GPU training requires an NVIDIA graphics card and a properly configured environment, but greatly improves computational speed and efficiency. The performance improvement and loss convergence during the training process are shown in Figures 4 and 5. Experimental Results. The recognition rate of this network is 64.73%, with a total of 58,423 parameters; the results for each emotion are shown in Table 2. After incorporating the residual blocks, the recognition rate of the method in this article reaches 70.89%; the recognition rate and loss function for each emotion are shown in Table 3. We introduce spatial residual connectivity, referred to as RN, into the separable convolutional network and keep all other conditions unchanged. From Tables 2 and 3, we find that the separable convolutional network with RN shows significant improvement compared to the method without residuals. This is because the residual structure selects the appropriate neighbor range for each node and avoids the lack of differentiation between nodes when the network stacks multiple layers. By introducing cross-domain spatial residual convolution, the spatial-temporal information can be enhanced, and the residuals also solve the problem of the superposition of two convolutions. Next, we first implemented the simpler LeNet-5 model structure; its accuracy on the FER2013 dataset was 58.3% with 1.168 million parameters. Then, we performed classification experiments using VGG-16, with an accuracy of 68.81% on the FER2013 dataset and 14,754,000 parameters. In contrast, our separable convolutional network used only about 58,000 parameters; its accuracy reached 64.17% without the residual network, and adding the residual network brought the average recognition rate to 70.89%, an improvement of 6.72%; for reference, manual (human) classification reaches an average accuracy of about 65% ± 5% on this dataset. Better results were thus achieved with fewer parameters and less training time than the other methods. Among the recognition results, the highest recognition rate is achieved for the happy emotion, reaching 88.68%.
The second highest is the normal emotion, with a recognition rate of 77.76%, and the third is surprise, at 73.61%, while the emotions with relatively low recognition rates are disgust, fear, and sadness. The facial changes of happiness and surprise are more exaggerated, whereas disgust and sadness involve smaller facial changes; happiness and surprise have low similarity with other expressions, while disgust and anger are more similar (both involve frowning and grimacing) and are therefore easy to misjudge. In addition, the number of disgust expressions in the training set is very small, only 436, which is not enough to learn sufficient features and makes the results less satisfactory, while the number of happy samples is very large, which explains the high accuracy for that class. To verify the superiority of this network, we also conducted experiments on FER2013, and the comparison results are shown in Figure 6. In this article, we not only compare the accuracy of the Bi-LSTM and CNN models, but also train the classical RNN and LSTM models to help analyze the advantages, disadvantages, and effectiveness of the models, as shown in Figure 7. The Bi-LSTM achieves quite good results compared to the LSTM because it can analyze text in both directions of its context, and it also has 5.52% higher accuracy than the CNN model; we attribute this to the fact that the CNN does not achieve high accuracy on long segments, owing to the varying sentence lengths in the dataset. The RNN is significantly less accurate than the other three models due to vanishing and exploding gradients, and the LSTM is better than the RNN but still cannot make accurate predictions because it cannot take the following context into account. These results indirectly illustrate the conclusion that the daily textual sentiment expressions of the Chinese public are scattered across text segments, so there is a high possibility of oversight if manual analysis is performed; the 95.55% accuracy of the Bi-LSTM better supports the batch processing of text. As can be seen from Figures 8 and 9, the F1-scores of the six methods for the no-problem and moderate mental health categories did not differ much; for the mild category, the F1-score of the FM model was much worse than those of the other models, while FM-DNN (IDA) showed better classification; for the severe category, the F1-scores of FM and DNN were lower, indicating that using only FM or DNN trained on the 22 features selected by PC k-means (data containing 13 important dimensions such as interpersonal stress, academic stress, and family education) is not sufficient for mental health identification. This may be because FM cannot effectively learn the complex nonlinear relationship between mental health and its related features, or because the DNN model classifies each feature independently of the others and lacks consideration of feature combinations in mental health identification. In contrast, the classification performance of FM-DNN in this article shows a significant advantage over the control group, which not only shows that the diversity of dimensions makes an important contribution to mental health recognition, but also that FM-DNN improves the diversity and comprehensiveness of the prediction dimensions by using FM to effectively combine mental health features, thus enhancing the classification performance of the model. Compared with the other models, FM-DNN, especially the IDA variant, can identify testers with severe mental health problems more accurately, which is valuable in the screening of mental health disorders. Therefore, the comparison with the control models shows that the DNN model designed in this study exhibits good classification ability, and the introduction of FM also has a significant effect on the optimization of the network structure.
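For concreteness, a depthwise-separable convolution block with a spatial residual (shortcut) connection, of the kind whose effect is compared in the RN experiments above, could be sketched in Keras as follows; the filter counts and layer arrangement are illustrative assumptions rather than the exact architecture used in this article.

```python
from tensorflow import keras
from tensorflow.keras import layers

def separable_residual_block(x, filters):
    """Two depthwise-separable convolutions with a residual shortcut,
    sketching the kind of RN structure evaluated above."""
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)  # 1x1 conv matches channel count
    y = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.SeparableConv2D(filters, 3, padding="same")(y)
    y = layers.Add()([y, shortcut])           # residual connection
    return layers.Activation("relu")(y)

# Illustrative use on 48x48 grayscale inputs (the FER2013 image size).
inputs = keras.Input(shape=(48, 48, 1))
h = separable_residual_block(inputs, 32)
h = layers.GlobalAveragePooling2D()(h)
outputs = layers.Dense(7, activation="softmax")(h)  # 7 emotion classes in FER2013
model = keras.Model(inputs, outputs)
```

Separable convolutions keep the parameter count in the tens of thousands rather than millions, which is consistent with the comparison against LeNet-5 and VGG-16 reported above.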
Conclusion Arts and crafts both reflect and exercise various mental abilities such as attention, observation, imagination, memory, and thinking; they also engage various mental qualities such as interest, emotion, will, and character, and activate the right brain. By exploring the potential of arts and crafts education to develop brain potential and promote mental health, we can improve the public's sensibility through arts and crafts activities, prompt a deeper understanding of classic works and of oneself, increase sensitivity to one's own mind, and make mental experiences more active and positive, so that brain potential, especially right-brain potential, can be further developed, thus enriching the forms of mental health education. Therefore, seeking the organic combination of arts and crafts education and mental health education should become the direction of joint efforts in both fields in the future. The analysis of the role of arts and crafts in public mental health education based on artificial intelligence technology proposed in this article shows that arts and crafts analysis, as an effective self-analysis technique, is an effective psychological testing and psychotherapy method, and its application in mental health education courses can guide the public to think deeply about self-awareness, improve the level of self-awareness, and promote the healthy development of the personality of the general public, in order to achieve the purpose of mental health education. In the future, we plan to carry out an analysis of the role of arts and crafts in public mental health education using recurrent neural networks and knowledge mapping. Data Availability The datasets used to support the findings of the study can be obtained from the corresponding author upon reasonable request. Conflicts of Interest The authors declare that there are no conflicts of interest.
Thermal analysis of suspended single droplet evaporation measurements with a coupled lumped parameter model The measurement data of single droplet evaporation experiments are often biased due to the extra heat input through the fiber suspension and the presence of thermal radiation in hot environments. This encumbers model validation for heat and mass transfer simulations of liquid droplets. In this paper, a thermal analysis of this measurement layout is presented with a coupled lumped parameter model, considering heat conduction through the suspension. The model was validated by experimental data from the literature, and good agreement was found. The thermal analysis focused on fiber material and geometry, as well as thermal radiation properties. Calculations were performed over a broad range of ambient conditions for liquids with different volatility characteristics. Temporal profiles of the squared droplet diameter and of the droplet temperature, as well as the stationary evaporation rate of the droplet, were used to characterize vaporization phenomena. The thermal balance of the droplet is dominated by the convective heat rate from the environment in the early stage of evaporation. The effect of heat conduction through the fiber becomes important at the end of the droplet lifetime, when the droplet size has decreased. Suspension on a temperature sensor may seriously bias the droplet temperature due to its larger thermal conductivity compared to quartz fiber. Large droplets in high-temperature environments show significant sensitivity to thermal radiation properties, which should be considered in measurements and model validation. Introduction Transportation is the largest source of greenhouse gas (GHG) emissions in the European Union. Concerning passenger or light-duty transportation, electric vehicles represent a maturing technology. The open questions here are the source of electricity production and the long-term environmental impact of battery technology. Heavy-duty transportation, including shipping, aviation, and road freight, has a 46% share in the GHG emissions of transportation [1]. Consequently, alternative powertrains are necessary to achieve deep decarbonization. However, heavy-duty transportation has strict constraints on cargo space and payload; therefore, energy sources with high energy density are required to provide economic operation. Consequently, conventional and renewable liquid fuels will remain in the portfolio in the foreseeable future [2], which necessitates heat and mass transfer simulations in engine design. However, such calculations must be validated by experimental results.
Comprehensive measurement data on single droplet evaporation is available in the literature. There are measurement layouts where the droplet is deposited on a hot surface [3,4]. However, most experimental setups can be categorized into two groups, stagnant suspended droplet and falling droplet, both aiming to obtain the temporal evolution of droplet diameter. Several experimental datasets are published corresponding to the stagnant suspended droplet method for liquids with significantly different fuel properties. Among others, these include n-alkanes, like n-heptane [5-7], n-decane [8,9], n-hexadecane [10], vegetable oils [11], jet fuel [12], water and emulsions [13,14], and binary mixtures containing polar and non-polar components [15]. The reason for the prevalence of this layout is that the effect of forced convection on droplet evaporation can be eliminated since the droplet is stagnant. Therefore, the influence of the thermophysical and transport properties of the fuel can be evaluated independently of operating conditions. Moreover, the optical apparatus detecting the temporal evolution of droplet size can be arranged conveniently, and spherical symmetry of the droplet can be ensured at a reasonable level. Despite its advantages, the setup has several drawbacks. Thermal radiation may affect the rate of droplet evaporation due to the high-temperature wall of the measurement section and the hot ambient gas. Furthermore, the droplet is fixed to a fiber suspension or a temperature sensor. In the latter case, the temporal variation of droplet temperature can be detected. However, both cases seriously affect the thermal balance of the droplet since the suspension has a significantly higher thermal conductivity than the vapor-gas mixture around the droplet surface. Therefore, it acts as an additional heat input, intensifying evaporation. To overcome the bias in the thermal balance of the suspended droplet, the evaporation process of falling droplets is also measured. However, systematic experimental data for a wide range of ambient pressure and temperature is rare in the literature due to the more problematic arrangement of the optical apparatus [16]. The velocity of the ambient gas is adjusted, resulting in a low relative velocity between the falling droplet and the gas flow. In this manner, the effect of forced convection enhancing evaporation can be decreased. However, thermal radiation can remain important due to the hot surfaces of the measurement chamber. The present paper focuses on the suspended droplet measurement method since this layout is used more frequently. However, a systematic evaluation of its thermal biases is missing. Figure 1 summarizes the concept of a typical experimental setup based on ref.
[11]. A pressure regulator and a temperature controller operating a heating rod are used to adjust the ambient pressure and temperature in the measurement chamber. High temperature may lead to droplet burning; therefore, an inert gas, usually nitrogen, is used to avoid oxidation and focus on heat and mass transfer phenomena. The droplet is placed on the suspension with a thin needle before insertion into the chamber; then it is moved inside with a droplet elevator operated by a stepper motor. In order to avoid preliminary droplet heat-up, the inlet of the chamber is cooled with water in a closed cycle. Optical access to the measurement position is provided by glass windows on the sidewalls. Backlight is applied from one side and the temporal variation of droplet size is recorded from the other side with a high-speed camera; binary images of the droplet are then obtained by adjusting the threshold. Data is collected with a proper data acquisition (DAQ) device and transferred to a computer for post-processing. The pixel-to-distance conversion is done by calibration with a known object. After detecting the droplet boundary in the processed images, a high-order polynomial is fitted to it. Rotational symmetry is assumed and the volume of the droplet is determined. The diameter of the droplet is the diameter of a sphere of equivalent volume. Spherical symmetry is a cornerstone of most droplet evaporation models. To preserve this shape, an aircraft flying a parabolic trajectory or a free-falling capsule containing the suspended droplet can be used to carry out experiments. However, these are very expensive and therefore limited in the literature. Another difficulty is that the stable vapor boundary layer forming around the droplet hinders evaporation in a microgravity environment [10]. It is rather challenging to quantify the total uncertainty of the experiments. However, it is important to highlight the most important biasing factors affecting the measured temporal variation of droplet size. The rotational symmetry may be violated due to the presence of the suspension. Curve fitting to the boundary of the droplet in the processed images is also a source of uncertainty due to the resolution of the images. Yang and Wong [17] reported an average ±15 μm uncertainty, which is 1.5% of the initial diameter and 2.7% of the diameter at the end of the measurement. Despite the water-cooled inlet, an uncontrolled heat-up of the suspended droplet may occur at the beginning of the experiments. Moreover, temperature sensors have a larger thermal conductivity than quartz fibers, which can result in a further enhancement of the extra heat input.
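To make the silhouette-processing steps above concrete, the short sketch below reproduces the polynomial fit and the solid-of-revolution volume in Python. The function name, the polynomial order, and the test data are illustrative assumptions, not values taken from any of the cited setups:

```python
import numpy as np

def equivalent_diameter(z_mm, r_mm, poly_order=8):
    """Equivalent sphere diameter of an axisymmetric droplet.

    z_mm: axial positions of the detected boundary (mm);
    r_mm: corresponding silhouette radii (mm).
    A polynomial is fitted to the radius profile and the volume is
    obtained as a solid of revolution, V = pi * integral of r(z)^2 dz.
    """
    coeffs = np.polyfit(z_mm, r_mm, poly_order)
    z = np.linspace(z_mm.min(), z_mm.max(), 2000)
    r = np.polyval(coeffs, z)
    # trapezoidal rule for the solid-of-revolution volume
    volume = np.pi * np.sum(0.5 * (r[1:]**2 + r[:-1]**2) * np.diff(z))
    return (6.0 * volume / np.pi) ** (1.0 / 3.0)  # equal-volume sphere

# Sanity check: a perfect sphere of 0.8 mm radius should give d ~ 1.6 mm
z = np.linspace(-0.8, 0.8, 200)
r = np.sqrt(np.clip(0.8**2 - z**2, 0.0, None))
print(equivalent_diameter(z, r))  # ~1.6
```

For a perfect sphere the routine recovers the true diameter, which is a convenient self-test before applying it to real binary images.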
The additional heat to the droplet through the fiber suspension enhances vaporization, thus it can seriously affect model validation. Consequently, this effect should be considered. Generally, two concepts are used for this correction. The first one introduces an empirical correction factor to the stationary evaporation rate [7]. The stationary evaporation rate characterizes the temporal surface decrease of the droplet and is discussed in the next section. Correction factors are determined in terms of the fiber suspension diameter for measurements under identical conditions. The reference data correspond to the smallest fiber diameter, and cross-fiber suspension arrangements are usually used for this purpose [7]. This method can provide sufficient corrections; however, the empirical factors are confined to the given experimental setup and ambient conditions. In the other method, the additional heat is considered as a source term in the thermal balance of the droplet. Several modeling concepts exist for this purpose, like lumped parameter modeling [18], one-dimensional approaches [13,17,19], and multi-dimensional approaches [20-23]. However, systematic analyses covering a broad range of ambient conditions, fuel volatility, and suspension material and geometry are scarce in the literature. Therefore, the novelty and the aim of this paper is to fill this gap with a coupled lumped parameter modeling approach and to provide a sufficient estimate of the general thermal biasing effects in single droplet evaporation measurements. The focus is on constructing a model with reasonable computational demand, applicable for comprehensive parameter analysis. Experimental data is used to evaluate the model and quantify its limitations. Focusing on qualitative, rather than quantitative, thermal analysis, general recommendations can be made for further model validation and measurement planning. The applied and tested evaluation methodology of this analysis can be the base for further advanced models. Moreover, droplet evaporation models are also used in non-combustion-related fields corresponding to a different ambient condition range [24], further necessitating the thermal evaluation of the droplet vaporization measurements used for validation. The coupled model is presented in the next section.
Fig. 1 Concept of a typical experimental setup for stagnant suspended droplet evaporation measurement based on ref. [11]

Suspended droplet evaporation model
The thermal balance of the suspended droplet with the various heat sources and the main concept of the evaporation model are presented in Fig. 2. The fiber is horizontal and the immersed part of it is equal to the instantaneous droplet diameter. Both the thermal balance of the droplet and that of the fiber suspension are considered as lumped parameter models, indicated by the red contours in Fig. 2.
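A minimal sketch of this coupled structure is given below in Python (the original implementation was in Matlab with NIST property data; here constant properties, fixed placeholder transfer coefficients, a fixed evaporation rate, and an omitted fiber radiation term are assumed purely for illustration):

```python
import numpy as np

SIGMA0 = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def rates(Ts, Tf, d, p):
    """Right-hand sides of the two coupled lumped thermal balances."""
    A_d = np.pi * d**2                                   # droplet surface area
    Qconv_d = p['h_d'] * A_d * (p['Tinf'] - Ts)          # gas -> droplet convection
    Qrad_d = p['ephi_d'] * SIGMA0 * A_d * (p['Trad']**4 - Ts**4)
    Qf = p['k_f'] * np.pi * p['d_f']**2 / 4 * (Tf - Ts) / (d / 2)  # fiber conduction
    Qevap = p['mdot'] * p['L']                           # latent heat sink
    m_d = p['rho_L'] * np.pi * d**3 / 6
    m_f = p['rho_f'] * np.pi * p['d_f']**2 / 4 * p['l_f']
    Qconv_f = p['h_f'] * np.pi * p['d_f'] * p['l_f'] * (p['Tinf'] - Tf)
    dTs = (Qconv_d - Qevap + Qrad_d + Qf) / (m_d * p['cp_L'])
    dTf = (Qconv_f - Qf) / (m_f * p['c_f'])              # -Qf: coupling term
    return np.array([dTs, dTf])

def ab2_step(y, f_n, f_nm1, dt):
    """Explicit second-order Adams-Bashforth step."""
    return y + dt * (1.5 * f_n - 0.5 * f_nm1)

# Placeholder parameters (constant properties, illustrative magnitudes only)
p = dict(h_d=150.0, h_f=80.0, Tinf=700.0, Trad=700.0, ephi_d=0.5,
         k_f=1.4, d_f=100e-6, l_f=1e-2, rho_f=2200.0, c_f=700.0,
         rho_L=750.0, cp_L=2200.0, mdot=2e-8, L=3.6e5)
y, d, dt = np.array([300.0, 300.0]), 0.7e-3, 1e-4        # [Ts, Tf] in K
f_prev = rates(*y, d, p)
y = y + dt * f_prev                                      # Euler start-up step
for _ in range(1000):
    f_now = rates(*y, d, p)
    y = ab2_step(y, f_now, f_prev, dt)
    f_prev = f_now
print(y)  # droplet and fiber temperatures after 0.1 s
```

The essential features are the conduction term entering the two balances with opposite signs and the explicit second-order Adams-Bashforth update applied to both temperatures.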
No temperature distribution is considered inside the droplet and along the fiber. The droplet surface temperature, T_s, is concentrated at the droplet center. The fiber temperature, T_f, corresponds to the dry surface of the suspension and only that part is considered in its thermal balance, since the dry part is significantly longer than the immersed part. Coupling between the droplet and the fiber thermal balances is performed via the conduction heat rate, Q̇_f. Generally, the temperature of the fiber increases more rapidly due to its lower heat capacity (~10⁻⁴ J/K for a ~1 cm long SiO2 fiber [5,25]). This results in a temperature difference between the fiber and the droplet during the evaporation process.

The key equations of the numerical model are presented next. The model was implemented in the Matlab R2022a environment. All the liquid- and vapor-phase thermophysical and transport properties of the evaporating droplet are pressure- and temperature-dependent, acquired from the National Institute of Standards and Technology (NIST) database [25]. The following equations are solved in each time step. Two components are considered in the gas phase: fuel vapor and ambient gas. Vapor-ambient gas mixture properties, like the specific heat capacity at constant pressure, c_pg, thermal conductivity, k_g, dynamic viscosity, μ_g, and density, ρ_g, are calculated for the T_ref reference temperature and Y_v,ref reference vapor mass fraction according to the considerations detailed in ref. [26], assuming ideal mixing when Dalton's law and Amagat's law are valid and the conditions are far from critical. Vapor-phase properties are calculated for T_ref, while liquid-phase properties correspond to the droplet temperature. Equations (1) and (2), solved with the explicit second-order Adams-Bashforth method, describe the thermal balances of the droplet and fiber suspension, respectively:

m_d c_p,L Ṫ_s = Q̇_conv,d − Q̇_evap + Q̇_rad,d + Q̇_f,    (1)
m_f c_f Ṫ_f = Q̇_conv,f + Q̇_rad,f − Q̇_f,    (2)

where m_d is the droplet mass, c_p,L is the liquid-phase specific heat capacity at constant pressure, and T_s is considered uniform inside the droplet. An overdot means time derivative. Q̇_conv,d, Q̇_evap, and Q̇_rad,d are the convective heat rate from the environment to the droplet, the heat rate of vaporization, and the radiative heat rate from the environment to the droplet, respectively. m_f is the fiber mass, c_f is the specific heat capacity of the fiber material, while Q̇_conv,f and Q̇_rad,f are the convective and radiative heat rates from the environment to the fiber. The convective heat rate from the environment to the droplet is

Q̇_conv,d = h_d πd² (T_∞ − T_s),    (3)

where h_d is the heat transfer coefficient between the droplet and the surrounding gas, d is the droplet diameter, and T_∞ is the ambient gas temperature. The heat rate of vaporization is calculated as

Q̇_evap = ṁ_d L,

where ṁ_d is the mass flow rate of evaporation and L is the latent heat of vaporization. The latter is determined with the Watson equation [27]. The conduction heat rate from the fiber, considering a circular cross-section, can be written as

Q̇_f = k_f (πd_f²/4) (T_f − T_s)/(d/2),

where k_f and d_f are the fiber thermal conductivity and fiber diameter. The instantaneous distance between the center of the droplet and the free surface of the fiber is represented by d/2. The radiative heat rate from the environment to the droplet is considered as

Q̇_rad,d = (εφ)_d σ₀ πd² (T_rad⁴ − T_s⁴),

where (εφ)_d describes the emissivity of the droplet and the view factor for the measurement configuration, σ₀ is the Stefan-Boltzmann constant, and T_rad is the temperature of the radiation heat source. The latter can represent high-temperature measurement chamber walls, cold walls of the room, or the ambient gas temperature. The convective heat rate from the environment to the fiber is calculated as

Q̇_conv,f = h_f πd_f l_f (T_∞ − T_f),

where h_f is the heat transfer coefficient between the fiber and the environment, while l_f is the instantaneous dry length of the fiber exposed to heat transfer from the environment. The radiative heat rate between the environment and the fiber can be calculated as

Q̇_rad,f = (εφ)_f σ₀ πd_f l_f (T_rad⁴ − T_f⁴),

where (εφ)_f describes the emissivity of the fiber and the view factor for the measurement setup. In accordance with the measurements, the initial droplet diameter is considered as an initial condition. Consequently, the volume occupied by the suspension is subtracted from the calculated droplet volume, and the droplet mass is determined as follows:

m_d = ρ_L (πd³/6 − (πd_f²/4) d),

where ρ_L is the droplet density. The instantaneous dry length of the fiber suspension exposed to heat transfer from the environment is calculated as

l_f = l₀ + (d₀ − d),

where l₀ and d₀ are the initial dry length of the fiber and the initial droplet diameter. The fiber is considered as a horizontal cylinder with diameter d_f and length l_f. h_f is determined from Nusselt number correlations for natural and forced convection. The characteristic length is d_f. The Reynolds number, required in the case of forced convection, is

Re_f = ρ_g u d_f / μ_g,

where u is the relative velocity between the stagnant fiber and the gas flow. For forced convection, the Nusselt number of the fiber is calculated with the cylinder correlation of ref. [28] (Eq. (12)), while for natural convection the correlation of ref. [29] (Eq. (13)) is used, where the Prandtl number is

Pr = μ_g c_pg / k_g,

and the fiber Rayleigh number for heat transfer is Ra_f = Gr_f Pr, where the Grashof number for the fiber is

Gr_f = g β (T_∞ − T_f) d_f³ ρ_g² / μ_g²,

with g and β being the gravitational acceleration and the thermal expansion coefficient of the ambient gas. The validity range of Eq. (12) is 0.2 ≤ Re_f · Pr, while that of Eq. (13) is 0 < Pr < ∞. No information is available for Ra_f. The total incoming heat rate of the droplet is defined as

Q̇_tot = Q̇_conv,d + Q̇_rad,d + Q̇_f.    (17)

In order to calculate the convective heat rate from the environment to the droplet by Eq. (3), h_d needs to be determined with Nusselt number correlations [28-30]. The characteristic length is the instantaneous droplet diameter. For a stagnant non-evaporating sphere, the Nusselt number in the case of natural convection is obtained from the sphere correlation of ref. [28] (Eq. (18)), where the droplet Rayleigh number for heat transfer is Ra_T,d = Gr_d Pr, with the Grashof number for the droplet

Gr_d = g β (T_∞ − T_s) d³ ρ_g² / μ_g².

The Nusselt number for a non-evaporating sphere in the case of forced convection is obtained from the correlation of ref. [31] (Eq. (21), available in Hungarian), where the Reynolds number for the droplet is

Re_d = ρ_g u d / μ_g.

The validity range of Eq. (18) is Ra_T,d ≤ 10¹¹ and 0.7 ≤ Pr [28], while Eq. (21) is valid for 0.7 ≤ Pr ≤ 400 and 3.5 ≤ Re ≤ 7.6·10⁴ [31]. The Nusselt number for the droplet, accounting for evaporation, is obtained by correcting the non-evaporating value with the Spalding heat transfer number [30]. The Spalding heat transfer number, B_T, is related to the Spalding mass transfer number through the vapor-phase specific heat capacity at constant pressure, c_p,v, and the Lewis number,

Le = k_g / (ρ_g c_pg D_v,a),

characterizing the relation of the thermal boundary layer thickness to the concentration boundary layer thickness, where D_v,a is the mutual diffusion coefficient of fuel vapor and ambient gas. The Spalding mass transfer number is

B_M = (Y_v,s − Y_v,∞) / (1 − Y_v,s),

where Y_v,∞ is the mass fraction of vapor in the far field, which is considered zero for the single droplet case. Assuming vapor-liquid equilibrium and ideal gas conditions, the mass fraction of fuel vapor on the droplet surface is

Y_v,s = p_v,s M_v / [p_v,s M_v + (p_∞ − p_v,s) M_a],

where p_∞ is the ambient pressure, p_v,s is the vapor pressure, acquired from the NIST database, corresponding to T_s, M_a is the ambient gas molecular mass, and M_v is the fuel molecular mass. Accounting for the Stefan flow, the mass flow rate of evaporation is

ṁ_d = πd ρ_g D_v,a Sh ln(1 + B_M).

In order to calculate the ratio of the convective mass transfer rate to the diffusion rate, the Sherwood number needs to be determined with the corresponding correlations [28]. The droplet Sherwood number for natural and forced convection follows from the heat transfer correlations through the heat-mass transfer analogy (Eq. (29) for natural convection), where the Schmidt number is

Sc = μ_g / (ρ_g D_v,a),

while the droplet Rayleigh number for mass transfer is Ra_M,d = Gr_d Sc. The mutual diffusion coefficient of fuel vapor and ambient gas is calculated with the method of Fuller et al. [32,33]:

D_v,a = 0.00143 T^1.75 / [p M_v,a^(1/2) (Σ_v^(1/3) + Σ_a^(1/3))²],    (33)

where M_v,a is the average molar mass of the vapor-ambient gas mixture, while Σ_v and Σ_a are the sums of the atomic and structural diffusion volume increments of the vapor and the ambient gas. Note that Eq. (33) was evaluated with reference data in ref. [34].

The d²-profile, shown in Fig. 3, characterizing the temporal evolution of droplet size, is acquired by solving the above equations in each time step. The stationary evaporation rate, λ_st, is determined by fitting a line to the range of linear decrease in the d²-profile. In this manner, λ_st characterizes the droplet surface decrease over time. The upper and lower limits of this fitting range are often arbitrary; however, the range 0.15 ≤ (d/d₀)² ≤ 0.5 is frequently used [10]. Note that the lower boundary is usually limited by the experimental setup and droplet deformation, therefore higher values may be applied. The corresponding fitting range is 0.3 ≤ (d/d₀)² ≤ 0.5 for the results presented in this paper. If the d² data are free from bias, the decrease corresponding to the stationary evaporation phase is linear; therefore, the upper limit can be higher as well (e.g., (d/d₀)² = 0.7). However, if a fiber suspension is present, the limits of the fitting should always be provided in detail, since they can affect the numerical value of λ_st due to the possible non-linear trend of the d²-profile resulting from the extra heat input. Furthermore, when the Knudsen number, relating the mean free path of the molecules to the droplet radius, is larger than 0.01, the gas phase cannot be regarded as a continuum and kinetic effects should be taken into account. This can be important at the final stage of evaporation or in the case of μm-scale droplets generated by modern atomizers. In the case of the currently analyzed droplet size and ambient condition regime, the gas phase can be approximated as a continuum. For further details on the kinetic modeling of droplet heating and evaporation, please see ref. [30].

Results and discussion
Section 3.1 presents the validation of the numerical model against experimental data obtained from the literature. Next, the results of the thermal analysis, focusing on various features of the experimental setup, are discussed in Section 3.2.

Model validation
The presented model was validated against experimental data of Nomura et al. [5], Yang and Wong [17], and Harada et al.
[20] since the initial and boundary conditions and several details of their measurements are accurately discussed. These are summarized in Table 1. However, the initial fiber temperature values were not reported directly. Therefore, the initial fiber temperature was assumed identical to the initial droplet temperature, T_s,0. Next, the features of each experimental setup are summarized and the comparison of measurement data and the results of the model is presented. The published experimental results of Nomura et al. [5] were obtained in a measurement chamber with an 80 mm inner diameter and 260 mm height. The ambient gas was nitrogen to prevent droplet burning. Four windows with a 20 mm diameter each provided visual access to the droplet. The ambient gas was heated by an electric furnace inside the chamber. An n-heptane droplet was placed on the tip of a silica fiber (SiO2, k_f = 1.4 W/(m·K) [35]), which was moved to the desired position by a droplet elevator. The insertion process required 0.16 s, which may have led to a slight uncontrolled preheating of the droplet. T_∞ was measured with a thermocouple 4 mm above the test position. A microgravity environment was used to perform the measurements. The tests were carried out in drop towers with heights of 5 m and 110 m, and parabolic flights were used as well to acquire the desired conditions. The whole apparatus with the suspended droplet was covered to eliminate drag force and was placed in the tower, which was evacuated to low pressure. After the setup started to fall, the droplet was introduced. The experimental setup was fixed to the floor of an aircraft in the case of parabolic flights. After microgravity conditions were achieved, the droplet was introduced. The evaporation process was recorded with a CCD camera. Droplet diameter was determined according to the concept discussed earlier in Fig. 1, assuming spherical symmetry. The comparison of the experimental data of Nomura and the results of the model is presented in Fig. 4. Note that the time scale is divided by d₀² in accordance with the original published data. Solid lines indicate (εφ)_d = 1, while dashed lines indicate (εφ)_d = 0 to account for the uncertainty of (εφ)_d. The former assumes black body behavior of the droplet and considers a unity view factor, meaning that all the heat radiation from the environment reaches the droplet. The second extreme situation neglects thermal radiation. Obviously, the condition 0 ≤ (εφ)_d ≤ 1 is valid. Consequently, these boundaries should contain the experimental data. Note that the results showed no significant sensitivity to (εφ)_f in the thermal balance of the fiber. Therefore, the effect of possible droplet transparency on the absorbed thermal radiation of the fiber suspension was neglected in further calculations. The uncertainty of droplet insertion is indicated by the horizontal error bars. Model results show good agreement with reference data. In the case of (εφ)_d = 1, the average relative deviation values are 8%, 7%, 7%, and 9% for 471 K, 555 K, 648 K, and 741 K, respectively. The possible influence of thermal radiation is indicated by higher deviation values at higher temperatures when this phenomenon is neglected, since thermal radiation is proportional to T⁴. The values are 7%, 22%, 31%, and 25% for 471 K, 555 K, 648 K, and 741 K, respectively.
Fig. 4 Comparison of experimental data of Nomura et al. [5] and results of the model. Boundary and initial conditions are presented in Table 1. Solid lines correspond to (εφ)_d = 1, while dashed lines correspond to (εφ)_d = 0. Uncertainty due to droplet insertion is indicated by the horizontal error bars.
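The average relative deviation values quoted here and in the following comparisons can be reproduced with a few lines once measured and simulated profiles are available; the function name and the interpolation choice below are our illustrative assumptions:

```python
import numpy as np

def avg_relative_deviation(t_exp, d2_exp, t_sim, d2_sim):
    """Mean relative deviation (in %) of a simulated d^2-profile from
    experimental samples, evaluated at the experimental time instants."""
    d2_at_exp = np.interp(t_exp, t_sim, d2_sim)
    return 100.0 * np.mean(np.abs(d2_at_exp - d2_exp) / d2_exp)
```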
A hot laminar gas flow generated by an electric heater was used in the experiments of Yang and Wong [17]. The experimental analysis aimed to investigate the influence of suspension diameter on vaporization. Quartz (silica) fibers with diameters of 50 μm, 150 μm, and 300 μm were used and placed in the uniform laminar flow provided by a convergent nozzle. The published uncertainty of the temperature measurement due to thermal radiation was 2 K, and it was neglected in further calculations. Before the measurements, the droplet was protected from the hot gas flow by a water-cooled shield. At the beginning of the experiment, the shield was withdrawn and the droplet was exposed to the flow. The transient temperature history of the hot flow was also measured and published, making it available for model validation. Flow velocity was measured as well with Laser Doppler Anemometry; however, no data was available for model validation. A high-speed camera at a framing rate of 500 fps was used to record the vaporization process. The droplets were considered ellipsoids and the reported uncertainty of the diameter values was within ±15 μm. Figure 5 presents the comparison of the experimental data of Yang and the results of the model calculations. Due to the absence of hot surfaces around the droplet, thermal radiation was neglected, thus only dashed lines are present. The model provides reasonable accuracy, although the calculations slightly underpredict droplet lifetime. However, the transient velocity history of the gas flow was not considered in the model, which led to a higher h_d and enhanced evaporation. For 50 μm, 150 μm, and 300 μm, the average relative deviations between measurement data and calculations are 13%, 21%, and 15%, respectively. Harada et al. [20] used a Pt-13%Rh/Pt thermocouple (Pt, k_f = 71 W/(m·K) [36]) for the suspension of n-dodecane droplets. The fiber diameter was 50 μm. An alumina protection tube covered a significant part of the sensor. The ambient temperature was adjusted with an electric furnace. The droplet was suspended and placed in a water-cooled probe. Then the whole setup was inserted into the test position and the probe was moved away before the measurement. A 1000 fps frame rate was used for the high-speed camera to record the images. The droplet diameter was calculated from the area of an equivalent circle.
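The impact of the suspension material can be estimated directly from the fiber conduction term introduced in the previous section; the temperatures and geometry below are placeholder values chosen only to show the order of magnitude:

```python
import numpy as np

def fiber_conduction(k_f, d_f, T_f, T_s, d):
    """Conduction heat rate through the suspension,
    Q_f = k_f * (pi d_f^2 / 4) * (T_f - T_s) / (d / 2)."""
    return k_f * np.pi * d_f**2 / 4 * (T_f - T_s) / (d / 2)

# Identical geometry and temperatures, only the material differs:
args = dict(d_f=50e-6, T_f=600.0, T_s=400.0, d=0.5e-3)
print(fiber_conduction(k_f=1.4,  **args))  # quartz (SiO2): ~2e-3 W
print(fiber_conduction(k_f=71.0, **args))  # Pt thermocouple: ~0.11 W, ~50x more
```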
Figure 6 presents the comparison of the experimental data of Harada and the model calculations. Blue color corresponds to the d²-profile, while red color indicates droplet temperature. Solid lines represent (εφ)_d = 1, while dashed lines indicate that thermal radiation is neglected. Therefore, radiative heat transfer was considered as a sensitivity parameter again. A reasonable agreement can be observed with measurement data, similar to the multidimensional model of Harada. However, in their model, Harada rightly points out that the heat transfer between the suspension and droplet through the contact surface is a rate-determining factor and the corresponding heat transfer coefficient is a crucial parameter. Unfortunately, the literature has very limited information on accurately determining this coefficient, therefore it is a potential future work. Note that the stationary evaporation phase is significantly influenced by radiation, indicated by the solid red line. On the other hand, this effect is less obvious from the d²-profile. It is also important to highlight that the higher k_f of the thermocouple leads to an increasing droplet temperature in the stationary evaporation phase. l_f was considered in accordance with the protection tube. The average relative deviation values of the d²-profile are 22% and 31% for neglecting thermal radiation and (εφ)_d = 1, respectively, and 18 K and 10 K for the temperature-profiles, respectively. Overall, sufficient accuracy is provided by the coupled lumped parameter model.

Thermal analysis
Various features of the experimental setup, affecting the droplet thermal balance, are evaluated next with the presented numerical model. N-alkanes from n-hexane to n-dodecane, except for n-nonane and n-undecane, were analyzed to cover a broad range of fuel volatility. Furthermore, these compounds are often considered in experiments. High-fidelity data for their pressure- and temperature-dependent thermophysical and transport properties are available in the NIST database [25], therefore the uncertainty resulting from these properties can be minimized. Nitrogen was considered as ambient gas in accordance with the experiments focusing on mass transfer phenomena. No gas flow was considered, therefore Eqs. (13), (18), and (29) were used for the corresponding calculations, accounting for natural convection. It was assumed that the droplet was inserted into the measurement chamber while it was already suspended, therefore the initial droplet and fiber temperatures were uniformly 300 K. The fiber material was quartz (SiO2). The share of the different heat sources in the total heat rate, defined by Eq. (17), is presented in Fig. 7a. Characteristics of n-hexane and n-dodecane are compared to present the effect of fuel volatility. Blue color corresponds to n-hexane, while red color refers to n-dodecane. Figure 7b indicates the boundary conditions. Thermal radiation is considered with (εφ)_d = 0.5. The droplet lifetimes are significantly different, thus the time scale is non-dimensional. t_d²,30% is the time elapsed until d² reduces to 30% of d₀². Due to the larger droplet size at the beginning of the evaporation process, Q̇_conv,d dominates. As vaporization progresses, the surface area of the droplet reduces due to mass transfer, thus the share of the convective heat rate decreases. However, depending on the experimental layout, droplet size, and ambient temperature, the share of thermal radiation increases, then decreases due to the reduction of droplet size. This is in agreement with the findings of Harada et al.
[20]. N-hexane is more volatile than n-dodecane, therefore its droplet size decrease is faster than that of n-dodecane under identical conditions, resulting in a steeper decrease of Q̇_conv,d at the beginning of the process and an earlier maximum of Q̇_rad,d. In the early stage of evaporation, the share of Q̇_f is marginal.
Fig. 5 Comparison of experimental data of Yang and Wong [17] and results of the model. Boundary and initial conditions are presented in Table 1. Dashed lines correspond to (εφ)_d = 0.
Fig. 6 Comparison of experimental data of Harada et al. [20] and results of the model. Boundary and initial conditions are presented in Table 1. Solid lines correspond to (εφ)_d = 1, while dashed lines correspond to (εφ)_d = 0.
However, as d decreases due to mass transfer, its share increases significantly by the end of the process. Therefore, the stationary evaporation rate is seriously affected. Consequently, d_f/d₀ is a crucial parameter in measurements. Q̇_f for n-hexane possesses a slightly higher share than that of n-dodecane. The reason is the following. N-hexane is more volatile and the temperature difference between the fiber and the droplet in the stationary evaporation regime is higher, as shown in Fig. 7b. However, this effect is not significant. Due to the notable temperature difference between the fiber and the droplet, a significant temperature gradient occurs along the fiber, which is the most important limitation of the applied modeling approach for the fiber and the main reason for the qualitative rather than quantitative analysis. As highlighted above, d_f/d₀ is an important parameter of experimental layouts. Therefore, its effect on evaporation characteristics is discussed next. To focus on the effect of heat conduction through the suspension, thermal radiation is neglected this time. l₀ = 1 cm was considered in the analysis and no notable effect of l₀ was recognized since the effect of d_f is more dominant. Temporal d²- and droplet temperature-profiles are presented in Fig. 8a with no suspension and for different d_f/d₀ ratios for an n-dodecane droplet at typical experimental conditions. The time scale is divided by d₀² in accordance with several published experimental data, like in Fig. 4. As d_f/d₀ is increased, droplet temperature increases too, and droplet lifetime decreases. The relative deviation of λ_st with respect to the case without fiber is presented in Fig. 8b as a function of d_f/d₀ for the investigated n-alkanes at the same ambient conditions, which are presented in the figures. The 5% deviation value is highlighted with the dashed line. Fuels with different volatility show practically matching trends. The effect of d_f/d₀ on the relative deviation of λ_st, extended to a wider ambient condition range for the investigated n-alkanes, is presented in Fig. 9 for 500 K, 700 K, and 900 K gas temperature and for 1 bar and 5 bar ambient pressure. The 5% deviation is indicated with a dashed line again. The relative deviation notably increases with fuel volatility at T_∞ = 500 K, since the difference between T_f and T_s is higher, as detailed in Fig. 7. 500 K is a frequent lower limit for the gas temperature in experiments. Increasing T_∞ diminishes the effect of fuel volatility and the different n-alkanes show practically matching trends. For d_f/d₀ < 5%, the relative deviation stays below 5%. 900 K is a typical upper limit for the gas temperature in experiments. The effect of p_∞ is the following. The boiling point of the droplet increases with pressure, therefore the stationary evaporation phase can be characterized by a higher T_s. However, T_f is not influenced by p_∞ and their difference decreases. This leads to a decrease in Q̇_f, resulting in a decrease in the relative deviation, shown in Fig. 9d-f, compared to Fig.
9a-c. Consequently, increasing pressure decreases the effect of the thermal bias through the fiber. If thermal radiation is considered in the thermal balance of the droplet, λ_st without fiber increases due to enhanced vaporization. Therefore, the sensitivity of λ_st to d_f/d₀ decreases.
Fig. 9 Effect of fiber diameter-to-initial droplet diameter ratio on the relative deviation of stationary evaporation rate at different ambient conditions (a, b, and c correspond to 1 bar and 500 K, 700 K, and 900 K, respectively, while d, e, and f correspond to 5 bar and 500 K, 700 K, and 900 K, respectively) for various n-alkanes. Thermal radiation is neglected.
Quartz (SiO2) is the typical suspension material in experiments. However, if droplet temperature is of interest, thermocouples or resistance temperature detectors (RTDs) with extremely small diameters are used to acquire temperature data and act as droplet suspension as well. Platinum (Pt) is a typical material to solder type R thermocouples and manufacture RTDs. Note that Harada et al. [20] also used a type R thermocouple to acquire droplet temperature, as discussed in Fig. 6. However, k_f of Pt is an order of magnitude higher than that of SiO2, seriously affecting Q̇_f. Figure 10a shows the effect of material selection (blue and red colors) on the temporal d²- and T_s-profiles (solid and dashed lines, respectively) of an n-dodecane droplet, while the share of Q̇_f in the total heat rate is presented in Fig. 10b. The boundary conditions are indicated in Fig. 10b. Thermal radiation is neglected this time. d_f/d₀ = 5% was considered in order to minimize the effect of Q̇_f as much as possible. Even though the thermal conductivity of Pt is significantly higher than that of SiO2, their volumetric heat capacities are similar [35,36]. Droplet lifetime is significantly shorter for Pt and, in the stationary regime, T_s is higher by more than 20 K than in the case of the quartz fiber. This can seriously affect temperature measurements. Furthermore, no actual stationary state can be observed for the temperature-profile of the Pt case, shown by the increasing red dashed line in Fig. 10a. The thermal balance is dominated by Q̇_f for Pt, shown in Fig. 10b. However, its share remains much lower for SiO2 during the vaporization process. Consequently, quartz fiber suspension is more favorable, and the temperature value measured by a thermocouple suspension can be highly biased. It is often troublesome to determine (εφ)_d accurately for the actual experimental layout. However, the effect of thermal radiation significantly depends on the features of the measurement setup, such as T_∞ and d₀. Two typical but extreme conditions are discussed next for an n-dodecane droplet at p_∞ = 1 bar. Heat conduction through the fiber is neglected this time to focus on the effect of radiative heat transfer. A larger droplet at higher T_∞ and a smaller droplet at lower T_∞ are considered, indicated with red and blue colors in Fig. 11. The curve parameter is (εφ)_d. Figure 11a shows the temporal d²- and T_s-profiles, while Fig.
11b presents the share of Q̇_rad,d in the total heat rate, where the time scale is non-dimensional again.
Fig. 11 Line styles distinguish the various emissivity and view factor cases, while red and blue colors represent the different initial droplet sizes and ambient temperature conditions.
The share barely exceeds 10% even for the black body assumption for the lower temperature case. However, the sensitivity to (εφ)_d is significant for the higher temperature and larger droplet size. When black body behavior is assumed, an overshooting tendency can be observed for the larger droplet and higher temperature case, when T_s reaches a maximum and then starts to decrease, as shown in Fig. 11a. This behavior was also reported by Sazhin et al. [37] and by Harada et al. [20], where they attributed this maximum to the contribution of thermal radiation. Figure 12 shows the relative deviation of λ_st as a function of (εφ)_d for the same conditions. The relative deviation remains below 10% for the lower temperature case; however, it exceeds 90% for the high temperature and large droplet case at the extreme value (εφ)_d = 1. Consequently, the uncertainty of (εφ)_d in typical experimental layouts can notably affect model validation.

Conclusions
A detailed thermal analysis of single droplet evaporation measurements was performed with a coupled lumped parameter model by revising the thermal balance of the droplet, accounting for the heat conduction through the fiber suspension. The model was validated against experimental data from the literature, showing reasonable agreement. Besides the temporal squared droplet diameter- and droplet temperature-profiles, the stationary evaporation rate, λ_st, was used as an indicator of vaporization characteristics for evaluation. Characteristics of C6-C12 n-alkanes were analyzed in order to cover a broad range of fuel volatility. Pressure- and temperature-dependent thermophysical and transport properties were obtained from the database of the National Institute of Standards and Technology. Based on the results, the following qualitative conclusions can be derived:
• The thermal balance of the droplet is dominated by the convective heat rate from the hot gas in the early stage of vaporization. As droplet size decreases, the share of the conductive heat rate through the quartz suspension increases, notably enhancing vaporization in the stationary evaporation regime.
• The relationship between the quartz fiber diameter-to-initial droplet diameter ratio and the relative deviation of λ_st with respect to the case without fiber is non-linear. After a slight increase, a significant rise occurs as the ratio increases. This deviation decreases with increasing ambient pressure due to the increment of the droplet boiling temperature.
• Using temperature sensors for suspension can lead to a serious bias in droplet temperature due to the typically higher thermal conductivity of the sensor compared to quartz fiber.
• Concerning thermal radiation, large (mm-scale) droplets in high-temperature environments show high sensitivity to the droplet emissivity and the view factor of the experimental setup. Consequently, radiative heat transfer should be carefully considered during measurements and model validation.
• The presented coupled lumped parameter model provides reasonable accuracy validated by experimental data from the literature. Consequently, the applied parameter analysis and evaluation method can be the basis of further detailed investigations with more advanced models, where the key factor is the proper definition of the
heat transfer coefficient on the contact surface between the suspension and the droplet.

Fig. 2 (Left) Thermal balance of the suspended droplet setup and (right) the main concept of the evaporation model.
Fig. 3 Obtaining the stationary evaporation rate from the temporal d²-profile of the droplet.
Fig. 7 a Share of the different sources in the total incoming heat rate and b temperature difference between fiber suspension and droplet for n-dodecane (red) and n-hexane (blue).
Fig. 10 Effect of fiber suspension material on the a d²-profile and droplet temperature and b share of conduction heat rate through the suspension in the total incoming heat rate for an n-dodecane droplet. Thermal radiation is neglected.
Fig. 12 Effect of emissivity and view factor on the relative deviation of stationary evaporation rate for n-dodecane droplets. Heat conduction through the fiber suspension is neglected.
2023-04-16T15:14:55.432Z
2023-07-19T00:00:00.000
{ "year": 2023, "sha1": "fd74b7b7bb8b5fd8ef6f67230ea2fde972ecdb3f", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00231-023-03403-6.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "785e1abf92646cd08e8a93e4989a74883d240999", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [] }
56146414
pes2o/s2orc
v3-fos-license
Studying the effect of different exchange correlation functionals on the structural and electronic properties of a half-Heusler NaAuS compound Theoretically, NaAuS is predicted to be a topological insulator, while no detailed electronic structure study has been done for this compound. Here, we report the structural and electronic properties of NaAuS by using the LDA, PBEsol, PBE and revPBE exchange correlation functionals. The calculated values of the equilibrium lattice constant for the LDA, PBEsol, PBE and revPBE exchange correlation functionals are found to be $\sim$6.128 {\AA}, $\sim$6.219 {\AA}, $\sim$6.353 {\AA} and $\sim$6.442 {\AA}, respectively. The bulk modulus predicted by the LDA, PBEsol, PBE and revPBE exchange correlation functionals is $\sim$66.6, $\sim$56.4, $\sim$46.5 and $\sim$39.3 GPa, respectively. Hence, the order of the calculated values of the bulk modulus is consistent with the order of the calculated equilibrium lattice parameters for these exchange correlation functionals, given the inverse relationship between the two quantities. The spread of the total density of states below the Fermi level decreases as the exchange correlation functional changes from LDA to PBEsol to PBE to revPBE, which is also found to be consistent with the order of the bulk modulus for these exchange correlation functionals. In the presence of spin-orbit coupling, a direct band gap is observed in the NaAuS compound, which is found to be $\sim$0.26, $\sim$0.25, $\sim$0.24 and $\sim$0.23 eV for the LDA, PBEsol, PBE and revPBE exchange correlation functionals, respectively. Here, NaAuS is found to be a topological insulator as it shows band inversion at the $\Gamma$ point. The calculated values of the band inversion strength for the LDA (PBEsol) and PBE (revPBE) exchange correlation functionals are $\sim$1.58 eV ($\sim$1.57 eV) and $\sim$1.50 eV ($\sim$1.47 eV), respectively.

I. INTRODUCTION
Heusler alloys have attracted great interest for the study of their various physical properties since their discovery by Friedrich Heusler in 1903 [1,2]. Generally, Heusler alloys are divided into two categories: one is called the half-Heusler alloy, with chemical formula MM′X, and the other the full-Heusler alloy, with chemical formula M2M′X. These alloys have the special feature that their properties differ completely from those of the constituent elements. For example, Cu2MnAl (a full-Heusler alloy) is ferromagnetic, even though the Cu and Al atoms are non-magnetic and the Mn atom is antiferromagnetic by itself [3]. Another example is TiNiSn (a half-Heusler alloy), which is semiconducting, even though it is made of three metallic components [4]. A recent trend in condensed matter physics is the search for topological insulating behavior in ternary half-Heusler materials [5-11]. Half-Heusler compounds are usually nonmagnetic and semiconducting when the total number of valence electrons is 18, which is known as the 18-electron rule [12]. This rule is generally used in the search for topological insulators (TIs) [13]. Hence, half-Heusler materials attract great attention for predicting TIs. First-principles calculations have been widely used to predict topological insulators with great success [14]. Materials which are insulating in their interior but can support the flow of electrons on their surface are called TIs. TIs currently form a major branch of research activity in condensed matter physics [15-17]. The first TI was theoretically predicted by Bernevig et al. and experimentally observed by Konig et al. in the HgTe quantum well [18,19]. Bernevig et al.
have shown that the band inversion driven by spin-orbit coupling (SOC) can be used to discover TIs. Sawai et al. have observed that the topology of the electronic band structure can be described by the band inversion between the Γ6 and Γ8 energy levels at the Γ symmetry point in the Brillouin zone, and they define the band inversion strength Δ as the energy difference between these two states, i.e., Δ = E_Γ8 − E_Γ6, where E_Γ6 and E_Γ8 are the energy levels of Γ6 and Γ8 at the Γ point [20]. In combination with ordinary superconductors, TIs may also play an important role in creating topological quantum computation [21]. In order to find new topological insulators, Lin et al. have theoretically studied more than 2000 half-Heusler compounds [13]. Out of these, only the LiAuS and NaAuS compounds were found to be TIs, with band gaps of ∼0.20 and ∼0.19 eV, respectively. They studied both compounds using only the PBE and hybrid density functionals in the face-centered-cubic (FCC) phase. To the best of our knowledge, the synthesis of the NaAuS compound (which we have chosen to study in the present manuscript) in the FCC phase has not been reported in the literature; however, this compound has been synthesized experimentally in the orthorhombic phase [22]. In general, half-Heusler compounds are described by the space group F-43m. Hence, in the NaAuS half-Heusler with space group F-43m, the Na, Au and S atoms are located at the Wyckoff positions (1/2, 1/2, 1/2), (1/4, 1/4, 1/4) and (0, 0, 0), respectively. A detailed structural and electronic study of the NaAuS compound is not found in the literature [13]. Hence, it is interesting to see the effect of different exchange correlation functionals on the structural and electronic behaviour of this compound in more detail. Here, in the present work, we study the structural and electronic behaviour of the NaAuS compound by using density functional theory. The calculated equilibrium lattice parameters increase in the order LDA < PBEsol < PBE < revPBE, while the bulk moduli predicted by these exchange correlation functionals follow the opposite order, consistent with the inverse relationship between the two quantities. Among these exchange correlation functionals, the total density of states below the Fermi level is spread over the widest region for LDA and the narrowest for revPBE, which indicates that the bulk modulus predicted by LDA is the largest and that by revPBE the smallest. The order of the calculated values of the direct band gap is LDA > PBEsol > PBE > revPBE. The band inversion found at the Γ point for NaAuS indicates that this compound is a topological insulator. The magnitude of the calculated band inversion strength follows the order LDA > PBEsol > PBE > revPBE.

II. COMPUTATIONAL DETAILS
The electronic structure calculations of the NaAuS compound are performed by using the full-potential linearized augmented plane-wave (FP-LAPW) method as implemented in the Elk code [23]. Here we have employed the LDA, PBEsol, PBE and revPBE exchange correlation functionals [24-27]. SOC is also considered to see its effect on the electronic properties of this compound. The muffin-tin sphere radii used for the Na, Au and S atoms are 2.0, 2.5 and 2.0 bohr, respectively. The values of rgkmax and gmaxvr are set to 8 and 14, respectively. These values are sufficient to obtain smooth parabolic energy versus volume curves. A 10 × 10 × 10 k-point mesh has been used in the present calculations. The total energy convergence criterion has been set below 10⁻⁴ Hartree/cell.
The equilibrium lattice parameters are computed by fitting the total energy versus unit cell volume data to the universal equation of state [28]. In its pressure form, the universal equation of state is defined as

P(V) = 3B₀ χ⁻² (1 − χ) exp[(3/2)(B₀′ − 1)(1 − χ)],

where P, E, V, B₀ and B₀′ are the pressure, energy, volume, bulk modulus and pressure derivative of the bulk modulus, respectively, and χ = (V/V₀)^(1/3).

III. RESULTS AND DISCUSSION
The total energy difference between the energy as a function of volume and the energy corresponding to the equilibrium volume [ΔE = E(V) − E(V_eq)], plotted per formula unit versus the primitive unit cell volume, shows a parabolic behavior for every exchange correlation functional. The SOC is expected to play an important role in the NaAuS compound because of the heavier Au atom. Hence, we have included the SOC in our calculations. The plot of ΔE per formula unit versus primitive unit cell volume is shown in Fig. 2. It is clear from the figure that a parabolic behavior is also seen here for these exchange correlation functionals, similar to that observed without SOC. The corresponding equilibrium volume and bulk modulus values are collected in Table 1 [29,30]. Now, we discuss the cause of the opposite behavior of the bulk modulus as compared to the equilibrium lattice constant when the exchange correlation functional changes from LDA to PBEsol to PBE to revPBE. As we know, the pressure is defined as the negative rate of change of energy with respect to volume, and the bulk modulus is proportional to the rate of change of pressure with respect to volume (B = −V dP/dV). We have plotted ΔE per formula unit versus the difference between the primitive unit cell volume and the corresponding equilibrium volume [ΔV = V − V_eq] for NaAuS using every exchange correlation functional in Fig. 3. It is evident from the figure that LDA (revPBE) gives the steepest (least steep) slope, which indicates that LDA (revPBE) gives a higher (lower) rate of change of energy with respect to volume. Hence, LDA (revPBE) gives a higher (lower) bulk modulus, as shown in Table 1. This follows from the direct relationship between the curvature of the energy-volume curve and the bulk modulus. The plot of the total density of states (TDOS) of NaAuS for every exchange correlation functional is shown in Fig. 4. It is clear from the figure that every exchange correlation functional gives an almost similar behaviour of the TDOS when SOC is excluded from the calculations. However, a very small TDOS at the Fermi level is obtained for every exchange correlation functional, which indicates a soft band gap. Here, it is important to note that, in the valence band (VB), the peaks of the TDOS shift towards the Fermi energy as the exchange correlation functional changes from LDA to PBEsol to PBE to revPBE. Below the Fermi level, the TDOS is spread up to -5.9 eV (-5.6 eV) and -5.1 eV (-4.8 eV) for LDA (PBEsol) and PBE (revPBE), respectively. It is interesting to note that the order of the bulk modulus and of the spread of the TDOS below the Fermi level is the same for these exchange correlation functionals. Hence, in general, one can predict the order of the bulk modulus by looking at the TDOS obtained with these exchange correlation functionals.
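The equilibrium parameters discussed in this section follow from the energy-volume fit described at its beginning. A minimal sketch of such a fit is shown below, assuming SciPy is available and using synthetic E-V samples in place of the actual Elk output; differentiating this E(V) form with respect to volume recovers the pressure form of the universal equation of state quoted above:

```python
import numpy as np
from scipy.optimize import curve_fit

def vinet_energy(V, E0, V0, B0, B0p):
    """Integrated universal (Vinet) equation of state, E(V) form.
    B0 carries energy/volume units (here Hartree/bohr^3)."""
    chi = (V / V0) ** (1.0 / 3.0)
    eta = 1.5 * (B0p - 1.0)
    return E0 + (9.0 * B0 * V0 / eta**2) * (
        1.0 - (1.0 - eta * (1.0 - chi)) * np.exp(eta * (1.0 - chi)))

# Synthetic E-V samples standing in for the Elk total-energy output
rng = np.random.default_rng(0)
V = np.linspace(320.0, 420.0, 11)                      # bohr^3
E = vinet_energy(V, -20.0, 370.0, 0.002, 4.5) + 1e-6 * rng.standard_normal(V.size)
p0 = (E.min(), V[np.argmin(E)], 0.003, 4.0)            # rough starting guesses
(E0, V0, B0, B0p), _ = curve_fit(vinet_energy, V, E, p0=p0)
print(f"V0 = {V0:.2f} bohr^3, B0 = {B0 * 29421.02:.1f} GPa")  # 1 Ha/bohr^3 = 29421 GPa
```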
The plot of the partial density of states (PDOS) of the Na, Au and S atoms of NaAuS, with and without including SOC in the calculations, is shown in Fig. 5(a-f) for the PBEsol exchange correlation functional only. This choice is due to the fact that, among the LDA, PBEsol, PBE and revPBE exchange correlation functionals, PBEsol is the newest one and shows an almost similar PDOS as compared to the other exchange correlation functionals. Here, we discuss the PDOS of the Na, Au and S atoms of NaAuS without including SOC in the calculations. Below the Fermi level, the dominant electronic contributions come from the 3s and 3p states of the Na atom, which are ∼45% and ∼55%, respectively. For the Au atom, the most dominant electronic contribution comes from the 5d state (∼93%), while small contributions come from the other states. For the S atom, the dominant electronic character comes from the 3p state (∼95%) as compared to the 3s state. Now, we discuss the PDOS of the Na, Au and S atoms above the Fermi level. The electronic contributions of the 3s and 3p states to the PDOS of the Na atom are ∼66% and ∼34%, respectively. For the Au atom, the contributions of the 6s and 5d states to the PDOS are negligible. The dominant electronic character for the S atom comes from the 3p state (∼75%) as compared to the 3s state. An almost similar electronic contribution of the various states to the PDOS of the Na, Au and S atoms of NaAuS is observed when SOC is included in the calculations, as shown in Fig. 5(d-f). The spin-unpolarised dispersion curves of the NaAuS compound along the high symmetry directions of the first Brillouin zone, obtained with the above mentioned exchange correlation functionals, are shown in Fig. 6. The high symmetry k-points for the FCC structure are W, L, Γ and X. Firstly, we discuss the band structure of NaAuS for the LDA exchange correlation functional. The top of the valence band (VB) and the bottom of the conduction band (CB) touch at the Γ point. The first two energy bands of the VB below the Fermi level are doubly degenerate along the Γ to L and Γ to X directions. Along the L to W and X to W directions, the degeneracy of these energy bands is completely lifted. Along the Γ to L direction, the 4th and 5th energy bands of the VB below the Fermi level are doubly degenerate, while along the Γ to X and L to W directions both energy bands are non-degenerate. However, both energy bands cross each other, together with the 3rd energy band, almost in the middle of the X and W points. The 6th, 7th and 8th energy bands are triply degenerate at the Γ point. Along the Γ to L and Γ to X directions, only the 6th and 7th energy bands remain degenerate, while from L to W and X to W the degeneracy of the 6th and 7th energy bands is lifted. Now, we discuss the various energy bands of the CB with respect to the Fermi level. The 2' and 3' energy bands of the CB are degenerate at the Γ point; however, this degeneracy is lifted away from the Γ point. The calculated values of the band gap for these exchange correlation functionals are given in Table 1. At last, we discuss the band inversion property of NaAuS at the Γ point, as it is the most common tool to identify the topological insulating behavior of a compound. Here, we focus on the Γ6 and Γ8 points when SOC is included in the calculations, which is shown in Fig. 7. The detailed analysis of the bands indicates that the Γ6 point is rich in the Au 6s state, and the contribution of the S 3p state is greater than that of the Au 5d state at the Γ8 point when SOC is not included in the calculations. However, when the effect of SOC is included, the Γ6 point does not change its main contributor, whereas the contribution of the Au 5d state at the Γ8 point increases as compared to the S 3p state. From basic considerations, an outer electronic state is expected to lie at higher energy than an inner state, and Au 5d is an inner state relative to Au 6s. But, after including SOC, the Au 6s state goes to a lower energy level than the Au 5d state. Hence, an Au s−d band inversion between the Γ6 and Γ8 points is observed here. The band inversion strength for the LDA, PBEsol, PBE and revPBE exchange correlation functionals is calculated by using the relation Δ = E_Γ8 − E_Γ6 [20]. The calculated values of the band inversion strength (shown in Table 1) for the LDA, PBEsol, PBE and revPBE exchange correlation functionals are ∼1.58, ∼1.57, ∼1.50 and ∼1.47 eV, respectively.
IV. CONCLUSIONS
A detailed electronic structure study of the half-Heusler NaAuS compound had not been done theoretically. Here, we have studied the comparative structural and electronic properties of NaAuS by using the LDA, PBEsol, PBE and revPBE exchange correlation functionals. The calculated values of the equilibrium lattice constant (bulk modulus) for the LDA, PBEsol, PBE and revPBE exchange correlation functionals are found to be ∼6.128 Å (∼66.6 GPa), ∼6.219 Å (∼56.4 GPa), ∼6.353 Å (∼46.5 GPa) and ∼6.442 Å (∼39.3 GPa), respectively. Hence, the order of the calculated values of the bulk modulus for these exchange correlation functionals is consistent with the order of the calculated values of the equilibrium lattice parameters because of the inverse relationship between bulk modulus and lattice parameter. Among these functionals, the total density of states below the Fermi level was found to be spread over a wider region for LDA and a narrower region for revPBE, which was also found to be consistent with the order of the bulk modulus for these exchange correlation functionals. In the presence of spin-orbit coupling, a direct band gap was observed for the NaAuS compound, which was found to be ∼0.26, ∼0.25, ∼0.24 and ∼0.23 eV for the LDA, PBEsol, PBE and revPBE exchange correlation functionals, respectively. The band inversion was observed at the Γ point, which indicates that the NaAuS compound shows topological insulating behaviour. The calculated values of the band inversion strength for the LDA (PBEsol) and PBE (revPBE) exchange correlation functionals were found to be ∼1.58 eV (∼1.57 eV) and ∼1.50 eV (∼1.47 eV), respectively.
2017-07-26T14:17:15.000Z
2017-07-26T00:00:00.000
{ "year": 2017, "sha1": "3a69c5b1924f2e4a336eb99d306883e4ab293486", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3a69c5b1924f2e4a336eb99d306883e4ab293486", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
55775950
pes2o/s2orc
v3-fos-license
Numerical Time-Domain Modeling of Lamb Wave Propagation Using Elastodynamic Finite Integration Technique This paper presents a numerical model of Lamb wave propagation in a homogeneous steel plate using the elastodynamic finite integration technique (EFIT), as well as its validation against analytical results. The Lamb wave method is a long-range inspection technique which is considered to have a promising future in the field of structural health monitoring. One of the main problems facing the Lamb wave method is how to choose the most appropriate excitation frequency so that the generated waves propagate properly in the material, interact with defects/damage, and are received in good condition. Modern simulation tools based on numerical methods such as the finite integration technique (FIT), the finite element method (FEM), and the boundary element method (BEM) may be used for modeling. In this paper, two sets of simulations are performed. In the first set, group velocities of Lamb waves in a steel plate are obtained numerically. The results are then compared with analytical results to validate the simulation. In the second set, EFIT is employed to study the interaction of the fundamental symmetric mode with a surface-breaking defect.

Introduction
The Lamb wave testing technique is increasingly used for assessing defects in thin-wall structures like plates and pipes [1-3]. Lamb waves are elastic waves whose wavelength is of the same order as the thickness of the structure [4]. One of the main advantages of the Lamb wave technique is that it allows long-range inspection, in contrast to traditional ultrasonic testing, where the coverage is limited to a small area in the vicinity of each transducer. Lamb waves were first described theoretically by Horace Lamb in 1917 [5]. These waves arise from the coupling between shear and longitudinal waves reflected at the top and bottom surfaces of a thin-wall structure [6]. Lamb wave theory can be found in a number of textbooks [7]. Defects such as corrosion and fatigue cracks cause changes in the effective thickness and local material properties; therefore, measurement of variations in Lamb wave propagation can be used to assess the integrity of a plate [1]. Successful usage of Lamb waves in an inspection system requires understanding their schemes of propagation in a waveguide and their scattering at defects. Thus, there is an increasing demand for powerful, flexible, and accurate simulation techniques. First works on the numerical simulation of ultrasonic waves were done by Harumi (1986) and Yamawaki and Saito (1992), who calculated and visualized bulk wave propagation [8]. Now, numerical simulation of Lamb waves is possible. Common techniques used to simulate Lamb wave propagation are the finite difference time domain (FDTD) method [9], the finite element method (FEM) [5], the boundary element method (BEM) [10], the elastodynamic finite integration technique (EFIT) [11,12], and specialized methods for guided wave calculations such as hybrid methods [13] and the semi-analytical finite element method (SAFEM) [8].
In this work, calculations are based on the elastodynamic finite integration technique. Historically, the finite integration technique was introduced by Weiland in electrodynamics. Fellinger and Langenberg applied Weiland's idea to the governing equations of ultrasonic waves in solids, calling it EFIT [14]. EFIT is a grid-based numerical time-domain method that uses a velocity-stress formalism and easily handles the different boundary conditions which are essential to model ultrasonic wave propagation [12]. Because of its relative simplicity and flexibility, Schubert et al. applied the EFIT equations in cylindrical coordinates (CEFIT) to simulate axisymmetric wave propagation in pipes with a 2D grid [15]. Schubert also used the finite integration technique to simulate elastic wave propagation in porous concrete and showed the efficiency of EFIT in modeling a diverse range of applications [16].

Two sets of simulation results are presented in this work using a program developed in the MATLAB environment. In the first one, Lamb wave propagation in a 2D steel plate is discussed. Results are then compared with analytical results to validate the accuracy of the modeling and, in the second example, the interaction of the Lamb wave with a surface breaking defect is investigated.

The Elastodynamic Finite Integration Technique for Linear Elastics
The governing equations of linear elastodynamics are applied in their global (integral) form over a control volume $V$ with surface $S$:

$$\frac{\partial}{\partial t}\int_V \rho\,\mathbf{v}\,dV = \oint_S \mathbf{T}\cdot\mathbf{n}\,dS + \int_V \mathbf{f}\,dV, \tag{1}$$

$$\frac{\partial}{\partial t}\int_V \mathbf{s}:\mathbf{T}\,dV = \oint_S \tfrac{1}{2}\big(\mathbf{v}\,\mathbf{n}^{T} + \mathbf{n}\,\mathbf{v}^{T}\big)\,dS, \tag{2}$$

where $\mathbf{v}$ is the particle velocity vector, $\mathbf{T}$ is the stress tensor, $\rho$ is the density, $\mathbf{n}$ is the outward normal vector on surface $S$, $\mathbf{f}$ is the body force vector, and $\mathbf{s}$ is the compliance tensor. The inverse of $\mathbf{s}$ is the stiffness tensor $\mathbf{c}$. Thus, using the stiffness tensor, the deformation rate equation can be expressed in another form. In the case of an isotropic material it can be written as [17]

$$\dot{\mathbf{T}} = \lambda\,(\nabla\cdot\mathbf{v})\,\mathbf{I} + \mu\big(\nabla\mathbf{v} + (\nabla\mathbf{v})^{T}\big), \tag{3}$$

where $\lambda$ and $\mu$ are the Lamé constants.

Spatial Discretized Form of Two-Dimensional EFIT
Consider the Cartesian coordinates $\{x, y, z\}$ and an ultrasonic wave which propagates in the two-dimensional $x$-$z$ plane. To apply FIT to (1) and (2), the squares shown in Figure 1 are used as integral volumes, assuming constant $\mathbf{v}$ and $\mathbf{T}$ within each volume.

Figure 1: Definition of integration cells for stress and velocity components (true material cells, which coincide with the $T_{xx}$ and $T_{zz}$ integration cells, and pseudomaterial cells). The geometry consists of four true material cells and four pseudomaterial cells [12].

Integrating (1) about a $v_x$ integration cell results in the discretized form

$$\dot{v}_x = \frac{1}{\rho}\left(\frac{T_{xx}^{+x} - T_{xx}^{-x}}{\Delta x} + \frac{T_{xz}^{+z} - T_{xz}^{-z}}{\Delta z} + f_x\right), \tag{4}$$

and integrating (1) in the same manner about a $v_z$ integration cell centered at $(x_1, z_1)$ results in

$$\dot{v}_z = \frac{1}{\rho}\left(\frac{T_{xz}^{+x} - T_{xz}^{-x}}{\Delta x} + \frac{T_{zz}^{+z} - T_{zz}^{-z}}{\Delta z} + f_z\right), \tag{5}$$

where the superscripts $\pm x$ and $\pm z$ denote the neighboring staggered-grid values in the respective directions. Now, using the normal stress equations, integration of (3) about the $T_{xx}$ and $T_{zz}$ cells centered at $(x_2, z_2)$ yields

$$\dot{T}_{xx} = (\lambda + 2\mu)\,\frac{v_x^{+x} - v_x^{-x}}{\Delta x} + \lambda\,\frac{v_z^{+z} - v_z^{-z}}{\Delta z}, \tag{6}$$

$$\dot{T}_{zz} = \lambda\,\frac{v_x^{+x} - v_x^{-x}}{\Delta x} + (\lambda + 2\mu)\,\frac{v_z^{+z} - v_z^{-z}}{\Delta z}. \tag{7}$$

Finally, integration of (3) over the $T_{xz}$ integration cell centered at $(x_3, z_3)$, the intersection of the material cells, results in

$$\dot{T}_{xz} = \mu\left(\frac{v_x^{+z} - v_x^{-z}}{\Delta z} + \frac{v_z^{+x} - v_z^{-x}}{\Delta x}\right). \tag{8}$$

As shown in Figure 1, to simplify indexing into the stress and velocity arrays of the staggered grids when programming the numerics, and to keep the same array sizes for all quantities, pseudomaterial cells are used. These cells have the same material properties as the true material they are added to but are not part of the physical simulation.

Time Discretization
Central differences are used to discretize the equations in the time domain, which results in the velocity and stress components being staggered in time by $\Delta t/2$ [15]:

$$\mathbf{v}^{\,n+1/2} = \mathbf{v}^{\,n-1/2} + \Delta t\,\dot{\mathbf{v}}^{\,n}, \qquad \mathbf{T}^{\,n+1} = \mathbf{T}^{\,n} + \Delta t\,\dot{\mathbf{T}}^{\,n+1/2}, \tag{9}$$

where $\Delta t$ is the time interval, the superscript $n$ is the integer time-step index, and the dot $\{\cdot\}$ denotes time differentiation.

Equations (4)-(8) are solved at all points in the simulation space and, by use of (9), the simulation proceeds in time in a "leap frogging" manner. A specific stability condition and adequate spatial resolution must be satisfied to guarantee EFIT convergence and accurate answers [15].
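As an illustration of how the updates (4)-(9) translate into a time-stepping loop, the following minimal sketch implements a homogeneous 2D velocity-stress leapfrog in Python. It is not the authors' MATLAB program; the material constants, grid sizes, source placement, and burst shape are illustrative assumptions, and the outermost cells are simply left at zero stress as a crude free-surface treatment.

```python
import numpy as np

# Steel-like material constants and square cells of 0.2 mm (cf. Section 3).
rho, lam, mu = 7850.0, 1.13e11, 8.2e10        # density (kg/m^3), Lame constants (Pa)
dx = dz = 2.0e-4                               # 0.2 mm square cells
cl = np.sqrt((lam + 2.0 * mu) / rho)           # longitudinal wave speed
dt = 0.7 * dx / (cl * np.sqrt(2.0))            # CFL-type stability bound (~17 ns)

nx, nz, nt = 600, 11, 1500                     # 120 mm x 2 mm plate section
vx  = np.zeros((nx, nz)); vz  = np.zeros((nx, nz))
txx = np.zeros((nx, nz)); tzz = np.zeros((nx, nz)); txz = np.zeros((nx, nz))

f0, ncyc = 5.0e5, 5                            # five-cycle 500 kHz raised-cosine burst
def burst(t):
    tau = ncyc / f0
    return np.sin(2*np.pi*f0*t) * 0.5 * (1 - np.cos(2*np.pi*t/tau)) if t < tau else 0.0

for n in range(nt):
    # leapfrog step 1: velocities from the stress divergence, cf. Eqs. (4)-(5)
    vx[1:-1, 1:-1] += dt/rho * ((txx[2:, 1:-1] - txx[1:-1, 1:-1]) / dx
                                + (txz[1:-1, 1:-1] - txz[1:-1, :-2]) / dz)
    vz[1:-1, 1:-1] += dt/rho * ((txz[1:-1, 1:-1] - txz[:-2, 1:-1]) / dx
                                + (tzz[1:-1, 2:] - tzz[1:-1, 1:-1]) / dz)
    # leapfrog step 2: stresses from the velocity gradients, cf. Eqs. (6)-(8)
    dvxdx = (vx[1:-1, 1:-1] - vx[:-2, 1:-1]) / dx
    dvzdz = (vz[1:-1, 1:-1] - vz[1:-1, :-2]) / dz
    txx[1:-1, 1:-1] += dt * ((lam + 2*mu) * dvxdx + lam * dvzdz)
    tzz[1:-1, 1:-1] += dt * (lam * dvxdx + (lam + 2*mu) * dvzdz)
    txz[1:-1, 1:-1] += dt * mu * ((vx[1:-1, 2:] - vx[1:-1, 1:-1]) / dz
                                  + (vz[2:, 1:-1] - vz[1:-1, 1:-1]) / dx)
    # S0-type excitation: in-phase normal tractions on the top and bottom surfaces
    tzz[20, 1] += burst(n * dt)
    tzz[20, -2] += burst(n * dt)
```

Exciting both surfaces in phase favors the symmetric mode; driving them in anti-phase would instead favor the antisymmetric mode, which is the idea behind the single-mode excitation patterns described in the next section.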
Propagation of Lamb Wave in a Steel Plate
In this part, the propagation of Lamb waves in a steel plate is simulated using 2D-EFIT. The steel plate has a length of 300 mm and a thickness of 2 mm. Table 1 shows the material properties used in this paper.

As the excitation source, point sources at the top and bottom borders of the plate are used. Figure 2 shows the location of the applied loads.

Using the excitation patterns shown in Figure 2 and the dispersion diagram for the steel plate (Figure 3), a single-mode Lamb wave is generated, which makes signal interpretation easier. Using the 2D-EFIT code developed in MATLAB, the propagation of the Lamb wave in the steel plate is simulated. To guarantee stability and accuracy of the results, Δx and Δz are chosen as 0.2 mm and Δt as 20 ns. The simulation results for the symmetric and antisymmetric modes are presented in Figure 4, where the ultrasonic wave field in the plate at time t = 80 μs is shown (the excitation pulse is a five-cycle raised cosine with a center frequency of 500 kHz).

As shown in Figure 4, for the fundamental symmetric mode (S₀) the Lamb wave field is symmetric about the mid-plane, and for the fundamental antisymmetric mode (A₀) the normal component of the particle velocity v_z has the same value for every particle with the same longitudinal position. From the dispersion curve, we find that S₀ travels faster than A₀, which is validated by the simulation results (see Figure 4). In order to check the EFIT accuracy, the group velocities obtained from the simulation are compared with analytical results for both the symmetric and antisymmetric modes (Figures 5 and 6).

Figures 5-7 show good agreement of the simulation results with the analytical ones; Figure 7 also shows that the dependence of the error on frequency is smaller for the antisymmetric mode than for the symmetric mode.

Reflection of the Fundamental Symmetric Mode (S₀) from a Defect
In this section, the interaction of the S₀ mode with a defective steel plate is analyzed. The results presented here were used for a sizing study of a rectangular surface breaking defect with different depths and an opening length of 2 mm on a steel plate (Figure 8). The same method used in the preceding section is used to generate a single mode with a center frequency of 500 kHz. However, as the Lamb wave interacts with a defect, the antisymmetric mode is also generated. To study the Lamb wave interaction with a defect, the ratio of the maximum amplitudes of the two modes, A₀/S₀, is then calculated and compared at different depths (Figure 9). Figure 10 shows the ultrasonic wave field in the defective plate at time t = 60 μs; the defect depth is 0.4 mm. As shown in Figure 10, because symmetric modes travel faster than antisymmetric ones, mode separation happens after the Lamb wave interacts with the defect.

Conclusion
EFIT was used for studying Lamb wave propagation in a steel plate using a program developed in the MATLAB environment. Two sets of simulation results were presented in this paper. In the first example, the group velocities of the Lamb wave for different frequencies were obtained using numerical signals and the results were then compared with analytical results; the comparison shows that, for both the fundamental symmetric and antisymmetric modes, the group velocity values are in good agreement with the theoretical ones. In the second example, the reflection of the S₀ mode from a defect was studied and the ratio of reflection coefficients was obtained as a function of crack depth, which shows that as the crack depth increases, the ratio A₀/S₀ increases. Each calculation presented in this paper was done on an ordinary PC (Core i5, 2.4 GHz, 4 GB RAM).
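The paper does not spell out its group-velocity extraction step, but one common approach is to read the envelope-peak arrival times of the same wave packet at two sensing positions; a hedged sketch of this is given below, where all names and values are placeholders.

```python
import numpy as np
from scipy.signal import hilbert

def group_velocity(sig_a, sig_b, dt, sensor_spacing):
    """Estimate group velocity (m/s) from the envelope-peak arrival times
    of one wave packet recorded at two positions sensor_spacing apart."""
    t_a = np.argmax(np.abs(hilbert(sig_a))) * dt   # arrival at first sensor
    t_b = np.argmax(np.abs(hilbert(sig_b))) * dt   # arrival at second sensor
    return sensor_spacing / (t_b - t_a)

# e.g., two v_z time histories 50 mm apart, sampled with the 20 ns step used above:
# cg = group_velocity(vz_at_x1, vz_at_x2, dt=20e-9, sensor_spacing=0.05)
```

Repeating this for bursts of different center frequencies yields the numerical dispersion points that can be compared against the analytical group-velocity curves, as in Figures 5-7.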
Figure 4: Lamb wave propagation in a steel plate at time t = 80 μs. The snapshot represents the normal component of the particle velocity (v_z): (a) symmetric mode and (b) antisymmetric mode.
Figure 5: Analytical group velocities compared with 2D-EFIT results for the fundamental antisymmetric mode.
Figure 7: Error comparison for the symmetric and antisymmetric modes.
Figure 10: Lamb wave propagation in a steel plate with a defect at time t = 60 μs. The snapshot represents the normal component of the particle velocity (v_z, in m/s).
Table 1: Material properties used for simulation.
Resources for Turkish Dependency Parsing: Introducing the BOUN Treebank and the BoAT Annotation Tool

In this paper, we introduce the resources that we developed for Turkish dependency parsing, which include a novel manually annotated treebank (BOUN Treebank), along with the guidelines we adopted, and a new annotation tool (BoAT). The manual annotation process we employed was shaped and implemented by a team of four linguists and five Natural Language Processing (NLP) specialists. Decisions regarding the annotation of the BOUN Treebank were made in line with the Universal Dependencies (UD) framework as well as our recent efforts for unifying the Turkish UD treebanks through manual re-annotation. To the best of our knowledge, the BOUN Treebank is the largest Turkish treebank. It contains a total of 9,761 sentences from various topics including biographical texts, national newspapers, instructional texts, popular culture articles, and essays. In addition, we report the parsing results of a state-of-the-art dependency parser obtained over the BOUN Treebank as well as two other treebanks in Turkish. Our results demonstrate that the unification of the Turkish annotation scheme and the introduction of a more comprehensive treebank lead to improved performance with regard to dependency parsing.

Introduction
The field of Natural Language Processing (NLP) has seen an influx of various treebanks following the introduction of the treebanks in Marcus et al. (1993), Leech and Garside (1991), and Sampson (1995). These treebanks paved the way for today's ever-growing NLP framework, consisting of NLP applications, treebanks, and tools. Even though the value of a treebank cannot be judged solely by its number of sentences, previous research has shown that the size of a treebank may affect its utility in downstream NLP tasks (Foth et al., 2014). Among the many languages with a growing treebank inventory, Turkish has been one of the less fortunate languages. The latest version of the Turkish IMST-UD Treebank is currently ranked as the 76th treebank out of 183 treebanks in terms of the number of annotated sentences in the Universal Dependencies (UD) project (Nivre et al., 2016). As of UD version 2.7, the UD project includes 183 treebanks, and the largest of them, the UD German-HDT Treebank, consists of 190,000 sentences (Borges Völker et al., 2019). Turkish has posed an enormous challenge for NLP studies due to its complex network of inflectional and derivational morphology, as well as its highly flexible word order. One of the first attempts to create a structured treebank was initiated in the studies of Atalay et al. (2003) and Oflazer et al. (2003). Following these studies, many more Turkish treebanking efforts were introduced (Megyesi et al., 2010; Sulger et al., 2013; Sulubacak et al., 2016b, among others). However, most of these efforts contained a small volume of Turkish sentences, and some of them were re-introduced versions of already existing treebanks in a different annotation scheme. This paper aims to contribute to the limited NLP resources in Turkish by annotating a part of a brand new corpus that has not been approached from a syntactic perspective before, namely the Turkish National Corpus (henceforth TNC) (Aksan et al., 2012). TNC is an online corpus that contains 50 million words. The BOUN Treebank, which is introduced in this paper, includes 9,761 sentences extracted from five different text types in TNC, i.e.
essays, broadsheet national newspapers, instructional texts, popular culture articles, and biographical texts. These sentences have not been included in a treebank previously. We manually annotated the syntactic dependency relations of the sentences following the up-to-date UD annotation scheme. Through a discussion of the annotation decisions made in the creation of the BOUN Treebank, we present our take on the annotation of Turkish data, including the challenges posed by the copular clitic, embedded constructions, compounds, and lexical cases. Turkish treebanking studies present an inconsistent picture in the annotation of such constructions, even though these linguistic phenomena are observed and studied extensively within Turkish linguistics. In addition, we present a new annotation tool that integrates a tabular view, a hierarchical tree structure, and extensive morphological editing. We believe that other agglutinative languages that pose challenging morphological problems may benefit from this tool due to its ability to split and/or merge words and tokens in a sentence while automatically rearranging the information regarding each word/token, such as the word/token ID. This feature is crucial for the annotation process, since pre-processing of sentences may split words and tokens erroneously. Lastly, we report the results of an NLP task, namely dependency parsing, for which we performed parsing experiments on the newly introduced BOUN Treebank together with previous Turkish treebanks. The results show that increasing the size of the training set has a positive effect on parsing success for Turkish. We observe that using the UD annotation scheme more faithfully and in a unified manner within the Turkish UD treebanks offers an increase in the UAS (Unlabeled Attachment Score) F1 and LAS (Labeled Attachment Score) F1 scores. We also report individual parsing scores for the different text types within our new treebank. This paper is organized as follows: In Section 2, we briefly explain the morphological and syntactic properties of Turkish. In Section 3, we present an extensive review of previous treebanking efforts in Turkish and locate them with regard to each other in terms of their use and their aim. In Section 4, we report the details of the BOUN Treebank and our annotation process, including the morphological and syntactic decisions. We lay out our tool BoAT in Section 5. In Section 6, we report our experiments and their results. In Section 7, we present our conclusions and discuss the implications of our work.

Turkish
Turkish is a Turkic language spoken mainly in Asia Minor and Thrace with approximately 75 million native speakers. As an agglutinative language, Turkish makes extensive use of morphological concatenation. According to Bickel and Nichols (2013), a Turkish verb may have up to 8-9 inflectional categories per word, such as number, tense, or person marking. This number is about twice the average maximum number of inflectional categories in the other 145 languages covered in Bickel and Nichols (2013). The number of morphological categories increases further when considering derivational processes. Kapan (2019) states that Turkish words may host up to 6 different derivational affixes at the same time. The complexity of morphological analysis, however, is not limited to the sheer number of inflectional and derivational affixes.
In addition to such affixes, allomorphies, vowel harmony processes, elisions, and insertions create an arduous task for researchers in Turkish NLP. Table 1 lists the possible morphological analyses of the surface word alın. The table shows that, despite the shortness of the word, the morphological analysis is toilsome, and even such a short item may be parsed to have several possible roots.

Table 1: Possible morphological analyses of the word alın from Sak et al. (2008). The symbol '&' indicates derivational morphemes (originally '-', changed for clarity here), and '+' indicates inflectional morphemes. The strings between these symbols and the square-bracketed feature represent the phonology of the suffix. Upper case within a suffix means that the sound is phonologically conditioned. 'H' stands for the archiphonemic high vowel. 'N' stands for the allomorphy between the alveolar nasal and the lack of it. 'Y' represents the allomorphy between the palatal glide and the lack of it.

With respect to syntactic properties, Turkish has a relatively free word order, which is constrained by discourse elements and information structure (Taylan, 1986; Hoffman, 1995; Kural, 1997; İşsever, 2003; Kornfilt, 2005; Öztürk, 2008; Özsoy, 2019). Even though SOV is the base word order, other permutations are highly utilized, as exemplified in Example 1. The percentages were determined by Slobin and Bever (1982) from 500 utterances of spontaneous speech. We also report the word order percentages acquired from the BOUN Treebank in Table 13 and Table 14 in Section 10. These permutations stem from processes including topicalization, focusing, and backgrounding. Contributing new or old information may also affect the place of a constituent; that is, new information may be placed closer to the verb and is always in pre-verbal position, whereas old information may surface both in pre-verbal and post-verbal positions. Another aspect that affects the word order is definiteness and specificity. Indefinite subjects and objects can typically surface in the immediately pre-verbal position.

(1) a. Fatma Fatma (adapted from Hoffman, 1995)

As for the case system, every argument in a sentence needs to host a case according to its syntactic role, semantic contribution, or the lexical selection of the phrasal head (Taylan, 2015). These groupings, however, are not clear-cut, and there is not always a one-to-one correspondence between cases and their roles. Moreover, Turkish is a pro-drop language in which the subject can be elided when it is retrievable from the given discourse (Kornfilt, 1984; Özsoy, 1988). Overt subjects are used only to convey certain discourse and/or pragmatic effects, such as a change in context or focus. However, the subject is also retrievable from the agreement marker on the verb. In addition to these properties, Turkish is also a null object language, even though the language does not have an overt agreement marker available for this process (Öztürk, 2006). If the object of a sentence is retrievable from the given discourse, speakers may omit the object without any overt marking on the verb. A final issue with Turkish syntax lies in the fact that it frequently makes use of nominalization processes for embedded clauses (Göksel and Kerslake, 2005). With certain nominalizer suffixes, the embedded sentences may function as an adverbial, an adjectival, or a nominal.
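Referring back to the archiphonemic notation explained under Table 1, a toy sketch of expanding the high-vowel archiphoneme 'H' by vowel harmony is given below; the function and its coverage are our simplification for illustration, not a tool used in this work.

```python
# Toy four-way (I-type) vowel harmony: the archiphonemic high vowel 'H'
# surfaces as i/ü/ı/u depending on the last vowel of the stem.
VOWELS = "aeıioöuü"
BACK, ROUND = set("aıou"), set("oöuü")

def harmonize(stem: str, suffix: str) -> str:
    last = next(c for c in reversed(stem) if c in VOWELS)
    high = {(False, False): "i", (False, True): "ü",
            (True, False): "ı", (True, True): "u"}[(last in BACK, last in ROUND)]
    return stem + suffix.replace("H", high)

print(harmonize("al", "Hn"))   # alın
print(harmonize("göz", "Hm"))  # gözüm 'my eye'
```

Even this toy version shows why surface forms like alın collapse many underlying analyses: several distinct suffixes harmonize to the same surface string.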
Previous Turkish Treebank Initiatives
The initial groundwork for Turkish treebanks was laid in Atalay et al. (2003) and Oflazer et al. (2003), following the studies on treebanks for languages such as English, German, Dutch, and many more (Leech and Garside, 1991; Marcus et al., 1993; Sampson, 1995; Brants et al., 2002; van der Beek et al., 2002). The first of its kind, the METU-Sabancı Treebank (MST) consists of 5,635 sentences, a subset of the METU corpus that reportedly includes 16 different text types such as newspaper articles and novels (Say et al., 2002). Oflazer et al. (2003) encoded both morphological complexities and syntactic relations. Due to the productive use of derivational suffixes, they explicitly spelled out every inflection and derivation within a word. As for the syntactic representation, Atalay et al. (2003) used a dependency grammar in order to bypass the problem of constituency in Turkish, which arises from the relatively free word order of the language. Branching off the work of Atalay et al. (2003) and Oflazer et al. (2003), a small treebank named the ITU Validation Set for MST was introduced. It contains 300 sentences from 3 different genres. The treebank was introduced as a test set for MST in the CoNLL 2007 Shared Task (Eryigit and Pamay, 2007). The treebank was annotated by two annotators using a cross-checking process. Following this work, MST was re-annotated by Sulubacak et al. (2016a) from the ground up, with revisions made in syntactic relations and morphological parsing. The latest version was renamed the ITU-METU-Sabancı Treebank (IMST). Due to certain limitations, Sulubacak et al. (2016a) employed only one linguist and several NLP specialists. The annotation process was arranged in such a way that there was no cross-checking between the works of the annotators. Moreover, inter-annotator agreement scores, details regarding the decision process among annotators, and the adjudication process have not been reported. Nevertheless, this re-annotation solved many issues regarding MST by proposing a new annotation scheme. Even though problems such as semantic incoherence in the usage of annotation tags and ambiguous annotation were resolved to a great extent, the non-communicative nature of the annotation process led to a handful of inconsistencies. The inconsistencies in IMST were also carried over to IMST-UD, which utilizes automatic conversions of the tags from IMST to the UD framework (Sulubacak et al., 2016b). Mappings of syntactic and morphological representations were also included. Consequently, IMST-UD was made more explanatory and clear thanks to the systematically added additional dependencies. While IMST had 16 dependency relations, 47 morphological features, and 11 part-of-speech types, IMST-UD increased these numbers to 29, 66, and 14, respectively. Yet, the erroneous dependency tagging resulting from morpho-phonological syncretisms lingered long after the publication of the treebank. Moreover, no post-editing effort has been reported. There have been four updates since the first release of the IMST-UD treebank, but there are still mistakes that could be corrected through a post-editing process, such as punctuation marks tagged as roots, reversed head-dependent relations, and typos in the names of syntactic relations. Apart from the treebanks originating from MST, many other treebanks have emerged. Some of these treebanks can be grouped under the class of parallel treebanks. The first of these parallel treebanks is the Swedish-Turkish Parallel Treebank (STPT). Megyesi et al. (2008) published their parallel treebank containing 145,000 tokens in Turkish and 160,000 tokens in Swedish.
Following this work, Megyesi et al. (2010) published the Swedish-Turkish-English Parallel Treebank (STEPT). This treebank includes 300,000 tokens in Swedish, 160,000 tokens in Turkish, and 150,000 tokens in English. All the treebanks utilized the same morphological and syntactic parsing tools. For Swedish morphology, the Trigrams'n'Tags tagger (Brants, 2000) trained on Swedish (Megyesi, 2002) was used. On the other hand, the Turkish data were first analyzed using the morphological parser in Oflazer (1994), and its accuracy was enhanced through the morphological disambiguator proposed in Yuret and Türe (2006). The Turkish and Swedish treebanks were annotated using the MaltParser trained with the Swedish treebank Talbanken05 (Nivre et al., 2006) and MST, respectively. Another parallel treebank introduced for Turkish is the Turkish PUD Treebank, which adopts the UD framework. The Turkish PUD Treebank was published as part of a collaborative effort, the CoNLL 2017 Shared Task on Multilingual Parsing from Raw Text to Universal Dependencies. Sentences for this collaborative treebank were drawn from newspapers and Wikipedia. The same 1,000 sentences were translated into more than 40 languages and manually annotated in line with the universal annotation guidelines of Google. After the annotation, the Turkish PUD Treebank was automatically converted to the UD style. Moreover, there are three treebanks that consist of informal texts. One such treebank was introduced by Pamay et al. (2015) under the name of the ITU Web Treebank (IWT). In IWT, non-canonical data were included, such as the use of punctuation in emoticons, abbreviated writing such as kib, which stands for kendine iyi bak (take care of yourself), and non-standard writing conventions as in saol instead of sagol (thanks). Later on, the UD version of IWT was also introduced (Sulubacak and Eryigit, 2018). Another web treebank has recently been presented by Kayadelen et al. (2020), which is larger than the previous Turkish treebanks in terms of word count, but still smaller than the BOUN Treebank that we introduce in this paper. Kayadelen et al. (2020) used a set of dependency labels similar to the UD framework. However, they diverge from the UD framework on certain issues such as postpositions, indirect objects, and oblique arguments. The Turkish-German Code-Switching Treebank (Çetinoglu and Çöltekin, 2019) is another treebank that does not use formal texts. The Turkish-German Code-Switching Treebank consists of bilingual conversation transcriptions as well as their morphological and syntactic annotation. This treebank includes 48 unique conversations and 2,184 Turkish-German bilingual sentences that have been annotated with respect to the language in use. There is also one grammar-book-based treebank, introduced in Çöltekin (2015). The Grammar Book Treebank (GB) is the first UD attempt in Turkish treebanking. In this treebank, data were collected from a reference grammar book for Turkish written by Göksel and Kerslake (2005). It includes 2,803 items that are either sentences or sentence fragments from the grammar book. It utilized TRMorph (Çöltekin, 2010) for morphological analyses, and the proper morphological annotations were manually selected amongst the suggestions proposed by TRMorph. The sentences were manually annotated natively in the UD style. In addition to these treebank initiatives, we recently started our unifying efforts in the syntactic annotation scheme for Turkish treebanking.
We manually corrected the syntactic annotations in the Turkish PUD and IMST-UD treebanks (Türk et al., 2019b,a). In these works, we selected the treebanks that were not annotated natively in the UD style and unified the annotation scheme. This process improved the UAS score for the IMST-UD Treebank from 72.49 to 75.49 and caused only a 0.9-point decrease in the LAS score (from 66.43 to 65.53) in our experiments with Stanford's neural dependency parser (Dozat et al., 2017), despite the number of unique dependency tags increasing from 31 to 40 with the newly included dependency types (Türk et al., 2019b). On the other hand, there was a decrease in the parsing accuracy for the re-annotated version of the PUD Treebank in terms of the attachment scores. While the parser achieved a UAS score of 79.52 and a LAS score of 73.81 on the previous version of the PUD Treebank, its attachment scores for the re-annotated version were 78.70 UAS and 70.01 LAS (Türk et al., 2019a). We want to note that we used 5-fold cross-validation for the evaluation of the PUD Treebank due to its small size. In each fold, the parser had only 600 sentences for training, and 200 sentences were used as the development set. The evaluation was done on the remaining 200 sentences. The small size of the PUD Treebank, which was originally used only for evaluation purposes (not for training) in the CoNLL 2017 Shared Task, renders the results less reliable. Following these studies, with the annotation scheme we unified, we manually annotated the BOUN Treebank, which we present in this paper. In Table 2, we provide a comparison of the Turkish treebanks reviewed in this section.

The BOUN Treebank
In this paper, we introduce a treebank that consists of 9,761 sentences which form a subset of the Turkish National Corpus (TNC) (Aksan et al., 2012). TNC includes 50 million words from various text types and encompasses sentences from a 20-year period between 1990 and 2009. The principles of the British National Corpus were followed in terms of the selection of the domains. Table 3 shows the percentages of the different domains and media used in TNC. In our treebank, we included the following text types: essays, broadsheet national newspapers, instructional texts, popular culture articles, and biographical texts. Approximately 2,000 sentences were randomly selected from each of these registers. All of the selected sentences were written items and were not from the spoken medium. Our motivation for using these registers was to cover as many domains as possible using as few registers as possible, while not compromising variation in length, formality, and literary quality. TNC consists of 39 different registers, reported in Section 11. The basic statistics for the BOUN Treebank and its different sections are provided in Table 4. Before the manual annotation of the BOUN Treebank, the sentences were first automatically annotated using an end-to-end parsing pipeline tool that parses raw texts to UD dependencies in the CoNLL-U format with POS and morphological tagging information (Kanerva et al., 2018). The manual syntactic annotation of the sentences was then performed on these automatically generated CoNLL-U versions of the corpus sentences. In the manual annotation process, we followed the UD syntactic relation tags. Before the annotation process started, we first reviewed the dependency relations in use within the UD framework. Upon reviewing the definitions, we created and annotated a list of unique sentences that we believe are representative of the UD dependency relations in Turkish.
Later on, we compared our sentences for certain dependency relations with the examples from already existing Turkish UD treebanks. If our examples and the UD examples were not parallel, we first discussed whether or not our interpretation was correct. We then discussed whether or not there should be any additions to the UD guidelines. These discussions were also brought up within the UD community. After settling on the definitions of the dependency relations, two Turkish native speaker linguists manually annotated the BOUN Treebank using our annotation tool, which is presented in Section 5. Following the annotation process, two other linguists who did not participate in the manual annotation cross-checked the syntactic annotations of the two linguists. When a problematic sentence or an inconsistency was encountered, discussions regarding the sentence and related sentences were held among the team members. After a decision was made, the necessary changes were applied uniformly. In addition to the cross-checking process, we performed a partial double annotation in order to have a consistent annotation scheme before the annotation process of the BOUN Treebank started. For this purpose, the annotators performed an additional annotation task independently for the same set of 1,000 randomly selected sentences. The disagreements were discussed and resolved with the entire team of linguists and NLP specialists. The Cohen's Kappa measure of inter-annotator agreement for finding the correct dependency label of the relations was found to be 0.82. The unlabeled and labeled attachment scores between the annotations are 0.83 and 0.75, respectively.

Morphology
Turkish makes use of affixation much more frequently than any other word-formation process. Even though this adds immense complexity to its word-level representation, patterns within the Turkish word-formation process have allowed previous research to formulate morphological disambiguators that dissect word-level dependencies. One such work was introduced by Sak et al. (2011). Their morphological parser is able to run independently of any other external system and is capable of providing the correct morphological analysis with 98% accuracy using contextual cues, such as the two previous tags. In the morphological annotation of the BOUN Treebank, we decided to use the morphological analyzer and disambiguator of Sak et al. (2011). For this purpose, the tokenized sentences were first given to the morphological parser. The output of the parser was converted to the corresponding UD features automatically. In rare cases where the morphological parser did not return a morphological analysis for a token, the morphological features column from the Turku pipeline (Kanerva et al., 2018) for this token was used. The same operation was done for the lemmas of the tokens as well. Our preference for the morphological tagger of Sak et al. (2011) over the morphological tagger of the Turku parsing pipeline (Kanerva et al., 2018), which we used for the automatic processing of the treebank in the first step, is based on their comparison in terms of token-based accuracy and the feature-based recall, precision, and F-measure metrics. After randomly selecting 50 words from every text type in the BOUN Treebank (a total of 250 unique tokens, excluding punctuation, for the five text types), we encoded the errors made by the morphological parsers. The results are shown in Table 5.
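As an aside, the token- and feature-based scores reported in Table 5 can be computed along the lines of the following sketch; the function name and the toy feature sets are our own illustration, not the evaluation script used here.

```python
# Illustrative computation of Table 5-style metrics: token accuracy plus
# feature-based precision/recall/F1 over sets of 'Feature=Value' pairs.
def feature_scores(gold, pred):
    """gold, pred: lists of sets of 'Feature=Value' strings, one set per token."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))       # correctly predicted features
    n_gold = sum(len(g) for g in gold)                     # features in the gold standard
    n_pred = sum(len(p) for p in pred)                     # features predicted by the parser
    acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    prec, rec = tp / n_pred, tp / n_gold
    f1 = 2 * prec * rec / (prec + rec)
    return acc, prec, rec, f1

gold = [{"Case=Nom", "Number=Sing"}, {"Tense=Past", "Person=1"}]
pred = [{"Case=Nom", "Number=Sing"}, {"Tense=Pres", "Person=1"}]
print(feature_scores(gold, pred))  # (0.5, 0.75, 0.75, 0.75)
```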
The Token Accuracy column represents the token-based accuracy, namely the percentage of words for which correct morphological analyses are produced. The Recall column represents the ratio of the number of correct morphological features to the number of morphological features in the gold standard. The Precision column encodes the ratio of the number of correct morphological features to the total number of morphological features predicted by the morphological parser. The F1-measure column is the harmonic mean of precision and recall. Our scores align with the scores reported in the original study of Sak et al. (2011), even though their test set and our set here consist of different text types. While they only used newspaper corpora in the test set, we tested the parser using different text types including broadsheet national newspapers, essays, instructional texts, biographical texts, and popular culture articles. The morphological parser of Sak et al. (2011) does not provide morphological tags in the UD format, so we automatically converted its output to the UD format. In this process, we maximally used the morphological features from the UD framework. When there was no clear-cut mapping between the features that we acquired from the morphological parser of Sak et al. (2011) and the features proposed in the UD framework, we used the features previously suggested in the works of Çöltekin (2016), Tyers et al. (2017b), and Sulubacak and Eryigit (2018). These features were already stated in the UD guidelines. Table 12 in Section 9 shows the automatic conversion from the results of Sak et al. (2011)'s morphological disambiguator. As is clear from the table, the depth of the morphological representation in Sak et al. (2011) and that in the UD framework do not align perfectly, and there is no one-to-one mapping. For example, an output from Sak et al. (2011) may include both the Narr and Past features. In the automatic conversion, we would end up with Tense=Past twice and conflicting values for the Evident feature. To resolve cases similar to these, we made use of simple rules that detect conflicting features arising from our conversion and return appropriate features. Moreover, we used the morphological cues provided by the morphological parser to decide on the UPOS and lemma. All elements of our conversion and post-processing can be found on our Github page. In our treebank, in addition to the words, we encoded the lexical and grammatical properties of the words as sets of features and values for these features. We also encoded the lemma of every word separately, following the UD framework. Table 6 shows an example sentence encoded in the CoNLL-U format.

Syntax
In the BOUN Treebank, we decided to represent the relations amongst the parts of the sentences within a dependency framework. This decision has two main reasons. The main and historical reason is the fact that the growth of Turkish treebanks has mainly been within frameworks where the syntactic relations are represented with dependencies (Oflazer, 1994; Çetinoglu, 2009). The other reason is the fact that Turkish allows phrases to be scrambled to pre-subject, post-verbal, and any clause-internal positions with specific constraints, which makes building constituency grammars quite difficult (Taylan, 1984; Kural, 1992; Aygen, 2003; İşsever, 2007). With these in mind, we wanted to stick with the conventional dependency framework and use the recently rising UD framework.
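Looking back at the Narr/Past clash discussed in the Morphology subsection above, the following sketch shows one way such conflicting features can be collapsed during conversion; the specific mapping choices (e.g., treating -mIş + -DI as a pluperfect) are our illustrative assumptions, not the exact rules of the conversion script.

```python
# Toy resolution of a Narr + Past clash from the raw analysis (e.g. gel-miş-ti):
# instead of emitting Tense=Past twice with conflicting Evident values, collapse
# the pair into a single pluperfect reading. This is one illustrative choice only.
def convert_tam(raw_tags):
    feats = {}
    if "Narr" in raw_tags and "Past" in raw_tags:
        feats["Tense"] = "Pqp"                               # pluperfect: -mIş + -DI
    elif "Narr" in raw_tags:
        feats["Tense"], feats["Evident"] = "Past", "Nfh"     # non-firsthand past
    elif "Past" in raw_tags:
        feats["Tense"], feats["Evident"] = "Past", "Fh"      # firsthand past
    return "|".join(f"{k}={v}" for k, v in sorted(feats.items()))

print(convert_tam(["Pos", "Narr", "Past", "A1sg"]))  # Tense=Pqp
```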
One of the main advantages of the UD framework is that, by its very nature, it creates directly comparable sets of treebanks with regard to their syntactic representation. By following the UD framework, we implicitly encode two different kinds of syntactic information for each dependent: the category of the dependent and the function of this dependent with regard to its syntactic head. This is due to the grouping of the dependency relations introduced by the UD framework. The selection of the syntactic dependency relation for each dependent is mainly based on the functional category of the dependent in relation to the head and the structural category of the head. In terms of the functional category of the dependent, the UD framework differentiates core arguments of clauses, non-core arguments of clauses, and dependents of nominal heads. As for the category of the dependent, the UD framework makes use of a taxonomy that distinguishes between function words, modifier words, nominals, and clausal elements. In addition to this classification, there are some other groupings, which may be listed as: coordination, multiword expressions, loose syntactic relations, sentential, and extrasentential. Table 7 shows the dependency relations that we employed in the BOUN Treebank with their counts and percentages. Every dependency forms a relation between two segments within the sentence, building up to a non-binary and hierarchical representation of the sentence. In this way, nodes can have more than two child nodes, and every node is accessible from the root node. This representation is exemplified in Example 2 using the sentence in Table 6.

(2) Söz-ü uza-t-ıp sen-i merak-ta bırak-tı-m galiba.
word-ACC stretch-CAUS-NMLZ you-ACC curiosity-LOC leave-PST-1SG probably

Different Conventions Adopted in the Annotation Process
In the annotation process of the BOUN Treebank, we stayed faithful to the UD main tag set and the previous conventions of Turkish annotation schemes for the most part. However, there were some instances where we diverged from these conventions or made the linguistic reasoning behind them more explicit. In this section, we provide the justifications of our linguistic decisions for these instances. Our decisions are in the same spirit as the unification of the annotation scheme within Turkish UD treebanks carried out in our previous works (Türk et al., 2019b,a). Our main concern is to reflect linguistic adequacy in the BOUN Treebank, following Manning's Law. During all this work, we paid great attention to following the previous discussions within the UD framework, such as the discussion on the copular clitic and the objecthood-case marking relation. In the following sections, we will first touch upon the issues where we believe the previous conventions in Turkish UD treebanking were erroneous according to UD. These issues include the annotation of embedded sentences, the treatment of the copular verb, the analysis of compounds, and the annotation of classifiers. Next, we will discuss the issue of objecthood and the case marking relation in Turkish, where we adopt a simpler analysis that has been used in other dependency grammars instead of the recently discussed UD alternatives.

Annotation of Embedded Clauses
The first issue where we diverged from the previous annotation conventions is the annotation of embedded clauses. In the previous treebanks, the annotation of embedded clauses did not reflect the inner hierarchy that a clause by definition possesses.
This is mostly due to the morphological aspect of the most common embedding strategy in Turkish: nominalization. Due to nominalization, embedded clauses in Turkish can be regarded as nominals since they behave exactly like nominals: they can be marked with an accusative case, can be substituted with any other nominal, and can carry genitive-possessive cases as person marking, as shown in Example 3. The embedded clause in the given sentence is shown with square brackets. The whole square-bracketed phrase can be replaced with a simple noun, like otobüs (bus), or a complex noun phrase like senin otobüsün (your bus), as in Example 4.

see-PST-1SG
'I saw that you drove the bus.'

Due to these surface-level morphological and syntactic similarities, previous Turkish treebanks in the UD framework, with the exception of the Grammar Book Treebank (Çöltekin, 2015), used the dependency relation obj instead of ccomp, nsubj instead of csubj, amod instead of acl, and advmod instead of advcl to mark the relation of the embedded clause with the matrix verb. In our annotation process, we emphasized the clausal nature of these embedded sentences and their syntactic derivation by focusing on their internal structure, reflecting the existence of a temporal domain in the embedded clause. For instance, Example 3 would be nonsensical if we had the time adverb tomorrow within the embedded clause. This ungrammaticality is due to the tense information introduced by the nominalizer '-düg' in the example sentence. If there were an adverb like tomorrow in an embedded clause marked with '-düg', the previous annotation scheme would not be able to detect the ungrammaticality. However, our annotation scheme is able to detect this ungrammaticality. The same argumentation applies to converbs as well. Converbs are the verbal elements of non-finite adverbial clauses (Göksel and Kerslake, 2005). They may act as adverbial adjuncts or as discourse connectives. In the previous annotation processes for Turkish, they were annotated as nmod. The reason behind this annotation is again the fact that they behave like nominals; they may be marked with the inflectional and derivational suffixes that nouns normally bear. Considering their clausal properties, such as their temporal domain and their ability to host a subject, an object, and tense/aspect/modality information, we annotated them as advcl, as in Example 5.

(5) Bira-lar-ı devir-dik-çe merak-ım az-dı.
beer-PL-ACC topple-NMLZ-CVB curiosity-1SG.POSS get.wild-PST

In addition to the annotation of the whole embedded clause, dependents within the embedded clause were erroneously annotated in the previous Turkish annotation schemes. For example, an oblique of an embedded verb used to be attached to the root, since the embedded verb was seen as a nominal and not as a verb, as in Example 6 ('The scenery that I passed before I entered the tunnel was completely different from here.'). Likewise, the genitive subjects of embedded clauses were wrongly marked as possessive nominal modifiers, whereas they are one of the obligatory elements of the embedded structures. This wrong annotation in the previous treebanks is due to the fact that Turkish makes use of the genitive-possessive structure for marking agreement in an embedded clause, as in Example 7 (Göksel and Kerslake, 2005). Despite the morphology, the word senin here serves as the subject. Example 8 shows the causativized version of the embedded verb in Example 7.
When we causativize the subject of an intransitive verb, we expect the subject to be marked with an accusative case and act as a direct object. As seen in Examples 7 and 8, the word sen reflects the morphological reflex stemming from a syntactic voice change. Thus, it cannot be a modifier and has to be an argument. For the reasons explained above, in the annotation of embedded clauses we used the dependency relations that emphasize the clausal nature of the nominalized verbs, i.e., csubj, ccomp, and advcl, instead of the dependency relations that emphasize the final product of the local derivations, i.e., nsubj, obj, and advmod, respectively.

Copular Clitic
One inconsistent issue within the Turkish treebanks was the annotation of the copular clitics. Copular clitics attached to verbal bases and those attached to nominal bases were treated differently, although they are essentially the same, as we will show below. While the copular clitics on verbal bases were not segmented, the copular clitics on nominal bases were segmented in previous Turkish treebanks. In this section, we will provide our analysis, in which we segment all copular clitics regardless of their bases. The Turkish copular clitic is the grammaticalized version of the verb "be", which can be indicated as i-. This clitic i- has three allomorphs in Turkish: (i) analytic i-, (ii) suffixal -y, and (iii) zero-marked (Ø). The allomorphy of the analytic form is idiosyncratic, meaning the analytic copula form can be used in place of the suffixal copula forms most of the time. The analytic form can surface if the suffixes -di (PST), -se (COND), and -ken (WHEN or WHILE) come atop a verb that already hosts a TAM (Tense/Aspect/Modality) marker. The analytic form can also surface in nominal sentences that are marked for a tense other than the aorist (-Ar/Ir). However, the analytic form cannot surface with the suffix -mIş (PRF), except for its use with the aorist, as in yapar imiş, meaning he or she used to do. Examples 9a and 9b illustrate some examples of the analytic form. When both the base and the copular verb surface as a single syntactic word, indicated with a box in the following examples, either -y (Examples 10a, 10b) or Ø (Examples 11a, 11b) is used. The selection between Ø and -y is governed by the phonological characteristics of the preceding sound; if the preceding segment is a consonant, Ø is used; otherwise, -y is used. What is important for us is that the contribution of these copular clitics is the same for both nominal and verbal bases. In both cases, these copular clitics host the TAM information that cannot be carried by the base (Göksel, 2001). The TAM information itself also does not change according to the category of the stem. Additionally, the stress patterns of the clitics that attach to nominal and verbal bases are identical. Most verbs and common nouns are stressed on the final syllable. When they are marked with a copular clitic, instead of the final syllable, which is the copular clitic, the preceding syllable is stressed (Göksel, 2001). In addition to these characteristics, the copular clitic also shows clitic-like behaviour when it co-occurs with other clitics such as the question clitic -mI. Consider Example 12. When attached, the question clitic comes between the TAM marker and the copula. Another clue for the clitic status of the copula is its interaction with vowel harmony.
When detached, it has its own phonological domain; thus, vowel harmony processes do not percolate from the main verb to the copula, as seen in Example 9a. However, the semantic contributions of TAM markers and their interaction with each other provide a counterpoint to segmenting the copular clitic. At first look, verbs with a copular clitic seem to carry two different pieces of tense information. However, two consecutive TAM markers in Turkish do not imply two tenses. While one of them still provides tense information, the other one implies additional aspect. Consider the verb gelecektim in Example 13 ('I was going to come to the school, but I changed my mind.'). When either suffix (-ecek or -ti) is attached to a verb without any additional TAM marker, it mainly provides the tense information. When they are used together, as in Example 13, the suffix -ti conveys the tense information, and the suffix -ecek provides the prospective aspect information. This aspect of the copular clitic points towards a solution in which verbs with a copular clitic should be analyzed as a single unit. After exchanging ideas on this issue within the UD community and considering the points mentioned in this section, we decided to segment all instances of the copular verb i- as a copula (cop). With this change, we unified the treatment of all clitics that may attach to a root, which include the question particle =mı, focus particles like =da, and copular particles; we thus followed the UD dependency relations more faithfully.

Compound
Another inconsistent annotation in the previous Turkish treebanks concerned compounds and their classification. The UD framework suggests that compound should be tailored to each language with its particular morphosyntax. Mostly in the Turkish PUD, but also in other Turkish UD treebanks, constituents that carry a morphological marker for possessive compounds are annotated as compound, as in Example 14. The name 'possessive compounds' is how the linguistic literature refers to them, but for our purposes we take them as compositional structures and separate them from the UD dependency compound. This means that our criteria for compound-hood are syntactic composition properties. We have modified cases with the morphological marker -(s)I(n) to nmod:poss, which is already a convention in use in the UD framework. Turkish employs different strategies for compounding. These strategies can display differences in their morphological and phonological forms. For our purposes, we divide them into two: (i) compounds with the compound marker -(s)I(n) and (ii) compounds without the compound marker -(s)I(n). Some compound types without the compound marker are given in Example 15. These compounds are formed with different types of lexical inputs and can have varying degrees of morpho-phonological properties, none of which employs a compound marker. We annotated the compounds that do not employ a marker as compound. The important distinction for our purposes is the existence of the compound marker -(s)I(n). This marker is only observed in Noun+Noun compounds, and most of these compounds can be turned into genitive-possessive constructions, as in Example 16 ('the school's building'). We annotated Noun+Noun compounds that employ the compound marker -(s)I(n) as nmod:poss. There are three reasons behind this decision. The first one is that the marker does not survive in possessive constructions; it is replaced by the possessive markers.
If the possessor is 1SG or 2SG, the marker is replaced with the first person singular possessive -(I)m or the second person singular possessive -(I)n, respectively. If the possessor is 3SG, the marker stays the same. The second reason is the plural marking of the compounds. Any plural marking precedes the marker -(s)I(n) as opposed to following it, just like in possessive constructions (Example 17). The third reason is that compounds formed with the marker -(s)I(n) can have their modifier (non-head) be subject to questions, whereas compounds without it cannot (Example 18). Questions are considered to be extractions out of syntactic structures, which cannot target parts of a word form. As a result, (i) the fact that the marker -(s)I(n) does not survive possessive constructions, together with the ability to transition from a compound to a genitive-possessive construction, shows that the marker -(s)I(n) and the possessive markers are in a disjunctive blocking relation. This suggests that they are competing for similar grammatical functions. (ii) The plural marker linearizes before the marker -(s)I(n). If -(s)I(n) were part of the word form, the plural marking should have linearized to the right of it. This shows that the marker -(s)I(n) is not part of the word form. (iii) Parts of the construction formed by -(s)I(n) can be targeted by questions. Question formation only targets syntactic constituents and not parts of word forms. This indicates that structures with -(s)I(n) do not constitute an indivisible word form. All three of these reasons make constructions involving -(s)I(n) more syntactic (compositional) than morphological. This does not unilaterally rule out the constructions with -(s)I(n) as compounds, but within the framework of UD they are better suited to being classified as nmod:poss than compound. There is a robust linguistic discussion about the status of the marker -(s)I(n) as being classified either as a compound marker or as an agreement marker. The word forms produced by it are actually referred to as 'possessive compounds' (Hayashi, 1996; Kunduracı, 2013; Taylan and Öztürk Başaran, 2014; Öztürk and Taylan, 2016), introducing a dilemma even in its own name.

Classifier
The use of the classifier syntactic dependency (clf) was also inconsistent within the already existing Turkish UD treebanks. In the UD guidelines, the use of clf is limited to languages with highly grammaticized classifier systems. The difference between classifier languages and non-classifier languages is depicted with Chinese (classifier) and English (non-classifier). However, this distinction is not always clear-cut in other languages like Turkish (Sag, 2019). According to Göksel and Kerslake (2005), numerals can be followed by certain elements such as the enumerator tane (piece), measurement-denoting words such as dilim (slice) and şişe (bottle), and membership/identity-denoting words like örnek (example) and kopya (copy). They show that even though these elements are optional between a numeral and a noun, in partitive constructions with ablative cases they are obligatorily used. The examples below show that the classifier tane (piece) is optional in sentences like Example 19a. However, when the classifier is in inflected form, deleting it makes the sentence ungrammatical, as in Example 19b. The sentence becomes marginally acceptable when the inflection is concatenated to the numeral, as in Example 19c. Apart from the Turkish PUD Treebank, no previous Turkish treebank has used the clf syntactic dependency.
In the Turkish PUD Treebank, both measure words and enumerators are annotated using the clf dependency. As for the other Turkic treebanks, the measure word bötelke (bottle) in the Kazakh UD Treebank is annotated using clf. On the other hand, in the Uyghur UD Treebank, no clf is used. In addition to the UD treebanks, other recent treebanks, such as Kayadelen et al. (2020), that use a dependency grammar framework in their annotation make use of the classifier dependency relation for both enumerators and measurement-denoting words. In the BOUN Treebank and our re-annotated versions of PUD and IMST-UD, we annotated enumerators like tane (piece) and adet (piece) as classifiers and used the clf dependency relation. A slightly modified example sentence from our treebank can be seen in Example 20. One of the UD framework's core ideas is to create a typologically comparable set of treebanks. In this direction, it is important to reflect the use of classifier words in Turkish, even if they are optional.

Core Arguments
Turkish also poses a problem with regard to the detection of core arguments. This problem stems mainly from two phenomena: core arguments marked with a lexical case, and the dropping of core arguments. Like Czech, Turkish allows its direct object to be marked with oblique cases. In addition to the structural accusative case, Turkish also makes use of the dative, ablative, comitative, and locative on objects, which are the cases that adjuncts can also take. Both the adjunct in Example 22 and the core argument in Example 21 are marked with the same case: COM (comitative). When there is no appropriate context that introduces the object earlier, a COM-marked NP becomes obligatory, as in Example 21. However, Example 22 is completely fine regardless of the context and the existence of the COM-marked NP. This is because the COM-marked NP is a core argument in Example 21, whereas it is an adjunct in Example 22. (Note that in certain environments where there is an immediate follow-up sentence to Example 21, the COM-marked argument can still be omitted, as in (i): kız-ma-z-dı, get.angry-NEG-AOR.NEG-PST, 'Serap would always make fun of her sister and she would never get angry.' We thank the anonymous reviewer for pointing this out.) As can be seen from the examples, Turkish can drop its object without any marking on the verb when it is available in the discourse or when it is not contradictory within a given context. Since it is impossible to drop the new information or correction in the case of Example 21 without a context that introduces the direct object earlier, we conclude that the NP kız kardeşiyle (with her sister) is a core argument. If it were just an adjunct, the phrase would be omittable. The oblique case marking of core arguments, together with the optionality of contextually available core arguments, creates a problem for the annotation process within a framework where the difference between core arguments and non-core arguments is tied to morphologically apparent case marking, as in the UD framework. Recent discussions in the UD framework also acknowledge this problem (Zeman, 2017; Przepiórkowski and Patejuk, 2018). They propose a new dependency relation: obl:arg. In our annotations, we used the obj dependency relation, as in Example 23. The UD guidelines state that even though obj often carries an accusative case, it may surface with different case markers when the verb dictates a different form, in our case lexical cases like COM (Example 21) and ABL (Example 23).
This approach is also utilized within the most recent Turkish treebank, in which the authors did not distinguish between objects with the accusative case and objects with non-accusative cases (Kayadelen et al., 2020). Another core argument specified in the UD guidelines is the iobj argument. In their assessment of Turkic treebanks, Tyers et al. (2017b) suggest using case promotion or demotion in passivization or causativization as a clue for determining argumenthood. When sentences are passivized in Turkish, the structural accusative case on the object is deleted in the transformation, whereas oblique cases such as the ablative case are not deleted. They use this asymmetry to argue for a non-core analysis of oblique case marked objects. In their proposed annotation scheme, only tokens with non-oblique cases should be annotated as a core argument, since only non-oblique cases go through case promotion or demotion. However, as we have previously shown in this section, objects marked with oblique cases behave the same as the objects marked with the accusative case. Turkish can have oblique cases as a marker of objects even though they do not go through case demotion in passive sentences, as in Example 24 ('How can one not know anything about ironing?'). Following the reasons specified in this section, we did not make use of case clues in the annotation of iobj; instead, we utilized the contextual effects described above. Following our annotation process, we should annotate the dative-marked noun bana (to me) using the iobj dependency relation if we cannot omit it when the information is already available in the discourse. Without any existing prior context, one cannot omit the dative-marked noun in sentences like Example 25, where the main predicate is ditransitive. In addition to our treebank, the iobj dependency relation is also used in other Turkish and Turkic treebanks. Prior to our re-annotation, the Turkish PUD Treebank already made use of this dependency relation. With our re-annotation, the IMST-UD Treebank also utilizes the iobj dependency. The iobj relation is also used in a Turkic treebank: the UD Kazakh Treebank (Tyers et al., 2017b; Makazhanov et al., 2015). We believe that the non-optionality of cases like bana (to me) in Example 25 and its already existing use in other Turkish and Turkic treebanks justify our usage as well.

Summary of the linguistic considerations

The points made through the linguistic considerations are based on the idea that a language phenomenon needs to be evaluated with regard to its interactions with other phenomena in the same language. There could be opaque processes which require referring to the derivational history of a construction, such as nominalization in embeddings, argument dropping (subject, object, indirect object), compound-making strategies, or grammatical functions of a clitic. Additionally, a language does not need to employ a structural property uniformly in its grammatical system. Classifiers in Turkish could be an example of this. Example sentences for the UD tagset could already exist in the provided guidelines, but they lack the linguistic diagnostics which are crucial to differentiate between closely related constructions and the mostly opaque processes in a given language. We hope explicitly stating the diagnostics used for an annotation scheme becomes a practice, so that the unification process of the treebanks does not follow from standalone examples but rather from testable predictions.
Annotation Tool

Annotation tools are fundamental to the facilitation of the annotation process of many NLP tasks, including dependency parsing. UD treebanks are re-annotated or annotated from scratch in line with the annotation guidelines of the UD framework (Nivre et al., 2016). There are several annotation tools that are showcased within the UD framework, such as UD Annotatrix (Tyers et al., 2017a) and ConlluEditor (Heinecke, 2019). These tools are mostly based on mouse clicks, and provide a graph view and/or a text view. Morphological features are, in general, not easy to annotate or edit with the available tools. There are also annotation tools that have been developed for annotating Turkish treebanks (Atalay et al., 2003; Yıldız et al., 2016; Eryigit, 2007; Pamay et al., 2015). However, they are not specific to the UD framework. Apart from that, they do not have practical user interfaces for dependency parsing. We present BoAT, a new annotation tool specifically designed for dependency parsing. To the best of our knowledge, it is the first tool that provides a tree view and a table view simultaneously. BoAT enables annotators to use both mouse clicks and keyboard shortcuts. In addition, unlike previous dependency parsing annotation tools, which show morphological features as a whole, in BoAT morphological features are parsed and expanded into multiple columns, as they are one of the most re-annotated fields according to the observations of our annotators. The enhanced presentation of morphological features is beneficial for annotators. Using BoAT, tokenization can be easily changed by splitting or joining tokens. This is a useful property, especially for agglutinative languages, since they have more suffixes and tokenization may differ according to the methods used. The tool itself, however, is not specific to agglutinative languages and can be used for other languages as well. BoAT is designed with the aim of presenting a user-friendly, compact, and practical manual annotation tool that is built upon the preferences of the annotators. It combines useful features from other tools, such as changing the tokenization, using a validation mechanism, and taking notes, with novel features such as combining tree and table views, parsing morphological features, and adding keyboard shortcuts to match the needs of the annotators for the dependency parsing task. While developing BoAT, we received feedback from our annotators at every step of the process. One crucial aspect of annotation is speed. Annotation tools are helpful in this regard, but they are still open to improvement in terms of speed. The existing tools within the UD framework mostly rely on mouse clicks and dragging, and the usage of keyboard shortcuts is very limited. Unlike them, almost every possible action within BoAT can be carried out via both mouse clicks and keyboard shortcuts. We aim to decrease the time-wise and ergonomic load introduced by the use of a mouse and to increase speed accordingly. We also added a note-taking option, inspired by BRAT (Stenetorp et al., 2012). While notes are specific to annotations in BRAT, they are specific to each sentence in our tool. This feature enabled our annotators to communicate better and gave them better reporting power.

Features

BoAT is a desktop annotation tool which is specifically designed for CoNLL-U files. It offers both a tree view and a table view, as shown in Figure 1 for an example sentence.
The upper part of the screen shows the default table view while the lower part shows the tree view. Below we briefly explain the components and some of the properties of the tool.

Tree view: The dependency tree of each sentence is visualized in the form of a graph. Instead of a flat view, a hierarchical tree view is used. If the user hovers the mouse pointer over a token in the tree, the corresponding token in the sentence above the tree is highlighted, which gives the user a linearly readable tree in order to increase readability and clarity. The tree view is based on the hierarchical view feature in the CoNLL-U Viewer offered by the UD framework.

Table view: Each sentence is shown along with its default fields, which are ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. The morphological features denoted by the FEATS field are parsed into specific subfields. These subfields are a subset of the universal and language-specific features in the UD framework. These subfields are optional in the table view; annotators can choose which subfields they want to see. They are stored in the CoNLL-U file in a concatenated manner.

Customizing the table view: Annotators can customize the table view according to their needs by using the checkboxes assigned to the fields and the subfields of the FEATS field shown above the parsed sentence. In this way, the user can organize the table view easily and obtain a clean view by removing the unnecessary fields when annotating. This customization improves readability and, consequently, the speed of the annotation. The example in Figure 1 shows a customized table view.

Actions in the table view: To ease the annotation process, the most frequently used functions are assigned to keyboard shortcuts. Moreover, annotators can jump to any sentence by simply typing the ID of the sentence. The value in a cell is edited by directly typing when the focus is on that cell. If one of the features is edited, the FEATS cell is updated accordingly.

Changing tokenization: One of the biggest challenges in the annotation process is keeping track of the changes in the segment IDs when new segmentations are introduced. In BoAT, new tokens can be added or existing ones can be deleted to overcome tokenization problems generated during the pre-processing of the text. Moreover, annotating multiword expressions often comes at the cost of updating the segment IDs within a sentence in the case of misdetected multiword expressions due to faulty automatic tokenization. Annotators may need an easy way to split a word into two different units. We enabled our annotators to split or join words within our tool by clicking the cells in the first column of the table (marked "+" or "-") or using keyboard shortcuts, which permits a more accurate analysis of multiword expressions.

Validation: Each tree is validated with respect to the field values before saving the sentence. If an error is detected in the annotated sentence, an error message is issued, such as "unknown UPOS value". The error is shown between the table view and the tree view.

Taking notes: With the note feature, the annotator is able to take notes for each sentence, as exemplified on the topmost line in Figure 1. Each note is attached to the corresponding sentence and stored in a different file with the ID of the sentence.

Implementation

BoAT 13 is an open-source desktop application. The software is implemented in Python 3 along with the PySide2 and regex modules.
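As a minimal sketch of the FEATS handling described in the Features section — an illustration of the idea rather than BoAT's actual implementation — expanding a pipe-separated FEATS value into editable subfields and re-serializing it after an edit could look like this in Python:

```python
def parse_feats(feats: str) -> dict:
    """'Case=Loc|Number=Sing' -> {'Case': 'Loc', 'Number': 'Sing'}."""
    if feats == "_":  # CoNLL-U uses '_' for an empty FEATS field
        return {}
    return dict(pair.split("=", 1) for pair in feats.split("|"))

def serialize_feats(features: dict) -> str:
    """Inverse of parse_feats; CoNLL-U expects alphabetically sorted names."""
    if not features:
        return "_"
    return "|".join(f"{k}={v}" for k, v in sorted(features.items()))

feats = parse_feats("Case=Loc|Number=Sing|Person=3")
feats["Case"] = "Dat"          # an edit made in one of the expanded columns
print(serialize_feats(feats))  # Case=Dat|Number=Sing|Person=3
```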
In addition, the CoNLL-U viewer is utilized by adapting part of the UDAPI library. Resources consisting of a data folder, the tree view, and validate.py are adapted from the UD-maintained tools 14 for validation checks. The data folder is used without any changes, while some modifications have been made to validate.py. BoAT is a cross-platform application, since it runs on Linux, OS X, and Windows. The BoAT tool was designed in accordance with the needs of the annotators, and it increases the speed and the consistency of the annotation process on the basis of our annotators' feedback. Currently, BoAT only supports the CoNLL-U format of UD, since it was designed specifically for dependency parsing. In the future, it may be extended to support other formats such as the CoNLL-U Plus format. 15

Experiments

We report the results of our parsing experiments on the BOUN Treebank as well as on its different text types, which will serve as a baseline for future studies. In addition to the brand-new BOUN Treebank, we performed parsing experiments on our re-annotated versions of the IMST-UD (Türk et al., 2019b) and PUD (Türk et al., 2019a) treebanks, 16 in order to observe the effect of using additional training and test data. Most prior studies (Eryigit et al., 2008; Hall et al., 2007; Durgar El-Kahlout et al., 2014; Sulubacak et al., 2016a,b; Sulubacak and Eryigit, 2018) on Turkish dependency parsing evaluate the treebanks they use (mostly versions of the IMST-UD Treebank) using MaltParser. However, the definition of a well-formed dependency tree for MaltParser is different from the conventions of UD, in that the root node may have more than one child in the output of MaltParser. UD defines a dependency tree with exactly one root node, and it is not possible to have MaltParser produce dependency trees that follow the UD convention. For this reason, we use Stanford's neural parser, whose original version (Dozat et al., 2017) achieved the best parsing scores on the IMST-UD Treebank with 69.62 UAS and 62.79 LAS in the CoNLL 2017 Shared Task on Multilingual Dependency Parsing from Raw Text to Universal Dependencies, and whose modified version (Kanerva et al., 2018) achieved one of the best performances on the same treebank with 70.61 UAS and 64.79 LAS in the follow-up task in 2018 (Zeman et al., 2018). It is currently one of the state-of-the-art dependency parsers. This parser uses unidirectional LSTM modules to generate word embeddings and bidirectional LSTM modules to create possible head-dependency relations. It uses ReLU layers and biaffine classifiers to score these relations. For more information, see Dozat et al. (2017). As stated in Section 4, the BOUN Treebank consists of 9,761 sentences from five different text types. These text types contribute almost equally to the total number of sentences. For the parsing experiments, we randomly assigned each section to the training, development, and test sets with 80%, 10%, and 10% proportions, respectively. Table 8 shows the number of sentences in each set of the BOUN Treebank. In order to observe the parsing performance for different types of text, we first evaluated the dependency parser for each section separately. Then, we measured the performance of the parser on parsing the entire BOUN Treebank. As a final set of experiments, we trained the parser on the training sets of the BOUN Treebank and the re-annotated version of the IMST-UD Treebank, separately and together, and tested them on five different settings.
With that set of experiments, we aim to measure the difference in performance between the BOUN Treebank and the IMST-UD Treebank and to observe the effect of increasing the training data size on performance for Turkish dependency parsing. In our experiments, we did not perform pre-processing actions such as removing from the training or test sets the sentences that include non-projective 17 dependencies. All sentences in the treebanks were included in the experiments. As for the pre-trained word vectors used by the dependency parser, we used the Turkish word vectors supplied by the CoNLL-17 organization. For the evaluation of the dependency parser, we used the unlabeled attachment score (UAS) and labeled attachment score (LAS) metrics. UAS is measured as the percentage of words that are attached to the correct head, and LAS is defined as the percentage of words that are attached to the correct head with the correct dependency type. In the experiments, we used gold POS tags instead of automatic predictions of them.

Footnote 17: In a non-projective sentence, the dependency edges cannot be drawn in the plane above the sentence without any two edges crossing each other, as in (iii). However, in a projective sentence, the dependency edges can be drawn in this manner with no edges crossing, as in (ii) (Nivre, 2009).

Parsing Results on the BOUN Treebank

Table 9 shows the parsing results on the test sets for each section in the BOUN Treebank and for the BOUN Treebank as a whole, in terms of the labeled and unlabeled attachment scores. In these experiments, the parser was trained using the entire training set of the BOUN Treebank. We observed that the highest and lowest LAS were obtained on the Broadsheet National Newspapers section and the Essays section of the BOUN Treebank, respectively. The parser achieved more or less similar performance on the remaining three sections. To understand the possible reasons behind the performance differences between the parsing scores of the five sections of the BOUN Treebank, we compared the sections with respect to the average token count and the average dependency arc length in a sentence. Figure 2 shows these statistics for the five sections of the BOUN Treebank. We observed that both the average token count and the average dependency arc length metrics are the highest in the Broadsheet National Newspapers section. Note that the average token count metric, which shows the length of a sentence, and the average dependency arc length metric, which depicts the distance between the nodes of the dependency relations in a sentence, can sometimes correlate, although not all long sentences include long-range dependencies. We anticipate that the higher these two metrics are in a sentence, the harder the task of constructing the dependency tree of that sentence will be. In Figure 2, we observe that all of the sections except the Broadsheet National Newspapers conform with this hypothesis. However, the Broadsheet National Newspapers section, which has the highest values of these metrics, holds the best parsing performance in terms of the UAS and LAS metrics. We believe that these high scores in this section are due to the lack of interpersonal differences in writing in journalese and the editorial process behind the journals and magazines.
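Both attachment metrics, and the projectivity notion from footnote 17, are simple enough to state in code. The sketch below uses our own function names and input conventions and is only an illustration, not the official shared-task evaluation script (which additionally handles tokenization mismatches between system output and gold data):

```python
def uas_las(gold, predicted):
    """gold, predicted: equal-length lists of (head, deprel) pairs, one per word.

    UAS = fraction of words attached to the correct head;
    LAS = fraction attached to the correct head with the correct relation.
    """
    assert len(gold) == len(predicted)
    n = len(gold)
    correct_head = sum(1 for (gh, _), (ph, _) in zip(gold, predicted) if gh == ph)
    correct_both = sum(1 for g, p in zip(gold, predicted) if g == p)
    return correct_head / n, correct_both / n

def is_projective(heads):
    """heads: 1-based head index for each word (0 = root).

    A tree is projective iff no two dependency edges cross when drawn
    in the plane above the sentence (cf. footnote 17).
    """
    edges = [(min(i, h), max(i, h)) for i, h in enumerate(heads, start=1) if h != 0]
    return not any(a < c < b < d for a, b in edges for c, d in edges)

gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "obl")]
print(uas_las(gold, pred))       # (1.0, 0.666...) -> a wrong label costs LAS only
print(is_projective([2, 0, 2]))  # True
```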
Parsing Results on Combinations of Treebanks

In Table 10, we present the success rates of the parser trained and tested on different combinations of the three Turkish treebanks: the BOUN Treebank and the re-annotated versions of the IMST-UD and Turkish PUD treebanks. We chose to include only the two treebanks that we re-annotated because we wanted to measure the effect of our unification efforts for Turkish treebanking on the parsing accuracy. The parser is trained separately on the training sets of the IMST-UD and BOUN treebanks, and then by combining these two training sets (denoted as BOUN+IMST-UD in the first column of Table 10). Originally created for evaluation purposes, the PUD Treebank is not used in the training phase of these experiments due to its smaller size compared to the other two treebanks; instead, it is used as an additional test set in the evaluations. Five different test sets are provided in the third column of Table 10: the test set of the BOUN Treebank (BOUN), the test set of the IMST-UD Treebank (IMST-UD), the Turkish PUD Treebank (PUD), the combined test sets of the BOUN and IMST-UD treebanks (BOUN+IMST-UD), and the combined test sets of the BOUN and IMST-UD treebanks and the PUD Treebank (BOUN+IMST-UD+PUD). Each of the trained models is tested on these five test sets. We observe the following:

• The parser model trained on the BOUN Treebank outperforms the one trained on IMST-UD by at least 10% in LAS on the first and third test sets (and ~5% on the fourth and fifth sets). Not surprisingly, the parser trained on IMST-UD performs better on its own test set (the second test set) than the parser model trained on the BOUN Treebank. However, the performance difference here is smaller than the one observed when these two models are tested on the BOUN Treebank's test set. To make a comparison, the parser trained on BOUN outperforms the parser trained on IMST-UD by ~8% in UAS and by more than 10% in LAS when tested on the BOUN test set. On the other hand, for the case of the IMST-UD test set, the parser trained on IMST-UD outperforms the parser trained on BOUN by only ~2% in UAS and LAS. Having less training data and a more inconsistent annotation history might be the cause of the inferior performance of the IMST-UD Treebank when compared to the BOUN Treebank.

• Joining the training sets of the BOUN and IMST-UD treebanks improves parsing performance in terms of the attachment scores. The increase in the training size resulted in better parsing scores, contributing to the discussion on the correlation between the size of the corpus and the success rates in parsing experiments (Foth et al., 2014; Ballesteros et al., 2012).

• The worst results by all the models were obtained on the PUD Treebank used as a test set. The different nature of the PUD Treebank compared to the other Turkish treebanks may have an effect on this performance drop. This treebank includes sentences translated from different languages by professional translators and hence the sentences have different structures than the sentences of the other two treebanks. This difference in structures is a result of the different environments in which these texts were produced, namely a living corpus (BOUN and IMST-UD) and well-edited translations (PUD).
In order to investigate the differences in the percentages of certain dependency relations between the treebanks used in the experiments, we present the distribution of the dependency relation types across the previous 18 and re-annotated versions of the IMST-UD and PUD treebanks, as well as the BOUN Treebank, in Table 11. When comparing the BOUN Treebank and the re-annotated version of the IMST-UD Treebank, we observed that the percentages of the case, compound, and nmod types were lower by more than 1% in the BOUN Treebank. The percentage of the root type was also lower in the BOUN Treebank by almost 2%, which indicates that the average token count in sentences is higher in this treebank relative to the re-annotated version of the IMST-UD Treebank. However, the percentage of the nmod:poss type was higher by more than 2% and that of the obl type was higher by more than 3% in the BOUN Treebank. We believe that these differences are due to the text types we utilized. Unlike IMST-UD, the BOUN Treebank includes the essay and autobiography text types. These types make frequent use of postpositional phrases such as bana göre (in my opinion) or 1920'ye kadar (until 1920), which are encoded with case dependency relations. Additionally, the language is less formal compared to the non-fiction and news text types, which are the main registers that the IMST-UD Treebank incorporates, as indicated in the UD Project. This formality difference explains the lower usage of the compound relation type. When comparing the BOUN Treebank with the re-annotated version of the Turkish PUD Treebank, we observed that the highest percentage difference was for the obl type, which is higher in the BOUN Treebank by more than 7%. This difference is again a result of using different text types. The Turkish PUD Treebank consists of Wikipedia articles, in which adjuncts are expected to be used less than in the text types we utilized. The other relation types whose percentages are higher in BOUN by more than 1% were the root type, which indicates that the average token count is lower in the BOUN Treebank, and the conj type, indicating that the BOUN Treebank has more conjunct relations, which sometimes increased the complexity of a sentence in terms of dependency parsing. In the comparison of the previous and re-annotated versions of the IMST-UD Treebank with respect to the distribution of dependency relation types, we see that the percentages of the advmod, cc, ccomp, and nsubj types increased by approximately 1% in the re-annotated version. In contrast, the percentage of nmod is reduced by more than 3% in the re-annotated version. The reason behind this decrease lies in the fact that in the previous version of the treebank, nominalized verbs which behave like converbs (Göksel and Kerslake, 2005) were considered nominal modifiers. However, these nominalized verbs actually construct embedded clauses and are therefore treated as clausal modifiers in the re-annotated treebank. In addition, the obl percentage decreased by more than 1% in the re-annotated version. The vocative type no longer exists in the re-annotated version, and the newly introduced types that are absent in the previous version are the advcl, advcl:cond, aux, cc:preconj, clf, dislocated, goeswith, iobj, orphan, and xcomp relation labels. When we analyze the differences between the previous and re-annotated versions of the PUD Treebank, we observe that the biggest difference is in the compound relation, with a 10% reduction.
On the other hand, the biggest increase in the percentage of a relation is in the nmod:poss relation, with a more than 6% increase in the re-annotated version. This is because in the previous annotation of the PUD Treebank, some constructions that involve genitive-possessive suffixes were marked with the compound dependency label. Such relations have been corrected to nmod:poss. Other noteworthy differences are in the fixed and xcomp relations, with a more than 1% decrease, and in the flat, nsubj, and obl relations, with a more than 1% increase in the re-annotated treebank.

Conclusion

In this paper, we presented the largest and most comprehensive Turkish treebank, with 9,761 sentences: the BOUN Treebank. In the treebank, we encoded the surface forms of the sentences and the universal part of speech tags, lemmas, and morphological features for each segment, as well as the syntactic relations between these segments. We explained our annotation methodology in detail. We also gave an overview of other Turkish treebanks. Moreover, we explained our linguistic decisions and annotation scheme, which are based on the UD framework. We provided examples for the challenging issues that are present in the BOUN Treebank as well as in the other treebanks that we re-annotated. Our treebank, with a history of the changes we applied, and our annotation guidelines are provided online. In addition to these contributions, we provided a description of our annotation tool, BoAT. We explained our motivation for such an initiative in detail. We also provide the tool and its documentation online. Lastly, we evaluated our new treebank on the task of dependency parsing. We reported UAS and LAS F1-scores with regard to specific text types and treebanks. We also showcased the results of the experiments where our new treebank was used together with the re-annotated versions of the IMST-UD and PUD treebanks. All the tools and materials presented in this paper are available on our webpage: https://tabilab.cmpe.boun.edu.tr/boun-pars.
Career choices, career related stress and need for career counselling and guidance among young secondary school students: a cross-sectional study

Background: Career is one of the factors that determine the future of an individual. The vocational dimension is an important one which may influence the health of a person. The objective of this study was to find out the career choices among secondary school students in district Baramulla of the Kashmir valley and to find out the career related self-reported stress among the students. Methods: It was a cross-sectional study carried out in 2018 for a period of one month. A self-administered pre-designed questionnaire was distributed among students of class 9 and 10 availing tuitions from a private tuition centre. The information was collected regarding the socio-demographic characteristics, career choices, other information related to career, and self-reported career related stress. The data was entered in Microsoft Excel 2010 and analysed using SPSS version 23. Results: A total of 100 students participated. The mean age of the students was 15.19±0.84 years, with 57% females. Ninety-seven percent of the students had been thinking about their career for quite some time. Most of the students wanted to pursue MBBS (52%) followed by engineering (14%). Seventy-four percent of the students were of the opinion that career counselling was necessary. Nineteen percent of students were stressed about their career. Conclusions: Most of the students had already decided on their career but many were stressed and unsure about what path to choose. About 74% of the students were of the opinion that there should be career counselling and guidance available for the students so that they are able to take the right decisions at the right time.

INTRODUCTION

Career is one of the factors that determine the future of an individual. The vocational dimension is an important one which may influence the health of a person. It is the culmination of one's lifetime efforts. 1 It determines some of the most basic things about the individual, like the level of income, type of work, etc., and thus leaves a mark on one's personality and outlook. One wrong decision may have devastating consequences on a person's life and in turn affect his family. This individual decision also affects the economic prosperity of a nation in the long run, as people who are misfits in their workplace are mostly less productive. 2 With the advancement of information technology and job competition, career choice has become complex. In olden days, a career or profession continued across generations. 3 College students choose their job fields for many reasons. The factors that determine the career choice may be family, passion, salary, race, gender and past experiences. Another thing influencing the decision of career choice is the people who serve as role models for a person. 4 Kerka says career choice may be influenced by various factors, which may include personality, self-concept, interests, cultural identity, globalization, socialization, social support, role models, and available resources like information and finances. 5 Bandura et al stated that the factors may include the context in which a person lives, his/her personal aptitudes, educational attainment and social contacts. 6 Perrone et al believed that anticipated earnings are the most important influential factors for men, whereas for females prestigious positions were the most important factor. 7
Rodrigo et al were of the opinion that females were mostly influenced by the desire to work for other people, whereas males were influenced more by monetary gains in their career choice. 8 According to Eremie et al, career is the totality of experience through which a person learns about and prepares to get engaged in work as a part of his way of living. 9 Some students make up their mind early and know exactly what they have to choose, while others find themselves switching majors due to the number of courses available. 10 A student in Kashmir has to choose a subject for his further studies in class 11th, at a young age of 16 or 17 years. At this point, many do not even take their career seriously and many may be confused as to what they should opt for. Since this is the deciding time for the students, and there being almost no concept of career counselling or guidance in Kashmir, students may get stressed out at this stage. The aim of the study was to find out the career choices among students in Kashmir and the career related stress among students.

Objectives

The objective of this study was (a) to find out the career choices among secondary school students in district Baramulla, Kashmir; and (b) to find out the career related self-reported stress among the students.

Study design

The study design was a cross-sectional study.

Study period

The study was carried out in 2018 for a period of one month, from 10th February 2018 to 10th March 2018.

Study area

The study was carried out at a private tuition centre at Baramulla.

Data collection

A self-administered pre-designed questionnaire was distributed among the students of class 9th and 10th who were availing tuitions from a private tuition centre in Baramulla. Permission was asked from the head of the tuition centre and informed consent was also obtained from the participating students. The students were asked to return the filled questionnaire within a week. The information was collected regarding the sociodemographic characteristics, career choices and other information related to career, and self-reported career related stress. All those students who gave consent to participate were taken into the study, while the rest were excluded. Ethical approval was sought from the institutional ethical committee of GMC Baramulla.

Statistical analysis

The data was entered in Microsoft Excel 2010 and analysed using SPSS version 23. The categorical data was summarised as frequencies and percentages, while the continuous data was summarised as mean and standard deviation.

RESULTS

In our study, a total of 100 students participated. The sociodemographic characteristics of the study participants are summarised in Table 1. Table 1 shows that the mean age of the students was 15.19±0.84 years. The minimum age was 13 years and the maximum was 17 years. Participation of females (57%) was more than that of males. Most of the students were studying in the 10th standard. Table 2 summarises the responses to the career related questions that were asked in the questionnaire. Table 2 shows that 97% of the students had been thinking about their career for quite some time. Thirty-three percent of the students had thought about their career in the 8th standard, whereas 15% had started thinking about it in class 9th and the same percentage in class 10th. Most of the students wanted to pursue MBBS (52%) followed by engineering (14%).
Ninety-two percent of the students said that the career that they wanted to pursue was their own choice, and 46% of the students had not thought of an alternative if they could not succeed in achieving their desired career. Seventy-four percent of the students were of the opinion that there should be career counselling that could help them decide in a better way. Nineteen percent of the students were stressed about their career. Seventy-four percent of the students were of the opinion that career counselling was necessary. Table 3 shows the relationship of career related stress with the age and gender of the students. The mean age of the students who were stressed was lower than that of those who were not stressed (p=0.003). There was no relationship of stress about career with gender (Table 3).

DISCUSSION

It has been wisely said that no other choice (other than our spouse) that we make influences each one of us at every stage of our life, our families, children and status as much as our career choice does. 11 Our study is the first of its kind to explore the aspect of career related stress in young students and the need for career counselling. In our study, we reached the students studying in class 9th and 10th because in these classes a student in Kashmir usually has to decide on the subjects that he wants to choose for himself in the coming years. Based on the subjects that a student chooses in the 11th standard, the career of a student is usually decided. In our study, the mean age of the students was 15.19±0.84 years, which is the usual age in class 9th and 10th. At such a young age, the students are usually confused as to what they should choose and where a path leads to. 10 Most of the students (97%) had thought about their career, which is quite appreciable. But what had they decided? Would this career really be suitable for the student? Did he really know the pros and cons? Was it really his choice? These are some very important aspects that will determine the satisfaction of the student in his life ahead. A study conducted by Jonas Masdonati found that career counselling, and therefore choosing the right career, was associated with life satisfaction. 12 As we see, about 52% of the students wanted to be a doctor and pursue MBBS, followed by engineering (14%), which reflects that it might be a societal trend or peer pressure or professional prestige that is being followed, because there must be a reason for most of the students going in one direction. Professional prestige has been found to be an important factor for young people in deciding their career. 13 What is important here is whether all the students who want to pursue such tough courses as MBBS and engineering can get what they aspire for, which depends on the working ability, intellect and mental strength of a person. Not achieving the goal may cause stress in some students unless they have thought of some other options. In our study, 46% of the students had not thought of an alternative, which means there is a pressure of achieving the goal. Mirvis et al say that, following globalization, there is a growing need for career mobility and flexibility. 14 Therefore, this issue needs to be addressed through career counselling in students at a very young age so that they see their future in a broader sense while still being goal oriented. Our study shows that 19% of the students were stressed about their career, which is a substantial percentage. Stress about the career was more common in younger students.
There was no relationship of gender with stress about the career. Kunnen et al reported that career choice guidance significantly decreased self-reported psychological problems among students. The career guidance intervention was equally effective for all groups of participants, regardless of their level of psychological problems before the intervention. 15 Therefore, here also, career guidance is needed and essential for young Kashmiri students. Seventy-four percent of the students were of the opinion that there should be career counselling that could help them decide in a better way. It has been suggested that career counselling may have direct and indirect effects on the well-being of the clients and that it resolves personal or psychological difficulties. 16 It has also been suggested by Feldman et al that making an adequate career choice, and therefore following positive career pathways, leads to personal satisfaction and also social integration for an individual. 17 Besides, career counselling has positive effects on identity development and increases commitment strength in vocational and personal domains. 18

Limitations

The sample size of the study could have been larger. However, the study was conducted among young students, who showed a high non-response rate of around 30%. Nevertheless, the study gives an initial idea of the problem and further studies are recommended. Reasons for a particular career choice were not explored; the option was kept initially, but it was missed by most of the students because of the descriptive nature of the question. For this, qualitative in-depth interviews are the best option to know the problems of the students. Also, the questionnaire that was used was not validated, as it was intended to measure just the descriptive variables and the self-perceived stress of the students.

CONCLUSION

In our study, most of the students had planned their career, but many of them were confused and stressed about it. About 74% of the students were of the opinion that there should be career counselling and guidance available for the students so that they can take the right decisions at the right time. Career counselling through professional career counsellors must be made available to students to decrease career related stress and for a better future of the students and the society as a whole.

Recommendations

As found in the study, most of the students were of the opinion that career counselling is required for them to decide better for their career. Therefore, we recommend that there should be a career counselling centre where specialists in this field can guide the students in their career choice. It should be made widely available to the students, and at a young age. Awareness should be generated among students to seek help through counselling in making the decision for their career.
Thalamic abscess in a patient with hereditary hemorrhagic telangiectasia successfully treated with an empiric antibiotic regime: case report and review of the literature

Background: Hereditary hemorrhagic telangiectasia (HHT) is a rare autosomal dominant disease associated with neurological complications, including cerebral abscesses (CA). They tend to be unique, supratentorial and lobar. While surgical intervention is a rule of thumb when treating and diagnosing the etiology of these lesions, this is not always possible due to dangerous or inaccessible locations. We report the case of a patient solely treated with empiric antibiotics, without stereotaxic intervention, and with satisfactory results.

Case presentation: We present the case of a 21-year-old patient with a right thalamic abscess due to HHT and pulmonary arteriovenous malformations, previously embolized, treated solely with antibiotics. At first, we contemplated the possibility of a stereotaxic biopsy, but the high-risk location and the fact that our patient had received a previous full course of antibiotic treatment (in another center) made us discard this intervention because of the low diagnostic yield. We started an empiric antibiotic regime. We followed very closely the clinical and radiological evolution over the next weeks, adjusting our antibiotic treatment when necessary. The results were favorable from both the radiological and clinical aspects, and 6 months after the diagnosis the images showed almost complete disappearance of the lesion.

Conclusion: A carefully tailored antibiotic-only regime, with vigilance for its adverse effects and close radiological follow-up, is a good treatment approach when surgery is not an option.

Background

Osler-Weber-Rendu syndrome, or hereditary hemorrhagic telangiectasia (HHT), is a rare entity consisting of angiomata of the skin, mucous membranes and viscera. It is an autosomal dominant disorder with high penetrance [1,2]. From the genetic point of view, HHT is a heterogeneous disorder. At least three genes have been identified as culprits: endoglin, activin A receptor type II-like 1, and SMA- and MAD-related protein 4 [3]. Common features are diverse types of bleeding originating from the angiomas, including epistaxis, hemoptysis, hematuria and melaena. The diagnosis is based on the Curacao criteria [4]. Mortality and morbidity are related to central nervous system (CNS) complications, like cerebral abscess (CA), ischemic stroke, transient cerebral ischemic attack, and cerebral hemorrhage due to arterio-venous malformations. In the presence of pulmonary arterio-venous malformations (P-AVM), there is a well-established relationship with CA [5]. When a CA is diagnosed, there is an accepted course of action: surgery, diagnosis and antibiotic treatment. Reports of cases solely treated with antibiotics, without surgery, are scarce in the literature, and our case is a good example of how to achieve satisfactory results by tailoring a personalized empiric antibiotic treatment when surgery is not feasible.

Case presentation

A 21-year-old man, with a previous diagnosis of HHT and embolization of multiple P-AVM in the past, was admitted to another hospital, while on holidays, with fever, headache and malaise. A non-enhanced computed tomography (CT) of the head showed no significant findings. He was diagnosed with typhoid fever based on clinical features and serology. Consequently, he was discharged with oral antibiotic therapy consisting of ciprofloxacin 500 mg bid and cotrimoxazole 800/160 mg bid.
Two weeks later, he came to our emergency department because of the persistent fever and headache. At our center, the neurologic, cardiac and pulmonary examinations were normal. Routine hematology showed a white cell count of 17.6 × 10³/ml leucocytes (4.0-11.5) and 14.33 × 10³/ml neutrophils (1.5-7.5). The C-reactive protein was 2.00 mg/L (0.1-10) and the arterial blood gases were normal. A chest radiograph appeared to be normal other than visualization of previous embolization material (Fig. 1). A contrast-enhanced CT scan of the head revealed a mass in the right thalamic-capsular region, compatible with an abscess. An MRI, performed later, confirmed the findings (Fig. 2), showing the typical ring-like contrast enhancement. Further investigations were performed. A transesophageal echocardiogram was normal. A CT of his lungs confirmed the presence of embolization material inside the lumen of the pulmonary vessels feeding the malformations. After reviewing the images with our radiology team, they concluded that the lesions had been successfully treated. Pulmonary angiography was not performed in light of these results and because our patient was being followed for that pathology in another center.

Treatment

After carefully evaluating the images and discussing treatment options with our patient, we decided to start with intravenous empiric medical treatment, due to the high risk of a surgical intervention. We contemplated the possibility of a stereotaxic biopsy, but considering that our patient had received a previous full course of antibiotic treatment, we discarded this intervention because of the low diagnostic yield. Thus, treatment was started with intravenous linezolid 600 mg tid, plus ceftazidime 2 g tid and metronidazole 500 mg qid. A course of corticosteroids was also started. After 2 weeks, a new CT evidenced a favorable radiological evolution with reduction of the lesion. He was afebrile, the headache had disappeared, and his general condition had dramatically improved. We switched linezolid and metronidazole to an oral regimen. After 1 week he remained asymptomatic and we decided to rotate the antibiotic regimen to ceftriaxone 2 g bid and ciprofloxacin 750 mg bid, maintaining the linezolid and metronidazole, due to its better tolerance profile and with a view towards a possible discharge. By that time, he was almost finishing his waning course of corticosteroids. We performed a new MRI control, showing good radiological results (Fig. 3a). Three days after, he presented headache of moderate intensity and fever. A new CT scan showed an increase in size of the thalamic abscess, which we confirmed with a new MRI (Fig. 3b). We attributed this evolution to the change of antibiotics. Our patient remained neurologically stable, which is why we decided to reintroduce ceftazidime 2 g tid and re-start a new course of corticosteroids, discarding a surgical intervention. He started to improve again. A new MRI performed 10 days after this regimen demonstrated a reduction in the size of the lesion (Fig. 3c). On day 21 (Table 1), a hematologic control showed an increase in hepatic enzymes (AST: 192 U/L (6.0-40), ALT: 649 U/L (6.0-40), GGT: 666 U/L (8.0-61)), together with gastrointestinal discomfort and hyporexia; we performed an abdominal echography and found signs of hepatopathy. Then again, due to the toxicity of ceftazidime and metronidazole, we changed to a new regimen consisting of linezolid and meropenem 1 g tid. Another MRI showed further shrinking of the lesion (Fig. 3d).
He remained stable, with no neurologic deficit and a clear improvement of his hepatic profile. Because of the prolonged therapy with linezolid, he presented leucopenia: 2.8 × 10³/ml leucocytes (4.0-11.5). After a final antibiotic rotation to oral cotrimoxazole and moxifloxacin, he was discharged with radiological stability, clinical improvement and corrected hematologic disturbances.

Discussion and conclusion

Almost 50% of patients with P-AVM have a family history of HHT, and 10% of members of HHT families have a P-AVM [6]. As estimated by Roman [7], the frequency of a brain abscess in a patient with HHT and a P-AVM is 5%; this is about 10³ times the risk of developing a CNS infection in the general population [6]. Although the pathogenesis is not well understood, it is believed to result from right-to-left pulmonary shunts and paradoxical embolization. Two main mechanisms have been proposed: 1) septic microemboli from the digestive tube that can reach the CNS, favored by polycythemia and hypoxic conditions that decrease the resistance of cerebral tissue to bacterial invasion, and 2) secondary infection of previous brain microinfarctions during transient bacteremia [8-11]. As a rule of thumb, the first diagnosis to be considered in a patient with a positive history of P-AVM and HHT and a cerebral mass should be a CA. This was the case of our patient. On the other hand, one should think about P-AVM when a patient has a cerebral abscess and no previous history suggesting any infectious origin [12,13]. Actually, many patients with P-AVM are asymptomatic before presentation with neurologic complications such as brain abscess or stroke [14]. These CAs, when secondary to HHT, are generally supratentorial (frontal lobe in 40% of cases), lobar and unique [9,15]. Our patient had multiple satellite lesions around a main one in the thalamic region (deep basal ganglia). This presentation is rare, estimated at around 4% of all CAs [15]. The gold standard for P-AVM is digital subtraction angiography. It allows diagnosing and treating by embolizing the main vessel feeding the malformation. However, appropriate management does not necessarily exclude the possibility of CA recurrence. In the series by Mathis et al., 15.4% of patients had recurrence even when they were adequately treated [15]. The recurrence in other locations, different from the first presentation, reveals the crucial role of P-AVM. The failure rate of treatment after a successful embolization of the fistula is considered low at 5 years, as reported by White et al. [16]. The treatment of every CA should be multidisciplinary. It must encompass a medical and a surgical approach. General recommendations are: 1) surgical drainage if the lesion is >2.5 cm in diameter; 2) CT or MRI imaging every 15-20 days; and 3) 6-8 weeks of intravenous antibiotic treatment [9]. The concept of treating only with antibiotics when the lesion is not amenable to surgery is not far-fetched; however, literature describing this type of approach is scarce. This is why we believe this case can highlight the importance of taking this kind of approach into account. Non-operative treatment of CA has been successfully reported in some cases, but this option should be reserved for poor surgical candidates or small lesions in inaccessible areas [9,17]. In the series reviewed by Sell et al. [9], one patient had no biopsy and survived with medical treatment only. Nonetheless, stereotactic drainage is usually considered the treatment of choice [18].
In our case, our patient had a thalamic CA, from our point of view not amenable to surgery, and he had previously received a course of antibiotics at another center. We considered a surgical intervention not recommended because of the high risks and the low diagnostic yield after antibiotic treatment. Thus, we decided to manage this CA only with antibiotics. We agree there is a limitation when using antibiotics if no bacterial agent has been previously identified, but the vast majority of microorganisms found in CA related to HHT involve anaerobic or facultative anaerobic bacteria. Streptococcus is the most common organism [6]. Staphylococcal infection, although caused by a common organism in extra-cerebral infections in this group, is somewhat extraordinary in the CNS, being more common (up to 30%, and always associated with endocarditis) in patients with CA and no HHT. Moreover, CAs related to HHT tend to be polymicrobial (2-3 bacteria), which is a striking difference when compared to non-HHT related CAs [15]. This highlights the utmost importance of carefully selecting and tailoring the empiric antibiotic therapy when there is no option to obtain samples for cultures. Mortality, historically considered high in the pre-antibiotic era, has improved considerably, in part due to advancements in imaging modalities, less invasive surgical techniques, and a broader antibiotic spectrum and efficacy. As of 6 months after the initial diagnosis, our patient is asymptomatic and his lesion has almost completely disappeared, as demonstrated in the radiological follow-up (Fig. 3 e, f, g and h). Proper knowledge of the relationship between CA and HHT is vital to raise the level of suspicion, especially in patients with no underlying cause. Previous detection and embolization of P-AVM reduces the recurrence of CA but does not exclude it completely, as was the case with our patient. Finally, our case highlights that carefully selecting the antibiotic regime, vigilance for adverse effects, and close radiological follow-up are of utmost importance when surgery is not an option.
Male occult triple-negative breast cancer with dermatomyositis: a case report and review of the literature

Occult breast cancer is defined by the presence of axillary metastases without an identifiable primary breast tumor. Here, we report a rare case of a male occult breast cancer with dermatomyositis. We performed a modified radical mastectomy consisting of whole breast mastectomy and axillary lymph node dissection. Immunohistochemistry and fluorescent in situ hybridization analyses demonstrated an adenocarcinoma likely of breast origin, which was an occult triple-negative breast cancer. Interestingly, the patient's previously noted periorbital dermatomyositis resolved promptly following surgical excision.

Introduction

Occult breast cancer (OBC) is defined by the presence of axillary metastases without an identifiable breast tumor. It has been reported that the hormone receptor (HR)-positive rate of male breast cancer (MBC) is higher than that of female breast carcinomas. We encountered a unique case of OBC, the immunohistochemistry (IHC) and fluorescent in situ hybridization (FISH) studies of which indicated a triple-negative breast carcinoma (TNBC). Interestingly, the patient developed periorbital erythematous papules 3 months before the diagnosis of MBC. This was suspected to be dermatomyositis (DM), which has not been reported in male OBC patients previously. The purpose of this study is to describe and discuss the diagnosis, the clinicopathologic characteristics, and the treatment of this rare subtype of breast carcinoma.

Case report

In May 2015, an 84-year-old male patient, who denied any history of smoking or alcohol consumption and had no family history of breast cancer, discovered a palpable nodule in his right axilla. An ultrasound examination confirmed an irregular hypoechoic solid mass in the right axillary cavity measuring 3.9 cm in the longest diameter (Figure 1). The color Doppler signal demonstrated internal vascularity with an absent fatty hilum. A core needle biopsy revealed a poorly differentiated adenocarcinoma (Figure 2), which was likely of breast origin. Three months before presentation to our department, the patient had developed periorbital erythematous papules, distributed over the lower eyelids, measuring 2.0×1.5 cm (Figure 3), which were diagnosed as DM by the rheumatology department. Over the subsequent 3 months, he had also noted progressive muscle weakness.
Based on these findings, he was diagnosed with right axillary metastatic TNBC presumably from an OBC. According to American Joint Committee on Cancer Staging, the stage of cancer classification was II stage (T 0 N 1 M 0 ). The patient was submitted to adjuvant chemotherapy with 4 cycles of TC (Paclitaxel: 270 mg, Cyclophosphamide: 0.9 g) and a rest of 21 days. Following mastectomy, he experienced remarkable improvement of the dermal erythematous papules ( Figure 5) and gradual recovery of muscle strength with normalization of creatinine kinase blood levels without glucocorticoid therapy. Over .2 years of follow-up, the patient remained in good condition, without cutaneous or muscle DM recurrence or recurrence of breast cancer or other lesions. Discussion OBC is defined by the presence of axillary metastases without an identifiable breast tumor. It accounts for 0.3%-1% of all newly diagnosed malignant diseases of the breast. 1 A man's lifetime risk of developing breast cancer is ~0.7%, but it has an increase in incidence in the seventh decade of life. 2,3 Because of the rarity of OBC in men, the diagnosis and treatment can be challenging. Most of the studies available in the literature have demonstrated that there were several reasons for palpable axillary masses, most commonly being Ultrasound of the right axillary mass. Notes: a low echo mass of 3.9×2.9 cm can be seen in the right axillary mammary gland. The boundary is still clear and the shape is irregular. A strong blood flow signal can be detected within it. Figure 2 Image of core needle biopsy histologic diagnosis using hematoxylin and eosin staining (original ×100). Note: Metastasis of poorly differentiated carcinoma can be seen in the resected right axillary lymph node tissue. 5461 Male occult TNBC with dermatomyositis metastatic lymph nodes associated with breast cancer. 4 When an axillary mass does occur, the most widely accepted method is to identify whether physical examination and imaging studies identify benign or malignant features. The next question is to confirm the primary source of disease. Data can also be obtained for immunocytochemistry through a core needle biopsy of the metastasis. In this case, immunohistochemical and FISH studies revealed ER (−), PR (−), HER2 (−), GCDFP-15 (+), AE1/AE3 (+), CK7 (+), CK20 (+), and PSA (−). Thus, we diagnosed the axillary tumor as primary breast cancer with axillary lymph node involvement. Based on immunohistochemical and FISH analyses of ER, PR, Ki-67, and HER2, a simplified classification was adopted to identify different subtypes of breast cancer. In the previous literature, it has been reported that most primary lesions of OBC exhibit a significantly higher positive rate for hormone receptors than that of female breast carcinomas. 5 Our case was even more unusual being TNBC. TNBC, which is a special subtype of breast cancer, has been demonstrated to have a significantly higher risk of recurrence and death compared with luminal subtypes. 6 Numerous studies have addressed the classification of TNBC to assist clinical management. Recently, Zhang et al described subnetwork biomarkers of low oncogenic GTPase activity, low ubiquitin/proteasome degradation, effective protection from oxidative damage, and tightly immune response to be linked with better prognosis through integrative analysis of cancer genomics data and protein interactome data. 
7 However, clinical data on TNBC in male patients are limited, and it remains unclear whether the major prognostic factors in MBC are the same as those in women. An abstract presented by Forester at the 45th Annual Meeting of the American Society of Clinical Oncology reported that male TNBC had a significantly worse prognosis than female TNBC or than other subtypes.8

Principles of treatment for male occult TNBC are similar to those for female breast cancer. Adjuvant chemotherapy is widely used in breast cancer, and TNBC is more chemosensitive than the HR-positive/HER2-negative phenotype.9 Compared with stage- and subtype-matched patients, OBC patients showed similar outcomes, although there was a trend toward a higher risk of recurrence and mortality in OBC patients with the triple-negative subtype.10

DM is an idiopathic inflammatory myopathy associated with proximal muscle weakness and cutaneous findings such as heliotrope rashes, Gottron's papules, and periungual telangiectasis.11 Laboratory studies show creatine kinase elevation, electrophysiologic abnormalities, and inflammatory lesions on muscle biopsy. DM carries an increased risk of malignancy and can present as a paraneoplastic syndrome of multiple types of underlying malignancy, especially lung and prostate cancers and gastrointestinal tumors in men and gynecological tumors in women, but only rarely MBC. Polymyositis (PM)/DM can precede, coincide with, or follow the onset of malignant tumor development. Most commonly, however, the literature reports that malignant tumors appear within 2 years of the onset of PM/DM. The risk of malignancy is highest within the first year after the diagnosis of PM/DM and then decreases each year, but it remains higher than in the normal population (Andras13). Therefore, screening for malignant tumors should be carried out extensively within the first year after the appearance of PM/DM.

There is little evidence to guide the treatment of DM. Wolff et al reported a rare case of DM with triple-negative female breast cancer managed with chemotherapy and an injectable adrenocorticotropic hormone agonist, without surgery.14 However, the role of neoadjuvant chemotherapy is debatable. Ideally, detection and surgical removal of the malignant tumor yields the best results in terms of improvement of DM. In this case, the patient developed periorbital erythematous papules 3 months before the diagnosis of MBC. It is noteworthy that, after surgery, he experienced remarkable improvement in the dermal erythematous papules, with gradual recovery of muscle strength, without glucocorticoid therapy.
These findings strongly support the importance of diagnosing and surgically treating the underlying malignancy. The prognosis of DM patients with malignant tumors is poor. The main causes of death are extensive tumor metastasis, secondary infection, and systemic failure. The overall survival of PM/DM patients with malignant tumors is worse than that of patients with other forms of myositis, and prognosis and life expectancy are determined by the underlying malignancy.15

Conclusion

We presented a rare case of male occult TNBC with DM. No firm conclusion can be drawn regarding current drug therapy, and surgery remains the main treatment.
The evolution of metabolism: How to test evolutionary hypotheses at the genomic level

Introduction

Plants produce a vast array of metabolites during their lifecycle. Most of these compounds are called "secondary" or "specialized" metabolites, to distinguish them from primary metabolites, which are involved in the basic processes of growth and development. Secondary metabolites instead exert their functions during the interactions of plants with the surrounding environment: they are typically synthesized as defense compounds to deter predators (being toxic to plant pathogens), some may confer tolerance against abiotic stresses, and others serve as attractants for pollinators or as signals for plant-plant interactions [181]. Although recent research in plant metabolism has made this distinction less clear-cut than it used to be [160], secondary metabolites, as a group, retain some distinctive characteristics: they show an amazing diversity of chemical structures and vastly exceed the number of primary metabolites. There are more than 40,000 reported structures for terpenoids [181], for example, and polyphenols are in the range of 8,000-9,000 known compounds [6]. Secondary metabolites also show marked qualitative and quantitative variation, both between tissues and developmental stages of a single plant, but also among different individuals of a species and across different species [273,181]. This chemical diversity originates from the activity of large and numerous families of enzyme-encoding genes which generally operate in highly branched (and often compartmentalized) metabolic pathways [147].

In the first part of this review (Sections 2 through 5) we try to answer the question of how these metabolic pathways emerged and were shaped during evolution, starting from what is known about the "RNA world" and the non-enzymatic and primordial metabolism, to later look at the specific events driving the large diversification of plant secondary metabolism. In doing so, we will present the retrograde, Granick, patchwork and shell hypotheses for the evolution of metabolic pathways, as well as reviewing current thinking as to their likelihoods. In an attempt to answer the focal question about the adaptive value of metabolic diversity, in the second part of this review (Section 6) we will cover some of the strategies which may be used to understand whether genes carry signatures of selection, presenting examples of how these genomic approaches may illuminate the evolution of plant metabolism.

The RNA world

It is received wisdom that ancestral life forms inhabited an environment, commonly known as the primordial soup, rich in the spontaneously formed organic compounds of the prebiotic world [77]. This hypothesis, the Oparin-Haldane theory, predicts that some simple organic molecules could form in the highly reducing atmosphere of primitive Earth, simply with the supply of external sources of energy (e.g. UV radiation, lightning). This theory remained speculative until 1953, when Stanley L. Miller, a graduate student at the University of Chicago, showed that glycine and alanine could form in an artificial system which mimicked the probable conditions of primitive Earth [176]. The primordial soup, in any case, was probably not the only environment in which simple organic molecules could be formed.
The fall of meteorites and comets was common on primitive Earth: recent analysis of the content of the Murchison and other meteorites revealed an impressive diversity of organic chemicals, including the presence of ribose and other simple sugars [54,172,227,78]. Even if the formation of organic molecules in the primordial soup, or their "delivery" by falling meteorites, is now widely accepted, these simple molecules did not yet represent life. "Life" is, as stated elegantly by Andreas Wagner [267], the combination of metabolism (a metabolic network connecting, in terms of reactions, simple building blocks into something more complex, along with the opposite process of breaking larger objects into simpler ones) and replication (the process of making more of itself). This led the scientific community to face the first (of many) chicken-and-egg problems in origin-of-life research: which came first, replication or metabolism? (Supporting "replication/genetics first": [10] and references therein; the original formulation of "metabolism first" came from [291], with later support from [263-265].)

With the discovery of the structure of DNA in 1953 [269], replication seemed to be the perfect process to occur first. DNA was the bearer of genetic information, and its double-helix structure provided a perfect model for its replication (however, DNA cannot replicate on its own). Later, when RNA was discovered to also possess catalytic properties [8,43], and some ability to perform primer extension on the basis of an RNA template [121], the "RNA world hypothesis" gained growing credit [83,243,125], and the idea of an "RNA replicator" surpassed metabolism as the most probable process to occur at the inception of life, with an RNA ribosome even being postulated [196]. Thus the "RNA world" [83] apparently preceded the DNA/protein/metabolite world. More recently, however, this theory has been considerably challenged, not least by Markus Ralser, who argues that most reactions in the model cell are catalyzed by protein enzymes, with the remainder being driven by sunlight, free-radical chemistry or metal ions [132]. Whilst the RNA world hypothesis postulates ribozyme-catalyzed reactions, there is a dearth of identified ribozymes that would be important for the core metabolism of any species. Moreover, many of the in vitro-selected ribozymes obtain their catalytic activity indirectly via their binding of metal ions such as zinc [211]. Taken together, these observations suggest that, if ribozyme-catalyzed metabolic reactions exist at all, their contribution to cellular metabolism as a whole is only marginal. From the perspective of evolutionary theory, it is additionally difficult to envision a scenario by which an RNA-catalyzed reaction system could evolve, as it could not have come into place one step at a time, but only as an operational entity [211]. Thus, the origin of metabolism as an RNA-based metabolic reaction system seems rather improbable. By contrast, RNA plays a dominant role in all steps of translation, indicating that protein biosynthesis followed RNA in evolution and that this role was maintained [183].

Non-enzymatic (pre-biotic) metabolism

Credit for the "metabolism-first" hypothesis, as opposed to "replication-first", on the other hand, lagged behind acceptance of the "RNA world".
This was due to two essential requirements of the metabolism-first theory that initially could not be met: explaining the presence of catalysts, and the need for small volumes of liquid to facilitate the interaction of molecules and the occurrence of reactions [265,10]. These two conditions were met with the discovery of hydrothermal vents in the deep ocean, which represent fissures in the Earth's crust and harbor chemistries reminiscent of primitive Earth. Hydrothermal vents emit hot water and reactive gases at high temperatures, containing high concentrations of transition metals (Fe(II) and Mn(II)) as well as CO2, H2S, CH4 and H2. Once the mixture of hot fluid and gases, at alkaline pH, diffuses into the cold ocean water, it forms chimney-like porous deposits of carbonates. Through these pores, the components of the hydrothermal effluents can react together in a microenvironment characterised by large temperature, pressure and pH gradients. Today, these microenvironments host rich microbial communities, which represent the deepest branches in the tree of life; they were probably also the sites of the first metabolic reactions to occur on the planet [15,168]. The first reactions to take place could have produced various prebiotic precursors of metabolism, including formamide, α-hydroxy and α-amino acids, fatty acids and pyruvate [264,114,50,221,84]. The formation of pyruvate and fatty acids, in particular, has also been confirmed recently under realistically simulated hydrothermal conditions [31,190]. Another recent series of experiments additionally demonstrated that the canonical pathways of glycolysis, the pentose phosphate (PP) pathway and the tricarboxylic acid (TCA) cycle possess highly similar non-enzymatic analogs [130,132]. Pyruvate and glucose, the end-products of glycolysis and gluconeogenesis, respectively, can be formed spontaneously in water, whilst metal ions such as ferrous iron, and phosphate and sulfate radicals (which are abundant in hydrothermal vents), catalyze reactions resembling those of the PP pathway and TCA cycle, respectively. These experiments revealed considerable specificity out of the vast chemical space of possible reaction products, providing strong indications that metabolic reactions similar to those used in modern cells did not necessarily originate from the evolutionary selection of complex catalysts [211].

Intriguingly, most modern enzymes of central carbon metabolism are independent of metallic cofactors. However, examples such as ribulose 5-phosphate epimerase, for which E. coli uses a ferrous ion for its catalytic function [237] while the enzymes of higher organisms do not [143], suggest that this independence may be the result of selection. Interestingly, in parallel to this loss came the evolution of complex iron transport systems, circumventing the problem of Fenton reactions generating superoxide, which results in oxidative stress and even fatal cellular toxicity [211]. In addition to the issue of toxicity, a number of other factors likely underlie the shift from non-enzymatic catalysis to enzyme-dominated catalysis. These include: (i) limited catalyst availability, (ii) improved substrate specificity of enzyme-catalyzed reactions, (iii) the prevention of side reactions and (iv) the greater possibilities of metabolic regulation afforded by enzymatic catalysis [131]. It is important to bear in mind that whilst enzymatic catalysis dominates metabolism, non-enzymatic reactions still occur frequently within the metabolic network of all cell types.
Indeed, the possibility of non-enzymatic catalysis is often one of the driving forces for keeping certain metabolites at low levels or compartmentalized within enzyme complexes. This fact notwithstanding, the evidence for an important role of non-enzymatic reactions in the evolution of metabolism is highly persuasive; it also seems consistent with the suggestion that metabolism could have emerged by exploitation of the rich chemistries present in hydrothermal vents. With the emergence of these prebiotic metabolisms, it is generally considered that the next step in the evolution of life, i.e., the transition to living matter, might have occurred with the encapsulation of these organic molecules within micelles or vesicles [34]. Several mathematical models have been proposed to account for the development of evolvable systems from prebiotic chemistries ([231]; but see also [258,259]), and lipid micelles have been demonstrated to form spontaneously through a self-assembly process starting from free fatty acids and minerals [98]. Also, given the availability of free ribonucleotides from the primordial non-enzymatic reactions (and this seems to be plausible, see [206,240,20]), it was shown that RNA polymers can form in the presence of montmorillonite, a clay present in hydrothermal vents [75,126].

Primordial (biotic) metabolism

Thus, starting from a protocell encapsulating a primordial metabolism in the presence of a primitive, but catalytic, genetic material, the challenge next turns to understanding how the complex biomolecular networks that underpin life took on the forms that we observe today [37,77]. Indeed, cellular metabolism is arguably one of the earliest biological networks and, as such, has been extensively studied and acts as a fantastic model system for studying network evolution. What has been elegantly described as the "complex fabric of (molecular) interconnections" is responsible for the functioning, survival and reproduction of cells [37]. Thousands of different biochemical processes are linked in highly tailored systems that have been acted upon by billions of years of evolution. These reactions are also highly compartmented (partially as a mechanism of metabolic regulation [245,4]), necessitating the evolution of complex transport systems. Indeed, genome-scale models of the model plant Arabidopsis suggest the need for a phenomenal 772 transporters in a 1,200-reaction model [177].

The focus of this section (4) will be the evolution of cellular metabolism, starting from the current hypotheses about the origin of life and the general theories explaining the assembly of metabolic pathways; in the next section (5), we will cover the emergence of primary and secondary metabolism of plants, presenting some recent examples of how genetic diversification and catalytic promiscuity (two typical phenomena of secondary metabolism) may affect the evolution of novel metabolic traits and contribute to chemical diversity [157]. When life began on Earth, metabolism is believed to have centered on very few chemistries and simplified reaction pathways which likely formed in emergent replicating or organismal units and later developed in primitive cells [37]. It is important to note that the reactions of this primordial metabolism, as well as the more ancient prebiotic chemistries, are essentially non-existent today, and we thus depend upon our knowledge concerning the conditions which prevailed on the primitive Earth.
As such, they remain speculative, by contrast to chemistries that arose via enzymatic catalysis, which can be explored at the protein, RNA and DNA levels by coupling knowledge from biochemistry, structural biology, genomics and modern phylogenetic analyses. Despite having these tools at hand, we remain far from understanding the origin of life, as several of the main problems associated with it remain unsolved. Ralser defined these eloquently in 2014 [211]. In essence they are: (i) how did early biomolecules form and reach life-compatible concentrations; (ii) how did genetics, with its heritable evolutionary selection, come into place; and (iii) how did metabolism evolve and facilitate the prototrophy of cells. Having briefly covered the hypotheses made on the first two points raised by Markus Ralser earlier in this review, we now turn to the theories on the evolution of metabolic networks from the primordial metabolism of the universal common ancestor of all life forms. Upon the biochemical routes of primordial metabolism, in fact, several evolutionary pressures may have acted to shape genomes, modifying the structure and regulation of metabolic networks. Several hypotheses have been formulated so far (none of them mutually exclusive in the assembly of complex networks) to explain the emergence of novel metabolic traits. We will present these hypotheses below, before focusing on the genome dynamics giving rise to the secondary metabolism of plants.

Glossary

adaptation (signature of): a specific sequence pattern, which can be detected at the DNA level, that distinguishes a locus under selection from one evolving neutrally; the tests to identify these genomic footprints typically compare the frequency of sequence polymorphisms, the extent of linkage disequilibrium and/or population differentiation under the null hypothesis of neutrality.

balancing selection: the process by which two alleles of a single gene are maintained in a population at intermediate frequency, higher than that predicted by genetic drift alone, due to heterozygote advantage.

compartmentation: in the context of metabolic biology, the distribution of metabolites and enzymes (and thus, metabolic pathways) across different subcellular structures. Macrocompartmentation refers to the differential allocation of metabolites/enzymes between the cytosol and other membrane-bound organelles (as is often the case in cofactor metabolism), while microcompartmentation denotes the association of cytosolic enzymes with the cytoskeleton (actin, tubulin) or their localization on the surface of organelles or endomembranes [245].

duplicate loss: the process by which duplicated copies of a gene are lost across evolutionary time. In plants, this is the common fate of most paralogs. A form of duplicate loss is pseudogenization, when one of the paralogs is retained in the genome but is no longer functional due to a loss-of-function mutation.

evolve and resequence (E&R): the comparison of allele frequencies between an ancestral (base) and a selected population with Pool-Seq within an experimental evolution setting.

experimental evolution: the study of evolutionary processes over multiple generations in populations where the conditions are set and controlled by the experimenter. It involves the application of selection pressures where only the individuals exceeding a certain phenotypic threshold are allowed to reproduce.
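To make the last two entries concrete, the following minimal Python sketch (all parameter values are invented for illustration, not taken from any study reviewed here) simulates truncation selection on a single biallelic locus with an additive effect on a quantitative trait, the kind of regime applied in experimental evolution and quantified in evolve and resequence designs:

import numpy as np

rng = np.random.default_rng(1)

N = 1000            # population size (hypothetical)
p = 0.1             # initial frequency of the trait-increasing "+" allele
effect = 1.0        # additive phenotypic effect per "+" allele
generations = 15

for g in range(generations):
    genotypes = rng.binomial(2, p, size=N)                      # 0, 1 or 2 "+" alleles per individual
    phenotypes = effect * genotypes + rng.normal(0, 1, size=N)  # trait value plus environmental noise
    threshold = np.quantile(phenotypes, 0.80)                   # only the top 20% reproduce
    parents = genotypes[phenotypes >= threshold]
    p = parents.mean() / 2                                      # allele frequency among the selected parents
    print(f"generation {g + 1}: freq(+) = {p:.3f}")

Under this toy regime the frequency of the "+" allele rises steadily across generations; this allele-frequency shift between the base and the selected population is exactly what an E&R experiment measures with Pool-Seq.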
gene duplication: a mutational event which implies the duplication of a particular genomic region (or of whole genomes, in the case of polyploidization) into a different position in the genome. Several genetic mechanisms can give rise to gene duplicates (paralogs): in addition to whole-genome duplications, paralogs may form following tandem/segmental duplications, or may derive from the activity of transposons. The contribution of each of these mechanisms to the number of gene duplicates typically varies widely across plant genomes.

gene fusion: the process through which a new gene is formed through the fusion of multiple, previously separated ORFs (open reading frames, i.e. stretches of DNA bordered by a start and a stop codon). Fused genes may result from interstitial deletions (deletions of intergenic space) or from larger chromosome mutations (translocations, inversions, etc.).

genetic drift: the change of allele frequencies due to random factors.

genome scan: a survey of the genome-wide DNA polymorphisms across members of a population.

homology: similarity of phenotypes in different lineages due to common ancestry. In evolutionary biology, the term homology can be used to define a common ancestral origin for any structure (e.g., organs, morphologies, genes, etc.): two genes are thus homologs, for example, if they derive from the same common ancestor [271].

linkage disequilibrium (LD): nonrandom association of alleles from different loci. It is usually measured as:

r²_AB = (p_AB - p_A p_B)² / (p_A p_a p_B p_b)

where A and a are the two alternative alleles at locus 1, B and b are the two alternative alleles at locus 2, p_A, p_a, p_B and p_b are their respective allele frequencies, p_AB is the haplotype frequency (for the AB allele combination), and r²_AB is the LD between A and B. Mutation rate, selection, demography (e.g. migration, admixture, bottleneck), genetic drift and mating system (outcrossing vs self-fertile species) are all factors influencing the extent of LD in natural populations [76].

maximum likelihood (ML): a statistical method to infer unknown parameters of a probability distribution. In phylogenetics, ML is both a method to infer tree topologies and a way to test evolutionary processes when the tree topology is known [288]. Typical parameters include, for example, the tree topology itself, branch lengths and substitution rates. Likelihood is simply defined as a quantity proportional to the conditional probability of observing the data (D), given the model (M), i.e., P(D|M). In the inference of phylogenetic trees, likelihood scores are calculated for each possible branching pattern; the maximum value (ML) is the highest score, associated with the specific branching pattern which maximises the probability of observing the data (i.e., the individual site patterns of the multiple alignment) given the substitution model [72,116].

maximum parsimony (MP): a method to infer phylogenies based on minimising the number of character changes; historically it was the first approach to reconstruct ancestral sequences [239] and was later applied also for tests of neutrality in protein-coding sequences [174]. MP provides relatively reliable reconstructions of ancestral states only in cases of recent divergence [293].

mosaic origin: as a consequence of the processes of endosymbiosis of cyanobacteria and α-proteobacteria, and the subsequent gene transfer between the endosymbiont and the nuclear genome, some of the metabolic pathways in eukaryotes display signatures of mixed evolutionary origin.
Typical examples of mosaics are the glycolysis and Calvin cycle pathways in plants [167], pyrimidine biosynthesis [188] and heme biosynthesis in photosynthetic eukaryotes [193]. In addition to endosymbiosis, horizontal gene transfer has also contributed to shaping the mosaic structure of some specific pathways in photosynthetic eukaryotes (e.g. chlorophyll degradation [192]).

Oparin-Haldane hypothesis: a theory regarding the origin of life proposed independently in the 1920s by the Russian biochemist Aleksandr Oparin and the British geneticist John B. S. Haldane. Both believed that if the primordial atmosphere contained very low levels of oxygen (a "reducing atmosphere"), then organic compounds could have been formed directly from inorganic molecules with the supply of external energy sources (e.g. UV radiation, lightning). Haldane coined the term "prebiotic soup" to denote the primordial aqueous environment, rich in ammonia and methane, in which the first organic compounds appeared; later, by further developments, these early organic metabolites could have acquired lipid membranes (Oparin's "coacervates") to form the first living cells [151,152].

"one gene-one enzyme" theory: the idea formulated in the 1940s by the American scientists George W. Beadle and Edward L. Tatum that one gene specifies the synthesis of a single enzyme in a metabolic process. The idea sprang from Beadle and Tatum's work with Neurospora crassa, a mold which could be grown easily in the laboratory on a simple growth medium (sugars, inorganic salts, vitamins). Beadle and Tatum irradiated spores of Neurospora and tested whether the derived mutant strains were able to grow on complete and minimal media. Those mutants that failed to grow on the minimal medium were tested systematically to identify which compound they were unable to synthesize [18]. They found that some mutant strains required the addition of specific amino acids in order to grow on the minimal medium: this result allowed them to associate the mutations with the production of enzymes in a specific metabolic pathway. The work of Beadle and Tatum laid the foundation of biochemical genetics and was awarded the Nobel Prize in Physiology or Medicine in 1958 (shared with Joshua Lederberg). Although this theory was solidly verified, we now know it offers an oversimplified view of molecular biology: not all genes encode enzymes, and many enzymes act only as multipolypeptide complexes, where each single component is encoded by a distinct gene.

orthologs: genes in different species originating from a single ancestral gene in the last common ancestor of the species under comparison [141,142].

paralogs: genes in a single species originating from an event of gene duplication. A broader definition does not make any distinction as to whether paralogs reside in the same genome (i.e., in the same species) or with respect to when the duplication emerged [142].

positive selection (or directional, also known as Darwinian selection): the increase in frequency of mutations conferring higher fitness (advantageous mutations), to higher prevalence in the population or even fixation.

primary metabolism: the ensemble of metabolites (and their pathways) which are essential for growth and reproduction of all organisms. It usually includes the central (carbon) metabolic routes involving the four main classes of biological molecules (carbohydrates, proteins, nucleic acids and lipids).
The number and structure of primary metabolites are largely conserved across the tree of life.

primordial metabolism: the set of simplified reaction pathways which characterised the early heterotrophic living systems emerging from the primordial soup (or from the highly reactive microenvironments in hydrothermal vents). The term may be used in a more general sense to include also the ensemble of prebiotic chemistries that predated the emergence of the first life forms (i.e., the last universal common ancestor of all living forms, around 3.5 billion years ago). In its broader sense, primordial metabolism thus covers the chemistries of the prebiotic phase up to the simple metabolism of the last universal common ancestor (from around 4.2 to 3 billion years ago).

purifying selection (or negative): the process by which deleterious mutations (decreasing fitness) disappear from a population.

relaxed selection: the weakening or elimination of a regime of natural selection which previously acted to maintain the expression of a trait; it usually follows a change in environmental pressure or genetic make-up, such as following gene duplication [150].

secondary (or specialized) metabolism: the ensemble of metabolites (and their pathways) which mediate the interactions of an organism with its surrounding environment. Secondary metabolites typically function in various ecological interactions (in response to pathogens, as attractants for pollinators, etc.), although some may regulate processes in growth and development [254,160], a role which was formerly assigned to primary metabolites only (see primary metabolism). Secondary metabolites occur in a much wider variety of structures, and their number vastly exceeds that of primary metabolites. In light of their ecological role in improving the fitness of organisms, secondary metabolites may represent adaptive traits. Their fragmented distribution patterns across the tree of life could thus reflect adaptations to particular ecological niches. Pathways of secondary metabolism are present in microbes, fungi and plants. Many animals (both invertebrates and vertebrates) synthesize defensive metabolites with structures highly similar to the typical secondary metabolites of plants. Some quinazoline alkaloids, for example, which are widely accumulated in plants of the Rutaceae family (Citrus), are also synthesized, with similar structures, by millipedes, ascidians (sea squirts) and amphibians. In all cases, they serve defensive purposes and protect the organism from predators [175]. The presence of very similar metabolites in distant taxa suggests the possibility that specific metabolic traits could have emerged initially to serve a particular function but were later exapted [87] to serve a different one (e.g., sex attraction and defense).

selective sweep: the decrease in nucleotide diversity within a population in regions flanking a locus under positive selection. Hard sweeps are usually defined as those following the emergence of a new mutation which is immediately beneficial (and thus selected) in a population; soft sweeps, by contrast, originate from selection on standing variation; as such, they emerge in neutral or nearly neutral loci whose derived phenotypes are subject to a change in selection pressures.

site frequency spectrum: a histogram representing the distribution of allele frequencies in a population.
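As a minimal numerical illustration of two of the quantities defined above, the linkage disequilibrium measure r²_AB and the site frequency spectrum, the following Python sketch computes both from a small, invented haplotype matrix (the data are purely illustrative):

import numpy as np

# Toy haplotype matrix: rows = haplotypes sampled from a population,
# columns = biallelic sites coded 0 (ancestral) / 1 (derived).
H = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 0],
])

# r^2 between the first two sites, following the glossary definition:
# r^2_AB = (p_AB - p_A * p_B)^2 / (p_A * p_a * p_B * p_b).
p_A, p_B = H[:, 0].mean(), H[:, 1].mean()
p_AB = np.mean((H[:, 0] == 1) & (H[:, 1] == 1))
r2 = (p_AB - p_A * p_B) ** 2 / (p_A * (1 - p_A) * p_B * (1 - p_B))
print("r^2 =", round(r2, 3))    # 1.0: the first two sites are perfectly associated

# Unfolded site frequency spectrum: number of sites whose derived allele is
# carried by exactly 1, 2, ..., n-1 of the n sampled haplotypes.
n = H.shape[0]
counts = H.sum(axis=0)                       # derived-allele count per site
sfs = np.bincount(counts, minlength=n + 1)[1:n]
print("SFS =", sfs)                          # [1 1 2 0 0]: one singleton, one doubleton, two tripletons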
stabilizing selection: in the context of population differentiation, the process by which the same allele is selected in different subpopulations.

Figure 1. Schematic diagrams representing the main hypotheses for the evolution of metabolic pathways. In the retrograde hypothesis (a), metabolic pathways are supposed to originate through sequential gene duplications, starting from the gene catalyzing the last step of current pathways. Depletion of the compounds present in the primordial soup may have generated a selection pressure leading to the survival and reproduction of the primordial cells able to produce the depleted compounds; this process could then have been repeated sequentially, in a backward direction, until the establishment of contemporary pathways. In Granick's hypothesis (b), pathways would have been assembled in a forward direction, from simple precursors to more complex products. Under this model, the older genes on an evolutionary timescale would be those catalyzing the earlier steps in contemporary pathways. In our diagram, the decoration and alkylating steps are examples of reactions adding complexity to the initial precursor and to the pathway intermediates. In the patchwork hypothesis (c), ancestral genes encoding promiscuous enzymes could have expanded the metabolic capabilities of primordial cells through gene duplication and subsequent divergence. A possible fate of an event of gene duplication is subfunctionalization (c), in which the catalytic activities of the ancestral gene are divided among the paralogs; in our example, following divergence of the duplicated genes, one of the ancestral reactions is taken on by one of the paralogs. In the shell hypothesis (d), the evolution of metabolism can be traced back to the consecutive addition of distinct metabolic pathways. The core central pathways (reductive TCA cycle, fatty acid biosynthesis), i.e., shell A, predated the addition of nitrogen metabolism in shell B; sulphur and cofactor metabolism were later added as shell C. As more pathways are added, the inner shells remain nested in the network as remnants of the earliest metabolisms. Abbreviations: E1, enzyme 1; E2, enzyme 2.

The hypothesis of the origin of life in the pores of the deposits formed by hydrothermal vents posits that early organisms were heterotrophic and performed only minimal biosynthesis. The suggestion is that the increasing number of primordial cells would lead to an exhaustion of amino acids in the primordial soup, thus imposing increasingly stronger selective pressure favoring those cells that had evolved the capacity to synthesize such molecules themselves. This, ultimately, led to the thousands of extant reactions and transport processes linked into the pathways and networks that characterize life as we know it [77]. The presence of such highly complex metabolic networks raises the question as to how they originated, starting from ancestral genomes that were likely composed of only a couple of hundred genes [187]. Different molecular mechanisms have clearly been behind both the expansion and the shaping of early genomes and metabolic pathways including, but not limited to, gene duplication, gene fusion and horizontal gene transfer. Whilst horizontal gene transfer (HGT) has been demonstrated in plants [212,25,192], it appears to be relatively uncommon and as such we will largely restrict discussion here to the other two mechanisms.
A case of HGT, just to cite one example, involved phenylalanine ammonia lyase, the gene leading to the evolution of phenylpropanoid metabolism: the appearance of phenylpropanoids was in fact a key adaptation of plants to life in terrestrial habitats [65,255]. Starting from Ohno's classical work, gene duplication is instead considered an immensely important force driving the evolution of life [194]. Comparative analysis of sequenced archaeal, bacterial and eukaryal genomes revealed that a very large proportion of genes are the outcome of duplication events [56,209] that either predate or postdate the appearance of the last universal common ancestor [119]. Moreover, due to the preponderance of whole-genome duplications in the plant kingdom, this is especially the case for plants [198,74,218,209]. It has been suggested that current genomes may be the result of DNA arrangements involving a limited number (20-100) of "starter types" [153], although how these originated remains unclear. The opposite phenomenon, i.e. gene fusion, wherein independent cistrons are fused to form bi- or multi-functional proteins, is also a common evolutionary process. It provides a mechanism for the physical association of different proteins (be they catalytic or regulatory or both), and frequently involves genes encoding sequential steps of a pathway [66,103], with such fusions being reported for the tryptophan and histidine pathways in prokaryotes [279,71], as well as the Calvin-Benson cycle [204], glycolysis [257], and secondary metabolic pathways in plants [277].

The retrograde hypothesis

As early as 1945, and based on the Oparin-Haldane hypothesis and the one-to-one correspondence between genes and enzymes ([18]; see glossary), Horowitz proposed that biosynthetic enzymes emerged via gene duplications that took place in the reverse order to that found in current pathways [109]. This theory posits that if compound A was limiting in the primordial soup, then its synthesis from its direct precursor would be the first (enzyme-)catalyzed reaction to be required. Once this was established, it would likely lead, in turn, to selective pressure on the direct precursor, requiring the evolution of another reaction to synthesize it. Over many iterations, a pathway could thus be built backwards, from the final product(s) up to the initial precursor [77] (Fig. 1a).

The Granick hypothesis

A direct alternative hypothesis, which is little discussed [70], is the proposal that biosynthetic pathways develop in the forward direction [90]. The central theme of this hypothesis is that the biosynthesis of certain end-products could be explained by forward evolution from relatively simple precursors. The model thus predicts that simple biochemical compounds predated more complex ones and that the enzymes catalyzing earlier steps of a metabolic route are older than those acting later in the pathway (Fig. 1b). For this hypothesis to hold, it is imperative that each of the intermediates is of use to the organism [70]. This seems to hold true for heme and chlorophyll biosynthesis, as well as isoprene biosynthesis, but problems arise for pathways such as purine and branched-chain amino acid biosynthesis, for which the intermediates are of no apparent use [70].

The patchwork hypothesis

The patchwork hypothesis [120], by contrast, posits that metabolic pathways may have been assembled via recruitment of primitive enzymes that could react with a wide range of chemically related substrates [77].
Such relatively slow, non-specific enzymes enabled primitive cells containing small genomes to overcome their limited coding capacities. Gene duplication with neo-/subfunctionalization is the proposed mechanism underlying this recruitment of an ancestral enzyme to serve novel functions in emergent pathways (Fig. 1c). This hypothesis finds support both from the analysis of sequenced genomes and from directed evolution experiments [70]. It has, furthermore, been invoked to explain the evolution of several processes, including the urea and TCA cycles as well as several pathways of amino acid biosynthesis and even more recently evolved pathways [77].

The shell hypothesis

A fourth hypothesis which needs to be considered is the shell hypothesis, put forward by Morowitz [184]. This postulates that the reductive citric acid cycle was the earliest pre-biotic self-replicating chemistry and that it evolved in the absence of enzymes. This cycle is then believed to have led to an "energy amphiphile" core that enabled the discovery of new carbon-based chemistries, upon which further chemistries (or shells) were built. This hypothesis assumes that pre-biotic chemistries remain "imprinted" in modern metabolism as relics, and suggests that the biogenesis of metabolism manifested itself in a hierarchy of nested reaction networks of increasing complexity [37]. Indeed, it predicts that the formation of the TCA cycle, glycolysis and fatty acid biosynthesis in shell A preceded the introduction of nitrogen via amino acids in shell B and of sulphur in shell C, with ring closure giving rise to purines, pyrimidines and many other cofactors (Fig. 1d). The energy amphiphile core of this hypothesis is consistent with earlier proposals that life evolved on pyrite [264], although the gradual addition of shells, and in particular the late accounting for sulphur chemistry, is not consistent with recent scenarios for a core organo-sulphur prebiotic metabolism [84].

The emergence of primary and secondary metabolism

Whilst all of the above theories have had their supporters, the patchwork recruitment scenario is arguably the best supported by accumulated evidence (see [226] and [37] for details; we review additional support for the patchwork model with respect to the other theories further below). To provide just a handful of examples here, enzymes with the (βα)8-barrel fold structure have been found to catalyze similar reactions across pathways [53]. Similarly, analysis of the entirety of E. coli metabolism revealed a genuine mosaic, with widespread recruitment of protein domains [249]. This is also clearly the case in many plants: the endosymbiotic events that gave rise to the mitochondrion and chloroplast (and possibly the peroxisome) during the evolution of the eukaryotic lineage duplicated enzyme functions, and unless there was a specific selective advantage, one of the duplicated enzymes was lost over evolutionary time. In the absence of a specific selective pressure, duplicate loss was essentially random, which has led to mixed compartmentation of metabolic pathways and a mosaic evolutionary origin for many pathways, irrespective of their compartmentation [245].
Gene duplication and neofunctionalization

Whilst the evolution of primary metabolism in plants is dominated by events of endosymbiosis, gene duplication, with the different fates of the resulting paralogs (including gene loss; see, for example, the fact that plants do not possess a complete urea cycle [5]), is considered instead to be the main driver of the diversification of secondary metabolism [219,40]. Several models are usually invoked to explain the emergence of novel gene functions following (or, as we will see below in the case of the escape from adaptive conflict, predating) gene duplication [194,117,52,59,96]. Despite the fact that the majority of gene duplicates are lost over evolutionary time, and that most of those that are retained are subject to strong purifying selection, a few retained paralogs may instead initially be under relaxed selection and may accumulate (potentially adaptive) mutations [165]. This model (neofunctionalization) thus implies that the original gene keeps its ancestral function, while a new function emerges in one of its derived paralogs, which can be maintained in the genome because of positive selection on the new function [104,39,275]. Within the context of metabolism, typical polymorphisms associated with the emergence of novel enzymatic functions may be located in the coding region, resulting in a shift in substrate preference and/or catalytic activity; in other cases, metabolic novelty may be driven by differences in transcript abundance of structural or regulatory genes, as a result of genetic polymorphisms or epimutations in the promoters or other regulatory regions [274,210].

Thus, accompanying gene duplications, these non-deleterious, extremely rare mutational events generating expansion of the existing pathways of primary metabolism could have emerged and eventually become fixed within populations [273] during the major expansion of plant metabolism, which occurred concomitantly with the colonization of terrestrial habitats, around 500 million years ago [65,255]. Many metabolic lineages of secondary metabolism could have emerged, at least initially, through such rare events. Then, following various modalities of gene duplication (tandem, segmental or retroduplication), some of the retained, additional alleles may occasionally be subject to relaxed constraints, such that at least one copy is able to accrue considerable mutations, leading to greater mechanistic elasticity, entirely new substrate specificities or, more generally, to an alteration of enzyme activity [179]. This explains why expanded substrate recognition, flattened catalytic landscapes and multiple products from a single enzyme are common in specialized metabolism. Although the distinction between genes active in primary and secondary metabolism is, from a functional perspective, relatively blurred, several evolutionary considerations support this classification. As we have said (but see also the examples further below), secondary metabolism genes originated from duplications of pre-existing genes (often from primary metabolism [40]), have a fragmented phylogenetic distribution [276], and are usually characterised by a higher rate of gene birth/loss with respect to the enzyme genes of primary metabolism [185]. Also, they may occur in metabolic clusters [191], and are characterised by large expression variation (plasticity) and distinct correlation properties [137,97,253,278].
All these characteristics were used in a machine learning approach to train a model of the Arabidopsis genome which was able to predict whether a particular gene is part of primary or secondary metabolism. The model reached a high prediction accuracy by integrating information on each gene's degree of conservation, genomic location, expression profile, the presence/position of epigenetic marks and protein domain composition [182].

A classical example, which summarises the co-occurrence of several of the phenomena discussed above, comes from the study of methylthioalkylmalate synthase (MAM), an enzyme involved in the methionine chain-elongation steps leading to the synthesis of alkylglucosinolates, a class of defense metabolites found in Cruciferae (Arabidopsis and related mustard species). Phylogenetic analyses support the origin of MAM from the duplication of the ancestral α-isopropylmalate synthase (α-IPMS), a gene active in the synthesis of leucine [55]. The MAM duplicates evolved distinctive characteristics: they lack the C-terminal domains, typical of α-IPMS, which confer the capacity to be feedback-inhibited by the final product of the ancestral pathway, leucine; moreover, two amino acid changes in the active site shifted the specificity of the duplicates towards the MAM substrates, although both enzymes maintained marginal catalysis with their non-preferred substrates [23,55]. This case thus represents an example of how paralogs of genes of central metabolism may undergo: i) neofunctionalization involving extensive sequence variation, shifting substrate preference, and ii) relaxation of ancestral functional constraints, with the loss of feedback inhibition. Both processes thus contributed to the evolution of MAM genes starting from a gene active in primary metabolism, giving rise to the synthesis of novel metabolic traits.

Another example, that of the gene encoding homospermidine synthase (HSS), recapitulates well the processes leading to the appearance of evolutionary novelties in plant metabolism. Homospermidine is a widespread polyamine in the plant kingdom; in several families of Angiosperms it is used as the substrate for the synthesis of pyrrolizidine alkaloids (PAs), a class of secondary metabolites used as feeding deterrents by the plant. In Convolvulaceae, independently from the other families which also accumulate PAs, phylogeny strongly supports the origin of HSS genes from a single duplication of deoxyhypusine synthase (dhs), a gene of primary metabolism involved in the post-translational regulation of the eukaryotic initiation factor eIF5A. DHS transfers an aminobutyl moiety, derived from putrescine, to a lysine residue of eIF5A, forming the rare amino acid deoxyhypusine. In PA-free species of the Convolvulaceae, there were no apparent paralogs of dhs, or, when these were detected, gene duplication gave rise to nonfunctional HSS copies (pseudogenization). On the other hand, in species accumulating PAs (e.g., Ipomoea neei), duplication of the ancestral dhs generated paralogs which later accumulated both functional (i.e., non-synonymous changes in the amino acid sequence) and regulatory divergence (i.e., tissue-specific expression), acquiring the novel catalytic activity typical of HSSs [127].
Gene duplication and the escape from adaptive conflict (EAC)

An alternative process invoked to explain the evolution of a novel function (and the maintenance of the ancestral one) is the escape from adaptive conflict (EAC), which occurs when a new function emerges in a progenitor single-copy gene before its duplication. Under this model, both the ancestral and the novel function are maintained in the progenitor, but negative intragenic epistasis prevents improvement of both functions (advantageous mutations in the ancestral, bifunctional gene are removed by natural selection). Gene duplication thus resolves the conflict by separating the fates of the paralogs, which can then accumulate mutations to improve the ancestral and the derived function separately [105]. An example of escape from adaptive conflict probably operated in Convolvulaceae during the evolution of dihydroflavonol 4-reductase (DFR), a gene in the anthocyanin pathway which acts downstream of dihydroflavonols ([59]; but see also [12]). In Ipomoea purpurea, DFR is present in a locus made of three tandem copies, which are all expressed across several tissues. Analyses of sequence evolution and tests of the substrate specificities of DFR enzymes from various Ipomoea species partially support EAC as the process by which a novel function emerged during the evolution of DFR. Before the first round of duplication, in fact, the lineages of single-copy DFR genes were subject to purifying selection, indicating that the gene was under strong functional constraints (mutations in this lineage are thus considered to be mainly deleterious). By contrast, these constraints were apparently released after the first round of gene duplication, when the paralogs show strong evidence of repeated positive selection, indicating the presence of adaptive evolutionary changes which improved, separately, both the ancestral and the derived function. Although there are no large-scale studies on the impact of EAC in plants, and alternative models may also account for the fate of DFR genes in Ipomoea [12], EAC remains a possible model describing how new protein functions may evolve while maintaining, at the same time, their ancestral functions. More recently, biophysical methods incorporating protein stability and population genetics data have been developed to model the most reasonable mode of protein evolution [233].

Evolution of protein functional specialization

Although the plant studies cited above, along with others [163,180,203,48,294], all afford extraordinary insights into the functional diversification of extant enzyme genes, and additionally provide support for the patchwork hypothesis, the details of the historical trajectories leading to this functional specialization remain relatively unexplored. This is because we generally lack information about the ancestral genes, so that the progressive effects of the historical nonsynonymous substitutions are unknown. With the revolutionary works of Joseph Thornton [251,100,242] and the recent improvements in phylogenetic methods (summarised in [64]), however, the field of ancestral protein resurrection has made tremendous progress. Indeed, it is now possible to resurrect the most probable sequences of ancestral alleles from a well-supported phylogeny and characterise the function of their respective products. In such evolutionary contexts, one of the first studies to assess the role of gene duplication in metabolic specialization was the analysis of fungal maltases.
These enzymes, encoded by genes of the MALS family, are able to hydrolyse α-disaccharides (e.g., maltose, sucrose, etc.) into monosaccharides [262]. The members of this family underwent both recent and distant duplications and today display functional specialization in the hydrolysis of specific disaccharides. As a result of these duplication events and the subsequent diversification of the paralogs, yeast species have variable numbers of MALS genes and, consequently, different activities towards α-disaccharides: while S. cerevisiae (baker's yeast), which hosts seven MALS genes, can hydrolyse most sugars, the related species S. kluyveri, for example, lacks the ability to hydrolyse melezitose, turanose and isomaltose, due to the absence of two MALS paralogs. The resurrection of the ancestral alleles and the functional characterization of the various maltases allowed the investigators to trace how, from a promiscuous and relatively inefficient ancestral glycosidase, the capacity to hydrolyse sugars was differentially partitioned among the descendants through various duplication events. Along these trajectories, some amino acid substitutions led to an increase in the catalytic efficiency of the ancestral enzyme, while others shifted the specificity toward a different disaccharide substrate. The high activity of MAL12 and MAL32 (two extant maltases) towards maltose-like substrates, for example, was the result of a gradual optimization of catalytic activities already present in the ancestral maltase. On the other hand, the capacity to hydrolyse isomaltose-like substrates, typical of a separate clade of the MALS genes (IMA1-4), resulted from a catalytic specialisation which became preponderant after one of the recent duplication events, although a marginal capacity to hydrolyse isomaltose-like sugars was already present in the ancestral maltase. The evolution of the fungal glucosidases thus highlights the principle that the various models of gene duplication (neo-/subfunctionalisation and EAC [96]), rather than acting singly to account for the specialization of extant enzymes, actually interwove along evolutionary histories to give rise to metabolic diversification.

In plants, ancestral protein resurrection approaches have been pioneered by the group of Todd Barkman, who has studied the functional specialization of the SABATH (Salicylic Acid/Benzoic Acid/Theobromine) gene family of methyltransferases [112] and the convergent pathways leading to the synthesis of caffeine [111]. We will focus here on their first investigation, as an example of how resurrection and characterization of the ancestral SABATH proteins can provide insights into the historical changes underlying enzyme functional evolution. SABATH genes are present in several Angiosperms, have accumulated various duplications, and today show enzymatic activities towards benzoic acid and a wide range of its analogs (e.g., salicylic acid, dihydroxybenzoic acids, nicotinic acid, o-anisic acid, etc.). All these enzymes are active today in the production of flower scents and in the synthesis of various defensive molecules against pathogens. The resurrected ancestral SABATH enzyme showed high activity towards benzoic acid, but several subsequent amino acid substitutions shifted the substrate preference to salicylic acid. Of these changes, one substitution in the active site (the His to Met change at position 201) showed a clear signature of positive selection.
Moreover, along the other branches of the SABATH phylogeny, the resurrection of the intermediate ancestors uncovered how the specialization of extant enzymes emerged. In general, latent activities with non-preferred substrates in one ancestral enzyme instead became preponderant with the accumulation of specific amino acid changes in one of the daughter enzymes following duplication. In some cases, these activity shifts could be traced back to signatures of positive selection on specific amino acid substitutions located in the active site. As with the fungal maltases, it is difficult to classify the specialization of SABATH enzymes into a strict model of protein evolution under gene duplication. In this case, along the various branches, ancestral functions were either optimized, or shifted towards different substrates, and also partitioned among different descendant genes, with these three processes occurring concomitantly during evolution.

Testing evolutionary hypotheses

The studies of extant diversity in plant secondary metabolism necessarily raise the question of how such natural variation originated, but also whether, and to what extent, this diversity might be considered the result of adaptive processes. This is one of the key challenges in evolutionary genetics [147], part of the larger theme of the emergence of evolutionary innovations [57], and directly relates to our (largely) incomplete understanding of the consequences of mutations for the fitness of organisms [122]. In the next sections we will cover some of the approaches which can be used to detect signatures of adaptation at the genetic level. The statistical tests presented below have variously contributed to reinforcing the patchwork hypothesis as perhaps the prevailing mechanism explaining the ramifications of secondary metabolism as we know it today. In plants, several cases of pathway ramification originated from the initial recruitment of neofunctionalized paralogs whose activity conferred some form of (higher) adaptive value. The evolutionary pressures acting on these genes were measured across the branches of phylogenetic timescales with various forms of tests of selection involving codon-based models of sequence evolution (dN/dS or related tests; see, for example, [23,62,127,102]). At a higher level, genomic scans and approaches based on population subdivision have also provided evidence of selection on a range of alleles variously involved in the diversification/expansion of plant metabolism. Some of this evidence points to a role of metabolic genes as direct targets of selection during plant domestication and breeding, rather than their being seen simply as "passive", neutral variants merely linked to the "real" target locus effectively under selection [281,45,22,295,284].

The success of the patchwork model relies perhaps on relaxing the assumptions the other hypotheses make with regard to the age of the enzymes involved in the assembly of a metabolic pathway. Both the retrograde and Granick's forward-evolution hypotheses, for example, posit ancestry relationships among the enzymes along the steps of a metabolic pathway (e.g., in Horowitz's retro-evolution model, the last enzyme in the pathway is the ancestral one): these relationships are weakly supported today, both in light of the mosaic distribution of protein folds across metabolic genes [36] and because of the unequal distribution of selective constraints among the genes along a biochemical pathway [195,51].
Also, in plants, genomic comparisons of the paralog copy number in several classes of metabolic gene families, across a large phylogenetic timescale, did not show a clear trend between the position of a specific gene in a pathway and its time of emergence [255]. More generally, it remains difficult, if not impossible, to assess the real contribution of the retro- and forward-evolution models to the assembly of biochemical pathways: these hypotheses were originally formulated in the context of the evolution of new functions starting from the primordial soup, and such reactions are basically nonexistent today and can be re-created only through a synthetic biology/experimental evolution framework. As such, these theories may have some validity for the initial expansion of primordial metabolism, but today have limited explanatory application in providing support for the formation of a highly branched and compartmentalized metabolic network. The patchwork hypothesis, on the other hand, is consistent with the various models of protein functional evolution following gene duplications [24,96,164,194,266], and with the evidence coming from directed evolution experiments on the evolution of promiscuous activities [1]. The picture emerging from the mechanisms underlying the assembly of metabolic pathways is thus adding increasingly solid support to the patchwork model of evolution, with some cases of specific pathway evolution originating instead from horizontal gene transfer [192]. As will become clear in the examples below, tests of adaptation represent powerful statistical tools both to test evolutionary hypotheses and to generate novel ones about the relationship between genotypes, phenotypes and environment; however, sequence analyses and phylogenetics alone are not sufficient and clearly need to be integrated within wider ecological experiments in natural settings to fully understand the genetic basis of adaptation [16,207].

The possibility to infer signatures of adaptation from sequence data derives from the work of Motoo Kimura and his neutral theory of molecular evolution [134,135]. This theory states that: "[...] the overwhelming majority of evolutionary changes at the molecular level are not caused by selection acting on advantageous mutants, but by random fixation of selectively neutral or very nearly neutral mutants through the cumulative effect of sampling drift (due to finite population number) under continuous input of new mutations [...]" [136]. One of the direct consequences of the neutral theory is that most of the sequence polymorphisms that we observe over evolutionary time are fixed by drift and confer no fitness advantage; hence, the principle upon which tests of selection are based, in order to distinguish neutral from adaptive loci, is to compare the distribution of empirical data against the null hypothesis of random genetic drift [189,261]. In contrast to the predictions of the neutral theory, however, the advent of whole-genome scans has led many researchers to declare footprints of (putative) selection as being relatively common in many plant genomes [101,295,209]. In some plant genera (e.g., Helianthus [17], Capsella [235]), the proportion of adaptive substitutions (α) may well exceed values of 0.2-0.3 [171]. These figures certainly represent an overestimation of the proportion of loci which constitute true adaptive alleles [250].
It is known, for example, that specific DNA patterns may simply arise by chance, or derive from hitchhiking of false-positive SNPs (neutral variants) in linkage disequilibrium with the polymorphism under selection, or from the use of erroneous models of demographic history [252,292]. Also, even when a given locus carrying a signature of selection has been demonstrated to be causal for a certain phenotype, it is still necessary to rule out selection acting on its pleiotropic effects [197,222,16]. For these reasons, we here reserve the definition of "adaptive alleles" to those (i) having a functional and causal relationship to phenotypes increasing fitness, and (ii) whose frequency has been shown to change, in the expected direction, following selection on its focal trait(s) [16]. Along these lines, obtaining convincing evidence of natural selection acting at the sequence level has proven incredibly difficult; as such, true adaptive alleles can be considered rare in plant genomes [154,144,49,207,282]. Given the importance of secondary metabolites and their implications for plant fitness, it is not, however, surprising that structural and regulatory genes of plant metabolism often show signatures of selection. Indeed, the preponderance of such genes is comparable to other classically selected alleles, such as those of disease-resistance genes and life-history traits [138]. Below we provide a survey of sequence-based approaches to detect various forms of selection, along with a representative, non-exhaustive list of computer programs which can be used to infer phylogenies and perform tests of selection at the inter-specific and population level (Table 1).

Identifying selection on single loci

The dN/dS ratio test (also known as Ka/Ks, or ω) is one of the best-known approaches to infer signatures of selection in protein-coding genes from multiple-species alignments. The statistic is obtained as the ratio between the number of nonsynonymous substitutions per nonsynonymous site (dN, or Ka) and the number of synonymous substitutions per synonymous site (dS, or Ks). According to Kimura's theory, dS will largely exceed dN in the majority of protein-coding genes because, statistically, nonsynonymous substitutions will be mostly deleterious to protein function and will thus be eliminated by purifying selection; synonymous substitutions, on the other hand, will be mostly neutral, leading to a value of dN/dS < 1. This is frequently observed in large-scale alignments of several protein-encoding gene families at the macroevolutionary level. The opposite situation (dN/dS > 1) occurs instead when nonsynonymous substitutions per nonsynonymous site exceed synonymous substitutions per synonymous site, and is indicative of repeated amino acid changes which acted to favor novel protein structures and functions (positive selection). This type of selection, although extremely rare, especially when initial signatures of adaptation at the sequence level are then tested in ecological settings, is likely to act in coevolutionary processes, such as plant interactions with pollinators and predators.
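To make the counting logic behind the statistic concrete, the following is a minimal Python sketch of a Nei-Gojobori-style dN/dS calculation for two aligned, in-frame coding sequences. It is illustrative only: it ignores multiple substitution pathways within a codon, applies no multiple-hit correction, and the two input sequences are invented for the example; real analyses rely on maximum likelihood codon models as implemented in dedicated software.

```python
# Sketch of a Nei-Gojobori-style dN/dS for two aligned, in-frame sequences.
# Standard genetic code, bases ordered T, C, A, G at each codon position.
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODON_TABLE = {b1 + b2 + b3: AMINO[16 * i + 4 * j + k]
               for i, b1 in enumerate(BASES)
               for j, b2 in enumerate(BASES)
               for k, b3 in enumerate(BASES)}

def syn_sites(codon):
    """Synonymous sites in a codon: each position contributes the
    fraction of its three possible changes that preserve the amino acid."""
    aa, s = CODON_TABLE[codon], 0.0
    for pos in range(3):
        for alt in BASES:
            if alt != codon[pos]:
                mutant = codon[:pos] + alt + codon[pos + 1:]
                if CODON_TABLE[mutant] == aa:
                    s += 1 / 3
    return s

def dn_ds(seq1, seq2):
    n_sites = s_sites = n_diff = s_diff = 0.0
    for i in range(0, min(len(seq1), len(seq2)) - 2, 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if "*" in (CODON_TABLE[c1], CODON_TABLE[c2]):
            continue                      # skip stop codons
        s_avg = (syn_sites(c1) + syn_sites(c2)) / 2
        s_sites += s_avg                  # synonymous sites, averaged per pair
        n_sites += 3 - s_avg              # remaining sites are nonsynonymous
        diffs = [p for p in range(3) if c1[p] != c2[p]]
        if len(diffs) == 1:               # sketch: single-difference codons only
            p = diffs[0]
            mutant = c1[:p] + c2[p] + c1[p + 1:]
            if CODON_TABLE[mutant] == CODON_TABLE[c1]:
                s_diff += 1
            else:
                n_diff += 1
    # Proportions of differences per site; no distance correction applied.
    return (n_diff / n_sites) / (s_diff / s_sites) if s_diff else float("inf")

# One synonymous (AAA->AAG) and one nonsynonymous (TTT->TTA) difference:
print(round(dn_ds("ATGAAACGTTTT", "ATGAAGCGTTTA"), 2))  # ~0.18 for this toy pair
```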
Statistical evidence for positive selection is thus relatively common in secondary metabolism, and has been provided for glucosinolate [23,55,102,148], terpenoid [46] and pyrrolizidine alkaloid metabolism [127] and in the SABATH methyltransferase genes, a family involved in the synthesis of floral volatiles with functions in pollinator attraction and defense [13,14]. The results from these studies provide support to the patchwork model for the evolution of metabolic novelty: they show the recruitment, under a regime of positive selection, of enzymes originating from the duplication of genes already active in existing pathways. Initially, approximate methods were used to calculate the rates dN and dS, based simply on classification of the sites in sequence alignments and counting of the number of nonsynonymous and synonymous substitutions; these methods are considered largely inadequate today, as they did not take into account the transition/transversion and codon usage biases [161].

Table 1. A representative, if inexhaustive, list of computer programs for phylogenetics and population genomics.

Phylogenetic inference and evolutionary processes:
- ape: the core R package for evolution and phylogenetics; includes all main distance methods for phylogeny estimation (neighbor-joining, MP, ML, Bayesian methods) [200]
- BEAST2: estimates phylogenies using Bayesian methods and tests evolutionary hypotheses with molecular clock models [30]
- Datamonkey: web server for analyzing evolutionary signatures in sequence data; includes a wide range of tests for detecting recombination and selection at the level of genes, single sites or phylogenetic branches [270]
- HyPhy: comparative sequence analyses focusing on a likelihood-based approach for inference of selection; its recent version includes a tool to calculate relative evolutionary rates from protein and nucleotide data (LEISR) [205,238]
- MEGA (version X): a widely used software for phylogenetics with an intuitive graphical interface; includes all main methods for tree construction, calculation of evolutionary distances and some tests of selection [149]
- iMKT: website for performing various MK-derived tests for selection [

Population genomics (site-frequency spectrum, selective sweeps, measures of diversity, pool-sequencing):
- adegenet: R package for multivariate analyses of marker data, usually the entry point to other specific packages for population genomics (e.g., pegas) [123,124]
- Arlequin: widely used software for population genomics; includes all major methods for population diversity and tests of neutrality at the population level (SFS, LD, Tajima's D, F_ST, etc.) [67]
- CMS (Composite of Multiple Signals): sensitive approach for inference of selection based on the combination of different test statistics (haplotype frequency, linkage disequilibrium and population differentiation); generally more robust than the use of separate statistics and less dependent on demographic processes [91,92]
- DnaSP (version 6.12.03): analysis of polymorphisms from single or multiple loci; calculates measures of DNA sequence variation within and between populations; several neutrality tests implemented (e.g., the HKA test, Tajima's D and the McDonald-Kreitman test (MKT)) [217]
- GenAlEx 6.5: Excel add-in for basic analyses of population genetics; includes calculation of indices for population structure [202]
- Genepop: R package for general population genetic methods; includes exact tests for independence, measures of population differentiation and disequilibrium among pairs of loci [216]
- hierfstat: R package for estimation of population structure using F-statistics (tests for population differentiation) [85]
- iSAFE: a coalescent-based method for identification of the specific mutations favored by selection in a selective sweep [2]
- pegas: from the developers of ape, another R package for the analysis of population genetic data; includes calculation of nucleotide diversity (π), SFS, LD scans and F_ST for population differentiation; with adegenet and ape it constitutes a unique working environment for a wide range of phylogenetics and population genomics analyses [199]
- poolfstat: R package for the analysis of Pool-Seq data and estimation of F_ST (degree of differentiation between populations) [
- …: calculates all main population genetic parameters (e.g., number of segregating sites, nucleotide and haplotype diversity, LD-based statistics, neutrality tests) [260]

Later, starting from the work of Messier and Stewart, the method has been used to estimate dN/dS across each branch of a phylogenetic tree, identifying episodes of positive evolution along single lineages which were previously undetected by simple pairwise comparison of extant sequences. The innovative aspect brought about by the approach of Messier and Stewart was to reconstruct the sequences at all ancestral nodes of the phylogeny in order to test where, along the phylogeny, episodes of selection occurred [174]. In their case, given the recent divergence of the species under study (primates), maximum parsimony (MP) was a reliable criterion to reconstruct the ancestral states; today the dN/dS ratio test is based instead on maximum likelihood (ML) approaches which calculate, for each internal node, all possible alternative character states and ancestral sequences, assigning a weight to all ancient alleles according to their probability of occurrence [286,290,287]. ML approaches make it possible to extend sequence reconstruction to ancestors in the distant past [99].

The McDonald-Kreitman test (MKT) represents an extension of the dN/dS ratio test which takes into account the polymorphism within species (e.g., between different individuals of the same species, or different accessions) and compares it to the divergence between species for a protein-coding gene. Under neutrality, the null hypothesis is that the within-species ratio pN/pS (computed from polymorphisms) equals the between-species dN/dS ratio (which measures divergence). Positive evolution can be detected by a between-species ratio greater than the within-species ratio. Data in the MKT take the form of a 2 × 2 table which can be used to obtain statistical significance with a G-test for independence [169,63]. The McDonald-Kreitman test should be used only when comparing recent divergence (i.e., closely related genes), so that all alleles share the same evolutionary history (no chance of recombination); the test is also unable to distinguish positive selection from cases where slightly deleterious mutations might have been fixed due to population bottlenecks, a common outcome during speciation. For a detailed review of the potential and limits of the MKT, see [118].
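As a concrete illustration of the 2 × 2 contingency logic, below is a minimal Python sketch of the MKT with a G-test of independence, together with the derived neutrality index and, as one simple estimator, the proportion of adaptive substitutions α mentioned earlier, computed as 1 − NI. The counts in the example are invented.

```python
import math

def mcdonald_kreitman(dn, ds, pn, ps):
    """G-test of independence on the 2 x 2 MKT table:
                      nonsynonymous   synonymous
    divergence             dn             ds
    polymorphism           pn             ps
    Returns (G, neutrality index, alpha). With 1 degree of freedom,
    G > 3.84 rejects neutrality at the 5% level."""
    table = [[dn, ds], [pn, ps]]
    total = dn + ds + pn + ps
    g = 0.0
    for i in range(2):
        for j in range(2):
            observed = table[i][j]
            expected = sum(table[i]) * sum(row[j] for row in table) / total
            if observed > 0:
                g += 2 * observed * math.log(observed / expected)
    neutrality_index = (pn / ps) / (dn / ds)
    alpha = 1 - neutrality_index   # proportion of adaptive substitutions
    return g, neutrality_index, alpha

# Hypothetical counts with an excess of nonsynonymous divergence (NI < 1, alpha > 0)
g, ni, alpha = mcdonald_kreitman(dn=40, ds=20, pn=10, ps=30)
print(f"G = {g:.2f}, NI = {ni:.2f}, alpha = {alpha:.2f}")
```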
The methods summarised above may be used to detect signatures of adaptation in protein-coding genes, especially when comparing sequence differences between species; however, a large part of adaptive phenotypic variation also exists at the microevolutionary level (within species) and may arise as a consequence of changes in non-coding parts of the genome and, consequently, as part of variation in gene expression. In these cases, the approaches which may be used to detect selection shift from tests of neutrality on single loci to population genetics methods which detect, in general, regions of reduced or increased variability across whole genome sequences. These approaches thus identify regions in the genome that deviate from neutrality, on the basis of the assumption that high sequence conservation is suggestive of negative selection and may indicate the presence of a functional constraint [189,107,272]. We provide a brief overview of these population-genomics methods below. However, population genetics methods cannot provide unambiguous identification of the relevant/causal loci under selection, and need to be integrated with functional assays of molecular function and validation of the candidate polymorphisms in natural settings to identify a true adaptive allele [178].

6.2. Identifying selection at the population level

6.2.1. Linkage disequilibrium (LD)

A significant deviation from the neutral model can be detected as an extension of the level of linkage disequilibrium (LD) along defined regions in the genome [236]. While the selected allele increases in prevalence in the population, its polymorphism is in strong association (disequilibrium) with other neighboring variants; this association, which extends over a physical region of variable length (haplotype), is maintained by the lack of recombination within the region. Various genome-scan techniques can therefore measure the physical size of these haplotypes as a proxy for the extent of LD. An unusually large extension of LD in a particular region of the genome, with respect to other regions which are known, or suspected, to evolve neutrally, may indicate the presence of selection. Also in this case, there are several approaches which can be used to detect regions of extended LD in the genome. The extended haplotype homozygosity (EHH) was initially applied in the form of a genome scan to the human genome [220], but derived approaches, always based on measuring extended LD (in combination with measures of population subdivision, see below), have also recently been used in plants [27]. EHH is first based on the identification of "core haplotypes" across the genome (which correspond to putative selected loci), followed by the calculation of the LD decay along defined distances, propagating in both directions from the core haplotype. Clearly, as one travels away from the selected allele, haplotype homozygosity decreases in the population, as recombination may increase polymorphisms in the region and reduce the size of the homozygous haplotype. Large haplotype homozygosity, when correlated with high values of haplotype frequency in the population, indicates the presence of directional selection and deviation from neutrality.
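The decay calculation itself is simple to sketch. Below is a minimal Python illustration of EHH computed to one side of a core SNP over a set of phased haplotypes (0/1 alleles). The haplotypes are invented, and real scans, as in the genome-wide applications cited above, additionally use genetic map distances, compute both directions from the core, and compare allele classes.

```python
from collections import Counter

def ehh(haplotypes, core_idx, core_allele):
    """EHH decay to the right of a core SNP.
    haplotypes:  equal-length strings of '0'/'1' alleles (phased)
    core_idx:    column index of the core SNP
    core_allele: allele class ('0' or '1') whose carriers are examined
    Returns EHH at increasing distances (in SNPs) from the core: the
    probability that two randomly drawn carrier haplotypes are identical
    over the whole interval from the core up to that distance."""
    carriers = [h for h in haplotypes if h[core_idx] == core_allele]
    n = len(carriers)
    if n < 2:
        return []
    total_pairs = n * (n - 1) / 2
    values = []
    for end in range(core_idx + 1, len(haplotypes[0]) + 1):
        groups = Counter(h[core_idx:end] for h in carriers)
        identical_pairs = sum(c * (c - 1) / 2 for c in groups.values())
        values.append(identical_pairs / total_pairs)
    return values

haps = ["1010011", "1010010", "1110000", "1010011", "0011001"]
print(ehh(haps, core_idx=1, core_allele="0"))
# EHH starts at 1.0 at the core and decays as carrier haplotypes diverge
```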
LD tests are limited in their power to detect historical signatures of selection, however, as once the selected allele is fixed, the size of the haplotype can be rapidly reduced by recombination (the regions flanking the selected polymorphism, once fixation is reached, are under relaxed selection); they represent powerful approaches, in any case, for detecting recent or ongoing selection events [170,234].

Population subdivision

The second family of approaches to detect genomic regions under selection is based on the measure of population subdivision. Over time, in fact, under the influence of drift, demography and natural selection, populations of animals and plants may differentiate, both at the genotypic and phenotypic level, to form several distinct subpopulations. The magnitude of population differentiation can be measured by Wright's group of F-statistics [280]. Of these, the fixation index, F_ST, is one of the most commonly used statistics to describe the partitioning of genetic variation within and among populations. There is a well-developed framework of population genetics theory behind the mathematical definition of F_ST and its estimators [108]; here, suffice it to say that F_ST, in its simplified form, may be expressed as:

F_ST = σ²_between / (σ²_between + σ²_within)

where σ²_between is the total genetic variation in a defined locus between populations, while σ²_within is the total genetic variation in the same locus within the population [156]. Both measures of genetic variation are obtained from the variance of allele frequencies within and between populations. F_ST values vary from 0 to 1; a value of 0 indicates that allele frequencies do not differ among populations; a value of 1, on the other hand, indicates the extreme case in which one of the alleles has a frequency of 1 in one of the subpopulations and is thus absent from the others. Such a condition or, indeed, whenever F_ST approaches values close to 1, is indicative of a high level of genetic differentiation between populations. This condition may be reached by selective forces (in the case of local adaptation), although several demographic processes, other than selection, may also influence the value of F_ST [108]. Especially in the case of genome-wide scans, which mitigate the effect of demographic processes, high values of F_ST, in comparison to those obtained from a neutral locus, are suggestive of directional selection acting in one of the subpopulations. On the other hand, low values of F_ST may indicate stabilizing selection or directional selection on all subpopulations [19]. One of the earliest methods to test population subdivision was developed by Lewontin and Krakauer [158]; the method is based on the comparison of empirical F_ST values against those obtained under a simulated model of neutral evolution. Over the years, the approach has been refined and applied to large collections of SNPs collected across entire genomes [3], and further improved by taking into account demographic history through inclusion of the kinship matrix of the subpopulations [28]. The detection of F_ST outliers in genome-wide scans has been frequently applied in plants, from the use of a small number of microsatellites in sorghum (S. bicolor [41]) and sunflower (H. annuus [128]) to more recent studies using SNPs or whole-genome sequencing in Arabidopsis and its relatives [115,268] and wheat [42]. A further expansion of the approaches based on population subdivision lies in the comparison between Q_ST and F_ST [156].
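As a toy numerical illustration of the definition above, the following Python sketch computes a per-locus F_ST for a biallelic locus from subpopulation allele frequencies, dividing the between-population variance of allele frequencies by the total variance p̄(1 − p̄). The frequencies are invented, equal subpopulation sizes are assumed, and real analyses would use unbiased estimators such as those implemented in the packages listed in Table 1.

```python
def fst(subpop_freqs):
    """Simple per-locus F_ST for a biallelic locus.
    subpop_freqs: frequencies of one allele in each subpopulation
    (equal subpopulation sizes assumed). The between-population variance
    of allele frequencies is divided by the total variance p_bar*(1-p_bar)."""
    k = len(subpop_freqs)
    p_bar = sum(subpop_freqs) / k
    var_between = sum((p - p_bar) ** 2 for p in subpop_freqs) / k
    total_var = p_bar * (1 - p_bar)
    return var_between / total_var if total_var > 0 else 0.0

# Two subpopulations nearly fixed for alternative alleles -> strong differentiation
print(round(fst([0.95, 0.05]), 3))        # 0.81
# Nearly identical frequencies everywhere -> F_ST close to 0
print(round(fst([0.50, 0.52, 0.48]), 4))  # 0.0011
```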
Q_ST is the counterpart of F_ST, but it is based on the genetic variance in quantitative traits rather than on the measure of genetic variation at marker loci. If F_ST is measured on neutral marker loci, and Q_ST is based on quantitative traits with an additive genetic basis, then the comparison between F_ST and Q_ST assumes relevance with respect to the causes of trait divergence among subpopulations. If F_ST ≈ Q_ST, for example, then the divergence measured from the trait among subpopulations is comparable to that of a neutral locus, and could therefore have been the result of genetic drift; if instead Q_ST > F_ST, the between-population trait divergence is higher than that obtained from neutral loci, and this case is suggestive of directional selection acting on the trait. Q_ST-F_ST comparisons have been applied in plants, with a main focus on morphological, reproductive and stress-tolerance traits [173,33,215,213]. Despite the potential of this approach in detecting selection at the level of quantitative traits, however, very few studies have made use of large-scale molecular phenotypic data (e.g., from transcript/protein profiling or metabolomics) to estimate Q_ST; this is perhaps due to the assumptions of the method, which need to be verified on a case-by-case basis, requiring F_ST to be calculated from strictly neutral loci and Q_ST to represent variance from purely additive phenotypic traits; also, Q_ST-F_ST comparisons are grounded in population genetics, and it is therefore necessary to collect genotypic and phenotypic data from a large number of individuals. In a recent study, however, the divergence of metabolic traits (primary metabolites) and neutral marker loci was measured in a collection of three subpopulations representing the stages of durum wheat domestication. The approach made it possible to detect signatures of directional selection directly in molecular phenotypic traits (metabolites), marking the transition from wild accessions, to primary domesticates, to the subsequent diversification as a result of recent selective breeding [22].

Site-frequency spectrum (SFS)

One of the consequences of positive selection, at the population level, is the increase of the frequency of the selected allele, which can rapidly reach fixation (100% frequency). This process is accompanied by a related decrease of sequence diversity, due to hitchhiking effects, in the regions around the beneficial selected allele. These regions (haplotypes) of marked reduction of genetic diversity leave a distinctive hallmark in the genome (a selective sweep) and can be detected with a simple plot of the level of genetic diversity or, alternatively, by looking at the relative proportion of the mutations in the population. Either plot can be obtained for specific genomic regions (comparing, for example, a locus suspected to be under selection with another locus known to evolve neutrally) or through whole-genome scans with a sliding-window approach. Approaches based on the calculation of nucleotide diversity (π) were initially introduced by Tajima [246], but are still frequently used today in plant genome studies to identify sweep regions [29,38,113,162,295]. Tajima later extended his concept of nucleotide diversity into the formulation of a neutrality test, Tajima's D, which can be used to assess the significance of deviations in the SFS.
The test is based on the mathematical relation between two measures of genetic variation: 1) the number of polymorphic (or segregating) sites in n sequences (S); 2) the average number of nucleotide differences, over all possible pairwise combinations, in the same set of n sequences (k). Tajima demonstrated that the difference between these two measures reflects the effect of selection [247]. In fact, when a selective sweep emerges, new mutations may in any case accumulate in the proximity of the selected locus [32]. These mutations are initially rare, and influence the parameter S (which is independent of allele frequencies), driving it to larger values with respect to the parameter k (which instead depends on allele frequencies). Thus, large negative values of Tajima's D, which, in a simplified form, can be expressed as:

D = (k - S/a1) / sqrt(Var(k - S/a1)), with a1 = Σ_{i=1}^{n-1} 1/i

(where Var(·) denotes the variance of the difference estimated under neutrality), are indicative of an excess of rare alleles and may suggest the presence of a recent selective sweep. Statistical significance of D can be obtained by comparing its value to the confidence limits of a beta distribution [247]. As with other neutrality tests, Tajima's D can also be driven by several demographic processes (bottlenecks, migrations, population subdivisions, etc.) independently of natural selection; therefore, caution must be taken in the interpretation of the results, especially when rejecting the null hypothesis of a population at mutation-drift equilibrium.

Machine learning approaches in population genomics

Machine learning is a set of methods to infer functional relationships existing within the input data without making any a priori assumptions. In essence, machine learning approaches find a mathematical function, and build a predictive model, linking a set of multidimensional datasets (input) to the response variables of the system [9]. Supervised machine learning approaches "train" on empirical datasets (or datasets generated by numerical simulations) to predict the response of a specific output variable whose values are unknown [230]. These approaches were initially developed in computer science, but have also been applied to many areas of computational biology; we refer the readers to excellent reviews and recent applications [60,82,89,110,296], focusing here on some recent developments of machine/deep learning in evolutionary and population genetics. One of the major scientific challenges in population genetics is discriminating between marks of selection and demographic processes. This has proven to be extremely difficult: bottlenecks and selection, for example, leave very similar signatures in the genomes, and the two phenomena, to add even more complexity, often occur simultaneously in natural populations (e.g., when a population colonizes a new environment, it usually experiences a demographic bottleneck; at the same time, the selection pressures present in the new environment may lead to adaptation). The main argument adopted so far to disentangle these contributions (basically, that selection occurs in targeted regions, while demographic processes affect genomes globally) was found to be relatively inaccurate, given the large impact that natural selection has apparently had in shaping plant genomes [281,95,159,101,69].
Several approaches have been followed to address the confounding contribution of demography and selection: one of the most commonly used today is based on the combination of several statistics from various selection tests, with the objective of lessening the dependence of selection signatures on the demographic history of the populations. One of these approaches, called "composite of multiple signals" (CMS), integrates measures of LD (e.g., extended haplotype homozygosity, EHH) with population differentiation (F_ST) and shows a greater power in detecting selection under different demographic scenarios, also in cases where the single test statistics failed to identify targets of selection [91,92]. More recently, machine and deep learning approaches have contributed to the development of other tools which are generally more robust to the confounding effects that demography has on selection (e.g., S/HIC, see below), or that allow simultaneous estimation of demographic processes and selection [232]. In one of these approaches, S/HIC (Soft/Hard Inference through Classification, [228]), nine different statistics (based on nucleotide diversity, haplotype homozygosity, etc.) are calculated with a sliding-window approach to identify the location of soft and hard sweeps. Trained on proper datasets simulating various demographic events, S/HIC showed high specificity and sensitivity in detecting both types of sweeps. Other recent tools further address the limitations of composite approaches in detecting selection (for example, when one of the component statistics is undefined; see SWIFr [244]), while some other approaches make it possible to point specifically at the favored mutation within the selective sweep, without any prior knowledge of the underlying demography (iSAFE, [2]).

Whole genome sequencing of pools of individuals (Pool-Seq)

Research in population genetics requires the collection of a large amount of data about the polymorphisms existing among individuals; although the costs associated with DNA sequencing are constantly decreasing, estimating allele frequencies from single individuals at the population scale is still rarely feasible. Pool-Seq approaches were developed to overcome these limitations. Essentially, Pool-Seq implies that DNAs extracted from multiple individuals are pooled before preparing the library for sequencing. The allele frequencies thus obtained can be compared between multiple populations, sampled, for example, from different environments or geographical locations, to infer signatures of adaptation. The approach has been shown to be particularly accurate in the estimation of allele frequencies, and also cost-effective with respect to the sequencing of single individuals, provided a few requirements regarding the experimental settings are met (e.g., pool size, [224]). In the study of local adaptation, Pool-Seq has been used in Arabidopsis lyrata to identify the polymorphisms associated with the adaptation to serpentine soils [256], in teosinte, between lowland and highland populations [79], and also, on a smaller scale, to define clinal patterns and selection in a collection of Solanum chilense accessions (a wild tomato) from Chile and Peru [26]. In a recent study, Pool-Seq has been used to characterize the genomic differences driving ecotype differentiation of Mimulus guttatus across coastal and inland-adapted populations [86]. This study added further support to the importance of selection on chromosomal inversions in determining speciation [155].
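To illustrate the basic logic of a Pool-Seq comparison, the sketch below estimates allele frequencies from pooled read counts and flags SNPs with large frequency differences between two pools. The counts and the simple difference threshold are invented for the example; real analyses model pool size and sequencing noise explicitly and apply proper per-SNP tests (e.g., with the dedicated packages listed in Table 1).

```python
def pool_freqs(alt_counts, depths):
    """Allele frequency estimates from Pool-Seq read counts: the
    alt-allele frequency in a pool is approximated by the fraction
    of reads carrying the alt allele at each SNP."""
    return [alt / dep if dep > 0 else None
            for alt, dep in zip(alt_counts, depths)]

def outlier_snps(pool_a, pool_b, threshold=0.5):
    """Indices of SNPs whose estimated allele frequency differs between
    the two pools by more than `threshold` -- a crude stand-in for the
    per-SNP statistics (Fisher/CMH tests, F_ST) used in real scans."""
    return [i for i, (fa, fb) in enumerate(zip(pool_a, pool_b))
            if fa is not None and fb is not None and abs(fa - fb) > threshold]

# Hypothetical read counts at four SNPs in two pools of individuals
freq_a = pool_freqs(alt_counts=[5, 48, 20, 90], depths=[100, 100, 40, 100])
freq_b = pool_freqs(alt_counts=[8, 50, 38, 12], depths=[100, 100, 40, 100])
print(outlier_snps(freq_a, freq_b))  # [3]: the fourth SNP (0.90 vs 0.12) stands out
```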
In approaches based on Evolve & Resequence (E&R), Pool-Seq is used in combination with experimental evolution [129] to identify the differences in allele frequencies between the ancestral and the selected population: this makes it possible to identify the causal allele(s) driving the phenotypic change in response to the selective agent set by the experimenter [225]. E&R studies have afforded exceptional insights into the genetic basis of adaptation, but few have focused on plants (given the difficulties inherent to their longer generation times [133,21]); E&R studies have instead been popular within the Drosophila community [35,11,166]. Typically, allele frequencies between the base (ancestral) and selected populations are calculated across the whole genome with a sliding-window approach, and tests of selection are then performed on each window separately. We refer the readers to excellent surveys of the test statistics used in E&R approaches [225] and provide a list of software tools for the analysis of Pool-Seq data in Table 1.

Summary and outlook

In this review we have discussed the evolution of metabolism and the hypotheses made to explain the assembly of biosynthetic pathways, starting from the non-enzymatic and primordial metabolism of the earliest living forms. The explosion of chemodiversity of plant secondary metabolites represented a further step, where different evolutionary forces acted to expand the metabolic routes of primary metabolism, reaching the pathway structures we observe today. Perhaps the greatest research challenge in the study of this metabolic expansion is to understand which part of chemical diversity emerged as a result of selection, i.e., as adaptations to the natural environment, and which part, on the other hand, derived instead from non-selective processes (the demographic history of the populations, but also hitchhiking, pleiotropy and epistasis). Also, related to this, and since selection necessarily shaped the structure of metabolism as we observe it today, did selective pressures affect metabolic steps sequentially, as they emerged along evolutionary time, as in the Granick or retrograde hypotheses, or did selection act globally on whole novel pathways which could have emerged after whole genome duplications? These questions remain largely unanswered. With the relative ease with which we collect sequence data, and the various approaches for detecting selection, we have at hand today catalogues of molecular footprints of selection from a large number of plant species; this, however, should not lead us to infer molecular adaptation to be a pervasive phenomenon in plant genomes. We often assume, for example, that proteins are the result of a long history of functional optimization, but ancestral resurrection has largely shown that historical contingency and random chance have often driven the fixation of suboptimal forms [242]. Also, very few alleles showing molecular signatures of selection have been tested in an ecological background; and, especially in the context of secondary metabolism, many true adaptive alleles may easily go undetected by current methods to detect selection. In fact, metabolic traits generally show medium to high heritabilities and have a genetic architecture mostly based on many loci of small effect [44,7]: under these conditions, it is possible that adaptation could have proceeded through subtle changes in the frequencies of several unlinked alleles [208], without leaving a clear signature at the sequence level.
In plants, one approach to study polygenic adaptation would be to observe a concurrent shift in the frequencies of many unrelated alleles under some form of selective pressure. This has been done in Arabidopsis (thale cress, the model organism in plant biology) by measuring the differential survival of a collection of geographical accessions under extreme drought, a trait resulting from an underlying metabolic phenotype [68]. The adaptive alleles in this case did not show a reduction of haplotype diversity, and would have gone undetected by conventional approaches based on the detection of hard sweeps. The adaptive nature of metabolic diversity is a complex issue, and its study brings conceptual and empirical challenges which can be faced only through the integration of approaches from multiple disciplines. At the enzymatic level, ancestral resurrection may clarify the epistatic relationships among mutations and their consequences for selection; progress is also needed, however, on the computational side, to develop models which better take into account the confounding effects that pleiotropy, demography and polygenic adaptation have on the detection of selection. Also, when possible, the large number of polymorphisms "putatively" under selection detected from genome-wide scans should also be tested in natural environments to provide measures of selection in an ecological setting. Combining all these approaches holds great promise for understanding the consequences of genetic variation on fitness and adaptation.
Angiogenesis Inhibitors for Colorectal Cancer. A Review of the Clinical Data

Simple Summary

Targeting angiogenesis, the formation of new blood vessels, is an integral part of many cancer treatments, including colorectal cancer. The overall clinical benefit is well documented but modest. It has been an ongoing task for the last decade to isolate the patient and tumor characteristics instrumental in identifying the subgroups that truly benefit; so far with limited success. The introduction of immunotherapy has opened a new era for anti-angiogenic treatment, as these two therapeutic strategies seem to work in synergy. This review will highlight the clinical achievements of anti-angiogenic treatment of colorectal cancer since 2004 and elaborate on the perspectives of combining it with immunotherapy.

Abstract

Since the late 1990s, therapy for metastatic colorectal cancer (mCRC) has changed considerably, and the combination of doublet or triplet chemotherapy and a targeted agent is now routinely used. The targeting of angiogenesis, the development of new blood vessels, represents a key element in the overall treatment strategy. Since the approval in 2004 of the first anti-angiogenetic drug, multiple agents have been approved and others are currently under investigation. We present an overview of the recent literature on approved systemic treatment of mCRC, with a focus on anti-angiogenic drugs and current treatment approaches, and elaborate on the future role of angiogenesis in colorectal cancer as seen from a clinical perspective. The treatment of mCRC, in general, has changed from "one strategy fits all" to a more personalized approach. This is, however, not entirely the case for anti-angiogenetic treatments, partly due to a lack of validated biomarkers. The anti-angiogenetic standard treatment at present primarily includes monoclonal antibodies. The therapeutic field of angiogenesis, however, has received increased interest after the introduction of newer combinations. These approaches will likely change the current treatment strategy, once again, to the overall benefit of patients.

Colorectal Cancer

Worldwide, 1.8 million new patients are diagnosed each year with colorectal cancer (CRC). Approximately half of the patients will be diagnosed with metastatic CRC (mCRC), either at the time of diagnosis (synchronous) or due to later recurrence (metachronous) [1]. Almost half the number of new cases, 0.86 million, die each year.

Treatment Overview

For several years, the armamentarium of standard treatment for patients with mCRC has included combination chemotherapy with 5-FU, oxaliplatin or irinotecan, plus two classes of targeted therapies [2,3]. These therapies inhibit the signaling pathways related to the epidermal growth factor (EGF) and vascular endothelial growth factor (VEGF) receptors. The monoclonal antibodies cetuximab and panitumumab, targeting the EGF receptor (EGFR), and bevacizumab, targeting the VEGF-A ligand, are the most commonly used in the field of mCRC. It is well known that the benefit of anti-EGFR therapy is restricted to the around 40% of patients who are RAS and BRAF wild type (wt) [2,3]. The common treatment approach has changed from single-agent chemotherapy to a doublet, or occasionally triplet, chemotherapy regimen, often in combination with bevacizumab, cetuximab or panitumumab based on the RAS mutational status.
Typically, anti-EGFR therapy improves the major efficacy parameters (response rate, PFS and OS) when added to doublet regimens like 5-FU + irinotecan (FOLFIRI) or 5-FU + oxaliplatin (FOLFOX), but results were more equivocal when bevacizumab was added to modern infusional doublet regimens (Table 1). Nevertheless, the optimal combination of chemotherapy and targeted therapy for first line therapy has been debated for many years. Three randomized trials have directly compared the efficacy of EGFR inhibitors and bevacizumab in patients with RAS wt mCRC, but with a very heterogeneous picture and no well-founded conclusion. Prior studies have shown that left-sided mCRC is dependent on EGFR-related pathways, and when investigators from the major cooperative groups pooled data from patients with left-sided tumors, the efficacy data became much more homogeneous, showing a clear advantage of EGFR inhibitors, with higher overall response rates (ORR) and prolonged overall survival (OS) in patients with left-sided primaries [4,5]. However, there is currently no solid evidence indicating that RAS mutations should render anti-VEGF-A therapy obsolete in the setting of mCRC. A number of randomized studies, pioneered by Dr. Falcone's group, have evaluated triplet chemotherapy (FOLFOXIRI) in patients unselected by RAS mutational status. The FOLFOXIRI regimen does have a significant toxicity profile, thus requiring patients to exhibit a good performance status. Consequently, patients included in the FOLFOXIRI trials are more often younger or in performance status 0 than is usual in clinical trials. Two Italian phase III trials [15,16] showed that triplet chemotherapy was more effective than a doublet (either FOLFIRI or FOLFOX) in terms of ORR, PFS and OS. In the TRIBE study, bevacizumab was added to both the triplet and the doublet combinations, and thus we can only conclude that triplet chemotherapy can be safely combined with bevacizumab; whether bevacizumab adds to the efficacy of a triplet cannot be concluded from these studies [16]. In the randomized phase II OLIVIA trial [17], in which mCRC patients with liver-limited disease were included, triplet chemotherapy with bevacizumab produced a very impressive ORR of 81%. Consistently, a high response rate of at least 60% was observed in all studies that evaluated FOLFOXIRI with or without bevacizumab [3].

Angiogenesis

Basically, the term vasculogenesis describes the process of the initial endothelial differentiation of angioblasts during embryogenesis [18], whereas angiogenesis refers to the formation of new blood vessels from existing endothelial cells [19]. The regulation of the angiogenic process is a complex balance between stimulating and inhibiting stimuli. The VEGF system consists of six ligands and three receptors (VEGFR). The VEGF-A ligand is the most important. It is secreted by multiple cell types, including the malignant cells, and stimulates endothelial cell (EC) differentiation, migration, growth and survival [20]. The receptor primarily responsible for transmitting this VEGF-A-mediated signal in the EC is VEGFR-2, whereas the role of VEGFR-1 is probably more regulatory and inhibitory [21]. The autonomous growth pattern that characterizes malignant neoplasms contributes to the fact that malignant tumors are often hypoxic to varying degrees.
This hypoxia leads to increased transcription of a large number of genes, including VEGF-A, with the common purpose of ensuring a more adequate oxygenation of the tumor [22]. The so-called hypoxia-inducible factors (HIF) are the cause of this gene regulation. The three members are formed from the oxygen-sensitive subunits (HIF-1α, HIF-2α and HIF-3α) and the non-oxygen-sensitive HIF-1β subunit [23]. During hypoxia, HIF-1α (the best described) stabilizes and translocates to the cell nucleus to form the activated HIF-1 complex together with HIF-1β.

Anti-angiogenetics

The therapeutics targeting angiogenesis are divided into two main groups: the monoclonal antibodies (mAbs) and the small molecules, tyrosine kinase inhibitors (TKIs). The mAbs exert their action by either directly binding VEGF-A or blocking the extracellular binding domain of the corresponding receptor. Bevacizumab (Avastin®) binds to all isoforms of VEGF-A, and aflibercept (Zaltrap®), a soluble decoy receptor, binds VEGF, thereby preventing activation of their endogenous receptors, whereas ramucirumab (Cyramza®) binds with high affinity to the VEGFR-2 extracellular domain, which prevents binding of VEGF ligands and thereby inhibits receptor activation. The TKIs exert their anti-angiogenetic effect after internalization in the cell by binding to, and inhibiting, the kinase domain of the various receptors involved in the angiogenetic process (tyrosine kinase, serine/threonine kinase or dual protein kinase inhibitors).

Current Challenge

Inhibition of tumor-associated angiogenesis has been utilized for the treatment of patients with mCRC for more than 15 years [2,3]. Since the initial approval of bevacizumab in 2004, several other agents have been investigated within phase III trials, leading to several additional approvals. The obtained survival benefit from these drugs is often limited due to multiple resistance mechanisms, and all attempts to individualize treatment have so far been unsuccessful. This class of therapeutics is consequently administered to a broad and unselected patient population, constituting a socioeconomic challenge to the community. The adverse events related to these treatments, although often manageable, may sometimes be severe and even fatal. This scenario calls for the identification of predictive biomarkers or new treatment combinations with a more favorable advantage/disadvantage ratio if this field of therapy is to evolve even further.

Existing Treatment

Presently, targeting angiogenesis for the treatment of mCRC may be applied in all treatment lines. Bevacizumab is used in combination with chemotherapy in both first and later lines of therapy; ramucirumab and aflibercept, together with chemotherapy, are used in the second line setting; and finally, regorafenib, given as monotherapy, is used for patients with chemo-refractory disease. In brief, the addition of an antibody targeting VEGF or VEGFR (such as bevacizumab, aflibercept or ramucirumab) to second line treatment significantly improved OS by a median of 1.4 to 2 months in all second line trials, independent of whether a VEGF inhibitor had been used before. In total, four second line trials have reported a gain in OS by the addition of an anti-angiogenic compound, irrespective of the various first-line regimens [2,3].

Bevacizumab in First Line

The most widely used vascular inhibitor is bevacizumab. Bevacizumab as monotherapy has no or only very modest effect in mCRC and is most often used in combination with chemotherapy.
The first randomized trials demonstrated that bevacizumab improves the efficacy of chemotherapy as measured by three key efficacy parameters (response rate (RR), progression-free survival (PFS) and OS). In combination with IFL, a bolus regimen consisting of irinotecan and 5-fluorouracil (5FU), OS was extended by almost five months to 20 months [7]. As there were only a few additional side effects at the same time, bevacizumab with combination chemotherapy quickly became standard in most parts of the world, and since then bevacizumab has been on the top list of best-selling drugs, presently with annual sales of about 7 billion USD [24]. However, due to inferior efficacy, IFL was subsequently substituted by modern combination regimens (e.g., CapOx (capecitabine and oxaliplatin), FOLFOX (5FU and oxaliplatin) or FOLFIRI (5FU and irinotecan)). When bevacizumab was tested in combination with CapOx or FOLFOX, the gain in efficacy was lower than expected [13]. The PFS was prolonged by a modest 1.4 months, and surprisingly, no significant improvement in the confirmed RR (38% vs. 38%) or OS (19.9 vs. 21.3 months) was seen. In Table 1, an overview of the principal clinical trials addressing bevacizumab in the first line treatment of mCRC is provided [6-14]. Briefly, trials testing monotherapy or bolus combination chemotherapy regimens with bevacizumab showed improvement in all efficacy parameters, but this is somewhat in contrast to the NO16966 and the ITACa trials, where response rates and OS were not improved. (Table footnotes: A, 30% had received bevacizumab as part of first line therapy; the benefits of aflibercept with FOLFIRI were observed in subgroups of patients with or without prior bevacizumab treatment. B, BEBYP was interrupted prematurely after accrual of 184/262 planned patients.) Thus, there is no doubt that bevacizumab has clinically significant activity, but the challenge in modern oncology is to choose the right treatment for the right patients [2,3]. Unfortunately, to date, there are no approved or generally accepted biomarkers for predicting benefit from bevacizumab. This is in contrast to one of the other very frequently used treatment principles in mCRC, namely the targeting of EGFR with monoclonal antibodies (panitumumab or cetuximab), for which the mutation status of RAS in the MAPK pathway has been proven of predictive value [2,3]. The value of RAS mutational status in angiogenesis inhibition in CRC, on the other hand, has been more unclear. For the past 15 years, bevacizumab has been claimed to exert its effect independently of RAS status, but this has never been studied rigorously, and some subgroup studies suggest that its effectiveness may be limited in patients with RAS-mutated tumors [3]. Due to the knowledge on the advantages of both anti-EGFR and anti-angiogenic therapy, and supported by promising preclinical data, it was obvious to test if multi-blockade with the combination of anti-angiogenic and anti-EGFR therapy could improve survival even further. Initial clinical data supported the hypothesis, as a randomized phase II study [35] showed that the combination of irinotecan, cetuximab and bevacizumab resulted in a higher RR and longer PFS than cetuximab and bevacizumab in patients with pre-treated mCRC. However, despite the above-mentioned promising results on double blockade in preclinical models and from early clinical data, two large phase III studies, CAIRO2 and PACCE, failed to confirm these results (Table 3) [36-40].
Both trials showed that the addition of bevacizumab to an anti-EGFR antibody and combination chemotherapy in chemo-naïve patients was associated with an inferior outcome compared to an anti-EGFR antibody and combination chemotherapy alone [36,37]. Since tumors cannot grow to more than 2-3 mm³ without blood supply, it was also obvious to investigate the effect of bevacizumab in the adjuvant setting. Unfortunately, in two large randomized trials, no gain was measured in terms of OS in patients with CRC, and in one study, there were fewer patients alive after ten years than in the control group [41,42]. It is not entirely clear why anti-VEGF-A therapy was ineffective in the adjuvant setting. This may be related to the fact that adjuvant treatment often targets microscopic clusters of cells, or even single cells in the circulation, situations where the tumor-related blood vessels may not be dependent on VEGF-A to the same extent as in the metastatic setting. Bevacizumab is in general well tolerated; however, vascular-related side effects have been seen, the most serious being gastrointestinal perforation, hemorrhage and arterial thrombosis (in less than 1% of patients), while proteinuria and hypertension are more common. In a meta-analysis by Zhu et al., grade 3 hypertension was reported in approximately 9% of patients treated with low-dose bevacizumab and in 16% of patients receiving doses of 10 mg/kg or above [43].

Ramucirumab

In combination with FOLFIRI, second line therapy with ramucirumab did not increase RR but significantly prolonged PFS from 4.5 to 5.7 months and OS from 11.7 to 13.3 months following first-line treatment with fluoropyrimidine, oxaliplatin and bevacizumab [31].

Aflibercept

The anti-angiogenic fusion protein aflibercept also produces a survival advantage when added to FOLFIRI in patients progressing on a prior oxaliplatin-containing regimen [28]. The RR was increased from 11 to 20%, PFS was prolonged from 4.7 to 6.9 months and OS from 12.1 to 13.5 months, and the benefit was observed independent of prior bevacizumab.

Tyrosine Kinase Inhibitors

The other main group of anti-angiogenic drugs, the TKIs, have also been tested in the mCRC population. Because they target multiple signalling pathways beyond the VEGFR, full-dose TKIs may be difficult to tolerate, often require dose modifications, and are most often administered as monotherapy. In the HORIZON II first line trial, cediranib prolonged PFS (secondary endpoint) significantly from 8.3 to 8.6 months but without any prolongation of OS [46]. In the CONFIRM II second line trial, vatalanib prolonged PFS (secondary endpoint) significantly from 4.2 to 5.6 months but without any prolongation of OS [51]. Apart from these two trials showing a modest prolongation of PFS, no other trials have shown that TKIs add to the efficacy of chemotherapy by extending PFS, and no randomized trial has shown an OS benefit in either first or second line. In contrast to these depressing results of TKIs with chemotherapy, TKI monotherapy compared to placebo has demonstrated a significant benefit in several efficacy parameters, and for regorafenib the advantage was proven in several comparable trials. In the largest trial, CORRECT, with 800 chemo-refractory patients with mCRC, PFS was significantly prolonged from 1.7 to 1.9 months (HR 0.49) and OS from 5.0 to 6.4 months (HR 0.77).
Thus, despite a large number of well-conducted clinical trials, regorafenib still remains the only TKI occasionally used in the clinical practice of mCRC (Tables 4 and 5). The most frequent adverse reactions (ARs) in patients receiving regorafenib are fatigue, rash or hand-foot skin reaction, diarrhea, and anorexia, and dose reductions are often required to handle regorafenib-related adverse reactions. Several trials have shown that a lower starting dose with gradual dose-escalation is an alternative, safe and better tolerated approach for the administration of regorafenib, and this strategy should be preferred in clinical practice [57,64].

Economy

Anti-angiogenetic therapy is used in an unselected manner, in line with the standard chemotherapy approved for mCRC, due to the lack of validated predictive biomarkers. One consequence is a very broad application, and most patients with mCRC are exposed, at least once, to this class of therapy. As the overall benefit from this addition is often rather limited, and the treatment is very expensive, this has naturally triggered speculation as to the cost-effectiveness of this approach. This theme is addressed in many papers. The conclusions may differ slightly depending on prices in the individual countries and on differences in the willingness-to-pay value for a given outcome, but overall, the addition of anti-angiogenetic treatment (often bevacizumab in these calculations) to palliative chemotherapy in mCRC is not cost-effective under the current circumstances. This was also the conclusion in a publication from 2017 by Goldstein et al., summarizing that the addition of bevacizumab to first-line chemotherapy in mCRC failed to be cost-effective in five different countries [65]. The highest incremental cost-effectiveness ratio was demonstrated for the U.S., with 571,000 USD per quality-adjusted life year, more than three times higher than the willingness-to-pay threshold. Similar conclusions have been obtained for bevacizumab used as maintenance therapy in combination with capecitabine, and as a regular second line treatment as well. With a possible expansion of the indication for anti-angiogenetic treatment, considering the potential benefit of combining these therapies with immunotherapy, these economic considerations will once again be highly relevant, as both cost and benefit will likely change.

Biosimilars

Presently, biosimilars of bevacizumab are under investigation in different clinical trials, including randomized studies comparing original bevacizumab with chemotherapy against biosimilars with chemotherapy, and the results are expected in the near future. Patent and regulatory exclusivities will protect Avastin® until at least June 2020, but maybe until January 2022. Two biosimilars of bevacizumab (Mvasi® by Amgen and Zirabev® by Pfizer) have been approved for use in the European Union, in 2018 and 2019, respectively, but the marketing of these two biosimilars has been delayed until the relevant regulatory exclusivities have expired [66].

Resistance Mechanisms

As with other cancer treatments, resistance to drugs targeting the angiogenetic process will lead to disease progression. Resistance is a completely natural consequence of the incredibly complex regulation controlling the angiogenic process. It is difficult to isolate resistance to anti-angiogenic therapy in certain scenarios, such as CRC, where these drugs are used in combination with chemotherapy.
At present, we have only limited insight into anti-angiogenic resistance mechanisms, but they can generally be divided into mechanisms that are due to pre-existing conditions and mechanisms that have been acquired as a consequence of the treatment. Examples of the former are: heterogeneity in the tumor-associated blood vessels (some are immature and vulnerable to anti-angiogenic treatment, while others are not); organ- and tumor-specific differences in the regulation of angiogenesis; bioavailability of the drug, also known from resistance to chemotherapy (drug transport, tumor architecture, vascular delivery); genetic differences between individuals that may explain differentiated responses to treatment; differences in which factors primarily drive angiogenesis in the primary tumor and metastases; and the specific mono-targeting of ECs, leaving supportive structures such as the basement membrane and pericytes intact for rapid regrowth [67,68]. Acquired mechanisms include: upregulation of antiapoptotic and alternative proangiogenic factors as a consequence of single-target inhibition, as seen with anti-VEGF-A; selection of hypoxia-resistant tumor cells; alternative vascularization and vessel co-option; increased tumor aggressiveness; and epigenetic upregulation of antiapoptotic factors in the target cells [69-71].

Introduction

The overall rationale for targeting the process of angiogenesis is the consequence of decades (centuries) of basic and clinical research. Angiogenesis is involved in several physiological processes, including wound healing and the menstrual cycle, but angiogenesis may as well be involved in pathophysiological conditions characterized by either insufficient or excessive blood vessel formation, as seen in malignant neoplasms. Angiogenesis plays a critical role in the continued growth of cancer because solid tumors need a blood supply if they are to grow beyond a few millimeters. Tumor-associated blood vessels are structurally and functionally abnormal and characterized by an irregular chaotic network of leaky blood vessels, resulting in elevated intratumoral pressure [72]. In 1971, Judah Folkman described tumors' dependence on newly formed blood vessels, awakening the interest in angiogenesis as a process for pharmaceutical targeting [73,74]. The factor primarily responsible for stimulating the formation of new blood vessels (VEGF-A) was identified in 1989 by Napoleone Ferrara [75].

Basic Tumor-associated Angiogenesis

Tumor growth beyond 2 mm³ is the main initiating event in tumor-associated angiogenesis. At this stage, simple diffusion is no longer sufficient, and the cancer cells with the longest distance to the existing blood vessels become hypoxic. This triggers the secretion of VEGF-A from the hypoxic cells, which eventually interacts with ECs (through VEGFR-2) in nearby blood vessels. This shifts the affected ECs from a dormant state to an active proliferative state, classically described as the angiogenic switch [76]. The initiated cascade of events comprises the degradation of blood vessel integrity, cleavage of the extracellular matrix, vasodilation, increased permeability, migration of ECs and the formation of so-called sprouts. Once connected to adjacent sprouts, the entire process reverses in order to stabilize and mature the newly formed blood vessel [77]. It is during this maturation process that VEGF-A acts as a crucial survival factor [78].
Normalization The structural and functional abnormalities of tumor-associated blood vessels, as previously discussed, provide several advantages for a growing tumor: they ensure nutrients and oxygen, provide the basic escape route for potential metastatic cancer cells, and impair the extravasation of larger molecules such as chemotherapy, conferring some degree of treatment resistance. Reversing these processes through anti-angiogenetic treatment is summarized in the normalization theory presented by Rakesh Jain [72] and later proven clinically. It has recently been argued that the optimal window for normalization is rather narrow and that over-pruning may impair delivery of chemotherapy just as much as insufficient targeting of the tumor vasculature. Anti-angiogenesis and Immunotherapy The limited benefit from anti-angiogenetic therapies in mCRC has paved the way for new combinations in order to sustain tumor control. The currently most promising approach is the combination with immunotherapy. Tumor-Microenvironment Tumor-associated blood vessels have a unique architecture and physiologic properties, as previously addressed. These properties lead to the generation of an immune-suppressive tumor microenvironment [79]. Malignant tumors have the ability to evade immune surveillance through changes in the recruitment, trafficking, and infiltration of effector T cells and their final recognition and killing of cancer cells. This is the consequence of several processes related to tumor-initiated angiogenesis. The initial up-regulation of VEGF-A inhibits the maturation of dendritic cells (DCs) [80], a crucial and initial step in the process of immunity, and leads to an upregulation of programmed death-ligand 1 (PD-L1) on DCs, further suppressing T cell function [81]. Leaky blood vessels increase the intra-tumoral pressure and, together with the down-regulation of cell-adhesion molecules, complicate extravasation of tumor-infiltrating lymphocytes (TILs). Hypoxia itself leads to upregulation of PD-L1 and to the simultaneous up-regulation of VEGF-A, which furthermore impairs the function of the antigen-presenting cells. Finally, the balance of TILs shifts towards increased infiltration of regulatory T cells (Treg) at the expense of cytotoxic effector cells (CD8+) due to regulatory changes in the ECs [82]. This is primarily due to an increased expression of FAS ligand on tumor-associated ECs, which prevents effector T cells from crossing the EC barrier by inducing apoptosis. Treg are resistant to FAS ligand, creating a relative overrepresentation of Treg in the tumor compared to the effector cells. Treg furthermore inhibit the antigen-presenting cells in the tumor, enhancing the immune-suppressive environment. Tumor vasculature normalization thus creates an immune-friendly microenvironment and may turn a "cold" tumor into a "warm" one. Added to this is the recent discovery of how stimulated immune cells themselves lead to vascular normalization, partly through CD8+ T-cells and interferon gamma (IFN-γ), creating a beneficial immune-vasculature crosstalk and a rationale for combining these two classes of therapeutics with a potential synergistic benefit [83]. The optimal dosing and timing of these treatment combinations will likely differ between individual tumor types and represent an essential key to unlocking their full potential.
Anti-angiogenetic Therapy and Immunotherapy in Colorectal Cancer This potential synergism between anti-angiogenic and immune checkpoint inhibitor drugs has resulted in numerous clinical trials testing the combination of PD-1/PD-L1 antibodies with anti-VEGF drugs.
By now, a number of randomized trials have shown remarkable results, which have led to approvals by the FDA and/or EMA in renal cell carcinoma (axitinib plus pembrolizumab, and axitinib plus avelumab), endometrial carcinoma (pembrolizumab plus lenvatinib), non-squamous NSCLC (bevacizumab and atezolizumab), and hepatocellular carcinoma (bevacizumab plus atezolizumab). This broad clinical activity suggests that a combination strategy may also be of benefit in colorectal cancer [84]. The first clinical data in CRC were presented by Bendell et al. at the ASCO-GI conference in 2015 [85]. Among 13 patients with treatment-refractory disease, they demonstrated one objective tumor response by adding bevacizumab to MPDL3280A (atezolizumab). The following year, at the AACR 107th annual meeting, Wallin et al. presented the results from 23 patients with mCRC treated with first-line FOLFOX, bevacizumab, and atezolizumab [86]. They demonstrated promising efficacy data, with a median PFS of 14.1 months, and parallel translational research argued for immune-related activity of this combination. The papers corresponding to these two initial abstracts have so far not been published. In 2017, Yoshida et al. published the results of their pilot study (the COMVI study) in Anticancer Research [87]. Six patients with previously untreated mCRC were included in a prospective single-arm study. All patients received, as standard therapy, oxaliplatin (130 mg/m²) on day one, capecitabine (1000 mg/m²) twice daily on days 1-14, and bevacizumab (7.5 mg/kg) on day one. To this backbone, they added cultured αβ T-lymphocytes (>5 × 10⁹) combined with interleukin-2 and anti-CD3 on day 18. Two patients achieved a complete response, three a partial response, and one demonstrated stable disease as the best outcome. The median progression-free and overall survival were 567 and 966 days, respectively. Adverse events were mild to moderate. Although a small pilot study, these published results demonstrated, for the first time in patients with CRC, that combining chemotherapy and anti-angiogenesis with immune-modulating therapies was feasible, and the efficacy data were promising. Two additional abstracts were presented in the following years, but they did not quite meet the initial expectations. In 2018, at the ESMO congress, Grothey et al. presented a late-breaking abstract from cohort 2 of the MODUL trial [88]. After induction therapy with the FOLFOX + bevacizumab regimen, 445 patients were randomized to maintenance fluoropyrimidine and bevacizumab ± atezolizumab. The updated analyses revealed no difference in median PFS and OS between the two strategies. Mettu et al. presented an abstract at the poster discussion session at the same congress the following year [89]. In this study, 133 patients with treatment-resistant mCRC were randomized to receive capecitabine and bevacizumab plus placebo or atezolizumab as last-line treatment. The study reached its primary endpoint, demonstrating a significant improvement of PFS by the addition of atezolizumab, although the numerical difference was only one month. The corresponding manuscripts have not been published either. The first publication based on a commercially available immunotherapeutic within this specific field in CRC appeared earlier this year, in April 2020, and revealed the results of the dose-expansion phase Ib trial REGONIVO (EPOC1603) [90]. Fukuoka et al.
included patients with gastric cancer or CRC, 25 of each, who had progressed on a minimum of two previous lines of palliative treatment. All the patients with CRC had previously received anti-angiogenetic treatment; the cancer in one patient had deficient mismatch repair (dMMR), while the remaining 24 were all proficient (microsatellite stable), and six had RAS mutations. Patients were treated with nivolumab 3 mg/kg every two weeks and regorafenib once daily, days 1-21, in a four-week cycle. During the dose-finding part of the study, regorafenib was reduced from the initial 160 mg to the recommended 80 mg, at which no patients experienced dose-limiting toxicity. Among the patients with CRC, nine (36%) achieved an objective tumor response, and median PFS was 7.9 months. A trend towards better outcome for the patients with lung metastases, compared to liver metastases, was presented, which could be due to a more immunosuppressive environment in the liver compared to the lung. This study provided real clinical evidence of synergy between the investigated drugs. Neither of the two would be expected to provide meaningful benefit as single agents in this group (except in the one patient with a dMMR tumor), but combined, one out of three patients responded. These results, together with similar findings in other cancer types, have paved the way for multiple trials assessing the clinical benefit of combining immunotherapy and anti-angiogenetic treatment in CRC. An example is the recently published study protocol AtezoTRIBE by Antoniotti et al. [91]. In this randomized phase II trial, untreated patients with unresectable mCRC will receive FOLFOXIRI and bevacizumab ± atezolizumab for four months, followed by maintenance 5-fluorouracil, leucovorin, and bevacizumab ± atezolizumab. The study is estimated to be completed in April 2021. A supplementary search at clinicaltrials.gov for trials in CRC combining immunotherapy with an anti-angiogenetic drug revealed more than 20 ongoing clinical trials (Table 6). With a specific focus on CRC only, this level of clinical activity underlines the potential impact to be gained by combining these two classes of therapeutics. Of investigator's choice; atezolizumab (anti-PD-L1); avelumab (anti-PD-L1); bevacizumab (anti-vascular endothelial growth factor A); BNC105 (a vascular disrupting agent); cabozantinib (tyrosine kinase inhibitor of c-MET and vascular endothelial growth factor receptor 2); cediranib (tyrosine kinase inhibitor of vascular endothelial growth factor receptors 1-3); durvalumab (anti-PD-L1); ipilimumab (anti-CTLA-4); lenvatinib (tyrosine kinase inhibitor of vascular endothelial growth factor receptors 1-3); nivolumab (anti-PD-1); pembrolizumab (anti-PD-1); regorafenib (tyrosine kinase inhibitor of TIE2 and vascular endothelial growth factor receptor 2); trebananib (anti-angiopoietin-2). Conclusions For many years, the ability to suppress angiogenesis has been exploited in the field of oncology. The efficacy is well documented and the indications are constantly growing, although the impact is often rather limited, as we argue in this review. Bevacizumab is widely used for patients with CRC, while TKIs have primarily been used in other solid tumors. Recent evidence suggests that inhibition of angiogenesis may be clinically meaningful through several lines of treatment, but the lack of biomarkers limits an individualized approach.
The tumor microenvironment is anti-immune, and the combination of anti-angiogenic drugs with immunotherapy has demonstrated impressive results and may alter therapy in the years to come. A significant difference, in terms of standard clinical efficacy parameters, from adding anti-angiogenetic treatments to the existing chemotherapy regimens has been documented for patients with mCRC. To what extent these differences represent a clinically meaningful benefit is less clear. The process of angiogenesis contributes significantly to tumor growth in a fraction of the patients, but unfortunately, this fraction is not identifiable through a single molecular characteristic. This lack of patient selection currently represents the biggest challenge in the field of anti-angiogenetic therapy. Despite this long-lasting challenge, targeting angiogenesis may constitute one of the most important avenues in modern oncology, even after 15 years on the road. Several scenarios contribute to this optimism. Research within the field of biomarker discovery has never been more intense than now. The introduction of the consensus molecular subtypes, combined with tools such as improved imaging, digital pathology, and in vitro testing of tumor biopsies, may help narrow down the field of candidates for whom anti-angiogenetic therapy is crucial for tumor control. The introduction of new classes of therapy with anti-angiogenetic properties may provide additional benefit. Drugs targeting additional angiogenetic factors exemplify this; one example is the monoclonal antibody parsatuzumab (anti-EGFL7), whose clinical testing in phase III was halted due to lack of biomarkers [92,93]. Another example may be the modulation of angiomiRs (microRNAs involved in the regulation of angiogenesis), which represent a rather new avenue. This can take the form of a mimicking function that compensates for downregulated miRNAs with a tumor-suppressor function, or of anti-miRNAs that target elevated oncogenic miRNAs. The first trial results were presented three years ago, with promising results, but this class of therapeutics still faces challenges with specific delivery. The combination with more natural substances, such as vitamin derivatives, represents another scenario where the true benefit of these drugs may be revealed even further. Specifically, several pre-clinical studies [94][95][96] have argued for synergy by combining anti-angiogenetics with vitamin E derivatives (tocotrienol), and clinical documentation has been provided in other cancer types [97]. Results are currently awaited within the field of mCRC. The biggest potential, however, lies in the combination with immunotherapies, as highlighted in a previous section of this review; the current results and the number of ongoing clinical trials serve as documentation for this standpoint. The combination of these two classes of therapeutics may represent the key to unlocking immunotherapy for the large group of patients with microsatellite-stable tumors who currently do not derive benefit from immunotherapy-only strategies. The near future will tell if this forecast holds true. Author Contributions: All three authors reviewed the literature, drafted the manuscript, and approved the final version. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
f-minimal Lagrangian Submanifolds in Kähler Manifolds with Real Holomorphy Potentials The aim of this paper is to study variational properties of f-minimal Lagrangian submanifolds in Kähler manifolds with real holomorphy potentials. Examples of submanifolds of this kind include soliton solutions for Lagrangian mean curvature flow (LMCF). We derive a second variation formula for f-minimal Lagrangians as a generalization of Chen and Oh's formula for minimal Lagrangians. As a corollary, we obtain stability of expanding and translating solitons for LMCF. We also define calibrated submanifolds with respect to the f-volume in gradient steady Kähler-Ricci solitons as generalizations of special Lagrangians and of translating solitons for LMCF, and show that these submanifolds are necessarily noncompact. As a special case, we study the exact deformation vector fields on Lagrangian translators. Finally, we discuss some generalizations and related problems. Motivation: Stability of Minimal Lagrangians The stability properties of minimal Lagrangian submanifolds in Kähler manifolds were studied by Chen [6] and Oh [27]. In particular, they derived a beautiful second variation formula, as follows. Let (X, J, ω) be a Kähler manifold with metric g, let F : L → X be a minimal Lagrangian submanifold, and let {F_t} be a smooth family of compactly supported normal deformations of F with F_0 = F and (d/dt)|_{t=0} F_t = ξ ∈ Γ_c(NL). Since L is Lagrangian, ξ is naturally identified with a 1-form α_ξ := F*ι_ξω ∈ Γ(T*L). Then
\[
(\delta^2 V)_{F(L)}(\xi) \;=\; \int_L \Big( |d\alpha_\xi|^2 + |d^*_g\alpha_\xi|^2 - \overline{\mathrm{Ric}}(\xi,\xi) \Big)\, dV_g, \tag{1}
\]
where g is the induced metric on L and \(\overline{\mathrm{Ric}}\) is the Ricci curvature of the ambient metric. A minimal Lagrangian F : L → X is called stable if (δ²V)_{F(L)}(ξ) ≥ 0 for all ξ ∈ Γ_c(NL), Lagrangian stable if (δ²V)_{F(L)}(ξ) ≥ 0 for all ξ ∈ Γ_c(NL) with α_ξ closed, and Hamiltonian stable if (δ²V)_{F(L)}(ξ) ≥ 0 for all ξ ∈ Γ_c(NL) with α_ξ exact. From (1) one can deduce that (i) if Ric < 0, then any minimal Lagrangian in X is strictly stable; (ii) if (X, J, ω) is positive Kähler-Einstein, that is, Ric = c·g for some c > 0, then a minimal Lagrangian in X is Hamiltonian stable if and only if λ₁(Δ_g) ≥ c; and (iii) if (X, J, ω, Ω) is a Calabi-Yau manifold equipped with a Ricci-flat Kähler metric, then any minimal Lagrangian submanifold in X is stable, and Jacobi fields on L are given by solutions of the harmonic 1-form equation dα = d*_g α = 0, α ∈ Γ(T*L). Fact (iii) can also be obtained from a Calibrated Geometry point of view. It is known that any minimal Lagrangian submanifold in a Calabi-Yau manifold (X, J, ω, Ω) is special Lagrangian, that is, F : L → X is calibrated by Re(e^{−iθ₀}Ω), or equivalently,
\[
F^*\big(\mathrm{Im}(e^{-i\theta_0}\Omega)\big) \;=\; 0 \tag{2}
\]
on L for some phase θ₀ ∈ S¹. This guarantees that if L is compact, F : L → X is not only minimal but volume-minimizing in its homology class. McLean [23] shows that the linearization of equation (2) is exactly the harmonic 1-form equation; thus any Jacobi field on L also induces an infinitesimal special Lagrangian deformation. Moreover, the special Lagrangian deformations are unobstructed and the moduli space of special Lagrangians is a smooth manifold of dimension b¹(L). f-stability of f-minimal Lagrangians and LMCF Solitons In this paper, we aim to generalize the above story to Lagrangian submanifolds which are "minimal" with respect to certain weighted volume functionals. Consider a smooth function f : X → R and the corresponding weighted volume form e^{−f} dV_g on a Kähler manifold (X, J, ω).
An analogue of the Ricci curvature Ric on the metric measure space (X, g, e^{−f}dV_g) is the symmetric 2-tensor Ric_f = Ric + Hess f, called the Bakry-Émery Ricci tensor. Since g is Kähler, Ric is Hermitian with respect to J, so it is natural to require Hess f to be Hermitian as well. This is equivalent to requiring f to be a real holomorphy potential, that is, ∇^{1,0}f is a holomorphic vector field on X. Define the f-volume functional on the space of p-submanifolds by
\[
V_f(F) \;=\; \int_L e^{-\frac{p}{2m}F^*f}\, dV_g.
\]
Notice that e^{−(p/2m)F*f} dV_g is the volume form of the induced conformal metric F*(e^{−f/m}g) on L. A p-submanifold F : L → X is a critical point of V_f if and only if the generalized mean curvature vector vanishes, H + (p/2m)(∇f)^⊥ = 0, where H is the mean curvature vector of F : L → X. Such a submanifold is called an f-minimal submanifold. Under the above settings, we prove the following second variation formula for f-minimal Lagrangian submanifolds: Theorem 1.1. Assume that (X, J, ω, f) is a Kähler manifold with a real holomorphy potential, and F : L → X is an f-minimal Lagrangian submanifold. Then for any compactly supported normal variational vector field ξ on L,
\[
(\delta^2 V_f)_{F(L)}(\xi) \;=\; \int_L \Big( |d\alpha_\xi|^2 + |d^*_f\alpha_\xi|^2 - \mathrm{Ric}_f(\xi,\xi) \Big)\, e^{-\frac{1}{2}F^*f}\, dV_g, \tag{3}
\]
where α_ξ := F*ι_ξω is the 1-form on L associated to ξ, and d*_f is the adjoint of d in the weighted space L²(Λ*T*L, e^{−½F*f}dV_g). We say that (X, J, ω, f) is a gradient Kähler-Ricci soliton if Ric + Hess f = c·g for some constant c ∈ R. When f is constant, this equation reduces to the Kähler-Einstein equation. In this sense, gradient KR solitons are generalizations of Kähler-Einstein manifolds. For c = 0, the soliton is called steady; for c > 0, it is called shrinking; and for c < 0, it is called expanding. We then obtain a corollary of Theorem 1.1 similar to that obtained from Chen and Oh's formula (1): every f-minimal Lagrangian in a steady or expanding gradient KR soliton is f-stable, and an f-minimal Lagrangian in a shrinking gradient KR soliton is Hamiltonian f-stable if and only if λ₁(Δ_f) ≥ c, where Δ_f is the Witten Laplacian on L associated to g and F*f. Our formula (3) can be applied to study the stability of soliton solutions for Lagrangian mean curvature flow (LMCF). In fact, if X = C^m with the standard Euclidean metric g₀ and f(z) := ±|z|²/2, then (C^m, i, g₀, f) is a shrinking/expanding gradient KR soliton and the f-minimal Lagrangians are shrinking/expanding solitons for mean curvature flow, respectively; and if f(z) := ⟨z, T⟩ for some fixed T ∈ C^m, then (C^m, i, g₀, f) becomes a gradient steady KR soliton and the f-minimal Lagrangians are translating solitons (see Examples 1 and 2). By Theorem 1.1, we see that: Corollary 1.2. Every expanding soliton and every translating soliton for Lagrangian mean curvature flow is f-stable. The stability of soliton solutions to mean curvature flow under certain weighted volume functionals was first studied by Colding and Minicozzi for shrinking solitons in the hypersurface case, and generalized to higher codimension by Andrews-Li-Wei [1], Arezzo-Sun [2], and Lee-Lue [16]; note, however, that their functional (the "entropy") is different from the f-volume functional. For the stability of translating solitons, the second variation formula for translating hypersurfaces under the f-volume was obtained by Xin in [35], Shahriyari [28] studied the stability of graphical translating surfaces in R³, and Yang [38] and Sun [30] studied Lagrangian translating solitons. In particular, Yang [38] proved that every Lagrangian translating soliton is Hamiltonian f-stable, and Sun [30] showed that they are actually Lagrangian f-stable.
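As a quick consistency check (this remark is mine and assumes the displayed forms of (1) and (3) above, with d*_f as defined in Section 3), Theorem 1.1 recovers Chen and Oh's formula when the potential is trivial. If f ≡ f₀ is constant, then
\[
d^*_f = d^*_g + \tfrac{1}{2}\iota_{\nabla f} = d^*_g, \qquad
\mathrm{Ric}_f = \mathrm{Ric} + \mathrm{Hess}\, f = \mathrm{Ric}, \qquad
e^{-\frac{1}{2}F^*f}\, dV_g = e^{-f_0/2}\, dV_g,
\]
so (3) equals e^{−f₀/2} times the right-hand side of (1); in particular all three weighted stability notions reduce to the classical ones of Chen and Oh.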
f-special Lagrangians and Translating Solitons Next, we focus on the case c = 0, so we let (X, J, ω, f) be a gradient steady KR soliton. In [4], Bryant shows that there is a holomorphic volume form Ω_f such that (X, J, ω, Ω_f) becomes an almost Calabi-Yau m-fold. The m-form Re Ω_f is a calibration on X with respect to the conformal metric e^{−f/m}g. A key observation is that this fact can be rephrased as saying that Re Ω_f is a calibration with respect to the f-volume e^{−f}dV_g on X, and hence any submanifold calibrated by Re Ω_f minimizes the f-volume. We call such submanifolds f-special Lagrangians (f-SLags) and view them not only as generalizations of special Lagrangians in Calabi-Yau manifolds, but also as generalizations of Lagrangian translating solitons in C^m, since they evolve under LMCF by "translation" along the negative gradient vector field −(1/2)∇f. It turns out that every f-minimal Lagrangian in a gradient steady KR soliton (X, J, ω, f) can be viewed as an f-SLag, and hence the f-stability also follows from this point of view. The f-SLag deformations can be characterized by the solutions of the f-harmonic 1-form equation
\[
d\alpha \;=\; d^*_f\alpha \;=\; 0. \tag{4}
\]
Thus if L is compact, the moduli space of f-SLags is a smooth manifold of dimension b¹(L); see also the Hodge-theoretic remark at the end of this section. But unfortunately we have a nonexistence result: there is no compact f-minimal Lagrangian in a gradient expanding or steady KR soliton (X, J, ω, f). Therefore, to study the deformation theory of f-SLags, one needs to impose suitable asymptotic conditions. As an experiment, we study the case when F : L → C^m is an exact Lagrangian translating soliton and assume that the deformation is exact with weighted L² potential. We show that such a deformation must be trivial on L; that is, there is no nonzero weighted L² f-harmonic function on L. To study the deformation theory of Lagrangian translating solitons further, one needs to impose more complicated asymptotic conditions and study the Fredholm theory of Δ_f in the corresponding weighted spaces. On the other hand, the properties of f-harmonic functions on noncompact f-minimal submanifolds might be useful for describing the topology at infinity; see [11], [12] for some results in this direction in the hypersurface case. This paper is organized as follows. In Section 2 we introduce Kähler manifolds with real holomorphy potentials and f-minimal Lagrangian submanifolds. In Section 3 we prove the second variation formula for the f-volume and the stability of solitons for LMCF. We study f-calibrated submanifolds and prove a noncompactness result in Section 4. In the final section, some generalizations and related problems are discussed. Kähler Manifolds with Real Holomorphy Potentials In the following, (X, J) will be a smooth, connected, complex manifold with dim_R X = 2m, and ω will be a Kähler form with Kähler metric g. The Levi-Civita connection of g will be denoted by ∇, and the corresponding quantities with respect to ∇, such as the Hessian and the curvature, will be denoted by notations with an overline. We will assume that there exists a function f : X → R such that Hess f(JX, JY) = Hess f(X, Y) for all vector fields X, Y, where Hess is the Hessian of f with respect to g. In fact, it is not hard to see that the following conditions are equivalent. Proposition 2.1. Let f : X → R be a smooth function. The following are equivalent: (i) Hess f(JX, JY) = Hess f(X, Y) for all vector fields X, Y on X; (ii) ∇^{1,0}f is a holomorphic vector field on X. Such a function f is called a real holomorphy potential on (X, J, ω). Some properties of Kähler manifolds admitting real holomorphy potentials can be found in Munteanu-Wang [24], [25].
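To spell out the deformation statement above, here is a short Hodge-theoretic remark in the weighted setting (a standard argument for Witten Laplacians; the display is my sketch rather than a numbered formula of the paper). On a compact L,
\[
\Delta_f := d\, d^*_f + d^*_f\, d, \qquad
(\Delta_f \alpha, \alpha)_f \;=\; \|d\alpha\|_f^2 + \|d^*_f \alpha\|_f^2,
\]
so α solves (4) if and only if Δ_f α = 0. Since Δ_f differs from the Hodge Laplacian by lower-order terms, it is elliptic, and the weighted Hodge decomposition identifies the kernel of Δ_f on 1-forms with H¹_{dR}(L; R); this is the source of the dimension count b¹(L) quoted above.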
Typical examples of manifolds of this kind are gradient Kähler-Ricci solitons:
\[
\mathrm{Ric} + \mathrm{Hess}\, f \;=\; c\, g \tag{5}
\]
for some c ∈ R, where Ric is the Ricci curvature of g. Since by the Kähler condition we have Ric(JX, JY) = Ric(X, Y), f must satisfy Hess f(JX, JY) = Hess f(X, Y); thus f is a real holomorphy potential. The quadruple (X, J, ω, f) is called a gradient Kähler-Ricci soliton (KR soliton for short). The gradient vector field ∇f generates a soliton solution to the Kähler-Ricci flow (KRF) (d/dt)ω(t) = −ρ(ω(t)), where ρ(ω(t)) is the Ricci form of ω(t), in the following way. Define
\[
\omega(t) \;=\; \sigma(t)\,\varphi_t^*\,\omega,
\]
where σ(t) = 1 − ct and ϕ_t is the flow on X generated by (1/(2σ(t)))∇f. Then it is straightforward to verify that ω(t) satisfies the KRF with ω(0) = ω as long as σ(t) > 0. One can classify the gradient KR solitons into three classes in terms of the sign of the constant c ∈ R: (i) When c < 0, (X, J, ω, f) is called a gradient expanding soliton. The KRF g(t) evolves the metric g by homothetically expanding the length scale, since σ′(t) = −c > 0. For example, take (X, J) = (C^m, i) with the Euclidean metric ω₀ and f(z) := −|z|²/2; then c = −1. The resulting expanding soliton (C^m, i, ω₀, f) is called the expanding Gaussian soliton. (ii) When c > 0, (X, J, ω, f) is called a gradient shrinking soliton; the KRF homothetically shrinks the length scale, and (C^m, i, ω₀, |z|²/2), with c = 1, is the shrinking Gaussian soliton. (iii) When c = 0, (X, J, ω, f) is called a gradient steady soliton. The KRF g(t) evolves the metric g by holomorphic reparametrizations ϕ_t. For example, take (X, J) = (C^m, i) with ω₀ and f(z) := ⟨z, T⟩ for a fixed T ∈ C^m; then Hess f = 0 and c = 0. Note that if f is constant, condition (5) reduces to the Kähler-Einstein condition. Hence gradient KR solitons can be viewed as generalizations of Kähler-Einstein manifolds. f-minimal Lagrangian Submanifolds Given a Kähler manifold with holomorphy potential (X, J, ω, f), let F : L → X be an immersed, oriented, connected submanifold. The induced metric will be denoted by g := F*g, and the corresponding quantities, such as the Levi-Civita connection and the curvature of g, will be denoted by notations without an overline. Define the f-volume functional on the space of p-submanifolds by
\[
V_f(F) \;=\; \int_L e^{-\frac{p}{2m}F^*f}\, dV_g;
\]
then the first variation formula of the f-volume is given by
\[
\frac{d}{dt}\,V_f(F_t)\Big|_{t=0} \;=\; -\int_L \Big\langle H + \tfrac{p}{2m}(\nabla f)^\perp,\; \xi \Big\rangle\, e^{-\frac{p}{2m}F^*f}\, dV_g, \tag{6}
\]
where H is the mean curvature vector of F : L → X, and (·)^⊥ is the projection to the normal bundle of L. Hence f-minimal submanifolds can be viewed as generalizations of minimal submanifolds. In the following, we will consider only Lagrangian submanifolds. Recall that an m-submanifold F : L^m → X in a symplectic manifold (X^{2m}, ω) is called Lagrangian if F*ω = 0. If F : L → X is Lagrangian, any compatible almost complex structure J on X gives rise to an isomorphism between the normal bundle NL and the tangent bundle TL of L. By composing with the induced metric g, we obtain an isomorphism between NL and T*L. We will use the same notation as in [27] to denote this isomorphism. In a Kähler manifold, Dazord [8] shows that the mean curvature vector H of a Lagrangian submanifold F : L → X satisfies d(ω(H)) = F*ρ, where ρ = Ric(J·, ·) is the Ricci form. Thus if (X, J, ω) is Kähler-Einstein, then ω(H) is closed, so it induces an infinitesimal Lagrangian deformation. Furthermore, Smoczyk [29] shows that the Lagrangian condition is preserved by the mean curvature flow (d/dt)F_t = H(F_t) whenever (X, J, ω) is Kähler-Einstein. Therefore, it is reasonable to consider Lagrangian mean curvature flow (LMCF) in Kähler-Einstein manifolds and soliton solutions for LMCF. The next example explains the meaning of LMCF solitons and shows that they can be viewed as f-minimal Lagrangian submanifolds in C^m. (i) Define f(z) := |z|²/2; then f is a real holomorphy potential and ∇f(z) = z. Then any f-minimal Lagrangian submanifold F : L → C^m satisfies
\[
H + \tfrac{1}{2} F^\perp \;=\; 0. \tag{8}
\]
Lagrangian submanifolds in C^m satisfying (8) are called shrinking solitons for LMCF.
Indeed, the homothetically shrinking family about the origin, {F_t = √(1 − t) F}, satisfies LMCF with F_0 = F. (ii) Define f(z) := −|z|²/2; then any f-minimal Lagrangian submanifold F : L → C^m satisfies
\[
H - \tfrac{1}{2} F^\perp \;=\; 0. \tag{9}
\]
Lagrangian submanifolds in C^m satisfying (9) are called expanding solitons for LMCF. Indeed, the homothetically expanding family about the origin, {F_t = √(1 + t) F}, satisfies LMCF with F_0 = F. (iii) Define f(z) := ⟨z, T⟩ for a fixed T ∈ C^m; then any f-minimal Lagrangian submanifold F : L → C^m satisfies
\[
H + \tfrac{1}{2} T^\perp \;=\; 0. \tag{10}
\]
Lagrangian submanifolds in C^m satisfying (10) are called translating solitons for LMCF. Indeed, the family moving by translation in the T-direction, {F_t = F − tT}_{t∈R}, satisfies LMCF with F_0 = F. Second Variation Formula and Stability of f-minimal Lagrangian Submanifolds First we introduce the differential operators that will be used in the following sections. Let (X, J, ω, f) be a Kähler manifold with a real holomorphy potential and let F : L → X be a Lagrangian submanifold with induced metric g. To simplify the notation, we will continue to use f to denote the restriction of the ambient function f to L. Consider the space of weighted L² differential forms L²(Λ*T*L, e^{−f/2}dV_g) on L with inner product
\[
(\alpha, \beta)_f \;=\; \int_L \langle \alpha, \beta \rangle\, e^{-f/2}\, dV_g.
\]
Then the formal adjoint of d with respect to (·, ·)_f is given by d*_f := d*_g + (1/2)ι_{∇f}. Define
\[
\Delta_f := d\, d^*_f + d^*_f\, d;
\]
then Δ_f is a non-negative self-adjoint operator with respect to (·, ·)_f. This operator is usually called the Witten Laplacian associated to f. We are now ready to prove the second variation formula for V_f (Theorem 3.1, stated as Theorem 1.1 in the introduction). Proof. Differentiate (6) again and use the f-minimal condition H + (1/2)(∇f)^⊥ = 0. By the same computation as in the second variation formula for minimal submanifolds, one obtains a curvature term ⟨R(ξ), ξ⟩ = Σᵢ ⟨R(eᵢ, ξ)eᵢ, ξ⟩ for any orthonormal basis {eᵢ} on L, and a second fundamental form term Ã = AᵗA, with A denoting the second fundamental form. We also compute ⟨∇_ξ∇f, ξ⟩ = Hess f(ξ, ξ), where in the last equality we use the fact that ⟨[ξ, ∇f], ξ⟩ = 0. Combining these three terms, then applying the Gauss formula and f-minimality for an orthonormal basis {eᵢ} on L, the curvature and second fundamental form terms can be rewritten in terms of the covariant Laplacian Δ acting on Γ(T*L). By Lemma 3.2 (ii) and the Weitzenböck formula, these combine into the Hodge Laplacian Δ_h = dd*_g + d*_g d. Now the f-Laplacian acting on 1-forms on L is given by Δ_f = dd*_f + d*_f d, so expressing the Lie derivative by covariant derivatives (using Lemma 3.2 again in the third line) and combining everything together, we finally obtain (3). Notice that, by the assumption on f, Hess f is Hermitian with respect to J, which is what allows the Hessian term to be rewritten in terms of α_ξ. By the same proof as in [27], Theorem 3.6 and Theorem 4.4, we have: if (X, J, ω, f) is a gradient KR soliton with Ric + Hess f = c·g and c ≤ 0 (steady or expanding), then every f-minimal Lagrangian in X is f-stable; if c > 0 (shrinking), then an f-minimal Lagrangian is Hamiltonian f-stable if and only if λ₁(Δ_f) ≥ c. Notice that if we take f to be a constant, this corollary reduces to Oh's original results. From Example 2 and Theorem 3.1, we obtain the f-stability of LMCF solitons: (i) every Lagrangian expanding soliton is f-stable; (ii) every Lagrangian translating soliton is f-stable. Remark 1. It is known that shrinking solitons for MCF are f-unstable, so one has to consider stability with respect to the "entropy" defined by Colding-Minicozzi [7], called F-stability. See [18] for some F-stability criteria for closed Lagrangian shrinking solitons. Calibrated Submanifolds with respect to the f-volume 4.1 f-special Lagrangian Submanifolds Recall that Harvey and Lawson [10] show that if (X, J, ω, Ω) is a Calabi-Yau m-fold, then for any Lagrangian submanifold F : L → X we have F*Ω = e^{iθ}dV_g for some θ : L → R/2πZ, called the Lagrangian angle. The mean curvature vector satisfies H = J∇θ; thus if F : L → X is minimal, then θ = θ₀ is a constant. Moreover, in this case F : L → X is calibrated by Re(e^{−iθ₀}Ω) and hence is actually volume-minimizing in its homology class. We now generalize the above theory to Lagrangian submanifolds in gradient steady KR solitons, and give an alternative description of the f-stability of f-minimal Lagrangians.
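Before the weighted definitions are introduced below, it may help to recall in one display why a calibrated submanifold minimizes volume in its homology class (the classical Harvey-Lawson argument; this summary is mine). For a calibration φ and a compact calibrated L with [L] = [L'],
\[
\mathrm{Vol}(L) \;=\; \int_L \varphi \;=\; \int_{L'} \varphi \;\le\; \mathrm{Vol}(L'),
\]
where the first equality holds because φ restricts to the volume form of L, the second is Stokes' theorem applied to dφ = 0, and the inequality is the pointwise calibration bound. In the weighted setting of the next definition, replacing Vol by V_f and the pointwise bound by α|_P ≤ e^{−(p/2m)f} vol_P yields V_f(L) = ∫_L α = ∫_{L'} α ≤ V_f(L') by the very same chain.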
Given a gradient steady KR soliton (X, J, ω, f), Robert Bryant [4] shows that there exists a nonvanishing holomorphic volume form, denoted by Ω_f, normalized so that, up to an explicit constant, e^{−f} Ω_f ∧ Ω̄_f equals ω^m/m! (equation (12)). In other words, (X, J, ω, Ω_f) is almost Calabi-Yau in the sense of Joyce (see [14], Def. 8.4.3). Define g̃ := e^{−f/m}g; then for any θ₀ ∈ R/2πZ, Re(e^{−iθ₀}Ω_f) is a calibration with respect to the conformal metric g̃. We rephrase this from the viewpoint of the f-volume. Definition. (i) A p-form α on X is called an f-calibration if dα = 0 and α|_P ≤ e^{−(p/2m)f} vol_P for any p-dimensional oriented subspace P ⊂ T_xX, for all x ∈ X. (ii) A p-submanifold F : L → X is said to be f-calibrated by an f-calibration α if F*α = e^{−(p/2m)F*f} dV_g, where dV_g is the induced volume on L. It is not hard to see that any f-calibrated submanifold is f-minimal and that any compact f-calibrated submanifold minimizes the f-volume in its homology class. One can show that Re(e^{−iθ₀}Ω_f) is an f-calibration and that the f-calibrated submanifolds are f-minimal Lagrangian submanifolds. Conversely, after choosing an orientation, any f-minimal Lagrangian submanifold in a gradient steady KR soliton is f-calibrated by Re(e^{−iθ₀}Ω_f) for some θ₀ ∈ R/2πZ. If F is Lagrangian, then by the same method as in [10] one can show that F*Ω_f = e^{iθ} e^{−F*f/2} dV_g for some θ : L → R/2πZ. We still call θ the Lagrangian angle. It turns out that F : L → X is an f-SLag with phase θ₀ if and only if F : L → X is Lagrangian with constant Lagrangian angle θ₀. Remark 2. In Joyce's terminology (see [9], Def. 8.4.4), f-SLags in our sense are still called special Lagrangians. We put an f here to emphasize the role of the real holomorphy potential and the relation to f-minimal submanifolds. We now give a family of examples of f-SLags in C^m. Example 3 (Lagrangian Translating Solitons). Consider a Lagrangian translating soliton F : L → C^m with Lagrangian angle θ. Then, as in [26], F satisfies the translator equation with θ = θ₀ for some constant θ₀ ∈ R. We shall show that Lagrangian translating solitons are f-calibrated with phase θ₀. Let Ω := dz₁ ∧ ··· ∧ dz_m, and let Ω_f := e^{−w/2}Ω, where w is the holomorphic function on C^m with Re w = f. Then Ω_f is a holomorphic volume form on C^m and satisfies (12). Hence (C^m, i, ω₀, Ω_f) is almost Calabi-Yau. By the Lagrangian condition we then have F*Ω_f = e^{iθ} e^{−F*f/2} dV_g; therefore F is f-calibrated by Re(e^{−iθ₀}Ω_f). Lemma 4.3. An infinitesimal deformation ξ ∈ Γ(NL) preserves the f-SLag condition if and only if α_ξ satisfies (14), the f-harmonic 1-form equation dα_ξ = d*_f α_ξ = 0. Proof. Without loss of generality, we may assume F has phase 0. Let {F_t} be a family of immersions satisfying F_0 = F and (d/dt)|_{t=0} F_t = ξ ∈ Γ(NL). Then ξ preserves the f-SLag condition if and only if dα_ξ = d*_f α_ξ = 0. Notice that (14) also appears in the second variation formula for the f-volume (Theorem 3.1) as the Jacobi field equation for f-minimal Lagrangians. From Lemma 4.3, for compact f-SLags the deformation theory is the same as for special Lagrangians: the moduli space of f-SLag deformations of a compact f-SLag is a smooth manifold of dimension b¹(L). Compactness, however, never occurs, as shown by the following theorem. Theorem. Let (X, J, ω, f) be a gradient steady or expanding KR soliton and let F : L → X be an f-minimal Lagrangian submanifold. Then L must be noncompact. Proof. Let X, Y be tangent vectors of L. Then, by f-minimality,
\[
\mathrm{Hess}\, f(X, Y) \;=\; \mathrm{Hess}(f|_L)(X, Y) + 2\,\langle H, A(X, Y)\rangle. \tag{15}
\]
We first deal with the steady case. Since Ric + Hess f = 0, let {e₁, ..., e_m} be an orthonormal basis of T_pL for some p ∈ L. Then, L being Lagrangian, {e₁, ..., e_m, Je₁, ..., Je_m} is an orthonormal basis of T_pX; hence we have Σᵢ Ric(eᵢ, eᵢ) = (1/2)R. Combining these equations with the trace of (15), we obtain Δf = −(1/2)R − 2|H|² on L. Now, by Cao-Hamilton [5], the quantity R + |∇f|² is constant on X (see also [4] for a different proof). Therefore f satisfies Δ_f f = −c/2 on L, where c := R + |∇f|². The result then follows from the maximum principle. Next we prove the expanding case. We may assume Ric + Hess f = −g. For any vectors X, Y tangent to L we have Hess f(X, Y) = −Ric(X, Y) − g(X, Y). Taking the tangential trace on both sides and using (15), we get Δf = −(1/2)R − m − 2|H|² on L. On any gradient expanding soliton we know that (see [22]), after adding a suitable constant to f, R + |∇f|² + 2f = 0.
Hence f satisfies Δ_f f = f − m on L. But from [39], Corollary 2.4, the scalar curvature is bounded from below, R ≥ −2m, so f ≤ m on X. Since L is compact, f|_L attains its minimum on L; at the minimum Δ_f f ≥ 0 forces f ≥ m there, while f ≤ m everywhere, so f|_L ≡ m is constant on L. Hence R|_L = −2m, which means the minimum of R is attained on L. By [39], Corollary 2.4, g is Einstein, a contradiction. Infinitesimal Deformations of Lagrangian Translating Solitons We consider the special case that F : L → C^m is a Lagrangian translating soliton. Let (x₁, ..., x_m, y₁, ..., y_m) be standard coordinates in R^{2m} ≃ C^m. The Liouville form is defined by λ := Σᵢ (xᵢ dyᵢ − yᵢ dxᵢ), and F is called exact if F*λ = dβ for some β ∈ C^∞(L). The exact deformations (deformations that preserve exactness) are induced by exact 1-forms on L; that is, ξ ∈ Γ_c(NL) is an exact deformation if and only if ω(ξ) is exact (see [20], Lemma 5.4). We will restrict our attention to the study of exact deformations of an exact Lagrangian translating soliton F : L → C^m. In this case, (14) reduces to the f-Laplace equation Δ_f u = 0. Proposition 4.6. Let F : L → C^m be an exact Lagrangian translating soliton. If u ∈ C^∞(L) satisfies Δ_f u = 0 and ∫_L u² e^{−f/2} dV_g < ∞, then u = 0. Proof. Suppose Δ_f u = 0. Let w := u²; then Δ_f w = 2|∇u|². Fix p ∈ L and consider a family of cut-off functions {φ_R}_{R>0} satisfying φ_R ≡ 1 on B_R(p), supp φ_R ⊂ B_{2R}(p), and |∇φ_R| ≤ C/R. Multiplying Δ_f w = 2|∇u|² by φ_R² and integrating by parts against e^{−f/2} dV_g, Young's inequality with ε = 1/2 absorbs the cross term and bounds ∫ φ_R² |∇u|² e^{−f/2} dV_g by (C/R²) ∫_{B_{2R}(p)} u² e^{−f/2} dV_g. Thus, letting R → ∞, by finiteness of ∫_L u² e^{−f/2} dV_g we obtain ∫_L |∇u|² e^{−f/2} dV_g = 0; hence u is constant. To show u = 0, it is enough to show that L has infinite weighted volume. First notice that we have the identities Δf = −2|H|² and |H|² + (1/4)|∇^T f|² = 1/4 (with the normalization |T| = 1). From these we deduce that λ₁(Δ_f) ≥ 1/4 (see, for example, Proposition 22.2 of [19]). Then by a simple argument in [34], Corollary 4.2, we conclude that L has infinite weighted volume. From [26], Proposition 2.2, we have Δ_f θ = 0 on Lagrangian translating solitons; thus θ is f-harmonic. This corresponds to the fact that the exact deformation vector field induced by θ is just the mean curvature H = J∇θ, which is a translation in C^m. Therefore there is no nontrivial exact deformation of an exact Lagrangian translating soliton whose potential u has finite weighted L² distance to the Lagrangian angle θ. This provides a kind of infinitesimal uniqueness of exact Lagrangian translating solitons. From the f-harmonicity of θ, we also have a nonexistence theorem in 2 dimensions. Corollary 4.8. If F : L → C^m is a Lagrangian translating surface as in Proposition 4.6 with Lagrangian angle θ satisfying ∫_L θ² e^{−f/2} dV_g < ∞, then L is a plane. Proof. By Proposition 4.6, θ ≡ 0 on L, so H = J∇θ = 0 and hence T^⊥ = 0; that is, T is tangent to L. Therefore L ≃ Σ × R ⊂ C × C for some minimal curve Σ. Hence Σ is a line and L is a plane. Generalization to Almost-Einstein Case Suppose now (X, J, ω) is a Kähler manifold and f is a smooth function which is not necessarily a holomorphy potential. Then by the same computations as in the proof of Theorem 3.1, one can show that the second variation formula of the f-volume becomes the analogue of (3) in which Ric_f(ξ, ξ) is replaced by (ρ(·, J·) + i∂∂f(·, J·))(ξ, ξ), where ρ = Ric(J·, ·) is the Ricci form of ω. In this case, the f-stability depends on the bilinear form ρ(·, J·) + i∂∂f(·, J·). In particular, if (X, J, ω, f) is almost-Einstein, that is, ρ + i∂∂f = Cω for some C ∈ R, then ρ(·, J·) + i∂∂f(·, J·) = C g(·, ·). Thus the stability criteria above carry over: every f-minimal Lagrangian is f-stable when C ≤ 0, and is Hamiltonian f-stable if and only if λ₁(Δ_f) ≥ C when C > 0. Notice that the above Hamiltonian f-stability criterion is also obtained in [15]. Generalized Lagrangian Mean Curvature Flow and Dynamic Stability A longstanding problem in Geometry is the existence problem for SLags in Calabi-Yau manifolds.
Since SLags are volume-minimizing, one approach to the existence problem is to deform an initial Lagrangian submanifold along the negative gradient flow of the volume functional, namely, the mean curvature flow (MCF). Smoczyk [29] proves that the Lagrangian condition is preserved by MCF if the ambient space is Kähler-Einstein, and in this case the flow is called Lagrangian mean curvature flow (LMCF). However, finite-time singularities often occur, and therefore in general one cannot expect long-time existence and convergence. There are conjectural pictures for dealing with this problem; see, for example, Thomas-Yau [31] and Joyce [13]. A relevant question about the long-time existence and convergence of LMCF is the relation between the stability of minimal Lagrangians under the volume functional and the dynamic stability of LMCF, that is, whether a small Lagrangian perturbation of a stable minimal Lagrangian submanifold converges back to the original minimal submanifold along LMCF. Results in this direction can be found in, for instance, Li [17] and Tsai-Wang [33], [32]; see also Lotay-Schulze [21] for an application of [32] to LMCF with singularities. The above picture can be generalized to f-minimal Lagrangians. More precisely, we consider the negative gradient flow of the f-volume functional:
\[
\frac{d}{dt}F_t \;=\; H(F_t) + \tfrac{1}{2}(\nabla f)^\perp. \tag{20}
\]
Behrndt [3] shows that if (X, J, ω, f) is almost-Einstein, then the Lagrangian condition is preserved by the flow (20), called the generalized Lagrangian mean curvature flow (GLMCF). The stationary points of (20) are the f-minimal Lagrangians. Therefore we can ask the same question about the dynamic stability of GLMCF. Kajigaya-Kunikawa [15] recently generalized Li's result [17] and obtained a dynamic stability theorem for compact f-minimal Lagrangians in compact almost-Einstein Kähler manifolds. Besides the compact cases, the dynamic stability of LMCF solitons under GLMCF is especially interesting, since in this case the GLMCF corresponds to LMCF with scalings. Problem 1. Are the expanding and translating solitons for LMCF dynamically stable under GLMCF? The author believes that this problem is related to the conjectural theory of formation and desingularization of singularities of LMCF proposed by Joyce [13]. Kähler-Ricci Mean Curvature Flow There is another generalization of LMCF, obtained by considering the mean curvature flow along a moving ambient metric. Let {g(t)} be a solution to the KRF; that is, its Kähler form ω(t) satisfies
\[
\frac{d}{dt}\,\omega(t) \;=\; -\rho(\omega(t)), \qquad t \in (a, b), \tag{21}
\]
where ρ denotes the Ricci form. We consider the mean curvature flow {F_t} along {g(t)}, that is,
\[
\frac{d}{dt}F_t \;=\; H(F_t), \tag{22}
\]
where the mean curvature H(F_t) of F_t is computed with respect to g(t). The couple (g(t), F_t) defined by (21) and (22) is called the Kähler-Ricci mean curvature flow (KR-MCF for short). Smoczyk [29] shows that the Lagrangian condition is preserved by KR-MCF. Now, if we are given a gradient KR soliton (X, J, ω, f), then there is a canonical KRF solution g(t) := σ(t) ϕ_t*g, defined for all t such that σ(t) > 0 (see Example 1), where ϕ_t is the biholomorphism on X generated by (1/(2σ(t)))∇f. In this case there is a one-to-one correspondence between GLMCF and KR-MCF, as shown in the following lemma. Lemma 5.2. Let (X, J, ω, f) be a gradient KR soliton and let g(t) be the solution to KRF defined as above. If (g(t), C_t) is the solution to KR-MCF for t ∈ (a, b), we set F_{s(t)} := ϕ_t ∘ C_t for s(t) = ∫_a^t dτ/σ(τ). Then F_s : L → X satisfies the generalized LMCF in the fixed background (X, J, ω).
Conversely, given a generalized LMCF {F_s} in (X, J, ω), the pair (g(t), C_t := (ϕ_t)^{−1} ∘ F_{s(t)}) satisfies the KR-MCF. Proof. Let s(t) be to be determined. Computing the evolution of ϕ_t ∘ C_t and solving dt/ds = σ(t), we obtain s(t) = ∫_a^t dτ/σ(τ). The converse follows from similar calculations. Notice that the case of shrinking solitons in shrinking Ricci solitons has been proved by Yamamoto [36], [37]. If we take F_s = F : L → X to be an f-minimal Lagrangian, then the KR-MCF evolves F by C_t = (ϕ_t)^{−1} ∘ F, defined for all t such that σ(t) = 1 − ct > 0. Therefore we conclude that every f-minimal Lagrangian gives rise to a self-similar solution of KR-MCF, moving only by the ambient biholomorphisms ϕ_t. Yamamoto [36], [37] shows that if the Ricci flow and the Ricci-mean curvature flow develop type-I singularities at the same point simultaneously, then the blow-up near the singular point is an f-minimal submanifold in a shrinking Ricci soliton. It would be interesting to see how other f-minimal submanifolds arise as local models for the singularities (in particular, type-II singularities) of KR-MCF.
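Two computations implicit in this section can be made explicit (they are my verifications from the definitions above, not displays of the paper). First, the canonical family ω(t) = σ(t) ϕ_t*ω solves the KRF: since f is a real holomorphy potential, the soliton equation Ric + Hess f = cg is equivalent to ρ(ω) + (1/2)L_{∇f}ω = cω, hence
\[
\frac{d}{dt}\,\omega(t)
= -c\,\varphi_t^*\omega + \sigma(t)\,\varphi_t^*\!\left(\mathcal{L}_{\frac{1}{2\sigma(t)}\nabla f}\,\omega\right)
= \varphi_t^*\!\left(-c\,\omega + \tfrac{1}{2}\mathcal{L}_{\nabla f}\,\omega\right)
= -\varphi_t^*\rho(\omega)
= -\rho(\omega(t)),
\]
using in the last step that the Ricci form is invariant under scaling the metric by a constant. Second, the reparametrization in Lemma 5.2 is explicit for σ(t) = 1 − ct:
\[
s(t) = \int_a^t \frac{d\tau}{1 - c\tau}
= \frac{1}{c}\,\log\frac{1 - ca}{1 - ct} \quad (c \neq 0),
\qquad
s(t) = t - a \quad (c = 0),
\]
which blows up as t approaches 1/c in the shrinking case, matching the finite-time extinction of the ambient flow.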
Comparative study of air quality index of two metropolitan cities (Lucknow and Kanpur) in the year 2023. Introduction Air pollution means contamination of the air by any chemical, physical, or biological agent that modifies the natural characteristics of the atmosphere. Technological, industrial, and agricultural advancement, coupled with population growth, has triggered the deterioration of environmental quality throughout the world. Rapidly growing cities, more traffic on roads, growing energy consumption and waste production, and the lack of strict implementation of environmental regulation are increasing the discharge of pollutants into air, water, and soil. Urban ambient air pollution is the result of emissions from a multiplicity of sources, mainly stationary, industrial, and domestic fossil fuel combustion, and petrol and diesel vehicle emissions (Brulfert et al., 2005; Parra et al., 2006). According to the WHO report, particulate matter (PM) affects more people than any other air pollutant. Even low concentrations of PM have been related to adverse health effects (Agarwal, 2012). WHO report (2006) data revealed that more than 80% of the urban population is exposed to air quality levels above the NAAQ standards and WHO guideline limits (WHO, 2006). However, recent studies have observed that about 90% of the world population lives with unhealthy air quality (WHO, 2016). Over the past few decades, human activities such as industrialization, fossil fuel burning, the rapid increase in the number of automobiles, and the intensive use of agrochemicals have raised the levels of harmful gases like SO2, NO2, CO, O₃ and particulate matter (PM) in the environment to worrying levels (Wu et al., 2020; Gurjar et al., 2016). The Lucknow metropolitan city is one of the most polluted cities of India. Lucknow, the capital of Uttar Pradesh (the most populated, and among the most polluted, states of India), is located at 26°51'N and 80°56'E. It is the second largest city of northern and central India and one of the most famous tourist attractions of the country; the population of the city is 28,15,601 according to the 2011 census, with an area of 310.1 km². It is placed among the fastest growing cities, is now a metropolitan city, and is rapidly emerging as a manufacturing, commercial, and retailing hub. Lucknow has insufficient transport infrastructure. Due to the increasing urban population, the use of personalized vehicles, mainly two-wheelers, and intermediate public transport is growing at a rapid rate. The total vehicle population is more than 13 lakhs, with a growth of 8.68% during 2011-2012 (IITR Report, 2012). The city of Kanpur has a population of about 3 million and is situated in the north-central part of India (longitude 80°22′E and latitude 26°26′N) in the Gangetic Plain. Kanpur is the largest industrial metropolis in the State of Uttar Pradesh, India, and the atmosphere over the city is considerably polluted. Pollution levels are on the upswing again after a few years of control: the air quality data of the CPCB reveal that the levels of particulate pollution in Kanpur were four to five times above the standard, and the levels can be even higher during winter. Levels of NO2, though below standards, are rising in the city, which is a clear sign of the growing impact of vehicles. About 60% of the geographical area of the city has a pollution problem, with a highly polluted city core.
Awareness about the air quality index is necessary for the population, especially in metro cities; campaigns and awareness programs are needed to convey the consequences and health effects of breathing poor air. The study's purpose is to compare the air quality status of Lucknow and Kanpur city for the year 2023. This study depends upon the concentrations of the various pollutants (PM10, NO2, SO2) that come under the AQI. The study shows that both cities are suffering from a declining trend in air quality. To compare the AQI of the two cities, secondary data were collected from 5 representative monitoring sites in different localities of Lucknow and Kanpur city in the year 2023. This paper has tried to find out the causes of the variation in the concentrations of pollutants in these two cities. Sources of air pollution in Lucknow and Kanpur Emission of air pollutants is caused by different anthropogenic processes, which can be categorized into the source groups motor traffic, industry, power plants, trade, and domestic fuel. In industrialized countries like Germany, emissions of "classic" air pollutants are decreasing. This trend is pronounced for carbon monoxide (CO), Sulphur dioxide (SO2), and total suspended particulate (TSP) and is weakly evident for nitrogen oxides (NOx) and non-methane volatile organic compounds (NMVOC) (H. Mayer, 1999). There are four main types of air pollution sources, namely natural, area, stationary, and mobile sources, producing PM2.5, PM10, and reactive gases including volatile organic compounds (VOCs). Primary pollutants (the indicated gases and solid particles) may undergo further toxification in the environment. Study location and data collection For the comparison of ambient air quality (AQI) in Lucknow and Kanpur city, secondary data have been obtained from the Uttar Pradesh Pollution Control Board (UPPCB), the Central Pollution Control Board (CPCB), and the Centre for Science and Environment (CSE). The assessment of the monthly average concentration of ambient air pollution in Lucknow and Kanpur has been conducted with the recorded data (from the Annual Report of UPPCB, 2022-2023) for 5 monitoring stations in each city: 2 residential (Mahanagar and Aliganj in Lucknow; Kidvai Nagar and Shastri Nagar in Kanpur), 2 commercial (Hazratganj and Ansal T.C. in Lucknow; Zareeb Chauki and Rama Devi in Kanpur), and 1 industrial (Talkatora in Lucknow; Panki in Kanpur), for each month, comparing the average values with the given NAAQ Standards. Seasonal variations in AQI and its three representative components, PM10, SO2, and NO2, were also recorded. Respirable Suspended Particulate Matter (RSPM or PM10): the 24-hour mean concentrations are summarized in the tables referenced below. Ambient air quality (AQI): Table 13 reports the monthly average Air Quality Index (AQI) in Lucknow, and Table 16 the seasonal variation of AQI in residential, commercial, and industrial areas of Lucknow and Kanpur. The seasonal variation of AQI was recorded in residential, commercial, and industrial areas. In residential areas of Lucknow, the 24-hour averages of AQI were observed to be 174.5, 145.37, and 118.12 in winter (Nov-Feb), summer (Mar-Jun), and monsoon (Jul-Oct), respectively. Similarly, in commercial areas of Lucknow, average AQI values of 181.37, 164, and 123.12 were recorded in the winter, summer, and monsoon seasons, respectively, and in industrial areas of Lucknow the average AQI values were 177.25, 161.5, and 111 in winter, summer, and monsoon, respectively. Figure 12. Seasonal variation of AQI in residential, commercial, and industrial areas of Lucknow and Kanpur city.
The maximum AQI in Lucknow was recorded as 181.37 in winter in the commercial area, and the minimum as 111 in the monsoon season in the industrial area (Table 16, Fig. 12). The seasonal variation of AQI was likewise recorded for Kanpur. In residential areas of Kanpur, the 24-hour averages of AQI were observed to be 128.12, 121.62, and 92.87 in winter (Nov-Feb), summer (Mar-Jun), and monsoon (Jul-Oct), respectively. Similarly, in commercial areas of Kanpur, average AQI values of 227.75, 170, and 123.75 were recorded in the winter, summer, and monsoon seasons, respectively, and in industrial areas of Kanpur the average AQI values were 164.5, 141.75, and 110.25 in winter, summer, and monsoon, respectively. The maximum AQI in Kanpur was therefore recorded as 227.75 in winter in the commercial area, and the minimum as 92.87 in the monsoon season in the residential area (Table 16, Fig. 12). Discussion According to a report released by the Swiss air quality monitoring body IQAir, the majority of the air pollution plaguing the largest cities comes from vehicular, industrial, and construction emissions. Dust and construction contribute about 59% of the air pollution in India. In the big metro cities, the major cause of air pollution happens to be vehicular exhaust containing the oxides of carbon, nitrogen, and Sulphur, particulate matter, and volatile organic compounds (VOCs). Almost all the cities of India have reported high concentrations of PM10 in recent years, including Indian mega cities such as Delhi (Trivedi et al., 2014), Kolkata (Das et al., 2015), Raipur (Giri et al., 2013), Kanpur (Singh and Gupta, 2015), and Lucknow (Lawrence and Fatima, 2014; Saini et al., 2022). Respirable particulate matter has been identified as the major air pollutant of the urban air environment. Vehicles fitted with the latest technology engines, the usage of CNG in vehicles, other combustion sources, and formation as a secondary pollutant have resulted in higher levels of fine and ultrafine particles in the urban air environment. The incidence of respiratory and mutagenic diseases is more likely to increase due to high levels of finer particulates. Therefore, in the revised National Ambient Air Quality Standards (NAAQS) of November 2009, suspended particulate matter (SPM) was excluded and the fine particulate fraction PM2.5 was included alongside the existing PM10 (Verma et al., 2016). The 24-hour average concentrations of PM10 in Lucknow and Kanpur were reported higher than the level prescribed by the NAAQ standards; only the residential area of Kanpur had an average PM10 concentration within the prescribed level, in the monsoon season. The winter season was the worst of the whole year in both Lucknow and Kanpur, followed by the summer season. In residential areas the concentration of PM10 is the lowest in comparison to commercial and industrial areas in both cities, and it is highest in the commercial areas. In comparison, the air quality of Kanpur is better than that of Lucknow in terms of PM10 concentration.
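For readers who want to reproduce the index arithmetic, a minimal sketch follows of how a pollutant concentration maps to an AQI value by piecewise-linear interpolation over breakpoints, in the spirit of the Indian CPCB method, together with the paper's seasonal grouping. The breakpoint table and the helper names below are assumptions of this sketch (commonly cited CPCB PM10 breakpoints), not values taken from the UPPCB data.

```python
# Sketch: CPCB-style sub-index for 24-h PM10 (ug/m3) via piecewise-linear
# interpolation. The breakpoints are the commonly cited CPCB values and are
# an assumption of this sketch, not data from the paper.
PM10_BREAKPOINTS = [
    # (conc_lo, conc_hi, index_lo, index_hi)
    (0,   50,   0,   50),
    (51,  100,  51,  100),
    (101, 250,  101, 200),
    (251, 350,  201, 300),
    (351, 430,  301, 400),
    (431, 1000, 401, 500),
]

def sub_index(conc, breakpoints):
    """Linearly interpolate a concentration onto the AQI scale."""
    for c_lo, c_hi, i_lo, i_hi in breakpoints:
        if c_lo <= conc <= c_hi:
            return i_lo + (i_hi - i_lo) * (conc - c_lo) / (c_hi - c_lo)
    raise ValueError("concentration outside tabulated range")

# The overall AQI is the maximum of the sub-indices of the measured
# pollutants; here only the PM10 sub-index is computed.
print(round(sub_index(263.64, PM10_BREAKPOINTS)))  # winter commercial-area mean

# Seasonal grouping used in the paper: winter Nov-Feb, summer Mar-Jun,
# monsoon Jul-Oct. Averaging hypothetical monthly AQI values by season:
def season(month):
    if month in (11, 12, 1, 2):
        return "winter"
    return "summer" if month in (3, 4, 5, 6) else "monsoon"

monthly_aqi = {1: 210, 5: 160, 8: 120, 12: 195}  # hypothetical monthly means
by_season = {}
for m, aqi in monthly_aqi.items():
    by_season.setdefault(season(m), []).append(aqi)
print({s: sum(v) / len(v) for s, v in by_season.items()})
```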
The AQI for Kanpur city has shown that air quality worsens (extremely poor to severe) in the winter months and during the early summer months (March, April, and part of May). These months are characterized by dusty winds resulting in high SPM. The air quality generally improves in the monsoon and post-monsoon period (good to moderate), as rain washes out the pollutants (Mukesh et al., 2003). Sulphur dioxide affects the respiratory system, particularly lung function, and can irritate the eyes and increase the risk of respiratory tract infections. It causes coughing and mucus secretion and aggravates conditions such as asthma and chronic bronchitis. SO₂ contributes to the formation of thick haze and smog. Most of the Sulphur dioxide is released from power plants, oil refineries, some motor vehicles, and domestic boilers and fires. The 24-hour average concentrations of Sulphur dioxide were reported highest in the winter season and lowest in the monsoon. The level of Sulphur dioxide in the industrial and residential areas of Kanpur is slightly higher than in Lucknow, but the average value of SO₂ in the commercial area is much higher in Lucknow than in Kanpur. The 24-hour average concentrations of SO₂ were reported under the prescribed level throughout the year. One option to reduce SO₂ emissions is to use coal that contains less Sulphur; another is to "wash" the coal to remove some of the Sulphur. Power plants can also install equipment called scrubbers, which remove the SO₂ from gases leaving the smokestack. The main source of NO₂ resulting from human activities is the combustion of fossil fuels (coal, gas, and oil), especially fuel used in cars. It is also produced from making nitric acid, welding, the use of explosives, the refining of petrol and metals, commercial manufacturing, and food manufacturing. Elevated levels of NO₂ can cause damage to the human respiratory tract and increase a person's vulnerability to, and the severity of, respiratory infections and asthma. Long-term exposure to high levels of NO₂ can cause chronic lung disease. The 24-hour average concentration of NO2 was maximum in the industrial areas of Lucknow and Kanpur and minimum in the commercial areas of both cities. The level of NO₂ concentration was recorded higher in Kanpur in comparison to Lucknow throughout the entire year. In Lucknow, the average concentrations of NO₂ were found to be under the prescribed level of NAAQ in the summer and monsoon seasons, but in winter they were recorded slightly above the prescribed level. In Kanpur, the concentration of NO₂ was reported highest in the winter season and lowest in the monsoon season. The minimum concentrations of SO₂ and NO₂ reported in the monsoon period may be attributed to rainfall, which washes out pollutants from the air; similar observations and findings were earlier reported by Mumtaz et al. (2017) and Saini et al. (2022).
India has experienced a sharp rise in air pollution as a result of industrialization, population growth, an increase in the number of vehicles on the road, fuel usage, inadequate transportation infrastructure and, most importantly, insufficient environmental legislation. With the increased pace of industrialization, especially in developing countries, environmental problems have also increased. In tandem with population growth and economic expansion, there has been a sharp increase in the sources of air pollution. In addition to making it more difficult to breathe, air pollution can also worsen pre-existing respiratory and heart diseases (KM Mansi, 2022). According to the World Health Organization (WHO), 92% of the world's population lives in places where air quality exceeds the WHO guideline values, and about 3 million deaths per year worldwide are currently attributed to ambient air pollution (WHO, 2016). Cohen et al. (2005) reported that high concentrations of particulate matter cause about 0.8 million (8 lakh) premature deaths and 6.4 million lost life-years per year worldwide.

Conclusion

The study was conducted to assess and compare the current air quality of Lucknow and Kanpur city. For this purpose, secondary data were collected from the Uttar Pradesh Pollution Control Board website, and the monthly and seasonal variations of PM10, SO₂, and NO₂ were analyzed at five representative locations in each city. The study revealed that the PM10 concentration exceeded the prescribed NAAQS and WHO guideline values throughout the year in both cities, although it was lower than in the previous year. Its peak concentration was reported in the month of January 2023 in Lucknow (Hazarat Ganj) and in Kanpur (Rama Devi). The concentrations of SO₂ and NO₂ were below the prescribed levels throughout the year in Lucknow and Kanpur. In the monsoon season, the concentrations of SO₂ and NO₂ were lower than in the summer and winter seasons. The results indicate that the AQI of Kanpur ranged between "unhealthy for sensitive groups" and "very unhealthy", whereas the AQI of Lucknow indicated comparatively healthier air. Comparing the two cities, Lucknow had a lower AQI than Kanpur throughout the year, indicating that Lucknow had better air quality. Both cities nevertheless need to work to reduce their air quality index (AQI). The minimum AQI was recorded in the monsoon season, and the most polluted season was winter, which had the highest AQI.
Figure 1 Monthly variation of PM10 concentration in different localities of Lucknow city.
Table 2 Monthly average concentration of PM10 (µg/m3) in different localities of Kanpur city (2022-2023).
Figure 2 Monthly variation of PM10 concentrations in different localities of Kanpur city.
Table 3 Monthly variation of PM10 (µg/m3) concentrations in residential, commercial, and industrial areas of Lucknow and Kanpur city.
Table 4 Seasonal variation of PM10 (µg/m3) concentration in residential, commercial, and industrial areas of Lucknow and Kanpur city.
Figure 3 Seasonal variation of PM10 concentration in residential, commercial, and industrial areas of Lucknow and Kanpur city.

The seasonal variation of PM10 concentration was also recorded in the residential, commercial, and industrial areas. In the residential areas of Kanpur, the 24-hour average PM10 concentrations were 139.79, 132.59, and 67.44 µg/m3 in winter (Nov-Feb), summer (Mar-Jun), and monsoon (Jul-Oct), respectively. Similarly, in the commercial areas of Kanpur the average PM10 concentrations were 263.64, 199.35, and 136.09 µg/m3 in winter, summer, and monsoon, respectively, and in the industrial areas of Kanpur they were 196.87, 162.53, and 115.22 µg/m3 in winter, summer, and monsoon, respectively. The maximum PM10 concentration, 263.64 µg/m3, was recorded in winter in the commercial area, and the minimum, 67.44 µg/m3, in the monsoon season in the residential area (Table 4, Fig. 3).

Figure 4 Monthly variation of SO₂ concentration in different localities of Lucknow.
Figure 5 Monthly variation of SO₂ in different localities of Kanpur.
Table 7 Monthly variation of SO₂ (µg/m3) in residential, commercial, and industrial areas.
Table 8 Seasonal variation of SO₂ (µg/m3) concentration in residential, commercial, and industrial areas of Lucknow and Kanpur city.

The seasonal variation of SO₂ concentration was also recorded in the residential, commercial, and industrial areas. In the residential areas of Lucknow, the 24-hour average SO₂ concentrations were 9.31, 7.1, and 7.98 µg/m3 in winter (Nov-Feb), summer (Mar-Jun), and monsoon (Jul-Oct), respectively. Similarly, in the commercial areas of Lucknow the average SO₂ concentrations were 10.54, 9.28, and 7.80 µg/m3 in winter, summer, and monsoon, respectively, and in the industrial areas of Lucknow they were 10.23, 9.72, and 6.41 µg/m3 in winter, summer, and monsoon, respectively. The maximum SO₂ concentration, 10.54 µg/m3, was recorded in winter in the commercial area, and the minimum, 6.41 µg/m3, in the monsoon season in the industrial area of Lucknow (Table 8, Fig. 6).

Figure 6 Seasonal variation of SO₂ (µg/m3) concentration in residential, commercial, and industrial areas of Lucknow and Kanpur city.

In the residential areas of Kanpur, the 24-hour average SO₂ concentrations were 8.49, 8.48, and 7.56 µg/m3 in winter (Nov-Feb), summer (Mar-Jun), and monsoon (Jul-Oct), respectively. Similarly, in the commercial areas of Kanpur the average SO₂ concentrations were 5.40, 5.51, and 5.06 µg/m3 in winter, summer, and monsoon, respectively, and in the industrial areas of Kanpur they were 9.15, 9.25, and 8.45 µg/m3 in winter, summer, and monsoon, respectively. The maximum SO₂ concentration, 9.25 µg/m3, was recorded in summer in the industrial area, and the minimum, 5.06 µg/m3, in the monsoon season in the commercial area (Table 8, Fig. 6).
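All of the seasonal figures above come from one computation: the monthly 24-hour means are grouped into winter (Nov-Feb), summer (Mar-Jun), and monsoon (Jul-Oct) and averaged. A minimal Python sketch of this grouping follows; the input values are illustrative, not the study's data.

# Minimal sketch of the seasonal averaging used throughout this study.
SEASON = {m: "winter" for m in (11, 12, 1, 2)}
SEASON.update({m: "summer" for m in (3, 4, 5, 6)})
SEASON.update({m: "monsoon" for m in (7, 8, 9, 10)})

def seasonal_means(monthly):
    """monthly: dict mapping month number (1-12) to a monthly 24-h mean."""
    totals, counts = {}, {}
    for month, value in monthly.items():
        season = SEASON[month]
        totals[season] = totals.get(season, 0.0) + value
        counts[season] = counts.get(season, 0) + 1
    return {season: totals[season] / counts[season] for season in totals}

# Illustrative values only:
print(seasonal_means({1: 150.0, 2: 140.0, 11: 130.0, 12: 139.2,
                      3: 120.0, 6: 110.5, 7: 70.0, 9: 64.9}))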
Table 12 Seasonal variation of NO₂ concentration in residential, commercial, and industrial areas of Lucknow and Kanpur.

In the residential areas of Lucknow (Mahanagar and Aliganj), the 24-hour average NO₂ concentrations ranged from 15.18 to 43.97 µg/m3, with an average of 27.21 µg/m3, and in the residential areas of Kanpur (Kidvai Nagar and Shastri Nagar) they ranged from 44.58 to 59.70 µg/m3, with an average of 52.32 µg/m3. In the commercial areas of Lucknow (Hazarat Ganj and Ansal T.C.), the average NO₂ concentrations ranged from 17.37 to 45.59 µg/m3, with an average of 30.19 µg/m3, and in the commercial areas of Kanpur (Zreeb Chauki and Rama Devi) they ranged from 36.87 to 64.04 µg/m3, with an average of 44.95 µg/m3. In the industrial area of Lucknow (Talkatora), the 24-hour average NO₂ concentrations ranged from 18.06 to 43.81 µg/m3, with an average of 32.44 µg/m3, and in the industrial area of Kanpur (Panki) they ranged from 46.45 to 64.02 µg/m3, with an average of 56.65 µg/m3 (Table 11).

The seasonal variation of NO₂ concentration was also recorded in the residential, commercial, and industrial areas. In the residential areas of Lucknow, the 24-hour average NO₂ concentrations were 40.27, 24.26, and 17.14 µg/m3 in winter (Nov-Feb), summer (Mar-Jun), and monsoon (Jul-Oct), respectively. Similarly, in the commercial areas of Lucknow the average NO₂ concentrations were 41.27, 29.83, and 19.48 µg/m3 in winter, summer, and monsoon, respectively, and in the industrial areas of Lucknow they were 41.65, 34.83, and 20.83 µg/m3 in winter, summer, and monsoon, respectively. The maximum NO₂ concentration, 41.65 µg/m3, was recorded in winter in the industrial area, and the minimum, 17.14 µg/m3, in the monsoon season in the residential area of Lucknow (Table 12, Fig. 9).

Figure 9 Seasonal variation of NO₂ (µg/m3) concentration in residential, commercial, and industrial areas of Lucknow and Kanpur city.

In the residential areas of Kanpur, the 24-hour average NO₂ concentrations were 56.11, 54.29, and 46.56 µg/m3 in winter (Nov-Feb), summer (Mar-Jun), and monsoon (Jul-Oct), respectively. Similarly, in the commercial areas of Kanpur the average NO₂ concentrations were 45.25, 43.51, and 46.15 µg/m3 in winter, summer, and monsoon, respectively, and in the industrial areas of Kanpur they were 59.74, 57.48, and 52.72 µg/m3 in winter, summer, and monsoon, respectively.

Figure 10 Monthly variation of AQI in different localities of Lucknow.
Figure 11 Monthly variation of AQI in different localities of Kanpur.

The AQI was recorded at a maximum of 363 in November at Rama Devi (commercial area) and a minimum of 75 in September at Shastri Nagar (residential area) (Table 14, Fig. 11).
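For context, index values such as the 363 recorded at Rama Devi are derived from pollutant concentrations by piecewise-linear interpolation between breakpoints: I = (I_hi - I_lo) / (C_hi - C_lo) * (C - C_lo) + I_lo. The sketch below uses US EPA PM10 breakpoints, assumed here because the paper uses EPA-style category names; the study does not state which breakpoint table it used.

# Minimal sketch of the standard AQI sub-index calculation.
PM10_BREAKPOINTS = [  # (C_lo, C_hi, I_lo, I_hi, category)
    (0, 54, 0, 50, "Good"),
    (55, 154, 51, 100, "Moderate"),
    (155, 254, 101, 150, "Unhealthy for sensitive groups"),
    (255, 354, 151, 200, "Unhealthy"),
    (355, 424, 201, 300, "Very unhealthy"),
    (425, 504, 301, 400, "Hazardous"),
    (505, 604, 401, 500, "Hazardous"),
]

def pm10_sub_index(conc):
    c = int(conc)  # EPA truncates PM10 to an integer before the lookup
    for c_lo, c_hi, i_lo, i_hi, category in PM10_BREAKPOINTS:
        if c_lo <= c <= c_hi:
            index = (i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo
            return round(index), category
    raise ValueError("concentration outside breakpoint table")

print(pm10_sub_index(263.64))  # -> (155, 'Unhealthy'); Kanpur commercial winter mean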
Everett and Structure

I address the problem of indefiniteness in quantum mechanics: the problem that the theory, without changes to its formalism, seems to predict that macroscopic quantities have no definite values. The Everett interpretation is often criticised along these lines, and I shall argue that much of this criticism rests on a false dichotomy: that the macroworld must either be written directly into the formalism or be regarded as somehow illusory. By means of analogy with other areas of physics, I develop the view that the macroworld is instead to be understood in terms of certain structures and patterns which emerge from quantum theory (given appropriate dynamics, in particular decoherence). I extend this view to the observer, and in doing so make contact with functionalist theories of mind.

The measurement problem

A simple way to think about the quantum measurement problem is as follows:

1. The formalism of quantum mechanics describes the evolution of a mathematical object called the wave-function. By analogy with classical physics, the natural move is to treat this wave-function as directly representing a real thing, making it analogous to the phase-space point representing a set of particles, or to the vector field representing a state of the electromagnetic field. (The alternative of treating the wave-function as some sort of probability distribution - analogously to classical statistical mechanics - turns out to be untenable, at least without further modification of the theory.1)

1 There is a 'statistical' or 'ensemble' interpretation of quantum mechanics, discussed by (for instance) Ballentine (1990) and Taylor (1986), which does attempt to take the wave-function as just giving the statistical distribution of outcomes from a large number of measurements; I find it difficult to see how this interpretation manages to avoid either commitment to some unknown hidden-variables theory on the one hand, or outright anti-realism on the other, but this is not the place for such a debate.

2. Taking this view of the wave-function when it is used to describe microscopic objects like atoms or molecules leads us to the conclusion that these objects often do not have definite values of properties - such as spin or position - which classically we would expect to be definite. This 'superposition' of properties implies a very weird view of the microworld, but since that world is not directly observable such weirdness is not (yet) a problem.2

2 At least, there is no epistemic problem; however, it might be argued that - pending an understanding of what (for instance) indefiniteness of position actually means - our theory is simply incoherent as a physical theory. This suggests, as argued recently by Tappenden (2000, 2002), that we may need to introduce "many-worlds" talk at the microphysical level, before any consideration of macroscopic ontology. For my own attempt to develop this approach without having to change the quantum-mechanical formalism, see Wallace (2001b).

3. However, the detailed dynamics of the quantum wave-function (specifically, linearity and entanglement) imply that this microscopic indefiniteness inevitably leads to indefiniteness at the everyday level - so that pointers sometimes do not have definite positions, and cats sometimes are not definitely alive or dead. This is not merely "weird" but apparently pathological.

At first sight, the obvious move seems to be to modify the theory itself: to change either the dynamics, or the assumption that reality is fully represented by the wave-function. Everett's contribution to the debate was to challenge this 'obvious' strategy and to take seriously the idea of superpositions at the macroscopic level. The gain of doing so would be significant: the simple and elegant mathematical structure of quantum theory would be left intact; there would be no need to postulate ad hoc modifications of the dynamics, no need to add extra elements to the theory which play no part in its practical applications, and no conflict with relativity.
But Everett's strategy must obviously overcome major problems. The idea of an indeterminate macroworld seems either meaningless or just plain contradicted by observations: what could it mean to say objects have indefinite position? And even if it does mean anything, surely you only have to look at them to see that their positions are definite? The goal of this paper is to show how these problems can be resolved, without compromising the mathematical structure of quantum theory. The approach which I shall advocate is based upon decoherence theory, and very much upon the lines of the recent versions of the Everett interpretation proposed by Gell-Mann and Hartle (1990), Saunders (1998), Zurek (1998), and others; in section 2 I contrast this sort of approach to Everett with earlier versions which modify the mathematical formalism or introduce an explicit role for consciousness. In section 3 I shall argue that the conceptual criticisms of the decoherence-based approach (I don't discuss the more technical objections) are based upon a false dichotomy (that either the macroscopic world is written directly into the quantum formalism or it is simply an illusion), and in section 4 I shall defend a view of macroscopic objects which avoids this dichotomy, based on work by Dennett (primarily Dennett 1991b). In sections 5-7 I apply this view to quantum mechanics, first to Schrödinger's cat and then, in section 7, to human observers. In the latter section I will make contact with the functionalist program in philosophy of mind, which fits very naturally into my framework; at the end of the section I briefly discuss the problem of probability in Everett interpretations, although for the most part I treat probability as a separate foundational problem lying largely outside the scope of this paper.

Recovering macroscopic definiteness

Traditionally there have been two approaches taken to avoiding the problems of macroscopic indefiniteness mentioned above, whilst preserving the attractive features of Everett's strategy; these are now usually referred to as the "Many Worlds" and "Many Minds" interpretations. Both approaches begin with some superposition like

    α |system in state 1⟩ ⊗ |apparatus reads "1"⟩ ⊗ |observer sees "1"⟩ + β |system in state 2⟩ ⊗ |apparatus reads "2"⟩ ⊗ |observer sees "2"⟩,    (1)

which prima facie is an indefinite state in which neither the detection apparatus nor the observer is definite. The many-worlds strategy interprets the two terms in this superposition as representing two (or possibly two families of) distinct macroscopic worlds - hence the universal state represents a multiplicity of worlds, each one of which is macroscopically definite. The many-minds strategy, on the other hand, accepts that (1) is indefinite, and attempts to recover not definiteness but just the appearance of definiteness. This is done by associating different mental phenomena with each of the "observer" terms in (1), so that associated with each (macroscopically indefinite) brain is a large number of definite minds.
Each mind sees one term in the superposition, so that to the minds the world appears definite even though it is not. However, in both of these approaches, it seems that we have to add something to the underlying theory. In the many-worlds case we seem to have to specify a particular Hilbert-space basis (the so-called "preferred basis") to define worlds, and to explain why the wave-function is to be decomposed in one way rather than another. Also, if the world-decomposition is defined in terms of a basis then there would seem to be no fact of the matter as to which world at time t2 is identical to (or the successor of, etc.) a given world at time t1. This creates pressure to add another piece of structure, some sort of "connection rule" linking up worlds across time. 3

Arguably (and controversially! - see Lockwood (1996) for a defence) many-minds theories avoid the need to add a preferred basis to the quantum formalism, but they do so at the price of requiring a very close connection between fundamental physics and the philosophy of mind - effectively transferring the problem of selecting a basis onto our theory of mind and requiring that theory to be explicitly quantum-mechanical. The requirement for a "connection rule" to handle transtemporal identity seems just as strong for many-minds theories as for many-worlds theories: how are we to link up definite experiences at time t1 with those at time t2? Theories can be constructed which provide this extra structure (a number are discussed in Barrett 1999), but the additions to the formalism seem to count against the very reasons which led us to consider Everett's strategy in the first place: the new structure is ad hoc in the sense that it is usually quite underdetermined by observable data, and almost inevitably spoils the relativistic covariance of the theory.

From the 1980s onwards, decoherence theory has often been cited as part of the solution to this problem of definiteness. The technical details of this approach shall not concern us here, but the basic idea is that dynamical processes cause a preferred basis to emerge rather than having to be specified a priori - here we can understand 'emerge' in the sense that interference between processes described by separate terms of the preferred basis is negligible. (See Zurek 1991 for details.)

Two sorts of objection can be raised against the decoherence approach to definiteness. The first is purely technical: will decoherence really lead to a preferred basis in physically realistic situations, and will that preferred basis be one in which macroscopic objects have at least approximate definiteness? Evaluating the progress made in establishing this would be beyond the scope of this paper, but there is good reason to be optimistic. The other sort of objection is more conceptual in nature: it is the claim that even if the technical success of the decoherence program is assumed, it will not be enough to solve the problem of indefiniteness. This is because the decoherence process is only approximate: the preferred basis is very accurately specified but not given exactly, and the interference between terms, though very small, is not zero. Furthermore, for this reason the program does not apparently help with the problem of giving an exact criterion for transtemporal identity. It is this second, conceptual, objection that I wish to address in the remainder of this paper.
The fallacy of exactness

The objection above arises from a view implicit in much discussion of Everett-style interpretations: that certain concepts and objects in quantum mechanics must either enter the theory formally in its axiomatic structure, or be regarded as illusions. Consider, for instance, Kent's influential (1990) critique of Many-Worlds interpretations:

It's certainly true that phase information loss is a dynamical process which needs no axiomatic formulation. However, this is irrelevant to our very simple point: no preferred basis can arise, from the dynamics or from anything else, unless some basis selection rule is given. Of course, [Many-Worlds Interpretation] proponents can attempt to frame such a rule in terms of a dynamical quantity - for example, some measure of phase information loss. But an explicit, precise rule is needed. (p. 11; page numbering refers to the internet version.)

In other words, a preferred basis must either be written into the quantum-mechanical axioms, or no such basis can exist - the idea of some approximate, emergent preferred basis is not acceptable. The paper goes on to make a similar point about 'worlds':

...one can perhaps intuitively view the corresponding components [of the wave function] as describing a pair of independent worlds. But this intuitive interpretation goes beyond what the axioms justify: the axioms say nothing about the existence of multiple physical worlds corresponding to wave function components. (p. 11)

Analogous objections are raised about transtemporal identity; Barrett's recent (1999) book gives an example:

In so far as one lacks a notion of the identity of a world over time (and thus, no notion of the identity of an observer over time), the splitting-worlds theory is thus empirically incoherent... But if one adds a connection rule to the theory, then this further (because one also needs to choose a preferred basis) detracts from the theory's simplicity. (p. 162)

Barrett's quote implies that we face the same dichotomy: either there is some precise truth about transtemporal identity which must be written into the basic formalism of quantum mechanics, or there are simply no facts at all about the past of a given world, or a given observer. (This seems to be what motivates Bell (1981) to say that in the Everett interpretation the past is an illusion.)

I will argue that in defending any worthwhile version of the Everett interpretation, we should reject this view. My claim is instead that the emergence of a classical world from quantum mechanics is to be understood in terms of the emergence from the theory of certain sorts of structures and patterns, and that this means that we have no need (as well as no hope!) of the precision which Kent and others here demand.

Before developing this account, I shall briefly address what might appear to be a looming threat to any such approach. The problem of macroscopic indefiniteness is (in part) how we can understand the quantum state as simultaneously describing two macro-objects (A and B, say) with contradictory properties (such as being an alive cat, versus being a dead one). Introducing 'many worlds' at the level of formalism, for all its disadvantages, certainly solves this problem, for then A and B are simply distinct objects.
If, however, we adopt any account in which A and B each supervene on properties of the micro-world's ontology (say, P and Q), then if A and B have contradictory properties then surely P and Q must themselves be contradictory, and to avoid incoherence we appear to be forced back onto the explicit introduction of 'many worlds' at the level of the micro-ontology.

There is a flaw in this argument, however. If A and B have contradictory properties then P and Q must certainly be different properties, but it does not follow that they should have to be contradictory. The underlying micro-ontology is (faithfully represented by) the quantum state, and that state has a far richer set of properties than any classical state (as can be seen, for instance, from a position-basis viewpoint, where the quantum state of the Universe is represented as a function over an enormously high-dimensional configuration space, rather than the paltry three dimensions over which any classical field is defined). If A and B are to be 'live cat' and 'dead cat' then P and Q will be described by statements about the state vector which (expressed in a position basis) will concern the wave-function's amplitude in vastly separated regions R_P and R_Q of configuration space, and there will be no contradiction between these statements.

Understanding higher-order ontology

To see why it is reasonable to reject the dichotomy of the previous section, consider that in science there are many examples of objects which are certainly real, but which are not directly represented in the axioms. A dramatic example of such an object is the tiger: tigers are unquestionably real in any reasonable sense of the word, but they are certainly not part of the basic ontology of any physical theory. A tiger, instead, is to be understood as a pattern or structure in the physical state.

To see how this works in practice, consider how we could go about studying, say, tiger hunting patterns. In principle - but only in principle - the most reliable way to make predictions about these would be in terms of atoms and electrons, applying molecular dynamics directly to the swirl of molecules which make up tigers and their environment. In practice, however, this is clearly insane: no remotely imaginable computer would be able to solve the 10^35 or so simultaneous dynamical equations which would be needed to predict what the tigers would do, and even if such a computer could exist its calculations could not remotely be said to explain their behaviour. A more effective strategy can be found by studying the structures observable at the multi-trillion-molecule level of description of this 'swirl of molecules'. At this level, we will observe robust - though not 100% reliable - regularities, which will give us an alternative description of the tiger in a language of cells and molecules. The principles by which these cells and molecules interact will be derivable from the underlying microphysics, and will involve various assumptions and approximations; hence very occasionally they will be found to fail. Nonetheless, this slight riskiness in our description is overwhelmingly worthwhile given the enormous gain in usefulness of this new description: the language of cell biology is both explanatorily far more powerful, and practically far more useful, than the language of physics for describing tiger behaviour. Nonetheless it is still ludicrously hard work to study tigers in this way.
To reach a really practical level of description, we again look for patterns and regularities, this time in the behaviour of the cells that make up individual tigers (and other living creatures which interact with them). In doing so we will reach yet another language, that of zoology and evolutionary adaptationism, which describes the system in terms of tigers, deer, grass, camouflage and so on. This language is, of course, the norm in studying tiger hunting patterns, and another (in practice very modest) increase in the riskiness of our description is happily accepted in exchange for another phenomenal rise in explanatory power and practical utility. Of course, talk of zoology is grounded in cell biology, and cell biology in molecular physics, but we cannot discard the tools and terms of zoology to work directly with physics, without (a) losing explanatory power, and (b) taking forever.

What moral should we draw from this mildly fanciful example? That higher-level ontology is to be understood in terms of pattern or structure: in a slogan, a tiger is any pattern which behaves as a tiger. More precisely, what we have is a criterion for which patterns are to be regarded as real, which we might call Dennett's criterion (in recognition of a very similar view proposed by Dennett 1991b 4).

Dennett's Criterion: A macro-object is a pattern, and the existence of a pattern as a real thing depends on the usefulness - in particular, the explanatory power and predictive reliability - of theories which admit that pattern in their ontology.

Dennett's own favourite example is worth describing briefly in order to show the ubiquity of this way of thinking: if I have a computer running a chess program, I can in principle predict its next move by analysing the electrical flow through its circuitry, but I have no chance of doing this in practice, and anyway it will give me virtually no understanding of that move. I can achieve a vastly more effective method of prediction if I know the program and am prepared to take the (very small) risk that it is not being correctly implemented by the computer, but even this method will be practically very difficult to use. One more vast improvement can be gained if I don't concern myself with the details of the program, but simply assume that whatever they are, they cause the computer to play good chess. Thus I move successively from a language of electrons and silicon chips, through one of program steps, to one of intentions, beliefs, plans and so forth - each time trading a small increase in risk for an enormous increase in predictive and explanatory power. 5

Why is it reasonable to claim, in examples like these, that higher-level descriptions are explanatorily more powerful than lower-level ones? In other words, granted that a prediction from microphysics is in practice impossible, if we had such a prediction why wouldn't it count as a good explanation? To some extent I'm inclined to say that this is just obvious - anyone who really believes that a description of the trajectories followed by the molecular constituents of a tiger explains why that tiger eats a deer means something very different by 'explanation'. But possibly a more satisfying reason is that the higher-level theory to some extent 'floats free' of the lower-level one, in the sense that it doesn't care how its patterns are instantiated provided that they are instantiated.
(Hence a zoological account of tigers requires us to assume that they are carnivorous, have certain strengths and weaknesses, and so on, but doesn't care what their internal makeup is.) So an explanation in terms of the lower-level theory contains an enormous amount of extraneous noise which is irrelevant to a description in terms of higher-level patterns. See Putnam (1975) for further description of this point.

This approach to higher-order ontology applies to physics itself as well as to theories other than physics, as illustrated by one further example: that of quasi-particles. To understand these, consider vibrations in a (quantum-mechanical) crystal. These can in principle be described entirely in terms of the individual crystal atoms and their quantum entanglement with one another - but it turns out to be overwhelmingly more useful to think in terms of 'phonons', i.e. collective excitations of the crystal which behave like 'real' particles in most respects. This sort of thing is ubiquitous in solid-state physics, and the collective excitations are called 'quasi-particles' - so crystal vibrations are described in terms of phonons, waves in the magnetisation direction of a ferromagnet in terms of magnons, collective electron waves in a plasma in terms of plasmons, and so on. But are quasi-particles real? Well, they can be created and annihilated; they can be detected (by, for instance, scattering them off 'real' particles like neutrons); in some cases (such as so-called 'ballistic' phonons) their time-of-flight can be measured; and they play a crucial explanatory role in solid-state theories. 6 We have no more evidence than this that 'real' particles exist, so it seems absurd to deny the existence of the quasi-particles.

But when exactly, you might ask, are quasi-particles present? This question has no precise answer. It is essential in a quasi-particle formulation of a solid-state problem 7 that the quasi-particles decay only slowly relative to other relevant timescales (such as their time-of-flight), and when this criterion (and similar ones) are easily met then quasi-particles are definitely present. When the decay rate is much too high, the quasi-particles decay too rapidly to behave in any 'particulate' way, and the description becomes useless: hence we conclude that no quasi-particles are present. However, clearly it is a mistake to ask exactly when the decay time is short enough (2.54 × the interaction time?) for quasi-particles not to be present.

5 It might be doubted whether the computer can literally be said to have intentions, beliefs, plans etc. Dennett himself would embrace such claims (see Dennett (1987) for an extensive discussion), and they are at least suggested by the functionalist program in philosophy of mind which I discuss in section 7. However, for the purposes of this section there is no need to resolve the issue: the computer can be taken only to 'pseudo-plan', 'pseudo-believe' and so on, without reducing the explanatory importance of a description in such terms.

6 Any solid-state textbook is replete with explanations of empirical phenomena which are couched in terms of quasi-particles; see Kittel and Fong (1987), for instance.

7 See the first chapter of Abrikosov, Gorkov, and Dzyaloshinski (1963) for a discussion.
What actually happens is that, as we lower the decay time, the quasi-particle description becomes less and less advantageous compared to a lower-level description in terms of crystal atoms - hence by Dennett's criterion it becomes less and less viable to regard them as real, until ultimately they are clearly no longer of any use in studying the crystal and we must either revert to the underlying description or look for another, more useful higher-level description. But the somewhat blurred borderline between states where quasi-particles exist and states where they don't should not undermine the status of the quasi-particles as real - any more than the absence of a precise point where a valley stops and a mountain begins should undermine the status of the mountain as real.

(In fact, although this account of quasi-particles represents them as structures in an ontology of 'real' particles, the description in terms of nonrelativistic particle mechanics is itself effective, and derives from a description in terms of quantum field theory - there is every reason to believe particles like quarks and electrons to be patterns in the underlying quantum field in almost exactly the same sense that quasi-particles are patterns in the underlying crystal. It is interesting to ask whether the existence of some underlying 'stuff' is essential, or whether we can continue this chain of theories forever; such a question lies beyond the scope of this paper, though.)

This view of higher-order ontology as pattern or structure has some consequences which, though obvious given the nature of patterns, will play an important role in the later discussion of quantum mechanics.

1. Patterns can be imprecise. As the quasi-particle example should illustrate, a pattern can tolerate a certain amount of 'noise' or imprecision whilst still remaining the same pattern. (A tiger which loses a hair is still the same tiger.) Beyond a certain point the noise is such that the pattern can no longer be said to be present, but there is no reason to expect there to be any precise point where this occurs. (It may sometimes be convenient to define such a point by fiat: the biologist sometimes introduces an exact moment when one species becomes another; the astrophysicist defines an exact radius at which the sun's atmosphere starts. But neither believes that any deep truth is captured by this exactness.)

2. Patterns may involve dynamics, or be temporally extended. A 'pattern' in the sense I am using it need not be realised at an instant, but may depend on the behaviour over some timescale of the constituents of a pattern - what distinguishes a tiger from an inanimate facsimile of one is the behaviour of the former, not its shape.

3. There is a concept of transtemporal identity for patterns, but again it is only approximate. To say that a pattern P2 at time t2 is the same pattern as some pattern P1 at time t1 is to say something like "P2 is causally determined largely by P1 and there is a continuous sequence of gradually changing patterns between them" - but this concept will not be fundamental or exact and may sometimes break down.

Before ending this section, I should acknowledge that my account is obviously linked to the topic of how one theory can emerge from, or be reduced to, another - and that this latter topic is highly controversial.
Space does not permit any detailed engagement with the extensive literature on the subject, but I give here a few recent references: Butterfield and Isham (1999) give a general discussion of emergence using time in quantum gravity as an example; Thalos (1998) discusses the tension between physics and 'higher-level' sciences, in the context of social science; and Auyang (1998) is concerned with the way in which complex behaviour emerges from the interaction of simple systems; she uses quasi-particles as an example, in fact. There is also some overlap with the current debate on structural realism (proposed originally by Worrall (1989), developed by, e.g., Ladyman (1998), and criticised by, e.g., Psillos (1995)).

Quantum theory in structural terms

In order to show how the ideas of the last section apply to quantum mechanics, we consider the time-honoured problem of Schrödinger's cat. Recall the situation: our unfortunate cat is locked in a box and at some time - let us say noon - an unstable atomic nucleus is measured by a device within the box. If the device finds the atom to be undecayed the cat lives, but if it finds that it is decayed then poison gas is released into the box. If the atom's state is indefinite just before the measurement, then so is the cat's state just after the measurement.

Now, suppose that the cat is put into the box at 11am and we are asked to predict what happens to it in the next hour. We do not know the wave-function of the cat at this point, and even if we did know it exactly it would be of little use to us, for we cannot possibly solve the Schrödinger equation for such a complicated system - nor can we even solve some sort of classical or semiclassical approximation to it. Nonetheless we can say useful things about the cat:

• from solid-state physics we can predict that the cat won't spontaneously vaporize;
• from animal physiology we can predict that the cat won't spontaneously die or grow a second tail;
• from cat psychology we can predict that the cat won't start eating itself, and will probably remain asleep for the whole hour.

It is because of the power of this cat-level description to tell us about the future evolution of the wave-function, and because of the unavoidable need to work at cat-level in considering that future evolution, that we say - via Dennett's criterion - that there is a cat present in the system.

Now consider the evolution of the system after twelve noon, when the measurement is made, but suppose that the atomic nucleus, instead of being in an indefinite state, either definitely did or definitely did not decay. In each case, to predict the system's behaviour in the next hour, we use exactly the same methods - e.g., if the cylinder of poison gas breaks, then cat psychology tells us that the cat will probably jump backwards, and animal physiology tells us that it will die and in due course start to decompose. Now, quantum mechanics is linear. If we know what happens if the atom definitely does, or definitely does not, decay, then we can predict what happens if we have a superposition of decaying and not decaying. However, in doing so we are using exactly the same methods as before: we are taking advantage of the patterns present in the two branches of the wave-function. In other words - and this is the crucial point - in each of the branches there is a 'cat' pattern, whose salience as a real thing is secured by its crucial explanatory and predictive role.
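To spell out the linearity step just used, here is a schematic rendering; the ket labels are illustrative glosses on the text, not notation taken from the paper. If U_t is the unitary evolution that takes each definite initial state to the corresponding definite outcome, then:

\begin{align*}
U_t\big[\,|\text{undecayed}\rangle \otimes |\text{cat}\rangle\,\big] &= |\text{undecayed}\rangle \otimes |\text{live-cat pattern}\rangle,\\
U_t\big[\,|\text{decayed}\rangle \otimes |\text{cat}\rangle\,\big] &= |\text{decayed}\rangle \otimes |\text{dead-cat pattern}\rangle,\\
U_t\big[(\alpha\,|\text{undecayed}\rangle + \beta\,|\text{decayed}\rangle) \otimes |\text{cat}\rangle\big]
 &= \alpha\,|\text{undecayed}\rangle \otimes |\text{live-cat pattern}\rangle + \beta\,|\text{decayed}\rangle \otimes |\text{dead-cat pattern}\rangle.
\end{align*}

Each term on the right supports the full set of cat-level regularities, which is what licenses the claim that there is a cat pattern in each branch.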
Therefore, by Dennett's criterion there is a cat present in both branches after measurement. 8

Is it the same cat? Well, it is a future version of the same cat, in the sense described in the previous section: i.e., it is a pattern causally determined by the original cat and linked to it by a continuously changing sequence of cat patterns. It's really just a matter of terminology whether we decide that the whole branching set of living and dead cats 'is the same cat' (as defended in Tappenden 2000); the point to be learned, though, is that when describing patterns we shouldn't expect any more from transtemporal identity than approximate, 'effective' concepts which sometimes break down. (See Wallace (2001b) for further discussion of identity over time in quantum mechanics.)

Another question which at first sight should have a precise answer: if there was one cat before the measurement and two after it, when exactly did the duplication of cats occur? But first sight is mistaken. Before the decay there is certainly one cat. When the measurement occurs we will have a coherent superposition of both measurement outcomes - but after a very short time decoherence will remove the interference between these branches, and after this time there will be two cats present. During the decoherence period the wave-function is best regarded as some sort of 'quantum soup' which does not lend itself to a classical description - but since the decoherence timescale τ_D is incredibly short compared to any timescale relevant at the cat level of description, this need not worry us. Put another way, the cat description is only useful when answering questions on timescales far longer than τ_D, so whether or not quantum splitting is occurring, it just doesn't make sense to ask questions about cats that depend on such short timescales.

Superpositions of patterns

To see in a different way how the ideas of Sections 4-5 resolve the problem of macroscopic indefiniteness, consider the following sketch of the problem.

1. After the experiment, there is a linear superposition of a live cat and a dead cat.
2. Therefore, after the experiment the cat is in a linear superposition of being alive and being dead.
3. Therefore, the macroscopic state of the cat is indefinite.
4. This is either meaningless or refuted by experiment.

But (1) does not imply (2). The belief that it does is based upon an oversimplified view of the quantum formalism, in which there is a Hilbert space of cat states such that any vector in the space is a possible state of the cat. This is superficially plausible in view of the way that we treat microscopic subsystems: an electron or proton, for instance, is certainly understood this way, and any superposition of electron states is another electron state. But any state of a cat is actually a member of a Hilbert space containing states representing all possible macroscopic objects made out of the cat's subatomic constituents. Because of Dennett's criterion, this includes states which describe

• a live cat;
• a dead cat;
• a dead dog;
• this paper . . .

We can say (if we want, and within nonrelativistic quantum mechanics 9) that the particles which used to make up the cat are now jointly in a linear superposition of being a live cat and being a dead cat. But cats themselves are not the sort of things which can be in superpositions. Cats are by definition "patterns which behave like cats", and there are definitely two such patterns in the superposition.
The point can be made more generally: it makes sense to consider a superposition of patterns, but it is just meaningless to speak of a given pattern as being in a superposition. Thus, a pattern view of macroscopic ontology essentially solves the problem of indefiniteness by replacing indefiniteness with multiplicity; since it does so at the level of macroscopic objects including inanimate ones, it is closer in spirit to a Many-Worlds approach than to a Many-Minds one. However, this multiplication of patterns happens naturally within the existing formalism, and does not need to be added explicitly to the formalism.

It is important to remain clear what macro-objects are patterns in: they are not patterns in the positions of micro-objects, or in fundamental fields; they are patterns in (the properties of) the quantum state. As mentioned in the footnote on page 2, we can and do remain neutral about how this state is itself to be interpreted, since all we need from it are its structural properties, such as: what its representation is in the eigenbasis of a given operator. Of course, without specification of some set of preferred operators the state is structureless (we can say that it is a vector of unit norm in a countably-infinite-dimensional complex Hilbert space, but that's about it). The details of this specification depend on the particular quantum theory with which we are working: in nonrelativistic quantum mechanics, for instance, they are given by the generators of the Galilei group for individual particles, whilst in quantum field theory they are given in terms of the map between spacetime regions and operator algebras (see Wallace (2001a, section 2.2) for a discussion).

As an aside, the analysis of this paper gives support to Deutsch's claim that the de Broglie-Bohm pilot-wave theory (Bohm 1952; Holland 1993) and its variants are "parallel-universes theories in a state of chronic denial" (Deutsch 1996). In such theories 10 the wave-function is supplemented by a collection of 'corpuscles', particles guided by the wave-function and supposed to define our observed universe. But to predict the behaviour of the corpuscles we have to predict the behaviour of the wave-function, and to predict the behaviour of the wave-function we have to study the emergent patterns within it. Thus cats and all other macro-objects can be identified in the structure of the wave-function just as in the structure of the corpuscles. But the patterns which define them are present even in those parts of the wave-function which are very remote from the corpuscles. So if we accept a structural characterisation of macroscopic reality, we must accept the multiplicity of that reality in the de Broglie-Bohm pilot wave as much as in the Everettian universal state.

The role of the observer

We have not yet considered explicitly how observers are to fit into the framework just described. However, if we are happy to extend Dennett's criterion to conscious observers, then they fit into the framework quite straightforwardly: if a tiger is any pattern which behaves like a tiger, then an observer is any pattern which behaves like an observer. This is essentially an expression of an established viewpoint in the philosophy of mind: functionalism. Though there are many versions of functionalism, for our purposes we can define it as follows.

The functionalist claim: As a matter of conceptual necessity, 11 mental properties are supervenient on structural and functional properties of physical systems, and on no other properties.
Hence, it doesn't matter what a brain is made of, only how it works. Functionalism is at the root of the artificial intelligence project, for it entails that any sufficiently accurate computer simulation of a conscious being will itself be conscious. I will not attempt to defend it here, but will simply explore its implications for quantum theory. 12

Given functionalism, we can see that quantum mechanics implies the multiplication of observers in just the same way as it does the multiplicity of cats. To see this in rather more detail, let us consider an idealised measurement of some 2-state system: the system is assumed to be measured in some basis (|1⟩, |2⟩). First consider the case where the 2-state system is actually in state |1⟩; then the observer's state will remain definite after the measurement. Let us suppose that the joint state of the 2-state system and observer some time t after the measurement is

    |ψ_t;1⟩ = |1⟩ ⊗ |f_1(t)⟩,

where f_1(t) is some functional process describing the observer in the time following his observation of |1⟩, and |f_1(t)⟩ (for varying t) is the sequence of states realising that process. Similarly, if the 2-state system is actually in state |2⟩, the joint state post-measurement will be |ψ_t;2⟩ = |2⟩ ⊗ |f_2(t)⟩. In accordance with comment (2) above, the states |f_1(t)⟩ and |f_2(t)⟩ describe not just the observer, but an entire macroscopic region (where objects in that region are defined in structural terms, as explained above). Now let the 2-state system be in some superposition α|1⟩ + β|2⟩. Linearity tells us that the overall state at time t must be α|1⟩ ⊗ |f_1(t)⟩ + β|2⟩ ⊗ |f_2(t)⟩. This superposition realises both of the functional processes f_1(t) and f_2(t), and hence describes two definite observers (rather than one observer in an indefinite state - whatever that might mean), and we have again replaced superposition by multiplicity. 13

13 Note that my argument is rather different from that used by Chalmers (1996) to take superposition into multiplicity. Chalmers proposes a principle (the 'superposition principle') which effectively says that if conscious experience is present in one term of a superposition, then it is present in the superposition; this was shown to be unworkable by Byrne and Hall (1999). I make use only of the much weaker result (following from the functionalist criterion) that a superposition of orthogonal states, each of which is determinately part of a sequence of functional states, realises all the functional processes encoded by those sequences. This in turn relies upon the existence of decoherence to give a preferred basis in which functional sequences are possible; in this use of a preferred basis my approach is similar to that suggested by Vaidman (2000) in his reply to Byrne and Hall.
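Spelling out that linearity step (a schematic rendering; U_t denotes the assumed joint unitary evolution of system plus observer, and |f_0(0)⟩ the pre-measurement observer state):

\begin{align*}
U_t\big[\,|i\rangle \otimes |f_0(0)\rangle\,\big] &= |i\rangle \otimes |f_i(t)\rangle \qquad (i = 1, 2),\\
U_t\big[(\alpha\,|1\rangle + \beta\,|2\rangle) \otimes |f_0(0)\rangle\big]
 &= \alpha\,|1\rangle \otimes |f_1(t)\rangle + \beta\,|2\rangle \otimes |f_2(t)\rangle .
\end{align*}

Nothing beyond linearity and the two definite-outcome cases is used here: the superposed input simply inherits both output branches.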
We can see, then, that worries about observers with indefinite mental states are as misplaced as worries about cats which are indefinitely alive or dead. Patterns are not superposed, but duplicated, by the measurement event, and ultimately we are regarding mental states as just special sorts of patterns (although these patterns need not be visual patterns, realised in the instantaneous physical state; rather, they are likely to be behavioural patterns, which describe regularities in the dynamics of the physical state as well as its instantaneous configuration - cf. Dennett 1991b).

We can also consider how our observer views the measurement event. In the cases where the state of the system being observed had been definitely either |1⟩ or |2⟩, the observer's pre-measurement process (f_0(t), let us say) would have changed unproblematically into f_1(t) or f_2(t), and the observer would certainly interpret this as personal survival: hence f_0(t) and f_i(t) describe the same person. It is then legitimate for the observer to understand the measurement as himself surviving as two diverging copies (of different weights) following the measurement. (As Saunders (1998) has pointed out, this is closely analogous to the cases of personal fission considered by Parfit (1984).)

(It is tempting to ask: what does it feel like while the split itself is occurring? Hopefully it should be clear by now that this is a bad question: if (as functionalism claims) statements about mental phenomena are statements about the functional behaviour of the brain (i.e., about the dynamical patterns in it) and if the timescales on which the functional processes occur are very long compared to the decoherence timescale (which they are), then there can be no awareness of the event of splitting at all - thus allowing us to justify Everett's famous claim to this effect (Everett 1957, p. 460). By analogy, suppose an artificial-intelligence program were to be run on a (classical) digital computer; it would be meaningless to ask what it felt like for that program whilst the computer was in the process of changing from one digital configuration to another. Understanding that process requires us to abandon the language of computer programs and descend to the level of electronics, and 'mental' talk about the computer or program doesn't engage with that level.)

The approach advocated here also alleviates (though does not solve entirely) the problem of probability in the Everett approach. An observer about to measure a superposed state knows that after the measurement there will exist more than one functional structure which he will regard as the same individual as himself. He has no reason not to care about their futures just as he cares about 'his own' future, for even in the absence of splitting his future existence consists only in the future presence of patterns such as these. But the different future copies may have different interests which he could influence by actions prior to the measurement; so how are these different interests to be weighted? There is no a priori reason to weight them equally. Granted, we have not shown that the 'correct' or 'most rational' weighting is the standard one, but we have at least shown that it is rational for the observer to assign some weighting: in other words, we have shown that there is room for probabilistic concepts (at least of the decision-theoretic sort) to be accommodated in the theory. This is already enough to bring the Everett interpretation onto the same level as any other physical theory, for - as pointed out in the quantum context by Papineau (1996) - we have no really satisfactory understanding of probability in any other context either! For more constructive attempts to justify the probability rules, though (from a wide variety of perspectives), see Deutsch (1999), Saunders (1998), Tappenden (2000), Vaidman (1998) and Zurek (1998).

It is worth remembering the crucial role that decoherence is playing in this account: without it, we would not have the sort of branching structure which allows the existence of effectively non-interacting multiple near-copies of a given process.
As it is, though, we are able to identify many different functional structures realised in different parts of the universal state, each with the right sort of complexity to merit the title 'observer'.

Would it be possible to reject functionalism - that is, reject the application of Dennett's criterion to conscious observers - without having to reject this paper's 'structural' approach to quantum theory? Up to a point: functionalism is neutral about how functional systems are to be realised physically, whereas in this structural approach to quantum theory there is space for us to require the system to be instantiated in a certain way - say, in the position basis (although see Wallace (2001b) for the difficulties of this particular basis choice). However, the structural approach is committed to an approach to the mind which

• denies observers some uniquely special status, but describes them as emerging as structures and patterns in lower-level physics (specifically, in lower-level classical physics, itself to emerge from unitary quantum physics via decoherence);
• is comfortable with some rough edges in the definition of which systems count as observers (for decoherence will never give us an exact macroworld).

Functionalism fits these criteria in a very natural way.

Conclusions

In his critique of many-worlds interpretations, Kent (1990) states that

[W]e have tried to clarify the logical structure of the MWI . . . The attempt may not have entirely succeeded. But we are convinced that the procedure is justified, and in fact that axiomatization should have been insisted upon from the beginning. For any MWI worth the attention of physicists must surely be a physical theory reducible to a few definite laws, not a philosophical position irreducibly described by several pages of prose.

It is not the purpose of this paper to argue against this view of physical theories. I agree that physical theories should be axiomatizable, and in fact would say that the axioms of any worthwhile Everettian theory should be just those of 'bare' unitary quantum mechanics, without axioms of measurement or collapse. However, when we are describing observations, or cats, or people, within physics, we inevitably need to make contact with higher-order theories - of material science, of cat biology, of psychology and neuroscience. This contact is not made by fiat, via abstractly or generally stated principles; 14 rather, it occurs because those theories are emergent from the microphysics, describing patterns which occur within the microphysics. Indeed we do need several pages of somewhat philosophical prose to describe carefully how this emergence takes place - but the point is that having understood the process in the classical case, there really is no reason to think anything different is going on in quantum theory.

To summarise the view of quantum theory that then emerges:

• Macroscopic objects are to be understood as structures and patterns in the universal quantum state.
• Multiplicity occurs at the level of structure - thus macroscopic objects do not have indeterminate states after quantum measurements, but are genuinely multiplied in number.
• We can tolerate some small amount of imprecision in the macroworld: a slightly noisy pattern is still the same pattern. Hence we do not need to worry that decoherence does not give totally non-interfering branches, just very nearly non-interfering ones.
• There will be no precise answers to some questions (such as, 'when did the splitting take place?'), just very accurate ones.
• Other questions (such as those concerning transtemporal identity, or identity between objects across branches) will not always have good answers at all, because they rely on concepts which, though practically very useful, sometimes break down.

Acknowledgements

… for useful discussions; and to an anonymous referee for drawing my attention to the argument presented at the end of section 3.
Molecular cytogenetics and development of St-chromosome-specific molecular markers of novel stripe rust resistant wheat–Thinopyrum intermedium and wheat–Thinopyrum ponticum substitution lines

Owing to their excellent resistance to abiotic and biotic stress, Thinopyrum intermedium (2n = 6x = 42, JJJsJsStSt) and Th. ponticum (2n = 10x = 70) are both widely utilized in wheat germplasm innovation programs. Disomic substitution lines (DSLs) carrying one pair of alien chromosomes are valuable bridge materials for transmission of novel genes, fluorescence in situ hybridization (FISH) karyotype construction and specific molecular marker development. Six wheat–Thinopyrum DSLs, derived from crosses between Abbondanza nullisomic lines (2n = 40) and two octoploid Trititrigia lines (2n = 8x = 56), were characterized by sequential FISH–genome in situ hybridization (GISH), multicolor GISH (mc-GISH), and analysis with the wheat 15 K SNP array combined with molecular marker selection. ES-9 (DS2St (2A)) and ES-10 (DS3St (3D)) are wheat–Th. ponticum DSLs, while ES-23 (DS2St (2A)), ES-24 (DS3St (3D)), ES-25 (DS2St (2B)), and ES-26 (DS2St (2D)) are wheat–Th. intermedium DSLs. ES-9, ES-23, ES-25 and ES-26 conferred high thousand-kernel weight and stripe rust resistance at adult stages, while ES-10 and ES-24 were highly resistant to stripe rust at all stages. Furthermore, cytological analysis showed that the alien chromosomes belonging to the same homoeologous group (2 or 3) but derived from different donors carried the same FISH karyotype and could form a bivalent. Based on specific-locus amplified fragment sequencing (SLAF-seq), two 2St-chromosome-specific markers (PTH-005 and PTH-013) and two 3St-chromosome-specific markers (PTH-113 and PTH-135) were developed. The six wheat–Thinopyrum DSLs conferring stripe rust resistance can be used as bridging parents for transmission of valuable resistance genes. The utility of PTH-113 and PTH-135 in a BC1F2 population showed that the newly developed markers could be useful tools for efficient identification of St chromosomes in a common wheat background.

Background

Intermediate wheatgrass (Thinopyrum intermedium Barkworth & D.R. Dewey, JJJsJsStSt, 2n = 6x = 42) and tall wheatgrass (Th. ponticum (Podp.) Barkworth & D.R. Dewey, 2n = 10x = 70) are important allopolyploids of Thinopyrum species. Because of their desirable tolerance to biotic and abiotic stresses, both have been widely used in wheat chromosome engineering for decades [1,2]. The chromosomal compositions of Th. intermedium and Th. ponticum have not been fully characterized. For Th. intermedium, the chromosomal composition is generally regarded as JJJsJsStSt [3] or JrJrJvsJvsStSt [4]. The subgenome J or Jr is highly homologous with genome J (Th. bessarabicum, Jb, Eb)/E (Th. elongatum, Je, Ee) [5], and the main controversy has been whether genome V, originating from Dasypyrum villosum (2n = 2x = 14, VV), was involved in the recombinant subgenome Js [6,7]. Additionally, it was determined that Th. intermedium contains a set of St chromosomes probably derived from diploid Pseudoroegneria spicata or P. strigosa (2n = 2x = 14, StSt) [8-10]. However, it is still unclear whether Th. ponticum contains St chromosomes, and whether the St genome or the J/E genome was affected by recombination during the allopolyploidization process [11,12].

Stripe rust (Puccinia striiformis f. sp. tritici, Pst) is a recurrent disease that causes serious annual decreases in wheat yields [28,29].
Development and transfer of novel resistance genes contained in related wild wheat species is one of the most efficient and environmentally friendly approaches to fighting stripe rust. According to previous studies, St chromosomes originating from Th. intermedium carry several new stripe rust resistance genes, which are potentially optimal genetic resources for wheat breeding. In addition to the named wheat–Th. intermedium DALs, L4 (DA4St) and L7 (DA6St), a DS1St (1D) line with stripe rust resistance was produced [30]. Moreover, a DA3St [31] and a DA7St [32] were characterized, both carrying stripe rust resistance gene(s). In our previous study, ES-12 (DS3St (3D)), containing chromosome 3St derived from Th. ponticum, also conferred stripe rust resistance [33]. However, at present no wheat–Thinopyrum 2St disomic substitution lines (DSLs) with stripe rust resistance have been reported.

Xiaoyan784 and Zhong4 are both significant octoploid Trititrigia lines conferring stripe rust resistance. Xiaoyan784 (2n = 8x = 56) was produced from distant hybridization between common wheat and Th. ponticum [34], while Zhong4 (2n = 8x = 56) was developed from distant hybridization between common wheat and Th. intermedium by Sun in 1965 [35]. Abbondanza nullisomic lines (2n = 40) were developed by Xue in 1991, and have been used as valuable plant materials to efficiently create alien substitution lines for several decades [36]. In the present study, four wheat–Th. intermedium DSLs, ES-23, ES-24, ES-25, and ES-26, were generated from crosses between Abbondanza nullisomic lines and Zhong4, followed by consecutive self-crosses over several years. Two wheat–Th. ponticum DSLs, ES-9 and ES-10, were derived from Xiaoyan784. Molecular cytogenetic analysis was used to determine and compare the genome compositions of the six alien lines. In addition, stripe rust resistance and the potential value of the morphological characteristics for wheat breeding were evaluated. Finally, St-chromosome-specific markers were developed by specific-locus amplified fragment sequencing (SLAF-seq) and validated. These markers could be useful tools for efficient identification of St chromosomes in a common wheat background.

Two oligonucleotide probes, pTa535 and pSc119.2, were combined for sequential FISH–GISH to simultaneously examine the elimination of wheat chromosomes in the six substitution lines. Comparisons of the FISH results between the substitution lines and the corresponding parent lines, Abbondanza, Zhong4, and Xiaoyan784, were conducted. Chromosome 2A was eliminated in ES-9 and substituted by one pair of Th. ponticum chromosomes with three specific signal bands, including terminal pTa535 hybridization sites detected on the short arms and long arms as well as an interstitial pTa535 signal on the long arms, which was different from the FISH patterns of the other wheat chromosomes (Fig. 1, b1). ES-10 lost chromosome 3D and contained one pair of Th. ponticum chromosomes carrying terminal pSc119.2 hybridization sites on the short arms, with terminal pTa535 hybridization segments on both the long arms and short arms (Fig. 1, b2). Wheat chromosomes 2A, 2B, and 2D were eliminated in ES-23 (Fig. 1, b3), ES-25 (Fig. 1, b5), and ES-26 (Fig. 1, b6), respectively, and replaced by the same pair of Th. intermedium chromosomes with FISH patterns identical to that detected in ES-9. Moreover, the telomeric region of chromosome 5B carrying a bright green fluorescence signal was eliminated in ES-25 compared with the other related materials.
For ES-24, chromosome 3D was substituted by a pair of Th. intermedium chromosomes with FISH patterns almost identical to those of the alien chromosomes detected in ES-10 (Fig. 1, b4). According to the multicolor GISH (mc-GISH) results, each of the six derived lines contained two alien chromosomes carrying a bright red fluorescence signal originating from the P. spicata (St) genomic DNA (Fig. 1, c1-c6). Combined with the sequential FISH–GISH results, ES-9 and ES-10 carried two different pairs of St chromosomes derived from Th. ponticum. ES-23, ES-25, and ES-26 contained the same pair of St chromosomes from Th. intermedium, which was different from the alien chromosomes of ES-24.

Wheat 15 K SNP array analysis of the six substitution lines

The chromosomal compositions of the six substitution lines were further determined based on genotype data from a wheat 15 K SNP array (Table S1). Generally, the number of common SNP sequences detected between the substitution lines and Abbondanza was much higher than that between the substitution lines and Th. ponticum or Th. intermedium. However, an obvious point of intersection was found in each of the substitution lines (Fig. 2a-f). For ES-9 (Fig. 2a), the intersection point was distinct in chromosome 2A, where ES-9 shared most SNP marker loci with Th. ponticum but few with Abbondanza. These results suggested that chromosome 2A in ES-9 was replaced by the alien chromosomes of Th. ponticum, consistent with the FISH result. In ES-10 (Fig. 2b), the intersection point was detected in chromosome 3D, where ES-10 shared most SNP marker loci with Th. ponticum but few with Abbondanza, suggesting that chromosome 3D of ES-10 was substituted by the pair of Th. ponticum chromosomes. In terms of the wheat–Th. intermedium alien lines, the intersection point was detected in chromosome 3D of ES-24, which shared most SNP marker loci with Th. intermedium. Thus, chromosome 3D of ES-24 was replaced by the pair of Th. intermedium chromosomes (Fig. 2d). The intersection points of ES-23, ES-25, and ES-26 were identified in chromosomes 2A (Fig. 2c), 2B (Fig. 2e), and 2D (Fig. 2f), respectively. Combined with the FISH results, it was revealed that chromosome 2A in ES-23, chromosome 2B in ES-25, and chromosome 2D in ES-26 were substituted by the same pair of Th. intermedium chromosomes.

PLUG marker analysis of the six substitution lines

The 135 PLUG markers were screened to validate the homoeologous groups of the alien chromosomes. Four PLUG markers (TNAC1142-HaeIII, TNAC1142-TaqI, TNAC1132-TaqI, and TNAC1140-TaqI) were mapped to the second homoeologous group in ES-9, ES-23, ES-25, and ES-26 (Fig. 3a-d, Table S2 and Fig. S2). Three pairs of primers (TNAC1326-HaeIII, TNAC1326-TaqI, and TNAC1359-TaqI) were distributed in the third homoeologous group in ES-10 and ES-24 (Fig. 3e-g, Table S2 and Fig. S2). Although the alien chromosomes belonging to the same homoeologous groups were derived from two different donors, identical FISH karyotypes of the alien chromosomes were detected between ES-23 and ES-9 (2St), as well as between ES-24 and ES-10 (3St). Additionally, the FISH result for Abbondanza is shown in Fig. 4g, and FISH pattern comparisons of the above materials are shown in Fig. 4h. Notably, ES-10 has a similar chromosome composition to our previously reported ES-12 [33], but the common wheat background differs in FISH karyotype (as shown in Figs. S3 and S4).
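The "intersection point" logic above reduces to a per-chromosome genotype concordance calculation: the substituted chromosome is the one where a line agrees with the alien donor rather than with its wheat parent. The sketch below illustrates this under stated assumptions; the genotype-call dictionaries and the marker-to-chromosome map are hypothetical inputs, not the actual array data format.

```python
# Minimal sketch: locate a substituted chromosome from SNP array genotypes.
# Inputs are hypothetical: dicts mapping SNP marker ID -> genotype call,
# plus a map of marker ID -> wheat chromosome.

from collections import defaultdict

def concordance_by_chromosome(line_calls, parent_calls, marker_chrom):
    """Percent of markers per chromosome with identical genotype calls."""
    same = defaultdict(int)
    total = defaultdict(int)
    for marker, call in line_calls.items():
        chrom = marker_chrom.get(marker)
        if chrom is None or marker not in parent_calls:
            continue
        total[chrom] += 1
        if call == parent_calls[marker]:
            same[chrom] += 1
    return {c: 100.0 * same[c] / total[c] for c in total}

def substituted_chromosome(line_calls, wheat_parent, alien_parent, marker_chrom):
    """The substituted chromosome shows low concordance with the wheat parent
    but high concordance with the alien donor (the 'intersection point')."""
    vs_wheat = concordance_by_chromosome(line_calls, wheat_parent, marker_chrom)
    vs_alien = concordance_by_chromosome(line_calls, alien_parent, marker_chrom)
    return max(vs_wheat, key=lambda c: vs_alien.get(c, 0.0) - vs_wheat[c])
```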
Evaluation of agricultural performance and resistance to stripe rust of the six substitution lines

The agronomic traits of the six substitution lines as well as their parents Abbondanza and Xiaoyan784 (Fig. 5) were compared. On average, ES-9 had more tillers and longer spikes than Abbondanza. In terms of the substitution lines derived from Zhong4, both ES-23 and ES-26 showed many more tillers, and the number of spikelets per spike in ES-26 was higher than in Abbondanza and Zhong4. Surprisingly, the average thousand-kernel weights of the alien lines containing chromosome 2St (ES-9, ES-23, ES-25, and ES-26) were more than 43 g. This indicated that chromosome 2St increased thousand-kernel weight whether originating from Th. ponticum or Th. intermedium. At the adult stage, a stripe rust reaction test of the six substitution lines was carried out with the susceptible control (HXH), and the infection type of each line was scored.

Table 1. Agronomic traits of the alien substitution lines ES-9 and ES-10, as well as their parents (Abbondanza, Xiaoyan784). Different letters (a, b, c) indicate significant differences between ES-9, ES-10 and the wheat parent (P < 0.05).

Meiotic chromosome pairing analysis of F1 hybrids

Crosses were made between the alien lines with the same genome compositions. Fifteen F1 plants were obtained from the cross between ES-9 and ES-23, and 11 F1 plants were obtained from the cross between ES-10 and ES-24. Meiotic chromosome pairing analysis of the F1 hybrids was conducted to further validate the related genome constitutions (Table 3). In most cells, the alien chromosomes derived from Th. intermedium and Th. ponticum but belonging to the same homoeologous group (2/3) formed a bivalent at metaphase I, and no trivalent or quadrivalent was observed at anaphase I of meiosis. These results further revealed the close homoeologous relationship between the alien chromosomes derived from the two different donors.

Utility of the 3St-chromosome-specific markers in a BC1F2 population

In order to validate that the all-stage stripe rust resistance gene(s) were carried by chromosome 3St, 60 BC1F2 individuals of ES-24 and HXH were used for a genetic analysis. The evaluation of stripe rust resistance at the seedling stage revealed that Zhong4, ES-24, and 33 of the BC1F2 individuals were highly resistant to Pst race CYR32 (Fig. 7a). Subsequently, ten resistant individuals as well as ten susceptible individuals were randomly selected for FISH analysis. Compared with the FISH karyotype of ES-24, the susceptible individuals had no detectable FISH pattern of chromosome 3St (Fig. 7b), while chromosome 3St was detected in all the resistant individuals (Fig. 7c). These results demonstrated that the novel stripe rust resistance gene(s) originated from the alien chromosome 3St of Th. intermedium. Furthermore, the specificity of PTH-113 and PTH-135 was confirmed by PCR analyses of the 60 BC1F2 individuals (Fig. 8 and Fig. S6). Combined with the results of the seedling stage stripe rust resistance evaluation, Xiaoyan784, Zhong4, ES-9, ES-24, and the 33 BC1F2 plants conferring strong resistance to Pst race CYR32 also carried the 3St-chromosome-specific markers. In contrast, the other 27 BC1F2 plants, the parental line Abbondanza, and the susceptible control HXH, without specific amplification, were seriously susceptible to Pst race CYR32. Hence, the newly developed chromosome-specific molecular markers could be used to rapidly trace the alien chromosome 3St in a common wheat background.
Discussion

FISH analysis, one of the most commonly used techniques, is generally applied together with GISH to discriminate alien chromosomes [33,46] and to detect genomic changes in specific regions [47-49]. In this study, after characterization by sequential FISH–GISH and mc-GISH analysis, specific karyotype patterns of chromosomes 2St and 3St derived from Th. intermedium and Th. ponticum were elucidated, which is useful for rapidly identifying the alien chromosomes in germplasm materials. Furthermore, chromosomal structural variation arising from the distant hybridization was observed in ES-25 by FISH. Compared with the parental lines, Abbondanza and Zhong4, the telomere and subtelomeric region of chromosome 5BS carrying a bright pSc119.2 hybridization signal were eliminated in ES-25, resulting in a FISH pattern similar to chromosome 2B of common wheat. Chromosome 2B is almost metacentric whereas chromosome 5B is distinctly submetacentric, so it was clear that chromosome 2B was the one replaced by chromosome 2St of Th. intermedium (Fig. 4h). Owing to the dynamic, high-frequency variation of subtelomeres in Triticeae species [50,51], it is difficult to identify the possible function(s) of the deleted region of chromosome 5BS. Because there were no severe effects on the viability of ES-25, elimination of the subtelomeric region presumably contributed to genome diversity [52].

The genomic composition of Th. ponticum and Th. intermedium has been an interesting subject for many years [53,54]. During the past several decades, it was determined that the set of St chromosomes contained in Th. intermedium was probably derived from P. spicata [55], whereas it was unclear whether the St genome is one of the chromosome sets of Th. ponticum [11,56]. According to the molecular cytogenetic identification results, ES-23 and ES-9 (group 1) contained the same genome composition of 12A + 14B + 14D + 2(2St), while ES-24 and ES-10 (group 2) had the same genome composition of 14A + 14B + 12D + 2(3St). Although the alien chromosomes were derived from two different donors, Th. intermedium and Th. ponticum, identical alien chromosome FISH patterns and similar specific agricultural performances were identified within each group of plant materials. This implies that St chromosomes are included in Th. ponticum, and can be stably inherited together with desirable genes.

Table 4. Specific amplification markers of chromosome 2St and chromosome 3St.

Furthermore, combined with the close homoeologous relationships between the alien chromosomes established by meiotic chromosome pairing and genomic polymorphism analyses, P. spicata was identified as the source of the complete set of St chromosomes that functioned directly during the speciation of Th. ponticum. However, further analyses are needed to determine the effects of the recombination events that occurred between the diverse genomes through allopolyploidization. In summary, based on the specific SLAFs obtained in this study, it was feasible to develop St-chromosome-specific molecular markers transferable from Th. intermedium to Th. ponticum.

Although FISH–GISH analysis has been widely utilized to precisely characterize wheat–Th. intermedium lines for several decades, it is very time-consuming. Specific molecular markers can rapidly trace alien chromosomes, or even small introgressed segments with advantageous traits, for wheat improvement breeding programs. However, the complete Th. intermedium genome has not been sequenced, so only a few chromosome-specific markers are available [57-59].
With the development of sequencing technology, the first consensus genetic map of Th. intermedium was developed by genotyping-by-sequencing [60]. Subsequently, 635 [9] and 745 [61] unique Th. intermedium SNP markers were successfully developed, including 225 St-chromosome-specific markers, among them 27 2St-chromosome-specific markers and 25 3St-chromosome-specific markers. Owing to the much more complex genomic composition of Th. ponticum, molecular marker development work has mainly focused on genome E [56,62,63], especially following publication of the complete genome of Th. elongatum [64]. In the present study, the wheat–Th. intermedium DSLs ES-23 (DS2St (2A)) and ES-24 (DS3St (3D)) were sequenced by SLAF-seq for further St-chromosome-specific marker development. Two 2St-chromosome-specific molecular markers, PTH-005 and PTH-013, as well as two 3St-chromosome-specific molecular markers, PTH-113 and PTH-135, were obtained. FISH analysis of the BC1F2 population of ES-24 and HXH, combined with a stripe rust resistance test (Fig. 7), confirmed that the stripe rust resistance gene(s) was (were) derived from chromosome 3St of Th. intermedium. The utility of PTH-113 and PTH-135 amplification in the BC1F2 individuals indicated that the St-chromosome-specific molecular markers can serve as useful tools for tracing chromosome 3St of Th. intermedium in a common wheat background. In addition, given the close genetic relationship between the alien chromosomes of Th. ponticum and Th. intermedium established in this study, the four St-chromosome-specific markers could be simultaneously amplified in Th. ponticum, tetraploid P. spicata, Th. intermedium, and diploid P. spicata, as well as in the corresponding substitution lines ES-9, ES-23, ES-10, and ES-24. These results suggested that the four St-chromosome-specific markers could also be utilized to rapidly detect the St genome chromosomes of Th. ponticum. The remarkable stripe rust resistance of ES-24 and ES-10 thus probably originated from the same gene(s), but this needs to be validated in future genetic analyses.

Conclusions

Four wheat–Th. intermedium and two wheat–Th. ponticum DSLs conferring stripe rust resistance were characterized and compared by molecular cytogenetic analysis, and can be used as bridging parents for transmission of valuable resistance genes. Furthermore, according to the related homoeologous relationships, two 2St-chromosome-specific and two 3St-chromosome-specific molecular markers were developed by SLAF-seq for rapidly detecting the alien chromosomes of Th. intermedium and Th. ponticum in a common wheat background.

Plant materials

The plant materials included Th. intermedium; the wheat–Th. intermedium DALs were developed via hybridization between Abbondanza nullisomic lines and Zhong4, including DA1St, DA2St, DA3St, DA5St, and DA7St (unpublished data). All the above-mentioned plant materials were preserved at the College of Agronomy, Northwest A&F University, China. HXH served as a susceptible control in the stripe rust resistance evaluation. The Pst race CYR32 was used for the seedling stage stripe rust resistance evaluation, and a CYR31 and CYR32 mixture was used for the adult stage evaluation. All the Pst races were provided by the College of Plant Protection, Northwest A&F University, China.

In situ hybridization

Chromosome spreads prepared by the drop method [14] were used for in situ hybridization analyses. The protocols for genomic DNA extraction, sequential FISH–GISH, and mc-GISH followed Wang et al. [33].
Total genomic DNA of Th. bessarabicum, Th. intermedium, and Th. ponticum was labeled with fluorescein-12-dUTP by nick translation, while St genomic DNA from diploid and tetraploid P. spicata was labeled with Texas Red-5-dUTP; these were used as GISH and mc-GISH probes. Sheared genomic DNA of CS was used as blocking DNA. The oligonucleotide probe combination of Oligo-pTa535 (red) and Oligo-pSc119.2 (green) was used for FISH analyses. Hybridization signals were observed and acquired with an Olympus BX53 fluorescence microscope.

Wheat 15 K SNP array analysis

Wheat 15 K SNP genotyping arrays were used to genotype nine samples, including Abbondanza, ES-9, ES-10, ES-23, ES-24, ES-25, ES-26, Th. ponticum, and Th. intermedium, using Illumina SNP genotyping technology (China Golden Marker Biotechnology Company). The wheat 15 K array contains 13,199 SNP loci distributed across all 21 wheat chromosomes. The percentage of identical genotypes on each chromosome between two materials was obtained by dividing the number of loci with the same genotype by the total number of markers. The software Origin (OriginLab, USA) was used for data analysis and graphing.

Agronomic traits and stripe rust resistance evaluation

The stripe rust resistance evaluation was conducted annually in the field at the adult stage, while the seedling stage test was conducted in the greenhouse in 2020 and 2021. In 2020, a mixture of Pst races CYR31 and CYR32 was used to evaluate the adult plant resistance of Abbondanza, ES-9, ES-10, ES-23, ES-24, ES-25, ES-26, Xiaoyan784, and Zhong4, with HXH as the susceptible control. Ten plants of each material were evaluated and scored. For further genetic analyses of the resistance, Pst race CYR32 was used to inoculate the above-mentioned materials at the seedling stage in 2020 and 2021 with two replicates (five plants of each material were planted and evaluated per replicate), while the BC1F2 population individuals of ES-24 and HXH were tested in 2021 without replication. The infection type (IT) was scored on a 0-4 scale [66]. To assess the morphological traits, ten plants of each material (Abbondanza, ES-9, ES-10, ES-23, ES-24, ES-25, ES-26, Xiaoyan784, and Zhong4) at the physiological maturity stage were randomly selected during the 2019-2020 growing season. Seven agronomic traits were recorded in the field: plant height, spike length (main spike), number of spikelets per main spike, number of tillers, number of seeds per main spikelet, awnedness, and thousand-kernel weight. Significant differences in each agronomic trait were analyzed by Duncan's multiple range test (P < 0.05).

Meiotic chromosome pairing analysis of the F1 hybrids

Young spikes of F1 hybrids derived from the two crosses (ES-9 × ES-23 and ES-10 × ES-24) were collected at the appropriate stages under field conditions and immediately treated with Carnoy's fixative II (6:3:1 ethanol-chloroform-glacial acetic acid). Before cytological observation of pollen mother cells, anthers were extracted and stained with 1% acetocarmine. Chromosome configurations at meiosis were observed, recorded, and photographed.

Genomic polymorphism analysis by pairwise comparisons

On the basis of SLAF-seq [67], genomic DNA of Abbondanza, ES-9, ES-10, ES-23, ES-24, Th. intermedium, and Th. ponticum was sequenced by Biomarker Technologies Co. (Beijing, China). The restriction endonuclease HaeIII was selected to digest the genomic DNA.
According to sequence similarity, the filtered SLAF paired-end reads (150 bp per read) were clustered. Using BLAST, sequences with over 90% identity were grouped into one SLAF locus. Genomic polymorphism analyses were conducted in two groups: ES-9 and ES-23 (group 1), and ES-10 and ES-24 (group 2). First, all the SLAFs from the two groups were BLASTed against the wheat genome, and sequences with high wheat homology (over 80%) were removed. Second, the remaining SLAFs were further BLASTed against the sequences of Th. ponticum or Th. intermedium. SLAFs with high identity (over 90%) were retained and served as specific sequences of Th. ponticum attributed to ES-9 and ES-10, or as specific sequences of Th. intermedium attributed to ES-23 and ES-24. Finally, intercomparisons within groups were conducted and the specific SLAFs with high identity (over 90%) were acquired.

Development and validation of the St-chromosome-specific markers

Based on the specific SLAFs obtained, PCR primers were designed to amplify the two groups of materials. All primers were designed using the online tool Primer3Plus (http://www.bioinformatics.nl/cgi-bin/primer3plus/primer3plus.cgi) and synthesized by AuGCT DNA-SYN Biotechnology Co. (Beijing, China). The amplified products were examined by 2% agarose gel electrophoresis. Markers that amplified specific sequences in tetraploid P. spicata, diploid P. spicata, Th. ponticum, Th. intermedium, DA2St, ES-9, and ES-23, but not in CS, Abbondanza, Th. bessarabicum, Th. elongatum, or the 1St and 3-7St addition lines, served as 2St-chromosome-specific molecular markers. Markers present in ES-10 and ES-24, but absent in the 1-2St and 4-7St addition lines, served as 3St-chromosome-specific molecular markers. Subsequently, the 3St-chromosome-specific markers were applied to the BC1F2 individuals of ES-24 and HXH for further genetic analysis.
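The two-step identity filter described above is easy to express programmatically. The following sketch assumes BLAST hit tables (outfmt 6 style) have already been produced; the file names, parsing helper, and the final ID intersection are illustrative only (the paper intersected sequences at ≥90% identity rather than by shared IDs), though the thresholds mirror the description above.

```python
# Minimal sketch of the SLAF filtering logic: drop SLAFs with high wheat
# homology, keep those with high identity to the Thinopyrum donor.
# File names are hypothetical placeholders, not from the paper's pipeline.

def best_identities(blast_tab_path):
    """Map query SLAF ID -> best percent identity from a tab-separated
    BLAST table (columns: qseqid, sseqid, pident, ...)."""
    best = {}
    with open(blast_tab_path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            qid, pident = fields[0], float(fields[2])
            best[qid] = max(best.get(qid, 0.0), pident)
    return best

def donor_specific_slafs(vs_wheat_tab, vs_donor_tab,
                         wheat_cutoff=80.0, donor_cutoff=90.0):
    wheat_id = best_identities(vs_wheat_tab)
    donor_id = best_identities(vs_donor_tab)
    return {q for q, pid in donor_id.items()
            if pid >= donor_cutoff and wheat_id.get(q, 0.0) < wheat_cutoff}

# Intercomparison within group 1 (ES-9, ES-23); here simplified to shared
# SLAF IDs from a joint catalogue. Surviving sequences are candidates for
# St-chromosome-specific primer design.
group1 = donor_specific_slafs("es9_vs_wheat.tab", "es9_vs_ponticum.tab") & \
         donor_specific_slafs("es23_vs_wheat.tab", "es23_vs_intermedium.tab")
```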
pH-Responsive nanocomposite fibres allowing MRI monitoring of drug release

Magnetic resonance imaging (MRI) is one of the most widely used non-invasive clinical imaging tools, producing detailed anatomical images whilst avoiding side effects such as trauma or X-ray radiation exposure. In this article, a new approach to non-invasive monitoring of drug release from a delivery vehicle via MRI was developed, using pH-responsive Eudragit L100 and S100 fibres encapsulating superparamagnetic iron oxide nanoparticles (SPIONs) and carmofur (a drug used in the treatment of colon cancer). Fibres were prepared by electrospinning, and found to be smooth and cylindrical with diameters of 645 ± 225 nm for L100 and 454 ± 133 nm for S100. The fibres exhibited pH-responsive dissolution behaviour. Around the physiological pH range, clear pH-responsive proton relaxation rate changes due to matrix swelling/dissolution can be observed: r2 values of L100 fibres increase from 29.3 ± 8.3 to 69.8 ± 2.5 mM⁻¹ s⁻¹ over 3 h immersion in a pH 7.4 medium, and from 13.5 ± 2.0 mM⁻¹ s⁻¹ to 42.1 ± 3.0 mM⁻¹ s⁻¹ at pH 6.5. The r2 values of S100 fibres grow from 30.4 ± 4.4 to 64.7 ± 1.0 mM⁻¹ s⁻¹ at pH 7.4, but at pH 6.5, where the S100 fibres are not soluble, r2 remains very low (< 4 mM⁻¹ s⁻¹). These dramatic changes in relaxivity demonstrate that pH-responsive dissolution results in SPION release. In vitro drug release studies showed the formulations gave rapid release of carmofur at physiological pH values (pH 6.5 and 7.4), and acid stability studies revealed that they can protect the SPIONs from digestion in acid environments, giving the fibres potential for oral administration. Exploration of the relationship between relaxivity and carmofur release suggests a linear correlation (R² > 0.94) between the two. Mathematical equations were developed to predict carmofur release in vitro, with very similar experimental and predicted release profiles obtained. Therefore, the formulations developed herein have the potential to be used for non-invasive monitoring of drug release in vivo, and could ultimately result in dramatic reductions to off-target side effects from interventions such as chemotherapy.

Introduction

Among a wide variety of available clinical imaging techniques, magnetic resonance imaging (MRI) stands out for its ability to achieve high spatial and temporal resolution without the use of ionizing radiation. 1,2 In order to provide early and precise diagnosis, contrast agents (CAs) are generally employed to enhance resolution and contrast in MRI. 3 Superparamagnetic iron oxide nanoparticles (SPIONs) represent an important class of MRI CA because of their size-dependent magnetic behaviour. 4 By creating local magnetic field gradients, SPIONs can significantly decrease proton transverse relaxation times (T2), boosting signal contrast. 5,6 Their relatively low cytotoxicity and ability to be metabolized by normal biochemical pathways, alongside their unique magnetic properties, also make SPIONs useful for a range of biological applications, including hyperthermia and magnetic targeting. 7-10 For example, SPIONs have been explored for the targeted delivery of chemotherapeutics or for local temperature-induced apoptosis. 11 Recent studies into the co-delivery of drugs and SPIONs showed excellent targeting efficiency and MRI contrast coupled with minimal toxicity. 12,13
However, few SPION-based systems have received regulatory approval (e.g., GastroMARK®, Feridex®, Resovist® and Feraheme®), and a number of those that were approved have subsequently been withdrawn from the market. 14,15 SPIONs are highly sensitive to external conditions: for instance, they can be oxidised in an acidic environment, and they have a tendency to aggregate. 16 Hence, for biological applications, SPIONs are often coated or surface-modified with a biocompatible polymer, such as polyethylene glycol (PEG), poly(lactic-co-glycolic acid) (PLGA) or polyvinyl alcohol (PVA), or with natural materials such as dextran, heparin, gelatin or chitosan. 17,18 This approach provides protection from aggregation or degradation, and also offers the opportunity for multifunctionality, such as the loading of a therapeutic active ingredient or targeting to a specific location. 19

Electrospinning is a straightforward technique which can produce polymer-based nanoscale fibres via the application of an electrical field to a polymer solution. 20 It has been widely explored to produce materials for a variety of fields, including tissue engineering, biosensors, wound dressings, and drug delivery. 21-23 Only a few studies have probed the incorporation of SPIONs into electrospun fibres for biological applications, but there is clear promise: Huang et al. reported polystyrene fibres with a high loading capacity of SPIONs, with the resultant formulation effective in killing cancer cells via magnetic hyperthermia. 24 In other work, Wang et al. revealed that drug-loaded hydroxypropyl methyl cellulose phthalate and cellulose acetate fibres encapsulating SPIONs demonstrated superparamagnetism at room temperature, indicating the feasibility of magnetic-field-induced release. 25 Exploitation of the MRI activity of SPIONs for monitoring drug release from electrospun fibres, on the other hand, has never previously been reported, despite its clinical utility and promise.

The environment surrounding SPIONs is crucial to their MRI signal boosting capabilities, with diffusive water access to nearby particle surfaces providing strong signal enhancement. SPIONs encapsulated within electrospun fibres are therefore expected to show significantly lower proton relaxation rate enhancement than non-encapsulated particles. This difference in signal could be exploited as a mechanism for monitoring the dissolution of the fibres, and for quantifying both SPION and loaded active ingredient release. Zhu et al. recently reported PLGA nanoparticles loaded with the anti-cancer drug doxorubicin and SPIONs, and found these to be potent for non-invasive MRI monitoring of drug delivery both in vitro and in vivo after intratumoral injection. 26 We were interested here to develop a formulation which could be given orally, rather than requiring an invasive injection for delivery. To this end, SPIONs and carmofur (an adjuvant chemotherapy for colon cancer, employed here as a model anticancer drug) were loaded into pH-responsive fibres prepared via electrospinning, to permit both pH-responsive drug delivery and concurrent MRI-based monitoring of drug release for application in the small intestine and colon. Two pH-responsive polymers (Eudragit L100 and S100, methacrylic acid/methyl methacrylate copolymers which are only soluble in water at pH > 6.0 or > 7.0, respectively) were used to form the fibre filaments.
These polymers can protect the SPIONs from the acidic conditions encountered in the gastric fluid following oral administration, and later dissolve to release carmofur and expose the SPIONs (Scheme 1). The fibres were fully characterised, and their drug release and proton relaxivities investigated in detail.

Results and discussion

Physical and structural properties

SPIONs were initially synthesised by co-precipitation and stabilised with polyvinylpyrrolidone (PVP) to prevent aggregation. The as-prepared PVP-stabilised SPIONs (PVP-SPIONs) possessed a mean size of 8.5 ± 2.7 nm, as determined by transmission electron microscopy (TEM; Fig. 1a and b). The X-ray diffraction (XRD) patterns of the SPIONs and PVP-stabilised SPIONs are shown in Fig. 1c. Thermogravimetric analysis (TGA; Fig. S1, ESI†) revealed a weight loss of 3.4 wt% between 40 and 170 °C, due to the removal of physisorbed water, and a weight loss of 6.2 wt% between 170 and 500 °C which can be attributed to the degradation of the PVP stabiliser. SPIONs therefore comprise around 90 wt% of the PVP-SPIONs mass.

PVP-SPIONs and carmofur were encapsulated within pH-responsive Eudragit L100 or S100 nanofibres via electrospinning. Carmofur is a clinically approved antineoplastic agent used to treat breast and colorectal cancer. It is an oral derivative of fluorouracil and is metabolised to 5-fluorodeoxyuridine monophosphate in vivo, interfering with RNA and DNA synthesis. 27,28 L100 and S100, anionic copolymers based on methacrylic acid and methyl methacrylate, are only soluble in water at pH > 6.0 or > 7.0, respectively. The resulting L100/Carmofur/SPION and S100/Carmofur/SPION composite fibres have uniform linear morphologies and smooth surfaces (Fig. 1d-g), with mean fibre diameters of 645 ± 225 and 454 ± 133 nm respectively (Fig. S2a, ESI† and Fig. 2b). The smaller diameter of the S100/Carmofur/SPION composites is likely due to the lower polymer concentration used for electrospinning (10% and 12% w/v for Eudragit S100 and Eudragit L100, respectively). This difference in solution concentration was necessary because of the gel-like consistency of Eudragit S100 at high concentrations, which can clog the needle.

XRD patterns (Fig. 1c) of the fibres show the characteristic reflections of cubic iron oxide, indicating successful incorporation of the PVP-SPIONs. The polymer raw materials are amorphous, displaying only a broad background between 10 and 30° 2θ (see Fig. S3, ESI†). The characteristic reflections of carmofur (Fig. S3, ESI†) are not observed in the fibres' XRD patterns, demonstrating that it is likely present in amorphous form in the electrospun composite, owing to the very rapid drying which occurs during electrospinning. 29 The XRD findings are supported by differential scanning calorimetry (DSC) analysis (Fig. 2a), where pure carmofur clearly exists as a crystalline material with two sharp endothermic melting peaks visible at ca. 114 and 115 °C, consistent with the literature. 30 The raw Eudragit polymers display a broad endothermic peak between 60 and 120 °C (Fig. S4, ESI†), attributed to the loss of adsorbed water. No events can be observed in the DSC profile for PVP-SPIONs (Fig. 2a). S100/Carmofur/SPION and L100/Carmofur/SPION fibres show broad, shallow endotherms between 40 and 80 °C, which can be ascribed to loss of solvent (ethanol or DMAc, employed during preparation, or adsorbed water).
The absence of the carmofur melting endotherm in the DSC curves of the drug- and PVP-SPION-loaded fibres confirms that it is present as an amorphous solid dispersion. 30

TGA curves indicate the presence of both the SPIONs and carmofur within the fibres. S100/Carmofur/SPION and L100/Carmofur/SPION fibres display multistage decomposition (Fig. 2b). The small mass loss of about 3% before 110 °C for both composite fibres can be attributed to solvent loss (e.g., physisorbed water). Two subsequent decomposition steps, between 100-160 °C and 220-300 °C, can be attributed primarily to loss of the loaded carmofur (TGA data for pure carmofur are shown in Fig. S5a, ESI†). A small mass loss between 170 and 500 °C is caused by removal of PVP from the loaded PVP-SPIONs (it is coincident with the mass loss in the PVP-SPIONs TGA trace in Fig. S1a, ESI†). The final stage of decomposition, between 330 and 460 °C, causes a weight loss of around 54%, and arises mainly from degradation of the Eudragit (see Fig. S5a for the raw Eudragit TGA data). From the TGA of the fibres in three independent measurements (Fig. 2b and Fig. S5b, c, ESI†), we can calculate that the iron oxide content is around 20% w/w (17.3 ± 0.34% w/w for L100/Carmofur/SPION and 20.6 ± 0.26% for S100/Carmofur/SPION; mean ± S.D., n = 3). This is slightly higher than the theoretical loading (15.4% w/w for L100 and 17.9% w/w for S100 fibres). The discrepancy arises because a small proportion of residual decomposition products (expected to comprise carbon; around 3%) remains at 500 °C. The carmofur loadings of the L100/Carmofur/SPION and S100/Carmofur/SPION fibres were measured by UV-vis spectroscopy and calculated to be 7.5 ± 0.4% and 8.0 ± 0.4% (mean ± S.D., n = 3), with encapsulation efficiencies of 96.1 ± 4.9% and 100.1 ± 5.4% respectively (mean ± S.D., n = 3).

Fourier-transform infrared (FTIR) spectra of the L100/Carmofur/SPION and S100/Carmofur/SPION fibres, as well as of the raw materials, are given in Fig. S6 (ESI†). Raw carmofur displays bands at 1660-1720 cm⁻¹ and at 1495 cm⁻¹ arising from C=O stretching vibrations. Eudragit L100 and S100 are both copolymers of methacrylic acid and methyl methacrylate, and share similar spectra, with a characteristic stretch at 1727 or 1726 cm⁻¹ from C=O vibrations of esterified carboxylic groups, as well as stretches corresponding to ester groups between 1148 and 1251 cm⁻¹. Bands between 2900 and 2990 cm⁻¹ can be attributed to the stretching vibrations of methylene groups. For the S100/Carmofur/SPION and L100/Carmofur/SPION fibres, the characteristic stretches of the polymer can be clearly identified, but with shifts in their positions. The L100 and S100 C=O stretching vibrations at 1726 or 1727 cm⁻¹ in the raw polymers move to 1721 or 1718 cm⁻¹ respectively in the fibres. These bands also become broader, as they incorporate the carmofur stretches between 1666-1720 cm⁻¹. This suggests the successful incorporation of carmofur in the fibres. 29

pH responsive properties

Non-encapsulated SPIONs are sensitive to highly acidic environments such as those found in the stomach, resulting in their oxidation, eventual dissolution, and loss of magnetic properties. Their encapsulation in Eudragit-based fibres is expected to overcome this issue, with the polymers providing protection from acidic environments owing to their lack of solubility at low pH. 31
In order to test their stability, 10 mg of S100/Carmofur/SPION or L100/Carmofur/SPION fibres were incubated in 25 mL of pH 1.5 aqueous HCl solution, similar to the pH of gastric fluids. 32 To compare their acid stability with that of bare nanoparticles, 2 mg of PVP-SPIONs was subjected to the same treatment. The release of Fe ions was measured using an o-phenanthroline colorimetric assay after incubation at 37 °C for 2 h, which mimics the gastric transit time. The λmax of the ferrous tris-o-phenanthroline complex formed from dissolved iron and o-phenanthroline is around 512 nm at neutral pH. Fig. 3a shows that the [Fe] released from both types of fibres (green and blue lines) after 2 h was below that of a control FeCl3 aqueous solution ([Fe] = 1 mg L⁻¹, black line), indicating that less than 2 wt% of the total SPION content was degraded in the loaded fibres. This concentration is significantly lower than the [Fe] released from bare PVP-SPIONs at equivalent concentrations (between 1-10 mg L⁻¹, equating to up to 17 wt% degradation; purple line). These results show that the Eudragit coating can protect the SPIONs from degradation in an acidic environment. Photographs (Fig. S7, ESI†) and SEM images (Fig. 3b-e) of the loaded fibres following acid incubation demonstrate the stability of the formulations: the morphology of the fibres appears largely unaffected after exposure to the acidic conditions, and the fibre size also remains similar, at 612 ± 227 nm for the L100/Carmofur/SPION fibres and 521 ± 166 nm for the S100/Carmofur/SPION fibres (Fig. S2c and d, ESI†).

Fig. 3 The stability of PVP-SPIONs and the fibres after immersion at pH 1.5 for 2 h. (a) The results of colorimetric assays to determine [Fe] in solution for PVP-SPIONs (purple), S100/Carmofur/SPION (blue) or L100/Carmofur/SPION (green) fibres with two control FeCl3 aqueous solutions (black and red, 1 and 10 mg L⁻¹ respectively); SEM images of (b), (c) S100/Carmofur/SPION and (d), (e) L100/Carmofur/SPION fibres after immersion in pH 1.5 aqueous HCl for 2 h.
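For readers reproducing the o-phenanthroline assay above, converting absorbance to a released-iron percentage is a linear Beer-Lambert calibration. The sketch below is illustrative only: the calibration points are hypothetical placeholders, and the Fe mass fraction assumes the SPIONs are stoichiometric Fe3O4.

```python
# Minimal sketch: quantify released Fe from the o-phenanthroline assay by a
# linear calibration at ~512 nm. Calibration absorbances are hypothetical
# placeholders, not values from the paper.

import numpy as np

cal_conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])      # [Fe] standards, mg/L
cal_abs = np.array([0.00, 0.11, 0.22, 0.55, 1.10])   # A(512 nm), assumed

slope, intercept = np.polyfit(cal_conc, cal_abs, 1)  # A = slope*[Fe] + b

def fe_released_percent(sample_abs, spion_mass_mg, volume_L, fe_frac=0.724):
    """Percent of the SPION iron content found in solution.
    fe_frac ~ mass fraction of Fe in Fe3O4 (3*55.85/231.53 = 0.724)."""
    fe_mg_per_L = (sample_abs - intercept) / slope
    released_mg = fe_mg_per_L * volume_L
    total_fe_mg = spion_mass_mg * fe_frac
    return 100.0 * released_mg / total_fe_mg

# Example: 2 mg of PVP-SPIONs in 25 mL, as in the stability test above.
print(fe_released_percent(sample_abs=0.08, spion_mass_mg=2.0, volume_L=0.025))
```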
In order to mimic the conditions encountered during oral delivery, where materials are likely to pass through a range of pH environments (gastric pH is highly acidic (pH 1.0-2.5), while the mean pH values in the proximal small intestine, colon and terminal ileum are 6.6, 7.0 and 7.5), 33 drug release experiments were carried out at different pHs. Initially, S100/Carmofur/SPION and L100/Carmofur/SPION fibres were incubated at pH 1.5 (aqueous HCl solution) for 1 h, and then transferred to pH 6.5 or 7.4 PBS buffer. Carmofur release was monitored by UV-vis measurement of the supernatant after incubation (λ = 262 nm, Fig. 4). Both sets of fibres released around 35% of their drug content at pH 1.5 within the first hour. After introduction to PBS buffer, the L100/Carmofur/SPION fibres exhibit similar release patterns at both pH 6.5 and 7.4, with more than 90% release by 3 h and 100% release after 24 h. For the S100/Carmofur/SPION fibres, carmofur release reaches 80% after 3 h at pH 7.4, while it takes longer (approximately 5 h) to reach this point at pH 6.5. This is due to the different pH values at which L100 and S100 become water soluble. The polymer matrix in the S100 formulation remains insoluble at pH 6.5, and the drug can only reach the dissolution medium through diffusion or swelling of the polymer and the permeation of water into the centre of the fibres. At pH 7.4, S100 is water soluble and hence drug release is mediated by polymer dissolution. In contrast, L100 is water soluble at both pH 6.5 and 7.4, and therefore the release profiles are essentially identical at both pH values.

Dynamic light scattering (DLS) size distribution profiles further support the pH-sensitive dissolution properties of the fibres. After the 24 h drug release study, the particle size of the solid material in the dissolution medium was measured, with the results shown in Fig. S8 and Table S1 (ESI†). At pH 6.5 and 7.4, the L100/Carmofur/SPION fibres fully dissolve after 24 h and yield dispersions with mean particle sizes of 591 ± 32 nm and 380 ± 11 nm respectively (n = 3), suggesting the amphiphilic copolymer might form micellar structures composed of a hydrophobic core and a shell bearing ionized carboxylate units. 34 In contrast, the mean particle size of L100/Carmofur/SPION fibres dispersed at pH 5.5 for 24 h was significantly higher (3049 ± 41 nm), suggesting the polymer fibres remain intact and do not dissolve. S100/Carmofur/SPION composites displayed analogous DLS results, with a mean size of 458 ± 31 nm at pH 7.4, indicating dissolution, and 2959 ± 176 nm at pH 6.5, indicating that no dissolution takes place (n = 3).

To probe the mechanism of drug release, the Peppas model (eqn (1)) was fitted to the drug release data: 35

Qt = Mt/M∞ = k·t^m   (1)

where Mt/M∞ (Qt) represents the extent of drug release, t is the elapsed time, k is a rate constant, and m gives information related to the mechanism of release. 35 The first 60% of the release data fit well with the model, as shown in Table 1 and Fig. S9 (ESI†). All the exponents at both pH values lie in the range 0.45 to 0.89, indicating drug release occurs through a combination of matrix swelling and drug diffusion. 35

SPIONs are typically T2 contrast agents, providing negative contrast by decreasing the transverse relaxation time of local protons. To explore the efficiency of their contrast behaviour when encapsulated within the composite fibres, the relaxivity (r2) of L100/Carmofur/SPION and S100/Carmofur/SPION fibres was initially measured in pH 7.4 PBS with 0.1% w/v xanthan gum and calculated according to eqn (2):

r2 = (R2,obs − R2,sol)/[CA]   (2)

where r2 is the relaxivity, R2,obs is the observed transverse relaxation rate of the agent in aqueous suspension (R2 = 1/T2), R2,sol is the relaxation rate of the blank solvent system (i.e., in the absence of contrast agent) and [CA] is the mM concentration of the contrast agent in suspension, as measured by inductively coupled plasma mass spectrometry (ICP-MS).
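Both calculations above are straightforward to reproduce numerically. The sketch below shows a fit of the Peppas model (eqn (1)) to the first 60% of a release profile, and the relaxivity of eqn (2) from T2 measurements; the release points and T2 values are hypothetical placeholders, not the measured data.

```python
# Minimal sketch of eqns (1) and (2). Numbers are hypothetical placeholders;
# only the formulas come from the text.

import numpy as np
from scipy.optimize import curve_fit

# --- eqn (1): Peppas model, fitted to the first 60% of release ---
t = np.array([0.25, 0.5, 1.0, 2.0, 3.0])      # time, h (assumed)
q = np.array([0.12, 0.22, 0.35, 0.52, 0.60])  # fraction released (assumed)

peppas = lambda t, k, m: k * t**m
mask = q <= 0.60                               # model valid up to Q = 60%
(k, m), _ = curve_fit(peppas, t[mask], q[mask], p0=(0.3, 0.5))
print(f"k = {k:.3f}, m = {m:.2f}")  # 0.45 < m < 0.89 -> swelling + diffusion

# --- eqn (2): transverse relaxivity from T2 measurements ---
def relaxivity_r2(T2_obs_s, T2_solvent_s, conc_mM):
    """r2 = (R2,obs - R2,sol) / [CA], with R2 = 1/T2, in mM^-1 s^-1."""
    return (1.0 / T2_obs_s - 1.0 / T2_solvent_s) / conc_mM

print(relaxivity_r2(T2_obs_s=0.020, T2_solvent_s=1.5, conc_mM=0.7))
```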
When the fibres were first immersed in the buffer, the initial relaxivity value (measured after 10 min suspension at pH 7.4) was low (10.6 ± 1.9 mM⁻¹ s⁻¹ for L100/Carmofur/SPION, 12.0 ± 3.6 mM⁻¹ s⁻¹ for S100/Carmofur/SPION). The r2 values increased with time as the fibres remained suspended in buffer, due to the dynamic process of matrix dissolution/swelling allowing water molecules access to the SPIONs, boosting diffusive water access and hence enhancing their relaxation rates and relaxivities. Therefore, instead of measuring a single r2 value, the fibres' r2 relaxivity was monitored as a function of time during incubation in PBS of different pH (6.5 or 7.4) with 0.1% w/v xanthan gum at 37 °C. The resultant r2 relaxivity profiles can help to determine whether the MRI signal could be utilised as a mechanism for monitoring fibre dissolution/swelling, and hence carmofur and SPION release. Fig. 5a displays the transverse relaxivity (r2) profiles as a function of immersion time at different pH.

Fig. 4 Plots showing the release of carmofur from the S100/Carmofur/SPION and L100/Carmofur/SPION fibres, as measured by UV-vis spectroscopy (λ = 262 nm). Data are given from three independent experiments as mean ± S.D.

Table 1 The results of fitting the Peppas model to carmofur release from the fibres, constructed from the plots in Fig. S9.

Due to the very low initial relaxivity values, profiles were monitored from 10 min immersion onwards. For L100/Carmofur/SPION fibres, the transverse relaxivity increased rapidly and displayed pH-responsive behaviour. At pH 7.4, r2 rose from 29.3 ± 8.3 mM⁻¹ s⁻¹ after 10 min to 66.9 ± 2.7 mM⁻¹ s⁻¹ over 40 min, subsequently reaching 69.8 ± 2.5 mM⁻¹ s⁻¹ after 3 h, due to dissolution/swelling and concomitant water access boosting relaxation rates as described above. At pH 6.5, the starting relaxivity value at 10 min (13.5 ± 2.0 mM⁻¹ s⁻¹) was much lower than that measured at pH 7.4, possibly because the L100 polymer is more hydrophilic at elevated pH owing to the ionisation of its carboxylic acid groups. This facilitates diffusive water access to the SPIONs and thus promotes transverse relaxation in the suspension. 34 The relaxivity increase is also slower at pH 6.5, with r2 reaching 42.1 ± 3.0 mM⁻¹ s⁻¹ after 3 h. After 3 h of immersion, all the L100/Carmofur/SPION fibres had dissolved at both pH values, and the resultant solutions were clear (Fig. S10, ESI†). However, the relaxivity value at pH 6.5 is notably lower than that at pH 7.4 after 3 h. This can be attributed to the larger particle size of the dissolved L100/Carmofur/SPION formulation at pH 6.5 (Fig. S8, ESI†). The amphiphilic polymer might form micelle structures containing the SPIONs at pH 6.5, with reduced diffusive water access meaning lessened relaxivity enhancement, whereas at pH 7.4 the smaller particle size results in a higher surface area to volume ratio and a greater surface area for SPION-water interactions.

The S100/Carmofur/SPION fibres displayed a similar relaxivity profile with respect to time at pH 7.4, with r2 values of 30.4 ± 4.4 mM⁻¹ s⁻¹ after 10 min and 64.7 ± 1.0 mM⁻¹ s⁻¹ after 3 h. The r2 values are overall slightly lower than those observed for L100/Carmofur/SPION at the same pH, which again can be attributed to the hydrodynamic size in the suspension following dissolution. Plotting the r2 values against the hydrodynamic diameters measured earlier (Fig. S8 and S11, ESI†) reveals a clear inverse correlation. This is consistent with the literature, where smaller particles with increased surface areas allow improved water access to the magnetic components, leading to boosted relaxivities. 17 As previously noted, the S100-based fibres are insoluble at pH 6.5 (see Fig. S8 and S10, ESI†), and thus the r2 value remains low throughout the experimental period as the SPIONs remain encapsulated, preventing their effective magnetic interaction with diffusing water protons.

According to the relaxivity profiles, it is clear that L100/Carmofur/SPION composite fibres dissolve at pH 6.5 and 7.4, and S100/Carmofur/SPION fibres dissolve at pH 7.4 only, resulting in the release and potential micellisation of SPIONs and hence increasing relaxivity through increased diffusive water access to the magnetic centres. On the other hand, the S100/Carmofur/SPION fibres do not dissolve at pH 6.5 (though they show some evidence of swelling), and extended immersion at this pH has little effect on the relaxivity.
To prove that SPIONs were released from the fibres as proposed, the Fe concentration in solution was determined (Fig. S12, ESI†). The solution [Fe] vs. time plot mirrors the shape of the r2 recovery profile, providing evidence that r2 is related to the release of SPIONs from the fibres. Clear linear correlations can also be observed between the concentration of Fe in the supernatant and the relaxivity (Fig. S13, ESI†). Thus, the recovery of r2 can be regarded as a kinetic process proportional to the dissolution of the Eudragit fibres.

In order to compare drug release directly with the relaxivity data, carmofur release from the fibres was quantified in PBS with 0.1% xanthan gum using UV-vis spectroscopy alongside the relaxivity changes (Fig. 5b). The presence of xanthan gum gives the release milieu a gel-like consistency, which makes it impossible to transfer samples between different pH values; thus, experiments were performed only at pH 6.5 or 7.4, with no initial acid stage. All the fibres displayed rapid release of carmofur within 3 hours, consistent with the in vitro drug release tests performed without xanthan gum (Fig. 4). When the pH is above that at which the Eudragit dissolves, the carmofur release profile closely resembles the change in r2 with time. To explore the relationship between carmofur release and relaxivity at these pH values, plots of cumulative carmofur release vs. r2 were constructed. These reveal a clear linear correlation between the two parameters, with R² ranging from 0.94 to 0.99 (Fig. S14 and Table S2, ESI†). In contrast, for the S100/Carmofur/SPION fibres at pH 6.5, a poor linear correlation (R² = 0.83; Fig. S14, ESI†) was observed. This arises because the fibres are insoluble at this pH, so SPION release is minimal and the r2 value remains low throughout the experimental period. This indicates that changes in r2 directly correspond to carmofur release when the pH is above that at which the Eudragit dissolves, meaning that MRI could be exploited as a non-invasive means of monitoring in situ drug release from such fibres in environments such as the small intestine and colon. It should be noted that in the gastrointestinal tract the presence of bile salts, or the potentially strong osmolarity in the colon, could affect the Eudragit dissolution process. 36 However, it is clear from our data that there is a strong correlation between the extent of carmofur release and the r2 signal at pH values where the polymer is soluble, regardless of the rate of dissolution. Thus, these additional complexities in vivo are not expected to confound the findings presented here. Our approach could hence provide a non-invasive route to quantification of drug release at a site of interest, and could prove particularly helpful in treatments using highly toxic chemotherapy.

The MR signal intensity is related both to the relaxivity properties of the CA and to its local concentration. However, the equations built from r2_t (r2 at time t) and cumulative carmofur release (Table S2, ESI†) take only the relaxivity into consideration. In the clinic, the local CA concentration might differ as a result of varied dosages, body volumes or other pathological conditions. Hence, the r2 values in each system were normalised by calculating r2_t/r2_max. Here r2_max is the maximum relaxivity value attainable with the formulation, which manifests in our experiments as the relaxivity after 180 min, r2_180. Plots of drug release percentage vs. r2_t/r2_180 (Fig. 5c and Fig. S15, ESI†) reveal direct proportionality (except for the S100/Carmofur/SPION fibres at pH 6.5), and the normalised equations are given in Table 2.
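The normalised calibration just described amounts to a linear regression of release percentage on r2_t/r2_180, inverted to predict release from a measured relaxivity. A minimal sketch follows, with hypothetical data points standing in for the measured profiles.

```python
# Minimal sketch of the normalised calibration: regress cumulative release
# (%) on r2_t / r2_180 and use the fit to predict release from relaxivity.
# Data points are hypothetical placeholders, not the measured profiles.

import numpy as np

r2_norm = np.array([0.42, 0.60, 0.78, 0.90, 1.00])  # r2_t / r2_180 (assumed)
release = np.array([35.0, 55.0, 75.0, 90.0, 100.0])  # % carmofur released

a, b = np.polyfit(r2_norm, release, 1)   # release ~ a * (r2_t/r2_180) + b
r = np.corrcoef(r2_norm, release)[0, 1]
print(f"release% = {a:.1f} * (r2_t/r2_180) + {b:.1f}, R^2 = {r**2:.2f}")

def predict_release(r2_t, r2_max):
    """Predicted % release from a measured relaxivity and the formulation's
    maximum (plateau) relaxivity, clipped to the physical 0-100% range."""
    return float(np.clip(a * (r2_t / r2_max) + b, 0.0, 100.0))
```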
In the clinic, r2_t could be regarded as the local MR signal intensity at a certain time point, and r2_max as the theoretical maximum signal intensity, a constant related to the specific formulation, dose and individual. Compared to the equations built with only r2_t and cumulative carmofur release (Table S2, ESI†), these normalised equations also show good correlation coefficients but can be applied more universally.

To further validate the predictive ability of the r2 data, the equations correlating carmofur release (%) with relaxation behaviour (Table 2) were applied to predict carmofur release in a new series of experiments. Relaxation behaviour changes were determined for L100/Carmofur/SPION and S100/Carmofur/SPION fibres at different concentrations (n = 3) and used to predict the extent of carmofur release. The latter was then quantified by UV-vis spectroscopy and compared with the r2-based predictions. The results are presented in Fig. 6. The predicted drug release curves based on r2 are very similar to the data obtained by UV-vis spectroscopy, indicating the potency of our theranostic approach.

Two fit factors, F1 (the difference factor) and F2 (the similarity factor) (eqn (3) and (4), respectively), were applied to statistically compare the experimental dissolution profiles determined by UV-vis measurement with those calculated from the relaxivity: 37

F1 = 100 × Σt |Rt − Tt| / Σt Rt   (3)

F2 = 50 × log10{100 × [1 + (1/n) Σt (Rt − Tt)²]^(−1/2)}   (4)

where Rt and Tt represent the percentage of active pharmaceutical ingredient released from the reference and test samples at time point t, respectively, and n is the number of time points. F1 is calculated from the relative error between the two release curves. 37 A value of F1 close to 0 suggests that the two release curves can be regarded as 'equivalent', while the FDA regards an F1 of less than 15 as denoting two similar release profiles. 38 The F2 factor is a measurement of the mean difference between two release curves at each time point. 34 Strong similarity is indicated when F2 is close to 100. 39 A value of 50 is obtained when the mean difference at each time point is 10%. Thus, two dissolution profiles are regarded as 'similar' by the FDA if F2 is between 50 and 100. 39

Here we use the experimental release data obtained by UV-vis spectroscopy as the reference (Rt), and the predicted data as the test (Tt). The results are shown in Table 3. The F1 values of the L100/Carmofur/SPION fibres at pH 6.5 and the S100/Carmofur/SPION fibres at pH 7.4 lie in the 'equivalent' range, while for the L100 fibres at pH 7.4 most of the F1 values are also consistent with equivalent release patterns. In terms of the similarity factor F2, the majority of the values are greater than 50. All these results support the reliability of our predicted curves.
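Eqns (3) and (4) are simple to compute. The sketch below applies them to hypothetical reference and test profiles to show the acceptance criteria in action; it is not the paper's analysis script.

```python
# Minimal sketch: difference (F1) and similarity (F2) factors of eqns (3)
# and (4), comparing a reference profile R_t (UV-vis) with a test profile
# T_t (relaxivity-predicted). Profiles are illustrative placeholders.

import numpy as np

def f1_difference(R, T):
    R, T = np.asarray(R, float), np.asarray(T, float)
    return 100.0 * np.abs(R - T).sum() / R.sum()

def f2_similarity(R, T):
    R, T = np.asarray(R, float), np.asarray(T, float)
    msd = np.mean((R - T) ** 2)           # mean squared difference
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

R = [35, 55, 75, 90, 100]   # % released, reference (hypothetical)
T = [33, 58, 72, 92, 99]    # % released, predicted (hypothetical)
print(f"F1 = {f1_difference(R, T):.1f} (similar if < 15)")
print(f"F2 = {f2_similarity(R, T):.1f} (similar if 50-100)")
```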
Novel pH-responsive fibres loaded with SPIONs and carmofur were fabricated in this work to permit oral delivery of a drug cargo to the intestine and colon. Predictive curves were established to correlate MRI signal intensity and drug release, thereby ultimately allowing non-invasive monitoring of local drug release. Drug delivery for the treatment of colon cancer remains a very significant challenge owing to the complicated colon physiology and environmental barriers. 40 The release and absorption of chemotherapeutics can be markedly affected by the changeable local environments and variable colonic residence time, making it difficult to ensure that effective and safe doses are provided. 40 Hence, the fact that our formulations provide local release information offers great potential benefits to control the delivered dose and provide bespoke and personalised therapy in the clinic. This can ensure that patients are given an appropriate dose to treat their disease without experiencing dangerous or unpleasant side effects. Unlike previous work, 26,41 our fibres display pH-responsive relaxation behaviour around the physiological pH, potentially allowing them to be used to image abnormal local microenvironments in intestinal and colon cancer.

Conclusions

Composite pH-responsive nanofibres have been prepared through electrospinning in this work. SPIONs (a negative MRI contrast agent) and carmofur (a model drug) were incorporated into polymer fibres composed of pH-responsive and biocompatible Eudragit polymers. Fibres with smooth cylindrical morphologies were generated, with an amorphous dispersion of carmofur. The encapsulation of SPIONs in the fibres led to effective protection from digestion in the acid environment of the stomach, and in vitro drug release studies reveal rapid release of carmofur at the pH values typical of the small intestine and colon. These results make our formulations promising as oral-delivery systems for colonic cancer. The fibres also exhibit pH-responsive relaxation behaviour around the physiological pH range, making them ideal candidates for the development of ultra-sensitive reporters to detect abnormal microenvironments in the small intestine and colon. Further investigation of the fibres' relaxivity behaviour showed them to have pH-responsive r2 profiles closely correlated to the extent of drug release. On that basis, a novel quantification method allowing drug release to be monitored via proton relaxation changes was established and used to predict with a high degree of accuracy the carmofur release profiles in a new series of experiments. This offers the exciting possibility to non-invasively monitor the extent of drug release in situ. As most chemotherapeutic agents are cytotoxic and nonspecific, their safety remains a critical issue, and hence our formulations potentially open up a new route to dramatically decrease off-target side effects in chemotherapy.

Experimental

Chemicals were purchased as follows: sodium hydroxide and hydrous ethanol (Fisher Scientific Ltd); sodium chloride, N,N-dimethylacetamide (DMAc), acetone, anhydrous ethanol, polyvinylpyrrolidone (PVP; 40 kDa), hydroxylamine hydrochloride, o-phenanthroline, xanthan gum, FeCl3·6H2O and FeCl2·4H2O (Sigma-Aldrich); Eudragit L100 and S100 (Röhm GmbH); and carmofur (ChemCruz). Ultrapure water was collected from a Millipore MilliQ system operated at 18.2 MΩ cm.

FeCl3·6H2O (6.5 g, 0.024 mol) and FeCl2·4H2O (2.48 g, 0.012 mol) were dissolved in 25 mL of deoxygenated ultrapure water. This solution was added dropwise into 250 mL of an aqueous NaOH solution (0.5 M) at 40 °C, and stirred for 1 h at this temperature. The resultant SPIONs were washed by centrifugation with DI water until the supernatant was pH neutral, and the resultant black precipitate dried under vacuum.
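As a quick arithmetic check on the coprecipitation recipe above (a minimal sketch; the molar masses are standard handbook values, everything else is taken from the text), the quoted masses do correspond to the 2:1 Fe(III):Fe(II) molar ratio required for magnetite, Fe3O4:

```python
# Sanity check of the coprecipitation stoichiometry quoted above.
M_FECL3_6H2O = 270.30   # g/mol, FeCl3.6H2O
M_FECL2_4H2O = 198.81   # g/mol, FeCl2.4H2O

n_fe3 = 6.5 / M_FECL3_6H2O    # ~0.024 mol Fe(III)
n_fe2 = 2.48 / M_FECL2_4H2O   # ~0.012 mol Fe(II)

# Magnetite (Fe3O4) requires a 2:1 Fe(III):Fe(II) ratio.
print(f"Fe(III) = {n_fe3:.4f} mol, Fe(II) = {n_fe2:.4f} mol, "
      f"ratio = {n_fe3 / n_fe2:.2f}")
```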
For PVP stabilisation, SPIONs (100 mL, 10 mg mL−1 in water) were mixed with 2 mL of an aqueous PVP 40k solution (25.6 g L−1, 0.64 mM), and the suspension was shaken (100 rpm) at room temperature. After 24 h, the suspension was mixed with 500 mL of aqueous acetone (H2O/acetone, 1:10 v/v) and centrifuged at 13 200 rpm for 20 min. The supernatant was removed, and the resultant black precipitate washed with ethanol and dried in an oven at 50 °C for 24 h.

A 12% (w/v) solution of Eudragit L100 and a 10% (w/v) solution of Eudragit S100 were prepared in a mixture of DMAc and ethanol (2:8 v/v) and stirred vigorously for 24 h. Carmofur (to give final concentrations of 12 or 10 mg mL−1) and PVP-SPIONs (at final concentrations of 24 or 20 mg mL−1) were then added to the Eudragit L100 or S100 solutions respectively, with sonication for 20 min.
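For orientation, the theoretical carmofur loading implied by the L100 spinning-solution recipe above can be back-calculated. This is our illustration, not a figure from the paper, and it assumes all non-volatile components (polymer, drug, SPIONs) are retained in the fibres after solvent evaporation:

```python
# Theoretical drug loading for the L100 spinning solution described above,
# assuming complete solvent evaporation and no loss of any component.
polymer_mg_per_ml = 120.0   # 12% w/v Eudragit L100
carmofur_mg_per_ml = 12.0
spion_mg_per_ml = 24.0

theoretical_lc = carmofur_mg_per_ml / (
    polymer_mg_per_ml + carmofur_mg_per_ml + spion_mg_per_ml) * 100
print(f"Theoretical carmofur loading: {theoretical_lc:.1f}% w/w")  # ~7.7%
```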
An HCP 35-35 000 power supply (FuG Elektronik GmbH) was used to generate an electric field. A 5 mL plastic syringe fitted with a narrow-bore stainless-steel needle (18 G, with outer and inner diameters of 1.25 and 0.838 mm, respectively) was filled with the required working solution. The spinneret was connected to the positive electrode of the power supply via an alligator clip, and a flat plate aluminium collector was attached to the grounded electrode. The working solution was dispensed with the aid of a syringe pump (KDS 100, Cole-Parmer) under ambient conditions (22 ± 3 °C and relative humidity 40 ± 5%), with a flow rate of 1.0 mL h−1. The applied voltage and the distance from the spinneret to collector were set to 16 kV and 15 cm, respectively.

A MiniFlex 600 diffractometer (Rigaku) supplied with Cu-Kα radiation was used to collect XRD patterns (λ = 0.15418 nm, 40 kV, 15 mA). Patterns were recorded over the 2θ range from 3 to 70° (step = 0.01°). The morphology of the fibres was analysed with a field emission scanning electron microscope (FEI Quanta 200F) connected to a secondary electron detector (Everhart-Thornley Detector, ETD). Samples were coated with a 20 nm gold sputter coating (using a Quorum Q150T coater) before measurement. The size distribution of the fibres was determined from the SEM micrographs by measuring the fibres at >100 points in the images, with the aid of the ImageJ software (version 1.52s, National Institutes of Health).

TEM images were obtained on a JEOL JEM-1200 microscope operated at 120 kV with a beam current of ca. 80 μA. A Gatan Orius 11-megapixel camera was used to take images. TEM samples were prepared by depositing a drop of PVP-stabilised SPIONs in aqueous suspension on a formvar-coated 300-mesh copper grid, and then drying the grids in the oven (45 °C). Average particle size was measured with the ImageJ software (version 1.52s, National Institutes of Health).

Thermogravimetric analysis was undertaken on a Discovery instrument (TA Instruments, Waters LLC). Ca. 3 mg of each sample was loaded into an aluminium pan and heated from 40 to 500 °C at 10 °C min−1 under a nitrogen flow of 25 mL min−1. Data were recorded using the Trios software and analysed with TA Universal Analysis. A Q2000 DSC (TA Instruments, Waters LLC) was used for calorimetry. A small amount of sample (approximately 3 mg) was loaded in a non-hermetically sealed aluminium pan (T130425, TA Instruments) and DSC experiments carried out from 40 to 126 °C, with a temperature ramp of 10 °C min−1 and a nitrogen purge of 25 mL min−1. DSC data were recorded with the TA Advantage software package and analysed using TA Universal Analysis.

For ICP-MS, samples (containing ca. 0.06 mg SPIONs) were digested using hot HNO3 digestion and then diluted to 10 mL with DI water. The iron concentrations (mM) were quantified on an Agilent 7500cx spectrometer. An MQC+ benchtop NMR analyser (Oxford Instruments) was used to measure transverse relaxation times (T2) of protons at 37 °C and 23 MHz. The Carr-Purcell-Meiboom-Gill (CPMG) method was used to measure T2, with 4 scans per experiment. The water relaxation rate enhancement per mmol of contrast agent (relaxivity) is defined by eqn (2).

Stability studies were carried out by dispersing 10 mg of the nanofibres or 2 mg of PVP-SPIONs in 25 mL of an aqueous HCl solution (pH 1.5). Experiments were carried out in a shaking incubator (100 rpm) at 37 °C for 2 h. The resulting solutions were centrifuged for 15 min (13 200 rpm) to sediment any undissolved fibres or particles. 5 mL samples of the supernatants were removed and neutralised with a few drops of aqueous 0.2 M NaOH. 0.9 mL of the neutralised sample was added to 0.3 mL of a 10% (w/v) hydroxylamine hydrochloride solution in water and 0.6 mL of an aqueous 0.2% (w/v) o-phenanthroline solution. Finally, UV-vis spectra were recorded on an Agilent Cary 100 spectrophotometer over the wavelength range 370 to 800 nm.

The loading capacity of carmofur (LC%) can be calculated as the amount of entrapped drug divided by the total fibre weight. Encapsulation efficiency (EE) is the percentage of the drug present that is successfully entrapped into the fibres. To calculate the loading and encapsulation efficiency of carmofur, 10 mg of the fibres (n = 5) was added into 10 mL of ethanol and sonicated until the polymer was fully dissolved. A PVDF-type syringe filter (0.22 μm) was used to filter the resultant solutions, and the filtrates centrifuged for 10 min (13 200 rpm) to remove the SPIONs. The supernatants were analysed with UV spectroscopy at 262 nm (Cary 100 instrument, Agilent), and the LC and EE calculated based on a pre-determined calibration curve.

The carmofur release study was undertaken using a 50 mL suspension of fibres (~0.5 mg mL−1). Samples were incubated in a pH 1.5 HCl solution for 1 h, and then transferred to pH 6.5 or 7.4 PBS. Experiments were undertaken in a shaking incubator (100 rpm) at 37 °C. 1 mL aliquots were withdrawn from the dissolution medium at predetermined time points and filtered through a PVDF-type syringe filter (0.22 μm). To maintain a constant volume, 1 mL of fresh pre-heated buffer was added to the dissolution vessel. The filtrates were centrifuged for 15 min (13 200 rpm) to remove any SPIONs, and then analysed with an Agilent Cary 100 spectrophotometer. Carmofur quantifications were performed at the λmax of 262 nm. Dilutions were undertaken when necessary to bring concentrations into the linear range of the calibration curve. Experiments were performed in triplicate and the results are reported as mean ± standard deviation (S.D.).

In a separate set of experiments, 5 mg of L100/Carmofur/SPION fibres was incubated in 10 mL of pH 5.5 acetate buffer under the same conditions as used for drug release (100 rpm, 37 °C). After 24 h, 2 mL aliquots were taken from each of the pH 7.4, 6.5 or 5.5 experiments for dynamic light scattering (DLS) measurement. For the L100/Carmofur/SPION system at pH 5.5 and S100/Carmofur/SPION at pH 6.5, where the fibres were aggregated in the form of mats, sonication was applied to ensure a homogeneous suspension was obtained before taking aliquots and performing DLS measurements.
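The LC/EE definitions and the aliquot-replacement sampling described above translate into a few lines of code. This is a minimal sketch: concentrations are assumed to come from the UV-vis calibration curve, and dose_mg (the total carmofur in the vessel, i.e. fibre mass × LC/100) is an assumed input rather than a value from the paper:

```python
import numpy as np

def lc_percent(drug_mg, fibre_mg):
    """Loading capacity: entrapped drug mass / total fibre mass x 100."""
    return drug_mg / fibre_mg * 100

def ee_percent(measured_lc, theoretical_lc):
    """Encapsulation efficiency: measured loading as a % of the loading
    expected if all drug fed into the spinning solution were entrapped."""
    return measured_lc / theoretical_lc * 100

def cumulative_release_percent(conc_mg_per_ml, dose_mg,
                               vessel_ml=50.0, aliquot_ml=1.0):
    """Cumulative % released, corrected for the drug removed in the 1 mL
    aliquots that are withdrawn and replaced with fresh buffer.
    conc_mg_per_ml: measured concentrations at successive time points."""
    c = np.asarray(conc_mg_per_ml, float)
    released = np.array([ci * vessel_ml + aliquot_ml * c[:i].sum()
                         for i, ci in enumerate(c)])
    return released / dose_mg * 100

# Illustrative numbers only:
print(cumulative_release_percent([0.05, 0.12, 0.18], dose_mg=10.0))
```

The correction term simply adds back the mass of drug already carried out of the vessel in earlier aliquots, so that the reported percentage is not underestimated at late time points.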
To obtain the particle size data, a ZetaSizer ZS instrument (Malvern) fitted with a 4 mW He-Ne 633 nm laser module was used, and scattered light was measured at 173° (back scattering). The attenuator position was selected automatically by the instrument and particle sizes are reported as the mean of 5 measurements. Each sample was analysed three times.

To monitor changes in proton relaxivity, a dispersion of approximately 10 mg of each fibre formulation in 8 mL of a 0.1% (w/v) aqueous xanthan gum solution was placed into a 10 mm-diameter NMR tube, which was held at 37 °C. The transverse relaxation time (T2) was directly monitored over 3 h. At predetermined time points, 0.3 mL of suspension was taken from the NMR tube, diluted, and filtered through a PVDF-type syringe filter (0.22 μm). To measure the free iron concentration, suspensions were analysed by ICP-MS after hot nitric acid digestion. To measure the carmofur concentration, centrifugation (13 200 rpm for 10 min) was conducted to remove the SPIONs, and the supernatant analysed on an Agilent Cary 100 spectrophotometer. All experiments were performed in triplicate and the results are reported as mean ± S.D.

In a set of experiments to predict drug release, dispersions of fibre samples at different concentrations (0.5, 1.0 and 2 mg mL−1) in 0.1% w/v xanthan gum buffer (n = 3) were placed into 10 mm NMR tubes. The transverse relaxation time was monitored at 37 °C for 3 h. At selected time points, 0.3 mL aliquots were taken from the NMR tube and the [Fe] and carmofur content in each aliquot quantified as above.

Conflicts of interest

There are no conflicts to declare.
FAK regulates IL-33 expression by controlling chromatin accessibility at c-Jun motifs

Focal adhesion kinase (FAK) localizes to focal adhesions and is overexpressed in many cancers. FAK can also translocate to the nucleus, where it binds to, and regulates, several transcription factors, including MBD2, p53 and IL-33, to control gene expression by unknown mechanisms. We have used ATAC-seq to reveal that FAK controls chromatin accessibility at a subset of regulated genes. Integration of ATAC-seq and RNA-seq data showed that FAK-dependent chromatin accessibility is linked to differential gene expression, including of the FAK-regulated cytokine and transcriptional regulator interleukin-33 (Il33), which controls anti-tumor immunity. Analysis of the accessibility peaks on the Il33 gene promoter/enhancer regions revealed sequences for several transcription factors, including ETS and AP-1 motifs, and we show that c-Jun, a component of AP-1, regulates Il33 gene expression by binding to its enhancer in a FAK kinase-dependent manner. This work provides the first demonstration that FAK controls transcription via chromatin accessibility, identifying a novel mechanism by which nuclear FAK regulates biologically important gene expression.

Focal adhesion kinase (FAK) is a non-receptor tyrosine kinase that is overexpressed in many cancers, including squamous cell carcinoma (SCC) 1 , breast, colorectal 2 and pancreatic cancer 3 . In addition to its well-known localization at integrin-mediated cell-matrix adhesion sites (focal adhesions), FAK can localize to the nucleus, where it binds a number of transcription factors, including p53 4 , Gata4 5 and Runx1 6 , to regulate the expression of Cdkn1a (which encodes p21), Vcam1 and Igfbp3, respectively. By binding to these transcription factors, FAK has been linked to cancer-associated processes such as inflammation 5 , proliferation 6 and survival 4 . Our previous work demonstrated that, in mutant H-Ras-driven murine SCC cells, nuclear FAK controls expression of cytokines and chemokines, for example Ccl5, to drive recruitment of regulatory T cells into the tumor microenvironment, resulting in suppression of the antitumor CD8 + T cell response and escape from antitumor immunity 7 . FAK regulates biologically important chemokines via its kinase activity and adaptor functions in the nucleus; briefly, nuclear FAK interacts with many transcription factors and accessory proteins in a gene expression-regulatory network 7 . Crucially, this includes the pro-inflammatory cytokine IL-33, which can either be found in the nucleus or be released from cells that are damaged or dying (alarmin) 8 . In SCC cells, however, IL-33 is not secreted, but instead is translocated to the nucleus and functions there; in turn, nuclear IL-33 drives expression of immunosuppressive chemokines, such as Ccl5 and Cxcl10, and we showed that IL-33 functions exclusively downstream of FAK in promoting pro-inflammatory gene expression and tumor growth 9 . However, the mechanisms by which FAK activity controls Il33 gene expression are not understood, and this important extension of previous work is investigated here. Our previous findings have suggested that FAK is present in the chromatin fraction and interacts with chromatin modifiers in the nucleus 7 . Therefore, we wanted to investigate whether FAK controls genome-wide chromatin accessibility changes and if these chromatin changes contribute to FAK-dependent gene expression, which to our knowledge has never been explored.
To understand potential molecular mechanisms by which FAK regulates expression of genes like Il33, we examined FAK-dependent chromatin accessibility changes and transcription factor motif enrichment across the genome using ATAC-seq and integrated those data with RNA-seq. This revealed that FAK regulates chromatin accessibility of a subset of genes, and a number of these were differentially expressed in a FAK- and FAK kinase-dependent manner, including the previously identified FAK downstream effector Il33. Motif-enrichment analysis indicated that there was enrichment of sequence motifs known to bind ETS and c-Jun/AP-1 transcription factor family members on the Il33 promoter and enhancer regions which are affected by FAK. Validation experiments confirmed that c-Jun is a key regulator of IL-33 expression in SCC cells by binding to the Il33 enhancer, and that FAK's kinase activity is important for regulating chromatin accessibility at this site. Analysis of genome-wide motif enrichment indicated that FAK likely regulates many more transcription factors beyond those already reported. Taken together, our data suggest that FAK is a common regulator of gene expression via modulating transcription factor binding to biologically relevant target gene promoters/enhancers by controlling chromatin accessibility, such as we demonstrate here using Il33 as an exemplar. In turn, FAK-dependent gene expression changes, including Il33, are critically associated with cancer-associated phenotypes 7,9 . This is the first demonstration of how nuclear FAK can contribute to gene expression, and we report a new activity in the nucleus for a classical adhesion protein.

Results

FAK regulates transcription factor motif accessibility across the genome. ATAC-seq was performed on FAK-WT-, FAK-nls- and FAK-kd-expressing and FAK −/− SCC cell lines (see Table S1 for further details of ATAC-seq statistics). The standard peak number in ATAC-seq experiments can vary depending on cell type, species, context and variations in the ATAC-seq protocol. Importantly, the peak number reported in our study is in the medium-to-high range for an ATAC-seq experiment performed in mouse cells (see additional file 2 in 11 ). The majority of peaks in FAK-WT-, FAK-nls-, FAK-kd-expressing as well as FAK −/− cell lines were located 0-100 kb from the transcriptional start sites (TSS) (Supplementary Fig. S1B). The distance of ATAC-seq peaks from the TSS suggests that the accessible regions were predominantly located in likely enhancer 12 and promoter 13 regions.

We identified differentially accessible gene regions using the DiffBind package 14 . Differential peak calling was performed for each pairwise comparison, in which FAK-WT samples were compared with each of the FAK knockout (FAK −/− ) and FAK mutant (FAK-nls- and FAK-kd-expressing) cell lines. From this analysis, it was apparent that a subset of genes are regulated by FAK-dependent changes in chromatin accessibility (discussed further below). We next analyzed the transcription factor motif sequences present in FAK-dependent differentially accessible peaks (hereafter termed motif-enrichment analysis). Motif-enrichment analysis allowed us to predict which transcription factors were regulating genes across the genome by analyzing the motif sequences in ATAC-seq peaks. We used the HOMER tool 15 to identify motif binding sites (i.e. genomic regions that match known transcription factor motifs) in the differentially accessible peaks identified by DiffBind analysis.
Motifs in FAK-WT-enriched peaks were statistically compared to all the motifs identified in peaks called in the FAK-nls-, FAK-kd-expressing or FAK −/− cells. This was expressed as a proportion of target sequences containing that motif (motifs in peaks enriched in FAK-WT-expressing cells) compared to the proportion of background sequences containing that motif (all motifs identified in the respective comparison, i.e. FAK-nls-, FAK-kd-expressing or FAK −/− cells). This analysis detected multiple statistically significant changes in motif enrichment in differentially accessible peaks from FAK-WT vs FAK-deficient (FAK −/− ) SCC cells (196 transcription factor motifs), FAK-WT- vs FAK-kd-expressing SCC cells (118 transcription factor motifs) and FAK-WT vs FAK-nls-expressing SCC cells (205 transcription factor motifs) (Benjamini-Hochberg-corrected P ≤ 0.05) (Supplementary Data S1). Importantly, the numbers of motifs identified in the ATAC-seq peaks were similar to those in previously published ATAC-seq datasets (see supplementary file 9 in 16 ). These findings suggest that FAK regulates transcription factor motif enrichment in accessible regions of chromatin across the SCC genome.

In the motif-enrichment analyses of FAK-WT-expressing cells vs FAK −/− cells and FAK-WT- vs FAK-nls-expressing cells, the two most highly enriched transcription factor motifs were for Jun-AP-1 and Fosl2 (all Benjamini-Hochberg-corrected P = 0), which exhibited an almost two-fold enrichment in motifs in the target (% target, FAK-WT-expressing cells) compared to the motifs identified in the background (% background, FAK −/− cells or FAK-nls-expressing cells) (Fig. 1A). The top two hits in the motif-enrichment analysis of FAK-WT- vs FAK-kd-expressing cells were motifs for Ets1 and Etv1 (all Benjamini-Hochberg-corrected P = 0), which likewise revealed a two-fold enrichment in these motifs in the target (% target, FAK-WT-expressing cells) compared to the background motifs (% background, FAK-kd-expressing cells) (Fig. 1A). These data imply that FAK and specific FAK functions (kinase activity and nuclear localization) robustly regulate enrichment of particular AP-1 and ETS motifs within accessible chromatin regions in SCC cells.

We used set analysis to identify FAK-dependent transcription factor motif sequences in the ATAC-seq peaks (Supplementary Data S1). This analysis revealed enrichment of transcription factor motifs controlled by specific FAK functions (scaffolding, nuclear localization and kinase activity). For example, in the FAK-WT vs FAK −/− motif-enrichment analysis, there was enrichment of motifs known to primarily bind p53, which were not enriched in the FAK-WT vs FAK-nls or FAK-WT vs FAK-kd analyses (Supplementary Data S1), suggesting that FAK scaffolding functions may regulate exposure of p53 binding motifs. To establish the most relevant transcription factors responsible for FAK-WT-dependent gene expression, we filtered the transcription factors known to bind to FAK-regulated motifs that were only enriched in the FAK-WT-expressing cells when compared to FAK −/− cells, FAK-nls-expressing cells and FAK-kd-expressing cells (63 transcription factor motifs; Fig. 1B and worksheet 4 in Supplementary Data S1). To identify which transcription factors may regulate gene expression in the FAK-WT-expressing cells, we performed protein domain-enrichment analysis on the set of transcription factors known to bind FAK-regulated motifs (Fig. 1C).
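The % target vs. % background comparison described above can be illustrated with a short sketch. This is a simplified stand-in using Fisher's exact test with Benjamini-Hochberg correction; HOMER's internal enrichment statistics differ in detail, and all counts below are hypothetical:

```python
# For each motif: compare the fraction of "target" peaks (enriched in
# FAK-WT cells) containing the motif against the background fraction.
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

motifs = {
    # motif: (target peaks with motif, target total, bg with motif, bg total)
    "Jun-AP1": (300, 1000, 900, 6000),
    "Ets1":    (250, 1000, 1100, 6000),
}

names, pvals = [], []
for name, (t_hit, t_n, b_hit, b_n) in motifs.items():
    table = [[t_hit, t_n - t_hit], [b_hit, b_n - b_hit]]
    _, p = fisher_exact(table, alternative="greater")
    names.append(name)
    pvals.append(p)

_, padj, _, _ = multipletests(pvals, method="fdr_bh")
for name, p, q in zip(names, pvals, padj):
    print(name, f"P={p:.2e}", f"BH-adjusted P={q:.2e}")
```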
This analysis indicated that there was an over-representation of transcription factors known to bind motifs containing ETS (term SM00413:ETS, Benjamini-Hochberg-corrected P = 1.02 × 10 −10 ) and PNT domains (term SM00251:SAM_PNT, Benjamini-Hochberg-corrected P = 1.06 × 10 −5 ; Fig. 1C), including the ETS transcription factor family members Fli1, Elf3, Elf5, Gabpa, Spdef, Erg, Ehf and Ets1. Furthermore, there was also an enrichment for transcription factors known to bind motifs that contain basic-leucine zipper domains (term SM00338:BRLZ, Benjamini-Hochberg-corrected P = 0.0033; Fig. 1C), including members of the AP-1 family, such as c-Jun, JunB, Fosl1, Fosl2 and Atf3. Thus, our analyses revealed FAK-dependent enrichment of a set of sequence motifs known to bind AP-1 and ETS transcription factors.

To understand better the likely transcription factors responsible for FAK-dependent gene expression, we performed interactome analysis to determine putative connections between transcription factors known to associate with FAK-regulated motifs. We reasoned that transcription factors with exposed motifs in FAK-WT-expressing cells that have a large number of functional associations with other predicted transcription factors are more likely to be important mediators of FAK-dependent transcription. We constructed a functional association network, incorporating curated protein-protein and protein-DNA interactions, of the transcription factors whose motifs were enriched in FAK-WT-expressing cells (Fig. 1D). The network analysis revealed that transcription factors known to bind FAK-regulated motifs have a large number of connections with other transcription factors known to bind FAK-regulated motifs (Fig. 1D). The most highly connected transcription factor was the AP-1 member c-Jun (outlined in red in Fig. 1D), and network topology implied that c-Jun is a key signal integrator between all the other transcription factors in the network. Other well-connected nodes in the network were members of the AP-1 family, including JunB, Atf3 and Fosl1 (outlined in red in Fig. 1D). In addition, certain members of the ETS transcription factor family had many physical and functional connections within the network, namely Ets1 and Spi1 (outlined in purple in Fig. 1D). Collectively, these data suggest that FAK regulates motif enrichment in accessible regions of chromatin, in particular sequences known to bind to the AP-1 and ETS transcription factor family members.

FAK regulates chromatin accessibility at a subset of genes, including Il33. Differential peak-calling analysis identified chromatin accessibility changes that were dependent on FAK, as well as FAK kinase activity and its nuclear localization (Fig. 2A). All ATAC-seq peaks were set to 500 bp to allow comparison between peaks in the SCC cell lines used in this study, and we reported distances from the ATAC-seq peak center as a heatmap (red indicates high read count (highly accessible region) in Fig. 2A). This analysis revealed ATAC-seq peaks across the genome with differential accessibility (varied read count) when comparing FAK-WT SCC cells to FAK −/− , FAK-nls-expressing and FAK-kd-expressing SCC cells, identifying changes in a subset of genes that varied depending on FAK status (Fig. 2A). These data implied that FAK regulates the chromatin accessibility at a subset of genes. We next identified which genes were regulated by FAK-dependent changes in chromatin accessibility using comparisons between the cell lines that varied only in FAK status.
We wanted to determine which genes were associated with the ATAC-seq peaks enriched in FAK-WT-expressing cells (as identified by differential peak calling) in order to understand which genes are regulated by FAK-dependent accessibility changes. To assign each ATAC-seq peak to genes, we used ChIPseeker 17 , which links each peak to the closest TSS using data from the University of California, Santa Cruz, genome browser annotation database (https://genome.ucsc.edu/). We used FAK RNA-seq data to confirm whether the genes that were regulated by FAK-dependent changes in chromatin accessibility were also differentially expressed in a FAK- and FAK kinase-dependent manner (Fig. 2B and FAK RNA-seq dataset reported in Supplementary Data S2). Set analysis identified genes that were either up- or down-regulated in a FAK- or FAK kinase-dependent manner, and also those genes whose FAK-dependent changes were associated with chromatin accessibility changes (intersection sets in upper panels in Fig. 2B). We found 36 genes whose expression and chromatin accessibility profiles were both regulated by FAK and its kinase activity (intersection sets in lower panel in Fig. 2B). Comparison of the FAK-nls mutant chromatin accessibility data to this subset of genes revealed that most of these are also dependent on FAK's ability to localize to the nucleus (asterisks in lower panel in Fig. 2B).

As an exemplar, we next focused on one of these genes, Il33, because we had previously reported it as a FAK-regulated cytokine of biological significance in mediating FAK-dependent anti-tumor immunity 9 . Using the ATAC-seq data to investigate whether chromatin accessibility was one mechanism by which FAK regulates Il33, we found that there were ATAC-seq peaks in FAK-WT-expressing SCC cells on the Il33 enhancer (− 3199 from TSS) and promoter (+ 821 from TSS) regions (Fig. 2C). Moreover, these peaks were absent in the FAK-kd- and FAK-nls-expressing cells and reduced in FAK −/− cells, which had no detectable peak on the promoter region and a suppressed ATAC-seq peak on the enhancer region. However, we note that the suppressed ATAC-seq peak in one replicate of the FAK −/− cells (FAK −/− 2) did not have sufficient read count to be identified as an ATAC-seq peak, and therefore the peak was not called. We conclude that FAK regulates chromatin accessibility at a subset of gene promoters, and some of these are differentially expressed in a FAK-dependent manner, as exemplified by Il33. This suggests that FAK-regulated, biologically important gene expression alterations may be controlled by FAK-dependent chromatin accessibility changes.

FAK regulates IL-33 expression via chromatin accessibility at the c-Jun motif in the Il33 enhancer. In order to define the key transcription factors that drive FAK-dependent Il33 expression in mouse SCC cells, we performed motif-enrichment analysis on the ATAC-seq peaks proximal to the Il33 promoter and enhancer regions in FAK-WT-expressing cells using HOMER (using data depicted in Fig. 2C). Analysis of the raw peak-calling data revealed that there were a number of peaks upstream of the Il33 gene in the FAK-WT-expressing cells as well as in the FAK −/− cells and FAK-nls- and FAK-kd-expressing SCC cell lines, between − 7480 and − 42,315 bp upstream of the TSS.
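The peak-to-gene assignment described above reduces to a nearest-TSS search. A minimal sketch follows, with made-up coordinates chosen so that the two peaks reproduce the −3199/+821 offsets quoted for Il33 (the real assignment uses the UCSC annotation via ChIPseeker):

```python
# Assign each ATAC-seq peak to the gene whose TSS is closest.
# TSS positions below are hypothetical placeholders, not real annotation.
tss = {"Il33": 2_100_000, "GeneB": 2_400_000}

def assign_peak(peak_center):
    """Return (gene, signed distance from TSS) for the nearest TSS."""
    gene = min(tss, key=lambda g: abs(peak_center - tss[g]))
    return gene, peak_center - tss[gene]

for center in (2_096_801, 2_100_821):   # enhancer- and promoter-like peaks
    print(assign_peak(center))           # ('Il33', -3199), ('Il33', 821)
```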
To create a refined list to identify the key transcription factors important for Il33 expression in FAK-WT-expressing cells, we excluded from our list of putative Il33 transcription factor motifs all the transcription factor motifs that were present in the aforementioned peaks upstream of the Il33 gene in FAK −/− cells and FAK-nls- and FAK-kd-expressing cells. This identified 24 FAK-dependent transcription factor motifs, including sequence motifs known to be bound by the AP-1 components c-Jun, Atf2 and Atf7 (Fig. 3A and Supplementary Data S3).

It is well established that, in order for a transcriptional event to occur, transcription factors often need to form complexes with other transcription factors in the same or different families. For example, it is well known that c-Jun homodimerizes, as well as heterodimerizes with c-Fos and Fra or Atf family members, to regulate the expression of AP-1-dependent genes 18 . Furthermore, the transcription factor Nr4a1 has been shown to bind and co-operate with c-Jun to regulate the transcription of the Star gene 19 . Therefore, we addressed whether the transcription factors predicted to regulate Il33 expression in FAK-WT-expressing cells can bind to and/or regulate each other. We reasoned that highly connected transcription factors may represent key nodes in the Il33 transcription factor network and, therefore, potentially important regulators of IL-33 expression. Generation of an Il33 transcription factor regulatory network for FAK-WT-expressing cells indicated that the transcription factors known to bind FAK-regulated motifs at the Il33 gene have multiple functional connections (Fig. 3B). Indeed, the most highly connected transcription factor was the AP-1 member c-Jun (largest node in Fig. 3B), suggesting it may be a key node in the FAK-dependent Il33 transcription factor network.

We next examined nuclear FAK binding partners (described previously in 7 ) and used interactome analysis to contextualize these with regard to transcription factors that may bind to the identified sequence motifs in the Il33 gene where accessibility is FAK-regulated. This enabled the prediction of putative Il33 transcription factors that have functional connections with nuclear FAK binders and thereby may be regulated by FAK in a direct manner (Fig. 3C). The resulting network indicated that the transcription factors with motif sequences on the Il33 promoter/enhancer have varying numbers of functional associations with putative nuclear FAK-interactors (indicated by node size in Fig. 3C). The transcription factors with the most links to nuclear FAK binding proteins were c-Jun and Nr4a1 (Fig. 3C). This implied that there were likely interesting connections between FAK and the transcription factors known to access motifs in the Il33 promoter in a FAK-dependent manner. Our network analysis suggested that c-Jun interacts with a number of FAK binders identified previously 7 , such as Pin1, which has been shown to bind to c-Jun and increase its transcriptional activity 20 . The FAK binding partner and transcription factor Sp-1 9 has been reported to bind both of the two most highly connected nodes in the network, c-Jun (21; left in Fig. 3D) and Nr4a1 (22; right in Fig. 3D). Other nodes that had connections with validated FAK binders included Tbp, which binds to the FAK binding protein Taf9 (7; Supplementary Fig. S2) to form the TFIID component of the basal transcription factor complex 23 .
Therefore, our interactome analysis indicates that FAK is functionally well connected to transcription factors known to bind to sequence motifs on the Il33 enhancer whose accessibility is FAK-regulated.

c-Jun regulates IL-33 expression by binding to Il33 enhancer regions. Our nuclear FAK interactome analysis showed that c-Jun was a hub (i.e. a highly connected node) in the Il33 transcription factor network (Fig. 3C). c-Jun is a component of the AP-1 family of transcription factors, and it is an important regulator of skin inflammation 24 . For example, c-Jun proteins are known to be important for the expression of the cytokine CCL5 25 , which we have shown to be regulated by FAK and IL-33, and loss of Jun proteins can lead to the onset of a chronic psoriasis-like disease 26 . Therefore, we hypothesized that c-Jun may be a likely regulator of inflammatory gene expression in the SCC cells (which originate from skin keratinocytes) used in our studies. We performed siRNA-mediated depletion of Jun mRNA (which encodes c-Jun) (Fig. 4A), which led to a parallel significant downregulation of Il33 mRNA (Fig. 4B) and reduced IL-33 protein expression (Fig. 4C). In addition, Cxcl10, a FAK and IL-33 target gene in SCC cells, also showed reduced mRNA levels as a result of Jun knockdown (Fig. 4D). Taken together, these data imply that c-Jun is likely an important regulator of IL-33 expression and of FAK- and IL-33-dependent target genes.

Next, we performed chromatin immunoprecipitation (ChIP)-qPCR analysis to confirm that c-Jun binds to the predicted c-Jun sequence-binding motif at the Il33 enhancer, and to determine whether or not FAK-dependent chromatin accessibility changes on the Il33 enhancer are linked to perturbed c-Jun binding. Our HOMER 15 analysis identified a cAMP response element (CRE) (5′-TGA CGT CA-3′) within the Il33 enhancer peak, which is known to bind c-Jun-Atf dimeric complexes 18 . We therefore used ChIP to show that c-Jun binds to the CRE motif at the Il33 enhancer in a manner dependent on FAK-regulated accessibility. Primers were designed around the region containing the CRE sequence motif in the Il33 enhancer and an unrelated region upstream of this site to control for background binding (depicted in Fig. 4E). We used an anti-c-Jun ChIP-grade antibody to pull down DNA in formaldehyde-crosslinked, sonicated chromatin preparations from FAK-WT- and FAK-kd-expressing cells, since loss of FAK's kinase activity displayed the most striking loss of chromatin accessibility at the Il33 enhancer (Fig. 2C). Following immunoprecipitation, the DNA was purified and a qPCR was performed, whereby the Il33 enhancer region and an upstream background region were amplified. We used the % input method to normalize the ChIP-qPCR data for potential sources of variability, including the starting chromatin amount in the chromatin extract, immunoprecipitation efficiency and the amount of DNA recovered (see "Materials and methods"). Using ChIP, we found that c-Jun bound to the Il33 enhancer in the FAK-WT-expressing cells (Fig. 4F). Furthermore, there was a significant loss of c-Jun binding in FAK-kd-expressing cells in comparison to FAK-WT-expressing cells (Fig. 4F). These data are consistent with FAK kinase activity regulating chromatin accessibility at the enhancer region upstream of the Il33 gene at the predicted c-Jun binding site. Next, we wanted to address whether FAK kinase activity may regulate the levels of phosphorylation of c-Jun.
Transcriptional activation of c-Jun is mediated by phosphorylation of Serine 73 by c-Jun N-terminal kinase (JNK) 27 . Treatment of FAK-WT SCC cells with the FAK kinase inhibitor VS4718 resulted in a significant loss of S73-c-Jun phosphorylation (Supplementary Fig. S4). Interestingly, there was also a significant reduction of total c-Jun protein levels. There was therefore a change in the amount of cellular pS73-c-Jun upon treatment with a FAK kinase activity inhibitor, and we conclude that FAK kinase activity contributes to the amount of transcriptionally active c-Jun in the SCC cells used here. Thus, we conclude that FAK, which is classically thought to be primarily an integrin adhesion protein, can function in the nucleus to control chromatin accessibility at specific gene promoters/enhancers. In turn, this leads to FAK-dependent transcription of specific genes, an example of which is the cytokine Il33. FAK/IL-33 downstream effectors significantly influence tumor biology 9 .

Discussion

In this study, we have discovered an undescribed function of nuclear FAK as a key regulator of chromatin accessibility changes and transcription factor binding. Furthermore, we have confirmed that nuclear FAK regulates c-Jun binding at the Il33 enhancer region via chromatin accessibility changes to control Il33 expression. As IL-33 is an important regulator of cytokine expression and tumor growth 8 , FAK-dependent c-Jun regulation of IL-33 expression would be predicted to influence cancer cell biology, such as that which we described previously 9 . It is perhaps not surprising that FAK can regulate c-Jun, since cytoplasmic-localized FAK is known to transduce signals through pathways such as MAPK 28 and Wnt 29,30 , which are known to control c-Jun expression and its transcriptional activity 18,31 ; however, what is surprising is the more direct link we have uncovered here between nuclear FAK function and its regulation of c-Jun transcriptional activity at the Il33 enhancer via chromatin accessibility. Consistent with the links between nuclear FAK and c-Jun activity being more common, nuclear FAK is reported to regulate the expression of Jun (which encodes c-Jun) in response to 'stretch' in cardiac myocytes by binding to, and enhancing, the transcriptional activity of MEF2 32 .

Focal adhesion proteins other than FAK have been detected in the nucleus, such as Lpp 33 and Hic-5 34 , which are believed to function as transcription factor co-regulators 33,35 . Furthermore, the focal adhesion protein paxillin can also translocate to the nucleus 36 , where it contributes to proliferation 37 , and we believe that there are other integrin-linked adhesion proteins capable of translocating to the nucleus and functioning at the nuclear membrane or inside the nucleus 38 . Relevant to the work we present here, a number of consensus adhesome components containing LIM (Lin11-Isl1-Mec3) domains have been directly linked to the regulation of chromatin accessibility and dynamics. For example, Hic-5 can inhibit the binding of the glucocorticoid receptor to the chromatin remodelers chromodomain-helicase DNA binding protein 9 (also known as ATP-dependent helicase CHD9) and brahma (also known as ATP-dependent helicase SMARCA2), resulting in a closed chromatin conformation and transcriptional repression of a subset of glucocorticoid receptor target genes 35,39 . Also, paxillin can regulate proliferation-associated gene expression by controlling promoter-enhancer looping via nuclear interactions with the cohesin and mediator complex 37 .
Figure 4. c-Jun regulates IL-33 expression by binding to Il33 enhancer regions. (A,B) FAK-WT SCC cells were transfected with a non-targeting control (NTC) or Jun SMARTpool (SP) siRNA. Jun (A) and Il33 (B) qRT-PCRs were carried out using Jun and Il33 primers, respectively. Fold gene expression changes were calculated by normalizing cycle threshold (Ct) values to GAPDH and FAK-WT NTC Ct values. (C) FAK-WT-expressing cells were transfected with NTC or individual Jun siRNAs, and whole cell lysates were subjected to SDS-PAGE analysis. Blots were stained with IL-33, c-Jun and GAPDH antibodies (left panel). IL-33 protein expression was quantified by densitometry using ImageJ/Fiji software (v2.1.0, imagej.net/Fiji) and values normalized to GAPDH densitometry values (right panel). c-Jun knockdown was checked on a separate blot. Full length blots are reported in Supplementary Fig. S3. (D) FAK-WT cells were transfected with NTC or Jun SMARTpool siRNA. qRT-PCR was carried out using Cxcl10 primers. (E) Schematic detailing the locations of ChIP primers upstream of the Il33 gene. (F) Primers were designed to capture the c-Jun motif upstream of the Il33 gene (c-Jun motif primer) and in the upstream region of the Il33 gene to control for background binding (background primer). Pull-down efficiency was calculated using the % input method (see "Materials and methods"). (G) FAK (blue) translocates to the nucleus and binds to transcriptional regulators (i.e. chromatin accessibility regulators and co-activators) (TR, green). At the level of the Il33 gene, TRs potentially scaffold FAK to chromatin-modifying complexes to regulate chromatin accessibility changes at the Il33 gene enhancer, allowing binding of the AP-1 complex containing c-Jun (yellow). AP-1 binding stimulates IL-33 expression, which suppresses the anti-tumor immune response and promotes tumor growth, as shown previously 9 . Images from Servier Medical Art (http://smart.servier.com/) were adapted under the terms of a Creative Commons Attribution 3.0 Unported License: CC BY 3.0 Servier. Data are mean ± SEM. n = 3 biological replicates (A-D) or 5 biological replicates (F). ns not significant; *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001, ****P ≤ 0.0001 by unpaired two-tailed t test (A,B,D), one-way ANOVA (C) or t test with Welch's correction (F).

Taken together, these reports suggest that focal adhesion proteins in the nucleus are capable of scaffolding chromatin remodeling complexes to regulate chromatin structure and gene expression changes.

An unanswered question is the mechanism by which FAK controls chromatin accessibility at regulated genes. In this regard, our previous nuclear interactome proteomics revealed that FAK can interact with proteins known to regulate chromatin accessibility 7 . These include the Smarcc2 and Actl6a components of the BRG1/BRM-associated factor (BAF) complex 7 , which have been shown to be recruited to target gene enhancers by AP-1 to regulate chromatin accessibility 40 . IL-33 is required for the chromatin recruitment of the Wdr82 component of the chromatin-modifying protein serine/threonine phosphatase (PTW/PP1 phosphatase) complex 9,41 . IL-33 binds to the Brd4 (bromodomain-containing protein 4) transcriptional coactivator 9 , which is known to recruit the BAF complex to target genes 42 . The nuclear FAK binding protein Sp-1 also interacts with the BAF complex to facilitate its recruitment to specific promoters 43 . Therefore, there is abundant evidence of connections linking FAK or FAK binding proteins (i.e.
FAK-Sp-1, FAK-IL-33) and FAK-regulated transcription factors (e.g. AP-1) to chromatin accessibility factors, such as the BAF complex and the PTW/PP1 phosphatase complex. It is likely that FAK, and proteins to which it binds, scaffold chromatin remodeling proteins at target genes, such as Il33 as we describe here, in order to determine the state of chromatin accessibility, the binding of transcription factors like AP-1, and transcription (Fig. 4G). Once IL-33 expression is activated, FAK and IL-33 bind and co-operate to regulate target gene expression (e.g. Cxcl10) in the nucleus by interacting with a network of chromatin modifiers and transcriptional regulators 9 .

One limitation of this study is that our ATAC-seq analysis was only performed in one species. However, we note that the HOMER tool 15 uses motifs from a number of sources, including human and mouse. Indeed, many of the transcription factor motifs are highly conserved between related organisms, with DNA binding profiles for human and mouse transcription factors being almost identical. This high homology makes information about transcription factors largely interchangeable between organisms 15 . Furthermore, many transcription factors have high protein sequence homology between species. For example, mouse c-Jun is 96% homologous to its human counterpart 44 . In addition, the information used to predict the c-Jun/CRE motif upstream of the Il33 enhancer originated from a human ChIP-seq dataset [K562-cJun-ChIP-Seq(GSE31477)]. Therefore, our mouse c-Jun ChIP analysis at this CRE motif has experimentally confirmed that there is homology in AP-1 binding motifs between mouse and human. As such, we believe that the data presented in this manuscript are also applicable to human cell lines.

In summary, we have discovered a completely new paradigm for how FAK may regulate transcription in the nucleus, i.e. as a critical regulator of chromatin accessibility changes at biologically important target genes, such as Il33 shown here. Translocation of FAK to the nucleus, where it can bind to factors that control chromatin accessibility, can therefore communicate extracellular cues to the gene transcription machinery in the nucleus by this route.

Materials and methods

FAK SCC cell line generation. Generation of the FAK SCC cell model has been described previously 10 .

ATAC-seq. ATAC-seq samples were prepared similarly to those described previously 45 . The specific ATAC-seq protocol used in this study has been previously reported 46 . ATAC-seq data were aligned to the Mus musculus reference genome mm10 using the bcbio ATAC-seq pipeline 47 . Accessible regions (i.e. ATAC-seq peaks) were called from the BAM files using the MACS2 algorithm 48 with the following parameters: -B --broad -q 0.05 --nomodel --shift -100 --extsize 200 -g 1.87e9. Differentially accessible regions between the FAK-WT-expressing cells and the FAK −/− , FAK-nls-expressing and FAK-kd-expressing cells were identified by differential peak calling using the R/Bioconductor package DiffBind 14 , where significantly different peaks were defined as those with a false discovery rate (FDR) of 0.05 or below. Motif-enrichment analysis was performed using HOMER 15 following default parameters. ATAC-seq peaks were assigned to genes using ChIPseeker 17 .

Chromatin immunoprecipitation (ChIP)-qPCR. The ChIP-qPCR experiments were performed as described previously 50,51 .
FAK-WT- and FAK-kd-expressing cells (4 × 10 6 ) were plated on 10-cm dishes (Corning) and then, the following day, were formaldehyde crosslinked and fractionated as described in 50 . ChIP and input DNA were amplified using the following primers: c-Jun motif/Il33 enhancer, F: ACC CTG GAG TGT TCT TTG CA and R: TGC CTT CTG AAG CTT ACT CGA; negative control region, F: ATG TGT GCT GTG TGT ATG CC and R: ACA TTA AGG GCA GGA GAC GT. ChIP-qPCR analysis was performed using SYBR Green master mix (Thermo Scientific) following the manufacturer's instructions. The following cycling conditions were used: 98 °C for 10 s, 30 × (98 °C for 10 s, 60 °C for 1 min and 72 °C for 4 min) and 72 °C for 5 min.

The % input method was used for c-Jun ChIP data normalization, whereby the cycle threshold (Ct) values of the ChIP samples are expressed relative to the Ct value of the input sample (the starting amount of chromatin used for the ChIP). The input sample Ct value was first adjusted using the following equation: adjusted input = Ct of input sample − log 2 (dilution factor). Then the % input was calculated for the CRE c-Jun ChIP and the background control region upstream of the Il33 gene using the following calculation: % input = 100 × (PCR amplification factor)^(adjusted input − Ct of the c-Jun ChIP sample). Then the % input of the background control region was subtracted from the % input of the CRE motif to determine the amount of enrichment of c-Jun binding at the CRE motif over the background control region.
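A minimal sketch of the % input calculation just described; E is the PCR amplification factor (2 for perfectly efficient primers), and all Ct values and the dilution factor below are hypothetical:

```python
import math

def percent_input(ct_chip, ct_input, dilution_factor, e=2.0):
    """%input = 100 * E**(adjusted input Ct - ChIP Ct), with the input Ct
    first adjusted for the fraction of chromatin used as input."""
    adjusted_input = ct_input - math.log2(dilution_factor)
    return 100 * e ** (adjusted_input - ct_chip)

# Hypothetical Ct values illustrating enrichment of c-Jun at the CRE motif
# over the upstream background region, as in Fig. 4F.
cre = percent_input(ct_chip=28.0, ct_input=30.5, dilution_factor=100)
bg = percent_input(ct_chip=32.5, ct_input=30.5, dilution_factor=100)
print(f"CRE %input = {cre:.2f}, background %input = {bg:.2f}, "
      f"enrichment = {cre - bg:.2f}")
```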
RT-qPCR. RNA extraction was performed using an RNeasy Mini kit (Qiagen) following the manufacturer's instructions. cDNA synthesis was performed using the SuperScript First-Strand Synthesis System (Invitrogen) following the manufacturer's random hexamers protocol. qRT-PCR analysis was performed using SYBR Green master mix (Thermo Scientific) following the manufacturer's instructions. The following cycling conditions were used: 98 °C for 10 s, 30 × (98 °C for 10 s, 60 °C for 1 min and 72 °C for 4 min) and 72 °C for 5 min. Primers used were as follows: Il33, F: GGA TCC GAT TTT CGA GAC TTA AAC AT and R: GCG GCC GCA TGA GAC CTA GAA TGA AGT; Cxcl10, F: CCC ACG TGT TGA GAT CAT TG and R: CAC TGG GTA AAG GGG AGT GA; GAPDH, F: CTG CAG TAC TGT GGG GAG GT and R: CAA AGG CGG AGT TAC CAG AG. Jun was amplified using predesigned primers from Qiagen (cat no. QT00296541).

Cell lysis and immunoblotting. Cell lysis and immunoblotting were performed exactly as described previously 9 . Antibodies used in this study were as follows: IL-33 (cat. no. BAF3626; R&D Systems), c-Jun (cat. no. 9165; Cell Signaling Technology), Phospho-FAK (Tyr397) (cat. no. 3283; Cell Signaling Technology), Phospho-c-Jun (Ser73) (cat. no. 3270; Cell Signaling Technology), GAPDH (cat. no. 5174; Cell Signaling Technology).

RNA-seq. RNA was extracted from FAK-WT, FAK −/− and FAK-kd SCC cells using an RNeasy kit (Qiagen) following the manufacturer's instructions. To verify sample purity, the samples were run on a 2100 Bioanalyzer using the Bioanalyzer RNA 6000 pico assay (both Agilent). Samples that achieved an RNA integrity number (RIN) of 8 or above were considered of suitable purity for sequencing. Samples were prepared for sequencing using the TruSeq RNA Library Prep Kit v2 (low-sample protocol) (Illumina) and paired-end sequenced using a HiSeq 4000 platform (Illumina) at BGI. Transcript abundance was determined using the pseudoalignment software kallisto 52 on the mouse transcriptome database acquired using the kallisto index, implementing default parameters. Quality control was performed on the kallisto output using MultiQC software (https://github.com/ewels/MultiQC). Transcript abundance was summarized to gene level and imported into the differential expression analysis R package DESeq2 53 using the R package tximport 54 . Genes which had zero read counts were removed prior to differential expression analysis. Differential expression analysis was performed using DESeq2 using default parameters, where FAK-WT vs FAK −/− and FAK-WT vs FAK-kd SCC cell line gene read counts were compared. The Wald test was used for hypothesis testing in DESeq2, and all P-values were corrected for multiple testing using the Benjamini-Hochberg method. Transcripts that acquired a Benjamini-Hochberg-corrected P-value of 0.05 or below and a log 2 -transformed fold change in expression of ≥ 1 or ≤ −1 were considered significantly different between the cell lines. RNA-seq results from FAK-WT and FAK −/− cells were previously analyzed 38 ; RNA-seq data from all cell lines (Supplementary Data S2) were deposited in the NCBI Gene Expression Omnibus (GEO) 55 .

Protein domain-enrichment analysis. Protein domain-enrichment analysis was performed for SMART domains using DAVID 56,57 . All terms that acquired a Benjamini-Hochberg-corrected P-value of below 0.05 were considered statistically significant.

Network analysis. Network analysis was performed using Ingenuity Pathway Analysis (QIAGEN Inc., https://www.qiagenbioinformatics.com/products/ingenuity-pathway-analysis). The following parameters were used for network construction: database sources (Ingenuity expert information, protein-protein interaction database, BioGrid, IntAct), direct interaction, experimentally observed, protein-protein and functional interactions, mammalian interactions only. All networks were exported into Cytoscape 58 , and the NetworkAnalyzer plugin 59 was used to visualize the most connected nodes in the networks before applying yFiles layout algorithms (yWorks).

Statistical analysis. Statistical analysis was performed using Prism 8 (GraphPad Software). All P-values below 0.05 were considered statistically significant.

Data availability

The RNA-seq and ATAC-seq data have been deposited in the Gene Expression Omnibus and are accessible through GEO series accession identifiers GSE147670 and GSE161022, respectively.
Center of U(n), Cascade of Orthogonal Roots, and a Construction of Lipsman-Wolf

Let G be a complex simply-connected semisimple Lie group and let g = Lie G. Let g = n_− + h + n be a triangular decomposition of g. One readily has that Cent U(n) is isomorphic to the ring S(n)^n of symmetric invariants. Using the cascade B of strongly orthogonal roots, some time ago we proved (see [K]) that S(n)^n is a polynomial ring C[ξ_1, . . . , ξ_m] where m is the cardinality of B. The authors in [LW] introduce a very nice representation-theoretic method for the construction of certain elements in S(n)^n. A key lemma in [LW] is incorrect but the idea is in fact valid. In our paper here we modify the construction so as to yield these elements in S(n)^n and use the [LW] result to prove a theorem of Tony Joseph.

1. Introduction

1.1. Let g be a complex semisimple Lie algebra and let g = n_− + h + n be a fixed triangular decomposition of g. Let ∆ ⊂ h^* be the set of h-roots in g. The Killing form (x, y) on g, denoted by K, induces a nonsingular bilinear form (µ, ν) on h^*. For each ϕ ∈ ∆ let e_ϕ ∈ g be a corresponding root vector. The root vectors can and will be chosen so that (e_ϕ, e_−ϕ) = 1 for all roots ϕ. The set ∆^+ of positive roots is then chosen so that ∆^+ = ∆(n), and one puts ∆^− = −∆^+. If s is a Lie subalgebra, then S(s) and U(s) are respectively the symmetric and enveloping algebras of s. Our concern here is with the case where s = n. Let b = h + n, so that b is a Borel subalgebra of g. Let G be a Lie group such that Lie G = g, and let H, N, B be the Lie subgroups corresponding, respectively, to h, n, b. Then S(n) is a B-module since B = HN normalizes N. Let m be the maximal number of strongly orthogonal roots. Then we proved the following some time ago, generalizing a result of Dixmier (case where g is of type A_ℓ).

Theorem A. There exist ξ_i ∈ S(n)^N, i = 1, . . . , m, so that S(n)^N = C[ξ_1, . . . , ξ_m] is a polynomial ring in m generators. Furthermore, Cent U(n) ≅ S(n)^N, so that one has a similar statement for Cent U(n).

We will present an algebraic-geometric proof of a much stronger statement than Theorem A and relate it to a representation-theoretic construction, due to Lipsman-Wolf, of certain elements in S(n)^N. See [K], [LW]. A key tool is the cascade B = {β_1, . . . , β_m} of orthogonal roots, which will now be defined.

1.2. Let Π ⊂ ∆^+ be the set of simple positive roots. For any ϕ ∈ ∆^+ and α ∈ Π there exists a nonnegative integer n_α(ϕ) such that ϕ = Σ_{α ∈ Π} n_α(ϕ) α. Let Π(ϕ) = {α ∈ Π | n_α(ϕ) ≠ 0}. Then Π(ϕ) is a connected subset of Π and hence defines a simple Lie subalgebra g(ϕ) of g. We will say that ϕ is locally high if ϕ is the highest root of g(ϕ). Obviously the highest roots of all the simple components of g are locally high.

Remark 1. If g is of type A_ℓ, and only in this case, all ϕ ∈ ∆^+ are locally high.

Let ϕ ∈ ∆^+ be locally high, let Π(ϕ)^o = {α ∈ Π(ϕ) | (α, ϕ) = 0}, and let g(ϕ)^o be the semisimple Lie algebra having Π(ϕ)^o as its set of simple roots. We will then say that a root ϕ′ ∈ ∆^+ is an offspring of ϕ if ϕ′ is the highest root of a simple component of g(ϕ)^o.

Remark 2. One notes that an offspring of a locally high root ϕ is again locally high and that it is strongly orthogonal to ϕ.

A sequence β′_1, . . . , β′_k of positive roots will be called a cascade chain if β′_1 is a highest root of a simple component of g, and if 1 < j ≤ k, then β′_j is an offspring of β′_{j−1}. Now let B be the set of all positive roots β which are members of some cascade chain. Let W be the Weyl group of (h, g).
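As a concrete illustration (our worked example, not part of the original text), the cascade can be written down explicitly in type A:

```latex
% Example: the cascade for g = sl(l+1), type A_l, with positive roots
% e_i - e_j for 1 <= i < j <= l+1. The highest root is e_1 - e_{l+1};
% the simple roots orthogonal to it form a subsystem of type A_{l-2},
% whose highest root e_2 - e_l is the offspring, and so on.
\[
  \mathcal{B} = \{\, e_1 - e_{\ell+1},\; e_2 - e_{\ell},\; e_3 - e_{\ell-1},\; \dots \,\},
  \qquad
  m = \left\lfloor \tfrac{\ell+1}{2} \right\rfloor .
\]
% The cascade roots are mutually strongly orthogonal, so the corresponding
% reflections commute.
```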
Theorem 1. The cardinality of B is m and is a maximal set of strongly orthogonal roots. Furthermore, if s β i is the W -reflection of h corresponding to β i , then the long element w o of W may be given by (1.1) B is the cascade of orthogonal roots. 1.3. One has the vector space direct sum Let P : g → n be the projection defined by (1.2). Since b is the K-orthogonal subspace to n in g we may identify n − with the dual space n * to n, so that for v ∈ n − and x ∈ n, one has v, x = (v, x). The coadjoint action of N on n − may then be given so that if u ∈ N , then on n − Coad u = P Ad u. (1.3.) In fact, using (1.2) the coadjoint action of N on n − extends to an action of B on n − , so that if b ∈ B and v ∈ n − , one has b · v = P Ad b(v). In addition we can regard S(n) as the ring of polynomial functions on n − . Since B normalizes N the natural action of N on S(n) extends to an action of Recalling m = card B, let r be the commutative m-dimensional subalgebra of n spanned by e β for β ∈ B and let R ⊂ N be the commutative unipotent subgroup corresponding to r. In the dual space let r − ⊂ n − be the span of e −β for β ∈ B. For any z ∈ r − , β ∈ B, let a β (z) ∈ C be defined so that (1.5) and let As an algebraic subvariety of n − clearly (1.6) Also for any z ∈ n − let O z be the N -coadjoint orbit containing z. Let N z ⊂ N be the coadjoint isotropy subgroup at z and let n z = Lie N z . Since the action is algebraic, N z is connected and hence as N -spaces (1.7) and O τ is a maximal dimensional coadjoint orbit of N . Now consider the action of B on n − . In particular consider the action of H on and furthermore r × − is an orbit of H. In addition H permutes the maximal N -coadjoint orbits O τ , τ ∈ r × − . More precisely, Theorem 3. For any a ∈ H and τ ∈ r × − , one has (1.11) 1.4. If V is an affine variety, A(V ) will denote its corresponding affine ring of functions. Note that S(n) = A(n − ). Let Q(n − ) be the quotient field of S(n). Furthermore X is an affine variety so that (1.13) Moreover n × − ⊂ X, and in fact one has a disjoint union so that all N -coadjoint orbits in X are maximal and isomorphic to N/R. Let Λ ⊂ h * be the H-weight lattice and let Λ ad ⊂ Λ be the root lattice. Let Λ B ⊂ Λ ad be the sublattice generated by the cascade B. Since the elements of B are mutually orthogonal note that Recalling the definition of r × − and (1.6), note that Λ(A(r × − )) = Λ B and each weight occurs with multiplicity 1. ( 1.16) We can now give more information about X and its affine ring A(X). Define a B action on r × − by extending the H-action so that N operates trivially. Next define a B-action on N/R, extending the N -action by letting H operate by conjugation, noting that H normalizes both N and R. With these structures and the original action on X, we have the following. Theorem 5. One has a B-isomorphism X → N/R × r × − of affine varieties so that as B-modules (1.17) Furthermore, taking N -invariants, one has an H-module isomorphism and each H-weight occurs with multiplicity 1. Recalling (1.13) one has the N -invariant inclusions of H-modules so that (1.21) But since S(n) is a unique factorization domain, any u ∈ Q(n − ) may be uniquely written, up to scalar multiplication as where f and g are prime to one another. Furthermore, it is then immediate (since N is unipotent) that if u is N -invariant, one has f, g ∈ S(n) N . If, in addition, u is an H-weight vector, the same is true of f and g so that, using Theorem 5, one readily concludes the following. Theorem 6. 
Every H-weight in Λ(S(n) N ) occurs with multiplicity 1 in S(n) N . In fact Λ(Q(n − ) = Λ B and every weight γ in Λ(Q(n − ) occurs with multiplicity 1 in Q(n − ) N and is of the form where µ, ν ∈ Λ(S(n) N ). For any γ ∈ Λ B let ξ γ ∈ Q N n − be the unique (up to scalar multiplication) Hweight vector with weight γ. Thus if γ ∈ Λ B , we may uniquely write (up to scalar multiplication where µ, ν ∈ Λ(S(n) N ) and ξ ν and ξ µ are prime to one another. Let Λ dom = {λ ∈ Λ | λ be a dominant weight}. Remark 3. By the multiplicity 1-condition note that if ν ∈ Λ(S(n) N ), then ξ ν is necessarily a homogeneous polynomial. Define deg ν so that ξ ν ∈ S deg ν (n). Furthermore, clearly ξ ν is then a highest weight vector of an irreducible g-module in S deg ν (g) and in particular ν ∈ Λ dom . That is, (1.25) 1.5. If ν ∈ Λ(S(n) N ), it follows easily from the multiplicity-1 condition and the uniqueness of prime factorization that all the prime factors of ξ ν are again weight vectors in S(n) N . Let P = {ν ∈ Λ(S(n) N ) | ξ ν be a prime polynomial in S(n) N }. (1.26) We can then readily prove Theorem 7. One has card P = m where, we recall m = card B, so that we can write P = {µ 1 , . . . , µ m }. (1.27) Furthermore the weights µ i in P are linearly independent and the set P of prime polynomials, ξ µ i , i = 1, . . . , m, are algebraically independent. In addition, one has a bijection Λ(S(n) N ) → (N) m , ν → (d 1 (ν), . . . , d m (ν)) (1.28) such that, writing d i = d i (ν), up to scalar multiplication, and (1.29) is the prime factorization of ξ ν for any ν ∈ Λ(S(n) N . Finally, so that S(n) N is a polynomial ring in m-generators. Remark 4. One may readily extend part of Theorem 7 to weight vectors in Q(n) N . In fact one easily establishes that there is a bijection so that writing e i (γ) = e i one has ξ γ = ξ e 1 µ 1 · · · ξ e m µ m . (1.31) Separating the e i into positive and negative sets yields ξ ν and ξ µ of (1.24). We wish to prove Theorem 8. One has 32) and as a function ξ ν | r × − does not vanish identically and up to a scalar (1.33) Proof. Let S deg ν (n)(ν) be the ν weight space in S deg ν (n). It does not reduce to zero since ξ ν ∈ S deg ν (n)(ν). Let Γ be the set of all maps γ : ∆ + → N such that (1.34) the set {e γ | γ ∈ Γ} is clearly a basis of S deg ν (n)(ν) and consequently unique scalars s γ exist so that ξ ν = γ∈Γ s γ e γ . (1.35) But by Theorem 5 there exists x ∈ X such that ξ ν (x) = 0. However since X is B-homogeneous, the H-orbit r × − is contained in X and there exists t ∈ r × − such that x = u · t for some u ∈ N . But since ξ ν is N -invariant one has ξ ν (t) = 0. But from (1.34) this implies that γ∈Γ s γ e γ (t) = 0. (1.36) But e γ (t) = 0 for any γ ∈ Γ such that γ(ϕ) = 0 for ϕ / ∈ B. Thus there exists γ ′ ∈ Γ such that γ ′ (ϕ) = 0 for all ϕ / ∈ B and e γ ′ (t) = 0. (1.37) But by the independence of B one has that γ ′ is unique and hence one must have γ ′ (β) = b β . A similar argument yields (1.33). QED 2. A representation-theoretic construction, due to Lipsman-Wolf, of certain elements in S(n) N 2.1. Let λ ∈ Λ dom and let V λ be a finite-dimensional irreducible g-module with highest weightλ. Then, correspondingly, V λ is a U (g)-module with respect to a surjection π λ : U (g) → End V λ . Let 0 = v λ ∈ V λ be a highest weight vector. Also let V * λ be the contragredient dual g-module. The pairing of V λ and V * λ is denoted by v, z with v ∈ V λ and z ∈ V * λ . (We will use this pairing notation throughout in other contexts.) 
But as one knows V*_λ is g-irreducible with highest weight λ* ∈ Λ_dom given by λ* = -w_o λ. But then by (1.1) and the mutual orthogonality of the roots in the cascade, λ + λ* = λ - w_o λ lies in Λ_B. On the other hand, regarding U(g)* as a g-module (dualizing the adjoint action on U(g)) it is clear that if f ∈ U(g)* is defined by putting, for u ∈ U(g), f(u) = ⟨π_λ(u) v_λ, v_λ*⟩, (2.4) where v_λ* ∈ V*_λ is a highest weight vector, then f is n-invariant and f is an h-weight vector of weight λ + λ*. (2.5) Now it is true (as will be seen below) that λ + λ* ∈ Λ(S(n)^N). It is the idea of Lipsman-Wolf to construct ξ_{λ+λ*} using f. The method in [LW] is to symmetrize f and restrict to S(n). However Lemma 3.7 in [LW] is incorrect (one readily finds counterexamples). But the idea is correct. One must modify f suitably, and this we will do in the next section. 2.2. Assume s is a finite-dimensional Lie algebra. Let U_j(s), j = 1, ..., be the standard filtration of the enveloping algebra U(s). Let 0 ≠ f ∈ U(s)*. We will say that k ≥ -1 is the codegree of f if k is maximal such that f vanishes on U_{k-1}(s). But then if k is the codegree of f and if x_i ∈ s, i = 1, ..., k, and σ is any permutation of {1, ..., k}, then (x_1 ··· x_k - x_σ(1) ··· x_σ(k)) ∈ U_{k-1}(s), so that f(x_1 ··· x_k) = f(x_σ(1) ··· x_σ(k)). (2.6) But this readily implies that there exists a unique element f^(k) ∈ S^k(s) such that for any u ∈ U_k(s) one has f(u) = ⟨f^(k), u̅⟩, (2.7) where u̅ ∈ S^k(s) is the image of u under the Birkhoff-Witt surjection U_k(s) → S^k(s). Now let s = g and let f be given by (2.4). Let k be the codegree of f. Identify g with g* using the Killing form. Then f^(k) ∈ (S^k(g))^N and is an H-weight vector of weight λ + λ*. On the other hand, by (1.2), U_k(g) = U_k(n_-) ⊕ U_{k-1}(g)b. (2.8) However b · v_λ ⊂ C v_λ, so that f vanishes on U_{k-1}(g)b. But this readily implies f^(k) ∈ S(n)^N. We have proved Theorem 9. Let f be given by (2.4) and let k be the codegree of f. Then λ + λ* ∈ Λ(S(n)^N). Furthermore k = deg(λ + λ*) and, up to scalar multiplication, f^(k) = ξ_{λ+λ*}. (2.9) The inclusion (1.25) is actually an equality: Λ(S(n)^N) = Λ_dom ∩ Λ_B. (2.10) This equality is due to Tony Joseph and I was not aware of it until I read it in [J]. However, the equality (2.10) follows immediately from the modified Lipsman-Wolf construction of Theorem 9. Indeed let ν ∈ Λ_dom ∩ Λ_B. To show ν ∈ Λ(S(n)^N), it suffices to show that e_i(ν) ≥ 0 (2.11) in (1.31) for any i = 1, ..., m. But putting λ = ν, one has λ + λ* = 2ν, and by Theorem 9 one has all e_i(2ν) ≥ 0. But clearly e_i(2ν) = 2e_i(ν). This proves (2.11). The results in this paper will appear in [K1] in Progress in Mathematics, in honor of Joe.
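Two small illustrations may be useful here; both are our additions, not part of the original text. First, Theorem 7 in the smallest case with m = 2: for g = sl(4, C) (type A_3), Dixmier's corner-minor description of S(n)^N in type A exhibits the prime generators explicitly.

```latex
% g = sl(4,C): n = strictly upper triangular matrices, B = {beta_1, beta_2},
% beta_1 = eps_1 - eps_4 (highest root), beta_2 = eps_2 - eps_3, m = 2.
% The prime generators of S(n)^N are the corner minors
\begin{align*}
\xi_{\mu_1} &= e_{14}, &
\mu_1 &= \varepsilon_1-\varepsilon_4=\beta_1=\varpi_1+\varpi_3,\\
\xi_{\mu_2} &= e_{13}e_{24}-e_{14}e_{23}, &
\mu_2 &= \varepsilon_1+\varepsilon_2-\varepsilon_3-\varepsilon_4=\beta_1+\beta_2=2\varpi_2.
\end{align*}
% Both mu_1, mu_2 are dominant, lie in Lambda_B, and occur with multiplicity 1;
% S(n)^N = C[xi_{mu_1}, xi_{mu_2}] is polynomial in m = 2 generators, as Theorem 7 asserts.
```

Second, a minimal worked instance of Theorem 9, for g = sl(2, C):

```latex
% g = sl(2,C), basis (e,h,f), n = Ce, cascade B = {alpha}, m = 1.
% Take lambda = varpi, V_lambda = C^2 = C v_+ + C v_-, so lambda* = lambda
% and lambda + lambda* = alpha. With f(u) = <pi_lambda(u) v_+, v_-^*> as in (2.4):
\begin{align*}
f(1) &= 0, \qquad f(e)=f(h)=0, \qquad f(\mathrm{f})=1,\\
\text{codegree } k &= 1, \qquad
f^{(1)} = e \ \text{(up to scale, after identifying } \mathfrak g \cong \mathfrak g^* \text{ via } K\text{)},\\
\xi_{\lambda+\lambda^*} &= e, \qquad \deg(\lambda+\lambda^*) = 1 = k,
\end{align*}
% recovering the degree-1 generator of S(n)^N = C[e].
```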
An Epidemiological Approach of ACTH Dependent Cushing's syndrome Cushing’s disease, or pituitary ACTH dependent Cushing’s syndrome, is a rare disease responsible for increased morbidity and mortality. Signs and symptoms of hypercortisolism are usually nonspecific: obesity, signs of protein wasting, increased blood pressure, variable levels of hirsutism. Diagnosis is frequently difficult, and requires a strict algorithm. First-line treatment is based on transsphenoidal surgery, which cures 80% of ACTH-secreting microadenomas. The rate of remission is lower in macroadenomas. Other therapeutic modalities including anticortisolic drugs, radiation techniques or bilateral adrenalectomy will thus be necessary to avoid long-term risks (metabolic syndrome, osteoporosis, cardiovascular disease) of hypercortisolism. This review summarizes potential pathophysiological mechanisms, diagnostic approaches, and therapies. Introduction Epidemiology The incidence of Cushing's syndrome is estimated to be equal to 1-3 cases per million inhabitants per year, whereas its prevalence is close to 40 cases per million inhabitants. Of note, prevalence of hypercortisolism is thought to be equal to 2-5% of patients with poorly controlled diabetes and hypertension. Female preponderance is generally assumed to be close to 3:1 [2]. Cushing's disease is an extremely rare condition in children, with a peak in adults in the 3 rd or 4 th decade. Cushing's disease leads to death if untreated; it is responsible for increased morbidity and mortality, due to cardiovascular complications, infections and psychiatric disturbances [3,4]. Characteristics of corticotroph adenomas Cushing's disease is frequently due to monoclonal benign and slow growing microadenomas (less than 10 mm) [9,10]. Plasma ACTH (and cortisol) classically lose their physiologic circadian periodicity. They are partially resistant to physiologic stimuli (i.e., glucocorticoids), and do not respond to the normal feedback negative loop. In contrast, corticotroph adenomas are inappropriately sensitive to CRH and AVP. Altered CRH secretion as well as POMC qualitative changes in gene expression were also reported to be involved in the pathogenesis of Cushing's disease. Cushing's disease can be more atypical: secretion profiles are sometimes cyclic, with hypersecretion preceding a long period of normal secretion [8,11]. Some corticotroph adenomas are called "silent" as they are clinically and biologically comparable to non-secreting pituitary adenomas: diagnosis is made by the pathologist [12]. Finally, rare cases of aggressive pituitary adenomas or carcinomas have been reported [13]. Whether hyperplasia of corticotroph cells is or not a required initial step before the genesis of corticotroph adenoma remains a matter of debate. The origin of the disease, primary pituitary condition or secondary to an abnormality in the hypothalamus (chronic stimulation by CRH [14]), remains a matter of debate. Potentially involved molecular mechanisms Triggering signals leading to Cushing's disease remain unclear. Oncogenes do not appear to be involved, as somatic mutations are usually not present in corticotroph adenomas cells. Recent studies in mice identified a potential role of loss of function of Brg1 (brahma-related gene 1) and HDAC2 (Histone Deacetylase 2) in the pathogenesis of Cushing's disease. Both proteins form a complex with the glucocorticoid receptor and the orphan nuclear receptor nuclear growth factor IB (NGFI-B) to repress POMC secretion. 
Interestingly, about 50% of corticotroph adenomas no longer express these proteins. The loss of Brg1 could lead to overexpression of cyclin E, leading to increased cell proliferation and sporadic hyperplasia or tumors. Interestingly, tumors with a loss of nuclear localization of Brg1 seem to be more responsive to anticortisolic drugs in vitro compared with those with a complete loss of Brg1 [16,17]. Diagnosis Diagnosis of Cushing's disease is difficult [20]. Clinical signs and symptoms are often non-specific; no single biological test combines optimal sensitivity and specificity for the diagnosis of hypercortisolism and for the determination of its etiology [21]. Moreover, pituitary and adrenal imaging can sometimes be confusing. Several steps are needed to first confirm the diagnosis of hypercortisolism and then determine its origin: the first will be to confirm the lack of exposure to exogenous glucocorticoids, which induces the same clinical characteristics as Cushing's syndrome and makes hypercortisolism screening unreliable [22]. In normal subjects, cortisol levels reach a peak in the early morning and a nadir < 50 nmol/l around midnight. Patients with Cushing's syndrome lose this circadian rhythm. As a consequence, early morning ACTH and cortisol values are of poor diagnostic value in screening for hypercortisolism. In contrast, a midnight cortisol value > 200 nmol/l is strongly suggestive of Cushing's syndrome [23]. Evaluation of the circadian rhythm of cortisol is, however, not recommended as a first-line screening method for hypercortisolism. CRH test (100 μg intravenously): more than 50% ACTH and 20% cortisol increases are in favor of Cushing's disease. Sensitivity and specificity are close to 90% [34]. Desmopressin test (10 μg intravenously): ACTH and cortisol increases similar to those observed with the CRH test are in favor of CD, with 70% sensitivity and 85% specificity [20,35]. Concordant responses to at least 2 of 3 of these tests should lead to the diagnosis of Cushing's disease, and to pituitary MRI. However, the sensitivity of MRI in CD is hardly greater than 60-70%, with specificity close to 85%, as most corticotroph adenomas are microadenomas. In one study, 10% of the general population presented pituitary MRI images of less than 5 mm that might be considered as adenomas [36]. Cushing's disease diagnosis is thus confirmed in the presence of an adenoma > 6 mm and concordant responses to tests. Clinical Management Transsphenoidal surgery is the first-line treatment of Cushing's disease [41,42]. It allows remission in 60-90% of microadenomas and 50-70% of macroadenomas, depending on local invasion and the experience of the neurosurgeon [43-45]. Remission should be defined by normal ACTH and cortisol circadian rhythms, and a suppressed cortisol value after an overnight/low-dose dexamethasone suppression test. ACTH-lowering agents Cabergoline is a dopamine agonist well known for its anti-secretory and anti-tumoral efficacy in prolactinomas. Corticotroph adenomas can express dopamine receptors. Recent studies reported that about 25% of patients treated with high doses of cabergoline for CD could be controlled as well [64-66]. A strict echocardiographic follow-up is required, due to a dose-dependent risk of valvulopathy. Pasireotide is a somatostatin agonist with a particular binding affinity for somatostatin receptor (sstr) isoforms 1, 2, 3 and 5. This specific affinity for sstr5 could be of major interest in CD.
Clinical trials are ongoing to determine the efficacy of this drug. Preliminary results suggest that pasireotide is able to decrease cortisol levels in the majority of patients, but only a few reach normalized values. There is a risk of induction or worsening of hyperglycemia in about one-third of cases [67-69]. Prognosis The risks of a chronic hypercortisolic state include excess morbidity and mortality due to increased cardiovascular risk factors (hypertension, dyslipidemia, diabetes mellitus, metabolic syndrome) leading to cardiac disease. Moreover, hypercortisolism is responsible for coagulopathy [77] and atherosclerosis [78], which also increase the risk of developing cardiovascular diseases. Recent data suggest that part of these defects due to hypercortisolism might remain after remission [78], even if the mortality rate returns to normal [79]. The frequency of infectious diseases is also increased, as is delayed healing. Hypercortisolism can induce severe osteoporosis in about 30% of cases, and osteopenia in half of them. Also, acute cortisol excess can induce severe hypokalemia, as well as elevated blood pressure levels, and sometimes psychiatric signs [26,80]. Finally, more than half of patients with CS can present with psychiatric signs, from mild to severe depression, and cognitive dysfunction [81]. Pituitary Radiotherapy Persistent hypercortisolemia after transsphenoidal surgery due to residual tumor can be treated with radiotherapy. Adjunctive medical control of hypercortisolemia may be needed while awaiting the effects of radiotherapy. Conventional fractionated radiotherapy is very effective, but its effects may be delayed up to 10 years, and it can be associated with long-term hypopituitarism. Stereotactic radiosurgery is more rapidly effective, but has been associated with a relapse rate of 20%. Causes Cushing's syndrome There are two types of Cushing's syndrome: exogenous and endogenous. The symptoms for both are the same. The only difference is how they are caused. The most common form, exogenous Cushing's syndrome, is found in patients taking cortisol-like medications such as prednisone. These medications are used to treat inflammatory disorders such as asthma and rheumatoid arthritis, or to suppress the immune system after an organ transplant. This type of Cushing's is temporary and goes away after the patient has finished taking the cortisol-like medications. Endogenous Cushing's syndrome is rare; it usually comes on slowly and can be difficult to diagnose. It is caused either by a problem with the adrenal glands or the pituitary (a gland located at the base of the brain). In the adrenal glands, the problem is caused by a tumor (usually noncancerous) that produces too much cortisol. When the problem is with the pituitary, it is caused by a tumor that produces too much ACTH (the hormone that tells the adrenal glands to make cortisol). When the tumors form in the pituitary, the condition is often called Cushing's disease. The majority of tumors that produce ACTH originate in the pituitary, but sometimes non-pituitary tumors (usually in the lungs) can also produce too much ACTH and cause Cushing's syndrome.
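Returning to the diagnostic work-up described above, the dynamic-test logic lends itself to a compact summary. The sketch below encodes only the thresholds quoted in this review; the function names and structure are ours, and it is a didactic illustration of the published criteria, not a clinical decision tool.

```python
# Didactic sketch of the dynamic-test concordance rule described above.
# Thresholds are the ones given in the text; everything else is our framing.

def crh_test_positive(acth_rise_pct, cortisol_rise_pct):
    """CRH 100 ug i.v.: >50% ACTH and >20% cortisol increases favor
    Cushing's disease (sensitivity and specificity close to 90%)."""
    return acth_rise_pct > 50 and cortisol_rise_pct > 20

def desmopressin_test_positive(acth_rise_pct, cortisol_rise_pct):
    """Desmopressin 10 ug i.v.: increases similar to the CRH criteria favor
    CD, with roughly 70% sensitivity and 85% specificity."""
    return acth_rise_pct > 50 and cortisol_rise_pct > 20

def proceed_to_pituitary_mri(test_results):
    """Concordant positive responses on at least 2 of 3 dynamic tests should
    lead to the diagnosis of Cushing's disease and to pituitary MRI. The third
    test is not fully legible in this copy, so callers supply its result."""
    return sum(bool(r) for r in test_results) >= 2

# Example: clearly positive CRH test, borderline-negative desmopressin test,
# third (unspecified) test assumed negative.
tests = [
    crh_test_positive(acth_rise_pct=80.0, cortisol_rise_pct=35.0),
    desmopressin_test_positive(acth_rise_pct=40.0, cortisol_rise_pct=25.0),
    False,
]
print("Proceed to pituitary MRI:", proceed_to_pituitary_mri(tests))
```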
Vitamin C in Cancer: A Metabolomics Perspective There is an ongoing interest in cellular antioxidants and oxidants as well as cellular mechanisms underlying their effects. Several reports suggest that vitamin C (L-ascorbic acid) functions as a pro-oxidant with selective toxicity against specific types of tumor cells. In addition, reduced glutathione plays an emerging role in reducing oxidative stress due to xenobiotic toxins such as metals and oxidants associated with diseases such as cancer, cardiovascular disease, and stroke. High-dose intravenous vitamin C and intravenous glutathione have been used as complementary, alternative, and adjuvant medicines. Here, we review the molecular mechanisms underlying the regulation of oxidation/reduction systems, focusing on the altered metabolomics profile in cancer cells following treatment with pharmacological vitamin C. This review focuses on the role of vitamin C in energy metabolism in terms of adenosine triphosphate, cysteine, and reduced glutathione levels, affecting cancer cell death. Keywords: vitamin C, cancer, metabolomics, glutathione metabolism, glucose metabolism SYSTEMS BIOLOGY PERSPECTIVES ON VITAMIN C From a systems biology perspective, the integrated use of proteomics, genomics, and transcriptomics is extremely important for translational metabolomics-based research (Shin et al., 2016). Microarray analysis and qPCR have been performed to investigate the effect of vitamin C on gene expression. A recent study has reported that a series of genes in embryonic stem cells are differentially regulated by vitamin C treatment (Shin et al., 2004). Most of these upregulated genes belong to gene families that regulate neurogenesis, neuronal maturation, and neurotransmission (Shin et al., 2004;Belin et al., 2010). Based on the observation that vitamin C treatment suppresses the expression of PMP22, a myelin gene that is overexpressed in one of the hereditary motor and sensory neuropathies, it has been suggested that vitamin C induces dose-dependent suppression of PMP22 expression by inhibiting the production of cAMP, a regulator of the CREB-binding promoter located in PMP22 (Hai et al., 2001;Kaya et al., 2007;Belin et al., 2010). Vitamin C acts as a competitive inhibitor of adenylate cyclase, and represses the expression of a variety of genes under the control of the cAMP-dependent pathway (Belin et al., 2010).
Microarray data suggested that 5 days of vitamin C supplementation under normal physiological condition, but not under cancer condition, induce an upregulation of calnexin isoform (Canali et al., 2014). On the other hand, microarray analysis using human colon carcinoma HT29 cells has shown that vitamin C downregulated the expression of translational initiation factor subunits, tRNA synthetases, and genes crucial for cell cycle progression accompanied by S-phase arrest of proliferative cells induced by vitamin C (Belin et al., 2009). In addition, microarray analysis using mouse models grafted with HT29 cells has consistently shown a decreased expression of translational initiation factor and tRNA synthetases in tumors following vitamin C treatment (Belin et al., 2009). Proteomics research also elucidates the protein expression in terms of post-translational modifications triggered by a specific stimulus independent of protein neo-synthesis. Post-translational modifications such as phosphorylation of tyrosine or serine/threonine, sulfur oxidation of cysteine, and glutathionylation represent key mechanisms of cell stimulation related to oxidative stress. Our group conducted proteomics analyses of the effect of vitamin C on cancer at the cellular level and in mouse models grafted with tumor cells (Park et al., 2006(Park et al., , 2009. When human leukemia cell line NB4 was treated with relatively high concentration (0.5 mM) of vitamin C, approximately 200 differentially expressed spots were detected by two-dimensional electrophoresis. This proteomics analysis suggested that the domain polymerization state of quaternary structure protein composed of four domains via disulfide bond was altered in response to vitamin C treatment. One of these proteins included protein disulfide isomerase (PDI) belonging to the thiol/disulfide exchange catalyst superfamily. It acts as a protein-thiol-oxidoreductase enzyme. It also shares sequence homology with thioredoxin (Park, 2013). Another protein was immunoglobulin heavy chain binding protein (BiP), a multi-domain chaperone identical to chaperone Hsp70. BiP binds via a disulfide bond to the α-subunit of prolyl 4-hydroxylase (P4-H), a partner of PDI (John and Bulleid, 1996). P4-H is a multimeric protein composed of α-subunit and β-subunit. Its α-subunit is catalytically more important than its β-subunit. In addition, its β-subunit is identical to the multifunctional PDI enzyme (John and Bulleid, 1996). These results suggest that vitamin C oxidizes intracellular levels of reduced glutathione and the valence change of glutathione and reduced glutathione results in disulfide bond rearrangement in the quaternary structure of proteins such as PDI and BiP. Our previous study also demonstrated that changes in intracellular valence of glutathione between reduced glutathione occur shortly after exposure to vitamin C (Park et al., 2004). Regional changes in oxidation state induced by vitamin C lead to a variety of alterations involving sulfur oxidation in the cellular milieu and result in transitions in the protein quaternary structure. The oxidation state of cysteine sulfur is important for the determination of the tertiary structure of proteins (Park, 2013). An important example of protein influenced by regional changes in oxidative state associated with vitamin C is glyceraldehyde 3-phosphate dehydrogenase (GAPDH) involved in glycolysis metabolism. 
It has been reported that GAPDH activity is reduced by reactive oxygen species (ROS) or vitamin C treatment (Hwang et al., 2009;Yun et al., 2015). High concentration of vitamin C generating ROS suppresses GAPDH via Cys glutathionylation (Hwang et al., 2009;Yun et al., 2015). The role of GAPDH in vitamin C-dependent alterations suggests that vitamin C influences glucose metabolism via altered oxidation/reduction status. It also suggests an interface between proteomics analysis and metabolomics approach to determine the effect of vitamin C. METABOLOMICS OVERVIEW Metabolomics is appropriate for the study of biological processes induced by endogenous developmental changes or drugs and other xenobiotics via endogenous metabolome (Oskouie and Taheri, 2015). Approximately 38000 chemical compounds in metabolites are generally detected in the human body according to a recent report (Oskouie and Taheri, 2015). Metabolome is typically composed of carbohydrates, amino acids, lipids, nucleotides, and other organic compounds. Metabolites exhibit varying levels of volatility and polarity, and therefore, a variety of analytical technologies are employed in metabolomics studies (Oskouie and Taheri, 2015). The most common methodologies used for the identification of metabolites include nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry (Meister and Anderson, 1983). Glucose Metabolism in Cancer Cell Metabolic profiling indicates altered metabolomics in the cancer cells during cancer progression. Therefore, metabolite-based investigations of various cancers represent a useful approach to identify diagnostic, therapeutic, and prognostic biomarkers in cancer. It has been suggested that glucose metabolism in tumor cells varies from that of normal cells. Glucose metabolism is known to be associated with sustainable proliferation. Otto Warburg suggested that tumor cells metabolize approximately ten-fold more glucose to lactate under aerobic conditions than normal tissues in a given time (Koppenol et al., 2011). Moreover, higher conversion of glucose to lactic acid under aerobic condition in cancer cells is accompanied by retaining mitochondrial respiration (Koppenol et al., 2011). Specific metabolic proteins have been identified as potential oncoproteins (Table 1). For example, pyruvate kinase type M2 is an oncoprotein expressed in squamous cell carcinoma (Wong et al., 2008). Specific oncoproteins alter cancer cell metabolism by directly regulating key metabolic enzymes and pathways (Nagarajan et al., 2016). For example, oncogenic transcription factor MYC activates the transcription of glycolytic enzyme genes and glucose transporters that enhance aerobic glycolysis (Shim et al., 1997;Osthus et al., 2000;Ahuja et al., 2010). In addition, oncogenic kinase Akt activates hexokinase 2, phosphofructokinase 1 (PFK1), and phosphofructokinase 2 (PFK2). It also induces localization of glucose transporters to the cell surface, resulting in enhanced glycolysis (Deprez et al., 1997;Robey and Hay, 2009). It is well known that mitochondrial metabolism is regulated by oncoprotein Bcl-2 (Krishna et al., 2011). Ha-Ras and β-catenin oncoproteins reprogram metabolic flows in mouse liver tumors (Unterberger et al., 2014). Hepatitis B X-interacting protein, an oncoprotein, also enhanced glucose metabolism by suppressing the synthesis of cytochrome c oxidase 2 and pyruvate dehydrogenase alpha 1 in breast cancer (Liu et al., 2015). 
[Table 1 summarizes oncoproteins with their metabolic regulation, associated cancer types, and references; only the column headers survived extraction.] Other evidence implicating oncogenes in aerobic glycolysis includes phosphorylation of a variety of glycolytic enzymes by oncogenic Src kinase and enhanced glucose uptake by oncogenic Ras activation in fibroblasts (Cooper et al., 1984;Flier et al., 1987). The Ras oncogene links the metabolomic alterations induced by vitamin C with tumor suppression by vitamin C. A recent report found that high-dose vitamin C is selectively toxic to human colorectal cancer cells carrying either K-Ras or B-Raf mutations (Yun et al., 2015). Mutant K-Ras or B-Raf activate the downstream mitogen-activated protein kinase (MAPK) pathway, leading to the up-regulated expression of GLUT1, a glucose transporter that imports dehydroascorbate (DHA, an oxidized form of vitamin C) into cells (Yun et al., 2009, 2015). Imported DHA is then reduced back to vitamin C by oxidizing glutathione, resulting in depletion of glutathione and high levels of intracellular ROS (Vera et al., 1993, 1995;Yun et al., 2015). It has been suggested that such oxidative stress in highly glycolytic K-Ras- or B-Raf-mutant cells triggers inactivation of GAPDH via Cys oxidation, leading to abnormal glycolysis that is rarely seen in K-Ras or B-Raf wild-type cells (Yun et al., 2015). In addition to glucose metabolism, vitamin C induces specific changes in other cellular metabolic pathways in cancer cells. Oxidative stress is an important mechanism of vitamin C action in cancer cells. Glutathione-related metabolism also affects cancer progression by vitamin C because glutathione is a major cellular antioxidant. REDOX METABOLISM VIA GLUTATHIONE IN CANCER CELLS An underlying hypothesis is that ROS production is an inevitable consequence of electron transport combined with oxidative phosphorylation under physiological conditions. High levels of ROS induce cellular senescence or death. However, oxidation-evading mechanisms of tumor cells differ from those of normal cells (Andrisic et al., 2018). As discussed above, a distinct feature of many cancer cells is their metabolic dependence on glycolysis even in the presence of oxygen (Koppenol et al., 2011). Although energetically less efficient, glycolysis produces ATP at a much faster rate by avoiding mitochondrial oxidative phosphorylation (Andrisic et al., 2018). Therefore, cancer cells are protected from the deleterious ROS generation that normally should be expected during enhanced proliferation (Andrisic et al., 2018). In addition, enhanced glycolysis is likely to act as a pentose phosphate pathway shunt to provide NADPH and substrates for nucleotide synthesis. NADPH also acts as a reducing agent for oxidized GSH and provides intracellular redox balance (Andrisic et al., 2018). Nonetheless, production of ROS is stimulated in cancer cells compared with that of normal cells (Trachootham et al., 2009). Therefore, cancer cells generally up-regulate multiple antioxidant systems including GSH and thioredoxin, buffering ROS levels to allow tumor cell progression (Harris et al., 2015). Although thioredoxin is not as abundant as GSH in cells, it reduces ROS and is regenerated in a GSH-independent manner by thioredoxin reductase (Holmgren and Lu, 2010). Because GSH and thioredoxin pathways synergistically contribute to cancer cell survival, it has been suggested that blocking both GSH and thioredoxin pathways inhibits cancer promotion (Harris et al., 2015).
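The DHA import mechanism described above has a simple stoichiometric core (DHA + 2 GSH → ascorbate + GSSG). The toy mass-balance below is our illustration of that textbook reaction; the function, names, and numbers are ours, not data from the cited studies.

```python
# Toy mass-balance for the DHA/GSH mechanism described above: imported
# dehydroascorbate (DHA) is reduced to ascorbate at the cost of glutathione
# (2 GSH -> GSSG per DHA reduced). Illustrative numbers only.

def reduce_dha(dha_mM, gsh_mM, gssg_mM):
    """Reduce as much DHA as the GSH pool allows (2 GSH per DHA)."""
    reducible = min(dha_mM, gsh_mM / 2)
    gsh_mM -= 2 * reducible
    gssg_mM += reducible
    ascorbate_mM = reducible
    dha_mM -= reducible
    return dha_mM, gsh_mM, gssg_mM, ascorbate_mM

# Example: a GLUT1-overexpressing cell importing 2 mM DHA against a 5 mM GSH pool.
dha, gsh, gssg, asc = reduce_dha(dha_mM=2.0, gsh_mM=5.0, gssg_mM=0.05)
print(f"GSH left: {gsh:.2f} mM, GSSG: {gssg:.2f} mM, ascorbate formed: {asc:.2f} mM")
# The collapsing GSH/GSSG ratio is the 'glutathione depletion' referred to in
# the text, which leaves ROS such as H2O2 unbuffered in K-Ras/B-Raf mutant cells.
```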
Cancer cells show metabolic alterations to manage oxidative stress, and therefore, a recent study has suggested that the glutathione synthetic pathway is a promising therapeutic target (Beatty, 2015). Mass spectrometry was used to conduct metabolomics profiling of triple-negative breast cancer (TNBC) compared with control cells (Beatty, 2015). TNBC does not harbor oncogenic HER2 amplification. It does not express estrogen receptor or progesterone receptor. TNBC is an aggressive and genetically heterogeneous subset of breast cancer, which is refractory to usual targeted therapies (Beatty, 2015). A distinct feature of TNBC metabolic profiling is that levels of glutathione, a cellular redox buffer, are lower in TNBC cell lines compared with the controls (Beatty, 2015). Glutathione biosynthesis is required to suppress ROS in TNBC cells. Thus, inhibition of glutathione biosynthesis leads to reduced tumor cell growth both in vitro and in vivo (Beatty, 2015), illustrating the role of GSH metabolic alterations in cancer. Metabolomics thus contributes to a better therapeutic understanding of cancer. Likewise, malignant mesothelioma is a fatal cancer with no effective cure. Recently, disabling mitochondrial peroxide metabolism or reducing Akt signaling suppressed mesothelioma malignancy (Tomasetti et al., 2014;Cunniff et al., 2015). This finding may be linked to the observation that ROS induced by a high dose of ascorbate caused cell death in mesothelioma (Takemura et al., 2010). EFFECT OF VITAMIN C ON GSH METABOLISM AND GLUCOSE METABOLISM Vitamin C (L-ascorbic acid) is a well-known reducing agent that is easily oxidized to dehydroascorbate (DHA) in solution. Physiologically, vitamin C is transported into cells as ascorbate in specific cell types by sodium-dependent ascorbic acid transporters. It can also enter cells in the oxidized DHA form, facilitated by glucose transporters (GLUTs) (Nishikimi and Yagi, 1991;Vera et al., 1993;Tsukaguchi et al., 1999;Rumsey et al., 2000;Liang et al., 2001). Following the transportation of DHA into cells via glucose transporters, it is reduced to ascorbate using GSH, and is trapped inside the cells where it accumulates as ascorbic acid (Vera et al., 1993, 1995). Therefore, vitamin C is considered a pro-oxidant that produces oxidative stress (Halliwell and Foyer, 1976;Heikkila and Cabbat, 1983). Accordingly, vitamin C enhances arsenic trioxide (As2O3)-induced cytotoxicity in multiple myeloma cells by decreasing intracellular GSH levels. A clinical study has reported such results in patients with multiple myeloma treated with a combination of vitamin C and As2O3. In vitro, vitamin C suppresses the growth of mouse myeloma cells. In vivo, vitamin C inhibited the growth of leukemic progenitor cells isolated from a patient with acute myeloid leukemia (AML) in our previous study (Park et al., 1971, 1992;Park, 1985). In a few clinical studies, manipulation of vitamin C levels in AML patients has produced clinical benefit (Park et al., 2001, 2002). Based on such results, complementary and alternative medicine practitioners have used high concentrations of vitamin C to treat their patients (Meister and Anderson, 1983;Park et al., 2001, 2002;Park, 2013). The physiological concentration of vitamin C is <0.1 mM in plasma.
Plasma vitamin C concentrations (1-10 mM, depending on cell lines) that are toxic to cancer cells in vitro can be attained clinically by i.v. administration, but not via oral administration, of a high dose of vitamin C (Park, 2013). Recent studies have found that serum concentrations of GSH are associated with various disease conditions (Droge and Breitkreutz, 2000;Prousky, 2008;Forman et al., 2009;Smeyne and Smeyne, 2013). For example, decreased serum concentration of GSH has been linked to cancer and neurodegenerative disease susceptibility (Smeyne and Smeyne, 2013). Because GSH is so poorly absorbed in the gastrointestinal system, i.v. GSH (rather than most oral GSH supplements) represents another complementary and alternative medicine therapy. We have previously reported that in vitro treatment with 0.25-2.0 mM vitamin C induces apoptosis of leukemia cells (Park et al., 2004). Vitamin C-stimulated oxidation of GSH to its dimerized oxidized form (GSSG) leads to accumulation of hydrogen peroxide (H2O2), resulting in the induction of apoptosis. A number of previous reports also suggested that high-dose vitamin C kills cancer cells by acting as a pro-drug that generates H2O2 (Chen et al., 2008;Takemura et al., 2010;Du et al., 2012;Uetaki et al., 2015). The direct role of H2O2 in the induction of apoptosis in acute myeloid leukemia (AML) cells has been confirmed using catalase to completely abrogate vitamin C-induced apoptosis (Park et al., 2004). A recent metabolomics study has suggested an important relationship between vitamin C and GSH in terms of glucose metabolism, including glycolysis, the citric acid cycle (tricarboxylic acid; TCA cycle), and the pentose phosphate pathway (Uetaki et al., 2015). A list of metabolites associated with metabolic perturbations related to glucose metabolism is provided in Table 2, which is in line with a previous report showing that vitamin C influenced glucose metabolism (Hwang et al., 2009). GSH plays a significant role in cellular defense against oxidative stress by reducing free radicals and ROS. It acts in various cysteine-mediated intracellular processes, including the metabolism of cysteine amino acids and biosynthesis of leukotrienes and DNA (Larsson et al., 1983;Meister and Anderson, 1983). GSH is synthesized via two sequential enzyme reactions, catalyzed by γ-glutamylcysteine synthetase (γ-GCS) and GSH synthase. γ-GCS catalyzes the rate-limiting step of GSH synthesis (Meister and Anderson, 1983). GSTs are a major group of detoxification enzymes that conjugate GSH to reactive metabolites. Multiple forms of GST isozymes have been identified (Shepherd et al., 2000). To date, eight distinct classes (α, κ, µ, ϕ, π, θ, σ, and ζ) encoding soluble cytosolic GSTs have been identified in mammals on the basis of their degree of sequence identity (Hayes and McLellan, 1999). GST-P1 is a gene that encodes a GST belonging to the π class. The GST-A1, A2, A3, and A4 genes encode human GST subunits belonging to the α class. The GST-M1, M2, M3, M4, and M5 genes encode GST subunits belonging to the µ class (Sheehan et al., 2001). Substantial evidence suggests that ROS play an important role in cellular signaling linked to transcriptional machinery or act as a second messenger (Griffith and Meister, 1979;Palmer and Paulson, 1997;Kunsch and Medford, 1999;Hensley et al., 2000;Sheehan et al., 2001;Carcamo et al., 2002a,b).
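Since much of this discussion turns on the GSH/GSSG couple, it may help to see how the ratio maps onto a redox potential. The sketch below uses the standard Nernst relation for the GSSG/2GSH half-cell (a textbook formula; the constants, concentrations, and function are our illustration, not data from the studies cited above).

```python
import math

# Nernst relation for the GSSG + 2H+ + 2e- -> 2 GSH half-cell:
#   E = E0' - (R*T / (2*F)) * ln([GSH]^2 / [GSSG])
# with E0' ~ -240 mV at pH 7.0 and 25 C (textbook value; illustration only).

R = 8.314          # J / (mol K)
F = 96485.0        # C / mol
T = 298.15         # K
E0_PRIME = -0.240  # V, standard potential at pH 7

def glutathione_redox_potential(gsh_M, gssg_M):
    """Half-cell potential (V) of the 2GSH/GSSG couple. Note the squared GSH
    term: the potential depends on absolute GSH levels, not just the ratio."""
    return E0_PRIME - (R * T / (2 * F)) * math.log(gsh_M**2 / gssg_M)

# A vitamin C-driven shift from a reduced to a more oxidized cytosol:
before = glutathione_redox_potential(gsh_M=5e-3, gssg_M=25e-6)   # resting cell
after = glutathione_redox_potential(gsh_M=2e-3, gssg_M=1.5e-3)   # after GSH oxidation
print(f"E_before = {before*1000:.0f} mV, E_after = {after*1000:.0f} mV")
```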
Furthermore, evidence indicates that phase II detoxification enzymes such as GSH S-transferase, NAD(P)H:quinone oxidoreductase1, UDP-glucuronosyltransferase, and epoxide hydrolase can be induced by various compounds, including food phytochemicals (Wattenberg, 1981;Nakamura et al., 2000). Our previous data established the regulation of GSH levels via transcriptional regulation of glutathione synthase and GST synthesis by vitamin C (Park, 2007). The role of vitamin C-induced changes in GSH/GSSG ratio was first established in this report. We have investigated the relationship of vitamin C with GSH in leukemia cell lines. We found that vitamin C-induced decrease in intracellular GSH/GSSG ratio and H 2 O 2 accumulation led to transcriptional induction of intracellular protein and protection against oxidative stress, such as γ-GCS in HL-60 and NB-4 cells. Although the effect of H 2 O 2 accumulation induced by vitamin C was eliminated by catalase, vitamin C-mediated transcriptional induction of these enzymes has been observed, indicating that the altered GSH/GSSG ratio was more important than H 2 O 2 accumulation in inducing the activity of enzymes that protect against oxidative stress (Park, 2007). A redox cycle requires adequate support via GSH reductase and GSH peroxidase for defense against redox stress. In addition, relatively high concentrations of GSH via synthesis and active transport of GSSG or GSH S-conjugates are needed. Stimulation of γ-GCS transcription increases GSH concentration (Goto et al., 1995). Our observations suggested that vitamin C stimulated the expression of γ-GCS, resulting in an increase in the level of GSH via de novo synthesis at the expense of cysteine (Park, 2007). Concentrations of GSH in three types of myeloid leukemia cells were elevated within 3 h after treatment with vitamin C and gradually returned to their baseline levels by 12 h (Park, 2007). Such increase in the concentration of GSH was associated with enhanced expression of γ-GCS. GSH synthesis and GST activation in response to vitamin C occurred rapidly (in 1 h) (Park, 2007). The elevated expression of γ-GCS in response to vitamin C is accompanied by corresponding increase in the concentration of GSH, representing an important function of vitamin C in cellular GSH homeostasis (Park, 2007). Cysteine is known to be a rate-limiting precursor for GSH synthesis (Watanabe and Bannai, 1987). Therefore, we investigated cysteine uptake in AML cells after treatment with vitamin C (Park, 2007). Intracellular L-Cys incorporation was measured in intact HL-60, NB4, and KG1 cells exposed to vitamin C using 35 S-labeled-L-Cys containing media (Park, 2007). The rate of uptake in the absence of vitamin C was very low (at most 119% of baseline by 16 h) (Park, 2007). However, it peaked after 1 h and 3 h (Park, 2007). An inhibitor of gamma-glutamylcysteine synthetase, buthionine sulfoximine potently inhibited the second peak, suggesting glutathione synthesis following the incorporation of cysteine. These results indicate that vitamin C induced GSH synthesis in parallel with intracellular cysteine uptake. Interestingly, intracellular GSH levels in these AML cells incubated with vitamin C peaked around 3 h and declined thereafter, while the increase in [ 35 S]-L-Cys incorporation occurred at 3 h and continued (Park, 2007). This result demonstrated that transporation of [ 35 S]-L-Cys into cells through cysteine uptake is followed by incorporation and intracellular transfer. 
Thus, the sulfhydryl transfer system might be affected by vitamin C. In view of the signaling effects of vitamin C, the association between vitamin C and glutathione in myeloid cells may partly explain the potential effect of vitamin C on cellular signal transduction. It appears that vitamin C has a positive effect on sulfhydryl (-SH) uptake. Considering that intracellular concentration of glutathione determines cellular thiol-disulfide redox potential to a large extent, it might regulate a variety of cellular processes via disulfide bridge formation and protein glutathionylation. CONCLUSION Recently, biological and pre-clinical studies suggest that high dose intravenous vitamin C combined with conventional chemotherapy agent synergistically increase the effectiveness of cancer therapy. (Espey et al., 2011;Hoffer et al., 2015). A phase I study states that high dose intravenous vitamin C in combination with gemcitabine and erlotinib in patients with metastatic pancreatic cancer did not reveal increased toxicity (Monti et al., 2012). In view of the metabolic effect, we conclude that vitamin C plays a key role in the challenges associated with glucose and GSH metabolism (Figure 1 and Table 2). Vitamin C induces high ROS level and oxidation of GSH. Accompanied by direct glutathionylation of GAPDH in glycolysis, glucose metabolism was altered by vitamin C treatment. Furthermore, changes in reduced glutathione ratio triggered by vitamin C resulted in altered GSH metabolism via de novo synthesis. From the information available, it seems clear that vitamin C is involved in a variety of oxidative mechanisms. Therefore, vitamin C may be an adjuvant medicine combined with conventional chemotherapy drug to induce cancer cell death. In the future, another issue pertaining to vitamin C is whether its use as an adjuvant medicine is valid in all populations or only in some populations depending on the range of intakes. Therefore, further studies are required to identify the molecular targets of vitamin C sensitivity such as transporter. AUTHOR CONTRIBUTIONS SP conceived the idea and wrote the original draft. SA and YS drafted the manuscript. YY and CY reviewed and supervised the manuscript writing process.
On the Lower Central Series Quotients of a Graded Associative Algebra We continue the study of the lower central series L_i(A) and its successive quotients B_i(A) of a noncommutative associative algebra A, defined by L_1(A) = A, L_{i+1}(A) = [A, L_i(A)], and B_i(A) = L_i(A)/L_{i+1}(A). We describe B_2(A) for A a quotient of the free algebra on two or three generators by the two-sided ideal generated by a generic homogeneous element. We prove that it is isomorphic to a certain quotient of Kähler differentials on the non-smooth variety associated to the abelianization of A. Introduction For an associative algebra A, define its lower central series by L_1(A) = A, L_{i+1}(A) = [A, L_i(A)]. We are interested in the successive quotients of these subspaces, B_i(A) = L_i(A)/L_{i+1}(A), more precisely in B_2(A). The study of these quotients began in [FS], where the interest was focused on the case where A = A_n is the free algebra over C with n generators. One of the main results of this paper was that the space B_2(A_n) is isomorphic, as a graded vector space, to the space Ω^{even>0}_{closed}(C^n) of closed differential forms of positive even degree on the space C^n, the algebraic variety associated to the abelianization C[x_1, ..., x_n] of the algebra A_n. We will call this map the FS isomorphism. Next, [DKM] provided an explicit basis for B_2(A_n) and a new proof of the isomorphism with differential forms, along with partial results about B_3(A_2). The appendix of the paper, by P. Etingof, studies B_2(A) for any associative algebra A. There it is proved that if the variety associated to the abelianization A_ab of A is smooth, and certain other mild conditions are satisfied, then there exists an equivalent of the FS isomorphism between B_2(A) and Ω^{even>0}_{closed}(Spec(A_ab)). This paper studies the case when A = A_n/⟨P⟩ is a quotient of the free algebra on n generators by an ideal ⟨P⟩ generated by a single homogeneous relation P. The abelianization of this algebra is no longer smooth, as the origin is not a smooth point if deg(P) > 1. Hence, results from the appendix of [DKM] no longer apply. We attempt to determine the extent to which singularities influence the structure of B_2. Our main result is an analogue of the FS isomorphism: Theorem 1.1. For generic P, n = 2, 3, the algebra A = A_n/⟨P⟩, Ω^0 = A_ab, Ω^1 the module of Kähler differentials over A_ab, and d : Ω^0 → Ω^1 the differential map, there exists an isomorphism of graded vector spaces φ : B_2(A) → Ω^1/dΩ^0. Thus, the structure of B_2(A_n/⟨P⟩) is not affected by the singularity for generic P (although there are special polynomials P for which the FS map is not an isomorphism, because the space B_2(A_n/⟨P⟩) has graded pieces of larger dimension than the corresponding Ω^1/dΩ^0; examples include n = 2, 3, P = x^2 y, or see Remark 5.2). This suggests that a certain wider class of "mild" singularities of A_ab does not affect the structure of B_2(A). While the proof that the FS map is really an isomorphism from [DKM] works only in the smooth case, and our proof works only for a special kind of singularity, it would be interesting to find a unified approach, one that would potentially extend this to a wider class of noncommutative algebras A. The organization of the paper is as follows. Section 2 contains the main definitions, some results from [DKM], and several technical propositions that we will use in calculations in the next sections. Sections 3 and 4 offer explicit bases of B_2(A_n/⟨P⟩) for P = x^d + y^d and n = 2, 3 respectively.
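As a preview of the numerology established in Sections 3 and 4 below, the following sketch (ours, not from the paper) numerically checks the n = 2 instance of Theorem 1.1 at the level of Hilbert series: it compares the graded count of the bracket basis {[x^i, y^j] : 0 < i, j < d} of B_2(A_2/⟨x^d + y^d⟩) with the series of Ω^1/dΩ^0 over C[x, y]/(x^d + y^d), computed from the module presentations used in Section 3.

```python
# Sanity check of the n = 2 Hilbert series identity behind Theorem 1.1,
# for P = x^d + y^d. Assumes, as in Section 3, that Omega^0 = C[x,y]/(x^d+y^d),
# that Omega^1 has generators dx, dy (degree 1) with the single relation
# x^(d-1)dx + y^(d-1)dy (degree d), and that ker(d) consists of the constants.
import sympy as sp

t = sp.symbols('t')
N = 30  # compare series coefficients up to degree N

def coefficients(expr, order):
    series = sp.series(expr, t, 0, order + 1).removeO()
    return [series.coeff(t, k) for k in range(order + 1)]

for d in range(2, 7):
    # Graded count of the basis {[x^i, y^j] : 0 < i, j < d} of B_2.
    basis_count = [0] * (N + 1)
    for i in range(1, d):
        for j in range(1, d):
            if i + j <= N:
                basis_count[i + j] += 1

    h0 = (1 - t**d) / (1 - t)**2          # Hilbert series of Omega^0
    h1 = h0 * (2*t - t**d)                # ... of Omega^1
    h_quotient = h1 - (h0 - 1)            # ... of Omega^1 / d(Omega^0)

    assert coefficients(h_quotient, N) == basis_count, f"mismatch at d={d}"

print("Hilbert series of B_2 and Omega^1/dOmega^0 agree for d = 2..6")
```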
Section 5 uses the results of the appendix of [DKM] to connect our case (A = A n / P ) and the corresponding smooth case (A = A n / P − 1 ). Section 6 then uses the maps constructed in section 5 and data about dimensions in the special case of sections 3 and 4 to prove the main theorem for generic P . The Appendix contains some numerical data that we obtained with MAGMA algebra software [BCP] in the early stages of the project. Acknowledgements. The authors are very grateful to Pavel Etingof for introducing us to this area of research, suggesting the problem, and devoting a great deal of energy to it through many helpful conversations. We are also grateful to David Jordan for help with the software MAGMA, and to Travis Schedler for explaining to us the results of [EG]. The work of both authors was supported by the Research Science Institute, and conducted in the Department of Mathematics at MIT. The work of M.B. was partially supported by the NSF grant DMS-0504847. Preliminaries Throughout the paper, all the algebras are associative over C and with a unit. For A such an algebra and B, C subspaces of it, denote by [B, C] the subspace of A spanned by all the elements of the form [b, c] Let us begin with definitions of the spaces upon which this paper will focus. Definition 2.1. For any associative algebra A, define its lower central series to be the sequence L i (A) of subspaces of it defined inductively by Denote its successive quotients by Definition 2.2. For R a commutative associative algebra, let Ω 0 (R) and Ω 1 (R) be R-modules as follows: Ω 0 (R) = R, and Ω 1 (R) is defined by generators {df, f ∈ R} and relations The module Ω 1 (R) is called the module of Kähler differentials of R. There is a natural map d : Ω 0 (R) → Ω 1 (R) given by d(f ) = df . Remark 2.3. If SpecR is smooth, Ω i (R) are exactly the spaces of regular differential i-forms on the algebraic variety SpecR, with d the differential map. If additionally dim SpecR ≤ 2 and the space has zero deRham cohomology in positive degrees (e.g. C 2 ), then the space of even closed forms is exactly Ω 1 (R)/dΩ 0 (R). In this light, our results are an extension of the results of [FS] and [DKM]. For A an associative algebra, denote by A ab its abelianization, i.e. the commutative associative algebra A ab = A/(A[A, A]A). Let A n = C x 1 , ..., x n be the free algebra in n generators, graded by deg x i = 1. We will consider the quotient of A n by an associative ideal generated by one homogeneous relation P of degree d. We will denote this ideal by P and the algebra by A = A n / P . As P is homogeneous, A inherits the grading from A n . In cases where we have few variables, we will call them x, y, z, etc. instead of x 1 , x 2 , x 3 , etc. We will denote the i-th component of any graded vector space W by W [i], and its Hilbert- For brevity, we shall denote Ω i (C[x 1 , ...x n ]/ P ) by Ω i P =0 . It will always be clear which n we are considering. Note that the Ω i are also graded by total degree of polynomial, and that d is a homogeneous map. We shall now prove a few lemmata used extensively in the paper. Lemma 2.4. The space B 2 (A n / P ) is a quotient of B 2 (A n ) by the image in B 2 (A n ) of the intersection of P with L 2 (A n ). Proof. For brevity, write L i instead of L i (A n ). 
Using the definition, the second and third isomorphism theorems, and the fact that the same holds for lower central series: , we get: which is isomorphic to the quotient of B 2 (A n ) by the image of the intersection of the ideal P with L 2 (A n ), as claimed. This lemma allows us to use the explicit bases from Theorems 2.1 and 2.2 of [DKM] in our context. Namely, we use it to conclude that the image under the quotient map of the basis for B 2 (A n ) spans B 2 (A n / P ), with the only new relations contained in P ∩ L 2 . Another result form [DKM] generalizes easily to our context: Lemma 2.5. For q i , Q, a, b, and c arbitrary elements of A n / P , the following relations hold in B 2 (A n / P : Proof. [DKM], proof of Proposition 2.1, proves these relations hold in B 2 (A n ). Because of Lemma 2.4, they hold in B 2 (A n / P ). Most of the calculations in the paper will be concerned with algebras A n / P , n = 2, 3 for the specific polynomial P = x d + y d , d ≥ 2. Let us prove a useful proposition for that algebra: Proof. We will find two linearly independent equations relating [x i+d y j , a], [x i+d a, y j ], and [ay j , x i+d ]. Lemma 2.5 (1) and (2) imply To obtain another such equation, use Lemma 2.5 and This identity is not a multiple of identity (1) above, as 0 ≤ i i+d < 1 and 0 < j j+d < 1. Thus, we can eliminate two of the three terms of our initial spanning set. It is now easy to see that [x i+d a, y j ] and [ay j , x i+d ] can always be expressed in terms of [x i+d y j , a]. Description of We now construct a basis for B 2 (A 2 / x d + y d ) and calculate the Hilbert-Poincaré series for the space. Proof. We shall first show that this set spans B 2 (A 2 / x d + y d ) and then prove its linear independence. By Proposition 2.1 of [DKM], we have that Proof. We will only show the case i ≥ d; the proof for j ≥ d is exactly the same. Write i = d + u, for u ≥ 0. Using x d + y d = 0 and Lemma 2.5(3), we have . By Lemma 2.4, this linear combination, seen as an element of L 2 (A 2 ), belongs to L 3 (A 2 )+A 2 · (x d + y d ) · A 2 . In other words, it can be expressed as a linear combination of triple brackets of monomials in A 2 plus some element f ∈ x d + y d . We observe, however, that terms that come from the combination of [x i , y j ] have degree < d in both x and y, while nonzero terms that come from the ideal always have degree either in x or in y (or in both) strictly larger than d. Finally, we observe that all four terms that come from each triple bracket of monomials have the same degrees in both x and y. Hence, triple brackets of monomials which cancel the terms from the ideal cannot influence the left hand side, and vice versa. But then we can cancel out the ideal's contribution and assert that the linear combination of [x i , y j ] lies in L 3 (A 2 ). This, however, constitutes a contradiction, because [x i , y j ] were linearly independent in B 2 (A 2 ). We will later prove that there is an isomorphism between B 2 (A n / P ) and Ω 1 . For now, we can state: Proof. Using Theorem 3.1, To calculate the Hilbert-Poincaré series for Ω 1 The only elements in the kernel of map d are constants, , is a module over Ω 0 with two homogeneous generators dx and dy of degree 1 and one relation of degree d, namely Clearly, the series coincide. Theorem 4.1. The following set constitutes a basis for B 2 (A 3 / x d + y d ), with the constraints 0 < i, j, k and j < d: Proof. Theorem 2.2 of [DKM] states the above list, subject only to the conditions i, j, k > 0, constitutes a basis for B 2 (A 3 ). 
4. Description of B_2(A_3/⟨x^d + y^d⟩)

Theorem 4.1. The basis of B_2(A_3) listed in Theorem 2.2 of [DKM], subject to the additional constraints 0 < i, j, k and j < d, constitutes a basis for B_2(A_3/⟨x^d + y^d⟩).

Proof. Theorem 2.2 of [DKM] states that this list, subject only to the conditions i, j, k > 0, constitutes a basis for B_2(A_3). By Lemma 2.4 it must be a spanning set for B_2(A_3/⟨x^d + y^d⟩). Because x^d + y^d = 0, we can express all elements as linear combinations of those that contain only powers of y up to d − 1, so we can add the constraint j < d. As in Lemma 3.2 we can show that [x^i, y^j] = 0 if i ≥ d. Finally, Proposition 2.6 with a = z^k expresses any [x^i z^k, y^j] with i ≥ d in terms of [x^i y^j, z^k]. Thus the above set is a spanning set for B_2(A_3/⟨x^d + y^d⟩).

We now claim that this spanning set is indeed a basis. Assume that some linear combination of the elements above is 0 in B_2(A_3/⟨x^d + y^d⟩). Then there exist α ∈ L_3(A_3), a sum of triple brackets, and Σ_i β_i (x^d + y^d) γ_i ∈ A_3·(x^d + y^d)·A_3 such that this linear combination equals α + Σ_i β_i (x^d + y^d) γ_i in A_3. The element α is expressible as a linear combination of triple brackets of monomials; each such bracket expands into four monomials, all of which have the same degree in x, y, and z. Thus, any such triple bracket that affects the left-hand side has all y-degrees strictly less than d, and thus does not affect Σ_i β_i y^d γ_i; vice versa, any triple bracket affecting Σ_i β_i y^d γ_i cannot affect the left-hand side. Thus, all the terms of Σ_i β_i y^d γ_i are canceled out by triple brackets. Alternatively stated, we get Σ_i β_i y^d γ_i ∈ L_3(A_3). By symmetry, Σ_i β_i x^d γ_i ∈ L_3(A_3). Thus, our nontrivial linear combination of elements from the statement is in L_3(A_3). This would mean that these elements were linearly dependent in B_2(A_3), which is impossible, because they are a part of a basis of B_2(A_3).

Again, we will eventually prove the existence of an isomorphism between the spaces for which we can for now only say that they have the same Hilbert–Poincaré series:

Corollary 4.2. The Hilbert–Poincaré series for B_2(A_3/⟨x^d + y^d⟩) and Ω^1_{x^d+y^d=0}/dΩ^0_{x^d+y^d=0} coincide and are given by
(3t^2 − t^3 − 3t^{d+1} + t^{2d})/(1 − t)^3.

Proof. We can encode the size of the basis from Theorem 4.1 in a series. If we do so similarly for the other families of basis elements, we obtain a collection of expressions; summing these, we conclude that
h(B_2(A_3/⟨x^d + y^d⟩), t) = (3t^2 − t^3 − 3t^{d+1} + t^{2d})/(1 − t)^3.
On the other hand, Ω^0_{x^d+y^d=0} = C[x, y, z]/⟨x^d + y^d⟩ is a graded ring with three generators of degree one and a relation of degree d, so
h(Ω^0_{x^d+y^d=0}, t) = (1 − t^d)/(1 − t)^3.
The kernel of the differential map d again consists of just constants. Hence h(dΩ^0_{x^d+y^d=0}, t) = (1 − t^d)/(1 − t)^3 − 1. Next, Ω^1_{x^d+y^d=0} is a module over Ω^0_{x^d+y^d=0} with three generators dx, dy, and dz of degree one and one relation x^{d−1} dx + y^{d−1} dy of degree d, so
h(Ω^1_{x^d+y^d=0}, t) = (3t − t^d)(1 − t^d)/(1 − t)^3.
Hence
h(Ω^1_{x^d+y^d=0}/dΩ^0_{x^d+y^d=0}, t) = (3t − t^d − 1)(1 − t^d)/(1 − t)^3 + 1 = (3t^2 − t^3 − 3t^{d+1} + t^{2d})/(1 − t)^3.
So, the series coincide.

5. A Connection to the Smooth Case

We want to prove that for n = 2, 3 and generic homogeneous P there exists an analogue of the FS isomorphism, i.e. a map B_2(A_n/⟨P⟩) → Ω^1_{P=0}/dΩ^0_{P=0}. If we replace P by P − 1, the resulting algebra has smooth abelianization. If P is generic, then the algebra A_n/⟨P − 1⟩ satisfies the conditions of the appendix of [DKM], and there is an isomorphism φ : B_2(A_n/⟨P − 1⟩) → Ω^1_{P=1}/dΩ^0_{P=1}. We want to show that the associated graded map of this isomorphism is the isomorphism we want. To do this, we need to establish, for generic P, a relationship between the structures we get in the smooth case P − 1 = 0 and in the graded case P = 0.

Lemma 5.1. For generic homogeneous P, gr B_2(A_n/⟨P − 1⟩) ≅ B_2(A_n/⟨P⟩).

Proof. STEP 1: gr(A_n/⟨P − 1⟩) ≅ A_n/⟨P⟩.
There is an obvious surjection A_n/⟨P⟩ → gr(A_n/⟨P − 1⟩). For generic P the algebra A_n/⟨P⟩ is a noncommutative complete intersection (NCCI) algebra as defined in [EG], Theorem 3.1.1. This can be seen for example from [EG], Theorem 3.2.4, as for a generic P condition 2) from that theorem is satisfied.
So, by Theorem 3.2.4(5) of [EG], its noncommutative Koszul complex is acyclic in higher degrees, with the homology at 0 being A_n/⟨P⟩. If we build the analogous complex for the filtered algebra A_n/⟨P − 1⟩, its associated graded complex will be the noncommutative Koszul complex of A_n/⟨P⟩. So, the complex of the filtered algebra is also acyclic in higher degrees, with zero degree homology A_n/⟨P − 1⟩. Therefore, gr(A_n/⟨P − 1⟩) ≅ A_n/⟨P⟩.

STEP 2: L_2(A_n/⟨P⟩) ≅ gr L_2(A_n/⟨P − 1⟩).
For B an algebra with an ascending filtration by nonnegative integers (B_0 ⊂ B_1 ⊂ …), and X, Y subspaces of B, there exists an injection [gr X, gr Y] ↪ gr[X, Y]. If P is generic, then by STEP 1, gr(A_n/⟨P − 1⟩) ≅ A_n/⟨P⟩. So, for X = Y = A_n/⟨P − 1⟩, we have:
L_2(A_n/⟨P⟩) = [A_n/⟨P⟩, A_n/⟨P⟩] = [gr(A_n/⟨P − 1⟩), gr(A_n/⟨P − 1⟩)] ↪ gr[A_n/⟨P − 1⟩, A_n/⟨P − 1⟩] = gr L_2(A_n/⟨P − 1⟩).
For generic P the algebra A_n/⟨P⟩ is an asymptotic representation complete intersection (asymptotic RCI), as in [EG], Definition 2.4.8. Then one can conclude from Theorem 3.7.7 in [EG] that the Hilbert–Poincaré series of B_1(A_n/⟨P⟩) and gr B_1(A_n/⟨P − 1⟩) coincide. This, together with STEP 1 and the existence of the injection from the beginning of STEP 2, implies that the injection is in fact an isomorphism.

STEP 3: An analogous argument shows that L_3(A_n/⟨P⟩) ≅ gr L_3(A_n/⟨P − 1⟩).

Steps 2 and 3 now imply the statement.

Remark 5.2. (1) It is useful to note that the isomorphism gr(A_n/⟨P − 1⟩) ≅ A_n/⟨P⟩ from STEP 1 of the proof of Lemma 5.1 does not hold for every P. For example, consider A_2/⟨xyx⟩. Then A_2/⟨xyx − 1⟩ is commutative because xy = xyxyx = yx, so A_2/⟨xyx − 1⟩ = C[x, y]/⟨x^2 y − 1⟩ is an algebra of polynomial functions on the curve x^2 y = 1, and has linear growth in its graded components. The algebra A_2/⟨xyx⟩, on the other hand, has exponential growth in its graded components, as for any sequence of integers m_1, m_2, …, m_k, with all m_i ≥ 2, the elements x y^{m_1} x y^{m_2} ⋯ x y^{m_k} are linearly independent. Hence, the map from STEP 1 is surjective, but not injective.
(2) A sufficient condition for the statement of Lemma 5.1 is for P to be such that A_n/⟨P⟩ is an asymptotic RCI. An inspection of the proof shows that the only things we used about A_n/⟨P⟩ are that it is an asymptotic RCI and an NCCI, which follows from being an asymptotic RCI. For a more detailed discussion, see [EG].

Theorem 5.3. For generic homogeneous P and n = 2, 3, there is an isomorphism φ : B_2(A_n/⟨P − 1⟩) → Ω^1_{P=1}/dΩ^0_{P=1}.

The requirement that P is generic can be made more precise by requiring that P satisfies:
(1) A_n/⟨P⟩ is an asymptotic RCI,
(2) C[x_1, …, x_n]/⟨P − 1⟩ is smooth,
(3) P, viewed as an element of (A_n)_ab, has no repeated factors.

Proof. Proposition 7.8 and Theorem 7.2 of [DKM] in our setting state that if condition (2) is satisfied, then Ω^odd_{P=1}/dΩ^even_{P=1} ↠ B_2(A_n/⟨P − 1⟩), and the kernel of the map is zero if a certain homology (see [EG]), HC_1((A_n/⟨P − 1⟩)_ab), is zero. Condition (1) and Theorems 3.7.1 and 3.7.7 in [EG] guarantee that HC_1((A_n/⟨P⟩)_ab) = 0. The complex that calculates HC_•((A_n/⟨P − 1⟩)_ab) is filtered, and its associated graded complex is the one calculating HC_•((A_n/⟨P⟩)_ab); so HC_1((A_n/⟨P⟩)_ab) = 0 implies HC_1((A_n/⟨P − 1⟩)_ab) = 0.

Lemma 5.4. Suppose P has no repeated factors. Then gr Ω^0_{P=1} ≅ Ω^0_{P=0} and gr Ω^1_{P=1} ≅ Ω^1_{P=0}, and under these isomorphisms gr d coincides with d.

Proof. For any graded commutative algebra A, any filtered A-module M and any submodule I of M, we can consider the associated graded modules gr M, gr I, gr(M/I). Denote the m-th filtered piece of a filtered module N by N_m. First, let A = C[x_1, …, x_n], M = A, I = A(P − 1). The isomorphism ψ : AP → gr A(P − 1) sends aP, a ∈ A[m], to aP in the m-th graded piece of gr A(P − 1). It is well defined, as aP maps to the image of the element a(P − 1) in the graded module gr A(P − 1). For the same reason it is surjective.
If aP, a ∈ A[m], maps to 0 in gr A(P − 1), then aP = 0 ∈ A(P − 1)_m/A(P − 1)_{m−1}, so there exists b ∈ A such that b(P − 1) ∈ A(P − 1)_{m−1} and aP + bP − b = 0 ∈ A(P − 1)_m. However, this means (looking at degrees) that aP = 0. So,
gr Ω^0_{P=1} = gr(M/I) ≅ M/gr I = C[x_1, …, x_n]/⟨P⟩ = Ω^0_{P=0}.

Next, do the same for A = C[x_1, …, x_n], M = ⊕_i A dx_i, I = ⟨dP, P − 1⟩. We claim that if P has no double factors, then gr⟨dP, P − 1⟩ = ⟨dP, P⟩. There exists a map as above, ψ′′ : ⟨dP, P⟩ → gr⟨dP, P − 1⟩. If P has degree d, so does dP, and for any a ∈ A[m − d], b ∈ M[m − d], the element a dP + Pb cannot be 0 in gr⟨dP, P − 1⟩ unless it is 0 in ⟨dP, P⟩. So, the map is injective. To prove surjectivity, let a ∈ A, b ∈ M, and consider a dP + b(P − 1). We want to show that the top degree part of this element is indeed in ⟨dP, P⟩. If deg a ≠ deg b, this is true. Now assume deg a = deg b = m, and denote the m-th degree parts of them by a_m, b_m. We will proceed by induction on m. The image of a dP + b(P − 1) in the graded module is either a_m dP + P b_m, which is in ⟨dP, P⟩, or a_m dP + P b_m = 0 and the image is in a lower degree than m + d. If a_m dP + P b_m = 0 then, using that P and dP have no common factors, there exists f ∈ A such that a_m = Pf, b_m = −f dP. So we can write a = Pf + ā, b = −f dP + b̄, with deg ā, deg b̄ < m, and then
a dP + b(P − 1) = Pf dP + ā dP − fP dP + f dP + b̄(P − 1) = (ā + f) dP + b̄(P − 1).
As both ā + f and b̄ have lower degrees than m, we can conclude that the top degree part of (ā + f) dP + b̄(P − 1) is in ⟨dP, P⟩ by the previous argument or by the induction assumption. So,
gr Ω^1_{P=1} ≅ ⊕_i C[x_1, …, x_n] dx_i / ⟨dP, P⟩ = Ω^1_{P=0}.
The maps d : Ω^0_{P=0} → Ω^1_{P=0} and gr d : gr Ω^0_{P=1} → gr Ω^1_{P=1} coincide on the generators, so they are the same.

6. Main Result

The following lemma will allow us to connect our Hilbert–Poincaré series computations to the results from Section 5.

Lemma 6.1. P = x^d + y^d is generic enough in A_2 and in A_3 to satisfy the claims of Lemma 5.1, Theorem 5.3 and Lemma 5.4; in other words, for n = 2, 3, the algebra A_n/⟨P⟩ is an asymptotic RCI, (A_n/⟨P − 1⟩)_ab is smooth, and P ∈ (A_n)_ab does not have repeated factors.

Proof. First we demonstrate that A_n/⟨P⟩ satisfies the sufficient condition for being an asymptotic RCI given by Theorem 5.4.1 in [EG]. A change of variables a = y − ξx, where ξ^d = −1, makes x^d + y^d into a homogeneous polynomial of degree d in x and a. We can impose an ordering on monomials in x and a by M > M′ if deg_x M > deg_x M′, or deg_x M = deg_x M′ and the sum of the positions of x, counting from the left, is smaller for M than for M′. (For example, x^2 a^2 > x a^3, and x^2 a^2 > x a x a.) In this ordering, the leading monomial of x^d + (a + ξx)^d is not x^d, which appears with coefficient 1 + ξ^d = 0, but x^{d−1} a, which appears with the nonzero coefficient ξ^{d−1}. So the leading monomial of P satisfies the conditions of Theorem 5.4.1 from [EG] (it is "non-overlapping", meaning there does not exist a nontrivial sub-word w that the monomial both begins and ends with). That implies that the quotient of A_n by this leading monomial is an asymptotic RCI. But the quotient of A_n by a leading monomial can be considered as an associated graded algebra of the quotient of A_n by the entire polynomial, where the filtration of A_n is given by the above ordering on monomials. However, if an associated graded algebra is an asymptotic RCI, then so is the original algebra A_n/⟨P⟩.

Smoothness of (A_n/⟨P − 1⟩)_ab = C[x_1, …, x_n]/⟨x^d + y^d − 1⟩ holds because the partial derivatives dx^{d−1} and dy^{d−1} vanish simultaneously only where x = y = 0, and no such point lies on the variety x^d + y^d = 1. Finally, x^d + y^d = ∏_{ξ^d = −1} (y − ξx) is a product of d pairwise distinct linear factors, so P has no repeated factors in (A_n)_ab.

Theorem 6.2.
For n = 2, 3, and P = x^d + y^d, the associated graded map of the FS isomorphism is an isomorphism grφ : B_2(A_n/⟨P⟩) → Ω^1_{P=0}/dΩ^0_{P=0}.

Proof. Lemma 5.1, Theorem 5.3, Lemma 5.4 and Lemma 6.1 give us a series of graded morphisms
B_2(A_n/⟨P⟩) → gr B_2(A_n/⟨P − 1⟩) → gr(Ω^1_{P=1}/dΩ^0_{P=1}) → Ω^1_{P=0}/dΩ^0_{P=0}.
The first and the last map are natural, and the middle map is grφ. Because the Hilbert–Poincaré series of B_2(A_n/⟨P⟩) and Ω^1_{P=0}/dΩ^0_{P=0} are the same by Corollaries 3.3 and 4.2, the first map is also an isomorphism. Composing all maps, we get the isomorphism Ω^1_{P=0}/dΩ^0_{P=0} ≅ B_2(A_n/⟨P⟩), which can, with proper identifications, be thought of as grφ.

Theorem 6.3. For n = 2, 3, and generic homogeneous P, the associated graded map of the FS isomorphism φ : B_2(A_n/⟨P − 1⟩) → Ω^1_{P=1}/dΩ^0_{P=1} is an isomorphism grφ : B_2(A_n/⟨P⟩) → Ω^1_{P=0}/dΩ^0_{P=0}.

Proof. As in Theorem 6.2, for a generic P and using Lemma 5.1, Theorem 5.3, and Lemma 5.4, we have the following series of graded morphisms:
B_2(A_n/⟨P⟩) → gr B_2(A_n/⟨P − 1⟩) → gr(Ω^1_{P=1}/dΩ^0_{P=1}) → Ω^1_{P=0}/dΩ^0_{P=0}.
The dimension dim (Ω^1_{P=0}/dΩ^0_{P=0})[l] is the same for all P of the same degree. On the other hand, dim B_2(A_n/⟨P⟩)[l] is going to be the same for generic P, and higher for special P. In this aspect, we can only say that for generic P,
dim B_2(A_n/⟨P⟩)[l] ≤ dim B_2(A_n/⟨x^d + y^d⟩)[l].
However, we have shown in Corollaries 3.3 and 4.2 that
dim B_2(A_n/⟨x^d + y^d⟩)[l] = dim (Ω^1_{x^d+y^d=0}/dΩ^0_{x^d+y^d=0})[l],
so, putting the above relations together, we can conclude that the first map is an isomorphism, and so is the composite map grφ : B_2(A_n/⟨P⟩) → Ω^1_{P=0}/dΩ^0_{P=0}.

We conjecture that the analogous statement holds for n = 4. The proof would be analogous to the n = 2, 3 case, with some changes. First one needs to prove the isomorphism B_2(A_n/⟨P⟩)[l] ≅ (Ω^1_{P=0}/dΩ^0_{P=0})[l] for l ≫ 0 and a specific P. In this case, the polynomial P = x^d + y^d is not generic enough; MAGMA computations of the dimensions of graded components of B_2(A_n/⟨P⟩) show that they are larger than those of Ω^1_{P=0}/dΩ^0_{P=0} at low degrees. The same is true for P = x^d + y^d + z^d. The polynomial P = x^d + y^d + z^d + w^d is, according to MAGMA computations, generic enough, as the beginnings of the Hilbert–Poincaré series for B_2(A_n/⟨P⟩) and Ω^1_{P=0}/dΩ^0_{P=0} match. The Hilbert–Poincaré series for Ω^1_{P=0}/dΩ^0_{P=0} is easily calculated, as in Corollary 3.3 and Corollary 4.2, to be
(4t − t^d − 1)(1 − t^d)/(1 − t)^4 + 1.
We conjecture that the series for B_2(A_4/⟨P⟩) differs from it by a second term. That second term is a polynomial (so it influences only finitely many degrees), and it corresponds to the facts that for n = 4 the analogue of the finite-dimensional space Ω^3/dΩ^2 also appears here, and that the smooth variety corresponding to the abelianization of A_n/⟨P − 1⟩ has cohomology of dimension (d − 1)^4 at degree 3. After this, the generalization to a generic P should be proved as it was in the n = 2, 3 case in Theorem 6.3. For n > 4, a similar statement should hold, but different techniques should be considered for a precise statement and a proof.

Appendix A. Computational Results

Here is a series of tables obtained in the first phase of the project using MAGMA, containing dimensions of the graded pieces of B_i(A). The columns are labeled by l, the degree of the graded component we are interested in, and the rows by the B_i. Each row, in turn, gives the coefficients of the Hilbert–Poincaré series of the appropriate B_i. We used similar tables constructed for B_2(A_n/⟨x^d + y^d⟩), n = 2, 3, to derive conjectures about bases of these spaces. These conjectures became Theorems 3.1 and 4.1.
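To see what these tables encode in the smallest case, here is our worked illustration of Theorem 3.1 and Corollary 3.3 for n = 2, d = 2. The basis of Theorem 3.1 reduces to the single bracket [x, y], so B_2(A_2/⟨x^2 + y^2⟩) is one-dimensional, concentrated in degree 2, with h(B_2, t) = t^2. On the Ω side, the degree-2 part of Ω^1_{x^2+y^2=0} is spanned by x dy, y dx and x dx = −y dy, while dΩ^0 contributes d(x^2) = 2x dx and d(xy) = x dy + y dx; modulo these, y dx ≡ −x dy and x dx ≡ 0, leaving the single class of x dy. So both sides have Hilbert–Poincaré series t^2, as Corollary 3.3 asserts.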
2012-07-17T13:13:50.000Z
2010-04-21T00:00:00.000
{ "year": 2010, "sha1": "79f8d75709637dccc2fce7252f2d9495585246bc", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.jalgebra.2010.08.023", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "cba9cdf6947f55fe7ea5f2ab733d1ee86f1d5859", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
209083968
pes2o/s2orc
v3-fos-license
Soil Property Analysis Process by Multi Sensor Technique : Two significant issues in present-day agriculture are water scarcity and high labour costs. These issues can be addressed using farm-task automation, which enables precision agriculture. Considering the abundance of sunlight in India, this paper discusses the design and development of an IoT Agribot that automates irrigation and enables remote farm monitoring. The Agribot is developed using a PIC microcontroller. While carrying out irrigation, it moves along a predetermined path through a given field and senses soil moisture content and temperature at regular points. At each sensing point, the data acquired from the various sensors is processed locally to decide whether irrigation is needed, and the field is watered accordingly. Further, the Agribot acts as an IoT device and transmits the data gathered from its sensors to a remote server using GPS/GPRS. At the remote server, the raw data is processed using signal-processing operations such as filtering, compression and prediction. The analysed data metrics are then displayed through an interactive interface, as per user request.

I. INTRODUCTION

According to recent statistics, the land used for crop cultivation in India is shrinking at an accelerating rate. Outdated irrigation techniques and the limited availability of water resources are the primary reasons for inconsistent production. Consequently, technological solutions for automating agricultural tasks are the need of the hour. In particular, improved irrigation mechanisms that reduce water wastage are essential, as they enable precision agriculture. Technological solutions for irrigation and agricultural task automation are driven by electric power, and hence such solutions can yield better returns under Indian environmental conditions. The system proposed in this paper is designed around the requirements of a sugarcane crop under Indian climatic conditions.

II. LITERATURE REVIEW

[3] Sanjukumar et al. (2013): A soil-moisture-based irrigation system was developed and successfully implemented together with a flow sensor. [4] Swarup et al. (2013): Smart-sensor-based monitoring systems for agriculture have been used to increase the yield of plants by monitoring environmental conditions (parameters) and thereby providing the necessary information to the users (farmers). [5] The proposed system is mainly developed for the benefit of farmers. [6] Saleem Maleekh et al. (2013): With advances in technology, every part of the world around us and of our lives is becoming digital. [7] Fredlund and Xing (1994) reviewed the various works carried out to derive equations for the soil-water characteristic curve. [8] Measuring the water content of soils is an important task, and many authors have proposed and attempted several innovative, smart and cost-effective approaches. Sun et al. (2008) developed a multi-sensor system consisting of a probe with three sensors for measuring soil water content, mechanical strength and electrical conductivity (EC). [9] Zhao et al. (2009) developed an artificial neural network (ANN) model to predict soil texture (sand, clay and silt content) on the basis of soil attributes obtained from existing coarse-resolution soil maps combined with hydrographic parameters derived from a digital elevation model (DEM) of the Black Brook Watershed (BBW) in northwestern New Brunswick, Canada. [10]
III. SENSORS AND OTHER HARDWARE USED

A. Turbidity Sensor
The TCS3200 colour-recognition sensor is a small module designed around the TCS3200 colour sensor, which converts light intensity to frequency. The TCS3200 can detect and measure a nearly unlimited range of visible colours. It has an array of photodetectors, each with either a red, green or blue filter, or no filter. The filters of each colour are distributed evenly throughout the array to eliminate location bias among the colours. Internal to the device is an oscillator that generates a square-wave output whose frequency is proportional to the intensity of the chosen colour.

B. Humidity Sensor
Humidity is the presence of water in air. The amount of water vapour in air can affect human comfort as well as many manufacturing processes in industry. The presence of water vapour also influences various physical, chemical and biological processes. Humidity measurement in industry is critical because it may affect the business cost of the product and the health and safety of personnel. Hence, humidity sensing is important, especially in control systems for industrial processes and human comfort.

C. pH Sensor
The pH sensor is used in the same way as a conventional pH meter, with the additional advantages of automated data collection, graphing and data analysis. Typical exercises using our pH sensor include acid-base titrations, studies of household acids and bases, monitoring pH change during chemical reactions or in an aquarium as a result of photosynthesis, investigations of acid rain and buffering, and analysis of water quality in streams and lakes.

D. Temperature Sensor
LM35 Precision Centigrade Temperature Sensors: The LM35 series are precision integrated-circuit temperature sensors, whose output voltage is linearly proportional to the Celsius (Centigrade) temperature. The LM35 thus has an advantage over linear temperature sensors calibrated in kelvin, as the user is not required to subtract a large constant voltage from its output to obtain convenient Centigrade scaling. The LM35 does not require any external calibration or trimming to provide typical accuracies of ±1/4 °C at room temperature and ±3/4 °C over a full −55 to +150 °C temperature range. Low cost is assured by trimming and calibration at the wafer level. The LM35's low output impedance, linear output, and precise inherent calibration make interfacing to readout or control circuitry especially easy. It can be used with single power supplies, or with plus and minus supplies.

E. Soil Moisture Sensor
This sensor can be used to measure the moisture of soil: when the soil is short of water, the module output is at a high level; otherwise the output is at a low level. Using this sensor, one can automatically water a flower plant, or any other plant requiring an automatic watering scheme. The module has dual output modes: the digital output is simple to use, while the analog output is more accurate. Soil moisture sensors measure the volumetric water content indirectly, by using some other property of the soil, such as electrical resistance, dielectric constant, or interaction with neutrons, as a proxy for the moisture content.
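As a sketch of how readings from the LM35 and the soil moisture sensor described above might be scaled in the Agribot's software (this example is ours and purely illustrative: it assumes a 10-bit ADC with a 5 V reference, uses the LM35's 10 mV/°C output characteristic, and the moisture threshold is hypothetical):

ADC_BITS = 10         # assumed: 10-bit ADC, counts 0..1023
V_REF = 5.0           # assumed: 5 V ADC reference
DRY_FRACTION = 0.60   # hypothetical trigger level for irrigation

def adc_to_volts(count):
    # convert a raw ADC count to a voltage
    return count * V_REF / ((1 << ADC_BITS) - 1)

def lm35_celsius(count):
    # the LM35 outputs 10 mV per degree Celsius, so 1 V corresponds to 100 degC
    return adc_to_volts(count) * 100.0

def soil_needs_water(count):
    # analog soil-moisture channel; polarity depends on the module wiring,
    # so check the sensor's datasheet before relying on this comparison
    return adc_to_volts(count) / V_REF < DRY_FRACTION

print(lm35_celsius(62))        # ~30.3 degC for a raw reading of 62
print(soil_needs_water(410))   # True: 410/1023 ~ 0.40 < 0.60

On the actual PIC, the same arithmetic would sit in the embedded C firmware described in Section IV below; Python is used here only to keep the example self-contained.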
H. Transformer
It is a general-purpose chassis-mounting mains transformer. The transformer has a 240 V primary winding and a centre-tapped secondary winding, with flying coloured insulated connecting leads (approx. 100 mm long). The transformer acts as a step-down transformer, reducing AC 240 V to AC 12 V, and powers supplies for all kinds of project and circuit boards, stepping down 230 V AC to 12 V with a maximum current of 500 mA. In AC circuits, AC voltage, current and waveform can be changed with the help of transformers, so the transformer plays an important role in electronic equipment; the AC and DC voltages in power-supply equipment are almost always obtained through the transformer's conversion and rectification.

I. LM7805
This series of fixed-voltage integrated-circuit voltage regulators is designed for a wide range of applications. These applications include on-card regulation for the elimination of noise and distribution problems associated with single-point regulation. Each of these regulators can deliver up to 1.5 A of output current. The internal current-limiting and thermal-shutdown features of these regulators essentially make them immune to overload. In addition to use as fixed-voltage regulators, these devices can be used with external components to obtain adjustable output voltages and currents, and can also be used as the power-pass element in precision regulators.
1) Features: output current up to 1 A; output voltage of 5 V; thermal overload protection; short-circuit protection; output-transistor safe-operating-area protection.

IV. SOFTWARE USED

A. MPLAB
MPLAB is a proprietary freeware integrated development environment for the development of embedded applications on PIC and dsPIC microcontrollers, developed by Microchip Technology. MPLAB and MPLAB X support project management, code editing, debugging and programming of Microchip 8-bit PIC and AVR (including ATMEGA) microcontrollers, 16-bit PIC24 and dsPIC microcontrollers, as well as 32-bit SAM (ARM) and PIC32 (MIPS) microcontrollers. MPLAB is designed to work with MPLAB-certified devices such as the MPLAB ICD 3 and MPLAB REAL ICE, for programming and debugging PIC microcontrollers using a PC. PICkit programmers are also supported by MPLAB.

B. Embedded C
Embedded C is a set of language extensions for the C programming language by the C Standards Committee to address commonality issues that exist between C extensions for different embedded systems. Historically, embedded C programming required nonstandard extensions to the C language in order to support exotic features such as fixed-point arithmetic, multiple distinct memory banks, and basic I/O operations. In 2008, the C Standards Committee extended the C language to address these issues by providing a common standard for all implementations to adhere to. It includes a number of features not available in ordinary C, such as fixed-point arithmetic, named address spaces and basic I/O hardware addressing. Embedded C uses most of the syntax and semantics of standard C, e.g., the main() function, variable definitions, data-type declarations, conditional statements (if, switch-case), loops (while, for), functions, arrays and strings, structures and unions, bit operations, macros, and so on.

V. CIRCUIT DESCRIPTION AND POWER SUPPLY

A power supply (sometimes known as a power supply unit or PSU) is a device or system that supplies electrical or other types of energy to an output load or group of loads. The term is most commonly applied to electrical energy supplies, less often to mechanical ones, and rarely to others.

Circuit description: This circuit is a small +5 V power supply, which is useful when experimenting with digital electronics. Small inexpensive wall transformers with variable output voltage are available from any electronics shop or supermarket. Those transformers are easily available, but usually their voltage regulation is very poor, which makes them not really usable for digital-circuit experimenters unless better regulation can be achieved in some way. The following circuit is the answer to the problem. This circuit can give a +5 V output at around 150 mA current, but the output can be increased to 1 A when good cooling is added to the 7805 regulator chip. The circuit has overload and thermal protection. The capacitors must have a voltage rating high enough to safely handle the input voltage fed to the circuit. The components used are a 7805 regulator IC, a 100 µF electrolytic capacitor (at least 25 V voltage rating), a 10 µF electrolytic capacitor (at least 6 V voltage rating), and a 100 nF ceramic or polyester capacitor.
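A quick arithmetic check of the headroom in this supply (our sketch; it assumes a bridge rectifier with about 1.4 V of total diode drop and a typical 7805 dropout of about 2 V, neither of which is spelled out in the circuit description above):

import math

V_SECONDARY_RMS = 12.0    # transformer secondary voltage (AC, RMS)
V_BRIDGE_DROP = 1.4       # assumed: two conducting diodes in a bridge rectifier
V_OUT = 5.0               # regulated output of the 7805
V_DROPOUT = 2.0           # typical 7805 dropout voltage

v_peak = V_SECONDARY_RMS * math.sqrt(2)    # ~17.0 V peak
v_dc = v_peak - V_BRIDGE_DROP              # ~15.6 V unregulated DC (ignoring ripple)
headroom = v_dc - (V_OUT + V_DROPOUT)      # margin before the regulator drops out

print(round(v_peak, 1), round(v_dc, 1), round(headroom, 1))   # 17.0 15.6 8.6

The generous margin is dissipated as heat in the 7805, which is why the text notes that drawing as much as 1 A requires good cooling on the regulator.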
VI. RESULT

Following is a
2019-11-14T17:12:49.788Z
2019-10-31T00:00:00.000
{ "year": 2019, "sha1": "6f6575f0123f133647fdefe89421c479b39205e5", "oa_license": null, "oa_url": "https://doi.org/10.22214/ijraset.2019.10132", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a64415db776a01abeb5501af9229c040692a151c", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Computer Science" ] }
226096841
pes2o/s2orc
v3-fos-license
Factors Affecting Customer Satisfaction: A Case Study of Grab in Vietnam : Grab has recently become one of the biggest companies applying technology to transportation. One of its key challenges is how to manage service quality to achieve customer satisfaction. This study aimed to evaluate the factors affecting customer satisfaction in order to help Grab gain more profit in Vietnam. The dimensions used in this study were determined to be Reliability, Information, Responsiveness, Dignity, Tangibles and Price. The research conducted a survey with the participation of 213 respondents. The data was analyzed using descriptive statistics, factor analysis and regression. The empirical results showed that Information significantly affects customer satisfaction, whereas Price and Tangibles have only a slight impact on customer satisfaction. Finally, this study recommends some solutions which Grab can apply to improve its service.

Customer satisfaction is defined as customers' evaluations of a product or service with regard to their needs and expectations. A theory called "Expectation-Confirmation", developed by Oliver (1980), focused on customer satisfaction and product quality. The theory considers customer satisfaction through two sub-processes: the expectation formed before purchasing a product, and the feeling about the product after using it. The issue of customer satisfaction is very important for consumer researchers, and for organizations it is key to retaining and maintaining their competitive advantages in the present competitive scenario. A study by Henard (2001) identified some well-established determinants of customer satisfaction, such as expectations, disconfirmation of expectations, performance, affect and equity. Customers are said to be satisfied when the actual performance outcome exceeds expectation (positive disconfirmation), and dissatisfied when expectation exceeds the performance outcome (negative disconfirmation) (Goode, 2005). A study by Liu (2008) identified several criteria which can measure customer satisfaction: satisfaction, gratification, credibility, novelty and surprise. In addition, according to Cavana and Corbett (2007), service frequency, reliability, convenience and responsiveness are service quality variables that are considered important for customer satisfaction. According to Olsen (1998), quality is consistently doing the right thing, and measuring consumer perception of service quality is a complex process. Emmanuel Nondzor and Solomon Tawiah (2015) proved that service frequency, convenience, affordability and reliability had a positive and significant impact on customer satisfaction. Nattapong Techarattanased showed in his study that tangibles, responsiveness, dignity and assurance highly impact customer satisfaction. Dachyar and Rusydina (2015) concluded that brand image has a strong impact on customer satisfaction. Ho Khanh Ngoc Bich (2015) built an evaluation model with seven criteria: tangibles, capacity, reliability, empathy, responsiveness and responsibility. A study by Mai Ngoc Khuong and Ngo Quang Dai (2016) listed seven factors that have a significant impact on customer satisfaction: reliability, comfort, information, responsiveness, dignity, tangibles and price. The most well-known scale of measurement for service quality is SERVQUAL, developed by Parasuraman (1988). That study proposed ten dimensions of quality: reliability, responsiveness, competence, access, courtesy, communication, credibility, security, understanding the consumer, and tangibles.
Through an iterative process with alpha coefficients, among other tools, these dimensions were refined during the SERVQUAL development period. Gronroos (2004) argued that there are seven factors shaping perceived service quality: skills and professionalism, behavior and attitude of employees, flexibility and convenience, trustworthiness and reliability, recovery of services, scope of service, and credibility and reputation. Service comprises the behaviors, processes, and ways of doing something that create value for customers in order to satisfy customers' needs and expectations (Zeithaml & Britner, 2000). According to Philip Kotler (2012), a service is an activity or benefit provided for exchange, mainly intangible, that does not lead to a transfer of ownership; the performance of services may or may not be associated with physical products. Parasuraman (1985) argued that service quality is the difference between the expectations of customers and the service that customers perceive. According to Parasuraman, customers' expectations are what the company should do, not what it will do. According to Hurbert (1995), before using a service, customers form a "scenario" of that service. If the customer's and the supplier's scenarios do not match, then the customer will not be satisfied. A service will be considered outstanding if perception exceeds expectations; it will be considered good or adequate if it merely equals expectations; and it will be classified as bad, poor or inadequate if it does not meet them (Vázquez et al., 2001). Oliver (1993) pointed out that customer satisfaction is a general measure of the experience received relative to expectations in using the service, whereas service quality assesses only a specific part of how the service is performed. Curry and Sinclair (2002) noted that providing service quality that meets customer expectations leads to customer satisfaction, and vice versa: if the service quality is lower than expected, the customer will not be satisfied. Cronin et al. (1992) argued that service quality is the driving force of customer satisfaction, and that satisfaction affects loyalty to a business. Customer satisfaction is an estimated result of executed marketing activities; a firm can achieve success by offering quality products and services in a highly competitive business environment. According to Fornell (1992), satisfaction depends on the overall purchase and use of the target service and the product's performance. Oliver (1997, 1999) argued that customer satisfaction is the enjoyable fulfilment which customers experience in consumption. Spath and Fahnrich (2007) found that customer satisfaction can also be measured across the life cycle of the customer relationship, which contains different phases, with attention to the definite goals and expectations of the customer in each phase. From the review of the above studies, this study focuses on determining the impact of these factors on customer satisfaction when using the Grab service in Hanoi, Vietnam. We pay attention to the impact of reliability, information, responsiveness, dignity, tangibles and price on customer satisfaction.

Figure 1: Diagram of Factors Affecting Customer Satisfaction

Based on the dependent and independent factors above, this study proposes two hypotheses:
- Hypothesis 1: Reliability, Information, Responsiveness, Dignity, Tangibles and Price have a significant impact on customer satisfaction.
- Hypothesis 2: The effects of Reliability, Information, Responsiveness, Dignity, Tangibles and Price are mediated by customer satisfaction.

Data Collection and Method

The study period selected was 2019, when Grab had taken over all of Uber's operations in Southeast Asia. At that time a massive number of consumers in Vietnam were using Grab, which made it easier to conduct the survey. This research paid close attention to customer satisfaction. A quantitative data collection method was used for this study, relying on numbers, mathematics and statistics in order to measure the research data precisely, with the goal of accepting or rejecting the research hypotheses and answering the research questions. Most of the questions in the survey were designed on a five-point Likert scale: respondents rated the items on a five-point scale, on which 1 and 5 indicated "strongly disagree" and "strongly agree" respectively. The survey questionnaires were handed directly to people who had experience of using Grab services in Hanoi, Vietnam. Participants were both male and female. For greater consistency, only people who had used the Grab service within the previous 5 months were nominated. This research applied the convenience sampling method, with 213 respondents asked to fill in the questionnaire. The questionnaires were mainly distributed in public places such as supermarkets, offices, parks and schools. There were 59% female and 41% male respondents. The primary objective of this study is to investigate the effect of the factors on customer satisfaction and to measure the level of their impacts. To achieve the goals set out, the authors used quantitative methods and built regression models to show the relationship between the factors and customer satisfaction, in which customer satisfaction is the dependent variable and reliability, information, responsiveness, dignity, tangibles and price are the independent variables, using the software SPSS 20.0.

Factor Analysis and Reliability

Exploratory factor analyses (EFA) were conducted on two groups of variables: the 16 items of the independent variables and the 3 items of the dependent variable. There are many factor extraction methods; the one used in this study is the principal components method with orthogonal (varimax) rotation. Moreover, descriptive statistics were used to illustrate the demographic data and other variables, and multiple regressions were applied to determine the effects of the independent variables on the dependent variable. The results of the EFAs showed that the Kaiser-Meyer-Olkin measure of sampling adequacy was 0.659 for the independent variables; according to Tabachnick and Fidell (1996), the data were therefore suitable for analysis. Bartlett's test of sphericity was significant (p = 0.000 < 0.05), which means the factor analysis was appropriate. The study found that all extracted components have eigenvalues greater than 1. According to Hoang Trong and Chu Nguyen Mong Ngoc (2008), the eigenvalue represents the amount of variability explained by a factor; factors with eigenvalues less than 1 are excluded from the analysis model because they summarize information no better than an original variable. The model summary table shows that the R-squared value of the factors on customer satisfaction (CUSA) is 0.586, which means the six factors can explain 58.6% of the variation in customer satisfaction (CUSA). The Cronbach's alpha coefficients ranged from 0.644 to 0.818, confirming the consistency of these variables.
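The regression step was run in SPSS, but the same model is easy to sketch in Python. In the snippet below the file name and column names are our assumptions, mirroring the abbreviations used in the results that follow (RE, IN, RS, DI, TA, PR for the six factors and CUSA for customer satisfaction):

import pandas as pd
import statsmodels.formula.api as smf

# one row per respondent; columns hold the factor scores from the EFA step
df = pd.read_csv("grab_survey.csv")   # hypothetical file name

model = smf.ols("CUSA ~ RE + IN + RS + DI + TA + PR", data=df).fit()
print(model.rsquared)   # should be close to the reported R-squared of 0.586
print(model.pvalues)    # each factor is reported as significant (Sig. < 0.05)
# standardized betas (e.g. 0.426 for IN) require z-scoring the columns first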
From the standardized regression coefficients (beta), the most powerful factor affecting CUSA is the level of IN: when IN increases by 1%, CUSA increases by 0.426%. Next are the factors RE and RS: when RE increases by 1%, CUSA increases by 0.417%. This is quite true in reality: when Grab provides the service as committed or promised to customers, it makes customers feel comfortable and secure when using the service. In addition, when RS improves by 1%, customer satisfaction also increases by 0.287%. It can be seen that responding quickly to customers when they have a complaint or want to know something can make customers more satisfied with the service. Meanwhile, factors such as TA and PR have little impact on customer satisfaction. Grab is a fairly reputable, quality-controlled system, meaning that its drivers and the quality of its vehicles are carefully vetted, and the information provided to the customer in advance is already reassuring about the driver and the vehicle. Likewise, price is a factor with rather limited impact: in fact, the fare for Grab cars is quite cheap compared with motorbike taxis or regular taxis, and the up-front fare information means customers know the price in advance. All of these factors are statistically significant, as evidenced by the fact that they all have Sig. < 0.05. From the ANOVA results (model sum of squares), we can see that the research model is highly relevant for explaining the behavior of the dependent variable.

Discussion and Conclusion

With the results of the regression analysis, we can see that information and reliability are the two factors that have the strongest influence on the dependent variable, with beta coefficients of 0.426 and 0.417 respectively. Meanwhile, the other factors of dignity, responsiveness, tangibles and price have a weaker effect on customer satisfaction when using Grab. This can be explained by the fact that, on these factors, both Grab and traditional taxi companies have responded quite well and quite similarly. Given the characteristics of a service industry such as taxis, the quality of service depends very much on the service provider, i.e., the driver. Drivers in traditional taxi companies are carefully selected, so the dignity of these people is always assured. With Grab, the dignity of employees is likewise good, and there has been no notable misconduct involving Grab drivers. Customers can send recommendations, requests and feedback through the switchboard of each taxi company. Thanks to the disclosure of
2020-07-23T09:07:17.379Z
2020-02-29T00:00:00.000
{ "year": 2020, "sha1": "beaa8b6ad8a6733f83f91c25ff9aa0ffd00195c5", "oa_license": null, "oa_url": "http://www.internationaljournalcorner.com/index.php/theijbm/article/download/151124/105365", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "1a260845039f4095d1adc05a01dc7711b54dad77", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
266279146
pes2o/s2orc
v3-fos-license
Individual and Combined Effects of Predatory Bug Engytatus nicotianae and Trichoderma atroviride in Suppressing the Tomato Potato Psyllid Bactericera cockerelli in Greenhouse Grown Tomatoes : The tomato potato psyllid (TPP) Bactericera cockerelli is a serious pest of the Solanaceae family. The management of this pest using synthetic pesticides is problematic because of the development of pesticide resistance and environmental concerns, including impacts on non-target organisms. The predatory bug Engytatus nicotianae has recently been identified as a useful biocontrol agent for TPP in greenhouses. The soil fungus Trichoderma Pers. is commonly used as a plant growth enhancer and biocontrol agent against phytopathogenic fungi. Therefore, there could be advantages associated with the combined use of these biocontrol agents. Some reports in other systems suggest that Trichoderma inoculation may alter the behaviour of pests and their natural enemies by modifying plant defence metabolites such as volatile organic compounds (VOCs). For this reason, this study aimed to investigate the individual and combined efficacy of these biocontrol agents (i.e., Trichoderma atroviride and E. nicotianae) against TPP in greenhouse grown tomatoes (Solanum lycopersicum cv. Merlice). To this end, we compared the effect of each biocontrol agent and their combination on TPP abundance across different developmental stages (egg, nymphs, adults) and the number of infested leaves. We also investigated plant VOC emissions under the different treatments. Across all measured TPP stages, the treatments tested (E. nicotianae alone, T. atroviride alone, and T. atroviride + E. nicotianae) significantly reduced mean TPP counts relative to the control, and no significant differences were observed in VOC emissions among treatments. Overall, T. atroviride alone was less effective than E. nicotianae alone or in combination.

Introduction

The tomato potato psyllid (TPP) Bactericera cockerelli (Šulc) (Hemiptera: Triozidae) is a widely recognised pest of several solanaceous crops [1-4]. All TPP nymphal stages and adults can damage host plants by injecting salivary toxins that lead to foliar symptoms, such as leaf curling and yellowing. This condition was designated by Munyaneza [5] as "psyllid yellows". Moreover, TPP is also a vector of the bacterial pathogen Candidatus Liberibacter solanacearum (CLso), which is responsible for zebra chip disease in potatoes [6,7]. CLso can cause the decline and death of infected plants [5], reducing yields and costing growers millions of dollars each year [8,9].

The predatory bug Engytatus nicotianae (Koningsberger) (Hemiptera: Miridae) has recently shown potential as a biological control agent, to the extent that it can be used to prevent the establishment of TPP populations in caged greenhouse tomato plants [26-29]. However, it was found that such protection does not always offset the potential physiological damage resulting from even limited TPP feeding [28].
Trichoderma species are soil-borne fungi commonly used as biocontrol agents against plant pathogens and as plant growth enhancers [30-32]. Different isolates of Trichoderma exhibit various mechanisms for their antagonistic effect. These include mycoparasitism, competition with pathogens (including for nutrients and niches), antibiosis through fungal volatile and non-volatile compounds, enzyme activity, and changes in plant secondary metabolites with different bioactivities [30,33-36]. Some Trichoderma isolates have also been reported to modify the behaviour of phytophagous insects and their natural enemies through the activation of plant-defence pathways, e.g., by altering the emission of plant volatile organic compounds (VOCs) involved in host-finding and selection [37-40]. This suggests that Trichoderma may confer additional protection against insect pests.

Given the advantages associated with the individual use of E. nicotianae and Trichoderma, it would be of interest to explore the potential advantages of their combined use against TPP. However, the effect of Trichoderma on plant defence can be variable, depending on the host plant, pest organism, biocontrol agent, and biotic and abiotic factors such as temperature and soil nutrients [37,40]. Likewise, VOC emission can be quite system-specific and influenced by biotic and abiotic factors [41,42]. Therefore, it is important to explore the effect of different isolates for specific plant species (or cultivars), pests, and growth conditions when developing a biocontrol and IPM strategy. To this end, the objective of this study was to explore the individual and combined efficacy of two biocontrol agents, T. atroviride and E. nicotianae, against TPP in greenhouse-grown tomato seedlings, and to explore VOC emissions under the different treatments.

Seed Inoculation with T. atroviride and Plant Growth Conditions

Seeds of Solanum lycopersicum cv. Merlice were purchased from Kings Seeds (Katikati, New Zealand). One hundred of these seeds were then sent to Agrimm Technologies Ltd. (Lincoln, New Zealand) to be commercially coated with an inert carrier containing spores of a four-strain mix of T. atroviride obtained from the Lincoln University Culture Collection (Karst bio-inoculant). These strains had been patented for the biological control of soil-borne plant pathogens and for plant growth promotion [43]. The 100 seeds and a further 100 non-coated seeds were sown in a seedling-raising mix in separate 100-cell propagation trays that were placed in a glasshouse at Lincoln University.

When the plants were 15-20 cm high, a total of 42 plants (21 grown from T. atroviride coated seeds and 21 from non-coated seeds) were randomly selected and each transplanted into a 6 L pot containing the growing medium. The medium was obtained from a 500 L mix that comprised 400 L of composted bark, 100 L of pumice, 2 kg of Osmocote NPK fertiliser (www.growwithosmocote.com, accessed on September 2023), and 500 g of horticultural lime. During the transplanting process for T. atroviride-treated plants, pellets containing the four T. atroviride strains manufactured by Agrimm Technologies Ltd.
were mixed into the growing medium at a rate of 0.3 g per 6 L pot (equivalent to 15 kg/ha). For the duration of the greenhouse experiment, single plants were kept in 60 cm × 60 cm × 180 cm cages (BugDorm 6E630; www.bugdorm.com, accessed on September 2023) and trickle-irrigated daily such that each plant received 0.25 L of water. The mean ambient greenhouse temperature was 21.9 °C (max 38 °C; min 15 °C), and the mean relative humidity (RH) was 61.6% (max 90.5%; min 30%).

Experimental Design

The experimental design comprised a randomised complete block design with the following treatments: TPP-only (henceforth, control), TPP + E. nicotianae, TPP + T. atroviride, and TPP + E. nicotianae + T. atroviride. These were arrayed in seven blocks. For VOC collection purposes only, two additional treatments were added to each block (uninfested plant and T. atroviride-only) to assess the baseline emission of healthy uninfested plants and of plants inoculated with T. atroviride in the absence of TPP. Each block contained one cage with a single potted tomato plant for each of the treatments. The cages were laid out in two parallel rows with a main irrigation pipe down the middle lane. The distance between the cages in each block was 30 cm, with 1 m between the blocks. Once the pots were placed into cages, a thin 1.8 m support stake was inserted into the centre of each pot and a drip irrigation pipe secured at soil level. A light source (16 h light : 8 h dark) was hung above each block so that the conditions for each were uniform during the experiment. After being placed in the cages, plants were left to acclimatise for two weeks.

Infestation of Plants with TPP and Introduction of E. nicotianae

The entomological methodology used in this study was based on that of [28,29], combined with unpublished data obtained from BioForce Limited (a commercial supplier of biological control agents, Karaka, New Zealand; www.bioforce.co.nz, accessed on 2 November 2023). All the TPP used were young adults (5-7 days old) and all E. nicotianae adults belonged to the same cohort (adults were c. 15 days old, nymphs were c. 7 days old).

After a one-day acclimatisation period (7 December 2021), two healthy TPP males and two healthy TPP females, obtained from a rearing cage at Lincoln University, were placed in each designated cage. E. nicotianae adults and nymphs were purchased from BioForce, Karaka, New Zealand. The E. nicotianae were randomly selected from a shipment of 300 individuals; one adult female and two unsexed nymphal E. nicotianae were then released into each of the designated cages. The E. nicotianae nymphs were unsexed because they are cryptic, very active, and hide when exposed; it was therefore impossible to determine their sex without risking injuring the insect. A second release of both TPP and E. nicotianae was made on 16 December 2021, following the same procedure as above.

Weekly Data Collection

The TPP population within each cage was assessed once per week, between 10 am and 4 pm. During this time, the numbers of TPP eggs, nymphs, adults, and TPP-infested leaves were recorded. To assess the numbers of TPP (eggs, nymphs, and adults), a 5 min time limit was adopted [28,29], as the exponential growth of TPP (especially in the TPP-only treatment) made a full census impractical towards the latter part of the experiment.

VOC Sampling

Volatile organic compounds (VOCs) were sampled between 7 and 10 December 2021, using a dynamic push-pull headspace sampling technique as described by Effah et al.
[44,45]. For this experiment, six treatments were used with seven replicates each, as described in the experimental design section. One individual leaf per treatment was enclosed in a multi-purpose 50 cm × 30 cm cooking bag (AWZ Products Inc., China) with both ends fastened using a cable tie. Using a portable PVAS22 pump (Volatile Assay Systems, Rensselaer, NY, USA), carbon-filtered air was pushed into the bags through a PTFE tube (0.9 L/min) and simultaneously pulled out through another tube (0.8 L/min), creating a slight positive pressure to reduce external contaminants.

To collect the VOCs, a volatile collection trap with 30 mg of HayeSep Q adsorbent (Volatile Assay Systems, Rensselaer, NY, USA) was inserted in the pull tube. Collection of the VOCs from each target plant was conducted for two hours under greenhouse conditions. Thereafter, the foliage enclosed in the bags was removed and oven-dried to measure dry weight (grams). The collection filters were subsequently eluted using 200 µL of hexane (95% purity) with 10 ng/µL of nonyl acetate (Sigma Aldrich, Merck KGaA, Darmstadt, Germany) as an internal standard.

The VOC samples were analysed using gas chromatography coupled to mass spectrometry (Shimadzu, Tokyo, Japan) with a 30 m × 250 µm × 0.25 µm TG-5MS column and helium as the carrier gas. Operating conditions were as follows: injector temperature 230 °C; split ratio of 10; initial oven temperature of 50 °C, held for 3 min and then increased to 95 °C at a rate of 5 °C/min. Tentative identification of compounds was achieved by comparing them with target spectra in the MS library from the National Institute of Standards and Technology (NIST) and, when available, verified using authentic standards (Sigma Aldrich).

Statistical Analyses

All statistical analyses were conducted using the Stats package in R statistical software version 4.2.2 [46]. A non-parametric Kruskal-Wallis test, followed by Dunn's post-hoc tests, was conducted to evaluate whether the treatments had significant effects on average TPP population numbers and infested leaves across the study, and to assess differences among treatments. Furthermore, to account for changes in TPP population numbers over time, mixed-effects models were used, where the response variables were assumed to be Poisson-distributed. These Poisson regression models accommodated the 'count' nature of the dependent variables (counts of eggs, nymphs, adults, and number of infested leaves). In these models, the blocked design was accounted for by including the block number as a fixed effect. We included random intercepts for each individual plant to account for the repeated-measures nature of the experimental design. The treatment groups and time (in days) were evaluated as fixed effects, and treatment × time interactions were calculated.
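The fixed-effects part of the count models can be sketched as follows (our illustration in Python rather than R; the data file and column names are hypothetical, and the per-plant random intercepts the authors describe would additionally require a mixed-model routine such as R's glmer):

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# hypothetical layout: one row per plant per weekly census
counts = pd.read_csv("tpp_weekly_counts.csv")

# Poisson regression: treatment, time (days), their interaction, block as fixed effect
glm = smf.glm("n_eggs ~ C(treatment) * day + C(block)",
              data=counts,
              family=sm.families.Poisson()).fit()
print(glm.summary())   # analogous models would be fitted for nymphs, adults, leaves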
For the VOC analyses, the random forest algorithm was used [47]. Random forest is a multivariate statistical tool suited to datasets with more variables than samples and to variables of an autocorrelated nature, such as plant volatiles with common biosynthetic pathways. In this case, n = 100,000 bootstrap samples were drawn, with seven variables randomly selected at each node (the number of variables selected is based on the square root of the total number of variables). The chance of a random sample being improperly classified is expressed as the out-of-bag (OOB) error rate. Lower OOB values indicate that the treatments differ substantially from one another, allowing the algorithm to classify samples correctly. In contrast, high OOB values suggest that there is poor discrimination among treatments, leading to high error in the classification of a sample. It is further possible to identify which of the dependent variables (in this case, individual compounds) contribute to the separation between treatments. The importance of each compound for the distinction is expressed as the mean decrease in accuracy (MDA). However, this indicator is only relevant if adequate classification scores (low OOB values) are achieved.

Average Rates of TPP Suppression

The average rates of TPP suppression per treatment across the experiment are shown in Figure 1. Across all four measured TPP variables, each of the three treatments significantly reduced the mean TPP counts relative to the control. However, T. atroviride alone was significantly less effective than either E. nicotianae or the combined treatment. Moreover, the combined treatment was no better overall than E. nicotianae alone (i.e., T. atroviride did not improve the average performance of the E. nicotianae treatment across the sampling period).

Comparison of TPP Growth Rates

We calculated the growth rates of the TPP stages under all treatments throughout the experiment. The resulting growth curves are plotted in Figure 2. A mixed-effects model based on the Poisson distribution showed significant effects of E. nicotianae, the combined treatment, and time (in days) on the different growth stages of TPP vs. the control (Table S1). However, the interaction effect of treatment × time was variable and was only significant across all measured parameters for E. nicotianae (Table S1). In contrast, the use of T. atroviride alone showed no interaction with time (except for the nymphal stages).

We further explored the effects of the treatments on the daily population growth rates of TPP and the daily percentage of TPP-infested leaves over the duration of the experiment (Table 1). Here, E. nicotianae was found to consistently reduce the number of infested leaves and the daily TPP population growth rates of all developmental stages. T. atroviride alone was found to have had little suppressive effect on the growth rates of TPP eggs and adults and on TPP-infested leaves, although it caused a significant reduction in the population growth rate of TPP nymphs. In contrast, the combined treatment significantly reduced all measured parameters except daily nymph population growth.
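Before turning to the VOC results, the random-forest step described in the Statistical Analyses section could be reproduced along the following lines (a sketch using scikit-learn in place of the authors' R workflow; X, y and compound_names are hypothetical objects, and permutation importance stands in for the mean decrease in accuracy):

from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# X: one row per VOC sample, one column per compound; y: treatment labels
rf = RandomForestClassifier(
    n_estimators=100_000,    # the analysis drew n = 100,000 bootstrap samples
    max_features="sqrt",     # ~7 of the 33 compounds tried at each split
    oob_score=True,
    n_jobs=-1,
)
rf.fit(X, y)
print("OOB error rate:", 1 - rf.oob_score_)   # high values = poor separation

# importance of each compound for the classification
imp = permutation_importance(rf, X, y, n_repeats=30)
top = sorted(zip(imp.importances_mean, compound_names), reverse=True)[:3]
print(top)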
Effect of Treatment on Plant VOC Emissions

Thirty-three compounds were tentatively identified and quantified in the collected samples (Table S2). β-Phellandrene and 2-carene were the most abundant compounds in all samples. Healthy, uninfested plants had, on average, the highest volatile organic compound (VOC) emissions, while plants infested with TPP in the absence of biocontrol agents had the lowest VOC emissions (Figure 3a). However, univariate statistical analysis of the total VOC emissions (ANOVA) showed no significant differences in total VOC emission among treatments (N = 7, F = 0.551, p = 0.737). [Figure 3 caption, in part: top-ranked compounds: δ-elemene (dElem), heptane (Hep) and 6-isopropylidene-1-methylbicyclo[3.1.0]hexane (X6Iso); a full list of compounds with their abbreviations is provided in Table S2.]

When comparing the entire volatile blends, a random forest analysis revealed a very high out-of-bag (OOB) error rate (83.33%), showing poor separation between treatments. The mean decrease in accuracy (MDA) values (Figure 3b) suggested that δ-elemene, heptane and 6-isopropylidene-1-methylbicyclo[3.1.0]hexane could have played a role in the separation between treatments, but individual compound exploration using ANOVA did not yield significant differences among treatments for these compounds.

Discussion

In this study, we explored the independent and combined effects of two biocontrol agents (T. atroviride and E. nicotianae) on suppressing populations of the tomato potato psyllid (TPP). Both biocontrol agents and their combination had a significant effect in reducing TPP populations at different developmental stages (egg, nymph, and adult) and the number of infested leaves when compared to the control. However, the treatments containing the predatory bug were more effective than using T. atroviride alone.

Previous studies have shown the potential of the predatory bug E. nicotianae for controlling TPP under greenhouse conditions [26-29]. However, its use alone may not be enough to manage established populations. Therefore, it was suggested that it could be used in combination with another biocontrol agent to enhance protection against TPP [28], but the simultaneous use of biocontrol agents is not always positive and can result in interference, e.g., [48-51]. In this study, we observed excellent results when the predator was used in the early phases of TPP establishment, and it retained its effect even when a fungal biocontrol agent was applied simultaneously, suggesting both agents can safely be used together to reduce TPP populations. However, there seems to be no added benefit in their simultaneous use to control TPP.

To assess whether there is an economic advantage in using both biocontrol agents, we recommend further studies using a similar experimental design and taking into consideration other response variables such as plant growth and yield. Growth promotion and enhanced pathogen protection have been associated with Trichoderma use in other systems [30-32]. However, they have seldom been explored in a setting where Trichoderma is used alongside a pest insect and its natural enemy, which more closely resembles real crop conditions.

The observed reduction in the numbers of TPP eggs, nymphs, and adults, and the decreased number of TPP-infested leaves when using T.
For example, Trichoderma atroviride strain P1 was tested against two pests with different feeding habits on tomato plants: a leaf-chewing noctuid moth (Spodoptera littoralis) and a phloem-feeding aphid (Macrosiphum euphorbiae). In both cases, Trichoderma inoculation resulted in pest reduction. The authors suggested different mechanisms for the two pests. In the case of the aphid, a direct reduction was associated with the up-regulation of genes involved in the oxidative burst reaction early in the defence response, while the effect on the moth was linked to the enhanced expression of protective enzymes downstream in the defence cascade, e.g., proteinase inhibitors [38]. The authors also reported an indirect effect through increased attraction of the aphid parasitoid Aphidius ervi due to an increase in emission and de novo production of plant VOCs [38].

In this study, we did not observe an increase in foliar VOC emissions with Trichoderma that could be linked to increased attraction of the natural enemy, so we assume that the observed effects on TPP are linked mainly to direct effects on the pest (probably similar to those observed for the aphid M. euphorbiae). These contrasting results are not surprising, as there is evidence that Trichoderma effects on plants are system-specific and depend on abiotic and biotic factors. For instance, a study on tomato using T. afroharzianum T22 and T. atroviride P1 showed differential induction of plant defence responses against M. euphorbiae and S. littoralis, as well as temperature-dependent effects [37]. Furthermore, biotic and abiotic factors can lead to plants producing highly plastic VOC blends [44,45,52].

Interestingly, we observed lower (albeit not significant) VOC emissions in TPP-infested plants (without biocontrol agents). While chewing herbivores often induce volatile emission, this is not always the case with phloem feeders, e.g., [53][54][55]. Some phloem feeders may suppress plant signalling and defence responses through their endosymbionts [56][57][58]. In fact, TPP is known to manipulate plant responses through its associated endosymbiont Candidatus Liberibacter psyllaurous [59]. Therefore, the apparent reduction in VOCs after TPP attack observed here is not surprising, but it requires further investigation.

The impact of Trichoderma on other natural enemies that can provide TPP biocontrol in this system (e.g., the parasitoid Tamarixia triozae) must also be investigated, since natural enemies vary in their sensitivity and attraction to plant VOCs [60][61][62], and the possibility remains that highly sensitive parasitoid antennae may respond to minor blend variations or minor compounds in the VOC blend [63]. The role of previous experience and learning in parasitoid and predator responses to plant VOCs (and other cues) could also be studied further [64][65][66][67].

In general, it is important to note that plants grown under greenhouse conditions, as described in this contribution, are often optimally resourced. They may therefore prioritise growth, reproduction, or other forms of defence over volatile emission at low infestation densities [68][69][70]. To test this, further experiments could be conducted using different herbivore densities/damage levels and varying soil nutrient conditions.

Conclusions

Both biocontrol agents (E. nicotianae and T. atroviride) suppressed TPP populations relative to the control, whether used alone or in combination. E. nicotianae alone and its combination with T. atroviride were significantly more effective in reducing initial TPP numbers than Trichoderma alone, but there was no significant difference between these two treatments.
We found no indication of Trichoderma-induced changes in plant VOC emissions that could potentially lead to increased natural enemy recruitment. Therefore, at least under the conditions described here, there seems to be little advantage in combining E. nicotianae and Trichoderma to suppress TPP in greenhouse tomato crops. However, other advantages of Trichoderma use, such as enhanced resistance to pathogens and growth promotion, were not considered here, and these may add value to the combined use of both agents. Hence, further research considering other aspects of Trichoderma use in this system is needed to support its use alone or in combination with other biocontrol agents.

Table 1. Bactericera cockerelli (TPP) daily population growth rates per developmental stage and daily percentage of TPP-infested leaves per treatment.
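The daily population growth rates reported in Table 1 are typically derived from repeated counts. The sketch below shows one common formulation (an exponential growth rate per day) in Python; the counts, day numbers, and variable names are invented purely to illustrate the arithmetic and do not come from the study.

    import numpy as np

    # Hypothetical nymph counts on consecutive sampling days.
    days = np.array([0, 7, 14, 21, 28])
    nymphs = np.array([5, 18, 52, 120, 210])

    # Per-interval daily growth rate: r = ln(N2/N1) / (t2 - t1)
    r_daily = np.diff(np.log(nymphs)) / np.diff(days)
    print(np.round(r_daily, 3))  # approx. [0.183 0.152 0.119 0.08]

A declining r_daily across intervals, as in this made-up series, would indicate that population growth is slowing even while absolute counts still rise.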
2023-12-16T17:13:45.500Z
2023-12-08T00:00:00.000
{ "year": 2023, "sha1": "6d2972db74736fcb53720017dd2199b5d169ebd8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4395/13/12/3019/pdf?version=1702294751", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "40a6febf8f65d2761158b71c637060bf4862f01e", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
41113045
pes2o/s2orc
v3-fos-license
Quantum Phase Gate Operation Based on Nonlinear Optics: Full Quantum Analysis

We present a full quantum treatment of a five-level atomic system coupled to two quantum and two classical light fields. The two quantum fields undergo a cross-phase modulation induced by electromagnetically induced transparency. The performance of this configuration as a two-qubit quantum phase gate for travelling single photons is examined. A trade-off between the size of the conditional phase shift and the fidelity of the gate is found. Nonetheless, a satisfactory gate performance is still found to be possible in the transient regime, corresponding to a fast gate operation.

Single photons are natural candidates for the implementation of quantum information processing systems [1]. This is due to the photon's robustness against decoherence and the availability of single-qubit operations. However, it is difficult to realize the necessary two-qubit operations, since the interaction between photons is very small. A possible solution is the enhancement of the photon-photon interaction, either in cavity QED configurations [2] or in dense atomic media exhibiting electromagnetically induced transparency (EIT) [3]. In this latter case, optical nonlinearities can be produced when EIT is disturbed, either by introducing additional energy level(s) [4,5] or by mismatching the probe and control field frequencies [6,7].

In this letter, we address the feasibility of EIT-based systems for the implementation of a two-qubit quantum phase gate (QPG) for travelling single photons [8,9,10], by means of a full quantum treatment of the system dynamics. In a QPG, one qubit acquires a phase conditional on the state of the other qubit, according to the transformation [11,12] |i⟩_1 |j⟩_2 → exp{iφ_ij} |i⟩_1 |j⟩_2, where i, j = 0, 1 denote the logical qubit bases. This gate is universal when the conditional phase shift (CPS) is nonzero, and it is equivalent to a CNOT gate up to local unitary transformations when φ = π [11,12]. The existing literature focused only on the evaluation of the CPS and on the best conditions for achieving φ = π [8,9,10], while the gate fidelity, which is the main quantity for estimating the efficiency of a gate, has never been evaluated. In this letter we calculate both the fidelity and the CPS of the QPG, enabling us to discover a general trade-off between a large CPS and a gate fidelity close to one, hindering the QPG operation. However, we show that this trade-off can be bypassed in the transient regime, which has never been considered before in EIT situations, still allowing a satisfactory gate performance.

The qubits are given by polarized single-photon wave packets with different frequencies, and the phase shifts φ_ij are generated when these two pulses cross an atomic ensemble in a five-level "M" configuration (see Fig. 1). The population is assumed to be initially in the ground state |3⟩. From this ground state, it could be excited either by the single-photon probe field, coupling to the transition |3⟩ ↔ |2⟩, or by the single-photon trigger field, coupling to the transition |3⟩ ↔ |4⟩. If the five levels are Zeeman sub-levels of an alkali atom, and both pulses have a sufficiently narrow bandwidth, the Zeeman splittings can be chosen so that the atomic medium is coupled only to a given circular polarization of either the probe or trigger field, while it is transparent for the orthogonally polarized mode, which crosses the gas undisturbed [9].
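For reference, the gate action and the conditional phase shift can be stated compactly in LaTeX. The combination of the four phases below is the standard convention for the CPS; it is written here as an assumption, since the source does not reproduce its explicit equation.

    \[
    |i\rangle_1 |j\rangle_2 \;\to\; e^{i\phi_{ij}}\, |i\rangle_1 |j\rangle_2,
    \qquad i,j \in \{0,1\},
    \]
    \[
    \phi \;=\; \phi_{11} - \phi_{10} - \phi_{01} + \phi_{00}.
    \]

Under this convention, single-qubit phases (which can be undone by local unitaries) cancel out of φ, so a nonzero φ certifies a genuine two-photon interaction.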
With this choice of polarizations, the logical basis for each qubit practically coincides with the two lowest Fock states of the mode with the "right" polarization, |0⟩_j and |1⟩_j (j = p, t). When the probe (trigger) is on two-photon resonance with the classical pump field with Rabi frequency Ω_1 (Ω_4), i.e., δ_1 = δ_2 (δ_3 = δ_4) (see Fig. 1 for a definition of the detunings), the system exhibits EIT for probe and trigger simultaneously. In fact, the scheme can be seen as formed by two adjacent Λ systems, perfectly symmetric between probe and trigger. A nonzero CPS occurs whenever a nonlinear cross-phase modulation (XPM) between probe and trigger is present. This cross-Kerr interaction takes place if the two-photon resonance condition is violated. For small frequency mismatches ε_12 = δ_1 − δ_2 and ε_34 = δ_3 − δ_4 (both chosen to be within the EIT window), absorption remains negligible and the cross-Kerr interaction between probe and trigger photons may be strong. The consequent CPS may become large, of the order of π, if the probe and trigger pulses simultaneously cross the atomic medium and interact for a sufficient time. This is achieved when the group velocities of the two pulses are small and equal (to v_g, see Refs. [8,9]), so that the interaction time is given by t_int = L/v_g, L being the length of the gas cell [13]. The inherent symmetry of this scheme guarantees perfect group velocity matching whenever δ_1 = δ_4 and δ_2 = δ_3 and the couplings are symmetric between the two quantum fields; here g_{p(t)} is the coupling constant between the probe (trigger) quantum mode with frequency ω_j and the corresponding transition with electric dipole moment μ_j. These features are shared by all the proposals for an EIT-based, nonlinear two-qubit quantum gate [8,9]. They essentially differ only in the way in which group velocity matching is achieved.

The scope of this paper is to find the ultimate physical limits imposed on QPG operations in systems with EIT-based optical nonlinearities. To this end, we neglect all possible technical limitations and experimental imperfections. First, we assume perfect spatial mode matching between the input single-photon pulses entering the gas cell and the optical modes excited by the driven atomic medium, which are determined by the geometrical properties of the gas cell and of the pump beams [14]. This allows us to describe the probe and trigger fields in terms of single travelling optical modes, with annihilation operators â_p and â_t. Next, we assume that the pulses are tailored in such a way that they simultaneously enter the gas cell and completely overlap with it during the interaction. This means that their length (compressed due to group velocity reduction) is of the order of the cell length L and their beam waist is of the order of the cell radius. In this way, the two pulses interact with all N_a atoms in the cell, and moreover one can ignore the spatial aspects of pulse propagation. With these assumptions, and neglecting dipole-dipole interactions, the interaction picture Hamiltonian can be written in terms of collective atomic operators built from all N_a atoms. Since the initial state contains at most two excitations, the time evolution can be followed exactly. What is relevant is that the dynamics remain simple and restricted within a finite-dimensional Hilbert space even when we include spontaneous emission, so that the time evolution is described by a master equation for the system density matrix ρ [Eq. (5)], where γ_kl denotes the decay rate from the excited states l = 2, 4 to the ground states k = 1, 3, 5 [15]. Spontaneous emission seems to complicate the system dynamics.
However, the Hamiltonian evolution involves only the singly excited symmetric atomic states of Eq. (4). This means that these collective states decay with a rate equal to the single-atom decay rate γ_kl, and that spontaneous emission involves only a restricted number of additional collective atomic states in the dynamics. To state it in an equivalent way, the atomic medium behaves as an effective single five-level atom, with a collectively enhanced coupling to the optical modes, g_j√N_a, but with the same single-atom decay rates γ_kl, Rabi frequencies Ω_i, and detunings δ_i (see also Ref. [16]). Spontaneous emission causes the four independent Hilbert subspaces corresponding to the four initial state components to become coupled. Moreover, the joint effect of the "cross" decay channels |4⟩ → |1⟩ and |2⟩ → |5⟩, together with the Hamiltonian dynamics, couples the above-mentioned collective states with six new collective states.

This analysis allows us to fully characterize the QPG operation, by calculating both the CPS φ of Eq. (1) and the fidelity of the gate, at variance with former treatments [8,9,10]. The accumulated CPS as a function of t_int is obtained by using the fact that the phase shifts φ_ij of Eq. (1) are given by combinations of the phases of the off-diagonal matrix elements (in the Fock basis) of the reduced density matrix of the probe and trigger modes, ρ_f(t_int). The gate fidelity is given by [12] F(t_int) = ⟨ψ_id(t_int)|ρ_f(t_int)|ψ_id(t_int)⟩, averaged over initial states (the average is denoted by an overbar), where |ψ_id(t_int)⟩ = c_00 exp{iφ_00(t_int)}|0_p, 0_t⟩ + c_01 exp{iφ_01(t_int)}|0_p, 1_t⟩ + c_10 exp{iφ_10(t_int)}|1_p, 0_t⟩ + c_11 exp{iφ_11(t_int)}|1_p, 1_t⟩ is the ideally evolved state from the initial condition (3), with the phases φ_ij(t_int) evaluated from ρ_f(t_int) as discussed above. The overbar denotes the average over all initial states (i.e., over the c_ij; see Ref. [17]). The above fidelity characterizes the performance of the QPG as a deterministic gate. However, one could also consider the QPG as a probabilistic gate, whose operation is considered only when the number of output photons is equal to the number of input photons. The performance of this probabilistic QPG could be studied experimentally by performing a conditional detection of the phase shifts, and it is characterized by the conditional fidelity F_c(t_int), similar to that of Eq. (6), but with ρ_f(t_int) replaced by ρ_c,f(t_int) = Tr_atom{|ψ_nj(t_int)⟩⟨ψ_nj(t_int)|}/⟨ψ_nj(t_int)|ψ_nj(t_int)⟩, where |ψ_nj(t_int)⟩ is the (non-normalized) evolved atom-field state conditioned on the detection of no quantum jumps [18], i.e., of no spontaneous emission. The conditional fidelity is always larger than the unconditional one, but they become equal (and both approach 1) for an ideal QPG in which the number of photons is conserved and all the atoms remain in state |3⟩. This ideal condition is verified in the limit of large detunings δ_j ≫ γ_kj (to significantly suppress spontaneous emission) and very small couplings g_j√N_a ≪ Ω_j. In this limit, each component of the initial state of Eq. (3) practically coincides with the dark state of the four independent Hamiltonian dynamics discussed above. The four phase shifts φ_ij can be evaluated as a fourth-order perturbation expansion of the corresponding eigenvalue, multiplied by t_int, obtaining a CPS proportional to t_int and, in particular, to the factor ε_34(ε_12^2 + Ω_1^2)/(ε_12 δ_1 − Ω_1^2). This prediction is verified by the numerical solution of Eq. (5) in the limit of large detunings and small couplings.
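As an illustration of how a master equation like Eq. (5) can be integrated numerically, here is a minimal QuTiP sketch for a resonantly driven, decaying two-level atom. It is only a schematic stand-in for the paper's five-level, two-mode model; the decay rate echoes the value quoted in the text, while the drive strength and time grid are illustrative assumptions.

    import numpy as np
    from qutip import basis, destroy, mesolve

    gamma = 2 * np.pi * 6e6          # decay rate (rad/s), as in the paper
    Omega = 4 * gamma                # classical Rabi frequency (assumed)

    a = destroy(2)                   # lowering operator for the two levels
    H = 0.5 * Omega * (a + a.dag())  # resonant driving Hamiltonian
    c_ops = [np.sqrt(gamma) * a]     # spontaneous-emission collapse operator

    psi0 = basis(2, 0)               # start in the ground state
    tlist = np.linspace(0.0, 0.4 / gamma, 400)  # transient regime, gamma*t < 1

    result = mesolve(H, psi0, tlist, c_ops, e_ops=[a.dag() * a])
    print("excited-state population at t_int:", result.expect[0][-1])

The same mesolve call, with a five-level Hamiltonian and the γ_kl collapse operators, is the kind of computation needed to solve an equation of the form of Eq. (5).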
In this perturbative limit, however, the resulting CPS is too small, even for very long interaction times (i.e., long gas cells): for example, for g_{p,t}√N_a = 0.5 MHz, ε_{12,34} = 1.9 MHz, Ω_{1,4} = 65 MHz, and δ_{1,3} = 1.9 GHz, we obtain a tiny CPS of only 3 × 10^-4 radians when t_int = 10^-4 s. This is not surprising, because this limit corresponds to a dispersive regime far from EIT, and one has to explore the non-perturbative regime of larger couplings in order to exploit EIT and achieve a satisfactory QPG operation. We have found good QPG performance for the following parameters, corresponding to a gas cell of N_a ≃ 10^8 atoms of 87Rb: γ_kl = γ = 2π × 6 MHz, δ_1 = δ_3 = 15γ, ε_12 = ε_34 = 0.01γ, g_p = g_t = 0.0022γ, Ω_1 = Ω_4 = 4γ. The results are shown in Figs. 2 and 3, where we see that a CPS of ∼π radians is obtained in the transient regime for t_int ≈ 0.4/γ ∼ 10 ns, corresponding to a fast operation of the gate. At the same interaction time, the unconditional gate fidelity (Fig. 3, full line) is about 94%, while the conditional gate fidelity reaches the value of 99% (Fig. 3, dashed line), in correspondence with a success probability of the gate equal to 0.94. The probe and trigger group velocity is v_g ≃ 3 × 10^6 m/s, yielding a gas cell length L = v_g t_int ≃ 3.1 cm. The value of g_j yields an interaction volume V ≃ 2 × 10^-3 cm^3, corresponding to a gas cell diameter of about 330 µm and to an atomic density N_a/V ≃ 5 × 10^10 cm^-3.

EIT is a stationary phenomenon, while the above results are obtained in the transient regime where γt_int < 1. However, we can attribute these results to a sort of "non-stationary" EIT process. This is suggested by the reduction of v_g (by a factor ≃ 100), which has been estimated by evaluating the "instantaneous" susceptibility from the reduced atomic density matrix given by Eq. (5) and then averaging over the time interval between 0 and t_int. This "non-stationary" v_g is one order of magnitude smaller than the conventional v_g obtained from the steady-state susceptibility corresponding to the above parameters. The presence of a moderate EIT process is also confirmed by the fact that, in a numerical study of the three-level ladder atomic scheme, yielding XPM without EIT [4], we found a slower accumulation of the CPS and a smaller conditional fidelity (∼78%) for a corresponding set of parameters.

Our study of Eq. (5) also shows that it is not possible to achieve a comparable QPG performance in the steady-state regime γt_int ≫ 1. In fact, we have found at best a CPS of π in correspondence with fidelities F(t_int) and F_c(t_int) equal to 77% and 83%, respectively. This is due to the general presence of a trade-off between the size of the CPS and the gate fidelity. In fact, we have seen that both gate fidelities approach 1 in the small-perturbation limit, but with a CPS which becomes appreciable only for unrealistically long gas cells. A larger CPS requires a larger ratio g_j√N_a/Ω_j. This condition, however, increases the population of the atomic states |1⟩ and |5⟩ at the expense of the initial atomic state |3⟩, unavoidably decreasing the gate fidelity. Similar conclusions hold for other options, such as increased detunings δ_j or adjusted two-photon detunings ε_ij. This trade-off is present also at large ratios g_j√N_a/Ω_j in the transient regime, where, however, it may be less effective.
In fact, in this case one has significant oscillations of the atomic populations, but it is possible to find appropriate interaction times t_int at which high fidelities are achieved (see Fig. 3), simultaneously with a CPS of about π.

In conclusion, our study shows that the implementation of efficient EIT-based nonlinear two-qubit gates for travelling single photons is possible. In fact, even if there is a trade-off between the size of the CPS and the fidelity of the gate in the stationary regime, it is possible to have a satisfactory gate performance in the transient regime, where a fast gate operation and fidelities equal to 0.99 are achievable. The experimental realization might be challenging, but the implementation of this quasi-deterministic two-qubit gate would be extremely useful, not only for quantum computation but also for quantum communication purposes: for example, a QPG allows a complete Bell-state discrimination for single-photon polarization qubits [19]. We expect that these considerations apply to all EIT-based cross-Kerr schemes [8,9], regardless of the specific level scheme considered. Finally, we note that our analysis does not apply to situations where the nonlinearity comes from independent processes such as atomic collisions or dipole-dipole interactions [10]. We acknowledge enlightening discussions with G. Di Giuseppe.
2017-02-11T01:58:15.278Z
2005-07-14T00:00:00.000
{ "year": 2005, "sha1": "8e3ead7a63fbec2909f389f125042915891e86e7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ee4e82ef594dfbfa55bfb964baa63acb1d22ae03", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
201829963
pes2o/s2orc
v3-fos-license
THP-1 Cells and Pro-Inflammatory Cytokine Production: An In Vitro Tool for Functional Characterization of NOD1/NOD2 Antagonists

THP-1 cells express high levels of native functional nucleotide-binding oligomerization domain 1 (NOD1), NOD2, and Toll-like receptor 4 (TLR4) receptors, and have often been used for investigating the immunomodulatory effects of small molecules. We postulated that they would represent an ideal cell-based model for our study, the aim of which was to develop a new in vitro tool for functional characterization of NOD antagonists. NOD antagonists were initially screened for their effect on NOD agonist-induced interleukin-8 (IL-8) release. Next, we examined the extent to which the selected NOD antagonists block the NOD-TLR4 synergistic crosstalk by measuring the effect of NOD antagonism on tumor necrosis factor-α (TNF-α) secretion from doubly activated THP-1 cells. Overall, the results obtained indicate that pro-inflammatory cytokine secretion from THP-1 cells provides a valuable, simple, and reproducible in vitro tool for functional characterization of NOD antagonists.

Introduction

The cytoplasmic nucleotide-binding oligomerization domain (NOD)-like receptors (NLRs) NOD1 and NOD2 belong to the pattern recognition receptor family and play a vital role in the formation of the innate immune response by recognizing distinct pathogen-associated molecular patterns [1][2][3]. The dipeptide d-Glu-meso-DAP (iE-DAP) and muramyl dipeptide (MDP) constitute the minimal sequences required for activation of NOD1 [4][5][6] and NOD2 [7,8], respectively. Both downstream signaling pathways trigger activation of nuclear factor κB (NF-κB) and mitogen-associated protein kinases, resulting in an inflammatory response [1,2]. iE-DAP and MDP by themselves are rather weak stimulants of immune cells, consequently inducing the release of modest amounts of pro-inflammatory cytokines. However, they were found to synergize with lipopolysaccharide (LPS), a Toll-like receptor 4 (TLR4) agonist, and many other TLR ligands, resulting in an increased pro-inflammatory cytokine-releasing capacity [9][10][11][12]. NF-κB, which is activated downstream of both TLR and NOD receptors, is a common denominator of both pathways and therefore serves as the primary mediator of the synergistic effects of combined stimulation.

The NOD1/2 antagonistic activity of compounds and their NOD1 vs. NOD2 selectivity is usually determined using NOD1/2-overexpressing HEK293T cells transfected with an NF-κB-driven luciferase/secreted embryonic alkaline phosphatase (SEAP) reporter gene, typically by pretreating these cells with potential antagonists and subsequently stimulating them with NOD1/2 agonists, followed by a simple measurement of luciferase/SEAP activity [13,16,17,22,24]. To extend the analysis beyond reporter gene assays, a few methods have also been developed for assessment of the functional activity of compounds, namely their influence on the authentic downstream effect of NOD-triggered NF-κB activation, the release of interleukin-8 (IL-8). Several cell lines have been considered for functional characterization of NOD ligands. For example, colonic epithelial HCT116 cells express both NOD1 and NOD2 and respond to their activation with increased NF-κB activity, which is reflected in the subsequent secretion of IL-8 [21,25]. Unfortunately, they do not respond to TLR4 agonists and cannot be utilized to investigate how NOD antagonists modulate NOD-TLR synergy.
Next, human breast cancer epithelial MCF-7 cell lines overexpressing NOD1 or NOD2 have been used in a similar fashion for characterization of NOD1/2 ligands [26,27]. The major limitations of the MCF-7 assay include the need for overexpression as well as the use of cycloheximide as an IL-8-releasing adjuvant [26,27]. Lastly, in vitro assays using peripheral blood mononuclear cells/monocytes freshly isolated from human blood have also been utilized [28][29][30][31][32]; however, such assays are somewhat inconvenient due to a lengthy isolation procedure, which limits their broader application. It is worth noting that assays using primary cells are also highly susceptible to biological donor-to-donor variability.

THP-1 cells, derived from an acute monocytic leukemia patient, have been used in a variety of studies [33,34] and have proven useful for investigating the immunomodulatory effects of small molecules [35], including NOD agonists [28,36]. In addition to TLR4, THP-1 cells also express high levels of native functional NOD1 and NOD2 and respond to NOD stimulation with IL-8 secretion, due to which they constitute a viable alternative to the aforementioned cell lines [9]. Based on this information, we postulated that THP-1 cells would represent an ideal cell-based model for our present study. Specifically, our aim was to develop a new in vitro assay for functional characterization of validated NOD antagonists by determining how they modulate pro-inflammatory cytokine secretion from activated THP-1 cells. These antagonists were first screened for their ability to inhibit C12-iE-DAP (NOD1 agonist)- and/or MDP (NOD2 agonist)-induced IL-8 production from THP-1 cells. Given that NOD agonists are known to act synergistically with LPS in terms of pro-inflammatory cytokine secretion, the antagonists were further assessed for their ability to inhibit NOD1/NOD2 agonist-induced TNF-α release from LPS-stimulated THP-1 cells. Overall, the results obtained indicated that pro-inflammatory cytokine secretion provides a valuable in vitro tool for functional characterization of NOD antagonists to supplement the results obtained in reporter gene assays.

Determination of Optimal Experimental Conditions for Screening (Dose-Finding Study)

THP-1 cells represent a convenient, robust, and reproducible alternative, which enables intra-assay comparison. A consistent protocol is of key importance to evaluate the compounds in a reproducible and unbiased manner. Since cell densities have been shown to affect the outcome of the assay, with the release of cytokines being considerably diminished at low cell densities [37], a cell density of 10^6 cells/mL was chosen as optimal, while passaging was performed in agreement with the protocol for the human cell line activation test (h-CLAT) [38]. Recognition of an NOD1/NOD2 agonist triggers a signaling pathway leading to the activation of NF-κB and the production of pro-inflammatory cytokines (e.g., TNF-α, IL-1β, IL-6, IL-8) [9]. To define the optimal experimental conditions, preliminary experiments were conducted with C12-iE-DAP (lauroyl-γ-d-glutamyl-meso-diaminopimelic acid), an acylated derivative of iE-DAP, as a reference NOD1 agonist, and with MDP as a NOD2 agonist. They were initially screened for their ability to induce IL-8 and TNF-α release from naive THP-1 cells (shown in Figure 1A,B). None of the selected NOD antagonists was cytotoxic at the maximum concentration tested (50 µM), with the exception of GSK669, which showed cytotoxicity at concentrations ≥ 10 µM.
Stimulation of cells with C12-iE-DAP/MDP brought about a dose-dependent increase in IL-8 release at all tested concentrations. On the other hand, neither MDP nor C12-iE-DAP produced substantial TNF-α release by themselves, suggesting that this cytokine is not a suitable biomarker for functional evaluation of NOD antagonists. From these preliminary data, a maximum concentration of 10 µM of either the NOD1 or the NOD2 agonist was chosen for further evaluations of the selected NOD antagonists.

Previous work demonstrated that co-stimulation of NODs and TLRs brings about a substantial increase in cytokine production (e.g., IL-8, TNF-α) [36]. To corroborate these results in our assay, we investigated how selected representative NOD1 and NOD2 agonists modulate LPS-induced cytokine secretion from THP-1 cells (shown in Figure 1C,D). THP-1 cells were treated with MDP or C12-iE-DAP (both at 2 µM and 10 µM), alone or in combination with 1 or 10 ng/mL LPS. IL-8 and TNF-α release were assessed 20 h later. As expected, an overwhelming response was observed in terms of IL-8 secretion on co-stimulation with either concentration of LPS; therefore, we deemed this cytokine not suitable for determining the effect of NOD antagonists on NOD-TLR4 synergy. On the contrary, in agreement with previous studies, an evident synergistic effect on LPS-induced TNF-α secretion was observed. Stimulation of THP-1 cells either with MDP or with C12-iE-DAP (both at 10 µM) in combination with 1 ng/mL of LPS significantly potentiated the production of TNF-α.

We addressed the possible effect of the duration of NOD antagonist pretreatment on the extent of inhibition of cytokine release using the selective bona fide NOD1 and NOD2 antagonists ML130 and GSK669, respectively. The THP-1 cells were pretreated with 5 µM of ML130 or GSK669 (this concentration was chosen as it corresponds to their respective IC50 values) for 0, 1, or 3 h; they were then stimulated with the corresponding NOD agonist (10 µM), and IL-8 release was determined after 20 h (Figure 2). The duration of pretreatment did affect the performance of the NOD antagonists, albeit to a minor extent. The maximum inhibition was achieved with pretreatment lasting 3 h; however, 1 h of pretreatment was chosen for reasons of convenience.
Effect of NOD Antagonists on IL-8 Secretion from Stimulated THP-1 Cells

Treatment of THP-1 cells with the selected NOD antagonists alone did not produce substantial IL-8 release and produced only negligible TNF-α secretion at the maximum concentration tested (data not shown). Having established that the compounds themselves do not affect cytokine secretion, we investigated the dose-dependent effect of selected NOD1, NOD2, and dual NOD1/2 antagonists on NOD agonist-induced IL-8 release. Pretreatment of THP-1 cells with increasing concentrations of NOD antagonists (0.5-50 µM) resulted in dose-dependent suppression of IL-8 release from NOD agonist-stimulated THP-1 cells (Figure 3A,B). Notably, GSK669 was not tested at the highest concentration (50 µM) due to cytotoxicity. As expected, the results demonstrated predominant actions of ML130 and ML146 towards NOD1 agonist-induced cytokine release, while GSK669 mostly affected NOD2 agonist-induced release. The dual antagonist SZA-39 antagonized the response to both stimuli. Overall, all tested compounds induced a dose-dependent inhibitory effect on IL-8 release, following a nonlinear semilogarithmic model (Figure A2); their IC50 values were also determined. The relationship between the IC50 values determined in previous in vitro reporter assays using commercially available NOD-overexpressing HEK cell lines and the IC50 values obtained in our functional assay is summarized in Table 1 for comparison. The results obtained in THP-1 cells revealed a low micromolar IC50 of the reference NOD1 antagonist ML130 on NOD1 (IC50 = 2.97 ± 0.31 µM) and selectivity over NOD2 (IC50 > 50 µM), which is in rather good agreement with the previously reported activities in reporter gene assays [22]. Similarly, the results for ML146 (IC50 (NOD1) = 10.5 ± 1.32 µM; IC50 (NOD2) = 32.2 ± 5.16 µM) indicated a somewhat lower activity compared with that obtained in reporter gene assays [22]. By contrast, GSK669 possesses a low micromolar IC50 on NOD2 (IC50 = 1.57 ± 0.15 µM) and is selective over NOD1 (IC50 > 50 µM), thus representing a good match to the activities measured in HEK reporter cell lines [21]. Finally, the dual NOD1/NOD2 antagonist SZA-39 showed a slightly weaker activity in THP-1 cells (IC50 (NOD1) = 27.5 ± 6.85 µM; IC50 (NOD2) = 14.4 ± 1.97 µM) compared with the values measured in HEK cells [22].

Effect of Selected NOD1/NOD2 Antagonists on NOD1/2-TLR4 Synergy

Combined NOD/TLR stimulation reflects the scenario of pathological conditions of infection and chronic inflammation, thus recapitulating the innate immune responses to invading bacteria. Previous work has shown that co-stimulation of NOD1/2 and TLR4 brings about a substantial increase in pro-inflammatory cytokine production (e.g., IL-8, TNF-α) [40]. This prompted us to investigate whether our assay can reveal whether, and to what extent, the NOD antagonists block such a synergistic response. Specifically, we determined the effect of NOD antagonism on TNF-α secretion from activated THP-1 cells. Firstly, in order to exclude a possible effect of the NOD antagonists on the TLR4 pathway and to ascertain the compounds' selectivity profile, an assay measuring the effect of NOD1/NOD2 antagonists on TLR4-dependent IL-8 and TNF-α release was utilized. Evidently, pretreatment with NOD antagonists did not prevent the LPS-elicited pro-inflammatory cytokine release from THP-1 cells (Figure 4).
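The dose-dependent suppression described above is commonly summarized by fitting a four-parameter logistic (Hill) curve to obtain the IC50. The following is a minimal SciPy sketch, not the authors' GraphPad Prism workflow; the concentration and IL-8 values are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(x, bottom, top, ic50, hill):
        """Four-parameter logistic dose-response curve."""
        return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

    # Invented data: antagonist concentration (uM) vs. IL-8 release
    # (% of the agonist-only control).
    conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 25.0, 50.0])
    il8 = np.array([95.0, 88.0, 72.0, 48.0, 30.0, 18.0, 12.0])

    popt, _ = curve_fit(four_pl, conc, il8, p0=[10.0, 100.0, 5.0, 1.0])
    bottom, top, ic50, hill = popt
    print(f"estimated IC50: {ic50:.2f} uM")

Concentrations spanning the transition on a log scale, as in the 0.5-50 µM range used here, are what make the semilogarithmic fit well conditioned.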
Secondly, the diminished levels of TNF-α production observed upon combined stimulation of THP-1 cells clearly demonstrate that antagonists of the NOD1/2 signaling pathway successfully suppressed the NOD-TLR4 synergy (Figure 5). Namely, in co-stimulated cells, GSK669 reduced TNF-α release to the level induced by the TLR4 agonist alone. Accordingly, the synergistic effect induced by C12-iE-DAP and LPS was markedly suppressed in NOD1-antagonized cells. These results corroborate those obtained by Uehara et al., though it should be noted that they employed RNA interference technology to suppress NOD1/NOD2 [36]. Importantly, our findings indicate that NOD-TLR4 crosstalk can be completely abolished by small molecules.

Cell Culture

For all experiments, THP-1 cells (Istituto Zooprofilattico di Brescia, Brescia, Italy) were diluted to 10^6 cells/mL in RPMI 1640 containing 2 mM l-glutamine, 0.1 mg/mL streptomycin, 100 IU/mL penicillin, and 50 µM 2-mercaptoethanol, supplemented with 10% heat-inactivated fetal calf serum (media), and cultured at 37 °C in 5% CO2. The experiments were carried out on passages 3-12. To evaluate IL-8 and TNF-α production, cultures were set up in 24-well culture plates containing 500 µL of cells. They were pretreated with the selected NOD antagonists at increasing concentrations for 1 h before the addition of either MDP or C12-iE-DAP (both at 10 µM) and then incubated for 20 h. Each concentration was tested in duplicate, and untreated cells were exposed to DMSO, which represented the vehicle control (not exceeding a 0.2% final concentration). In synergy studies, cells were additionally stimulated with lipopolysaccharide (LPS, from Escherichia coli serotype 0127:B8, Sigma) at a final concentration of 1-10 ng/mL, as indicated in the figure legends. Cell-free supernatants were collected at 20 h by centrifugation at 3000 rpm for 5 min and stored at −80 °C.

Cell Viability

Prior to investigating the effects of the selected NOD antagonists on cytokine production, their potential cytotoxicity for THP-1 cells was assessed. Cell viability was assessed by leakage of LDH from damaged cells; LDH is a well-known indicator of cell membrane integrity and cell viability [41]. Cells were treated for 20 h with the compound of interest at concentrations of up to 50 µM. LDH activity was determined in cell-free supernatants using a commercially available colorimetric kit (Takara Bio Inc., Kusatsu, Japan). Results are expressed as optical density (OD).

Cytokine Production (ELISA)

IL-8 and TNF-α release from THP-1 cells was measured in cell-free supernatants obtained by centrifugation at 3000 rpm for 5 min and stored at −80 °C until measurement. IL-8 and TNF-α production were assessed by specific sandwich ELISA (ImmunoTools, Friesoythe, Germany; eBioscience/R&D Systems, Minneapolis, MN, USA). Results are expressed in pg/mL. The limit of detection was 15.6 pg/mL for IL-8 and 4 pg/mL for TNF-α.

Data Analysis and Statistics

All experiments were performed at least three times, with average values expressed as means ± standard deviation (SD). Statistical analyses were performed using GraphPad Prism 6 (La Jolla, CA, USA). Statistical significance was determined either with the Mann-Whitney test or with the Kruskal-Wallis test followed by a post hoc Dunn's multiple comparison test, as indicated in the figure legends. Differences were considered nonsignificant for p > 0.05, significant (*) for p < 0.05, very significant (**) for p < 0.01, and extremely significant (***) for p < 0.001.
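A minimal sketch of the nonparametric tests named above, using SciPy; the cytokine values are invented, and Dunn's post hoc test, which is not part of SciPy, is only indicated in a comment (it is available in the third-party scikit-posthocs package).

    import numpy as np
    from scipy.stats import kruskal, mannwhitneyu

    # Invented IL-8 concentrations (pg/mL) for three groups.
    control = np.array([120.0, 135.0, 110.0, 128.0])
    agonist = np.array([980.0, 1100.0, 1050.0, 990.0])
    agonist_plus_antagonist = np.array([420.0, 380.0, 455.0, 400.0])

    # Omnibus comparison across the three groups.
    h_stat, p_kw = kruskal(control, agonist, agonist_plus_antagonist)
    print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")
    # A post hoc Dunn's test (e.g., scikit_posthocs.posthoc_dunn) would
    # then localize which pairs of groups differ.

    # Two-group comparison, as indicated for some figures.
    u_stat, p_mw = mannwhitneyu(agonist, agonist_plus_antagonist)
    print(f"Mann-Whitney: U = {u_stat:.1f}, p = {p_mw:.4f}")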
IC50 values of NOD1/2 inhibition were calculated by a nonlinear regression model using GraphPad Prism 6 software.

Conclusions

In this study, we clearly demonstrated that the THP-1 assay, as described above, represents a convenient screening tool for the functional activity of NOD antagonists. While IL-8 release proved to be a suitable biomarker for functional characterization of NOD antagonism, TNF-α release additionally provides an estimate of the NOD antagonists' capacity to block NOD-TLR crosstalk. Overall, the results obtained indicate that the pro-inflammatory cytokine secretion profile of THP-1 cells constitutes a valuable, simple, and reproducible in vitro tool for functional characterization of NOD antagonists to supplement the results of reporter gene assays.
2019-09-05T13:17:28.189Z
2019-08-30T00:00:00.000
{ "year": 2019, "sha1": "4f47ff8d28de867c25b0e3d6087c1ea94c37b146", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/20/17/4265/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cd9a62f3ef2b6aa58f29eb179dce35880eb07392", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
250503659
pes2o/s2orc
v3-fos-license
Nasal cytology can predict clinical efficacy of subcutaneous immunotherapy in intermittent allergic rhinitis

Introduction: Allergen immunotherapy (AIT) is the only disease-modifying treatment option available for patients with IgE-mediated allergic rhinitis. The identification of specific biomarkers which may predict the response to AIT is currently an active field of research in the context of the recommended personalization of medicine. Aim: To assess the changes in rhinological parameters in intermittent allergic rhinitis (IAR) patients resulting from subcutaneous immunotherapy (SCIT). Material and methods: Forty-two patients (female: 19; 45%) with IAR qualified for subcutaneous immunotherapy were enrolled in this study. Fourteen (33.3%) patients were desensitized with grass pollen allergen extracts, 12 (28.6%) with tree pollen allergen extracts, and 16 (38.1%) with grass and tree pollen allergen extracts. The patients were evaluated before AIT during the pollen season and in the next pollen season after the introduction of subcutaneous immunotherapy. On both occasions, determination of the total nasal symptom score (TNSS), rhinomanometry, and nasal cytology were performed. Results: All examined parameters improved significantly after one course of allergen immunotherapy: the percentage of eosinophils in the nasal mucosa, TNSS, and nasal resistance decreased, whereas the nasal flow rate increased. The decrease in the percentage of nasal eosinophils correlated significantly with the improvement in TNSS (rs = 0.39, p < 0.05) and was highest in the subgroup sensitive to grass pollen (44.5 (40-52)). Conclusions: The rhinological assessment confirmed the high effectiveness of SCIT in intermittent allergic rhinitis. A high percentage of eosinophils in nasal cytology before subcutaneous immunotherapy can predict its clinical efficacy in intermittent allergic rhinitis, especially in grass pollen allergy.

Introduction

Allergic diseases are one of the most serious health problems worldwide. According to the latest European Academy of Allergy and Clinical Immunology (EAACI) report from 2016, it is estimated that by 2025 over 50% of all Europeans will suffer from at least one type of allergic disease [1]. The most common problem in clinical practice, apart from bronchial asthma, is allergic rhinitis (AR). AR is an inflammatory disease of the nasal mucosa induced by an IgE-mediated reaction [2]. The costs generated by treatment, prophylaxis, and diagnostic procedures, as well as the deterioration in patients' quality of life, mean that AR is also an important epidemiological and social problem [3,4]. Allergen immunotherapy (AIT) is the only disease-modifying treatment option available for patients with IgE-mediated allergic diseases [5,6]. AIT induces immune tolerance and prevents the development of new sensitizations and the progression from AR to asthma [7,8]. The effectiveness of immunotherapy depends on the type of sensitizing allergen; studies show that treatment is more effective in selected patients with intermittent allergic rhinitis [9]. The identification of specific biomarkers which may predict the response to AIT in AR patients is currently an active field of research in the context of the recommended personalization of medicine [10,11].

Aim

The aim of the study was to evaluate rhinological parameters that are practicable in the specialist office in patients with intermittent allergic rhinitis qualified for treatment with subcutaneous allergen immunotherapy (SCIT).
Study setting and participants

This single-center prospective observational study included patients of an allergy and ENT outpatient clinic with intermittent allergic rhinitis qualified for subcutaneous immunotherapy in accordance with the EAACI guidelines, in the pollen seasons from January to July of 2018 and 2019 [12]. The diagnosis of intermittent allergic rhinitis was based on the clinical history and skin prick test results. Rhinitis was classified as intermittent according to the ARIA criteria [13]. The patients with intermittent allergic rhinitis were evaluated twice: before AIT during the pollen season (V1) and in the next pollen season (V2) after the introduction of subcutaneous immunotherapy (Figure 1). Both during the initial visit (V1) and at follow-up (V2), the total nasal symptom score (TNSS), rhinomanometry, and nasal cytology were performed. All patients denied the use of nasal or systemic corticosteroids, antihistamines, or antileukotrienes for at least 14 days prior to each assessment. Exclusion criteria covered: age under 18 years, concomitant bronchial asthma, allergy to perennial allergens such as house dust mites, Alternaria, Cladosporium, and animal dander, chronic rhinosinusitis with nasal polyps (CRSwNP), clinically relevant nasal septum deviation, and contraindications to SIT according to the EAACI recommendations [12,14].

Skin prick tests

The skin prick tests were performed according to the European Academy of Allergy and Clinical Immunology guidelines. Wheal diameters ≥ 3 mm were considered positive [15]. The panel consisted of: house dust mites (Dermatophagoides farinae and Dermatophagoides pteronyssinus), Alternaria tenuis, animal dander (cat, dog), grass pollen, rye, tree (birch, alder, hazel, beech) pollen, mugwort, and positive (histamine 10 mg/ml) and negative (physiological saline, 0.9% NaCl) controls. Allergen-specific IgE serum concentrations were not measured in the studied patients.

Allergic rhinitis severity evaluation

The total nasal symptom score (TNSS) was used to measure the clinical severity of AR and included the following four symptoms over the last 2 weeks: nasal congestion, runny nose, nasal itching, and sneezing. Each symptom was rated on a scale of 0 to 3, with 0 meaning no symptoms and 3 the most pronounced symptoms in the last 2 weeks. The maximum number of points was 12. Severity was assessed as mild for 0-4 points, moderate for 5-8, and severe for 9-12.

Rhinomanometry

Rhinomanometry was performed using a Rhinotest MP 1000 device (MES, Poland) according to the International Committee for the Standardization of Rhinomanometry (ICSR) guidelines. The values of total flow and total resistance at the 150 Pa level were calculated [16].

Nasal cytology evaluation

Nasal mucosa samples were collected under direct vision in anterior rhinoscopy. Two scrapes of the epithelial membrane of the inferior turbinate were performed with disposable nasal brushes to obtain each sample. The specimen was immediately smeared on a glass slide, fixed for 1 min in 95% ethyl alcohol, and stained with hematoxylin and eosin. Slides were examined using oil immersion light microscopy, and the percentage of eosinophils per 100 cells was calculated [17].

Immunotherapy

Therapy was performed with depot allergoids (Purethal, HAL Allergy B.V., Leiden): a mixture of tree pollen, grass pollen, or grass/tree pollen at a concentration of 20,000 BAU (bioequivalent allergy units)/ml.
Patients received a conventional administration schedule of SIT by subcutaneous injections of Purethal, starting with 0.05 ml and then administered at weekly intervals until the maintenance dose (0.5 ml) was reached. Subsequently, maintenance doses, corresponding to 0.5 ml of drug solution, were given at 4-weekly intervals.

Statement of ethics

The research was approved by the Bioethics Committee at the Medical University of Silesia in Katowice, Poland (Resolution No. KNW/0022/KB1/107/I/16/18). Written informed consent was obtained from each participant prior to the study.

Statistical analysis

Data are presented as medians with interquartile ranges for variables with a non-normal distribution and as means ± SD for variables with a normal distribution. Wilcoxon's test was used to compare non-normal variables, the Kruskal-Wallis test to compare more than two groups, and Spearman's rank test to evaluate associations between variables. P-values < 0.05 were considered significant. The analysis was performed using Statistica 13.3 software (StatSoft Poland).

The characteristics of the study groups are summarized in Table 1. All patients were followed up in the next season (V2). All examined parameters improved significantly after 1 year of allergen immunotherapy: both the percentage of eosinophils in nasal mucosa samples and the TNSS decreased, whereas the nasal flow rate increased and nasal resistance decreased. All results are shown in Table 2. TNSS was assessed as mild rhinitis in 26 patients (62%), moderate in 14 (33%), and severe in 2 (5%). A significant correlation was found between the absolute change in TNSS and the change in the percentage of eosinophils in the nasal mucosa assessed before the beginning and after the first year of subcutaneous allergen immunotherapy (rs = 0.39, p < 0.05; Spearman correlation test). Comparisons of the subgroups of patients sensitive to grass pollen, tree pollen, or grass and tree pollen revealed that the change in the percentage of eosinophils in nasal mucosa samples differed significantly among the subgroups, being highest in the subgroup sensitive to grass pollen (44.5 (40-52)), lower in the subgroup sensitive to tree pollen (30.5 (26-36.5)), and lowest in the grass and tree pollen sensitive group (18 (31-21)); p < 0.001, Kruskal-Wallis test (Figures 2, 3).

Discussion

The problem of ineffective pharmacological treatment in patients with intermittent allergic rhinitis may be addressed with allergen immunotherapy [5]. Allergen immunotherapy is a long-term therapeutic process and can cause side effects; therefore, qualification for this treatment should be considered very carefully [18,19]. New treatment strategies for SCIT require the identification of factors predicting better effectiveness of this treatment [20,21]. The efficacy of SCIT has been confirmed by numerous clinical trials and a meta-analysis [22,23]. In this study, the authors performed a rhinological assessment in patients with intermittent allergic rhinitis during the pollen season and in the next season after initiating subcutaneous immunotherapy. To assess rhinological status, standardized methods of high diagnostic relevance, available in everyday clinical practice, were used, both objective and subjective. Among the objective methods of rhinological assessment, rhinomanometry is recommended [16]. In our study, we used Eccles' guidelines for rhinomanometric testing [24].
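The scoring and the correlation analysis described in the Methods can be made concrete with a short Python sketch. The TNSS arithmetic (four symptoms scored 0-3; severity bands 0-4, 5-8, 9-12) and the Spearman test mirror the paper; all patient values below are invented.

    import numpy as np
    from scipy.stats import spearmanr

    def tnss(congestion, rhinorrhea, itching, sneezing):
        """Total nasal symptom score: four symptoms, each 0-3 (max 12)."""
        total = congestion + rhinorrhea + itching + sneezing
        if total <= 4:
            severity = "mild"
        elif total <= 8:
            severity = "moderate"
        else:
            severity = "severe"
        return total, severity

    print(tnss(2, 3, 1, 2))  # -> (8, 'moderate')

    # Invented per-patient changes (V1 minus V2) in TNSS and in the
    # percentage of nasal eosinophils, mimicking the correlation analysis.
    delta_tnss = np.array([4, 6, 2, 5, 7, 3, 1, 6])
    delta_eos = np.array([30, 45, 10, 38, 52, 20, 8, 40])
    rs, p = spearmanr(delta_tnss, delta_eos)
    print(f"Spearman rs = {rs:.2f}, p = {p:.3f}")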
After one course of SCIT, a significant increase in air flow on rhinomanometry was observed, accompanied by a significant decline in total resistance. The rhinomanometric evaluation was confirmed by a standardized subjective test, the TNSS, which the authors recommend for assessing the severity of symptoms of intermittent allergic rhinitis [25]. The TNSS assessment showed a significant reduction in symptom severity.

(Table 1 caption: Characteristics of all patients and of the subgroups sensitized to grass pollen (G), tree pollen (T), and grass and tree pollen (GT).)

It is recommended to find biomarkers which may predict a positive response to SCIT in AR patients [26]. The ratio of specific IgE to total IgE before treatment and basophil activation can be considered potential biomarkers predicting the efficacy of SCIT [27,28]. For the selection of the right extracts for the allergen immunotherapy composition, Matricardi recommends molecular diagnosis (Matricardi, 2019). In our study, we found that the percentage of eosinophilic infiltration in nasal cytology can predict the clinical efficacy of subcutaneous immunotherapy. The patients with high eosinophilic infiltration of the mucous membrane had a markedly greater reduction in rhinitis symptoms after SCIT. The reduction was particularly significant in patients allergic to grass pollen. The nasal cytology results confirm reports suggesting that monovalent allergy to grass pollen corresponds to a better response to SCIT [29]. A high percentage of nasal eosinophilia in intermittent allergic rhinitis has also been confirmed by other researchers [30,31]. Importantly, nasal cytology is a simple tool which may be considered a predictive marker of successful subcutaneous immunotherapy. To our knowledge, there are no other data concerning nasal cytology after SCIT. A limitation of the study was the inability to compare the results with other studies, as well as the lack of data on pollen counts in the particular seasons. Another limitation was the short period of observation of the allergic rhinitis patients, covering only one course of SCIT. Consequently, further studies are needed to elucidate the role of the percentage of eosinophilic infiltration of the nasal mucous membrane after allergen immunotherapy in intermittent allergic rhinitis.

Conclusions

Objective and subjective methods of rhinological assessment confirmed the high effectiveness of SCIT in intermittent allergic rhinitis. A high percentage of eosinophils in nasal cytology before subcutaneous immunotherapy can predict its clinical efficacy in allergic rhinitis, especially in grass pollen allergy.
2022-07-14T18:19:14.679Z
2022-07-10T00:00:00.000
{ "year": 2022, "sha1": "47b554e419abc889061f0eb88119546d3f6dad16", "oa_license": "CCBYNCSA", "oa_url": "https://www.termedia.pl/Journal/-7/pdf-47448-10?filename=Nasal%20cytology.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "18e44a45460c0bfa80c1b4674edeb7d61c8c728a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15134929
pes2o/s2orc
v3-fos-license
Short-Term Results of Transforaminal Lumbar Interbody Fusion Using Pedicle Screws with Cortical Bone Trajectory Compared with Conventional Trajectory

Study Design: Case-control study. Purpose: To evaluate the clinical and radiological results of transforaminal lumbar interbody fusion (TLIF) performed with cortical bone trajectory (CBT) pedicle screw insertion and to compare them with those of TLIF using 'conventional' or percutaneous pedicle screw insertion. Overview of Literature: CBT is a new trajectory for pedicle screw insertion in the lumbar spine; the clinical and radiological results of TLIF using pedicle screws inserted with CBT are unclear. Methods: In total, 26 patients (11 males, 15 females) were enrolled in this retrospective study and divided into three groups: TLIF with pedicle screw insertion by conventional minimally invasive methods via the Wiltse approach (M-TLIF, n=10), TLIF with percutaneous pedicle screw insertion (P-TLIF, n=6), and TLIF with pedicle screw insertion with CBT (CBT-TLIF, n=10). Surgical results and pre- and postoperative radiological findings were evaluated and compared. Results: Intraoperative blood loss was significantly less with CBT-TLIF (p=0.03) than with M-TLIF. Postoperative lordotic angles did not differ significantly among the three groups. Complete fusion was obtained in 10 of 12 levels (83%) with M-TLIF, in seven of seven levels (100%) with P-TLIF, and in 10 of 11 levels (91%) with CBT-TLIF. On postoperative computed tomography, correct positioning was seen in 84.1% of M-TLIF screws, 88.5% of P-TLIF screws, and 90% of CBT-TLIF screws. Conclusions: CBT-TLIF resulted in less blood loss and a shorter operative duration than M-TLIF or P-TLIF. Postoperative rates of bone union, maintenance of lordotic angles, and accuracy of pedicle screw positions were similar among the three groups.

Introduction

The clinical results of transforaminal lumbar interbody fusion (TLIF) have been favorable for degenerative spondylolisthesis, kyphoscoliosis, and instability of the lumbar spine [1,2]. However, there has been concern regarding pedicle screw placement during TLIF. Exposure lateral to the facet joint to insert a pedicle screw requires a relatively long incision and muscle dissection, which may be related to postoperative low back pain from injury to the posteromedial branch of the nerve root crossing the facet joint and from damage to the exposed and retracted back musculature. To minimize the incision and muscle dissection and thus reduce these problems, TLIF with minimally invasive pedicle screw insertion (M-TLIF) [3] and TLIF with percutaneous pedicle screw insertion (P-TLIF) [4] have been developed. However, several clinical concerns, such as low back pain, the learning curve, radiation exposure, and incorrect pedicle screw placement, have also been associated with M-TLIF and P-TLIF [5][6][7]. A new trajectory for pedicle screw placement, the cortical bone trajectory (CBT), was reported by Santoni et al. [8] in 2009 and may address these problems. The new trajectory runs from medial to lateral and from caudal to cranial; it does not require wide exposure of the back muscle and thus reduces operative invasiveness compared with conventional or percutaneous pedicle screw insertion. However, the differences in operative invasiveness, accuracy of pedicle screw insertion, and postoperative fusion rate between TLIF with CBT (CBT-TLIF) and other methods of pedicle screw placement, such as M-TLIF and P-TLIF, remain unknown.
In this study, we compared the clinical and radiological results of CBT-TLIF with those of M-TLIF and P-TLIF.

Patients

In total, 26 patients (11 males, 15 females; mean age, 67 years; range, 34-80 years) who underwent TLIF from April 2011 to February 2013 at our hospital were enrolled in this retrospective study. The indications for TLIF were Meyerding grade I or II spondylolisthesis [9] or intraforaminal-to-lateral disc herniation. We performed three different methods of pedicle screw insertion depending on the time period. From April to November 2011, pedicle screws were placed minimally invasively via the lateral intermuscular Wiltse approach (M-TLIF; n=10; 6 males, 4 females; mean age, 63 years). From December 2011 to October 2012, pedicle screws were inserted using a percutaneous system (P-TLIF; n=6; 2 males, 4 females; mean age, 71 years). From November 2012 to February 2013, pedicle screws were placed with CBT (CBT-TLIF; n=10; 3 males, 7 females; mean age, 67 years).

Surgical procedures

M-TLIF was performed as follows. A unilateral facetectomy was performed at the location of the symptoms to expose the intervertebral foramen via a 6-cm incision. A thorough discectomy was completed, the disc space was filled with local bone graft material, and an appropriate parallel Devex cage (DePuy Spine, Raynham, MA, USA) was placed. Open conventional pedicle screws were placed using the Expedium Spine System (DePuy Spine) through a bilateral Wiltse approach. Under fluoroscopic guidance in a perfect posteroanterior projection, a pedicle probe was introduced into the pedicle at a 30° medial angle and the pedicle was tapped for a screw, taking care not to penetrate the medial wall. A feeler was used to identify breakage of the cortical pedicle walls, and a pedicle screw of appropriate length, as assessed on computed tomography (CT) images, was inserted. The screws were 40 or 45 mm in length and 6.0 or 7.0 mm in diameter. Finally, under a lateral fluoroscopic view, the length and craniocaudal direction of the screws were checked (Fig. 1).

P-TLIF was performed using the Viper MIS Spine System (DePuy Spine). Following decompression of the affected site and placement of a cage into the disc space via a 6-cm skin incision, the targeting needle was placed on the superolateral border of the pedicle under fluoroscopy via another fascial incision created 1 cm lateral to the midline skin incision. The targeting needle was introduced into the pedicle under posteroanterior and lateral fluoroscopic visualization. The targeting needle was replaced with a K wire, and a screw with an extended sleeve was then placed over the K wire and inserted into the vertebral body after tapping. Pre-bent rods were placed bilaterally using the Viper system and fixed with compressive force on the facetectomy side (Fig. 2).

CBT-TLIF was performed using the CD HORIZON SOLERA Spinal System, 4.75 mm (Medtronic, Memphis, TN, USA). After exposure of the surgical field, an entry point for insertion of the CBT screw was drilled on the mediocaudal side of the pedicle with a 2-mm-diameter air drill under fluoroscopic guidance. A straight probe was used to create a trajectory for the CBT screw from the entry point to the opposite corner of the pedicle and vertebral body under anteroposterior fluoroscopic guidance. A short L-shaped K wire was placed to mark the trajectory. Decompression and cage placement were performed in the same fashion as in M-TLIF and P-TLIF.
After cage placement, we tapped a hole with successive 4.0-, 4.5-, and 5.5-mm taps targeted to the posterior one-third of the vertebral body. When the tap reached the endosteal cortex of the vertebral body under lateral fluoroscopic guidance, screw length was determined. We then inserted 5.5-mm screws 30 to 40 mm in length into the hole and placed the rods (Fig. 3).

Diagnoses and surgical levels
Diagnoses at operation (degenerative spondylolisthesis and foraminal stenosis or hernia), surgical levels, and their distributions among types of pedicle screw placement are presented in Table 1.

Evaluations
Patient age, gender, body mass index, bone mineral density, diagnosis, duration of operation, estimated blood loss (EBL), intraoperative complications, level of fusion, approach of pedicle screw insertion, and radiological findings were obtained from medical records and plain radiographs. Operative duration, EBL during operation, and lordotic angle of the fusion levels were evaluated, and complications during operation were recorded. The lordotic angle, i.e., the angle between the cranial end of the upper vertebra and the caudal end of the lower vertebra (but the cranial end in the case of the S1 vertebra) of the fusion level, was measured preoperatively, postoperatively, and at final follow-up. Bone union at final follow-up was also evaluated on plain radiographs, including flexion and extension lateral images. Definitive fusion was identified by formation of trabecular bony bridges between contiguous vertebral bodies at the instrumented levels and less than 4° of segmental movement [10]. CT was performed in all patients to check the postoperative positions of screws. The examinations were performed from 2 weeks to 1 year after the operation. The positions of screws were evaluated according to the criteria of Learch et al. [11]. Screw positions were considered 'correct' if the screws were centered in the pedicle (Fig. 4A, C). We also recorded screws that were in contact with the medial or lateral pedicle wall (Fig. 4B, D) and screws that were seen to penetrate the medial or lateral pedicle wall.

Statistical analyses
Data are presented as means±standard deviations. The lordotic angles of each group were evaluated with the paired t-test preoperatively, postoperatively, and at final follow-up. The differences in each parameter among the three groups were evaluated by one-way analysis of variance followed by multiple comparison using Scheffe's method. All statistical analyses were performed using the Statistical Package for Biosciences software (SPBS, ver. 9.54) [12].

Results
Operative duration, EBL, complications, and radiological findings are presented in Table 2. The operative duration of P-TLIF was longer than that of M-TLIF, although not significantly (p=0.06). Operative durations were not significantly different between M-TLIF and CBT-TLIF. EBL was significantly smaller in CBT-TLIF than in M-TLIF (p=0.03), and smaller in P-TLIF than in M-TLIF but not significantly so. During CBT-TLIF, one case of dural tear and two cases of pedicle fracture at the insertion site on the facetectomy side occurred. The pedicle screw at each fracture site was inserted using a conventional trajectory, and fixation at the affected sites was stable in both cases. Mean lordotic angle did not differ significantly among the three groups preoperatively but did increase postoperatively in all three groups. The increase in lordotic angle was statistically significant in M-TLIF (p=0.01) but not in P-TLIF or CBT-TLIF.
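The statistical pipeline just described (paired t-tests within groups; one-way ANOVA followed by Scheffe's multiple comparison between groups) can be illustrated with a short script. This is a minimal sketch with synthetic lordotic angle values, not the study's data or its original SPBS code; the helper `scheffe_pair` is our own name for the standard Scheffe contrast test.

```python
# Sketch of the analysis: paired t-test within each group (pre vs. post),
# one-way ANOVA across groups, and Scheffe's post hoc test for one pair.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical pre/post lordotic angles (degrees) for the three groups
groups = {
    "M-TLIF":   (rng.normal(18, 4, 10), rng.normal(21, 4, 10)),
    "P-TLIF":   (rng.normal(18, 4, 6),  rng.normal(20, 4, 6)),
    "CBT-TLIF": (rng.normal(18, 4, 10), rng.normal(19, 4, 10)),
}

# within-group change: paired t-test
for name, (pre, post) in groups.items():
    t, p = stats.ttest_rel(pre, post)
    print(f"{name}: paired t-test p={p:.3f}")

# between-group comparison of postoperative angles: one-way ANOVA
post_sets = [post for (_, post) in groups.values()]
F, p = stats.f_oneway(*post_sets)
print(f"one-way ANOVA on postoperative angles: F={F:.2f}, p={p:.3f}")

def scheffe_pair(a, b, all_sets, alpha=0.05):
    # Scheffe test: the pairwise F statistic is compared with
    # (k-1) * F_crit(alpha; k-1, N-k), using the ANOVA mean-square error
    k = len(all_sets)
    N = sum(len(s) for s in all_sets)
    mse = sum(((s - s.mean()) ** 2).sum() for s in all_sets) / (N - k)
    f_pair = (a.mean() - b.mean()) ** 2 / (mse * (1 / len(a) + 1 / len(b)))
    f_crit = (k - 1) * stats.f.ppf(1 - alpha, k - 1, N - k)
    return f_pair, f_crit

f_pair, f_crit = scheffe_pair(post_sets[0], post_sets[2], post_sets)
print(f"Scheffe M-TLIF vs CBT-TLIF: F={f_pair:.2f} (critical {f_crit:.2f})")
```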
The postoperative and final lordotic angles were not significantly different among the three groups. Complete fusions were obtained in 10 of 12 levels (83%) in 10 cases of M-TLIF, in seven levels (100%) in six cases of P-TLIF, and in 10 of 11 levels (91%) in 10 cases of CBT-TLIF. Table 3 presents data on the postoperative positions of pedicle screws. Postoperative CT images revealed that 84.1% (37/44) of screws were positioned correctly with M-TLIF, 88.5% (23/26) were positioned correctly with P-TLIF, and 90% (38/42) were positioned correctly with CBT-TLIF. Seven screws in M-TLIF, one in P-TLIF, and four in CBT-TLIF were in contact with the medial wall of the affected pedicle. Two screws, both in P-TLIF, were in contact with the lateral wall of the pedicle. No screw was seen to penetrate the medial or lateral wall of the pedicle.

Discussion
In the present study, the perioperative results of TLIF using three different pedicle screw-insertion techniques were evaluated. CBT-TLIF resulted in a smaller intraoperative EBL volume and a shorter operative duration compared with conventional M-TLIF and P-TLIF. The fusion rate of the affected levels and the accuracy of screw positioning were similar in the three groups. TLIF was first reported in 1998 by Harms and Jeszenszky [1]. The conventional open technique for pedicle screw insertion requires significant paraspinal muscle dissection and retraction to expose the screw entry point. To reduce damage to paraspinal muscles, minimally invasive approaches that preserve the lumbar spine musculature have been used [3,13-15]. Several studies have described advantages of M-TLIF with transpedicular screws, including reductions in blood loss and postoperative pain [3,14,15]. However, disadvantages of these techniques, including a steep learning curve, long operative duration, and technically demanding insertion of the pedicle screws, have also been reported [10,16]. The limited surgical field sometimes makes accurate insertion of pedicle screws difficult compared with a conventional open approach. Thus, the percutaneous cannulated screw system, introduced by Magerl [17], was developed. Since then, the evolution of percutaneous pedicle screw instrumentation systems and expandable tubular retractors has contributed to the popularity of P-TLIF [4,18]. Minimally invasive lumbar fusion with pedicle screw insertion using a percutaneous system allows smaller incisions, less muscle stripping and blood loss, and excellent fusion and clinical outcomes [4,19]. On the other hand, several disadvantages have been reported, such as the learning curve [20], the accuracy of pedicle screw insertion [6], and complications associated with the use of the Jamshidi needle or K-wire as a guide for the pedicle screws [21]. In the present study, the duration of P-TLIF operations was longer than that of the other groups, suggesting the existence of a learning curve for this procedure, despite its lower EBL. CBT is considered to have several advantages. First, the trajectory reduces the amount of paraspinal muscle exposure required. Second, the screw is placed from the inferior and medial border of the pedicle to the cranial and lateral corner of the posterior one-third of the vertebral body in a bicortical manner; thus, screws placed by CBT may provide stable fixation even in osteoporotic bone. However, there has been no report evaluating the clinical and radiological results of CBT-TLIF.
Post-CBT-TLIF lordotic angles were still maintained at a mean of 8 months following surgery, and the fusion rate of the affected levels and the accuracy of pedicle screw placement were similar to those of the M-TLIF and P-TLIF procedures. We did experience two pedicle fractures with CBT. The fractures occurred at the caudal and facetectomy sides of the involved pedicles, and the pedicle screws at these fracture sites were re-inserted using the conventional method to complete the fixation. The screws now selected for the caudal and facetectomy sides are one size smaller than those we use on the cranial side. Pedicle screws placed by CBT were intended to contact the medial wall of the pedicle, especially on the caudal side, but the screws did not penetrate the medial wall of the pedicle. In a study by Oh et al. [22] on pedicle screw placement using percutaneous and open methods, the accuracy of pedicle wall penetration during screw fixation did not differ between the two techniques. In the present study, postoperative CT revealed the accuracy of pedicle screw insertion to be similar among the three groups. With CBT, the pedicle screw was inserted from medial to lateral; thus, the rate of medial perforation of the pedicle would be expected to be lower than with conventional or percutaneous pedicle screw insertion. Limitations of this study include the small number of patients in each group and the short follow-up period; however, evaluation of larger numbers of patients with longer durations of follow-up is still ongoing and includes postoperative clinical results.

Conclusions
TLIF using CBT-inserted pedicle screws resulted in a smaller EBL volume and shorter operative duration than TLIF using conventionally or percutaneously inserted pedicle screws. CBT pedicle screw placement also resulted in rates of bone union, maintenance of lordotic angles, and accuracy of pedicle screw position that were similar to those of conventional or percutaneous methods.
One-loop corrections to holographic Wilson loop in AdS4×CP3

The evaluation of BPS Wilson loops in N=6, D=3 Chern-Simons matter theory is reduced to ordinary matrix integrals via the localization technique. It is easy to check that the vacuum expectation value of 1/2-BPS Wilson loops at leading order in the planar limit agrees with the regularized classical string action, via AdS/CFT. The subleading terms can then in principle be calculated by treating the string theory semi-classically. In this article we calculate the one-loop determinant for the fluctuation modes of the holographic Wilson loop in the dual geometry AdS4×CP3. The fermionic normal mode frequencies are expressed in terms of the hypergeometric function, and we compute the one-loop effective action numerically. The discrepancy with the localization formula is due to the zero mode normalization constant, which is yet to be determined.

I. INTRODUCTION
Wilson loops are essential objects in the study of gauge field theories. In the context of the AdS/CFT correspondence [1], they have a dual description as a macroscopic fundamental string [2,3]. In this article, we are mainly interested in the M2-brane conformal field theory as a Chern-Simons matter model, suggested by Aharony, Bergman, Jafferis and Maldacena (ABJM) [4]. The supersymmetric Wilson loop operators in the ABJM model, with dual geometry $AdS_4 \times CP^3$, were studied earlier in [5-8]. The computation of their expectation values can be greatly simplified if one utilizes the localization technique [9]: when we put the gauge theory on $S^3$, the full path integral is reduced to an ordinary matrix integral [10]. It is a fascinating achievement that at strong coupling the free energy scales as $N^{3/2}$ and the coefficient is related to the internal space $S^7$, precisely as predicted by AdS/CFT [11,12]. According to the matrix model calculation at strong coupling and in the planar limit, the 1/2-BPS circular Wilson loop's vacuum expectation value is known in closed form as a function of the 't Hooft coupling constant $\lambda$ (up to a framing-dependent phase). On the other hand, the gravity side computation from the classical string solution gives $e^{\sqrt{2\pi^2\lambda}}$. The next-order correction for $S \equiv -\ln W$ should be $\ln 2 \approx 0.69$, and it is our goal to see if this number can be reproduced as a one-loop correction on the string worldsheet. From the fluctuation lagrangian around the 1/2-BPS holographic circular Wilson loop, we find that the string one-loop determinant is given as the ratio (2) of fermionic and bosonic fluctuation determinants. It turns out that part of the fermionic normal mode frequencies in the numerator are given in terms of hypergeometric functions. This is in contrast with the Wilson loop of the IIB string in $AdS_5 \times S^5$, where the frequencies are logarithms of rational functions and the sum is given exactly using the Gamma function [13]. We evaluate $\Gamma$ numerically and extract the finite piece after regularization, obtaining $\Gamma_{\rm reg} \approx -1.1$. In Section II we set up the notation and calculate the quadratic lagrangian for string fluctuations around the 1/2-BPS circular Wilson loop. In Section III, we calculate the normal modes and discuss how their sum can be regularized numerically. In Section IV we discuss how to resolve the discrepancy between the field theory and supergravity side results.

II. OPEN STRINGS AND THEIR FLUCTUATION LAGRANGIAN
We consider type IIA open strings in the $AdS_4 \times CP^3$ background, which preserves 3/4 of the supersymmetry. This geometry is conjectured to be dual to N=6, D=3 Chern-Simons field theory [4] with $U(N) \times U(N)$ gauge symmetry and levels $(k, -k)$.
In the convention we adopt here, the D=10 supergravity solution takes the following form: $R_s$ sets the length scale of this background, the metric tensors $ds^2_{AdS_4}$ and $ds^2_{CP^3}$ are scaled to have radius one, and $J_{CP^3}$ represents the Kähler 2-form of the internal space. The AdS/CFT correspondence relates the string and Chern-Simons descriptions through an identification of parameters, where $\lambda \equiv N/k$ is the 't Hooft coupling constant. For simplicity we will henceforth set $k = 1$. It is convenient for us to use Poincare coordinates for the AdS space. For a circular Wilson loop with radius 1, a simple solution is available: in conformal gauge $r = 1/\cosh\sigma$, and the induced metric on the worldsheet is hyperbolic with scalar curvature $R^{(2)} = -2$. In order to regularize the divergence of the classical action, we introduce a cutoff at $z = \epsilon$, or equivalently at $\sigma = \sigma_0$, the two being related via $\epsilon = \tanh\sigma_0$. The regularized value of the classical action is $-\sqrt{2\pi^2\lambda}$ [5-7].

Now we are to consider the fluctuation modes around this classical solution. Similar computations have been performed in a number of articles, including [13-19]. For the bosonic sector, the computations should be very similar to those of the circular Wilson loop in $AdS_5 \times S^5$ presented in [13]. One easily finds that after the gauge fixing, there are two modes from the AdS space with effective mass parameter 2, and six massless modes from $CP^3$. Altogether they account for the denominator of (2). For the fermionic part, up to quadratic order the κ-symmetric Green-Schwarz action is written in terms of the pullback of the vielbein and the worldsheet metric $h_{ab}$. The spinors $\theta_1$ and $\theta_2$ have opposite chirality. The covariant derivative for the spinor field is spelt out in [14]. After some calculation one can rewrite the fermion fluctuation lagrangian in a simple form. We note that this expression is obtained after rotating the spinor by a unitary matrix; $\Psi$ also satisfies $P_+\Psi = \Psi$ with $P_+ = (1 + \Gamma_{01}\Gamma_{11})/2$, and the $d=2$ gamma matrices $\tau_i$ satisfy the standard Clifford algebra. It is obvious that $\Gamma_{3/4}$ is hermitian and traceless. When diagonalized, it can be written for instance as ${\rm diag}(1,1,1,0) \otimes {\rm diag}(1,1,-1,-1)$. This implies that we should have 4 massless fermionic modes, and 12 modes with mass 1, on the worldsheet. We note here that this result is in agreement with similar analyses done for instance in [17,19]. For the computation of the determinant, we might as well consider the square of the Dirac operator; for the solution considered here $R^{(2)} = -2$. Our results so far can be summarized in the expression (16) for the one-loop partition function for the fluctuation modes. Note that in the denominator $\nabla^2$ is the usual scalar Laplacian, while $\nabla^2_F$ is understood to contain the spin connection for spinor fields. One can repeat the same computation for a straight line, which is also 1/2-BPS, and we have checked that the result is again given exactly as (16).
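Purely as a reading aid, the mode content derived above (2 bosonic modes of mass squared 2, 6 massless bosonic modes, 12 fermionic modes of mass 1, and 4 massless fermionic modes) assembles schematically into a ratio of determinants of the type

\[
Z_{1\text{-loop}} \;\sim\; \frac{\big[\det\!\big(-\nabla_F^2 + \tfrac{1}{4}R^{(2)} + 1\big)\big]^{12/2}\,\big[\det\!\big(-\nabla_F^2 + \tfrac{1}{4}R^{(2)}\big)\big]^{4/2}}{\big[\det\!\big(-\nabla^2 + 2\big)\big]^{2/2}\,\big[\det\!\big(-\nabla^2\big)\big]^{6/2}}\,,
\]

where the powers simply count the modes listed above. This is our sketch of the structure of (16), not a reproduction of it; the precise operator normalizations, curvature couplings, and boundary conditions are those fixed in the text.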
The one-loop effective action can be written as a regularized sum over the normal mode frequencies. It turns out that each sum over $\omega_n$ is divergent and there is an ordering problem. This problem is of course commonplace in quantum field theory, and for the energy correction of spinning strings in $AdS_4 \times CP^3$ the ordering issue has been addressed in [17,18,20]. Here we follow the prescription in [13,21]: one introduces a regulator $\mu$ in the process of synchronizing the summation indices for bosonic and fermionic modes, and expands for small $\mu$.

A. Calculation of the frequencies
To evaluate $\omega^B_n$ and $\omega^F_\nu$, following [13,22-24] we utilize the Gelfand-Yaglom theorem: for a differential operator $O$ with periodic boundary condition in $\sigma \in [a,b]$, the product of all eigenvalues can be alternatively obtained by solving the homogeneous differential equation $O\psi = 0$ with initial conditions $\psi(a) = \psi_0(a) = 0$, $\psi'(a) = \psi_0'(a) = 1$. In particular, $\det O/\det O_0 = \psi(b)/\psi_0(b)$, where $O_0 = -\partial^2_\sigma$. For our problem $\sigma$ originally ranges in $0 < \sigma < \infty$, but we will introduce both UV and IR regulators and consider instead $\sigma_0 < \sigma < L$. Eq. (28) will be used for non-zero modes, while for the zero modes we take Neumann boundary conditions at $L$, and we need to use $\psi'(b)/\psi_0'(b)$ instead on the right hand side of (28). For the bosonic part the operators are exactly the same as their counterparts in $AdS_5 \times S^5$ of IIB string theory, and we simply import the results in [13], valid for large $L$ (with $\sigma_0$ not necessarily small).

Let us now turn to the fermionic modes. For the differential operators associated with the fermionic fluctuations, we find it useful to introduce a new variable $\zeta(\sigma)$; note that for $0 < \sigma < \infty$, we have $1 < \zeta < \infty$. We start with the equation associated with $\omega^{F_1}_\nu$. The two linearly independent solutions can be chosen explicitly for $\nu \neq \pm 1/2$. Writing down the solution with the appropriate initial condition and taking the limit $L \to \infty$, we obtain the result in closed form; here we use $\epsilon = \tanh\sigma_0$ for the cutoff of the z-coordinate. The cases $\nu = \pm 1/2$ are studied separately; in particular, $\nu = -1/2$ is the fermionic zero mode and we have used the Neumann boundary condition. For the other fermionic determinant $\omega^{F_3}_\nu$, we may employ a further reparametrization. Again the differential equation is easily solved with a suitable choice of basis. Except for $\nu = -1/2$, which is a zero mode, the frequency is first obtained before taking the $L \to \infty$ limit; in the limit $L \to \infty$ one should substitute the result into (39). Here we have expressed the integral in terms of the incomplete beta function. Since $\nu$ is half-integer for our purposes, we may do the integration explicitly. (The result is also a special case of the Lerch Φ-transcendent, i.e. $_2F_1(n,1;n+1;x) = n\,\Phi(x,1,n)$, with $\Phi(x,s,a) = \sum_{k=0}^{\infty} x^k/(k+a)^s$.) For $\nu = -1/2$ we proceed separately with the Neumann boundary condition.

B. The regularized action
We are now ready to go back to (24) and evaluate the finite part. First of all, one can easily convince oneself that $G$ should give no contribution after regularization. It turns out that $G_n \to 0$ for large $n$, but the series $\sum_n G_n$ for given $\epsilon$ is logarithmically divergent. We do the sum for large $\Lambda$ and drop terms proportional to $\ln\Lambda$. After a rather tedious but straightforward computation, we may rewrite $G_n$ in a convenient form. One can see that the total sum is independent of the cutoff $L$, as it should be. It is also obvious that the finite part of the first term in (45) is $\tfrac{7}{2}\ln 2$. The large-$\Lambda$ behavior of $S_n$ can be studied using Stirling's formula.
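The Gelfand-Yaglom prescription is straightforward to verify numerically. The sketch below is ours, not from the paper: it takes the massive operator $O = -\partial^2_\sigma + m^2$ on an assumed interval $[0, L]$ with Dirichlet conditions, integrates the initial value problem, and checks the determinant ratio against the closed-form answer $\sinh(mL)/(mL)$.

```python
# Numerical check of the Gelfand-Yaglom theorem for O = -d^2/ds^2 + m^2
# on [0, L] with Dirichlet conditions: det(O)/det(O0) = psi(L)/psi0(L),
# where O0 = -d^2/ds^2 and psi'' = m^2 psi, psi(0) = 0, psi'(0) = 1.
import numpy as np
from scipy.integrate import solve_ivp

def gelfand_yaglom_ratio(m, L):
    # integrate psi'' = m^2 psi with psi(0) = 0, psi'(0) = 1
    sol = solve_ivp(lambda s, y: [y[1], m**2 * y[0]],
                    (0.0, L), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    psi_L = sol.y[0, -1]
    psi0_L = L            # free solution psi0(s) = s, so psi0(L) = L
    return psi_L / psi0_L

m, L = 1.5, 3.0
numeric = gelfand_yaglom_ratio(m, L)
exact = np.sinh(m * L) / (m * L)    # closed form for this operator
print(numeric, exact)               # agree to the solver tolerance
```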
After we drop the terms proportional to $\ln\Lambda$, $1/\epsilon$, $\ln\epsilon$, etc., the finite piece remains. For the summation of $T_n$, unfortunately we are not able to find the sum in closed form for large $\Lambda$. We will resort to numerical methods. Our strategy is as follows. We first fix $\epsilon$ and consider $\Lambda \to \infty$. Since the sum is log-divergent, $\sum_n T_n = f(\epsilon)\ln\Lambda + g(\epsilon) + (\text{subleading in }\Lambda)$. We can read off $f(\epsilon)$ and $g(\epsilon)$ from a least-squares fit after evaluating the sum numerically for a number of large values of $\Lambda$. To obtain a regularized value, we now concentrate on $g(\epsilon)$, with $g(\epsilon) = \alpha + \beta\ln\epsilon + \gamma\,\tfrac{1}{\epsilon} + \delta\log\tfrac{1}{\epsilon}$ in the leading orders. The numerical results versus this curve are shown in Figure 1. When we implement this method, however, one has to be careful, since the result depends rather sensitively on the choice of cutoffs $\Lambda$ and $\epsilon$. This is not surprising, since the series is not convergent, after all. We want to send $\epsilon \to 0$ eventually, but since we take $\Lambda \to \infty$ first, $\epsilon$ should not be too small, i.e., $\Lambda\epsilon \gg 1$ should always be satisfied.

In the $AdS_5 \times S^5$ case there was also a discrepancy, and it was deemed to come from the normalization of the zero modes [13,25,26]. As far as we know, this coefficient is not determined for $AdS_4 \times CP^3$, let alone $AdS_5 \times S^5$ in Type IIB. We note that the normalization convention of holographic Wilson loops in ABJM theory was discussed in Section 5.2 of [12]. To bypass the normalization problem and check the validity of string one-loop computations, we may study other supersymmetric Wilson loop operators and calculate the ratio between physically different BPS Wilson loops. In the field theory description, there are 1/6-BPS Wilson loops with $W \approx \frac{\lambda}{2}\exp(\sqrt{2\pi^2\lambda})$. While 1/2-BPS Wilson loops are pointlike in $CP^3$ and break the global symmetry $SU(4)$ into $SU(3)$, 1/6-BPS ones preserve only $SU(2)$, and it is natural to expect that they are smeared over $CP^1 \subset CP^3$ [5,7]. We plan to construct such classical string solutions in $AdS_4 \times CP^3$ explicitly and study their fluctuations in a separate publication. Although we only studied circular Wilson loops in detail here, (16) applies to more general configurations; the analogous problem in Yang-Mills theory was studied in [27,28]. It has also been pointed out that the angle dependence of general BPS Wilson loop operators can be related to interesting physical quantities such as the cusp anomalous dimension, the radiation emitted by a moving quark, etc. [29-33]. For a recent study of the cusp anomalous dimension in the ABJM model, see [19]. With such applications in mind, it will be intriguing to construct general BPS Wilson loops and pursue their exact evaluation.
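To make the fitting strategy concrete, here is a self-contained sketch of the same procedure applied to a toy log-divergent series with a known finite part; the series below is a placeholder of our choosing, not the actual $T_n$.

```python
# Sketch of the fit: for a log-divergent partial sum
# S(Lambda) = f*ln(Lambda) + g + O(1/Lambda), recover f and g by a
# least-squares fit in ln(Lambda). Toy series: T_n = 1/n + 3/n^2,
# for which f = 1 and g -> gamma + 3*pi^2/6 as Lambda -> infinity.
import numpy as np

def partial_sum(cutoff):
    n = np.arange(1, cutoff + 1, dtype=float)
    return np.sum(1.0 / n + 3.0 / n**2)

cutoffs = np.array([2000, 4000, 8000, 16000, 32000])
sums = np.array([partial_sum(c) for c in cutoffs])

# linear model S = f*ln(Lambda) + g
A = np.vstack([np.log(cutoffs), np.ones_like(cutoffs, dtype=float)]).T
(f, g), *_ = np.linalg.lstsq(A, sums, rcond=None)

gamma = 0.5772156649015329          # Euler-Mascheroni constant
print(f, g)                         # ~1.0 and ~ gamma + pi^2/2
print(gamma + 3 * np.pi**2 / 6)     # expected finite part
```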
Effect of Chlorhexidine, Ozonated Olive Oil and Olive Oil Mouthwash on Oral Health Status of Patients with Gingivitis: A Randomised Controlled Trial

Background: Effective plaque control is important to prevent gingival and periodontal diseases. In recent years, olive oil and ozonated olive oil have gained paramount importance in dentistry because of their minimal side effects as compared to chemical agents.
Aim: To assess the effect of olive oil, ozonated olive oil, and chlorhexidine mouthwash on the oral health status of patients with gingivitis.
Materials and Methods: The present double-blinded, parallel-designed, randomized clinical trial was carried out among 66 gingivitis patients in the Department of Periodontics, Saveetha Dental College and Hospitals, Chennai, India. Participants were assigned to three groups of 22 participants each [Group A (CHX mouthwash), Group B (OOO - ozonated olive oil) and Group C (OO - olive oil)]. Complete ultrasonic scaling was done and subjects were asked not to use any oral hygiene aids; they were recalled after three days and the gingival index and OHI(S) were noted (baseline). Then subjects were provided with the respective mouthwash and instructed to use it for 15 days. Again, the gingival index and OHI(S) were noted after 15 days. The data were analyzed using the Statistical Package for Social Sciences (SPSS Software, Version 23.0). One-way ANOVA was used to compare the mean values of the gingival index between the groups. Tukey's HSD post hoc test was done to find means that are significantly different from each other. Also, Student's paired t-test was used to compare the mean values of the gingival index within the groups.
Results: A statistically significant difference was obtained between Groups A and C (p=0.000) and between Groups B and C (p=0.000). However, there was a non-significant difference between mouthwashes A and B (p>0.05), showing that the OOO and chlorhexidine mouthwashes were equally effective in reducing gingivitis.
Conclusion: The present study suggests that OOO was as effective in reducing plaque and gingivitis as chlorhexidine mouthwash. Therefore, oil pulling using OOO can be a better alternative to chlorhexidine mouthwash.

INTRODUCTION
The most common diseases of the mouth are gingivitis and periodontitis. Gingivitis can be defined as gum disease with inflammation of the gums, where the gums appear red and puffy, and usually bleed during tooth-brushing or dental examination [1]. Gingivitis is a mild inflammatory condition which is usually asymptomatic, and hence often goes unnoticed. The primary etiology of gingivitis is plaque, but there are several aggravating factors including habits like smoking, stress, genetic factors, systemic diseases and hormonal disturbances [2-10]. Untreated gingivitis will cause periodontitis, which manifests as increased pocket depth, recession, furcation involvement, mobility, and bone loss [11-16]. The development of a preventive regimen that targets the microbial risk factor is the most comprehensive and successful approach toward gingivitis. Gingivitis is curable and the gingiva can readily revert to health, but once it worsens to periodontitis, recovery depends on the grade of disease. Prevention is better than cure; hence the maintenance of oral health is the best way to prevent all these conditions.
Oral health is multifaceted and includes the ability to speak, smile, smell, taste, touch, chew, swallow and convey a range of emotions through facial expressions with confidence and without pain, discomfort and disease of the craniofacial complex. Oral hygiene can be defined as the maintenance of the teeth and gums in a healthy condition. The most reliable methods of oral hygiene maintenance are mechanical methods of tooth cleaning using toothbrushes in adjunct with chemotherapeutic agents [17]. Even though brushing removes debris from the oral cavity, there is a high chance of debris remaining in the interdental areas [18]. In order to counter this, interdental brushes and mouthwashes have been introduced. These mouthwashes have the ability to flush out all the debris in the mouth [19]. Among them, chlorhexidine (CHX) is considered the "gold standard," as it has a broad spectrum of activity [20]. However, overuse or an increase in the quantity used causes an unpleasant taste and undesirable side effects such as tooth staining and altered taste sensation. In some cases, dysfunction or death of the papillae of the tongue has also been reported. Also, there are some conventional drugs to prevent these conditions, which include vitamins and antibiotics [21,22]. Nowadays, herbal treatments as an adjunct to routine scaling and root planing are gaining importance because of their minimal side effects [23-25]. One of the traditional Indian remedies is oil pulling, which was popularized by Dr. F. Karach [26]. Oil pulling is an ancient ayurvedic therapy where a tablespoon of oil is used to gargle or rinse all over the mouth every morning [27]. This process should be done for 5 min, though the typical usage is 2 to 3 min. The swishing of oil activates enzymes and draws toxins out of the blood. The oil's antioxidant effects damage the cell wall of microorganisms and destroy them [28]. The emulsification process which occurs due to agitation of the oil in the mouth leads to the formation of a soapy layer, which can alter the adhesion of bacteria on the tooth surface, remove superficial worn-out squamous cells, and improve oral hygiene. Since it inhibits bacterial adhesion, it also prevents plaque coaggregation in the mouth [29]. At present, there are a number of indigenous natural medicinal products which deserve due recognition for their contribution to improving oral health. Olive oil (OO) has the following advantages over the standard, commercially available mouthwashes [30]: it causes no staining and no allergic reactions, and it is available at the same cost as chlorhexidine mouthwash [31]. A further practical consideration is that olive oil is used daily in the kitchen, so there is no need to obtain a separate product. Considering these benefits, oil pulling therapy with OO could be promoted as a measure for the prevention of oral disease [32]. Currently, ozone therapy is gaining popularity as a modern noninvasive method of treatment. Ozone is a powerful oxidizing agent with high antimicrobial power against oral pathogens [33]. Ozonation is the process of adding a superoxide O(-) ion to the normal oxygen molecule. This can be done only in the superoxide state, and the resulting species are highly unstable; their stability is purely due to the resonance of the molecule [34]. The olive oil and oxygen gas are superheated to achieve a nascent phase and combined together to form ozonated olive oil (OOO) [35].
Ozone (O3), when in contact with organic fluids, causes the formation of reactive oxygen species, which influence cellular metabolism, tissue repair, and the antimicrobial effect [36]. In addition, ozone therapy can be applied systemically or locally, is cost-effective, and has few intolerances or contraindications, with minimal side effects.

Study Population
The present double-blinded, parallel-designed randomized clinical trial was carried out in the Department of Periodontics, Saveetha Dental College and Hospitals, Chennai, India. A total of 66 patients with gingivitis within the age group of 35-45 years were enrolled. Ethical clearance was obtained from the Institutional Ethical Committee (IHEC/SDC/UG-1860/20/320) and written informed consent was obtained from all the study participants.

Inclusion criteria
Participants within the age group of 35-45 years who were systemically healthy, with at least 20 teeth present, probing depth of 1-3 mm, and presence of bleeding on probing (BOP) in at least 30% of the sites were included in the study.

Exclusion criteria
Participants allergic to herbal extracts, smokers, pregnant or lactating mothers, participants on long-term medications, and systemically compromised patients were excluded from the study.

Study Design
A pilot study was conducted using similar oils and mouthwashes to check the feasibility of the study. The prevalence of gingivitis was 80% in the pilot study. Considering the dropouts, the sample size was inflated by 20%; hence the sample size was 66, with 22 participants in each group [Group A (CHX mouthwash), Group B (OOO - ozonated olive oil) and Group C (OO - olive oil)]. Participants were assigned to the groups by a person not involved in the study. All the subjects were provided with their assigned mouthrinses and were divided into Group 1, Group 2 and Group 3 randomly using a simple lottery method, with 22 participants in each group. All the mouthrinses were dispensed in identical bottles, thereby ensuring subject masking. The examiner and the participants were also blinded with regard to the mouthrinse allocated, thereby ensuring a double-blinded study. Subjects were instructed to use 10 ml of mouthwash for 1 min twice daily after tooth brushing for a period of 1 month. Complete ultrasonic scaling was done for all the participants, and the subjects were provided with a standard toothbrush and standard toothpaste and were advised to brush their teeth following the modified Bass technique. They were recalled after three days, and the gingival index and OHI(S) were recorded (baseline). Then subjects were provided with the respective mouthwashes for a period of 15 days. The gingival index was noted again after 1 month.

Statistical Analysis
The data were analyzed using the Statistical Package for Social Sciences (SPSS Software, Version 23.0). Descriptive and inferential statistics were used for data summarization and presentation. One-way ANOVA was used to compare the mean values of OHI(S) and the gingival index between the groups. Tukey's HSD post hoc test was done to find means that are significantly different from each other. Also, Student's paired t-test was used to compare the mean values of OHI(S) and the gingival index within the groups.

Preparation of Ozonated Olive Oil
Ozonated OO was prepared by passing ozone gas through commercially available OO (PurO3) using an ozone generator (Ozone Engineers). The output was titrated to 2 g/h for about 2 min to adjust the concentration of ozone to 0.01 ppm.
Since the half-life of ozone is only 20 min, it was freshly prepared every day just before use.

RESULTS
A total of 66 study participants were enrolled in this study and were divided into three groups of 22 participants each: Group A - CHX, Group B - OOO and Group C - OO. One-way ANOVA showed no statistically significant difference in the baseline GI and OHI(S) values among the three mouthwash groups (p=0.865), but there was a statistically significant difference (p=0.000) among the three mouthwashes when compared after 15 days. The baseline TI values between the three groups were statistically not significant (p=0.865), whereas after 15 days there was a statistically significant difference (p=0.000) (Table 1). Tukey's HSD post hoc test was done to find means that are significantly different from each other. A statistically significant difference between Group 1 and Group 3 was observed in terms of post GI (p=0.000) and post OHI(S) (p=0.000), but no statistically significant difference was observed between Group 1 and Group 2 in terms of post GI (p=0.171) and post OHI(S) (p=0.338) (Table 2). Student's paired t-test was done to compare the mean values of the gingival index and OHI(S) within the groups. The mean difference between the baseline and post gingival index and between the baseline and post OHI(S) was statistically significant in both Group 1 and Group 2, with a p value of 0.000, whereas no statistically significant difference was observed between the baseline and post GI (p=0.24) or between the baseline and post OHI(S) (p=0.27) in Group 3 (Table 3).

DISCUSSION
The present study assesses the effect of olive oil, ozonated olive oil and chlorhexidine mouthwash on the oral health status of patients with gingivitis. The present study showed that the mean OHI(S) at baseline was not significantly different between the three groups (p=0.47). However, the mean OHI(S) after 15 days differed highly significantly between the CHX mouthwash and OO (p=0.000) and between OOO and OO (p=0.000), showing that the chlorhexidine and OOO mouthwashes were equally effective in preventing plaque formation (p=0.338). Kamnath et al. studied the clinical efficacy of aloe vera and tea tree oil and observed that both herbal and chlorhexidine mouthwashes showed a significant reduction in OHI(S) scores after 1 month of usage [57]. Botelho et al. studied the clinical efficacy of an essential oil mouth rinse and observed that both the essential oil mouth rinse and the chlorhexidine mouthwash showed a significant reduction in OHI(S) after 1 month of usage [58]. Also, the present study revealed that the mean GI after 15 days differed highly significantly between the CHX mouthwash and OO (p=0.000) and between OOO and OO (p=0.000), showing that the chlorhexidine and OOO mouthwashes were equally effective in reducing inflammation (p=0.171). This might be due to the substantivity of CHX and OOO, which adhere to tissues such as the oral mucosa and teeth [59]. This helps to maintain a potent sustained release, which, in turn, reduces the bacterial count and prevents the accumulation of dental plaque and thus gingivitis [60]. Haas AN et al. studied the clinical efficacy of essential-oil-containing mouthwashes and observed that they showed a significant reduction in gingival index scores after 1 month of usage [61]. Richards et al. studied the clinical efficacy of an essential oil mouthwash and observed that both herbal and chlorhexidine mouthwashes showed a significant reduction in gingival index scores after 1 month of usage [62].
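As an illustration of the between-group analysis reported above (one-way ANOVA followed by Tukey's HSD post hoc test), the following minimal sketch uses made-up gingival index scores; it mirrors the SPSS workflow in Python but is not the study's data or code.

```python
# One-way ANOVA across the three mouthwash groups, then Tukey's HSD
# post hoc test. All scores below are synthetic placeholders.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
chx = rng.normal(0.9, 0.2, 22)   # hypothetical post-treatment GI, Group A
ooo = rng.normal(1.0, 0.2, 22)   # Group B (ozonated olive oil)
oo  = rng.normal(1.6, 0.2, 22)   # Group C (olive oil)

F, p = f_oneway(chx, ooo, oo)
print(f"ANOVA: F={F:.2f}, p={p:.4f}")

# pairwise comparisons with family-wise error control
scores = np.concatenate([chx, ooo, oo])
labels = ["CHX"] * 22 + ["OOO"] * 22 + ["OO"] * 22
print(pairwise_tukeyhsd(scores, labels, alpha=0.05))
```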
Nardi GM et al. studied the clinical efficacy of ozonated olive oil and chlorhexidine mouthwashes, which showed a significant reduction in gingival index scores after 15 days of usage [33]. Similar results were obtained in the studies of Claffey N et al. [63] and Eid Alroudhan I et al. [20]. Our findings are in accordance with these previous studies. From the study results, it can be stated that OOO has promising antiplaque and antigingivitis properties, similar to those of chlorhexidine mouthwash. However, further long-term follow-up studies are needed to substantiate the present findings; OOO can hence be used as an adjunct to scaling and root planing in the management of gingival diseases.

CONCLUSION
The present study suggests that OOO was as effective in reducing plaque and gingivitis as chlorhexidine mouthwash. Therefore, oil pulling using OOO can be a better alternative to chlorhexidine mouthwash.

DISCLAIMER
The products used for this research are commonly and predominantly used products in our area of research and country. There is absolutely no conflict of interest between the authors and the producers of the products, because we do not intend to use these products as an avenue for any litigation but for the advancement of knowledge. Also, the research was not funded by the producing company; rather, it was funded by the personal efforts of the authors.

CONSENT AND ETHICAL APPROVAL
Ethical clearance was obtained from the Institutional Ethical Committee (IHEC/SDC/UG-1860/20/320) and written informed consent was obtained from all the study participants.
Total Extra-Peritoneal Inguinal Hernia Repair: a Single-Surgeon Preliminary Findings Report

Introduction: Inguinal hernia repair is one of the most frequent operations in general surgery. Various techniques have been used to repair inguinal hernias since the first reconstructive technique described by Bassini in 1887. In 1989 Lichtenstein reported a new technique: tension-free inguinal hernia repair. Laparoscopic inguinal hernia repair was introduced in the early 1990s, and soon also became popular. The literature has shown the benefits of laparoscopy (in comparison with open repair) to be mostly related to the more minimally invasive nature of the surgery, with lower wound infection rates, faster recovery, and less postoperative pain.
Aim: To evaluate our initial results with totally extraperitoneal (TEP) inguinal hernia repair and compare them to literature data.
Materials and methods: In a prospective review and analysis, we examined 61 cases of hernia repair via laparoscopy (specifically TEP), performed by a single surgeon, between April 2019 and December 2019 at the Kaspela University Hospital in Plovdiv. The centre's Institutional Review Board approved the study with no specific consents required due to the retrospective, minimal-risk nature of the study. The routine informed consent required by the National Insurance Fund has been considered sufficient for the study objectives. The surgical outcome measures included operating time (hours/minutes), conversion, peritoneal injury, and surgical emphysema; the clinical outcome measures included postoperative seroma, postoperative infection, and postoperative chronic groin pain.
Results: Inguinal pain on discharge was characterized as mild by 56 (96.55%) patients and moderate by 2 (3.44%); no patients described the pain as severe. The most frequently reported postoperative complications were annoyance and discomfort (10.34%), swelling (6.9%), seroma (3.44%), hematoma (1.72%), and paresthesia (1.72%; n=1); however, only those with seromas required special treatment.
Conclusions: Limitations of the present study include the relatively small number of patients, the fact that all cases were operated on by a single surgeon, and the short postoperative follow-up period, but we are sharing our initial six-month results. These results demonstrate that laparoscopic TEP inguinal hernia repair without mesh fixation is a reliable technique, which can reduce postoperative morbidity when applied by experienced surgeons.

INTRODUCTION
Inguinal hernia repair is one of the most frequent operations in general surgery. There are two main categories, the direct and indirect hernias, which differ in the direction of the protrusion. In a direct inguinal hernia, the protrusion of an organ or tissue through the inguinal canal runs medially, whereas in an indirect hernia it runs laterally to the inferior epigastric vessels. Various techniques have been used to repair inguinal hernias since the first reconstructive technique described by Bassini in 1887. In 1989 Lichtenstein reported a new technique, tension-free inguinal hernia repair, and soon this approach became a gold standard. 1 Laparoscopic inguinal hernia repair was introduced in the early 1990s, and has also become popular. 2 The literature has shown the benefits of laparoscopy in comparison with open repair to be mostly related to the more minimally invasive nature of the surgery, with lower wound infection rates, faster recovery, and less postoperative pain.
The two most common variations of the laparoscopic technique for inguinal hernia repair are the trans-abdominal pre-peritoneal (TAPP) repair and the total extra-peritoneal (TEP) repair. 3 In recent years, the robotic approach to hernia repair has evolved as a promising operative technique. The selection of a mesh for every patient must take into account individual characteristics, and especially mesh properties (durability, pliability, resistance to infection, and minimal mesh-induced foreign body responses). Currently available meshes differ with respect to their composition and structural and mechanical parameters. 4 Treatment of inguinal hernia can lead to various complications. The most common problems following this surgery are recurrence and chronic pain. Recent large-volume systematic reviews comparing laparoscopic with open repair do not report differences between these treatment options, but they point out the advantages of the laparoscopic techniques, which are reduced chronic pain and an earlier return to daily activities. 5 Common complications of laparoscopic inguinal hernia repair are urinary retention, bowel obstruction, visceral injury (small and large bowel, bladder), vascular injury, gas embolus, and port-site hernia. A comparison of TEP with TAPP shows a higher postoperative complication rate for TAPP, which did not, however, result in any difference in the re-operation rate. 6

AIM
The aim of the study was to evaluate our initial results with TEP inguinal hernia repair and compare them to literature data.

MATERIALS AND METHODS
In a prospective review and analysis, we examined 61 cases of hernia repair via laparoscopy (specifically TEP), performed by a single surgeon, between April 2019 and December 2019 at the Kaspela University Hospital in Plovdiv. The centre's Institutional Review Board approved the study with no specific consents required due to the retrospective, minimal-risk nature of the study. The routine informed consent required by the National Insurance Fund was considered sufficient for the study objectives. All TEPs were performed by a single experienced consultant surgeon. The surgical outcome measures included operating time (hours/minutes), conversion, peritoneal injury, and surgical emphysema; the clinical outcome measures included postoperative seroma, infection and chronic groin pain. The observational period was too short to evaluate the recurrence rate. Three of the patients were excluded from the study because of conversion to open surgery, due to lack of sufficient working space. In two of them the reason was extreme BMI (>32); in the other case the reason was tearing of the peritoneum during the initial insertion of a blind trocar. This was our fourth TEP patient, at a time when we did not yet have much experience. After excluding these 3 patients, all of the remaining participants were hernia patients of one surgeon; they were surgically treated electively with a TEP repair for a unilateral or bilateral hernia defect. A total of 58 patients were included. There were 54 males (93.1%) and 4 females (6.9%) during this time interval (Table 1). The mean age of the patients was 41.4 years (range, 18-82 years).

Methods
The procedures were performed under general anesthesia. The patients were placed in the supine 30-degree Trendelenburg position. An infraumbilical 12-mm skin incision was made.
The anterior rectus fascia was incised on the same side as the hernia (if the hernia was bilateral we usually chose the side of the smaller hernia), the rectus muscle was retracted, and a trocar was inserted bluntly and advanced gently toward the pubic symphysis in the preperitoneal space. We temporarily closed the opening in the fascia with a mattress suture. After that we started CO2 insufflation of the chamber at a pressure of 12 mmHg. Then a 30° optic was inserted, and gentle blunt dissection of the preperitoneal space was started using the "angles hair" method. We reached the symphysis with the camera, and after visualization of the rectus muscles, a median 5-mm trocar was inserted four fingerbreadths below the camera trocar. Another median 5-mm trocar was also inserted at the midpoint close to the symphysis. We continued with dissection of the chamber to visualize the inferior epigastric vessels, the inferior parts of the rectus muscle and the pubic symphysis. The pre-peritoneal space was created underneath the transversalis fascia containing the deep inferior epigastric vessels by a combination of blunt and/or sharp dissection from the midline to the ASIS (anterior superior iliac spine). The Cooper ligament was dissected to the point where it met the femoral vein, and the iliopubic tract was exposed. The spermatic cord was found and the hernia sac was separated off the cord and reduced. Then a 10×15 cm mesh was placed to cover the myopectineal orifice, the Hesselbach area and the femoral canal orifice. We did not fix the mesh to the pubic symphysis. The anterior rectus fascia was closed with a No. 0 Vicryl suture and the skin incision with No. 4/0 Vicryl intracutaneous stitches. In bilateral hernias we preferred to use two separate meshes. The majority of surgical mesh devices used to strengthen the hernia repair were lightweight monofilament, ultra-thin, non-absorbable polyester. No mesh fixation technique was used. The standard approach to postoperative pain consisted of paracetamol and non-steroidal anti-inflammatory drugs (NSAIDs). Additionally, single-dose antibiotic prophylaxis, mainly 2nd-generation cephalosporins, was given to patients (n=42; 72.4%) before induction of anesthesia to prevent the occurrence of postoperative infectious complications. The operation time was determined as the time from the beginning of the skin incision to the end of its closure. Duration was 48.88±8.16 min (range: 34-91 min) in unilateral and 96.14±21.44 min in bilateral hernias. Intra-operative complications were observed in 6 (10.34%) patients. They included bleeding from the epigastric vessels in 1 (1.72%) and tearing of the sac during dissection because of dense adhesions in 5 (8.62%) patients.

RESULTS
The time from postoperative day 1 to day 10 was defined as the "short-term interval". At day 10, the first follow-up in the clinic was scheduled. The median duration of hospital stay was 36 hours (Table 2). One of the most important short-term postoperative symptoms was pain on the first day. Patients were asked to rate their pain on a visual analog scale (VAS) from 1 to 9 (1-3, mild; 4-6, moderate; 7-9, severe). For the purposes of this study, postoperative pain was alternatively categorized into groups 1, 2, and 3, corresponding to mild, moderate, and severe pain, respectively. Inguinal pain on discharge was characterized as mild by 56 (96.55%) of the patients and moderate by 2 (3.44%), while no severe pain was described by the patients (Fig. 1).
DISCUSSION
Similarly to previous publications, our study found that inguinal hernias occurred more frequently in males; the median age was 41.4 years, and the hernias were right-sided and oblique (indirect) in 42 (72.41%) of the cases, in contrast to Zendejas et al., who reported a higher frequency of direct hernias. 7 The operation time in our series was 48.88±8.16 min (range: 34-91 min) for unilateral and 96.14±21.44 min for bilateral hernias. Hisham reported an operation time of 99±25 min (range 70-170 min), which coincides with our results. 8 However, as expected, simultaneous bilateral TEP took more time compared to unilateral TEP. Laparoscopic TEP inguinal hernia repair is, however, a challenge for surgeons, especially at the beginning of the learning curve, because of the unfamiliar posterior view of the inguinal wall anatomy and the orientation-related technical difficulties of laparoscopy. These challenges may cause conversion and serious complications. A problem unique to the TEP procedure is that technical difficulties can happen at any time. We believe that conversion is a difficult and serious situation for both surgeon and patient, because patients have great expectations of maximal cosmetic results with minimally invasive surgery, and the surgeon may be concerned that conversion to conventional open surgery may result in a disaster for the patient, because of the need for a new incision. 9 In the present study, three out of 61 TEPs were converted to open repair, for an overall conversion rate of 4.91%. The reason for conversion to open surgery was lack of sufficient working space. In two of the patients it was because of extreme BMI (>32), and in the other case because of tearing of the peritoneum; in these early cases we did not yet have much experience. A similar conversion rate of 4% was reported by Cohen et al. 10 , but conversion rates of up to 10% have been reported by other researchers. 11 The causes for conversion include irreducible and complicated hernia, peritoneal injury/pneumoperitoneum, inability to unroll the mesh, difficulties in creating space, high BMI, adhesions, epigastric vessel injury, iliac vessel injury, bowel injury, CO2 retention, preformed anatomy, early phase of the study, and inexperience. Tearing of the sac and pneumoperitoneum are common, especially in old hernias. This results in migration of the insufflated gas to the intraperitoneal cavity, which not only affects the respiratory dynamics but also results in loss of working space, making dissection difficult and dangerous. Pneumoperitoneum can also precipitate postoperative ileus. All such tears should be closed, usually with an absorbable endoloop. Larger tears may need multiple absorbable loops or intracorporeal sutures. At times, the pneumoperitoneum may warrant the placement of a Veress needle in the left subcostal position (Palmer's point) to deflate the gas and restore the domain. We observed this complication in 5 (8.62%) of our patients. We placed a Veress needle and closed the peritoneal openings by clip placement. A missed tear can result in future omental or intestinal herniation. 12 The incidence of inferior epigastric artery and vein injury in laparoscopic extraperitoneal inguinal hernia repair ranges from 0.1 to 0.4%. These vessels are important landmarks in inguinal hernia surgery, differentiating direct from indirect hernia and serving as a guide for hernia dissection. They are the most commonly injured abdominal wall vessels during surgery.
These injuries can happen during creation of the space (especially in TEP), during separation of the hernia sac from the cord structures, and during tacking of the mesh. Separation of the sac from the cord structures should be done in the middle or lower part of the sac, far from the deep ring. 13 The bleeding in our patients was caused by a lesion of the epigastric vein during mesh insertion in the preperitoneal space, and it was controlled by bipolar coagulation with subsequent aspiration of blood and mesh placement without further obstacles. In contrast with our intraoperative complication rate, Köckerling et al. 6 reported intraoperative complications in 1.19% of cases. In the present study, the overall incidence of postoperative seroma was 3.44% (n=2) of all cases. The mean size of the seromas was 4.2 cm, and within eight weeks the seromas resolved spontaneously. Zanella et al. 14 and Dulucq et al. 15 have reported a <5% incidence of seroma, while Hisham et al. 8 have reported a very high incidence of 21%. Significant clinical factors associated with seroma formation included old age, large defects, extension of the hernia into the scrotum, and presence of a residual distal indirect sac. By logistic regression, a large hernia defect and extension of the hernia into the scrotum were found to be independent risk factors for seroma formation. 15 Minor short-term postoperative complications included annoyance and discomfort, swelling, and numbness, which is in accordance with the literature. 16 No patient reported complaints such as postoperative chronic pain in our study, while the incidence registered in the literature varies from 1% to 16%. Chronic (postoperative) pain has been defined as pain lasting at least 2-3 months (after surgery), but modifications to this time frame have been proposed. A group of experts in hernia surgery and chronic pain has suggested modifying the definition of chronic pain after hernia repair to pain lasting at least 6 months after the operation. The reason for this extended period of time is that the inflammation around the mesh is still ongoing after 3 months, and there is a chance that some patients will improve substantially from 3 to 6 months postoperatively. 17 In our patients, we used two meshes in bilateral preperitoneal inguinal hernia repair. The laparoscopic insertion and manipulation of two smaller meshes in the preperitoneal space is easier than that of a single larger mesh, and there are no differences in the early and late outcomes when one or two meshes are used for the laparoscopic repair of bilateral inguinal hernias; however, the cost is lower when a single mesh is used. The intensity of the inflammatory response is directly proportional to the mesh size. Utiyama EM et al. report that the inflammatory responses to the mesh during the acute phase are similar in the two groups. 18 During this short study period (six months), no hernia recurrence was recorded in the present study. Dulucq et al. reported a 2.5% and Hisham et al. a 4% incidence of hernia recurrence. 8,15 Several other randomized studies showed that non-fixation of the mesh is not associated with an increased hernia recurrence rate and actually reduces the cost and the postoperative complications compared with mesh fixation techniques. The slit in the preformed mesh used in this study is placed around and behind the spermatic cord, providing some form of fixation and thus preventing mesh migration after preperitoneal desufflation.
Recently, two large case series 19 of TEP repairs with no mesh fixation reported recurrence rates of less than 0.3%.
Follow, listen, feel and go: alternative guidance systems for a walking assistance device

In this paper, we propose several solutions to guide an older adult along a safe path using a robotic walking assistant (the c-Walker). We consider four different possibilities to execute the task. One of them is mechanical, with the c-Walker playing an active role in setting the course. The other ones are based on tactile or acoustic stimuli, and suggest a direction of motion that the user is supposed to take of her own will. We describe the technological basis for the hardware components implementing the different solutions, and show specialized path following algorithms for each of them. The paper reports an extensive user validation activity with a quantitative and qualitative analysis of the different solutions. In this work, we test our system only with young participants, in order to establish a safe methodology that will be used in future studies with older adults.

INTRODUCTION
Ageing is often associated with reduced mobility, which is the consequence of a combination of physical, sensory and cognitive decline. Reduced mobility may weaken older adults' confidence in getting out alone and traveling autonomously in large spaces. Reduced mobility has several serious consequences, including an increase in the probability of falls and other physical problems, such as diabetes or articular diseases. Staying at home, people lose essential opportunities for socialisation and may worsen the quality of their nutrition. The result is a self-reinforcing loop that exacerbates the problems of ageing and accelerates physical and cognitive decline [3]. In the context of different research initiatives (the DALi project, http://www.ict-dali.eu, and the ACANTO project, http://www.ict-acanto.eu) we have developed a robotic walking assistant that compensates for sensory and cognitive impairments and supports the user's navigation across complex spaces. The device, called c-Walker (Fig. 1), is equipped with different types of low-level sensors (encoders, inertial measurement unit) and advanced sensors (cameras) that collect information on the device and its environment. Such measurements are used by the c-Walker to localise itself and to detect potential risks in the surrounding environment. By using this information the c-Walker is able to produce a motion plan that prevents accidents and drives the user to her destination with small effort while satisfying her preferences. The projects follow an inclusive design approach, which requires older users' involvement and participation at appropriate moments in the process once the evaluation protocols have been validated. There are different interfaces that the c-Walker can use to guide the user. Some of them generate acoustic and tactile stimulation to suggest the correct direction of motion. The user remains in charge of the final decision on whether to accept or refuse the suggestions. A different type of mechanism operates "actively" on the walker, by physically changing the direction of motion. In this work we describe four different mechanisms for guidance available in the c-Walker (mechanical, haptic, and two types of acoustic guidance), showing how they can be applied in the context of a guidance algorithm and their effect on a student population. The mechanical guidance is based on the action of two stepper motors that can change the orientation of the front wheels, forcing a turn in the desired direction.
The haptic guidance is a passive system based on a pair of bracelets that vibrate on the side corresponding to the direction the user is suggested to take. The same effect can be obtained by administering acoustic signals to the user through headphones: a sound on the right side to suggest a right turn, and a sound on the left side to suggest a left turn. The acoustic medium has a richer potential: by using appropriate algorithms, it is possible to simulate a sound in space that the user should follow in order to move in the right direction.

In the paper, we describe the theoretical foundations of the different mechanisms and algorithms and offer some details and insight on how they can be integrated into the c-Walker. In addition, we present the results of two evaluation studies aimed at providing a protocol for the evaluation of the performance of the guidance systems and initial knowledge on user behaviour and experience. Results suggest that mechanical guidance performs best, and they expose strengths and weaknesses of the other solutions, opening important directions for the design of future guidance systems.

The paper is organised as follows. In Sec. 2, we review the most important scientific literature related to our work. In Sec. 3, we describe the hardware and software components of the system, while in Sec. 4, we illustrate the different guidance mechanisms used in the c-Walker. In Sec. 5, we describe the guidance algorithms for the different guidance mechanisms. We report our testing and validation activities, performed with young participants on all of the systems, in Sec. 6, and finally we conclude with Sec. 7.

RELATED WORK

The robot wheelchair proposed in [26] offers guidance assistance in such a way that decisions come from the contribution of both the user and the machine. The shared control, instead of a conventional switch from robot to user mode, is a collaborative control: for each situation, the commands from robot and user are weighted according to their respective experience and ability, leading to a combined action. Other projects make use of walkers to provide the user with services such as physical support and obstacle avoidance. In [4], the walker can work in manual mode, where the control of the robot is left to the user and only voice messages are used to provide instructions. A shared control operates in automatic mode when obstacle avoidance is needed, and the user's intention is overridden by acting on the front wheels.

The two projects just mentioned can be considered "active" guidance systems, meaning that the system actively operates to steer the user toward the desired direction. The c-Walker's mechanical guidance considered in this paper falls in the same category. Another point of commonality is the strict cooperation between the system that generates suggestions and the user who has to implement these decisions. In the c-Walker, user comfort has the same importance as the accuracy and efficiency of the guidance solution. In fact, not only does the user provide motive power, but she can also decide to override the system's decisions, forcing her way out of the suggested path. Key to any guidance system of this kind is the ability to detect and possibly anticipate the user's intent. Valuable help in this direction can be offered by the use of force sensors. In [27], force sensors are used to modify the orientation of the front wheels of a walker in case of concerns about the comfort and safety of the user's motion.
In [5], an omnidirectional mobile base makes it possible to change the centre of rotation to accommodate the user's intended motion. Contrary to these projects, the c-Walker is intended as a low-cost system, for which expensive force sensors are not affordable. The user's intent is inferred indirectly by observing gait and by estimating her emotional state. More similar to our ideas is the JAIST active robotic walker (JaRoW), proposed by [14], which uses infrared sensors to detect the lower-limb movement of the user and adapts direction and velocity to her behaviour.

A possible way to reduce intrusiveness is to use passive devices, where suggestions on the direction of motion take the form of visual, auditory or tactile stimuli, and the user remains totally in charge of the final decision. Haptic interfaces can be used as a practical method to implement this idea. Success stories on the use of haptic interfaces can be found in the area of teleoperation of vehicles for surveillance or exploration of remote or dangerous areas. For this type of application, haptic interfaces are used to provide feedback on the sense of motion and the feeling of presence, as in [1]. Similar requirements can be found in rescue activities, where the robot helps the user move in environments where visual feedback is no longer available [11]. In the latter application, the robot provides information on its position and direction to the user in order to help the user follow the robot. Guidance assistance can be provided by giving feedback on the matching between the trajectory followed by the user and the planned trajectory. In [24], a bracelet provides a warning signal when a large deviation with respect to the planned trajectory is detected. In [8], a belt with eight tactors is used to provide direction information to the user in order to complete a way-point navigation plan. As shown below, haptic bracelets are one possible guidance method offered by the c-Walker.

Another "passive" guidance system is based on acoustic signalling and is well suited to users with partial or total visual disabilities. Acoustic guidance can be achieved by synthesising a sound from a virtual point in the direction the user should move toward. A key element of this method is the ability to efficiently and accurately render sound signals from a specified point. The main method to achieve this is based on the Head Related Transfer Function (HRTF), which varies between individuals and needs to be determined for each of them, as explained in [2]; it represents the ear's response for a given direction of the incoming sound. Other approaches are based on modelling the sound propagation. In the modelling process, the attenuation of the sound is taken into account using the Interaural Level Difference (ILD), which considers the presence of the listener's head, while the Interaural Time Difference (ITD) considers the distance between the ears and the sound source. These filtering processes are computationally demanding. The acoustic guidance mechanism implemented in the c-Walker is based on the adoption of lightweight algorithms amenable to an embedded implementation, detailed in [20,21].

SYSTEM ARCHITECTURE

The c-Walker hardware and software architecture has been designed for easy integration of heterogeneous components (possibly developed by different teams). The core modules of the architecture are shown in Figure 2.

[Figure 2: The block-scheme of the c-Walker architecture with its core components: mechatronic subsystem, localisation subsystem and planner.]
The Planner decides the plan to be followed based on: (1) the requests and preferences of the user, (2) the map of the environment, and (3) the presence of obstacles and crowded areas along the way. While the c-Walker is moving, it collects information from the environment, and the planned path can be updated to avoid obstacles or safety risks [6]. The Localisation module integrates information from several sources (encoders, inertial platform, cameras, RFID reader) to produce updated information on the estimated position of the vehicle in the environment with an accuracy of a few centimetres [17]. A mechatronic subsystem encapsulates all the modules that are used to read and process sensor data from the encoders and from the inertial sensors. Additionally, the mechatronic module contains all the logic required to send commands to the actuators (e.g., the motors on the caster wheels). The mechatronic system is reached through a CAN bus.

These core modules can be interconnected with other modules to implement the different guidance solutions discussed above, as shown in Fig. 2. The different components are interconnected using a publisher/subscriber middleware, whereby a component can publish messages that are broadcast through all the different levels of networking (CAN bus for mechatronic components, Ethernet for high-level sensors and computing nodes) in the c-Walker. This is a key enabler for the adoption of a truly component-based paradigm, in which the different guidance systems can be obtained by simply turning on some of the modules and allowing them to publish or subscribe to messages. Three different configurations are schematically shown in Fig. 3.

Fig. 3 shows how the three guidance configurations that we present in this work interact with the c-Walker. The scheme at the top of the figure refers to mechanical guidance. The Planner periodically publishes updated plans (i.e., the coordinates of the next points to reach). This information is subscribed to by a path follower that implements the algorithm presented in Section 5.2. This component decides a direction for the wheels that is transmitted to a Wheel Position Controller using the publish/subscribe middleware. This component also receives real-time information on the current orientation of the wheels and decides the actuation to set the direction to the desired position. The schemes in the middle and at the bottom apply, respectively, to haptic and acoustic (and binaural) guidance. In these cases the Path Follower component implements the algorithms discussed in Section 5.1 and transmits its output either to the Haptic Slave (see Section 4.1) or to the Audio Slave (see Section 4.2).
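As a rough illustration of this wiring, the following Python sketch shows a minimal in-process publisher/subscriber bus connecting a planner, a path follower and a wheel position controller. The topic names and message formats are invented for the example; they are not the c-Walker's actual middleware API.

```python
from collections import defaultdict
from typing import Any, Callable

class Bus:
    """A minimal in-process publish/subscribe bus."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        for callback in self._subscribers[topic]:
            callback(message)

bus = Bus()

# Path follower: consumes plan updates, produces wheel direction commands.
def on_plan(waypoints) -> None:
    # In the real system this runs the path-following algorithm of Sec. 5.2;
    # here we just forward a dummy steering command (degrees, deg/s).
    bus.publish("wheel/command", {"angle_deg": 10.0, "rate_deg_s": 30.0})

# Wheel position controller: consumes commands (and, in the real system,
# encoder readings) and actuates the steering motors.
def on_wheel_command(cmd) -> None:
    print(f"actuating wheels: {cmd}")

bus.subscribe("plan/update", on_plan)
bus.subscribe("wheel/command", on_wheel_command)

# The planner periodically publishes the next points to reach.
bus.publish("plan/update", [(1.0, 0.0), (2.0, 0.5)])
```

In the actual architecture the same pattern spans the CAN bus and Ethernet, so a module only needs to know topic names, not which node produces or consumes them.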
GUIDANCE MECHANISMS

In this section, we describe the three main mechanisms that can be used as "actuators" to suggest or to force changes in the direction of motion.

Bracelets

Haptic guidance is implemented through a tactile stimulation that takes the form of a vibration. A device able to transmit haptic signals through vibrations is said to be "vibrotactile". Vibration is best transmitted on hairy skin, because of skin thickness and nerve depth, and it is best detected in bony areas. Wrists and spine are generally the preferred choice for detecting vibrations, with arms immediately following. Our application is particularly challenging for two reasons: (i) the interface is designed to be used by older adults, and (ii) the signal is transmitted while the user moves. Movement is known to adversely affect the detection rate and the response time of lower body sites [12]. As regards the perception of tactile stimuli by older adults, [10] presents studies on the effects of ageing on the sense of touch, which revealed that detection thresholds for several vibration intensities are higher in older subjects in the 65+ age class.

Bearing in mind these facts, we designed a wearable haptic bracelet in which two cylindrical vibro-motors generate vibratory signals to warn the user (Fig. 1). The subject wears one vibrotactile bracelet on each arm, in order to maximise the separation of the stimuli while keeping the discrimination process as intuitive as possible. In particular, vibration of the left wristband suggests that the participant turn left, and vice versa. On each bracelet the distance between the two motors is about 80 mm; in two-point discrimination, the minimal distance between two stimuli to be differentiated is about 35 mm on the forearms, and there is no evidence for differences between the left and right sides of the body, according to [28]. In order to reduce the after-effect problem typical of continuous stimuli and to preserve users' ability to localise the vibration, we selected a pulsed vibration signal with frequency 280 Hz and amplitude 0.6 g, instead of a continuous one. In particular, when a bracelet is engaged, its two vibrating motors alternately vibrate for 0.2 s each. The choice of using two vibrating motors instead of one was the outcome of a pilot study in which a group of older adults tested both options and declared their preference for two motors; the choice of frequency and amplitude of the vibrations was another outcome of this study (see [23]).

Audio interface

The acoustic interface communicates to the user the direction to take by transmitting synthetic signals through a headphone (Fig. 1). For instance, when the system aims to suggest a left turn to the user, it reproduces a sound that is perceived by the user as coming from a point on her left, aligned with the direction she is supposed to take. This is possible thanks to the application of binaural theory. The software module that generates this sound is called Audio Slave, and it receives from a master the spatial coordinates (Sx, Sy) of the point that is required to be the source of the sound. The Audio Slave converts the Cartesian coordinates into a pair (r, θ) of relative polar coordinates, in which r represents the distance between the virtual sound source and the centre of the listener's head, and θ represents the azimuthal angle. The pair (r, θ) univocally identifies the position of the sound source on the horizontal plane. θ takes the value 0 when the source is in front of the user; positive angles identify positions on the right-hand side, and negative values of θ identify positions on the left of the listener. The guidance signal is a white-noise burst with duration 50 ms, repeated every 150 ms.

The binaural processing algorithm has been used to implement two different versions of the guidance interface:

• Left/Right Guidance;
• Binaural Guidance.

Using the Left/Right Guidance interface, the system reproduces only virtual sources placed at θ = 90° or at θ = −90°, to suggest a right turn or a left turn, in the same way as the haptic interface. With the Binaural Guidance interface, a virtual sound source is allowed to be in any position: the resulting suggestion is not merely for a turn, but specifies finer-grained information on the exact direction. In this case, to ensure the correct displacement of the virtual sound relative to the user's head orientation, an Inertial Measurement Unit (IMU) monitors the listener's head position with respect to the c-Walker.

Both interface implementations are based on the same sound rendering engine, which is based on the physics of sound waves. Each of the sound samples is delayed and attenuated according to the principles of sound wave propagation. The binaural effect is obtained by proper filters that reproduce the presence of the listener's head and consider the displacement of the ears. However, the guidance interface is meant to generate recommendations on the direction to follow; therefore, the stimuli have been processed without reverberation. As a consequence, users will perceive the sounds as intracranial, since the absence of reverb makes it difficult to externalise virtual sound stimuli.
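A minimal sketch of the coordinate handling just described, in Python. The walker-frame axis convention (x forward, y to the right) is our assumption, chosen so that positive θ lies on the right-hand side as in the text; the 1.2 m default source distance is borrowed from the value of d_s used in the experiments reported later.

```python
import math

def to_polar(sx: float, sy: float) -> tuple[float, float]:
    """Convert a source point from walker-frame Cartesian coordinates
    to the (r, theta) pair used by the Audio Slave.

    Frame assumption (not specified in the paper): x points forward,
    y points to the user's right, so theta = 0 is straight ahead and
    positive angles are on the right-hand side.
    """
    r = math.hypot(sx, sy)
    theta = math.atan2(sy, sx)   # radians, positive to the right
    return r, theta

# The Left/Right interface only ever uses two fixed virtual sources,
# at theta = +90 deg (turn right) and theta = -90 deg (turn left).
def left_right_source(turn: str, r: float = 1.2) -> tuple[float, float]:
    theta = math.radians(90.0 if turn == "right" else -90.0)
    # Back to Cartesian for the renderer: (r, theta) -> (x, y).
    return r * math.cos(theta), r * math.sin(theta)

print(to_polar(1.0, 1.0))          # source ahead-right: theta ~ 0.79 rad (+45 deg)
print(left_right_source("left"))   # approximately (0.0, -1.2): on the user's left
```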
Mechanical Steering

The mechanical system based on steering uses the front caster wheels to suggest to the user which direction to follow. The positioning of the wheels causes the c-Walker to perform a smooth turn manoeuvre without any particular intervention from the user, and it is therefore considered an active guidance: the user provides only the energy necessary to push the vehicle forward. Other active approaches, exploiting a braking system acting on the back wheels of the walker, have been presented recently in the literature, among which [9,22]. However, the front-wheel steering approach is more robust and less demanding in terms of processing power and sensor requirements, and was therefore adopted for this paper.

In more depth, the c-Walker is endowed with two caster wheels in front of the device, which are connected to a swivel that enables them to move freely around their axis. Taking advantage of this feature, we applied stepper motors to the joints to change the direction of the wheels by a specified amount. The presence of non-idealities (e.g., friction and slippage of the gears) can possibly introduce a deviation between the desired rotation angle and the actual one. Therefore, we need a position control scheme operating with real-time measurements of the current angular position of the wheels. Such measurements are collected by an encoder that is mounted on the same joint as the stepper motor. The connection between wheel and motor is through a gear system such that a complete turn of the wheel is associated with 4 turns of the motor; every complete turn of the motor is 400 steps. The stepper motor and the encoder are controlled by a small computing node that is interfaced to the rest of the system through a CAN bus. The motor, together with the absolute encoder and the related CAN bus node, is visible in Fig. 1. With a fixed periodicity, the node samples the encoder and broadcasts the sensor reading through the bus. The node can also receive a CAN message, coming from other computing devices, that requests a rotation of the wheels, specifying the number of degrees of rotation and the angular velocity (deg/s). The values are automatically converted into steps and used in a PID control loop that moves the wheel to the specified angular position.
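From the figures above, one wheel revolution corresponds to 4 × 400 = 1600 motor steps, i.e., about 4.44 steps per degree of wheel rotation. The sketch below illustrates this conversion together with a textbook discrete PID position loop; the gains and the class structure are illustrative, not the node's actual firmware.

```python
STEPS_PER_MOTOR_TURN = 400        # from the paper
MOTOR_TURNS_PER_WHEEL_TURN = 4    # gear ratio, from the paper
STEPS_PER_WHEEL_DEGREE = STEPS_PER_MOTOR_TURN * MOTOR_TURNS_PER_WHEEL_TURN / 360.0

def degrees_to_steps(wheel_degrees: float) -> int:
    """Convert a requested wheel rotation (degrees) into stepper-motor steps."""
    return round(wheel_degrees * STEPS_PER_WHEEL_DEGREE)

class PositionPID:
    """A textbook discrete PID loop; the gains here are placeholders."""
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_deg: float, encoder_deg: float) -> float:
        error = target_deg - encoder_deg
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

print(degrees_to_steps(45.0))  # 45 deg of wheel rotation -> 200 steps
```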
GUIDANCE ALGORITHMS

The guidance algorithms rely on an accurate estimate of the position of the c-Walker with respect to the planned path. Since the latter is generated internally by a module of the c-Walker (see [7]), only the knowledge of the position Q = [x, y]^T and of the orientation θ, expressed in some known reference frame, is needed. This problem, known in the literature as the localisation problem, is solved in the c-Walker using the solutions proposed in [16,18]. With this information it is possible to determine the Frenet-Serret point F_a, that is, the point obtained by projecting the vehicle position orthogonally onto the path, as in Fig. 4 (a). We define y_d and θ_d as, respectively, the distance from the vehicle to F_a along this projection and the difference between the orientation of the c-Walker and the orientation of the tangent to the path at the projection point. All the proposed guidance algorithms use this information to compute the specific "actuation".

We observe that the objective of the guidance algorithms is not perfect path following of the planned trajectory. In fact, such an objective would be very restrictive for the user and perceived as too authoritative and intrusive. In order to give the user the feeling of being in control of the platform, she is allowed an error (in both position and orientation) throughout the execution of the path, which is kept lower than a desired performance threshold. Therefore, the path can be considered as the centre line of a virtual corridor in which the user can move freely.

Haptic and Acoustic algorithms

The haptic and acoustic guidance algorithms generate a quantised control action, which can be described with an alphabet of three control symbols: a) turn right; b) turn left; c) go straight. This is a good compromise between accuracy and the cognitive load for the interpretation of the signals. The symbol to be suggested to the user is determined by the desired turning towards the path. A straightforward way to compute such a quantity is to determine the angular velocity an autonomous robotic unicycle-like vehicle would follow in order to solve the path following problem. To this end, we have designed a very simple control Lyapunov function which ensures a controlled solution to path following in the case of straight lines, acting only on the vehicle angular velocity and irrespective of the forward velocity of the vehicle. Such a controller works also for curved paths if we are only interested in the sign of the desired angular velocity. To see this, consider the kinematic model of the unicycle (which is an accurate kinematic model of the c-Walker), written in the Frenet-Serret frame as

\dot{x}_d = v \cos\theta_d, \qquad \dot{y}_d = v \sin\theta_d, \qquad \dot{\theta}_d = \omega, \quad (1)

where v ≠ 0 is the forward velocity and ω its angular velocity. y_d and θ_d are the quantities defined in the previous section, while x_d is the longitudinal coordinate of the vehicle, which in the Frenet-Serret reference frame is identically zero by definition. It has to be noted that (x_d, y_d) are then the Cartesian coordinates, in the Frenet-Serret reference frame, of the midpoint of the rear wheel axle. In light of model (1), and remembering that x_d does not play any role for path following, we can set up the following control Lyapunov function

V_1(y_d, \theta_d) = \tfrac{1}{2} k_y y_d^2 + \tfrac{1}{2} k_\theta \theta_d^2, \quad (2)

which is positive definite in the space of interest, i.e., (y_d, θ_d), and has as time derivative

\dot{V}_1 = k_y v\, y_d \sin\theta_d + k_\theta \theta_d\, \omega, \quad (3)

where k_y > 0 and k_θ > 0 are tuning constants. Imposing ω equal to the following desired angular velocity

\omega_d = -q_\theta \theta_d - \frac{k_y}{k_\theta} v\, y_d \frac{\sin\theta_d}{\theta_d}, \quad (4)

with q_θ > 0 an additional degree of freedom, the time derivative in (3) becomes \dot{V}_1 = -q_\theta k_\theta \theta_d^2, which is negative semidefinite; using the La Salle and Krasowskii principles, asymptotic stability of the equilibrium point (y_d, θ_d) = (0, 0) can therefore be established, with the c-Walker steadily moving toward the path. As a consequence, the sign of ω rules the direction of switching: a) if ω > t_ω, then the user has to turn left; b) if ω < −t_ω, then the user has to turn right; c) if ω ∈ [−t_ω, t_ω], then the user has to go straight. t_ω is a design threshold used to trade off user comfort against the authority of the control action.

In order to implement the idea of the virtual corridor around the path and to increase user comfort, the actuation takes place only when V_1 in (2) is greater than a certain V_1^max, defined as the value of (2) when y_d = y_h (half the width of the corridor) and θ_d = θ_h (half the amplitude of a cone, centred on the corridor orientation, within which the c-Walker heading is allowed). For the haptic and acoustic algorithms the parameters that define the corridor are the same, namely y_h = 0.3 m and θ_h = 0.52 rad. Similarly, the constants q_θ, k_y and k_θ are fixed to the same values for both haptic and acoustic guidance. However, they change according to the c-Walker's actual position: when the position is outside the corridor, k_y = 1 and k_θ = 0.1, so that the controller acts more strongly to steer the vehicle back inside the corridor; when, instead, the c-Walker is within the corridor boundaries, k_y = 0.1 and k_θ = 1, in order to enforce an orientation tangent to the path. Whether the c-Walker is inside the corridor is determined by simply checking whether y_d ≤ y_h. Finally, to take the corridor into account, the applied angular velocity is ω = α ω_d, where α is a time-varying parameter related to the corridor, i.e., in the light of the previous discussion, α = 0 when V_1 ≤ V_1^max and α = 1 otherwise. (5) The turning rule related to the sign of ω is then applied as previously described.
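A compact sketch of the resulting decision rule follows, under the assumptions spelled out in the comments: the values of q_θ and t_ω are not reported in the paper and are placeholders here, and the sign convention for y_d (positive to the left of the path) is assumed.

```python
import math

# Corridor parameters from the paper (haptic/acoustic guidance).
Y_H, THETA_H = 0.3, 0.52          # half-width [m] and half-cone [rad]
Q_THETA = 1.0                     # q_theta: value not reported, assumed here
T_OMEGA = 0.1                     # switching threshold t_omega: also assumed

def guidance_symbol(y_d: float, theta_d: float, v: float) -> str:
    """Quantise the Lyapunov-based angular velocity into three symbols.

    y_d: lateral distance to the path (assumed positive to the left),
    theta_d: heading error, v: forward speed.  A sketch of Sec. 5.1.
    """
    inside = abs(y_d) <= Y_H
    k_y, k_theta = (0.1, 1.0) if inside else (1.0, 0.1)

    # Corridor gate (eq. (5)): actuate only when V1 exceeds its boundary value.
    v1 = 0.5 * k_y * y_d**2 + 0.5 * k_theta * theta_d**2
    v1_max = 0.5 * k_y * Y_H**2 + 0.5 * k_theta * THETA_H**2
    alpha = 0.0 if v1 <= v1_max else 1.0

    # Desired angular velocity (eq. (4)); sin(x)/x -> 1 as x -> 0.
    sinc = math.sin(theta_d) / theta_d if abs(theta_d) > 1e-9 else 1.0
    omega = alpha * (-Q_THETA * theta_d - (k_y / k_theta) * v * y_d * sinc)

    if omega > T_OMEGA:
        return "turn left"
    if omega < -T_OMEGA:
        return "turn right"
    return "go straight"

print(guidance_symbol(y_d=0.5, theta_d=0.0, v=0.8))  # 0.5 m left of path -> turn right
```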
Acoustic source computation

For the acoustic guidance system, the sound source position has to be properly identified. To this end, let us define the circle centred at the vehicle position Q and having radius d_s (d_s = 1.2 m in the experiments). Let us further define d_p as the segment joining the origin of the Frenet-Serret reference frame F_a with the intersection point P between the circle and the tangent to the path at the origin of F_a. If multiple solutions exist, the one in the forward direction of the walker is considered. If only one solution exists, i.e., y_d = |d_s|, then P coincides with the origin of F_a. Finally, no solution exists if y_d > |d_s|; in that case P lies on the segment that connects Q to F_a. We define S as the point closest to P and lying on the path. If the c-Walker is close to a straight component of the path or at a distance greater than d_s, S = P; otherwise, S is computed as the projection of P on the path. S is the desired sound source with respect to a fixed reference frame, which has to be transformed into the c-Walker reference coordinate system by the usual change of frame, S_cw = R(θ)^T (S − Q), with R(θ) the rotation matrix associated with the vehicle orientation. With the choice just illustrated, if the distance from the path is y_d ≥ |d_s|, the target is pushed toward the planned path following the shortest possible route.

Actuation

Haptic: the bracelets are actuated according to the direction to follow. There are two choices of actuation: the first considers the value of ω as discussed above, while the second considers the y-component of S_cw as computed in Sec. 5.1.1. In both cases, the sign determines the direction of turning.

Left/Right: three cones, with vertices in Q, are defined: L, R and S. The cones divide the semicircle in front of the vehicle, centred in Q, into three equal sectors. If S_cw ∈ L, the user has to turn left; if S_cw ∈ R, the user has to turn right; if S_cw ∈ S, the user has to go straight. Positions behind the user are transformed into turn-left or turn-right indications depending on the position of S_cw. Using this taxonomy and the value of α in (5), the slave application determines whether the sound has to be played or not, and from which position.
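The cone classification for the Left/Right interface can be sketched as follows; the walker-frame convention (x forward, y to the right) is again our assumption, and the three equal front sectors then span 60 degrees each, with boundaries at ±30 degrees.

```python
import math

def classify_cone(s_cw: tuple[float, float]) -> str:
    """Map the source point S_cw to the Left/Right interface's three
    front cones L, S, R (three equal 60-degree sectors of the front
    semicircle); positions behind the user become plain left/right turns."""
    x, y = s_cw
    bearing = math.degrees(math.atan2(y, x))   # 0 = straight ahead, + = right
    if abs(bearing) > 90.0:
        return "turn right" if bearing > 0 else "turn left"
    if bearing > 30.0:
        return "turn right"
    if bearing < -30.0:
        return "turn left"
    return "go straight"

print(classify_cone((1.0, 0.1)))    # almost straight ahead -> go straight
print(classify_cone((0.2, -1.0)))   # well to the left -> turn left
```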
Binaural

The binaural algorithm fully exploits the reference coordinates S_cw, using a finer granularity of positions than the Left/Right acoustic guidance. The number of cones is now equal to 7, with three equally spaced cones on the right and three on the left of the forward direction of the trolley. Each cone has a characteristic angle β_i, the one that equally splits the cone from the cart's perspective. The described mechanism has the role of discretising the possible sound directions, since the human auditory system does not have the sensitivity to distinguish a finer partition. Again, positions behind the user are treated as in the Left/Right approach, in order to avoid the front/back confusion that commonly affects binaural sound recognition. As a result of this quantisation, the new position of the sound source is S_s. Defining θ_i as the user's head orientation, measured with the IMU placed on top of the headphone, the final sound source S_p is computed as

S_p = \begin{pmatrix} \cos\theta_i & \sin\theta_i \\ -\sin\theta_i & \cos\theta_i \end{pmatrix} S_s.
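A sketch of the binaural quantisation and head compensation follows. The characteristic angles β_i are our assumption (centres of seven equal sectors over the front semicircle), since their exact values are not reported; the rotation matches the equation just given.

```python
import math

# Seven front cones: centres of seven equal ~25.7-degree sectors over
# [-90, 90] degrees.  These beta_i values are illustrative assumptions.
BETAS_DEG = [-77.1, -51.4, -25.7, 0.0, 25.7, 51.4, 77.1]

def quantise_direction(s_cw: tuple[float, float]) -> float:
    """Snap the bearing of S_cw to the characteristic angle of its cone."""
    x, y = s_cw
    bearing = math.degrees(math.atan2(y, x))    # walker frame: 0 = ahead, + = right
    if abs(bearing) > 90.0:
        # Behind the user: plain left/right, as in the Left/Right approach.
        return math.copysign(90.0, bearing)
    return min(BETAS_DEG, key=lambda b: abs(b - bearing))

def compensate_head(s_s: tuple[float, float], theta_i: float) -> tuple[float, float]:
    """Rotate the quantised source S_s by the head orientation theta_i (IMU),
    yielding the source S_p actually rendered to the listener."""
    x, y = s_s
    c, s = math.cos(theta_i), math.sin(theta_i)
    return (c * x + s * y, -s * x + c * y)

print(quantise_direction((1.0, 0.6)))                 # snaps to 25.7 deg
print(compensate_head((1.0, 0.0), math.radians(30)))  # head turned 30 deg
```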
Mechanical system: steering

The rationale of the steering-wheel controller lies in the nature of the kinematic model. With respect to the model adopted in (1), which represents a unicycle-like vehicle with differential drive on the back wheels, controlling the c-Walker using the steering wheels implies a different dynamic for the orientation rate, which becomes

\dot{\theta}_d = \frac{v}{l} \tan\varphi, \quad (6)

where φ is the steering angle and l the distance between the front and rear wheel axles. Since the steering angle is generated through the actuation of the front caster wheels, it is directly controlled in position by means of the stepper motors. Moreover, φ ∈ [−π, π) and, hence, there is no theoretical limit on the value of \dot{\theta}_d. Nonetheless, there exists a singular point when v = 0, which can be ruled out because in such a case path following does not make sense even for the model in (1). This condition implies that acting on the steering wheels does not allow a turn on the spot. As a consequence, it is possible to select any feasible path-following controller conceived for the unicycle to solve the problem at hand. The controller adopted is the one proposed in [25], which is flexible (indeed, there are tuning parameters for the approach angle to the path) and can be extended to include dynamic effects and uncertain parameters. The adopted controller, which is an extension of the one presented in [15], is based on the idea of a virtual target travelling on the path. Its adaptation to our context is discussed below.

Let V be the coordinates of the Virtual vehicle. The objective is to make the c-Walker perfectly track the Virtual vehicle. The position of the walker Q can be expressed in a global frame G as {}^G Q = [x\ y\ 0]^T. Alternatively, the point can be expressed in the frame V, which coincides with the point V, as {}^V Q = [s_v\ y_v\ 0]^T. The velocity of the c-Walker can then be derived and expressed in the frame V, where θ_v = θ_d − θ_c. Using Lyapunov techniques, it is possible to define the following set of control laws (see [25] for further details):

\dot{s} = v \cos\theta_v + K_2 s_v,

where \dot{s} represents the progression of the Virtual vehicle on the path, δ(y_v, v) is the angle of approach of the vehicle with respect to the path (which can be tuned as necessary), while the K_i are tuning constants. With this choice, \dot{\theta}_v + c(s)\dot{s} is the angular velocity reference for the c-Walker, which can be generated by solving equation (6) with respect to φ. It has to be noted that φ is the reference of the wheel if the half-car model is adopted; in order to transform the reference φ into a reference for the left and right wheels, the constraints imposed by the Ackermann geometry are applied. Finally, in order to implement the idea of the corridor previously presented, the actual steering angle imposed on the wheels treats the reference computed as described in this section as φ_d, which is combined with the actual orientation φ_a. More precisely, using the value of α in (5), the commanded orientation of the steering wheels is given by φ = α φ_d + (1 − α) φ_a. The value of the threshold in this case is increased to θ_h = 1.62 rad.

Study 1

A formative evaluation was designed to compare and contrast the performance of the different guidance systems. Given the preliminary state of user research in this field [30,29], the main focus of the evaluation was on system performance rather than on the user experience. The study had two concurrent objectives: to develop a controlled experimental methodology to support system comparisons, and to provide practical information for re-design. In this way, future development will rely on a tested methodology that preserves elderly participants from stress and fatigue. In line with an ethical application of the inclusive design process [13], at this early stage of the methodological verification process we involved a sample of young participants.

Participants

Thirteen participants (6 females, mean age 30 years, range 26-39) took part in the evaluation. They were all students or employees of the University of Trento and gave informed consent prior to inclusion in the study.

Design

The study applied a within-subjects design with Guidance (4) and Path (3) as experimental factors. All participants used the four guidance systems (acoustic, haptic, mechanical, binaural) on three different paths. The order of the system conditions was counterbalanced across participants.

Apparatus

The experimental apparatus used in the experiment is a prototype of the c-Walker shown in Figure 1. An exhaustive description of the device and of its different functionalities can be found in [19]. A distinctive mark of the c-Walker is its modularity: the modules implementing the different functionalities can easily be plugged on or off based on the specific requirements of the application. The specific configuration adopted in this paper consisted of: (1) a Localisation module, (2) a short-term Planner, and (3) a Path Follower. The Localisation system of the c-Walker utilises a combination of different techniques: a relative localisation system, based on the fusion of encoders on the wheels and of a multi-axial gyroscope, operates in connection with different absolute positioning systems to keep in check the error accumulated along the path. The experiments reported here were organised as multiple repetitions of relatively short trajectories. We believe that the adoption of this paradigm produces results comparable to fewer repetitions of longer trajectories, in a more controllable and repeatable way. This simplifies the localisation problem: indeed, the mere use of relative localisation provides acceptable accuracy, with an accumulated error below 5 cm, when the system operates over a short distance (e.g., smaller than 50 m) [18].
Therefore, the activation of absolute positioning systems, which would entail some instrumentation of the environment (e.g., by deploying RFID tags in known positions), was not needed. The short-term Planner in the c-Walker is reactive: it collects real-time information on the environment and uses it to plan safe courses that avoid collisions with other people or dangerous areas [7]. In this context, we could disable this feature, since the experiments took place in free space, without any dynamic obstacles along the way. The planner was configured to generate three different virtual paths (60 centimetres wide and 10 metres long): straight (I), C-shaped (C) and S-shaped (S). The virtual corridor extended 30 centimetres to the left and to the right of the centre of the c-Walker. The C path was a quarter of the circumference of a circle with a radius of 6.37 metres. The S path comprised three arcs of a circumference with a radius of 4.78 metres: the first and the third arcs were 1/12, while the one in the centre was 1/6 of the whole circumference, and the second arc was bent in the opposite direction compared to the other two. In total there were 6 path variations, two symmetric paths for each shape. Finally, the Path Follower component implements the guidance algorithms described in Section 5. The concrete implementation was adapted to the different guidance algorithms. For mechanical guidance, the component decides a direction for the wheels that is transmitted to a Wheel Position Controller; this component also receives real-time information on the current orientation of the wheels and decides the actuation to set the direction to the desired position. For the haptic and the acoustic (and binaural) guidance, the Path Follower implements the algorithms discussed in Section 5.1 and transmits its output either to the Haptic Slave or to the Audio Slave, as detailed in Section 4.1 and Section 4.2, respectively.

Procedure

The evaluation was run in a large empty room of the University building by two experimenters: a psychologist, who interacted with the participants, and a computer scientist, who controlled the equipment. At the beginning of the study, participants were provided with the instructions relating to each guidance system. It was explained that they had to follow the instructions of the c-Walker: while they were on the correct trajectory there would be no system intervention; otherwise, each system would act in a different way. The mechanical system would turn the front wheels, modifying the direction back onto the right path. In this case, participants could not force the walker and could only follow the suggested trajectory; at the end of the mechanical correction, the participants were given back the control of the walker. For the haptic/acoustic guidance, a vibration/sound (either on the left or right arm/ear) would indicate the side of the correction necessary to regain the path. It was stressed that under these conditions there was no information indicating the turn intensity. Finally, the binaural guidance would provide a sound indicating both the direction and the amount of the correction. Participants were told to be careful in following the instructions, to avoid bouncing from one side to the other of the virtual corridor. It was also suggested that whenever they felt like they were zigzagging, the correct trajectory was likely in the middle. Before each trial, the appropriate device was put on the participant (i.e., headphones or haptic bracelets).
Only in the case of the binaural system were participants given a brief training to make them experience the spatial information of the sounds. The starting position of each trial varied among the four corners of a rectangular virtual area (about 12 x 4 metres). The c-Walker was positioned by the experimenter with a variable orientation according to the shape of the path to be followed. Specifically, at the beginning of each I trial, the walker was turned 10 degrees either to the left or to the right of the expected trajectory; at the beginning of each C and S trial, the walker was oriented in the correct direction to be followed. Participants started walking after a signal from the experimenter and repeated 10 randomised paths for each guidance system. At the end of each system evaluation, participants were invited to answer 4 questions, addressing ease of use, self-confidence in route keeping, acceptability of the interface in public spaces, and an overall evaluation, on a 10-point scale (10 = positive). Moreover, participants were invited to provide comments or suggestions. The evaluation lasted around 90 minutes; at the end, participants were thanked and paid 10 Euros.

Data analysis

Performance was analysed considering four dependent variables. A measure of error was operationalised as the deviation from the optimal trajectory and calculated using the distance of the orthogonal projection between the actual and the optimal trajectory. We collected a sample of 100 measurements (about one value every 10 centimetres along the curvilinear abscissa of the path) that were then averaged. Time was measured between the start of the participant's movement and the moment the participant reached the intended end of the path. Length measured the distance walked by the participant, whereas speed corresponded to the ratio between the length and the time. For each participant and guidance system, we averaged an index score for the four S, the four C and the two I paths. Data analysis was performed employing the analysis of variance (ANOVA) with repeated measures on the factors 'Guidance' and 'Path'. Post-hoc pairwise comparisons, corrected with Bonferroni for multiple comparisons (two tails), were also computed.
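For illustration, an analysis of this kind can be reproduced along the following lines with statsmodels' repeated-measures ANOVA; the data below are synthetic stand-ins, not the study's measurements.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic stand-in for the study data: one averaged error score per
# participant (13), guidance system (4) and path shape (3).
rng = np.random.default_rng(0)
rows = [
    {"participant": p, "guidance": g, "path": s,
     "error": rng.normal(0.15 if g == "mechanical" else 0.25, 0.05)}
    for p in range(13)
    for g in ["mechanical", "haptic", "acoustic", "binaural"]
    for s in ["I", "C", "S"]
]
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA with Guidance and Path as within factors.
res = AnovaRM(data=df, depvar="error", subject="participant",
              within=["guidance", "path"]).fit()
print(res)
```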
Results

Error

Descriptive statistics of error are reported in Fig. 5 (a) as a function of Guidance and Path. The ANOVA highlighted a significant effect for Guidance, F(3, 36) = 27.4, p < .01, for Path, F(2, 24) = 17.3, p < .01, and for the interaction, F(6, 72) = 10.3, p < .01. Post-hoc pairwise comparisons (Tab. 1) indicated that the mechanical guidance differed significantly from all the others (p < .01), being the most precise. Moreover, the acoustic guidance was significantly different from the haptic (p < .01). Post-hoc comparisons indicated that the I path was significantly easier than the other two (p < .01). In the mechanical guidance condition, the error was not affected by the path and showed very low variability among participants. On the contrary, for all other conditions there was an effect of Path on the magnitude of the error. Mostly for the haptic, but also for the acoustic guidance, the S path had the highest error. Interestingly, for the binaural guidance, the highest error emerged with the C path. Fig. 6 shows some qualitative results of the experiments.

Time

The ANOVA on time showed a significant effect for Guidance, F(3, 36) = 3.98, p < .05, for Path, F(2, 24) = 7.54, p < .01, and for the interaction, F(6, 72) = 2.89, p < .05. Fig. 5 (b) shows the average time in relation to both Guidance and Path. Post-hoc pairwise comparisons showed that the mechanical guidance system was significantly faster than the haptic (p < .05), and that the I path differed significantly from the S path (p < .05). Walking time was independent of Path for the mechanical and the binaural guidance. Conversely, the S path was performed significantly more slowly than the I path for both the acoustic and the haptic guidance.

Length

Only two participants (both in the S path and the binaural guidance condition) walked less than the optimal path length. The ANOVA showed a significant effect for Guidance, F(3, 36) = 15.1, p < .01, for Path, F(2, 24) = 9.1, p < .01, and for the interaction, F(6, 72) = 6.1, p < .01. Post-hoc comparisons indicated that the haptic guidance differed significantly from all the others (p < .01 for mechanical and acoustic, p < .05 for binaural). Moreover, the mechanical guidance differed significantly from the acoustic (p < .01). The I path differed significantly from the C (p < .01) and S (p < .05) paths (Tab. 3). The haptic guidance showed the worst result in the S path. For the mechanical condition, the performance was different between the I and S paths. For the binaural condition there was no effect of Path. Fig. 5 (c) shows the average length in relation to both Guidance and Path.

Speed

The analysis of variance on speed reported only a significant interaction between Guidance and Path, F(6, 72) = 3.05, p < .01.

Questionnaire

Participants' scores on the four questionnaire items were normalised for each participant in relation to the highest score provided among all the answers. The ANOVA indicates that the mechanical guidance is perceived as easier to use, with no other significant differences among the other systems. The same results emerged in relation to the confidence in maintaining the correct trajectory. Concerning the acceptability of using the guidance systems in public spaces, the mechanical guidance was again the preferred one in relation to both the acoustic and the binaural, while no difference emerged in relation to the haptic. Finally, participants liked the mechanical guidance the most in relation to both the haptic and acoustic systems, while no difference emerged in relation to the binaural one. Participants spontaneously commented that the mechanical system was easy to follow and required little attention. However, some of them complained that it might be perceived as coercive and risky due to possible errors in route planning. Other people worried about the dangerous effect of a quick turn of the wheels, mostly for older users. Participants reported a general dislike of wearing headphones, mostly because they might miss important environmental sounds and because of the look. Most of the participants agreed that the binaural condition required more attention than all the other systems. Participants, however, appreciated that it was something new and interesting and that it provided constant feedback on the position. Most of them preferred the binaural system to the acoustic one because it provided more information, yet some reported a difficulty in discriminating the direction the sound was coming from. Most of the participants reported preferring the haptic guidance system to the acoustic, as easier and less intrusive. In relation to both guidance conditions, participants complained about the poverty of the left/right instructions and the lack of modulation.
Some participants suggested possible ways to increase communication richness, such as, for the acoustic system, different volumes indicating the magnitude, verbal feedback, or different tones in relation to the angle. For the haptic system, comments included modulating the frequency of the vibration in relation to the magnitude of the correction. Some participants reported a kind of annoyance at the haptic stimulation, but only for the first minutes of use.

Discussion

The aim of this study was to gather quantitative and qualitative information in relation to the evaluation of four different guidance systems. To this aim, participants had the opportunity to navigate non-visible paths (i.e., virtual corridors) using the four different guidance systems. To maintain the correct trajectory, participants could only rely on the instructions provided by the c-Walker, and after using each system they were asked to provide feedback. As expected, in terms of performance, the mechanical guidance was the most precise. Although an error emerged because of the freedom left to participants, the results show the consistency of the deviation along the different paths, a low variability among the participants and only a slight difference in relation to the shape of the paths. The results of the questionnaire further support the quantitative data, showing that, on average, participants liked the mechanical guidance the most in relation to easiness, confidence in maintaining the trajectory, acceptability and overall judgment. The only concern for some users was that it might be perceived as coercive and risky due to possible errors in route planning. In fact, the mechanical guidance was active in the sense that participants had to passively follow the trajectory imposed by the walker. Differently, in the other three guidance systems, the participants were actively driving based on the interpretation of the provided instructions. In the acoustic guidance, there were only left and right sounds, while in the binaural guidance, the sound was modulated by modifying the binaural difference between the two ears. Although more informative in terms of quantifying the angle of the suggested trajectory, the binaural guidance system emerged to be worse than the acoustic system in the C path. However, it is likely that with adequate training the performance with the binaural system could improve considerably. The results of the questionnaire suggest that both the systems using headphones were not very acceptable, because of the possibility of missing environmental sounds and because of the look. Moreover, the binaural system was reported to require more attention than the acoustic one, although no difference emerged in terms of confidence in maintaining the correct trajectory. Overall, the binaural guidance was appreciated because it was something new and provided detailed information. Indeed, most of the participants' suggestions related to the acoustic and haptic guidance systems were aimed at codifying the instructions in terms of the angle of the correction. Significant performance differences emerged between the haptic and the acoustic guidance, which could in part be explained by the natural tendency to respond faster to auditory stimuli than to tactile stimuli, and in part by the different algorithms employed in the evaluation. This issue is addressed in the second study presented in this paper.
Looking at participants' performance, however, it is evident that, independently of the communication channel, the dichotomous nature of the stimulation (left-right) tended to stimulate long left and right corrections, leading to zigzagging. One participant explicitly mentioned this feeling while commenting on the haptic guidance. In terms of user experience, the haptic guidance was perceived as more acceptable than the acoustic and the binaural systems, and no different from the mechanical one. Indeed, most of the participants commented that the haptic bracelets could be hidden and did not interfere with the environmental acoustic information.

Study 2

This evaluation study was designed to clarify the differences that emerged in study 1 between the haptic and the acoustic guidance. To this aim, both input devices (bracelets and headphones) were interfaced to the same guidance algorithm and tested following the same experimental protocol as study 1, except that participants were required to test only the acoustic and the haptic guidance systems. In particular, the haptic guidance system was modified to use the acoustic guidance algorithm; in this way, we could test directly the effect of the interface. Ten participants (2 females, mean age 30 years, range 24-35) took part in the study.

Results

Descriptive statistics of error are reported in Fig. 7 as a function of Guidance and Path. The ANOVA showed a significant effect for the factor Path, F(2, 18) = 11.0, p < .01, but not for Guidance. The post-hoc pairwise comparison showed that the S path differed significantly from the other two (p < .05), confirming its higher complexity. The ANOVAs on time, length, and speed returned the same trend of results: a main effect only for Path. The analysis of the questionnaire confirmed a preference for the acceptability of the haptic guidance in public spaces. Finally, a between-study analysis of variance, comparing the performance of participants using the sound system in study 1 and study 2, returned no significant differences due to study, path, or their interaction.

Discussion

The study indicates that the haptic and acoustic interfaces do not differ in terms of performance, and that the results of study 1 may be attributed entirely to the different algorithms tested. Furthermore, it confirms a preference for the haptic guidance, but only regarding its social acceptability in public spaces. Furthermore, the similarity in both performance and user experience of the acoustic guidance in the two studies is an indicator of the strong reliability and external validity of the evaluation protocol. Tab. 5 proposes a ranking of the 4 guidance systems, combining empirical observations, measurements and participants' comments in both studies. The best guidance was without doubt the mechanical one, followed by the haptic, acoustic and binaural systems. The evaluations highlighted new challenges for the socio-technical design of future guidance systems. In particular, a major issue emerged with regard to the acceptability of the practical requirement of wearing headphones. The binaural system was perceived as a promising solution which captured the user's attention. However, more work is needed in order to improve the communication of the directional information.

CONCLUSIONS

In this paper we have presented four different solutions for guiding a user along a safe path using a robotic walking assistant. One of them is "active", meaning that the system is allowed to "force a turn" in a specified direction.
The other ones are "passive", meaning that they merely produce directions that the user is supposed to follow of her own will. We have described the technological and scientific foundations for the four different guidance systems, and their implementation in a device called c-Walker. The systems have been thoroughly evaluated with a group of young volunteers, allowing us to test the methodology and providing a baseline for future tests with potential elderly users. This paper contributed a novel evaluation protocol for comparing the different guidance systems, and opens new challenges for interaction designers.

The use of virtual corridors allowed us to test the precision of the guidance systems in maintaining the correct trajectory in the absence of any visual indications of the route. However, in a real-life scenario, users would most likely walk along a wide corridor with walls on the left and right that might help maintain a straight path in a particular part of the corridor (i.e., in the centre or towards the left/right). Moreover, corridor crossings are often orthogonal. In such scenarios, left and right instructions might be enough to allow the user to reach their goal, and the haptic solution could be the best trade-off among precision, freedom and cognitive workload, leaving vision and audition free to perceive environmental stimuli. Future research will repeat this study in more ecological contexts. From the technical point of view, an interesting future direction is the implementation of guidance solutions based on the use of electromechanical brakes, along the lines suggested in our previous work [9].
Managing Sustainable Working Hours within Participatory Working Time Scheduling for Nurses and Assistant Nurses: A Qualitative Interview Study with Managers and Staffing Assistants

Introduction

Healthcare organisations operate 24/7, which requires shift work. In the European Union, about 40% of healthcare workers are exposed to shift work [1]. Among nurses and assistant nurses, rotating shift work is common, alternating between morning, evening, and night shifts. Working hours affect sleep and recovery [2] and the maintenance of a work-life balance [3], which are important factors for employees' health [4] and intention to stay in the organisation [5,6]. Furthermore, insufficient sleep and recovery cause fatigue, which can be a patient safety hazard [7]. Thus, working hour arrangements are an important consideration for healthcare organisations in order to ensure employee health and patient safety, as well as to manage the worldwide challenges with recruitment and retention of staff in healthcare [8,9].

In participatory working time scheduling (hereafter referred to as participatory scheduling), working hours are planned with the aim of meeting both employees' individual preferences and the wards' specific staffing needs. Usually, ward managers, staffing assistants, and employees cooperate in the schedule planning, which often takes place in several steps and in cycles of negotiations and adjustments [10,11]. Participatory scheduling implementations vary, e.g., regarding how the process is organised, the degree of employee influence, and ward-specific scheduling rules. Participatory scheduling is commonly used among shift-working nurses and assistant nurses in Sweden, although there is heterogeneity in how it is implemented [12].

Influence over working hours has been related to several positive outcomes among shift-working healthcare employees, such as higher job satisfaction [13], improved work-life balance [14], reduced fatigue after work [15,16], reduced risk of short sleep and poor work ability [17], and higher self-rated quality of care [18]. Use of participatory scheduling was also found to be related to decreased sickness absence among nursing staff [19]. Furthermore, satisfaction with schedule flexibility has been related to lower intention to leave the workplace [20], while being forced to work night shifts has been cited as a reason for leaving the workplace [21].

Concerns have been raised that employee influence over scheduling could result in working hours that impair recovery and health, e.g., through prioritisation of social activities over recovery, sleep, and health [22]. While such concerns have been realised in some studies, e.g., an increase in long work shifts [10,15], other studies have found few such unfavourable effects of participatory scheduling [23]. A similarly mixed picture emerges with regard to the effects of participatory scheduling on job satisfaction [24].
Given some contradictory results regarding the impact of participatory scheduling, it is important to identify which factors are important for its successful implementation. Key enablers of implementation, as identified by a recent systematic review [25], were an understanding among the employees that the process will not always run smoothly and changes will become necessary, having a team-based approach involving all employees, continuous support and involvement of the head nurse, assessing the nursing workload before implementation, and using a computerised self-scheduling system. Examples of barriers were nurses seeing self-scheduling as an individual entitlement (instead of a joint agreement to enhance both the employee's life and the ward's functioning), organisations underestimating how sensitive the issue of scheduling is for employees, favouritism by the schedulers, and staffing shortage. The review's findings highlighted the importance of the implementation process and contextual issues for the success of participatory scheduling.

Previous research has indicated that certain shift schedule characteristics are associated with sleep and fatigue problems, such as a high frequency of quick returns (<11 hours between working shifts) [26,27], many consecutive working days [28], night work [29], and backward rotation of shifts (night-evening-day) [30]. Also, night shifts per se [31], >3 consecutive night shifts [31], quick returns [32], and long working hours [31,33] have been related to an increased risk of occupational injuries. Among nurses, long working hours (>12 h) [34], a high frequency of quick returns [35], and night shifts [36] have also been associated with a higher risk of medical errors, with fatigue as one plausible mechanism [36]. Furthermore, a single day off after night work seems insufficient to fully recuperate with respect to alertness [37] and cognitive function [38].

Support exists for an association between shift work and the future development of chronic diseases (e.g., type 2 diabetes, coronary heart disease, and cancer), with higher risks for shift work including night work, with disturbed sleep and circadian disruption as plausible mechanisms [4]. Regarding night work, >9 h shift duration [39], >3 consecutive shifts [39], and <28 h rest after the last night shift [40] have been associated with an increased risk of disease development.

Accordingly, schedule design in shift work has health and safety implications for both employees and organisations. In this article, we define sustainable working hours as working hours that promote both short- and long-term employee health, sleep, and recovery, as well as patient safety. The requirements for sustainable working hours, as identified by previous research, are that the shift schedule should limit the number of consecutive shifts [28] and quick returns [26,27,32,35]; limit the length of shifts [31,33,34]; limit consecutive night shifts to a maximum of 3 [39]; enable sufficient (>48 hours) rest time after night work [41]; and feature forward rotation of shifts [30].
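As an illustration of how such criteria can be operationalised, the sketch below checks a list of shifts against some of them. Thresholds not fixed by the article (maximum shift length, maximum consecutive working days) are placeholders, and the forward-rotation check is omitted for brevity.

```python
from datetime import datetime, timedelta

# Thresholds taken from the criteria above; the 12 h shift limit is an
# illustrative placeholder, since the article cites a range of studies
# rather than a single cut-off.
MIN_REST = timedelta(hours=11)                 # below this -> "quick return"
MAX_SHIFT = timedelta(hours=12)
MAX_CONSECUTIVE_NIGHTS = 3
MIN_REST_AFTER_NIGHTS = timedelta(hours=48)

def check_schedule(shifts: list[tuple[datetime, datetime, bool]]) -> list[str]:
    """Flag violations of the sustainable-working-hours criteria.

    Each shift is (start, end, is_night), sorted by start time.
    """
    issues = []
    consecutive_nights = 0
    for i, (start, end, is_night) in enumerate(shifts):
        if end - start > MAX_SHIFT:
            issues.append(f"shift {i}: longer than {MAX_SHIFT}")
        consecutive_nights = consecutive_nights + 1 if is_night else 0
        if consecutive_nights > MAX_CONSECUTIVE_NIGHTS:
            issues.append(f"shift {i}: more than {MAX_CONSECUTIVE_NIGHTS} nights in a row")
        if i + 1 < len(shifts):
            rest = shifts[i + 1][0] - end
            if rest < MIN_REST:
                issues.append(f"shift {i}->{i + 1}: quick return ({rest})")
            if is_night and not shifts[i + 1][2] and rest < MIN_REST_AFTER_NIGHTS:
                issues.append(f"shift {i}: less than 48 h rest after night work")
    return issues

d = datetime(2023, 1, 2)
shifts = [(d.replace(hour=14), d.replace(hour=22), False),          # evening shift
          (d.replace(hour=7) + timedelta(days=1),
           d.replace(hour=16) + timedelta(days=1), False)]          # next morning
print(check_schedule(shifts))   # flags a 9 h quick return
```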
Leadership behaviours characterised by consideration and support, and a good-quality leader-employee relationship, are positively related to employee well-being and lower stress levels [42]. Recent studies have also suggested that leadership is an important factor in facilitating employees' sleep. The concept of sleep leadership has been defined by a set of behaviours in which leaders both encourage and enable employees to obtain healthy sleep [43]. A series of studies among military personnel indicates that employees' experience of sleep leadership is associated with higher subjective sleep quality and sleep quantity [44], as well as with less sleep disturbance and less sleep-related impairment during the daytime [45].

The current project uses the human-technology-organisation concept as its theoretical basis, to understand the role of the individual in the complex organisation of healthcare. The concept suggests that work activities can be described, analysed, and understood by describing the interactions between the three subsystems: human (also referred to as "individual" in this work), technology, and organisation [46]. These subsystems play a key role in participatory scheduling, where both employees and the organisation are involved in the planning of working hours, often using computerised scheduling systems.

Given the importance of working hours for both employee health and patient safety, and the widespread use of participatory scheduling among healthcare personnel, there is a need to understand how to optimise participatory scheduling models while ensuring sustainable working hours. The aim of this study was to provide insights into how healthcare managers and staffing assistants work to achieve sustainable working hours within a participatory scheduling system.

Materials and Methods

2.1. Design. This qualitative descriptive study examined participants' experiences and thoughts [47], as part of a larger project investigating how healthcare organisations can achieve sustainable working hours. The study adhered to the Consolidated criteria for reporting qualitative research (COREQ) [48]; see Appendix 2.

2.2. Context. Four regions in Sweden (including one metropolitan region) were represented in this study. There were a variety of ways of organising the scheduling process, involving the manager, the employees, one or more staffing assistants, and/or scheduling groups (a group of nurses or assistant nurses who had time designated for work with scheduling). Schedules were planned for between 5 and 12 weeks at a time (up to 16 weeks during summertime). A scheduling period commonly started with a planning period, in which the employees proposed which shifts they would like to work during the coming period, using either paper and pen, a whiteboard, or a technological system ("Tessa," "Heroma," and/or "Adacta"). There were rules about a minimum number of certain shifts (evening, weekend, and/or night shifts) in each scheduling period which employees had to follow. Also, the employees were allowed to place "vetoes" on shifts they did not want to work (varying between 1 veto per week and 3 vetoes per 10 weeks). After the planning period, the adjustment process started and lasted between 1 and 3 weeks; staffing shortages and excesses on shifts were identified, and shift changes were made to fulfil staffing and competence needs on each shift.
The first part of the adjustment process was carried out by the employees themselves. In some wards, a scheduling group or staffing assistant was responsible for either the whole adjustment process or for making further necessary adjustments after the employees' own adjustments. Final approval of the schedule was given by the manager, or in some cases by staffing assistants, although the manager had the formal responsibility for the schedule. Sometimes the planning and adjustment process was divided into two steps, where planning and adjustment of weekend and/or night shifts were made in a first step, and the remaining shifts in a second step.

2.3. Participants. Purposive sampling was used to obtain participants from diverse regions in Sweden. Inclusion criteria were first-line managers and staffing assistants who worked actively with working time scheduling and used participatory scheduling. Twenty-seven participants were invited, of whom eleven first-line managers and nine staffing assistants accepted. The participants were 19 women and one man, aged between 28 and 61 years (M = 46), who had worked with planning work schedules for between 3 and 30 years (M = 9). Managers' professions were registered nurses (n = 7), specialist nurses (n = 3), and midwife (n = 1). Staffing assistants' professions were assistant nurses (n = 7), behavioural scientist (n = 1), and unknown (n = 1). Education about working hours and scheduling varied, with participants often being introduced by their predecessor, who trained them in the scheduling software and informed them about working time regulations. A few (n = 6) had received an education about "healthy working hours." The participants worked at wards with the following medical specialties: neurology, maternity, pulmonary medicine and hematology, orthopedics, medicine, oncology, pediatric emergency, medical emergency, and medical intermediate care.

2.4. Data Collection. The participants were contacted by the research group through their work e-mail addresses and informed about the aim of the study. After receiving written informed consent, the last author (an associate professor with previous experience of qualitative semistructured interviewing and analysis) and a master's student, trained and supervised by the last author, conducted the interviews, which took place during March 2020-October 2021 face-to-face (n = 4), by phone (n = 15), and by video call (n = 1). The participants chose the interview method and location (their homes or workplaces). The interviews, which lasted between 24 and 73 minutes (M = 47), were audio-recorded and transcribed verbatim for further analysis. The interviews were conducted in Swedish, the native language of both the informants and the interviewers.

2.5. The Interview Guide. An interview guide with semistructured open-ended questions was designed for the purpose of this study. The guide started with demographic questions, followed by nine (staffing assistants) or ten (managers) main questions about the work scheduling process, follow-up procedures, rules and regulations, challenges and need for support, ideas for improvement, technical support, and any conflicts arising during scheduling. Probing questions such as "please tell more/explain more" were used to deepen the discussions in the interviews. Participants were asked to focus on work procedures during normal operation and not during the peaks of the COVID-19 pandemic. The questions differed slightly between managers and staffing assistants (see Appendix 1).
2.6. Data Analysis. Data were analysed using the six phases of thematic analysis according to Braun and Clarke [49] (see Table 1). Initial coding and searching for themes were conducted in Swedish. From phase 5, defining and naming themes, and for the rest of the process, English was used. All authors who analysed data were fluent in Swedish and English in both writing and speech. Experiences referring to working hours and scheduling during the COVID-19 outbreak and peaks were identified and excluded from this analysis. The first author (MSc, licensed psychologist) coded all the interviews. The second and the last authors each coded 50% of the interviews, independently. The second author is an experienced qualitative researcher (associate professor), who confirmed the coding structure and the analysis process. The final themes are the result of several discussions between all authors. The first interview was treated as a test interview, meaning that the participant was asked whether he/she understood all the questions asked and whether the order of the questions felt relevant. After the first interview, the authors reviewed the interview guide to check whether the answers received addressed what the questions were intended to capture. As no major changes were made to the interview guide, the first interview was also included in the data analysis. During the last interviews, no new information was identified, and the research team considered that the data repeated itself in these final interviews.

2.7. Ethical Considerations. This study was approved by the Swedish Ethical Review Authority (2019-05245). The study followed the Declaration of Helsinki regulations [50] and local ethical guidelines and regulations [51].

Results

Four themes and fourteen subthemes were identified (see Table 2). The results described are from both the managers' and the staffing assistants' viewpoints. Differences in their experiences are pointed out with subheadings or in the text.

Distributed Responsibilities and Decision Making. Responsibility for the schedule was usually distributed between different persons. Commonly, the staffing assistants and/or scheduling groups did much of the administration, scheduling adjustments, and communication with employees, but this was sometimes undertaken by managers. Participants felt that some employees did not engage sufficiently in the process, e.g., not adjusting the schedule to fulfil staffing needs during specific shifts or not complying with the rules during planning. The manager usually had a continuous dialogue with the staffing assistant or scheduling group during the process and was often more directly involved in difficult situations, such as when the staffing assistant and/or scheduling group could not find a scheduling solution or when employees expressed high dissatisfaction. The views of managers and staffing assistants, respectively, are described below.
(1) Managers' View. Some managers perceived the scheduling groups' work as unsatisfactory, such as planning schedules without enough recovery opportunities. Managers reported identifying working hours with a potential risk for health and/or safety, such as many consecutive shifts or an insufficient competence mix on shifts, after the employees' and the staffing assistant's adjustments. The managers who did not have a formal staffing assistant reported needing support in the scheduling process due to the large number of employees: "it is impossible for me as a manager to check a group of 70 people" (Manager 6). Attitudes towards involvement in scheduling varied. Some felt that this was an important part of their leadership, providing insight into the employees' schedules and having positive effects on the employee-manager relationship, which in turn made the scheduling process smoother. Others felt that responsibility for the schedules should be allocated to staffing assistants: "I don't think managers should work so much with schedules (...) it could actually be done by staffing assistants" (Manager 8).

(2) Staffing Assistants' View. Staffing assistants often described themselves as intermediaries between employees and the manager. This could be challenging, as they received opinions and criticism regarding the schedule from employees, yet they had no formal mandate to address these or to make final decisions. Moreover, sometimes they were part of the group of employees being scheduled, which made it difficult to stay neutral. Some experienced good collaboration and support from managers, whereas others did not: "you have no answer as a staffing assistant (to give the employees) (...) it means that the manager must be engaged and offer support" (Staffing assistant 3). Often the staffing assistants had the role of asking employees to work extra shifts, which was experienced as emotionally demanding when they knew the employees were tired. In those cases, support from the manager was important.

Time-Consuming. Much time was spent on scheduling by managers, staffing assistants, and employees alike. One manager recounted that "when you are done with one (scheduling period), you almost have to start with the next, it takes a lot of time" (Manager 2). A few managers questioned the benefits of participatory scheduling given how time-consuming the process was. Other managers, and staffing assistants, thought the time spent was worth it for the benefit of employees having influence over their working hours. It was also discussed that employees and/or scheduling groups did not have enough time allocated for scheduling, which was suggested as one explanation for why the employees' engagement in the scheduling was sometimes insufficient.

Table 1: Description of the analysis process according to the six phases of thematic analysis as described by Braun and Clarke [49].
Phase 1, familiarisation with data: Data were read through by all authors separately to grasp the whole; a reflective phase including writing notes and own reflections.
Phase 2, generating initial codes: Data were coded separately, and the authors took notes about their own thoughts.
Phase 3, searching for themes: The codes were discussed by all authors and the search for themes started.
Phase 4, reviewing themes: The interviews and codes were revised again by all authors, first separately and then in discussion with each other, and the themes were reviewed once again.
Phase 5, defining and naming themes: The final themes were identified, and their content was described.
Phase 6, producing the report: The content of the themes and subthemes was formed and was checked against the raw data one last time.

Establishing a Shared Responsibility Framework and Fairness. The importance of establishing a shared responsibility framework in scheduling, between the workplace and the employees, was emphasised. Some perceived that the employees expected to freely choose their working hours, a misunderstanding that was counteracted through continuous communication about the importance of "giving and taking" (Staffing assistant 5). The scheduling process could also be made smoother by pointing out to employees that they had ample possibility to influence their working hours and showing them that the workplace aimed to be highly flexible in meeting employees' requests. Other ways of establishing a shared responsibility framework were to gather the whole staffing group to discuss solutions to scheduling issues, e.g., if many employees had applied for vacation during the same weeks.

Respondents also emphasised the importance of fairly distributing unpopular shifts, typically evening, night, and weekend shifts, and public holidays. For example, if a day shift was overstaffed during the adjustment period, the choice of which employee should be moved to the evening shift the same day might be based on who had worked the least evening shifts in that scheduling period. Fairness also played a role in determining how many changes were made in the employees' proposed schedules. Some respondents reported that they kept track of how many changes were made in each individual schedule during each scheduling period and tried to even that out in the coming periods. If changes were needed to fulfil staffing and competence needs on a shift, the process began with the adjustment of the schedules of those employees who had not engaged in the scheduling process.

The Individual Relationship with the Employee: Continuous Dialogue, Mutual Problem Solving, and Adaptations. There was an emphasis on the importance of the individual relationship with the employee. This provided insight into individual life circumstances, preferences, and tolerance for shifts and shift combinations, which could be considered in scheduling, for example, making special adaptations in the schedule if the employee had experienced a significant life event, or letting an employee work only day shifts every other week for private reasons. It was considered important to have an open dialogue regarding the employees' working hours, as this gave insight into the employees' schedules, workload, and need for recovery. This also facilitated mutual problem solving and discussions about the importance of sustainable scheduling. Participants felt that it was important that the staffing assistants were easily accessible to the employees.
Managers described continuously looking at their employees' schedules and sometimes having to remind them about recommendations for sustainable scheduling. Some managers also reported that they investigated an employee's past and current working hours if the employee seemed to feel unwell, and that they had noted potential associations between compressed working hours and sick leave. Moreover, working hours were discussed during the yearly staff appraisal.

Staffing assistants sometimes had knowledge of individuals' weekly leisure activities and took those into consideration in the planning. However, having a lot of private information about the employees could make the work more difficult: "it was easier in the beginning when you had no idea, now I know that this person goes riding Monday evenings (...) and he doesn't want to work evening-day, and she doesn't want to work day-evening (...) it is a lot" (Staffing assistant 4). Moreover, dialogue with employees was described as having positive consequences for sustainable working hours: "I have talked to them (...) the schedules are looking much better. They (the schedules) were awful (when I started working here), it was every weekend, and it was many consecutive shifts (...) because nobody had talked to them." (Staffing assistant 8)

Managing Dissatisfaction. It was reported that working hours and influence over scheduling were of great importance for many employees and sometimes provoked strong feelings. Dissatisfaction was sometimes expressed by employees when their scheduling requests were not met, and it was described as "difficult making everyone satisfied with their schedule" (Staffing assistant 6). An uneven distribution of weekend shifts or unmet scheduling requests could also cause dissatisfaction, and work during public holidays could provoke strong feelings. Dissatisfaction was managed by explaining and giving a rationale for the shift changes. Another approach was to highlight to the employee the extent to which their scheduling requests had been met. In wards where the technological system carried out much of the adjustment process automatically, problems with employees' experiences of injustice in scheduling had decreased.

3.2.4. Education, Support, and Clear Scheduling Rules. New employees were given an introduction to the scheduling process, including information about rights and obligations and the importance of recovery. Sometimes, all employees were offered continuous support from staffing assistants, and scheduling was a recurrent topic in workplace meetings. Communication about rules (e.g., the number of weekend shifts, vetoes, etc.) facilitated the scheduling process. It was communicated to the employees that, to be fully guaranteed days off, they had to use vacation days instead of vetoes, although vetoes were commonly approved too. More education of employees about scheduling and its implications for health was needed, in order to increase "understanding of the body and the circadian rhythm (in relation to scheduling)" (Manager 11).
Balancing Sustainable Working Hours, Employees' Scheduling Requests, and Competence Needs

Official/Unofficial Guidelines for Sustainable Working Hours. Guidelines for sustainable working hours were communicated to the employees and considered during the adjustment process. A majority had guidelines for a limit on weekly working hours and a maximum number of consecutive work shifts (usually five or six). Other guidelines were for a minimum of two consecutive days off, a maximum number of consecutive night shifts (often three), 48-72 hours off after working night shifts, and forward rotating shifts, i.e., day-evening-night. A minority described a lack of guidelines for sustainable working hours. While some workplaces had stricter guidelines, others let the employees choose how to relate to them, i.e., the guidelines were more unofficial: "We have presented research about healthy working hours (...) but we give them (the employees) the freedom to schedule as they like (...) we have no rules prohibiting them to plan as they want anyway." (Manager 3)

Shift combinations with quick returns (usually an evening shift followed by a day shift, resulting in <11 hours between shifts) were discussed with varied attitudes and recommendations. Some encouraged employees to try to avoid or minimise quick returns and informed them about the potentially negative health effects; others lacked guidelines regarding these. Some emphasised the problem with general guidelines about quick returns, referring to individual variation in tolerance.

Employees' Scheduling Requests versus Sustainable Working Hours. Many participants reported that the employees themselves took responsibility for self-scheduling sustainable working hours. However, examples were given of self-scheduled unsustainable working hours, such as compressing working shifts in order to get longer continuous periods of time off. Several managers and staffing assistants attached importance to the employees' freedom in the scheduling process, stating that potentially unsustainable working hours (e.g., 7-10 consecutive shifts, double shifts, working a day shift the day after leaving the night shift, and quick returns) were accepted if the employee had chosen them. It was stated that "if they themselves have proposed an unhealthy schedule (...) I do not change it automatically, then you have lost the point of having an individual schedule" (Manager 3), and that tolerance, and what is experienced as a healthy schedule, could vary between individuals. However, not all had this approach, and some clearly stated that sustainable working hours took first priority regardless of the employees' scheduling requests.

Considering Recovery Opportunities. Recovery opportunities were considered important in scheduling. One staffing assistant, discussing the adjustment process, described having "a checklist for healthy working hours (...) how many consecutive shifts, how much daily rest and weekly rest" (Staffing assistant 5). Planning schedules with enough recovery between shifts for employees working full-time on a rotating three-shift system was described as a great challenge by staffing assistants. Overstaffing of weekday shifts was sometimes necessary to facilitate an even distribution of recovery across individual schedules.

Competence Mix on Shifts.
Competence mix on shifts was considered during the adjustment process. In some wards, a competence grading based on experience was used, with the aim of covering every shift with a mix of new and more experienced employees. Sometimes this was difficult due to high staff turnover and the fact that "new nurses are starting all the time" (Staffing assistant 2). Sometimes employees wanted to choose which colleagues to work with, which could result in an insufficient competence mix (e.g., many new employees working the same shift).

Staffing Levels, Short-Term Absence, and Solutions. Staffing shortage and high turnover rates were described as major barriers to achieving sustainable working hours. Understaffing led to difficulties meeting employees' shift requests, irregularity in individual schedules, fewer approved vacations, more overtime work, and shifts with an insufficient competence mix. Understaffing on shifts also reduced recovery opportunities for employees during shifts. Covering night and weekend shifts was a big challenge. Furthermore, staffing shortage became a serious problem during short-term absence causing shift vacancies, which were described as "a permanent stressor" (Staffing assistant 1); covering night shift vacancies was especially difficult. At the few wards where the staffing level was described as sufficient, the scheduling process also worked better.

Various attempts were made to manage problems with understaffing and shift vacancies, for example, reducing the number of hospital beds, having part-time employees cover weekend and night shifts, having a local nurse/assistant nurse substitute pool, or hiring temporary agency nurses/assistant nurses. Another strategy involved forecasting workload peaks and planning for higher staffing levels in advance. Some interviewees reported sharing staff with adjacent wards. However, regarding employees rotating to other wards, one manager noted that "it's a disadvantage to not have a full overview of the employees' working hours (including overtime work)" (Manager 6).

Specific solutions for short-term shift vacancies included borrowing employees from other wards (although many employees disliked this), moving employees from upcoming overstaffed shifts, asking employees to work the vacant shift instead of a coming shift (i.e., postponing the vacancy), or asking employees to work extra shifts or to stay and work until the vacancy was filled. Working extra shifts could lead to the guidelines for sustainable scheduling being breached. Before asking employees to work extra shifts, individual life circumstances and recovery opportunities in the schedule were considered. At some wards, employees could choose not to be asked to work extra shifts. It was reported that employees usually cooperated and were helpful in covering shift vacancies. One staffing assistant thought that it was "difficult for the employees to say no, when they know how high the workload is when you are understaffed" (Staffing assistant 1). Sometimes there were employees who were willing to work many extra shifts. While there was an ambition not to ask employees who had worked many extra shifts recently, sometimes there was no choice: "but that is very difficult, because if no one wants to work an extra shift, and the patient safety is threatened, you choose the person that says yes" (Manager 9). Double shifts were avoided if possible, but they could occur in periods with a high workload and/or many shift vacancies, if the employee agreed.
The scheduling process for temporary agency personnel was sometimes organised differently, as they covered the shifts that the ordinary employees opted out of. They tended to work a lot of overtime, double shifts, and inconvenient working hours. One staffing assistant reported having a poor overview of the temporary personnel's working hours: "They (temporary personnel) usually have one or two other workplaces that they go to, it feels like they work all the time. (...) what they do in other places, I don't know if they work 31 days in a row." (Staffing assistant 8)

3.4.2. Working Procedure at the Wards. How care was organised influenced the need for quick returns. Continuity of care was facilitated if employees on the morning shift had also worked the evening shift the day before. It was believed that some employees preferred this because "they want that overview in the morning (...) a lot happens in a short time in the morning, and they have to do all their tasks and be prepared for the round, which starts quite early" (Manager 11). To reduce the need for quick returns, other managers described changes such as efficient procedures for handover between shifts, i.e., verbal reporting or bedside reporting, and standardised documentation templates stating what was last done and what needs to be done next, thus making it easier to work morning shifts without quick returns.

Working Time Arrangements for Night Work. It was common for employees to get a reduction in their working hours if they worked night shifts. However, it was reported that many employees felt that they had to work a very high number of night shifts to get a satisfying working hour reduction, which some experienced as too burdensome; for this reason, some had left the workplace. In some wards, full-time night workers were hired to cover night shifts. This was experienced as a good solution, since working a rotating three-shift system was seen as strenuous. In some wards, employees who had been identified as having low night shift tolerance were excluded from night work, while other wards shared night shifts among all employees irrespective of tolerance: "from a safety perspective, the nights are not perfect, especially when you force people to work night shifts (...) who have been awake for 24 hours when they come to work" (Manager 6).

Technological Enablers and Barriers for Sustainable Working Hours. Technological systems were widely used in the scheduling process and were experienced as time-saving and helpful. They could facilitate the creation of sustainable working hours by automatically generating and adjusting schedules based on predefined settings, such as individual general preferences (e.g., avoidance of certain shifts), employees' shift requests, staffing needs and competence, guidelines for sustainable working hours, and working time regulations. Technological systems could also provide an overview of the competence mix, vacant or understaffed shifts, and staffing resources both daily and over time. It was found to be helpful when the system could provide an overview of an employee's entire schedule, including details of when the employee had worked on other wards, the amount of individual overtime worked, shift changes, and whether employees had followed the rules for scheduling.
The technological systems usually had functions to generate warnings when working time regulations were breached, such as an insufficiently long weekly rest period, a short rest between shifts, or too few days off during a scheduling period. While some interviewees always examined the reasons for the warnings and made the necessary changes if possible, others reported that most warnings could be dismissed without any further action. One manager stated that "there is nothing to do about it (warnings), when they (the employees) have already switched their shifts" (Manager 1). The reasons for the warnings were also sometimes hard to understand: "it says that there is not enough weekly rest, and too short, too close shifts (...) maybe every week during a 10-week period, and I have to try to understand, why does it say this?" (Staffing assistant 8). Some technological systems were described as sluggish and difficult to navigate. Technological errors were common, with settings and changes suddenly disappearing.

Some interviewees highlighted having insufficient knowledge to use all the functions. Another disadvantage was when the systems generated work schedules based on working time patterns from a period with a high workload and overtime work, which resulted in unsustainable working hours that had to be adjusted manually. Also, the systems sometimes made unnecessary adjustments, resulting in a suboptimal solution for both employees and the workplace. Another problem was the lack of notifications of unsustainable working hours and a poor overview for employees when planning and adjusting their schedules. When only one week at a time was visible, the employees planned too many consecutive shifts by mistake (i.e., they continued planning shifts at the beginning of a week although they had worked the preceding weekend). Furthermore, the technological scheduling systems required adequate staffing to work properly.

Discussion

The results point to several factors that may be important for achieving sustainable working hours within the participatory working time scheduling process. These include the distribution and clarification of responsibilities and guidelines, leadership factors, consideration of recovery opportunities and the competence mix on shifts, contradictions between employee requests and sustainable working hours, and contextual factors (e.g., staffing, work procedures, night work arrangements, and technology). The most important findings are discussed within the context of the human/individual-technology-organisation framework [46], which shows the complexity of scheduling in healthcare organisations, where employees' individual preferences, organisational factors (e.g., demands and leadership behaviours), and technological solutions are interconnected.
Despite the existence of guidelines for sustainable working hours, these could be breached due to individual factors (e.g., employees' requests) or organisational factors (e.g., staffing shortage and shift vacancies). This demonstrates that sustainable working hours are not always a priority at the individual and organisational levels. The results also demonstrate that employees' requests were highly valued and sometimes prioritised over sustainable scheduling. The issue is complex. While employee influence over working hours is important in many respects [13-20], certain scheduling characteristics are associated with poor employee sleep and health and with risks to patient safety [26-36, 39-41]. Moreover, at the organisational level, employers are legally responsible for employees' health and safety at work [52], in which working hours play an important role. Hence, when employees are given a high degree of responsibility for their own working hours, the resulting schedules may not be compliant with the law. Future studies are needed to examine the driving forces determining priorities in scheduling, at the levels of the individual (employee) and the organisation, and to study the consequences with respect to employee health and patient safety.

At the organisational level, the results identified ways in which leaders, working together with individual employees, could promote sustainable working hours, namely, through establishing a shared responsibility framework, fostering an individual relationship with the employee, providing support, and managing dissatisfaction. Similar to the concept of sleep leadership, which has been related to better sleep outcomes [44,45], leadership behaviours that enable and facilitate sustainable schedules together with the individual (employee) might be important. Challenges for leadership were also identified, such as the difficulty of maintaining an overview of schedules when the group of employees is very large. To achieve and maintain sustainable working hours, scheduling needs to be made a priority issue for managers, with clearly defined responsibilities established within the organisational leadership. It was notable that few managers and staffing assistants in the current study had received formal education about healthy working hours. Organisations could benefit from the development of standardised education programs that are made a prerequisite for being responsible for working hour scheduling.

Staffing assistants, rather than managers, were most commonly involved in discussions with employees about scheduling and working hours. This sometimes placed the assistants in difficult positions. They often knew the employees' individual preferences and life circumstances and would try to take these into consideration, adding to the challenges of creating schedules. Assistants were often the recipients of employees' requests and complaints but had no formal responsibility for determining working hours or for decision making. Their experiences suggest a need for formal scheduling guidelines with clearer rules for sustainable scheduling, handed down to staffing assistants from higher up in the organisation. They also highlight the need to ensure that the staffing assistant's role, responsibility, and mandate are clearly defined.
Employees' perception (at the individual level) of unfairness in scheduling was identified as a source of dissatisfaction and as a hindrance to the scheduling process. Hence, fairness was described as important to take into consideration during scheduling. However, fairness can be a barrier to sustainable working hours if the more sustainable scheduling solution is not the most "fair." Therefore, organisational guidelines for scheduling should also specify which factors (e.g., sustainability, fairness, etc.) should have the highest priority when staffing assistants and managers plan and adjust the schedules.

Technological systems were both enablers and barriers in scheduling. They often had usability issues, such as unclear warnings for unsustainable scheduling that were hard to understand and easy to dismiss. However, some featured technological solutions that facilitated the scheduling process, through the automatic generation and adjustment of schedules and by providing overviews of schedules. Technological systems have great potential to enhance sustainable scheduling and merit further development, following a user-centred systems design approach that incorporates the users' knowledge, skills, and perspectives into the design process [53]. A technological solution that considers individual preferences and organisational demands could be a useful means of support for staffing assistants' work during the adjustment process and could mitigate employees' perceptions of unfairness or favouritism [25].

With regard to contextual factors, staffing shortage and short-term shift vacancies were identified as especially large barriers to the scheduling of sustainable working hours. Inadequate staffing levels (an organisational factor) are associated with burnout and low job satisfaction among nurses, and with low patient care quality [54]. At the same time, healthcare organisations face challenges in recruiting and retaining staff [8,9], which contributes to the staffing problems. In a previous study including nurses from 10 different countries, satisfaction with schedule flexibility was associated with lower intention to leave the workplace [20]. Also, having flexible work hours has been cited as one reason for choosing to work for a temporary employment agency instead of working as a permanently employed nurse [55]. Hence, offering employees the possibility to participate in scheduling might increase intentions to stay in the organisation. However, it is important that the process is optimised to meet the needs of both the employees and the organisation, and that it does not result in working hours that might jeopardise employee health and patient safety. Optimisation will be supported by taking into account the interactions between individual, organisational, and technological factors highlighted in this study, thereby promoting the retention of nurses.
Sufficient staffing is an essential prerequisite for achieving sustainable working hours within a participatory scheduling system. The use of temporary agency personnel was a common solution to staffing shortages. However, there was a risk of such staff working excessive or unhealthy hours if, for example, managers and/or staffing assistants lacked a full overview of their working hours. Such cases highlight the need to pay special attention to sustainable scheduling for temporary personnel. In addition, mixing temporary and permanent nurses in work teams might trigger social comparisons and envy and affect communication within nursing teams. Organisations should seek to address such issues when using temporary agency staff by, for example, striving for transparency in how resources are allocated, promoting perceptions of fairness, and working to promote the exchange of experiences and knowledge to foster mutual learning [56].

The results also demonstrated that working hours are highly intertwined with contextual factors, such as the work procedures on the wards (an organisational factor). For example, consistent with previous findings [57], quick returns were believed to facilitate work on the morning shift, leading some individuals to prefer those shift combinations. Thus, the way in which work procedures are organised can influence preferences for certain shift combinations, while the removal of certain shift combinations may hinder work procedures and diminish employees' satisfaction. A framework for complex interventions should therefore be used when evaluating changes to working hours, one that also takes into account what impact the intervention has in addition to its intended outcome and considers how it interacts with the context in which it is implemented [58].

A shared responsibility framework (i.e., active engagement by all persons involved in the scheduling process) was regarded as essential for the process to run smoothly. However, employees' engagement during the planning and adjustment of schedules was sometimes felt to be lacking. One suggested explanation for employees' failure to engage was the absence of allocated time within the workday for scheduling activities. Increasing employee engagement in the scheduling process and strengthening the sense of shared responsibility will be challenging, but it is likely that employees' engagement is partly dependent on organisational factors (e.g., allocated time, the degree to which requests are met, leadership behaviours, education, and support). Engagement can thus be considered within the context of the interaction between the human and the organisation [46], highlighting the need for organisational changes or interventions. Further research is needed to identify organisational changes that could motivate employees to take greater responsibility for formulating their own work schedules.

4.1. Methodological Considerations.
The findings are based on a rich set of data, with information repeating itself across interviews, indicating that the number of informants was sufficient [59,60]. One potential limitation is the use of multiple interview methods, although the quality of the interviews did not vary. Trustworthiness [61] in this study was ensured according to the following criteria. (1) Credibility (the fit between the researchers' views and the representation of them) was obtained by researcher triangulation. Several researchers conducted the analysis, involving peer debriefing with external checks on the research process and an examination of referential adequacy in which the results were checked against the raw data as the last step in the analysis. (2) Confirmability was accomplished by explaining and describing the theoretical, methodological, and analytical choices made throughout the manuscript. Moreover, the findings were demonstrably derived from the data, as shown by the provision of quotations. (3) Dependability was assured by the clear descriptions of the analysis process, enabling the reader to evaluate the process. (4) Reflexivity was addressed by involving authors from multiple disciplines in the analyses. All of the authors involved in the analyses were female, two of whom were experts in working hours and participatory scheduling, and the third was an expert in the conduct of qualitative research. The fourth author (male), an associate professor and expert in working hours and participatory scheduling, was involved in the conceptualisation of the study and the preparation of the manuscript.

The authors frequently discussed their preunderstanding throughout the analysis process. Professional preunderstanding is necessary for a deeper understanding of the context and the interviews, but it carries a risk that familiar facts may be overlooked. The text was read through several times, and our preunderstanding was discussed throughout the analysis process.

4.2. Limitations. Some limitations of this study should be noted. Firstly, the vast majority of the participants were women, reflecting the fact that healthcare is a female-dominated occupational sector in Sweden. Secondly, as only 4 out of 21 regions in Sweden were represented, key issues may have been neglected. However, the sample was drawn from regions of different sizes and locations, thus providing data from a broad range of contexts and suggesting that the results are transferable to other healthcare settings. Finally, as the data collection took place during the COVID-19 pandemic, it is possible that this affected participants' views, although the focus of the interviews was on normal operations. Experiences referring to scheduling during the COVID-19 outbreak and peaks were excluded from the analysis.
Conclusions

Participatory working time scheduling offers potentially significant benefits for healthcare organisations that are facing major challenges in recruiting and retaining staff. However, to ensure sustainable working hours within the context of participatory scheduling, it is important to address a range of factors at multiple levels of the organisation. The factors identified in this study include clarifying responsibilities between employees, staffing assistants, and managers; making working hours a priority issue for leaders; defining clearer guidelines for sustainable scheduling (including adjustments of schedules) that are handed down from higher up in the organisation; allocating time for scheduling; and increasing the engagement and involvement of the employees in the scheduling process. In addition, contextual factors need to be addressed, such as adequate staffing levels, working procedures on the wards, working hour arrangements for night work, and technological solutions. Achieving sustainable working hours within the context of participatory scheduling requires targeting multiple levels of the organisation. Future research should investigate the impact that the factors identified in this study have upon realised working hours (e.g., through the study of payroll data) and upon employee health. In addition, research is warranted that addresses participatory scheduling from the employees' perspective.

Table 2: Overview of main themes and subthemes.
2023-12-13T16:07:42.628Z
2023-12-09T00:00:00.000
{ "year": 2023, "sha1": "19cfa578677f4d0b4bbccf4cdb644579f34b3ff1", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jonm/2023/8096034.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "76d6e438b765911b9aeec4324442c1f1665c83ac", "s2fieldsofstudy": [ "Medicine", "Business" ], "extfieldsofstudy": [] }
14071117
pes2o/s2orc
v3-fos-license
Artificial selection reveals the energetic expense of producing larger eggs

Background The amount of resources provided by the mother before birth has important and long-lasting effects on offspring fitness. Despite this, there is a large amount of variation in maternal investment seen in natural populations. Life-history theory predicts that this variation is maintained through a trade-off between the benefits of high maternal investment for the offspring and the costs of high investment for the mother. However, the proximate mechanisms underlying these costs of reproduction are not well understood. Here we used artificial selection for high and low maternal egg investment in a precocial bird, the Japanese quail (Coturnix japonica), to quantify the costs of maternal reproductive investment. Results We show that females from the high maternal investment lines had significantly larger reproductive organs, which explained their overall larger body mass and resulted in a higher resting metabolic rate (RMR). Contrary to our expectations, this increase in metabolic activity did not lead to a higher level of oxidative damage. Conclusions This study is the first to provide experimental evidence for metabolic costs of increased per offspring investment.

Background

The environment experienced during early development can have significant and long-lasting consequences for an individual's phenotype [1,2]. Mothers are in a unique position to influence these early life conditions through, for example, the quantity and quality of resources they provide to their offspring [3]. Despite the positive effects of increased maternal investment on offspring fitness [3,4], there is a large amount of variation in reproductive investment seen in natural populations [5,6]. Life-history theory predicts that this variation is maintained by trade-offs between the benefits of increased investment for the offspring and the associated costs to the mother [7-9]. However, despite being a central tenet of life-history theory, the mechanisms underlying these costs of reproduction are not well understood [10,11].

Various mechanisms have been proposed to mediate the costs of reproduction. Costs may, for example, occur because females reallocate energy or resources from self-maintenance to reproduction [12]. If these reallocations cannot fully cover the increased energetic demands during reproduction, females have to increase their rate of energy conversion, through an increase in metabolic rate. This, in turn, can lead to a higher production of reactive oxygen species (ROS), produced in the mitochondria as a by-product of cellular respiration [13]. When not balanced by antioxidant defences, high levels of ROS are associated with cellular damage, referred to as oxidative stress [13], which has been proposed to be a key mediator of life-history trade-offs [14,15]. Furthermore, an increased energetic demand may lead to extended food searching and so a higher predation risk [16,17]. To date, most studies that explored the costs of reproduction, and in particular the costs of per offspring investment, are correlative [15] and therefore cannot reveal trade-offs [18]. In birds, and especially in precocial species that do not show extensive parental care after hatching, per offspring maternal resource investment is reflected in the size of the egg [19], which varies considerably in natural populations [20].
Although egg production per se is known to be an energetically demanding process [12,21], few studies have explicitly quantified the costs of increased per offspring investment, and those that have mainly focused on egg size-number trade-offs, for which there is little evidence [22-25]. Maternal egg investment (i.e. per offspring investment) is notoriously difficult to alter experimentally and, to our knowledge, no study has manipulated maternal egg investment and examined the costs to the mother. To address this gap, we established artificial selection lines for high and low maternal egg investment in a precocial bird, the Japanese quail (Coturnix japonica). Through artificial selection, we experimentally manipulated egg size, producing females that differ genetically in how much they invest in their eggs (relative to their body size). This selection resulted in a correlated response in resource investment (dry egg components), but there was no evidence for a trade-off with the number of eggs laid [24]. Here we show that the mothers' reproductive organ size increased in line with the level of their reproductive investment, but there was no evidence for a reallocation of lipid or protein reserves. The increase in reproductive organ size in high investment mothers was associated with an increase in metabolic rate, but no apparent increase in oxidative damage. Our study suggests that metabolic costs for the mother may play an important role in the maintenance of the variation in reproductive investment observed in natural populations.

Methods

Study population and selection lines

For this study we used established replicated, divergent Japanese quail selection lines for high and low maternal egg investment (see [24] for a detailed description of the selection procedure). In brief, we selected for high and low relative egg size (i.e. egg size corrected for female body size) by incubating eggs from the highest and lowest 25 % of females from a base population (generation one), creating high investment and low investment lines, respectively. In subsequent generations we selected the most extreme 50 % of females within each line. This procedure was repeated twice to create two independent replicates per line (i.e. High 1 / Low 1, High 2 / Low 2). As well as originating from the same base population, high and low investment line birds within a replicate were bred at the same time, meaning that they were all of the same age and experienced the same environmental conditions. By generation four, the lines differed in absolute egg size by 1.2 standard deviations (High investment: 12.46 ± 0.94 g (mean ± SD); Low investment: 11.12 ± 0.91 g; [24]). The quantity of dry components in the egg (i.e. lipids and protein) responded positively to selection on relative egg size (i.e. larger eggs contained more resources), whilst the rate of egg laying did not change between the two lines as a consequence of selection [24], suggesting that females of the high investment line do not compensate for laying larger eggs by laying fewer or lower quality eggs. Furthermore, this increase in resource investment had a positive effect on the size and early survival of offspring [26]. The birds were kept at the University of Zurich in outdoor aviaries (5 × 7.5 m each). For data collection, females were brought into cages (122 × 50 × 50 cm) within our breeding facility (see below for details about the different groups).
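As a purely illustrative aside (this is not the study's code), selection on relative egg size can be thought of as ranking females on the residuals of egg size regressed on body size and keeping the extremes as parents; the data frame, the column names, and the use of body mass as the body size measure below are all assumptions.

# Hypothetical sketch of selection on relative egg size. 'females' has one
# row per female, with her mean egg mass and a body size measure.
select_parents <- function(females, prop = 0.25, line = c("high", "low")) {
  line <- match.arg(line)
  # Relative egg size: residual egg mass after regressing out body size
  females$rel_egg <- resid(lm(mean_egg_mass ~ body_mass, data = females))
  cutoff <- quantile(females$rel_egg, if (line == "high") 1 - prop else prop)
  if (line == "high") females[females$rel_egg >= cutoff, ]
  else females[females$rel_egg <= cutoff, ]
}

# Generation one kept the extreme 25 % of the base population (prop = 0.25);
# subsequent generations kept the extreme 50 % within each line (prop = 0.5).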
The bottom of the cages was filled with sawdust, and the cages contained a house, a raised sandbath, and ad libitum food, water, grit and shell. Reproduction in quail is strongly influenced by photoperiod [27], so we can manipulate the breeding status of a female by controlling the daylength within our breeding facility. Breeding (i.e. egg laying) was induced by keeping females on a 16:8 h light:dark cycle, whilst non-breeding birds were kept on a 10:14 h light:dark cycle. At all times our breeding facility was maintained at approximately 20 °C. When entering the cages, body mass (to the nearest 1 g) and tarsus length (to the nearest 0.1 mm) were measured. Eggs were collected each morning and weighed to the nearest 0.01 g (hereafter referred to as egg size).

Body composition

We dissected breeding females (aged between 38 and 43 weeks) from the fourth generation of the selection lines to investigate differences in body composition between the high (N: High 1 = 15; High 2 = 20) and low (N: Low 1 = 16; Low 2 = 14) investment lines. Females were kept in cages for 4 weeks with a male, and then kept for three to six days in female pairs before dissection (as part of a separate experiment). The day before dissection, all cages were checked every hour up until one hour before the lights were switched off (21:00), and for every female the time at which the egg was laid was recorded. The following day females were euthanised, where possible 18 h after laying, to standardise the stage of egg production. Body mass was measured before euthanisation. The oviduct, ovary (including yolky follicles), oviductal egg, liver and pectoral muscles (pectoralis and supracoracoideus) were dissected out and weighed (wet mass to the nearest 0.01 g). Preliminary data showed that wet and dry masses are highly correlated (oviduct: r = 0.927, N = 32, P < 0.001; liver: r = 0.890, N = 32, P < 0.001; pectoral muscle: r = 0.977, N = 14, P < 0.001). In the second replicate we also weighed the fat in the body cavity (omentum, N = 34 females). The liver is the site of yolk precursor synthesis [28] and so is expected to change proportionally with egg size. The pectoral muscles and body fat were dissected to test for a potential reallocation of resources to reproduction from organs involved in flight ability [29] and from lipid storage, respectively. Although Japanese quail feed and nest on the ground, flight is a vital function in this species, both for their escape response and for long-distance migration [30].

To obtain a baseline from which to interpret the differences in organ sizes of breeding individuals between the selection lines (see above), we dissected ten non-breeding females from the unselected base population (aged between 24 and 26 weeks; same founders as the selection lines) as described above and compared them to the breeding females from the selection lines. Given the limited number of females from the selection lines, it was not possible to use non-breeding females from the selection lines for this comparison. We expected the differences between high investment and low investment females to mirror (although at a lower magnitude) those between breeding and non-breeding females.

Metabolic rate

We measured the metabolic rate of females from the fifth generation of the high investment (N: High 1 = 7; High 2 = 7) and low investment (N: Low 1 = 7; Low 2 = 8) lines. These females were measured twice, once in breeding condition (aged between 14 and 33 weeks) and once, 11 weeks later, in non-breeding condition.
Metabolic rate measurements began 5 days after the females were put into cages (at which point they were already in breeding or non-breeding condition). These measurements took place over five nights, with six females being measured each night (ensuring that the lines were balanced over the nights). Food was withdrawn from the cages two to three hours before the measurements started to ensure a post-absorptive state. Females were weighed before being put into the respirometry chambers (3.9 l plastic containers; 234 × 165 × 165 mm; Lock and Lock, Hanacobi Co. Ltd., Korea) and weighed again in the morning. The chambers were covered with dark material and the lights in the windowless room were switched off. The temperature was kept at 24-27 °C, which is within the thermoneutral zone for this species [31].

We measured the rate of oxygen consumption (VO2) using a flow-through respirometry system (Sable Systems International, Las Vegas, USA). Our setup consisted of eight metabolic chambers, six containing quail and two serving as controls. Air was pumped from the room into each chamber by an eight-channel mass flow meter system (Flowbar-8 Mass Flow Meter/Pump FB-8-1, Sable Systems International, Las Vegas, USA). Air was sampled from one chamber at a time (Multiplexer Intelligent RM-8-2, Sable Systems International, Las Vegas, USA), dried (magnesium perchlorate, Sigma-Aldrich, USA) and analysed (Foxbox, Sable Systems International, Las Vegas, USA). The mean flow rate across the nights was 1671 ± 16 mL min−1. We recorded O2, CO2, flow rate and temperature in consecutive 45 min periods throughout the course of the night. During these 45 min periods, all eight chambers were measured once for five minutes. One control box was measured twice, once at the beginning and once at the end of each period, and the other control box was measured once in the middle of each period. As the equipment took a certain time to adjust between chambers, we excluded the first 100 s of each reading, leaving 200 s per reading (with 14-18 readings per bird).

We regressed all control chamber readings for both CO2 and O2 against time within a 45 min period, and used this to predict baseline gas levels for chambers containing quail during the same 45 min period. These baseline readings were then used to calculate VO2 following [32]: VO2 = FR × [(FiO2 − FeO2) − FiO2 × (FeCO2 − FiCO2)] / (1 − FiO2), where FiO2 and FiCO2 are the baseline O2 and CO2 readings (divided by 100), respectively, FeO2 and FeCO2 are the O2 and CO2 readings (divided by 100), respectively, for the chamber in question, and FR is the flow rate.

We define metabolic rate as the lowest, stable VO2 reading of a resting animal, in a post-absorptive state, within its thermoneutral zone. Typically this is described as the basal metabolic rate (BMR), but given that half of our measurements were of females in breeding condition, and so physiologically 'active', we use the broader term resting metabolic rate (RMR; following [33]). RMR therefore represents the basic cost of living. To obtain RMR we calculated the mean VO2 over the lowest, stable one-minute period of the whole night for each bird. This measurement of RMR was highly correlated with the mean VO2 across the whole night (r = 0.977, N = 64, P < 0.001). Repeatability of RMR, based on six birds that were measured twice in non-breeding condition (with 5 days between measurements), was high after correcting for the overall difference between nights (r = 0.868 ± 0.106, F5,6 = 14.14, P = 0.003).
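To make this pipeline concrete, the following sketch (not the authors' code) implements the baseline drift correction, the VO2 equation above, and the extraction of RMR in R; the data frames, the column names, and the assumption of one reading per second are all illustrative, and the stability screening mentioned in the text is omitted for brevity.

# Illustrative sketch of the VO2 pipeline described above. 'ctrl' holds the
# control chamber readings and 'bird' the focal chamber readings within one
# 45 min period; the columns (time_s, o2, co2, flow) are hypothetical.
vo2_period <- function(ctrl, bird) {
  # Baseline drift: regress control O2 and CO2 on time, then predict the
  # baseline at the times of the focal readings
  fio2  <- predict(lm(o2  ~ time_s, data = ctrl), newdata = bird) / 100
  fico2 <- predict(lm(co2 ~ time_s, data = ctrl), newdata = bird) / 100
  feo2  <- bird$o2  / 100
  feco2 <- bird$co2 / 100
  # Flow-through respirometry equation (after [32])
  bird$vo2 <- bird$flow * ((fio2 - feo2) - fio2 * (feco2 - fico2)) / (1 - fio2)
  bird
}

# RMR as the lowest one-minute running mean of VO2 over the night, assuming
# one reading per second
rmr <- function(vo2, samples_per_min = 60) {
  running <- stats::filter(vo2, rep(1 / samples_per_min, samples_per_min), sides = 1)
  min(running, na.rm = TRUE)
}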
Oxidative damage

Three days after the metabolic rate measurements we took a blood sample from all females from the brachial vein using heparinised capillary tubes. Samples were kept on ice until centrifugation (5 min at 20°C and 2000 × g). Plasma was then separated and frozen at −80°C until analysis. As a measure of oxidative damage we quantified the plasma concentration of reactive oxygen metabolites (ROMs) using the dROMs test (Diacron International, Grosseto, Italy). This is a colorimetric assay, which measures intermediate oxidative damage molecules (mainly hydroperoxides; [34]) that are produced by the peroxidation of a diverse range of biomolecules [35]. Our analysis followed previously published protocols [36,37]. In short, 8 μl of plasma was diluted with 200 μl of a solution containing acetate buffer (pH 4.8) and an aromatic alkylamine (chromogen). The samples were incubated at 37°C for 75 min, centrifuged, and the supernatant was pipetted onto a microplate. The absorbance was then read with a spectrometer (Multiskan Spectrum, ThermoFisher, Vantaa, Finland) at a wavelength of 505 nm. All samples were run in duplicate. Results were calculated as mM of H2O2 equivalents. There was a high repeatability of ROMs measures within samples (r = 0.993 ± 0.002, F55,56 = 282.61, P < 0.001). The inter-assay coefficient of variation was 8.12 %, and the intra-assay coefficient of variation was 1.55 %. In order to correct for plate differences in ROMs, we centered all samples from a plate on the control samples for that plate. One low investment line female had blood taken only once, and so was excluded from the oxidative damage analyses.

Statistical analysis

We compared differences in total body mass (at the time of dissection), reproductive organ mass, non-reproductive mass, liver, fat and pectoral muscle mass, as well as metabolic rate and oxidative damage, between the selection lines and between non-breeding and breeding individuals. Reproductive organ mass included oviduct mass, ovary mass and the mass of yolky follicles. Non-reproductive mass was calculated as the total body mass minus reproductive organ mass and oviductal egg mass. All measures were log transformed prior to analysis to account for scaling effects on variance. To test for differences in body composition between non-breeding and breeding females, we used two sample t-tests with a Welch/Satterthwaite approximation for the degrees of freedom, due to unequal sample sizes and variances. One non-breeding female was excluded from the analysis, as during dissection it was clear from the state of her ovary and oviduct that she had started to come into breeding condition. To test for differences in body composition between breeding females from the high and low investment lines, we used linear models, including selection line and replicate as factors. Tarsus length (cubed prior to log transformation) was included as a covariate to account for body size differences among females. For the analysis of total body mass and reproductive organ mass, we included only the 55 females that were dissected approximately 18 h after laying, as the mass of the reproductive organs varies with the stage of egg development (N: High 1 = 12; High 2 = 19; Low 1 = 13; Low 2 = 11). Body mass (i.e. mass when entering the metabolic chamber), metabolic rate and oxidative damage of the females measured in generation five were measured twice, once in breeding and once in non-breeding condition.
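The plate-centering correction and the Welch t-tests described above are straightforward to reproduce; a minimal Python sketch with hypothetical numbers (the authors worked in R) might look as follows.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical assay table: plate id, control flag and ROMs value per sample.
assay = pd.DataFrame({
    "plate":      [1, 1, 1, 2, 2, 2],
    "is_control": [True, False, False, True, False, False],
    "roms":       [1.10, 1.42, 1.31, 1.25, 1.60, 1.48],
})

# Centre every sample on the mean of its own plate's control samples,
# removing between-plate differences.
ctrl_means = assay.loc[assay["is_control"]].groupby("plate")["roms"].mean()
assay["roms_centered"] = assay["roms"] - assay["plate"].map(ctrl_means)

# Welch two-sample t-test (unequal sample sizes/variances) on log-transformed
# masses, mirroring the breeding vs. non-breeding comparisons (toy data).
rng = np.random.default_rng(0)
breeding, non_breeding = rng.normal(230, 15, 65), rng.normal(200, 12, 10)
t, p = stats.ttest_ind(np.log(breeding), np.log(non_breeding), equal_var=False)
```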
To test whether the change in these traits between non-breeding and breeding condition differed between the lines, we ran linear mixed models with selection line, breeding status and replicate as factors, as well as the interaction between selection line and breeding status. Age and tarsus length were included as covariates, and again tarsus length was cubed prior to log transformation. Female ID was included as a random effect. In the metabolic rate models, we also included measurement date as a random effect to account for stochastic differences in RMR measurements between nights. If the level of reproductive investment affects the increase in body mass, metabolic rate and/or oxidative damage, we expect to see a significant interaction effect between line and breeding status in all of these models. If the increase in metabolic rate is driven by an increase in body mass, then we expect to find this interaction when correcting for body size (tarsus length) but not when correcting for body mass. Similarly, if the increase in oxidative damage is driven by an increase in metabolic rate, we expect to no longer find an interaction effect between breeding status and line when correcting for metabolic rate or body mass. To test these hypotheses we ran an additional model for metabolic rate with log transformed body mass as a covariate instead of tarsus length, and two additional models for oxidative damage including log transformed body mass and log transformed metabolic rate, respectively. In these models the added covariate was always retained in the model. Additionally, we used paired t-tests to test whether body mass, metabolic rate and oxidative damage differed between individuals in breeding and non-breeding condition. We included these tests to demonstrate the magnitude and direction of the difference between breeding and non-breeding birds, and to allow a comparison with the body composition data. Finally, we tested whether the within-individual changes between non-breeding and breeding condition in all three measures correlated with each other, as well as with mean egg size and tarsus length. All analyses were run in R (3.0.3, [38]). In all models, we performed backward stepwise deletion of non-significant terms. Significance was determined using F statistics in linear models and likelihood ratio tests with one degree of freedom in mixed effects models. We present means ± SD.

Body composition

Breeding females (from both selection lines) were significantly heavier than non-breeding females (non-selected birds; Table 1). This mass difference between breeding and non-breeding females (30 g) was mainly due to an increase in reproductive organ mass (15.18 ± 1.73 g) plus oviductal egg mass (11.72 ± 1.14 g; total = 26.36 ± 3.88 g). Non-reproductive mass did not differ between non-breeding and breeding females (Table 1). Breeding females also had heavier livers and less body fat than non-breeding females, but there was no difference in pectoral muscle mass (Table 1). Similarly, high investment females tended to be heavier than low investment females when correcting for body size (generation 4; Table 2). Furthermore, the change in body mass between breeding and non-breeding condition was significantly larger in high investment females (22 ± 14 g) than in low investment females (7 ± 19 g; generation 5; Table 3, Fig. 1a).
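The line-by-breeding-status contrasts reported here come from mixed models of the kind described in the statistical analysis section. As an illustration only, a simplified Python analogue (the authors used R; the measurement-date random effect is omitted for simplicity, and all data below are synthetic):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic long-format data: two rows (non-breeding, breeding) per female.
rng = np.random.default_rng(1)
females = pd.DataFrame({
    "female_id": np.arange(16),
    "line": np.repeat(["high", "low"], 8),
    "age": rng.integers(14, 34, 16).astype(float),
    "log_tarsus3": rng.normal(10.4, 0.05, 16),
})
dat = females.loc[females.index.repeat(2)].reset_index(drop=True)
dat["status"] = np.tile(["non_breeding", "breeding"], 16)
dat["log_rmr"] = rng.normal(3.0, 0.1, 32) + 0.5 * (dat["status"] == "breeding")

# Mixed model with the line-by-status interaction and a random intercept per
# female; ML fits (reml=False) allow likelihood ratio tests of nested models.
full = smf.mixedlm("log_rmr ~ line * status + age + log_tarsus3",
                   dat, groups=dat["female_id"]).fit(reml=False)
reduced = smf.mixedlm("log_rmr ~ line + status + age + log_tarsus3",
                      dat, groups=dat["female_id"]).fit(reml=False)
lrt = 2 * (full.llf - reduced.llf)          # interaction term, 1 df
p_interaction = stats.chi2.sf(lrt, df=1)
```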
After correcting for body size, the reproductive organs were significantly heavier in high investment females than in low investment females, whereas non-reproductive mass did not differ between the lines (Table 2). Moreover, egg size was highly correlated with reproductive organ mass (r = 0.810, N = 54, P < 0.001). No differences in fat, liver or pectoral muscle mass were observed between the lines after correcting for body size (Table 2).

Metabolic rate and oxidative damage

Females increased their resting metabolic rate by 70 % when entering breeding condition (Table 1). This change was significantly larger in high investment than in low investment females (Table 3, Fig. 1b).

[Table 1 footnote: Means ± SD are shown. In generation 4, females were measured once, either in breeding (N = 65) or in non-breeding (N = 10) condition. In generation 5, females (N = 29) were measured twice, once in breeding and once in non-breeding condition. Repro. is an abbreviation for Reproductive. Significant results are displayed in bold. 1 The difference in body mass between the two states is less than in generation 4 due to measuring the birds at different times of day (here the majority of females had already laid an egg).]

When correcting for body mass instead of body size, the change in RMR did not differ between the lines, but there was still a significant difference in RMR between breeding and non-breeding individuals (Table 3). This demonstrates that the differential increase in RMR between the lines was mediated by the greater increase in body mass between non-breeding and breeding states in high investment line females. Overall, there was no difference in oxidative damage when an individual was in breeding or non-breeding condition (Table 1). There was a trend for an interaction between line and breeding status on oxidative damage (Table 3; Fig. 1c), but in the opposite direction to that predicted: the oxidative damage of low investment females tended to increase with breeding (paired t-test: t13 = 1.96, P = 0.072) whilst there was no change in oxidative damage between non-breeding and breeding condition in high investment females (paired t-test: t13 = 0.24, P = 0.815). When correcting for either body mass or RMR instead of body size, there was no qualitative change in the results (Table 3). The changes in both body mass and RMR between non-breeding and breeding condition were highly correlated with each other, and both were correlated with egg size, but not with tarsus length (Table 4). The change in oxidative damage was not correlated with any other variable. Egg size and tarsus length were not correlated (Table 4).

Discussion

Female body mass increased when entering breeding condition. This increase was larger in females from the high investment lines than the low investment lines and was mainly driven by an increase in reproductive organ mass. Whereas an increase in body mass when entering reproductive condition has been documented in other species [39-41], this is the first experimental evidence that the magnitude of body mass change relates to the level of maternal reproductive investment. Previous work has shown that in many taxa predator escape responses are negatively affected by body mass and the additional weight of carrying eggs [42-45]. Furthermore, within-female decreases in flight performance between non-breeding and breeding condition have been shown to be correlated with the corresponding increase in body mass [45].
As high investment females increase their body mass to a larger degree than low investment females, their predator escape response is likely more strongly compromised, given that small increases in mass have large impacts on the time taken to reach cover [46]. High investment line females also displayed a greater increase in RMR between non-breeding and breeding condition than low investment line females. This differential change in RMR was driven by body mass, but not body size, and so by the change in reproductive organ mass. Although it is generally observed that females increase both daily energy expenditure (DEE) and RMR when entering breeding condition (e.g. [47]), there is only inconsistent correlative evidence of a link between the level of maternal egg investment and DEE [48-51] or RMR [33,47,52]. Our study thus provides the first experimental evidence that the level of maternal egg investment leads to a proportional increase in metabolic rate. This energetic cost of increased maternal investment is likely to be severe, as egg production occurs at a time of relatively low food abundance [12,39]. Additionally, previous studies have found that both RMR and DEE are associated with food intake and activity [12,39,53,54] and that changes in RMR are compensated by changes in food intake [55]. Therefore, birds with higher energetic demands will have to spend more time searching for food, which will increase their predation risk [16,17,56]. Surprisingly, despite an increase in RMR, we did not find a corresponding increase in oxidative damage, either between non-breeding and breeding condition or between the selection lines. If anything, there was a tendency for low investment line females to suffer a more marked increase in oxidative damage between non-breeding and breeding condition than high investment line females. Although oxidative stress has been proposed to be a key mediator of life-history trade-offs [14,15], the empirical evidence for a link between reproduction and oxidative stress is equivocal [57]. Furthermore, the idea has been criticised on the basis that ROS may not be produced in direct relation to metabolic rate, with some studies even showing that high metabolic rates can lead to a proportionally lower production of ROS (reviewed in [57]). A recent meta-analysis showed that levels of oxidative stress do not change, or if anything tend to decrease, between non-breeding and breeding individuals [58]. Furthermore, although the authors also found that oxidative damage tends to increase with reproductive effort [58], this finding is driven by mammalian studies and there is, in line with our results, no compelling evidence of this phenomenon in birds [59-61]. Moreover, high levels of oxidative stress due to increased reproductive effort should come at the cost of reduced future survival, of which there is little evidence [62]. Together, these findings raise questions about the role of oxidative stress in mediating life-history trade-offs in birds. It is important to note that, although we used a very common measure of oxidative damage (i.e. dROMs in blood plasma [37]), our results may have been different had we measured a different biomarker or tissue [57]. Furthermore, the effects of higher RMR on oxidative status could have been masked by ad libitum food conditions, although it is not clear that food availability mediates such a link (see discussion in [57]). The change in RMR between non-breeding and breeding condition was not completely explained by the increase in body mass.
This additional increase in RMR might be explained by a change in body composition. Indeed, between non-breeding and breeding condition, the size of the liver, which is metabolically highly active, increased whilst body fat decreased (see also [63,64]). The liver is the site of yolk precursor synthesis, which is the only part of egg synthesis that requires lipids [65]. Therefore, these changes in fat and liver size may reflect the general mobilisation of lipids from storage to the liver for yolk precursor synthesis and the associated biosynthetic activity [64]. However, there was no evidence that these changes differed between the selection lines. This is in agreement with previous studies that found that the amount of yolk precursors in the plasma of breeding females was not positively correlated with yolk size or composition [12,66,67]. Moreover, hormonally increased yolk precursor levels caused no change in egg size [68,69] and selection on yolk precursor levels affected liver size, but not egg size or production [70]. This has led to the suggestion that females overproduce lipid-rich yolk precursors [12]. Our results corroborate this hypothesis, showing that changes in liver size and fat storage are related to reproduction per se, rather than to the level of reproductive investment (see also [63]), which may explain why lipid supplementation has little effect on egg size [64]. Thus, there is no indication of a trade-off between reproductive investment and fat reserves, or that the liver contributes to the increased metabolic rate of high investment line females. This contrasts with other taxa in which fat is a major energetic currency in the trade-off between reproduction and somatic maintenance [71]. Traditionally, body mass relative to body size has been used as a measure of body condition, which is thought to represent levels of lipid reserves [72] and is usually interpreted as an environmentally determined quality trait [73]. Several studies have shown that egg size correlates with body condition (reviewed in [20]) and concluded that female nutrient reserves influence variation in egg size (e.g. [74-76]). However, our results suggest that laying larger eggs requires larger reproductive organs, which results in an increase in body mass and so the appearance of better 'body condition'. This measure of body condition has recently been criticised [72], and our finding that females investing differently in their eggs display a difference in body mass but no difference in body fat further shows that this measure is inappropriate in breeding females, as it may lead to the false conclusion that residual fat reserves influence reproductive output. Pectoral muscles are often used as a source of protein during egg production [12]. Several studies have shown that females reduce the size of their pectoral muscles when they are experimentally forced to lay more eggs ([77-79]; but see [63]), in line with the idea that the availability of proteins, rather than lipids, limits egg production [64]. This reallocation of resources can have negative effects on flight ability [78], and so on the ability to raise offspring [77] and evade predators (but see [80]). Despite these previous findings, we found no difference in pectoral muscle mass either between non-breeding and breeding females or between the selection lines. It is possible that there are more subtle changes in pectoral muscle composition or structure that we could not detect.
However, our measure of muscle mass is highly correlated with dry muscle mass. Moreover, in all previous experimental studies that found a decrease in muscle mass with increased egg laying [77-79], muscle condition was assessed non-destructively through the use of external measurements [81,82], which correlate strongly with the method we used [83]. Furthermore, our method has been used to demonstrate a decrease in muscle mass between non-breeding and breeding condition [84], showing that the method is sensitive enough to detect a reallocation, if one were present. One explanation for the lack of a reallocation of resources from muscle tissue to reproduction could be that, in our study, birds had access to ad libitum food and so protein may not have been limiting. However, previous studies have found this reallocation under similar conditions [78,84,85], showing that trade-offs between reproduction and muscle maintenance can also be detected in captivity. We did not measure leg muscle mass, which is likely important for locomotory function in ground-living birds such as quail. However, it seems unlikely that a protein reallocation would be confined to the leg muscle, especially given that the pectoral muscle is much larger. Overall, our results therefore suggest that protein reallocation is not an obligate response to reproduction or to the level of reproductive investment. Several authors have suggested that the cost of increased reproductive investment may be passed to the offspring, rather than being dealt with by the mother [62,86]. For example, a recent study showed that food-supplemented females experience lower oxidative damage, but do not provision their eggs with more antioxidants [87], suggesting a prioritisation of self-maintenance. Similar results have been found in other taxa (e.g. [61,88,89]), and may explain why, across studies, females do not appear to suffer survival costs as a consequence of experimentally increased reproductive investment [62]. Furthermore, females may experience a trade-off between reproduction and functions that we have not measured in our study, such as immune function [90] or brain size [91]. Testing for such additional costs, as well as for differences in egg constituents [87] and the oxidative stress of offspring [92] between the lines, will thus give greater resolution to our understanding of the costs of per offspring investment.

Conclusions

Our study provides experimental evidence that increased female egg investment is associated with an increase in reproductive organ mass, leading to an increase in both body mass and metabolic rate during breeding. Surprisingly, the increased metabolic rate of high investment females did not result in higher levels of oxidative damage. Both increased body mass and increased metabolic rate are likely to increase predation risk, by increasing food requirements whilst reducing escape ability. This study thus provides the first experimental evidence for metabolic costs of increased per offspring resource allocation, which are likely to be a key driver in the maintenance of variation in maternal reproductive investment.

Availability of data and materials

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Authors' contributions

The study was conceived by JLP and BT. Selection lines were established by JLP, dissections were conducted by JLP, PH and CE, respirometry by AZ and JLP, and oxidative stress assays by MG.
Data were analysed by JLP and results interpreted by JLP and BT. Manuscript was drafted by JLP and revised by all authors. All authors read and approved the final manuscript.
Towards a computational definition of the tresillo rhythm and its tracing in popular music

This paper discusses the use and popularity of a rhythm, which henceforth is referred to as the "Tresillo rhythm". We first define and formalize the Tresillo rhythm. Given a mathematical representation of the rhythm, it is then traced in the US Billboard Top 20 Charts of the last 20 years. To detect and determine the use of this rhythm in a song, we compute the similarity of a song with this rhythm. The calculated similarity then indicates how similar the rhythm of a pop song is to the previously defined Tresillo rhythm. To assess and cross-validate the computed rhythm similarity, two different formalizations of the Tresillo rhythm have been compiled and several different approaches to calculating rhythm similarity have been tested and compared. This similarity measure is then used to conduct an empirical study of the usage of the Tresillo rhythm in the US Billboard Top 20 Charts of the past 20 years (1999-2019). Finally, we discuss some of the possible reasons for the observed trend.

RESEARCH QUESTION

Can it be computed to which extent the Tresillo rhythm is used in a given pop song and, if so, how has the intensity of Tresillo rhythm use in the US Billboard Top 20 Charts changed over time?

INTRODUCTION

The Tresillo is a rhythm that originated in Africa and was brought to the Caribbean during the Atlantic Slave Trade period. Made popular in Cuba, the rhythm spread all over the world from there [1,4], and can be found in many music genres. While being used as a main rhythm on its own, the Tresillo is also used as a rhythmic pattern within other rhythms such as the Reggaeton rhythm or the Clave rhythm. Following [4], the Tresillo rhythm can be defined as shown in Figure 1. The rhythm pattern consists of a dotted eighth note, followed by a sixteenth note, an eighth rest and an eighth note, and is repeated two times in a 4/4 bar. If one adds a beat on all four quarter notes to the rhythm, one obtains the Reggaeton rhythm (see Figure 3). By changing the second half of the bar to an eighth rest, two eighth notes and another eighth rest, one creates the basic Clave rhythm pattern (see Figure 2). The Tresillo rhythm is commonly used in Latin American music. However, the use of this "danceable Cuban Clave son" [15] is not restricted to Latin American music only, but has also entered the rhythm sections of Western music [2,15]. Popular recent examples include songs like "Shape of You" by Ed Sheeran or "Cheap Thrills" by Sia, both topping the Billboard Charts. Further investigation of the pop charts by the authors, listening through the Billboard Charts, confirms the regular use of this rhythm in pop music. While the use, evolution and popularity of the Tresillo rhythm have been explored in qualitative studies [1,2,4,13,15], the popularity of this rhythm has not been studied quantitatively before. In search of a possibility to investigate the use of this specific rhythm, we developed a method to computationally represent the main rhythm of a pop song and the Tresillo rhythm, and to compare both with different similarity measures. Given these similarity measures, a distinct time trend in Tresillo use can be found in the US Billboard Top 20 Charts of the last 20 years (1999-2019). This paper proceeds as follows: First, the secondary literature of several different fields relevant to this paper is discussed. Then, several assumptions necessary to conduct the presented analysis are stated in the problem statement section.
The Data section discusses the chosen data sources and data format for the analysis. In the method section, the final data representation, the proposed rhythm similarity measures and the evaluation metrics are presented and explained. In the results section, the different rhythm similarity measures are evaluated and compared. The results section also comprises a description and analysis of the time trend. The paper concludes with a discussion of the chosen methods and obtained results. Furthermore, a possible interpretation of the produced results is presented. Lastly, we suggest possible extensions of the presented work.

SECONDARY LITERATURE

This paper touches upon different scientific fields such as musicology, audio retrieval and digital musicology. Prior works have already investigated the evolution and spread of the Tresillo and Clave rhythm patterns and thus provide a clear definition and formalization of those rhythmic patterns from a theoretical viewpoint [1,4,13]. We heavily rely on those theoretical accounts to define the Tresillo rhythm used in this project. Music scholars have furthermore investigated the diffusion of the Tresillo rhythm from Africa to Latin America and then to the United States from a cultural perspective [1,4]. More generally, there have also been several works discussing the rise in popularity of Latin American music and its influence on U.S. mainstream music [11,14]. However, the mentioned musicology research is predicated upon qualitative analyses of musicology books, sheet music, recordings and interviews with specialists and practitioners. This paper, in contrast, employs computational methods to draw conclusions about the influence and popularity of the Tresillo rhythm in US popular music. Another research area that is deeply connected to the topics discussed in this paper is concerned with the formalization of rhythm and statistical corpus studies of rhythmic patterns. While theoretical formulations of rhythm [7,12] help us to assess the posed problem and possible pitfalls of the chosen methodology, this paper mainly refers to rhythm representations which were used for corpus studies [6] or, more generally, the study of onset frequency distributions [8]. To represent rhythm in this project we will thus employ rhythm histograms as used and described in prior works [6,8]. A last field of research that is highly relevant for this paper is concerned with computing the rhythmic similarity between different songs. Such techniques are often used for audio retrieval tasks [5] or music genre classification tasks [3,10]. More generally, this literature is concerned with measuring similarity and dissimilarity of audio signals or signals in general [16]. This literature provides valuable metrics and techniques to compare the rhythmical structure of two songs; however, it is mainly based on using raw audio files to extract signal features and, more specifically, rhythm features [3,5,9,10]. Thus, the methods proposed in those papers have been adjusted to work with our already discretized data representation. This paper extends the discussed secondary literature by using computational methods to trace the usage of a specific rhythm, which is associated with Latin American musical culture, in US popular music.

PROBLEM STATEMENT

To answer the research question, a way of defining the main rhythm of a pop song is necessary. The vast majority of pop songs consist of a simple melodic and rhythmic structure. We therefore assume that one dominant rhythm can be identified per song.
This rhythm is repeatedly played throughout the song and therefore can be characterised by counting the onsets and comparing the onset counts for every bar position. To present the music in a usable format, quantization is needed. Sixteenth notes are chosen as the smallest unit. Assuming that all songs used for our analysis are in a 4/4 meter, this gives 16 possible events per bar. All songs in the data which are found not to be in 4/4 are excluded from the analysis. Aggregating all onsets of a song onto one bar results in a bar which can be described as a 16-dimensional vector, where every value represents the number of onsets at a given bar position. The Tresillo rhythm is used as a rhythm on its own or as part of other, more complex rhythms. For our definition of the clean Tresillo rhythm, we use the notation in Figure 1.

DATA

To answer the posed research question, four different kinds of data sets from different sources are needed. First, to evaluate the proposed methodology, which aims to compute a similarity between a defined Tresillo rhythm and a given song, two validation data sets have been collected, of which the first consists of Tresillo songs and the second of songs that do not contain the Tresillo rhythm. Both data sets have been evaluated and hand-selected by the authors themselves, by listening to Spotify songs and choosing suitable examples. More specifically, to obtain songs which contain the Tresillo rhythm, a precompiled Spotify playlist was evaluated, which claimed to contain Tresillo songs 1. After obtaining artist and song names of suitable validation set songs, appropriate MIDI files were searched for on MIDIdb 2 and downloaded. To trace the Tresillo rhythm in the popular music of the past 20 years, a publicly available data set which contains the song names and artist names of the Hot 100 US Billboard Charts (1999-2019) was used 3; however, to reduce the required data collection effort, only the Top 20 songs were considered. To assert the representativeness of the collected sample, the sample distribution was compared to the ground truth distribution of the US Billboard Top 20 Charts by evaluating t-test statistics of several features (e.g. weeks on charts, peak position in charts, date of release). All t-tests indicate that the two distributions are not significantly different. Initially, the data format of the collected musical data was MIDI. Audio was not used as the initial data format, because obtaining onset tables for every voice would require complicated and elaborate computational processing of the data. MIDI has the practical advantage that, in contrast to other formats (e.g. MuseScore), it often contains multiple voices of a song. Furthermore, most pop songs are not available in score format. However, to obtain onset lists for every musical event, the MIDI files have been converted to the MuseScore format. Onset tables are then the final data representation used to obtain our results. With those onset tables, Figure 4 was compiled, which displays the frequency of onsets, notated in 1/128 notes, aggregated onto one bar. In addition, MuseScore provides the time signature for each song. The data set includes 9 songs in 3/4 or 6/8 time signature, which are excluded from further analysis.

Defining the Tresillo rhythm

To define the Tresillo rhythm computationally, the clean Tresillo rhythm as discussed in the introduction was used (see Figure 1). The clean version of the Tresillo is referred to as the Synthetic Tresillo in the following sections.
Rhythm vectors

To be able to measure the similarity between two rhythms, one must have a clear definition of rhythm. In general, one can define rhythm as "a series of onsets and durations of musical events" [12]. Given that this paper investigates the dominant and repeating rhythm of a given song, it is however assumed that every musical event is sufficiently represented by its onset. To obtain a computational representation of the dominant rhythm of a song, we aggregate all musical onsets of a voice onto one bar. Collapsing all musical onsets onto one bar and thus obtaining onset 'histograms' is a common practice and has been used, among others, to analyze Western classical music [8] and American folk music [6]. Given the prior assessment of the Billboard data (see Figure 4), in conjunction with only working with songs in a 4/4 meter, this paper uses 16-dimensional vectors for the representation of rhythm. This method provides, for each voice of each song, a 16-bin histogram denoting the cumulative number of onsets on a given beat. Here it is important to note that, although specific voices carry more information about the main rhythm of the song, considering each voice distinctly would require knowledge of which voice contributes how much to the perception of the main rhythm. As a method to obtain such knowledge is beyond the scope of this paper, no further steps are performed to differentiate between the voices. By aggregating the onsets across all voices onto one single histogram, we obtain a single onset histogram for a given song. The onset histograms are then normalized to transform them into 16-dimensional vectors, which we will refer to as the rhythm vectors of the songs. This research assumes that the information about the rhythm of a song is captured by these rhythm vectors. The obtained rhythm vectors can be displayed as bar plots to allow visual inspection. Aggregating all rhythm vectors into one normalized rhythm vector shows the mean rhythm of our Billboard data set, as can be seen in Figure 5. Figure 6 shows a song with high Tresillo similarity and Figure 7 the synthetic Tresillo pattern. Visual inspection and comparison of the compiled rhythm histograms indicate similarities and differences between the rhythm vectors, which motivates the following methods. We thus present methods to systematically compare the rhythm vectors to each other in the following section.

Tresillo similarity measures

Each rhythm vector is a 16-dimensional vector, for which the similarity to another vector in the same space can be computed using a cosine similarity measure. We compare the similarity of rhythm vectors with two different Tresillo vectors, which are defined as follows: 1) the template similarity center is the point in this vector space that corresponds to a plain Tresillo beat; 2) the centroid similarity center is defined as the centroid of all rhythm vectors corresponding to Tresillo songs. The Tresillo rhythm is defined by its syncopated pattern. This information is visible in Figure 6 as sharp peaks on the 3rd and 12th beats. Since each rhythm is defined by higher and lower values along these 16 dimensions, it is fair to assume that each axis or onset position does not carry equal weight in the identification of this pattern. For example, it is very common in songs to have an onset at the start of a bar. Since the onset on the first beat is so ubiquitous in music, it should not carry a higher weight in the deduction of a rhythm in this vector space.
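To make the representation concrete, here is a minimal Python sketch of the rhythm-vector construction described above. The sixteenth-grid onset positions 0, 3, 6, 8, 11, 14 for the synthetic Tresillo are our reading of the notated pattern (Figure 1), and unit-norm normalization is one possible choice, since cosine similarity is insensitive to scaling:

```python
import numpy as np

STEPS = 16  # sixteenth-note grid of one 4/4 bar

def rhythm_vector(onset_positions):
    """Aggregate all onsets of a song (already quantized to sixteenth notes)
    onto one bar and return the normalized 16-bin histogram."""
    pos = np.asarray(onset_positions, dtype=int) % STEPS
    hist = np.bincount(pos, minlength=STEPS).astype(float)
    return hist / np.linalg.norm(hist)

# Our reading of the clean ("synthetic") Tresillo: dotted eighth, sixteenth,
# eighth rest, eighth note, repeated twice per bar, i.e. onsets on sixteenth
# positions 0, 3, 6 in each half bar.
SYNTHETIC_TRESILLO = rhythm_vector([0, 3, 6, 8, 11, 14])
```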
To encapsulate this information into a similarity measure, we learn scaling factors for each dimension of the rhythm space, which are referred to as θi. These θi are used to increase the gap between the "Tresillo similarities" of rhythm vectors which do contain the Tresillo and vectors which do not. The θi scale the rhythm vector of a song along each axis and in turn scale the similarity measure accordingly. The resulting parameterized cosine similarity is defined in Equation 1:

S_Θ(A, B) = (A_Θ · B_Θ) / (|A_Θ| |B_Θ|), with A_Θ = (θ1·a1, ..., θn·an) and B_Θ = (θ1·b1, ..., θn·bn), (1)

where Θ refers to the set of scaling factors, and A and B are two vectors of the same dimension between which the similarity is to be computed. The i-th dimensions of these vectors are denoted by ai and bi. A_Θ and B_Θ are the linearly transformed vectors after scaling the i-th dimension by θi. n denotes the total number of dimensions of the rhythm space, i.e. 16 in our case. Parameterized distance measures have been successfully used in the past in pattern recognition and machine learning [16]. Equation 1 defines the parameterized cosine similarity used in this research. The parameters for this model are learned by maximizing S* (defined in the next section), which can also be modeled as a minimization problem, as formulated in Equation 2:

Θ* = argmin_Θ [ mean_{s ∈ Ā} S_Θ(s, T) / mean_{s ∈ A} S_Θ(s, T) ], (2)

where A is the set of songs with the Tresillo present in them, Ā is the set of songs with the Tresillo not present in them and T refers to the reference point for computing the cosine similarity.

Evaluation

To evaluate the proposed similarity methods, two different metrics were chosen: one to assess the variance produced by a given model and one to compare the model fits between different models. To assess the variance in Tresillo similarity estimated by a given model, the bootstrapping method was used on the validation data sets. Thus, using the Tresillo and the non-Tresillo validation data sets, we used a given model to calculate the mean Tresillo similarity in a given validation set and its 95% confidence intervals, as obtained by bootstrapping. The bootstrapping was performed with 1'000 draws with replacement. The number of samples per draw corresponds to the sample size of a given validation set (i.e. either 9 or 10 samples). To compare different models, it was assumed that a good model would have high similarity for all songs which have a Tresillo pattern and a low similarity for songs which do not have such a pattern (see Equation 3). This can be measured by defining the 'Similarity Goodness' S* as the ratio of the mean similarity of songs that have the Tresillo and the mean similarity of songs that do not:

S* = mean_{s ∈ A} S(s, T) / mean_{s ∈ Ā} S(s, T). (3)

Higher values of S* denote high similarity for songs with the Tresillo and low similarity for songs without the Tresillo. Here, similarity refers to the similarity computed between the rhythm vector of a song and a Tresillo rhythm vector. This similarity can be computed using either an unparameterized or a parameterized cosine similarity, depending on the different models defined in the previous section. The error lines on the bar plot denote the 97.5% confidence interval based on 'leave one out' cross-validation. It can be inferred that the models based on the synthetically defined Tresillo outperform the models based on the centroid methods (both p-values < 0.001 using t-tests). Parameterized models also outperform the unparameterized models (both p-values < 0.001 using t-tests). Figure 9 shows the learned theta for each beat after fitting the model. The low and negative values for the 0th and 8th beats bolster our claim about the ubiquity of the 0th beat in popular music.
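A compact Python sketch of Equations 1-3 as reconstructed above; the function names and the use of a generic local optimizer for learning Θ are our assumptions, not necessarily the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize

def param_cosine(a, b, theta):
    """Parameterized cosine similarity (Equation 1): scale each dimension i
    of both vectors by theta[i], then take the ordinary cosine similarity."""
    a_t = np.asarray(theta) * np.asarray(a, dtype=float)
    b_t = np.asarray(theta) * np.asarray(b, dtype=float)
    return float(a_t @ b_t / (np.linalg.norm(a_t) * np.linalg.norm(b_t)))

def s_star(tres, non_tres, ref, theta):
    """Similarity Goodness S* (Equation 3): mean similarity of Tresillo
    songs divided by mean similarity of non-Tresillo songs."""
    return (np.mean([param_cosine(v, ref, theta) for v in tres]) /
            np.mean([param_cosine(v, ref, theta) for v in non_tres]))

def fit_theta(tres, non_tres, ref):
    """Learn Θ by minimizing 1/S* (Equation 2). A generic local optimizer
    stands in for whatever the authors actually used; note that negative
    theta values are allowed, matching the negative weights in Figure 9."""
    obj = lambda th: 1.0 / s_star(tres, non_tres, ref, th)
    return minimize(obj, x0=np.ones(len(ref))).x
```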
The 3rd beat, along with the 2nd beat, carries a lot of information about the Tresillo and hence has a high theta value. The other Tresillo beats also show high peaks, with the exception of the 6th and the 8th beats. This may be because the onsets of popular rock and pop songs coincide here. There is an asymmetry across the 8th beat. One possible reason for this could be drum fills, which often occur in the latter half of the bar.

[Figure 8: Comparing model goodness. Here, C refers to rhythm similarity measured with cosine similarity, using the Tresillo template as center. Centroid refers to rhythm similarity measured with cosine similarity, using the centroid of Tresillo songs as center. C* refers to rhythm similarity measured with parameterized cosine similarity, using the Tresillo template as center. Centroid* refers to rhythm similarity measured with parameterized cosine similarity, using the centroid of Tresillo songs as center.]

[Figure 9: Theta for each dimension.]

Time Trend

Considering the evaluation of the proposed models based on our validation data sets, it was inferred that models using the synthetically defined Tresillo outperformed models using a data-derived representation of the Tresillo. Thus, to analyze the Tresillo time trend, only the two models based on the synthetic Tresillo were used. Those two models correspond to the models with the best performance on the validation set. It must be mentioned, however, that the parameterized model clearly outperformed the plain cosine similarity model. Given those two models, the Tresillo similarity for all songs in the Billboard data set was calculated. By plotting the Tresillo similarity over time, a proxy for the trend in Tresillo use over time was obtained. In a first naive analysis, the mean weekly use of the Tresillo rhythm in the US Billboard Top 20 Charts can be seen in Figure 10. To reduce the noise and variance visible in Figure 10, a rolling yearly average was computed.

[Table 1: Cosine similarity and parameterized cosine similarity for popular Tresillo songs. Here C* denotes parameterized cosine similarity and C denotes cosine similarity.]

The resulting rolling yearly average of Tresillo use can be seen in Figure 11. In both figures, 95% confidence intervals have been obtained via bootstrapping (1'000 draws with replacement, with the number of samples per draw equal to the sample size) and are indicated by light blue coloring.

DISCUSSION

This paper shows that, given a clear definition, the intensity of Tresillo rhythm use in a given song can be measured with computational methods. Several methods have been introduced which identify the use of the Tresillo rhythm and its intensity in the collected validation data sets. However, it is still disputable how the proposed models deal with noise which might dilute the Tresillo rhythm. Assessing the uncertainty of our models (e.g. by looking at outliers as determined by the bootstrapping method), it is notable that not every song which was labeled as containing the Tresillo rhythm has a very high Tresillo similarity. This is exemplified by Table 1. Although both models in Table 1 show high similarity for a Tresillo song and low similarity for a 'non-Tresillo' song, the parameterized model C* has a larger gap between the two. Assessing the time trends of Tresillo use in the US Billboard Top 20 Charts of the past 20 years, no clear linear time trend is observable. However, several interesting peaks and patterns are noticeable.
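Both the bootstrapped confidence intervals used throughout and the rolling yearly smoothing behind Figure 11 are simple to reproduce; a minimal sketch (function names ours):

```python
import numpy as np
import pandas as pd

def bootstrap_ci(values, n_draws=1000, alpha=0.05, seed=0):
    """Bootstrap CI of the mean: n_draws resamples with replacement, each
    the size of the original sample, as described in the evaluation section."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    means = [rng.choice(values, size=values.size, replace=True).mean()
             for _ in range(n_draws)]
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

def rolling_yearly_mean(similarity, week_dates):
    """Weekly mean Tresillo similarity smoothed with a rolling 52-week window."""
    weekly = (pd.Series(similarity, index=pd.to_datetime(week_dates))
                .groupby(pd.Grouper(freq="W")).mean())
    return weekly.rolling(window=52, min_periods=1).mean()
```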
First, looking at Figure 10, it is observable that the obtained results are by nature very noisy and there is high variance in Tresillo use from week to week. Using a rolling yearly mean, the trend of Tresillo use over time becomes more visible, as can be seen in Figure 11. Figure 11 offers some interesting insights. Even though there is considerable yearly variance in the intensity of Tresillo rhythm use (as illustrated by the 95% confidence intervals), there are identifiable peaks and valleys in Tresillo rhythm use. Furthermore, the unparameterized cosine similarity and the parameterized cosine similarity produce consistent results, however on different scales. Looking at Figure 11, we observe a trend which starts relatively high around the new millennium, stays constant or slightly decreases until around 2008, from where on the trend collapses to its all-time low in 2010. From 2010 on there is an increase in Tresillo use, although there is another valley around 2014. Finally, around 2018, the trend in Tresillo rhythm intensity reaches its all-time high. After subjectively evaluating the calculated Tresillo similarities and the corresponding Billboard Charts, we interpret the trend described above as follows. In the early 2000s, it seems that many songs which topped the Billboards were either by Latin artists or by artists who used Latin music themes in their songs (e.g. Maria Maria, Santana, 2000; Be With You, Enrique Iglesias, 2000; Baby Boy, Beyonce, 2003). This trend then decreases steadily, until the Tresillo rhythm seems to reappear in Western dance music after 2010. After 2010 there are several peaks which can be associated with popular dance music songs with particularly high Tresillo similarity (e.g. Where Have You Been, Rihanna, 2012; Shape Of You, Ed Sheeran, 2017; Cheap Thrills, Sia, 2016). However, to substantiate this interpretation of the time trend, further empirical research would be needed, which clearly defines and differentiates the use of the Tresillo rhythm in the context of 'Latin American' music and its usage in Western dance music.

CONCLUSION

This paper formalizes a mathematical representation of the Tresillo rhythm and offers a methodology to compute the intensity of Tresillo rhythm use in a given song. It uses this methodology to trace the intensity of Tresillo use in the US Billboard Top 20 Charts (1999-2019). This paper evaluates and compares several models to compute Tresillo similarity and tests the performance of the given models on validation data sets. Furthermore, the uncertainty of the obtained results is quantified. Assessing the obtained time trend, distinct peaks and valleys can be observed; however, there seems to be no linear time trend in the use of the Tresillo rhythm. After a relatively high starting level of Tresillo intensity around the new millennium, the average Tresillo similarity decreases until 2010. Then there is a quadratic trend pointing towards increasing use of this rhythm. By subjectively evaluating some Billboard charts and their corresponding Tresillo similarities, we interpret this trend to be explained by an initially high popularity of Latin music and, after 2010, by the increasing use of this rhythm in Western dance music. However, further research would be needed to empirically substantiate this claim.

FUTURE WORK

The channels in a song carry valuable information and could be leveraged if a sophisticated algorithm were developed which is agnostic to metadata but works on a symbolic level.
The above work assumes a 4/4 meter for a song; this assumption could be removed by developing an algorithm to map songs with different time signatures (3/4, 7/4) into the same rhythm space. The lack of well-annotated MIDI data is also a limiting factor. Annotating more data would result in better parameterized models, which in turn would improve S*. The benefits of more data are not limited to this. More sophisticated learning algorithms, which could not be used here given over-fitting concerns, might become viable. For instance, a non-linear transformation of the rhythm vector space may yield better results, as it would be better suited to modeling the nuances of, for instance, the 3rd beat. Dimension reduction techniques like PCA could also be employed to reduce over-fitting. Finally, to substantiate our subjective impression that there are two waves in the usage of this rhythm, once in Latin American music and once in Western dance music, further empirical research would be needed which differentiates in which musical context this rhythm is used.
Pervasive detachment faults within the slow spreading oceanic crust at the poorly coupled Antilles subduction zone

Oceanic crust formed at slow-spreading ridges is currently subducted in only a few places on Earth and the tectonic and seismogenic imprint of the slow-spreading process is poorly understood. Here we present seismic and bathymetric data from the Northeastern Lesser Antilles Subduction Zone where thick sediments enable seismic imaging to greater depths than in the ocean basins. This dataset highlights a pervasive tectonic fabric characterized by closely spaced sequences of convex-up Ridgeward-Dipping Reflectors, which extend down to about 15 km depth with a 15-to-40° angle. We interpret these reflectors as discrete shear planes formed during the early stages of exhumation of magma-poor mantle rocks at an inside corner of a Mid-Atlantic Ridge fracture zone. Closer to the trench, plate bending could have reactivated this tectonic fabric and enabled deep fluid circulation and serpentinization of the basement rocks. This weak serpentinized basement likely explains the very low interplate seismic activity associated with the Barbuda-Anegada margin segment above.

Pervasive detachment faults in the oceanic basement at the NE Antilles Subduction Zone could enable deep fluid circulation and serpentinization and partly control the reduced interplate seismicity, according to seismic reflection and bathymetric data.

Oceanic basement formed at slow-spreading mid-ocean ridges (MORs) exhibits remarkable variations in crustal thickness, seismic velocity and tectonic fabric, as previously inferred from bathymetric data [1-3], paleomagnetic studies [4,5], sampling and drilling of outcrops of deep-seated rocks [6,7] and numerical modelling [8,9]. In contrast, few seismic data image this fabric variability at depth [10-13]. Poorly sedimented seafloor near MORs causes severe scattering of seismic waves during mapping expeditions, which impedes accurate intracrustal imaging at depth [14]. As a result, deciphering the complex variability of oceanic tectonic fabric in seismic images remains challenging. This variability, from magmatically robust to tectonically dominated segments, depends on the spreading rate and the relative contributions of tectonic extension and magmatic diking to oceanic spreading [8,9]. At magmatically robust segments, oceanic spreading is mainly taken up by vigorous melt delivery, which leads to the typical velocity-depth "Penrose" structure of extrusive basalts overlying intrusive gabbros [15]. In contrast, tectonically dominated spreading favors stretched and thinned crust, possibly hosting widely spaced, long-lived, low-angle, ridgeward-dipping detachment faults [16] exhuming serpentinized peridotites with a varying amount of gabbro bodies. This heterogeneous crustal composition is referred to as the "plum-pudding model" [17,18]. The velocity-depth structure then usually consists of one layer with a rather constant velocity gradient [19-21], reflecting a serpentinization degree that decreases with depth [22]. Reston et al. [11] argue that more closely spaced faulting may result from a tectonic sequence in which a detachment fault forms, slips, flexes and becomes inactive when a new detachment fault develops nearby. However, in the absence of convincing deep seismic images of such tectonic sequences, the model of widely spaced detachment faults frequently prevails.
New bathymetric and multichannel seismic (MCS) data, collected in the NLA trench during cruises ANTITHESIS 1 and 3 [23,24], call this generic model into question for the first time. The sedimentary layer reduces the scattering, allowing deep seismic imaging down to 6 s two-way traveltime (stwt), which is unprecedented for slow-spreading oceanic basement. These data reveal impressive along-strike variations in oceanic fabric, showing an unexpected deep and pervasive tectonic pattern within the basement created at a segment end of the Mid-Atlantic Ridge. These images challenge the long-lived detachment model, highlight that the tectonic imprint of slow spreading onto the oceanic basement has possibly been underestimated, and raise questions about the seismic consequences of subducting tectonically dominated, hydrated, serpentinized, and weak oceanic basement patches.

Results and discussion

Oceanic tectonic pattern near the Jacksonville Fracture Zone. The ~N120°-trending Jacksonville Fracture Zone extends from the Northwestern Atlantic to the NLA Subduction Zone [25] (Fig. 1). To the south, the 15-20 Fracture Zone subducts beneath the margin at a convergence rate of 20 mm/yr in the N254°E direction [26]. Between these fracture zones, the Cretaceous oceanic basement in the trench (Fig. 1A) ages westward [27]. Based on the 300-330 km distance between chron C34 (83 Ma) and chron C32 (71.6 Ma) [28], the mean half-spreading rate at the spreading center was low, 26-29 mm/yr. The Jacksonville Patch, which originated from the ridge segment end close to the Jacksonville Fracture Zone, is currently located in the trench between 18 and 19°N. The bathymetry and deep structure within this patch drastically differ from those of the incoming oceanic plate in neighboring zones. To the southeast and the northwest of this patch, the oceanic fabric of the incoming plate corresponds to ~N20°-trending elongated topographic highs sub-parallel to the magnetic anomalies (Fig. 1B, Supplementary Fig. 1). In addition, in every seismic line perpendicular to the trench (Ant01, 07, 10, 11, 12, 14, and 50), reflectors of the seafloor, oceanic sediments, and the basement top step down westward along steep fault planes that dominantly dip toward the margin (Fig. 2A). These normal faults penetrate the basement down to gently southwestward-dipping discontinuous reflections M, 1.9 to 2.2 stwt beneath the top of the basement, interpreted as the Moho. These faults crop out at bathymetric scarps, directed N100-120°E, sub-parallel or slightly oblique to the deformation front. This margin-sub-parallel faulting in the outer trench wall has long been described as the result of the incoming plate bending into the subduction zone [29,30]. According to these studies, plate bending mainly reactivates inherited tectonic structures of the oceanic fabric when they are favorably oriented (sub-parallel to the trench) and produces new faults when the fabric is highly oblique with respect to the trench. Offshore of the NLA, the oceanic fabric trends at more than a 70° angle to the trench. The southwestward-dipping faults, sub-parallel to the deformation front, are thus likely to be newly formed plate-bending faults. Within the Jacksonville Patch, the bathymetric map and associated dip and strike seismic lines (Ant06, 43, 44, 45, 52, 53, and 54) show a drastically different tectonic pattern in the trench (Fig. 1B, Supplementary Fig. 1B).
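As a quick sanity check, the quoted half-spreading rate follows directly from the isochron spacing and the chron ages:

```python
# Half-spreading rate implied by the isochron spacing quoted above:
# chrons C34 (83 Ma) and C32 (71.6 Ma) lie 300-330 km apart on one plate.
dt_myr = 83.0 - 71.6                      # age difference: 11.4 Myr
for d_km in (300.0, 330.0):
    # 1 km/Myr equals 1 mm/yr
    print(f"{d_km:.0f} km / {dt_myr:.1f} Myr = {d_km / dt_myr:.1f} mm/yr")
# -> 26.3 and 28.9 mm/yr, matching the quoted 26-29 mm/yr
```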
The smoother seafloor is neither spiked with the N20°-trending ridges of the oceanic fabric nor deformed by the margin-sub-parallel scarps of plate-bending normal faults. In contrast, short, shallow and steep faults dipping toward the east and west bound ~4-6-km-wide grabens in the oceanic basement. These grabens define ~N100-110°-directed seafloor undulations trending at a 40° angle to the margin front. The seismic lines do not show organized reflections at typical Moho depths. The most striking features are 5-10-km-spaced, convex-up, high-amplitude reflector sequences, which dip from the top of the oceanic basement down to 5 stwt below the seafloor (Fig. 2B). The sequences are 0.1-0.2 stwt thick (200 to 500 m) and locally up to 0.5 stwt thick (1 to 1.5 km).

Ridgeward-Dipping oceanic-basement Reflectors (RDRs). In order to estimate the true dip direction and geometry of these reflector sequences, we performed depth conversions of the interpretations of MCS lines Ant45 and 53 (Fig. 3), as well as a pre-stack depth migration of line Ant45. We used a combined MCS/wide-angle seismic (WAS) velocity model based on nearby WAS line Ant06 [31] (Supplementary Fig. 3). In this model, the basement corresponds to a 5.6-6.5-km-thick single layer with a 5.5-7.4 km/s velocity range from top to bottom and a constant velocity gradient of 0.27 s−1. Moreover, we used seismic attributes derived from the seismic data in order to confirm the geometry of the reflector sequences. Computing RMS amplitude provides information about the physical properties of reflections and particularly fluid content [32]. This analysis suggests that the reflector sequences, compared to other intracrustal reflections, show physical properties consistent with fluid-rich and/or serpentinized rocks within the upper 6 km of the oceanic basement. At greater depths, the dimming of reflections suggests a decreasing fluid content and/or serpentinization degree (Supplementary Fig. 4). The mean apparent dip angle increases from 17° to 25° in N125°E-trending line Ant53 and from 15° to 35° in N40°E-trending line Ant45 (Fig. 3A, Supplementary Fig. 5). These lines intersect each other (Fig. 3B) and reveal that the reflectors dip in a N60-90°E direction, towards the Mid-Atlantic Ridge, with a dip angle that increases from 20-30° in the upper 3 km up to 45° between 3 and 8 km depth (Fig. 3C). We refer to these sequences as Ridgeward-Dipping oceanic-basement Reflectors (RDRs) (Fig. 4). Previous seismic data have depicted distant convex-up ridgeward-dipping reflectors with similar dip angles at segment ends of slow-spreading ridges, interpreted as large-offset long-lived detachment faults, for instance in the Cretaceous-aged Eastern Central Atlantic [11,13] and at the South West Indian Ridge (SWIR) [10]. At the SWIR, the faults are associated with similar 0.5-stwt-thick sub-parallel bright discontinuous reflectors interpreted as damage zones. The RDRs are also partly consistent with closely spaced, ~1-km-thick sequences of convex-up LCRs (Lower-Crust ridgeward-dipping Reflectors) in the Northwestern Atlantic [12,33], as well as in lower crust generated at the faster Mid-Pacific spreading ridge offshore of the Middle America Trench [34], Japan [35-37], Alaska [38], and Hawaii [39]. These LCRs have been interpreted as lithological layering resulting from magma flow in the Atlantic [33] and the Pacific [40,41].
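The two quantitative steps described here, the RMS amplitude attribute and the time-to-depth conversion for a constant-gradient velocity model, can be sketched as follows. The window length is our choice, and the closed-form conversion z(t) = (v0/k)(exp(k·t) − 1) follows from integrating v(z) = v0 + k·z, with v0 and k taken from the velocity model described above:

```python
import numpy as np

def rms_amplitude(trace, window=25):
    """Sliding-window RMS amplitude attribute along one seismic trace
    (window length in samples)."""
    trace = np.asarray(trace, dtype=float)
    mean_sq = np.convolve(trace**2, np.ones(window) / window, mode="same")
    return np.sqrt(mean_sq)

def twt_to_depth_below_basement(twt_s, v0=5.5, k=0.27):
    """Depth (km) below the basement top for a linear velocity profile
    v(z) = v0 + k*z (v0 in km/s, k in 1/s); twt_s is the two-way traveltime
    in seconds measured below the basement top."""
    t = np.asarray(twt_s, dtype=float) / 2.0   # one-way traveltime
    return (v0 / k) * (np.exp(k * t) - 1.0)

# Rough check: the Moho reflections at 1.9-2.2 stwt below the basement top
# map to about 6.0-7.0 km, of the same order as the 5.6-6.5-km-thick layer
# of the velocity model (differences reflect our simplified parameter choice).
print(twt_to_depth_below_basement([1.9, 2.2]))
```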
Previous seismic data have depicted distant, convex-up, ridgeward-dipping reflectors with similar dip angles at segment ends of slow-spreading ridges, interpreted as large-offset, long-lived detachment faults, for instance in the Cretaceous-aged Eastern Central Atlantic 11,13 and at the South West Indian Ridge (SWIR) 10. At the SWIR, the faults are associated with similar 0.5-stwt-thick, sub-parallel, bright, discontinuous reflectors interpreted as damage zones. The RDRs are also partly consistent with closely spaced, ~1-km-thick sequences of convex-up LCRs (Lower-Crust ridgeward-dipping Reflectors) in the Northwestern Atlantic 12,33 as well as in lower crust generated at the faster Mid-Pacific spreading ridge offshore of the Middle America Trench 34, Japan 35-37, Alaska 38, and Hawaii 39. These LCRs have been interpreted as lithological layering resulting from magma flow in the Atlantic 33 and the Pacific 40,41. However, the discrete spacing of the reflectors, rather than pervasive layering, more readily supports ductile shear zones 37,42 due to spreading-related deep tectonic events 12 and/or an anomaly in melt delivery at the mid-ocean ridge 42.

The size and geometry of the RDRs partly differ from these analogues. These reflectors extend from the top of the oceanic basement to at least 6 km below (Fig. 3, Supplementary Fig. 5), while the LCRs are restricted to the lower crust and sole out downward onto the Moho. Discretely spaced, thin sequences of subparallel RDRs are poorly consistent with the pervasive, massive fan-shaped layering of Seaward Dipping Reflectors (SDRs) and lava flows. Finally, the RDRs are closely spaced, and most of them do not deform or fracture the top of the oceanic basement or the sediment layer, contrasting with the classical image of distant detachment faults 11 with topographic expressions 3 in the Northeastern Atlantic. However, fault spacing at a slow to intermediate spreading axis depends on the fraction of the plate separation rate that is accommodated by magmatic ridge-axis dyke intrusion 8. According to these authors, a tectonically dominated slow-spreading ridge segment with moderate magmatic activity can generate closely spaced detachment-type deformation zones during early stages of basement exhumation.

Based on this discussion, we propose that the Jacksonville Patch lithosphere consists of serpentinized mantle rocks, possibly hosting gabbro bodies, exhumed along low-angle detachment systems 16 or by serpentine diapirism up high-angle faults 43. Although we cannot rule out serpentine diapirism, inside corners of fracture zones are known to be prone to detachment faulting 44, and the RDRs more readily image pervasive proto-detachment shear zones related to early tectonic extension at a magma-poor inside corner of the segmented MAR. In this interpretation, as the plate approaches the trench, plate bending reactivates extensional strain along the RDRs, favoring fluid percolation, rock alteration, and serpentinization, and thereby increasing the acoustic impedance contrast and reflection amplitude. The RMS analysis supports this interpretation, showing high RMS amplitudes along the RDRs within the upper 6 km and at the top of the oceanic crust above the RDRs (Supplementary Fig. 4).

Seismogenic behavior of subducting serpentine-bearing rocks

The NLA Subduction Zone has hosted only 39 thrust-faulting earthquakes, detected teleseismically (Mw > 5), with focal mechanisms compatible with interplate co-seismic slip since 1973 (Fig. 5). Most of these subduction-type events occurred to the north of Guadeloupe, where they are aggregated in two seismicity clusters: from Montserrat to Barbuda and from the Anegada Passage to the Virgin Islands 45. Very few of them occurred along the ~110-km-wide margin segment in between or to the south of Guadeloupe. In the Southern and Central Lesser Antilles, numerous fracture zones in the subducting South American Plate (Fig. 1A) likely trigger deep crustal hydration and mantle serpentinization [46][47][48]. This high water budget is prone to impede large interplate co-seismic rupture, instead favoring alternate slip behavior (SSE, VLFE, EETS) 49 and/or numerous low-magnitude events 46,48.
In contrast, in the NLA, the only fracture zone of the subducting North American Plate to interact with the subduction zone, the 15-20 FZ (Fig. 1), has not subducted deep enough to favor dehydration of the subducting serpentinized mantle 47. This fracture zone, located at less than 30 km depth beneath the forearc 31, could trigger shallow dewatering and margin tectonic deformation, weakening the interplate contact, reducing the seismic coupling, and affecting the megathrust seismogenic behavior. However, the fracture zone similarly underthrusts the two clusters of subduction-type teleseisms (Mw > 5) and the gap in between (Fig. 5), suggesting a limited influence on the interplate seismicity in the NLA. We propose that the reduced strength of the subducting plate basement, at least partly made of serpentinized mantle rocks, strongly contributes to the megathrust weakness and the reduction in interplate seismicity. The low-temperature serpentine minerals chrysotile and lizardite have a low coefficient of internal friction, low fracture strength, and a nominally non-dilatant mode of brittle deformation, which favor localized slip on discrete surfaces, cataclastic flow by shear microcracking [50][51][52], and plastic flow within individual grains 53. This substantial weakening of serpentine-bearing rocks is not a linear function of the degree of serpentinization but is similar in slightly hydrated peridotites and pure serpentinites 22. The subduction of a heterogeneously faulted, hydrated, and serpentinized basement is likely to generate an interplate patchiness of contrasting frictional properties, which may impede full interplate coupling 53, instead favoring a mix of stable and unstable behaviors prone to triggering small-Mw, slow-slip, and/or very-low-frequency earthquakes 54. Similar conditions are suspected in anomalous non-seismic regions of locally hydrated forearc mantle in Northeast Japan 53.

The Lesser Antilles is an end-member subduction zone, which undergoes the subduction of highly hydrated fracture zones and unsuspected large-scale, tectonically dominated oceanic patches. Our data depict, for the first time, pervasive and closely spaced proto-detachment shear planes within the oceanic basement, reactivated by plate bending in the trench, in a basement at least partly made of serpentinized mantle rocks exhumed at a former inside corner of the MAR. The landward extent of this patch beyond 40 km from the deformation front is unclear because of seismic amplitude loss at great depth. However, downdip, tectonic interaction and fluid circulation between the patch and the 15-20 Fracture Zone possibly alter the forearc strength. Thus, the reduced strength and fluid circulation related to the Jacksonville serpentinized basement, its pervasive tectonic fabric, and the proximity of the hydrated 15-20 Fracture Zone are likely to account for the heterogeneous distribution of subduction earthquakes in the NLA.

Methods

Data acquisition

Our results are based on recent multichannel seismic (MCS), wide-angle seismic (WAS), and bathymetric data collected during cruises ANTITHESIS 1 23 and ANTITHESIS 3 24. Multibeam swath bathymetry data were recorded using a Kongsberg EM122 and a RESON Seabat7150 (432-880-beam echosounders) during ANTITHESIS 1 and 3, respectively. We recorded MCS lines Ant01, 10.2, and 12 during ANTITHESIS 1 23, using a 7699 cu in, 18-element airgun seismic source towed at 17 m depth and a 3-km-long streamer composed of 288 channels spaced at 12.5 m, towed at 20 m depth.
We acquired lines Ant45, 50, 53, and 54 during ANTITHESIS 3 24, using a 6500 cu in, 16-element airgun seismic source towed at 14 m depth and a 4.5-km-long streamer composed of 720 channels spaced at 6.25 m, towed at 15 m depth. Shots were fired every 75 m, providing 30-fold coverage.

Data processing

Swath processing consists of removing spikes and excessive slopes by an automatic procedure and of manual ping editing, using the Caraïbes® and Globe® software packages (IFREMER). Digital terrain models were produced with a grid spacing of 75 m. Vertical accuracy is between a few meters and tens of meters, depending on depth. Bathymetric and slope maps were calculated and processed using QGis. The maps reveal reliefs on the order of tens of meters high and a few hundred meters apart.

MCS data processing includes quality control, binning, band-pass filtering, f-k filtering, external and internal mutes, noise attenuation, predictive deconvolution, multiple suppression, velocity analysis, normal-moveout and dip-moveout corrections, stacking, and pre-stack time migration, using the SolidQC® (Ifremer) and Geovation® (CGG-Veritas) software packages 55. We performed iterative pre-stack Kirchhoff time migration (PSTM) to yield optimal migration velocities and form the final pre-stack migrated images. PSTM correctly focuses seismic energy from genuine basement reflections but not from out-of-plane arrivals from the seafloor or basement propagating through regions of lower root-mean-square velocity. Thus, any intra-basement event observed on the presented PSTM images can be interpreted as a true reflection.

Depth-converted interpretation and depth-migrated seismic data

Converting and/or migrating the seismic images to depth is mandatory to address the questions of the geometry, dip angle, and orientation of the RDRs. The depth of investigation (11 to 18 km) is much larger than the streamer length (4 and 4.5 km for ANTITHESIS 1 and 3, respectively). At great depth, this relative shortness of the streamers results in high uncertainties in any interval velocity model inferred strictly from normal-moveout (NMO) velocities. In order to reduce this uncertainty, we built composite velocity models based on MCS lines Ant45 and 53 (Fig. 2B and Supplementary Fig. 2, respectively) and on WAS line Ant06 31 (Supplementary Fig. 3), located within the Jacksonville Patch 70 km to the northwest. These models consist of: (1) NMO velocities converted to interval velocities using the Dix formula at shallow depth (i.e., from the seafloor to the topmost hundreds of milliseconds in the subducting basement), and (2) velocities inferred from first-arrival traveltime tomography for line Ant06 at greater depth in the basement and the mantle; a small numerical sketch of this two-step construction is given below. (Figure 4 shows an interpreted perspective view of the RDRs.)
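To make the two-step velocity construction concrete, the sketch below applies the Dix formula to a pair of hypothetical shallow NMO picks and then the constant-gradient basement law quoted in the Results (v0 = 5.5 km/s at the basement top, gradient 0.27 s⁻¹). The NMO pick values are invented for illustration and do not reproduce the actual model.

```python
import math

def dix_interval(v1, t1, v2, t2):
    """Dix interval velocity (km/s) between two NMO picks;
    v are RMS velocities (km/s), t are two-way traveltimes (s)."""
    return math.sqrt((v2**2 * t2 - v1**2 * t1) / (t2 - t1))

def depth_in_gradient(owt, v0, k):
    """Depth (km) reached after one-way time owt (s) in a medium with
    v(z) = v0 + k*z: z = (v0/k) * (exp(k*owt) - 1)."""
    return (v0 / k) * (math.exp(k * owt) - 1.0)

# (1) Shallow part: hypothetical RMS picks bracketing the sediment layer.
print(f"Dix interval velocity: {dix_interval(1.55, 7.0, 1.70, 7.8):.2f} km/s")

# (2) Basement part: constant-gradient law from the combined MCS/WAS model.
for twt in (1.9, 2.2):            # quoted basement-top-to-Moho interval (stwt)
    z = depth_in_gradient(twt / 2.0, v0=5.5, k=0.27)
    print(f"{twt} stwt below basement top -> ~{z:.1f} km")
# -> ~6.0 and ~7.0 km, of the same order as the quoted 5.6-6.5 km thickness
```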
We then base our investigations on two complementary methodological approaches. We convert to depth the interpretations of seismic lines Ant45 and 53 (Fig. 3) using these combined MCS/WAS velocity models. In order to confirm this conversion, MCS line Ant45 is migrated to depth (Supplementary Fig. 5) with a preserved-amplitude pre-stack depth migration (PSDM) approach 56-60 performed in the angle domain. The velocity macro-model is iteratively corrected during migration, using the "migration-velocity-analysis" approach 61, until the common image gathers (CIG) show flat reflections. When this condition is satisfied, the CIGs are stacked, providing increased accuracy for the migrated image.

In this methodological approach, it is noteworthy that the parallel lines Ant45 and Ant06 are located 70 km from each other. As a result, the PSDM of line Ant45 should be considered an additional constraint on the RDR geometry, complementing the rougher depth conversion, rather than a robust image of the deep structure of the subduction zone. Despite this uncertainty, both methods result in similar locations, depths, geometries, and dip angles for the RDRs. These reflectors are slightly deeper and steeper in the depth-converted image than in the interpreted PSDM MCS line.

RMS amplitude analysis

Seismic amplitude attribute analysis is commonly used in basin and oil exploration to identify and delineate structural and stratigraphic features associated with fluid-rich intervals 32. The root-mean-square (RMS) amplitude, which is based on the reflection coefficient independently of the reflection polarity, is particularly suited to fluid-content analysis. The RMS amplitude A_RMS is calculated from the original signal amplitudes a_i(t) over a time window of N samples indexed with i, using the Petrel software (Schlumberger):

$A_{\mathrm{RMS}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} a_i(t)^2}$

As a result, this analysis estimates the overall signal amplitude and describes the average signal amplitude within a time window.

Data availability

All geophysical data from the ANTITHESIS cruises are available on the website of the French Oceanographic Fleet (https://campagnes.flotteoceanographique.fr/search). Interested readers should enter "Antithesis" in the "search campaign" field and select the desired data set in the "Data Managed by SISMER" field. Once all needed data sets are selected, they can be downloaded from the "My basket" page.

Code availability

Seismic data processing used the SolidQC (Ifremer) and Geovation (CGG) software packages; bathymetric data were processed with Globe and Claritas (Ifremer). We used Kingdom Suite for the RMS analysis. Maps and 3D bathymetric views were drafted using GMT, QGis, and Adobe Illustrator. The ray-Born PSDM code derives from an original private version by P. Thierry.
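As a closing illustration, the minimal sketch below computes the windowed RMS-amplitude attribute defined above on a synthetic trace; the trace and the 11-sample window length are invented for illustration and bear no relation to the actual Petrel workflow.

```python
import math

def rms_amplitude(trace, window):
    """Windowed RMS attribute: A_RMS = sqrt(mean(a_i^2)) over each
    sliding window of `window` samples along the trace."""
    out = []
    for start in range(0, len(trace) - window + 1):
        seg = trace[start:start + window]
        out.append(math.sqrt(sum(a * a for a in seg) / window))
    return out

# Synthetic trace: weak background reflectivity plus one bright interval.
trace = [0.05 * math.sin(0.7 * i) for i in range(200)]
for i in range(100, 112):
    trace[i] += 0.8                      # bright, fluid-rich-like interval

attr = rms_amplitude(trace, window=11)   # 11-sample window (illustrative)
peak = max(range(len(attr)), key=attr.__getitem__)
print(f"background RMS ~{attr[0]:.3f}, peak RMS ~{attr[peak]:.3f} at sample {peak}")
# High RMS flags the bright interval, analogous to the fluid-rich RDR sections.
```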
Differential protein expression in chicken macrophages and heterophils in vivo following infection with Salmonella Enteritidis

In this study we compared the proteomes of macrophages and heterophils isolated from the spleen 4 days after intravenous infection of chickens with Salmonella Enteritidis. Heterophils were characterized by expression of MMP9, MRP126, LECT2, CATHL1, CATHL2, CATHL3, LYG2, LYZ and RSFR. Macrophages specifically expressed receptor proteins, e.g. MRC1L, LRP1, LGALS1, LRPAP1 and DMBT1L. Following infection, heterophils decreased ALB and FN1, and released MMP9 to enable their translocation to the site of infection. In addition, endoplasmic reticulum proteins increased in heterophils, which resulted in the release of granular proteins. Since transcription of the genes encoding granular proteins did not decrease, these genes remained continuously transcribed and translated even after initial degranulation. Macrophages increased the amounts of fatty acid elongation pathway proteins and of lysosomal and phagosomal proteins. Macrophages were less responsive to acute infection than heterophils, and an increase in proteins like CATHL1, CATHL2, RSFR, LECT2 and GAL1 in the absence of any change in their expression at the RNA level could even be explained by capture of these proteins from the external environment, into which they could have been released by heterophils. Electronic supplementary material: The online version of this article (doi:10.1186/s13567-017-0439-0) contains supplementary material, which is available to authorized users.

Introduction

Macrophages and heterophils represent professional phagocytes acting as effectors and modulators of innate immunity as well as orchestrators of adaptive immunity [1]. Heterophils, the avian counterparts of mammalian neutrophils, belong among the first responders to bacterial infections, and sensing of pathogen-associated molecular patterns (PAMPs) stimulates heterophils for phagocytosis as well as for the release of bactericidal proteins stored in heterophil granules into the extracellular environment [2]. In agreement with their general function in host protection against pathogens, heterophils play a crucial role in the protection of chickens against Salmonella infection, and chickens with heterophil depletion are not protected against colonization of systemic sites [3][4][5]. However, although there are several reports on specific heterophil functions during infection of chickens with Salmonella enterica, their genome-wide response to infection has not been characterized so far.

Macrophages are professional phagocytes responsible for the destruction and clearance of pathogens. When activated, macrophages increase their antibacterial activity by the expression of antimicrobial peptides like cathepsins B, C, D and S, avidin, ferritin or ovotransferrin [6], and by the production of NO radicals from arginine by inducible NO synthase. The antimicrobial proteins expressed by macrophages are commonly produced by heterophils as well, though it is not known to what extent the two cell types may differ in the immediate availability and total amounts of these proteins. Macrophages can also regulate the immune response by the expression of cytokines, e.g. IL1β, IL6, IL8, IL18 or LITAF [7], and are capable of antigen presentation [8][9][10]. However, as for heterophils, an unbiased report on the total proteome expressed by chicken macrophages is lacking.
In our previous study we showed that heterophils and macrophages increase in the spleen of chickens intravenously infected with Salmonella Enteritidis (S. Enteritidis) [7]. Next we characterized gene expression at the tissue level in the whole spleen, and expression of selected transcripts was tested in sorted leukocyte subpopulations [6]. However, none of this provided general data on protein expression in chicken heterophils and macrophages. Although intravenous infection of chickens only partially represents specific Salmonella-chicken interactions, which are mixed up with a general response to bacteremia caused by a Gram-negative bacterium, this route of infection represents a model for understanding heterophil and macrophage functions during the early response to infection. In the current study we therefore isolated heterophils and macrophages from chicken spleens by fluorescence-activated cell sorting (FACS), purified proteins from these cells and identified them by mass spectrometry. This allowed us to (1) characterize the total proteome of heterophils and macrophages, (2) define proteins which exhibited differential abundance in chicken heterophils compared to macrophages and (3) identify proteins that changed in abundance following intravenous infection with S. Enteritidis in either of these populations. Since we also included a group of chickens which was vaccinated prior to challenge, we further addressed whether there are any proteins specifically expressed by the macrophages or heterophils from the vaccinated chickens. Using this approach we identified over one hundred proteins characteristic of either chicken heterophils or macrophages, which allowed us to further refine their function in chickens.

Materials and methods

Ethics statement

The handling of animals in this study was performed in accordance with current Czech legislation (Animal Protection and Welfare Act No. 246/1992 Coll. of the Government of the Czech Republic). The specific experiments were approved by the Ethics Committee of the Veterinary Research Institute (permit number 5/2013) followed by the Committee for Animal Welfare of the Ministry of Agriculture of the Czech Republic (permit number MZe 1480).

Bacterial strains and chicken line

Newly hatched ISA Brown chickens from an egg-laying line (Hendrix Genetics, Netherlands) were used in this study. Chickens were reared in perforated plastic boxes with free access to water and feed, and each experimental or control group was kept in a separate room. The chickens were vaccinated with an S. Enteritidis mutant completely lacking Salmonella pathogenicity island 1 (SPI-1), constructed as described earlier [11], and infected with the isogenic wild-type S. Enteritidis 147 spontaneously resistant to nalidixic acid. The strains were grown in LB broth at 37 °C for 18 h, followed by pelleting the bacteria at 10 000 × g for 1 min and re-suspending the pellet in the same volume of PBS as the original volume of LB broth.

Experimental infection

There were 3 groups of chickens. Six chickens from the control group were sacrificed on day 48 of life. An additional 6 chickens (group 2) were infected intravenously with 10^7 CFU of wild-type S. Enteritidis in 0.1 mL PBS on day 44 of life. The last 6 chickens (group 3) were orally vaccinated on day 1, revaccinated on day 21 of life with 10^7 CFU of the S. Enteritidis SPI-1 mutant in 0.1 mL of inoculum, and challenged intravenously with 10^7 CFU of wild-type S. Enteritidis on day 44 of life.
The intravenous mode of infection was used mainly to stimulate the macrophage and heterophil response rather than to model natural infection of chickens with S. Enteritidis. All chickens in groups 2 and 3 were sacrificed 4 days post infection, i.e. when aged 48 days. The spleens from the chickens of all three groups were collected into PBS during necropsy. To confirm S. Enteritidis infection, approximately 0.5 g of liver tissue was homogenised in 5 mL of peptone water, tenfold serially diluted and plated on XLD agar, as described previously [11].

Collecting heterophil and macrophage subpopulations by flow cytometry

The cell suspensions were prepared by pressing the spleen tissue through a fine nylon mesh followed by 2 washes with 30 mL of cold PBS. After the last washing step, the splenic leukocytes were re-suspended in 1 mL of PBS and used for surface marker staining. In total, 10^8 cells were incubated for 20 min with anti-monocyte/macrophage:FITC (clone KUL01 from Southern Biotech) and CD45:APC (clone LT40 from Southern Biotech), followed by a wash with PBS. Monocytes/macrophages (CD45+KUL01+) and heterophils (identified based on FSC/SSC characteristics within CD45+ cells) were sorted using a FACSFusion flow cytometer operated by FACSDiva software (BD Biosciences). For simplicity only, the monocyte/macrophage population will be called "macrophages (Ma)" in the rest of this paper. Sorted cells were collected in PBS and immediately processed as described below. A small aliquot from each sample was subjected to immediate purity analysis. The purity of macrophages was 88.6 ± 5.3% and that of heterophils 88.1 ± 4.2% when counting cells with the expected staining and FSC and SSC parameters out of all particles. When we gated on the area with live cells, the purity of macrophages and heterophils was between 97 and 98%. The majority of contaminants therefore represented cellular debris, and only around 2.5% of contaminants were non-target cells.

Protein and RNA isolation from sorted cells, reverse transcription of mRNA and quantitative real-time PCR (qPCR)

Sorted leukocyte subpopulations were lysed in 500 µL of Tri Reagent (MRC) for parallel isolation of RNA and proteins. Upon addition of 4-bromoanisole and 15 min centrifugation at 14 000 × g, proteins were precipitated with acetone from the lower organic phase. RNA present in the upper aqueous phase was further purified using RNeasy purification columns according to the instructions of the manufacturer (Qiagen). The concentration of RNA was determined spectrophotometrically (Nanodrop, Thermo Scientific) and 1 µg of RNA was immediately reverse transcribed into cDNA using MuMLV reverse transcriptase (Invitrogen) and oligo dT primers. After reverse transcription, the cDNA was diluted 10 times with sterile water and stored at −20 °C prior to qPCR. qPCR was performed in 3 µL volumes in 384-well microplates using QuantiTect SYBR Green PCR Master Mix (Qiagen) and a Nanodrop pipetting station from Innovadyne for PCR mix dispensing, following MIQE recommendations [12]. Amplification of PCR products and signal detection were performed using a LightCycler II (Roche) with an initial denaturation at 95 °C for 15 min followed by 40 cycles of 95 °C for 20 s, 60 °C for 30 s and 72 °C for 30 s, followed by determination of the melting temperature of the resulting PCR products to exclude false-positive amplification. Each sample was subjected to qPCR in duplicate, and the mean Cq values of the genes of interest were normalized (ΔCq) to the average Cq value of three reference genes (GAPDH, TBP and UB). The relative expression of each gene of interest was finally calculated as 2^−ΔCq (illustrated in the short sketch at the end of this section). Statistical analysis using a two-sample t test for equality of means was performed when comparing levels of mRNA expression between chicken groups, and results with a p value ≤ 0.05 were considered significantly different in expression. Sequences of the reference genes GAPDH, TBP and UB have been published elsewhere [13,14]. Sequences of all newly designed primers used in this study, including their location within different exons and the sizes of the PCR products, are listed in Additional file 1.
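As a minimal numerical illustration of the 2^−ΔCq calculation just described (the reference genes match the text, but every Cq value below is invented):

```python
def relative_expression(cq_gene, cq_refs):
    """2^-dCq: dCq = mean Cq of the gene of interest minus the
    average Cq of the reference genes (GAPDH, TBP, UB)."""
    d_cq = cq_gene - sum(cq_refs) / len(cq_refs)
    return 2.0 ** (-d_cq)

reference_cq = [18.2, 24.1, 21.5]          # invented GAPDH, TBP, UB values
for gene, cq in [("CATHL1", 22.0), ("LECT2", 26.5)]:
    print(f"{gene}: relative expression = {relative_expression(cq, reference_cq):.4f}")
# A lower Cq (earlier amplification) relative to the references
# yields a higher 2^-dCq value.
```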
Sample preparation for LC-MS/MS analysis

Precipitated proteins were washed with acetone and dried. The pellets were dissolved in 300 µL of 8 M urea and processed by the filter-aided sample preparation method [15] using a Vivacon 10 kDa MWCO filter (Sartorius Stedim Biotech). Proteins were washed twice with 100 µL of 8 M urea and reduced with 100 µL of 10 mM DTT. After reduction, proteins were incubated with 100 µL of 50 mM IAA and washed twice with 100 µL of 25 mM TEAB. Trypsin (Promega) was used at a 1:50 ratio (w/w), and the digestion proceeded for 16 h at 30 °C. For comparative analysis, peptide concentration was determined spectrophotometrically (Nanodrop, Thermo Scientific) and samples from the same group of chickens were pooled. Pooled samples were then labelled using the stable isotope dimethyl labelling protocol as described previously [16]. Labelled samples were mixed and 3 subfractions were prepared using Oasis MCX Extraction Cartridges (Waters). The samples were desalted on SPE C18 Extraction Cartridges (Empore) and concentrated in a SpeedVac (Thermo Scientific) prior to LC-MS/MS. High-resolution (30 000 FWHM at 400 m/z) MS spectra were acquired for the 390-1700 m/z interval in an Orbitrap analyser with an AGC target value of 1 × 10^6 ions and a maximal injection time of 100 ms. Low-resolution MS/MS spectra were acquired in a linear ion trap in a data-dependent manner, and the top 10 precursors exceeding a threshold of 10 000 counts and having a charge state of +2 or +3 were isolated within a 2 Da window and fragmented using CID.

Data processing, protein identification and quantification

Raw data were analysed using Proteome Discoverer (v.1.4). MS/MS spectra identification was performed by SEQUEST using the Gallus gallus protein sequences obtained from the Uniprot database. Precursor and fragment mass tolerances were 10 ppm and 0.6 Da, respectively. Carbamidomethylation (C) and oxidation (M) were set as static and dynamic modifications, respectively. Dimethylation (N-term and K) was set as a static modification in the label-based analysis. Only peptides with a false discovery rate (FDR) ≤ 5% were used for protein identification. Spectral counting, the protocol in which the abundance of a protein is expressed as the total number of tandem mass spectra matching its peptides (peptide spectrum matches, PSM), was used for comparative label-free analysis of the heterophil and macrophage proteomes [17]. For a general comparison of protein abundance between heterophils and macrophages, the PSMs belonging to a particular protein from all three groups of chickens, i.e. 18 samples, were summed up. The identification of at least two distinct peptides belonging to the particular protein and a threshold of at least 5 PSMs in at least one sample were required for reliable identification [18,19]. All data were normalized to the total number of PSMs in the individual samples. Statistical analysis using a t test was performed, and proteins with a p value ≤ 0.05 and at least fourfold differences in amount were considered significantly different in abundance between the subpopulations. In the label-based quantification, only unique peptide sequences with at least 20 PSMs were considered for peptide ratio calculations. Subsequent analysis of label-based data was performed in R (https://www.R-project.org). For each protein, its individual peptide ratios were log2-transformed, mean values were calculated and tested with a one-sample t test. Benjamini-Hochberg correction for multiple testing was then applied to the obtained p values. Only proteins having a ≥ twofold change and an adjusted p value ≤ 0.05 were considered significantly different in abundance; a minimal computational sketch of both quantification schemes follows.
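The following is a minimal, self-contained sketch of the two quantification schemes described in this section: label-free spectral counting normalized to total PSM counts, and label-based testing of log2 peptide ratios with Benjamini-Hochberg adjustment. The original analysis used Proteome Discoverer and R; this Python version, with invented counts, ratios, and thresholds applied to real protein names, is purely illustrative.

```python
from statistics import mean
from scipy.stats import ttest_1samp

# --- Label-free: PSM counts scaled by each sample's total PSM count ---
het = {"LYZ": 120, "ACTB": 300, "total": 40_000}   # invented counts
mac = {"LYZ": 10,  "ACTB": 310, "total": 42_000}
for prot in ("LYZ", "ACTB"):
    ratio = (het[prot] / het["total"]) / (mac[prot] / mac["total"])
    print(f"{prot}: normalized heterophil/macrophage PSM ratio = {ratio:.1f}")

# --- Label-based: per-protein log2 peptide ratios, one-sample t test, BH ---
def bh_adjust(pvals):
    """Benjamini-Hochberg step-up adjusted p values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj, running = [0.0] * m, 1.0
    for rank, i in zip(range(m, 0, -1), reversed(order)):
        running = min(running, pvals[i] * m / rank)
        adj[i] = running
    return adj

ratios = {  # invented log2(infected/control) peptide ratios
    "AVD":  [2.1, 1.8, 2.4, 2.0, 1.9],
    "MMP9": [-1.4, -1.1, -1.6, -1.2],
    "ACTB": [0.1, -0.2, 0.05, 0.15],
}
names = list(ratios)
pvals = [ttest_1samp(ratios[n], 0.0).pvalue for n in names]
for n, p_adj in zip(names, bh_adjust(pvals)):
    fold = 2.0 ** mean(ratios[n])          # back-transform mean log2 ratio
    hit = (fold >= 2.0 or fold <= 0.5) and p_adj <= 0.05
    print(f"{n}: fold change {fold:.2f}, adjusted p {p_adj:.3g}, significant: {hit}")
```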
Bioinformatic analysis

Protein interaction networks were built using the online database resource Search Tool for the Retrieval of Interacting Genes (STRING). Proteins were further analyzed using the Gene Ontology (GO) database and the Kyoto Encyclopedia of Genes and Genomes (KEGG) for their classification into specific pathways. PCA plots were calculated and created in R (https://www.R-project.org).

Results

S. Enteritidis infection

Intravenous S. Enteritidis infection resulted in a high colonization of systemic sites. Average log10 S. Enteritidis counts were 5.03 ± 0.54 and 3.06 ± 0.99 CFU/g of liver in the infected chickens and the vaccinated and infected chickens, respectively. Despite this, no fatalities were observed among the infected chickens. No S. Enteritidis was detected in any of the control non-infected chickens.

Identification of heterophil and macrophage specific proteins

Proteins specific for chicken heterophils or macrophages were determined irrespective of whether these were obtained from the infected or non-infected chickens. Altogether, 858 proteins from heterophils and 1032 proteins from macrophages were detected. Out of these, 654 proteins were expressed both in heterophils and in macrophages. Two hundred and eight proteins were detected in macrophages only, and an additional 126 proteins were 4 times or more abundant in macrophages than in heterophils. On the other hand, 34 proteins were detected in heterophils only, and an additional 44 proteins were 4 times or more abundant in heterophils than in macrophages (Additional file 2).

Proteins characteristic for heterophils

Out of the 78 proteins characteristic for heterophils (Additional file 2), the 20 with the highest PSM difference between heterophils and macrophages are listed in Table 1. These included the MRP126, LECT2, CATHL1, CATHL2, CATHL3, LYG2, LYZ and RSFR proteins, all with antibacterial functions. STOM and RAB27A, proteins controlling the storage and release of granular proteins in neutrophils, also belonged among the characteristic and highly expressed proteins in heterophils. Two serine protease inhibitors, SERPINB10 and SERPINB1, were also found among the 20 most characteristic heterophil proteins (Table 1). Only a single KEGG pathway was specifically enriched in heterophils, and this was the starch and sucrose metabolism pathway, comprising the PYGL, PGM1 and PGM2 proteins (p = 1.7E−4).
Despite the KEGG pathway designation, all these proteins represent enzymes involved in glycogen metabolism [20].

Heterophil proteins responding to in vivo infection with S. Enteritidis

Altogether, 153 proteins were present in different abundance in the heterophils before and after S. Enteritidis infection. Of these, 109 proteins increased and 44 proteins decreased in abundance (see Additional files 3 and 4 for all quantified heterophil proteins). Proteins belonging to 2 KEGG categories were enriched in heterophils following S. Enteritidis infection. These included the category translation, with 39 proteins (p = 2.58E−62), and protein processing in the endoplasmic reticulum (12 proteins, p = 1.74E−11). Twenty proteins with the highest increase in abundance, except for those belonging to the category translation, are listed in Table 3. Among others, these included AVD, F13A, ANXA2, ANXA7 or CTSC. Forty-four proteins decreased in abundance in heterophils following S. Enteritidis infection, and the 20 of these with the highest decrease are listed in Table 4. Proteins with decreased abundance were those found in heterophil granules, such as MPO, LYZ, LYG2, CTSG, CTSL1, CATHL1, CATHL2, RSFR, MMP9 and LECT2. Another set of proteins which decreased in heterophils following S. Enteritidis infection included ALB, FN1 and OTFB (Table 4).

RNA expression

Finally, we verified the expression of 37 genes coding for selected proteins listed in Tables 1, 2, 3, 4 and 5. Expression of 4 genes, LRP1, MPO, PPIB and TUBA3A, was too low, and these genes were excluded from further consideration (Additional file 7). Six genes (LGALS1, MRC1L, GDA, MECR, DMBT1, LRPAP1) out of the 7 proteins selected as specific for macrophages were transcribed in macrophages at a higher level than in heterophils. Only HSP70 was transcribed in macrophages and heterophils at the same level, though it was present in higher abundance at the protein level in macrophages. Nine genes (MRP126, OTFB, LYG2, LYZ, SERPINB1, CATHL1, CATHL2, MMP9, LECT2) out of the 14 heterophil-specific proteins were transcribed in heterophils at a higher level than in macrophages. Two genes of this group (GPX, CTSG) were transcribed in heterophils and macrophages at the same level, and the remaining 2 genes (RSFR, LTA4H) were transcribed at a higher level in macrophages, though protein mass spectrometry indicated their higher abundance in heterophils. Expression of 11 proteins which increased in abundance in macrophages following infection of chickens with S. Enteritidis was also tested at the RNA level. Except for MRP126, 10 of these (MECR, CTSC, ERAP1, RSFR, SOD1, CALR, CATHL1, CATHL2, LECT2, GAL1) did not exhibit any difference at the transcriptional level. Similar to the results of protein mass spectrometry, RNA levels of the tested genes in the heterophils or macrophages from the vaccinated chickens were in between the expression in non-infected chickens and in chickens infected without previous vaccination. Only 3 genes in heterophils did not follow this scheme: CATHL1, CATHL2 and LECT2 were expressed in heterophils from the vaccinated chickens at a significantly higher level than in the heterophils from infected chickens.

Discussion

Until now, chicken heterophils and macrophages have been characterized only by specific features like cytokine signaling or the production of antimicrobial peptides [2,6,7,24,25], and an unbiased report characterizing their total proteome, before and after infection, has been missing.
In the current study we therefore isolated proteins from heterophils and macrophages and quantified their abundance before and after infection with S. Enteritidis by mass spectrometry. We have to bear in mind that mass spectrometry provides reliable data for approximately the 800 most abundant proteins. Proteins of low abundance, despite their potential specificity or responsiveness to infection, could therefore not be detected.

Chicken macrophages differed from heterophils in 3 specific features. First, macrophages specifically expressed receptors such as MRC1L, LRP1, LGALS1, LRPAP1 and DMBT1L. Second, macrophages exhibited higher mitochondrial activity, including fatty acid degradation, the TCA cycle and oxidative phosphorylation. And third, macrophages specifically expressed enzymes involved in arginine and proline metabolism (Figure 1). Receptors specifically expressed by macrophages indicate their potential to sense signals from the external environment, which allows them to modulate the immune response [6,7], including their own polarization [26,27]. The dependency of macrophages on oxidative phosphorylation and mitochondrial functions was already described for human macrophages and neutrophils [28]. Macrophages were also enriched in arginine and proline metabolism, since one of their bactericidal activities is the production of NO radicals from arginine by iNOS [29]. Following infection with S. Enteritidis, macrophages increased the expression of lysosomal and phagosomal proteins, which could be associated not only with S. Enteritidis inactivation but also with the macrophages' capacity for antigen presentation.

Heterophils specifically expressed the granular proteins MPO, LYZ, LYG2, RSFR, LECT2, CATHL1, CATHL2, CTSL1, CTSG, OTFB, SERPINB1 and MMP9, and the endoplasmic reticulum proteins SSR1, PDIA4, PDIA6, PPIB, BiP, HSP90B1 and CANX. The latter group of proteins is activated when luminal conditions in the endoplasmic reticulum are altered or when chaperone capacity is overwhelmed by unfolded or misfolded proteins [30]. Induction of an unfolded protein response leads to neutrophil degranulation in mice [31], and based on our results, a similar response can be predicted in chicken heterophils. Granular proteins decreased in heterophils in response to infection. Since transcription of the genes encoding these proteins did not change and the number of ribosomal proteins increased, these genes must have remained continuously transcribed and translated even after initial degranulation [24,[32][33][34][35]. However, not all proteins that decreased in heterophils following S. Enteritidis infection were assigned to pathogen inactivation. Matrix metalloproteinase MMP9 is used for degradation of the extracellular matrix to enable leukocyte infiltration to the site of inflammation [36], and ALB and FN1 are found at the surface of granulocytes and inhibit their migration [37,38]. The decrease of ALB and FN1, together with the degradation of the extracellular matrix by MMP9, leads to heterophil translocation from the blood circulation to the site of inflammation.

Comparing expression at the protein and RNA levels provided several unexpected results. Changes in expression at the RNA level in response to infection were more pronounced in heterophils than in macrophages. We can exclude any technical issues in the macrophage gene expression analysis, since there were at least 3 genes inducible at the RNA level also in macrophages (AVD, MRP126 and F13A).
Unlike in macrophages, there were also greater differences in the expression profiles of heterophils obtained from vaccinated chickens in comparison to those obtained from naive but infected animals, and an increase in CATHL2 and LECT2 in the heterophils from the vaccinated chickens following S. Enteritidis challenge appeared to be a specific positive marker of vaccination. Despite this, expression in heterophils and macrophages in naive but infected chickens tended to approach a similar expression profile (Figure 2).

In this study we characterized protein expression in chicken heterophils and macrophages in response to intravenous infection with S. Enteritidis. Heterophils decreased ALB and FN1, and released MMP9 to enable their translocation to the site of infection. Secondly, the endoplasmic reticulum proteins increased in heterophils, which resulted in the release of granular proteins. On the other hand, macrophages were less responsive to acute infection, and an increase in proteins like CATHL1, CATHL2, RSFR, LECT2 and GAL1, in the absence of any change in their expression at the RNA level, could even be explained by capture of these proteins from the external environment, into which they could have been released by heterophils.

Figure 1. The most characteristic proteins and their functions in chicken heterophils and macrophages. Heterophils express the MMP9, MRP126, LECT2, CATHL1, CATHL2, CATHL3, LYG2, LYZ and RSFR proteins. Following S. Enteritidis infection, heterophils decreased fibronectin FN1 and albumin ALB, and increased ribosomal proteins. In addition, endoplasmic reticulum proteins are activated, which results in the release of granular proteins. Heterophils expressed the glycogen (Gly) metabolism pathway, which allows for rapid glucose (Glu) availability and anaerobic ATP generation via glycolysis, while macrophages increased mitochondrial activity. Macrophages expressed the receptor proteins MRC1, LGALS1, LRPAP1 and DMBT1L, mitochondria-localized proteins and arginine metabolism proteins. Following infection with S. Enteritidis, macrophages increased the expression of lysosomal and phagosomal proteins (CTSB, CTSC, RAB7A, CATHL1, RSFR, GAL1, SOD1).

Figure 2. PCA cluster analysis of chicken heterophils and macrophages using expression data from qPCR. Each spot represents heterophils (circles) or macrophages (triangles) isolated from non-infected (green), infected (red), and vaccinated and infected chickens (blue), 6 chickens per group. Heterophils from vaccinated chickens responded to infection more than macrophages from the same chickens. Transcription in heterophils and macrophages from naive but infected chickens approached the same profile.
The Interaction of Exhaustion and the General Law: A Reply to Duffy and Hynes

In Statutory Domain and the Commercial Law of Intellectual Property, John Duffy and Richard Hynes argue that IP exhaustion, the doctrine that limits a patentee's or copyright holder's control over goods in the stream of commerce, was created and functions exclusively to confine IP law within its own domain and prevent it from displacing other laws. In this essay, we explain why we are not persuaded. A central theme in Duffy and Hynes's work is the argument that the common law did not play a role in the emergence and development of exhaustion. However, we show that the evidence they offer is inconclusive, incomplete, and at times inaccurate. Close examination of early exhaustion cases paints a more complex picture that cannot be squared with the idea that exhaustion was created independently of common law principles. Next, we explain how the approach Duffy and Hynes advocate would strip exhaustion of any normative content. While we agree that exhaustion draws a line between the domain of IP law and other laws and thus prevents the former from displacing the latter, the placement of that line is far from arbitrary and has always reflected policy considerations. Finally, we note that Duffy and Hynes's theory oversimplifies the relationship between IP law and state law, partly because it does not fully consider federal preemption.

INTRODUCTION

In Statutory Domain and the Commercial Law of Intellectual Property, 1 Professors John Duffy and Richard Hynes argue that exhaustion, the doctrine that limits a patentee's or copyright holder's control over goods in the stream of commerce, was created and functions to confine Intellectual Property ("IP") law within its own domain and prevent it from displacing other laws. Exhaustion, in their description, sets aside a space that other areas of the law, such as contracts and property, are left to regulate. Like Duffy and Hynes, we believe that the intersection of IP and commercial law is an important topic with serious ramifications that would benefit from more scholarly attention, so we welcome their contribution to the ongoing debate over exhaustion. It is a debate in which the three of us have been deeply engaged, and one in which we rarely find ourselves entirely aligned. 2 However, when it comes to many of Duffy and Hynes's fundamental insights about the relationship between IP and other areas of law, we not only agree with each other, we also agree with them. And we suspect most scholars engaged in the exhaustion debate would as well. Like Duffy and Hynes, the scholarly consensus acknowledges that other areas of law, most notably contracts, have a role to play in structuring transactions even when exhaustion limits copyright and patent exclusivity. IP law does not and should not exist in a vacuum. It must take into account the rights and obligations established under other bodies of law. So far so good. But Duffy and Hynes make broader claims about the origins of exhaustion and its relationship to other bodies of law. That is where we part ways. They argue that the desire to confine IP law within its own domain and prevent it from displacing other laws is the exclusive explanation for both the emergence of exhaustion and its current function. In doing so they reject the idea that courts developed exhaustion in light of long-standing common law principles.
Acknowledging the common law origins of the doctrine, they suggest, requires courts to wield exhaustion as a bludgeon, pummeling any commercial law doctrine that stands in its way. In this Essay, we explain why we are not persuaded. We first discuss the role of the common law in shaping the exhaustion doctrine. We show that the evidence Duffy and Hynes offer is inconclusive, incomplete, and at times inaccurate. Close examination of early exhaustion cases paints a more complex picture that cannot be squared with the idea that exhaustion was created independently of common law principles. Next, we explain how Duffy and Hynes mischaracterize the prevailing scholarly understanding of exhaustion and how the approach they advocate would strip exhaustion of any normative content. While we agree that exhaustion draws a line between the domain of IP law and other laws and thus prevents the former from displacing the latter, the placement of that line is far from arbitrary and has always reflected policy considerations. Finally, we note that Duffy and Hynes's theory oversimplifies the relationship between IP law and state law, partly because it does not fully consider federal preemption.

I. THE COMMON LAW AND THE EMERGENCE OF EXHAUSTION

Did the common law play a role in the emergence of exhaustion? Duffy and Hynes vigorously argue it did not. But in reaching that conclusion, they largely ignore a line of early exhaustion decisions that invoke common law principles. And they struggle to square their approach with the Supreme Court's most recent copyright exhaustion decision (in their own words, "one of the most important decisions on the commercial law of [intellectual property]" 3), which described exhaustion as "a common-law doctrine with an impeccable historic pedigree." 4 Duffy and Hynes insist the common law played no part in the creation of exhaustion; the doctrine is a matter of statutory interpretation and nothing else. Our claim is modest by comparison. We argue that the common law did play an important role. But unlike Duffy and Hynes, we don't see the common law and statutory interpretation as incompatible. Courts are not forced either to faithfully interpret statutes or, alternatively, to exercise "a free ranging power to create federal common law." 5 We instead argue that courts rely on existing common law principles in choosing between competing statutory interpretations. Framing the alternative as a power to fabricate federal common law conjures up an activist bogeyman when, in fact, the courts that developed the principle of exhaustion followed a well-trodden judicial path of erring on the side of the common law. As Duffy and Hynes point out, statutory interpretation is not confined to the text alone. 6 Courts must look to, among a range of sources, other bodies of existing law. This is especially true when a statute is enacted against an existing body of common law. When Congress legislates in an area "previously governed by the common law," courts must start from the assumption "that Congress intended to retain the substance of the common law." 7 Where the courts have already spoken, "Congress does not write upon a clean slate." 8 If Congress wants to depart from common law principles, the statute "must 'speak directly' to the question." 9 That canon of construction is as old as Congress itself 10 and is still accepted today. 11 Bobbs-Merrill Co. v. Straus demonstrates this point. 12

9 Id. (quoting Mobil Oil v. Higginbotham, 436 U.S. 618, 625 (1978)).
10 Brown v. Barry, 3 U.S. (3 Dall.) 365, 367 (1797) (noting that an act "in derogation of the common law is to be taken strictly"); Theodore Sedgwick, The Interpretation and Construction of Statutory and Constitutional Law 267 (2d ed. 1874) ("[S]tatutes are not to be presumed to alter the common law farther than they expressly declare . . . ."). That treatise was broadly used by courts, including the Supreme Court, including at the time in which the principles of exhaustion were developed. See, e.g., Ca.
There the Supreme Court had to decide whether the copyright owner's right to "vend" gave it control over just the first authorized sale or extended to subsequent sales too. The Court limited the right to "vend" to the first sale. 13 Many scholars and subsequent courts explain that choice as at least partly motivated by common law principles, in particular those favoring the free alienability of personal property and reflecting skepticism of servitudes on chattels. 14 We agree with Duffy and Hynes that the text of the opinion does not compel that reading; the Court did not explicitly invoke the common law. But neither did it explain exhaustion as a bulwark against copyright law encroaching upon the "commercial law generally," as Duffy and Hynes argue. 15 As such, Bobbs-Merrill does not contradict the consensus view that centuries-old common law principles played an important role in the creation of exhaustion. Bobbs-Merrill might not have made the connection explicit, but when read together with other contemporaneous decisions, the link between the emergence of exhaustion and those common law principles becomes apparent. In this short Essay we cannot explore every contemporaneous opinion that explicitly or implicitly used common law principles in constructing exhaustion, but in the next few paragraphs we would like to point to a few of them. 16

Consider, for example, Doan v. American Book Co., 17 one of the decades-long line of copyright exhaustion cases that culminated in Bobbs-Merrill. In that decision the Seventh Circuit held that a purchaser of a book could repair and restore it notwithstanding the copyright holder's objections. The decision was not rooted in any statutory text, but in the intrinsic nature of personal property rights, as the court explained: "It would be intolerable and odious" to deny that a "right of ownership in the book carries with it and includes the right to maintain" it. 18

To take another example, the same year the U.S. Supreme Court decided Bobbs-Merrill, the Australian High Court interpreted the term "vend" in that country's patent statute. 19 The High Court, in light of "the recognized rule that the legislature is not to be taken to have made a change in the fundamental principles of the common law without express and clear words announcing such an intention," concluded that the right to vend did "not refer to any sale of the article after it has once, without violation of the monopoly, became part of the common stock." 20 On appeal the Privy Council reversed, focusing on the need to reconcile the apparent inconsistency between the common law principles and the patent statute. The U.S. Supreme Court would rely on this judgment a year later in Henry v. A.B. Dick Co., 21 and courts continue to cite it, including the Federal Circuit in an important 2016 patent exhaustion decision. 22 This brings us to the early twentieth century Supreme Court patent exhaustion case law.
In 1912, in Henry, the Court held that patentees could impose restraints on downstream purchasers and that "[t]here is no collision between the rule against restrictions upon the alienation or use of chattels not made under the protection of a patent and the right of the patentee through his control over his invention." 23 Duffy and Hynes describe the disagreement between the majority and dissent in Henry as "primarily about the scope or domain of the patent statute, not about common law policies." 24 But, read in context, it is clear that the common law baseline, and whether Congress intended to deviate from it, was one of the key points of contention in a rather bitter division among the Justices. Both the majority and the dissent in this long decision relied heavily not just on the statutory language and existing precedent, but also on general legal principles and on the need to promote public policy goals. Writing for the dissent, Chief Justice White raised concerns regarding the expansive reading of patentees' rights. His views were partly rooted in the common law. For example, he noted that the various forms in which patentees purported to extend their control "tend to increase monopoly and to burden the public to the exercise of their common rights." 25 In another place, the dissent chastised the majority for not applying the rule that the Court had set forth a year earlier in Dr. Miles Medical Co. v. John D. Park & Sons Co. 26 In that decision the Court, relying explicitly and extensively on the common law aversion to restraints of trade, held that downstream control of nonpatented goods, in the form of a retail price maintenance scheme, was invalid. 27

Chief Justice White's dissenting views prevailed five years later when the Court explicitly reversed Henry. 28 The same day, in Straus v. Victor Talking Machine Co., another patent exhaustion case, the Court offered its most explicit early reference to the common law, stating that "[c]ourts would be perversely blind" if they failed to recognize restrictive patent licenses as an attempt "to sell property for a full price, and yet to place restraints upon its further alienation, such as have been hateful to the law from Lord Coke's day to ours, because obnoxious to the public interest." 29 Lord Coke is, of course, Edward Coke, one of the greatest common law jurists, whose opposition to restraints on trade influences exhaustion case law to this day.

A much more recent exhaustion case reinforces the point that even when the Court is undeniably engaged in statutory interpretation, the common law has informed its reasoning. In Kirtsaeng v. John Wiley & Sons, Inc., 30 the question was whether the first sale doctrine embraced the importation and resale of books manufactured and lawfully sold abroad.

16 See also Samuel F. Ernst, Why Patent Exhaustion Should Liberate Products (And Not Just People), Denver L. Rev. (forthcoming 2016) (manuscript at *21-*27) (on file with authors) (noting the role of the policy against servitudes on chattels in early patent exhaustion cases, as well as the impact of the single recovery and statutory domain theories).
23 Henry, 224 U.S. at 39.
24 Duffy & Hynes, supra note 1, at 23.
25 Henry, 224 U.S. at 70 (White, C.J., dissenting) (emphasis added). The term "common rights" is synonymous with "common law rights." See, e.g., Strother v. Lucas, 37 U.S.
Specifically, the case turned on the meaning of the phrase "lawfully made under this title." 31 Despite the clearly statutory nature of the question, the majority described the first sale doctrine as one "with an impeccable historic pedigree" dating back to "the early 17th century." 32 The Court relied on the fact that "[t]he common-law doctrine makes no geographical distinctions" to bolster its statutory reading. 33 And it emphasized the policy considerations disfavoring "restraints on the alienation of chattels" and embracing the "importance of leaving buyers of goods free to compete with each other when reselling or otherwise disposing of those goods." 34 Those considerations, along with "§ 109(a)'s language, its context, and the common-law history of the 'first sale' doctrine, taken together, favor a non-geographical interpretation." 35 The Court thus had no trouble reconciling the common law with statutory interpretation. The Court's approach in Kirtsaeng thus strongly reinforces our views and the consensus among scholars that the common law played a role in the development of exhaustion, and thus challenges Duffy and Hynes's rejection of that consensus. In discussing Kirtsaeng, Duffy and Hynes are forced to concede that the Court was invoking a "'canon of statutory interpretation' disfavoring expansive readings of statutes that 'invade the common law.'" 36 It is not clear how that concession squares with their overall rejection of the consensus approach. In other words, we are puzzled by Duffy and Hynes's failure to consider that other decisions, including those that established the core of IP exhaustion doctrine, were similarly relying on this centuries-old canon of interpretation. 37

Duffy and Hynes make another claim to support their account of the emergence of exhaustion. They note that a number of early exhaustion decisions "disclaim any attempt to adjudicate the relief plaintiffs might obtain outside of IP law" 38 and argue that "[s]uch agnosticism about ultimate results would be difficult to explain if the Court were engaged in pure policymaking directed toward substantive goals." 39 For example, in Bobbs-Merrill, the Court noted there was no contract claim before it. 40 Similarly, in Motion Picture Patents Co. v. Universal Film Manufacturing Co., the Court noted that whether the patentee can restrict the buyer "by special contract between the owner of the patent and a purchaser or licensee is a question outside the patent law and with it we are not here concerned." 41

We are unpersuaded that the courts were agnostic to the consequences or substance of post-sale restraints, and that their only concern was ensuring the correct legal form and forum for implementing them. We disagree with Duffy and Hynes for two reasons. First, reading the Court's unsurprising failure to decide an issue that was not properly before it as a disavowal of the common law and other policy considerations is a leap we are unwilling to take. Second, a close examination of contemporaneous decisions reveals statements that are inconsistent with the agnosticism hypothesis. Rather than conveying agnosticism, those courts objected to certain contracts on substantive policy grounds and expressed skepticism as to their enforcement as a matter of general commercial law. For example, Chief Justice White, in his dissent in Henry, recognized that the validity of contractual post-sale restrictions ought to be governed by contract law.

37 See supra note 13.
38 Duffy & Hynes, supra note 1, at 8.
39 Id. at 12.
40 Bobbs-Merrill, 210 U.S. at 346.
41 243 U.S. 502, 509 (1917). It should be noted that later in the opinion the Court expressed deep concerns with legal mechanisms that allow patentees to exercise control over downstream usage, stating that "[t]he perfect instrument of favoritism and oppression which such a system of doing business, if valid, would put into the control of the owner of such a patent should make courts astute, if need be, to defeat its operation." Id. at 515 (emphasis added). While the Court does not explicitly state that its concerns extend beyond a patent cause of action, we believe that if the Court were truly agnostic with respect to enforcing post-sale restrictions through contract law, it would not have used such strong language.
It should be noted that later in the opinion the Court expressed deep concerns with legal mechanisms that allow patentees to exercise control over downstream usage, stating that "[t]he perfect instrument of favoritism and oppression which such a system of doing business, if valid, would put into the control of the owner of such a patent should make courts astute, if need be, to defeat its operation." Id. at 515 (emphasis added). While the Court does not explicitly state that its concerns extend beyond a patent cause of action, we believe that if the Court were truly agnostic with respect to enforcing post-sale restrictions through contract law, it would not have used such strong language. For example, Chief Justice White, in his dissent in Henry, recognized that the validity of contractual post-sale restrictions ought to be governed by contract law. However, he noted that if not for the majority opinion, those contracts would be void as against public policy, asking rhetorically: "Who . . . can put a limit upon the extent of monopoly and wrongful restriction which will arise, especially if by such a power a contract which otherwise would be void as against public policy may be successfully maintained?" 42 That majority opinion was, as we already noted, short lived. In Boston Store of Chicago v. American Graphophone Co., Chief Justice White, now writing for the majority, continued to express skepticism as to whether post-sale restrictions are enforceable under "general law." 43 He explored the Court's recent case law and concluded that "[a]pplying the cases thus reviewed there can be no doubt that the alleged price-fixing contract disclosed in the certificate was contrary to the general law and void. There can be equally no doubt that the power to make it in derogation of the general law was not within the monopoly conferred by the patent law . . . ." 44 This statement, we believe, plainly indicates that the Court was not agnostic to the possibility of enforcing post-sale restrictions via contracts, as it perceived the contracts at issue as void under general law. Moreover, in relying on its recent case law, which included numerous IP exhaustion cases as well as Dr. Miles, which deals with nonpatented products, to reach this result, the Court indicated that it did not consider the rights under IP law and the rights under general law as completely separated, as Duffy and Hynes argue, 45 but as highly related. 46 As we further discuss below, the interaction between these two bodies of law is indeed complex. In short, the arguments raised by Duffy and Hynes do not convince us that courts ignored well-established common law principles while developing exhaustion. We remain persuaded that the history of exhaustion shows that those principles played, and continue to play, a role in shaping the doctrine. 42 Henry, 224 U.S. at 70-71 (emphasis added). 43 246 U.S. 8, 20 (1918). 44 Id. at 25 (emphasis added). 45 Cf. Duffy & Hynes, supra note 1, at 27 (noting that the Boston Store Court "distinguishes between issues within the patent domain from those governed by 'the general law'"); id. at 28 ("[I]n creating the exhaustion doctrine, the Supreme Court did sharply distinguish statutory issues under federal IP laws from common law issues concerning contract and property."). 46 See also Boston Store, 246 U.S. at 20-21, 27.
Likewise, we reject their assertion that the courts showed no interest in public policy and specifically that the courts' concern about post-sale restraints had nothing to do with the substance of those restraints.

II. THE NORMATIVE IMPACT OF STATUTORY DOMAIN

Duffy and Hynes view exhaustion as exclusively a matter of statutory domain. That claim plays a dual role in their analysis. First, it contrasts their theory with what they describe as the prevailing wisdom about exhaustion's relationship to other areas of law. But as we will describe, in drawing that distinction, Duffy and Hynes mischaracterize much of the prior exhaustion scholarship. Second, it restricts the ability of courts to consider broader policy goals, reducing the judicial function to identifying largely arbitrary triggers for exhaustion and stripping the doctrine of much of its normative content. The consensus view among modern commentators, Duffy and Hynes suggest, leads to IP doctrine running roughshod over distinct bodies of law like contract and property. Exhaustion, they argue, is required to preserve these other areas of law undisturbed. Modern commentators, they say, hold very different beliefs. Skeptics of exhaustion want "complete freedom to contract around exhaustion." 47 And exhaustion proponents see the doctrine as a "free ranging power" 48 to "allow or forbid a particular transaction." 49 Many scholars, they tell us, "want the courts to forbid any circumvention[s]" of exhaustion. 50 Later, they claim that many of those same scholars view leases as "unjustified circumventions of the exhaustion doctrine." 51 But they fail to cite any scholars who actually espouse these categorical views. We do not think this characterization reflects the majority of scholarship on exhaustion. It certainly does not reflect our views. We believe that even if exhaustion applies, a valid agreement may often give rise to a claim of breach and contractual remedies. 52 Similarly, we believe that rights holders can sometimes structure transactions, such as leases or subscriptions, in ways to which exhaustion does not apply. 53 Of course, not all attempts at licensing or contracting around exhaustion will succeed. In some instances they might be preempted or invalid for violating public policy, a decision that might be partly guided by some of the same policies that informed the development of exhaustion. But it is not our position, nor, we believe, the position of most modern commentators, that exhaustion necessarily or routinely undermines general commercial law. The contention that contract and property law can coexist with exhaustion is entirely consistent with the prevailing wisdom. That is not to say that the argument put forward by Duffy and Hynes is without consequences. If courts adopt the view advocated by Duffy and Hynes, it would significantly limit the tools at their disposal for resolving pressing questions about the scope of exhaustion. Duffy and Hynes claim that exhaustion draws a formal line between what is regulated by IP law and what is not. As they admit, "formalist boundary lines are inherently arbitrary." 54 As a result, their theory urges courts to ignore the impact of exhaustion on other policy goals. We find this outcome inconsistent with well-established practices, difficult to sustain, and undesirable. Consider, for example, two contemporary exhaustion questions: the choice between international and national exhaustion and the applicability of the doctrine to digital distribution. 47 Duffy & Hynes, supra note 1, at 10. 48 Id. at 28. 49 Id. at 9. 50 Id. at 10.
As a matter of copyright law, the Supreme Court resolved the first of these questions when it adopted international exhaustion in Quality King Distributors, Inc. v. L'anza Research International, Inc. 55 and Kirtsaeng. 56 Most commentators agree that the text of the Copyright Act provides plausible arguments both for and against international exhaustion. The Court's choice between them was not limited to a narrow examination of the Act; it also considered broader policy questions, including access to creative works, 57 "competition, including freedom to resell," 58 judicial administrability, 59 and "basic constitutional copyright objectives." 60 The Federal Circuit recently provided a different answer to that question when it affirmed national patent exhaustion in Lexmark International, Inc. v. Impression Products, Inc. 61 Granted, both the majority and the dissent partly based their decisions on the language and the structure of the Patent Act, as compared to Kirtsaeng's interpretation of the Copyright Act. However, both the majority and the dissent extensively addressed policy concerns. They analyzed how national and international exhaustion would affect certainty in the market, allow patentees to recoup their investments through price discrimination, might foster perpetual control over downstream distribution, and more. 62 Therefore, the Lexmark majority and dissent, like the Kirtsaeng majority and dissent, agree with the scholarly consensus that policy considerations play a vital role in interpreting and shaping exhaustion. Digital distribution provides another example of the difficulty in understanding exhaustion as an "inherently arbitrary" line between IP law and general law, as Duffy and Hynes maintain, 63 because this view limits the ability of courts to adjust the scope of exhaustion over time and in response to changing conditions. The primary reason for the recent attention exhaustion has received is that modern markets are increasingly global and digital. As a result, those markets prompt questions about the ideal scope of IP rights and their exhaustion. The ability to apply longstanding IP doctrines to new technologies and market realities, as courts have done in various contexts, 64 depends on the recognition of broader principles. Those principles cannot flout statutory directives, of course, but they should not be ignored altogether either, when statutes lend themselves to more than one plausible meaning. 65 Admittedly, Duffy and Hynes might see the elimination of the policy considerations as a feature, not a bug. If technological or market conditions alter the policy implications of exhaustion, they might argue that it is the task of Congress to weigh those concerns and enact a new statute. We agree that Congress could act, as it, from time to time, has acted. 66 And once Congress acts, courts would be bound to interpret the statute as faithfully as they can. But IP law regulates a fast moving technological world, and historically it has been the role of courts to help keep IP law up to speed. Moreover, when it comes to exhaustion, Congress has repeatedly signaled its acceptance of the judicial role in defining the broad contours of the doctrine. 67 52 …straints, in Research Handbook on IP Exhaustion and Parallel Imports (Irene Calboli & Edward Lee eds., Edward Elgar 2016) (suggesting that the common law doctrine of restraint of trade could evolve to distinguish between valid and invalid contracting around exhaustion); Katz, supra note 2, at 90-100 (proposing some parameters to distinguish between valid and invalid instances of contracting around exhaustion). 53 See, e.g., Perzanowski & Schultz, supra note 2, at 904 ("Copyright owners committed to price discrimination can avoid [exhaustion] by structuring transactions not as sales but as leases or subscription services."). This does not mean, however, that right holders should be able to avoid exhaustion by merely labeling a sale or other transfer of ownership a "license." Rub, supra note 2, at 814-16. 57 Quality King, 523 U.S. at 151 (noting, for example, that the plaintiff's position in that case, promoting national exhaustion, "would merely inhibit access to ideas without any countervailing benefit"). 58 Kirtsaeng, 133 S. Ct. at 1363. 59 Id. (noting the "burden of trying to enforce restrictions upon difficult-to-trace, readily movable goods"). 60 Id. at 1364-65 (noting the impact of national exhaustion on libraries and museums). 61 Nos. 2014-1617, 2014-1619, 2016 WL 559042 (Fed. Cir. Feb. 12, 2016) (en banc). 62 See, e.g., id. at *18-19 (discussing how patents provide "market-based reward" to the patentee and the problem of vagueness); id. at *25 (discussing the need to "incentivize creation and disclosure"); id. at *26 (discussing the social benefits from patentee's ability to offer a menu of products); id. at *33-34 (discussing the practical effects of national exhaustion on the market and noting that "there is no concomitant risk of 'perpetual downstream control'"); id. at *34-36 (discussing how exhaustion affects the patentees' markets, income, and costs); id. at *44-45 (comparing certain aspects of the markets for copyrighted and patented goods and analyzing the impact of exhaustion regimes on those markets); id. at *58-59 (discussing the importance of allowing purchasers to compete, the effects of exhaustion on administrative costs, the need to allow free trade in goods embodying patented inventions, the impact on transaction costs and prices, and the role of international trade). 63 Duffy & Hynes, supra note 1, at 36. The theory offered by Duffy and Hynes has two primary normative implications. The first, that their theory avoids the trampling of commercial law by IP law, rests on a false premise. The bulk of the cases and commentary reveal that the IP-domination Duffy and Hynes fear is more specter than reality. The second implication, that courts should ignore policy considerations in favor of focusing solely on the statutory text, unnecessarily ties the hands of courts applying the exhaustion doctrine, even when no conflict between IP and commercial law is at stake. Below, we explore a final question left unresolved by Duffy and Hynes's discussion on the interaction between exhaustion and commercial law: the role of preemption.

III. EXHAUSTION AND STATE LAW: THE PREEMPTION PROBLEM

Duffy and Hynes claim that non-IP law was and should be taken into account in developing and applying exhaustion. We agree. In fact, when courts utilized the common law to develop exhaustion, they did just that. When Bobbs-Merrill was decided, more than forty years before the Uniform Commercial Code was created, commercial law was, in large part, the common law.
From that perspective, the stark dichotomy Duffy and Hynes describe between "general commercial law," which exhaustion was designed to preserve, and "the common law," which was allegedly irrelevant, is more of a porous membrane. We also agree with Duffy and Hynes that non-IP laws have a role to play even when exhaustion limits the rights of copyright owners and patentees. That role should be considered when developing IP policy. 68 Duffy and Hynes explore the interaction between exhaustion and other areas of commercial law that regulate secondary markets. This in-depth analysis can lead to important normative insights regarding the desirable scope of IP rights. It is indeed vital that IP commentators acknowledge the role of general commercial laws within IP policy. The contribution of Duffy and Hynes will surely advance that discussion. We want, however, to make two comments on the interaction between IP law and general commercial law. First, this interaction is not limited to exhaustion. IP laws incorporate, but do not define, basic commercial terms, such as sale, license, assignment, or mortgage. 69 Federal IP laws rely on state law definitions of those terms. 70 This symbiosis between federal IP law and general commercial law cuts across many IP doctrines. Because each of those doctrines must be developed in tandem with state commercial law, it is hard to see why exhaustion should be singled out as a unique doctrine that is meant to preserve general commercial law, as Duffy and Hynes suggest. In some respects, this makes their analysis of the role of state law in regulating secondary markets even more valuable. It could serve as a model to explore similar interactions with other IP law doctrines. Second, considering the interaction between federal IP law and state law requires a careful analysis of federal preemption, and in particular copyright preemption. Copyright preemption is a thorn in the side of the Duffy and Hynes theory. Exhaustion cannot be a doctrine that is purely designed to preserve other laws, such as contract and private property, if it might also preempt some of those other arrangements. However, federal IP law does not give state commercial law unlimited power to regulate secondary markets. While state law does generally regulate those markets, 71 the power of states to create certain legal regimes, for example, one that grants copyright owners a copyright-like exclusive right over the resale of copyrighted works, is limited by federal preemption law. Duffy and Hynes make two arguments to prevent preemption from casting a shadow over their theory. First, they suggest that because exhaustion limits the scope of the exclusive rights under federal law, then rights created under state law to circumvent exhaustion are, by definition, not equivalent to rights under the Copyright Act, as required by § 301(a), its explicit preemption provision. 72 Second, they argue that "broad preemption arguments have had very little success in the courts" 73 following the Seventh Circuit decision in ProCD v. Zeidenberg. 74 We find both arguments problematic. The main difficulty with their first argument is that it ignores the purpose and uniform interpretation of § 301(a).
Limiting the scope of copyright preemption to the scope of the exclusive rights, as suggested by Duffy and Hynes, will allow states to interfere with federal policy in a way that is inconsistent with the purpose of the Act. For example, such an approach would give states carte blanche to regulate ideas, methods, and fair uses. This approach has been consistently rejected by courts. In fact, the Seventh Circuit rejected it in ProCD, stating that "[o]ne function of § 301(a) is to prevent states from giving special protection to works of authorship that Congress has decided should be in the public domain, which it can accomplish only if 'subject matter of copyright' includes all works of a type covered by sections 102 and 103, even if federal law does not afford protection to them." 75 The Sixth Circuit has similarly stated that "the shadow actually cast by the [Copyright] Act's preemption is notably broader than the wing of its protection." 76 The second argument, which relies on ProCD, faces two weaknesses. First, while ProCD was adopted by several federal circuit courts, it is not the law of the land. 77 The Second Circuit, for example, refused to endorse it, 78 and the Sixth Circuit expressly rejected it. 79 Second, and more important, the argument that Duffy and Hynes make is significantly broader than the Seventh Circuit's approach in ProCD. ProCD and its progeny deal exclusively with contractual rights. In fact, the distinction between property rights and contractual rights is the main rationale for those decisions. 80 Therefore, ProCD does not support the proposition that states are free to create any property-like arrangement they please with respect to information goods. Again, our claim is not that IP law and policy necessarily trump any or even most state law claims and doctrines. We, however, maintain that courts do not and should not be categorically denied the opportunity to consider IP policy and preemption when a dispute touches on areas that are regulated by commercial law, including secondary markets. While commercial law should undoubtedly help shape IP law, preemption doctrine makes the relationship between exhaustion and other areas of the law more complex than Duffy and Hynes suggest. 71 And, in doing so, they take into account some of the policy considerations that are also reflected in exhaustion doctrine. For example, the Restatement of Contracts suggests that a contractual promise is unenforceable as a matter of state law "on grounds of public policy if it is unreasonably in restraint of trade," Restatement (Second) of Contracts § 186 (Am. Law Inst. 1981), a policy that, as we have seen, played a role in the development of exhaustion as well. 72 Duffy & Hynes, supra note 1, at 73-74.

CONCLUSION

The three of us do not always agree on the socially desirable scope of IP exhaustion. However, we do agree on the ways in which that scope should ideally be set. It should explore the justifications for exhaustion, examine how strong and applicable they are nowadays and going forward, study the effects it has on initial and secondary markets for copyrighted goods, and yes, consider other legal (as well as non-legal) ways to regulate those markets. The various competing interests and considerations should continue to inform the evolution of the law. Duffy and Hynes focus on one of these considerations, the role of general commercial law, and provide important insights about it.
But focusing exclusively on that single consideration significantly narrows the perspective of what exhaustion is and what it should be. We find such an approach neither consistent with a century and a half of existing law nor advisable.
2018-12-26T21:53:01.461Z
2016-04-07T00:00:00.000
{ "year": 2016, "sha1": "767f7e04a30ed7a46d1f4d430b12ff20dfe85533", "oa_license": "CCBYNCSA", "oa_url": "https://tspace.library.utoronto.ca/bitstream/1807/76934/1/Katz%20et%20al%20Statutory%20domain.pdf", "oa_status": "GREEN", "pdf_src": "ElsevierPush", "pdf_hash": "3a739d5657cee72326ea02987a3924f8e3eeeab3", "s2fieldsofstudy": [ "Law" ], "extfieldsofstudy": [ "Engineering" ] }
55119450
pes2o/s2orc
v3-fos-license
Characterizing Log-Logistic (LL) Distributions through Methods of Percentiles and L-Moments The main purpose of this paper is to characterize the log-logistic (LL) distributions through the methods of percentiles and L-moments and to contrast them with the method of (product) moments. The method of (product) moments (MoM) has certain limitations when compared with the method of percentiles (MoP) and the method of L-moments (MoLM) in the context of fitting empirical and theoretical distributions and estimating parameters, especially when distributions with greater departure from normality are involved. Systems of equations based on MoP and MoLM are derived. A methodology to simulate univariate LL distributions based on each of the two methods (MoP and MoLM) is developed and contrasted with MoM in terms of fitting distributions and estimating parameters. Monte Carlo simulation results indicate that the MoP- and MoLM-based LL distributions are superior to their MoM-based counterparts in the context of fitting distributions and estimating parameters. Mathematics Subject Classification: 62G30, 62H12, 62H20, 65C05, 65C10, 65C60, 78M05

Introduction

The two-parameter log-logistic (LL) distribution considered herein was derived by Tadikamalla and Johnson [1] by transforming Johnson's [2] SL system through a logistic variable. The LL distribution is a continuous distribution with probability density function (pdf) and cumulative distribution function (cdf) expressed, respectively, as

f(x) = δ e^γ x^(δ−1) / (1 + e^γ x^δ)^2  (1)

F(x) = e^γ x^δ / (1 + e^γ x^δ)  (2)

where x ≥ 0 and δ > 0. The pdf in (1) has a single mode, which is at x = 0 for 0 < δ ≤ 1, and at x = e^(−γ/δ) ((δ − 1)/(δ + 1))^(1/δ) for δ > 1. When 0 < δ ≤ 1, the pdf in (1) has the shape of a reverse J. For the pdf in (1), the rth moment exists only if δ > r. A variant of the log-logistic distribution has received wide application in a variety of research contexts such as hydrology [3], estimation of the scale parameter [4], MCMC simulation for survival analysis [5], and Bayesian analysis [6]. The quantile function of the LL distribution with cdf in (2) is given as

q(u) = e^(−γ/δ) (u / (1 − u))^(1/δ)  (3)

where u ~ uniform(0, 1) is substituted for the cdf in (2). The method of (product) moments (MoM)-based procedure used in fitting theoretical and empirical distributions involves matching MoM-based indices (e.g., skew and kurtosis) computed from empirical and theoretical distributions [7]. In the context of LL distributions, the MoM-based procedure has certain limitations. One limitation is that the parameters of skew and kurtosis are defined for LL distributions only if δ > 3 and δ > 4, respectively. This implies that the MoM-based procedure involving skew and kurtosis cannot be applied to LL distributions with δ ≤ 3. Another limitation associated with MoM-based application of LL distributions is that the estimators of skew (α3) and kurtosis (α4) computed from sample data are algebraically bounded by the sample size n as |α̂3| ≤ √n and α̂4 ≤ n [8]. This implies that, when simulating LL distributions with kurtosis α4 = 48.6541 (as given in Figure 3C in Section 3.2) from samples of size n = 25, the largest possible value of the computed sample estimator α̂4 of kurtosis is only 25, which is 51.38% of the parameter value. In order to obviate these limitations, this study proposes to characterize the LL distributions through the methods of percentiles and L-moments.
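To illustrate the simulation methodology mentioned in the abstract, the short Python sketch below draws LL variates by inverse-transform sampling through the quantile function. The closed form of q(u) used here follows the reconstruction of equation (3) above (derived from the stated mode formula and limiting values, not quoted directly from the paper), and the parameter values are arbitrary illustrations; function names are ours.

```python
import numpy as np

def ll_quantile(u, gamma, delta):
    # Reconstructed quantile function of the two-parameter LL distribution:
    # q(u) = exp(-gamma/delta) * (u/(1-u))**(1/delta)   [eq. (3) above]
    return np.exp(-gamma / delta) * (u / (1.0 - u)) ** (1.0 / delta)

def simulate_ll(n, gamma, delta, seed=None):
    # Inverse-transform sampling: push uniform(0,1) draws through q(u).
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    return ll_quantile(u, gamma, delta)

# Arbitrary illustrative parameters: the sample median should approach the
# model median q(0.5) = exp(-gamma/delta) as n grows.
x = simulate_ll(100_000, gamma=0.5, delta=4.0, seed=1)
print(np.median(x), np.exp(-0.5 / 4.0))
```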
The method of percentiles (MoP) introduced by Karian and Dudewicz [14] and the method of L-moments (MoLM) introduced by Hosking [9] are attractive alternatives to the traditional method of (product) moments (MoM) in the context of fitting theoretical and empirical distributions and in estimating parameters. In particular, the advantages of the MoP-based procedure over the MoM-based procedure are that (a) the MoP-based procedure can estimate parameters and obtain fits even when the MoM-based parameters do not exist, (b) the MoP-based estimators have relatively smaller variability than those obtained using the MoM-based procedure, and (c) solving the MoP-based system of equations is far more efficient than solving the MoM-based system of equations [14-17]. Likewise, some of the advantages that MoLM-based estimators of L-skew and L-kurtosis have over MoM-based estimators of skew and kurtosis are that they (a) exist whenever the mean of the distribution exists, (b) are nearly unbiased for all sample sizes and distributions, and (c) are more robust in the presence of outliers [8-13, 18-22]. The rest of the paper is organized as follows. In Section 2, definitions of the method of percentiles (MoP) and the method of L-moments (MoLM) are provided and systems of equations associated with the MoP- and MoLM-based procedures are derived. Also provided in Section 2 are the boundary graphs associated with these procedures, along with the steps for implementing the MoP-, MoLM-, and MoM-based procedures for fitting LL distributions to empirical and theoretical distributions. In Section 3, a comparison among the MoP-, MoLM-, and MoM-based procedures is provided in the context of fitting LL distributions to empirical and theoretical distributions and in the context of estimating parameters using a Monte Carlo simulation example. In Section 4, the results are discussed and concluding remarks are provided.

Method of Percentiles

Let X be a continuous random variable with quantile function q(u) as in (3). The method of percentiles (MoP) based analogs of location, scale, skew function, and kurtosis function associated with X are respectively defined by the median (ρ1), inter-decile range (ρ2), left-right tail-weight ratio (ρ3, a skew function), and tail-weight factor (ρ4, a kurtosis function), given as [14, pp. 154-155]

ρ1 = q(0.50)  (4)
ρ2 = q(0.90) − q(0.10)  (5)
ρ3 = (q(0.50) − q(0.10)) / (q(0.90) − q(0.50))  (6)
ρ4 = (q(0.75) − q(0.25)) / ρ2  (7)

where q(u) at u = p in (4)-(7) is the (100×p)th percentile with p ∈ (0, 1). Substituting the appropriate values of u into the quantile (percentile) function q(u) in (3) and simplifying (4)-(7) yields the following MoP-based system of equations associated with LL distributions:

ρ1 = e^(−γ/δ)  (8)
ρ2 = e^(−γ/δ) (9^(1/δ) − 9^(−1/δ))  (9)
ρ3 = 9^(−1/δ)  (10)
ρ4 = 1 / (3^(1/δ) + 3^(−1/δ))  (11)

The parameters of median (ρ1), inter-decile range (ρ2), left-right tail-weight ratio (ρ3), and tail-weight factor (ρ4) for the LL distribution are bounded as

ρ1 > 0, ρ2 > 0, 0 < ρ3 < 1, 0 < ρ4 < 1/2  (12)

where ρ3 = 1 and ρ4 = 1/2 are the limiting values when δ → ∞. For a sample (X1, X2, ..., Xn) of size n, let X(1) ≤ X(2) ≤ ... ≤ X(n) denote the order statistics. Let q̂(u) at u = p be the (100 × p)th percentile from this sample, where p ∈ (0, 1). Then, it can be expressed as [14, p. 154]

q̂(u) at u = p: X(i) + (a/b)(X(i+1) − X(i))  (13)

where i is a positive integer and a/b is a proper fraction such that (n + 1)p = i + (a/b). For a sample of data with size n, the MoP-based estimators ρ̂1-ρ̂4 of ρ1-ρ4 can be obtained in two steps: (a) use (13) to compute the values of the 10th, 25th, 50th, 75th, and 90th percentiles and (b) substitute these percentiles into (4)-(7) to obtain the sample estimators ρ̂1-ρ̂4 of ρ1-ρ4.
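The two-step estimation procedure just described is easy to make concrete. The Python sketch below is a minimal rendering of the percentile estimator in (13) and the MoP sample statistics in (4)-(7); the equation forms are our reconstructions from the surrounding text (the original display equations did not survive extraction), and the function names are ours.

```python
import numpy as np

def percentile_kd(x, p):
    # (100*p)th percentile per eq. (13): with (n+1)*p = i + a/b, interpolate
    # between the i-th and (i+1)-th order statistics X(i) and X(i+1).
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    g = (n + 1) * p           # equals i + a/b in the paper's notation
    i = int(np.floor(g))
    frac = g - i
    if i < 1:                 # below the first order statistic
        return xs[0]
    if i >= n:                # above the last order statistic
        return xs[-1]
    return xs[i - 1] + frac * (xs[i] - xs[i - 1])

def mop_estimates(x):
    # Sample MoP statistics (rho1..rho4) per eqs. (4)-(7).
    q = {p: percentile_kd(x, p) for p in (0.10, 0.25, 0.50, 0.75, 0.90)}
    rho1 = q[0.50]
    rho2 = q[0.90] - q[0.10]
    rho3 = (q[0.50] - q[0.10]) / (q[0.90] - q[0.50])
    rho4 = (q[0.75] - q[0.25]) / rho2
    return rho1, rho2, rho3, rho4
```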
See Section 3 for an example demonstrating this methodology. Figure 1 (panel A) displays the region of possible combinations of ρ3 and ρ4 for the MoP-based LL distributions.

Fitting Empirical Distributions

Provided in Figure 2 and Table 1 is an example demonstrating the advantages of the MoP-based fit of LL distributions over the MoLM- and MoM-based fits in the context of fitting empirical distributions (i.e., real-world data). Specifically, Fig. 2 displays the MoP-, MoLM-, and MoM-based pdfs of LL distributions superimposed on the histogram of total hospital charges (in US dollars) of 12,145 heart attack patients discharged from all hospitals in the state of New York in 1993. These data were also used in [17] and can be accessed from the website http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_AMI_NY_1993_HeartAttacks. The estimates (ρ̂1-ρ̂4) of the median, inter-decile range, left-right tail-weight ratio, and tail-weight factor (ρ1-ρ4) were computed from the total hospital charges data in two steps: (a) obtain the values of the 10th, 25th, 50th, 75th, and 90th percentiles using (13) and (b) substitute these percentile values into (4)-(7) to compute the estimates ρ̂1-ρ̂4. The parameter values of γ and δ associated with the MoP-based LL distribution were determined by solving (9) and (10) after substituting the estimates ρ̂2 and ρ̂3 into the right-hand sides of (9) and (10). The solved values of γ and δ can be used in (8) and (11), respectively, to compute the parameter values of the median (ρ1) and tail-weight factor (ρ4). The MoP-based fit was obtained by using a linear transformation of the form x = q(u) + (ρ̂1 − ρ1).

Fitting Theoretical Distributions

Provided in Figure 3 is an example demonstrating the advantages of the MoP-based fit of LL distributions over the MoLM- and MoM-based fits in the context of fitting the Dagum distribution with shape parameters p = 2 and a = 5 and scale parameter b = 4. See [12] for a comparison of MoLM- and MoM-based fits of Dagum distributions. The values of ρ1-ρ4 associated with the Dagum distribution were computed using (4)-(7), where the quantile function q(u) of the Dagum distribution was used. The parameter values of γ and δ associated with the MoP-based LL distribution were determined by solving (9) and (10) after substituting the values of ρ2 and ρ3 of the Dagum distribution into the right-hand sides of (9) and (10). These values of γ and δ can be used in (8) and (11), respectively, to compute the parameter values of ρ1 and ρ4 associated with the LL distribution. The MoP-based fit was obtained by using a linear transformation x = q(u) + (ρ̃1 − ρ1), where ρ̃1 is the median of the Dagum distribution. The values of λ1, λ2, τ3, and τ4 associated with the Dagum distribution were computed using (18) and (14)-(17) and using the formulae for τ3 and τ4 from Section 2.2. The parameter values of γ and δ associated with the MoLM-based LL distribution were determined by solving (24) and (25) after substituting the values of λ2 and τ3 of the Dagum distribution into the right-hand sides of (24) and (25). These values of γ and δ can be used in (23) and (26), respectively, to compute the parameter values of λ1 and τ4 associated with the LL distribution. The MoLM-based fit was obtained by using a linear transformation x = q(u) + (λ̃1 − λ1), where λ̃1 is the L-mean of the Dagum distribution.
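One convenient consequence of the reconstructed system (8)-(11) is that the two fitting equations solve in closed form: (10) gives δ = ln 9 / ln(1/ρ3), and substituting δ into (9) gives γ = −δ ln[ρ2 / (9^(1/δ) − 9^(−1/δ))]. A hedged Python sketch of this fitting step follows; it assumes the reconstructed equations above, and the check values are arbitrary.

```python
import numpy as np

def fit_ll_mop(rho2, rho3):
    # Solve the reconstructed MoP system for (gamma, delta):
    #   eq. (10): rho3 = 9**(-1/delta)  ->  delta = ln(9) / ln(1/rho3)
    #   eq. (9):  rho2 = exp(-gamma/delta) * (9**(1/delta) - 9**(-1/delta))
    delta = np.log(9.0) / np.log(1.0 / rho3)
    spread = 9.0 ** (1.0 / delta) - 9.0 ** (-1.0 / delta)
    gamma = -delta * np.log(rho2 / spread)
    return gamma, delta

# Round-trip check with arbitrary parameters (should recover gamma=0.5, delta=4):
g, d = 0.5, 4.0
rho2 = np.exp(-g / d) * (9.0 ** (1 / d) - 9.0 ** (-1 / d))
rho3 = 9.0 ** (-1.0 / d)
print(fit_ll_mop(rho2, rho3))
```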
The values of µ, σ, α3, and α4 associated with the Dagum distribution were computed using (28), the formulae of the mean and standard deviation, and (30).

Discussion and Conclusion

One of the advantages of the MoP- and MoLM-based procedures over the traditional MoM-based procedure is that the distributions characterized through the former procedures can provide better fits to real-world data and some theoretical distributions [8-17]. In the case of LL distributions, inspection of Figures 2 and 3 indicates that the MoP- and MoLM-based procedures provide better fits than the MoM-based procedure in the context of fitting both real-world data and theoretical distributions. Furthermore, the Euclidean distances related to the MoP- and MoLM-based fits in Tables 1 and 2 are substantially smaller than those associated with the MoM-based fits. For example, inspection of Table 1 indicates that d = 0.0151 associated with the MoP-based fit of the LL distribution is approximately one-fifth of d = 0.0735 associated with the MoM-based fit of the LL distribution over the total hospital charges data in Fig. 2. Similarly, d = 0.0239 associated with the MoLM-based fit is approximately one-third of d = 0.0735 associated with the MoM-based fit. The MoP-based estimators can be far less biased and less dispersed than the MoM-based estimators when distributions with larger departure from normality are involved [14-17]. The MoLM-based estimators can also be far less biased and less dispersed than the MoM-based estimators when sampling is from distributions with more severe departures from normality [8-13, 18-22]. Inspection of the simulation results in Tables 3-5 clearly indicates that, in the context of LL distributions, the MoP- and MoLM-based estimators are superior to their MoM-based counterparts for the estimators of third- and fourth-order parameters. That is, the superiority that MoP-based estimators of left-right tail-weight ratio (ρ3) and tail-weight factor (ρ4) and MoLM-based estimators of L-skew (τ3) and L-kurtosis (τ4) have over their corresponding MoM-based estimators of skew (α3) and kurtosis (α4) is clearly evident. For example, with samples of size n = 25 the estimates of α3 and α4 for the LL distribution in Fig. 3C were, on average, only 36.63% and 3.66% of their respective parameters, whereas the estimates of ρ3 and ρ4 for the LL distribution in Fig. 3A were, on average, 105.07% and 90.60% of their respective parameters, and the estimates of L-skew and L-kurtosis for the LL distribution in Fig. 3B were, on average, 91% and 93.73% of their respective parameters. From inspection of Tables 3-5, it is also evident that MoP-based estimators of ρ3 and ρ4 and MoLM-based estimators of τ3 and τ4 are more efficient estimators, as their relative standard errors, RSE = (SE/Estimate) × 100, are considerably smaller than those associated with the MoM-based estimators of α3 and α4. For example, inspection of Tables 3-5 for n = 500 indicates RSE measures of RSE(ρ3) = 0.07% and RSE(ρ4) = 0.04% for the LL distribution in Fig. 3A, compared with RSE(τ3) = 0.09% and RSE(τ4) = 0.08% for the LL distribution in Fig. 3B and RSE(α3) = 0.36% and RSE(α4) = 1.03% for the LL distribution in Fig. 3C. Thus, MoP-based estimators of ρ3 and ρ4 have about the same degree of precision as the MoLM-based estimators of τ3 and τ4, whereas both MoP- and MoLM-based estimators have substantially higher precision than the MoM-based estimators of α3 and α4.
In the context of LL distributions, the MoM-based procedure involves solving (33) and (34) for the parameters γ and δ after the given values (or estimates) of the standard deviation (σ) and skew (α3) are substituted into the right-hand sides of (33) and (34). The solved values of γ and δ can then be substituted into (32) and (35), respectively, to compute the values of the mean and kurtosis.
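Equations (32)-(35) did not survive extraction in this copy, but given the quantile function reconstructed earlier, the MoM quantities follow from the raw moments E[X^r] = ∫₀¹ q(u)^r du = e^(−rγ/δ) (rπ/δ) / sin(rπ/δ), which is finite only for δ > r (hence the paper's δ > 3 and δ > 4 conditions for skew and kurtosis). The Python sketch below is our own construction under those assumptions; note that kurtosis is taken here as the standardized fourth central moment, which differs by 3 from an excess-kurtosis convention.

```python
import numpy as np

def ll_raw_moment(r, gamma, delta):
    # E[X^r] = exp(-r*gamma/delta) * (r*pi/delta) / sin(r*pi/delta),
    # obtained from E[X^r] = integral_0^1 q(u)^r du via the Beta integral;
    # finite only when delta > r.
    if delta <= r:
        raise ValueError("E[X^r] diverges unless delta > r")
    t = r * np.pi / delta
    return np.exp(-r * gamma / delta) * t / np.sin(t)

def ll_mom_indices(gamma, delta):
    # Mean, standard deviation, skew (alpha3), and kurtosis (alpha4), with
    # alpha4 as the standardized fourth central moment (not excess kurtosis).
    m1, m2, m3, m4 = (ll_raw_moment(r, gamma, delta) for r in (1, 2, 3, 4))
    mu = m1
    sigma = np.sqrt(m2 - mu**2)
    alpha3 = (m3 - 3*mu*m2 + 2*mu**3) / sigma**3
    alpha4 = (m4 - 4*mu*m3 + 6*mu**2*m2 - 3*mu**4) / sigma**4
    return mu, sigma, alpha3, alpha4

# Requires delta > 4 for all four indices, mirroring the paper's restriction.
print(ll_mom_indices(gamma=0.5, delta=5.0))
```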
2018-12-13T11:24:11.181Z
2017-01-25T00:00:00.000
{ "year": 2017, "sha1": "c1334ce309388d84767e33290aea2ebcd6bde865", "oa_license": "CCBY", "oa_url": "https://rc.library.uta.edu/uta-ir/bitstream/10106/26352/1/Characterizing%20Log-Logistic.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "64d1e249ccd06e7d40466a9a48b51aeaed947705", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
233487990
pes2o/s2orc
v3-fos-license
Rational use of PPE and preventing PPE related skin damage On 31st December 2019, an outbreak of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was declared in Wuhan, China. In India, a Janata curfew was observed on 22nd March 2020, followed on 24th March by a nationwide lockdown of 21 days. As the pandemic developed and spread across continents, everyone, including policy makers, realized the shortage of personal protective equipment (PPE) such as N95 respirators, coveralls, and face shields. This is one of the major factors putting healthcare workers at risk, not only of infection but also of the various side effects of prolonged PPE use. Based on international experiences and new ideas in procurement and mass manufacturing, rational use of PPE is the need of the hour, especially for developing nations that lack adequate resources and infrastructure for manufacturing PPE.

Introduction

Types and Components of PPE

Broadly, there are two types of PPE: (i) standard PPE and (ii) customized PPE. Standard components of PPE are face shields, goggles, masks, gloves, coveralls/gowns (with or without aprons), head covers/surgical caps, and shoe covers. Customized PPE is recommended by the CDC when healthcare systems become stressed and enter contingency mode. [3] This may be an alternative to what is known as jugaad innovation. During a crisis, the CDC even recommends the use of disposable aprons and laboratory coats as alternatives to gowns, and cloth face masks and the reuse of medical masks as alternatives to single-use masks. [4] During a crisis, alternate sources of manufacturing customized gowns using synthetic raw materials (e.g., polyester) should be explored. Fabrics can be engineered to achieve desired properties after chemical or physical treatments. Reusable gowns made of 100% polyester or polyester/cotton are a viable option when demand is unpredictable and its end is not known. [5] Face shields and goggles A face shield provides barrier protection to the eyes, nose, and lips. The face shield should be made of clear plastic and provide good visibility. It should have an adjustable band to fit snugly against the forehead and should preferably be fog resistant. It should be made of reusable material that can be disinfected without losing its functionality. Goggles should be made of transparent glass and covered from all sides. They should have vent valves and be able to accommodate prescription glasses. They should be made of reusable material that can be disinfected without losing its functionality. Mask The type of mask to be used depends upon the risk profile and category of the personnel. The two categories of mask recommended for COVID-19 are the triple-layer surgical mask and the N95 respirator, depending upon the risk involved. The N95 respirator should ensure quality compliance and preferably be NIOSH N95, EN 149 FFP2, or equivalent. Gloves Nitrile gloves are preferred over latex gloves because they are chemical resistant. Non-powdered latex gloves are preferred to powdered gloves if nitrile gloves are not available. Coverall/Gowns Coveralls provide 360° protection from top to bottom, protecting the torso, back, lower legs, head, and sometimes feet of the healthcare worker. Coveralls/gowns should be made of fluid- or blood-impervious fabric. Shoe cover Shoe covers should be made of the same water-impervious fabric as the coverall and should preferably reach up to mid-calf. Head cover A head cover protects the hair and scalp from possible exposure. Coveralls usually have a head cover, also known as a hood.
Those using gowns should use a separate head cover.

When to use which PPE?

When to use which PPE depends upon whether the PPE is being used as a standard precaution or as an expanded isolation precaution. Standard precautions These were previously known as universal precautions. Gloves, gowns, masks, and goggles or a face shield are used as standard precautions depending upon the level of exposure and the body part being exposed. Expanded isolation precautions In some instances, healthcare personnel are required to wear PPE where contact, droplet, or airborne infection is anticipated. Contact precautions require gloves and a gown for contact with the patient. Droplet precautions require the use of a surgical mask within 3 feet of the patient and a respirator if within 1 foot, and airborne infection isolation requires that only a respirator (N95 mask) be worn. Level 1 PPE For standard infection control precautions, this includes a disposable gown and disposable gloves. If a risk of spraying or splashing is anticipated, a surgical mask and face shield/goggles are recommended. Level 2 PPE For direct/indirect contact precautions, droplet precautions, or airborne precautions, this includes a fluid-resistant disposable gown and disposable gloves. If a risk of spraying or splashing is anticipated, a surgical mask and face shield/goggles are recommended. A head cover and N95 respirator are to be considered in cases of airborne infection. Level 3 PPE For enhanced precautions for suspected or confirmed infectious diseases of high consequence that spread by direct/indirect contact or by the airborne route, this includes a fluid-resistant coverall with hood or a long-sleeved gown with a disposable fluid-resistant hood, an N95 mask, a face shield, 2 sets of gloves, and shoe covers.

Recommendations for appropriate use of PPE

At AIIMS Jodhpur, we follow the recommendations defined by the WHO and MOHFW for the use of PPE in COVID and non-COVID areas. [6-8] We have designed our own customized coverall and gown. This customized PPE consists of a full inner coverall with hood and an additional outer gown (giving double-layer protection) and shoe covers [Figure 1]. The customized PPE is made of water-impervious polyester fabric with a coating on one side to make it water impervious. This fabric has also been approved for reuse by the Centers for Disease Control and Prevention, USA. [9] This material is available in the market and can be manufactured by any local textile manufacturer. At places where this customized PPE is not available, a water-impervious, quality-checked coverall is to be used. The lists of the various components of PPE being used at AIIMS Jodhpur in various COVID and non-COVID areas are given in Tables 1-3. PPE is not an alternative to basic preventive public health measures such as hand hygiene and respiratory etiquette, which must be followed at all times.

Cost-effectiveness analysis

The customized PPE used at AIIMS Jodhpur costs Rs. 850/unit, which includes a coverall, gown, shoe covers, and a face shield. Imported PPE costs Rs. 1,500/unit, which includes a coverall and shoe covers; the face shield has to be purchased separately and costs around Rs. 300/unit, so the total cost of imported PPE for the end user is approximately Rs. 1,800/unit. Figure 2 shows the cost-effectiveness analysis of imported and customized PPE. It is clearly seen that customized PPE is more cost-effective than imported PPE (a brief worked comparison is given at the end of this passage).

Ethical considerations for rational use of PPE

During a pandemic like COVID-19, the method of allocating PPE should be collaborative, transparent, equitable, and accountable.
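As a quick worked check of the cost figures just given (all unit prices are from the cost-effectiveness paragraph above; the procurement volume is an illustrative assumption, not a figure from the article): the per-unit saving of customized over imported PPE is Rs. 1,800 − Rs. 850 = Rs. 950, i.e., customized PPE is about 950/1,800 ≈ 53% cheaper per unit; for an illustrative procurement of 10,000 units, the saving would be roughly Rs. 95 lakh (Rs. 9.5 million).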
The Centers for Disease Control and Prevention (CDC) issued statements regarding the distribution of vaccines and ventilators during the influenza pandemics of 2007 and 2011. [10,11] Similar ethical considerations can be applied while allocating different types of PPE to healthcare providers. Considering the shortage of PPE, hospitals must implement policies that are scientific and ethical when allocating these scarce resources. [10-12] Utilitarian approach This approach considers protecting those clinicians who are best able to save the largest number of patients. Hospitals should avoid elective surgeries and work in teams while operating and visiting wards so that a minimum number of people from one department is exposed at a given time. Sickest first This approach is routinely used to triage patients for treatment; applied here, PPE allocation would be prioritized to those healthcare workers who are treating patients who are most likely to recover. Social worth This principle is usually not accepted, but in absolutely necessary, limited circumstances it can be invoked. The social worth principle refers to a patient's overall worth to society. Multiplier effect This principle, also known as instrumental value, refers to an individual's ability to carry out a function that is essential to prevent social disintegration. This principle prioritizes those healthcare workers who have the ability to save more lives, which achieves a multiplier effect in society. Principle of reciprocity Giving priority to those who put themselves at risk during a severe pandemic.

Table: reuse and disinfection of PPE components. [Item not recovered] - disinfect using sodium hypochlorite 0.1% followed by rinsing with clean water, or by cleaning with 70% alcohol wipes. Face shield - reuse recommended, but not after aerosol-generating procedures; disinfect using sodium hypochlorite 0.1% followed by rinsing with clean water, or by cleaning with 70% alcohol wipes. Gloves - reuse not recommended.

Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
2021-05-04T13:40:25.491Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "fe52570725134230ca12d50fc7039188f04b04b9", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/jfmpc.jfmpc_1772_20", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "dff6eae9b2de16d36e1998438748ea65691a37d0", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
244073388
pes2o/s2orc
v3-fos-license
Threshold space for prevention and control of COVID-19 exposed environment The dwelling, as an alternative place to treat and isolate confirmed positive or asymptomatic COVID-19 patients, has become essential. However, it is necessary to ensure no physical contact between dwelling users, since COVID-19 can be transmitted through droplets. Preventing and controlling transmission is achieved by inserting transitional space between users, activities, or programs. The idea of transitional space is derived from Rumoh Aceh, an adaptive vernacular design that provides a boundary between public and private zones to limit access by strangers. This paper aims to translate the space configurations of Rumoh Aceh, as local wisdom, to break the chain of COVID-19 transmission by making transitional space: a separation between confirmed positive or asymptomatic people and healthy people. The data is obtained from the observation of three Rumoh Aceh in Banda Aceh and Aceh Besar. The space configurations are translated into five types of threshold space: promoting social distancing between users, providing cleaning space for personal hygiene, giving an atmosphere for self-isolation, having natural ventilation features, and exposing users to daylight. These types are then explored and adapted in a contemporary dwelling design. As a final translation and exploration, this paper provides strategies and design recommendations for threshold space in contemporary dwelling design. The strategies and recommendations are explored and adapted in a 60 square meter house plan.

Introduction

The COVID-19 outbreak has spread and become local transmission within family clusters. The number of confirmed positive cases keeps increasing, while healthcare facilities cannot provide adequate service. Therefore, the house as an alternative place for self-isolation becomes essential, especially for asymptomatic patients. Since COVID-19 can be transmitted through droplets, it is necessary to ensure no physical contact between dwelling users. Personal hygiene and self-isolation are the important keys to preventing transmission and to separating positive or asymptomatic people from healthy people during incubation. Activities and other layers occurring in the domestic environment should then be adapted and transformed to protect inhabitants from threats [1]. On the other hand, local wisdom from vernacular architecture in Indonesia has provided design solutions for the prevention and control of pandemics in traditional houses for years. One such piece of local wisdom is the transition that occurs because of the zoning in traditional houses. In this study, Rumoh Aceh, the Acehnese traditional house, is observed and translated, because the architectural and structural model of Rumoh Aceh is an adaptive design [2]. At first, transitional activity and space in Rumoh Aceh existed to give a boundary between public and private zones and to limit access by strangers. The adaptive design of Rumoh Aceh for social distancing, personal hygiene, daylight, airflow, and so on is here adapted and transformed in response to the need for a resilient interior in modern dwellings. This paper aims to translate the local wisdom of Rumoh Aceh in the context of breaking the disease transmission of COVID-19 by making the transition: a separation between affected and healthy people. Finally, the translation results are proposed for a healthy and resilient contemporary dwelling design.
This paper also challenges the study of Suryandari [3], which analyzes spatial relation layouts learned from the outdoor layouts of Chinese and Javanese yards. Suryandari proposed that, for the prevention of disease transmission, there are three layout typologies of program combination (indoor, transition, outdoor) applied to the modern dwelling according to variables such as disinfection, cleaning, natural lighting, isolation, health worker circulation, and social distancing [3]. Instead, this paper proposes a new way to translate and implement local wisdom in contemporary dwelling design.

Threshold as material and experience for healthy space

Thresholds are transitional situations and experienced spaces that demarcate different conditions [4]. In the context of materiality, thresholds exist as physical architectural elements such as doors, gates, portals, bridges, porches, and so on [4]. Subjectively, a threshold is a space experienced as movement between different spatial qualities [4]. As material, the threshold functions to block, to limit, and to ensure the boundary of the in-between space. As space, the threshold elicits transitional activities that minimize disease transmission through physical contact. These activities can include cleaning, changing, and interaction. A study by Emmanuel [5] reveals four strategies for infection prevention and control in healthcare facilities: social distancing, natural ventilation, daylight, and materials. These strategies can be applied to threshold space design and configuration. Since the COVID-19 virus is transmitted through droplets and physical contact, providing enough space to keep users at least 100 cm apart supports social distancing, including during circulation and waiting [5]. Space that is exposed to daylight and natural airflow restricts threat transmission [5]. Threshold space design should also consider the materials used, their specification, and the treatment of surfaces, since the COVID-19 virus persists differently depending on the material [5]. The COVID-19 virus remains viable on plastic and stainless-steel surfaces for up to 3 days, longer than in aerosols or on copper and cardboard (less than 24 hours) [6]. To briefly summarize, threshold space for infection prevention and control of COVID-19 is an experienced and material in-between space with four qualities and configurations for its users: supporting social distancing, providing natural ventilation, exposing users to daylight, and using materials whose surfaces are treated against COVID-19.

Methods

The study used a qualitative method that relied on primary and secondary data. The primary data was obtained from observation of the space configurations and materials of Rumoh Aceh. The secondary data was obtained from a literature study to get information about strategies for healthy and safe space, as well as information about other local wisdom in Rumoh Aceh. Field observations were conducted in three Rumoh Aceh located in Aceh Besar and Banda Aceh, owned by Mrs. Mahyuni in Lambunot Village, Aceh Besar; Mrs. Hindun binti Abdul Qodir in Peurada Village, Banda Aceh; and Mr. Sulaiman Abda in Tibang Village, Banda Aceh. Data from observation are displayed in images, diagrams, and descriptive texts. The data from the literature study is coded based on the information required. The data obtained was
Analysis of the data using an interpretive technique to translate the idea of the application in Rumoh Aceh to a contemporary design. The configurations of space and adaptive use of these Rumoh Aceh are translated into ideas to form threshold design exploration. The translation was displayed in the table to simplify the keywords. The exploration design is based on the definition by Atmodiwirjo [4] and the strategies of Emmanuel's study [5] applied in contemporary dwelling design. This study applied the translation into a dwelling model of a healthy and safe interior. Translation of threshold space in Rumoh Aceh Rumoh Aceh is a stilted house that has ground level or underneath space and upper level (rumoh). In the interior zoning of the upper level, Rumoh Aceh is divided into three areas, seuramoe agam/seuramoe keu, seuramoe teungoh/dalam and seuramoe inong/seuramoe likot [2]-[7]- [8]. This section explores the area of Rumoh Aceh that provides transitional space according to the definition explained in the previous section. The Rumoh Aceh observed are owned by Mrs. Hindun binti Abdul Qodir, Mr. Sulaiman Abda and Mrs. Mahyuni. As seen in the figures below (figure 2a-f), even though these Rumoh Aceh have been adapted according to its users' life cycle, it still preserved the original type of main configurations. Since the upper level of Rumoh Aceh owned by Mrs. Hindun binti Abdul Qodir and Mr. Sulaiman Abda still has a different elevation ( figure 3a-b), it can be a boundary of threshold space as experienced space. There are eight zoning areas in Rumoh Aceh that have transitional activities and spaces (figure 4-11). The transitional activities and spaces occur in the underneath level (rumoh yub), cleaning space, stair, hallway (seulasa), seuramoe keu, seuramoe teungoh or kama, rambat and seuramoe likot. The underneath or rumoh yub area is an open space which is the most desirable space [2], flexible and multifunctional public space [8]. Even though the majority of people assemble in this area, the minimum 600-900 centimeters wide and long and 250-300 centimeters high provides adequate social distancing and natural ventilation. Hence, rumoh yub is the type of transitional space that supports social distancing and ensures airflow. The space which is occupied mainly during the day [2] is a transitional experience to strangers before entering Rumoh Aceh (upper level). Mahyuni. Rumoh Aceh owner provides a cleaning space next to the stairs [8]. In this area, there is a clay jar as a water reservoir, a wood stick to hook a dipper, and composed stone as floor. Thus, everyone has cleaned his or her hands and feet before entering the house. This space includes transitional activities such as cleaning personal stuff and body, and mechanism from contaminated conditions from outside into hygienic conditions. Only Rumoh Aceh owned by Mr. Sulaiman Abda still has a cleaning space put next to one of the entrance stairs. This space also provides experience transition with gravel in the bottom layer. Stairs in Rumoh Aceh function as social control and a threshold to strangers [9]. Then, some types of Rumoh Aceh provide 100 centimeters wide hallways [8] that connect stairs to the seuramoe keu entrance as the foyer. Hallway as foyer can be a waiting space and separate activities between man, guest, and woman. In Rumoh Aceh owned by Mrs. Hindun binti Abdul Qodir, there is a door as a physical architecture element that separates the stair area and hallway. 
All the hallways in the observed Rumoh Aceh are open spaces that receive natural air circulation and daylight exposure. Both stair and hallway give a material transition to users. The seuramoe agam or seuramoe keu area functions as a public space [2,7,9] or semi-public space [8] for men's (agam) and private guests' activities. The seuramoe agam area separates activities between man and woman and between owner and private guests in Rumoh Aceh [7]. Seuramoe agam is also the last zone in which a non-owner is allowed to stay, so it indirectly inhibits private guests from entering private areas such as seuramoe teungoh and seuramoe likot [7]. These characteristics minimize transmission by physical contact. This area is an experience and material transition. The seuramoe teungoh [8] or dalam [2] or tungai/kama [9] or seuramoe inong [8,9] area is a bedroom for parents and/or daughters [7]. The bedroom is located in a West-East orientation so it gets natural ventilation and exposure to daylight. This private zone has the highest hierarchy in Rumoh Aceh and allows the owner to self-isolate. In this area, rambat becomes a space for circulation and separation. Rambat not only connects seuramoe keu and seuramoe likot but also separates the two bedrooms. Mrs. Mahyuni combined seuramoe keu with one of the dalam (bedrooms) to make a wider seuramoe. Both seuramoe teungoh and rambat provide experience and material transition. Mr. Sulaiman Abda does not connect seuramoe keu and seuramoe likot with rambat but with seulasa, or a hallway. As rumoh yub, the underneath of Rumoh Aceh, functions as a communal kitchen [2], Acehnese people use seuramoe likot as a private kitchen and a women-only area [7]. The Rumoh Aceh owned by Mrs. Hindun binti Abdul Qodir has private access from the hallway to seuramoe likot. This characteristic of threshold space separates two circulations: hallway-seuramoe keu and hallway-seuramoe likot. It can be the threshold between a suspect and a healthy person. All of the observed Rumoh Aceh have other access to seuramoe likot through back stairs. This condition can allow seuramoe likot to serve as a self-isolation zone if it is separated from the kitchen. Zoning in Rumoh Aceh reveals distinct transitional activities that can be translated and adapted into the spatial configurations or programming of contemporary dwellings. In addition, Rumoh Aceh has a movable structural system, allowing it to be moved from one place to another [10] and allowing spaces to be combined [2]. Thus, the movable structural elements of Rumoh Aceh can be explored either to form new configurations or to merge spaces into new threshold space. Table 1 below summarizes the translation of the transitional characteristics of threshold space in Rumoh Aceh.

Exploration of threshold in the contemporary dwelling

The COVID-19 pandemic provides lessons on how spatial configurations can be altered in response to the need for protection [11]. Architects should rethink dwelling configurations to allow users to carry out daily activities and, at the same time, to self-isolate securely, for instance by inserting a cleaning space between the garage and the living room, or between the living room and a bedroom. Domestic activities are forced to adapt as the COVID-19 outbreak prohibits contact with the threatening world [1].

Figure 12. Threshold as the transitional state between the threatening outdoor and the safe indoor.

As figure 12 illustrates, the occurrence of the threshold becomes essential as a transition between the dangerous zone and the healthy zone. In an ideal context, the outdoor is threatening and the indoor should be a safe zone.
Threshold space is a buffer between these zones, especially during pandemics. This section explores the translation of the existing thresholds of Rumoh Aceh into contemporary dwelling design. Based on the translation of transitional space in Rumoh Aceh, there are five types of threshold space that can be adapted into contemporary dwellings. Type A is a threshold space that promotes social distancing between users. Type B is a threshold space that provides a cleaning space for personal hygiene and changing. Type C is a threshold space that gives an atmosphere for self-isolation. Natural ventilation is the feature of threshold space type D, and daylight exposure is the feature of threshold space type E. These types and their relations to other spaces are illustrated in the diagrams below.

Model

In this section, this paper attempts to 'insert' the explored threshold spaces into a contemporary dwelling design. The designed dwelling is a 60-square-meter house plan, a domestic design with enough space for 4-6 users. The explored dwelling design has a large courtyard, a cleaning space, a kitchen, a dining/living room, three bedrooms, and two bathrooms. Figure 14 shows the configuration of these programs.

The stairs, seulasa, and rambat, as experiential and material transitions, are adapted into a hallway and a dining/living room (figure 15, right). The hallway separates the outdoor (courtyard) from the indoor (cleaning space and kitchen) for strangers. In addition, the dining/living room gives users an experiential and material boundary before they enter the bedrooms (private area).

Figure 16. The mechanism of type B.

Type B is inserted into the dwelling design as a cleaning space, placed between the outdoor and the kitchen (indoor). It has spaces for personal hygiene (washing hands and showering) and changing clothes. This programming encourages users to change from the contaminated condition outside into a hygienic condition.

Three scenarios occur when exploring types C, D, and E as threshold spaces for self-isolation. The designed bedrooms support self-isolation by giving confirmed positive or asymptomatic people access to a toilet, providing natural ventilation, and exposing users to daylight. Green zones are the material transitions that separate the incubation space from the safe zone of healthy people.

Conclusions

Threshold space is part of the local wisdom of Rumoh Aceh, an adaptive house model that provides design mechanisms. These mechanisms can help dwelling users prevent and control pandemics through five types of threshold: separating threatening activities from safe zones, allowing natural ventilation and daylight, promoting social distancing and personal hygiene, and supporting self-isolation. This paper showed that the vernacular house in Aceh has its own genius loci, sustained for years, that can be adapted to contemporary design. It demonstrates that local wisdom translated from vernacular houses cannot be ignored. This study contributes to identifying spatial strategies drawn from the local wisdom of vernacular houses, which has not yet been a concern in addressing virus spread in the pandemic era. Further research should be undertaken to measure the natural lighting and ventilation demanded, using quantitative methods, as well as the anthropometric space required. In addition, other dwelling typologies should be explored for 'inserting' threshold space in order to prevent and control the transmission of COVID-19.
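To make the circulation logic of the model concrete, the sketch below encodes the explored plan as a room-adjacency graph and checks the Type B rule: there should be no route from the courtyard into a bedroom that bypasses both controlled entries (the cleaning space and the hallway). The room names and adjacencies are assumptions reconstructed from the description of figures 14-16, not data from the paper.

from collections import deque

# Assumed adjacencies of the 60 m2 plan described above.
adjacency = {
    "courtyard": ["hallway", "cleaning_space"],
    "hallway": ["courtyard", "dining_living"],
    "cleaning_space": ["courtyard", "kitchen"],
    "kitchen": ["cleaning_space", "dining_living"],
    "dining_living": ["hallway", "kitchen", "bedroom_1", "bedroom_2", "bedroom_3"],
    "bedroom_1": ["dining_living"],
    "bedroom_2": ["dining_living"],
    "bedroom_3": ["dining_living"],
}

def reaches_without(start, goal, blocked):
    # Breadth-first search: can `goal` be reached from `start`
    # while treating every room in `blocked` as impassable?
    queue, seen = deque([start]), {start, *blocked}
    while queue:
        room = queue.popleft()
        if room == goal:
            return True
        for nxt in adjacency[room]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# If a bedroom is reachable while both threshold spaces are blocked,
# an uncontrolled route into the private zone exists.
print(reaches_without("courtyard", "bedroom_1",
                      blocked={"cleaning_space", "hallway"}))  # -> False

A False result means the private zone can only be reached through a threshold space; if a renovation added a direct courtyard-to-bedroom door, the check would return True and flag the breach.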
2021-11-13T20:07:24.699Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "766ef9b090ee073d380fd0d7fa7f6384ad8b8b03", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/881/1/012047/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "766ef9b090ee073d380fd0d7fa7f6384ad8b8b03", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
52311289
pes2o/s2orc
v3-fos-license
Oral health status of adult heart transplant recipients in China

Abstract

Limited information is available on the oral health status of adult heart transplant recipients (HTRs), and no data exist for China. A prerequisite dental evaluation is usually recommended for patients after organ transplantation because lifelong immunosuppression may predispose them to the spread of infection. The aim of this study was to investigate the oral health status of Chinese adult HTRs and determine the association between oral health status and a history of heart transplantation (HT). We carried out a cross-sectional study to collect clinical, demographic, socioeconomic, and behavioral data from 81 adult patients who received heart transplantation during 2014 to 2015 in China. Clinical examinations for the presence of dental plaque, dental calculus, dental caries, and periodontal health conditions were performed in a standardized manner by one trained examiner. Sociodemographic, socioeconomic, and behavioral data were self-reported using questionnaires. The prevalence of the above conditions was compared with 63 age- and sex-matched controls. General linear regression analysis was used to assess the associations of the mean number of decayed, missing, and filled teeth (DMFT) and the mean community periodontal index of treatment needs (CPITN) score with a history of heart transplantation. The mean age of the HT group was 47.7 ± 12.2 years, and men accounted for 69.1% of the sample. The overall median DMFT score in the HT group was 3 (1-5) and caries prevalence was 80.2%, similar to the control group (P > .05). The overall mean CPITN score of the HT group was 1.84, significantly higher than that of the control group (1.07, P = .001). Participants in the HT group had worse oral hygiene status and more teeth with probing depth ≥ 4 mm than controls (P = .043). Compared with participants who had no history of heart transplantation, HTRs presented worse periodontal health conditions (mean CPITN score, adjusted odds ratio (OR) = 1.39, 95% confidence interval (CI) = 1.12-1.71, P = .003) and similar dental caries status (DMFT score, adjusted OR = 0.58, 95% CI = 0.37-0.91, P = .058). Periodontal health status was positively associated with a history of heart transplantation in Chinese adult HTRs.

Introduction

Over the past 50 years, organ transplantation has become a widely accepted and successful treatment around the world that has enabled hundreds of thousands of patients to receive optimal therapeutic benefit. [1] Owing to organ donation, a total of 231 organs were transplanted in China during 2013. [2] Because all transplant recipients are under continuous immunosuppressive therapy to prevent chronic transplant rejection, they are more susceptible to the development of systemic complications and are at increased risk of oral and dental infections. Fungal infection, cytomegalovirus infection, gingival hyperplasia (GH), and malignant oral lesions may also arise as a direct result of immunosuppression or drug interactions. [3,4] Bacterial, viral, and fungal infections that arise among heart transplant patients are usually consequences of drug-induced immunosuppression. [5] Routine dental care is important to reduce potential sources of infection during the drug-induced immunosuppression phase of heart transplantation. Several studies have drawn attention to inadequate oral hygiene behavior and increased dental and periodontal disease among patients receiving an organ transplant.
[6-9] Ziebolz et al. [10] revealed that the occurrence of dental caries is similar before and after solid organ transplantation, but organ transplant recipients experience more dental caries than the general population. Oral hygiene in patients with a solid organ transplant has been found to be significantly worse than that of patients on the transplant waiting list. Segura-Saint-Gerons et al. [11] found that Spanish heart transplant recipients had poor self-perceived oral health-related quality of life. However, data on the dental health status of heart transplant recipients (HTRs) are scarce in China. Given the increased number of transplants in recent years, a general dentist is now more likely to encounter a transplant patient who requires special dental care. [1,2,12] Although some studies have been conducted among adults who undergo heart transplantation (HT) in other countries and regions, these studies have mostly focused on GH induced by immunosuppressive therapy and the effect of periodontal treatment on GH. [13-16] However, the complete oral health profile remains unclear in this population. Therefore, it is of great importance to collect oral health information on HTRs. Lifestyle and oral health-related behaviors (e.g., frequency of tooth-brushing, frequency of eating snacks, dental utilization pattern) of HTRs may change after a transplant operation, [17] which may contribute to dental neglect and an increased risk of oral diseases. To obtain more information concerning the dental and periodontal health status of these patients and better understand the impact of a history of heart transplantation on oral health, we conducted a cross-sectional study to describe the oral hygiene, dental, and periodontal status of adult HTRs in China, and to identify potential differences in oral health between HTRs and controls. This information could be used to improve the oral health management of heart transplant patients.

Study design and study population

This cross-sectional study was conducted at Beijing Anzhen Hospital, affiliated to Capital Medical University, from January 2015 to December 2015. A total of 81 HTRs who underwent heart transplantation surgery between January 2014 and September 2015 in the Department of Cardiac Surgery and 63 sex- and age-matched controls were recruited from the physical examination center of Anzhen Hospital. The control participants were neither receiving organ transplants nor under immunosuppressive therapy. We excluded individuals meeting any of the following criteria: <18 years old, pregnant, a history of periodontal treatment in the last 6 months, current serious systemic infection, addiction to alcohol or drugs, or current seizure or any neurological disorder.

Ethics

Ethical approval was obtained from the Ethics Committee of Beijing Anzhen Hospital (Approval No. 2015017) prior to the implementation of the study, and written informed consent was obtained from all study participants.

Data collection

After providing their informed consent, all participants received a comprehensive oral examination and completed a survey using a structured questionnaire. Oral examinations were conducted at the Dental Department of Beijing Anzhen Hospital by a trained dentist in a standardized manner. The Simplified Oral Hygiene Index (OHI-S), comprising the Simplified Debris Index (DI-S) and the Simplified Calculus Index (CI-S), was used to assess the oral hygiene status of study participants.
Dental caries was assessed using the World Health Organization 1997 criteria. [18] The number of decayed (D), missing (M), and filled (F) teeth (DMFT) and caries prevalence were calculated to assess overall caries occurrence. Periodontal health was evaluated using probing depth (PD), bleeding index (BI), and the community periodontal index of treatment needs (CPITN). The CPITN score was recorded for 10 index teeth (17, 16, 11, 26, 27, 37, 36, 31, 46, and 47) using the following criteria: 0 = healthy; 1 = bleeding on probing; 2 = calculus; 3 = PD 4-5 mm; and 4 = PD 6 mm or more. [16] The mean CPITN score was calculated for each participant. GH is characterized by an increase in gingival volume, usually located in the gingival papillae. We documented all occurrences of GH among HTRs in this study. In addition to the oral examination, participants were asked to complete a structured questionnaire covering the following aspects: sociodemographic information, general health status and medication, oral health-related behaviors (e.g., snacking pattern, daily oral hygiene practice, and dental utilization in recent years), and self-perceived value of oral health. HTRs were also queried about the reason for heart transplantation, the date of the transplant, and current immunosuppressive therapy. The self-perceived value of oral health was measured using a questionnaire consisting of 13 items. For each question, participants were asked how they perceived the value of a different aspect of oral health. Responses were recorded on a Likert-type scale (1 = no importance, 2 = little importance, 3 = does not matter, 4 = quite important, 5 = extremely important); 0 indicates no response. The sum of scores for the self-perceived value of oral health ranges from 0 to 65, with higher scores indicating a higher perceived value of oral health. Prior to initiation of the study, the examiner was trained and calibrated against a gold-standard examiner. Oral examinations were repeated for 10% of the sample to assess examiner reliability, with a minimal interval of 2 days between examinations. Cohen's unweighted kappa for caries status at the tooth level and for periodontal conditions was 0.92 and 0.83, respectively.

Statistical analysis

Descriptive analyses were first conducted to compare sociodemographic, oral, and systemic characteristics between the HTRs and controls using the Student t-test (parametric data following a normal distribution), the Mann-Whitney U test (nonparametric data), and the chi-square test (categorical data). To assess the association between a history of HT and oral health, two multivariate regression analyses were conducted. First, a multivariate negative binomial regression was developed to study the association between a history of heart transplantation and dental caries, adjusting for sociodemographic and other confounders (i.e., sex, age, educational attainment, snacking frequency, tooth-brushing frequency, dental visit pattern, self-perceived value of oral health score, and body mass index (BMI)). Second, a multivariate general linear model was developed to examine the association between a history of heart transplantation and periodontal health measured by CPITN score, adjusting for sociodemographic and other confounders (i.e., sex, age, educational attainment, smoking history, diabetes mellitus history, tooth-brushing frequency, dental visit pattern, self-perceived value of oral health score, and BMI).

Results

The study sample consisted of 81 HTRs and 63 controls.
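As a minimal sketch of the scoring just described (an assumed implementation; the study used its own records, not this code), the per-participant mean CPITN can be computed from the codes of the 10 index teeth:

# FDI two-digit notation for the 10 CPITN index teeth listed above.
INDEX_TEETH = [17, 16, 11, 26, 27, 37, 36, 31, 46, 47]

def mean_cpitn(scores):
    # scores maps an index tooth to its CPITN code (0-4);
    # missing or unscorable teeth are simply absent from the dict.
    present = [scores[t] for t in INDEX_TEETH if t in scores]
    if not present:
        raise ValueError("no scorable index teeth")
    return sum(present) / len(present)

# Example participant (invented): calculus on the molars,
# bleeding on probing at the incisors, two pockets of 4-5 mm.
example = {17: 2, 16: 2, 11: 1, 26: 2, 27: 2,
           37: 3, 36: 3, 31: 1, 46: 2, 47: 2}
print(round(mean_cpitn(example), 2))  # -> 2.0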
Demographic, socioeconomic, and clinical characteristics of the study participants are detailed in Table 1. The mean age of the HT group was 47.7 ± 12.2 years, and male participants accounted for 69.1% of the sample. The mean age of the control group was 49.6 ± 13.3 years, and males accounted for 63.5% of the sample. Age and sex distributions were similar in both groups, with no significant differences. Regarding educational level, nearly 80% of participants in the control group had a bachelor's degree, significantly more than the proportion in the HT group (30.9%, P < .001). The annual household income in the control group was also significantly higher than that in the HT group (P < .001).

Hypertension was highly prevalent among all study participants but was more prominent among HTRs. About 42% of HTRs had hypertension, significantly more than the proportion in the control group (17.5%, P < .001). Participants in the HT group were also more likely to have diabetes mellitus (30.9%) than those in the control group (14.3%, P = .006). HTRs had more chronic medical conditions than the controls; a greater proportion of patients in the HT group had 2 or more systemic diseases than in the control group (39.5% vs 14.3%, P = .01). With respect to oral health-related behaviors, significantly more participants in the control group than in the HT group regularly visited a dentist once or twice a year (P = .041). However, there was no statistically significant difference in smoking history, snacking frequency, tooth-brushing frequency, or self-perceived value of oral health score between the 2 groups.

Oral health outcomes are shown in Table 2. Dental caries prevalence did not differ between the groups (80.2% vs 79.4%, P = .896). The DMFT index was also similar in the 2 groups (median = 3 in both, P = .144). HTRs had worse oral hygiene status and more periodontal conditions than the control participants. On average, the DI-S (mean = 0.90) and CI-S (mean = 0.33) in the HT group were significantly higher than the DI-S (mean = 0.61) and CI-S (mean = 0.17) in the control group (P = .029 and P = .005, respectively). The BI, the number of teeth with PD ≥ 4 mm, and the mean CPITN score in the HT group were significantly higher than those in the control group (P = .042, P = .043, and P = .001, respectively). Of 53 HTRs who took cyclosporine A (CsA), 24 (40.7%) presented with GH, significantly more than among HTRs who took tacrolimus (Tac) or other immunosuppressive drugs (0/28, P < .001).

A negative binomial model was chosen to study the relationship between dental caries and a history of heart transplantation (Table 3). For the occurrence of dental caries measured by the DMFT index, after adjusting for sex, age, educational level, tooth-brushing frequency, dental visit pattern, self-perceived value of oral health, and BMI, a history of heart transplantation was not significantly associated with DMFT (P > .05). A general linear model was chosen to study the association between periodontal conditions and a history of heart transplantation (Table 4). For periodontal status measured by CPITN score, multivariable general linear regression analysis showed that a history of heart transplantation was significantly associated with CPITN score (OR = 1.39, 95% CI = 1.12-1.71, P = .003) after adjusting for other factors, including sex, age, educational attainment, snacking frequency, tooth-brushing frequency, dental visit pattern, self-perceived value of oral health score, and BMI.
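For readers who want to reproduce this modeling strategy, the sketch below fits the two models described in the statistical analysis section with statsmodels on synthetic data. The variable names, codings, and the generated data frame are assumptions for illustration; they are not the study's data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the 144-participant data set (invented values).
rng = np.random.default_rng(0)
n = 144
df = pd.DataFrame({
    "dmft": rng.poisson(3, n),                     # count outcome
    "mean_cpitn": rng.normal(1.5, 0.6, n).clip(0, 4),
    "ht_history": rng.integers(0, 2, n),           # 1 = HTR, 0 = control
    "sex": rng.integers(0, 2, n),
    "age": rng.normal(48, 12, n),
    "education": rng.integers(0, 3, n),
    "brushing_freq": rng.integers(1, 3, n),
    "dental_visits": rng.integers(0, 2, n),
    "oral_health_value": rng.integers(20, 65, n),
    "bmi": rng.normal(24, 3, n),
})

covariates = ("ht_history + sex + age + education + brushing_freq + "
              "dental_visits + oral_health_value + bmi")

# DMFT is a count outcome -> negative binomial regression
# (dispersion parameter fixed at the statsmodels default here).
dmft_model = smf.glm(f"dmft ~ {covariates}", data=df,
                     family=sm.families.NegativeBinomial()).fit()

# Mean CPITN is treated as continuous -> general linear model (Gaussian GLM).
cpitn_model = smf.glm(f"mean_cpitn ~ {covariates}", data=df,
                      family=sm.families.Gaussian()).fit()

print(dmft_model.summary())
print(cpitn_model.summary())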
Discussion

This study aimed to describe the oral health status of Chinese adult HTRs and to investigate the associations between a history of heart transplantation and dental caries or periodontal diseases in these individuals. Our findings revealed that periodontal health was significantly associated with a history of heart transplantation, whereas dental caries was not associated with heart transplantation history in the studied participants.

Heart transplantation is known to increase survival in patients with end-stage heart failure, but many patients still experience problems that affect their functional abilities after transplantation. [19] In addition, they experience dramatic physiological and psychological changes as well as a high treatment-related financial burden. High disease burden and intensive medical treatment may also induce oral complications. [20-22] As these factors might affect activities of daily living in HTRs, some daily habits, including oral hygiene and oral health behaviors, may change after transplantation, which may increase the risk of dental and periodontal diseases.

Table 3. Relationships between DMFT scores of participants and selected independent factors, by negative binomial regression (n = 144). Adjusted sociodemographic and other confounders include sex, age, educational attainment, snacking frequency, tooth-brushing frequency, dental visit pattern, self-perceived value of oral health score, and BMI. BMI = body mass index, DMFT = decayed, missing, and filled teeth, HT = heart transplantation.

The present study showed that HTRs had higher DI-S and CI-S scores than control participants, indicating worse oral hygiene among HTRs than in controls. Poor oral hygiene is a shared risk factor for dental caries and periodontal disease. Dental caries was one of the main oral infectious diseases observed among study participants. Dental caries prevalence varies according to age, sex, socioeconomic status, race, geographical location, food habits, and oral hygiene practices. [23] As shown in this study, the HT group had poorer oral hygiene and lower annual household income, and were less likely to visit a dentist regularly, all of which may increase the risk of dental caries. However, the level of dental caries in the HTRs was similar to that in the controls. The present study also showed that the prevalence of dental caries in HTRs was lower than that in people aged 35 to 44 years and 65 to 74 years in China, according to the Third National Oral Health Survey in mainland China. [24] A number of potential confounding factors, such as educational level, tooth-brushing frequency, dental visit pattern, self-perceived value of oral health, and BMI, were considered in the negative binomial regression model, but none of these was significantly associated with dental caries occurrence. This finding suggests that other, unmeasured factors may affect participants' dental caries experience, which requires further investigation in future studies.

HTRs have an increased risk of developing periodontal diseases as a result of long-term immunosuppressive therapy. The most commonly described condition in the literature is GH induced by CsA, with prevalence ranging from 5% to 70% in previous studies. [25,26] The positive relationship between inflammation and GH is widely recognized.
[27,28] In the present study, two-thirds of HTRs routinely received CsA as immunosuppressive therapy, and nearly 40% (24 of 53) of HTRs who took CsA developed some degree of GH. GH greatly affects oral hygiene, chewing ability, and social activities, and increases the accumulation of dental plaque, which can cause periodontal disease. Thus, it is not surprising that the BI score and the number of teeth with PD ≥ 4 mm in HTRs were significantly higher than those in the control group, indicating worse periodontal status among HTRs.

Risk factors such as poor oral hygiene, diabetes, smoking, medication, age, genetics, and stress are related to periodontal diseases. Recent longitudinal studies have suggested that long-term, regular recipients of dental care have better oral health than non-regular recipients. [29-31] In this study, the lack of access to regular dental care may escalate the risk of periodontal diseases in HTRs. Other factors might also contribute to the poor periodontal health of HTRs. Hyperglycemia, glucose intolerance, and diabetes mellitus are the most common metabolic disorders and clinically relevant complications after solid organ transplantation. Diabetes can promote the occurrence, progression, and severity of periodontitis. [32] In the present study, diabetes mellitus was highly prevalent in the HT group, with 30.9% of participants diagnosed with diabetes mellitus, significantly more than in the comparison group (P < .001). This may also have contributed to the worse periodontal conditions among HTRs.

The CPITN has been widely used to measure the level of periodontal disease and define periodontitis. [33] HTRs in our study had significantly higher CPITN scores than control participants, and the periodontal health condition of adult HTRs was positively associated with a history of heart transplantation after adjusting for potential confounding factors such as sex, age, educational attainment, snacking frequency, tooth-brushing frequency, dental visit pattern, self-perceived value of oral health score, and BMI in the final general linear model.

Oral diseases and infection can compromise quality of life in individuals undergoing solid organ transplantation. [34,35] Because a local oral infection may lead to systemic complications, [33,36,37] patients requiring heart transplantation should be carefully evaluated and treated for dental infections, and good oral health status should be maintained during the pre- and postsurgical stages. This poses a challenge for dental professionals owing to the complex medical history of these patients. Because many HTRs are immunologically compromised, it is challenging to individualize dental treatment plans and to determine the appropriate treatment and intensity to maintain health status among HTRs and, thus, their ability to tolerate dental treatment. Management of these patients usually requires comprehensive oral and systemic assessment, careful planning, and good communication between dental providers, medical professionals, and patients.

As a cross-sectional study, this research has several limitations. First, the study includes a small sample size, which may limit its statistical power; future studies with larger sample sizes are needed to support the present findings. Second, this cohort of patients was recruited in a single-center study at our hospital.
Although ours is one of the largest cardiovascular disease hospitals in the country and admits patients from all parts of China, the data described herein cannot be extrapolated to the entire population of Chinese HTRs. Pooling data from national databases on the oral health status of HTRs in multicenter studies would reinforce the generalizability of such findings. Finally, because the present study was a cross-sectional analysis, we did not collect precise information about the pre-transplant oral health status and oral health behavior of HTRs, which might affect their post-transplant oral health status and behavior. Therefore, we were unable to assess how these factors may influence the oral health status of HTRs. Further clinical and basic research studies are required to clarify this issue.

Table 4. Relationships between mean CPITN score of participants and selected independent factors, by general linear regression (n = 144). Adjusted sociodemographic and other confounders include sex, age, educational attainment, snacking frequency, tooth-brushing frequency, dental visit pattern, self-perceived value of oral health score, and body mass index. CPITN = community periodontal index of treatment needs, HT = heart transplantation.

Conclusions

The present study showed that periodontal health was poor among Chinese adult HTRs and that poor periodontal health was associated with a history of heart transplantation. However, the association between dental caries and heart transplantation history remained uncertain. Collectively, these findings highlight the need for better management of the periodontal condition in patients who undergo heart transplantation. These patients should be carefully evaluated and treated for dental infections, and good oral health should be maintained during the pre- and post-transplant stages.
2018-09-24T14:56:15.546Z
2018-09-01T00:00:00.000
{ "year": 2018, "sha1": "909c06132ce94498ce5a7a993210fddc0bfdf41c", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1097/md.0000000000012508", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "909c06132ce94498ce5a7a993210fddc0bfdf41c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244212759
pes2o/s2orc
v3-fos-license
Medication discrepancies in a hospital in Southern Brazil: the importance of medication reconciliation for patient safety

Medication discrepancies are of great concern in hospitals because they pose risks to patients and increase health care costs. The aim of this study was to estimate the prevalence of inconsistent medication prescriptions among adult patients admitted to a hospital in southern Santa Catarina, Brazil. This was a patient safety study of patients recruited between November 2015 and June 2016. The participants were interviewed and had their medical records reviewed. Discrepant medications were those that did not match between the list of medicines taken at home and the drugs prescribed for treatment in the hospital setting. Of the 394 patients included, 98.5% took continuous-use medications at home, with an average of 5.5 medications per patient. Discrepancies totaled 80.2%. The independent variables associated with the discrepancies were systemic arterial hypertension, hypercholesterolemia, vascular disease, the number of medications taken at home, and poor documentation of the medications in the medical record. The findings of this study indicate a high rate of prescription medication discrepancies. Medication reconciliation is crucial in reducing these errors, and pharmacists can help reduce these medication-related errors and the associated risks and complications.

INTRODUCTION

Patient safety in health care is a basic requirement that reflects the quality of care provided within health institutions (WHO, 2003; Parand et al., 2014). Incidents are situations that may result in harm to patients during care provided by health care professionals and are not associated with the patient's underlying disease (WHO, 2015; Pippins et al., 2008; van Melle et al., 2016). Health organizations designate an injury incident, also called an adverse event, as "impairment of body structure or function and impairments thereof, including physical, social or psychological dysfunction, illness, injury, disability or suffering, and death" (WHO, 2003; Runciman et al., 2009; Leistikow et al., 2017; Tejal, Danald, Kaveh, 2016). Adverse drug reactions, medication errors, and drug discrepancies are among the drug-related incidents (WHO, 2009).

Medication reconciliation is a process designed to prevent discrepancies between home drug treatments and those conducted in hospital settings. It can reduce errors and adverse events related to the prescription and use of medications. The purpose of this practice is to avoid or minimize transcription errors in medical prescriptions, omission of doses, and duplication of therapy (Pippins et al., 2008). Discrepancies are the incompatibilities found between the list of medicines taken at home and the medicines prescribed in hospital settings, regarding the omission of doses, therapeutic duplication, and the non-prescription of medications used at home (Pippins et al., 2008; Sheikhtaheri, 2014). Discrepancies can be intentional or unintentional, and the unintentional ones are the focus of patient safety programs (Runciman et al., 2009; Rodriguez et al., 2016). Previous studies have found that the patient's age, the number of medications prescribed, and the use of drugs for the cardiovascular system are likely to increase the number of discrepancies found.
In addition, the small number of hospital pharmacists and the lack of training and knowledge about the medication reconciliation service among pharmacists and other health team members also contribute to poor medication reconciliation (Huynh et al., 2016; Andreoli et al., 2014). Prescription discrepancies have been studied for their effects (WHO, 2015; Pippins et al., 2008). Paying special attention to them in the pharmacotherapeutic treatment of hospitalized patients, and thereby improving health care quality and safety, benefits not only the patient but also the hospital and the health care system (Pippins et al., 2008). Against this background, the aim of this study was to estimate the prevalence of discrepancies between continuous-use home medications and those prescribed for the patient in the hospital, and to evaluate the factors associated with these errors in a general hospital located in southern Santa Catarina, Brazil.

METHOD

This patient safety study was carried out in a 400-bed hospital located in southern Santa Catarina, Brazil. Patients were surveyed in the medical clinic sector, which has 30 hospital beds and 100 hospitalizations per month on average. The study included adult patients of both genders who had been hospitalized for at least 24 hours. The exclusion criteria encompassed subjects who had already been enrolled earlier, those who were discharged at the time of the interview or were undergoing procedures and exams, and those who did not agree to participate or were unaccompanied at the time of the visit and unable to respond to the survey questionnaire. The population comprised patients aged 18 years or over whose hospitalization was covered by the National Unified Health System (SUS), from October 2015 to June 2016.

For the sample calculation, an estimated population of 1,500 patients in the study period and a 60% incidence of discrepancies (WHO, 2009) were considered, with an increase of 20% for eventual losses or refusals, which resulted in a minimum sample size of 356 patients for a 95% confidence interval. The patients were randomly selected by a simple random sampling technique, using the patient's bed number. The patients were interviewed just once, when information about medications taken at home and drugs prescribed in the hospital setting was collected. Afterwards, the patient's chart was consulted to compare the prescription in the medical record with the information provided by the patient during the interview. Medication reconciliation was performed for each patient by the researchers, using a data collection form.

The sociodemographic profile (age, gender, and education) was reported by the patient. Age was calculated as the difference between the date of the interview and the patient's date of birth, counted in full years; gender was identified as male or female; and full years of school attendance were used to identify the education level. Clinical data, such as the reasons for hospitalization, length of hospital stay, transfer between hospital sectors, presence of comorbidities, prescribed medicines, and the outcome (discharge or death), were extracted from the medical diagnosis noted in the electronic medical record. The cause of hospitalization was determined by the codes assigned according to the International Classification of Diseases. The length of hospital stay was calculated as the difference, in days, between the date of the patient's discharge or death and the date of hospital admission.
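The minimum sample size quoted above can be reproduced with the standard single-proportion formula plus a finite population correction. In the sketch below, the 5% absolute precision is an assumption (the text does not state the precision used in OpenEpi); the population of 1,500, the 60% expected incidence, the 95% confidence level, and the 20% cushion come from the text.

import math

def sample_size(p, N, z=1.96, d=0.05, losses=0.20):
    n0 = z**2 * p * (1 - p) / d**2       # infinite-population size
    n = n0 / (1 + (n0 - 1) / N)          # finite population correction
    return math.ceil(n * (1 + losses))   # inflate for losses/refusals

print(sample_size(p=0.60, N=1500))  # -> 356, matching the text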
The medications used at home and in the hospital were classified according to the Anatomical Therapeutic Chemical (ATC) classification. The medical record of each patient was examined to identify which medications were used at home, how the information was obtained, and who had prescribed them. In this study, polypharmacy was defined as the simultaneous use of five or more medications at home (Gnjidic et al., 2012).

After listing all medications taken at home and those prescribed during hospitalization, a comparison between them was made, classifying the drugs based on Page et al. (2010) and Andreoli et al. (2014) as follows: 1) continuous-use home medications neither prescribed in the hospital nor taken by or administered to the patient; 2) non-prescribed home medications that were nonetheless taken by or administered to the patient; 3) home medications administered in duplicate (taken by the patient and also administered by the nursing service in the hospital); 4) medication omission (medications prescribed but not administered to the patient by the nursing service). A medication discrepancy consisted of a difference between the list of medicines taken at home and those prescribed for treatment in the hospital setting, classified into one of the four categories listed above (Paci et al., 2015). The active principle of the medicine was considered to define a discrepancy (dosage and medication schedule were not assessed). Medication discrepancy was categorized as follows: a) no discrepancy, when the full list of medicines used at home was included in the medical record; b) partial discrepancy, when the medical record did not contain one or more continuous-use medications reported by the patient; c) total discrepancy, when none of the medicines used at home were registered in the medical record.

The OpenEpi software, version 3.01, was used to calculate the sample size. The data collected were entered into Excel Workbook Gallery for Mac, 2011, version 14.6.6, and statistical analysis was performed using SPSS software v.21.0 (IBM, Armonk, New York, USA). The numerical variables were expressed as the mean or median and standard deviation (SD), and the nominal variables were presented as absolute and proportional values. The Kolmogorov-Smirnov test was used to verify the normality of the distribution of the quantitative variables. Student's t-test was used for mean comparison, and the non-parametric Wilcoxon-Mann-Whitney U test was used for variables with non-normal distributions. Prevalence ratios (PR) were calculated for the independent variables and the discrepancies found in the medication prescriptions, first in crude analysis and subsequently adjusted for potential confounding variables, using modified Poisson regression models. Confounding factors were selected among variables associated with medication discrepancy in the bivariate analysis (P-value < 0.20) or those described as such in the literature (hospital stay and number of home medications). A P-value < 0.05 was considered statistically significant.

This study was approved by the Unisul Research Ethics Committee (Opinion No. 1.207.715), Plataforma Brasil (CAAE 47597615.0.0000.5369), on August 31, 2015. Patients eligible for the study were only included after signing an informed consent form.
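A minimal sketch (assumed, not the authors' instrument) of the comparison logic behind the four categories above: each home medication, identified by active principle, is checked against what was prescribed, what the patient kept taking on their own, and what the nursing service administered.

def classify_home_meds(home, prescribed, self_taken, administered):
    # home: active principles used at home; prescribed/administered:
    # hospital lists; self_taken: drugs the patient kept taking alone.
    result = {}
    for drug in home:
        if drug not in prescribed and drug not in self_taken:
            result[drug] = "1) neither prescribed nor taken or administered"
        elif drug not in prescribed:
            result[drug] = "2) not prescribed, but taken by the patient"
        elif drug in self_taken and drug in administered:
            result[drug] = "3) administered in duplicate"
        elif drug not in administered:
            result[drug] = "4) omission (prescribed, not administered)"
        else:
            result[drug] = "no discrepancy"
    return result

home = {"metformin", "losartan", "levothyroxine"}
print(classify_home_meds(home,
                         prescribed={"losartan", "levothyroxine"},
                         self_taken={"metformin"},
                         administered={"losartan"}))
# -> metformin: category 2, levothyroxine: category 4, losartan: none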
RESULTS

During the study period, 448 patients were recruited, of whom 54 were excluded for the following reasons: patients in isolation rooms, those unable to answer the questions, and those who were not in the room at the time of the interview, resulting in a final sample of 394 patients included in the study. The participants' ages ranged from 18 to 97 years, with a mean of 61.6 ± 15.1 (median 62) years. The average length of hospital stay was 13.0 days (SD ± 11.3), with a median of 10 days, ranging from 2 to 64 days. Circulatory system disorders were the leading cause of hospitalization, according to the International Classification of Diseases (ICD-10). Table I presents a description of the sociodemographic and clinical characteristics of the study participants admitted to the hospital and their association with drug discrepancies.

Of the 394 surveyed patients, 388 (98.5%) took continuous-use medications at home, which corresponded to an average of 5.5 medications per patient (ranging from 1 to 18 medicines), whereas in the hospital setting the average was 9.2 medications per patient (ranging from 1 to 31 medicines). By checking the list of medications taken at home against those prescribed during the hospital stay, we found that 956 (43.8%) of the drugs were present in the medical prescription, 194 (8.9%) had been replaced by drugs of the same class, and 1,031 (47.3%) were neither prescribed nor replaced and were therefore considered omissions. Metformin was the most frequently omitted drug (18.0%) in the patients' prescriptions. Drug discrepancy was associated with a higher number of medications consumed at home (6.1 ± 3.4) compared with patients who did not present drug discrepancies (3.7 ± 2.8), p < 0.001. Patients with prescriptions containing discrepancies totaled 316 (80.2%); of these, 72 presented prescriptions with more than one type of discrepancy, as illustrated in Figure 1. The classes of medications most commonly used by patients at home and during the hospital stay are described in Table II, classified according to the anatomical group (1st level) of the ATC classification. Adjusted analysis revealed that vascular disease, the number of medications taken at home, and a partial history of the medications in the medical record were independent factors associated with the occurrence of medication discrepancies, as shown in Table III. Some prescriptions presented more than one discrepancy.

DISCUSSION

This study focused primarily on the assessment of drug discrepancies in the prescriptions of hospitalized patients. There was a high frequency (80.2%) of drug discrepancies compared with other studies (Zoni et al., 2012; Bishop et al., 2015), in which frequencies ranged from 19.1% to 51%. More recently, a study conducted by Armor et al. (2014) found a frequency of 81% of drug discrepancies. It should be noted that the surveyed hospital had neither a drug reconciliation service nor clinical pharmacists. The variability in percentages may depend on the characteristics of the hospital's health care and on the way data on medicines taken at home are collected. In this study, household polypharmacy was considered a factor associated with the presence of medication discrepancy or error, which can be attributed to the higher mean age of the study participants compared with other age groups.
In the present study, the numbers of drugs used at home and in the hospital can be classified as polypharmacy, since the average number of drugs was 5.5 and 9.2 per patient, respectively. Polypharmacy may lead to the appearance of iatrogenic diseases, which are responsible for increased morbidity and mortality, a large number of hospitalizations, and high costs to health care systems (Armor et al., 2014; Ferreira, Rodrigues, 2012). Patients using various medications are usually more vulnerable to medication errors, with a large proportion of elderly people presenting more than one comorbidity and/or chronic disease (Paci et al., 2015). This association may occur because of the pathophysiological process of aging, leading to greater intake of medications, an iatrogenic cascade, and polypharmacy (Zoni et al., 2012).

The patient's adherence to and understanding of the pharmacological treatment should also be considered a challenge in the reconciliation process, since patients or family members often have difficulty naming the medications used. In Brazil, this is even more aggravating because there is no computerized dispensing system or adequate communication between commercial establishments (pharmacies), public health services, and hospitals. The same occurs with communication between different health professionals (prescribers and pharmacists) (Ferreira, Rodrigues, 2012; Coutinho et al., 1999).

Diseases of the circulatory system were the most prevalent illnesses among the hospitalized patients, which can be attributed to the fact that most participants were of advanced age (Saint-German et al., 2016). Hypertension, hypercholesterolemia, and diabetes mellitus (DM) are the major factors in the onset of cardiovascular diseases and polypharmacy, requiring a large amount of medication to treat these disorders, which may increase the rate of hospital admissions due to these causes (Afaras et al., 2016; Charlesworth et al., 2015). In the general population, the prevalence rates of hypertension and DM were 21.4% and 6.2%, respectively (Pesquisa Nacional de Saúde, 2013).

The adjusted analysis revealed that vascular disease was an independent factor associated with the occurrence of drug discrepancies. Andreoli et al. (2014) corroborated this finding, since their study showed that the patient's age, the number of medications prescribed, and the use of drugs for the cardiovascular system may increase the number of discrepancies.

Drug reconciliation can classify discrepancies in different ways, using varied instruments according to the clinical pharmacist's needs. Medical prescriptions of home medicines, medication packages, and lists prepared by the patients or their caregivers can be used to collect the data. In this study, the majority of the participants (76.8%) did not bring to the hospital any documentation describing their continuous-use medications, so memory was the most common source of information. Memory was also the most common resource (94.1%) used in an emergency service in a study by Cater et al. (2015). Coutinho et al. (1999) have argued that one can rely on information provided by the patient regarding medications used within two weeks prior to the interview, and whenever the patient's cognitive functions do not allow it, that information can be obtained from their caregivers (Cater et al., 2015); in both cases, however, the information must be confirmed by the pharmacist as soon as possible.
The prescriber's lack of confidence in patients' oral information regarding medication use may also be one of the causes of discrepancies. Assessment of home medications revealed that a high percentage of medicines taken at home were omitted, and a few were replaced by drugs of the same class. This replacement may have occurred because of the standardization of medications adopted by the hospital, which led physicians to prescribe active principles available at the hospital pharmacy. Differences between medications taken at home and those prescribed in the hospital that caused no changes in the patient's clinical condition may still be considered discrepancies. It should be noted that all of the omitted drugs were standardized in the surveyed hospital. In a study conducted by Cater et al. (2015), the percentage of home medicines that were prescribed (33.9%) was very close to that found in the present study.

In this study, the five most commonly omitted drugs were metformin, followed by losartan, acetylsalicylic acid, hydrochlorothiazide, and levothyroxine, all of which are used to treat chronic diseases. It was impossible to identify whether the medication discrepancies were intentional, even though abruptly interrupting the treatment of a disease should be very rare, especially without documented reasoning. There is no evidence that abrupt withdrawal of metformin, levothyroxine, or losartan causes a rebound effect after sudden discontinuation. However, omission of these medications may lead to hyperglycemia, hypothyroidism, and increased blood pressure levels, respectively, posing risks to patients' health. Sudden withdrawal of acetylsalicylic acid may be associated with traditional cardiovascular risk factors and thrombosis. Sudden withdrawal of hydrochlorothiazide from patients with normal sodium intake may cause rebound retention of sodium and water, leading to edema through a compensatory mechanism (Medication Reconciliation, 2009; Kalb et al., 2009).

Different studies (Comino et al., 2015; Oliveira Filho et al., 2014; Beckett, Crank, Wehmeyer, 2012; Hellström et al., 2012) have classified medication discrepancies, and the most frequent discrepancy was non-prescribed drugs taken at home, which corroborates the findings of this study. Discrepancies involving non-prescribed medications taken by the patients or administered to them by their caregivers entail additional problems, such as forgetfulness, inadequate drug storage and poor administration, lack of knowledge of the practice by the health team, and, consequently, poor assessment of the interactions with the medications prescribed in the hospital. Therapeutic duplication occurred in 12% of the prescriptions, i.e., 48 patients reported that they were taking their home medicines, which were the same drugs (or drugs of the same class) being prescribed and administered in the hospital. Therapeutic duplication may increase the risk of adverse reactions and interactions. In particular, benzodiazepine-related duplication can lead to excessive sedation, risk of falls and fractures, mental confusion, and symptoms of benzodiazepine poisoning (Johnson, Streltzer, 2013).

The use of medicines can be complex and vulnerable to iatrogenic events, especially in hospitals, where a large team is involved in patient treatment. Medication use encompasses important steps, such as prescription, communication, dispensing, administration, and clinical follow-up (Johnson, Streltzer, 2013; Doerper et al., 2015).
Poor or absent communication, whether written or oral, underlies many of the errors occurring within hospitals; therefore, the lack of a documented drug history may result in potentially harmful discrepancies, leading to imprecise and sometimes fatal treatment (Beckett, Crank, Wehmeyer, 2012).

The data presented here should be viewed with relative caution, owing to the short time available to classify the discrepancies found. There are also some limitations related to intentionality and to the epidemiological design of the study. Further longitudinal studies should be carried out to evaluate the clinical implications of drug discrepancies. Impact studies of the drug reconciliation service are also needed to strengthen the implementation of this practice in hospital settings. Despite these limitations, the study documents an important problem that may have negative impacts on the patient, the health team, and the institution. Reconciliation may also be applied at discharge from hospital, for continuity of treatment, and may provide guidance on discharge prescriptions, which may contain compounded medications, lack important information or the duration of treatment, or use abbreviations or illegible handwriting (van Melle et al., 2016). Further studies on medication reconciliation at hospital discharge should be conducted.

CONCLUSION

The findings of this study allowed us to conclude that there was a high rate of patients with prescription medication discrepancies (80.2%). Assessment of medication use revealed that vascular disease, the number of continuous-use medications at home, and poor documentation in the medical record were independent factors associated with medication discrepancies. Medication reconciliation is crucial in reducing these errors. Pharmacists can help reduce these medication-related errors and the associated risks and complications.
2021-10-19T15:13:04.965Z
2021-09-27T00:00:00.000
{ "year": 2021, "sha1": "1d0fef1ba3d4094c05987bc796f2b79e63cfeec5", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/bjps/a/tMRB4nmnz5Ny9P9Nw3HYZKP/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7a1bf097f21a71d6863a15bf3cb269e1931f9e9e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
56398857
pes2o/s2orc
v3-fos-license
Integrin Alpha-V Beta-3/Matrix Metalloproteinase-2 (MMP-2) Cross-Talk

The present study aimed to detect the comparative expression of integrin αVβ3 and its involvement in the expression and activation of matrix metalloproteinase-2 (MMP-2) in 25 malignant human breast tumors and adjacent normal breast tissues from different clinical TNM stages (DCIS to T4) of the disease, together with the possible involvement of known regulators of MMP-2 such as TIMP-2, MT1-MMP, and EMMPRIN. Integrin αVβ3 was more highly expressed in tumors than in adjacent normal breast tissues. Pro-MMP-2 (72 kDa) was mainly expressed in adjacent normal tissues compared with tumors, whereas the mature forms of MMP-2 (68 kDa and 64 kDa) were found only in tumors. Appreciable expression of TIMP-2 and induction of MT1-MMP and EMMPRIN in T2-T4 stages suggested their possible role in MMP-2 activation. Overexpression of integrin αVβ3 in tumors relative to adjacent normal breast tissues was an indication of cancer progression involving integrin signaling. We conclude that the co-precipitation of MMP-2 with αvβ3 by anti-αv antibody is a strong indication that integrin αvβ3 is a surface receptor for MMP-2, and that the αvβ3-MMP-2 complex on the surface of tumor cells may play a very important role in determining the invasive properties and malignant behavior of tumor tissues. The positive expression of TIMP-2, the endogenous inhibitor of MMP-2, may have an appreciable role in the activation of this protease and in the risk of malignancy at advanced stages of the disease. The enhanced expression of MT1-MMP and EMMPRIN suggested a role for these factors in gelatinase regulation; however, the exact mechanism(s) remains to be investigated. Finally, evaluation of integrin αVβ3-associated MMP-2 expression and activity may add valuable information and can possibly serve as a therapeutic target. The clinical exploitation of integrins will provide oncologists with novel therapeutic strategies for the treatment of malignancy in breast cancer.

Corresponding authors: H. Sil, A. Chatterjee.
Introduction

Integrins are transmembrane heterodimeric proteins consisting of noncovalently associated α (120-180 kDa) and β (90-110 kDa) subunits [1] [2]. Integrins mediate cellular adhesion to, and migration on, the ECM proteins found in intercellular spaces and basement membranes [3]. They also regulate cellular entry into and withdrawal from the cell cycle [4] [5]. Ligation of integrins by their ECM ligands induces a cascade of intracellular signals [6] that includes tyrosine phosphorylation of focal adhesion kinase (FAK), an increase in intracellular Ca2+ levels, inositol lipid synthesis, synthesis of cyclins [5], and expression of immediate early genes [6]. In contrast, prevention of integrin-ligand interaction suppresses cellular growth and induces apoptotic cell death [7] [8]. Integrins on tumor cells are now thought to play an intricate role in the progression of solid tumors. Integrin expression is altered in malignant cells compared with their normal counterparts, and this altered expression appears to be involved in several aspects of tumor growth, invasion, and metastasis [9]. One of the most studied integrin receptors in tumor progression and metastasis is αvβ3, a major receptor for vitronectin that plays an important role in cancer growth. In murine cells, the interaction of vitronectin with its receptor provides a co-mitogenic signal [10]. Malignant melanomas express these integrin receptors as they enter the vertical growth phase [11]. Upregulation of this receptor in endothelial cells facilitates their interaction with lung carcinoma cells, indicating a potential role for this receptor in tumor cell extravasation [12]. Tumorigenicity in athymic nude mice correlated strongly with αvβ3 expression by tumor cells [13].

The cellular invasion process involves the production of proteolytic enzymes capable of degrading components of the extracellular matrix and basement membrane. One of the major groups of these enzymes is the matrix metalloproteinase (MMP) family. The prognostic value of MMPs has been investigated in several malignancies, and the role of MMP-2 in the invasive activity of many cell types is well established [14]-[16]. In particular, the molecular interaction between MMP-2 and integrin αvβ3, via the hemopexin C (PEX) domain, was shown to be essential for efficient cell invasion and angiogenesis [17] [18]. MMP-2 overexpression has been correlated with poor survival in breast carcinoma [19] [20], especially in node-positive patients. Studies reveal that the αvβ3 integrin receptor is expressed by various cancer types. In malignant melanoma, it has been shown to modulate the expression of proteolytic enzymes by the tumor cells: stimulating antibodies to αvβ3 in a melanoma cell line caused increased expression of MMP-2 with an enhanced ability to invade basement membrane [21]. Expression of αvβ3 on cultured melanoma cells enabled their binding to MMP-2 in a proteolytically active form, facilitating cell-mediated collagen degradation and thereby directed cellular invasion [17]. Integrin αvβ3 has been reported to be strongly expressed in primary invasive breast carcinomas and abundant in all breast cancer cells metastatic to bone. In situ hybridization also revealed high levels of αvβ3 mRNA expression and suggested that integrin αvβ3 is an endothelial cell marker with significant prognostic value and potential usefulness as a target for specific anti-angiogenic therapy. It has also been demonstrated that tumor-specific αvβ3 contributes to
spontaneous metastasis of breast tumors to bone, suggesting a critical role for this receptor in mediating chemotactic and haptotactic migration towards bone factors. Our present study aimed to detect the comparative expression and activity of integrin αvβ3-associated MMP-2 in breast tumor tissue and adjacent normal breast tissue, and the possible involvement of MT1-MMP, EMMPRIN, and TIMP-2 in the modulation of MMP-2 activity in breast cancer.

Patients

The present study involved 25 breast cancer patients diagnosed among women who were referred to Chittaranjan National Cancer Institute, India, because of a clinical breast lump, a suspicious mammographic finding, or a breast symptom (e.g., pain, nipple discharge) between 2008 and 2010. Women willing to participate in the project were interviewed and examined by a trained study nurse before any diagnostic procedures. The participation rate of patients with diagnosed breast cancer was 98%. Thus, the patient series represents unselected, typical breast cancer cases of different stages from the institutional hospital catchment area. Patients were offered treatment according to the stage of the disease: either surgery followed by chemotherapy with or without radiotherapy, or neoadjuvant chemotherapy followed by surgery and then completion chemotherapy and radiation, depending on the mode of surgery, the patient's menopausal status, and the stage of the disease, according to the national guidelines. In brief, postoperative radiotherapy was given to all patients treated with breast-conserving surgery. Hormonal therapy was offered to ER- or PR-positive patients with axillary node-positive (pN+) status or T3 and T4 tumors, irrespective of the mode of surgery. All premenopausal patients were treated with tamoxifen for 5 years. Postmenopausal patients were either treated for 2-3 years with tamoxifen followed by 2-3 years of an aromatase inhibitor, or with an aromatase inhibitor for 5 years. Patients with pN+ status, and some with axillary node-negative (pN-) status presenting with other adverse prognostic factors such as estrogen receptor (ER)/progesterone receptor (PR) negativity or a poorly differentiated tumor, were given adjuvant chemotherapy (FEC/FAC: cyclophosphamide, anthracycline, taxanes, methotrexate, and 5-fluorouracil) for six cycles. Stage was assessed using the TNM classification. Patients with noninvasive carcinomas, a previous history of breast cancer, metastatic disease (stage IV), or insufficient tumor material were excluded from the present study. Thus, 81 patients with sufficient primary tumor material and complete clinical histories were available for the present study. The mean age of the patients was 59.2 years (median 56.8 years; range 23.3-91.6 years). The mean follow-up time was 55.0 months (median 57.5 months; range 1.2-115.1 months). The clinicopathological data of the patients are summarized in Table 1.

Methods

Collection of tissue samples: Tissues from tumors and the respective normal breast tissues of the same patients were collected from the operating theater during surgery. Tissues were stored at -80°C and used for further experiments.
Immunoprecipitation: Tumor tissue samples and the respective normal breast tissues of the same patients were collected, homogenized, and extracted with tissue extraction buffer (Tris 50 mM, NaCl 150 mM, NP-40 1%, protease inhibitor cocktail, pH adjusted to 7.5), and the protein content of the extracts was estimated by Lowry's method. Equal amounts of protein (100 µg each) of tissue lysate were pre-cleared with Gelatin Sepharose 4B beads with shaking for 1 hour at 4°C and then subjected to co-immunoprecipitation with anti-αv monoclonal antibody (1 µg/mL), shaking overnight at 4°C. Antigen-antibody complexes were then bound to Gelatin Sepharose 4B beads (Roche). The beads were washed three times with Tris-buffered saline with 0.02% Tween-20 (TBST) and suspended in 50 µL of 1× sample buffer (0.075 g Tris, 0.2 g SDS in 10 mL water, pH 6.8) for 30 min at 37°C. The extract was then subjected to zymography.

Gelatin Zymography: Equal amounts of protein (100 µg each) of tissue lysate were taken. The gelatinases were separated from the tissue extract using Gelatin Sepharose 4B beads with shaking for 2 hours at 4°C. The beads were washed three times with TBST and suspended in 50 µL of 1× sample buffer for 30 min at 37°C. The extract was then subjected to zymography on 7.5% SDS-PAGE (sodium dodecyl sulfate polyacrylamide gel electrophoresis) co-polymerized with 0.1% gelatin. The gel was washed in 2.5% Triton X-100 for 30 min to remove SDS and was then incubated overnight in reaction buffer (50 mM Tris-HCl pH 7, 4.5 mM CaCl2, 0.2 M NaCl). After incubation, the gel was stained with 0.5% Coomassie blue in 30% methanol and 10% glacial acetic acid. The bands were visualized by destaining the gel with 30% methanol and 10% glacial acetic acid.

Immunoblot assay: The tissues were collected and extracted with cell extraction buffer (Tris 37.7 mM, NaCl 75 mM, Triton X-100 0.5%, protease inhibitor cocktail, pH adjusted to 7.5), and the protein content of the extracts was estimated by Lowry's method. Equal amounts of protein (50 µg each) were taken and co-immunoprecipitated.

Quantification of the results: Bands from zymography, western blots, and RT-PCR were quantified using Image J Launcher (version 1.4.3.67).

Statistical Analyses
The statistical analyses were carried out using the Epi Info™ 3.5.3 software of the Centers for Disease Control and Prevention (CDC, USA) for Windows. The associations between MMP-2 expression and clinicopathological parameters were tested. The univariate analyses were performed using the test of proportion to find the association of MMP-2 with parameters including age, menopausal status, lymph node involvement, ER, PR, and Her-2/neu status, and stage of the disease. Values of the parameters were expressed as mean ± s.e., and p values less than 0.05 were considered statistically significant.
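For illustration, the univariate test of proportion described above can be sketched as a standard two-proportion z-test; the following minimal Python sketch uses invented counts (e.g., MMP-2-positive tumors in pN+ vs. pN− patients) that do not reproduce the study data, and the function name is our own.

from math import sqrt, erf

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                      # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

z, p = two_proportion_z_test(x1=14, n1=16, x2=4, n2=9)   # hypothetical counts
print(f"z = {z:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")

For groups as small as in this example, an exact test (e.g., Fisher's exact test) would be preferable to the normal approximation.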
Integrin αvβ3 Is Highly Expressed in Breast Cancer Tissue
Comparative western blot analysis of malignant (TNM stage T2-T4) breast tumor tissue and adjacent normal breast tissue lysates clearly shows that integrin αvβ3 surface receptors are overexpressed in tumor tissue compared to adjacent normal tissue (Figure 1). The tissues were collected and extracted with NP-40 extraction buffer (Tris 50 mM, NaCl 150 mM, NP-40 1%, protease inhibitor cocktail, pH adjusted to 7.5), and the protein content of the extracts was estimated by Lowry's method. Equal amounts of protein (100 µg each) were taken, pre-cleared, and then subjected to co-immunoprecipitation following the method described in Methods & Materials. Bands were visualized using SuperSignal West Femto as substrate.

Breast Cancer Tissue Has More αvβ3-Associated MMP-2 Activity
Comparative zymographic analysis of malignant (TNM stage T2-T4) breast tumor tissue and adjacent normal breast tissue lysates, co-immunoprecipitated with anti-αv monoclonal antibody, clearly shows that the gelatinolytic activity of pro-MMP-2 (72 kDa) was observed mainly in adjacent normal breast tissue lysate (p = 0.00001), whereas in most of the tumor samples, subsequent proteolytic activation of this protease was also observed in the form of gelatinolytic bands of activated MMP-2 (68 kDa and 64 kDa). There was, however, no significant difference in MMP-2 activity between TNM stage II and III (T2-T4) tumors (Figure 2). Equal amounts of protein (100 µg each) from tumor samples (lane T) and the respective adjacent normal breast tissues (lane N) of the same patients were taken, pre-cleared with Gelatin Sepharose 4B beads with shaking for 1 hour at 4°C, and then subjected to co-immunoprecipitation with anti-αv monoclonal antibody (1 µg/mL) following the methods described in the text. The accompanying graph represents the comparative densitometric/quantitative analysis of the band intensities using Image J Launcher (version 1.4.3.67).

Breast Cancer Is Associated with Enhanced Expression of Integrin αvβ3-Associated MMP-2
Comparative western blot analysis of malignant (TNM stage T2-T4) breast tumor tissue and adjacent normal breast tissue lysates, co-immunoprecipitated with anti-αv monoclonal antibody, clearly shows that the total protein expression of pro-MMP-2 (72 kDa) was appreciably increased in tumor tissue lysates compared to adjacent normal breast tissue (Figure 3). The tissues were collected and extracted with NP-40 extraction buffer, and the protein content of the extracts was estimated by Lowry's method. Equal amounts of protein (100 µg each) were pre-cleared with Gelatin Sepharose 4B beads with shaking for 1 hour at 4°C and then subjected to co-immunoprecipitation with anti-αv monoclonal antibody (1 µg/mL), shaking overnight at 4°C, following the methods described in the text. Bands were visualized using SuperSignal West Femto as substrate. The accompanying graph represents the comparative densitometric/quantitative analysis of the band intensities using Image J Launcher (version 1.4.3.67).
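The densitometric comparison reported above can be summarized as in the following sketch; the band intensities are invented placeholders standing in for values exported from Image J Launcher, and the summary statistic matches the mean ± s.e. reporting used in this study.

# Paired band intensities (arbitrary densitometry units) for tumor (T) and
# adjacent normal (N) lanes; hypothetical placeholder values.
tumor  = [1820, 2105, 1750, 1990, 2230]
normal = [ 610,  740,  580,  820,  690]

fold_changes = [t / n for t, n in zip(tumor, normal)]
mean_fc = sum(fold_changes) / len(fold_changes)
se_fc = (sum((f - mean_fc) ** 2 for f in fold_changes) /
         (len(fold_changes) - 1)) ** 0.5 / len(fold_changes) ** 0.5

print(f"mean T/N fold change = {mean_fc:.2f} +/- {se_fc:.2f} (mean +/- s.e.)")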
Discussion
In this communication we have tried to elucidate the role of the αvβ3 integrin receptor in the binding and activation of MMP-2 in 25 screened invasive breast cancer patients. The most pronounced difference was seen in the zymographic analysis, where we found that the gelatinolytic activity of the activated forms of MMP-2 (i.e., the 68 kDa and 64 kDa species) was more frequent in tumors than the precursor form and more intense than that corresponding to pro-gelatinase A (72 kDa). This fully active form of MMP-2 was not observed in adjacent normal breast tissue samples. Our findings are in agreement with other studies suggesting that the activation of MMP-2 (gelatinase A) is a more common event in aggressive breast cancer and that integrin αvβ3 is responsible for this proteolytic activation of MMP-2 [30]-[33]. We observed pro-gelatinase A in very few tumor samples. There is strong evidence that integrin-ligand interaction initiates a cascade of signaling reactions that induces the release of different proteases, which dissolve the basement membrane and help in cell invasion. The enhanced expression of MMP-2 upon αvβ3 ligation was demonstrated by Bufetti et al. [34]. As the cancer progressed from DCIS to T4 stage, there was a tendency towards an increment of the gelatinolytic activity of MMP-2, which became significantly higher with subsequent activation of this protease in comparison to non-malignant breast tissue, indicating the involvement of the surface receptor integrin αvβ3 in different stages of tumor progression and metastasis. MMP activity is tightly regulated at several levels, from transcription to proenzyme activation and, finally, inhibition by TIMPs. Most MMPs are secreted in a proenzyme form that is later activated by cleavage of the amino-terminal 80 amino acids. This processing to an active form is accomplished by one of the five membrane-type (MT) MMPs residing on the cell surface, by activated protein C, or by the plasminogen activator-plasminogen cascade [35]-[37]. A final step in regulating active MMPs is inhibition by small inhibitory proteins called TIMPs. MMP-2 is secreted in association with TIMP-2, which mediates cell surface binding of the latent complex [38]. Several proteins (MMP-2, MT1-MMP, TIMP-2, and integrin αvβ3) were shown to colocalize in caveolae in human endothelial cells [39]. In the present study, positive expression of TIMP-2 was observed in tumors and also in normal tissues by means of ELISA and RT-PCR in both stage II and stage III cases. In line with our results, Garbett et al. (2000) [40] showed increased expression of TIMP-2 in invasive breast cancer. In a previous study, Ree et al. (1997) correlated increased amounts of TIMP-1 and TIMP-2 with distant metastasis [41]. Thus, TIMP-2 may also play a crucial role in the activation of MMP-2 in advanced stages of human breast cancer (Table 2).

The statistical analyses were carried out using the Epi Info™ 3.5.3 software of the Centers for Disease Control and Prevention (CDC, USA) for Windows. The associations between MMP-2 expression and the regulating parameters of MMP-2 were tested. The univariate analyses were performed using chi-square analysis, and the independent prognostic value of the variables was further examined with their corresponding probability values.

Recently, the expression of MT1-MMP in various human cancer tissues has been associated with pro-MMP-2 activation. Deryugina et al.
(2001) demonstrated that the MT1-MMP:TIMP-2:MMP-2:αvβ3 complex promotes the maturation of MMP-2 in carcinoma cells [42]. In our present study, increased expression of MT1-MMP was observed in tumors compared to adjacent normal tissue, confirming its possible role in the activation of MMP-2 in breast cancer. Extracellular matrix metalloproteinase inducer (EMMPRIN, also known as CD147) is a 58 kDa glycoprotein, originally purified from the plasma membrane of cancer cells and designated tumor collagenase stimulating factor (TCSF) because of its ability to stimulate collagenase-1 (MMP-1) synthesis by tumor stromal fibroblast cells [43]. In our present study, increased expression of EMMPRIN was observed in tumor tissue compared to adjacent normal tissue, indicating its role in MMP-2 activation. VEGF has been identified as a predominant regulator of tumor angiogenesis. Expression of the VEGF ligand has been observed across a range of tumor types and has been widely correlated with tumor development and/or poor prognosis [44]-[48]. In ductal carcinoma in situ, increased pathological aggressiveness of lesions was associated with increased VEGF protein levels [49]. VEGF protein content is also increased in invasive breast cancer, and this overexpression has prognostic significance in patients with either node-positive or node-negative disease for both relapse-free and overall survival. In patients with human epidermal growth factor receptor (HER)-2-overexpressing tumours, a higher VEGF content was demonstrated, confirming that VEGF is a downstream target of HER-2 activation. It has been observed that overexpression of VEGF significantly correlates with MMP-2 expression and activity in various cancer types, including breast cancer. In our present study, enhanced expression of VEGF protein was observed in tumors compared to adjacent normal tissues, indicating its correlation with enhanced MMP-2 expression. In line with our result, Li Hao et al. (2007) showed that VEGF and MMP-2 co-expression is associated with primary tumor progression, histological grade, and lymph node status in breast cancer patients. In a previous study from our lab, we investigated whether αvβ3 and MMP-2 are associated on the membranes of a human cervical cell line, SiHa, and the possible involvement of MT1-MMP and TIMP-2 in the modulation of MMP-2 activity. SiHa cells expressed all the molecules that are reported to form a complex to activate pro-MMP-2. Active MMP-2 associated with αvβ3 may regulate matrix degradation and thereby modulate the directed motility of SiHa cells. In the present study we confirm that breast cancer tissue shows more αvβ3-associated MMP-2 compared to matched control tissue. The components of the MMP-2 activation complex (MT1-MMP, EMMPRIN, TIMP-2) and VEGF are all present in increased amounts in cancer tissues compared to the matched controls. These findings may help to understand the role of αvβ3 integrin-associated MMP-2 in breast cancer progression and may have potential use in the clinical management of breast cancer.

Figure 1. Expression of integrin αv in tumor and adjacent normal breast tissue by immunoblot.
Figure 2. Zymographic analysis of the gelatinolytic activity of αvβ3-associated MMP-2 in breast tumor and adjacent normal tissue.
Table 1. Clinicopathological data of the patients.
Table 2. Gelatinase expression and statistically significant clinicopathological variables of breast cancer patients in univariate analysis.
2018-12-18T06:56:42.079Z
2015-08-24T00:00:00.000
{ "year": 2015, "sha1": "e30190253ce5769c38ea5e9021f27db41e084c4c", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=59235", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e30190253ce5769c38ea5e9021f27db41e084c4c", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221235774
pes2o/s2orc
v3-fos-license
Synthesis of Green-Emitting Gd2O2S:Pr3+ Phosphor Nanoparticles and Fabrication of Translucent Gd2O2S:Pr3+ Scintillation Ceramics

A translucent Gd2O2S:Pr ceramic scintillator with an in-line transmittance of ~31% at 512 nm was successfully fabricated by argon-controlled sintering. The starting precipitation precursor was obtained by a chemical precipitation route at 80 °C using ammonia solution as the precipitant, followed by reduction at 1000 °C under flowing hydrogen to produce a sphere-like Gd2O2S:Pr powder with an average particle size of ~95 nm. The Gd2O2S:Pr phosphor particles exhibit the characteristic green emission from the 3P0,1→3H4 transitions of Pr3+ at 512 nm upon UV excitation into a broad excitation band at 285-335 nm arising from the 4f2→4f5d transition of Pr3+. Increasing Pr3+ concentrations induce redshifts of the two band centers of the 4f2→4f5d transition and the lattice absorption in the photoluminescence excitation spectra. The optimum concentration of Pr3+ is 0.5 at.%, and the luminescence quenching is dominated by exchange interaction. The X-ray excited luminescence spectrum of the Gd2O2S:Pr ceramic is similar to the photoluminescence behavior of its powder form. The phosphor powder and the ceramic scintillator have similar lifetimes of 2.93-2.99 µs, while the bulk material has a considerably higher external quantum efficiency (~37.8%) than the powder form (~27.2%).

Introduction
Scintillation materials convert the radiation of high-energy rays (X-rays or gamma rays) into visible light and are extensively applied in various fields such as safety inspection, high-energy physics, nuclear medicine, and industrial non-destructive testing [1-6]. Scintillators can be divided into solid, liquid, and gaseous states, among which solid inorganic scintillators, the most widely used materials, include single crystals and polycrystalline ceramics. A ceramic scintillator is superior to a single crystal due to its low cost, short production cycle, large-size production, as well as high dopant concentration with a homogeneous mixture at the molecular level [6-9]. Solid-state Gd2O2S (GOS) has excellent chemical and physical characteristics, such as a high melting point (2070 °C), high density (7.43 g/cm3), high X-ray attenuation coefficient (~52 cm−1 at 70 keV), wide band gap (4.6-4.8 eV), favorable chemical durability, low phonon energy, low crystal symmetry, and low toxicity, which make it a promising host material for luminescence and scintillation applications [10,11]. GOS-based phosphors have attracted considerable attention for decades. For instance, the green-emitting GOS:Tb phosphor exhibits high brightness and high luminous efficiency and is applied in television screens, cathode ray tubes, and X-ray intensifying screens [12,13]. GOS belongs to the hexagonal crystal structure (space group: P-3m1; lattice parameters: a = b = 0.3851 nm and c = 0.6664 nm) with trigonal symmetry [10]. Owing to its optically anisotropic crystal structure, highly transparent GOS ceramics are difficult to obtain.

Characterization
The precipitation precursor and its reduction product were characterized by field-emission scanning electron microscopy (FE-SEM; Model S-4800, Hitachi, Japan) and X-ray diffraction (XRD; Model D8 Focus, Bruker, Germany) using nickel-filtered CuKα as the incident radiation. The surface microstructure of the ceramic was observed on a desktop scanning electron microscope (SEM; Model EM-30plus, COXEM, Korea).
Photoluminescence (PL)/photoluminescence excitation (PLE) spectra, fluorescence lifetime, and quantum yield of the powder and ceramic were measured by a transient fluorescence spectrophotometer (Model FLS 980, Edinburgh Instruments Ltd., Livingston, UK) using a 450 W xenon lamp as the excitation source. The X-ray excited luminescence (XEL) spectrum of the ceramic was measured using a photomultiplier tube working on a Zolix Omni-λ300 monochromator at a voltage of −900 V, while the X-ray tube (copper target) was operated at a voltage of 69 kV and a current of 3 mA.

Results and Discussion
Figure 1A shows the XRD patterns of the precursor powder and reduction products. An amorphous precursor was obtained via the chemical precipitation route; the reaction temperature of 80 °C with an aging time of 1 h is not sufficient for its crystallization. Upon calcination at 1000 °C under a hydrogen atmosphere, all diffraction peaks of the products can be well indexed to hexagonal Gd2O2S (JCPDS No. 26-1422) without any impure phases, indicating that the precursor fully converts into the oxysulfide via thermal decomposition and reduction. Its schematic crystal structure, drawn with the VESTA 3D visualization software, is shown in Figure 1B. Only one coordination type for Gd3+ exists in the unit cell; that is, each Gd3+ is surrounded by four oxygen atoms and three sulfur atoms to form a seven-coordination polyhedron [39]. After Pr3+ incorporation, it substitutes the Gd3+ site because of their similar ionic radii (0.0990 nm for Pr3+ with CN = 6 and 0.0938 nm for Gd3+ with CN = 6). With increasing Pr3+ addition, remarkable shifts of the diffraction peaks were not observed, due to the low Pr3+ concentration and the small difference in ionic radii between Pr3+ and Gd3+. The average crystallite size (DXRD) can be calculated by the Debye-Scherrer formula:

DXRD = Kλ / (B cos θ), with B = (B0^2 − Bc^2)^(1/2), (1)

where B0 is the half-peak width, Bc is the correction factor caused by instrument broadening, θ is the angle of the diffraction peak, λ is the wavelength of the X-ray, and K is the shape factor (~0.89) [40].
The resulting DXRD values calculated from the (101) diffractions are ~35.7, 32.3, 58.0, and 47.7 nm for the samples doped with 0.1, 0.25, 0.5, and 0.75 at.% Pr3+, respectively.
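As an illustration of this estimate, the short Python sketch below evaluates the Scherrer formula; it assumes Cu Kα1 radiation, a shape factor K = 0.89, and a quadrature subtraction of the instrumental broadening, and the input peak parameters are illustrative rather than the measured values of this work.

from math import radians, cos, sqrt

WAVELENGTH_NM = 0.15406  # Cu K-alpha1 (assumed)
K = 0.89                 # Scherrer shape factor (assumed)

def scherrer_size(two_theta_deg, b0_deg, bc_deg):
    """Crystallite size in nm from peak position and half-peak widths (degrees)."""
    theta = radians(two_theta_deg / 2)
    b = radians(sqrt(b0_deg**2 - bc_deg**2))  # corrected FWHM in radians
    return K * WAVELENGTH_NM / (b * cos(theta))

# Illustrative numbers for a (101)-type reflection, not the measured data:
print(f"D_XRD = {scherrer_size(29.7, 0.25, 0.08):.1f} nm")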
The precipitation precursor shows a two-dimensional nanoplate-like morphology clustered into a loose honeycomb. Although the XRD analysis cannot identify the chemical composition of the precursor, it may be speculated to possess the layered Ln2(OH)4SO4·nH2O (Ln = Gd and Pr) structure consisting of the host layer of Ln-hydroxy polyhedra and the interlayer SO4^2− [25,33], because this compound not only exhibits the nanoplate morphology but also has the same Ln/S molar ratio as the (Gd,Pr)2O2S reduction product (Figure 1). The precipitation formation mechanism obeys the hard-soft acid-base principle, viz., a hard acid tends to react with a hard base, and likewise for a soft acid and a soft base (hard-hard or soft-soft combinations) [41]. The hard Lewis acid Gd3+ readily couples with the hard Lewis bases SO4^2− and OH− to form the basic sulfate. After being reduced at 1000 °C, the nanoplate-like precursor is completely cracked into a sphere-like oxysulfide powder with a statistical average particle size of ~95 nm.

The appearance of the GOS:Pr ceramic fabricated by argon-controlled sintering is shown in Figure 3A. Although the ceramic sample appears translucent to the naked eye, this is a relatively good optical quality for a ceramic with an optically anisotropic crystal structure. Under UV irradiation, the bulk emits strong green visible light derived from the 3P0,1→3H4 transitions of Pr3+. On the ceramic transmittance curve (Figure 3B), the absorption band at ~350 nm corresponds to the 4f2→4f5d transition of Pr3+, and the bands beyond 350 nm are assignable to the intra-4f2 transitions of Pr3+. The GOS:Pr bulk has an in-line transmittance of ~31% at 512 nm (the Pr3+ emission center). Figure 3C,D shows the surface microstructure and fracture surface of the sintered GOS:Pr ceramic. The specimen has a dense microstructure, and pores are only occasionally observed. Such a microstructure is desired for improved scintillation performance, since pores frequently induce scattering losses. Its relative density was measured to be ~99.2% by Archimedes' method. The statistical average grain size is ~5 µm, determined with the WinRoof image analysis software. The grain size is relatively uniform, and exaggerated grain growth is not observed. Observation of the fracture surface shows that the dense GOS:Pr ceramic fractures mainly intragranularly.
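The relative-density determination by Archimedes' method amounts to the simple calculation sketched below; the dry and suspended masses are hypothetical, and a water density near room temperature is assumed.

RHO_WATER = 0.9970          # g/cm^3 at ~25 °C (assumed)
RHO_THEORETICAL = 7.43      # g/cm^3, theoretical density of Gd2O2S

def archimedes_relative_density(m_dry_g, m_suspended_g):
    """Relative density (%) from dry mass and mass suspended in water."""
    volume = (m_dry_g - m_suspended_g) / RHO_WATER     # displaced water volume
    bulk_density = m_dry_g / volume
    return bulk_density / RHO_THEORETICAL * 100.0

# Hypothetical masses chosen only to illustrate the calculation:
print(f"relative density = {archimedes_relative_density(2.500, 2.162):.1f} %")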
Figure 4A presents the excitation spectra of (Gd1−xPrx)2O2S (x = 0.001-0.0075) phosphor particles obtained by monitoring the 512 nm emission of Pr3+ as a function of activator concentration. The main broad bands of all samples are located at 285-335 nm and are ascribed to the 4f2→4f5d transitions of Pr3+ [34]. With more Pr3+ addition, a ~2 nm redshift of the 4f2→4f5d transition band center can be observed. This phenomenon can be attributed to the lower electronegativity of Pr3+ (1.13) compared to Gd3+ (1.21), which allows easier electron transfer to the excited state of Pr3+. The weak bands at 245-285 nm are assignable to the absorption of the host lattice, since the corresponding bandgap of Gd2O2S was found to be ~4.6 eV [34]. Redshifted lattice-absorption bands (~6 nm) are also observed, because increasing Pr3+ incorporation leads to a smaller bandgap for the oxysulfide. Under 306 nm excitation, all the (Gd1−xPrx)2O2S phosphors exhibit the characteristic green emissions of Pr3+ (Figure 4B). That is, the strongest emission peak at ~512 nm originates from the 3P0→3H4 transition; the second strongest peak at ~502 nm derives from the 3P1→3H4 transition; and the weakest peak at ~547 nm arises from the 3P1→3H5 transition. Both the PLE and PL intensities of (Gd1−xPrx)2O2S phosphors increase by a factor of 2.6-2.8 with gradually rising Pr3+ concentrations from 0.1 to 0.5 at.%, as shown by the normalized curves (Figure 4C,D). A further increase in Pr3+ content (e.g., 0.75 at.%), however, causes a reduction in PL/PLE intensity due to luminescence quenching. Thus, the optimal Pr3+ concentration in the GOS matrix is determined to be 0.5 at.%. As a function of the Pr3+ content, the PLE and PL intensities have a highly consistent variation tendency.

Huang et al. proposed a theory to describe the relationship between PL intensity and activator concentration [42], which agrees with Dai and Meng et al. [43,44]. That is, the mutual interaction type of luminescence quenching in a solid phosphor can be determined by the following equation:

log(I/c) = f − (s/d) log(c),

where I is the emission intensity, c is the activator concentration, s is the index of the electric multipole, d is the sample dimensionality (d = 3 for energy transfer among activators inside particles), and f is a constant. Different s values correspond to different quenching mechanisms; namely, s values of 6, 8, and 10 relate to dipole-dipole, dipole-quadrupole, and quadrupole-quadrupole electric interactions, respectively, while s = 3 corresponds to exchange interaction. The plot of log(I/c) versus log(c) for the 512 nm emission of Pr3+ is shown in Figure 5, from which a linear slope (s/3) of ~0.7 is obtained; thus the s value is close to 3. It can be concluded that exchange interaction is mainly responsible for the luminescence quenching of GOS:Pr phosphors.
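The slope analysis of Figure 5 can be reproduced with a short least-squares fit, as sketched below; the intensity values are hypothetical placeholders, and the allowed s values {3, 6, 8, 10} follow the quenching-mechanism assignment given above.

from math import log10

c = [0.001, 0.0025, 0.005, 0.0075]        # Pr3+ molar fractions x
intensity = [1.0, 2.2, 2.7, 2.4]          # hypothetical relative PL intensities

xs = [log10(v) for v in c]
ys = [log10(i / v) for i, v in zip(intensity, c)]

n = len(xs)
slope = (n * sum(a * b for a, b in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(a * a for a in xs) - sum(xs) ** 2)

s = -3 * slope                             # slope = -(s/d) with d = 3
mechanisms = {3: "exchange", 6: "dipole-dipole",
              8: "dipole-quadrupole", 10: "quadrupole-quadrupole"}
s_nearest = min(mechanisms, key=lambda v: abs(v - s))
print(f"slope = {slope:.2f}, s = {s:.1f} -> nearest allowed s = {s_nearest} "
      f"({mechanisms[s_nearest]} interaction)")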
The exchange interaction processes are divided into radiative and non-radiative ones. The former include emission and radiative transfer, whilst the latter comprise internal relaxation and multipolar interactions between ions. The PL intensity rises linearly as the Pr3+ concentration increases up to 0.5 at.%, since more luminous centers are generated. However, a further increase in Pr3+ content (e.g., above 0.75 at.%) enhances the probability of energy transfer with cross-relaxation between Pr3+ activators due to the shortened distances.

Figure 6A shows the XEL spectrum of the translucent GOS:Pr ceramic scintillator made in this work. The ceramic material exhibits a strong green emission at 510-514 nm, arising from the 3P0→3H4 transition of Pr3+, which is similar to the PL behavior of its powder form. The mechanisms of PL and XEL differ substantially from each other. The PL primarily utilizes the 4f2→4f5d transition of Pr3+, whereas the XEL can be divided into the following three processes:

Gd2O2S + X-ray → e− + h+, (2)
Pr3+ + h+ → Pr4+, (3)
Pr4+ + e− → (Pr3+)* → Pr3+ + hν. (4)

That is, under high-energy ray excitation, many electron-hole pairs are created in the host lattice [45]. The Pr3+ cation traps a hole to form a transient Pr4+ state [15], followed by recombination with an electron to emit visible light. The 1931 CIE chromaticity coordinate of the GOS:Pr ceramic is (0.11, 0.73), which falls into the characteristic green region (Figure 6B).

Figure 7A exhibits the decay kinetics of the GOS:Pr phosphor powder and ceramic scintillator for the 512 nm emission of Pr3+ under 306 nm excitation. The fluorescence lifetime can be obtained by fitting the decay curve with the single exponential equation I(t) = A exp(−t/τ) + B, where τ is the fluorescence lifetime, t is the delay time, I(t) is the instantaneous emission intensity, and A and B are constants [45,46]. The fitting results yield τ = 2.93 ± 0.02 µs, A = 425.74 ± 1.09, and B = 10.12 ± 0.51 for the phosphor powder, and τ = 2.99 ± 0.03 µs, A = 264.73 ± 1.39, and B = 7.89 ± 0.38 for the scintillation ceramic. The fluorescence lifetimes determined in this work are in general agreement with the reported values of 2.4-3.0 µs for GOS:Pr,Ce ceramics [8,11,18,47].

Figure 7B exhibits the responses of the GOS:Pr phosphor powder and ceramic scintillator to 306 nm excitation using a white BaSO4 solid as a reference material. The external quantum efficiency (εex) of a sample can be deduced from the total number of emitted photons divided by the total number of excited photons as follows [45,48]:

εex = ∫ [P(λ)/hν] dλ / ∫ [E(λ)/hν] dλ, (5)

where P(λ)/hν and E(λ)/hν are the numbers of photons in the emission and excitation spectra of the samples, respectively. The reflection spectrum of the white standard is used for calibration. The external quantum efficiencies of the GOS:Pr powder and bulk are determined to be ~27.2% and ~37.8%, respectively. The considerably higher εex of the latter is attributed to the improved crystallinity and rapid grain growth during high-temperature sintering.
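The single-exponential lifetime fit described above can be sketched as follows; a synthetic decay trace with τ = 3.0 µs is generated for illustration, and SciPy's curve_fit is used as one possible fitting routine.

import numpy as np
from scipy.optimize import curve_fit

def decay(t, A, tau, B):
    """I(t) = A*exp(-t/tau) + B."""
    return A * np.exp(-t / tau) + B

t = np.linspace(0, 20, 400)                     # time in microseconds
rng = np.random.default_rng(0)
trace = decay(t, A=425.0, tau=3.0, B=10.0) + rng.normal(0, 2.0, t.size)

popt, pcov = curve_fit(decay, t, trace, p0=(400.0, 2.0, 5.0))
A_fit, tau_fit, B_fit = popt
tau_err = np.sqrt(np.diag(pcov))[1]
print(f"tau = {tau_fit:.2f} +/- {tau_err:.2f} us (A = {A_fit:.1f}, B = {B_fit:.1f})")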
Conclusions
A precipitation precursor with a two-dimensional nanoplate-like morphology was prepared at 80 °C using ammonia as the precipitant, followed by reduction at 1000 °C under a hydrogen atmosphere to yield a hexagonal Gd2O2S:Pr phosphor powder. After cold isostatic pressing and argon-controlled sintering, the obtained Gd2O2S:Pr scintillation ceramic has a dense microstructure with a relative density of ~99.2%.
The main conclusions from this work can be summarized as follows:
(1) The sphere-like Gd2O2S:Pr phosphor powder has an average particle size of ~95 nm and exhibits the characteristic green emission from the 3P0,1→3H4 transitions of Pr3+;
(2) The optimum concentration of Pr3+ is 0.5 at.%, and the luminescence quenching is dominated by exchange interaction;
(3) The Gd2O2S:Pr ceramic has an in-line transmittance of ~31% at 512 nm and shows strong green emission upon X-ray excitation, with a 1931 CIE chromaticity coordinate of (0.11, 0.73);
(4) The phosphor powder and the ceramic bulk have similar lifetimes of 2.93-2.99 µs;
(5) The Gd2O2S:Pr ceramic scintillator has a higher external quantum efficiency (~37.8%) than the powder form (~27.2%).
2020-08-23T13:06:03.702Z
2020-08-20T00:00:00.000
{ "year": 2020, "sha1": "e25c3cdbba89de046d9878cd433e6177514d1ea9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-4991/10/9/1639/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a26f79d75330a505af70d53c9f5499b5ef249e9c", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
247629917
pes2o/s2orc
v3-fos-license
Online interprofessional education materials through a community learning program during the COVID 19 pandemic in Chile

Sandra Oyarzo Torres, Mónica Espinoza Barrios*
Department of Education in Health Sciences and Undergraduate Management, Faculty of Medicine, University of Chile, Santiago, Chile

This article aims to share the online collaborative experience of interprofessional teamwork among healthcare undergraduate students based on community learning during the coronavirus disease 2019 (COVID-19) pandemic in Chile. This experience took place in 48 different communities in Chile from November 10, 2020 to January 12, 2021. It was a way of responding to the health education needs of the community when the entire Chilean population was in confinement. Students managed to adapt to the COVID-19 pandemic despite the challenges, including internet connectivity problems and the limited time available to do the work. The educational programs and videos shared in this article will be helpful for other interprofessional health educators implementing the same kind of program.

Keywords: Chile; COVID-19; Health education; Health occupations students; Interprofessional relations

Introduction
Background
Interprofessional courses represent an inter-curricular effort that benefits the students and academics involved in the health professions, and especially the people, families, and communities at the center of health care. An interprofessional program was created to support community residents who had no prompt access to healthcare teams, responding to the sanitary emergency caused by the COVID-19 pandemic. This program was carried out through online educational interventions in healthcare seeking to satisfy community needs. It also pursued the strengthening of students' skills for social commitment, respect for diversity, and a rights-based approach. It was conducted in collaboration with community leaders. Information about the interprofessional education program can be accessed through the Faculty of Medicine of the University of Chile, available at: http://www.medicina.uchile.cl/; http://formacioncomun.med.uchile.cl. This study was conducted among 650 students from the 8 healthcare undergraduate programs (medicine, nursing, midwifery, speech therapy, physical therapy, occupational therapy, medical technology, and nutrition and dietetics), grouped into 28 teams with 28 professors from the Faculty of Medicine of the University of Chile, and 48 leaders from social organizations.
This experience took place in 48 different communities of Santiago and the rest of Chile during the COVID-19 pandemic, from November 10, 2020 to January 12, 2021 (Supplement 1).

Objectives
This article aimed to share the collaborative experience of interprofessional teamwork, including the programs and video files of education for community residents.

Teaching and learning activities
In 2020, due to the COVID-19 pandemic, all activities were performed online (Table 1). This was done to protect the health of students, professors, and community residents. The experience made it possible to integrate teaching and community engagement. Throughout these experiences, students developed teamwork skills, respect for the professional role of each member of the healthcare team and of other professions and disciplines, and integration with local community workers, in collaboration with social leaders and teachers. The methodology was to carry out a diagnosis of the educational needs detected in the community through online focus groups, problem trees, brainstorming, and online questionnaires. After the information was collected, a Gantt chart was made for the development of the online health education project, in conjunction with community leaders, based on the educational needs detected in the population among which the intervention would be carried out.

Challenges in implementing the online interprofessional education experience based on community learning
Students had to carry out an online healthcare educational project based on community needs related to COVID-19 pandemic care, such as hand washing and alcohol gel disinfection, mental health care during the pandemic, and responsible pet ownership. During this experience, effective communication stood out in the virtual framework with people and groups in the different areas of this intervention, as participants adhered to ethical and bioethical principles in their praxis. At the same time, the experience set out new challenges related to accessibility and coordination with community leaders via new communication technologies such as Zoom, Meet, WhatsApp, Instagram, YouTube, and infographics.

Implementing the online interprofessional education experience based on community learning
Students drew on their prior knowledge from disciplinary and experiential spheres. They also faced situational contexts, which presented an opportunity to apply that knowledge to community education during the COVID-19 pandemic. The experience enabled dynamism and the acquisition of teamwork skills through a virtual modality. Students also valued the contribution of different professionals and community roles in facing a problem with characteristics transferable to their future professional performance. Although the experience was positive, it would still be premature to draw a definitive conclusion regarding the impact of this modality of work on the skills acquired through interprofessional education [1]. The material developed by the students was shared with the community for educational purposes: educational videos, audio reports, and infographics were generated and published on Instagram (Supplements 2-4).

Conclusion
Students were able to adapt to the COVID-19 pandemic situation to educate community residents despite challenges, including a lack of internet connectivity and the limited time available to do the work. The flexibility of all team members and their commitment to carrying out the community project stood out as important factors.
The implementation of this program was achieved by planning, articulating, implementing, and assessing an intervention that became useful for both the community and the team. The educational materials and videos shared in this article will be helpful for other interprofessional health educators implementing the same kind of program.
2022-03-25T06:18:22.290Z
2022-03-24T00:00:00.000
{ "year": 2022, "sha1": "4b0df482ffa612109f1dca60a22588c5d96d8db5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3352/jeehp.2022.19.6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "822e6d1dff3268675cf39c5450e9ab0b241863d5", "s2fieldsofstudy": [ "Education", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7459604
pes2o/s2orc
v3-fos-license
Enzymatic Degradation of Aromatic and Aliphatic Polyesters by P. pastoris Expressed Cutinase 1 from Thermobifida cellulosilytica

To study the hydrolysis of aromatic and aliphatic polyesters, cutinase 1 from Thermobifida cellulosilytica (Thc_Cut1) was expressed in P. pastoris. No significant differences were observed between the expression of native Thc_Cut1 and of two glycosylation site knock-out mutants (Thc_Cut1_koAsn and Thc_Cut1_koST) concerning the total extracellular protein concentration and volumetric activity. Hydrolysis of poly(ethylene terephthalate) (PET) was shown for all three enzymes based on quantification of the released products by HPLC, and similar concentrations of released terephthalic acid (TPA) and mono(2-hydroxyethyl) terephthalate (MHET) were detected for all enzymes. Both tested aliphatic polyesters, poly(butylene succinate) (PBS) and poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV), were hydrolyzed by Thc_Cut1 and Thc_Cut1_koST, although PBS was hydrolyzed to a significantly higher extent than PHBV. These findings were also confirmed by quartz crystal microbalance (QCM) analysis; for PHBV only a small mass change was observed, while the mass of PBS thin films decreased by 93% upon enzymatic hydrolysis with Thc_Cut1. Although both enzymes led to similar concentrations of released products upon hydrolysis of PET and PHBV, Thc_Cut1_koST was found to be significantly more active on PBS than the native Thc_Cut1. Hydrolysis of PBS films by Thc_Cut1 and Thc_Cut1_koST was followed by weight loss and scanning electron microscopy (SEM). Within 96 h of hydrolysis, up to 92 and 41% weight loss was detected with Thc_Cut1_koST and Thc_Cut1, respectively. Furthermore, SEM characterization of PBS films clearly showed that enzyme treatment resulted in morphological changes of the film surface.
INTRODUCTION
Plastic materials are ubiquitous in our daily life, and although the annual European production has been in a steady state for a decade, the global production is constantly increasing (Plastics Europe: Plastics-the Facts 2015; accessed May 17, 2016). Most conventional plastics such as polyethylene, polypropylene, polystyrene, poly(vinyl chloride), and poly(ethylene terephthalate) (PET) are fully petrol-based and not biodegradable. The release of these plastic materials into the environment and their subsequent accumulation poses environmental risks and negatively impacts ecosystems, including the extreme consequence of plastic patch formation in rivers and oceans (Eriksen et al., 2014; Lechner et al., 2014). Considerable effort has been directed toward implementing bio-based plastics as environmentally friendly alternatives to the traditional petrol-derived materials. In particular, the substitution of polyesters such as PET and polybutyrate adipate terephthalate (PBAT) seems to be imminent, since several market-leading companies are focusing their investigations on the production of monomers derived from renewable biomass. Recent innovations also allow the biotechnological production of bio-based monomers from renewable carbon, enabling the replacement of petrochemical building blocks (Pellis et al., 2016c,d). These bio-based building blocks can be produced either by microbial conversion of various feedstocks or by combined biotechnological-chemical pathways that lead to various monomers such as 1,4-butanediol and adipic acid (used for the production of PBAT) (Harmsen et al., 2014). Fermentation of sugars or various other feedstocks, including lignocellulose (Pinazo et al., 2015), can also be used to obtain succinic and lactic acid for the production of poly(butylene succinate) (PBS) and poly(lactic acid), respectively. Poly(hydroxyalkanoates) (PHAs), on the other hand, are directly produced by natural or engineered microorganisms. Mulch films are the most common and most highly consumed plastic products in agriculture, and their widespread use has led to an increase in environmental waste. Therefore, commercially available mulch films are usually made of biodegradable plastics, with PBS as the main component (Koitabashi et al., 2012). In recent years there has been considerable interest in the substitution of PET with plant-derived poly(ethylene furanoate) (PEF) (Pellis et al., 2016b). The monomers for PEF (2,5-furandicarboxylic acid and ethylene glycol) can be produced entirely from renewable feedstocks (Pellis et al., 2016d). The potential of enzymes for the degradation of polymer building blocks has been studied by several groups. Various enzymes belonging to the cutinase family have been reported to hydrolyze PET, the most used polyester (Yoon et al., 2002; Vertommen et al., 2005; Herzog et al., 2006; Heumann et al., 2006; Donelli et al., 2009; Herrero Acero et al., 2011; Ribitsch et al., 2011, 2012; Kanelli et al., 2015). Moreover, reports on the biocatalyzed hydrolysis of poly(lactic acid) (Pellis et al., 2015, in press; Ortner et al., 2017), poly(butylene succinate) (Hu et al., 2016), and poly(ethylene furanoate) (Pellis et al., 2016b) using similar biocatalysts have also been described, and they certify the importance of such processes in the context of sustainable development (Clark et al., 2016; Pellis et al., 2016c). Despite the high industrial potential reported for cutinases from Thermobifida spp., these enzymes have usually been obtained by intracellular recombinant expression in E. coli
(Su et al., 2013; Roth et al., 2014; Then et al., 2016), an approach that hampers the scale-up of the production process. Lately, the methylotrophic yeast P. pastoris has gained increasing interest as an expression system for recombinant proteins for basic research as well as for industrial applications, as shown by the number of filed patents (Bollok et al., 2009). In addition to the ability of P. pastoris to perform post-translational modifications, one of its main advantages is that recombinant proteins can often be secreted at high concentrations while maintaining their correct folding and activity (Cregg et al., 1993; Cereghino and Cregg, 2000; Ahmad et al., 2014; Hu et al., 2016). Furthermore, this host usually allows a simple production scale-up by changing from shaking flask expressions to (fed-batch) fermenters (Schilling et al., 2002; Johnson et al., 2003; Zhao et al., 2008). Several commercial proteins are produced in P. pastoris, including recombinant Tritirachium album Proteinase K (Thermo Scientific, Waltham, MA, USA), Trypsin (Roche Applied Science, Germany), and nitrate reductase (The Nitrate Elimination Co., Lake Linden, MI, USA; Ahmad et al., 2014). In the past, successful expression of cutinases from Fusarium solani (Kwon et al., 2009; Hu et al., 2016), Alternaria brassicicola (Koschorreck et al., 2010), Glomerella cingulata (Seman et al., 2014), and Trichoderma harzianum (Rubio et al., 2008) in P. pastoris has been reported. In this study, cutinase 1 from Thermobifida cellulosilytica (Thc_Cut1) as well as two glycosylation knock-out mutants (Thc_Cut1_ko) were cloned and overexpressed in P. pastoris and screened for their ability to hydrolyze the aromatic polyester PET and the aliphatic polyesters poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) and PBS.

Chemicals and Reagents
Restriction enzymes, antarctic phosphatase, T4 DNA ligase, as well as Endo Hf were obtained from New England Biolabs (USA). Synthetic genes of P. pastoris codon-optimized Thc_Cut1 and of the glycosylation site knock-out mutants (Thc_Cut1_ko_Asn and Thc_Cut1_ko_ST) cloned into pMK-T were ordered from GeneArt (Germany). The Pro-Q® Emerald 300 Glycoprotein Gel and Blot Stain Kit (P21857), the CandyCane glycoprotein molecular weight standard (C21852), as well as the P. pastoris KM71H strain and the expression vector pPICZαB were acquired from ThermoFisher Scientific (USA). E. coli XL-10 cells were purchased from Agilent (USA). The PureYield™ Plasmid Midiprep System, the SV Gel and PCR Clean-Up System kits, and Mini-PROTEAN® TGX (Stain-Free™) precast gels were obtained from Promega (Germany) or Bio-Rad (USA), respectively. Peptone, yeast extract, and Difco yeast nitrogen base were purchased from Becton Dickinson (USA), and Zeocin™ was obtained from Eubio (Austria). All other chemicals were of the highest available purity and were ordered from Sigma-Aldrich. PET powder obtained from a Cristaline® still water bottle was kindly provided by Carbios (St-Beauzire, France) and was previously characterized (Gamerith et al., in press). PHBV was purchased from Metabolix, while PBS was purchased from Sigma-Aldrich. The PBS material used for the quartz crystal microbalance (QCM) experiments was obtained from BASF, and the physicochemical properties of this polyester were previously reported (Zumstein et al., 2016).

Designing of Thc_Cut1 Glycosylation Site Knock-out Mutants
Using the NetNGlyc 1.0 server (Technical University of Denmark), five possible N-glycosylation sites were predicted in the native Thc_Cut1 sequence (GenBank accession no. ADV92526.1). Asparagine (Asn) at amino acid position 10 is directly followed by a proline, which makes glycosylation unlikely due to conformational constraints. Also, for Asn at position 233 the glycosylation potential was lower compared to the other potential glycosylation sites according to the prediction. Therefore, the three glycosylation sites at Asn 29, Asn 49, and Asn 161 were knocked out by changing the nucleotide sequence accordingly, resulting in two triple knock-out mutants (for details see Table 1). Synthetic genes of the designed glycosylation site knock-out mutants (Thc_Cut1_ko) cloned into pMK-T were ordered from GeneArt.
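The sequon-based reasoning behind this design can be illustrated with a short script that flags N-X-S/T motifs (X ≠ Pro), the canonical N-glycosylation consensus that tools such as NetNGlyc evaluate; the demo peptide below is made up and is not the Thc_Cut1 sequence.

import re

def find_sequons(protein):
    """Return 1-based positions of Asn residues in N-X-S/T sequons (X != Pro)."""
    return [m.start() + 1 for m in re.finditer(r"N(?=[^P][ST])", protein)]

demo = "MANPSSLLNKSADGTNVTQWNGSAA"   # hypothetical peptide
print(find_sequons(demo))            # e.g., [9, 16, 21]; N3 is skipped (N-P)

Note that NetNGlyc additionally scores each sequon with a neural network, so a plain motif scan only reproduces the list of candidate sites, not the glycosylation potentials discussed above.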
General Recombinant DNA Techniques
All general recombinant DNA techniques described in this work were performed following previously reported standard protocols (Sambrook et al., 1989). Digestion of the cloning vector (pMK-T) and the expression vector (pPICZαB) was performed with NotI-HF and XhoI, dephosphorylation was performed with antarctic phosphatase, and T4 DNA ligase was used for ligation according to the manufacturer's protocols (New England Biolabs). Plasmids and DNA fragments were purified with the PureYield™ Plasmid Midiprep System Kit or the Wizard® SV Gel and PCR Clean-Up System Kit. After transformation of E. coli XL-10 cells and plasmid purification, the pPICZαB_Thc_Cut1 and pPICZαB_Thc_Cut1_ko constructs were sequenced by LGC Genomics in order to confirm the DNA sequence.

Transformation and Screening of P. pastoris
The constructs were transformed into P. pastoris KM71H by electroporation, and cells were plated on sorbitol-containing YPDS agar plates [1% (w/v) yeast extract, 2% (w/v) peptone, 2% (w/v) glucose, 1 M sorbitol, 2% (w/v) agar] containing 0.1 mg/mL Zeocin™ and incubated at 28 °C for 3-5 days. Transformants were cultivated in YPD medium in 96-deep-well plates and screened for multicopy integrants on YPD agar plates [1% (w/v) yeast extract, 2% (w/v) peptone, 2% (w/v) glucose, 2% (w/v) agar] containing 0.1-2 mg/mL Zeocin™. Stock cultures of selected clones were stored at −80 °C.

P. pastoris Shaking Flask Fermentation
For enzyme production, 1 L baffled shaking flasks containing 250 mL of buffered glycerol complex medium [BMGY; 1% (w/v) yeast extract, 2% (w/v) peptone, 1% (v/v) glycerol, 3.4% (w/v) yeast nitrogen base, 4 × 10⁻⁵% biotin, 100 mM potassium phosphate buffer pH 6.0] were inoculated with P. pastoris KM71H transformants and incubated at 28 °C and 150 rpm for approximately 16-18 h. Cells were harvested by centrifugation (3,000 × g, 8 min, 22 °C) and the cell pellet was re-suspended in one-tenth of the original volume (75 mL culture volume in 300 mL shaking flasks). Enzyme expression was induced by the addition of methanol to a final concentration of 1% (v/v). Methanol was added twice daily to a final concentration of 1% (v/v) to sustain the induction. During the fermentation, samples were collected by centrifugation (14,000 rpm, 5 min, 22 °C) and the supernatants were stored at −20 °C until further use. After up to 120 h of enzyme expression, cells were harvested by centrifugation (4,500 rpm, 4 °C, 20 min) and the supernatant was stored at −20 °C until protein purification.

Immobilized Metal Ion Affinity Chromatography for Enzyme Purification
Enzyme purification from the fermentation supernatants was performed via affinity chromatography (ÄKTA purifier, GE Healthcare) using HisTrap™ excel 5 mL columns (GE Healthcare). After sample loading (flow rate 2 mL/min), the column was washed with 7 column volumes (CV) of equilibration buffer (20 mM NaH₂PO₄, 500 mM NaCl, pH 7.4) followed by 3 CV of 1% elution buffer (20 mM NaH₂PO₄, 500 mM NaCl, 500 mM imidazole, pH 7.4). The enzyme was eluted with 45% elution buffer for 6 CV.
Finally, the column was washed with 100% elution buffer for 3 CV and stored in a 20% ethanol solution. Proteins were detected at 280 nm. SDS-PAGE analysis of the purification fractions was performed in order to confirm the presence of Thc_Cut1 or Thc_Cut1_ko in the pooled fractions. PD-10 columns (Sephadex™ G-25 Medium, GE Healthcare) were used to exchange the buffer to 100 mM KH₂PO₄/K₂HPO₄ pH 7.0 before storage of the purified proteins at −20 °C until further use.

SDS-PAGE and Glycostain Analysis
SDS-PAGE of fermentation supernatant samples withdrawn at different time points was performed under standard conditions (Laemmli, 1970). After staining with Coomassie Brilliant Blue R-250, SDS-PAGE gels were imaged using a ChemiDoc™ MP Imaging System (Bio-Rad). Stain-free SDS-PAGE gels were directly visualized using the same ChemiDoc system without further treatment. Deglycosylation of Thc_Cut1 and the Thc_Cut1_ko mutants was performed using Endo Hf according to the manufacturer's instructions (New England Biolabs). Glycostain gels were prepared according to the Pro-Q® Emerald 300 Glycoprotein Gel and Blot Stain Kit manual and detected with a G-Box or a hand-held UV lamp at 300 nm.

Protein Analysis
Total protein concentrations in the fermentation supernatants (from different time points) as well as protein concentrations of the purified enzymes were determined using the Bradford assay (Bio-Rad) according to the manual, with bovine serum albumin (BSA) as standard.

Esterase Activity Assay
Esterase activity of fermentation supernatants (from different time points) and of purified enzymes was measured using p-nitrophenyl butyrate (pNPB) as a soluble substrate according to Gamerith et al., using the experimentally determined extinction coefficient (ε = 9.7 mL µmol⁻¹ cm⁻¹) (Gamerith et al., in press).
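Since the pNPB assay follows product release photometrically, volumetric activity can be derived from the absorbance slope via the Beer-Lambert law using the extinction coefficient given above. The sketch below is a hypothetical illustration of that arithmetic only; the slope, path length, and dilution factor are placeholder values, not measured data from this study.

    def volumetric_activity(dA_per_min, epsilon=9.7, path_cm=1.0, dilution=1.0):
        """Esterase activity in U/mL (µmol product released per min per mL).
        Beer-Lambert: A = epsilon [mL µmol^-1 cm^-1] * c [µmol/mL] * l [cm]."""
        return dA_per_min / (epsilon * path_cm) * dilution

    # Placeholder example: slope of 0.97 A/min at a 1:10 dilution -> 1.0 U/mL
    print(volumetric_activity(0.97, dilution=10.0))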
Enzymatic Hydrolysis of Polyester Substrates
For the hydrolysis reactions, 50 mg of PET powder or 5 mg of aliphatic polyester powders (PHBV, PBS) were weighed and incubated with 5 µM of enzyme (Thc_Cut1 or Thc_Cut1_ko_ST) diluted in a final volume of 1 mL of KH₂PO₄/K₂HPO₄ buffer (1 M, pH 8.0). In the case of PBS films, pieces of 0.5 × 1.0 cm were cut and washed in three serial steps (5 g/L Triton X-100, 100 mM Na₂CO₃, and ddH₂O; each for 30 min at 50 °C and 100 rpm) prior to the hydrolysis reactions in order to remove possible surface contaminations (Pellis et al., 2015, in press; Gamerith et al., 2016). Incubations were performed in 2 mL tubes at 100 rpm and 65 °C for different time frames. The released acids, alcohols and oligomers, namely terephthalic acid (TPA) and mono(2-hydroxyethyl) terephthalate (MHET) for PET, succinic acid (SA) and 1,4-butanediol (BDO) for PBS, and 3-hydroxybutyric acid (3-HBA) for PHBV, were analyzed by HPLC using either a diode array detector (DAD) or a refractive index detector (RI). As a blank, polyester substrates were incubated in KH₂PO₄/K₂HPO₄ buffer (1 M, pH 8.0) without enzyme. Enzyme blanks were also performed by incubating 5 µM solutions of the enzymes in KH₂PO₄/K₂HPO₄ buffer (1 M, pH 8.0) without polyester substrate. All hydrolysis experiments were performed in triplicate.

Analysis of Soluble Monomers and Oligomers Released, by High Performance Liquid Chromatography (HPLC-DAD or HPLC-RI)
HPLC-DAD Detection of TPA and MHET
HPLC analysis of the products released upon enzymatic hydrolysis of PET was performed as recently described by Gamerith et al. (in press). Briefly, after enzyme treatment of the polyester powders, the enzyme was precipitated with ice-cold methanol. After acidification to pH 3.5, samples were centrifuged (Hettich MIKRO 200 R, Tuttlingen, Germany) at 14,000 rpm and 0 °C for 15 min, filtered (0.45 µm nylon) and transferred to HPLC vials. For HPLC analysis (Agilent Technologies, 1260 Infinity), a reversed-phase C18 column (YMC 30, 250 × 4.6 mm ID, S-5 µm) was used. The analysis was carried out with a constant 10% of 0.01 N formic acid, starting with 85% water and 5% methanol, ramping (over 1 min) to 10% methanol, then (to 8 min) to 50% methanol and (to 10 min) to 90% methanol, before returning to the starting conditions with a 7 min post-run. The flow rate was set to 0.85 mL min⁻¹ and the column was maintained at a temperature of 40 °C. The injection volume was set to 10 µL. Detection of the analytes was performed with a photodiode array detector (Agilent Technologies, 1290 Infinity II) at a wavelength of 241 nm. Standards of TPA and bis(2-hydroxyethyl) terephthalate (BHET) were prepared in KH₂PO₄/K₂HPO₄ buffer (1 M, pH 8.0) in a range of 0.005-0.5 mM and treated in the same way as the samples.

HPLC-RI Detection of SA, BDO, and 3-HBA
HPLC-RI detection of the products released from PHBV and PBS was performed as previously reported by Pellis et al. (2015). Briefly, hydrolysis samples were precipitated following the Carrez method and filtered through 0.45 µm nylon filters (GVS, Indianapolis, USA). The analytes were separated by HPLC with refractive index detection (1100 series, Agilent Technologies, Palo Alto, CA) equipped with an ICSep-ION-300 column (Transgenomic Organic, San Jose, CA) of 300 mm by 7.8 mm and 7 µm particle diameter. The column temperature was maintained at 45 °C. Samples (40 µL) were injected and separated by isocratic elution for 40 min at 0.325 mL min⁻¹ with 0.005 M H₂SO₄ as the mobile phase. Standards of SA, BDO and 3-HBA were prepared in KH₂PO₄/K₂HPO₄ buffer (1 M, pH 8.0) in a range of 0.5-100 mM and treated in the same way as the samples.

Enzymatic Hydrolysis Measurements Using a Quartz Crystal Microbalance (QCM)
The hydrolysis of spin-coated PHBV and PBS thin films by Thc_Cut1 was measured by QCM as previously reported (Zumstein et al., 2016). In brief, we spin-coated thin films from chloroform solutions containing the respective polyester (concentration: 0.5% w/w) onto the surfaces of gold-coated QCM sensors. After air-drying the sensors, they were incubated in a buffered solution [3 mM tris(hydroxymethyl)aminomethane, 10 mM potassium chloride, pH 7.0] for 14 h. The sensors were subsequently mounted into the flow cells of a QCM instrument (model E4, Q-Sense) and rinsed with a buffered solution of the same composition at a volumetric flow rate of 20 µL/min and a temperature of 40 °C. Upon attaining stable resonance frequencies of the fundamental tone and several oscillation overtones, we switched to delivering solutions that contained Thc_Cut1 (2.07 µg/mL) but were otherwise identical in pH and ionic composition to the solutions used for equilibration. We subsequently monitored changes in the resonance frequencies over the course of the hydrolysis experiment and related these frequency changes to adlayer mass changes using the Sauerbrey equation. We used the fifth overtone of the oscillation for the calculations and data plotting.
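For reference, the Sauerbrey equation mentioned above relates the measured frequency shift of the n-th overtone to the areal mass change of a thin, rigid adlayer:

    \Delta m = -C \, \frac{\Delta f_n}{n}

where Δm is the adlayer mass change per unit area, Δf_n is the frequency shift of the n-th overtone (here n = 5), and C is the mass-sensitivity constant of the crystal (approximately 17.7 ng cm⁻² Hz⁻¹ for the 5 MHz sensors typically used in Q-Sense instruments; this constant is standard background, not a value reported in this study). A film losing mass during hydrolysis therefore appears as an increase in resonance frequency.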
To measure the fraction of the coated polyester dry mass that was removed over the course of the hydrolysis experiment, we measured the resonance frequency of each sensor in air after the experiment as well as before and after the initial polyester spin-coating step. We note that the Thc_Cut1 used in the QCM experiments was expressed in E. coli.

Scanning Electron Microscopy (SEM)
The morphology of the PBS films was qualitatively assessed by scanning electron microscopy (SEM). Control PBS films (without any enzymatic treatment) and enzymatically hydrolyzed films (after 24, 48, 72, and 96 h) were surface-characterized. All SEM images were acquired by collecting secondary electrons on a Hitachi TM3030 (Metrohm INULA GmbH, Austria) operating at the EDX acceleration voltage.

RESULTS AND DISCUSSION
Glycosylation Site Knockout Mutant Design, Vector Construction, and Transformation in P. pastoris
In their natural hosts or when heterologously expressed in E. coli (Su et al., 2013; Roth et al., 2014; Then et al., 2016), Thermobifida spp. cutinases are not glycosylated. In contrast, expression in P. pastoris may lead to glycosylation, which can have positive effects such as increased stability, as previously shown for a Thermobifida xylanase, human aquaporin 10 (Öberg et al., 2011) and a Rhizopus chinensis lipase (Yang et al., 2015). Shirke et al. also recently reported that glycosylation stabilizes a P. pastoris-expressed cutinase from Thielavia terrestris by inhibiting its thermal aggregation (Shirke et al., 2017). On the other hand, glycosylation may also have negative effects, as shown for example for a lipase from Rhizomucor miehei, which had decreased activity upon N-glycosylation (Liu et al., 2014). Therefore, it was important to investigate the influence of glycosylation on the activity and stability of Thc_Cut1 when expressed in P. pastoris. Hence, two glycosylation site triple knockout mutants were designed (Figure 1). The recombinant pPICZαB_Thc_Cut1 and pPICZαB_Thc_Cut1_ko plasmids contained the codon-optimized gene of wild-type or mutated Thc_Cut1, the methanol-inducible alcohol oxidase 1 promoter (AOX1), the S. cerevisiae α-factor secretion signal, a C-terminal 6x His-tag and a transcription termination signal. The tightly regulated AOX1 promoter holds advantages for the overexpression of proteins, since cells are not stressed by the accumulation of recombinant protein during the growth phase. Even the production of proteins that are toxic to P. pastoris is possible by uncoupling growth from the production phase (Ahmad et al., 2014). The most commonly employed method of generating multi-copy expression strains in P. pastoris is based on direct screening of transformants on agar plates containing increasing concentrations of antibiotics (e.g., 0.1-2 mg/mL of Zeocin™) (Ahmad et al., 2014). After successful transformation by electroporation, the selection of Thc_Cut1 and Thc_Cut1_ko transformants yielded clones that might contain multi-copy integrations, as shown by growth on 2 mg/mL Zeocin™. A direct correlation between copy number and expression level has been shown especially for intracellular expression (Vassileva et al., 2001; Marx et al., 2009), but this direct correlation is not necessarily valid for secreted proteins (Marx et al., 2009).

Analysis of Cutinase Expression in Shaking Flasks
After successful transformation and screening on YPD agar plates with high Zeocin™ concentrations, the best-growing P. pastoris KM71H transformants of each enzyme were chosen for enzyme production in shaking flasks.
Enzyme expression was induced by the addition of methanol, and during the fermentation several supernatant samples were collected by centrifugation. SDS-PAGE analysis of these supernatant samples, drawn at different time points during the shaking flask fermentations, clearly showed that methanol induction successfully stimulated the expression of the cutinases (Figure 2). Although hyperglycosylation of heterologous proteins is more common in S. cerevisiae (Grinna and Tschopp, 1989), expression in P. pastoris can also lead to hyperglycosylation, mainly attributed to N-mannosylation (Bretthauer and Castellino, 1999; Várnai et al., 2014). Glycosylation may affect the migration of proteins on SDS-PAGE or, in the case of heterogeneous glycosylation, may result in smears (Bretthauer and Castellino, 1999; Várnai et al., 2014). It was previously reported that heterologous expression of the cutinase CUTAB1 from Alternaria brassicicola in P. pastoris led to a single band on SDS-PAGE when applied as crude supernatant. However, when applied after purification, an additional band became more distinct. Since purified and Endo Hf-deglycosylated CUTAB1 showed only one band, the two different bands were assigned to the glycosylated and non-glycosylated enzyme (Koschorreck et al., 2010). In our case, Thc_Cut1 appeared as a distinctive band around 38 kDa (Figure 2A, right), indicating its high level of glycosylation, whereas Endo Hf-deglycosylated Thc_Cut1 showed one clear band corresponding to the calculated mass of 29.4 kDa (Figure 2A, left; Herrero Acero et al., 2011). The protein band around 70 kDa corresponds to the Endo Hf used for deglycosylation. In contrast, the Thc_Cut1_ko mutants showed a clear band around 29 kDa with or without Endo Hf treatment, suggesting that the glycosylation sites were successfully knocked out in both mutants (Figures 2B,C). These results were confirmed by staining SDS-PAGE gels with Pro-Q® Emerald 300 glycoprotein stain, which creates a bright green-fluorescent signal on glycoproteins. For direct comparison of Coomassie and glyco-staining, the same samples were loaded on two SDS-PAGE gels, of which only one was glycostained afterwards. Glycostained gels showed clearly fluorescent bands of purified native Thc_Cut1 expressed in P. pastoris, whereas no fluorescent bands were detected for Thc_Cut1 expressed in E. coli (Gamerith et al., in press; Figure 3) or for Endo Hf-deglycosylated Thc_Cut1 expressed in P. pastoris (see Figure S1). Although all fermentation supernatant samples from different time points during expression of the Thc_Cut1_ko mutants showed high unspecific fluorescent signals, no specific bands corresponding to Thc_Cut1 were detected, suggesting a high background noise of the medium (see Figure S2 for Thc_Cut1_ko_ST). Purified glycosylation site knockout mutants also did not show any fluorescent bands, verifying the successful knockout of all glycosylation sites (Figure S2, last lane, for purified Thc_Cut1_ko_ST as an example). Post-translational glycosylation processes might influence the expression level owing to their time and energy demands. Furthermore, dissolved oxygen concentrations and careful control of the methanol levels are also crucial for high expression of recombinant proteins in P. pastoris (Seman et al., 2014). Methanol might not only have toxic effects on the cells but, as a highly flammable and hazardous substance, is also problematic for large-scale applications (Ahmad et al., 2014).
Nevertheless, studies on methanol-inducible promoters, including AOX1, have shown that protein expression can also be achieved without methanol induction by constitutive co-expression of the positively acting transcription factor Prm1p from either the GAP, TEF, or PGK promoter (Takagi et al., 2012). In agreement with the SDS-PAGE analysis, an increase in the total extracellular protein concentration, determined by Bradford assay (Bio-Rad) with BSA as standard, as well as an increase in volumetric esterase activity on pNPB as substrate, was detected in the fermentation supernatants over time (Figures 4A,B, respectively). Within 24 h of methanol addition, a clear increase in the total extracellular protein concentration was observed. Interestingly, no significant differences between the expression of native Thc_Cut1 and the Thc_Cut1_ko mutants were observed with respect to total extracellular protein concentration. Furthermore, the volumetric activities of all enzyme variants on pNPB were of the same order of magnitude. Heterologous expression of Thc_Cut1 and the Thc_Cut1_ko mutants in P. pastoris yielded about 400 ± 20 mg total extracellular protein per liter in shaking flasks without optimization of the culture conditions. These results are comparable to the previously reported expression level of the F. solani cutinase in P. pastoris of 340 mg extracellular protein per liter (Kwon et al., 2009). In comparison to the heterologous expression of CUTAB1 in P. pastoris by Koschorreck et al., which yielded 212 mg extracellular protein per liter, the expression level of Thc_Cut1 was almost doubled (Koschorreck et al., 2010). It is well known that, especially in the case of P. pastoris, optimization of the expression conditions in shake flasks or fed-batch fermenters can largely improve protein yields (Schilling et al., 2002; Zhao et al., 2008). However, the scope of this study was to demonstrate the general feasibility of Thc_Cut1 expression in P. pastoris.

Immobilized Metal Ion Affinity Chromatography (IMAC) for Enzyme Purification
Of the 70 mL of crude supernatant obtained for each enzyme, 65 mL were loaded onto HisTrap™ excel columns, resulting in 14 mL of purified and buffer-exchanged enzyme with different concentrations and esterase activities (Table 2). Interestingly, around 75% of the native Thc_Cut1 and Thc_Cut1_ko_ST could be recovered from the crude supernatant, whereas only around 34% of Thc_Cut1_ko_Asn could be purified. Protein peaks of Thc_Cut1 and Thc_Cut1_ko were detected at 280 nm, and the presence of the enzymes in the corresponding fractions was confirmed by SDS-PAGE analysis (see Figure S3 for an example of the Thc_Cut1 purification). Kwon et al. reported a negative effect of a C-terminal 6xHis tag on the cellular processes for proper synthesis, folding, and secretion of the F. solani cutinase in P. pastoris (Kwon et al., 2009). Similarly, C-terminal fusion of small tags [such as FLAG-(Gly)5 and His-(Gly)5 tags] to the extracellular domain of the human Fas ligand (hFasLECD) led to a failure in the secretion of functional protein in P. pastoris, whereas the secretion of functional hFasLECD was retained upon N-terminal tagging (Muraki, 2006). Nonetheless, since all cutinases used in this study carried a C-terminal 6xHis tag, it was not possible to assess any effect of C-terminal tags within this study.
Enzymatic Hydrolysis of Aromatic Polyesters (PET)
Several cutinases (Vertommen et al., 2005; Heumann et al., 2006; Donelli et al., 2009; Kanelli et al., 2015), including E. coli-expressed Thc_Cut1 (Herrero Acero et al., 2011), have been found to hydrolyze PET. For this reason, this aromatic polyester was chosen as the substrate for the first hydrolysis experiments with the cutinases expressed in P. pastoris, in order to confirm their activity. Besides the crystallinity of the polyester (Mochizuki and Hirami, 1997; Vertommen et al., 2005; Herzog et al., 2006; Mueller, 2006; Brueckner et al., 2008; Tokiwa et al., 2009; Pellis et al., 2016a), the incubation temperature is also well known to affect the enzymatic hydrolysis of polyesters, mainly through its effect on polymer chain mobility (Marten et al., 2003; Eberl et al., 2009). Incubation temperatures close to the glass transition temperature (Tg) are suggested in order to promote enzymatic attack of polymers for degradation purposes (Mueller et al., 2005; Mueller, 2006; Kawai et al., 2014; Then et al., 2016), while temperatures below Tg are instead suggested when surface hydrophilization is desired (Pellis et al., 2015, in press; Ortner et al., 2017). We recently reported that, for short-term reactions, higher incubation temperatures led to faster hydrolysis rates of PET by E. coli-expressed Thc_Cut1 (Gamerith et al., in press), while for longer reaction times limited enzyme stability may counteract this effect. Furthermore, the ionic strength as well as the choice of buffer were found to have a severe effect on the enzymatic hydrolysis of PET by polyester hydrolases. High buffer concentrations might prevent a pH decrease of the incubation buffer during the hydrolysis reactions caused by the acidic released products (e.g., TPA). Hence, hydrolysis of a 24% crystalline PET powder with P. pastoris-expressed Thc_Cut1, Thc_Cut1_ko_Asn and Thc_Cut1_ko_ST was performed at 65 °C in 1 M KH₂PO₄/K₂HPO₄ pH 8.0, and the released products were quantified by HPLC-DAD (Figure 5). No significant differences between the hydrolysis efficiencies of the two glycosylation site knockout mutants could be observed, but treatment with the knockout mutants resulted in slightly increased TPA levels compared to the native Thc_Cut1. Up to 62 mM released TPA was observed after 96 h of hydrolysis, corresponding to ∼24% degradation of the initial PET powder to soluble TPA. Compared to previously reported results for PET hydrolysis with Thc_Cut1 by Gamerith et al., obtained under incubation conditions of 100 mM KH₂PO₄/K₂HPO₄ pH 7.0 and 60 °C (Gamerith et al., in press), the combination of increased incubation temperature, higher pH and increased buffer concentration resulted in significantly higher PET hydrolysis rates by Thc_Cut1 in the current study. This high degree of hydrolysis signifies a big step toward the feasibility of enzymatic recycling of polyesters. It is interesting to note that while the two glycosylation site knockout mutants showed very similar results with regard to both expression and PET hydrolysis rates, the purification yield of Thc_Cut1_ko_Asn was significantly lower than that of Thc_Cut1_ko_ST. Due to its higher productivity (and therefore scalability) in expression, further enzyme selectivity studies on aliphatic polyesters were performed using the native Thc_Cut1 and Thc_Cut1_ko_ST.
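The ∼24% figure can be reproduced from the released-monomer concentration: with a PET repeat-unit molar mass of about 192 g/mol, 50 mg of powder in a 1 mL reaction corresponds to roughly 260 µmol of repeat units, so 62 µmol of soluble TPA (62 mM in 1 mL) accounts for about 24% of the initial polymer. A minimal sketch of this bookkeeping, using values taken from the text (the same arithmetic applies to the PBS results below, with a repeat-unit mass of about 172 g/mol):

    def percent_degraded(released_mM, volume_mL, polymer_mg, repeat_unit_g_mol):
        """Fraction of polymer repeat units recovered as soluble monomer, in %."""
        released_umol = released_mM * volume_mL                # mM x mL -> µmol
        total_umol = polymer_mg * 1000.0 / repeat_unit_g_mol   # mg -> µg -> µmol
        return 100.0 * released_umol / total_umol

    # PET: 62 mM TPA, 1 mL reaction, 50 mg powder, ~192 g/mol repeat unit -> ~23.8%
    print(percent_degraded(62, 1.0, 50, 192))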
Enzymatic Hydrolysis of Aliphatic Polyesters (PHBV and PBS)
In order to investigate the substrate specificities of Thc_Cut1 and Thc_Cut1_ko_ST in more detail, hydrolysis experiments were performed using the aliphatic polyesters PHBV and PBS as substrates. Both aliphatic polyesters were successfully hydrolyzed by Thc_Cut1 and Thc_Cut1_ko_ST, although to very different extents. Figures 6, 7 show the quantified concentrations of released products after up to 96 h of enzymatic hydrolysis of PHBV and PBS powders, respectively. Interestingly, Thc_Cut1 and Thc_Cut1_ko_ST reached similar levels of released 3-HBA (∼0.5 mM) for the hydrolysis of PHBV, whereas for PBS the released products were approximately twice as high for Thc_Cut1_ko_ST as for native Thc_Cut1 (i.e., ∼14 vs. ∼7 mM SA and BDO). Due to the different substrate amounts (50 mg for PET vs. 5 mg for PBS and PHBV), the absolute values of released products seem low compared to PET hydrolysis, but in fact the quantified SA and BDO concentrations correspond to ∼24 and 48% degradation of the initial PBS powder to soluble products by Thc_Cut1 and Thc_Cut1_ko_ST, respectively. This finding indicates a remarkable influence of the glycosylation on the substrate specificity. Glycosylation may not only lead to increased stability and protection against proteolysis, but may also play a role in the catalytic activity. For several proteases it has been reported that glycosylation can alter their substrate recognition, their specificity and binding affinity, as well as their turnover rates. Moreover, glycans in the vicinity of the active site are more likely to influence substrate binding (Goettig, 2016). Recently, we demonstrated that both surface engineering and the attachment of polymer-binding modules or hydrophobins can dramatically influence sorption and thereby the hydrolysis of polyesters (Herrero Acero et al., 2013; Ribitsch et al., 2013, 2015). To corroborate the faster Thc_Cut1-mediated hydrolysis of PBS compared to PHBV, we complemented the data above with QCM measurements. Previous studies showed that QCM can be used to study both the adsorption of enzymes to the polyester surface and the mass loss of polyester films due to enzymatic hydrolysis in real time (Perz et al., 2015; Zumstein et al., 2016). Here, we monitored the mass change of spin-coated PBS and PHBV films during their hydrolysis by Thc_Cut1 (Figure 8). These measurements showed that the mass of the spin-coated PBS films rapidly decreased after the addition of Thc_Cut1, and that the adlayer mass reached stable final values within 1.5 h of the onset of Thc_Cut1 addition (Figure 8A; results of duplicate experiments). When Thc_Cut1 was added to PHBV films, we measured an initial adlayer mass increase that we ascribed to the adsorption of Thc_Cut1 to the film surface. Detection of this mass increase implies that PHBV hydrolysis was slow (in contrast to PBS). Slow PHBV hydrolysis was substantiated by the finding of slow and continuous decreases in the PHBV film mass over the subsequent hours of continuous exposure to Thc_Cut1 (Figure 8B). The differences in the mass decreases determined in the QCM-D measurements were consistent with the changes in the dry masses of the sensors, which we determined by measuring the adlayer masses of the dried sensors before and after the spin-coating step and after the enzymatic hydrolysis experiment. These measurements revealed that 93 and 3% of the spin-coated PBS and PHBV masses, respectively, were removed during the hydrolysis experiments. We note that the Thc_Cut1 used for the QCM experiments was expressed in E. coli.
This Thc_Cut1 variant is expected to carry no glycosylation and therefore to show the same activity on polyesters as the Thc_Cut1_ko_ST variant expressed in P. pastoris, for which we showed the absence of glycosylation (Figure S2). In summary, the QCM-based analysis supported faster hydrolysis of PBS than of PHBV by Thc_Cut1. Among the tested polyesters, PBS was hydrolyzed most extensively and was therefore chosen for additional analyses. Hydrolysis of PBS films, followed by weight loss and SEM analyses, was performed. The concentrations of released products from the PBS films showed the same trend as for PBS powder: Thc_Cut1_ko_ST released more than double the amount of hydrolysis products compared to native Thc_Cut1 (∼48-50 mM SA and BDO for Thc_Cut1_ko_ST vs. ∼12-15 mM SA and BDO for Thc_Cut1) (Figure 9 and Figure S4). These results are in accordance with the weight loss, which within 96 h reached up to 92% with Thc_Cut1_ko_ST but only around 41% with Thc_Cut1 (see Figure 9). [Figure 9 caption: Weight loss of PBS films upon enzymatic hydrolysis by Thc_Cut1 (dark gray bars) and Thc_Cut1_ko_ST (middle gray bars). Time scans for 24, 48, 72, or 96 h were performed at 65 °C with 5 µM enzyme in 1 M KPi pH 8.0 with 0.5 × 1.0 cm PBS films at 100 rpm.] Hu et al. recently reported the complete degradation of PBS films by a recombinant cutinase from Fusarium solani within 6 h (Hu et al., 2016). To complement the PBS hydrolysis data, an additional SEM characterization of the film surfaces was performed. Figure 10 shows clear changes in the morphology of the PBS film surfaces caused by treatment with both native Thc_Cut1 and Thc_Cut1_ko_ST (Figures 10C-F), while no detectable changes occurred in the control samples (Figures 10A,B). Moreover, 24 h of enzymatic hydrolysis of the PBS film surface resulted in more surface erosion with the ko mutant than with native Thc_Cut1 (Figure 10C vs. Figure 10E). After 96 h, the formation of "holes" throughout the polymeric sample is visible for the ko mutant (Figure 10F), while only an increased surface roughness was observed for the Thc_Cut1 treatment (Figure 10D). [Figure 10 caption: SEM surface characterization of PBS films. (A,B) control reactions (polymer film + buffer only); (C,D) Thc_Cut1-catalyzed hydrolysis (polymer film + Thc_Cut1 in buffer); (E,F) Thc_Cut1_ko_ST-catalyzed hydrolysis (polymer film + Thc_Cut1_ko_ST in buffer). All samples were imaged after 24 (left column) and 96 h (right column) of treatment. The progress of the hydrolysis reaction from 24 to 96 h is clearly visible (C-F), as is the greater damage to the film by Thc_Cut1_ko_ST, which after 96 h yields holes that go throughout the film (F).]

CONCLUSIONS
In this study, we demonstrated the general feasibility of expressing Thc_Cut1 and two glycosylation site knockout mutants, Thc_Cut1_ko_Asn and Thc_Cut1_ko_ST, in P. pastoris. Furthermore, we have shown that Thc_Cut1 and the Thc_Cut1_ko mutants hydrolyze aromatic (PET) and aliphatic (PHBV and PBS) polyester powders, although at very different rates, as shown by HPLC quantification of the released products. These findings were also confirmed by QCM measurements, which showed a 3.0% mass change for PHBV thin films and a 93.2% mass decrease for PBS thin films upon enzymatic hydrolysis with Thc_Cut1. The finding that treatment of PBS films with Thc_Cut1 and Thc_Cut1_ko_ST resulted in large PBS weight losses and clear effects on the film surface topography imaged by SEM confirms the potential of Thc_Cut1 and its mutants for the degradation of PBS films. Together with the high activity of
Thc_Cut1 and the Thc_Cut1_ko mutants on PET, this study provides a significant contribution toward the enzymatic degradation of polyesters.

AUTHOR CONTRIBUTIONS
CG, SG, and SZ expressed and purified the enzymes. DR designed the mutants. CG performed the PET hydrolysis experiments. MZ and MS conducted the QCM hydrolysis experiments and wrote the related sections of the manuscript. MV and AP performed the hydrolysis of the aliphatic polyester powders and films and acquired the related SEM images. CG, AP, EH, and GG wrote the manuscript.
Coping strategies of HIV-affected households in Ghana
Background: HIV and negative coping mechanisms have a cyclical relationship. HIV infections may lead to the adoption of coping strategies, which may have undesired, negative consequences. We present data on the various coping mechanisms that HIV-affected households in Ghana resort to.
Methods: We collected data on coping strategies, livelihood activities, food consumption, and asset wealth from a nationally representative sample of 1,745 Ghanaian HIV-affected households. We computed the coping strategies index (CSI), effective dependency rate, and asset wealth using previously validated methodologies.
Results: Various dehumanizing coping strategies instituted by the HIV-affected households included skipping an entire day's meals (13%), reducing portion sizes (61.3%), harvesting immature crops (7.6%), and begging (5.6%). Two-thirds of the households were asset poor. Asset-poor households had a higher CSI than asset-rich households (p < 0.001). CSIs were also higher among female-headed households and lower where the education level of the household head was higher. Households caring for chronically ill members recorded higher CSIs than their counterparts without chronically ill members (p < 0.05).
Conclusions: The institution of degrading measures by HIV-affected households in reaction to the threat of food insecurity was prevalent. The three most important coping strategies used by households were limiting portion size (61.3%), reducing the number of meals per day (59.5%) and relying on less expensive foods (56.2%). The least employed strategies included a household member going begging (5.6%), eating elsewhere (8.7%) and harvesting immature crops (7.6%). Given that household assets and caring for the chronically ill were associated with a high CSI, a policy focusing on helping HIV-affected households gradually build up their asset base, or targeting households caring for chronically ill member(s) with conditional household-level support, may be reasonable.

Background
Human immunodeficiency virus (HIV) is without a doubt a grave public health and development problem in sub-Saharan Africa. The Joint United Nations Programme on HIV/AIDS (UNAIDS) in 2012 estimated that more than two-thirds of the over 35 million people living with HIV worldwide live in sub-Saharan Africa [1]. This region hosts only 12% of the world's population [2]. The fight against HIV/AIDS is being pursued through interventions to stop the spread of the virus and to prolong the lives of those infected through the use of antiretroviral therapy (ART). Significant successes have been made in this drive. The most recent estimates suggest that the total number of new HIV infections in sub-Saharan Africa has dropped significantly [1,2]. The HIV epidemic has been stable in Ghana over the past half-decade. Indeed, linear trend analysis of prevalence data from 2000 to 2013 shows that the HIV situation in Ghana has declined [3]. The most recent prevalence data show an estimated national adult HIV prevalence of 1.3% [4]. There has been increased access to ART globally [1] and locally [5]. Access to treatment commodities has been shown to lead to a reduction in the number of AIDS-related deaths [2,6,7]. In spite of these efforts, one major challenge many HIV-affected individuals and households in sub-Saharan Africa grapple with is food insecurity. Studies have shown that HIV exacerbates the vulnerability of affected families to food insecurity, leading to hunger and malnutrition [8,9].
For instance, a longitudinal study in Uganda among HIV-infected individuals showed that severe food insecurity was associated with a worsened quality of life [10]. Indeed, scholars have previously elucidated the relationship between HIV and food insecurity. The relationship is complex and intertwined in a vicious cycle, with each worsening vulnerability to, and thus exacerbating the severity of, the other [10,11]. Food insecurity heightens susceptibility to HIV exposure and infection; HIV, on the other hand, increases vulnerability to food insecurity. This relationship is often compounded by low income, resulting in profound consequences for health and nutritional status. Households that suffer from food insecurity due to poverty are malnourished prior to infection [11]. As a disease, HIV's impact on malnutrition, resulting from its effect on the infected individual's metabolism, ingestion and digestion of food, has long been clarified [12-15]. HIV also disrupts livelihoods, as infected persons often lose the ability to work and generate income [16]. In addition, the capacity of uninfected family members to contribute economically to the household income basket is seriously affected by the burden of care for the infected person(s). For instance, it is reported that caring for an individual with AIDS in sub-Saharan Africa can deplete as much as one-third of a family's monthly income [17]. This situation feeds into the vicious cycle of HIV and food insecurity described above. Sometimes described as a syndemic (see Endnote a), the relationship between HIV and food insecurity often causes individuals and households to adopt coping strategies to maintain the status quo. Studies have demonstrated that such strategies are often negative, undesired, unsustainable and often irreversible [18,19]. Strategies that have often been adopted include the sale of assets, taking children out of school, migrating and engaging in transactional sex [11,16,20]. Some authorities posit that these coping strategies may bring short-term relief but increase the risk of exposure to HIV: the destitution and despair brought on by negative coping behavior may increase the risk that a person will resort to trading unprotected sex for food [21,22]. This background shows that there is a growing body of literature on HIV, food insecurity and negative coping mechanisms in sub-Saharan Africa, with the majority of studies originating from Southern African countries [10,16,20,23-25]. Little is known about other sub-Saharan African countries, including Ghana. Most HIV-related studies in Ghana have largely focused on the epidemiological, behavioral, social and psychological aspects of the disease. There is a paucity of data on how HIV-infected individuals and affected households address their economic needs amid living with the disease. This paper presents data on the various coping strategies of a nationally representative sample of 1,745 Ghanaian HIV-affected households.

Design and methodology
The study was cross-sectional in design, involving a nationally representative sample of 1,745 HIV-affected households. The sampling procedure of this nationally representative cross-sectional survey is detailed elsewhere [26]. Respondents were sampled from all ten regions of Ghana. Regional sample sizes were adjusted according to the size of each region in terms of the number of people living with HIV (PLHIV) and the proportion of PLHIV on ART.
This paper is based on a total of 1,745 questionnaires representing 1,745 HIV-affected households, which were retained after data cleaning. After households were selected through a systematic random sampling procedure, PLHIV from the selected households were approached to schedule household interviews using the assessment questionnaire. Most of the interviews took place in the interviewee's home.

Assessment of coping strategies
To explore the concept of coping, a simple numeric score, referred to as the coping strategy index (CSI), was constructed using a series of questions about how households cope with a shortfall in food for consumption. The variables considered address the recurrent situation faced by the household and the coping strategies adopted to deal with food insecurity. The variables include limiting portion size at mealtimes, reducing the number of meals eaten per day, skipping an entire day's meals, borrowing food or relying on help from friends or relatives, relying on less expensive or less preferred foods, hunting/gathering unusual types or amounts of wild food, harvesting immature crops (e.g., green maize) and sending household members to eat elsewhere. Given a recall period of three months, respondents were asked to indicate how frequently their households resorted to one or more of the above-mentioned strategies in order to have access to food. Subjects' responses to questions on these variables allow an assessment of both the frequency and the severity of the actions. The questions on coping strategies fall into two types. Firstly, they address the recurrent situation faced by the household and the coping strategies adopted to deal with food insecurity. Also considered are changes in household strategies in response to recent difficulties, for example by asking whether the household has recently reduced the number of meals consumed per day or purchased lower-cost ingredients. It is worth noting that the questions we used in constructing the CSI were adopted from a standard, previously validated tool [26]. An illustration of how responses are transformed into the CSI is shown in Table 1.

Computation of household asset wealth
Asset wealth was assessed in the survey through questions on the types of assets owned by the household; these assets fall into two general categories, one describing the standard of living of the household (assets such as chairs and tables), and the other associated with income-earning possibilities (items such as popcorn machines, telephone booths or hairdryers, etc.). Households were split into three broad classes according to how many different types of asset they own: "asset poor", "asset medium" and "asset rich". Details on the construction of the asset wealth variable are given in a previously published UNAIDS technical report [27].

Computation of effective dependency rate
The effective dependency rate measures the share of total household members who are below or above working age, plus those of working age who are chronically ill. The study defined the chronically ill as persons who, by reason of their HIV status or any other health condition or disability, experience a diminished level of functioning relative to their primary level of daily living. To qualify, such a person must have been in that state for a minimum of six months. For every household, the numbers of members in these three categories were summed and expressed as a percentage of total household size. Details of the computations are available in [27].
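To make the CSI construction concrete, the sketch below computes the index as the sum over strategies of the relative frequency score multiplied by a severity weight (the ΣA×B illustrated in Table 1), together with a helper for the effective dependency rate. The severity weights and the example inputs are illustrative placeholders, not the study's validated values, which follow the tool cited in [26]; the dependency helper assumes the rate is the dependents' share of household size, as the definition above implies.

    # Illustrative CSI: sum of (relative frequency score x severity weight).
    SEVERITY = {"limit_portion_size": 1, "reduce_meals_per_day": 1,
                "rely_on_less_expensive_food": 1, "borrow_food": 2,
                "harvest_immature_crops": 3, "skip_entire_day": 4, "beg": 4}

    def csi(frequency_scores):
        """frequency_scores maps strategy -> relative frequency score."""
        return sum(freq * SEVERITY[s] for s, freq in frequency_scores.items())

    def effective_dependency_rate(size, under_age, over_age, chronically_ill):
        """Share of effective dependents in the household, in percent."""
        return 100.0 * (under_age + over_age + chronically_ill) / size

    # Placeholder household: CSI = 3*1 + 2*1 + 1*4 = 9; dependency = 3/5 = 60%
    print(csi({"limit_portion_size": 3, "reduce_meals_per_day": 2,
               "skip_entire_day": 1}))
    print(effective_dependency_rate(5, 2, 0, 1))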
Data management and analysis
All data were entered in EpiData (Version 3.1) and later exported to IBM SPSS Statistics for Windows (Version 20.0) for analysis. We used univariate analysis to generate descriptive tabulations for key variables. Statistics presented with accompanying 95% confidence intervals are derived from such descriptive analyses. Bivariate analysis produced unadjusted associations between the CSI and selected household demographic or health attributes. Two-sided tests of statistical significance were performed, and a p value < 0.05 was used to denote statistical significance. All analyses were performed using IBM SPSS Statistics for Windows, Version 20.0.

Ethical considerations
This study protocol was reviewed and approved by the Ghana Health Service Ethical Review Committee to ensure that the study adhered to both local and international standards for protecting the rights and safety of human subjects in research. Informed consent was obtained from all participants after the objectives and methodology of the study were explained to them. In addition, participants were assured of privacy and confidentiality.

[Table 1 caption: The table shows the relative frequency score, the severity weight, the individual score and the total household score of a randomly selected household. The total household score (ΣA×B, the sum of the products of each raw/relative frequency score and the severity weight for each strategy) is defined as the CSI for that particular household and is 17.0.]

Results
We introduce our findings with a presentation of the demographic profiles, sources of livelihood, household asset wealth, and effective dependency rates of the households.

Demographic profiles and household composition
The majority of the respondents were females (75%). The proportion of female-headed HIV-affected households was almost equal to that of male-headed households, although there were notable regional variations. The average size of households differed by household headship: those headed by men had on average three members, compared to two for female-headed households. Forty-one percent of respondents were married, 15% divorced, and 20% widowed. Nearly 72% had attained at least a primary school education. Nationally, 16.3% of the main respondents were chronically energy deficient (defined as BMI < 18.5 kg/m²). The rate was highest in the Central Region (27.5%) and lowest in the Upper East Region. Details of the regional distribution of this statistic are given in Table 2. We have also presented in Table 2 some of the above attributes by sex of household head.
Sources of livelihood, household asset wealth, and effective dependency rate
The main sources of household income were petty trading, cash crop production, skilled trades and casual labor. Other sources included remittances and income from wage employment. As described under Methods, asset wealth was assessed through questions on the types of assets owned by the household. About two-thirds of HIV-affected households were asset poor. With the exception of the Central region, with an asset-poor rate of 37%, all other regions recorded higher proportions of asset-poor households, ranging from 72% to 90% (Table 2). The national effective dependency rate of 48.5% masks considerable variation across the regions (Table 2). The Northern region has the highest proportion of effective dependents per household (75%), close to 30 percentage points higher than the national average. A distant second is the Eastern region (65%). The Central, Upper East, Upper West, and Western regions each recorded a rate close to the national average. The national capital (Greater Accra region) recorded the lowest proportion of effective dependents per household, at 40% (regional data not shown).

Coping strategies instituted by HIV-affected households
Our data show that households affected by food insecurity employ different short-term behavioral responses (food consumption coping strategies) to manage food shortages in the household. The frequency with which households adopted the various coping strategies ranged from 5.6% to 61.3% (Figure 1). The three most important coping strategies used by households to cushion food insufficiency were limiting portion size (61.3%), reducing the number of meals per day (59.5%) and relying on less expensive foods (56.2%). Conversely, the least employed strategies included a household member going begging (5.6%), eating elsewhere (8.7%) and harvesting immature crops (7.6%). It is worth noting that most households do not use a single strategy but a combination of strategies.

Coping strategy index and its relationship with selected attributes of HIV-affected households
We explored the concept of coping further using the coping strategy index (CSI). The CSI is a proxy measure of relative food insecurity in a household, with lower scores reflecting a better household food security situation. The national average CSI was 21, about one-third of that of the Upper East region (Figure 2). The CSI for HIV-affected households was lowest in the Brong Ahafo region (Figure 2). The CSI was further examined against selected socio-demographic explanatory variables such as household asset wealth, headship of household, and educational level of the household head. At both the national and regional levels, logical relationships were observed. We observed an inverse relationship between asset ownership and CSI: asset-rich households had the lowest CSI, and the opposite was true for asset-poor households (Figure 2). There were clear patterns between CSI scores and certain household characteristics. One such characteristic is headship of household (female versus male): CSI scores were significantly higher among female-headed households (p < 0.05). The level of education of household heads was also associated with the CSI, which was generally lower where the education level of the household head was higher (Table 3).

Coping strategies and household-level burden of care
Besides socio-demographic factors, other variables that may exacerbate a household's food insecurity situation are the presence of AIDS orphans, the effective dependency rate, the recent occurrence of a death in the household, and caring for a chronically ill household member. We show in Table 3 the associations between the CSI and these explanatory variables. At the national level, households that care for orphans exhibited a higher mean CSI score (32) compared with those with no orphans (21). Similar to households with orphans, households with chronically ill individuals demonstrated less resilience to food insecurity.
Households caring for chronically sick members had a higher CSI (mean 32.7) in comparison to their counterparts without chronically ill members (mean 20.6) (Table 3).

Discussion
The study assessed, among other things, the negative coping mechanisms that Ghanaian HIV-affected households adopt in sustaining their livelihoods. Our respondents were randomly selected from both rural and urban ART centers across the nation. We have presented the socio-demographic and health attributes, household asset wealth, and coping strategies of 1,745 HIV-affected households in Ghana. Our analyses are descriptive in nature; we therefore urge caution when generalizing beyond these descriptive perspectives. A greater proportion of the respondents were educated. The higher attainment of formal education in HIV-positive households is likely to affect income-earning opportunities and to influence household food security and coping options [28]. Higher education in rural communities enhances people's ability to participate in off-farm income activities, which is likely to increase household income and subsequently enhance access to food [29]. Polygynous unions are common in many parts of Africa [21]. The larger households (>5 members) reported in the study may actually be clusters of smaller nuclear families who then share the same, lower risk factors as households with fewer than five members. This study is additionally important for its focus on the food consumption coping strategies adopted by HIV-affected households. Households commonly use a combination of these coping strategies to mitigate food insufficiency in their homes. More than half of these families reported limiting portion size, reducing the number of meals eaten daily and relying on less expensive foods. Begging and harvesting immature crops were the least employed coping strategies. [Figure 1 caption: Coping strategies instituted by households affected by HIV. Legend: Figure 1 explores the concept of coping. The prevalence of various short-term behavioral responses (food consumption coping strategies) to the threat of, or actual, food insecurity is presented. The strategies range from limiting portion size at mealtimes (61.3%) to sending household members to go begging (5.6%).] As expected, asset-rich households had the lowest CSI, and asset-poor households reported the highest CSI. Asset-poor households are more likely to engage in negative coping strategies than asset-rich households because they have less household income to support food expenditure and fewer physical assets of worth that can be sold in times of crisis (Figure 2). Generally, CSI scores were higher among female-headed households compared to their male counterparts. As in other countries in sub-Saharan Africa, women in Ghana mostly carry the responsibility of caring for the sick; they are thus unable to engage in economic activities outside the home to earn a steady income. Ghanaian women tend to have lower educational attainment levels compared with men due to discriminatory access to formal education as children [30]. Lower attainment of formal education in households is likely to affect income-earning opportunities [28] and to push households to adopt negative strategies in an attempt to moderate food insufficiency problems. Higher education improves women's ability to participate in higher income-generating activities, which is likely to increase household income and subsequently enhance access to food [29].
The reported relationship between the CSI and the education level of the household head was mixed. Generally, higher education of the household head was linked with a lower CSI. However, in four regions, household heads with basic education had a higher CSI than their uneducated peers. Not assessed in the current study, but nevertheless significant in this discussion, is the role of culture in coping mechanisms [8,9,31]. Ghana is politically partitioned into ten regions and 216 districts. In each region, and in most districts, there are various cultural/socio-cultural practices. The influence of these culturally determined practices on coping behaviors could be independent of one's scholastic attainment. There is no argument that culture explains quite a lot of human behavioral tendencies and patterns. Sometimes framed as individualism/collectivism, these cultural characteristics are related to different coping strategies [31]. See and Essau, for example, found that cultural values predicted coping, partly mediated by the valuation of tradition and cultural norms. Further research, preferably employing both quantitative and qualitative techniques, could provide a rewarding clarification of these relationships. Based on the national average, HIV-affected households with AIDS orphans or chronically ill persons reported higher CSIs, demonstrating the use of more negative food consumption coping strategies to buttress food security in these families (Table 3). [Table 3 notes: SSD, statistically significant difference (p < 0.05); NSD, no statistically significant difference (p > 0.05). Descriptive statistics and 95% confidence intervals are derived from descriptive analysis using the "Explore" tool in SPSS.] Poor households with prime-adult chronic illness are prone to food insecurity [32]. Households experiencing chronic illness of prime-age adults suffer from loss of income and household labor shortages, which adversely affect food security through declining agricultural productivity [33] and diminished household purchasing power [34]. Thus, asset-rich households, regardless of a high burden of care, may be resilient to food insecurity if the chronically ill persons are not prime-age adults, who are typically the main income earners in the household.

Survey limitations
As is usual in assessments of food security and vulnerability, the collection of market data is critical, especially in settings characterized by instability and high food and fuel prices. In the recent past, Ghana, like many other countries in the sub-region, fit this characterization. This phenomenon can negatively affect household food security. The inability of the current survey to capture market data, and subsequently to provide the necessary adjustments during the analysis, is a limitation. Seasonality of food insecurity is a major problem in most parts of the country. Commonly referred to as the "lean season" and the "harvesting season", these periods respectively denote the deterioration and amelioration of a household's vulnerability to food insecurity. Given that the data collection exercise was carried out during the harvesting season, the level of food insecurity could have been underestimated. In other words, households identified in this survey as food secure during this period of the year could easily slip into food insecurity during the lean season. As a consequence, the results of this assessment should be interpreted cautiously.
Notwithstanding these limitations, it is unlikely that they significantly alter the main conclusions and recommendations.

Conclusions
In summary, this paper suggests that HIV-affected households in Ghana employ negative food consumption coping strategies to cushion food insufficiency and other pressures in their homes. While these strategies may provide short-term relief, they are erosive, unsustainable, and undermine resilience in the long run. Reducing food intake, buying low-quality cheap foods, gathering unusual kinds of wild foods, relying on casual labor, or going begging to battle food insufficiency have dire implications for household members' health, children's school attendance and performance, and adults' income-earning capacity in the long run. There is an urgent need for policies that focus on building the capacity and stability of these households. Well-informed interventions appropriate for the local setting should aim to support HIV-affected households with long-term coping strategies that improve resilience to food insecurity.

Endnote
(a) Defined as two or more epidemics interacting synergistically to contribute to an excess burden of disease.
Influence of Material Properties on Tire/Road Noise for Non-destructive Pavement Condition Assessment

With the purpose of using tire/road noise for pavement condition assessment as a non-destructive test method, an investigation of the influence of pavement material characteristics on tire/road noise is conducted. The acoustic data are collected by a directional microphone mounted behind the rear tire on the passenger's side of a test vehicle, driving over pavement sections with different material properties. The tire/road noise below 2000 Hz is extracted to investigate how it is influenced by pavement features such as macrotexture, porosity, top-layer thickness, and top-layer stiffness. The macrotexture, which mainly reflects pavement friction, exhibits high relevance to the tire/road noise below 1000 Hz. The top-layer stiffness of the pavement can be distinguished from the sound pressure level at the peak frequencies between 700 ~ 1300 Hz, with larger stiffness producing a higher sound pressure level (SPL) amplitude at the peak frequency. The results show high potential for assessing pavement friction through tire/road noise, starting from macrotexture measurement. Moreover, the consistency of paving quality or a sudden change in the pavement subsurface structure can be recognized by examining the spectrum features in the peak frequency area of the tire/road noise.

Introduction. Pavement health monitoring is very important. The current road situation is in need of improvement according to the ASCE report card [1]. Different road surface problems cause different safety concerns. Too much bleeding on the road surface will reduce the friction at the tire/pavement interface, causing vehicles to skid. Segregation is "a lack of homogeneity in the hot mix asphalt constituents of the in-place mat of such a magnitude that there is a reasonable expectation of accelerated pavement distresses" [2]. Severe segregation will lead to pavement raveling, or even potholes. Besides, invisible subsurface delamination or other kinds of deterioration may cause the sudden collapse of the top layer of the road. This is a potential danger for drivers on city roads, and especially on interstate highways. Accordingly, both surface distresses and subsurface delamination need to be effectively monitored, and transportation authorities must be alerted when the condition reaches critical levels. The vision is to use vehicles of opportunity, which regularly travel on city roads and interstate highways, to collect and integrate sensor measurements and to perform onboard judgments about the surface distresses and subsurface integrity of roadways and bridge decks. To avoid variation due to the subjectivity of human ears, an acoustic sensor, namely a microphone, is used to "hear" and sense the road condition through the vehicle noise generated by the tires. The microphone approach allows testing at normal traffic speed and makes continuous monitoring of road condition possible. Generally, drivers and passengers can distinguish a difference in road condition as the vehicle travels from one type of surface to another [3; 4; 5]. Also, from daily experience, people can "hear" the road condition rather than "see" it: when a vehicle drives over a smooth road, the tire/road noise is very quiet, while over a damaged or rough road, the noise is loud.
Besides, Sandberg and Ejsmont concluded, from a noise source distribution study of a 74 dB vehicle in a drive-by test, that the noise generated by the tires is predominant among all noise sources of a moving vehicle, compared with other sources such as the exhaust system, intake system, engine, and remaining unidentifiable noise [3]. A pilot study in the Netherlands indicated that 90% of the equivalent sound energy in urban traffic is generated by tire/road noise [3]. The most important noise source on a vehicle during a drive-by test according to ISO 362 [6] is the tires. Hence, tire/road noise is selected for pavement feature analysis. In order to make use of tire/road noise for road feature detection, the first issue is to understand the frequency content of tire/road noise that is related to road features, i.e., the influence of road features on tire/road noise. In this study, the influence of road features on tire/road noise is investigated and validated by a field test. The challenge of this study is that no current technology uses acoustic measurements collected by a microphone to directly assess pavement conditions. Some related literature discusses the use of tire/road noise for quiet pavement applications [3], and also mentions that there exists some relationship between tire/road noise and pavement macrotexture, one road feature [3; 4]. The primary challenge is to determine the frequency band of tire/road interaction that is related to road conditions. Hence, before applying tire/road noise to pavement condition assessment, the purpose of this study can be stated as understanding the fundamental frequency content of vehicle noise related to road features. The significance of this study is summarized as follows: • Driver safety is very important. Real-time monitoring of the road characteristics most closely related to safety is necessary to inform decisions regarding maintenance and repair priorities. • Real-time pavement condition monitoring through an under-mounted microphone is cost effective and user friendly. It eliminates the need to search a large area or dig for potential subsurface delamination. • Measurements can be conducted at traffic speed without traffic interruption. This not only eliminates the danger and expense of work zones, but also improves safety for inspection personnel who would otherwise be exposed to traffic hazards.

Theoretical Background. Pavement features include surface and subsurface parameters. Pavement surface features comprise texture (microtexture, macrotexture, megatexture, and unevenness), tining, and friction. Pavement subsurface features include the stiffness, porosity, and thickness of the top layer [7].

Texture. Pavement texture is "the deviation of a pavement surface from a true planar surface" [8]. The type of pavement texture is distinguished according to ranges of texture wavelength [8]. Four types of pavement texture are classified as follows: microtexture, macrotexture, megatexture, and unevenness [3]. Microtexture, on the order of single grains of sand, with wavelengths less than 0.5 mm, can influence the friction and adhesion between the tire and the road surface [7]. Note the equation f = v/λ [3], where v is the vehicle driving speed (m/s), λ is the texture wavelength (m), and f is the corresponding frequency (Hz). Assuming that the driving speed is 8.9 m/s and the wavelength is <0.5 mm, the frequency is >17.8 kHz. Hence, the influence of microtexture on tire/road noise lies in the high-frequency content, above roughly 17.8 kHz at this speed [9].
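To make the wavelength-to-frequency mapping concrete, the sketch below (illustrative only) evaluates f = v/λ for the texture classes; the 8.9 m/s speed is the one assumed in the text, while the 1-micron lower bound for microtexture is a nominal placeholder, not a value from the cited references:

```python
# Sketch: map the texture wavelength classes to the tire/road noise
# frequency bands they excite at a given speed, using f = v / lambda.

def frequency_band(speed_mps, wl_min_m, wl_max_m):
    # Long wavelengths excite low frequencies and short ones high
    # frequencies, so f_min comes from wl_max and f_max from wl_min.
    return speed_mps / wl_max_m, speed_mps / wl_min_m

v = 8.9  # m/s, driving speed assumed in the text
texture_classes = {
    "microtexture (<0.5 mm)":   (1e-6, 0.5e-3),   # lower bound is nominal
    "macrotexture (0.5-50 mm)": (0.5e-3, 50e-3),
    "megatexture (50-500 mm)":  (50e-3, 500e-3),
}
for name, (wl_min, wl_max) in texture_classes.items():
    f_lo, f_hi = frequency_band(v, wl_min, wl_max)
    print(f"{name}: {f_lo:,.0f} Hz to {f_hi:,.0f} Hz")
# macrotexture prints 178 Hz to 17,800 Hz, matching the 0.178-17.8 kHz
# range quoted in the text.
```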
Moreover, microtexture is highly related to pavement friction, especially at low speeds of less than 64 kph [10]; a larger microtexture depth increases friction. Macrotexture, on the order of the tire tread patch wavelength, with wavelengths of 0.5 mm to 50 mm, can also influence the friction between the tire and the road surface [7]. Referring to the same equation, the frequency range corresponding to macrotexture wavelengths at a driving speed of 8.9 m/s is 0.178 kHz to 17.8 kHz. Sas and Sandberg indicate that the effective frequency range in which macrotexture has a very strong influence on tire/road noise is below 1 kHz [9; 11]. Megatexture, on the order of the tire-pavement contact patch, with wavelengths of 50 mm to 500 mm, exerts considerable influence on the noise generated, especially in the high-frequency area above 1 kHz [12]. It can be a defect in the pavement surface, resulting from the wear and fatigue of the surface material. Moreover, wavelengths of 0.5 m to 50 m are termed unevenness, or roughness, which is on the order of a stretch of pavement. Its influence on tire/road noise is not clear, and it is not in the scope of this paper.

Tining. Tining is a finishing technique for concrete pavement, achieved by dragging metal prongs over semi-hardened concrete pavement to create grooves longitudinally or transversely. The purpose of tining is to reduce hydroplaning in wet weather. However, the sound level increases significantly when the tining is transverse, while longitudinal tining has only a weak effect on sound levels.

Friction. Friction between tire and pavement causes stick-slip noise. The mechanism of stick-slip noise is similar to the noise produced when running one's palm over a smooth surface [7]. Macrotexture tends to increase the stick-slip noise, and so does the friction.

Stiffness. The stiffness here refers to the effective hardness of the surface. Variations in stiffness are commonly attributed to variations in binder materials; for example, asphalt binder is relatively flexible compared to cement binder. To study the effect of stiffness on tire/road noise, Nilsson and Zetterling conducted an experiment to test whether the top-layer stiffness influences tire/road noise [13]. A set of tire/road noise measurements was collected on a thin grinding paper glued either directly onto a smooth cement concrete base or onto a rubber sheet placed on the concrete surface. The difference obtained was a 5 dB noise decrease at the peak frequencies of 700 ~ 1300 Hz. Accordingly, the influence of pavement stiffness may be dramatic if a really soft surface like rubber is used; the observations suggest a quite high stiffness effect when very soft surfaces are compared to conventional hard surfaces.

Porosity and thickness of top layer. Porosity is a measure of the void spaces in the pavement top-layer material; porosity here refers specifically to porous pavement. Porous pavement is a permeable pavement surface with a stone reservoir underneath. The reservoir temporarily stores surface runoff before infiltrating it into the subsoil. Runoff is thereby infiltrated directly into the soil and receives some water quality treatment. Porous pavement often appears the same as traditional asphalt or concrete but is manufactured without "fine" materials and instead incorporates void spaces that allow for infiltration.
Apparently, porous pavements are more acoustically absorptive than non-porous pavements, so porous pavements tend to be quieter. The porosity parameters comprise the percent voids, the size of the voids, the layer thickness, and the shape factor, among which the layer thickness affects the peak frequency of tire/road noise. According to the research by Sandberg and Ejsmont [3], the porosity and thickness of the top layer have a strong influence on tire/road noise above 1 kHz.

Figure 1. The Effect of Tire/Road Parameters on Tire/Road Noise. (Note: (1) the color indicates the degree of influence: black, very high; dark grey, high; light grey, low to moderate; (2) "+" represents an increasing effect on tire/road noise, "-" a decreasing effect.)

In summary, Figure 1 presents the potential influence of tire/road parameters on tire/road noise [3; 7; 9]. Three observations should be noted from Fig. 1: (1) pavement macrotexture is highly relevant to tire/road noise, compared with megatexture and microtexture; (2) the tire tread pattern pitch also has a very high level of influence on tire/road noise; (3) the relevant frequency range for the influence of pavement macrotexture on tire/road noise is below 1 kHz. These observations bring pavement macrotexture into the tire/road noise study. Since macrotexture strongly influences tire/road noise, the use of vehicle noise for pavement condition assessment starts from pavement macrotexture measurement. Another interesting phenomenon in Fig. 1 is that the influences of the tire/road parameters on tire/road noise intersect at 1 kHz. The author also found from the collected data that there is a peak around 1 kHz in the tire/road noise spectra. Sandberg named this peak around 1 kHz the "multicoincidence peak" [11] and pointed out that it is a multi-functional region of frequency. It is related to both the pavement macrotexture and the vehicle tire tread pattern, both of which have many sound generation mechanisms coinciding over the frequency range from 700 to 1300 Hz [11]; this range needs to be avoided during macrotexture estimation. Meanwhile, the frequencies from 700 to 1300 Hz are termed the "peak frequencies".

Test Verification. A field test conducted in September 2019 at the Jinfeng Vehicle Research and Test Base in Chongqing verified the discussions above, especially regarding the factors of friction, porosity, thickness of top layer, and stiffness. The test track consists of 16 pavement sections, each 200 meters in length. A test vehicle drove over the track at 3 different speeds, 32 kph, 56 kph, and 80 kph, with 3 rounds for each speed. The following discussion demonstrates, from the experimental data, how pavement macrotexture influences tire/road noise and how the top-layer thickness affects the peak frequency of tire/road noise.

Test Configuration. The microphone configuration mounted underneath the test vehicle is shown in Fig. 2. The sensitivity of the directional microphone is 44 ~ 52 mV/Pa, and the sampling frequency of this test is 50 kHz. Since the type of tire influences the frequency spectra of the acoustic signal, especially in the typical "tire band" range of 500 ~ 1300 Hz [14], the same tire is used throughout the experiment to eliminate the tire effect.
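As an illustration of how such a recorded signal can be reduced to a spectrum for the analyses that follow, here is a minimal sketch. It assumes a microphone signal already calibrated to pascals and sampled at the 50 kHz rate of this test; the synthetic noise and segment length are placeholders, not values from the experiment:

```python
# Sketch: reduce a microphone pressure signal to a band SPL spectrum
# (dB re 20 uPa) via a Welch power spectral density estimate.
import numpy as np
from scipy.signal import welch

FS = 50_000      # Hz, sampling frequency used in the test
P_REF = 20e-6    # Pa, SPL reference pressure

def spl_spectrum(pressure_pa, fs=FS, nperseg=4096):
    # Welch PSD in Pa^2/Hz, converted to per-bin band power, then to SPL.
    f, psd = welch(pressure_pa, fs=fs, nperseg=nperseg)
    band_power = psd * (f[1] - f[0])          # Pa^2 per frequency bin
    return f, 10.0 * np.log10(band_power / P_REF ** 2)

rng = np.random.default_rng(0)
signal = 0.1 * rng.standard_normal(2 * FS)    # 2 s of stand-in "pressure"
f, spl = spl_spectrum(signal)
below_2k = f < 2000                           # the band analyzed in this paper
print(f"mean SPL below 2 kHz: {spl[below_2k].mean():.1f} dB")
```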
Based on the experience of a previous study [15], the acoustic data collected by the directional microphone behind the rear tire on the passenger's side are chosen for the macrotexture investigation, since the spectra of this acoustic signal show good correlation with pavement macrotexture. The microphone is directed at the interface of the rear tire and the ground, which is expected to be the source of most tire/pavement sound, and is shielded from wind and engine noise, 5.1 cm away from the ground surface, as shown in Fig. 2.

Pavement macrotexture and tire/road noise. In the Jinfeng test, different frictions are represented by different macrotexture depths (MTD). The MTD value is known for all pavement sections of the test track. A Fourier transform was applied to the acoustic measurements collected with the microphone mounted underneath the vehicle behind the rear tire. Four pavement sections with different MTDs are presented in Figure 3. From Pavement A to Pavement D, the MTDs are 0.5 mm, 0.8 mm, 1.2 mm, and 1.5 mm, in order; the texture of the pavement gets rougher as the MTD increases. Referring to Fig. 4, at the same speed (32 kph), as the MTD of the pavement increases from 0.5 mm to 1.5 mm, the sound pressure level (SPL) of the tire/road noise goes up below 650 Hz, which is close to the peak frequencies (700 ~ 1300 Hz). The same trend was obtained in Sandberg's research [3], which indicates that the acoustic energy increases as the coefficient of friction increases below the peak frequency at around 1 kHz. This strong correlation between MTD and tire/road noise below 1 kHz verifies the conclusion in Fig. 1 regarding macrotexture. The pores of an asphalt mixture comprise connected pores, semi-closed pores, and fully closed pores. Tire/road noise is influenced by the connected and semi-closed pores, which together are named the effective porosity. Three OGFC pavements with effective porosities of 20.5%, 17.93%, and 15.24% were tested under the same experimental conditions. The average sound pressure levels below 1 kHz for these three pavements are 75.6 dB, 77 dB, and 78.5 dB, respectively. The relationship between porosity and sound pressure level is shown in Fig. 5: the sound pressure level decreases as the porosity of the tested OGFC pavements increases. Pavements A and B shown in Fig. 6 have the same subsurface profile and macrotexture value except for the thickness of the top layer, 10 cm for Pavement A and 5 cm for Pavement B. However, the frequency spectra of the two pavements are very similar to each other, and no significant difference is detected. Sandberg and Ejsmont [3] investigated the influence of the top-layer thickness of porous pavement on tire/road noise and concluded that the peak frequency shifts with thickness. Nevertheless, for non-porous pavement, it seems that the thickness of the top layer has little influence on the frequency spectrum. As shown in Fig. 7, the only difference between the two pavements there is the stiffness of the top layer: Pavement B is harder than Pavement A. Consistent with the conclusion from the experiment by Nilsson and Zetterling [13], the SPL at the peak frequency of Pavement B is higher than that of Pavement A, which indicates that the stiffness of stone matrix asphalt (SMA) is larger than that of epoxy; this is reflected in Fig. 7.
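A quick way to quantify the porosity trend reported earlier in this section is to fit a line to the three (effective porosity, average SPL) points given above; a minimal sketch:

```python
# Sketch: linear fit of the reported effective porosity vs. average SPL
# (<1 kHz) for the three OGFC sections, quantifying the trend that SPL
# falls as porosity rises.
import numpy as np

porosity_pct = np.array([20.5, 17.93, 15.24])
avg_spl_db = np.array([75.6, 77.0, 78.5])

slope, intercept = np.polyfit(porosity_pct, avg_spl_db, 1)
print(f"fit: SPL = {slope:.2f} * porosity + {intercept:.1f} dB")
# Roughly -0.55 dB per percentage point of effective porosity for
# these three sections.
```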
Moreover, the porosity study supports the influence of top-layer stiffness on the tire/road spectra from another perspective: a higher average sound pressure level comes with lower porosity, and lower porosity means a harder surface layer.

Figure 7. Influence of Top Layer Stiffness on Tire/Road Noise.

The study of the influence of pavement subsurface features on tire/road noise indicates some potential for non-destructive pavement subsurface monitoring with a moving vehicle in real time. Generally, the stiffness of the top-layer pavement within 10 cm of the surface can be sensed by the microphone; a harder surface layer produces a higher sound pressure level amplitude at the peak frequencies of 700 ~ 1300 Hz.

Conclusion. This paper conducts an overall study to investigate the influence of material characteristics on tire/road noise for non-destructive pavement assessment. Different pavement characteristics influence the spectra of tire/road noise in different frequency ranges. From the field test results, the following conclusions can be drawn: 1. Pavement macrotexture is highly relevant to tire/road noise compared with megatexture and microtexture, which is verified by the field test. 2. The relevant frequency range for the influence of pavement macrotexture on tire/road noise lies below 700 Hz, so as to avoid the multi-functional frequency region from 700 to 1300 Hz. 3. The top-layer stiffness of the pavement can be distinguished from the sound pressure level at the peak frequencies between 700 ~ 1300 Hz; a harder surface layer produces a higher sound pressure level amplitude at the peak frequency. In the future, tire/road noise will be further studied to evaluate pavement condition in a cost-effective way. Since pavement surface friction is represented by macrotexture, it is recommended to assess pavement surface friction through tire/road noise starting from macrotexture measurement. Meanwhile, a non-destructive test method will be developed using tire/road noise to evaluate pavement subsurface consistency as an indicator of paving quality and, further, as a warning of invisible potential damage.
Analyzing Parametric Sensitivity on the Cyclic Behavior of Steel Shear Walls

As a destructive phenomenon in most parts of the world, earthquake threatens the safety of structures and the lives of their inhabitants and is the main problem in the seismic vulnerability of buildings. Steel shear walls are regarded as one of the newest lateral-load-resisting structural systems in steel structures. The present study aimed to investigate the impact of effective parameters on cyclic behavior by numerically modeling a steel shear wall and comparing it with laboratory results. The results indicated the significant contribution of the thickness of the steel shear sheet: when the thickness changes by 25%, the final response of the structure increases by approximately 20% or decreases by 15%.

Introduction. As a destructive phenomenon in most parts of the world, earthquake has threatened the safety of structures and the lives of their inhabitants; reducing the irreparable damage of earthquakes has always been the ultimate goal of researchers and scientists in earthquake engineering. The probability of an earthquake based on the distribution of active faults in Iran, shown in Figure 1, indicates a permanent danger. Researchers have introduced various earthquake-resistant lateral-bearing systems over time. This process has continued from frames of building materials to the introduction of structural control systems. The steel shear wall system (Figure 2) is considered one of the new seismic systems; it has been approved by the Canadian steel code since 1994 and recognized by the US steel code since 2005. During recent years, as research on the seismic performance of steel shear walls and their reliability has increased, the use of these walls, especially in the United States and Japan, has increased significantly [1]. The design regulations have now begun to provide design criteria for such walls due to the relatively comprehensive knowledge of their behavior. The performance of this structural system is based on the diagonal tension field that develops after buckling of the steel sheets. Simple implementation based only on existing technical knowledge and without the need to acquire new skills, reduced foundation dimensions, significantly increased lateral stiffness of the structure, and reduced dead load, together with the economics of this system compared to the steel moment frame system, are considered the main advantages of this system [2]. At first glance, a steel shear wall is similar to a plate girder, in which the beams, the columns, and the steel sheet act as the stiffeners, flanges, and web of the girder, respectively.
At the beginning of the use of such shear walls, the steel sheet was used with stiffeners in an attempt to prevent the sheet from buckling. However, based on experiments, researchers today suggest the use of thin steel shear walls without stiffeners. During loading, the sheet buckles along the compression diagonal and yields along the tension diagonal; upon load reversal, the buckled diagonal straightens and becomes the tension diagonal [3]. In this interval, the resistance of the sample decreases and pinching occurs in the hysteresis curve; Figure 2 shows the pinching in the interval between points C and D. Despite the sheet buckling, the steel shear wall system is still stable, and it is not necessary to use stiffeners to create and maintain the stability of the system and prevent buckling [4]. Timler and Kulak [5] performed cyclic tests on steel shear walls without stiffeners. It was found that the buckling behavior of the sheet was well captured, and a ductility coefficient μ of 4 was obtained. They proposed a model of diagonal rod elements replacing the steel sheet, which predicted the experimental behavior under cyclic loading well. Driver et al. [6] performed a cyclic experiment on a four-story sample. In this experiment, the steel shear wall was without stiffeners, and a ductility coefficient μ of 6 was obtained. In addition, they presented an analytical model in which the steel sheet was modeled with shell elements and both geometric and material nonlinear behaviors were considered. The results of the computer analysis of their model did not indicate good accuracy. They finally concluded that the steel shear wall system has high ductility. Lubell et al. [7] performed cyclic experiments on a four-story sample and two one-story samples. Their samples were stiffener-free. They obtained a ductility coefficient μ of 6 from the test results. Elgaaly and Liu [8] performed cyclic experiments on six three-story specimens with an opening, in which the steel sheet was stiffener-free. They found that the nonlinear behavior of the system initiated with yielding of the steel in the sheet, and the resistance of the system was controlled by the formation of a plastic hinge in the column. That is, they recommended that the wall sheet yield completely before the column buckles. Furthermore, Astaneh-Asl [9] performed cyclic experiments on steel shear walls; the steel sheets in these experiments were used without stiffeners. He concluded that the steel shear wall after rupture could withstand 60% of the force tolerable before rupture at the joints. This is especially useful in the event of a severe earthquake, because the steel shear wall system is still able to withstand lateral load after rupture. Rahaei and Hatami conducted research on composite steel shear walls with a gap under cyclic loads. They concluded from numerical and laboratory studies that increasing the distance between the bolts up to a certain extent increases the energy absorption capacity and reduces the out-of-plane displacement and the maximum normal stress in the shear connectors; however, distances longer than this have little effect. In addition, the behavior of the reinforced steel shear wall is independent of the stiffness of the middle beams and the type of beam-to-column connection; however, increasing the stiffness of the beam in shear walls without stiffeners produces a more uniform stress distribution in the steel sheet and, finally, a higher shear strength of the steel shear wall.
The composite action has a direct relationship with the thickness of the concrete cover and an inverse relationship with the spacing of the shear connectors [10]. Astaneh-Asl and Zhao conducted an experiment on a composite shear wall at the University of California, Berkeley, in which the behavior of a new type of wall under reciprocating loads was investigated and compared with the behavior of a traditional composite shear wall. The only difference between the new type of shear wall and the traditional model was the existence of a gap between the concrete wall and the perimeter frame in the new model. Both systems showed ductile behavior and high resistance during the experiment. Observing the results of this experiment, they found that although the gap reduces the overall strength and stiffness (due to the lack of concrete participation at low loads), this reduction is acceptable and is less important compared to the increase in ductility and the reduction in damage to the concrete afforded by such a gap [11]. Hatami and Sehri conducted a study on composite steel shear walls, investigating the effect of steel sheet thickness on the behavior of composite steel shear walls, and concluded that increasing the thickness of the steel sheet relative to the concrete layer up to an optimal thickness reduces the out-of-plane displacement of the steel sheet, beyond which it does not further affect the performance of the shear wall. In addition, using two layers of concrete cover on both sides of the steel sheet somewhat reduces the secondary flexural effects [12]. Furthermore, Hadipour and Razaghi studied and compared the bearing capacity and ductile behavior of composite steel shear walls through the finite element method. They found that changing the spacing of the shear connectors changes the ductility of the structure and the amount of energy absorption [13]. Several experiments were initiated to investigate the behavior of this lateral-bearing system. Takanashi and Takemoto and Mimura and Akiyama performed seismic (reciprocating) experiments on twelve one-story and two two-story samples [14,15]. Yamada performed experiments on two one-story samples under increasing load [2]. Caccese and Elgaaly performed experiments under seismic (reciprocating) loading on eight three-story and seven two-story samples in two stages [16].

Materials and Methods. Professor Paulay from New Zealand, one of the great seismic designers in the world, credited with inventing the capacity design method for structures, likened a structure to several interlocking chain links (Figure 3). He believed that it is necessary to deliberately make one of these links weaker so that it enters the nonlinear range during an earthquake and dissipates the earthquake energy. To do this, two things should be considered. (A) The detailing in the weaker area should be such that it does not suffer from instability and deterioration at large deformations. (B) The rest of the chain links should be designed with sufficient strength to remain in the elastic range when the ductile link reaches its resistance limit. Ductility is the ability of a material to deform greatly while resisting loads. The ductility of structural members means that they can withstand considerable inelastic or plastic deformation before collapsing. A brittle material or structure suddenly breaks and collapses under heavy loading. Figure 4 shows the force-deformation relationship for brittle and ductile materials.
The final deformation may be determined by local failure of the compression zone at a point in the member, by instability, or by any other set of conditions leading to the failure of the member or the related structure. The most common measure of ductility is the ductility ratio (μ), defined in its standard form as the ratio of ultimate deformation to yield deformation [18]: μ = Δu/Δy. In the case of beams and bending elements, the ductility ratio is defined based on the curvature: μφ = φu/φy. The ductility of a member or structure is sometimes measured by the energy absorbed by it, which is determined by the area under the force-deformation curve. In addition, since the plastic displacement capacity under reciprocating load differs from the capacity under monotonic load, for cyclic loading the ductility ratio is taken as the ratio of the total (two-direction) displacement amplitude to the corresponding yield value, as illustrated in Figure 5 [19].

Variable Parameters. The powerful Abaqus software, based on the finite element numerical method, has been used due to the complexity of solving the nonlinear equations resulting from material nonlinearity (the nonlinear behavioral model of steel) and geometric nonlinearity (buckling of the steel shear wall sheet).

Figure 3: The concept of seismic design [17].

Furthermore, an eccentric random buckling mode was used as a construction error (imperfection) to reduce the computation time. In addition, a sensitivity analysis of the basic mechanical and geometric parameters was performed to determine the impact of these parameters on the cyclic behavior of the steel shear wall. Table 1 shows the ranges of variation of the shear sheet, beam, and column parameters.

Material Specifications. Nonlinear behavior with kinematic hardening is selected for all samples. The plastic behavior of the model is based on the von Mises yield criterion and the ST37 stress-strain diagram. According to Figure 6, Poisson's ratio is taken as 0.3, with Young's modulus taken from the ST37 curve [20].

Geometrical Specifications. The examined shear wall was 240 * 280 cm. The beams and columns were modeled from box sections of 10 * 15 cm. The middle sheet of the wall was assigned a thickness of 1 mm, which is varied according to Table 2. As shown in Figure 7, the standard shell library element is used. Since the shape is geometrically irregular, applying appropriate partitions divides it into regular shapes to obtain a regular meshing pattern; therefore, a mesh with a four-node element shape, the structured technique, and the standard library element from the shell family, finally the S4R element, is used (Figure 8).

Loading. The seismic performance is related to the amount and distribution of stress, the ductility, and the way seismic energy is dissipated. In this regard, hysteresis diagrams, which show this very well, were used. To choose a comprehensive loading pattern, it seems more appropriate to consider the period; the loading is applied as boundary conditions at the end of the beam. The seismic loading protocol introduced by FEMA was adopted (Figure 9).

Validation. In 2017, HajiMirsadeghi et al. [22] at Khajeh Nasir al-Din Tusi University of Technology cyclically loaded a steel shear wall in the laboratory with the same dimensions as the numerical model (Figure 10). The results of the numerical modeling in cycles 1 to 17 corresponded to the laboratory results with only a slight difference; therefore, the accuracy of the results can be fully trusted (Figure 11).
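Before moving to the cyclic analysis, the ductility and energy measures defined above can be illustrated with a short sketch; the curve values are placeholders, not test data from this study:

```python
# Sketch: ductility ratio (mu = delta_u / delta_y) and absorbed energy
# (area under the force-displacement curve) for an illustrative
# monotonic curve.
import numpy as np

disp_mm = np.array([0.0, 2.0, 4.0, 6.0, 10.0, 20.0, 30.0, 40.0])
force_kn = np.array([0.0, 60.0, 120.0, 150.0, 160.0, 165.0, 160.0, 150.0])

delta_y = 5.0    # yield displacement (mm), assumed read off the curve
delta_u = 40.0   # ultimate displacement (mm)

mu = delta_u / delta_y
# Trapezoidal area under the curve; 1 kN*mm = 1 J.
energy_j = float(np.sum(0.5 * (force_kn[1:] + force_kn[:-1]) * np.diff(disp_mm)))
print(f"ductility ratio mu = {mu:.1f}, absorbed energy = {energy_j:.0f} J")
```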
The Cyclic Analysis of the Shear Wall. A proper understanding of hysteresis behavior is required to evaluate the seismic behavior of a steel shear wall. Therefore, the cyclic behavior of the frame was investigated using the standard FEMA loading of Figure 9, with the displacement applied at the frame tip as input and the base shear force as output. Moment-rotation diagrams have been used to enable the comparison of the diagrams with other frames. Figure 12 shows the plastic strain contour in 8 steps.

Investigating the Effect of Geometric and Mechanical Characteristics of Shear Wall Elements. According to Table 2, model No. 1 is considered the base model, and in the other samples the sensitive parameters, such as modulus of elasticity, plastic stress, and sheet thickness, are changed by about 25% (increase and decrease). Each model was then analyzed under standard cyclic loading, and the hysteresis diagrams and plastic strain contours were obtained, as shown in Figure 13. According to the Tenth Topic of the National Building Regulations, an increase of about 25% over the nominal values provides the final overstrength coefficient of steel sheets in the event of an earthquake [23]; this increase is about 15% in strength for rolled beams. In this study, changes of 25% of the values are considered to compare the effects of increase and decrease in the relevant parameters. The results of the hysteresis diagrams are shown in Figure 13. The moment-rotation diagram was used, in a displacement-control approach, to compare the effect of changes in the influencing quantities on the shear wall strength. As shown in Figure 13, changes in the shear wall quantities, such as thickness and plastic shear stress, have a greater effect on the shear wall strength than changes in the beam and column quantities. The comparison chart of Figure 14 is used to examine the changes in detail. According to Figure 14, the changes in the thickness and type of sheet (plastic stress) have a significant impact on the strength of the structure. On average, a 25% increase or decrease in the thickness and in the plastic stress of the sheet changed the shear wall strength by 18% and 15%, respectively.

Conclusion. When the modulus of elasticity of the sheet increases by 25% (frames 2 and 3), the final strength of the shear wall increases by about 5%, and with a 25% decrease of the same quantity, the response decreases by 2%. Therefore, the linear range of the shear sheet has little effect on the final behavior of the frame. This is because the stress exceeds the elastic range early in the loading, and therefore the plastic range is the determining factor in the final behavior of the structure. The behavior of the structure when increasing and decreasing the elasticity modulus of the beams and columns by 25% (frames 8 and 9) is similar to that of frames 2 and 3, with increases and decreases of 7% and 2%, respectively, which are trivial amounts. In addition, the plastic stress of the shear sheet was evaluated. When this value increases by 25%, the final strength of the frame increases by about 17%, and when the plastic stress of the sheet decreases by 25%, the final response of the frame decreases by about 13%. These values indicate a significant effect of the plastic stress assigned to the shear sheet.
If a 25% change is considered for the plastic stress of the beams and columns, the final strength of the frame increases and decreases by about 7% in each direction. That is, the plastic stress of the beams and columns does not play a significant role in the strength and behavior of the frame; however, the same parameter has a significant effect for the shear sheet. The last parameter is the thickness of the shear sheet. When the thickness changes by 25%, the final response of the structure increases by about 20% or decreases by about 15%, indicating the significant effect of the sheet thickness on the behavior of the frame and, ultimately, on the behavior of the structure. Furthermore, when the thickness of the sheet increases, the hysteresis diagram has a larger enclosed area, about 20% larger than the base state, indicating significant ductility and energy dissipation. Comparing the results obtained in this article indicates the significant effect of the shear sheet properties on the behavior of the frame. In addition, changes in the mechanical properties of the beams and columns do not significantly affect the frame behavior, because the shear sheet provides most of the lateral stiffness of the frame.

Data Availability. The data used to support the findings of the study are available from the corresponding author upon request.
A Novel Multisensor Traffic State Assessment System Based on Incomplete Data

A novel multisensor system handling incomplete data is presented for traffic state assessment. The system comprises probe vehicle detection sensors, fixed detection sensors, and a traffic state assessment algorithm. First of all, validity checking of the traffic flow data is performed as preprocessing. Then a new method based on historical data is proposed to fuse and recover the incomplete data. According to the spatially complementary characteristics of the data from the probe vehicle detector and the fixed detector, a space-matching fusion model is presented to estimate the mean travel speed of the road. Finally, the traffic flow data, including flow, speed, and occupancy rate, detected between the Beijing Deshengmen bridge and the Drum Tower bridge, are fused to assess the traffic state of the road using the fusion decision model of rough sets and cloud. The accuracy of the experimental results can reach more than 98%, and the results are in accordance with the actual road traffic state. This system is effective for assessing traffic state and is suitable for urban intelligent transportation systems.

Introduction. With the rapid development of urbanization, motor vehicle ownership and road traffic flow are rapidly increasing, and traffic congestion has become a common problem in the world [1]. Therefore, an accurate and scientific assessment of the current traffic state can provide the basis for traffic guidance systems and traffic control systems, optimize traffic management programs, and reduce the incidence of traffic congestion. It is an important part of maximizing the social and economic benefits of transportation resources [2]. With the rapid development of sensor technology, more and more types of vehicle detectors, such as coil detectors [3,4], probe vehicles [5,6], microwave detectors, and videos [7,8], are employed to collect traffic information. Many researchers have attempted to develop more efficient assessment models in order to obtain better results. Bachmann et al. [9] investigated multisensor data fusion-based estimation techniques to fuse data from loop detectors and probe vehicles to accurately estimate freeway traffic speed. Bachmann et al. [10] studied fusion techniques with Bluetooth and loop detectors to improve the accuracy of traffic speed estimates. Berkow et al. [11] used traffic signal and probe vehicle data to estimate real-time transit location data and the online implementation of arterial travel time information. Guo et al. [12] used a Kalman filter approach to estimate speed with single loop detector measurements under congested conditions. Kong et al. [13] proposed a fusion-based system composed of real-time traffic state surveillance, which can realize real-time traffic state estimation over 10,000 bidirectional road sections. In order to achieve the estimation of the state of the road network traffic, Kong et al. [14] also combined the Kalman filter with evidence theory as a fusion platform and estimated the speed of the road network based on traffic wavelet theory. El Bantli [15] theoretically proposed optimal linear estimation and the weighted least squares method based on incomplete traffic data, and the method was applied to estimate road travel time. Klein et al. [16] applied data fusion and D-S theory to a decision support system for advanced traffic management.
Li and McDonald [17] put forward a method of link travel time estimation using a single GPS-equipped probe vehicle. Cheu et al. [18] put forward a fusion model based on a neural network and tested its effects using simulation data. Smith and Conklin [19] used local lane distribution patterns to estimate missing data values in traffic monitoring systems. The methodology used time-of-day lane distribution patterns at a particular location to estimate missing detector data, and the results showed that the error ranged from 6% to 8%. Chen et al. [20] proposed a method using historical data to detect bad data samples and to impute missing or bad samples, and it gave a better estimate than previous methods. Treiber et al. [21] presented an advanced interpolation method for estimating smooth spatiotemporal profiles of local highway traffic variables such as flow, speed, and density, to fuse the traffic data and obtain dynamic traffic information. Sumner [22] used fuzzy logic to fuse the detected traffic data information, quantified traffic conditions, and made a comprehensive assessment of the traffic state. Chang [23] applied a neural network to the Brainmaker project, which matched the current traffic state against historical information patterns and improved the effect of computer traffic monitoring and automatic incident detection. The traditional methods of urban road traffic state assessment are usually based on complete data obtained by the detection sensors. However, during the whole process of data acquisition, transmission, and processing, several factors cause incomplete data [24,25]: (i) erroneous installation and calibration of the sensors; (ii) abnormal weather or environment, which causes occasional data exceptions or data loss; (iii) abnormal operation of the detection sensors; (iv) hardware or software failure of the traffic management center system; (v) communication interruptions between the traffic detection sensors, the regional controller, and the traffic management center; (vi) insufficient evaluation and sustained maintenance of the system. These factors have a great impact on the effective and accurate assessment of the traffic state. Such incomplete data often manifest as irregular data collection, large data acquisition errors, missing data, and so on [26,27]. The Texas Transportation Institute (TTI) showed that the completeness rate of traffic management systems ranges from 16% to 93%, with an average value of 67% [28]. This means that incomplete data is one of the outstanding problems in traffic management systems. Therefore, improving the effectiveness and completeness of traffic flow data, and thus making road traffic state assessment results more reasonable and accurate, has extremely important significance for the development of urban intelligent transportation. In this paper, a new multisensor traffic state assessment system is developed. This system can obtain traffic data, which may include incomplete data, from coil detection sensors, microwave detection sensors, and probe vehicle detection sensors. These data are processed by a novel algorithm based on the fusion decision model of rough sets and cloud. The algorithm first checks the data validity. After this preprocessing, the detected incomplete traffic data are fused and recovered using historical information. Then, a space-matching fusion model is proposed to estimate the average travel speed.
Finally, a fusion decision model of rough sets and cloud is presented to assess the traffic state using the flow, speed, and occupancy acquired from the multiple sensors. Experimental results show that our system is suitable for traffic state assessment with incomplete traffic data.

Multisensor Traffic State Assessment System. In view of the problem that traffic flow data are often incomplete, a method of traffic state assessment based on multiple sensors is proposed. The system obtains traffic data from multiple sources through fixed detectors (coil detection sensors and microwave detection sensors) and a probe vehicle detector (floating car detection sensors). The testing environment for the traffic flow data and the main system elements are shown in Figure 1. The three main sensor types of this system are as follows.

2.1. Coil Detection Sensors. Generally, coil detection sensors of square shape are laid under the road, as shown in Figure 1. When vehicles pass over these coil sensors, the inductance of the coil loop changes, which causes a change of frequency, and the detection sensors use this change to judge whether a car has passed the sensor. This kind of sensor can detect traffic flow, speed, queue length, and other traffic parameters. The advantages of this sensor are low cost, high reliability, and high detection precision. However, when the distance between vehicles is less than 3 meters, the detection precision is greatly reduced due to magnetic field interference.

2.2. Microwave Detection Sensors. Microwave detectors, shown in Figure 1, are sensors using microwave transmission to detect traffic data. They send microwaves over the test road and detect traffic parameters by calculating the receiving frequency and receiving time. Microwave detection sensors can detect traffic information such as flow, occupancy, speed, and direction. This kind of sensor can adapt to all kinds of bad weather and has strong anti-interference ability, but its detection accuracy is greatly reduced when the vehicle speed is relatively slow.

2.3. Probe Vehicle Detection Sensors. Probe vehicle (floating car) detection sensors report GPS positions and derive travel time and travel speed indirectly. GPS data have strong continuity, and the acquisition range is extensive. However, the probe vehicle detection precision is affected by the GPS positioning accuracy, and the data communication is susceptible to electromagnetic interference.

The assessment method is also one of the important elements of our multisensor traffic assessment system, and the overall flow chart of the assessment method is shown in Figure 2.

Validity Check of Multisensor Data. The purpose of the validity check is to screen incomplete data out of the traffic flow information and reduce interference during the process of traffic state assessment. The three traffic flow parameters and the mechanism of traffic flow are used to adapt the validity check to different types of incomplete data. The method mainly includes the following four steps.

Step 1 (basic data screening). Before macro data screening, the data must be examined for negative or missing values [21]. Three basic traffic parameters (traffic flow, speed, and occupancy) are considered. By analyzing the relations among the three parameters, the incorrect data can be screened. The approach is listed in Table 1.

Step 2 (threshold inspection). The threshold test determines the upper and lower thresholds of a single quantity based on statistical data.
If the test value is not within the upper and lower thresholds, it is considered erroneous data. Taking a lane as an example, there is a maximum limit value of flow and the minimum value is 0; at the same time, the maximum value of occupancy is 100% and the minimum is 0%.

Step 3 (mechanism inspection of traffic flow). The mechanism inspection is based mainly on the basic characteristics of traffic flow and the functional relations among the three traffic flow parameters. If the data do not conform to the inherent rules of traffic flow theory, the data set is considered wrong and should be deleted or recovered.

Step 4 (abnormal inspection). Under normal traffic conditions, the change in network traffic flow is a stationary random process, and the amplitudes of the traffic data should lie within a certain range. However, when a traffic incident occurs, a large deviation appears. This paper uses the mean value x̄ and standard deviation σ of the data prior to moment t to identify faulty data: when x̄ − 2σ ≤ x_t ≤ x̄ + 2σ is satisfied, the data point is normal; otherwise, it is abnormal [22].

The above four steps can deal with almost all possible data errors. Taking traffic flow data as an example, the faulty data are filtered out after the validity check, and the result is shown in Figure 3.

Traffic State Assessment Method. The traffic state assessment method includes three stages. First, restore the incomplete traffic flow data. Second, fuse and estimate the speed values. Third, build the fusion decision model.

Restoration of Incomplete Traffic Flow Data. Traditional restoration algorithms for incomplete data include the linear interpolation algorithm, the historical trend restoration algorithm, the restoration method based on spatial correlation, and the restoration method based on the BP neural network [23]. The advantages and disadvantages of these methods are shown in Table 2. Because of the heavy traffic on the road, the traffic flow data fluctuate only slightly and show an obvious time correlation, so historical data can be used for fusion estimation. In this paper, a traffic data restoration algorithm based on geometric areas is proposed; it analyzes the historical traffic flow data and the connection between the geometric areas formed by adjacent historical data and the present-moment data. The area of the geometric region formed by historical data reflects the changing trend and the oscillation range of the traffic flow data, so we make full use of this area to restore the incomplete traffic flow data at the present moment; the volatility of the traffic flow data is thus reflected in the recovered data. Take flow as an example. As shown in Figure 4 (sketch map of the triangle area geometry of traffic flow data), the flow data q_{t-5}, q_{t-4}, q_{t-3}, q_{t-2}, q_{t-1} are obtained by the traffic detector at the moments t-5, t-4, t-3, t-2, and t-1, respectively. Due to a sensor or transmission fault, the flow datum at moment t is incomplete. The area of the triangle formed by q_{t-3}, q_{t-2}, q_{t-1} is defined as S_{t-1}, and it reflects the degree of nonlinearity of q_{t-3}, q_{t-2}, q_{t-1}: when S_{t-1} is large, the oscillation amplitude of the data is large, and when S_{t-1} is 0, the data change linearly with time. There is a correlation between the present datum, the historical data, and their nonlinear trend, so S_t, the area of the triangle formed by q_{t-2}, q_{t-1}, q_t, is connected with S_{t-1}.
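The triangle areas used above can be computed directly with the shoelace formula; a minimal sketch with illustrative flow values, assuming (as later in the text) two time units between adjacent samples:

```python
# Sketch: shoelace area of the triangle formed by three consecutive
# (time, flow) samples, the nonlinearity measure S described above.

def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

# Flow q at t-3, t-2, t-1 (illustrative), at times 0, 2, 4.
s_prev = triangle_area((0.0, 23.0), (2.0, 31.0), (4.0, 27.0))
print(f"S_(t-1) = {s_prev:.1f}")   # 0 would mean a perfectly linear trend
```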
To make the restored value more reliable, the area S_{t-2} of the triangle formed by q_{t-4}, q_{t-3}, q_{t-2} and the area S_{t-3} of the triangle formed by q_{t-5}, q_{t-4}, q_{t-3} are also taken into account. The three triangles are given different weights to determine the area S_t: S_t = w_1 S_{t-1} + w_2 S_{t-2} + w_3 S_{t-3}, where w_1, w_2, and w_3 are the weights of S_{t-1}, S_{t-2}, and S_{t-3}. The weights are obtained from intermediate variables computed from the ratios of successive areas (also using S_{t-4}, the area of the triangle formed by q_{t-6}, q_{t-5}, q_{t-4}) and are then normalized. Therefore, once the geometric area formed by the incomplete datum and the two neighboring data is fixed, the incomplete datum can be restored. We assume that there are two time units between adjacent moments. Then the height of the triangle formed by q_{t-2}, q_{t-1}, q_t with respect to the base q_{t-2}q_{t-1} follows from the area S_t, and together with the linear equation of the line through q_{t-2} and q_{t-1} we obtain two candidate flow values at moment t, q_t^a and q_t^b (with q_t^a > q_t^b), one on each side of the line. The solving process is shown in Figure 5. The flow value at moment t is taken as q_t^a when q_{t-1} < q_{t-2}, and as q_t^b when q_{t-1} > q_{t-2}. This ensures that the restored datum for moment t reflects both the historical data trend and the oscillation amplitude.

Fusion and Estimation of Speed Based on Space Matching. In order to improve the effectiveness and accuracy of the traffic flow data, a fusion and estimation model of speed based on space matching is proposed in this paper. This method uses the mean speed information from the probe vehicle detector and the coil detector, sets up the fusion model of road speed, and trains the weights and deviation of the model by the Newton method to obtain the final speed estimate. The flow chart is shown in Figure 6.

Speed Fusion and Estimation Model. The speed fusion and estimation model based on the probe vehicle detector and the coil detector [29] is built as shown in Figure 7. In the model, the whole road is divided into an upstream part and a downstream part. On the downstream side of the road, because of the influence of the traffic lights, traffic queues up, so this side cannot provide effective information about the mean travel speed of the section; this paper therefore selects the upstream road section as the research object for estimating the mean speed. The upstream part of the road is divided into n sections of equal length; the nth section is close to the downstream end of the road, and the coil is placed in the nth section, so that the parameters of the vehicles, such as flow, speed, and occupancy, can be obtained through that cross section. There is no fixed detector in sections 1 to n-1; the dotted boxes represent virtual cross-section spots whose data come from the probe vehicle detector, and the model is mainly used to access the speed data of the probe vehicle detector. In this paper, the probe vehicle data at these spots are treated like coil detector data.

Speed Fusion Method. Since the probe vehicle is itself one of the traffic participants, while the coil detector can only collect spot speeds, neither alone can estimate the mean speed very well. For these reasons, it is necessary to space-match the data of the probe vehicle detector and the coils, in other words, to eliminate the difference between the data of the probe vehicle detectors and the data of the coils through data correction.
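As detailed in the next subsection, the fusion model is a weighted sum of the section mean speeds plus a deviation term. Since that model is linear in its parameters, the Newton training described there converges to the ordinary least-squares solution, which the following sketch computes on synthetic stand-in data (not the Beijing measurements):

```python
# Sketch: fit the section weights w_i and deviation b of the model
# V = sum_i w_i * V_i + b by linear least squares.
import numpy as np

rng = np.random.default_rng(1)
n_sections, n_samples = 6, 200
v_sections = 30.0 + 10.0 * rng.random((n_samples, n_sections))   # km/h
true_w = np.array([0.10, 0.15, 0.20, 0.20, 0.20, 0.15])
v_arterial = v_sections @ true_w + 1.5 + rng.normal(0.0, 0.5, n_samples)

# Append a ones column so the deviation b is fitted jointly with w.
X = np.hstack([v_sections, np.ones((n_samples, 1))])
params, *_ = np.linalg.lstsq(X, v_arterial, rcond=None)
w, b = params[:-1], params[-1]
print("fitted weights:", np.round(w, 3), "deviation:", round(float(b), 2))
```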
According to Figure 7, the mean speed in every section affects the arterial mean speed, so the arterial mean speed can be estimated through the weighted sum of the mean speeds of the sections: V̄ = Σ_{i=1}^{n} w_i V̄_i + b, where V̄ is the arterial mean speed (km/h), V̄_i is the mean speed of the ith section (km/h), w_i ∈ [0, 1] is the weight of the corresponding section, and b is the deviation, which is used to correct the fusion result. The total error function is E = Σ_{j=1}^{N} [V̂(j) − V(j)]², where V̂(j) is the estimated mean speed of the jth sample, V(j) is the actual mean speed of the jth sample, and N is the total number of samples. In order to find the weights and deviation that minimize the total error, the fusion model must be trained. The weights are trained by the Newton method, which is a fast optimization method based on a quadratic Taylor series expansion. The Newton method is defined as x_{k+1} = x_k − H_k^{-1} g_k, where x_{k+1} is the (k+1)th estimate of the weights or deviation, x_k is the previous estimate, g_k is the gradient, and H_k^{-1} is the inverse of the Hessian matrix of the error performance function at the current weights and threshold values. The basic idea of the Newton method is to locally approximate E(x) by a quadratic function and then find the minimum of the approximated function. The Hessian can be expressed as [30] ∇²E(x) = 2Jᵀ(x)J(x) + 2S(x), where J(x) is the Jacobian matrix of the error vector v(x) = [v_1(x), …, v_N(x)]ᵀ and S(x) = Σ_i v_i(x)∇²v_i(x). When S(x) is small, the Hessian matrix is approximately ∇²E(x) ≈ 2Jᵀ(x)J(x). If E(x) has the form of the total error above, the gradient is ∇E(x) = 2Jᵀ(x)v(x). Substituting the gradient and the approximate Hessian into the Newton iteration gives x_{k+1} = x_k − [Jᵀ(x_k)J(x_k)]^{-1} Jᵀ(x_k)v(x_k). The Newton method has a fast convergence speed and can always find the minimum of a quadratic function in one step, so it can be used to train the weights and deviation of the fusion model. When the probe vehicle data and the coil detector data are fused, this method reduces the training time and the consumption of computing resources, and it also guarantees the real-time performance of the fusion algorithm.

Fusion Decision Model of Rough Sets and Cloud. After the restoration and estimation, a fusion decision model of rough sets and cloud is presented in this paper to assess the road traffic state.

Cloud Model Review. Assume U is the quantitative domain represented by exact values and C is a qualitative concept on U. If the quantitative value x ∈ U is a random realization of concept C, and the membership grade μ(x) ∈ [0, 1] of x in C is a random number with a stable tendency, then the distribution of x on U is called a cloud model, and each x is called a cloud drop, as shown in Figure 8. There are three digital characteristics of a cloud [31,32]: the expected value Ex, the entropy En, and the hyper-entropy He. Ex is the center of the whole cloud in the domain U; it is the point in the domain most representative of the concept. En is the fuzziness measure of the qualitative concept; it reflects the range of values in the domain that can be accepted by the linguistic value. He is the degree of dispersion of the entropy En, i.e., the entropy of En; it reflects the cohesion of the cloud drops. If the membership grade μ(x) of x in C satisfies [33] μ(x) = exp[−(x − Ex)²/(2 En′²)], where x ~ N(Ex, En′²) and En′ ~ N(En, He²), then the distribution of x on U is called a normal cloud [34].

Cloud Generator Review. There are mainly two kinds of cloud generators, named the forward cloud generator and the backward cloud generator [35-37].
Fusion Decision Model of Rough Sets and Cloud

After the restoration and estimation, a fusion decision model of rough sets and cloud is presented in this paper to assess the road traffic state.

Cloud Model Review

Assume $U$ is the quantitative domain represented by exact values and $C$ is the qualitative concept on $U$. If the quantitative value $x$ is a random realization of the concept $C$, with $x \in U$, then $\mu(x) \in [0, 1]$, the membership grade of $x$ in $C$, is a random number with a stable tendency. The distribution of $x$ on $U$ is called a cloud model, and each $x$ is called a cloud drop, as shown in Figure 8. A cloud has three digital features [31,32]: the expected value Ex, the entropy En, and the hyper-entropy He. Ex is the center of the whole cloud of drops in the domain $U$; it is the coordinate in the domain that is most representative of the concept. En is the fuzzy measurement of the qualitative concept; it reflects the range of values in the domain that can be accepted by the language value. He is the degree of dispersion of the entropy En, that is, the entropy of En; it reflects the cohesion of the cloud drops. If the membership grade $\mu(x)$ of $x$ in $C$ satisfies [33]

$\mu(x) = \exp\left(-\frac{(x - \mathrm{Ex})^2}{2\,\mathrm{En}'^2}\right)$,

where $x \sim N(\mathrm{Ex}, \mathrm{En}'^2)$ and $\mathrm{En}' \sim N(\mathrm{En}, \mathrm{He}^2)$, then the distribution of $x$ on $U$ is called a normal cloud [34].

Cloud Generator Review

There are mainly two kinds of cloud generators, named the forward cloud generator and the backward cloud generator [35-37]. The forward cloud generator is the algorithm that generates a quantity of cloud drops drop($x_i$, $\mu_i$) of the normal cloud model from the three digital characteristics (Ex, En, He), as shown in Figure 9. The forward cloud generator algorithm is as follows.

Step 1. Generate a Gaussian random number $\mathrm{En}'_i$ with expected value En and standard deviation He.

Step 2. Generate a Gaussian random number $x_i$ with expected value Ex and standard deviation $\mathrm{En}'_i$.

Step 3. Compute $\mu_i = \exp(-(x_i - \mathrm{Ex})^2 / (2\,\mathrm{En}'^2_i))$ and let $(x_i, \mu_i)$ be a detailed quantitative realization of the concept $C$, called a cloud droplet.

Repeat Steps 1 to 3 until $n$ cloud droplets are produced.
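A minimal sketch of this forward generator; the membership computation in Step 3 follows the normal-cloud definition above.

import math, random

def forward_cloud(Ex, En, He, n):
    # Generate n cloud drops (x_i, mu_i) of the normal cloud (Ex, En, He).
    drops = []
    for _ in range(n):
        En_i = random.gauss(En, He)                 # Step 1: En' ~ N(En, He^2)
        x_i = random.gauss(Ex, abs(En_i))           # Step 2: x ~ N(Ex, En'^2); abs() guards a negative draw
        mu_i = math.exp(-(x_i - Ex) ** 2 / (2.0 * En_i ** 2))  # Step 3: membership grade
        drops.append((x_i, mu_i))
    return drops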
The backward cloud generator is the inverse process of the forward cloud generator: it transforms a given sample of data into the qualitative concept expressed by the digital characteristics {Ex, En, He} of the cloud; it is a mapping from a data sample to a concept, as shown in Figure 10.

Rough Set Theory Review

The main idea of rough set theory [38] is to divide the given space according to equivalence relations while the equivalence property of the knowledge is preserved. Attribute reduction is an important part of rough set theory: it deletes redundant or unimportant condition attributes and attribute values while keeping the classification ability unchanged, and then derives the decision rules of the condition attributes relative to the decision attribute. The method is simple and does not need any prior information, so it can be applied to the generation of fusion decision rules. Because of its objective description of the uncertainty of the problem, it is well suited to traffic state assessment.

Proposed Fusion Decision Model

When applying rough set theory to the analysis of actual data and knowledge, each attribute value of the decision table must be discrete; and although traffic flow data fluctuate, they have a certain continuity in the local scope. So in this paper we use the cloud model to discretize the traffic flow data. The fusion decision model of rough sets and cloud is mainly based on cloud model theory. The algorithm steps are as follows.

Step 1. For the multiple parameters of the traffic detectors, select the qualitative concepts, respectively, and determine the scope of their quantitative values.

Step 2. According to cloud model theory, generate a cloud for each qualitative concept, respectively, and make the continuous values of the traffic flow data discrete.

Step 3. Regard the discretized traffic parameters of the samples as condition attributes, obtain the status value of every moment as the decision attribute according to the expert system, and establish a decision table.

Step 4. Delete duplicate objects in the decision table.

Step 5. Calculate the importance degree of each condition attribute for the decision attribute and delete the condition attributes whose importance degrees are 0.

Step 6. According to the knowledge reduction method of rough sets, delete redundant condition attributes.

Step 7. Delete the redundant attribute values for each object and obtain the final decision rules.

Result of Data Restoration

The traffic flow data acquired from the Beijing DeShengMen bridge to the Drum Tower on June 19, 2009, were taken as the original data. In order to test the effectiveness of the incomplete data restoration algorithm, the original data were modified to manufacture some incomplete data artificially. We then used the proposed algorithm to deal with the incomplete data, and the result is shown in Figure 11. Figure 11 shows that the 8 incomplete data points can be detected and restored accurately by this algorithm. To further illustrate the effectiveness of the algorithm, we compared it with two other algorithms: the linear interpolation algorithm and the historical trend restoration algorithm. The results are shown in Table 3. Take number 158 as an example: the modification means that we changed the original value from 23 to 81. Comparing the relative errors of the different algorithms, the mean relative error of the proposed algorithm is 1.85%, while that of the historical trend restoration algorithm is 14.78% and that of the linear interpolation algorithm is 11.90%. The effectiveness of the proposed algorithm is thus much better than that of the other methods.

Result of Speed Fusion Experiments

In order to verify the reliability of the algorithm, the weighted average method, the Kalman filtering method, and the BP neural network method were included in the experimental analysis. The specific analysis of the experimental data with the three methods is shown in Figure 12. After matching the data detected by the probe vehicle detector, we extracted the speed in each section of the road, respectively, and took its average value as the input of the fusion model. Here the road is divided into six subsections and the data detected by the coil detector are held constant. 60% of the data were taken as training samples with the method of 10-fold cross validation, and the Newton method was used to determine the weight of each speed value. The remaining 40% of the data were then tested with the steps mentioned above. The result of the space-matching fusion method and the error curve are shown in Figure 13. We assess the strengths and weaknesses of these methods with indicators such as the mean absolute error (MAE), mean square error (MSE), mean absolute percentage error (MAPE), mean square percentage error (MSPE), and maximum error (MAXERR (%)). The evaluation results are shown in Table 4. The comparison shows that the space-matching fusion method performs much better than the weighted average fusion method and the Kalman filtering method, and its fusion effect is similar to that of the neural network method, whose calculation is, however, relatively complex. In conclusion, the method of fusion and estimation of speed based on space matching not only guarantees timeliness but also improves the reliability and validity of the data.

Analysis of Traffic State Assessment Result

The qualitative concepts of the traffic flow parameters are given as follows: flow = {very low, low, normal, high, very high}, speed = {very slow, slow, normal, fast, very fast}, and occupancy = {very low, low, normal, high, very high}; we use 0, 1, 2, 3, and 4 to represent the qualitative concepts, respectively. Table 5 lists the threshold values of the qualitative concepts of flow, speed, and occupancy, and the cloud models $(\mathrm{Ex}_j, \mathrm{En}_j, \mathrm{He}_j)$ are shown in Table 6. Then the collected traffic flow data and the clouds listed in Table 6 are substituted into (18), respectively; if $\mu_j$ is the maximum among the $\mu_j$ ($j = 0, 1, \ldots, 4$), then the traffic flow parameter value belongs to the cloud $C_j$.
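A minimal sketch of this maximum-membership discretization; using the expected entropy En of each concept cloud, rather than a random draw, makes the assignment deterministic.

import math

def classify(x, clouds):
    # clouds: list of (Ex, En, He) triples for concepts 0..4; returns the
    # index j of the concept whose cloud gives x the highest membership.
    # He is unused in this deterministic variant.
    best_j, best_mu = 0, -1.0
    for j, (Ex, En, He) in enumerate(clouds):
        mu = math.exp(-(x - Ex) ** 2 / (2.0 * En ** 2))
        if mu > best_mu:
            best_j, best_mu = j, mu
    return best_j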
Table 7 lists part of the identification results of the traffic flow parameters based on cloud theory. The final decision rules based on rough set theory are shown in Table 8. Figure 14 lists the results of the assessment of the traffic state. There are four states: 1 represents smooth traffic, 2 represents slight congestion, 3 represents moderate congestion, and 4 represents overcrowded traffic. To explain the effectiveness of the algorithm better, we use the crowded identification rate (IR) and the false identification rate (FIR) to test the algorithm. The test results are shown in Table 9: the identification rate is over 98% and the misjudgment rate is low. Experimental results show that the restoring algorithm based on self-adaptive generation of area geometry and the fusion and estimation model of speed based on space matching improve the completeness and effectiveness of the traffic flow data, and that the fusion decision model of rough sets and cloud can be used to assess the traffic state and achieves the desired results.

Conclusion

In this paper, a multisensor traffic state assessment system was developed. Since the sensors often acquire incomplete traffic data, our system provides a novel and robust algorithm to solve this problem. The results of the restoring algorithm based on self-adaptive generation of area geometry are closely consistent with the real data, with a mean relative error of only 1.85%, which greatly improves the reliability of the data. With the speed fusion estimation model based on space matching, the estimation precision is above 90%, which improves the effectiveness and accuracy of the speed data. Finally, the traffic state assessment based on the fusion decision model of rough sets and cloud was applied to actual road traffic conditions, with an evaluation accuracy above 98%. The experimental results show that the proposed system is feasible, effective, and accurate, and that it is of great significance to the development of urban intelligent transportation systems.
Rapid detection of Salmonella enterica in primary production samples by eliminating DNA amplification inhibitors using an improved sample pre-treatment method

Abstract

Sensitive detection of pathogens in livestock farms is an integral part of the One Health Action Plan of the European Union (EU). Ensuring this requires on-site testing devices that are compatible with complex matrices such as primary production samples. Among all sample types, faeces are considered the most challenging matrix, because the complexity of sample preparation for molecular testing makes it difficult to identify pathogens. We have developed a loop-mediated isothermal amplification (LAMP) based veterinary point-of-care (POC) device (VETPOD) and adapted it to detect Salmonella enterica in primary production samples. Three different sampling methods (semi-wet chicken faeces, boot sock collection, and dust samples from a poultry shed) were iteratively tested to assess their complexity and their suitability as sampling methods for on-site testing. During the study, a sample preparation method that included a two-step centrifugation combined with washing of the enriched Salmonella cells was found to be crucial for eliminating amplification inhibitors originating from the faecal matrices. A total of 90 samples were tested, comprising 60 samples for the sensitivity study and 30 samples for the relative level of detection (RLOD, the level of detection in comparison to the ISO 6579-1 reference method). Overall, the VETPOD had a sensitivity of 90%, 84.62%, and 81.82% for boot sock, faecal, and dust samples, respectively. The level of detection was 2.23 CFU/25 g, which was 1.33 times higher than that of ISO 6579-1. Performing with excellent agreement with ISO 6579-1, the VETPOD proved to be a promising alternative for detecting Salmonella spp. in primary production and animal husbandry samples.

The bacterial strains used in the study were obtained from a strain collection of the National Food Institute, Technical University of Denmark (DTU-Food). The strains were initially selected on XLD agar at 41 °C, followed by an overnight incubation at 37 °C on Blood Agar plates (Tryptic Soy Agar supplemented with 5% sheep blood) before use. The strains were harvested and diluted in phosphate buffered saline, pH 7.4 (PBS) to prepare a working stock by adjusting the concentration to an optical density (OD) of 0.8 at 600 nm (VWR spectrophotometer UV-1600PC, VWR International, Denmark) (Vinayaka et al., 2020, 2022).

Preparation of DNA for testing

The enriched bacterial samples were subjected to a simple thermolysis-based cell lysis protocol. One mL of each enriched sample was centrifuged at 4000 g for 5 min and the supernatant was discarded. The obtained pellet was re-suspended in 92 µL of NaOH (25 mM) and heated at 98 °C for 10 min. The reaction mixture was neutralized by adding 8 µL of 1 M Tris-HCl, followed by centrifugation at 10,000 g for 5 min. The final supernatant was diluted 1:10, and 4.5 µL of the diluted sample was used as the target for the LAMP reaction (Scheme 1).
Image analysis

The LAMP reaction solutions that were treated with SYTO-24 DNA intercalating dye after the reaction completion were analyzed for the intensity of the green color and assessed based on the signal-to-noise ratio (S/N) as reported previously (Vinayaka et al., 2022). In brief, the captured image of the LAMP reaction solution was analyzed using an online RGB color code picker tool (image-color.com). For precise quantification, each image was restricted with a circle having a radius equivalent to 10 pixels. Subsequently, the color intensity of the pixels inside the circle was recorded multiple times and an average value was calculated. The mean value of the signals (n = 4) surrounding the reaction tube in the image was considered as the background (B). This background signal was subtracted from the mean signal (n = 4) of the respective sample (S). Finally, the S/N ratio was determined by subtracting the mean signal intensity of the negative control (NC) from the mean signal intensity of the positive sample and dividing the resulting value by the standard deviation (SD) of the negative control (Vinayaka et al., 2022). A code sketch of this computation is given after the supplementary material list below.

Scheme S1: Schematic representation of the optimization of the sample preparation protocols.
Figure S2: Assessment of the degree of inhibitory effect of the fecal matrix against the dilution factor on the LAMP reaction.
Table S1: Comparison of the performance of VETPOD with detection strategies used for the detection of Salmonella spp. in fecal and primary production samples (tabulated in order of fastest to slowest analysis time).
Table S3: Comparison of the results obtained in the sensitivity study for the samples processed by protocol A and analyzed by the reference and the VETPOD methods.
Table S4: Evaluation of the agreement between the methods. PA: positive agreement; NA: negative agreement; N: total number of samples tested; ND: negative deviation; PD: positive deviation; P0: relative observed proportional agreement; Pe: hypothetical probability of chance agreement.
Table S5: PODLOD calculations according to the Wilrich and Wilrich method.
Table S6: List of pathogens tested in the specificity study.
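A minimal sketch of the signal-to-noise computation described above; the pixel-intensity lists are hypothetical stand-ins for the green-channel readings taken inside the 10-pixel circles.

def s_n_ratio(sample_px, background_px, control_px):
    # sample_px: readings inside the sample circle; background_px: readings
    # around the reaction tube; control_px: readings for the negative control.
    mean = lambda v: sum(v) / len(v)
    sd = lambda v: (sum((x - mean(v)) ** 2 for x in v) / (len(v) - 1)) ** 0.5
    S = mean(sample_px) - mean(background_px)    # background-corrected sample
    NC = mean(control_px) - mean(background_px)  # background-corrected control
    return (S - NC) / sd(control_px)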
Incidental findings in pretreatment and post-treatment orthodontic panoramic radiographs

Introduction

Panoramic radiographs provide proper information for planning most oral surgery procedures, for evaluating the progress of orthodontic treatments, for following up the growth and development of children, and for oral health surveys in specific populations. Furthermore, panoramic imaging allows the complete visualization of the dental and bone anatomic landmarks and structures of the maxilla and mandible. 1 Panoramic radiographs associated with clinical examination are used in orthodontic diagnosis and treatment planning. These radiographs are essential to evaluate tooth eruption and are an instrument for the detection of pathologies in the jaws. 2 During the interpretation of panoramic radiographic images, the dentist may identify radiographic findings unrelated to the main reason for the imaging examination or to the patient's complaint. The incidental findings on panoramic radiographs taken for orthodontic purposes are of special interest to the clinician; in many cases these findings may indicate pathologies that require other dental or medical interventions or may modify the initial treatment plan. 2 Eight per cent of orthodontic patients are at the age of transition between the mixed and the permanent dentition, a period in which findings of dental anomalies are frequent. Bondemark (2006) analyzed 496 pretreatment panoramic radiographs from orthodontic patients; incidental findings were reported in 43 patients, including osteosclerosis, apical endodontic lesions, dentigerous cysts, odontomas, tooth morphologic alterations, and alveolar bone resorption. 2 Data regarding the prevalence of incidental findings in post-treatment follow-up radiographs are scarce. The aim of this study was to investigate the type and frequency of incidental findings in the maxillofacial region of panoramic radiographs obtained for orthodontic treatment purposes. Images obtained before and after orthodontic treatment were evaluated, and the incidence of the findings in the two groups was compared.

Materials and methods

This study was approved by the Ethics Committee of the Bauru School of Dentistry (protocol n. 970,779). Panoramic radiographs from patients treated with fixed orthodontic appliances from 2005 to 2015 were selected from the archives of the Department of Orthodontics, Bauru School of Dentistry, University of São Paulo. The samples were selected giving priority to the most recent records in order to obtain panoramic radiographs of higher quality. Two hundred and fifty dental records of patients with complete orthodontic documentation, pre- and post-orthodontic treatment, were selected. A total of five hundred panoramic radiographs were analyzed.
Inclusion criteria were complete radiographic documentation, patients aged 11-18 years who underwent treatment with fixed orthodontic appliances, and images of good quality. Cases with incomplete radiographic documentation or with artifacts in the region to be examined were excluded. After selection, the panoramic radiographs were divided into two groups of 250 images each: Group A, pretreatment; Group B, post-treatment. When necessary, radiographic images taken between the beginning and end of treatment (panoramic or periapical) were used to confirm or exclude a diagnostic hypothesis. Prior to the radiograph analysis, a randomly selected group of images was interpreted by both the examiner and an experienced radiologist in order to reach agreement on the diagnoses; the main image analysis was then performed by a single examiner. Dental agenesis was considered only when the primary tooth was present and the germ/permanent tooth was absent on the panoramic radiograph. Enamel pearls were identified using the panoramic radiograph and, when available, periapicals. The criteria for the diagnosis of an impacted tooth were lack of space in the dental arch, abnormal position of the tooth germ, and presence of obstacles in the eruption path. The external root resorption finding was considered for posterior teeth. Orthodontic retainers, plates, and screws were classified as present (1) or absent (0). Orthodontic apical root remodeling was diagnosed as roundness of the root apex, especially in the anterior teeth, and classified as present (1) or absent (0). After one month, 10% of the total sample was randomly selected and analyzed again to perform the intra-examiner agreement test. The images were interpreted with the aid of a light box in a room with proper lighting. The Kappa test was performed to evaluate intraobserver agreement and showed an agreement of 0.97 (a minimal sketch of this statistic is given below). The Wilcoxon test was performed to compare the findings pre- and post-treatment, adopting a significance level of α = 0.05. Continuous variables are reported as means ± standard deviation (SD).
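For illustration, a minimal sketch of Cohen's kappa as used for the intra-examiner agreement; the two input lists stand for the categorical calls of the first and second readings of the same radiographs and are hypothetical.

def cohens_kappa(first, second):
    # first, second: equal-length lists of categorical calls, e.g.
    # present/absent codes for each finding on each radiograph.
    n = len(first)
    po = sum(a == b for a, b in zip(first, second)) / n        # observed agreement
    labels = set(first) | set(second)
    pe = sum((first.count(l) / n) * (second.count(l) / n) for l in labels)
    return (po - pe) / (1.0 - pe)                              # chance-corrected agreement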
Results and discussion

Of the patients selected, 141 were female and 109 male. The mean age was 14.5 ± 2.29 years. In group A, 169 patients were between 11 and 13 years old and 80 between 14 and 17 years old. In group B, 26 patients were between 11 and 13 years old, 129 between 14 and 17 years old, and 95 were older than 17 years. Some of the incidental findings were present in group A but not in B, while others were present in both groups or only in group B. The distributions of the findings are described in Table 1. Panoramic radiography allows the evaluation of lesions in the jaws, the relationship of the teeth to each other, and the number and location of intraosseous teeth, among other alterations. Easy access to the examination and radiation doses four to six times lower than those of full-mouth series of periapical projections are advantages of this method. The aim of this study was to evaluate the presence of radiographic findings with clinical implications in panoramic radiographs performed at the beginning and end of orthodontic treatment. Several of the alterations observed were possibly found only because of this opportunity to obtain a panoramic radiograph and may therefore be considered incidental findings.

The highest prevalence of findings occurred in the post-orthodontic treatment radiographs, highlighting the clinical importance of orthodontic apical remodeling during the evolution of orthodontic treatment, which may even represent an indication of clinical success. 3 An increase of osteosclerosis lesions was observed in the post-treatment group, raising questions regarding its relation to orthodontic movement. The cause of osteosclerosis is known to be idiopathic, and information regarding the role of orthodontic movement in its etiology is still controversial and scarce in the literature. The bone that composes these areas of osteosclerosis has normal structure and functioning; the difference lies in its greater trabecular density. Therefore, it is possible to move teeth and to place mini-implants and osseointegrated implants in these areas, as long as these radiopaque images are not related to teeth without pulp vitality. The idea is to use forces of lesser intensity than those conventionally applied; the decrease in force corresponds to a compensation, since, due to the higher local bone density, bone deflection does not occur. Thus, there will be normal movement even in the densest area. 4,5 However, bone remodeling may take longer because the bone trabeculae are thicker and the medullary spaces are reduced. 5 Pretreatment radiographic examination allows the identification of alterations that may influence orthodontic treatment planning. Among these alterations one can mention conditions needing restorative, endodontic, periodontal, or surgical treatment, the diagnosis of dental agenesis, supernumerary teeth, and root resorption, which may definitely require prior intervention. Additionally, the evaluation after orthodontic treatment allows a comparison with the initial condition and the identification of alterations that arose as a result of the orthodontic intervention, with their respective clinical implications. Thus, in this study we evaluated radiographs taken before and after treatment. In this study, 56.4% of the patients were female and 43.6% were male. Most patients started orthodontic treatment between 11 and 13 years of age (67.9%) and finished between 14 and 16 years (51.6%). Granlund (2012) investigated a similar population: the authors evaluated 1278 panoramic radiographs from young patients (530 males and 757 females) with a mean age of 14 years, and the presence of incidental findings in patients with mixed dentition (i.e., supernumerary teeth) was reported. 6 In this research there was a significant reduction of supernumerary teeth and impacted teeth in the post-treatment radiographs. If supernumerary teeth are partially or totally erupted, they may cause retention of dental biofilm and influence periodontal health and dental alignment. Supernumerary teeth may also interfere with the occlusion and should be extracted whenever they could impair the development of adjacent teeth. 7,8 Impacted teeth also have an indication for extraction for the prevention of dental ankylosis and of root resorption due to the proximity between the roots. The possibility of cystic or neoplastic transformation of the remaining dental follicle should also be considered. 8 Other dental anomalies, such as enamel pearls, root dilaceration, supernumerary roots, microdontia, and hypercementosis, were more numerous after orthodontic treatment; however, studies relating their occurrence to orthodontic treatment are not available.
The supernumerary root does not require treatment; however, its identification is important for the planning of dental procedures. 9 Regarding the higher number of root dilacerations in the post-treatment radiographs, it is possible that this result is related to the complete formation of the roots, which can be observed in greater numbers at the end of orthodontic treatment. The presence of hypercementosis may hamper orthodontic movement and cause clinical and technical peculiarities, such as the need to apply maneuvers for bracket angulation. Hypercementosis does not appear to be strictly aggravating during orthodontics, but it requires that the clinician be aware of the evolution of each specific case. 10 In this study, 4 images compatible with external radicular resorption were observed in 3 patients (1.2%) in the pre-treatment group and 9 images in 6 patients (2.4%) in the post-treatment group. External root resorption and its progression may be related to orthodontic treatment. The main related factors to be considered are the duration, intensity, application method, and direction of the force movement, as well as genetics, systemic diseases, root morphology, and local traumas. During induced tooth movement, the applied force can compress the periodontal ligament and, consequently, cause the death of cementoblasts, allowing clastic cells to occupy the root surface and thus initiating the dental resorption associated with orthodontic movement. 11 Han (2005) stated that tooth intrusion causes four times more root resorption than extrusion. Levander (1998) stated that root resorption is significantly higher in patients submitted to continuous orthodontic movement compared to those whose orthodontics is performed with pauses, which allows the cementum to recover. The treatment of external root resorption in patients undergoing orthodontic treatment is based on the principle that removal of the cause interrupts the process; inflammatory dental resorption ceases after a week of disruption of the forces applied to the tooth. 12 However, chronic lesions are able to promote erosions in the cementum and the root apex region, causing retention of necrotic material that prevents the reparative process. In such cases endodontic treatment is indicated. 13,14 Although not statistically significant, a higher incidence of dental pulp stones was observed in the post-treatment group. Over time, the dental pulp undergoes physiological changes due to aging; under the influence of other factors such as caries, periodontal disease, and traumatism, the deposition of mineralized tissue in the form of nodules inside the pulp cavity may occur. A relation between the presence of pulp nodules and orthodontic movement has been reported; however, most authors believe that its occurrence is associated with predisposing factors resulting from the physiological aging process of the pulp. [15][16][17] One image suggestive of a dentigerous cyst and one odontoma were observed in the sample investigated. Carvalho (2010) reported that 42.2% of dentigerous cysts were present in images of patients between 11 and 20 years old, an age range close to that of the patients included in this study. 18 The dentigerous cyst can inhibit the eruption of the involved tooth. 19 In the case observed in this sample, the involved tooth was extracted and the lesion treated before orthodontic treatment began. Among the benign tumors of the mouth, odontoma is the most common.
It develops in patients younger than 20 years old, during the odontogenesis process. 20,21 The compound odontoma has a lower growth potential than the complex type. Usually, odontoma is associated with non-erupted permanent teeth and is the most common odontogenic tumor associated with delays in tooth eruption; in such cases treatment with surgical removal is important. 20,22 It is important to highlight that these lesions should be submitted to histopathological examination for diagnosis. Metallic devices, such as dental implants and orthodontic retainers, were observed in the post-treatment group. Orthodontic retention is part of post-treatment care in order to prevent the recurrence of crowding, especially in the lower incisors. Dental implants can be part of rehabilitation treatments; however, treatment costs, cultural differences, comfort, age, and service accessibility need to be considered. 23 The use of plates and screws in orthognathic surgery follows a treatment sequence, since orthodontic diagnosis and planning are fundamental for the isolated or joint correction of skeletal discrepancies of the jaws. 24,25 Orthodontic apical remodeling was observed in 180 charts in the post-treatment images, which is clinically relevant information. The forces of tooth movement provoke the activation of osteoclasts through the modulation pathway of inflammation; after exceeding a certain threshold, they cause the replacement of cells of the periodontal ligament, initiating a process of root resorption, also called orthodontic apical remodeling, 3 observed in this study mainly in the anterior region. Some studies investigate which genetic factors play an important role in its occurrence, in particular whether there is an increase in interleukin-1 (IL-1) alpha and beta, a chemical mediator involved in the processes of bone and tooth resorption found in periodontal tissues. 26 Al-Qawasmi 26 reported that a polymorphism in the IL-1 beta gene was responsible for 15% of the total variation of the orthodontic apical remodeling of the upper central incisors, while the other dental groups did not present a statistically significant association. This article reports a retrospective investigation performed only with the information obtained in the radiographic examinations; other information that could improve the diagnosis was not investigated. In addition, panoramic radiographs may show 15-25% image distortion. Although aware of these limitations, this research obtained important epidemiological data, which may alert professionals to the clinical and radiographic control of patients during and after orthodontic treatment. Future studies following all stages of orthodontic treatment, from clinical examination to radiographic imaging and treatment, and exploring the relationship between clinical and radiographic diagnoses, should be encouraged.

Conclusion

In this study it was possible to observe in the pre-treatment radiographs the presence of findings important for the diagnosis and planning of orthodontic treatment, some of them requiring specific treatments, such as dentigerous cyst and compound odontoma, in addition to retained and supernumerary teeth. The greatest number of incidental findings was present in the radiographs taken after orthodontic treatment. Apical orthodontic remodeling was present in the majority of the patients and requires clinical and radiographic follow-up.
The comparison between these two moments is extremely important because certain alterations may or may not have etiologies related to orthodontic therapy. The clinician should pay special attention to incidental findings during the follow-up of each patient.
Effect of Black Corn Anthocyanin-Rich Extract (Zea mays L.) on Cecal Microbial Populations In Vivo (Gallus gallus)

Black corn has been attracting attention as a subject for the investigation of biological properties due to its anthocyanin composition, mainly cyanidin-3-glucoside. Our study evaluated the effects of black corn extract (BCE) on intestinal morphology, gene expression, and the cecal microbiome. The BCE intra-amniotic administration was evaluated in an animal model in Gallus gallus. The eggs (n = 8 per group) were divided into: (1) no injection; (2) 18 MΩ H2O; (3) 5% black corn extract (BCE); and (4) 0.38% cyanidin-3-glucoside (C3G). A total of 1 mL of each component was injected intra-amniotically on day 17 of incubation. On day 21, after hatching, the animals were euthanized, and the duodenum and cecum content were collected. The cecal microbiome changes attributed to BCE administration were an increase in the Bifidobacterium and Clostridium populations and a decrease in E. coli. The BCE did not change the gene expression of intestinal inflammation and functionality. The BCE administration maintained the villi height, Paneth cell number, and goblet cell diameter (in the villi and crypt), similar to the H2O injection but smaller than the C3G. Moreover, a positive correlation was observed between Bifidobacterium, Clostridium, E. coli, and villi GC diameter. The BCE promoted positive changes in the cecal microbiome and maintained intestinal morphology and functionality.

Introduction

Corn, also known as maize (Zea mays L.), is one of the most produced cereals and one of the major food sources worldwide [1]. In recent decades, scientific research has focused on pigmented corn varieties due to their beneficial health properties [2]. Among them, black corn (Zea mays spp.) is a variety traditionally cultivated in South and Central America that has an affinity for warm and dry climates [1]. Black and purple corn can accumulate anthocyanin in different tissues; thus, these varieties have a significant concentration of these flavonoids [3].

Black Corn Extract Procedure

Prior to extraction, black corn grains were ground with a 1.0 mm stainless steel sieve (Willy, Solab®, Piracicaba, Brazil) to prepare the corn flour. The production of the extract was performed at room temperature and protected from light. The black corn flour was added to 50% ethanol (1:10 v/v) and then stirred on a magnetic stir plate (100 rpm, 60 min, room temperature). After the allotted time had passed, the suspension was vacuum-filtered through filter paper. The ethanol in the extract was evaporated in a rotary evaporator (40 °C) [26]. The resulting concentrate was then lyophilized, yielding a dried extract (Figure 1), whose weight was quantified to calculate the final yield relative to the initial flour mass.

Total Polyphenols and Antioxidant Capacity

The total polyphenol content of the dried extract was determined by the Folin-Ciocalteu assay [27]. The absorbance was measured (760 nm) and total polyphenols were expressed as grams of gallic acid equivalent (GAE) per 100 g of wet weight sample.
Anthocyanin Profile Analysis

The black corn anthocyanin-rich extract was analyzed by High Performance Liquid Chromatography (HPLC) on an Alliance Waters® model 2690/5 system with a Waters® photodiode array detector model 2996 (scanning from 210 to 600 nm, with quantification at 520 nm). The chromatographic separation was performed using a Thermo Hypersil BDS (Thermo Fisher Scientific, Waltham, MA, USA) C18 column (100 mm × 4.6 mm × 2.4 µm) at 40 °C, with an injection volume of 20 µL, a total run time of 20 min, and a 1.0 mL min−1 flow rate. The mobile phase was an aqueous solution of formic acid (Phase A) and acetonitrile (Phase B), and the quantification was performed using external standards. The gradient elution was 20% solvent B over 3 min, followed by a linear gradient up to 30% solvent B within 15 min, held there for 2 min, and then a linear gradient up to 60% solvent B in 13 min, held there for 2 min; returning to the initial conditions, 20% solvent B in 5 min, held there for 8 min for column rinse and re-equilibration. Mobile phase 2 consisted of an aqueous solution of formic acid as solvent A and acetonitrile as solvent B; the gradient was linear from 5 to 10.5% solvent B over 7 min 30 s, held there for 4 min 30 s, then a linear gradient up to 12% solvent B over 1 min, then another linear gradient up to 14% solvent B over 1 min, and then reduced back to 5% solvent B at 2 min 30 s and held there for 3 min 30 s for column rinse and re-equilibration [29]. Cyanidin-3-glucoside and pelargonidin-3-O-glucoside were used as standards.

Intra-Amniotic Experiment

Forty Cornish-cross fertile eggs from a commercial hatchery (Moyer's Chicks, Quakertown, PA, USA) were incubated under controlled temperature (37 ± 2 °C) and humidity (89 ± 2%) in a poultry farm incubator at Cornell University Animal Science. All experimental procedures were carried out in accordance with the Cornell University Institutional Animal Care and Use Committee (IACUC, protocol code: 2020-0077). The black corn extract and the cyanidin-3-glucoside (C3G) were diluted in 18 MΩ H2O, and the concentration was verified to achieve an osmolarity value of <320 Osm [18,22], in order to certify that the viable embryos would not be dehydrated upon administration into the amniotic fluid. On day 17 of incubation (embryonic development takes a total of 21 days), eggs with viable embryos (n = 36) were distributed by randomization into four groups with a similar weight frequency distribution. The groups were distributed as follows: (1) no injection; (2) 18 MΩ H2O; (3) 5% black corn extract (BCE); and (4) 0.38% cyanidin-3-glucoside (C3G). The intra-amniotic administration of black corn extract (1 mL/animal) was prepared at a concentration of 5% in accordance with our previous study [18].
The C3G was administered at a concentration of 0.38%, as this compound had yet to be tested intra-amniotically; hence, we chose to proceed with a lower dosage. A 1 mL solution was administered using a 21-gauge needle into the amniotic fluid following candling [22,25]. Afterward, cellophane tape was used to seal the injection holes, and all the eggs were allocated to hatching baskets to minimize bias related to allocation. On day 21, after hatching, chickens were weighed and then euthanized by CO2 exposure, and blood was collected by cardiac puncture. The duodenum and cecum were immediately collected; part of the duodenum and cecum were immersed in liquid nitrogen and then kept at −80 °C until further analysis, while the other portion of the duodenum was fixed in a 10% (v/v) formalin solution for histological analysis.

Total RNA Extraction from Duodenum

Total RNA extraction from the proximal duodenum (n = 5 animals/group) was performed with the RNeasy Mini Kit, Qiagen Inc. (Cat # 74004, Valencia, CA, USA), following the manufacturer's protocol. The procedures were performed under RNase-free conditions, and RNA was quantified by absorbance (260/280 nm). The integrity of the 18S ribosomal RNA was checked by agarose gel electrophoresis (1.5%) and staining with ethidium bromide. Extracted RNA samples were frozen at −80 °C until further analysis.

Gene Expression Analysis

The gene expression in the duodenum was determined by real-time polymerase chain reaction (RT-PCR) as described earlier [18,25]. Briefly, cDNA was created in a 20 µL reverse transcriptase (RT) reaction completed in a BioRad C1000 Touch thermocycler using the Improm-II Reverse Transcriptase Kit (Cat # A1250; Promega, Madison, WI, USA). The cDNA obtained was analyzed by Nanodrop (Thermo Fisher Scientific, Waltham, MA, USA); the concentration of cDNA was verified by measuring the absorbance (260/280 nm) with an extinction coefficient of 33 (for single-stranded DNA). The forward and reverse primers and the descriptions of the tested genes were designed based on the GenBank database. Real-time PCR amplifications were carried out under the following conditions: 95 °C for 30 s, followed by 40 cycles of denaturation (95 °C, 15 s), annealing (primer-specific temperature, 30 s), and elongation (60 °C, 30 s), in the Bio-Rad CFX96 Touch (Hercules, CA, USA). The gene expression data were obtained as the lowest cyclic product (Cp) values based on the "second derivative maximum" as computed by Bio-Rad CFX Maestro 1.1 (Version 4.1.2433.1219, Hercules, CA, USA). The assays were quantified through a standard curve in the real-time qPCR analysis; a standard curve with four points was prepared by 1:10 serial dilution. The software produced a Cp vs. log10 concentration graph, and the efficiencies were calculated as 10^(−1/slope) (a minimal sketch of this calculation is given below). The specificity of the amplified real-time RT-PCR products was verified by melting curve analysis (60-95 °C) after 40 cycles, resulting in several different specific products with specific melting temperatures.
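A minimal sketch of the standard-curve efficiency calculation, assuming the conventional formula E = 10^(−1/slope) with the slope taken from a least-squares fit of Cp against log10 concentration; the example points are hypothetical.

def qpcr_efficiency(log10_conc, cp):
    # Least-squares slope of Cp vs. log10(concentration), then E = 10**(-1/slope).
    n = len(cp)
    mx = sum(log10_conc) / n
    my = sum(cp) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(log10_conc, cp)) \
            / sum((x - mx) ** 2 for x in log10_conc)
    return 10.0 ** (-1.0 / slope)

# A perfect assay doubles the product each cycle (E ~ 2.0):
# qpcr_efficiency([0, -1, -2, -3], [15.0, 18.32, 21.64, 24.96])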
Intestinal Content and DNA Isolation

The cecum (n = 5 animals/group) from a separate chicken was aseptically removed and treated as described elsewhere [18,30]. In short, the cecum content (200 mg) was placed into a plastic tube with phosphate-buffered saline (PBS) solution and homogenized by vortexing with glass beads (3 mm in diameter) for 3 min. To remove the debris, the tube was centrifuged and the supernatant was collected. Before DNA extraction, the pellet was washed twice with PBS and stored at −20 °C. For the purification of DNA, the pellet was re-suspended in 50 mM ethylenediaminetetraacetic acid (EDTA) and treated with lysozyme (Sigma Aldrich Co., St. Louis, MO, USA). The bacterial genomic DNA was isolated using the Wizard Genomic DNA Purification Kit (Cat # A1120, Promega Corp., Madison, WI, USA).

Primer Design and PCR Amplification of Bacterial 16S rDNA

Primers for Lactobacillus, Bifidobacterium, Clostridium, Escherichia coli, and L. plantarum were used. The universal primers were designed based on prior research [25,30,31]. PCR products were separated by electrophoresis on a 2% agarose gel, stained with ethidium bromide, and quantified by Quantity One 1-D analysis software (Version 4.6.8, Bio-Rad, Hercules, CA, USA). All products were expressed relative to the content of the universal 16S rRNA primer product, giving the proportions of each examined bacterial group.

Histological Analysis

Duodenal morphology was assessed as previously described [18,32]. Briefly, duodenum sections were fixed using a 4% (v/v) buffered formaldehyde solution, dehydrated, cleared, and embedded in paraffin. Sections (5 µm) were placed on glass slides, deparaffinized in xylene, rehydrated in ethanol, and stained with Alcian Blue/Periodic acid-Schiff. The morphometric measurements of villus height (µm), villus surface area, crypt depth (µm), goblet cell number and goblet cell diameter (µm) in the crypt and the villi, Paneth cell number, and Paneth cell diameter were assessed using a light microscope (CellSens Standard software, Olympus, Waltham, MA, USA). Five segments of each biological sample (n = 3/treatment group) were assessed, and ten randomly selected villi and crypts were analyzed per segment (50 replicates per biological sample). Villus surface area was obtained by the equation VSA = 2π × (VW/2) × VL, where VW is the villus width (average of three measurements) and VL is the villus length (a small sketch of this calculation is given after the statistical analysis paragraph below).

Statistical Analysis

Experimental groups were completely randomized. Statistically significant differences between experimental groups were assessed by one-way Analysis of Variance (ANOVA) and a post-hoc Duncan test for data with a normal distribution; normality was tested using the Shapiro-Wilk test. Means without a normal distribution were analyzed using Kruskal-Wallis and a post-hoc Dunn's test. Data are expressed as mean ± standard error deviation (SED), and differences were considered significant when p < 0.05. The association and significance between intestinal biomarkers, bacterial populations, and histological parameters were analyzed by Spearman's rank correlation coefficient. GraphPad Prism® version 8.0 software packages (GraphPad Software Inc., San Diego, CA, USA) were used for graphing and data analysis.
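A minimal sketch of the villus surface area calculation under the cylinder model reconstructed above; the width values are hypothetical measurements in µm.

import math

def villus_surface_area(widths, length):
    # VSA = 2*pi*(mean width / 2)*length, with the villus width averaged
    # over the three measurements taken per villus.
    vw = sum(widths) / len(widths)
    return 2.0 * math.pi * (vw / 2.0) * length

# Example: three width readings of ~100 um and a 900 um villus.
# villus_surface_area([98.0, 102.0, 100.0], 900.0) -> ~282,743 um^2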
Black Corn Extract Characterization

Cyanidin-3-glucoside (C3G) was identified as the principal anthocyanin constituent of the black corn extract (BCE), followed by pelargonidin-3-O-glucoside. The BCE showed a high concentration of total phenolic compounds (555 mg GAE/100 g), and the antioxidant capacity was 70.79% (Table 2).

Effect of BCE on the Bacterial Population in Cecal Content

The BCE promoted significant changes in the cecal bacterial populations. Specifically, the BCE and the C3G increased (p < 0.05) Bifidobacterium and decreased (p < 0.05) E. coli populations compared to No injection and H2O injection. The BCE group had the highest abundance of Clostridium compared to the other treatment groups. Further, the abundance of Lactobacillus significantly (p < 0.05) decreased after the C3G intra-amniotic administration compared to the control and BCE groups. The abundance of L. plantarum was similar (p > 0.05) among all experimental groups (Figure 2).

Effect of BCE on Duodenal Gene Expression

The gene expression of duodenal interleukin 1 beta (IL-1β) and nuclear factor kappa beta (NF-κβ) was similar (p > 0.05) among the experimental groups. The pro-inflammatory cytokine tumor necrosis factor alpha (TNFα) was downregulated (p < 0.05) in the C3G group compared to the BCE and the H2O injection (Figure 3A). Furthermore, to evaluate the integrity of the intestinal physical barrier, the mRNA expression of AMP-activated protein kinase (AMPK), occludin (OCLN), and voltage-dependent anion channel (VDAC) was determined, but no significant difference (p > 0.05) was observed among the groups for these variables. On the other hand, the caudal-related homeobox transcription factor 2 (CDX2) gene expression was downregulated (p < 0.05) after the intra-amniotic administration of H2O, BCE, and C3G compared with No injection (Figure 3B). Intestinal functionality was assessed through intestinal transporters. The mRNA expression of cellular retinol-binding protein 2 (CRBP2) and ZIP4 was similar among all experimental groups (p > 0.05). However, lecithin:retinol acyltransferase (LRAT) and zinc transporter 1 (ZnT1) were downregulated (p < 0.05) in the C3G group compared to the H2O injection group, but there was no difference between the intra-amniotic administration of BCE and H2O for these markers (Figure 3C).
Effect of BCE on Duodenal Morphology

A morphological analysis of the duodenum was performed to observe the intra-amniotic effects of BCE on the duodenal mucosa. The animals that received the BCE had no changes in villi height compared to the H2O injection (p > 0.05). The C3G group showed the greatest villi height among all experimental groups (p < 0.05). Further, the duodenal crypt depth and the Paneth cell number were higher in the C3G compared to the BCE group (p < 0.05). The Paneth cell number was higher (p < 0.05) in the BCE when compared to No injection (Table 3; values are means ± SED, n = 3 animals/group; BCE: black corn extract; C3G: cyanidin-3-glucoside; treatment group means for specific variables followed by the same letter are not significantly different (p > 0.05) by Kruskal-Wallis and a post-hoc Dunn's test).

Moreover, a goblet cell (GC) morphological analysis was performed in the villi and the crypt. In the villi, the GC diameter (Figure 4A) and number (Figure 4B) were higher (p < 0.05) after the intra-amniotic C3G administration compared to the BCE group, which had values similar to the control groups. Furthermore, the BCE promoted a decrease (p < 0.05) in acid GC compared to the C3G, H2O injection, and No injection (Figure 4C). The villi mixed GC count was higher in the BCE and C3G than in the H2O injection and No injection (Figure 4D). In the same way, in the crypt, the C3G increased (p < 0.05) the GC diameter compared to the other experimental groups, and the BCE was similar to the control groups (Figure 4E). Further, the BCE and the C3G promoted a decrease in the GC number compared to the other groups (Figure 4F). After classifying the GC, we observed that the BCE and C3G had the lowest number of villi acid GC compared to the control groups (Figure 4G). There was no difference in the crypt mixed GC in the BCE group compared to the H2O injection and C3G (Figure 4H). In our results, significant correlations were observed between the intestinal parameters investigated (Figure 5). Positive correlations were observed between Bifidobacterium and Clostridium, E. coli and villi GC diameter, and CDX2 and OCLN. Furthermore, villi height, TNFα, NF-κB1, and CDX2 showed a negative correlation.

Discussion

The current scientific literature suggests that the dietary intake of bioactive components offers significant health-promoting benefits [5,15].
Bioactive components include a range of phenolic components, in which each subgroup exerts different tissue and/or cellular effects and promotes beneficial responses in the organism [15,20]. Our study focused on the effects of black corn (Zea mays) anthocyanin-rich extract on intestinal functionality, morphology, and microbial populations using an intra-amniotic approach. The intra-amniotic administration of black corn extract (BCE) promoted a significant increase in the cecal Bifidobacterium and Clostridium populations and reduced E. coli. The BCE did not change the duodenal brush border membrane morphology and functionality compared to the control groups. The BCE composition showed significant levels of C3G and total phenolic compounds. Purple corn flour has shown anthocyanin contents (mg cyanidin-3-glucoside/100 g) varying from 220 [33] to 310.04 mg [24]. A wide variation has also been observed among different genotypes, from 12.8 to 93.5 mg C3G/g across 20 genotypes [34]. In this context, considering our previous study with the same food source but as a flour (black corn flour) [13], the values of C3G in the extract (283.91 mg/100 g) were almost ten-fold higher than in the flour (30.40 mg/100 g). Total phenolic compounds were similar in the extract (555 mg GAE/100 g sample) compared to the flour (614.30 mg GAE/100 g). Solid-liquid extraction with solvents is the simplest and most common method for extracting phenolic compounds and is performed to achieve higher yields of the required compounds [35]. Polyphenolic compounds are secondary metabolites of plants that play a role in plant adaptation to the environment [36], as well as having potential bioactivities in animal organisms [15]. According to their chemical structure, phenolic compounds are classified into categories, of which the largest group is the flavonoids, with anthocyanins as a subgroup [37]. Furthermore, the 16S rDNA analysis investigated five bacterial populations and revealed that BCE and C3G increased Bifidobacterium and reduced E. coli populations in comparison to the other experimental groups (No injection and H2O injection). The C3G metabolism promotes the proliferation of the genus Bifidobacterium in the cecum [38]. Species of Bifidobacterium can produce a β-glucosidase enzyme, which supports the hydrolysis of C3G into aglycones and phenolic compounds, which in turn promotes the growth of these beneficial bacteria [17]. In a study with berries, the bifidogenic effect was attributed to the content of anthocyanin but also of polyphenols, as polyphenols contribute to creating a redox environment beneficial to the selection of Bifidobacteria, which is favored by a low oxidation-reduction potential [39,40]. Furthermore, E. coli comprises diverse pathogenic strains that may impair the epithelial barrier by disrupting tight junction proteins [41]. The protective effect of anthocyanin against pathogenic bacteria might act through its intestinal metabolite protocatechuic acid [12], which has been shown to inhibit the growth of E. coli [42]; this agrees with the observed reduction of E. coli abundance in the current study. In agreement, the inhibition of E. coli might be associated with the villi goblet cell (GC) diameter and crypt GC number, as indicated by the positive correlation between these variables. GC produce the most important substance in the mucus layer, mucin, which forms a gel barrier against pathogenic bacteria [43]. Therefore, we speculate that a reduction in E. coli due to the BCE contributes to maintaining the GC number at control levels.
Moreover, the BCE administration increased the Clostridium and Lactobacillus populations in the cecal content compared to the C3G administration. In addition to C3G, other phenolic compounds are found in the black corn extract, which might explain these findings. Several Lactobacillus strains use phenolic compounds as a carbon source, thus maintaining their growth, while also being involved in the hydrolysis of phenolic compounds during fermentation [44]. Polyphenols are suggested to exert a prebiotic-like effect by increasing Lactobacillus populations [15], and strains of Lactobacillus are considered probiotics due to their immunomodulatory and anti-inflammatory actions, inhibition of bacterial toxins, and competition with pathogens [45]. Therefore, further investigation should focus not only on the anthocyanin profile but also on a full phenolic characterization. In the present study, the BCE did not affect the intestinal brush border membrane (BBM) biomarkers: interleukin-1 beta (IL-1β), tumor necrosis factor-alpha (TNFα), and nuclear factor kappa B (NF-κB). However, the isolated C3G administration downregulated TNFα expression compared to the H2O injection and BCE. Considering the chemical composition of the BCE, it delivered a higher dose of cyanidin-3-glucoside (0.014 mg/mL) than the isolated C3G treatment (0.003 mg/mL). It was previously demonstrated that the ability of polyphenols to downregulate TNFα gene expression is concentration dependent: a 2% saffron extract downregulated TNFα expression, but 5% and 10% extracts did not, as tested in vivo via intra-amniotic administration [21]. Additionally, we highlight that even with a high dosage of cyanidin-3-glucoside, the black corn extract did not exert any detrimental effect on the investigated inflammatory pathway, as there was no difference in the biomarkers in the BCE group versus the controls. Further investigation with additional biomarkers is required to establish the dosage and profile of phenolic compounds needed to exert an anti-inflammatory effect. The BCE administration did not alter villi height or GC diameter (crypt and villi) relative to the H2O injection. However, these variables were lower in the BCE group compared to the C3G group. Therefore, we hypothesize that this result is not attributable to the anthocyanin level but probably to other phenolic components that might be present in the extract, such as protocatechuic, vanillic, p-hydroxycinnamic, and ferulic acid [46]. In agreement, the administration of saffron flower extract, a source of phenolic components, showed a dose-dependent effect in decreasing villus surface area, goblet cell number, and diameter [21]. The duodenal morphometric observations in the current study may indicate that, depending on their composition, polyphenolic components can exert distinct effects on brush border development and absorptive capacity [21,23,34]. The variation in polyphenol composition among four distinct types of beans contributed to different results in intestinal morphology and functionality [23]. Interestingly, regarding the type of goblet cells, the BCE group showed the lowest number of acidic GC (in the villi and crypt). A luminal acidic pH facilitates the growth of beneficial bacteria over detrimental bacteria [47]. Therefore, the decrease in acidic GC might be associated with the increase in the Clostridium population verified in the BCE group [48].
However, even with the growth of Clostridium bacteria, the BCE did not affect the Paneth cell number relative to the H2O injection group. These cells indicate an early state of inflammation, infection, and toxicity through their secretion of antimicrobial peptides [49]. Finally, in a prior experiment, we showed the beneficial effects of a black corn soluble extract (composed of 6.33 g of total dietary fiber/100 g) on intestinal inflammation parameters, morphology, and BBM barrier function [18]. In the present study, by contrast, the black corn extract (5%) is composed mainly of phenolic components without any dietary fiber. It modulated the cecal microbiome by changing specific bacterial populations while maintaining intestinal morphology and functionality without detrimental effects. Thus, we highlight the positive effects of a black corn anthocyanin-rich extract without any soluble dietary fiber, which was able to improve the cecal microbial populations and maintain intestinal morphology and functionality without any detrimental effects in vivo.

Conclusions

The black corn anthocyanin-rich extract improved the cecal microbiome by increasing Bifidobacterium and Clostridium and reducing the E. coli population while maintaining intestinal morphology and functionality. Further, the C3G group showed additional effects in improving intestinal morphology versus the BCE, suggesting that the combination and dosage of phenolic compounds might influence intestinal morphology development. Therefore, our results suggest that black corn anthocyanin-rich extract is a promising matrix to be used as a functional extract to improve intestinal microbial populations, and further studies on the dosage and profile of phenolic compounds in this food matrix are now warranted.
A study on mental health status and its determinants in elderly people of Raipur city, Chhattisgarh, India

INTRODUCTION The ageing of the world's population is a global phenomenon with extensive economic and social consequences. A multitude of social, demographic, psychological, and biological factors contribute to a person's mental health status. Almost all of these factors are particularly pertinent among older adults. Factors such as poverty, social isolation, loss of independence, loneliness, and losses of different kinds can affect mental health. 1 Although these disorders have a low prevalence, the impact they have on individuals, families, and societies is huge. There are more older women worldwide than older men. This difference increases with advancing age and has been called the "feminization of ageing". The rapid increase in nuclear families and contemporary changes in the psychosocial matrix make older people vulnerable to developing mental health problems. In the above context, the present study was conducted in the elderly population of Raipur City, Chhattisgarh, India, with the objective of determining the prevalence of mental illness and its determining factors. 2 METHODS A community-based cross-sectional observational study was conducted in 32 randomly selected areas of Raipur city, including urban and slum areas, from July 2013 to June 2014. A multi-stage simple random sampling method was used. A total of 640 subjects were included in the study. The sample size was calculated using the statistical formula n = Z^2_{1-α/2} P(1-P)/d^2. 1 A predesigned proforma and the Duke Health Profile were used as study tools. The Duke Health Profile is based on a self-rating scale. For mental health, 100 indicates the best health status and 0 indicates the worst; for anxiety, depression, anxiety-depression, pain, and disability, 100 indicates the worst health status and 0 indicates the best. Ethical clearance was obtained from the institutional ethical committee and informed consent from each subject. All elderly persons aged 60 years and above who had resided in the study area for at least one year and were willing to participate without compulsion were included; those who were not willing to participate were excluded. All participants were categorized into three sub-groups: young-old, 60 to 74 years; old-old, 75 to 84 years; and oldest-old, ≥85 years. 2 Data analysis was done by employing percentages and tests of significance using appropriate software.
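As a minimal illustration of the sample-size formula above, the following sketch computes n for an assumed prevalence P and precision d at 95% confidence. The values of P and d here are placeholders for demonstration only, not the values used in the study.

```python
# Illustrative sample-size calculation: n = Z^2 * P * (1 - P) / d^2.
# P (anticipated prevalence) and d (absolute precision) are assumptions.
from scipy.stats import norm

alpha = 0.05
z = norm.ppf(1 - alpha / 2)   # ≈ 1.96 for 95% confidence
P = 0.20                      # assumed prevalence of mental health problems
d = 0.05                      # assumed absolute precision

n = (z ** 2) * P * (1 - P) / d ** 2
print(f"Required sample size: {round(n)}")   # ≈ 246 before design effects
```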
RESULTS The study population was predominantly female (58.28%) and in the young-old age group (81.71%). Most people belonged to the middle socioeconomic status (46.71%), followed by lower socioeconomic status (42.03%). The majority lived in joint families (84.06%). A significant proportion were widowed (57.5%), followed by married (40.62%). About 85% were dependent. Most people (51.71%) engaged in household activities in their leisure time (Table 1). Out of the total study population, 20.31% had excellent mental health status, whereas 79.68% showed average status; none had the worst status (Table 2). The study observed that 52.03% of the total study population had anxiety, 27.65% had depression, and 20.31% were normal with excellent mental health status (Table 3). The study revealed that 66.87% of the total population had average mental health status, whereas only 14.84% showed excellent mental health status and belonged to the young-old age group. Males (36.70% excellent) had better mental health than females (8.57% excellent). About 45.83% of independent people showed excellent mental health, whereas only 15.80% of dependent people did. Those performing household activities in their leisure time had better mental health (22.35% excellent) than those doing nothing (4.37% excellent). An inverse relationship was observed with family status: 26.47% of those living in nuclear families showed excellent status versus 19.14% of those in joint families. DISCUSSION The present study population was predominantly female (58.28%) and in the young-old age group (81.71%). A significant proportion were widowed (57.5%), followed by married (40.62%). In another study, a similar trend was observed: 79% were female and 21% were male, reflecting the fact that females have a relatively higher life expectancy than males. Contrary to the present study, a higher proportion of residents (47%) in that study were in the 80-90 year range; regarding marital status, 54% were widowed and only 9% were unmarried. 3 Out of the total study population, 20.31% had excellent and 79.68% had average mental health status, and none had the worst (Table 2). In other studies, 20.0% and 13% of adults aged 55 and over suffered from a mental disorder. 4,5 The present study observed that 52.03% of the study population had anxiety, 27.65% had depression, and 20.31% were normal with excellent mental health status (Table 3). In another study, it was estimated that 20% of people aged 55 years or older experience some type of mental health concern, the most common conditions being anxiety, severe cognitive impairment, and mood disorders (such as depression or bipolar disorder). 6 In another study, schizophrenia had an estimated point prevalence of 0.4% and a lifetime risk of 1%, i.e., one in a hundred people will suffer from schizophrenia during their lifetime. 7 Unlike the present study, another author showed that, for both sexes, there was a reduction in the prevalence of high and very high psychological distress in people aged over 55 (30% and 18% of females and males, respectively) (ABS 2013). 12 Males (36.70% excellent) had better mental health than females (8.57% excellent). A similar trend was observed in another study, where women reported a higher prevalence of mental health issues such as anxiety (p = 0.02) and insomnia (p = 0.02) compared with men. 13 About 45.83% of independent people showed excellent mental health, whereas only 15.80% of dependent people did. A similar study by another author suggested that, among the health problems studied, depression was significantly associated with unemployment (p<0.05). 10 Working beyond traditional retirement ages may be beneficial for mental health in some populations. 14 Those performing household activities in their leisure time had better mental health (22.35% excellent) than those doing nothing (4.37% excellent).
A similar finding was reported by another author: statistically significant inverse associations were found between total physical activity and leisure physical activity versus dementia and depression (p<0.001). 15 An inverse relationship was observed with family status: 26.47% of those living in nuclear families showed excellent status versus 19.14% of those in joint families. With respect to socio-economic status, 34.31% of upper-class participants showed excellent status, while only 1.58% of the lower class did. Married persons had better mental status than widowed and separated persons (28.07%, 14.94%, and 16.66%, respectively, had excellent mental health) (Table 4). Socioeconomic measures of disadvantage such as unemployment, being unmarried, low income, and low education have been shown in many studies to be positively related to the prevalence of psychiatric disorders. 5 CONCLUSION Females living longer than males leads to the feminization of ageing. It is important to improve social capital and to involve communities and families in supporting older adults.
Meta Structural Learning Algorithm with Interpretable Convolutional Neural Networks for Arrhythmia Detection of Multi-Session ECG Detection of arrhythmia in electrocardiogram (ECG) signals recorded across several sessions for each person is a challenging issue, which has not been properly investigated in the past. This arrhythmia detection is challenging because a classification model constructed and tested on such signals must maintain generalization when dealing with unseen samples. This article proposes a new interpretable meta structural learning algorithm for this challenging detection. To that end, a compound loss function is suggested, including a structural feature extraction loss and a label-space loss with a Gumbel-Softmax distribution in the convolutional neural network (CNN) models. Collaboration between models was carried out to create a learning-to-learn capability by transferring knowledge among them when confronted with unseen samples. One of the deficiencies of a meta-learning algorithm is the non-interpretability of its models. Therefore, to create interpretability for the CNN models, they are encoded as evolutionary trees of genetic programming (GP) algorithms in this article. These trees learn the process of extracting deep structural features in the course of the evolution in the GP algorithm. The experimental results suggest that the proposed detection model achieves an accuracy of 98% in the classification of 7 types of arrhythmia on the samples of the Chapman ECG dataset, recorded from 10646 patients in different sessions. Finally, the comparisons demonstrate the competitive performance of the proposed model with respect to other models based on large deep networks. I. INTRODUCTION Computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems help physicians interpret medical images and signals. ECG recording techniques, X-ray, MRI, and ultrasound systems create a large amount of information, and a professional radiologist or physician has to analyze and assess it in quite a short period of time [1]. CAD systems process digital signals and images and highlight suspicious parts, such as possible diseases, to facilitate the physician's decision-making process. These assistants can help physicians reduce the human error caused by fatigue [2]. One of the most developed of these diagnosis systems is the system for automatic diagnosis of cardiac diseases using ECG signals. The electrical signals generated by various activities of the upper and lower heart muscles are measured with electrocardiograms (ECGs). However, sometimes the performance of these muscles is affected by external factors, such as excessive blood fat, which can cause arrhythmia in the electrical performance of the heart. For instance, a heart problem in the walls of the ventricles can be a cause of arrhythmia [3]. Moreover, supraventricular complications, originating above the ventricles in a region called the atria, are another source of arrhythmia in the electrical signals of the heart. According to the statistics, heart complications account for one-third of the mortality rate in adults aged 35-90 years [4]. Hence, researchers have used artificial computer systems to propose efficient methods for the early diagnosis and classification of arrhythmia through ECG signals.
Deep learning models such as convolutional neural networks (CNNs) [5,6], AlexNet [7], VGGNet [8], and GoogleNet [9] have proven greatly efficient in recent years. The relevant analyses have achieved high rates of accuracy in the diagnosis of arrhythmia through ECG signals [8,10,11]. From a medical standpoint, every lead observes the heart from its own perspective in multi-lead ECG signals. These signals are categorized into two general classes: the first includes the chest signals, which observe the heart from various angles of the axial plane; the second contains the limb (body) signals, which view the heart from various angles of the coronal plane. Each of these signals highlights a part of the overall state of the heart [12]. Although the arrhythmia of a signal is stable during each ECG recording session, the range of this arrhythmia might change between sessions. In general, the deep learning methods proposed for the diagnosis of arrhythmia handle ECG signals recorded in a single session. The outstanding characteristic of deep learning models developed on single-session ECG signals is their high accuracy. For instance, Zheng et al. [13] proposed a CNN-based method yielding an accuracy of 97% for the classification of seven types of arrhythmia in a dataset of 10646 patients and 35565 ECG signals, all of which were recorded in a single session. Big-ECG [14] is another well-known method. This method first detected the QRS complex in the signals of 45 patients and then classified the signals using a Random Trees model, obtaining an accuracy of 95.6%. In addition, Faust et al. [15] employed a fine-tuning method along with transfer learning in an ImageNet model. They achieved 97% accuracy in the classification of seven types of arrhythmia on a dataset including 10646 patients and 35565 ECG signals, all recorded in a single session. Since these models were developed on single-session ECG signals, their main deficiency is low generalizability when dealing with unseen samples. As discussed previously, generalization is essential, for each patient's range of arrhythmia might change between sessions. Moreover, Da Silva et al. [16] proposed a dataset of ECG signals recorded in different sessions, called CYBHi. They indicated that an ECG detection model can only be accurately developed and evaluated when it is trained and tested on signals recorded in separate sessions. Hence, the advantage of developing an arrhythmia detection system from multi-session ECG signals is a comprehensive performance analysis of the diagnosis method on disjoint acquisition sessions. As opposed to previous techniques, such a method is tested on different states of the subjects over several days of experimentation. A method developed this way is therefore generalizable and avoids overfitting when dealing with new and unseen samples; this can be regarded as an extension of permanence analysis. However, due to these generalization difficulties, there is a substantial literature gap in the use of deep learning models for signals recorded in different sessions. Meta-learning is an emerging method for developing the ability of machine learning models to deal with new conditions [17].
Meta-learning emphasizes providing guidance for machine learning models so that they can make the best decision in new conditions, including unseen samples. Moreover, meta-learning seeks a way towards learning to learn. Thus, it can play a key role in resolving the problem of arrhythmia classification in multi-session ECG signals [18]. In recent years, numerous methods have been proposed to develop models that exhibit consistent behavior when handling signals recorded in various sessions, and numerous algorithms have been introduced for meta-learning, including reinforcement learning, transfer learning, and active learning. The following two considerations should be examined when employing meta-learning for multi-session signals: 1. ECG signals are structural signals consisting of QRS complexes; therefore, ordinary meta-learning algorithms are not appropriate for this signal, and an approach combining meta-learning and structural learning should be used. 2. Interpretability in meta-learning is a crucial principle, since every model that seeks a higher level of generalization must be easy to interpret and should not behave like a common black-box neural network [19,20]. The challenge addressed in this article is the arrhythmia classification of multi-session ECG signals, a new problem in this field. Therefore, this article proposes a structural meta-learning method, realized in a CNN, to resolve the problems of arrhythmia classification in multi-session ECG signals. The following are the three main keywords of the proposed method: 1. meta-learning, 2. structural learning, 3. interpretability. The solutions for each of these parts are explained in detail in this article. First, a compound loss function is designed in the CNN, which is capable of implementing the learning-to-learn feature of meta-learning. In this loss function, guidance is provided for the central model to enable it to self-educate. Second, to add interpretability to the CNN, the model is encoded as evolutionary trees in genetic programming. These trees are CNN models that are learned in the course of the evolution process. Third, in order to add the structural learning feature to the proposed meta-learning algorithm, the CNN trees in the GP carry out the deep feature learning process using morphological structural operators. In the experimental results section, the number of parameters and the computational complexity of the proposed model are evaluated for the classification of the samples of the Chapman ECG dataset, which includes 10646 patients [13]. The experimental results suggest that the proposed model achieved an average accuracy of 97% with the Lead III input signal for the classification of 7 types of arrhythmia in the Chapman ECG dataset. The main innovations of this article are as follows: 1. Providing a new method for arrhythmia classification of multi-session ECG signals for each person (i.e., training and testing the model on two separate sessions). 2. Introducing a new interpretable meta-learning algorithm via encoding the CNN model in the evolutionary trees of the GP algorithm. 3. Adding the structural learning feature to the proposed meta-learning algorithm to detect the QRS complex in ECG signals using morphological structural operators in the evolutionary trees of the GP algorithm. Section II of the article reviews the studies on arrhythmia classification of ECG signals.
Section III describes the proposed structural meta-learning algorithm. Section IV comprises the experimental results. Section V provides the conclusion of the article. II. RELATED LITERATURE The ECG signals include various arrhythmias, among which 11 types were mentioned in the previous section. The methods are divided into different classes with respect to the number of arrhythmias classified in the various investigations. Besides, it should be noted that all the investigations introduced for diagnosing arrhythmia have been carried out on single-session ECG signals. Among all methods introduced so far, deep learning is one of the top approaches in the field of ECG signal classification. Employing more layers in deep learning enables a deeper model, and when ECG signals pass through these layers, more distinguishing features can be extracted to facilitate the detection of arrhythmia patterns. The entire feature extraction process in deep learning models is carried out automatically, without the need for hand-crafted feature extraction. In recent years, a broad spectrum of ECG-dataset-based deep learning methods has been introduced, using the MIT-BIH dataset and several private datasets. The methods introduced in [21], [22], and [23] are CNN-based deep learning models, which have classified various arrhythmias in ECG signals. The majority of these methods use 1D filters in the CNN, since the ECG signals in the MIT-BIH dataset are 1D as well. Several CNN-based methods are explained here. In [24], a CNN model with 8 convolution layers, 4 pooling layers, and one fully connected layer was used, which obtained an accuracy of 93.1% for 22 patients and 109449 annotated beats. In addition, in [21], a CNN model with 8 convolution layers bearing 1D filters, 4 pooling layers, and one fully connected layer was proposed for the classification of 5 types of arrhythmia (non-ectopic, supraventricular ectopic, ventricular ectopic, fusion, and unknown beats). This method obtained an accuracy of 94%. In [25], a CNN model comprising 8 convolution layers bearing 1D filters, 4 pooling layers, and two fully connected layers was used for the classification of 12 types of arrhythmia. In that article, the proposed model was examined on a dataset that is not publicly accessible, including 91232 ECG records from 23500 male and 2004 female patients. The results suggested that this method achieved an accuracy of 98.5%. In [26], a deep learning method with the standard U-Net architecture, comprising thirty-two 1D residual layers, was introduced to classify 5 types of arrhythmia. The results suggested that this method achieved an accuracy of 98.5%. In [22], a CNN model including 12 convolution layers was used for the classification of 44 records. In [27], CNN models consisting of 8 convolution layers bearing 1D filters, 4 pooling layers, and two fully connected layers with an end-to-end design were suggested for the classification of 17 and 15 arrhythmias in ECG signals, respectively. Per the results, these methods achieved accuracies of 98.5% and 94%, respectively. Besides these methods, some models were evaluated on 12-lead ECG signals. In [28], an LSTM model was used on 38,899 patients and for 12 different arrhythmias. The results manifested 90% accuracy. Moreover, in a further study, a CNN model was used on an unknown number of patients and for 8 different arrhythmias. The results manifested an F1-score of 81%.
Despite the advantages of the aforesaid models, such as accuracies higher than 90% and end-to-end designs, the datasets used in these models, including MIT-BIH, were collected about 40 years ago and have problems such as class imbalance and a small number of patients. Therefore, the results of these studies cannot be endorsed for practical use. Zheng et al. [13] collected a dataset called Chapman ECG, consisting of 12-lead ECG signals from more than 10,000 people. The interesting point about this dataset is that the signals were recorded from the patients over several different days and during different sessions. A primary evaluation based on a gradient boosting tree model was conducted on the classification of this dataset. This model obtained an accuracy of nearly 97% for each class of arrhythmia separately. Beyond the evaluations conducted in [24], a CNN model comprising three 1D convolutions was proposed for the classification of the 12-lead signals of the Chapman ECG dataset. It yielded an average AUC of 79.60%. Faust et al. [29] presented a newer and more integrated method on this dataset. In that article, a transfer-learning-based fine-tuning method was applied to an ImageNet deep learning model. Following the evaluations of that article, an accuracy of 92.24% was obtained for the classification of 7 types of arrhythmia and 96.13% for the classification of 4 types of arrhythmia. The major deficiency of this method is its use of fold-based splitting of the data between training and testing, which allows records from the same session to appear in both training and testing. All methods introduced so far share a fundamental presumption: that the ECG signals of the training and testing stages were recorded for each person during the same session. As mentioned in the previous section, the only study concerned with the challenge of multi-session ECG classification was carried out by Da Silva et al. [16]. That article sought to demonstrate this challenge; it assessed its proposed CNN-based method on ECG signals from a single session for both training and testing and obtained an accuracy of 96%. However, the accuracy of the same model on multi-session ECG signals in the training and testing stages was reduced to 88%. Thus, the classification of arrhythmia in multi-session ECG signals for individuals is a challenging subject that is still open to investigation. III. PROPOSED METHOD The ECG dataset used in this article was collected by Chapman University and Shaoxing People's Hospital (Chapman ECG in short) [13]. Table I shows the numerical details of the dataset. The ECG signals for each person were recorded over several days and during different sessions. This enables us to assess the proposed method on multi-session ECG signals. In this dataset, the 12-lead ECG signals were recorded from 10646 people at a frequency of 500 Hz or higher. Each 12-lead ECG signal is a 10-second strip. In addition, initial pre-processing was applied to this dataset to smooth the ECG signals using the Butterworth filter and the Non-Local Means technique; a hedged sketch of this kind of filtering is given below.
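The following is a minimal sketch of Butterworth smoothing of a raw ECG strip of the kind described above. The cutoff frequency, filter order, and the toy input signal are illustrative assumptions, not the values used by the authors; the Non-Local Means step is omitted.

```python
# Low-pass Butterworth filtering of a 10-second, 500 Hz ECG strip.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                      # sampling frequency (Hz), as in the dataset
t = np.arange(0, 10, 1 / fs)    # a 10-second strip
raw = np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.random.randn(t.size)  # toy signal

b, a = butter(N=4, Wn=40.0 / (fs / 2), btype="low")  # 4th-order, 40 Hz cutoff (assumed)
smoothed = filtfilt(b, a, raw)  # zero-phase filtering preserves QRS timing
```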
Figure 1 shows a general schematic of the meta-learning-based model proposed in this article. The input of this model is a dataset D = {(x_i, y_i)}, in which x_i signifies one of the ECG signals in the Chapman dataset and y_i indicates the arrhythmia label, which is one of SB, SR, AFIB, ST, AF, SI, SVT, AT, AVNRT, AVRT, and SAAWR. This model includes two phases for one task, i.e., meta training and meta testing. In the meta training, the training dataset D_train = {(x_i, y_i)}, i = 1, ..., N, is used for training the classifiers, in which N is the number of training samples. A. Meta Structural Learning Model The meta testing aims at detecting the labels of the query samples x_q, q = 1, ..., Q, in which Q indicates the number of queries. Taking into account that the ECG signals in the Chapman dataset were recorded during different sessions, the meta-learning model must have the following two characteristics: 1. the non-linear mapping in the artificial neural networks must generalize to unseen samples from different sessions; 2. the mapping must preserve the relationship between the classes of the unseen samples. Thus, this article seeks to transfer the knowledge related to ECG signals in various sessions between classification models. In light of that, per Fig. 1, a two-phase communication network is proposed for this problem. First, the model for the extraction of transferable features is meta-learned on the training samples using the CNN trees. Then, the features of the query sample are extracted by the same model and compared with the training features to calculate their distance, and a relationship between the training samples and the query is acquired using the label space. In general, the loss of this model combines a feature-space term and a label-space term, d_feat and d_label. Here, d_feat(x_i, x_j) is the distance between the feature maps extracted by the CNN tree for two input signals x_i and x_j, calculated as in equation (2): d_feat(x_i, x_j) = ||f(x_i) − f(x_j)||, in which f(·) denotes the interpretable representation learning section for the input; it will be elaborated on in Section III.C. In addition, d_label is the distance between the labels estimated by the model for the two input signals x_i and x_j, calculated through the Gumbel-Softmax distribution over the soft labels (q_1, ..., q_C). These soft labels are calculated using equation (4): q_c = exp(z_c / T) / Σ_k exp(z_k / T), in which z_c signifies the score calculated by the model and q_c is the corresponding soft label. With a hard label coded as one-hot 1, only the information on the class itself can be transferred among the models, and it is impossible to convey other information, such as the relationship between various arrhythmia classes. Therefore, in equation (4), a soft label and a hard label are used together for training the models. In equation (4), the parameter T indicates the extent of the softness of the labels. In normal classification applications, T is set equal to 1. In the Gumbel-Softmax distribution, a higher value is considered for T to obtain a softer probability distribution over the classes, which carries more information than one-hot coding. As T increases, the distribution inclines towards a uniform distribution [30,31]. 1 In one-hot coding, the value of the real class is set to 1 and the rest of the classes are zero. A minimal numerical sketch of this soft-label construction is given below.
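The following sketch illustrates the temperature-scaled soft labels of equation (4), blended with the one-hot hard label as described above. The toy logits and the soft/hard mixing weight alpha are assumptions for illustration; the article does not state the exact blending rule.

```python
# Temperature-scaled softmax; larger T gives a more uniform distribution.
import numpy as np

def soft_labels(logits: np.ndarray, T: float = 2.0) -> np.ndarray:
    z = logits / T
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([4.0, 1.5, 0.5])    # toy model outputs for 3 classes
hard = np.array([1.0, 0.0, 0.0])      # one-hot hard label

alpha = 0.5                           # hypothetical soft/hard mixing weight
target = alpha * soft_labels(logits, T=4.0) + (1 - alpha) * hard
print(target)                         # soft target carrying class relations
```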
This article used the middle layers of CNN models to extract automatic features. As stated in equation (2), the mapping f_m, ∀ m ∈ {1, ..., M}, in model m is a deep network that uses convolution layers to extract deep features from the input signal. Figure 2 shows the architecture used for the m-th CNN model as an example. As mentioned in [20], deep learning methods are black-box models, and what happens inside them is not clear. Thus, every meta-learning method designed based on these models will be uninterpretable, which reduces the possibility of using them across different sciences, as well as for non-professional use. Therefore, this article used a new type of CNN network called evolutionary deep learning. In this approach, the CNN networks are coded as genes in the GP algorithm. In the GP algorithm, genes resemble trees with the ability to express programs; they are interpretable like a decision tree and can be written as simple mathematical equations [32]. Figure 3 shows the general diagram of the stages of constructing the CNN trees in the GP algorithm. At first, the CNN models are encoded as GP trees. At the beginning of training, multi-gene samples are constructed to generate the population of the first generation in the GP algorithm. The initial population can be displayed as a collection P = {p_1, ..., p_i, ..., p_S}. In this collection, p_i is a member of the population generated as a multi-gene sample. Moreover, p_i is a set of mapping functions coded as genes and can be expanded as p_i = {g_1, ..., g_M}. Here, M signifies the number of mapping functions, or genes, which equals the number of classifying models in Fig. 1. Each g_m is a function represented as a CNN tree. The process of representing an evolutionary CNN tree as a gene for the extraction of structural features will be elaborated in Section III.C. In the course of the GP algorithm, in each generation these trees are used to transfer the input to a new space in the form of representation learning. The change of numerical space can be represented as Φ(x) = [g_1(x), ..., g_M(x)]^T, where Φ(x) indicates the structural features in the new numerical space, the superscript T signifies the transpose operator, and M is the number of classifying models. When all samples of the current generation of the GP algorithm have been executed, a set of deep features is calculated in the new numerical space using the mapping functions. In the GP algorithm, the accuracy of the obtained deep features in the classification of ECG signals, together with the complexity of a sample's structure, is used as the fitness function [33]. To implement this compound fitness function in this article, the Pessimistic Error Estimate (PEE) is used for each gene in the GP algorithm, combining the classification error with a complexity penalty: the Ω function measures the computational complexity of a gene over the set of all nodes of all genes in p_i. Taking into account that this article examines the classification of various arrhythmias in ECG signals, the classification error of signals in the dataset is an appropriate quantity for evaluating the CNN trees. Moreover, per Occam's Razor, the Ω function is used in the fitness function of each tree to reduce the total computational overhead of the GP algorithm; a hypothetical sketch of such a complexity-penalized fitness is given below.
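The exact form of the pessimistic error estimate is not recoverable from the extracted text, so the following is only a hypothetical sketch of a complexity-penalized GP fitness of the kind described: classification error plus a penalty on total tree size. The weight lam and the node-count complexity measure are assumptions.

```python
# Toy complexity-penalized fitness for a multi-gene GP individual.
def tree_node_count(tree) -> int:
    """Count nodes of a GP tree given as nested tuples (op, child, child, ...)."""
    if not isinstance(tree, tuple):
        return 1                       # terminal (leaf) node
    return 1 + sum(tree_node_count(c) for c in tree[1:])

def fitness(error_rate: float, genes, lam: float = 0.01) -> float:
    """Lower is better: error plus lam times total nodes over all genes."""
    complexity = sum(tree_node_count(g) for g in genes)
    return error_rate + lam * complexity

# A toy multi-gene individual: two small trees over an input terminal "x"
individual = [("ReLU", ("Conv", "x", "k3x1")), ("Add", "x", "x")]
print(fitness(error_rate=0.08, genes=individual))   # 0.08 + 0.01 * 7 = 0.15
```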
Otherwise, the samples of the current population are transferred to the subsequent generation via the following three stages: 1. The best samples of the current population are selected based on their fitness function and the rest are eliminated; this is the natural selection stage. 2. Afterward, the crossover and mutation genetic operators are applied to the remaining samples to construct a more mature set of samples. 3. The mature samples created in the previous stage constitute a new generation, together with a set of randomly constructed samples, so as to preserve the population size across all generations. C. CNN Tree Structure It was stated in the previous section that gene g_m in sample p_i is a mapping function in the form of a CNN tree, which is used to extract structural features from the input signal. The representation of the CNN tree encoded in a gene, as well as its functions and terminals, is explained here. The representation of a CNN model in the form of GP trees was addressed in [34]. This article used a modified version of it per [20], which utilizes morphological convolutional functions in the tree and is appropriate for QRS complex analysis in ECG signals. Figure 4 shows an instance of the CNN trees. As displayed in this figure, the tree comprises a variety of layers. There is an input layer in the leaf nodes of this tree that receives the signal as the input. Afterward, there is a morphological convolutional layer, which is normally followed by a pooling layer; a node can also be a combined morphological convolution/pooling layer. Before the root layer, there is a concatenation layer. Finally, there is the root layer, which is the output layer. In this tree, the convolution layer carries out the operation of extracting the structural features using morphological operators, which is fully explained in the next section. The pooling layer is used after the convolution layer to reduce the output dimensions of the convolution. Besides, the concatenation layer joins two input layers together. The output layer is the same as the Flatten layer in a CNN, which forms the feature map. Table II shows the set of functions used in the CNN tree. The functions used to change the numerical space of the features include Conv, SQRT, ADD, ReLU, Sub, and Abs, among which Conv is the most important. This function is responsible for applying the morphological operators to the input ECG signals. On account of their special geometrical characteristics, morphological functions such as dilation and erosion are capable of effectively analyzing the complexities of ECG signals, including the QRS complex. The morphological operators, i.e., erosion and dilation, are beneficial for analyzing shape-oriented signals due to their theoretical framework and low computational complexity [35,36], and they are restricted forms of the counter-harmonic mean morphology [37]. The Conv function is the counter-harmonic mean morphology filter, which can be expanded as CHM(f, w; q) = (f^(q+1) * w) / (f^q * w), where f is the input signal, w is the structuring window, and * denotes convolution. The type of operation performed by this function is determined by the value of q (q = 0: linear; q < 0: pseudo-erosion; q > 0: pseudo-dilation); a sketch of this filter is given below.
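The following is a minimal sketch of the counter-harmonic mean filter described above for a 1D signal, following the standard CHM formulation. The window size, the value of q, and the toy signal are illustrative assumptions.

```python
# CHM(f, w; q) = conv(f**(q+1), w) / conv(f**q, w); q > 0 approximates
# dilation (pseudo-dilation), q < 0 erosion (pseudo-erosion), q = 0 linear.
import numpy as np

def chm_filter(f: np.ndarray, window: int = 5, q: float = 3.0) -> np.ndarray:
    f = f.astype(float) + 1e-9          # keep powers well-defined and positive
    w = np.ones(window)                 # flat structuring window
    num = np.convolve(f ** (q + 1), w, mode="same")
    den = np.convolve(f ** q, w, mode="same")
    return num / den

ecg = np.abs(np.sin(np.linspace(0, 6 * np.pi, 500)))   # toy positive signal
peaks_emphasized = chm_filter(ecg, window=7, q=5.0)    # pseudo-dilation
valleys_emphasized = chm_filter(ecg, window=7, q=-5.0) # pseudo-erosion
```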
The Sub and Add functions carry out weighted subtraction and addition of two signals based on the weights α and β. Two input ECG signals might have different lengths; thus, the longer ECG signal is cut using the aforementioned functions to obtain two ECG signals of the same size. The Sqrt, ReLU, and Abs functions are used to change the values of the ECG signal and the numerical space of the respective sample. In newer networks, ReLU is preferred over the sigmoid activation function for hidden layers for two reasons: first, it is simple and easy to use; second, it does not cause a local minimum problem. In this function, if the input value is greater than zero, the output equals the input; if the input is less than or equal to zero, the output is zero. The ReLU function has a fixed derivative for all inputs greater than zero, and this fixed derivative accelerates the network's learning. The Concat1, Concat2, Concat3, and Concat4 functions are used in the concatenation layer, which receives several ECG signals as input and joins them into a single feature map at the output. The MaxP function performs a downsampling operation on the input ECG signal, reducing its dimensions. Table III shows the set of terminals and the range of values authorized for them in the leaf nodes of the CNN tree. The terminals include the input ECG signal x, displayed with size n × 1; filter kernels of several sizes, which serve as the second input of Conv; the weights α and β; and the MaxP kernel sizes. Considering that the convolution function performs a morphological operation, each filter is a diamond-, disk-, line-, rectangle-, or square-shaped matrix. The weights α and β are added to the Sub and Add functions as inputs, and their values range between 0.000 and 1.000. The remaining terminals define the kernel sizes of the MaxP functions. The values of all terminals are created randomly within their initial ranges and are evolved in the course of the GP algorithm. In the experiments, values of 0.6 and 0.3 are considered for the two weighting parameters. IV. EXPERIMENTAL RESULTS A. RESULT ANALYSIS METHODOLOGY To analyze the results, first the confusion matrix is calculated for the test samples. Table IV shows the formation of a confusion matrix for classes with true and predicted labels. The combination of 8 true and 8 predicted labels results in an 8×8 confusion matrix. Four evaluation criteria, i.e., PREC, SEN, SPEC, and ACC, derived from the confusion matrix, are employed for the evaluation of the classification models. The total accuracy, ACC, demonstrates the performance of the classification model over all sub-classes in a given fold. This criterion is calculated by dividing the number of correctly classified samples by the total number of samples; in the confusion matrix, it is easily obtained by dividing the sum of the principal diagonal by the total number of samples, as illustrated below.
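The following sketch illustrates the confusion-matrix-based criteria described above: ACC from the principal diagonal, and per-class PREC/SEN/SPEC computed one-vs-rest. The 3×3 matrix is toy data for illustration.

```python
import numpy as np

cm = np.array([[50, 2, 1],
               [3, 45, 4],
               [2, 3, 40]])           # rows: true class, columns: predicted

acc = np.trace(cm) / cm.sum()         # principal diagonal over total samples

for c in range(cm.shape[0]):
    tp = cm[c, c]
    fp = cm[:, c].sum() - tp
    fn = cm[c, :].sum() - tp
    tn = cm.sum() - tp - fp - fn
    prec = tp / (tp + fp)
    sen = tp / (tp + fn)              # sensitivity (recall)
    spec = tn / (tn + fp)
    print(f"class {c}: PREC={prec:.3f} SEN={sen:.3f} SPEC={spec:.3f}")
print(f"ACC={acc:.3f}")
```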
For instance, per the results of Table V, the proposed method achieved mean values of PREC_cl = 94.92%, SEN_cl = 94.79%, SPEC_cl = 95.97%, and ACC_cl = 97.21% over the 12 leads for the classification of the samples of the AF class. This table demonstrates that the proposed method delivered a better performance for lead III (that is, from the class of the chest leads) than for the other leads. For lead III in the Chapman dataset, the proposed method reached mean values of PREC_cl = 95.02%, SEN_cl = 95.56%, SPEC_cl = 95.10%, and ACC_cl = 97.15% for the classification of the samples of the AF class. The proposed method delivered a proper performance in the classification of samples of the AF class for every lead on which it was constructed and tested. For instance, the proposed method demonstrated its lowest performance when trained on lead V1, obtaining mean values of PREC_cl = 92.18%, SEN_cl = 94.23%, SPEC_cl = 95.29%, and ACC_cl = 96.00% for the seven classes of arrhythmia in the Chapman dataset; these values are only lower by a mean of PREC_cl = 2.84%, SEN_cl = 1.33%, SPEC_cl = 0.19%, and ACC_cl = 1.15% than when trained on lead III. This signifies that, regardless of the lead it is trained on, the proposed method preserves an appropriate performance. In general, per Tables V to XI, the proposed method delivered a relatively better performance, and it is notable that lead III showed the best performance in all of these tables. Following the results of Tables V to XI, the proposed method trained on lead III provides better performance for all arrhythmia classes. Thus, the performance of the classification model constructed with lead III is investigated in more detail via a confusion matrix. In Figure 5, Section (a), the confusion matrix belonging to the lead III-based classification model is provided for the test stage for the various arrhythmia classes in the Chapman dataset, including Sinus Bradycardia (SB), Sinus Rhythm (SR), Atrial Fibrillation (AFIB), Sinus Tachycardia (ST), Atrial Flutter (AF), Sinus Irregularity (SI), and Supraventricular Tachycardia (SVT). In this matrix, the principal diagonal indicates the correctly classified samples (TP), which is quite crucial for examining a classification model. In general, this matrix shows that the proposed method achieved an appropriate distribution over all classes, without overfitting to a specific class and without poor performance on any specific arrhythmia class. In this matrix, most misclassifications occur between the Atrial Flutter (AF) and Atrial Fibrillation (AFIB) classes, as well as between the Sinus Rhythm (SR) and Sinus Irregularity (SI) classes: on average, 1.8% of the AF and AFIB samples and 3.2% of the SR and SI samples are misclassified. It should be noted that, from the medical viewpoint, misclassification between these pairs of arrhythmias is not critical. To demonstrate the appropriate behavior of the classification model constructed with lead III, the convergence diagram in Figure 5, Section (b), applied to the dataset, is examined. Per this diagram, the accuracy increases with the number of epochs for the various arrhythmia classes. The accuracy of this architecture increased between epochs 50 and 300 and reached 95.18%. The accuracy then stabilized in the epoch range of 350-500, with no further increase; the lead III-based classification model stabilized at epoch 550, recording an accuracy of 95.18%.
This diagram, across the various arrhythmia classes, indicated that the number of convolution layers resulted in an increase in accuracy and architectural performance. In general, the diagram in Figure 5, Section (b), shows that the classification model constructed with lead III enjoys suitable convergence for the classification of the arrhythmia samples in the Chapman dataset. In this section, the best CNN tree model from the GP algorithm is selected as the classifier. Then, it is evaluated on all 12 leads and assessed with the Kappa criterion: in case the classifiers are in full agreement, K = 1; in case there is no agreement between the raters beyond what would be expected by chance, K = 0. Per the table, the classifier of the proposed method obtained a Kappa value near 1 for all classes, which indicates the quality of the proposed method with regard to this criterion. Per the results of this table, the classification model trained on all 12 leads obtained mean values higher by PREC_cl = 3.11%, SEN_cl = 2.53%, SPEC_cl = 2.80%, and ACC_cl = 1.14% than the classification model trained on lead III. Therefore, it can be concluded that constructing a classification method for the detection of arrhythmia from 12-lead ECG signals delivers better performance; however, it has a higher computational complexity and a larger number of parameters. Figure 6, Sections (a) and (b), shows the confusion matrix and the convergence diagram, respectively, of the classification model constructed on all 12 ECG leads in the test phase. In general, this confusion matrix demonstrates that the proposed method constructed on all 12 ECG leads manifested an appropriate distribution over all classes, without overfitting to a specific class and without poor performance on any specific arrhythmia class. In this matrix, the errors occurred between arrhythmia classes whose confusion is not critical from the medical viewpoint. Figure 6, Section (b), shows the convergence diagram of the classification model based on 12-lead ECG signals. The proposed method enjoys proper convergence for the various arrhythmia classifications in the Chapman dataset; the convergence rate is even improved in comparison to Figure 5, Section (b), reaching convergence more rapidly. Table XIV shows the model complexity and the execution duration of the proposed method during the training, validation, and testing phases. It demonstrates that the training phase of the proposed method had an execution duration of 25 minutes and 20 seconds. This time has no effect on deploying the proposed method, since the training phase is offline and matters only during the development of the system. The most important time, however, is the duration of testing: Table XIV shows that the execution duration of the testing phase is 59 seconds, which implies that the proposed method can diagnose the arrhythmia of a single record in a fraction of a second.
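As a side note on the Kappa criterion used above, the following sketch computes Cohen's Kappa with scikit-learn: K = 1 for full agreement and K = 0 for chance-level agreement. The label lists are toy values.

```python
from sklearn.metrics import cohen_kappa_score

y_true = ["SB", "SR", "AFIB", "ST", "SB", "SR", "AFIB", "ST"]
y_pred = ["SB", "SR", "AFIB", "ST", "SB", "SR", "ST", "ST"]

kappa = cohen_kappa_score(y_true, y_pred)
print(f"Cohen's Kappa = {kappa:.3f}")   # near 1 indicates strong agreement
```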
In this section, the performance of the classification model constructed with lead III is also compared to other state-of-the-art methods based on deep learning. First, in Table XV, the classification model constructed with lead III is compared with methods whose input is based on single-lead ECG. This table includes details such as the number of patients, the number of ECG records, the number of diagnosed classes and rhythms, as well as the methods used. The performance results of the previous methods are displayed on the basis of the criterion reported in them, such as accuracy or F1-score. Various deep learning models such as LSTM, CNN, and RNN are used in these methods. Traditionally, a broad spectrum of these methods is evaluated on the MIT-BIH Arrhythmia dataset, which belongs to PhysioNet. For instance, in [39], a deep CNN model was employed for the classification of 12 rhythms. In this model, stages such as batch normalization and data augmentation were used. The results of this investigation indicated an F1 value of 83% for 53,549 patients. In [26], a deep learning model with the standard U-Net architecture was used for the classification of 5 types of arrhythmia on the MIT-BIH Arrhythmia dataset; that article reported an accuracy of 97.32% for the classification of the records of 47 patients. In [21], on the same dataset, a CNN model was proposed with a preprocessing stage including data normalization. The CNN model consisted of five convolution layers, three pooling layers, and one fully connected layer. The results of this work revealed an Acc of 98.45% in the evaluation on 47 patients from the MIT-BIH Arrhythmia dataset. Besides these methods, a compound CNN+LSTM model was recently introduced for the classification of the Chapman ECG dataset with single-lead signals. This method used the CNN model to generate deep spatial features from the raw ECG signals; the output of the CNN model was then passed to the LSTM model to generate deep temporal features. Its results for 10,436 patients revealed an Acc of 92.24% in the evaluation. In this article, the classification model constructed with lead III was proposed to classify single-lead ECG signals, and its performance was evaluated on the Chapman ECG dataset. Following Table XV, the classification model constructed with lead III obtained an Acc of 98.55% for the records of 10,646 patients. The proposed lead III-based classification model used a larger number of records than the majority of previous methods. The single-lead-III model demonstrated 0.02% higher accuracy in comparison to [15], which also used the Chapman ECG dataset. In most of the previous investigations, the same records were used in the training and testing datasets, which reduced the integrity of the methods and cast doubt on their behavior when dealing with unseen data, a subject that was taken into account in the proposed model of this article. For a better representation of the performance of the proposed method, the proposed classification model developed with all 12 ECG leads is compared to the previous methods developed on 12-lead ECG in Table XVI. The previous methods listed in this table are based on deep learning models such as CNN and LSTM. In [29], on account of combining CNN+LSTM, in addition to the classification of seven rhythms based on single-lead ECG illustrated in Table XV, a separate section was allocated to classification based on the 12-lead ECG signals. Its results for 10,436 patients revealed an Acc of 92.24% in the evaluation.
Moreover, in the recently published [15], a method based on a Detrending+ResNet model was introduced for 10,093 patients in the Chapman ECG dataset. Its results for the classification of seven rhythms manifested an Acc of 92.24% in the evaluation. Deep learning models such as CNNs deliver better performance as the number of inputs increases; thus, the methods provided in Table XVI are regarded as pioneers and most of them enjoy quite high accuracy. Subsection IV.D demonstrates the superiority of the proposed model over other available methods of diagnosing arrhythmia from ECG signals via the numerical results and necessary comparisons. This subsection presents several crucial reasons and justifications for this superiority. Deep learning models are statistical models that depend on the data distribution: they perform well only when faced with the data distribution on which they were trained. For instance, the model of [42] obtained an F-measure of almost 99% on its own training and testing data; however, in the experiments carried out in this study, it was observed to reach an F-measure of only 82% on the Chapman dataset. In light of that, the generalization of a statistical model is directly dependent on the sample distribution. In the proposed method, the statistical model was trained with the awareness that the training distribution does not necessarily reappear in the testing data. As already mentioned in [19], latent medical variables cannot be directly obtained from medical data; methods such as frequency analysis can express these variables. In the proposed method, specific frequencies in the ECG signal were used, which helped extract the functional dependencies among the ECG leads; this was not examined in methods such as [15] and [29]. The final feature of the proposed method is its independence from explicitly locating and extracting the QRS complex. Methods such as Big-ECG [14] are directly dependent in their performance on locating the beat, which reduces the flexibility of the model since, in some types of arrhythmia, the QRS peak changes and is quite difficult to locate. In general, these are several justifications that can be stated for the improvement of the proposed method in comparison to previous methods. VI. CONCLUSION This article proposed an interpretable meta structural learning algorithm for the challenging problem of classifying various arrhythmias in ECG signals recorded in several sessions for each person. A compound loss function was provided that included a structural feature extraction loss and a label-space loss with a Gumbel-Softmax distribution in the CNN models. Collaboration between the models was carried out to create a learning-to-learn feature in these models via transferring knowledge among them when dealing with unseen samples. This article encoded the models in the form of evolutionary trees of the GP algorithm to create the interpretability feature for the CNN models. These trees learn the process of extracting deep structural features in the course of the evolution of the GP algorithm. The experimental results suggested that the proposed classification model enjoys an accuracy of 98% for the classification of 7 types of arrhythmia on the samples of the Chapman ECG dataset of 10646 patients, which were recorded in different sessions. Finally, the comparisons demonstrated the competitive performance of the proposed model with respect to state-of-the-art methods based on large deep learning models.
Rainfall forecast in the Upper Mahaweli basin in Sri Lanka using RegCM model

The Upper Mahaweli basin is the uppermost sub-basin, 788 km² in area, above the Polgolla barrage on the Mahaweli River, the longest river in Sri Lanka, which starts from the central hills of the island and drains to the sea at the north-east coast. Rainfall forecasting in the Upper Mahaweli basin is important for issuing flood warnings in the river downstream of the reservoirs and landslide warnings in the settlements in hilly areas. Anticipatory water management in the basin, including reservoir operations and barrage gate operation for releasing water for irrigation and flood control, also requires reliable rainfall and runoff prediction in the sub-basin. In this study, the Regional Climate Model (RegCM V4.4.5.11) is calibrated for the basin to dynamically downscale reanalysis weather data of a Global Climate Model (GCM) to forecast the rainfall in the basin. Observed rainfalls at gauging stations within the basin were used for model calibration and validation. The observed rainfall data were analysed using ArcGIS and the output of RegCM was analysed using the GrADS tool. The output of the model and the observed precipitation were obtained on grids of size 0.1 degrees, and the accuracy of the predictions was analysed using RMSE and the Mean Model Absolute Error percentage (MAME %). The predictions by the calibrated RegCM model for the basin are shown to be satisfactory. The model is a useful tool for rainfall forecasting in the Upper Mahaweli River basin.

Introduction

Extreme weather events, especially heavy rainfall, adversely affect people, services and property and hinder societal development. Climate projections show that changing extreme weather patterns are very likely and will have significant consequences for society and the economy [1]. Prediction of weather phenomena with higher reliability under a changing climate has become a major focus in the world. The Upper Mahaweli basin is the uppermost sub-basin, 788 km² in area, above the Polgolla barrage on the Mahaweli River, the longest river in Sri Lanka, which starts from the central hills of the island and drains to the sea at the north-east coast. There are two large reservoirs to regulate water and to generate hydropower in this sub-basin, and the Polgolla barrage in the Central Province diverts part of the Mahaweli river water to the North Central Province of Sri Lanka while harnessing hydropower [2]. Rainfall forecasting in the sub-basin is important for issuing flood warnings in the river downstream of the reservoirs and landslide warnings in the settlements in hilly areas. Anticipatory water management in the basin, including reservoir operations and barrage gate operation for releasing water for irrigation and flood control, also requires reliable rainfall and runoff prediction in the sub-basin. Figure 1 shows the location of the basin.

The Upper Mahaweli basin is characterized by predominantly mountainous topography, with elevations ranging from 400 m to 2000 m MSL (Figure 2). The basin receives an average annual rainfall of about 2500 mm; the western slopes receive higher rainfall, up to 6000 mm in some years, of high intensity, causing floods [3,4]. Therefore rainfall forecasting is important for understanding the accompanying impacts, minimizing adverse effects and planning adaptation strategies. Numerical Weather Prediction (NWP) relies on models formed by physical equations describing motion, thermodynamics, continuity and the hydrostatic equation, together with closure models [5].
Global Circulation Models (GCMs) and Limited Area Models are two groups of models used for NWP. Global weather simulation, which is computationally expensive, provides predictions at a coarse scale (250 km by 250 km grid) in most cases [6]. GCMs fail to capture sufficient basin properties to provide an accurate description of weather at the local or basin scale. Thus, dynamic downscaling of GCM predictions is used to produce basin-scale weather predictions for the Upper Mahaweli basin. Dynamical downscaling is a computationally intensive technique which makes use of the lateral boundary conditions from GCMs combined with regional-scale forcings, such as land-sea contrast and vegetation cover, to produce regional climate models (RCMs) [6]. There are a number of dynamic downscaling models, and each model has its advantages and disadvantages over its counterparts. WRF [7], RegCM [8] and CCSM [9], among others, are popularly used RCMs. RegCM, developed and maintained by the Abdus Salam International Centre for Theoretical Physics (ICTP), is used in this study because of its successful and increasing use in Asian countries, and because it has not yet been used in Sri Lanka. The RegCM was the first limited-area model developed for long-term regional climate simulation, and it has been applied by a large community to a wide range of regional climate studies, from process studies to paleo-climate and future climate projections [10].

Model description

The RegCM simulation system consists of four modules: the terrain module, the initial/lateral boundary condition (ICBC) module, the main program (Main) and the post-processing (Post-proc) module (Figure 3). The entire technological process can be divided into pre-processing, simulation and post-processing. The pre-processing stage includes setting the topographic parameters and the parameters of the study area, as well as the initial and boundary conditions; Terrain and ICBC are the two parts of the pre-processing stage. The main module (Main) is the master control program of the model; the post-processing module converts the output into the required averaging types and data formats [11].

The RegCM has a number of physics options which can be changed to fine-tune the model in order to calibrate it to a given region. These include the Biosphere-Atmosphere Transfer Scheme (BATS) for surface process representation, the radiative transfer scheme of the NCAR Community Climate Model, a medium-resolution local planetary boundary layer scheme, the Kuo-type cumulus convection scheme and the explicit moisture scheme [13].

Observed daily rainfall data at four rainfall gauging stations in the basin were obtained from the Department of Meteorology of Sri Lanka to select extreme events for model calibration and validation. A GIS-based Thiessen polygon method was used to generate the average rainfall over the study area, and the two highest rainfall events were selected as extreme events for model calibration and validation. Table 1 gives the details of the selected rainfall events. The reanalysed EIN15 dataset at 1.5-degree resolution was used for the initial and lateral boundary conditions.

Post-processing

The GrADS tool is used to view the daily rainfall obtained from the RegCM model and the observed rainfall. The observed rainfall was spatially distributed using the GIS Inverse Distance Weighting (IDW) tool onto the same 0.1-degree grid as the RegCM results, using the GIS Point to Raster tool. For the purpose of comparison, grids in the basin were numbered as shown in Figure 4.
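As an illustration of the gridding step just described, the following minimal sketch interpolates gauge rainfall onto a 0.1-degree grid with inverse distance weighting; the gauge coordinates and values are made-up examples, and the power-2 weighting is a common default rather than the exact setting of the GIS tool used in the study.

    import numpy as np

    # Hypothetical gauge locations (lon, lat in degrees) and daily rainfall (mm).
    gauges = np.array([[80.60, 7.00], [80.70, 7.10], [80.75, 6.95], [80.65, 7.05]])
    rain = np.array([42.0, 55.0, 38.0, 61.0])

    # A 0.1-degree target grid covering the basin's bounding box.
    lons = np.arange(80.55, 80.86, 0.1)
    lats = np.arange(6.95, 7.16, 0.1)

    def idw(lon, lat, power=2.0, eps=1e-12):
        """Inverse-distance-weighted rainfall estimate at one grid point."""
        d = np.hypot(gauges[:, 0] - lon, gauges[:, 1] - lat)
        if d.min() < eps:                  # grid point coincides with a gauge
            return float(rain[d.argmin()])
        w = 1.0 / d**power
        return float(np.sum(w * rain) / np.sum(w))

    grid = np.array([[idw(lo, la) for lo in lons] for la in lats])
    print(np.round(grid, 1))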
One physics scheme was changed at a time, while keeping the other physics schemes at their default options. Once the best option was selected, it was kept unchanged for the remaining simulations; this procedure was followed in selecting all the physics schemes. The appropriate physics options selected for the Upper Mahaweli basin during calibration are shown in Table 2. Figure 5 shows the RegCM output and Figure 6 shows the observed rainfall data on the same grids for the rainfall event of 25/03/2004. Table 3 provides the area percentage corresponding to each MAME % and the cumulative area percentage for the rainfall event of 25/03/2004. It is observed that the MAME % is less than 50% for about 40% of the basin area for the event selected for calibration.

Model validation

The calibrated model was applied to simulate the extreme event of 10/05/2002 for model validation. The MAME % and the area percentage corresponding to each MAME % were calculated to give assurance about the validation results. Figure 7 shows the observed rainfall and Figure 8 shows the RegCM model predictions for this event. According to Table 4, the MAME is less than 50% over more than 60% of the basin area for the selected extreme event. Further, the RMSE value for this date was 17 mm, which is an acceptable error. Figure 10 shows the error percentage of the model in the validation results. Therefore the selected physics combination can be applied to the Upper Mahaweli river basin to provide reasonably accurate weather predictions by downscaling the global weather predictions available on a coarse grid. However, verifying the model against more events and using more rain gauges would increase the confidence in the predictions. Moreover, the resolution of RegCM can be refined to as fine as 1 km by using the new non-hydrostatic version of RegCM (version 4.5), which is under development.
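To make the two error measures used above concrete, the sketch below computes the grid-wise MAME %, the share of basin area below a 50% error threshold, and the RMSE; the two small arrays stand in for the gridded modelled and observed rainfall, and the exact definition of MAME % is inferred from its use in the text.

    import numpy as np

    # Hypothetical 0.1-degree grids of observed and modelled daily rainfall (mm).
    observed = np.array([[40.0, 55.0, 60.0], [35.0, 70.0, 80.0]])
    modelled = np.array([[30.0, 50.0, 95.0], [20.0, 66.0, 45.0]])

    # Grid-wise absolute error as a percentage of the observed value.
    mame_pct = 100.0 * np.abs(modelled - observed) / observed

    # Share of the basin area (here: share of grid cells) with MAME % < 50%.
    area_below_50 = 100.0 * np.mean(mame_pct < 50.0)

    rmse = float(np.sqrt(np.mean((modelled - observed) ** 2)))

    print(np.round(mame_pct, 1))
    print(f"Area with MAME% < 50%: {area_below_50:.0f}%")
    print(f"RMSE: {rmse:.1f} mm")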
Habitat Loss, Not Fragmentation, Drives Occurrence Patterns of Canada Lynx at the Southern Range Periphery

Peripheral populations often experience more extreme environmental conditions than those in the centre of a species' range. Such extreme conditions include habitat loss, defined as a reduction in the amount of suitable habitat, as well as habitat fragmentation, which involves the breaking apart of habitat independent of habitat loss. The 'threshold hypothesis' predicts that organisms will be more affected by habitat fragmentation when the amount of habitat on the landscape is scarce (i.e., less than 30%) than when habitat is abundant, implying that habitat fragmentation may compound habitat loss through changes in patch size and configuration. Alternatively, the 'flexibility hypothesis' predicts that individuals may respond to increased habitat disturbance by altering their selection patterns and thereby reducing sensitivity to habitat loss and fragmentation. While the range of Canada lynx (Lynx canadensis) has contracted during recent decades, the relative importance of habitat loss and habitat fragmentation in this phenomenon is poorly understood. We used a habitat suitability model for lynx to identify suitable land cover in Ontario, and contrasted occupancy patterns across landscapes differing in cover, to test the 'threshold hypothesis' and 'flexibility hypothesis'. When suitable land cover was widely available, lynx avoided areas with less than 30% habitat and were unaffected by habitat fragmentation. However, on landscapes with minimal suitable land cover, lynx occurrence was not related to either habitat loss or habitat fragmentation, indicating support for the 'flexibility hypothesis'. We conclude that lynx are broadly affected by habitat loss, and not specifically by habitat fragmentation, although occurrence patterns are flexible and dependent on landscape condition. We suggest that lynx may alter their habitat selection patterns depending on local conditions, thereby reducing their sensitivity to anthropogenically-driven habitat alteration.

Introduction

Populations occurring at the periphery of a species' geographic range often occupy habitats that are of lower overall quality, leading to reduced survival, reproduction and population density compared to populations in the core of the range [1]. In addition, peripheral populations tend to be more sensitive to environmental variability than those in the core, which can promote increased demographic stochasticity and lower resilience [2-4]. As a result, individuals in the range periphery may be more sensitive to the processes of habitat loss and fragmentation. Alternatively, animals may respond with more flexible habitat selection patterns, enabling them to move among variable environments to enhance their fitness [5]. This flexibility should increase species' persistence in landscapes experiencing anthropogenic change, such as in areas subject to high fragmentation. However, much of our perception of how wide-ranging species respond to these landscape-scale processes is speculative, especially in peripheral populations where both occurrences and their detection probability are often limited. This shortcoming is especially relevant because, as landscapes continue to be altered by anthropogenic disturbance, many species are faced with declines in range size [6].
An improved understanding of the effects of habitat loss and fragmentation on species occurrence patterns will enhance our understanding of how these processes may impact species distributions. Habitat loss and fragmentation are separate processes: habitat loss is an overall reduction in the amount of suitable habitat, resulting in a decline in patch size, whereas habitat fragmentation is the breaking apart of habitat, independent of habitat loss [7]. While the effects of habitat loss on species are consistently negative, the effects of habitat fragmentation are less well understood, as few studies measure fragmentation independently of habitat loss [7]. While habitat fragmentation can have both weakly positive and weakly negative effects on biodiversity and population size, the impact of these effects is often far less important than the effects of habitat loss [7-9]. There is some evidence that the effects of habitat fragmentation depend upon the amount of habitat that is available in a landscape. The 'threshold hypothesis' predicts that individuals will be more affected by habitat fragmentation when the amount of habitat on the landscape is limiting (i.e., less than 30% habitat), and small and isolated patches become more numerous, than when habitat is abundant and patches are larger and more continuous [10,11]. Habitat fragmentation may compound the effects of habitat loss through changes in patch size and landscape configuration, implying that fragmentation may have a greater effect at the range periphery, where habitat is often limiting [2]. This hypothesis has been supported by several studies examining population size and presence of birds and small mammals, with habitat thresholds ranging from 10-30% [10,12-14]. In contrast, the 'flexibility hypothesis' suggests that individuals may alter their habitat selection patterns, permitting them to inhabit variable environments that would otherwise be unsuitable due to habitat fragmentation [5,15].

Canada lynx (Lynx canadensis) occur across the boreal forest of North America, where their primary prey is snowshoe hare (Lepus americanus). Since lynx are dependent upon snowshoe hares, they select forested habitat based on high hare abundance or where hares are most easily depredated [16-18], whereas hares select young coniferous forests where both food and cover are adequate [19,20]. In the southern periphery of the lynx range, forest composition is more heterogeneous and hare densities are naturally lower, leading to reduced abundance and restricted distribution of lynx [21], which require densities between 1 and 1.5 hares per hectare to persist [22]. Because habitat for both lynx and hare has become reduced and fragmented due to anthropogenic activities in their southern ranges, the distribution and abundance of both species are now restricted [23,24]. This has reduced genetic diversity in southern populations of both hare [25] and lynx [26]. Additionally, the southern range of lynx in Ontario has contracted by over 175 km since 1970 [26]. Although the mechanisms ultimately limiting lynx populations at the southern range periphery remain to be fully understood, this may be due to sensitivity to habitat fragmentation [27], with habitat loss and climate change as other important factors [26]. Several other felid species are also reported to be sensitive to habitat fragmentation (e.g. Iberian lynx (Lynx pardinus) [28], bobcat (Lynx rufus) and cougar (Puma concolor) [29]).
However, whether these species express any flexibility in selection patterns in relation to the amount of habitat on a landscape, or whether these patterns hold true for habitat fragmentation, has not yet been explored. We examined the occurrence patterns of Canada lynx across two regions in the southern geographic range of the species in Ontario to assess patterns of occurrence in relation to habitat loss and fragmentation. Given that lynx are prey specialists, requiring areas within a narrow range of suitable conditions to meet prey and habitat requirements [30] as well as connectivity requirements [31], we predicted that lynx would be sensitive to habitat loss when habitat was widely available, and sensitive to both habitat loss and fragmentation when suitable habitat was less than 30%; this would support the 'threshold hypothesis' [10,11]. These patterns may be expressed more strongly near the southern range periphery, due to increased levels of habitat loss and reduced habitat quality [26], leading us to speculate that any sensitivity to habitat fragmentation would be most apparent there. Alternatively, the 'flexibility hypothesis' suggests that lynx will have tolerance to both habitat loss and fragmentation, such that their occurrence patterns may not correlate with either process, indicating flexibility in habitat selection. We developed a habitat suitability model for lynx and tested the above predictions using patterns of track occurrence across the species' southern range periphery. We compared two regions, each with three similar levels of suitable land cover as determined by the habitat suitability model, to examine whether occurrence patterns differ across landscapes with varying amounts of suitable land cover. Observations of lynx tracks in areas with limited suitable land cover and increased fragmentation would imply that lynx are not sensitive to habitat fragmentation, or that the importance of suitable habitat for occurrence patterns at the range periphery is less critical than previously understood.

Ethics Statement

The Trent University Research Ethics Board approved the study (reference #21083). In the introduction of the study, participants were explicitly told that informed consent was implied if they submitted their survey data. The field component consisted of non-invasive track surveys conducted on public land, so no access permits or animal care protocols were required. Canada lynx are considered not at risk under provincial and federal guidelines.

Study Area

The study area encompassed 200 000 km² in central Ontario (Figure 1A), across the southern boreal forest and the Great Lakes-St. Lawrence forest, a transition zone from boreal to deciduous forest, encompassing the southern range limit of lynx occurrence in the region [32]. The area is largely comprised of boreal forest, with spruce (Picea glauca, P. mariana), balsam fir (Abies balsamea), trembling poplar (Populus tremuloides) and white birch (Betula papyrifera) as dominant tree species. The southerly portions of the study area in the Great Lakes-St. Lawrence region include pines (Pinus resinosa, P. strobus), eastern hemlock (Tsuga canadensis), yellow birch (B. alleghaniensis) and maples (Acer saccharum, A. rubrum). Habitat loss and fragmentation throughout the study area are caused primarily by forestry and associated road construction. Historically, 1% of the entire region (approximately 2000 km²) was harvested annually [32]; current levels are 0.4%, or 800 km² (2000-2010 average; [33]).
Other sources of habitat loss include populated areas, agriculture, and natural disturbance such as forest fire and pest infestations.

Habitat Suitability Model

In order to quantify lynx habitat suitability, we used the analytic hierarchy process, a decision-making procedure that is useful in the development of habitat suitability models for wide-ranging mammals (see [34,35] for a description of the methodology). We developed the survey design based on a literature review identifying important ecological factors affecting lynx occurrence, with an emphasis on the southern range periphery. The primary habitat characteristics were land cover attributes (e.g., [17,18]), forest age class (e.g., [18,36]), annual snowfall (e.g., [37]) and road density (e.g., [38]). We developed two separate models of habitat suitability, one based on expert opinion, where we received 11 solicited responses from lynx researchers across North America, and the other using a literature-based approach with four 'naïve' participants with no previous knowledge of lynx ecology. Both experts and naïve participants received the same survey, and the naïve participants also received four research papers providing a detailed description of the basic habitat requirements of lynx from across its range [17,18,38,39]. The survey consisted of five separate pair-wise comparison matrices based on each of the features of interest (land cover, forest development stage, snowfall, and road density) and an overall comparison of the relative importance among all features. The overall ranking of features was used to weight parameters within the model and estimate the relative importance of factors affecting lynx habitat suitability, whereas weights within a feature determined the ranking for its attributes. We used the Ontario Forest Resource Inventory to characterize land cover; these data provide a detailed description of species composition and forest stand age as determined by aerial photo interpretation. The study area included 41 provincial forest management units, and each unit was updated with forest fire and harvest information up to and including 2008. Standardized forest units were combined to create six generalized land cover types (coniferous forest, deciduous forest, mixedwood forest, developed land, wetland, and open areas) and five forest development stages (pre-sapling, sapling, immature, mature and old; [40]), which improved the accuracy of the dataset [41]. We converted the land cover map to a geospatial raster for analysis; all GIS analyses were conducted in ArcGIS 9.2 (ESRI, Redlands, CA, USA). We evaluated the lynx habitat model in a portion of the study area near the North Bay-Temagami region of northeastern Ontario, Canada (47.01°N, 79.97°W; see Figure 1A). The Temagami region is approximately 8,000 km² and was selected because it is located within the southern range periphery of lynx in Ontario and the transition zone of the boreal forest with the northern Great Lakes-St. Lawrence forest. Between January and March 2009, we surveyed lynx occurrence at 48 randomly selected sites that represented a gradient in available land cover types [38]. We assessed lynx presence by snow-tracking triangular transects around the centroid of the cell (dimensions 0.5 km per side; [38]). Additional lynx tracks that were encountered opportunistically while travelling within the landscape were also considered as lynx presence. We calculated habitat suitability at the centre of each transect and each opportunistic track, using both models.
We used receiver operating characteristic plots and the Area Under the Curve (AUC) as an independent measure of model accuracy via the program ROC/AUC [42]. AUC provides a measure of model accuracy, where values >0.7 indicate good model fit. We selected Pfair, the value where specificity and sensitivity are equal, as the threshold of habitat suitability for lynx occurrence.

Lynx Occurrence Sampling

Two regions were selected to document lynx occurrence (estimated by track identification) in landscapes across a gradient of habitat fragmentation. Each region fell within the larger study area, which encompassed the southern boreal forest and Great Lakes-St. Lawrence forest, and was divided into three landscapes based on the amount of suitable land cover (high, moderate, and low) as determined by the habitat suitability model (Figure 1B). The Chapleau region was 12 900 km², located primarily in the boreal forest. The western portion of the region had the highest amount of suitable land cover and is the least fragmented landscape in this region. The central area of the Chapleau region is highly fragmented, with the most habitat loss due to forestry, roads, and human settlements. The easternmost portion of this region has a moderate amount of suitable land cover and a moderate level of fragmentation due to forestry roads (Table 1). The Mississagi region was 12 800 km², located primarily in the Great Lakes-St. Lawrence forest. The northern portion of this region had moderate amounts of suitable land cover, but was fragmented due to forestry roads; the central portion had the highest amount of suitable land cover and was least fragmented, and the southernmost landscape had the least amount of suitable land cover in this region, with habitat loss due to forestry, human settlements and roads. These regions were surveyed for the occurrence of lynx tracks from January to March 2010, and each identified track point was recorded as a lynx occurrence. All forest access roads, trails, hydro-electric line corridors, cutovers and riparian areas were sampled via snowmobile, totalling 9 320 km of survey lines in both landscapes. All lynx track locations were documented; Chapleau had 104 track points and Mississagi had 89 track points (see Figure S1 in Information S1). Roads in these two regions were limited to 1 or 2 highways, fewer than 20 secondary roads, and forestry roads. To test whether there was bias arising from track proximity to surveyed roads, we randomly selected 100 points from roads (including highways, primary, secondary and tertiary roads, and snowmobile trails) and the surrounding landscape (not bisected by roads) and compared them at five spatial scales (10 km², 25 km², 50 km², 75 km², and 100 km²) to assess any differences in habitat quality in each region. We found that there was no difference in the amount of lynx habitat (as defined by the suitability model) in any landscape, regardless of spatial scale and distance to roads (M. Hornseth, unpublished data, but see [38]). Accordingly, we deemed that proximity of locations to roads was not relevant to our particular analysis. True absences are difficult to detect using typical survey methods, especially without repeated visits. We randomly selected points (equal to the number of lynx locations) from survey logs to represent pseudo-absences in Chapleau and Mississagi. These locations were at least 1 km apart and at least 2.5 km from the nearest lynx location.
To examine the effect of spatial scale, and to encompass overall selection patterns, we buffered both observed lynx tracks and pseudo-absences with radii of 2.82 km and 5.61 km to create areas of 25 km² and 100 km² (from published home range size estimates), to assess the role of spatial scale on occurrence patterns (see [18,39]).

Habitat Amount and Fragmentation

Landscape connectivity can be considered across a variety of spatial and ecological scales, and for our analysis the metrics of interest included estimates of: (i) structural connectivity, which represents the spatial configuration of suitable patches; and (ii) functional connectivity, which includes animal response to patches [43]. We created a binary landscape of habitat quality using the literature-based habitat suitability model and a critical threshold habitat suitability value of 52 (threshold tuned by balancing the error rate between false positives and false negatives [42]). We quantified the percentage of habitat within each lynx and pseudo-absence area to estimate habitat amount. To avoid confusion when working at multiple scales, we used the term suitable land cover to describe the output of the habitat suitability model at the landscape level and suitable habitat to describe this output at a finer spatial scale (25 km² and 100 km² areas). We used PatchMorph [44] and the habitat suitability model to estimate a 'functionally' connected landscape for lynx from: (1) a critical threshold habitat suitability value of 52, (2) a minimum patch size of 5 ha (the minimum mappable forest stand (Ontario Ministry of Natural Resources, unpublished data)), and (3) a crossing distance of 200 m (M. Hornseth, unpublished data). Note that crossing distance is defined as the distance that lynx will travel in unsuitable habitat; the minimum for this metric is two raster pixels, and parameters were set conservatively as per published observations of lynx habitat use patterns (see [17,45]). Although we acknowledge that actual functional connectivity requirements for lynx are just beginning to be understood (see [46]), we consider our selected values as being within the range of those that are plausible, with minor deviations likely affecting our results only qualitatively. Additionally, we did a sensitivity analysis with crossing distances of 200 to 1000 m in 400 m increments to determine the effect of this parameter on our estimates of connectivity. Effective mesh size can be defined as the average area potentially accessed by an animal on a given landscape without having to cross defined borders or low-quality habitat, so larger values indicate that the landscape is more connected and smaller values indicate the landscape is more fragmented [44,47]. We used effective mesh size (Meff) as our measure of habitat fragmentation in ArcMap 10.1 [48]. Meff is calculated by:

Meff = (1/At) Σ Ai²

where Ai is the area of a single patch and At can be either the total area of the polygon or the total amount of suitable habitat (i.e., the sum of all patch areas). In order to remove the correlation between habitat amount and effective mesh size, we used the total amount of suitable habitat as the denominator (L. Fahrig, pers. comm.). Since correlations were still high (0.63-0.86), we regressed Meff against habitat amount and used the residuals as our estimate of habitat fragmentation (Meff.r).
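The calculation just described can be condensed as follows; the patch areas are invented for illustration, and the residualization uses an ordinary least-squares fit, which matches the description in the text but may differ in detail from the ArcMap workflow.

    import numpy as np

    def effective_mesh_size(patch_areas_km2):
        """Meff = sum(Ai^2) / At, with At the total suitable habitat area."""
        a = np.asarray(patch_areas_km2, dtype=float)
        return float(np.sum(a**2) / np.sum(a))

    # Hypothetical landscapes: lists of suitable-habitat patch areas (km2),
    # paired with the habitat amount (%) in each sampled area.
    landscapes = [[20.0, 5.0, 1.0], [9.0, 8.0, 9.0], [2.0, 1.5, 0.5, 0.5]]
    habitat_amount = np.array([45.0, 38.0, 12.0])

    meff = np.array([effective_mesh_size(p) for p in landscapes])

    # Residualize Meff on habitat amount to obtain Meff.r, the fragmentation
    # measure with the habitat-amount effect removed.
    slope, intercept = np.polyfit(habitat_amount, meff, 1)
    meff_r = meff - (slope * habitat_amount + intercept)

    print(np.round(meff, 2), np.round(meff_r, 2))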
Data Analysis

We aimed to determine whether lynx are limited by habitat amount, fragmentation, or both processes, by contrasting patterns on landscapes with different amounts of suitable land cover. We hypothesized that lynx habitat requirements would restrict their occurrence to highly-connected areas in each landscape. We used one-sided unpaired t-tests to examine whether habitat amount and fragmentation were greater in presence areas than in pseudo-absence areas at each spatial scale, among landscapes with high, moderate, and low habitat amount in each region. We examined any correlations between these two measures within each region and landscape. We tested three a priori hypotheses to explain lynx occurrence: (i) lynx occurrence is limited only by fragmentation, (ii) lynx occurrence is limited only by habitat loss, and (iii) lynx occurrence is limited by both habitat amount and fragmentation. We used logistic regression and standard model selection procedures to determine which hypothesis best explained lynx occurrence in landscapes across the levels of suitable land cover. We used Akaike's information criterion to evaluate the candidate models for each lynx and pseudo-absence area and landscape within each region. We considered ΔAIC > 2 to indicate a significant difference in model likelihood [49]. AIC does not assess model performance, and only models that performed well were considered plausible for the AIC model selection, so we used the logistic regression χ² model likelihood ratio test to determine model fit.

Habitat Suitability Model

Both the expert-opinion and literature-based models suggested that coniferous forest land cover, and forest in a sapling developmental stage, provided the most suitable habitat for lynx. However, the models differed with respect to the relative importance of overall features, with the literature-based model suggesting that land cover was only slightly (1.04 times) more important than development stage, whereas expert opinion suggested that development stage was substantially (1.20 times) more important than land cover type. We omitted annual snowfall and road density from the final habitat suitability models due to their low overall importance in both models (see Table S1 in Information S1). We detected lynx at 19% (n = 48) of the sites within the Temagami landscape; we also included 14 more lynx track occurrences that we encountered opportunistically within the study site, increasing the total number of validation locations to 62. The literature-based model performed better (see Table S1 in Information S1) and was selected for the remaining analyses (see Table S2 in Information S1).

Landscape Characteristics

The landscapes within both regions had similar amounts of suitable land cover (Table 1), but different levels of habitat fragmentation. The high-cover landscape in Chapleau consisted of 41.9% suitable land cover with an effective mesh size of 87.3 km². In Mississagi, the high-cover landscape had approximately the same amount of suitable land cover (42.8%), but a much larger mesh size of 258.6 km². The landscapes with a moderate amount of suitable land cover in the Chapleau and Mississagi regions had similar amounts of suitable land cover (35.0% and 31.9%, respectively) and mesh sizes (22.4 km² and 23.1 km², respectively).
The low-cover landscapes had similar amounts of suitable land cover (20.6% in Chapleau, 25.5% in Mississagi); however, the landscape in the Chapleau region was substantially more fragmented (Meff 5.7 km²) in comparison to the matched landscape in the Mississagi region (Meff 18.6 km²). This indicated that although the two landscapes had similar amounts of suitable land cover, the Chapleau landscape was generally more fragmented.

Lynx Occurrence

Where possible, lynx selected areas with higher amounts of high-quality habitat (structural connectivity) at the 25 km² spatial scale (Table 2). There was a positive correlation between the amount of suitable habitat and lynx occurrence areas in both high and moderate levels of suitable land cover in the Chapleau region, and in the landscape with a moderate level of suitable land cover in the Mississagi region, at the 25 km² area (Figure 2). In both regions, on landscapes with high and moderate levels of land cover, lynx consistently occurred in areas with at least 50% habitat and avoided areas with <30% habitat (Figure 3). However, in the low-cover landscapes, approximately half of lynx occurrences had less than 30% habitat at a spatial scale of 25 km². These trends were consistent across both regions. At a spatial scale of 100 km², there were no correlations between the amount of suitable habitat and lynx occurrence at any level of suitable land cover (Table 3). Once the effect of habitat amount was removed, there was no correlation between habitat fragmentation (Meff.r) and lynx occurrence on any landscape, at either spatial scale (Table 2). Lynx occurrence patterns differed across landscapes, but the trends were consistent across regions. In the landscapes with moderate levels of suitable land cover, the top model included both the proportion of suitable habitat and habitat fragmentation as predictors of lynx occurrence. However, only the proportion of suitable habitat had a positive association with lynx occurrence; Meff.r had a negative correlation with lynx occurrence, indicating that lynx selected areas with higher amounts of fragmentation (Figure 4; Table 3). In the high- and low-cover landscapes in both regions, there was no significant correlation between lynx occurrence patterns and the proportion of suitable habitat or effective mesh size (Table 3).

Sensitivity Analysis

We examined three crossing distances in the PatchMorph output to determine whether crossing distance was either underestimated or strongly influential on lynx occurrence. We tested crossing distances of 200 m, 600 m, and 1000 m, and used standardized regression coefficients from single-variable logistic regressions to determine the level of influence. Effective mesh size coefficient estimates ranged from −0.02 to 0.04, with no visible trend; none of the coefficients were significant (p values ranged from 0.228 to 0.589). Increasing the estimated crossing distance did not affect model fit.

Discussion

Our results confirm that lynx are not sensitive to habitat fragmentation at low levels of suitable habitat, and also suggest that lynx display considerable flexibility in habitat selection patterns, supporting the 'flexibility hypothesis'. We showed that in landscapes with moderate and high amounts of suitable land cover (30-35% and >40%, respectively), lynx occurred in areas with at least 30% available habitat and largely avoided areas below that threshold, while being unaffected by habitat fragmentation.
Although this finding is consistent with the 'threshold hypothesis', this hypothesis also predicts that lynx would be more sensitive to habitat fragmentation on landscapes where suitable land cover was low. However, our results showed that on landscapes where suitable land cover was limited (<30%), lynx did not select areas with concentrated habitat, and lynx occurrence patterns were not well correlated with either habitat amount or habitat fragmentation, instead supporting the 'flexibility hypothesis'. Overall, we detected a threshold at which lynx occurrence patterns changed, but instead of being more sensitive to habitat fragmentation at low levels of suitable habitat, lynx displayed more flexibility in habitat selection on these landscapes. This indicates that lynx habitat choice is complex and either involves factors beyond mere resource preference, or selection of different land cover types in these areas.

Patterns of Occurrence

As predicted by the literature-based habitat suitability model, lynx were most likely to occur in sapling-stage coniferous forest. These results are consistent with other literature on lynx habitat ecology [17,18] and also reflect snowshoe hare habitat preferences [19,20]. Road density and annual snowfall were not important for describing lynx occurrence in Ontario. This finding contrasts with previous work (e.g., [37,38]) but is consistent with a companion occupancy model within our study area [46], suggesting that these factors differentially affect lynx occurrence across their range and may be threshold-dependent. We surmise that low variation in snowfall patterns and the low abundance of major highways, as well as low road density, in our study site may have accounted for the disparate results. Lynx occurrence, as determined by snow tracks across the study area, also supported this model, signifying that our model is generally robust. We recommend the use of this habitat suitability model as a tool to evaluate the effect of future forest condition on resource availability for Canada lynx in Ontario.

Flexibility in Response to Habitat Loss

Our results suggest that when approximately 30-35% of the landscape consists of suitable land cover, there is a strong correlation between the amount of suitable habitat and lynx occurrence. While this trend was not significant at higher levels of land cover at a landscape scale, in landscapes with both high and moderate amounts of suitable land cover, lynx occurrence patterns suggest that lynx preferred areas with at least 50% suitable land cover. While lynx will occur in some areas with less than 50% available suitable land cover, lynx consistently avoided areas with less than 30% suitable habitat when suitable land cover was abundant at the landscape level. This is consistent with previous work on small mammals and birds showing that habitat occupancy dynamics are determined by species-specific tolerance thresholds [7,11,13]. When suitable land cover comprised only 20-25% of the landscape, our results showed that there was no correlation between lynx occurrence and habitat amount, indicating some flexibility in habitat requirements on these landscapes. In contrast, when suitable habitat was limited, lynx did not avoid areas with less than 30% land cover and were not associated with areas with more than 50% suitable habitat, despite the local availability of these areas. It is possible that when suitable habitat is scarce, lynx can survive provided that hares, or suitable alternate prey, remain available on the landscape.
This speculation is supported by observations of resident snowshoe hares occupying small patches (<10 ha) in fragmented landscapes [50,51] and the ability of lynx to include alternate prey items when hares are limited [31,52]. This pattern of labile specialization has been recently documented in birds, where the most specialized species tend to generalize their habitat selection patterns following disturbance [53]. However, the results of our study contrast with previous work by Swihart et al. [5,50], who showed that some species have greater sensitivity to habitat change at range margins. This suggests that there is a wide range of responses to habitat alteration and that further work is necessary to clarify the impact of landscape change on lynx.

Habitat Fragmentation

Our results show that there is no correlation between lynx occurrence patterns and habitat fragmentation (Meff.r). Meff.r (mesh size) measures the connectivity of a landscape, independent of habitat loss, so a negative coefficient indicates a positive relationship with habitat fragmentation. Our results suggest that there is a weakly negative relationship with Meff.r at moderate levels of suitable land cover, which is the opposite of what we predicted. In addition, the results from our sensitivity analysis suggest that increasing the crossing distance does not improve the measure of habitat fragmentation for lynx. While some studies have suggested that habitat fragmentation may only be important when habitat amount is below 30% [9-11], our results do not support this hypothesis. At low levels of suitable land cover there was no relationship between habitat fragmentation and lynx occurrence, which is consistent with studies showing that the effects of habitat loss are generally far greater than the effects of habitat fragmentation [7,11]. Our results concerning habitat loss and habitat fragmentation are especially applicable to forestry-dominated landscapes, where silvicultural practices can result in marked shifts in habitat features for a variety of species, including higher densities of prey species such as snowshoe hares [54]. Therefore, we recommend that planning decisions regarding lynx consider the amount of total available habitat, which should generally improve the chances of population persistence, while also benefitting overall landscape structure and function. This point is especially relevant at the southern range periphery of lynx, where habitat loss is contributing to the northward regression of the species' distribution [26].

Conclusion

Our results highlight the importance of examining habitat fragmentation independently of habitat loss to isolate and understand the impacts of each process [7,8]. While previous research suggests that closely related species, such as bobcats and Iberian lynx, are sensitive to habitat fragmentation [28,29], our results show that habitat loss, not fragmentation, drives occurrence patterns for Canada lynx. The effects of habitat loss and fragmentation may be species-specific, so we recommend that this hypothesis be further evaluated in both specialist and generalist species to improve our understanding of the impacts of these widespread processes. This is especially necessary for carnivores, which are considered to be sensitive to both habitat loss and fragmentation [55].
Ultimately, as rates of habitat loss and fragmentation continue to increase on a global scale, this and additional research can improve conservation efforts by ensuring that recovery strategies focus on the appropriate management action.

Supporting Information

Information S1: Comparison of expert-opinion and literature-based models. Table S1: Performance metrics for the expert-opinion and literature-based habitat suitability models for Canada lynx occurrence in Ontario, Canada. Receiver operating characteristic was based on 62 presence/absence locations near Temagami, Ontario. Bold text indicates better model performance. Table S2: Expert-opinion and literature-based model weights for all variables used in the development of the habitat suitability model for Canada lynx in Ontario, Canada. Models were based on a survey using the analytic hierarchy decision-making process to rate the importance of different variables. The expert-opinion model is based on the replies of nine lynx researchers; the literature-based model is based on the responses of 4 unbiased observers after having reviewed four research papers on lynx habitat selection. Figure S1.
Examination of homogeneity of selected Irish pooling groups

Flood frequency analysis is a necessary and important part of flood risk assessment and management studies. Regional flood frequency methods, in which flood data from groups of catchments are pooled together in order to enhance the precision of flood estimates at project locations, are an accepted part of such studies. This enhancement of precision is based on the assumption that catchments so pooled together are homogeneous in their flood-producing properties. If homogeneity is assured, then a homogeneous pooling group of sites leads to a reduction in the error of quantile estimates, relative to estimators based on a single at-site data series alone. Homogeneous pooling groups are selected by using a previously nominated rule, and this paper examines how effective one such rule is in selecting homogeneous groups. In this paper a study, based on annual maximum series obtained from 85 Irish gauging stations, examines how successful a common method of identifying pooling group membership is in selecting groups that actually are homogeneous. Each station has its own unique pooling group, selected by use of a Euclidean distance measure in catchment descriptor space, commonly denoted dij, with a minimum of 500 station years of data in the pooling group. It was found that dij could be effectively defined in terms of catchment area, mean rainfall and baseflow index. The study then investigated how effective this selected method is in selecting groups of catchments that are actually homogeneous, as indicated by their L-CV values. The sampling distribution of L-CV (t2) in each pooling group and the 95% confidence limits about the pooled estimate of t2 are obtained by simulation. The t2 values of the selected group members are compared with these confidence limits both graphically and numerically. Of the 85 stations, only 1 station's pooling group members have all their t2 values within the confidence limits.

Introduction

It is widely accepted that a short annual maximum (AM) flood series is inadequate for the estimation of design floods of large return periods. Regionalization (FSR, 1975), i.e. pooling analysis (FEH, 1999), is one of the possible methods used to provide a framework for design floods. In pooling analysis flood data are pooled from other gauging stations that possess similar hydrological behaviour to the at-site station. A very common way to implement regional/pooling analysis is the index flood method proposed by Dalrymple (1960). The estimation of QT, the T-year flood, based on this approach involves derivation of a growth curve which shows the relation between XT and the return period T, where XT = QT/QI and QI is the index flood at the site of interest. Generally the mean (FSR, 1975) or median (FEH, 1999) of the at-site AM flood series is taken as the index flood. It is assumed that the XT-T relation is the same at all sites in a homogeneous pooling group. The identification of a homogeneous pooling group is therefore important in pooling analysis. Lettenmaier et al. (1987), Stedinger and Lu (1995) and Hosking and Wallis (1997), among other researchers, have demonstrated that a successful pooling analysis requires a homogeneity criterion to be satisfied. However, very recently Kjeldsen and Jones (2009) have approached this in a different way. An examination of homogeneity is normally used to assess whether a proposed group of sites is homogeneous or not.
Examination of the homogeneity of regions/pooling groups is usually based on a statistic that relates to the formulation of a frequency distribution model, e.g. the coefficient of variation, CV (Wiltshire, 1986; Fill and Stedinger, 1995), and/or the skew coefficient, g, their L-moment equivalents (Chowdhury et al., 1991; Hosking and Wallis, 1997), or dimensionless quantiles such as the 10-year event (Dalrymple, 1960; Lu and Stedinger, 1992). Hosking and Wallis (1993, 1997) proposed homogeneity tests based on L-moment ratios such as L-CV alone (H1) and L-CV and L-skewness jointly (H2), which are widely used in flood frequency analysis, although the former is recommended by these authors for having better power to discriminate between homogeneous and heterogeneous regions. Very recently, a similar conclusion has been drawn by Viglione et al. (2007), who compared several homogeneity tests. They stated that the H1 test is ahead of all others when the L-skewness is lower than 0.23. They further concluded that H2 as a homogeneity test lacks power. These findings certainly indicate that the heterogeneity among the sites in a group is mainly due to variations in the sample L-CVs. However, one of the main assumptions of these tests is that the true regional distribution is kappa. For that reason and others, Hosking and Wallis (1997) recommended that though the heterogeneity statistic is constructed like a significance test it should not be used in that way. They (Hosking and Wallis, 1997, p. 70) further stated that ". . . a significance test is of doubtful utility anyway, because even a moderately heterogeneous region can provide quantile estimates of sufficient accuracy for practical purposes. Thus a test of exact homogeneity is of little interest." In this paper a graphical way of examining the homogeneity of a pooling group is presented which is based on L-CV, i.e. t2. The main idea behind the approach is the comparison of the variability of t2 from each site in the pooling group with that expected about the un-weighted average pooled t2, supposing the differences between sites to be due to sampling error. The pooling groups are identified by the Region of Influence (ROI) approach. The population distribution is GEV (with k = −0.05, k = 0.0, k = +0.03), rather than kappa as suggested by Hosking and Wallis (1997), and was chosen on the basis of the GEV's descriptive ability for the annual maximum data series of Ireland. The outline of the paper is as follows: the next section describes the procedure used to obtain growth factors and flood quantiles in the context of flood frequency pooling analysis. This is followed by a description of the procedures used to select pooling variables for the similarity distance measure (dij) in the context of the formation of pooling groups using the ROI approach. A graphical way of examining the homogeneity of pooling groups obtained by the ROI approach is then presented. Then the analysis of the examination procedure is summarised, and finally a selected number of heterogeneous pooling groups are reviewed with the help of box-plots of catchment descriptors.

Estimation of pooled growth factors and flood quantiles

The growth factor XT is the factor which, when multiplied by the index flood QI, gives the flood magnitude of return period T, QT, as in Eq. (1):

QT = XT QI (1)

The relationship between XT and T is often referred to as the growth curve. When a growth curve is obtained by pooling the information from the sites of a pooling group, it is called the pooled growth curve.
Qmed, the median of the annual maximum series, is used as the index flood in this study. The pooled growth curve is obtained using the approach based on the method of L-moments. The L-moments developed by Hosking (1990) are based on the probability weighted moments (PWMs) introduced by Greenwood et al. (1979). With this approach the derivation of a growth curve in a pooling group involves the following key steps:

1. computation of at-site and pooled L-moment ratios;
2. selection of a suitable form of distribution and estimation of its parameters by the method of L-moments.

L-moments are calculated and then the dimensionless L-moment ratios t2 and t3 are calculated for each site. Pooled L-moment ratios for the target site, i, are then computed using the following equation:

tR = Σj wij t(j) / Σj wij (2)

where t(j) is the L-moment ratio (either t2 or t3) for the j-th most similar site and wij is a weighting term. Weights can be related to a site's record length and/or a site's dij value. Recently a more complex way of assigning weights was proposed by Kjeldsen and Jones (2009), although they state that only a little is gained in the flood estimation procedure by using the new approach. In this study wij is taken as 1. The choice of unweighted averages was guided by the observations made by Hosking and Wallis (1997, p. 90), namely "The calculation of regional averages by weighting the sites proportionally to their record lengths is not essential. If the region is exactly homogeneous, then a good approximation of the variance of t(i) is proportional to n(i) − 1, and in this case weighting the sites proportionally to their record lengths minimizes the variance of the regional average tR. If the region is heterogeneous, it is possible that weighting proportionally to record length may give undue influence to sites that have frequency distributions markedly different from the region as a whole and that also have long records".

The Generalised Extreme Value (GEV) has been selected as the pooled distribution function. The selection of the GEV distribution is explained in Sect. 4. The values tR2, tR3 are equated to expressions for these quantities written in terms of the distribution's unknown parameters (expressed in dimensionless form), and the resulting equations are solved for the unknown parameter values. The dimensionless GEV growth curve (XT) is defined by two parameters k and β:

XT = 1 + (β/k) [(ln 2)^k − (ln(T/(T − 1)))^k] (3)

where T is the return period. The two parameters k and β are estimated from the sample L-CV, t2, and sample L-skewness, t3, as follows (Hosking and Wallis, 1997):

k = 7.8590c + 2.9554c² (4)

c = 2/(3 + t3) − ln 2/ln 3 (5)

β = k t2 / [t2 (Γ(1 + k) − (ln 2)^k) + Γ(1 + k)(1 − 2^(−k))] (6)

where Γ denotes the complete gamma function.

Formation of pooling groups using the Region of Influence (ROI) approach

The Region of Influence (ROI) approach to the formation of a pooling group is considered to be the most appropriate and meaningful way of delineating a pooling group. The technique, developed by Burn (1990), involves the identification of a region of influence, i.e. a separate pooling group, for each gauging station in a region. The identification of a pooling group consists of selecting stations that are hydrologically similar to the site of interest. Similarity is generally measured by a Euclidean distance measure in catchment descriptor space. The effective identification of a pooling group in a ROI approach is governed by two important criteria: the choice of appropriate site descriptors as pooling variables and the size of a group in terms of the number of sites and station years included.
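Before moving further into the ROI construction, the L-moment computations just described can be condensed into the following sketch of Eqs. (2)-(6); the three record arrays are invented, the weights are set to 1 as in the study, and the growth-curve form is the median-indexed one reconstructed above.

    import numpy as np
    from math import gamma, log

    def sample_l_ratios(x):
        """Sample L-CV (t2) and L-skewness (t3) via probability weighted moments."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        j = np.arange(1, n + 1)
        b0 = x.mean()
        b1 = np.sum((j - 1) / (n - 1) * x) / n
        b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
        l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
        return l2 / l1, l3 / l2

    def gev_growth_factor(t2, t3, T):
        """Median-indexed GEV growth factor XT from pooled t2, t3 (Eqs. 3-6)."""
        c = 2.0 / (3.0 + t3) - log(2) / log(3)
        k = 7.8590 * c + 2.9554 * c**2
        g = gamma(1 + k)
        beta = k * t2 / (t2 * (g - log(2)**k) + g * (1 - 2**(-k)))
        return 1 + (beta / k) * (log(2)**k - log(T / (T - 1))**k)

    # Hypothetical annual maximum series (m3/s) for a pooling group of 3 sites.
    sites = [np.random.default_rng(s).gamma(4.0, 25.0, size=40) for s in (1, 2, 3)]
    ratios = np.array([sample_l_ratios(x) for x in sites])
    t2_pooled, t3_pooled = ratios.mean(axis=0)   # unweighted average (wij = 1)

    print(f"pooled t2={t2_pooled:.3f}, t3={t3_pooled:.3f}")
    print(f"X100 = {gev_growth_factor(t2_pooled, t3_pooled, T=100):.2f}")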
Burn (1990) investigated a number of options for determining a threshold value, based on the dij values, to define a cut-off for the inclusion of stations in the ROI method for a target site. However, a more practical way of choosing an appropriate size of pooling group was presented by FEH (1999). They investigated a range of pooling group sizes and decided on the adoption of the 5T rule, namely that the total number of station years of data to be included when estimating the T-year flood should be at least 5T. The adoption of such a rule was a compromise: if too few stations are included, the precision of the QT estimate is sacrificed, whereas if far too many stations are included, then the assumption of homogeneity may be compromised. Hosking and Wallis (1997), however, show that a small departure from homogeneity can be tolerated, so that having too few stations included may be less desirable than having slightly too many. They also suggested not using more than 20 sites in a group, as little gain in the accuracy of quantile estimates is obtained by using more than about 20 sites. Recently, Kjeldsen and Jones (2009) found that a fixed pooling group consisting of 500 station years performed well for a range of return periods. In relation to identifying site descriptors as pooling variables, careful consideration is necessary as to which catchment descriptors are to be used in a ROI method of pooling analysis. In the next subsection an investigation into selecting pooling variables for the Irish case is described in detail.

Choice of catchment descriptors and effectiveness of ROI distance measures

The general form of the similarity measure used for selecting members of a pooling group is defined by

dij = [ Σ (k = 1..n) Wk (Xk,i − Xk,j)² ]^(1/2) (7)

where dij is the weighted Euclidean distance from site j to site i; n is the number of attribute variables; Xk,i is the value of the k-th variable at the i-th site and Wk is the weight applied to attribute k, reflecting its relative importance. The subscript i denotes the subject site and the subscript j denotes the j-th pooled site (a minimal worked example of this selection rule is sketched after the list below). In choosing a distance measure dij, a decision has to be made about which catchment descriptors are to be included in the distance measure, what weightings are to be applied to them, and whether logarithms or other transformations are to be used. The FEH (1999) provided a number of useful maxims for choosing a distance measure. It recommended not using at-site flood statistics (e.g. CV, g) as pooling variables, because this might well result in groups consisting of sites that have experienced similar floods in recent history; neither could such site flood statistics be used for ungauged catchments. Seasonality of the flood response (e.g. timing and regularity of flood events) has also been considered (Burn, 1997; Cunderlik and Burn, 2006) as a similarity measure. Seasonality statistics are obtained from observed flood series; therefore, a similarity measure based on these could not be used for ungauged sites without additional assumptions. For Irish conditions two sets of catchment descriptors have been selected as potential pooling variables:

- similar variables as used in FEH, i.e. AREA (catchment area), SAAR (standard average annual rainfall), BFI (baseflow index) and FARL (index of flow attenuation by reservoirs and lakes);
- on the assumption that homogeneity is strongly dependent on CV or L-CV, those catchment descriptors that could predict L-CV best were identified and a selection of these were used to form dij.
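As flagged above, here is a minimal sketch of ROI pooling-group selection under Eq. (7), combined with a station-years size rule of the kind discussed earlier; the descriptor table and record lengths are invented, and the weights are the trial-and-error values reported later in this section.

    import numpy as np

    # Hypothetical catchment descriptors [lnAREA, lnSAAR, BFI] per station,
    # with each station's record length in years.
    X = np.array([[5.2, 7.1, 0.55], [4.8, 7.3, 0.60], [6.0, 6.9, 0.42],
                  [5.5, 7.0, 0.50], [4.9, 7.2, 0.58], [5.8, 6.8, 0.45]])
    record_years = np.array([35, 42, 28, 50, 31, 44])
    W = np.array([1.5, 1.0, 0.1])           # weights for lnAREA, lnSAAR, BFI

    def pooling_group(subject, min_station_years=500):
        """Rank stations by weighted Euclidean distance dij (Eq. 7) and add
        them until the group holds the required number of station years."""
        d = np.sqrt(np.sum(W * (X - X[subject])**2, axis=1))
        order = np.argsort(d)               # the subject itself comes first (d = 0)
        years = np.cumsum(record_years[order])
        n_needed = int(np.searchsorted(years, min_station_years) + 1)
        return order[:n_needed]

    # Toy threshold of 100 station years, since the toy network is small.
    print(pooling_group(subject=0, min_station_years=100))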
For selecting the final set of pooling variables, the FEH used the pooled uncertainty measure (PUM), a weighted average of the squared differences between each at-site growth factor and the pooled growth factor, measured on a logarithmic scale. In this part of the study a simulation procedure is used instead, because far fewer stations (85) were available than the 602 stations used in the UK study. The first objective is to find which combinations of the FEH descriptors listed in Table 2 lead to pooling groups that are most effective at exploiting the information about the flood distribution contained in the pooling groups. The simulation procedure uses the GEV distribution for data generation, which is considered representative of Irish conditions. Hosking and Wallis (1997, p. 93) advised against using the observed sample L-moment ratios as the population L-moment ratios of the simulated region, because this would yield a simulated region with much more heterogeneity than the actual data. Castellarin et al. (2001) addressed this issue by using a region of influence approach to estimate the at-site population values of $t_2$ and $t_3$: a similarity measure based on at-site flood statistics is used to form a group of sites for a subject site, and the pooled estimates of $t_2$ and $t_3$ for the group are taken as the site's population values. Later, Gaál et al. (2008) adopted this approach in their study. A similar approach is used here, with a similarity measure defined as a Euclidean distance in $(t_2, t_3)$ space,

$$ d_{ij} = \left[\left(t_{2,i} - t_{2,j}\right)^2 + \left(t_{3,i} - t_{3,j}\right)^2\right]^{1/2} \qquad (8) $$

which is independent of the descriptor variables being considered in Table 2. A pooling group is formed for each site using Eq. (8) and the pooled $t_2$ and $t_3$ are estimated using Eq. (2). The estimated pooled values of $t_2$ and $t_3$ are then used as population values for each site in step 2 of the simulation procedure. The simulation procedure does not consider the implications of intersite correlation among sites in a pooling group, because Hosking and Wallis (1997, p. 127) found this to be of very little consequence. The steps of the simulation procedure for selecting variables are as follows:

1. The gauging stations in the subject site's pooling group are identified using the $d_{ij}$ values of Eq. (7) for a given set of catchment descriptors, with a minimum of 5T station years of data in the pooling group.

2. Random samples are drawn from GEV populations for the subject site and for each site in the pooling group. For each site the sample size is taken as equal to the length of the observed historical record at the site, and the parameters are estimated from the site's population $t_2$ and $t_3$ values obtained using the procedure described above, as in Castellarin et al. (2001) and Gaál et al. (2008).

3. The $t_2$ and $t_3$ values are obtained for each sample in the pooling group and the average of these is calculated to represent the pooled $t_2$ and $t_3$ values.

4. The pooled $t_2$ and $t_3$ values are then used to determine the pooling group's GEV growth curve parameters $k$ and $\beta$ using Eqs. (4) and (6).

5. The subject site's $\hat{x}_T$ value is calculated for T = 50 and 100 years respectively, using Eq. (3).
6. Steps 2 to 5 are repeated 10 000 times to provide 10 000 values of $\hat{x}_T$, and RMSE$_T$ and BIAS$_T$ are calculated for the subject site by the following equations:

$$ \mathrm{RMSE}_T = \left[\frac{1}{MS}\sum_{i=1}^{M}\sum_{s=1}^{S}\left(\frac{\hat{x}_{T}^{\,i,s} - x_{T}^{\,i}}{x_{T}^{\,i}}\right)^2\right]^{1/2} $$

$$ \mathrm{BIAS}_T = \frac{1}{MS}\sum_{i=1}^{M}\sum_{s=1}^{S}\frac{\hat{x}_{T}^{\,i,s} - x_{T}^{\,i}}{x_{T}^{\,i}} $$

where $\hat{x}_{T}^{\,i,s}$ is the estimated T-year growth factor at site $i$ at the $s$-th repetition; $x_{T}^{\,i}$ is the assumed true T-year growth factor at site $i$; $M$ is the number of sites in the pooling group and $S$ is the number of repetitions.

RMSE$_T$ and BIAS$_T$ as defined in the simulation procedure have been evaluated at the 50- and 100-year return periods for each site. The eight combinations of the four variables listed in Table 2 have been tested, based primarily on RMSE$_T$. In all, 85 stations have been considered for the study; the data sets used are summarized in Table 1. For each of these sites, a pooling group was selected from the 85 stations. Initially, all weights $W_k$ in Eq. (7) were set to unity. Figures 1 and 2 show, in box-plot form, the variation in the 100-year RMSE and BIAS values respectively for the different sets of catchment descriptors used in Eq. (7). Table 2 summarises the corresponding mean RMSE$_{100}$ and RMSE$_{50}$ values for the different sets of pooling variables; it shows that the numerical measures of effectiveness vary very little between rows. The set of two variables lnAREA and lnSAAR and the set consisting of the single variable lnAREA performed best in terms of providing the lowest RMSE$_{100}$ values. In terms of RMSE$_{50}$, the set consisting of lnAREA and lnSAAR comes second best to the set consisting of lnAREA on its own. Overall, the set of variables comprising lnAREA and lnSAAR may be considered the most suitable set of pooling variables for Irish conditions. However, if there is also a desire to incorporate another physical catchment effect, then BFI could be included with these two. While inclusion of just one or two catchment descriptors may indeed be best, there is an intuitive attraction in also representing some descriptor of catchment response, even at the cost of a small apparent loss in effectiveness. This could be of relevance in engineering investigations where differences in catchment behaviour are considered important by the investigator. An extension to this investigation with varying values of the weights $W_k$ in Eq. (7) was also carried out, particularly for the set of variables lnAREA, lnSAAR and BFI, although the results of all variations examined are not reported in detail here. An automatic search procedure was not used, but it was found by trial and error that the weights 1.5, 1.0 and 0.1 for lnAREA, lnSAAR and BFI respectively gave RMSE$_{100}$ = 15.22 and RMSE$_{50}$ = 12.81, which offer small improvements on the $W_k$ = 1.0 values used in the calculations for this set of variables. The trial-and-error approach involved assigning a selection of weights, varying from 0 to 3, to each of the quantities lnAREA, lnSAAR and BFI.
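The simulation procedure of steps 1-6 can be sketched in Python as follows. The pooling group's population $(t_2, t_3)$ values and record lengths are invented for illustration, the repetition count is reduced for speed, and the GEV helper functions simply repeat the formulas of Eqs. (3)-(6); dimensionless GEV samples with median 1 are assumed, consistent with growth factors standardised by Qmed.

```python
import numpy as np
from math import log, gamma

rng = np.random.default_rng(42)

def gev_k_beta(t2, t3):                       # Eqs. (4)-(6)
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2
    beta = k * t2 / (t2 * (gamma(1 + k) - log(2) ** k)
                     + gamma(1 + k) * (1 - 2 ** (-k)))
    return k, beta

def x_T(T, k, beta):                          # Eq. (3)
    return 1.0 + beta / k * (log(2) ** k - (-log(1 - 1 / T)) ** k)

def gev_sample(n, k, beta):
    """Dimensionless GEV sample with median 1 (so alpha equals beta)."""
    xi = 1.0 - beta / k * (1.0 - log(2) ** k)
    return xi + beta / k * (1.0 - (-np.log(rng.uniform(size=n))) ** k)

def t2_t3(x):
    """Sample L-CV and L-skewness via unbiased PWM estimators."""
    x = np.sort(x); n = len(x); j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = ((j - 1) * x).sum() / (n * (n - 1))
    b2 = ((j - 1) * (j - 2) * x).sum() / (n * (n - 1) * (n - 2))
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l2 / l1, l3 / l2

# invented population (t2, t3) and record lengths; first entry = subject site
pool = [(0.20, 0.21, 35), (0.22, 0.19, 40), (0.18, 0.23, 28), (0.21, 0.20, 45)]
true_x100 = x_T(100, *gev_k_beta(0.20, 0.21))

S, est = 1000, []                             # 10 000 repetitions in the study
for _ in range(S):
    ratios = [t2_t3(gev_sample(n, *gev_k_beta(t2, t3))) for t2, t3, n in pool]
    t2R, t3R = np.mean(ratios, axis=0)        # steps 3-4: pooled averages
    est.append(x_T(100, *gev_k_beta(t2R, t3R)))

rel = (np.array(est) - true_x100) / true_x100
print(f"RMSE_100 = {100 * np.sqrt(np.mean(rel ** 2)):.1f}%, "
      f"BIAS_100 = {100 * rel.mean():.1f}%")
```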
In the second approach, a set of catchment descriptors was identified through the use of regression models of L-CV on the catchment descriptors, and these descriptors were then also used as potential pooling variables. In the search for a best regression model, both log-transformed and non-transformed variants of the catchment descriptors and of L-CV were used. The best regression model for L-CV containing three catchment descriptors was found to be based on MSL, FORMWET and ARTDRAIN, where MSL is the mean stream length, FORMWET is a catchment wetness index analogous to PROPWET in the FEH (1999), and ARTDRAIN is an arterial drainage index defined as the percentage of catchment area affected by arterial drainage improvements. These descriptors were identified from a pool of twenty-five catchment descriptors made available by the Irish Office of Public Works. The R² value of the best available model is a modest 29%. The identified catchment descriptors were also assessed by the simulation procedure described above. The RMSE$_T$ values for T = 50 and 100 are listed in Table 3 for six combinations of the three variables. The set of two variables lnMSL and ARTDRAIN and the set consisting of the single variable lnMSL performed best in terms of providing the lowest RMSE$_{100}$ values. Both approaches described above provide similar outcomes in terms of RMSE$_{100}$. This may be partly due to the relatively weak relations identified for predicting L-CV (R² = 0.29). A regression of L-CV on the other set's catchment descriptors (AREA, SAAR, BFI, FARL) also yields a weak relation for predicting L-CV (R² = 0.21). Since both sets of catchment descriptors can predict L-CV only weakly, and both approaches are similar in RMSE, it is concluded that neither approach is clearly superior to the other.

Procedure for examination of homogeneity

A homogeneity test is used to assess whether a proposed group of sites is homogeneous or not. A homogeneous group of sites leads to a reduction in the error of quantile estimators relative to estimators based on single at-site data series alone, which is the main goal of a regional flood frequency analysis. A homogeneity test was introduced by Dalrymple (1960); other tests were introduced by Wiltshire (1986), Lu and Stedinger (1992), Fill and Stedinger (1995) and Hosking and Wallis (1993, 1997). A simulation procedure, with graphical presentation of key results, is applied in this study to examine the homogeneity of the pooling groups formed using the ROI technique. GEV distributions with three different shape parameter values (k = −0.05, k = 0.0 (EV1), k = +0.03) are used in the simulation procedure. The GEV, and its special case the EV1, have a history of usage in Ireland since publication of the Flood Studies Report (FSR, 1975, p. 173-174, Table 2.38, Fig. 2.14, Vol. I). More recently, a national study sponsored by the Office of Public Works, Dublin, based on annual maximum flood data of 110 stations with an average record length of 37 years (a quarter of them between 50 and 55 years), indicated that the GEV and EV1 distributions are suitable parents for the majority of Irish flood series (Das, 2010, Ch. 3). This conclusion is based on visual examination of probability plots and numerical scores assigned to them, on classical goodness-of-fit tests, and on L-moment ratio diagrams such as Fig. 10, which shows that the GEV/EV1 looks more suitable as a parent than the other three-parameter distributions tested, such as the generalised logistic and the three-parameter lognormal. While the four-parameter Kappa distribution has been recommended by Hosking and Wallis (1997) as a parent for simulation studies, this choice sometimes proved problematical because of numerical difficulties and estimation failures during the parameter estimation process, and as a result the GEV was selected as the parent distribution in this study. The steps of the simulation procedure are as follows:
1. The gauging stations in the subject site's pooling group are identified using $d_{ij}$ values obtained from the following equation, with a minimum of 500 station years of data in the pooling group and satisfying the 5T rule for the 100-year quantile:

$$ d_{ij} = \left[1.5\left(\ln\mathrm{AREA}_i - \ln\mathrm{AREA}_j\right)^2 + 1.0\left(\ln\mathrm{SAAR}_i - \ln\mathrm{SAAR}_j\right)^2 + 0.1\left(\mathrm{BFI}_i - \mathrm{BFI}_j\right)^2\right]^{1/2} \qquad (1) $$

The weights 1.5, 1.0 and 0.1 are those reported in Sect. 3 above.

2. $t_2$ is obtained for each site in the pooling group and the unweighted average of these is calculated to represent the pooled average $t_2$, denoted $t_2^R$.

3. Random samples are drawn from GEV distributions with three different shape parameter values (k = −0.05, k = 0.0 (EV1), k = +0.03), using $t_2^R$ as the population value, to construct a 95% confidence interval for $t_2^R$. These population shape parameters correspond to L-skewness values of approximately 0.21, 0.17 and 0.15 respectively, this being the range relevant for Ireland. The sample size is taken as equal to the average record length of the observed historical records at the gauging sites, and the parameter values are estimated from the value of $t_2^R$. The 95% confidence interval is constructed assuming that the sample $t_2^R$ values are normally distributed. While L-CV values may not be perfectly normally distributed, Viglione's (2010) results show that the departure from normality is not severe for the range of L-CV and L-skewness values observed in Irish conditions; hence the normality assumption was made in the calculation of the confidence intervals.

4. The number of stations in the selected pooling group whose $t_2$ values fall outside the confidence interval (an attribute termed here m) is counted and reported. It is also noted whether the $t_2$ of the subject site lies outside the confidence limits (CL).

Analysis

The procedure described above is applied for each of the 85 stations, each station having its own unique pooling group. The sample values of $t_2$ for the stations in the group, $t_2^R$ and the CL about $t_2^R$ are displayed in Fig. 3 for five stations. Summary statistics of the procedure are given in Table 4. In addition, the heterogeneity measures H1 and H2, described in Appendix A, are calculated for each group and a summary of these measures is reported in Table 5. The following observations and findings are obtained from the analysis:

1. Table 4 lists how many stations m fall into the categories of zero values outside the CL, one value outside the CL, two values outside the CL, three values outside the CL, or more than three outside the CL. In all, for the case of EV1, only one station (1%) was in the first category, while 52% of stations were in the last category.

2. The m values for GEV (k = −0.05) and GEV (k = +0.03) are broadly similar. From Table 4 it is seen that as the shape parameter increases from k = −0.05 to +0.03, the number of cases where m > 3 increases from 33 to 47.

3. In 27 groups (32% of groups) the $t_2$ of the subject site was outside the CL for the case of EV1. The corresponding numbers for the negative-shaped GEV and the positive-shaped GEV are 27 and 28 respectively. All 27 stations of the EV1 case were also among those in the latter cases.

4. Table 5 summarises the results of H1 and H2 for the 85 pooling groups. 22% of groups have an H1 value lower than 4.0. The percentage increases to 86% when the same criterion is applied to H2, which is very similar to what was found for the UK pooling groups (FEH, 1999, p. 176).
5. The range of $t_2$ values, max $t_2$ − min $t_2$, was calculated for the 85 pooling groups; the average range was 0.11. Figure 4 shows a plot of H1 values against the ranges of L-CV values for the 85 groups. The plot shows an upward trend, implying that a high H1 value can be expected when the $t_2$ values in a pooling group have a large range, as can be expected in the absence of homogeneity. A similar plot for H2 in Fig. 5 shows no obvious trend, implying that a low H2 value may be obtained for a pooling group which is in fact heterogeneous.

6. Figure 6 shows a plot of H1 against m. Different values of H1 occur for a particular m value, which is reasonable as the memberships of the groups in those cases differ even though they may overlap. However, the average values, marked by triangles in the plot, show an increase of H1 with m; i.e. the higher the number of group members' $t_2$ values outside the CL, the higher the value of H1 that can be expected. If an H1 value less than 4.0 is considered a good criterion for testing homogeneity, then in this approach it is required that fewer than m = 2 values fall outside the confidence limits, i.e. m/N ≤ 0.15.

7. Figure 7 shows a plot of H1 against $d_{ij,\max}$ of the pooling groups, where $d_{ij,\max}$ is defined as the distance associated with the group member which just qualified for membership of the pooling group. The plot shows an upward trend to some extent, implying that a low H1 value can be expected for a low $d_{ij,\max}$ value, which is an implicit assumption of a ROI pooling scheme. However, in many cases low $d_{ij,\max}$ values, even those below 1.0, can lead to a high value of H1, suggesting that the assumption may not always hold, particularly under Irish conditions. A similar plot between $d_{ij,\max}$ and m is drawn in Fig. 8 and leads to a similar conclusion: while a low value of $d_{ij,\max}$ is desirable, even low values of $d_{ij,\max}$ can occur where a significant number of group members' $t_2$ values fall outside the CL.

Investigation of selected heterogeneous pooling groups

The investigation was carried out on the 27 cases in which the pooling groups are heterogeneous and the $t_2$ of the subject site lies outside the confidence limits. The investigation focuses mainly on identifying any inappropriateness among group members that would cause a pooling group to be heterogeneous. In this context, the FEH (1999, Vol. 3, Fig. 16.9) documented a detailed review system, providing an example. That system mainly considers two attributes: (1) whether the subject site has any special qualities that need to be taken into account, and (2) whether any of the pooled sites has catchment descriptors that are particularly different from those of the subject site. Sites in the pooling group can be investigated using several characteristics, including at-site flood statistics and catchment descriptors. Statistics within a pooling group, such as the discordancy measure (Hosking and Wallis, 1997) and the distance measure $d_{ij}$, can also be used to investigate sites in the group. In this part of the study four catchment descriptors, namely AREA, SAAR, BFI and FARL, and the distance measure $d_{ij}$ are taken into account in the investigation process. The first three catchment descriptors, AREA, SAAR and BFI, were already used for the initial selection of sites for a pooling group.
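Where the 'examination of homogeneity' chart of Sect. 4 is used in such a review, the underlying confidence-interval check can be sketched as follows for the EV1 case. The pooled $t_2^R$, the average record length and the group's $t_2$ values are invented for illustration; the EV1 parameterisation uses the standard relations $\lambda_1 = \xi + 0.5772\alpha$ and $\lambda_2 = \alpha\ln 2$.

```python
import numpy as np

rng = np.random.default_rng(7)

def ev1_sample(t2R, n):
    """EV1 (Gumbel) sample whose population L-CV equals t2R."""
    alpha = 1.0
    xi = alpha * (np.log(2) / t2R - 0.5772156649)   # fixes lambda2/lambda1 = t2R
    return xi - alpha * np.log(-np.log(rng.uniform(size=n)))

def sample_t2(x):
    """Sample L-CV via unbiased PWM estimators."""
    x = np.sort(x); n = len(x); j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = ((j - 1) * x).sum() / (n * (n - 1))
    return (2 * b1 - b0) / b0

# 95% CI about the pooled t2R, normality assumed as in Sect. 4
t2R, n_avg = 0.20, 37                       # invented pooled value; avg record
sims = np.array([sample_t2(ev1_sample(t2R, n_avg)) for _ in range(5000)])
lo, hi = t2R - 1.96 * sims.std(), t2R + 1.96 * sims.std()

group_t2 = np.array([0.17, 0.19, 0.21, 0.24, 0.31, 0.18, 0.22])  # invented
m = int(((group_t2 < lo) | (group_t2 > hi)).sum())
print(f"CI = ({lo:.3f}, {hi:.3f}), m = {m} of {len(group_t2)} outside")
```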
In the investigation procedure, sites are reviewed with the help of box-plots and a summary table and, in some cases, with the help of the 'examination of homogeneity' chart described in Sect. 4. Four box-plots of the catchment descriptors AREA, SAAR, BFI and FARL are constructed to show the subject site in the context of the pooling group. For each of these catchment descriptors, the placement of the numerical values for sites in the pooling group is displayed against a backdrop of the relative frequency over the 85 sites considered in this study; this facilitates the identification of any particularly inappropriate sites. In the summary table, statistical properties such as the $t_2$, $t_3$ and $d_{ij}$ values of sites in a pooling group are listed, as shown in Fig. 9. The investigation procedure for the pooling group of station no. 6031 is described in detail as an example.

An example: station no. 6031 on the River Flurry

There are 17 sites in the pooling group, of which eight, including the subject site, have $t_2$ values that fall outside the CL, indicating a strongly heterogeneous group. The heterogeneity measures H1 and H2 for the group are 7.66 and 2.82 respectively. Examination of the box-plots in Fig. 9 reveals that the catchment area of the subject site is small (46.2 km²), lying very near the 5th percentile mark on the box-plot of AREA. The site is not positioned at the centre of the group of gauged catchments in the pooling group: there are 5 sites to its left and as many as 11 sites to its right. The group certainly includes some sites whose catchment areas are large compared to that of the subject site, which may lead to $d_{ij}$ values exceeding 1.0 in several cases. The $d_{ij}$ values for the last three sites are around 1.3, and these sites are among the seven other sites that fall outside the CL. Examination of the summary table on the right-hand side of Fig. 9 shows that the subject site has large values of both $t_2$ and $t_3$, the largest among the group members. Hence the conclusion can be drawn that the pooling group in its present structure may not be ideal for subject site 6031; leaving out some sites at the bottom of the table might be considered in this context. The large number of sites, 17, in the pooling group is also a possible contributor to heterogeneity. The remaining 26 of the 27 heterogeneous pooling groups were also investigated: in 8 cases the heterogeneity was due to an exceptionally large or small catchment area relative to the other group members; 3 cases were caused by exceptionally large or small SAAR values and 3 cases by exceptionally large or small BFI values; a further 5 cases were caused by extremely low FARL values relative to other pooling group members; and in 7 cases there was no obvious single cause of heterogeneity.

Fig. 9. Four box-plots and a summary table for investigating a pooling group. The subject site is marked with a ×. Small dots denote sites included in the pooling group. The underlying distribution of each catchment descriptor is shown in the box-plots; each box-plot gives the minimum and maximum values (+) and the percentiles for the frequencies 0.05, 0.25, 0.5, 0.75 and 0.95. The summary table lists the record length, $t_2$, $t_3$ and $d_{ij}$ values for the 100-year pooling group of subject station 6031.
Conclusions

In the context of a ROI pooling-group-based flood frequency estimation procedure, the most suitable form of the distance measure $d_{ij}$ for Irish conditions was sought. The ROI method with the suitably identified distance measure, Eq. (1), was used to form pooling groups for the subject sites. A simple graphical approach to examining the homogeneity of the pooling groups was presented, comparing the sampling variability of pooled estimates of L-CV with the L-CV values of the pooling group members. The approach also allows the L-CV of the subject site to be viewed in the context of the pooling group members, which is important in the case of a site-specific pooling group. Most of the Irish pooling groups exhibited a degree of heterogeneity among the group members. A graphical approach for reviewing a heterogeneous pooling group was also presented in this context. The following conclusions were obtained from the above studies:

1. It was found that the distance measure $d_{ij}$ could be satisfactorily defined in terms of lnAREA and lnSAAR, but if there is a desire to incorporate another physical catchment effect then BFI could be included with these two. The $d_{ij}$ can also be defined in terms of lnMSL and ARTDRAIN.

2. A visual approach for identifying the homogeneity of ROI pooling groups has been presented, and the results compared with the heterogeneity measures H1 and H2 obtained for those groups. Overall the results show that even with a carefully considered ROI procedure, such as using the distance measure of Eq. (1), it is not certain that perfectly homogeneous pooling groups will be identified. As a compromise, it is recommended that a group containing more than two values of L-CV outside the 95% confidence limits of that variable, i.e. m/N > 0.15, should not be considered homogeneous.

3. A thorough investigation of 27 heterogeneous pooling groups was carried out. In many cases, special attributes of the subject site, such as extremely large or small values of AREA, SAAR or BFI, or exceptionally low values of FARL, contributed to the degree of observed heterogeneity. It is deemed necessary that the subject site be positioned near the centre of the group of gauging sites, on the respective catchment descriptor axes, to which it is hydrologically similar; but in some cases fulfilment of that condition does not guarantee that the pooling group is homogeneous.
Severe Hyponatremia due to Levofloxacin Treatment for Pseudomonas aeruginosa Community-Acquired Pneumonia in a Patient with Oropharyngeal Cancer

Hyponatremia (serum Na levels of <135 mEq/L) is the most common electrolyte imbalance encountered in clinical practice, affecting up to 15-28% of hospitalized patients. This case report concerns a middle-aged man with severe hyponatremia due to the Syndrome of Inappropriate Antidiuretic Hormone Secretion related to four possible etiological factors: glossopharyngeal squamous cell carcinoma, cisplatin treatment, right basal pneumonia with Pseudomonas aeruginosa, and treatment with Levofloxacin. It discusses a rare complication of common conditions and of a common treatment. To our knowledge this is the first case of hyponatremia related to Levofloxacin and the second related to fluoroquinolones.

Introduction

Hyponatremia (serum Na levels of <135 mEq/L) is the most common electrolyte imbalance encountered in clinical practice, affecting up to 15-28% of hospitalized patients. Both moderate and especially severe hyponatremia (Na <125 mEq/L) found in newly admitted hospital patients are linked with a significantly elevated in-hospital mortality of 28%, compared to 9% in normonatremic, matched controls [1]. Despite the frequency of this condition, the etiologic diagnosis and the management of hyponatremia are neither easy nor optimal [2]. This may be attributable to the diversity of underlying disease states associated with the condition and, until the last few years, a lack of targeted treatments.

Case Presentation

A 59-year-old male was brought to the emergency department of the Emergency County Hospital of Cluj-Napoca presenting with dizziness, psychomotor agitation, delirium with visual and auditory hallucinations, and temporal and spatial disorientation. The patient was known for chronic tobacco use (40 cigarettes/day) and heavy alcohol consumption. History taking revealed that his symptoms had started 1 month previously, when he was hospitalized in the Pneumology Unit suffering from right chest pain, dyspnea, fatigue, productive cough, 38 °C fever, and shaking chills. He was diagnosed with Pseudomonas aeruginosa right basal pneumonia based upon findings on physical examination and paraclinical explorations (chest CT scan, blood cultures, bronchoscopy with cytology, and aspirate cultures). Consequently, antibiotic treatment was initiated, at first with ceftriaxone 2 g/day for 7 days, but as there were no clinical signs of resolution, the treatment was switched to amikacin and colistin for the following 10 days. On admission to our service he was still on antibiotic treatment for pneumonia, this time with Levofloxacin 500 mg per day, which had been started 4 days previously. His medical history included a glossopharyngeal squamous cell carcinoma cT3N3Mx treated with combined chemotherapy (docetaxel, cisplatin, and capecitabine) and radiation therapy. The clinical examination at admission revealed an overweight patient (BMI = 27 kg/m²) with warm and moist skin and psychomotor agitation. A cerebral CT scan was performed in the Emergency Room and did not show any focal masses or other pathological findings that could explain the acute onset of the neurologic manifestations. The lab exams (Table 1) showed a low serum sodium concentration of 114 mEq/L, indicating severe hyponatremia, with correspondingly low serum osmolality of 233.9 mOsm/kg, and normal creatinine, urea, and uric acid.
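As a plausibility check on such laboratory values, the measured osmolality can be compared with the calculated serum osmolality from the standard formula 2[Na] + glucose/18 + BUN/2.8 (glucose and BUN in mg/dL). The sketch below assumes illustrative normal glucose and BUN values, since only the measured osmolality is reported in the case.

```python
def calculated_osmolality(na_meq_l, glucose_mg_dl, bun_mg_dl):
    """Calculated serum osmolality (mOsm/kg): 2*[Na] + glucose/18 + BUN/2.8."""
    return 2 * na_meq_l + glucose_mg_dl / 18 + bun_mg_dl / 2.8

# Na = 114 mEq/L from the case; glucose and BUN are illustrative normal
# values, since the report gives only the measured osmolality (233.9).
print(calculated_osmolality(114, 90, 14))   # ~238 mOsm/kg -> hypotonic
```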
His urinary Na was high (40 mEq/L) and his central venous pressure was normal (5 cmH₂O). Given the severity of the hyponatremia, treatment with hypertonic saline 3% was started in the Emergency Room and his existing Levofloxacin treatment was stopped. He was admitted to the Internal Medicine Department for surveillance. During the following 24 h the patient's serum Na rose to 120 mEq/L (a rise of 6 mEq/L). Hypertonic saline treatment was then stopped and replaced with fluid restriction (800 mL/day), as the patient did not meet the exclusion criteria for this treatment. Within the next 72 h serum Na increased to 131 mEq/L and his symptoms subsided. Adrenal insufficiency and severe hypothyroidism were excluded through laboratory tests. The etiology of the euvolemic hypotonic hyponatremia diagnosed in our patient was therefore likely to be the Syndrome of Inappropriate Antidiuretic Hormone Secretion (SIADH), fulfilling both the essential and the supplemental criteria for the diagnosis [3]. Looking back into his past medical file, as shown in Figure 1, we discovered that our patient had been hyponatremic ever since being diagnosed with neck cancer. After chemotherapy the Na levels decreased slowly by 10 mEq/L, the patient remaining asymptomatic. During the acute episode of pneumonia and Levofloxacin treatment, the hyponatremia suddenly worsened and the patient became symptomatic. Mild fluid restriction (maximum 1500 mL/day), normal Na intake, and the avoidance of any medication that could affect sodium levels allowed the Na concentration to be maintained close to the physiological limit for more than 6 months after discharge (Figure 1).

Discussion

In our case, we identified at least four possible causes of SIADH and hyponatremia: the glossopharyngeal squamous cell carcinoma, the cisplatin treatment, the right basal pneumonia with Pseudomonas aeruginosa, and the treatment with Levofloxacin. Each of these etiological factors is discussed separately below. Hyponatremia is common in malignant solid tumors (up to 25% of all patients), either as part of the underlying disease or due to drug side effects [4]. SIADH has been reported in >3% of patients with head and neck cancer, most often in patients with lesions in the oral cavity and less frequently in those with lesions in the larynx, nasopharynx, hypopharynx, or other sites [5], frequently due to ectopic secretion of antidiuretic hormone (ADH). Treatment of lung cancer with chemotherapeutic agents such as platinum derivatives or vinca alkaloids may lead to hyponatremia. The development of hyponatremia during cisplatin therapy bears special mention in our case: cisplatin stimulates ADH secretion to cause SIADH, but it can also directly damage renal tubules and interfere with sodium reabsorption, which in rare cases may lead to hyponatremia via salt-wasting nephropathy [4]. Besides malignancies, SIADH may be caused by a variety of other conditions, including CNS disorders, pulmonary disorders (e.g., tuberculosis, pneumonia, and acute respiratory failure), HIV infection, prolonged strenuous exercise, and drugs [3]. A recent German study reported a high incidence of hyponatremia (31.8%) among patients with community-acquired pneumonia [6]. The degree of hyponatremia severity seemed to correlate with the patients' comorbidities (such as chronic heart failure, chronic renal disease, diabetes mellitus, and malignancies), higher severity of pneumonia, and higher inflammatory biomarkers. However, the association of these comorbidities with sodium levels was weak and disappeared after inclusion in a multivariate model.
The relationship between hyponatremia and higher pneumonia severity probably reflects the presence of hypovolemia, severe sepsis, and subsequent activation of vasopressin and natriuretic peptide secretion [6]. A decreased ability to reduce urine osmolality and excrete water loads, together with increasing levels of ADH in the absence of antibiotic treatment [7], might be incriminated in pneumonia-induced hyponatremia. As for the antibiotic treatment, it seems that Levofloxacin could be a cause of SIADH. There are no data in the literature regarding hyponatremia due to Levofloxacin, but there are case series reporting this side effect of quinolones. Fluoroquinolones have the potential to cause SIADH; the likely mechanism is that quinolones cross the blood-brain barrier and stimulate the γ-aminobutyric acid and N-methyl-D-aspartate receptors, which leads to the synthesis and release of antidiuretic hormone [8,9]. An objective causality assessment using the Naranjo scale could have been useful to demonstrate the link between Levofloxacin use and hyponatremia. Finally, the link between alcohol ingestion and hyponatremia is worth mentioning. Hyponatremia may be one of the laboratory signs of chronic alcohol abuse (present in up to 17% of alcoholics), along with frequent presentations to the ER department for recurrent unexplained falls, poorly controlled hypertension, and/or gastrointestinal symptoms [10]. The mechanisms of hyponatremia in alcoholics include hypovolemia, pseudohyponatremia due to alcohol-induced hypertriglyceridemia, beer potomania syndrome, and rarely SIADH or cerebral salt wasting. In our case, heteroanamnesis revealed cessation of alcohol ingestion 1 year prior to presentation, so chronic alcoholism was not considered a cause of the hyponatremia [10,11]. However, a CNS disorder associated with alcohol consumption (Wernicke encephalopathy or Korsakoff dementia) was initially suspected as the cause of the delirium at presentation, so high-dose thiamine was administered intravenously for 5 days. Apart from the fact that persistent hyponatremia may affect patients' quality of life, it also seems to be a negative prognostic marker for overall survival: large studies including patients with malignant disease demonstrate that a serum Na level <130 mEq/L is independently associated with a 2-2.5-fold greater risk of in-hospital mortality [12].

Conclusions

In summary, we presented a case of hyponatremia of multifactorial etiology that was promptly investigated and corrected. To our knowledge this is the first case of hyponatremia related to Levofloxacin and the second related to fluoroquinolones. The patient's outcome at six months was good, bearing in mind the comorbidities.
Effort in Multitasking: Local and Global Assessment of Effort

When performing multiple tasks in succession, self-organization of task order might be superior to externally controlled task schedules, because self-organization allows optimizing processing modes and thus reduces switch costs, and it increases commitment to task goals. However, self-organization is an additional executive control process that is not required if task order is externally specified, and as such it is considered time-consuming and effortful. To compare self-organized and externally controlled task scheduling, we suggest assessing global subjective and objective measures of effort in addition to local performance measures. In our new experimental approach, we combined characteristics of dual-tasking settings and task-switching settings and compared local and global measures of effort in a condition with free choice of task sequence and a condition with cued task sequence. In a multitasking environment, participants chose the task order while the task requirements of the not-yet-performed tasks remained the same. This task preview allowed participants to work on the previously non-chosen items in parallel and resulted in faster responses and fewer errors in task-switch trials than in task-repetition trials. The free-choice group profited more from this task preview than the cued group when considering local performance measures. Nevertheless, the free-choice group invested more effort than the cued group when considering global measures. Thus, self-organization in task scheduling seems to be effortful even in conditions in which it is beneficial for task processing. In a second experiment, we reduced the possibility of task preview for the not-yet-performed tasks in order to hinder efficient self-organization. Here neither local nor global measures revealed substantial differences between the free-choice and the cued task-sequence condition. Based on the results of both experiments, we suggest that global assessment of effort in addition to local performance measures might be a useful tool for multitasking research.

INTRODUCTION

In everyday life, multiple cognitive task requirements are omnipresent and occur in many different contexts. For example, teachers concurrently observe the behavior of problematic pupils while they are engaged in explaining a mathematical procedure or a text passage. Surgeons have to concurrently track the vital functions of the patient while they are engaged in opening the ribcage. Working in an office requires performing cognitive tasks like planning the budget or evaluating the outcome of the work group, and these tasks might be interrupted by phone calls, incoming emails, or colleagues/students knocking at the door. And, finally, managing a household with children permanently requires engaging and disengaging in several tasks like planning a dinner, looking out for sources of danger for just-walking children, and answering the questions of older children. Thus, multiple cognitive task requirements are a societal fact and one can hardly avoid them. While multitasking is generally costly, there might be factors that help people cope better with multitasking. Self-organization of task choice and task scheduling certainly is such a factor. However, self-organization is also a process that requires additional control.
Consequently, we reason that while some conditions might be beneficial for multitasking in terms of better task performance, this will come at a cost in terms of more effort required. Therefore, we focus not only on local performance measures, but also on global measures of effort in terms of subjective and objective measures. As a first step, we aim to present a new experimental approach to compare conditions in which participants themselves organize how to cope with multiple cognitive task requirements with conditions in which task organization is externally controlled and thus task scheduling is pre-determined. In the experimental task, we combine properties of dual-tasking and task-switching paradigms by allowing for parallel processing of different tasks in a protocol that requires rapid alternation between tasks. By this, we present an experimental set-up that allows the independent assessment of local and global costs of self-organization processes during multitasking. While the process of self-organization itself has been investigated elsewhere, the focus of this research is on a possible trade-off between local and global measures. The comparison between self-organized and externally controlled task scheduling is empirically and theoretically especially interesting because different lines of psychological research allow opposing hypotheses. First, research in PRP studies that allowed participants to freely choose task order revealed that several factors influence whether participants perform Task 1 or Task 2 first, for example the expectation of stimulus order or the repetition of task order (De Jong, 1995), the distribution of stimulus onset asynchronies between Task 1 and Task 2 stimuli (Miller et al., 2009), the duration of the central processing stages of Task 1 and Task 2 (Ruiz Fernández et al., 2011), or the duration of motor responses (Ruiz Fernández et al., 2013). These findings are in line with the assumption of a higher-order control process that determines task order and preparation for the tasks (e.g., De Jong, 1995; Luria and Meiran, 2003; Szameitat et al., 2006). Recent theorizing assumed that task order in conditions with varying distributions of stimulus onset asynchronies might be chosen in a way that optimizes task performance (see Miller et al., 2009, for an optimization account). Consequently, conditions that enable self-organized task order might be advantageous compared to conditions with externally controlled task order. Similarly, recent research in the voluntary task switching paradigm suggests that self-organization might be advantageous over cued task switching. In voluntary task switching settings, participants freely choose which task to perform in the next trial, whereas in cued task switching settings, a cue is presented prior to each target instructing participants which task to perform in the next trial. Switch costs, that is, the RT difference between task-switch and task-repetition trials, are smaller in voluntary task switching settings compared to cued task switching (e.g., Arrington and Logan, 2005; Mayr and Bell, 2006; Demanet and Liefooghe, 2014). And finally, within applied work psychology, researchers predict that self-organized task performance is superior to fixed task scheduling. With regard to this assumption, the self-regulation theory of Hacker (e.g., Hacker, 2009) claims that goals and plans are relevant to regulate one's actions.
Further, commitment to these goals seems especially high if workers participate in the goal-setting process (e.g., Pritchard et al., 1993; Kleinbeck and Schmidt, 2004). Indeed, there are even norms that request holistic and complete work activities (ISO 6385, EN DIN 29241-2, cited in Hacker, 2009). Thus, from this perspective self-organized task scheduling might likewise be considered favorable compared to fixed task scheduling. On the other hand, however, self-organization is an extra cognitive process that is not required if task order is externally controlled. This process of choosing tasks or task order to optimize performance is conceptualized as an executive control process (e.g., Logan, 1985; Norman and Shallice, 1986) and as such it is considered time-consuming and effortful. However, previous research might not be ideal to investigate this process for several reasons. For instance, task choice often takes place prior to stimulus presentation. We argue that if this is the case, participants cannot actually choose task order to optimize their performance, because they do not know the exact task requirements for the respective trial. Instead, participants have to base their task choice on the rather broad requirements of the tasks in general. Consider the case of a participant choosing between a math task and a letter identification task. If the participant has to decide prior to stimulus presentation, she can recall some rather abstract features of the task (e.g., in the math task, I have to do simple computations) and will probably base her decision on her assessment of the anticipated level of difficulty. We will call this a proactive, memory-driven strategy. In contrast, if the participant has to decide after stimulus presentation, she can compare the different items based on specific features (e.g., in the math task, I have to subtract 4 from 9). Consequently, her decision will be based on her assessment of the actual level of difficulty. We will refer to this as a reactive, stimulus-driven strategy. Arguably, it is much easier to choose the 'best' task (in terms of time and effort invested in solving the task) if participants can apply a stimulus-driven strategy, because the memory-driven strategy has two disadvantages. First, with a memory-driven strategy, optimization is restricted to abstract features and therefore preparation will necessarily be limited. Second, with limited time and more than two alternatives, memory recall will be rather demanding, making it less likely that participants will use this strategy at all. Further, we aim to assess effort and performance in a multitasking setting that actually requires participants to schedule task order. For this, we opted for a setting in which the items of the non-chosen tasks remained the same. Thus, participants are not simply able to apply a stimulus-driven strategy to choose the item that is easiest to perform in a given trial, but they are able to choose task order such that the order of items is optimized (at least to some degree for the respectively next items of each task). By this, our setting also better resembles everyday multitasking, because we require participants to schedule the task order while the affordances (e.g., stimuli for a task) of the not-yet-performed tasks remain the same. To summarize, while previous research investigated task choice mostly for memory-driven strategies, we believe that this actually limited the possibilities to optimize the task choice process.
Therefore, the present research aimed to maximize the possibility that participants make use of a stimulus-driven strategy. More precisely, we reason that presentation of specific items prior to task choice will most likely facilitate performance, because participants can select (and solve) tasks based on their actual difficulty. However, if task choice is optimized (in terms of better performance), the cognitive effort related to this optimization process cannot be assessed with traditional measures of task performance. Indeed, research in PRP and task switching settings usually did not assess the overall effort of handling the experimental requirements. Consequently, to arrive at a more complete picture of task optimization, it is crucial to assess both global and local measures when comparing the effort of self-organized and externally controlled task scheduling. In this paper, we consider both local performance measures and global effort measures to compare self-organized and externally controlled task scheduling when confronted with multiple cognitive requirements. Indeed, there are many studies comparing voluntary and cued task switching performance while assessing local performance data, yet the results are ambiguous. Although studies unequivocally revealed that switch costs are smaller in voluntary task switching settings compared to cued task switching (e.g., Arrington and Logan, 2005; Mayr and Bell, 2006; Demanet and Liefooghe, 2014), the result patterns diverge when considering the overall RT level. Mayr and Bell (2006) and Demanet and Liefooghe (2014) observed faster RTs for voluntary task choice compared to cued task order, yet Arrington and Logan (2005) reported, in four of five experiments, slower RTs for voluntary task choice compared to cued task order (see also Chen and Hsieh, 2013). In addition, studies comparing performance in voluntary and cued task switching settings usually did not control for task transition effects. In cued task switching settings, task order is random and consequently the frequency of task switches is approximately 50% (for settings with two tasks). Yet, if participants freely choose tasks, the frequency of task switches usually differs from chance, because participants repeat tasks more often than expected from random task choices (e.g., Arrington and Logan, 2004, 2005; Mayr and Bell, 2006; Yeung, 2010; Reuss et al., 2011). A fair comparison of performance in voluntary and cued task switching settings requires controlling for task transition effects. This is achieved in the current study by applying a yoked design: for each participant in the free-choice condition, there is one participant in the cued condition who is cued to perform the tasks in exactly the same task order as chosen by the participant in the free-choice condition (for a similar attempt see Panepinto, 2010; Masson and Carruthers, 2014). To conclude, experimental settings in voluntary task switching studies differ from everyday task performance and do not foster task scheduling that optimizes performance. Participants have to choose which type of task to perform without knowing the exact task requirements, based on a memory-driven strategy. In addition, the items for the non-chosen tasks do not remain available, while in everyday life not-yet-performed task requirements usually do not change. Indeed, only if the exact task item is known can participants choose a task and/or select task order according to a stimulus-driven strategy, allowing for an optimization of task choices.
EXPERIMENT 1

To compare performance and effort for self-organized versus externally controlled multiple cognitive task requirements, we compared a free-choice group and a cued task switching group in a yoked design. We applied a new experimental paradigm that combines characteristics of PRP and task switching settings. Participants were requested to perform four different tasks: a summation task, a subtraction task, a month distance task, and an alphabetical distance task. For the summation task and the subtraction task, two one-digit numbers had to be added or subtracted. The month distance task required counting the number of months from a start month to an end month; for example, the item "January >> February" required the response 1 and the item "July >> January" required the response 6. The alphabetical distance task required counting the number of letters from a start letter to an end letter; for example, the item "H >> L" required the response 4. The items for the four tasks were chosen such that all items are responded to with one-digit numbers; participants pressed the corresponding numbers on the number pad of a standard keyboard. Each task was presented at a fixed location on the screen (the location-task mapping was counterbalanced between participants). Most importantly, participants simultaneously saw one item for each task (see Figure 1) and each specific item remained on the screen as long as the participant had not yet answered it. Thus, the respectively next items for the four tasks were presented in parallel and consequently participants could operate on the tasks simultaneously (as in PRP studies). Yet, responding to each task was strictly sequential (as in task switching paradigms). In each trial, the currently relevant task first had to be determined. In the free-choice condition, participants themselves indicated which task they chose by a left-hand response. Then a rectangle appeared that surrounded the item of the chosen task to confirm this task choice, and participants responded to the item of this task. After responding, feedback was shown for 1000 ms before the next trial started. A new item was presented at the location of the performed task; the items that had not been responded to remained on the screen. Thus, during the feedback screen participants could use the preview of the items of the non-chosen tasks. We instructed participants in the free-choice condition to choose tasks so as to respond as fast and as accurately as possible, without following a predetermined strategy such as choosing tasks in clockwise order or always alternating between two tasks. To equalize the number of responses in the free-choice and cued task conditions, participants in the cued task condition were requested to press any of the four task keys to start the next trial. Then a rectangle appeared surrounding the item of one task to indicate that this was the currently relevant task. As in the free-choice condition, participants in the cued group were encouraged to respond as fast and as accurately as possible. However, unlike in the free-choice condition, participants did not know which item would be required in the next trial.

FIGURE 1 | Trial sequence. Participants in the free-choice condition choose a task with their left hand and type in the correct digit with their right hand. The trial sequence for participants in the cued condition was similar, except that participants simply started the trial by pressing a key.

Consequently, participants in the free-choice condition could use the preview to work on an item that they were going to choose, while participants in the cued condition could use the preview to work on any of the three remaining items, yet without knowing when this item would be required. Note, however, that even in the cued condition it is perfectly rational to use the preview to work on any of the three remaining items, because each single item remained on the screen until it became cued/relevant.
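A toy simulation of the yoked design is sketched below. The choice rule simulated_choice is a stand-in for a participant's actual stimulus-driven decision (here a simple repetition bias), and all names and parameters are ours; the point is only that a cued participant replays, trial by trial, the task order produced by a free-choice partner, so that both experience identical task transitions while items of non-answered tasks persist.

```python
import random

TASKS = ["sum", "subtract", "month_dist", "letter_dist"]

def new_item(task, rng):
    """Toy stand-in for drawing the next item of a task."""
    return (task, rng.randint(1, 9))

def simulated_choice(last, rng, p_repeat=0.4):
    """Stand-in for a participant's task choice: a simple repetition bias
    instead of a real stimulus-driven decision."""
    if last is not None and rng.random() < p_repeat:
        return last
    return rng.choice([t for t in TASKS if t != last])

def run_block(group, rng, yoked_order=None, n_trials=20):
    items = {t: new_item(t, rng) for t in TASKS}   # one visible item per task
    order, last = [], None
    for trial in range(n_trials):
        task = (simulated_choice(last, rng) if group == "free"
                else yoked_order[trial])           # cued: replay partner's order
        items[task] = new_item(task, rng)          # only the answered item changes
        order.append(task)
        last = task
    return order

rng = random.Random(3)
free_order = run_block("free", rng)                      # free-choice participant
cued_order = run_block("cued", rng, yoked_order=free_order)  # yoked partner
assert free_order == cued_order                          # identical transitions
```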
In order to compare the free-choice and cued task conditions, we considered several global measures to analyze whether the conditions were differentially stressful/effortful. First, the concept of ego-depletion (Baumeister et al., 1998) assumes that self-control and choice processes are resource-consuming and lead to fatigue, that is, impairment in a subsequently required unrelated task. In order to assess fatigue, participants performed a Stroop task that followed the main experiment (e.g., Webb and Sheeran, 2003; Inzlicht et al., 2006; Gailliot et al., 2007). Further, we assessed the amount of subjectively experienced stress on a scale by Eilers et al. (1986), and the amount of payment participants would consider fair for this kind of work (Thaler, 1980). To control for changes in mood, we assessed affect with an explicit rating and with the Implicit Positive and Negative Affect Test (IPANAT; Quirin et al., 2009). In addition, we considered local task performance measures (RTs and error rates) depending on whether participants switched or repeated the task, and we considered the time to start the trial (the time to choose a task in the free-choice condition, and the time to start the trial in the cued condition). To get a combined measure, we additionally computed the total work time, that is, the sum of RT and the time to start the trial. Note that with this design, RT is measured from the onset of the task choice response (free-choice condition) or the onset of the response to start the trial (cued task condition) until the response. Because the items for each task were presented on the screen beforehand, RT does not indicate the core time needed to perform an item; thus, it is necessary to consider the time until participants chose a task/started the trial together with the RT to assess the total work time. Further, we assessed the characteristics of the task choices in the free-choice condition. In addition to the frequency of repetitions, we assessed whether switches occurred within or across task categories. In this regard, we considered the summation and subtraction tasks as one task category and the month distance task and the alphabetical distance task as another task category, because of their similarities regarding stimuli (numbers vs. words/letters) and the required cognitive operation (computation vs. distance assessment).

Method

Participants

Forty-eight participants were paid 10 € for participation. Data of one participant had to be excluded due to technical problems and data of another participant were excluded because the participant did not finish the experiment in the given time slot. To control for task transitions, we also excluded the data of the respectively yoked participants. Thus, data of 44 participants (seven men, two left-handed, 18-56 years) were analyzed. All participants were tested within 2 weeks in sessions that lasted approximately 90 min.
The first 10 participants were assigned to the free-choice group and the next ten were yoked to the first 10 and tested under the cued condition. This procedure was repeated for the remaining participants.

Stimuli

In the main experimental task, target stimuli were presented in white (Courier New, 18 pt) on a black background. The tasks comprised two simple math tasks that required the addition or subtraction of two digits (results ranged from 1 to 9, e.g., "3-2") and two simple counting tasks. In the counting tasks, participants were to indicate either the numerical distance between two letters (with a maximal distance of 6, e.g., "G >> L") or between two calendar months (likewise with a maximal distance of 6, e.g., "March >> January"). Each task comprised 36 different items. In the Stroop task, the color words were the German words for "BLUE," "GREEN," "YELLOW" and "RED," printed in blue, green, yellow, or red. For congruent words, the print color of the word matched the meaning of the word, while for incongruent words the two colors mismatched. Only four combinations of incongruent words were presented to a specific participant, to ensure presentation of individual congruent and incongruent Stroop items in equal frequencies (cf. Melara and Algom, 2003). For this subset of incongruent color words, the assignment of the ink color to the meaning of the color word was counterbalanced across participants.

Main experimental (task switching) task

Participants performed nine blocks and in each block they had to respond to 36 items per task, thus in total to 144 items per block. The first block was considered a training block and was not analyzed. Each task was presented at a fixed location on the screen (counterbalanced between participants). The four task items were presented around a central fixation cross. Importantly, participants simultaneously saw one item for each task (see Figure 1); thus, the respectively next items for the four tasks were presented in parallel and consequently participants could operate on the tasks simultaneously. In the free-choice group, participants indicated their task choice with an overt response (cf. Arrington and Logan, 2005) by pressing the keys "w," "a," "s," or "d" with the index finger of their left hand. Participants answered the tasks with the index finger of their right hand by pressing the numbers 1-9 on the number pad. Participants were instructed to perform the tasks as fast as possible. Regarding task choice, they were asked to choose in each trial the item they wanted, without following a fixed predetermined strategy (such as rotating the task order clockwise) and without choosing the same task more than 2 or 3 times in a row. Figure 1 shows the sequence of events in an experimental trial for the free-choice group. A fixation cross was presented in the middle of the screen, surrounded by the four tasks, until a task was selected with a spatially congruent key press. The selected task was marked with a white frame and the fixation cross was replaced by a matrix of the digits 1 to 9. When a response was registered, the background color of the corresponding digit changed for 1000 ms from black to green in case of a correct response or to red in case of an error. The next trial started directly with the presentation of a new item at the location of the just-performed task; the non-chosen items remained on the screen. Items of a task were administered in random order.
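The item sets described above might be generated along the following lines. This is a hedged reconstruction, since the exact item lists are not reported: the wrap-around for months is suggested by the example "July >> January" = 6, whereas generating letter items without wrapping is our assumption.

```python
import random

MONTHS = ["January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December"]
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def math_item(rng, subtraction=False):
    """Two one-digit operands whose result lies in 1-9, as instructed."""
    while True:
        a, b = rng.randint(1, 9), rng.randint(1, 9)
        result = a - b if subtraction else a + b
        if 1 <= result <= 9:
            return (f"{a}-{b}" if subtraction else f"{a}+{b}", result)

def distance_item(rng, seq, max_dist=6, wrap=True):
    """Start/end pair with a distance of 1-6; months wrap around the year
    ('July >> January' = 6), letters are assumed not to wrap."""
    d = rng.randint(1, max_dist)
    i = rng.randrange(len(seq)) if wrap else rng.randrange(len(seq) - d)
    return (f"{seq[i]} >> {seq[(i + d) % len(seq)]}", d)

rng = random.Random(1)
print(math_item(rng), math_item(rng, subtraction=True))
print(distance_item(rng, MONTHS), distance_item(rng, list(LETTERS), wrap=False))
```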
Whenever a participant had performed all 36 items of a task in a block, the signs "XXXX" were presented at the task location and participants had to choose among the remaining tasks. After each block there was a break, and participants received feedback about the number of errors and the total time it took them to perform all tasks in the last block. When participants felt ready for the next block, they terminated the break. The procedure for participants in the cued group was identical, except that participants did not select tasks by themselves but started a 'random generator' by pressing a start key (the same keys served as start keys as in the free choice group). The presented task and trial sequences, however, were not random but yoked to one of the participants in the free choice group. Stroop task After performing the task switching experiment, participants were instructed to respond to the ink color of a word by pressing the keys 'a,' 'x,' 'l,' or 'm' using the index and middle fingers of their left and right hands. The assignment of the response buttons to the ink colors was counterbalanced across participants. At the start of a trial, a fixation cross was presented for 300 ms, followed by a colored word which prompted the participant to respond as quickly as possible. After 1000 ms, a blank screen was presented until registration of a key press. In case of an incorrect or late response (RT > 1000 ms), an error message appeared for 1000 ms. The next trial started after an intertrial interval of 1000 ms. The Stroop task consisted of four blocks with eight congruent and eight incongruent trials each. Questionnaires To assess explicit affect ratings, participants indicated their current mood by clicking with the mouse cursor on a scale from 0 [very negative] to 100 [very positive] directly before and after the main experimental task (i.e., the task switching part). After performing the Stroop task, participants filled out the "implicit positive and negative affect test" (IPANAT, Quirin et al., 2009). To assess the subjective experience of fatigue and demand, participants answered the "scale to assess subjective experience of stress" (Eilers et al., 1986). Furthermore, we adopted a "compensation demanded" measure, a standard procedure from behavioral economics (e.g., Thaler, 1980; Knetsch and Sinden, 1984), to assess how much payment per hour participants considered a fair compensation for their participation in the experiment. Stroop task After the experiment, participants performed a Stroop task to assess fatigue. For the analysis, the first trial of each block was excluded, and for the RT analyses only correct trials were included. Participants responded slower, F(1,42) = 11.86, p = 0.001, ηp² = 0.220, and made more errors, F(1,42) = 21.29, p < 0.001, ηp² = 0.336, in Stroop incongruent compared to Stroop congruent trials. Most importantly, participants in the free choice condition committed overall more errors than participants in the cued condition, F(1,42) = 4.99, p = 0.031, ηp² = 0.106, while response times did not differ significantly but also did not indicate any speed-accuracy tradeoff, F(1,42) = 2.60, p = 0.114, ηp² = 0.058; see Table 1 for means. Subjectively experienced stress Participants reported more subjectively experienced stress in the free choice condition compared to the cued condition, t(42) = 2.1, p = 0.041.
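The Stroop after-effect analysis above corresponds to a 2 × 2 mixed design (congruency within participants, choice condition between participants). A sketch of such an analysis, with made-up aggregated data and assuming the pingouin package is available, could look as follows.

```python
# Sketch of the 2x2 mixed ANOVA on Stroop performance; data are hypothetical
# per-participant cell means (correct trials only, first trial per block dropped).
import pandas as pd
import pingouin as pg

agg = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "condition":   ["free"] * 6 + ["cued"] * 6,
    "congruency":  ["con", "inc"] * 6,
    "error_rate":  [0.05, 0.14, 0.04, 0.12, 0.06, 0.15,
                    0.03, 0.08, 0.02, 0.07, 0.04, 0.09],
})

# Congruency is the within-participant factor, choice condition the
# between-participants factor.
print(pg.mixed_anova(data=agg, dv="error_rate", within="congruency",
                     subject="participant", between="condition"))
```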
The amount of payment per hour that participants demanded as compensation for a future participation differed between conditions, t(39) = 2.2, p = 0.036 (degrees of freedom differ for this analysis because some participants did not answer this question). Participants in the free choice condition indicated that 25.15 € per hour would be a fair payment for this work, while participants in the cued task group considered 12.48 € per hour a fair payment. Affect In order to test the influence of choice condition on explicit affect while controlling for potential differences in pre-test affect, we used the analysis of covariance approach (Senn, 2006). Post-test affect ratings were entered into a univariate ANCOVA with choice condition (free vs. cued) as the between-participants factor and the pre-test mean mood ratings as the covariate. This analysis revealed no significant difference between groups, F < 1. Implicit affect ratings assessed by the IPANAT did not differ between groups, neither for positive affect nor for negative affect, both |t| < 1 (data of one participant who did not fill out the IPANAT are missing). Taken together, participants in the free choice condition were more fatigued and experienced more stress than participants in the cued condition; yet this was not due to any impact on affect but seems to indicate that this condition is more effortful. Local Task Switching Performance The first trial in each block was not analyzed. Post-error trials (6.3%) and RTs that exceeded more than 2.5 SDs from the cell mean for each condition (4.4%) were removed from all analyses. Additionally, trials with erroneous responses (5.2%) were removed from all analyses (except the analysis of error data). If not stated otherwise, a repeated-measures analysis of variance (ANOVA) with the factors condition (free choice, cued) and task transition (repeat, switch) was used to analyze the data. Task performance (reaction times and errors) Participants in the free choice condition responded faster than participants in the cued condition, F(1,42) = 22.6, p < 0.001, ηp² = 0.35. Participants responded faster in task switch than in task repetition trials, F(1,42) = 17.7, p < 0.001, ηp² = 0.30. In addition, participants made fewer errors in task switch than in task repetition trials, F(1,42) = 4.8, p = 0.03, ηp² = 0.10, and this switch advantage in errors occurred mainly in the free choice group, F(1,42) = 4.1, p = 0.05, ηp² = 0.09. All other effects were not significant (p > 0.45). Task choice times Regarding the time to choose a task/start the trial, participants in the free choice condition took longer than participants in the cued task condition, F(1,42) = 14.5, p < 0.001, ηp² = 0.26, and especially so when they repeated tasks rather than switched tasks, F(1,42) = 26.7, p < 0.001, ηp² = 0.39 for the main effect of task transition, qualified by the switch × condition interaction, F(1,42) = 21.1, p < 0.001, ηp² = 0.33. Total work time When considering the sum of RT and choice time/time to start a trial (see Figure 2), the total work time of participants in the free choice condition and in the cued condition did not differ significantly, F(1,42) = 3.1, p = 0.088, ηp² = 0.07. Total work time was shorter in task switch than in task repetition trials, F(1,42) = 38.4, p < 0.001, ηp² = 0.48, and this switch advantage was larger in the free choice condition than in the cued condition, F(1,42) = 9.7, p = 0.003, ηp² = 0.19.
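The one-sample tests of these switch benefits, reported next, can be sketched as follows; the per-participant values are hypothetical.

```python
# Sketch: per-participant switch benefit (repetition minus switch total work
# time) tested against zero with a two-tailed one-sample t-test.
import numpy as np
from scipy.stats import ttest_1samp

repeat_ms = np.array([2400, 2550, 2380, 2620, 2510])  # hypothetical means
switch_ms = np.array([2000, 2150, 2050, 2200, 2120])

benefit = repeat_ms - switch_ms                        # positive = switch benefit
t, p = ttest_1samp(benefit, 0.0)
print(f"mean benefit = {benefit.mean():.0f} ms, t = {t:.2f}, p = {p:.4f}")
```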
Two-tailed one-sample t-tests against zero revealed switch benefits both for participants in the free choice condition and in the cued condition, t(22) = 6.45, p < 0.001, d = 1.37 and t(21) = 2.36, p = 0.028, d = 0.51. Task choices Overall, participants repeated tasks in 31.6% of the trials. This repetition rate did not differ significantly from the 25% repetition rate that would result if participants chose the task order randomly, t(21) = 1.4, p = 0.168, d = 0.29. To further analyze task choices, we considered only trials in a block for which all four stacks of task items still held items. For this subsample of trials, participants repeated tasks in 26.8% of the trials. This repetition rate did not differ significantly from the 25% rate expected for random task order, |t| < 1. When switching between tasks, participants switched to the similar task in 30.0% of the trials and to the two other, dissimilar tasks in 43.2% of the trials. The switch rate within task categories was significantly above the chance level of 25%, t(21) = 2.54, p = 0.019, d = 0.54, while switches to the two dissimilar tasks occurred less frequently than the 50% expected by chance, t(21) = 1.82, p = 0.083, d = 0.38 (a short sketch of these chance baselines follows below). Discussion The present experiment aimed to elaborate an experimental setting that allows us to identify conditions supporting multitasking. We introduced an experimental set-up that requires task switching but allows parallel processing of alternative task items, to contrast self-organized and externally controlled task switching. In addition to local performance measures, we also assessed global subjective and objective measures of effort. The results revealed a rather interesting data pattern. First, participants responded slower, made more errors, and total work time was larger for task repetition trials than for task switch trials. Thus, in contrast to the usually observed task switch costs (see Kiesel et al., 2010 for a review), here reversed switch costs, that is, switch benefits, emerged. This finding can easily be explained because the items for the non-chosen tasks remained on the screen. Because of this possibility to preview the items for task switches (see Figure 1), participants were able to work on these items while they received feedback for the just performed task. The feedback was given for 1000 ms, and consequently participants had ample time to work on the alternative tasks' items after responding. Second, participants repeated tasks more often than expected by chance. This finding seems at odds with the observation that participants were able to respond faster in task switch than in task repetition trials. Yet, based on typical task switching experiments (for an overview see, e.g., Kiesel et al., 2010; Vandierendonck et al., 2010), we know that task switching requires an effortful reconfiguration process, and participants might avoid this process. We will come back to this issue in the general discussion. Additionally, when participants switched tasks, they more often switched to the similar task category (from the addition to the subtraction task and vice versa, or from the letter to the month distance task and vice versa) than expected by chance. We take this as a hint that participants chose tasks such that task switching was facilitated. Finally, and most interestingly, participants in the free choice condition seemed to be more fatigued than participants in the cued condition, and they subjectively experienced more stress.
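As an aside, the chance baselines used in the task-choice analysis above (25% repetitions, 25% within-category switches, 50% dissimilar switches for random choice among four tasks) can be made concrete with a short sketch; the choice sequences here are hypothetical.

```python
# Sketch: per-participant repetition / within-category / dissimilar transition
# rates, tested against their chance levels with one-sample t-tests.
import numpy as np
from scipy.stats import ttest_1samp

CATEGORY = {"add": "math", "sub": "math", "letters": "dist", "months": "dist"}

def transition_rates(seq):
    pairs = list(zip(seq, seq[1:]))
    rep = np.mean([a == b for a, b in pairs])
    similar = np.mean([a != b and CATEGORY[a] == CATEGORY[b] for a, b in pairs])
    return rep, similar, 1.0 - rep - similar   # repeat, similar, dissimilar

seqs = [["add", "sub", "months", "add", "add", "letters", "months"],
        ["months", "letters", "letters", "sub", "add", "sub", "sub"],
        ["sub", "add", "add", "months", "letters", "months", "add"]]
reps, similars, dissimilars = zip(*(transition_rates(s) for s in seqs))

print(ttest_1samp(reps, 0.25))          # repetition rate vs. 25% chance
print(ttest_1samp(similars, 0.25))      # within-category switches vs. 25%
print(ttest_1samp(dissimilars, 0.50))   # dissimilar switches vs. 50%
```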
The greater fatigue and stress in the free choice condition is at odds with the observation that participants in this condition responded faster in task switch trials than participants in the cued condition. Thus, the objective and subjective assessments of overall effort contradict the performance measures. Usually one would assume that faster responses occur in easier and thus less stressful conditions. Consequently, based on the performance data one might have predicted that participants in the free choice condition would be less fatigued and stressed than participants in the cued group. To account for these findings, we assume that participants in the free choice condition experienced more effort on the global measures because the requirement to schedule tasks in order to respond as fast as possible (i.e., in order to optimize local performance) is demanding and thus induces stress and leads to fatigue. Yet, participants in the free choice condition were faster in task switch trials on the local measures because their task choices enabled them to act more efficiently and thus to take greater advantage of the possibility to preview the items in case of task switches. Before we elaborate more on such a possible trade-off between local performance benefits and global effort costs, we have to consider an alternative explanation of Experiment 1. The task instruction for the free choice group stated that participants should avoid pre-determined strategies (like, for example, rotating task order clockwise) and should not repeat the same task more than 2 or 3 times in a row. Arguably, this is a considerable additional task demand that might explain why participants in the free choice group were more fatigued after the experiment compared to the cued group without this demand. Indeed, previous research on voluntary task switching has shown that instructions to avoid specific choice patterns are cognitively demanding and impair local performance (e.g., Mayr and Bell, 2006). Therefore, it is possible that the global costs in terms of increased fatigue in the free choice group were due to differences in task instructions. Furthermore, it remains unclear whether the long preview in combination with the possibility to freely select tasks resulted in beneficial task performance/more fatigue, or whether free choice alone would have been sufficient to induce these effects. More precisely, it remains to be tested whether self-organization of task selection is effortful (in terms of global costs) even without any local performance benefits. In order to test these assumptions, we decided to run a second experiment with reduced possibility to preview the non-chosen items. EXPERIMENT 2 In Experiment 2, we applied a similar experimental procedure as in Experiment 1, but we now reduced the duration of the feedback after responding to the item of a task to 200 ms. We hypothesized that this massive reduction of the possibility to preview the non-chosen items before participants can choose a task changes task choice behavior, so that free choice of the task sequence no longer supports local performance. Thus, we predicted that in Experiment 2, with reduced possibility to preview the items, local performance in the free choice condition and the cued task switch condition should not differ. In addition, we assumed that if global costs (increased fatigue) result from local performance benefits (total work time), no difference in global costs should emerge in Experiment 2.
Consequently, we hypothesized that the global assessment of effort for participants in the free choice and cued task switching conditions would not differ. In contrast, if global costs result from task choice processes irrespective of a stimulus-based strategy, or if global costs result from the demanding task switch instruction, the global assessment of effort should be increased for participants in the free choice compared to the cued task switching group. Method Participants Forty-eight participants (eight men, three left-handed, 18–56 years) took part in exchange for course credits or 10 € and were analyzed. All participants were tested in sessions that lasted approximately 90 min. The first ten participants were assigned to the voluntary group, and the next ten participants were yoked to the first ten participants and tested under the cued condition. This procedure was repeated for the remaining participants. Stimuli and Procedure Stimuli and procedure were identical to Experiment 1 except for the following. In the main experiment, we reduced the duration of the feedback to 200 ms. When a response was registered, this response was shown in the middle of the screen in a square with a green background in case of a correct response or a red background in case of an error. During this feedback, the items for all four tasks remained on the screen. After 200 ms, the feedback disappeared and a new item appeared at the location of the just chosen task. For the subjective measures, we did not assess the IPANAT in Experiment 2 because this measure was not sensitive in Experiment 1. Stroop task The first trial of each block was excluded, and for the RT analyses only correct trials were included. Participants responded slower, F(1,46) = 23.14, p < 0.001, ηp² = 0.335, and made more errors, F(1,46) = 26.24, p < 0.001, ηp² = 0.363, in Stroop incongruent compared to Stroop congruent trials. Performance of participants in the free choice condition and in the cued condition did not differ, F < 1 for both RT and errors; see Figure 2 and Table 1 for means. Subjectively experienced stress Neither participants' reported stress, t(46) = 1.45, p = 0.15, nor the amount of payment per hour that participants demanded as compensation for a future participation, |t| < 1, differed between conditions. Affect As in Experiment 1, mood ratings did not differ between groups, F < 1. To summarize, participants in the free choice condition and participants in the cued condition did not differ significantly regarding fatigue and experienced stress. Local Task Switching Performance The first trial in each block was not analyzed. Post-error trials (7.3%) and RTs that exceeded more than 2.5 SDs from the cell mean for each condition (4.6%) were removed from all analyses. Additionally, trials with erroneous responses (5.5%) were removed from all analyses (except the analysis of error data). As in Experiment 1, a repeated-measures ANOVA with the factors condition (free choice, cued) and task transition (repeat, switch) was used to analyze the data. Task choice times Regarding the time to choose a task/start the trial, participants in the free choice condition took longer than participants in the cued task condition, F(1,46) = 19.2, p < 0.001, ηp² = 0.29. Yet, task choice times did not differ significantly for task switches and repetitions, F(1,46) = 1.07, p = 0.31, ηp² = 0.023 for the main effect of task transition, and F(1,46) = 1.12, p = 0.30, ηp² = 0.024 for the switch × condition interaction.
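The trial-exclusion pipeline used in both experiments (first trial per block, post-error trials, error trials, and RTs beyond 2.5 SDs of the cell mean) can be sketched as follows; the DataFrame layout is hypothetical.

```python
# Sketch of the trial-exclusion steps described above (hypothetical columns).
import pandas as pd

def preprocess(trials: pd.DataFrame) -> pd.DataFrame:
    t = trials.sort_values(["participant", "trial"]).copy()
    # Flag trials that directly follow an error, before any other exclusion.
    t["post_error"] = t.groupby("participant")["error"].shift().fillna(False).astype(bool)
    t = t[(t["trial_in_block"] > 1) & ~t["post_error"] & ~t["error"]]
    # Trim RTs beyond 2.5 SD of each participant x transition cell mean.
    cell = t.groupby(["participant", "transition"])["rt"]
    z = (t["rt"] - cell.transform("mean")) / cell.transform("std")
    return t[z.abs().fillna(0) <= 2.5]   # keep single-trial cells (std undefined)

trials = pd.DataFrame({
    "participant":    [1] * 6,
    "trial":          [1, 2, 3, 4, 5, 6],
    "trial_in_block": [1, 2, 3, 4, 5, 6],
    "transition":     ["repeat", "switch", "repeat", "switch", "repeat", "switch"],
    "error":          [False, False, True, False, False, False],
    "rt":             [800, 700, 900, 950, 720, 680],
})
print(preprocess(trials))
```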
Total work time When considering the sum of RT and choice time/time to start a trial (see Figure 2), total work time did not differ for participants in the free choice condition and in the cued condition, F(1,46) = 1.49, p = 0.23, ηp² = 0.031. Further, total work time did not differ for task switch and task repetition trials, F(1,46) = 0.046, p = 0.83, ηp² = 0.001, and the switch × condition interaction was not significant, F(1,46) = 0.748, p = 0.39, ηp² = 0.016. Task choices Overall, participants repeated tasks in 50.9% of the trials. This repetition rate is significantly larger than the 25% repetition rate that would result if participants chose the task order randomly, t(23) = 4.32, p < 0.001, d = 0.88. To further analyze task choices, we considered only trials in a block for which all four stacks of task items still held items. For this subsample of trials, participants repeated tasks in 48.5% of the trials. This repetition rate is significantly larger than the 25% rate expected for random task order, t(23) = 3.88, p < 0.001, d = 0.79. When switching between tasks, participants switched to the similar task in 22.8% of the trials and to the two other, dissimilar tasks in 28.6% of the trials. The switch rate within task categories did not differ significantly from the chance level of 25%, |t| < 1, while switches to the two dissimilar tasks occurred less frequently than the 50% expected by chance, t(23) = 5.84, p < 0.001, d = 1.19. BETWEEN-EXPERIMENTS COMPARISON To compare the results of Experiments 1 and 2, we added the between-participants factor Experiment to the respective ANOVAs reported for Experiments 1 and 2. Only effects of interest, i.e., the interactions with Experiment, are reported. Global Measures to Assess Fatigue/Stress For the error rates in the Stroop task, the difference between the free and the cued condition was more pronounced in Experiment 1 (Δ = 15.23%) than in Experiment 2 (Δ = −3.51%), as indicated by the significant interaction between Experiment and group (free, cued), F(1,88) = 4.80, p = 0.031, ηp² = 0.052. While the difference between the free and cued switching groups was stronger for subjectively reported stress in Experiment 1 (Δ = 23.31) compared to Experiment 2 (Δ = 17.12), this difference was not significant, F < 1. However, the difference between the free and cued switching groups in terms of the amount of payment per hour that participants demanded as compensation was significant (Experiment 1, Δ = 12.67 €; Experiment 2, Δ = 0.43 €), F(1,88) = 5.07, p = 0.027, ηp² = 0.056. Local Task Switching Performance Task Performance (Reaction Times and Errors) The difference in task performance for the free and yoked groups did not differ between Experiments (four-way interaction with F < 1). Although there was a tendency for overall switch benefits in Experiment 1 (Δ = 425 ms) and switch costs in Experiment 2 (Δ = 56 ms), irrespective of free/yoked group, this difference was only marginally significant, F(1,89) = 3.14, p = 0.08, ηp² = 0.034. For error rates, the four-way interaction was marginally significant, F(1,89) = 3.14, p = 0.08, ηp² = 0.034, showing a tendency for greater switch benefits for the free compared to the yoked group in Experiment 1 (Δ = 2.71%), with the reverse pattern in Experiment 2, namely a switch benefit for the yoked group (Δ = 0.47%). Task Choice Times The difference in switch costs/benefits for the free compared to the yoked groups was significantly stronger in Experiment 1 compared to Experiment 2, F(1,89) = 5.24, p = 0.024, ηp² = 0.056.
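The Δ values in this comparison are free-minus-cued differences computed separately per experiment; a small sketch with hypothetical per-participant summary scores shows the computation.

```python
# Sketch: free-minus-cued difference (Δ) per experiment from summary scores.
import pandas as pd

scores = pd.DataFrame({
    "experiment": [1, 1, 1, 1, 2, 2, 2, 2],
    "condition":  ["free", "free", "cued", "cued"] * 2,
    "stroop_errors_pct": [18.0, 20.0, 4.0, 5.0, 6.0, 5.0, 8.0, 9.0],  # hypothetical
})

means = scores.groupby(["experiment", "condition"])["stroop_errors_pct"].mean()
delta = means.xs("free", level="condition") - means.xs("cued", level="condition")
print(delta)  # Δ (free minus cued) for each experiment
```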
Follow-up analyses showed that, for task choice times, participants in Experiment 1 had a pronounced switch benefit in the free choice group relative to the yoked group (Δ = 334 ms), while participants in Experiment 2 showed strong switch costs in the free choice group relative to the yoked group (Δ = 251 ms). Total Work Time The difference in switch costs/benefits for the free compared to the yoked groups was significantly stronger in Experiment 1 compared to Experiment 2, F(1,89) = 4.79, p = 0.031, ηp² = 0.051. As indicated by the individual analyses, participants in Experiment 1 showed a pronounced switch benefit in the free choice group relative to the yoked group (Δ = 425 ms), while participants in Experiment 2 showed strong switch costs in the free choice group relative to the yoked group (Δ = 56 ms) (see Figure 3). Discussion In Experiment 2, we assessed local performance measures and global measures of stress and fatigue in a setting that resembled the setting of Experiment 1. Yet, in contrast to Experiment 1, the feedback screen after responding in a trial was presented for only 200 ms before the new item for the just chosen task appeared (while the items for the non-chosen tasks remained the same). This reduction of preview for the non-chosen items led to a rather different response pattern than in Experiment 1. Local performance measures showed no advantage for switch compared to repetition trials. Arguably, a 200 ms preview is not sufficient to facilitate task switching significantly. Additionally, participants in the free choice condition responded faster, yet they took longer to choose a task than participants in the cued condition. When considering the combined measure of total work time, there was no significant difference between the conditions. It seems that participants in the free choice group waited to indicate their task choice such that they were able to respond faster. Yet, overall, participants' choice behavior could not optimize task performance due to the lack of preview. Similarly, global measures to assess fatigue and stress did not differ between the free choice and the cued task switching group. Thus, taken together, there were no significant local differences and no global differences (only one measure was marginally different) between the free choice and the cued group in Experiment 2. This suggests that preview is actually necessary to choose tasks in a way that supports optimized behavior. Please note that this finding also helps to rule out two objections against Experiment 1. First, one might suppose that participants in the free choice condition experienced more effort than participants in the cued condition because they were instructed to choose a task without following a fixed task sequence and without repeating tasks too often. Yet, the instructions on how to choose tasks were the same in Experiments 1 and 2. Thus, the task choice instructions themselves cannot explain differences between the free choice and cued groups. Further, task choice behavior in Experiment 2 revealed that participants repeated tasks more often than expected by chance. Due to the decreased preview, the advantages of switching tasks were reduced, and thus participants preferred the less demanding task repetition option. When participants switched tasks, they did not switch more often than expected by chance to the similar task category.
It thus seems that the reduced possibility to preview likewise reduced the possibility to optimize task scheduling.
FIGURE 3 | Local measures of performance costs displayed as the switch costs/benefits (repetition time − switch time) calculated for the total work time (task choice + task performance) for the free and cued groups of Experiment 1 (preview = 1000 ms) and Experiment 2 (preview = 200 ms). Asterisks indicate significant differences. Error bars indicate the standard error of the mean.
GENERAL DISCUSSION In the present study, we assessed local performance measures as well as global measures of effort in a multitasking setting, comparing free choice of task order and cued task order. Participants in the free choice condition scheduled the task order when switching between four different tasks. After performing an item for one task, participants received feedback either for 1000 ms (Experiment 1) or for 200 ms (Experiment 2) while the items for the non-chosen tasks remained the same. Consequently, participants could use the feedback time as a preview to prepare for the alternative tasks and, in the free choice condition, to choose a task order that optimizes performance. Results in Experiment 1 indicated that participants in the free choice condition were faster than participants in the cued condition, yet global measures of effort revealed that they were more stressed and fatigued after the experiment. In contrast, in Experiment 2, with largely reduced preview, neither local nor global measures differed between the free choice and the cued group. To account for these results, we speculate that there are three mechanisms that interact with each other: (i) advance item processing due to preview, (ii) the reconfiguration required in task switch trials, and (iii) a task choice process (in the free choice condition) that aims to optimize reconfiguration and task processing. First, we suppose that participants responded faster in task switch than in task repetition trials because the preview time allowed them to prepare (and possibly even solve) the next task before the start of a trial. Second, we assume that in addition to the possibility to work in advance on the task-switch items, another process impacts task choice/task performance and prevents frequent task switching. Task switching requires a reconfiguration process to adopt the new task set (see Figure 4). This reconfiguration process is an executive control process and requires cognitive resources (e.g., Rogers and Monsell, 1995; Meiran, 1996; Koch, 2001; Hoffmann et al., 2003; Plessow et al., 2011, 2012). To avoid the cognitive demand related to task switches, participants prefer to repeat a task. Thus, our results seem to be in line with the "law of least mental effort" (Kool et al., 2010, p. 678).
FIGURE 4 | In both the free choice (upper) and the cued condition (lower), preview facilitates responding in task switch compared to task repetition trials. Yet, task switching requires reconfiguration; thus, the advantages of preview are attenuated. In the free choice condition (upper), the process to freely choose a task is effortful and time-consuming, yet it aims to optimize reconfiguration (and possibly) task processing and thus facilitates responding in switch trials in the free choice condition.
Indeed, research in the voluntary task switching paradigm further supports the assumption that participants avoid cognitive demand.
A number of studies revealed that when participants were instructed to choose a task randomly, they usually repeated tasks more often than expected by chance (e.g., Arrington and Logan, 2004, 2005; Mayr and Bell, 2006; Yeung, 2010; Reuss et al., 2011). This repetition bias seems reasonable because in these voluntary task switching experiments, task switch costs emerged. Yet, in the current setting, participants responded faster in task switch trials because the items for the tasks remained on the screen, and because of this preview possibility participants were able to work on the previously non-chosen items (the items that would be chosen in task switch trials) while the feedback screen was presented. Nevertheless, despite responding faster in task switch trials, participants did not prefer task switches over task repetitions. This observation is interesting because it might question the assumption that self-organization, that is, free choice of task order, is suitable to optimize overall task performance. Further research is needed to clarify whether participants are able to balance performance benefits (or costs) in an experimental setting with the effort of task reconfiguration processes. Currently, we can only speculate why participants did not choose the faster option more often. It might be that participants were not aware that they would be faster in task switch trials, or, alternatively, the reconfiguration process might induce some level of conflict and thereby negative affect (e.g., Botvinick, 2007). Third, we assume that the task choice process for participants in the free choice group is not only affected by the necessity to reconfigure but also itself impacts reconfiguration and task performance. Here, a central conjecture is that the task choice process aims to optimize task performance and the effort related to task processing (e.g., Shenhav et al., 2013). In the setting of the present study, the task choice process has to balance the tendencies (1) to avoid task switches in order to avoid reconfiguration processes, and (2) to exploit the preview possibility and thus to prefer task switches. Interestingly, the results revealed that participants responded faster in task switch than in task repetition trials, yet they did not switch tasks more often (indeed, descriptively they even repeated tasks more often) than would be expected for random task choices. Thus, the usual conclusion that fast RTs indicate easy task conditions that are preferred by participants does not hold in this setting. In addition, the task choice process is an executive control process and as such requires cognitive resources. Although participants in the free choice condition needed less time than participants in the cued condition to perform a task, especially in task switch trials, subjective evaluation measures and aftereffects in a Stroop task indicated that the free choice condition is more stressful/effortful than the cued task condition. Based on this observation, we conclude that the assessment of overall effort (with subjective and objective measures) is an additional factor that should be considered in addition to performance data when comparing different multitasking conditions. Taken together, participants in the free choice condition were more fatigued than participants in the cued condition, and they subjectively experienced more stress.
Thus, even though they needed less time to perform a task, especially in task switch trials, the free choice condition proved more stressful and effortful. To conclude, the present study suggests that task organization in multitasking involves a trade-off: while self-organization of task scheduling can optimize task performance during multitasking, it comes at the cost of more fatigue after multitasking. ETHICS STATEMENT Funding for this research was granted by the Deutsche Forschungsgemeinschaft without the necessity of an approval by an ethics committee. The study was conducted with healthy participants who gave informed consent regarding data collection. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study.
2017-05-05T09:15:51.480Z
2017-02-06T00:00:00.000
{ "year": 2017, "sha1": "057265aa90857a7b9dc87218e3981f84fc2b8542", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2017.00111/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "057265aa90857a7b9dc87218e3981f84fc2b8542", "s2fieldsofstudy": [ "Psychology", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
257333431
pes2o/s2orc
v3-fos-license
Bioinformatic survey of CRISPR loci across 15 Serratia species Abstract The Clustered Regularly Interspaced Short Palindromic Repeats and CRISPR-associated proteins (CRISPR–Cas) system of prokaryotes is an adaptive immune defense mechanism that protects them from invading genetic elements (e.g., phages and plasmids). Studies describing the genetic organization of these prokaryotic systems have mainly reported on the Enterobacteriaceae family (now reorganized within the order Enterobacterales). For some genera, data on CRISPR–Cas systems remain poor, as in the case of Serratia (now part of the Yersiniaceae family), where data are limited to a few genomes of the species marcescens. This study describes the in silico detection of CRISPR loci in 146 Serratia complete genomes and 336 high-quality assemblies available for the species ficaria, fonticola, grimesii, inhibens, liquefaciens, marcescens, nematodiphila, odorifera, oryzae, plymuthica, proteomaculans, quinivorans, rubidaea, symbiotica, and ureilytica. Apart from subtypes I-E and I-F1, which had previously been identified in marcescens, we report the detection of subtype I-C and of the I-E unique locus 1, I-E*, and I-F1 unique locus 1. Analysis of the genomic contexts of the CRISPR loci revealed mdtN-phnP as the most widely shared region (grimesii, inhibens, marcescens, nematodiphila, plymuthica, rubidaea, and Serratia sp.). Three new contexts detected in genomes of rubidaea and fonticola (puu genes-mnmA) and rubidaea (osmE-soxG and ampC-yebZ) were also found. The plasmid and/or phage origin of spacers was also established. The genus Serratia, a Gram-negative rod, is now part of the family Yersiniaceae. Serratia species can be found in different environments (e.g., water, soil) and hosts (e.g., humans, insects, plants, vertebrates), where they may play different roles ranging from opportunistic pathogens to symbionts (Cristina et al., 2019; Gupta et al., 2021; Lo et al., 2016). Among Serratia species, marcescens is undoubtedly the most studied, mainly for its role as a symbiont associated with insects and nematodes or as a human opportunistic pathogen (currently reported as one of the most important bacteria responsible for hospital-acquired infections such as bacteremia, pneumonia, intravenous catheter-associated infections, and endocarditis) (Ferreira et al., 2020). Other Serratia species responsible (to a lesser extent) for human bacteremia are liquefaciens and odorifera (Mahlen, 2011). A growing number of marcescens genomes have since been sequenced, with a pangenome allele database available for different studies ranging from virulence and antibiotic resistance to the identification of CRISPR systems (Abreo & Altier, 2019). A number of studies, in addition to marcescens, have also been reported for other Serratia species that play different roles in human and insect pathogenesis (Petersen & Tisa, 2013). Although the characterization of CRISPR systems represents a valuable substrate for diagnostic, epidemiologic, and evolutionary analyses (Louwen et al., 2014), data on CRISPR-Cas systems in the genus are scarce and limited to the detection of subtypes I-E and I-F1 in genomes of the species marcescens (Medina-Aparicio et al., 2018; Scrascia et al., 2019; Srinivasan & Rajamohan, 2019; Vicente et al., 2016).
In this study, 146 Serratia complete genomes and 336 high-quality assemblies available for the species ficaria, fonticola, grimesii, inhibens, liquefaciens, marcescens, nematodiphila, odorifera, oryzae, plymuthica, proteomaculans, quinivorans, rubidaea, symbiotica, and ureilytica were explored for the presence and type of cas gene clusters and/or CRISPRs. Apart from subtypes I-E and I-F1, the study showed the presence (detected for the first time in Serratia) of subtype I-C, the presence of unique loci, and detailed genomic contexts of CRISPR loci. The plasmid and/or phage origin of spacers was also assessed. The discovery of CRISPR-Cas systems has allowed the development of new technology tools in the bioengineering field (Dong et al., 2021). A clear example is represented by gene editing strategies based on the CRISPR/Cas9 technique, successfully used in agriculture, nutrition, and human health (Nidhi et al., 2021). The development of new CRISPR-based applications also relies on the continuous update of data and knowledge on CRISPR-Cas systems. Our study, in providing more comprehensive data on CRISPR loci in Serratia, has contributed to an expanded knowledge of these systems. | Genomes analyzed One hundred and forty-six Serratia complete genomes were considered in this study. The set of genomes encompasses the 15 S. marcescens complete genomes we previously analyzed (Scrascia et al., 2019) and those of the genus Serratia available at the CRISPR-Cas++ database (https://crisprcas.i2bc.paris-saclay.fr/MainDb/StrainList) up to December 12, 2020 (Couvin et al., 2018; Pourcel et al., 2020) (Supporting Information: Table S1). Among the genome sequences available at the assembly level of scaffolds or contigs at the National Center for Biotechnology Information (NCBI) database (https://www.ncbi.nlm.nih.gov/assembly) up to December 12, 2020, we selected the high-quality assemblies (N50 > 50 kb, i.e., 50% of the entire assembly is contained in contigs or scaffolds equal to or larger than 50 kb), which were included in the study. | Detection of CRISPR-Cas loci Details about the detection of cas gene clusters with associated arrays (CRISPR-Cas systems) and of CRISPR arrays alone in complete genomes were retrieved from the CRISPR-Cas++ database. CRISPR arrays recorded by CRISPR-Cas++ were assigned to Levels 1–4 based on the criteria required to select the minimal structure of putative CRISPRs, as reported by Pourcel et al. (2020). Level 1 is the lowest level of confidence. Levels 2–4 were assigned based on the conservation of repeats (which must be high in a real CRISPR) and on the similarity of spacers (which must be low). Level 4 CRISPRs were defined as the most reliable ones; Levels 1–3 may correspond to false CRISPRs. In our study, only CRISPRs recorded with Level 4 were considered. CRISPRs without a set of cas genes in the host genome were defined as "orphans." Genomes harboring cas gene clusters were then submitted to the CRISPRone analysis suite (http://omics.informatics.indiana.edu/CRISPRone/) (Zhang & Ye, 2017) to graphically visualize the architecture of each cluster. The same suite was used to search for and visualize cas gene clusters in the high-quality assemblies. Subtypes of cas gene clusters were assigned according to the recent classification update for CRISPR-Cas systems (Makarova et al., 2020). | In silico analyses of consensus direct repeats Consensus direct repeats (DRs) from CRISPRs were clustered by BLAST similarity.
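Returning briefly to the assembly selection step above, the N50 criterion (N50 > 50 kb) can be sketched as follows; this is a minimal illustration in which the file path is hypothetical and FASTA parsing assumes Biopython.

```python
# Sketch of the N50-based assembly filter described above.
from Bio import SeqIO

def n50(lengths):
    # The contig length at which the running total (longest contigs first)
    # reaches at least half the total assembly length.
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length

def is_high_quality(fasta_path, threshold=50_000):
    lengths = [len(rec.seq) for rec in SeqIO.parse(fasta_path, "fasta")]
    return n50(lengths) > threshold

print(n50([100_000, 80_000, 60_000, 10_000]))  # -> 80000 (total 250 kb, half 125 kb)
```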
Some consensus DRs were manually trimmed when just a few terminal nucleotides were the only difference from the other members of the same cluster. The consensus DRs were used as input for CRISPRBank (http://crispr.otago.ac.nz/CRISPRBank/index.html) and CRISPR-Cas++ to assign a specific CDR type to each CRISPR, based on identity with known consensus DRs (Biswas et al., 2016; Couvin et al., 2018; Pourcel et al., 2020). CRISPRs whose CDR type was consistent with the subtype of the cas gene set harbored in the same genome were defined as "canonical," while those not consistent with the subtype of the cas gene set harbored in the same genome were defined as "alien." A schematic diagram of alien, canonical, and orphan arrays is shown in Figure 1. The consensus DRs and the numbers of repeats of the CRISPRs in the high-quality assemblies of Serratia sp. strains DD3, Ag1, and Ag2 were recovered from the CRISPRone output. Spacer analysis for duplications (spacers of Ag1, Ag2, and DD3 included) was performed through the CRISPRCasdb spacer database at the CRISPR-Cas++ site (https://crisprcas.i2bc.paris-saclay.fr/MainDbQry/Index). The phagic and/or plasmidic origin of matching protospacers was searched at the CRISPRTarget site (http://crispr.otago.ac.nz/CRISPRTarget/crispr_analysis.html) (Biswas et al., 2016). | Genomic contexts of CRISPR-positive genomes Analysis of CRISPR-positive complete genomes and high-quality assemblies was performed to better characterize the genomic context surrounding the cas gene sets and/or CRISPR arrays. High-quality assemblies with at least 4 kb flanking the cas gene sets were considered. These regions were annotated by Prokka (https://github.com/tseemann/prokka) (Seemann, 2014). Synteny was established by either the Mauve algorithm (http://darlinglab.org/mauve/mauve.html) (Darling et al., 2010) or visual inspection of annotated proteins. | Phylogenetic analyses The evolutionary relationships of Serratia strains found positive for cas gene sets were established and graphically depicted by the Cas3 sequence tree. All protein sequences were aligned by the MUSCLE algorithm (https://www.ebi.ac.uk/Tools/msa/muscle/) (Edgar, 2004a, 2004b). The 16S rRNA gene tree was also drawn for comparison. Dendrograms were generated by the Neighbor-Joining clustering method and average distance trees with JalView (https://www.jalview.org/) (Waterhouse et al., 2009).
FIGURE 1 | Schematic diagram of the three categories of arrays described in the study. DRs and spacers are depicted with diamonds and rectangles, respectively. cas genes are shown as arrows pointing in the direction of transcription. The yellow color highlights consistency between the DR type and the cas subtype, while the blue color indicates inconsistency.
For the 16S rRNA gene tree, the multiple sequence alignment was obtained by retrieving from one to seven full gene sequences (complete genomes) or truncated 16S rRNA gene sequences (high-quality assemblies). A phylogenetic tree was obtained by multiple alignment of all retrieved 16S rRNA genes; an abbreviated tree was constructed by using one sequence from each genome. The I-E unique locus 1 was detected in two genomes of marcescens, and the I-F1 unique locus 1 in eight genomes of marcescens and one of grimesii. In three genomes of Serratia sp. (strains Ag1, Ag2, and DD3), an additional unique locus of subtype I-E, identical to the I-E* previously reported by Shen et al. (2017), was detected (Figure 2b).
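As an aside on the phylogenetic step described above: the study built its Neighbor-Joining dendrograms with JalView, but an equivalent tree can be sketched with Biopython for illustration; the toy sequences below merely stand in for the MUSCLE-aligned Cas3 proteins.

```python
# Illustration only: Neighbor-Joining tree from an alignment with Biopython.
from Bio import Phylo
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

aln = MultipleSeqAlignment([
    SeqRecord(Seq("MKTAYIAKQR"), id="strain_A"),   # toy aligned sequences
    SeqRecord(Seq("MKTAYIAKQG"), id="strain_B"),
    SeqRecord(Seq("MATAYIGKQR"), id="strain_C"),
])
dm = DistanceCalculator("identity").get_distance(aln)  # pairwise distance matrix
tree = DistanceTreeConstructor().nj(dm)                # Neighbor-Joining tree
Phylo.draw_ascii(tree)
```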
The locus I-E* identified in this study was characterized by the translocation of cas6e between cas7 and cas11 and by the presence (upstream of cas3) of a gene harboring the WYL domain, which encodes a potential functional partner of the CARF (CRISPR-Cas Associated Rossmann Fold) superfamily proteins (Makarova et al., 2020). Proteins containing the WYL domain (the name standing for the three conserved amino acids tryptophan, tyrosine, and leucine, respectively) have only been reported for subtypes I-D and VI-D (Makarova et al., 2014). The distribution of CRISPR-positive genomes, over the total analyzed, among Serratia species is shown in Figure 3. Coexistence of different sets of cas genes in the same genome was also detected: subtypes I-E and I-F1 were found in the single high-quality assembly of oryzae, while I-E* and I-F1 were detected in two high-quality assemblies of Serratia sp. (strains Ag1 and Ag2) (Table A1). | Consensus DRs and spacers The 35 CRISPR-positive complete genomes harbored 78 CRISPRs, of which 48 were canonical. The latter were distributed as follows: fonticola (4), inhibens (1), marcescens (19), plymuthica (5), rubidaea (15), and Serratia sp. (4). Twenty-three arrays were orphans, detected in several genomes including Serratia sp. (4) (Table 1; Figure 1). Alien arrays (8) were only detected in the species rubidaea. For a comprehensive analysis, the arrays in the three high-quality assemblies Ag1, Ag2, and DD3 were included (Table A1). All disclosed CRISPRs were assigned, by comparative sequence analyses, to consensus DR types I-C, I-E, or I-F (Table 1). The association between consensus DR types and cas gene sets (canonical and unique loci) is reported in Table 2. Based on their nucleotide identity, the consensus DRs identified for subtype I-E and its unique loci (I-E* and unique locus 1) could be arranged into two clusters, named consensus DR-I and consensus DR-II. Consensus DR-I was composed of 6 consensus DRs (identity from 83% to 96%) and linked to the cas gene sets I-E and I-E unique locus 1. Consensus DR-II was composed of 2 consensus DRs (identity of about 96%) and linked to the cas gene set I-E*. When the consensus DRs of the two clusters were compared to each other, the nucleotide identity dropped to 55%–62%. The positions of strain TEL in the marcescens cluster and strain JUb9 in the rubidaea cluster shown in the Cas3 phylogenetic tree were confirmed by the 16S rRNA gene tree, which might suggest a species assignment for these strains. | CRISPR genomic contexts The 35 CRISPR-positive complete genomes and 28 of the 46 CRISPR-positive high-quality assemblies were analyzed to identify the genomic contexts surrounding the cas gene sets and/or CRISPR arrays.
Note: The palindrome identified in each consensus DR is underlined. Abbreviation: CDR, consensus DR. (a) Consensus DR-I group. (b) Consensus DR associated with the 20-DR array in strain Ag1, the 3-DR array in strain Ag2, and the DD3 arrays (Table A1).
A species assignment to rubidaea was assumed for the strain JUb9 (see above). Detection of subtype I-C is the first report in Serratia. The prevalence of subtype I-F1 in our subset of CRISPR-positive genomes is consistent with both the newly reorganized Enterobacterales order (Adeolu et al., 2016) and the data produced by Medina-Aparicio et al. (2018). This study also supplies data on the presence/number of CRISPRs and their consensus DR sequences in Serratia. Apart from canonical arrays (61.5% of the total disclosed arrays), orphan (29.4%) and alien (10.2%) arrays were also detected (Table 1; Figure 1).
Orphan arrays might represent remnants of previously complete CRISPR-Cas systems (Zhang & Ye, 2017). The presence of alien arrays, found only in rubidaea complete genomes, is, as far as we know, the first such report in CRISPR-positive bacterial genomes. Their detection might be explained as traces of ancient complete CRISPR-Cas systems I-E/I-F1 or I-C/I-E/I-F1 coexisting within the same genome (Table 1). Alternatively, the alien arrays might result from single horizontal gene transfer events. Further analyses could unveil their genetic origin and the extent of their distribution among CRISPR-positive bacterial genomes. Detection of more alien arrays might reveal that the presence of multiple subtypes in a genome is more frequent than has been reported so far. Furthermore, consensus DRs specifically associated with the cas gene set I-E* were also described for the first time (Table 2). Finally, the phylogenetic tree generated by multiple alignment of the Cas3 sequences showed a potential sub-lineage (I-E*) within the I-E branch, which might represent and/or anticipate a distinct clonal expansion of an I-E sub-population (Figure 4). Knowledge of CRISPR-Cas systems is constantly expanding due to studies on newly available genomic sequences. ACKNOWLEDGMENTS We would like to thank Karen Laxton and Julian Laurence for their writing assistance. There are no funding agencies to report for this article. CONFLICT OF INTEREST None declared. DATA AVAILABILITY STATEMENT All data supporting the findings of this study are available within the article (Appendix) and its Supporting Information files (Supporting Information: Table S1: List of Serratia genome assemblies; Supporting Information: Table S2: Spacer analyses; Supporting Information: Figure S1: Possible multiple records of the same genome; spacers' sequences were identical).
2023-03-05T05:10:41.021Z
2023-03-02T00:00:00.000
{ "year": 2023, "sha1": "a966c54e35bdea64097668ccb00ac03a1d639b33", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/mbo3.1339", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "0f69dccc8dabd80b1bc9701c955d9bcf1d38d7d6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
247809812
pes2o/s2orc
v3-fos-license
Instagram as A Media to Enhance Writing Skill of Students in Langsa This research was carried out to investigate how the Instagram application is used to improve students' writing skill. The participants of this research were three seventh-semester students from the English Department of IAIN Langsa in the academic year 2020/2021, selected through purposive sampling based on certain characteristics. This research used a qualitative method; the data were gained through observation and documentation. Based on the results, the students used Instagram features (photo and video sharing and social networking) to share their thoughts and daily activities, and they could use Instagram to find the accounts they wanted. The researcher found that the participants searched for and followed several educational accounts containing English learning content. Consequently, the participants' writing ability improved after they conveyed their ideas on Instagram. As a result, they were able to practice and apply what they had learned before by actively writing English captions on their Instagram accounts. This can also be seen from their writing scores: participant 1 obtained a score of 83, participant 2 obtained 87, and participant 3 obtained 62 in writing English with respect to using proper grammar, spelling, and punctuation, appropriate content, appropriate vocabulary, and conveying clear information. INTRODUCTION Technology is the application of science that aims to fulfill human needs and accelerate the achievement of the objectives of each activity to be carried out. Technology gives people opportunities to spend their time more effectively. People have been assisted by technology in many areas, including learning a foreign language. One assisting part of technology is social media. Social media is a great place for students to express themselves because it encourages distinctiveness, and the integration of gadgets and social media helps a lot (Wil et al., 2019, p. 225). Social media represents "the technologies or applications that people use in developing and maintaining their social networking sites" (Fuchs, 2017, p. 38). Hence, students can utilize social media to improve their English learning skills. This research investigates how the social media application Instagram is used to improve students' writing skill and its impact on the writing skill of English department students at the State Institute of Islamic Studies (IAIN) Langsa. Instagram was launched in October 2010. It has attracted more than 150 million monthly active users, with an average of 55 million photos uploaded by users per day and more than 16 billion photos shared so far; 90% of users are under the age of 35. Based on Instagram's education demographics, users with some college education are the most active on Instagram at 30%, college graduates have the second highest activity at 18%, and users with a high school diploma or less make up another 15% (Yuheng Hu et al., 2014; Instagram, 2017). Zidny and Suharso (2017, p. 191) reported that the use of Instagram in the teaching and learning process significantly improved the students' writing skill. The students made good improvements in the aspects of content, vocabulary, organization, grammar, and mechanics. The social media platform worked well to improve their interest, focus, and proficiency in writing.
The purpose of this research is to investigate how the Instagram application is used to improve students' writing skill, and the research question is: how is the Instagram application used to improve students' writing skill? LITERATURE REVIEW This section reviews the previous literature concerned with social media and language learning, in relation to English language learning, writing skill, and the Instagram application. Writing Skill Writing is one of the productive skills, with more emphasis on producing language. Damanik (2017, p. 38) states that writing is one of the four English skills that should be mastered by learners of English as a foreign language. Writing focuses on producing language rather than receiving it. Nevertheless, Celce-Murcia (2000) indicates that: "Writing is the production of the written word that results in a text, but the text must be read and comprehended in order for communication to take place. In making good writing, we should use correct grammar and choose appropriate vocabulary, manifested by handwriting, spelling, layout, and punctuation" (Celce-Murcia, 2000, p. 142). Writing is functional communication, making it possible for learners to create imagined worlds of their own design; it means putting the ideas, and even the experiences, in their minds onto paper. Social Media in Learning English Social media has been growing quickly during the last few years. Many people are connected by accessing social media. Every day, more than 90 percent of college students visit a social networking site. People have woven these networks into their daily routines, using Facebook, Instagram, Twitter, online gaming environments, and other tools (Raut and Patil, 2016, p. 281). According to Boateng and Amankwaa (2016), social media is rapidly changing the communication setting of today's social world. The emergence of social media significantly influences the academic life of students. Therefore, students can utilize social media as a supporting medium in learning and can fulfill their academic needs. Patel (in Mardiana, 2016, p. 2) states that the use of social media in the learning process has begun to rise significantly and is likely to have implications for education practice and provision, especially in terms of connecting with students and colleagues. METHODS This section presents the participants of the study, the research design, and the procedure of data collection. The Participants of the Study The participants were English department students of IAIN Langsa. Research Design This research applies a qualitative method. The qualitative method looks for a deep understanding of a phenomenon, fact, or reality (Raco, 2010, p. 1). According to Creswell (2009, p. 1), qualitative research is an inquiry process of understanding based on distinct methodological traditions of inquiry that explore a social or human problem. The research presents the identification and analysis of a phenomenon related to improving students' writing skill by using the Instagram application. In this research, the researcher investigated the participants' experiences, impressions, and perceptions toward the use of Instagram to improve students' writing skill. The Procedure of Data Collection The research used observation and documentation as the instruments of the research. The researcher observed the activities on the participants' Instagram accounts by using the researcher's account for approximately four months. Before doing the observation, the researcher obtained the participants' permission, as they gave their agreement to take part and for their writing results to be used as additional data for the research.
RESULTS The data were gained from two sources, the observation and documentation methods, to support the findings and discussion of this research. The data are described as follows. The Result of Observation • She often wrote motivational words on her Instagram. • Sometimes she used short and sometimes long English captions on Instagram. • She also actively followed international accounts containing English content and educational accounts; she followed 7 such accounts. • She not only wrote English captions but also posted pictures containing English and aesthetic pictures that she edited herself. • She posted 20 times with English captions on Instagram. • The participants wrote sentences along with phrases. For instance, Participant 1 was able to write simple sentences 7 times and phrases 10 times. Participant 2 was able to write compound sentences 3 times and phrases 2 times. Meanwhile, Participant 3 was able to write simple sentences 2 times and a compound sentence once. Entering 2020, the participants had more confidence to show their skill in writing English captions. They started to explore their writing skill by writing more interesting English captions than before, as can be seen in the table of participants' Instagram activities above. They not only wrote good captions on Instagram recently but were also able to finish academic writing tasks very well. They learned to write an essay correctly. Furthermore, they could write an essay using proper grammar, spelling, and punctuation, with appropriate content and vocabulary. They could also arrange an essay based on the composition of writing, consisting of an introduction, development of ideas, and a conclusion. As a result, they are used to applying their writing skill to meet their academic needs in college. Moreover, the researcher also found how often the participants opened educational accounts, both local and international. Since they followed these accounts, the latest posts from them automatically appeared on the participants' Instagram walls. It can be concluded that the participants actively used Instagram in English. Furthermore, they used Instagram not only to make friends or share their daily life but also to share something interesting by actively using English in the captions of their posts, thereby indirectly sharing knowledge about English with the followers who read them. Besides using English captions on Instagram, the participants took advantage of social networking to search for foreign accounts that could help them improve their English skills. Hence, they were also active in opening local or international accounts containing English learning content. The Result of Documentation The researcher carried out the documentation stage for the three participants. The documentation was in the form of screenshots of pictures that had been posted by the participants on Instagram. The analyzed documents indicate that using the Instagram application improved the students' writing skill. Through Instagram, students were able to take advantage of its facilities in improving their English learning skills. The following is one example of the documentation taken by the researcher. DISCUSSION This stage discusses the data analysis based on the theories used in this research. The theories concern using social media, such as the Instagram application, to improve students' writing skill. This research includes the theory of Patel (in Mardiana, 2016, p.
2), which states that the use of social media in learning processes has begun to rise significantly and is likely to have implications for educational practice and provision, especially in terms of connecting with students or colleagues and accessing news that appears on their walls.

Meanwhile, Participant 3 obtained a score of 62.

CONCLUSION
Based on the findings and discussion in the previous chapter, the results of this study show that the participants, English department students of
Haro 11: The Spatially Resolved Lyman Continuum Sources

As the nearest confirmed Lyman continuum (LyC) emitter, Haro 11 is an exceptional laboratory for studying LyC escape processes crucial to cosmic reionization. Our new Hubble Space Telescope/Cosmic Origins Spectrograph G130M/1055 observations of its three star-forming knots now reveal that the observed LyC originates in Knots B and C, with 903–912 Å luminosities of 1.9 ± 1.5 × 10^40 erg s^-1 and 0.9 ± 0.7 × 10^40 erg s^-1, respectively. We derive local escape fractions f_esc,912 = 3.4% ± 2.9% and 5.1% ± 4.3% for Knots B and C, respectively. Our Starburst99 modeling shows dominant populations on the order of ~1–4 Myr and 1–2 × 10^7 M⊙ in each knot, with the youngest population in Knot B. Thus, the knot with the strongest LyC detection has the highest LyC production. However, LyC escape is likely less efficient in Knot B than in Knot C due to higher neutral gas covering. Our results therefore stress the importance of the intrinsic ionizing luminosity, and not just the escape fraction, for LyC detection. Similarly, the Lyα escape fraction does not consistently correlate with LyC flux, nor do narrow Lyα red peaks. High observed Lyα luminosity and low Lyα peak velocity separation, however, do correlate with higher LyC escape. Another insight comes from the undetected Knot A, which drives the Green Pea properties of Haro 11. Its density-bounded conditions suggest highly anisotropic LyC escape. Finally, both of the LyC-leaking Knots, B and C, host ultraluminous X-ray sources (ULXs). While stars strongly dominate over the ULXs in LyC emission, this intriguing coincidence underscores the importance of unveiling the role of accretors in LyC escape and reionization.

INTRODUCTION

Corresponding author: Lena Komarova, komarova@umich.edu

The ionizing sources and physical mechanisms responsible for cosmic reionization at z > 6 remain a critical unsolved problem in cosmology. The major contenders for providing the required LyC are active galactic nuclei (AGN) and massive stars in starbursts, with their relative contributions to reionization still uncertain. Accreting sources other than AGN, e.g., ultraluminous X-ray sources (ULXs), may also play a significant role in ionizing the IGM (Madau & Fragos 2017; Ross et al. 2017; Sazonov & Khabibullin 2018). Some studies show that AGN could produce sufficient energy to reionize the universe (e.g., Madau & Haardt 2015; Giallongo et al. 2015), while others suggest that the AGN number density and ionizing emissivity were too low in the early universe (Shankar & Mathur 2007; Fontanot et al. 2012; Hassan et al. 2018; Faucher-Giguère 2020), pointing to star-forming galaxies as the dominant source. On the other hand, while dwarf starbursts seem to be promising candidates for reionization agents (Bouwens et al. 2012; Sharma et al. 2017; Yeh et al. 2023), they may not have sufficiently high escape fractions (e.g., Fontanot et al. 2014). JWST is now revealing the blue UV slopes of the earliest, z = 8–16, galaxies, pointing to young and dust-poor stellar populations (Cullen et al. 2023a,b; Topping et al. 2023; Morales et al. 2023), in further support of galaxy-driven reionization. JWST is moreover uncovering an abundance of high-redshift starbursts with high ionizing photon production efficiencies, implying that galaxies could have reionized the universe with somewhat lower escape fractions than previously assumed (Matthee et al. 2023; Atek et al. 2024).
Knowing the relevant UV sources is only half of the problem, however. The other key question is how LyC escapes the immediate environment of the source without being absorbed by the local interstellar medium (ISM). The standard paradigm for stellar-driven reionization is that supernovae and stellar winds clear pathways in the ISM that become optically thin to LyC (e.g., Clarke & Oey 2002; Fujita et al. 2003; Ma et al. 2016). There is now evidence of a radiation-driven feedback mode that also may enable LyC escape in the most extreme metal-poor starbursts such as extreme Green Peas (GPs) (Jaskot et al. 2017; Komarova et al. 2021; Flury et al. 2022a,b). Similarly, accretion-driven sources such as AGN or X-ray binaries may create optically thin channels through winds and jets (Smith et al. 2020).

Haro 11 is an extreme dwarf starburst galaxy with dozens of young massive clusters (Adamo et al. 2010; Sirressi et al. 2022) and a gas consumption timescale of 50 Myr (Östlin et al. 2021). It is one of the most important local (z = 0.021) galaxies for advancing our understanding of cosmic reionization, and a wealth of data across the electromagnetic spectrum exist for this object. Haro 11 is the first local Lyman continuum emitter (LCE) to be observationally confirmed (Bergvall et al. 2006; Leitet et al. 2011), and is moreover the closest one, lying at a distance of only 88.5 Mpc (assuming H_0 = 73 km s^-1 Mpc^-1; Sirressi et al. 2022, hereafter S22). With its intense star formation triggered by a dwarf galaxy merger (e.g., Östlin et al. 2001, 2015), Haro 11 is dominated by three starburst knots: A, B, and C (Kunth et al. 2003), indicated in Figure 1. While Knot A is a purely star-forming region, with properties consistent with those of Green Peas (Keenan et al. 2017), Knots B and C host ULXs (Prestwich et al. 2015), including a possible low-luminosity AGN (LLAGN) (Gross et al. 2021). Thus, Haro 11 is a prime laboratory for pinpointing the nature of UV sources and LyC escape processes crucial to reionization.

However, the initial LyC detection (Bergvall et al. 2006) did not resolve which of the three knots is/are responsible for the LyC emission, since the 30″ × 30″ FUSE aperture encompassed the entirety of the galaxy. Unveiling the exact source(s) of the Haro 11 LyC leakage is necessary to clarify the relative roles of compact objects and massive stars in this prototypical object. The three knots vary drastically in Lyα emission properties, extinction, stellar populations, and gas properties. So, putting the LyC detection in the context of knot properties reveals the connection between environment and LyC escape. In this paper, we present HST/COS observations of the three knots at wavelengths below the Lyman limit, revealing the LyC-emitting sources. Section 2 contains the details of observations and data analysis. We present our results in Section 3, discuss cosmological implications in Section 4, and summarize our main conclusions in Section 5.
OBSERVATIONS AND DATA ANALYSIS

We obtained HST/COS FUV spectra of Knots A, B, and C in Haro 11 (Cycle 28, program ID 16260, PI: Oey). We used the medium-resolution G130M grating in TIME-TAG mode, coupled with the 2.5″ Primary Science Aperture (PSA), with all four grating offset positions (FP-POS = ALL). The observations were taken with the grating centered on 1055 Å, and resolving power R ~ 10,000. A total of 13 Visits over the course of 24 orbits were obtained, in 2021 January, May, October, November, and December, and in 2022 May and July. For each Visit on each knot, we obtained 4 sub-exposures at different focal plane offset positions in the wavelength range 900–1200 Å. Some sub-exposures had unusable data due to acquisition failures and resulted in additional Visits. All sub-exposures used for the LyC measurements are listed in Table 1.

The raw COS observations were calibrated with CalCOS version 3.4.0 (Soderblom 2021). Given that the observations for the individual knots were obtained throughout several different Visits, with more than one association file, we combined their individual calibrated x1d files using the IDL software by Danforth et al. (2010, 2016). We combined the spectra, weighting by exposure times and taking into account the data quality (DQ) flags set by the standard CalCOS pipeline.

A critical step in the calibration of Lyman continuum observations is an accurate background correction. The standard CalCOS pipeline estimates the background contribution in the region of the detector where the science spectrum is located, by computing the average counts in two predefined regions external to the science extraction region. For the COS FUV detector, these two predefined regions are typically located below and above the science extraction box, depending on the lifetime position (LP) used at the time of the observations. The COS team has reported that the background levels are correlated with solar activity (Dashtamirova et al. 2019), detector gain, and high voltage (HV). Additionally, they have found that the background levels vary throughout the detector and with time, with slightly higher levels towards the edge of the detector.

To investigate whether the background correction was accurately accounting for the expected detector contribution in the science extraction regions, we collapsed the 2-dimensional flt images in the dispersion direction. For the G130M/1055 configuration observed at Lifetime Position 2, the background region boundaries are located at pixels y ~ 448 and y ~ 728, with predefined widths of 51 pixels. Inspection of the collapsed profiles showed that in all exposures the background region centered on pixel y ~ 728 showed a slightly higher background level towards the edge of the detector than that observed closer to the science extraction region. This in turn caused a slight oversubtraction of the science spectra. To improve the background correction, we modified the location of the background regions in the extraction tab reference file (XTRACTAB) to be closer to the science extraction region (centered at y ~ 588), centering the predefined regions around pixels y ~ 505 and y ~ 663. The background estimates using these new regions were ~3–6% lower than those calculated at the original background locations.

Given the extended nature of the Haro 11 targets, we adopted a BOXCAR extraction technique, available in CalCOS. As detailed in James et al.
(2022), for extended targets, a broader extraction box may be necessary to collect the full extent of their flux. To select an optimal extraction height value for the science region, we explored varying the nominal size by ±8 pixels, in steps of 2 pixels, aiming to improve the signal-to-noise of the extracted spectrum. We adopted extraction heights for the science regions in COS segment B of 63 (standard), 71, and 71 pixels for Knots A, B, and C, respectively.

We sum the high-quality spectra from each Visit on each knot, with total effective exposure times for Knots A, B, and C of 4.7, 4.0, and 3.3 hours, respectively (Table 1). To maximize the signal-to-noise ratio, we moreover bin the spectra by 6 G130M/1055 resolution elements, and thus 60 pixels ≈ 0.6 Å. Our reduced, median-combined COS LyC spectra for the three knots are shown in Figure 2. The LyC region we observe is a 9 Å window below the redshifted Haro 11 Lyman limit of 931.4 Å, as indicated in Figure 2. We compute the mean LyC flux densities F_912 in the three knots, excluding 1.1 Å-wide geocoronal H Lyman series line regions at 930.75 Å, 923.15 Å, and 926.25 Å. The flux measurement remains the same within the error when including these regions. There are no other known geocoronal features within the measured region. Our observed LyC fluxes and luminosities are shown in Table 2. We detect the strongest LyC emission in Knot B, which has about 2/3 of the total flux, leaving 1/3 in Knot C and none in Knot A. Our combined LyC flux density from Knots B and C is F_912 = (3.5 ± 2) × 10^-15 erg s^-1 cm^-2 Å^-1, consistent with the (4.0 ± 0.9) × 10^-15 erg s^-1 cm^-2 Å^-1 measured by Leitet et al. (2011) in the 30″ × 30″ FUSE aperture.

STELLAR POPULATIONS AND LYC ESCAPE

To quantify the LyC escape efficiency in each region, we compute the local escape fraction as:

f_esc,912 = L_912,obs / L_912,int,    (1)

where L_912,obs is the observed integrated luminosity in our 9 Å window of 903–912 Å at rest (or 922–931 Å in the Haro 11 frame), and L_912,int is the intrinsic luminosity in this range, similar to Leitet et al. (2011). Thus, f_esc,912 is an approximation for the fraction of the total produced ionizing power that escapes the region. We assume that the ionizing luminosity is dominated by FUV radiation from massive stars, and we estimate L_912,int by modeling the stellar population in each knot as described below.

Sirressi et al. (2022) have previously constrained the stellar populations of Haro 11 using multi-band HST photometry in the F140LP, F220W, F275W, F336W, F435W, F550M, F555W, F665N, and F814W filters (Östlin et al. 2009; Adamo et al. 2010). Fitting the resulting SEDs, they estimate individual cluster parameters in each knot. They additionally use FUV 1150–1800 Å COS G130M/1300 + G160M/1600 spectra (Östlin et al. 2021, programs 15352, 13017; PIs Östlin, Heckman) and optical 4650–7000 Å MUSE spectra (Menacho et al. 2019, program 096.B-0923(A); PI Östlin). The G130M/1300 spectra were corrected for broad Lyα absorption as described in Sirressi et al. (2022). As detailed in Menacho et al. (2019), the MUSE observations of each knot are the result of 16 dithered integrations at 4 different position angles. The final MUSE spectra were extracted with apertures of the same size as for the COS spectra, 2.5″ in diameter, and corrected for vignetting, with the resulting FUV and optical continua well matched across the wavelength gap.
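To make the flux-to-escape-fraction bookkeeping above concrete, the following minimal Python sketch reproduces the chain from a mean LyC flux density to L_912,obs and f_esc,912. The spectrum arrays, the helper function, and the illustrative Knot B numbers are ours, not part of the reduction pipeline, and the small (1+z) corrections at z = 0.021 are neglected.

```python
import numpy as np

# Minimal sketch of the LyC bookkeeping described above; arrays and
# example values are illustrative, not the pipeline products themselves.
CM_PER_MPC = 3.0857e24
d_cm = 88.5 * CM_PER_MPC                  # distance to Haro 11

def mean_lyc_flux(wave, flux, lo=922.0, hi=931.0,
                  geocoronal=(923.15, 926.25, 930.75), half_width=0.55):
    """Mean flux density in the observed-frame LyC window,
    masking 1.1 A-wide geocoronal H Lyman-series regions."""
    keep = (wave >= lo) & (wave <= hi)
    for w0 in geocoronal:
        keep &= np.abs(wave - w0) > half_width
    return flux[keep].mean()

# Example with Knot B values quoted in the text:
F912 = 2.3e-15                            # erg s^-1 cm^-2 A^-1
L912_obs = 4.0 * np.pi * d_cm**2 * F912 * 9.0   # 9 A rest-frame window
L912_int = 6e41                           # from the Starburst99 model fit
f_esc_912 = L912_obs / L912_int
print(f"L912_obs = {L912_obs:.2e} erg/s, f_esc = {f_esc_912:.3f}")
```

Running this with the quoted flux density returns L_912,obs ≈ 2 × 10^40 erg s^-1 and f_esc,912 ≈ 0.03, matching the values derived for Knot B below.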
We combine these existing spectroscopic observations with our newly obtained COS G130M/1055 spectra (900–1200 Å) to model each knot's stellar content. To increase the signal-to-noise in the UV, we further bin all COS spectra by a factor of 2 in wavelength and median-combine the overlap region. Since the G130M/1055 spectra have a different initial extracted spectral sampling from the G130M/1300 and G160M/1600 data (0.6 and 0.4 Å px^-1, respectively), the final combined UV spectrum has sampling of 1.2 and 0.8 Å px^-1 over the respective ranges, while the optical MUSE data sampling is 1.2 Å px^-1. The combined spectra of the three knots are shown in Figures 3–5. Below we describe our stellar population modeling assumptions, spectral features of interest, and fitting procedure.

We use Starburst99 (Leitherer et al. 1999, 2014) model spectra of varying cluster ages (1–100 Myr), masses (10^5–10^9 M⊙), and extinctions (E(B−V) = 0 to 1). We also consider different star formation histories, specifically constant star formation (CSF) or single stellar populations (SSPs). We fit either CSF or a maximum of 3 SSPs to each knot's combined COS G130M/1055 + G130M/1300 + G160M/1600 + MUSE spectrum. While the S22 photometric analysis revealed at least 7 individual star clusters in each knot, our goal is to approximate the 1–3 dominant stellar populations within each region. We evaluate a number of stellar model assumptions by using a variety of evolutionary tracks. With respect to S22, we extend the parameter space to include stellar rotation, testing Geneva 2012 tracks with v = 0 and v = 0.4 × v_breakup (Ekström et al. 2012a). We also consider standard or high mass loss: Ekström et al. (2012a) or Meynet et al. (1994) tracks.

For all models, a Salpeter initial mass function (IMF) with stellar mass range 0.1–120 M⊙ is assumed, similarly to S22. The stellar mass normalizations depend on the low-mass end of the IMF, and we estimate that our resulting masses would be 1.6× lower if a Kroupa IMF had been assumed. We re-sample the model spectra onto our observed wavelength grid, matching the variable sampling, and convolve with a Gaussian of width ~1 Å, equal to the stellar broadening we measure from photospheric C III 1247.

Following S22, we perform a linear interpolation of the model spectra between the discrete metallicity values available in Starburst99 (Z = 0.001, 0.008, 0.02), in order to estimate models for metallicities appropriate to Haro 11. Previous Haro 11 metallicity measurements have yielded discrepant results spanning a factor of two in each knot (Guseva et al. 2012; James et al. 2013; Menacho et al. 2021). Menacho et al. (2021) provide a comprehensive summary and discussion of the previous metallicity measurements. While these measurements relied on the same, direct method, the discrepancies are attributed to differences in ionization correction factors and aperture sizes, as well as some possible real variations between stellar and ionized gas metallicities probed by different apertures. We therefore use the most spatially resolved metallicity measurements for Haro 11 by Menacho et al. (2021), adopting their measured central values for Knots A and B. For Knot C, which shows a much larger age spread, 1–100 Myr, we treat metallicity as a free parameter and obtain values consistent with Menacho et al. (2021). Lastly, we apply extinction to the models, using the SMC law from Prevot et al. (1984). If the Calzetti et al.
(2000) law is assumed, it again results in unreasonably high extinctions compared to previous measurements, and thus also stellar masses.

Since our objective is to constrain the youngest, LyC-emitting population, we focus on fitting the age-sensitive O VI λ1038, N V λ1243, C IV λ1548, and Si IV λ1400 P Cygni features, which trace stellar winds from young stars, as well as the optical and UV continuum, which helps constrain the extinctions and mass normalizations. While we do not weigh the P Cygni profiles differently in fitting the spectrum, we reject models that fit the continuum but not the P Cygni features. We mask out the narrow interstellar absorption components before fitting, taking special care in the absorption regions of P Cygni lines. We further isolate feature-free sections of the UV and optical continuum, and mask out nebular emission lines, ISM features, and detector gaps before fitting. The spectral regions we use for fitting are shown in the green sections in Figures 3–5. We note that O VI λ1038 has contamination from Lyβ, and moreover, the stellar wind feature in the model atmospheres is especially uncertain due to the combined effects of wind inhomogeneities and X-rays, as well as low signal-to-noise. For each knot's spectrum, discrete age combinations from 1 to 100 Myr are fit in steps of 1 Myr, determining the best-fitting masses and extinctions using the differential evolution minimizer in the Python package lmfit. The best model is then chosen as the age, stellar mass M, and extinction E(B−V) combination that results in the lowest χ². Testing the effects of stellar rotation and assumed mass-loss rates, we find that, for all three knots, the evolutionary tracks that produce the highest-quality fits are the high mass-loss, non-rotating 1994 Geneva models (Meynet et al. 1994). In these tracks, the mass-loss rate was doubled from the "standard" case (Schaerer et al. 1993) to match observations. Since 2012, a lower, theoretical mass-loss prescription (Vink et al. 2001) has been adopted by the Geneva group, accounting for wind inhomogeneities, based on clumping-corrected mass-loss rates (Ekström et al. 2012b; Georgy et al. 2013). Models assuming the opposite cases of fast rotation and low mass loss produce reasonable fits for Haro 11, but with consistently higher χ² values. The preference for non-rotating tracks in Haro 11 suggests the rotation rates to be at the slower end of the spectrum defined by the two available rates of v = 0 and v = 0.4 × v_breakup. Noting that mass-loss and rotation rates require further study in low-metallicity systems, we present the stellar population fits assuming high mass loss and zero rotation, similarly to S22.

For each knot's stellar population fit, we compute L_912,int from the model spectrum and obtain the escape fraction from equation 1. For the escape fraction uncertainty, we combine the error in the detected LyC luminosity with that in the modelled L_912,int, which we obtain by Monte Carlo sampling our best-fit stellar masses and ages 10^4 times. We also calculate the predicted Q_0, the emission rate of H-ionizing photons, to compare to that inferred from Hα observations, Q_0(Hα). The latter was measured for all knots by Sirressi et al.
(2022) from the MUSE spectra, extracted with the same apertures as in the FUV COS observations. HST/WFC3 F665N imaging suggests that Q_0(Hα) is underestimated by 25%, 15%, and 15% for Knots A, B, and C, respectively. We also estimate the predicted thermal FIR luminosity of each knot that arises from dust-processed optical and UV radiation, by integrating the difference between the intrinsic and the reddened model spectra in our fitting ranges 912–1750 Å and 4500–7000 Å. To compare to observations, we derive the 1–1000 µm luminosity from the galaxy-integrated IRAS F_60µm and F_100µm fluxes and the Helou et al. (1988) prescription, obtaining L_FIR = 2.7 ± 1.3 × 10^44 erg s^-1. So, the dust content in Haro 11 generates a substantial FIR luminosity consistent with luminous infrared galaxies (LIRGs). This value is reasonably consistent with the sum of our estimated FIR luminosities from the individual knots obtained below, which is 1.7 × 10^44 erg s^-1.

Detailed stellar population fits for each knot are presented below, with the resulting parameters shown in Table 3, together with a discussion of the LyC escape efficiency and mechanisms. We present our results for the three knots in order of decreasing detected LyC flux.

Knot B

We find that the region of Haro 11 with the brightest measured LyC is Knot B. Using the distance of 88.5 Mpc to Haro 11, we obtain L_912,obs = (2 ± 1.5) × 10^40 erg s^-1 for Knot B (Table 2). This region by far dominates the Hα emission of Haro 11 and thus has the highest intrinsic ionizing power among the three knots, with Q_0(Hα) = 8 × 10^53 s^-1 (Sirressi et al. 2022). Its young massive stars should therefore dominate the production of ionizing photons in Haro 11. However, this object shows large amounts of gas and dust, and it therefore has been somewhat overlooked in the literature when considering the origin of the LyC from this galaxy.

Our stellar population synthesis confirms that Knot B produces the most ionizing radiation in Haro 11. The best-fitting model for the combined COS G130M/1055 + G130M/1300 + G160M/1600 + MUSE spectrum of Knot B is shown in Figure 4. A strong match for the spectrum (reduced χ² = 1.7), with age-sensitive P Cygni wind features well fitted, consists of a continuous star formation episode for the last 3 Myr, as well as 1-Myr and 13-Myr single stellar populations. The 1 Myr component is the youngest age observed in Haro 11, as can be seen in Table 3, where detailed stellar population model parameters are shown for all knots. Moreover, Knot B has the most massive LyC-bright population, where the stellar mass in ages < 5 Myr is 2.4 × 10^7 M⊙, which is 3× higher than that found in the second-brightest LyC-leaking region, Knot C. As shown in Table 3, Knot B thus has the highest ionizing photon production efficiency among the knots, log(ξ_ion) = 25.3, compared to 25.2 in Knots A and C.
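For reference, ξ_ion as used above is conventionally the ratio of the H-ionizing photon rate to the intrinsic UV continuum luminosity density near 1500 Å. A toy sketch follows, using the Knot B Q_0 from our model; the UV luminosity density is back-solved to match log(ξ_ion) = 25.3 and is purely illustrative, not an independent measurement.

```python
import numpy as np

# xi_ion = Q0 / L_nu(1500 A)  [Hz erg^-1]; Q0 from the Knot B model above,
# L_nu back-solved from log(xi_ion) = 25.3, so this is a consistency check.
Q0 = 1.0e54                         # H-ionizing photons s^-1
L_nu_1500 = Q0 / 10**25.3           # erg s^-1 Hz^-1 (illustrative)
print(f"log(xi_ion) = {np.log10(Q0 / L_nu_1500):.1f}")   # -> 25.3
```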
Notably, the dominant, 1-Myr component in Knot B is heavily obscured, showing E(B−V) = 0.5 and contributing marginally to the observed FUV flux. The age-sensitive P Cygni features are thus accounted for by the 3 Myr CSF component alone, while the dusty 1 Myr component contributes ~30% of the optical luminosity. But with a mass of ~2 × 10^7 M⊙, the 1 Myr population successfully reproduces the high photoionization rate inferred from Hα. As can be seen in Tables 2 and 3, our model gives a total Q_0 = 1 × 10^54 s^-1, while that inferred from Hα is Q_0(Hα) = 0.8 × 10^54 s^-1, which is in good agreement. Our predicted FIR luminosity, dominated by the 1-Myr component and tracing dust-processed radiation, is L_FIR = 1.3 × 10^44 erg s^-1. Our modeling thus confirms the existence of a massive, obscured young population that dominates the ionizing power and FIR luminosity of Knot B.

Our results can be compared to the S22 photometrically derived cluster parameters given in Table 4. The S22 photometric modelling pointed to 10 clusters within the COS aperture with ages 1–5 Myr of varying masses and extinctions, as well as a 14 Myr cluster. Since we fix the number of separate population components in our spectroscopic analysis to three, our model for Knot B cannot account for all the S22 cluster ages. But our model is consistent with the two youngest S22 clusters, 1 Myr and 2 Myr, where our total mass in these ages is within 40% of the corresponding mass found by S22, and we find similarly high extinctions of E(B−V) ~ 0.5. Notably, while Wolf-Rayet (WR) emission features (e.g., He II λ1640, and the λ5608 Å and λ4650 Å bumps) are not included in Starburst99 models, these features observed in the spectrum of Knot B are signatures of a ≳3 Myr population, consistent with the presence of photometrically derived 4 Myr old clusters (Table 4). Our 3-Myr CSF component thus accounts both for the presence of the youngest stars and for the more evolved WR components. Lastly, the 13 Myr component we find is 40× more massive than the 14 Myr cluster, which may be attributed to the spectroscopic aperture also capturing the extended, diffuse populations.

Given the large cluster masses and young ages seen in Knot B, it could feasibly host Very Massive Stars (VMS, M > 120 M⊙). We examine the spectrum of Knot B for VMS diagnostics, such as O V λ1371 absorption, He II λ1640 emission with equivalent width (EW) > 3 Å, an absent or weak double-peaked red bump, and a blue bump without WR lines (Kunth & Sargent 1981; Martins et al. 2023; Wofford et al. 2023). We do not detect O V λ1371 in Knot B, and we measure its He II λ1640 EW to be 2.3 Å. The red bump detected in Knot B is broad and smooth, while the blue bump shows WR C IV λ4658. The above are all consistent with classical WR stars, and not VMS.
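The EW threshold quoted above can be evaluated with a standard continuum-normalized integration. The sketch below illustrates the idea; the spectrum arrays and the integration and continuum windows (rest-frame) are hypothetical choices, not the exact ones used for our measurement.

```python
import numpy as np

# Sketch of the equivalent-width measurement behind the VMS test above
# (VMS candidates show He II 1640 EW > 3 A; we measure 2.3 A in Knot B).
def equivalent_width(wave, flux, line=(1635.0, 1645.0),
                     cont=((1620.0, 1632.0), (1648.0, 1660.0))):
    """EW = integral of (F/Fc - 1) d(lambda); positive for emission here."""
    in_cont = np.zeros_like(wave, dtype=bool)
    for lo, hi in cont:
        in_cont |= (wave >= lo) & (wave <= hi)
    # linear continuum fit through the flanking windows
    f_cont = np.polyval(np.polyfit(wave[in_cont], flux[in_cont], 1), wave)
    in_line = (wave >= line[0]) & (wave <= line[1])
    dlam = np.gradient(wave)
    return np.sum((flux[in_line] / f_cont[in_line] - 1.0) * dlam[in_line])
```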
Although Knot B is the brightest LyC source in Haro 11, its local escape fraction is low. Based on the intrinsic LyC luminosity of our modelled stellar components, we obtain f_esc,912 = 0.034 ± 0.029 (Table 3). The uncertainty is dominated by the observational error of 77% on the detected flux, while the uncertainty in the intrinsic LyC luminosity is 21%. To compare to the escape fraction implied by the component cluster parameters, we model the clusters' UV spectra in Starburst99, based on their ages, masses, and metallicities. Summing their intrinsic LyC luminosities in the 903–912 Å range, we obtain f_esc,912 = 0.02 ± 0.01 for the clusters, consistent with our spectroscopic model. Knot B also shows the lowest Lyα escape fraction among the three knots, f_esc,Lyα = 0.65% (Östlin et al. 2021), although the observed Lyα peak separation of 400 km s^-1 is similar to that observed in Knot C and other weak LyC leakers with f_esc,LyC ~ 0.03 (Izotov et al. 2018; Flury et al. 2022b). The inefficiency of LyC and Lyα escape in Knot B can be explained by its significant gas and dust reservoir. It exhibits the highest H I column densities among the three knots, log(N(H I)/cm^-2) ≈ 21 (Östlin et al. 2021), and the highest molecular gas mass, M(H2) = 2 × 10^9 M⊙ (Gao et al. 2022). Moreover, it has a neutral covering fraction in Si II, f_c(Si II) = 0.96 (Östlin et al. 2021), and thus f_c(H I) ≥ 0.96 in H I (Chisholm et al. 2018), consistent with the escape fraction we obtain. With filamentary dust clouds across the region further obscuring it, Knot B shows significant dust extinction of E(B−V) = 0.5 (Table 3).

Ionization-parameter mapping (IPM), on the other hand, points to a complex picture of optical depth. Knot B shows a significant [S II] λλ6717,6731 deficiency, ∆[S II] = −0.16 (Östlin et al., in prep), where ∆[S II] is the displacement in log([S II]/Hα) from typical star-forming galaxies (Wang et al. 2021); such a deficiency is associated with confirmed LyC emitters (Izotov et al. 2016a,b; Flury et al. 2022b). However, the extended region shows a confined, ionization-bounded morphology in transverse directions (Keenan et al. 2017). Therefore, Knot B must be leaking LyC through a narrow ionization cone in the line of sight. Menacho et al. (2019) report evidence of narrow, highly ionized channels with Knot B at the base. They also find 1000 km s^-1 outflows, likely driven by stellar feedback. We measure [S II] λλ6716,6731/Hα = 0.14 in our spectrum of Knot B, too low to signal shock heating by supernovae. The S22 photometrically derived cluster parameters in Knot B, the youngest of which we capture in our spectroscopic model, nevertheless imply a total power of 2 × 10^41 erg s^-1 in stellar winds and supernovae.

Since the LyC-leaking Knot B hosts an ultraluminous X-ray source (ULX), its potential contribution to the LyC emission needs to be evaluated. This is an unusually bright, hard ULX (Prestwich et al. 2015). The X-ray emission has been revealed to originate from at least two objects (Gross et al. 2021). Based on the X-ray hardness, Gross et al. (2021) suggest that one or both of the sources is a black hole binary in a low-accretion, hard state, with the high X-ray luminosity suggesting the presence of an intermediate-mass black hole (IMBH) of mass M• > 7600 M⊙ in the region. Alternatively, it may be a low-luminosity AGN (LLAGN), whose signatures are obscured by the dense gas and dust observed in Knot B or diluted by the intense star formation. It is thus possible for Knot B to be a unique merger site of two IMBHs or LLAGN (Gross et al. 2021).
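A minimal picket-fence consistency check on the numbers above, assuming fully opaque neutral patches and ignoring dust in the open channels (so this is an upper bound, not a model):

```python
# For an opaque neutral covering fraction f_c, the maximum line-of-sight
# LyC escape is roughly 1 - f_c. With f_c(H I) >= 0.96 for Knot B:
f_c = 0.96
f_esc_max = 1.0 - f_c
print(f"f_esc <= {f_esc_max:.2f}")   # ~0.04, consistent with 0.034 +/- 0.029
```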
However, we find that the ULX is unlikely to be sufficiently bright to contribute to the LyC leakage from Knot B. A generous upper limit to the LyC luminosity of the ULX can be estimated by extrapolating the X-ray power law into the UV. Using the observed slope of Γ = 1.7 (Gross et al. 2021), we estimate the intrinsic 903–912 Å luminosity of the ULX to be L_912,ULX ≲ 6 × 10^39 erg s^-1. This is fainter than the observed LyC of Knot B, L_912,obs = 2 × 10^40 erg s^-1, although it agrees within the observational uncertainty of ±1.5 × 10^40 erg s^-1. Besides, more realistic ULX SED models predict even fainter LyC luminosities than we estimate, by 1–2 orders of magnitude (e.g., Fernández-Ontiveros et al. 2012; Gierliński et al. 2009). Regardless, the intrinsic stellar LyC emission we constrain from our population synthesis, L_912,int = 6 × 10^41 erg s^-1, is two orders of magnitude higher than that estimated for the ULX. So, the stellar population alone can fully account for the observed LyC leakage from Knot B, with a relatively insignificant contribution from the ULX.

Knot C

Knot C is the region with the second-strongest LyC flux of Haro 11, with a detected LyC luminosity L_912,obs = (0.9 ± 0.7) × 10^40 erg s^-1 (Table 2), or nominally about half of the Knot B luminosity. We note, however, that the large uncertainties on the LyC fluxes preclude a conclusive claim on the relative strengths of the emergent LyC from the knots. Knot C has been the prime candidate for LyC emission from Haro 11 based on its highest Lyα escape among the three knots, with f_esc,Lyα = 6% for Knot C, and f_esc,Lyα ≲ 1% for A and B (Östlin et al. 2021). Why does its LyC appear fainter than that in Knot B, and what is the corresponding local escape fraction?

Our best stellar population fit to the combined COS G130M/1055 + G130M/1300 + G160M/1600 + MUSE spectrum of Knot C is shown in Figure 5, with detailed parameters in Table 3. It consists of two components: a population continuously formed for 15 Myr at SFR = 1.7 M⊙ yr^-1 and a single older population with age 100 Myr. The model fits the observed spectrum well (reduced χ² = 2.4), with the P Cygni profiles of O VI, N V, C IV, and Si IV fit well, excluding ISM absorption, and clearly indicating the presence of 1–5 Myr-old stars. As can be seen in Tables 2 and 3, our stellar population fit gives a total photoionization rate Q_0 = 4 × 10^53 s^-1, which is within a factor of two of the value inferred from Hα. As Knot C appears to be a nuclear star cluster, a continuous star formation history is reasonable (Adamo et al. 2010).

The current episode of constant star formation is superimposed on an older, background population, which is likely the diffuse, extended, bulge-like component. We allow the metallicities of the components to vary between the observed value, Z_obs = 0.004 from Menacho et al. (2021), and Z = 0.001. We note that the old component we find is a generic background population whose age is not well determined above ~30 Myr. Because its contribution to the spectrum is only in the optical continuum, it can be fitted with a wide range of age-extinction combinations. The high-resolution Starburst99 UV models are only available up to ages of 100 Myr. We have therefore also fit low-resolution Yggdrasil SSP models (Zackrisson et al. 2011) to the spectrum of Knot C and found a good fit with ages up to 2 Gyr and extinctions E(B−V) < 0.8, consistent with previously reported values (Sirressi et al.
2022). Our predicted FIR luminosity from dust processing is L_FIR = 2 × 10^43 erg s^-1.

Comparing our results to the S22 photometrically derived cluster parameters in Tables 3 and 4, we see that S22 find Knot C to be dominated by one massive cluster of age 15 Myr, which is likely the nuclear cluster. The cluster properties we obtain from the spectrum are consistent with S22 within the errors. Thus Knot C, undergoing continuous star formation for the last 15 Myr, has the oldest mass-weighted age in Haro 11, in contrast to the 2 Myr-dominated Knot B. While there are 1–5 Myr stars present, their total mass in our model is ~9 × 10^6 M⊙, which is 3× lower than that in Knot B. Knot C therefore has a lower intrinsic ionizing luminosity than Knot B.

Our stellar population model gives a local escape fraction from Knot C of f_esc,912 = 0.051 ± 0.043. Here, the uncertainty is dominated by the observational error of 78%, while the model uncertainty is 30%. The escape fraction of Knot C may thus be higher than that of Knot B, 0.034 ± 0.029 (Table 3). So, despite appearing fainter in LyC than Knot B, Knot C may leak LyC more efficiently. The observed properties of Knot C are also consistent with it having the highest LyC escape among the knots. In addition to the highest Lyα escape fraction, Knot C exhibits the lowest neutral covering fraction, f_c(Si II) < 0.5, corresponding to f_c(H I) ~ 0.8 (Chisholm et al. 2018), and the lowest neutral column density, as measured with the apparent optical depth method (Savage & Sembach 1991) applied to Si II, log(N(Si II)/cm^-2) = 14.7 (Table 2; Östlin et al. 2021); a schematic implementation of this column-density estimator is sketched below.

On the other hand, its extinction is comparable to that of Knot B, suggesting a weak relation between dust extinction and LyC escape. Ionization-parameter mapping results are likewise ambiguous. On one hand, Knot C shows a strong [S II] λλ6717,6731 deficiency of ∆[S II] = −0.14 (Östlin et al., in prep), suggestive of LyC escape (Wang et al. 2021; Pellegrini et al. 2012). On the other hand, the knot appears to be in a low ionization state, based on low O32 ≤ 3 and [O III] λ5007/Hα ≤ 0.5 and high [O II] λ3727/Hα ~ 0.3 (Keenan et al. 2017). As shown in Figure 5 of Keenan et al. (2017), it appears to have a high-ionization region extending to the east, but overall, Knot C appears to have low ionization relative to the rest of the galaxy. It moreover shows a confined morphology for the high-ionization region, with the O32 ratio transitioning smoothly and quickly to lower values into optically thick envelopes.

Despite some signatures of high optical depth, Knot C has the advantage of extensive stellar feedback. With a continuous star-formation history of 15 Myr (Table 3), Knot C has had a significant supernova history. S22 suggest that the mechanical feedback has been taking place even longer, over the last 40 Myr (Sirressi et al. 2022). The S22 feedback model can account for the energetics of the observed soft diffuse X-ray emission in Haro 11 reported by Grimes et al. (2007). We measure [S II] λλ6716,6731/Hα = 0.26 in our spectrum of Knot C, consistent with stellar photoionization. But Menacho et al. (2019) find that the combination of [O I] λ6300/Hα and [O III] λ5007/Hα ratios on the outskirts of Haro 11 indicates 200–600 km s^-1 shocks. They also find a ~2 kpc, high-ionization structure with Knot C at the center, which is likely a superbubble. Knot C also shows 1000 km s^-1 gas (Menacho et al.
2019), which is difficult to explain with supernovae or stellar winds, and may instead be a signature of radiation-driven outflows (Komarova et al. 2021). Overall, the significant stellar feedback in Knot C, whether radiation- or mechanically dominated, may have cleared optically thin channels in its ISM, through which LyC photons can escape.

Similar to Knot B, Knot C contains a ULX that, based on its 0.3–8.0 keV spectrum and luminosity, might be one of the most luminous soft ULXs known, with L_X = 4.5 × 10^40 erg s^-1 (Prestwich et al. 2015; Gross et al. 2021). The X-ray emission likewise originates in two point sources, where the secondary object shows L_X ~ 2 × 10^40 erg s^-1 (Gross et al. 2021). Prestwich et al. (2015) and Gross et al. (2021) explain its high luminosity by an IMBH of mass M• > 20 M⊙, undergoing super-Eddington accretion (Swartz et al. 2011; Kaaret et al. 2017), although it is also possible that it is a neutron star binary, since compact object masses are poorly constrained. In turn, the blowout of inner-disk material in this intense accretion phase can result in a disk wind (e.g., Middleton et al. 2015). If the super-Eddington accretion drives an outflow with a mechanical luminosity similar to its X-ray luminosity (e.g., Justham & Schawinski 2012), the outflow power would be comparable to the stellar wind power of Knot C, estimated to be 6 × 10^40 erg s^-1 (Sirressi et al. 2022). So, the ULX feedback may contribute significantly to gas clearing, promoting LyC escape.

To understand the role of the ULX in the LyC leakage from the knot, we estimate the ULX LyC output by extrapolating its X-ray power law into the UV. For the observed spectral index Γ = 2.1, we estimate L_912,ULX ≤ 2 × 10^40 erg s^-1. This is a generous upper limit, as Vinokurov et al. (2013) and Kaaret & Corbel (2009) show that more realistic model SEDs of supercritical accretion disks are even fainter in the UV. Our estimated ULX LyC luminosity is twice as bright as the observed LyC luminosity from Knot C, and thus the ULX can plausibly contribute LyC. However, our modelled stellar LyC of 2 × 10^41 erg s^-1 exceeds that of the ULX by at least an order of magnitude. Thus, if the ULX is responsible for some of the LyC emission from the region, its contribution may be unimportant compared to that of the stellar population. However, if its mechanical feedback dominates the gas clearing for the observed LyC, its LyC contribution may still be significant.

So, the stellar population of Knot C can alone account for the observed LyC leakage from this region. The soft, luminous ULX observed in this knot may be sufficiently bright to contribute to the LyC emission, but its intrinsic production is < 10% of the stellar emission. The ULX may be able to aid in the escape of Lyman radiation through mechanical feedback. Further investigation of the mechanical and radiative feedback of the ULXs is needed to conclusively establish their roles in the LyC leakage from Knot C.

Knot A

Knot A has also been predicted to be the LyC-leaking knot, based on ionization-parameter mapping, which Keenan et al. (2017) use to show that Knot A is responsible for a large, ~kpc-sized region with high O32. They find a central O32 ~ 9, consistent with the most extreme Green Peas, which are the largest class of local LyC leakers (e.g., Flury et al. 2022a). However, we do not detect LyC in Knot A. The 2σ upper limit of the LyC flux density from Knot A is F_912 < 3.3 × 10^-15 erg s^-1 cm^-2 Å^-1 in the range 903–912 Å, or L_912,obs < 2.8 × 10^40 erg s^-1.
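Referring back to the Knot C column-density measurement above, here is a schematic implementation of the Savage & Sembach (1991) apparent-optical-depth estimator; the velocity and flux arrays are hypothetical, and the oscillator strength and wavelength must be supplied from atomic data for the specific Si II line used.

```python
import numpy as np

# Apparent-optical-depth (AOD) column density, Savage & Sembach (1991):
# N = 3.768e14 / (f * lambda[A]) * integral tau_a(v) dv   [cm^-2],
# with tau_a(v) = ln(F_continuum / F_observed) and v in km/s.
def aod_column(v_kms, flux, continuum, f_osc, wave_A):
    tau = np.log(np.clip(continuum / flux, 1.0, None))   # no negative tau
    dv = np.gradient(v_kms)
    return 3.768e14 / (f_osc * wave_A) * np.sum(tau * dv)
```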
To estimate the local escape fraction upper limit, we use our population synthesis results. Our stellar model for the spectrum of Knot A consists of three SSPs of ages 3 Myr, 4 Myr, and 13 Myr, as shown in Figure 3 and detailed in Table 3. The model fits the spectrum reasonably well (reduced χ² = 3.7), accounting for both the continuum and the age-sensitive features. The P Cygni profiles of O VI, N V, C IV, and Si IV clearly indicate the presence of 3 Myr-old stars. Although the Wolf-Rayet He II λ1640 and 5608 Å and 4650 Å bumps are not included in Starburst99 models, these features are consistent with a ~4 Myr population. With 1 × 10^7 M⊙, this 4 Myr component dominates the stellar mass of Knot A, but is highly obscured, with E(B−V) = 0.5. It therefore has no FUV contribution, including in the age-sensitive P Cygni features, and the 3 Myr component can alone account for these profiles, while the dusty population contributes 25% of the optical luminosity. The 3 Myr and 4 Myr populations reproduce the observed photoionization rate from Hα within a factor of two (Tables 2 and 3). However, given that Knot A has the least dust among the knots, with previously reported extinction E(B−V) ~ 0.2 (Menacho et al. 2021), this discrepancy may indicate a modest mass overestimate in the 4-Myr component. Comparing with the S22 clusters in Table 4, our 4-Myr spectroscopic component is 10× more massive than the 4-Myr S22 cluster, but has an 8× higher extinction. Similar to the 14 Myr cluster found by S22, we find a 13 Myr component, with a matching mass but 2× higher extinction. Lastly, the total FIR luminosity predicted by our model for the knot is 2.8 × 10^43 erg s^-1.

The massive young populations we uncover in Knot A may possibly host VMS. This was previously suggested as an explanation for Knot A's broad blue bump (Keenan et al. 2017). Similar to Knot B, we evaluate the spectrum of Knot A for VMS signatures. We do not detect the blue-shifted O V λ1371 absorption common in VMS, and we measure the He II λ1640 EW = 2 Å, while VMS show > 3 Å (Martins et al. 2023). The observed red bump is broad and smooth, while the blue bump shows WR C IV λ4658. So, as in Knot B, we do not find evidence of VMS in Knot A. The above observations are instead consistent with classical WR stars.

From our modelled intrinsic LyC luminosity of the stellar populations, we estimate a 2σ upper limit to the escape fraction of f_esc,912 ≤ 0.10 (Table 3). Our non-detection thus does not conclusively rule out significant LyC escape in Knot A. Indeed, multiple lines of evidence point to some degree of LyC escape in Knot A. First, despite showing the lowest observed Lyα luminosity among the knots, L_Lyα = 3 × 10^40 erg s^-1, the Lyα escape fraction of Knot A is f_esc,Lyα = 1.2 ± 0.12% (Östlin et al. 2021), which is 2× that of Knot B (Table 2). Nevertheless, this value is lower than that suggested by its reddening, implying that Lyα is strongly attenuated by dust scattering (Östlin et al. 2021). The neutral covering fraction is similar to that of Knot B, f_c = 0.95 ± 0.05 (Östlin et al. 2021), and so is the column density, log(N(H I)/cm^-2) = 20.7 (Östlin et al. 2021). Moreover, Knot A shows a number of signatures of low optical depth in ionization-parameter mapping. Its [S II] deficiency is the most extreme among the knots, ∆[S II] = −0.19 (Östlin et al., in prep), predicting the highest f_esc,LyC (Wang et al. 2021). Most importantly, Keenan et al. (2017) report high-O32 gas originating at Knot A and spreading to distances of > 2 kpc, and Menacho et al.
(2019) observe it to > 4 kpc as high [O III]/Hα, signaling a transparent medium in the transverse directions. In fact, the O32 ratio does not transition smoothly into lower values at the edges of this region, strongly implying low optical depth in the plane of the sky. Menacho et al. (2019) show that this structure exhibits the highest ionization, [O III]/Hα ≳ 3, in velocity bins from −300 km s^-1 to −150 km s^-1, while the central [O III]/Hα ~ 1. This suggests that the high-ionization structure is likely an optically thin outflow driven by LyC from the knot, but its axis is not coincident with our line of sight.

The main implication of our results in light of IPM observations is therefore that LyC escape must be highly anisotropic. While we do not detect LyC in Knot A, it is likely that the leakage, if any, occurs through a channel not coincident with our line of sight (e.g., Zastrow et al. 2011). This is consistent with the Lyα peak velocity separation, which is highest in Knot A, v_sep,Lyα ~ 500 km s^-1, compared to the other two knots. Such a large v_sep,Lyα is consistent with escape fractions < 1% (Flury et al. 2022b).

DISCUSSION

We find that the LyC-emitting regions in Haro 11 are Knots B (L_912,obs = 2.3 ± 1.8 × 10^40 erg s^-1) and C (L_912,obs = 0.9 ± 0.7 × 10^40 erg s^-1). We determine their respective LyC escape fractions to be f_esc,912 = 3.4 ± 2.9% and 5.1 ± 4.3% (Table 3). The total LyC-luminosity-weighted escape fraction of Haro 11 is f_esc,912 = 3.9 ± 3.4%, consistent with the 3.3 ± 0.7% obtained by Leitet et al. (2011). Knot B appears to dominate in LyC luminosity, with the caveat that the low signal-to-noise in our LyC observations prevents a conclusive determination of which knot dominates. At face value, Knot B is responsible for 2/3 of the total observed flux, and it has ~3× higher mass in 1–5 Myr-old stars than Knot C (Table 3), which is the age of peak LyC production. Thus Knot B strongly dominates the ionizing photon production. However, it has almost full neutral covering (Östlin et al. 2021) and high extinction (Table 3). In comparison, Knot C is a more evolved region of constant star formation for the last 15 Myr, with a commensurate history of supernova feedback, possible LLAGN feedback, and a low neutral covering fraction (Sirressi et al. 2022; Östlin et al. 2021). Our findings highlight the sensitivity of LyC escape to the star-formation rate, age, and optical depth.

Thus, Knot B both has more young stars and emits more strongly in LyC, but has a slightly lower escape fraction than Knot C. Our results underscore the fact that LyC escape fraction and escaping LyC luminosity are separate quantities. In Haro 11, Knot B seems to produce the greatest LyC emission because it strongly dominates in LyC production. But its large gas and dust content apparently keeps its local escape fraction lower than that of the more evolved, and cleared, Knot C. On the other hand, Knot A shows a similar age and extinction to Knot B, but we do not detect it in LyC. It is thus the interplay of star formation intensity, age, gas clearing, and/or line-of-sight orientation that determines the efficiency and detection of LyC escape. Tracers of f_esc,912 alone are insufficient indicators of escaping LyC luminosity along the line of sight. Flury et al.
(2022b) connect LyC properties to star formation density, Lyα properties, and O32 in 66 local star-forming galaxies, including Green Peas. They find possible evidence for two modes of LyC escape in the most extreme starbursts, dictated by whether stellar feedback is wind-dominated or radiation-dominated. One population shows younger ages, higher O32, and low metallicity, suggesting strong radiation-dominated feedback, which may be linked to optically thin, radiation-driven winds (Komarova et al. 2021). On the other hand, the somewhat older starbursts with lower O32 and higher metallicity likely leak LyC with the help of superwinds (e.g., Heckman et al. 2011; Zastrow et al. 2013), and these show lower f_esc,LyC. If Knot A is indeed an anisotropic LyC emitter, as evidenced by ionization-parameter mapping (Keenan et al. 2017), then it falls into the radiation-dominated category, consistent with its very high ionization parameter and other radiation-dominated features (Section 3.3; Keenan et al. 2017). Knot B may likewise be radiation-dominated based on its high O32 and young age, with the caveat that its metallicity is close to solar. Finally, Knot C has low O32 and is apparently dominated by mechanical feedback, given its extensive supernova history (Sirressi et al. 2022). Since we observe multiple stellar generations in each region, it is likely that both modes are at play, with their relative importance to be established.

Haro 11 is a local Green Pea analog, showing O32 and other radiation-dominated properties characteristic of these objects (Micheva et al. 2017; Keenan et al. 2017). These properties are linked to Knot A, which turns out to show no direct detection in LyC, although IPM strongly implies that the region is optically thin along other sightlines. This additionally stresses the dependence of the escape fraction on the geometrical distribution of dust and neutral gas along the line of sight. The fact that the detected LyC from Haro 11 originates from regions other than Knot A further demonstrates that LyC emission from Green Peas may be more complex than consideration of a single starburst (e.g., Micheva et al. 2018). Notably, the global O32 for Haro 11 is only 2.5 (James et al. 2013), which is on the low side for a GP. This O32 is consistent with that of unresolved GPs in the Low-redshift Lyman Continuum Survey (LzLCS), which contains the largest sample of local LCEs to date. The LzLCS GPs show f_esc,LyC = 1–4% for similar O32. In addition, the ionizing photon production efficiencies we estimate are log(ξ_ion) = 25.32 for Knot B and 25.22 for Knots A and C. These values are lower than those of reionization-era galaxies, 25.4–25.8 (Simmonds et al. 2023; Saxena et al. 2023; Atek et al. 2024), or of local strong LyC leakers, 25.6–26 (Schaerer et al. 2016). But they are consistent with the standard values assumed in reionization models (Robertson et al. 2015).

Our spatially resolved study thus uncovers how some of the global properties we observe at higher redshifts may arise from multiple star-forming regions of widely differing properties. In particular, the regions dominating the ionization may not be the primary LyC sources in our line of sight, and the contributions of more evolved regions should not be discounted.
Implications for Lyα as a Diagnostic of LyC

As LyC emission is difficult to observe directly, indirect tracers are required to identify LyC leakers. Lyα is expected to correlate with LyC escape, as it is also sensitive to the hydrogen column density. Radiative transfer simulations (e.g., Verhamme et al. 2015) and observations of several LCEs (Verhamme et al. 2017) confirm a tight relationship between Lyα and LyC escape. The LzLCS survey shows that Lyα width and peak separation are some of the strongest indirect predictors of LyC escape fractions (Flury et al. 2022b). With their new, larger sample, the authors reproduce the anti-correlation between Lyα peak velocity separation v_sep,Lyα and LyC escape fraction, f_esc,LyC, first established by Izotov et al. (2018).

Our findings in Haro 11 are consistent with these Lyα predictions. As shown in Table 2, the Lyα peak separations observed in Knots A, B, and C are 530, 409, and 400 km s^-1, respectively (Östlin et al. 2021), decreasing with increasing f_esc,LyC. Knots B and C are consistent with the Flury et al. (2022b) v_sep,Lyα versus f_esc,LyC relation, which predicts f_esc,LyC ~ 0.03 for v_sep,Lyα = 400 km s^-1. The two LCE knots are also consistent with an f_esc,Lyα versus f_esc,LyC correlation, where Knot C has a higher f_esc,Lyα and a slightly higher f_esc,912. We calculate the luminosity-weighted average Lyα escape fraction for the three knots to be f_esc,Lyα = 4.2 ± 0.6%. We also estimate the global Lyα peak velocity separation by combining the Lyα profiles of the three knots, obtaining v_sep,Lyα = 410 ± 70 km s^-1. Our luminosity-weighted LyC escape fraction f_esc,912 = 3.9 ± 3.4% is consistent with the averaged v_sep,Lyα according to the Flury et al. (2022b) relation. These values account only for the Lyα emission from the knots, and not the diffuse Lyα observed outside of them (Östlin et al. 2009).

The efficiency of Lyα escape, or its escape fraction, can provide insight into LyC radiative transfer. Knot C has both the highest Lyα luminosity and f_esc,Lyα among the knots, and its f_esc,912 is indeed likely the highest, despite the LyC luminosity being only half of that in Knot B. So, while the LyC and Lyα escape fractions correlate, the emerging LyC and Lyα luminosities do not necessarily do so.

As for the shape of the Lyα profiles, Knots A and B clearly show broader Lyα red peaks than Knot C, with their respective widths 302 km s^-1, 338 km s^-1, and 195 km s^-1, suggesting higher optical depth and low LyC escape (Östlin et al. 2021). The Lyα red peak width specifically points to Knot B as the weakest leaker, inconsistent with our results. Since both Knots A and B show neutral covering fractions close to unity (Table 2; Östlin et al. 2021), it appears that the correlation of Lyα red peak width with f_esc,LyC is not as strong when considering individual star-forming regions instead of integrated galaxy properties. The Lyα profile of Knot C, on the other hand, is narrow, but with a higher red peak asymmetry than in Knots A and B (Rivera-Thorsen et al. 2017; Östlin et al. 2021). Kakiichi & Gronke (2021) show that higher red peak asymmetry around its center points to LyC escape through a hole-ridden ISM, or a picket-fence structure, as the asymmetry is seen to correlate with ISM porosity in their radiation-hydrodynamic simulations. This arises because the asymmetry of the red peak traces the presence of both optically thin and thick channels, while a symmetric red peak indicates isotropic leakage.
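The luminosity-weighted averages quoted above follow from a simple weighting. A sketch using the LyC values for Knots B and C; whether and how the undetected Knot A enters the weighting is a convention choice, and the uncertainties are omitted here.

```python
import numpy as np

# LyC-luminosity-weighted escape fraction over the two detected knots;
# the same weighting with Lya luminosities gives the f_esc,Lya average.
L_obs = np.array([2.0e40, 0.9e40])   # Knots B, C (erg s^-1)
f_esc = np.array([0.034, 0.051])
print(f"f_esc,912 = {np.average(f_esc, weights=L_obs):.3f}")  # ~0.039
```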
One inconsistency we see in the Lyα predictions is in the non-detected Knot A, where the peak velocity separation is the highest among the knots, but the Lyα escape fraction is twice that of Knot B. The peak velocity separation thus implies the lowest f_esc,LyC among the knots, while the Lyα escape fraction suggests it should be higher than that in Knot B. A likely explanation may be that the LyC is emerging in directions transverse to our line of sight, as implied by ionization-parameter mapping (Section 3.3), while Lyα can be enhanced by scattering into our line of sight. The observed Lyα luminosity of Knot A is notably the lowest among the knots, and the intrinsic LyC we estimate is 3× lower than that of Knot B (Table 3), while it exhibits a similar covering fraction and column density, and thus a similarly broad red Lyα peak. Lastly, it is important to note that the Lyα relations described above were established for unresolved galaxies, which likely also consist of multiple star-forming regions with varying properties. In our spatially resolved study, we connect Lyα profiles from smaller, ~1-kpc apertures to individual knot properties, providing a view of the separate components that may make up integrated Lyα observations.

Although most of our results agree well with the Lyα predictions, there are still significant differences in the escape conditions for LyC versus Lyα. The maximum column density at which Lyα can escape is log(N(H I)/cm^-2) < 13, while the threshold for LyC is log(N(H I)/cm^-2) < 17. So, the four orders of magnitude of difference provide a parameter space where gas can be optically thin in LyC but not in Lyα. Another major difference in the escape mechanisms is scattering: Lyα scatters strongly, modulating its escape path and additionally promoting dust absorption relative to LyC. Dijkstra et al. (2016) simulate the radiative transfer of Lyman radiation in a multiphase ISM to investigate the relationship between the LyC and Lyα escape fractions. They find a positive correlation, as expected, but with significant scatter at higher Lyα escape fractions that is driven by the gas covering fraction. The corresponding LyC escape fractions in this regime are lower than expected from the correlation, decreasing with higher covering fractions. The scatter extends at least two orders of magnitude, consistent with observations (Flury et al. 2022b), showing that ISM porosity introduces appreciable stochasticity into the relationship between LyC and Lyα radiation. This can also provide context for the Lyα versus LyC observations of Knot A.

Thus, Lyα properties can provide clues to LyC escape conditions, though not without additional independent tracers. Our Haro 11 study shows that the Lyα peak velocity separation is consistent with it tracing f_esc,LyC, where the knots fall on the observed relation within the scatter. But the Lyα luminosity, red peak width, and escape fraction do not correlate directly with LyC escape in regions of varying gas optical depth and covering. Our results underscore the significant distinctions between Lyα and LyC radiative transfer, and show that further study of the effects of ISM morphology and the anisotropy of LyC escape is needed.

IPM and Anisotropy of LyC Escape

Ionization-parameter mapping relies on nebular emission line ratios of species with different ionization potentials to find regions of low optical depth, where high-ionization species dominate. This serves as an indirect tracer of LyC escape (Pellegrini et al. 2012).
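As a toy illustration of the mapping idea, an O32-style ratio map can be built from two emission-line images; the arrays here are hypothetical, and conventions for O32 vary (λ5007 alone versus the λλ4959,5007 doublet).

```python
import numpy as np

# Toy ionization-parameter map: ratio of a high- to a low-ionization line.
# High-ratio sightlines suggest optically thin, density-bounded gas; a sharp
# ratio drop at a region's rim suggests an ionization-bounded edge.
def o32_map(oiii_5007, oii_3727, floor=1e-20):
    return oiii_5007 / np.clip(oii_3727, floor, None)
```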
Thus, Lyα properties can provide clues to LyC escape conditions, though not without additional independent tracers. Our Haro 11 study shows that the Lyα peak velocity separation is consistent with it tracing f_esc,LyC, with the knots falling on the observed relation within the scatter. But the Lyα luminosity, red peak width, and escape fraction do not correlate directly with LyC escape in regions of varying gas optical depth and covering. Our results underscore the significant distinctions between Lyα and LyC radiative transfer, and show that further study into the effects of ISM morphology and the anisotropy of LyC escape is needed.

IPM and Anisotropy of LyC Escape

Ionization-parameter mapping relies on nebular emission line ratios with different ionization potentials to find regions of low optical depth, where high-ionization species dominate. This serves as an indirect tracer for LyC escape (Pellegrini et al. 2012).

Our resolved observations of the LyC-emitting regions of Haro 11 are not fully consistent with predictions from IPM, if isotropic escape is assumed. Most importantly, in Knot A, IPM demonstrates LyC escape conditions transverse to the line of sight (Keenan et al. 2017; Section 3.3). Yet we find no LyC detection in the COS aperture, which probes the line of sight. As noted above, this suggests that LyC escape may be extremely anisotropic. This is also implied by our observations of Knot B, which leaks LyC despite being almost fully covered in neutral gas, implying a narrow escape path. Knot C, on the other hand, is seen to be in a low-ionization state, implying high optical depth in LyC. Yet we find it to have the highest LyC escape fraction. Thus the IPM predictions appear to be sensitive to line-of-sight effects.

Hydrodynamic simulations of starbursts at z = 4−6 by Cen & Kimm (2015) confirm that the LyC escape fraction depends strongly on the viewing angle. They find that only highly ionized, evacuated channels with small solid angles allow significant LyC propagation. Indeed, the observed scatter in LyC escape fractions with respect to correlated starburst properties at z ∼ 3 and z < 0.4 points to a line-of-sight effect, where partial ISM clearing results in limited transparent paths (Flury et al. 2022b; Nestor et al. 2011). Several nearby starbursts with active stellar feedback likewise exhibit narrow ionization cones (Zastrow et al. 2011, 2013). The line-of-sight bias is thus crucial to account for in reionization studies at all redshifts. The anisotropy of LyC escape, dictated by gas-clearing mechanisms such as winds and supernovae, as well as by the optical depth and ionization structure, needs to be accounted for when using IPM as a predictor of LyC escape. This may be even more important if accretion-driven feedback is responsible for the necessary gas clearing.

Thus, our observations of Haro 11 offer important data on what information IPM does and does not provide on LyC escape. While it gives insight into the ionization structure in the plane of the sky, additional tracers probing line-of-sight conditions are required to identify objects in which LyC can be directly detected.

X-ray Sources and LyC Escape

The two LyC-leaking Knots B and C both host ULXs, while the undetected Knot A is purely star-forming. This interesting coincidence raises the question of the role of accretors in LyC escape, which has been a major problem in cosmic reionization (e.g., Volonteri & Gnedin 2009; Madau & Haardt 2015). Although the ULX LyC contribution is likely negligible in Knot B, the ULX in Knot C, which may be an LLAGN, may contribute to the LyC leakage from the region at the ≲10% level. On the other hand, the focused mechanical feedback that can be expected from these X-ray sources might contribute to clearing optically thin channels for LyC escape in both Knots B and C. Low-accretion, hard X-ray sources such as that seen in Knot B may produce jets of substantial mechanical power (e.g., Merloni & Heinz 2013; May et al. 2018). Likewise, soft, super-Eddington sources, such as the one hosted by Knot C, can also drive radiation-driven disk winds that may be important (e.g., Middleton et al. 2015). These feedback mechanisms may significantly enhance ULX contributions to LyC escape.

The question of the role of accretors in cosmic reionization is all the more important in light of the observed excess X-ray emission in early-universe galaxy analogs.
For example, Kaaret et al. (2011) observe an increased X-ray luminosity per star formation rate, L_X/SFR, in blue compact dwarfs (BCDs), and Brorby et al. (2014) find the BCD X-ray luminosity function to have 10× the normalization observed in solar-metallicity galaxies. Also, Douna et al. (2015) find 10× more high-mass X-ray binaries (HMXBs) per SFR bin in < 0.2 Z⊙ galaxies than in solar-metallicity objects. Extremely metal-poor galaxies (XMPGs, Z < 0.05 Z⊙) show more ULXs than higher-metallicity galaxies (Prestwich et al. 2013). Moreover, Basu-Zych et al. (2013) find that a sample of z < 0.1 Lyman break analogs (LBAs), including Haro 11, likewise exhibits a higher L_X/SFR than solar-metallicity galaxies. Their interpretation is that lower metallicity results in more luminous HMXBs, as weaker stellar winds lead to more massive compact objects. Brorby et al. (2016) quantify this L_X − SFR − Z relation, in which L_X/SFR increases with lower log(O/H). Finally, Dittenber et al. (2020) find that the majority of local Lyα emitters, and thus candidates for LyC escape, may be driven by ULXs, finding a connection between Lyα escape and HMXBs and/or LLAGN. Thus, metal-poor starbursts in the early universe likely formed an overabundance of X-ray binaries, which may have contributed to the process of reionization.

From our LyC study of Haro 11, we see that LyC-emitting regions may coincide with ULX sites, but the role, if any, of accretors in ionizing radiation production and mechanical feedback remains to be clarified.
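The L_X − SFR − Z trend described above can be put in schematic form. A minimal sketch follows; the functional shape (L_X/SFR rising toward lower metallicity) follows the text, but the coefficients are placeholders of our own choosing, not the published Brorby et al. (2016) fit values.

```python
import numpy as np

def log_LX(sfr, logOH12, a=39.5, b=1.0, c=-0.6):
    """Schematic L_X-SFR-Z scaling (erg/s).

    sfr     : star formation rate in Msun/yr
    logOH12 : gas-phase metallicity as 12 + log(O/H)
    a, b, c : PLACEHOLDER coefficients, not the published fit;
              c < 0 encodes L_X/SFR increasing toward lower metallicity.
    """
    return a + b * np.log10(sfr) + c * (logOH12 - 8.69)  # 8.69 = solar 12+log(O/H)

for z in (8.69, 8.0):  # solar vs. sub-solar metallicity
    print(f"12+log(O/H) = {z}: log L_X = {log_LX(1.0, z):.2f} at SFR = 1 Msun/yr")
# With these placeholders, dropping ~0.7 dex in metallicity raises L_X/SFR
# by ~0.4 dex, i.e. a factor of a few -- the qualitative sense of the excess
# reported for BCDs, XMPGs, and LBAs above.
```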
SUMMARY AND CONCLUSIONS

Haro 11 is a key object for understanding cosmic reionization, as it is the closest, and first, confirmed local LyC emitter. It is dominated by three star-forming Knots A, B, and C of widely varying properties, and since the original, spatially unresolved detection, it has remained unclear which of these three regions is responsible for the observed LyC emission. We therefore obtained new HST/COS G130M/1055 observations of each of the knots in the range 900−1200 Å, which reveal that Knots B and C are the LyC emitters toward our line of sight. Their respective 903−912 Å luminosities are 1.9 ± 1.5 × 10⁴⁰ erg s⁻¹ and 0.9 ± 0.7 × 10⁴⁰ erg s⁻¹. So, Knot B seems to dominate the leaking LyC luminosity of Haro 11, accounting for 66 ± 50% of the detected flux in 903−912 Å, while Knot C accounts for 35 ± 25%. The total Haro 11 LyC luminosity is 2.9 ± 1.0 × 10⁴⁰ erg s⁻¹.

We perform stellar population synthesis to constrain the stellar parameters, and thus the local LyC escape fractions, of each knot. For this, we combine our new COS G130M/1055 data with the Sirressi et al. (2022) COS G130M/1300 + G160M/1600 spectra, as well as Menacho et al. (2019) MUSE spectra. Fitting Starburst99 models to each knot's spectrum, we find that Knot B, the brightest LyC emitter, is dominated by a ∼1 Myr, 2 × 10⁷ M⊙, heavily obscured component. Knot C is best fit with a continuous star formation history over ∼15 Myr, with a stellar mass of 3 × 10⁷ M⊙ and low obscuration. Knot A is dominated by a ∼4 Myr, 1 × 10⁷ M⊙, highly obscured population. Our modeling thus uncovers massive, young, obscured stellar populations in Knots B and A, which dominate the ionizing photon production in their respective regions but have minimal FUV imprint and contribute < 40% in the optical. We do not see conclusive evidence of Very Massive Stars in any of the knots. Instead, we find clear signatures of classical WR stars in Knots A and B.

The primary tracer of the young populations is the large Hα luminosity, which we reproduce with our models to within a factor of two or better. Moreover, Haro 11 shows a large FIR luminosity, L_FIR = 2.7 × 10⁴⁴ erg s⁻¹, which qualifies it as a LIRG. Our dusty stellar population models for the three knots combined account for 1.7 × 10⁴⁴ erg s⁻¹, about 60% of the observed integrated FIR radiation. Thus the model estimates and observed values are in reasonable agreement, especially considering that the IRAS aperture includes the entire galaxy. Sirressi et al. (2022) photometrically detect 7−11 clusters in each knot, with ages of 1−15 Myr and masses up to 5 × 10⁷ M⊙ (Table 3). Our 3-component spectroscopic models capture the aggregate young, UV-dominant populations, as well as the ∼15 Myr generation that does not contribute significant LyC. The spectrum of Knot C additionally shows an old background population with an age up to ∼2 Gyr.

The corresponding LyC escape fractions in the range 903−912 Å are f_esc,912 = 3.4 ± 2.9% for Knot B and 5.1 ± 4.3% for Knot C. In the case of Knot A, we place a 2σ upper limit on the escape fraction of f_esc,912 ≤ 10%. The luminosity-weighted escape fraction for the entirety of Haro 11 is f_esc,912 = 3.9 ± 3.4%, consistent with Leitet et al. (2011).

Our results underscore that the LyC escape fraction (f_esc,912) and the escaping LyC luminosity (L_912,obs) are distinct fundamental parameters for characterizing LyC escape. Although we find that the majority of Haro 11's LyC flux likely originates from Knot B, the values above demonstrate that its local escape fraction appears to be lower than that of the LyC-fainter Knot C. The reason is that Knot B has by far the highest ionizing photon production, but it exhibits the largest amount of neutral gas among the knots, and in particular, it has a covering fraction close to unity (Gao et al. 2022; Östlin et al. 2021). This results in a potentially lower escape fraction. On the other hand, Knot C is intrinsically fainter in LyC due to an older mass-weighted age, but it is significantly less obscured. Some of its gas has likely been evacuated by feedback, as its HI covering fraction is f_c,HI ∼ 0.8 and its neutral column density is the lowest among the knots (Table 2; Östlin et al. 2021). Characterizing LyC emission by only f_esc,912 would prioritize Knot C over B, whereas Knot B is in fact the more luminous LyC emitter. Thus, although the escape fraction is often emphasized in the literature, the relevant parameter is the convolution of the escape fraction and the intrinsic LyC luminosity, since both drive the observable LyC luminosity and the number of ionizing photons leaked into the IGM.
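The convolution of escape fraction and intrinsic luminosity can be illustrated directly with the numbers quoted above. A minimal sketch; the implied intrinsic luminosities are simply L_obs/f_esc, i.e., back-of-envelope values for illustration, not the modeled outputs of Table 3.

```python
# Observed 903-912 Å luminosities (erg/s) and local escape fractions from the text.
knots = {
    "B": {"f_esc": 0.034, "L_obs": 1.9e40},
    "C": {"f_esc": 0.051, "L_obs": 0.9e40},
}

for name, k in knots.items():
    L_int = k["L_obs"] / k["f_esc"]   # implied intrinsic LyC: L_obs = f_esc * L_int
    print(f"Knot {name}: f_esc = {k['f_esc']:.1%}, "
          f"L_obs = {k['L_obs']:.1e} erg/s, implied L_int ~ {L_int:.1e} erg/s")
# Knot C has the higher escape fraction, but Knot B's ~3x larger implied
# intrinsic output makes it the more luminous emerging LyC source.
```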
We also use our new observations to test Lyα as a LyC escape tracer by comparing our results to Lyα-based predictions. We see a correlation of f_esc,912 with the observed Lyα luminosity, and an inverse relation with the Lyα peak velocity separation v_sep,Lyα, as expected. But the Lyα escape fraction f_esc,Lyα does not consistently trace LyC escape. While Knot C has both the highest Lyα and LyC escape fractions, the lowest f_esc,Lyα in Knot B incorrectly predicts it to be the weakest LyC leaker. The Lyα red peak width similarly points to Knot C as the strongest leaker and Knot B as the weakest, with the latter not consistent with our observations. So, the observed Lyα luminosity and peak velocity separation appear to correlate more consistently with LyC escape, while f_esc,Lyα and the Lyα red peak width are less consistent tracers. However, we stress that these results are based on only three star-forming knots in this galaxy.

There are important implications from the fact that Knot A is not detected in the LyC, despite it driving the Green Pea properties of Haro 11. Green Peas are the largest class of local LyC emitters, and their radiation properties strongly correlate with f_esc,LyC (Flury et al. 2022a,b). Knot A has therefore been predicted to be a leaking knot based on its ionization parameter, which is the highest of the three knots (Keenan et al. 2017). First, its non-detection highlights the potential importance of multiple star-forming regions driving the LyC escape in Green Peas. In Haro 11, we see that while the knot dominating the GP properties is not directly detected, other knots are LyC emitters. Many GPs, like Haro 11, are mergers hosting multiple star-forming knots. So, while GP properties and signatures of radiation-dominated feedback are alone insufficient to predict LyC emission, the multiplicity of starbursts may be important. Micheva et al. (2018) noted the potential role of two-stage starbursts in LyC emitters. On the other hand, Knot A shows clear signatures of density-bounded conditions in the plane of the sky (Keenan et al. 2017). It is therefore likely that Knot A is leaking LyC transverse to our line of sight, as seen in, e.g., NGC 5253 (Zastrow et al. 2011). This implies that LyC escape must be highly anisotropic, as predicted by simulations (Cen & Kimm 2015) and inferred from observations (Nestor et al. 2011; Flury et al. 2022b).

Lastly, we note the intriguing coincidence that the two LyC-leaking knots are the hosts of the only two ULXs in Haro 11 (Prestwich et al. 2015; Gross et al. 2021). Neither of the ULXs appears to contribute significantly to the LyC emission, especially considering that the stellar populations dominate the UV light by 1−2 orders of magnitude. Nevertheless, the X-ray sources may promote LyC escape through accretion-dominated mechanical feedback, where powerful disk winds and jets may clear optically thin channels. A multitude of studies show that reionization-era analogs, such as LBAs and BCDs, have an overabundance of ULXs (e.g., Dittenber et al. 2020; Brorby et al. 2014; Basu-Zych et al. 2013; Kaaret et al. 2011). Further investigation is thus required into both the ultraviolet emission and the mechanical feedback of ULXs, in order to determine their role in LyC escape.
Figure 2. New HST/COS Lyman continuum observations of Knots A, B, and C in Haro 11. We measure the LyC in a 9 Å window between 922.4 Å (the left edge of the figure) and the redshifted Lyman limit at 931.4 Å, shown as a black dashed line. The gray regions represent intervals that were excluded from LyC measurements due to geocoronal emission lines.

Figure 3. Top: Combined new COS G130M/1055 + Sirressi et al. (2022) G130M/1300 + G160M/1600 + MUSE observations (black) of Haro 11 Knot A, and our model consisting of 3 SSPs, indicated by the colored lines as shown. The light green regions represent intervals that were used for fitting the models to the data. The rest was excluded to mask ISM and nebular lines, geocoronal emission, and detector gaps. Bottom: Zoom of age-sensitive O VI, N V, C IV, and Si IV P-Cygni profile fits.

Figure 4. Same as Figure 3 but for Knot B. The model assumes two single stellar populations and an episode of continuous star formation.

Figure 5. Same as Figure 3 but for Knot C. The model assumes one episode of continuous star formation plus an older background population (see text).

This work is based on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs 16260, 15352, and 13017. This research is also based on observations collected at the European Southern Observatory under programme 096.B-0923(A).

Table 1. COS G130M/1055 Observations (columns: Target, UT Start Date, Exposure time (s)).

Table 2. LyC Fluxes and Observationally Derived Properties of Knots A, B, and C.

Table 3. Parameters Fitted from Spectroscopic Stellar Population Models. Fits are based on both Starburst99 and Yggdrasil (see text); ages up to ∼2 Gyr can be fit.

Table note: Low [O II]/Hα ratios trace density-bounded conditions (Pellegrini et al. 2012; Wang et al. 2021). Values of [O II] λ3727/Hα < 0.1 indicate low optical depth, and for Knot B, [O II] λ3727/Hα ∼ 0.05 in the central line of sight (Keenan et al. 2017) suggests LyC escape. Knot B also shows elevated O32 = [O III] λ5007/[O II] λ3727 > 8 overall (Keenan et al. 2017), roughly twice the mean value observed in local unresolved LCEs (Flury et al. 2022a). Higher O32 values point to density-bounded conditions and correlate with the LyC escape fraction.
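The line-ratio thresholds quoted in the table note above lend themselves to a crude screening function. A minimal sketch; the cut-offs encode the values quoted in the text, but the function itself is illustrative rather than a published classifier, and as discussed earlier, IPM probes the plane of the sky rather than the line of sight.

```python
def ipm_flag(oii_ha, o32):
    """Crude optical-depth flag from nebular line ratios.

    oii_ha : [O II] λ3727 / Hα flux ratio
    o32    : [O III] λ5007 / [O II] λ3727 flux ratio
    Thresholds follow the values quoted in the table note; this is an
    illustrative screen, not a calibrated classifier.
    """
    if oii_ha < 0.1 and o32 > 8:
        return "density-bounded: low optical depth, candidate LyC escape"
    if o32 > 8:
        return "elevated ionization: possibly density-bounded"
    return "ionization-bounded: high optical depth along this sightline"

# Knot B central sightline: [O II]/Hα ~ 0.05 is quoted from Keenan et al. (2017);
# O32 is quoted only as "> 8 overall", so 8.5 here is a representative value.
print(ipm_flag(oii_ha=0.05, o32=8.5))
```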
2024-04-03T06:45:27.255Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "2386ffa9a7705d5b30a51a306471659db25f0cc5", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.3847/1538-4357/ad3962/pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "2386ffa9a7705d5b30a51a306471659db25f0cc5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
253070234
pes2o/s2orc
v3-fos-license
The impact of the Covid-19 pandemic on the uptake of routine maternal and infant vaccines globally: A systematic review

Maintaining routine vaccination coverage is essential to avoid outbreaks of vaccine-preventable diseases. We aimed to understand the international impact of the COVID-19 pandemic on routine vaccination in pregnant women and children aged 0-5-years-old. A systematic review of quantitative and mixed methods studies exploring changes in vaccination coverage, vaccination services, and vaccine confidence since the start of the COVID-19 pandemic was conducted. MEDLINE, EMBASE, CINAHL, PsychINFO, Web of Science, Google Scholar, World Health Organisation, UK Government Joint Committee on Vaccination and Immunisation (including EU and US equivalents), and SAGE Journals were searched between 15-17th June 2021. Selected studies included pregnant women, health professionals, and/or infants aged 0-5-years-old including their parents (population); reported on the COVID-19 pandemic (exposure); presented comparisons with the pre-COVID-19 pandemic period (comparator); and reported changes in routine maternal and infant vaccination coverage, services, and confidence (outcomes). Sources published only in a non-English language were excluded. The Newcastle-Ottawa Scale was used to assess study quality and risk of bias (ROB), and a narrative synthesis was undertaken. This review has been registered with PROSPERO (CRD42021262449). 30 studies were included in the review: data from 20 high-income countries (HICs), seven low- and middle-income countries (LMICs), and three regional studies (groups of countries). 18 studies had a low ROB and 12 had a higher risk; however, both low and high ROB studies showed similar results. Two studies meeting the inclusion criteria discussed changes in routine vaccinations for pregnant women, while 29 studies discussed infants. Both groups experienced declines in vaccination coverage (up to -79%), with larger disruptions in the accessibility and delivery of vaccination services reported within LMICs compared to HICs. Changes in vaccine confidence remained unclear. The COVID-19 pandemic resulted in decreased vaccine coverage and reduced routine vaccination services for pregnant women and infants; impacts on vaccine confidence require more research.

Introduction

Maternal and infant vaccines have proven to be a powerful mechanism in decreasing infant morbidity and mortality [1,2]. Routine vaccinations, as stated by the World Health Organisation (WHO), are 'the sustainable, reliable, and timely interactions between the vaccine, those who deliver it and those who receive it to ensure every person is fully immunised against vaccine-preventable diseases' [3]. The tetanus toxoid, reduced diphtheria toxoid, and acellular pertussis (Tdap) vaccination is an example of a routine vaccination administered to expectant mothers, which is highly effective (91.4%; 95% confidence interval [CI] 19.5% to 99.1%) at preventing pertussis during an infant's first two months of life, a disease capable of causing hospitalisation and death in this vulnerable population [1,4]. Any decrease in vaccine coverage is a public health concern, as it increases the risk of outbreaks of vaccine-preventable diseases, places vulnerable individuals at further risk as they no longer benefit from herd immunity, and adds potential extra strain on healthcare systems [2,4].
The COVID-19 pandemic resulting from the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) resulted in over 452 million confirmed cases globally and approximately 6 million reported deaths up to March 2022 [5]. With the widespread impacts of the pandemic, resources have been diverted from existing services, and concerns have been raised regarding the continued coverage, service access, and delivery of routine vaccinations [2]. These concerns correspond with previous outbreaks; for example, in 2014 the Ebola virus disease epidemic in West Africa resulted in decreases in the delivery of maternal services and in vaccine administrations for diseases such as polio, where reductions of -3,594 doses (95% [CI] -216 to -5,879, p = 0.0362) were reported in Guinea [6]. Lessons learnt from this epidemic included ensuring that communication between service providers and these communities is maintained throughout outbreaks to maintain vaccination coverage [6]. Decreases in vaccination coverage were also reported in Sierra Leone during this outbreak period, for example a decrease in measles vaccine coverage from 71.3% (95% [CI] 62.1%-80.4%) to 45.7% (95% [CI] 29.2%-62.2%) [7]. Similar lessons to Guinea were learned, with the addition of the necessity of higher quality supplementary immunisation activities, active surveillance to identify areas with low coverage, and the addition of a further dose of routine measles vaccine [7].

Prior to the pandemic, vaccination coverage rates were higher in high-income countries (HICs) than in lower-middle income countries (LMICs) [8,9]. For example, diphtheria, pertussis, and tetanus third dose (DTP3) vaccine coverage in infants was 95% in HICs and 73% in LMICs in 2017 [8]. Coverage sits even lower in low-income countries not receiving GAVI aid, whose combined DTP3 coverage in infants sat at 48% in 2017 [8]. Measles-containing-vaccine first dose (MCV1) coverage in infants within the African region was reported to be 74%, compared to 95% in the European region, in 2018 [9]. Therefore, vaccination coverage between HIC and LMIC regions was already inequitable. LMICs experience greater challenges with a lack of access to reliable transportation links, household crowding, and a lack of economic means, which contribute to existing inequalities in health and opportunities between these regions [10,11]. These extra challenges mean loss of vaccination coverage in LMICs is more of a public health concern, due to increased risks of disease exposure and lack of access to healthcare [10,12].

This review takes a global approach to achieve a comprehensive overview of the impacts of the pandemic on routine vaccination and of how impacts may differ between LMICs and HICs, where vaccination inequity was an existing issue. A historical lack of research on LMICs gives further reason to explore the global evidence [13]. With increasing globalisation, disease outbreaks in any area can affect the rest of the world, bringing a responsibility for countries to work together in mitigating and controlling the impacts of pandemics and outbreaks to avoid global health issues [12,14]. We need to understand changes in routine maternal and infant vaccinations since the COVID-19 pandemic to understand what is happening globally. It is important to evaluate the available evidence to highlight areas for improvement and for targeting interventions.
This can equip policy makers, health service commissioners, and the wider public health community to make informed decisions on the upkeep of these essential services and their accessibility throughout disease outbreaks. This systematic review aimed to understand the impacts of the COVID-19 pandemic, specific to the SARS-CoV-2 species, on routine maternal and infant vaccination coverage, services, and confidence. We have defined vaccination coverage as changes in the proportion of vaccinated infants within their respective age group for their respective vaccination; vaccination services as any health service facilitating the administration of routine vaccines to infants; and vaccination confidence as changes in the attitude or behaviour of parents or healthcare workers surrounding the administration of infant vaccinations.

Methods

Guidelines established by the Cochrane Handbook for Systematic Reviews of Interventions were used [15]. This review has been registered with PROSPERO (CRD42021262449).

Selection criteria

The following inclusion criteria, based on the PICO (Population, Intervention/Exposure, Comparison, Outcomes) model, were applied [16]:

• Population: pregnant women, health professionals, and infants aged 0-5-years-old, including their parents. This age range was chosen for the inclusion of many of the early routine vaccinations administered across the vaccination schedules of most countries [17].

• Exposure: defined as the COVID-19 pandemic, as declared by the WHO on 11th March 2020 [18].

• Comparison: defined as the pre-COVID-19 pandemic period, i.e., any period prior to March 2020, when the WHO declared a global pandemic; this was also as defined by the included studies themselves [19].

• Outcomes: changes in routine maternal and infant vaccination coverage, vaccination services (for example, operating hours or changes in delivery schedules), and/or vaccine confidence. The WHO definition of routine vaccination (as stated above) was used [3].

Quantitative and mixed methods studies were included to gather all relevant quantitative results, and all countries were included for a global perspective. Studies were excluded if:

• They were not presented in the English language, to avoid translation error.

• They focused on other coronaviruses, for example SARS-CoV-1.

• The study PICO differed from those specified above.

• The sole focus was on non-routine vaccination administration, considered as vaccinations not found on routine vaccination schedules, such as post-exposure prophylaxis, including the recent COVID-19 vaccine.

Search strategy

A search strategy was created using relevant medical subject headings (MeSH) [20]. Retrieved studies were uploaded to the reference management tool EndNote. Duplicate studies were removed, and the remaining studies were screened using their titles and abstracts to decide upon their relevance to the review; this process was carried out only by AY due to resource constraints. Decisions were recorded using a PRISMA flow diagram [21]. Full texts of relevant studies were retrieved for full eligibility checks following title and abstract screening. Decisions around the inclusion of studies where eligibility was less clear were made via team discussion (all authors), by strictly comparing these studies to our pre-defined PICO and considering resource constraints in any potential widening of this PICO for the inclusion of these studies.

Data extraction

The author, year of publication, country, country income (based on the World Bank 2021 classification) [22], study purpose, data collection methods and sources, population, sample size, exposure, control, outcomes, and other data of importance were extracted onto a data extraction form using Microsoft Excel by one researcher (MSc AY) due to resource constraints (see Table A in S1 Table). This enabled the comparison of differences between studies. Summary estimates, including confidence intervals and p-values of quantitative studies, were extracted where possible for the comparison of effect estimates between pre- and post-COVID-19 periods. Only quantitative data were extracted from mixed methods studies.
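As a sketch of the kind of record this extraction form captures, the dataclass below mirrors the field list given above; the class design and the example values are hypothetical illustrations, not data from an included study.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractionRecord:
    """One row of the data extraction form (fields as listed in the text)."""
    author: str
    year: int
    country: str
    income_level: str           # World Bank 2021 classification (e.g., HIC / LMIC)
    study_purpose: str
    data_sources: str
    population: str
    sample_size: Optional[int]
    exposure: str               # the COVID-19 pandemic period
    comparator: str             # the pre-pandemic period
    outcomes: str               # coverage / services / confidence
    nos_score: Optional[int]    # Newcastle-Ottawa Scale stars

# Hypothetical example record (not a real included study):
rec = ExtractionRecord("Example et al.", 2021, "Exampleland", "HIC",
                       "coverage change", "national registry", "infants 0-5 years",
                       1200, "COVID-19 pandemic", "2019 baseline",
                       "vaccine coverage", 8)
print(rec.nos_score >= 7)   # True -> low ROB under the 7-star cut-off used in the Results
```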
Quality assessment

A Risk of Bias (ROB) assessment, using the Newcastle-Ottawa Scale (NOS) [23], was applied to all included studies, as recommended by the Cochrane handbook [15]. The NOS adapted for cohort studies was applied to one study which specified itself as a cohort study; this was scored out of a maximum of 9 stars [23]. The NOS adapted for cross-sectional studies was applied to all other studies, as they either defined themselves as cross-sectional or were not explicitly clear on their study type but could be identified as cross-sectional studies [24]. These studies were scored out of a maximum of 10 stars [24]. On the NOS scale, a score of ten stars represents low ROB, while zero stars represents very high ROB [23]. The NOS simultaneously acted as a quality appraisal tool [23].

Data synthesis

A narrative synthesis was conducted, focusing on vaccine coverage, vaccination services, and vaccine confidence as outcomes. This was appropriate due to the variation between studies in their chosen methods of reporting (cumulative counts vs rates), while also allowing cohesive discussion of interactions between the different outcome measures stated [25,26]. Tabulation of data was used throughout to assist with the presentation of results and to enable comparison between HICs and LMICs. Due to heterogeneity between studies and results, a meta-analysis could not be conducted [15]. The influence of ROB on the results of the review was explored.

Results

4112 studies were retrieved: 4021 from database searches and 91 from other sources, including governments and organisations (Fig 1). 2056 duplicates were removed, leaving 2056 studies for title and abstract screening, where a further 1937 studies were removed. 119 studies underwent full text screening, excluding a further 89 studies after assessment. Reasons for exclusion after full text assessment include differing populations (for example pre- and late teens), outcomes measuring non-routine vaccinations, differing exposures such as the implementation of systems during the pandemic, and comparisons between post-COVID-19 periods rather than with pre-COVID-19 pandemic periods. Four studies were excluded due to no access for public use, 20 studies contained insufficient detail due to lacking quantitative results, 11 studies were purely qualitative, and two studies had language restrictions. 30 studies were ultimately included in the review. Vaccinations in the included studies span routine infant immunisations such as DTP, OPV, IPV, BCG, measles-containing vaccines (MMR/MR), and influenza, as well as maternal tetanus toxoid and pertussis vaccinations.

Risk of Bias

18 studies achieved a NOS score of 7 stars or above, and could therefore be considered good studies with a low ROB [27-37, 47-51, 54, 55] (see Tables B and C in S1 Table).
12 studies obtained a score of less than 7 stars, indicating increased ROB due to: no statistical tests, poor comparability through disregarding relevant confounders (including the age of infants at the time of vaccination, or the service type as public or private), or sampling concerns (small sample size, or convenience sampling) reducing the representativeness of the study population [38-46, 52, 53, 56]. Studies are presented in Table 1 and Tables A-C in S1 Table.

Main results

Overwhelmingly, there has been a decline in routine vaccination coverage and services internationally, with LMICs suffering more than HICs (see Table 1). Findings are described in more detail below. Studies with a higher ROB followed a similar trend to those with a low ROB, meaning there were no outstanding differences between their outcomes.

Harris et al.'s (low ROB) large regional study included both HICs and LMICs and reported an overall decline in DTP, OPV, IPV, and measles vaccine coverage rates within all ages; the greatest was in OPV administered during infancy, with a median decrease of -79% (IQR -42% to -79%), in participants from 19 different countries across South-East Asia and the Western Pacific [54]. The smallest decrease was reported within school-entry aged children receiving measles vaccination, with a median decrease of -9% (IQR -3% to -31%), from the same study [54]. The two studies exploring maternal vaccination coverage both reported decreases. Chandir et al. reported a -28.8% average decrease in maternal tetanus toxoid vaccinations in (LMIC) Pakistan, while Public Health England reported a -4.2% decrease in monthly maternal pertussis vaccination coverage in (HIC) England [34,50].

Studies with a high ROB (ROB score < 7) show more conflicting findings in coverage, though still mainly indicating a decline. Results showed decreases in vaccine administrations from the pre-pandemic period; for example, a -15% to -7.5% decrease in BCG administration in Japan [27]. Some changes to vaccine schedules were seen, such as the mean age of BCG vaccination administration decreasing from 6.3 weeks prior to the lockdown to 4.3 weeks old (95% CI 1.93 to 2.07, p < 0.01) in Sindh, Pakistan [50]. Some service providers only continued vaccinations for certain ages; for example, Vogt et al. found 81.4% of services in the US offered vaccinations to 1-2-year-olds, whereas only 44% continued for 3-6-year-olds [44]. Likewise, Piché-Renaud et al. identified that 94% of services in Ontario, Canada continued vaccinations for 0-18-month-olds, while 77% postponed vaccinations for 4-6-year-olds [33]. Overall, declines in vaccine administrations were reported across both LMICs and HICs. These were more common within LMICs, as in some cases vaccination administrations increased in HICs; for example, a 2% to 7% increase in measles and rubella 2nd dose (MR2) vaccine administrations for infants aged 5-6 years across within-country regions in Japan (Kawasaki, Niigata, Nagasaki, and Fuchu) [27].
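The summary statistics used in this section (median change with interquartile range across countries) take only a few lines to compute. A minimal sketch; the per-country values below are hypothetical placeholders, not the Harris et al. country data.

```python
import numpy as np

# Hypothetical per-country percent changes in doses administered
# (placeholders only -- not the Harris et al. data).
pct_change = np.array([-79, -62, -42, -85, -50, -30, -71])

median = np.median(pct_change)
q1, q3 = np.percentile(pct_change, [25, 75])   # q1 is the more negative bound here
# Report IQR from the less negative to the more negative bound,
# matching the convention used in the text (e.g., "IQR -42% to -79%").
print(f"median change = {median:.0f}% (IQR {q3:.0f}% to {q1:.0f}%)")
```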
Results show reductions in operating hours and increased duration of consultations [33,36,39,43,44,46,50,51,54,56]. For example, Vogt et al. found that across the US, 61.7% of practices offered reduced office hours for in-person visitations; of these, 63.7% were in urban areas and 55.4% in rural areas [44]. Across other studies, Russo et al. found that up to 42.5% of vaccination appointments were postponed or cancelled by vaccination services, from their 1,474 survey responders in Italy; 13.5% stated vaccination services closed, while 44% of parents were reluctant to travel due to travel restrictions [46]. A lack of guidance was identified in England, as Bell et al.'s online survey found that 25.6% of parents were unaware that childhood vaccinations continued throughout the pandemic [43]. From this same study, 23.9% to 53.3% of parents experienced difficulties accessing and booking their child's vaccination appointment [43]. Logistical disruptions included: staff shortages, for example as identified by Saso et al. in their globally distributed questionnaire; equipment shortages, including personal protective equipment (PPE); and issues with the vaccine supply chain [33,36,39,54,56]. Sindh, Pakistan experienced a -7.4% (95% CI -5.29% to -9.51%, p < 0.0001) decrease in daily average vaccinator attendance, a common occurrence in LMICs [50].

Some differences in vaccine confidence were reported among parents. Sokol and Grummon found that 60% of parents intended to change their paediatric influenza vaccination behaviour due to the pandemic [42]. Of parents whose children did not receive the 2019-2020 influenza vaccine, 34% (95% CI 30%-27%) responded that the pandemic made them less likely to have their child vaccinated with the 2020-2021 influenza vaccine compared with their plans before the pandemic, while 21% (95% CI 18%-24%) responded they would be more likely [42]. Among parents whose children received the 2019-2020 influenza vaccine, 24% (95% CI 22%-27%, p < 0.001) reported being less likely, while 38% (95% CI 35%-41%) reported being more likely, to have their children vaccinated with the 2020-2021 influenza vaccine [42]. Among practitioners in Turkey, when asked 'Which was the attitude of your patients regarding routine vaccination during the pandemic?', 38.3% of family practitioners, 74.4% of paediatricians, and 65.8% of paediatric infectious disease specialists stated that patients did not want to come in for vaccination due to the pandemic [52]. However, 57%, 10.5%, and 11.4% respectively also stated no problems with parental attitudes during the pandemic [52]. In Sweden, physicians reported parental concerns over their infant's vaccination by comparing the post-pandemic attitude of parents to the pre-pandemic period, on a scale of 1 (not at all concerned) to 10 (very much concerned): 5% reported a score of 5, 10% a score of 4, 15% a score of 3, 40% a score of 2, and 30% a score of 1, signifying a fair proportion of parents with increasing concerns surrounding the vaccination of their infant following the pandemic [39].

Discussion

At the time of development, this review was amongst the first of which we were aware to systematically explore the impacts of the COVID-19 pandemic on routine maternal and infant vaccination coverage, services, and attitudes globally, serving as a rapid overview. Our results from early data show that since the pandemic hit, routine maternal and infant vaccination coverage has decreased for all vaccinations in all settings investigated. The pandemic negatively impacted vaccination services, indicating problems with access and delivery. Both HICs and LMICs experienced decreases in vaccination coverage and difficulties with vaccine services.
In some LMIC and HIC settings these changes were similar; however, due to pre-existing low vaccine coverage in LMICs, lower post-pandemic coverage rates were reported within these settings in comparison to HICs. This is an important concern, as the threshold for vaccination coverage must remain high for herd immunity to take place; additionally, it continues to highlight the poor access to healthcare and the existing health disparities in vaccination coverage between these settings, increasing global inequalities. Maintaining vaccination coverage in LMICs is thus even more important, though these are the countries suffering greater declines.

Our findings suggest that private or self-funded services experienced larger declines in vaccine delivery compared to those receiving publicly funded healthcare; however, more research is advised in this area, as in some countries, such as the UK, it was found that dependency on private or self-funded services increased due to difficulty in accessing public healthcare services and longer waiting times caused by the pandemic [57]. Outreach services were disproportionately affected compared to fixed services, typically due to unavailable staff; a common issue, particularly in LMICs. These results from our review are important, as they may indicate that those self-funding their child's vaccinations in different countries may be less inclined to seek out routine vaccinations for their infants during the pandemic. This may be due to wider determinants such as financial insecurities resulting from the pandemic [58]. Additionally, the location of vaccination services played an unclear part in vaccine service accessibility; rural areas sometimes reported higher vaccination administrations in comparison to urban areas, with the opposite seen in other studies. The impact of the rural or urban location of vaccination services is therefore not clear, indicating more research needs to be done in this area; for example, in the UK this could be achieved by reviewing changes in vaccination coverage for General Practices across the country between the pre-pandemic and post-pandemic periods, although this may not be applicable for countries struggling to routinely collect such data. Previous research has shown that childhood vaccination coverage in LMICs has typically been lower in rural areas in comparison to urban areas; for example, in the Western Pacific Region this can differ from around an average of 60% in rural areas to 70% in urban areas [59]. Interventions and policies in LMICs should therefore target those reliant on outreach services, while for both HICs and LMICs it should be ensured that those on private or self-funded healthcare can access services during times of uncertainty, to maintain coverage.

Reduced service operating hours and increased duration of consultations were among the changes seen in vaccination services, resulting in fewer infants and pregnant women accessing routine vaccinations. Logistical issues, including a lack of PPE and disruptions to the vaccine supply chain, also contributed to lower vaccination uptake. Although countries continued with their vaccination schedules, not all parents were aware of this, indicating the importance of clear public health messages and the efficient allocation of resources.
The few studies reporting increases in vaccination coverage detected these in younger infants, where minimal increases (a 0.7% increase for 1st dose MMR) [39] were reported, in contrast to the larger magnitude of reported decreases seen in older infants receiving later doses (e.g., a 79% decrease for OPV) [54]. This could be explained by the increased healthcare contacts in early life through mandatory routine development visitations, which were utilised by health services as an opportunity for the administration of early routine childhood vaccinations, for instance as seen in the UK [60]. This finding highlights the importance of also working to maintain vaccine coverage in older infants during crises, although results showed some countries were also able to maintain vaccination coverage through the pandemic [36].

Results for changes in vaccine confidence between the pre- and post-pandemic periods remain unclear due to a lack of available research; results simultaneously described both increases and decreases in vaccine confidence resulting from the pandemic. Even with inconclusive results, the majority of studies exploring changes in vaccine confidence were conducted within HICs, so there is more of a research gap for LMICs.

Existing inequities between HIC and LIC regions have been exacerbated further by the COVID-19 pandemic [61]. We have gathered data on the impacts of the COVID-19 pandemic on routine maternal and infant vaccinations globally; however, further research is still necessary. This review found that only six LMIC studies, compared to thirteen HIC studies, explored changes in vaccine services, highlighting the need for more evidence from these settings. This is particularly the case for LMICs, where more evidence describing changes in vaccine confidence and accessibility to vaccination services is needed for a comprehensive understanding of the impacts. The data we have collated mirror the magnitude of the impact of the pandemic on these maternal and infant services; however, these results are representative of many potentially unreported consequences of the pandemic. Our results align with Evans and Jombart's recent modelling of expected versus actual global immunisation for DTP1, DTP3, and MCV1 in 2020, which indicated a global decline of 2.9% attributable to the pandemic, with disproportionate impacts between LMICs (-3.8%, 95% [CI] 2.6%-5.1%) and HICs (-0.9%, 95% [CI] -2.2%-0.3%) [61].

International organisations such as the WHO have attempted to address the impacts of the pandemic on vaccination coverage by raising the importance of surveillance and by tailoring responses and plans to address vaccination gaps [9,62-64]. The World Health Assembly has endorsed the 'Immunization Agenda 2030' for strategically addressing vaccine accessibility globally for 2021-2030 [62]. This makes recommendations on how to overcome challenges posed by infectious disease outbreaks by setting country-specific targets for immunisations, and by ensuring that efforts are people-focussed, driven by data, and partnership-based for sustainable coverage [62]. Examples include ensuring health workforce availability and strengthening leadership and communication for immunisation services; two issues raised in this systematic review [62]. The measles outbreaks strategic response plan 2021-2023 acts as an exemplar, highlighting issues raised in the accessibility of vaccinations during the COVID-19 pandemic similar to those mentioned above and found throughout the results of this review [65].
The report provides a set of measurable objectives countries can work towards to improve the resilience of their vaccination services and responses to vaccine-preventable diseases, through improving access to funding, training tools, routine risk assessments, catch-up schedules for missed doses, and periods of intensified routine immunisation when coverage levels are lower than target [65].

This systematic review has found that vaccination services in many countries were not prepared to withstand the impacts of a pandemic, as declines in vaccination coverage and negative impacts on vaccination services were reported across all countries included in this review [12]. The pandemic resulted in negative impacts on vaccination coverage and vaccination services, and in inequalities between LMICs and HICs, and global efforts need to address this. More research exploring the impacts of the pandemic on vaccine confidence is needed for the success of these efforts, to ensure efforts are 'people-focussed' as mentioned in the Immunization Agenda 2030, and to identify priorities in maintaining vaccine coverage and services throughout similar crises [62]. It may be beneficial for countries to focus on country-level analysis, to identify those within the population experiencing the greatest inequalities in accessing these services, as well as to identify any disproportionate impacts on service providers within countries, as trends may differ between countries. Understanding how different regions have managed, and the consequences for routine vaccinations, is important to inform health protection teams and policy makers, to better evaluate protocols, and to adjust responses accordingly, to minimise the health impacts on routine vaccinations caused by pandemics and similar emergencies.

Strengths and limitations

A strength of this review is its comprehensive investigation into an important health area impacted by the pandemic, with potentially significant public health consequences. By conducting a quality assessment and comparing the outcomes of high and low ROB studies, we were able to strengthen our conclusions. Limitations include that, due to resource constraints, one researcher conducted screenings and data extraction. While the methods would be strengthened by independent screening and data extraction by another researcher, cases of uncertainty were discussed in depth with two experienced researchers (co-authors EA and CC) to minimise this limitation. Due to time constraints, qualitative studies were not explored, which we recognise would be beneficial to include in future research to provide richer detail on these findings. We identified a lack of research on maternal vaccinations, so we could not draw strong conclusions about the pandemic's effects, though the existing research indicates cause for concern. We recommend more research be done in this area, with the inclusion of qualitative studies for a richer explanation of results. Four studies identified during the literature search were not published for public use, while two studies were not presented in the English language, resulting in potentially missing evidence. Heterogeneity between studies prohibited the modelling of a comprehensive meta-analysis. In the future, a review investigating the impacts of the pandemic between specific time periods, for example pre-lockdown vs lockdown periods, may assist in understanding the extent of the impacts of the pandemic.
Conclusion

The COVID-19 pandemic has negatively impacted routine maternal and infant vaccination coverage and vaccination services globally. In LMICs, where vaccine coverage was already lower than in HICs, the impacts of the pandemic have been even more pronounced, increasing the likelihood of vaccine-preventable disease outbreaks and increasing existing inequity. All countries will need to collaborate strategically for the better prevention and control of infectious diseases to avoid further epidemics and pandemics, but HICs will also have an ethical duty to assist LMICs in decreasing these widening global health inequalities. Implementing catch-up sessions in all settings to maintain vaccine coverage is imperative to protecting vulnerable populations and averting further health crises. Evidence found in this review suggests that emergency response plans for situations such as that seen with the COVID-19 pandemic will need reviewing in all settings, to minimise negative changes in infant vaccination coverage and administration, and to protect against associated negative health outcomes.

Supporting information

S1 Checklist. The table used as a guide for conducting the research article. (DOCX)

S1 Text. Search strategy. The search strategy conducted on the databases Medline, Embase, and PsychINFO (Medical subject headings (Mesh), text word (tw)). (DOCX)

S1 Table. Table A provides a template of the data extraction form utilised, using the software Microsoft Excel. As shown, the following details were extracted: record number (relating to EndNote referencing), author, year of publication, country, country income level, methodology, study purpose, data collection methods and source, population, sample size, exposures, controls, outcomes (changes in vaccine coverage, services, and confidence), additional comments for data of significance, and the Newcastle-Ottawa Scale (NOS) Risk of Bias (ROB) score allocated to the study.

Table B in S1 Table. NOS adapted for cohort studies result [23]. Table B shows the ROB assessment result for Zhong et al., using the NOS adapted for cohort studies. The maximum number of stars which can be retrieved is 9, indicating low ROB; 0 would be the minimum, indicating high ROB. * means star awarded; - means information unavailable.

Table C in S1 Table. NOS adapted for cross-sectional studies results [23,24]. Table C shows the ROB assessment for the 29 studies assessed using the NOS adapted for cross-sectional studies, arranged from the studies retrieving the greatest NOS score (10) to the lowest (0). * means star awarded; - means information unavailable. (DOCX)
2022-10-23T15:23:38.371Z
2022-10-21T00:00:00.000
{ "year": 2022, "sha1": "cd71247dd8c02c671d3b76298012ede401027fbe", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/globalpublichealth/article/file?id=10.1371/journal.pgph.0000628&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "181473c9340a2e9127e18ec5217fdf74b609e2dd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235838127
pes2o/s2orc
v3-fos-license
Reflexivity of Salat and Hahslm as Pre Design by God Into Religion and Science Due to Economics with Covid-19

The study was intended to analyze the reflections of worship as the design of the universe by God, its creator, onto humanity and the parallel universe, linked to the economic dynamics of the Covid pandemic mathematically and through Hahslm. In interconnection integration, an ontology is needed as the basis for theoretical development. The study identifies the problem of Islamic meaning in the dichotomy of religious science and science, which requires a central theory to bridge the presence of religious science with science. It even requires the presence of an ontology of religion, as the value of worship, to mediate the dualism of science, which is in fact a single science because it comes from the one God. The meaning of worship as a preliminary design of creation can be reflected in patterns of shadows, mirrors, and humans. The study used a qualitative descriptive approach from a review of the literature of the Koran, hadiths, journals, books, and Internet media. The methodologies used are reflexivity and kaffah thinking with Hahslm theory. The study shows the creator's reflexivity from salat to the first design of creation that produced the universe. The simultaneous integration of interconnections would serve to bring the dualism of religion and knowledge into a comprehensive science through multi-disciplines, interdisciplinarity, and transdisciplinarity. Symbols of the creator and salat are stored mathematically in the 19 of the Covid-19 pandemic, which also occurs in Hahslm or 472319. This pandemic era brought the economies of Indonesia and the world into turmoil.

INTRODUCTION

There are a few discourses in the development of Islam and knowledge, among other things: Islamization, integration, Islam and knowledge, religion and science, multi-disciplines, interdisciplinarity, and reflexivity. Islamization was the first movement in this discipline. The theory of Islamization was pioneered by Ismail Faruki. This figure proposed the need to develop Islamic science [1], because, in a previous period, Islam already had a superior civilization. Islamic civilization was at its height from prophetic times to the industrial revolution. The return of science to Islam is done first by naturalization. After naturalization, which means science is free of value and no longer secular, Islamization follows. The approach of Islamization is made by referring to the Koran and the relevant hadiths, which are the required references for the Islamization approach.

The development of Islamization has been perfected with an approach of integration. Integration figures like Mulyadhi Kartanegara proclaim that all scientific sources come from God. In the Quran, God mentions that on God's side is Islam (QS. Ali Imran [3]: 19). Because science comes from God, it is automatically Islamic [2]. If science is already Islamic, there is no need for Islamization. The dichotomy of Islam and science emerged after the development of Western science in the European Renaissance, during the development of the industrial revolution. Science is equated with the development of the kauniyah verses, that is, of cosmic phenomena [3]. The scientific advance in economics is a quantitative approach using software as a tool of analysis in its methodology.
The whole of science that scientists and academics are creating is searching for a new base, as a secure place from which to make the next leaps of science. Questions then arise to address the above problems in the form of puzzles [4]: the need for a new epistemology that could connect the human desire to reach the top of science with the availability of data and knowledge.

There would then be a convergence issue in the question: is Islam capable of explaining the nature of the turbulence in modern science? This will be an opportunity for Islamization to react. The definition begins with the general theory that Islam is a method, so that Islam will then have a specific purpose in a sequence of equations or formulas. Islam, as the key pillar of the religion [5], is stated in the Al-Quran: 'Dyn (system) in the sight of Allah is Islam'.

There is also an approach that combines various disciplines, called multidisciplinarity, and an approach that merges different disciplines, called interdisciplinarity [6]. Its development has also produced a reflexive approach that approaches the basic theory of ontology. Reflexivity is a reflection of the blueprint that radiates in existing science and natural phenomena.

The problems of the study of Islam and this knowledge are:
1. An ontology is needed for the merging of Islam and knowledge.
2. Analyzing Islam and knowledge through the reflections of salat.

The term integration itself is generally understood as blending into a complete or unified unity. Based on this understanding, the integration of science that brings together the religious sciences and the general/secular sciences requires an amalgamation of the two scientific groups so that they become a round and whole unity. Integration of knowledge is not enough to provide a justification from verses of the Koran (and/or the Prophet's hadith) in each scientific field and scientific discovery, or to give Arabic or Islamic labels to scientific terms, and the like; there needs to be a paradigm shift in general scholarship, especially that originating from the West, to conform to the basis and repertoire of Islamic scholarship related to metaphysical, religious, and sacred texts. If one looks at the golden history of Islamic civilization, it is evident that progress was due to the integrated and holistic understanding of the ulama towards the qauliyah verses and the kauniyah verses, so that there was no dichotomy between the sciences.

There is a philosophy of integration such as ontology. In Islam, the ideals of science are derived from the Quran and Hadits in the form of universal principles [7]. The broad creation of all science is designed by salat. The introduction of all this ibadah idea is the beginning of the entire life system. Before God created the universe, the creator set up the design of worship [8]. All creations by God can be reflected as ibadah.

The new idea may be a scientific method because it has fundamental knowledge based on empirical science. Epistemology is one of the elements of philosophy studies. Epistemology is a branch of philosophy that extensively examines the entire method of receiving knowledge [9]. Epistemology examines the philosophy of knowledge, which refers to the roots of science, how knowledge is acquired, and the truth of thought.

Islam can be seen as a philosophy with a systematic approach, a detailed view, and a Kaffah viewpoint. Then Islam, as a method, is the origin of the idea of incorporation into science and philosophy [10].
The word Islam has a root of four letters: alif ('a'), sin ('s'), lam ('l'), and mim ('m') in Arabic. There is a verse in the Holy Book of the Muslims [11], the Quran, as the primary source of the sense of ontology for Islam, that is, QS. Ali Imran [3]: verse 19: 'The system beside Allah is Islam'.

It is not enough to research epistemology without explaining its axiology. As a result of Islamic epistemology, a description of axiology must be included [12]. Research into the science of Islam for axiology describes the important know-how and the gain of experience. A human being does not do something without weighing advantages and disadvantages.

The role of epistemology in Islam has created axiology in the form of equilibrium for real human life. It starts with ontology as Islam for the fundamentals of life, then epistemology as Kaffah for science, and then axiology in the form of the application of knowledge as good and bad for balance. Kaffah science arises in epistemology based on the argument that the entire meaning of basic existence is Islam [13], which is considered to be a framework. This epistemology can be found in the sentence of the Quran, Surah Al-Baqarah [2]: verse 208: 'To religious people: enter, all of you, into Islam by Kâffah'.

Theoretical Basis

The birth of the concept of integration was motivated by the dichotomy between the religious sciences and the general sciences. The two of them were separated and seemed to walk in their respective territories. It was also triggered by the separation between the Islamic education system and the modern education system, which has a latent impact on Muslims. The developing assumption is that 'science does not care about religion, so (conversely) religion ignores science'. This also implies the development of the slogan 'science for science', which often creates ethical issues in its implementation. Science and religion are treated as if they were two different entities, separate from each other, each having its own area: formal objects and scientific materials, research methods, criteria for truth, the role played by scientists, even down to the level of the organizing institution.

Science integration is the amalgamation of science structures. The dichotomic scientific structure should be changed. The structure of science should not separate the branches of religion from the branches of observation, experimentation, and logical reasoning. The integrated structure of the scientific building lies between studies that come from the qauliyah verses, the Al-Quran and hadiths, and the kauniyah verses, the results of observations, experiments, and logical reasoning. The division that is very popular for understanding science is the division into the areas of ontology, epistemology, and axiology.
Before the universe was created by God, there was a very fundamental statement about the creation of jinn and man, in QS Adz-Dzariyat [51]: 56: "…and I did not create jinn and man except to worship."

The creation verse (51:56) forms the text that shapes the theory of reflectivity. This verse presents three basic elements: God as the creator, the jinn and man as the creation, and salat (worship) as the purpose of creation. These three elements are the basis for further theoretical thinking: in Islam there is a link between at least two separate elements. A derivative of the worship verse (51:56) is the mathematical theory of triangulation [14], the theory of a recurring pattern of occurrences. It has the same pattern, so it forms sequences that can be grouped, from complex to simple: a sequence of numbers with a pattern of three different numbers, repeated sequentially over and over again.

The Quranic Formula in Islam

The source of the equation H = ahq is Quran Surah Al-Hijr [15]: 87, read as the number 1587: "Walaqad atainaka sab'an minal matsani wal quranal 'azim," which means "And indeed We have given you the seven oft-repeated (verses) and the great Quran" [15]. The digit 4 serves as a constant in the basic formula of Theory H (Hahslm), and 112 is the count of the 'Basmalah' that open the surahs of the Qur'an. The equation is: 1587 × 4 = 112 + 6236 (= 6348), and 158741126236 is taken as an absolute number because it creates a circular multiplication reflecting the sense of 1587 itself, which speaks of the seven that are repeated (112 times) and the great Quran (6,236 verses).

Theory of Hahslm

The definition of Theory H according to Aziz [15] is:
1. Narrowly, Theory H is a theory of three dominant archetypes with a specific context in five dimensions of invariant arrangement.
2. Broadly, for the most common use, Theory H can be interpreted as a theory of the basic concept of creation patterns with certain relationships. H comes from the formula Hahslm and the Al-Quran letter Hijr, and also stands for Huda, or life.

These make up three figures: 1 as a symbol of God, 9 as a symbol of prayer, and 3 as a symbol of man. The three numbers in the mathematical theory of triangulation are arranged as 3.1.9 or 9.1.3, which places the number 1 at the center, between 3 and 9. From number theory, this transforms into a reflective method. The theory is a symbolic reflection approach; it can be expressed in figures, text shapes, and picture shapes, as well as other forms [16]. The elements that arise are salat, God, and man. The salat element in this reflexive method is a design, blueprint, or archetype. The element of God becomes the mirror or projector, as the creator. And the human element becomes the symbol of a person standing in front of a mirror, or a symbol of a projected picture, or a symbol of a suit. In a system according to Islam there are at least three elements. Kaffah thinking is a system of three or more interrelated elements. The elements of thinking are embodied in the entities (subject and object) and the entering (worship). The point of this thinking is that the three elements are "full variables," not just a genre or direction [17].

A Science of Everything can be based on the above three paradigms of science theory. First, ontology makes Islam the fundamental concept. Second, Kaffah epistemology reflects the frame of science. Third, axiology conducts a combination of good and evil.
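Since the argument here rests on an arithmetic identity, it can be checked mechanically. Below is a minimal Python sketch of that check; the variable names are ours, used only for illustration.

```python
# Verify the arithmetic identity cited for the Hahslm formula:
# 1587 (Surah Al-Hijr 15:87) x 4 = 112 (the author's Basmalah count)
#                                  + 6236 (commonly cited Quran verse count)
surah_verse_code = 1587
constant = 4
basmalah_count = 112
verse_count = 6236

lhs = surah_verse_code * constant        # 6348
rhs = basmalah_count + verse_count       # 6348
assert lhs == rhs, "identity does not hold"
print(f"{surah_verse_code} x {constant} = {lhs} = {basmalah_count} + {verse_count}")
```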
A new model, the Scientific Method of Islam, can be developed from these three frameworks. This approach can be referred to the Quran through the words 'Silmi Kâffah': the word 'silmi' is derived from the letters sin-lam-mim, and sin-lam-mim is the root of Islam.

The Sinlammim scientific method is one way to break the stagnation of modern research and solve some of its basic problems. This approach attempts to combine science and religion; Sinlammim is needed as a counterbalance to resolve the fundamental problems of science [18]. This new paradigm is in line with the growth of current human understanding, which already needs to look for a middle course through the problems of established science by providing a new form of spirituality-related theory. From age to age, human beings have sought a better world and the ability to address the fundamental questions of life. One example of how basic the Sinlammim technique is can be seen on the human side. Islamic science can be formed in an academic setting by combining it with the Al-Quran and As-Sunnah and by introducing the idea of four-dimensional elements, where the first element is God, the second element is nature, the third element is religious service (worship), and the fourth element is a line.

Integration will be more difficult to explain in the nano era of the future. The explanation is that modern science has made substantial progress in certain respects: the attention of scholars will become more focused and more partial. That will place one group of scientists on the right and, on the other hand, another group of scientists focused on left-wing science; it will become ever harder to find a middle scholar. To satisfy the partial commitments of religion and science, we propose a new approach: reflexivity. This is the prospective definition of the reflexivity of Islam and science. Islamization came from scholars with a fiqh approach to then-current growth, just as integration has been contributed by scholars with a modern approach. In the future, some scholars will establish a principle of reflexivity with an approach based on a universal pattern. The distinction between the three theories is this: Islamization explains science that has a relationship with the Quran and the Hadith; Integration explains science by contrasting the discoveries of classical Islamic scholars with those of modern scholars; and Reflexivity explains science by showing the repeated pattern between the Islamic treasury and invented science. Islamization could be represented by Malaysia, for example IIUM; integration by Indonesia, for example the UIN, which has Islam and Science as a general basic subject; and reflexivity by other institutions.

Theory of Integration

At the beginning of the religious approach, Islamization was initiated by academics from higher education, for example in Malaysia. It was then complemented in Indonesia by the new concept of integration.
While Islam, or religion, centers on the holy book, the Koran, and the hadith, with qauliyah approaches based on the methods of bayani, burhani, and irfani, the consistency of Western scientists' approach to physical objects and the consistency of religion's approach to the holy scriptures make the differentiation between science and religion increasingly pronounced and increasingly distinct. In the general development of science, this dichotomy adds to the diversity and expansion of science in all directions. Add to this the promise of a new science, a combination of religion and science: science once existed as one, was then divided into religion and science, and multiple disciplines broke off from these, so that there are now four sciences from one source. The diagram shows that the source of all knowledge is God. In the early days of human civilization, God passed down His science to the prophets; this was called the science of God [19]. This knowledge of God means the divinely given knowledge of the prophets, including the study of God. This science of God combined with religious science. Then, through Western scientists, this science from God was narrowed into (secular) science. And now, with the COVID-19 pandemic, researchers are beginning to catalyze the integration of religious science with science. This transition toward merging is filled by an approach that combines the two sets of sciences into one system under the theme of Islam and knowledge, or religion and science [20]. This approach combines two subsystems into one large system, while a differentiation is still present within the one container.

Type of Research

This research is descriptive, analyzing, describing, defining, and explaining reflexivity with regard to the integration between Islam and science [21]. It was conducted as a literature study, seeking theoretical references relevant to triangular mathematics, kaffah thinking, and reflexivity.

Research Scope

The scope of the research covers science, Islam, reflexivity, and mathematics [22], including the study of Islamization and integration.

Data Collection Methods

This research uses secondary data obtained through intermediary media. These data can be obtained through books, notes, existing evidence, or published articles and journals used for reference [23].

Hahslm Methodology

This study uses the Hahslm methodology, qualitatively incorporating the value of worship into data processing. In the Hahslm methodology, the meaning is that kauniyah is the same as qauliyah.
The life system that exists in humans, in the environment, and in the universe originates from the concept of Islam; in other words, the concept of early creation is Islam. The word Islam has a root of three letters: 's' or sin, 'l' or lam, and 'm' or mim. There is a verse that supports the ontological meaning of Islam, QS Ali Imran [3]: verse 19: "Verily, Din beside Allah is Islam."

The reflexivity diagram consists of three elements: the source, the creator, and the creation. These three basic elements are converted into the reflections of shadow, mirror, and person before the mirror: the source is transformed into the shadow, the creator into the mirror, and the creation into the person before the mirror. In the process of creation, these three elements form a common thread: source = salat, creator = God, creation = man; and, in the conversion, salat = shadow, God = mirror, man = person before the mirror. A system consisting of 9.1.3 in the kaffah method of thinking suggests a system that starts from 1, goes to 3, then to 9. When read from 9 to 1, the 9 comes first in the system: it originally meant that God created humans for worship [15]. Read flexibly: worship, through God, forms the human. The kaffah thinking diagram says that the source of the human reflection is worship, whose projector is God Himself. So this existing human body is a reflexivity of worship; the existence of the human body structure is a transformation of the worship symbol. In the kaffah thinking diagram, an easy picture is s, m, p: s stands for shadow, m for mirror, and p for people. The reflection in the mirror is projected onto the people, or it can equally be read that people have a shadow behind the mirror [24]. The two entities are people and mirror, while shadow is the element also called the entering, or feedback. In kaffah thinking one must choose three variables instead of two, and 9, worship, is the third variable. The reflexivity diagram shows that this third variable exists as an interpretation of worship. The meaning of the third variable is consistent between the existence of the creation verse and the projector function [17]. The existence of a projector is a creator's function, like one that builds a building from the reflexivity of an architect's design of a house. So humans, who were created by God, came from prayer as the source. Because humans are the reflexivity of worship, the universe and the faculties/study programs are also reflexivities of worship.

RESULTS AND DISCUSSION

From this analysis, faculties/study programs need to interpret the value of worship in the higher education process, and also integration. Empirically, the number of human limbs has a similarity in pattern to the number of prayer movements, in accordance with the reflexivity interpretation of the creation verse (51:56). In Sufism, the discourse on the source of worship accords with the discussion of Al-Ayan Al-Tsadisah [25].
Barbour, in his study When Science Meets Religion: Enemies, Strangers, or Partners?, maps the relationship between science and religion into four typologies: conflict, independence, dialogue, and integration. According to Barbour, the relationship between science and religion is called "conflict" when science and religion are conflicting and, in certain cases, even hostile. The relationship is called "independence" when science and religion work independently of each other, each with its own means, ways, and goals, without disturbing or caring about the other. The relationship is called "dialogue" when science and religion are mutually open and respectful. Meanwhile, the relationship is called "integration" when it rests on the belief that the area of study, the design of the approach, and the goals of both are the same and one.

Another thinker, John F. Haught, mapped the relationship between science and religion into four forms of relationship: conflict, contrast, contact, and confirmation. Haught's mapping is similar to Ian G. Barbour's, but differs in character: while Barbour's map of the relationship is typological, Haught's map is more a matter of approach. According to Haught, the Conflict approach is the view that science and religion fundamentally cannot be reconciled or combined. The Contrast approach is the view that there is no real contradiction between science and religion because they respond to very different problems. The Contact approach is the view that dialogue and interaction between science and religion are needed, especially efforts to find ways in which science influences religious and theological understanding. Meanwhile, the Confirmation approach is the view that religion and science strengthen each other: religion can play a role in the development of more meaningful science and, conversely, scientific findings can enrich and renew theological understanding.
Haught views the four approaches to the relationship between science and religion as a kind of "journey." Conflict between science and religion occurs because the boundaries between them blur: they are considered to be competing to answer the same questions, so people feel they must choose one of them. Therefore, the first step is to draw a clear dividing line that shows the contrast between the two. The next step, once the difference between the two fields is clear, is to make contact. This step is driven by a strong psychological impulse that the different fields of science somehow need to be coherent. In this position, the theological implications of scientific theory are drawn into the theological realm, not to "prove" religious doctrine, but simply to interpret scientific findings in terms of religious meaning so as to understand theology better. The climax is confirmation: trying to root science and its metaphysical assumptions in the basic religious view of reality, which in the three monotheistic religions is rooted in the Being called "God." That is why one metaphysical assumption of science named by Haught is that the temporal realm is a rational "order of existence"; according to Haught, without it science as an intellectual pursuit could not even take its first steps. Interestingly, from the two studies by Barbour and Haught, it can be seen that the development of the relationship between science and religion leads to an integrative pattern, in Barbour's terms, or a confirmative one, in Haught's terms. Such developments seem to be in line with the spirit of postmodernism. In line with the epistemological character of postmodernism, which wants to embrace various kinds of narratives, religion is raised again from a postmodern perspective, including as a trend of the historical context. Science integration can likewise be understood as an amalgamation of knowledge structures: the structure of science does not separate the branches of religion from the branches of observation, experimentation, and logical reasoning, and the integrated scientific structure spans studies that come from the qauliyah verses, the Al-Quran and hadiths, and the kauniyah verses, the results of observation, experimentation, and logical reasoning. A very popular division for understanding science is the division into ontology, epistemology, and axiology. Furthermore, according to Mahdi Ghulsyani, the integration of knowledge lies in interpreting the verses of the Koran in the light of modern science; thus the view that considers the Qur'an a source of all knowledge is not new, for many of the great scholars of the earlier Muslims held this view. Al-Faruqi also reinforces the premise that a single source of truth means that two or more conflicting sources cannot occur. This is further proof that scientific integration accords with the principle of al-tawhîd: to say that the truth is one is therefore not only the same as asserting that God is one, but also the same as asserting that there is no other god except God. For al-Faruqi, acknowledging the Lordship and oneness of God means acknowledging truth and unity without separating science and religion. Several models are offered by experts on science integration. According to Barbour, the models are: a) conflict; b) independence; c) dialogue; and d) integration.
Meanwhile, John F. Haught mapped the relationship into the four forms already described above: conflict, contrast, contact, and confirmation. Another model is offered by Armahedi Mahzar, who classifies integration models between science and religion into five types, based on the number of basic concepts that form the main components of the model: a model with one basic concept is called monadic, with two dyadic, with three triadic, with four tetradic, and with five pentadic.

The real contribution to be gained through the science integration process is that it can produce contributions at the conceptual, institutional, and operational levels. At the conceptual level, higher education will be able to produce human resources who fully understand Din Al-Islam kaffah. At the institutional level, Islamic universities will be able to compete across the sciences, in both religious and general studies as a whole, without regard to the existing scientific dichotomy. And at the operational level, tertiary institutions will be able to fully integrate the educational curriculum through the fundamental concepts of kalam, fiqh, tasawuf, and wisdom as compulsory first-level lessons taken together; the syllabus and basic books of all integrated faculties will include verses from the Al-Qur'an and Hadith appropriate to these disciplines.

CONCLUSION

The foundation of Islam and science under integration is worship. Worship is the source of the pattern of human creation. Humans were created with the value of worship, so the application of true academic and industrial values also carries the value of worship. Islamization and integration run simultaneously with reflexivity, with an emphasis on the use of religious value in science.

Reflexivity has three elements that are transformed as shadow, mirror, and human, alongside other entities and intangible elements. The elements of source, creator, and creation can be transformed into salat, God, and universe, where Islamization and integration can be blended into a basic design. The basic design of the universe is salat, which is reflected by God as the universe.
Islamization and integration differ from reflexivity in their basic philosophy: Islamization and integration are richer in epistemology, whereas reflexivity is a concept grounded in ontology. These reflections can be seen in the numbers of the creation verse (51:56), whose digits sum to 5 + 1 + 5 + 6 = 17, the number of rak'ahs in the obligatory daily prayers. In this reflexive method, the creation verse also yields the number 19 (one and nine). This number 19 is significant in the Koran, appearing in Surah al-Mudatsir [74]: 30, meaning: "Over it are nineteen." The number of words from verse 1 to verse 29 of al-Mudatsir turns out to be 57, where 57 = 3 × 19. From these mathematical calculations comes the figure 3.1.9.

God has shown greatness in that the simple word of worship can be transformed into multiple creations. As Islamic scholars guided by the Quran and Hadith, intellectuals must start to develop a new theory derived from worship. Muslim scholars should try to learn more about the exploration of ibadah from the qauliyah and kauniyah.
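As with the Hahslm identity earlier, the digit arithmetic cited here can be verified directly. The following minimal Python sketch checks the two calculations; the word count of 57 is taken from the text as a given rather than recomputed from the Arabic.

```python
# Digit sum of the creation verse reference 51:56 -> 17 (obligatory rak'ahs)
digits = [int(d) for d in "5156"]
assert sum(digits) == 17

# Word-count claim for al-Mudatsir verses 1-29: 57 words, and 57 = 3 x 19
word_count = 57  # taken as stated in the text, not recomputed
assert word_count == 3 * 19
print("digit sum:", sum(digits), "| 57 = 3 x 19:", word_count == 3 * 19)
```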
An increase in tumor-infiltrating lymphocytes after treatment is significantly associated with a poor response to neoadjuvant endocrine therapy for estrogen receptor-positive/HER2-negative breast cancers

Background: The reason for the poor prognosis of estrogen receptor (ER)+/human epidermal growth factor receptor 2 (HER2)− breast cancer patients with high levels of tumor-infiltrating lymphocytes (TILs) is poorly understood. The association between TILs and the response to neoadjuvant endocrine therapy (NET) was examined. Methods: We recruited 170 patients with ER+/HER2− breast cancer who were treated with preoperative endocrine monotherapy. TILs were evaluated before and after NET, and their changes were noted. Furthermore, T-cell subtypes were examined using CD8 and FOXP3 immunohistochemical analyses. Neutrophil and lymphocyte counts in the peripheral blood were analyzed with reference to TIL levels or changes. Responders were defined by Ki67 expression levels ≤ 2.7% after treatment. Results: Post-treatment (p = 0.016), but not pre-treatment (p = 0.464), TIL levels were significantly associated with the response to NET. TIL levels increased significantly after treatment among non-responders (p = 0.001). FOXP3+ T-cell counts increased significantly after treatment in patients with increased TILs (p = 0.035), but not in those without increased TILs (p = 0.281). Neutrophil counts decreased significantly after treatment in patients without increased TILs (p = 0.026), but not in patients with increased TILs (p = 0.312). Conclusion: An increase in TILs after NET was significantly associated with a poor response to NET. Given that FOXP3+ T-cell counts increased and neutrophil counts did not decrease in patients with increased TILs after NET, the induction of an immunosuppressive microenvironment is speculated to play a role in the inferior efficacy. These data might partially indicate the involvement of the immune response in the efficacy of endocrine therapy. Supplementary Information: The online version contains supplementary material available at 10.1007/s12282-023-01462-5.

Introduction

Endocrine therapy (ET) is essential for the treatment of estrogen receptor (ER)-positive and human epidermal growth factor receptor 2 (HER2)-negative invasive breast cancer. In clinical practice, neoadjuvant endocrine therapy (NET) is an attractive treatment option for improving breast-conserving surgery because ET reduces tumor size [1]. In this context, clinical responses evaluated using the Response Evaluation Criteria in Solid Tumors (RECIST) are applicable to NET as a treatment indicator [2]. However, because the effectiveness of ET lies not only in the clinical response but also in improved prognosis in patients with ER+/HER2− breast cancer, other biomarkers for ET are needed. In a previous study of NET, low expression of the proliferative marker Ki67 after 2 weeks of treatment was significantly associated with longer recurrence-free survival (p = 0.008), whereas baseline Ki67 levels were not (p = 0.07) [3]. Because ET suppresses cell-cycle progression and induces Gap 1 (G1) phase arrest, Ki67 expression in cancer cells is speculated to be a more precise biomarker for ET than the RECIST evaluation.
In line with this, Ianza et al. reported no significant correlation between clinical response and disease-free survival (DFS) (p = 0.84) or overall survival (OS) (p = 0.74) with letrozole-based neoadjuvant therapy, whereas prognosis was significantly associated with the delta-Ki67 proliferation index (p = 0.002 for DFS; p = 0.009 for OS) [4]. Consequently, a decrease in Ki67 is accepted as a predictor of ET in clinical studies, in which cell-cycle complete arrest, defined as Ki67 ≤ 2.7% after treatment, has been used as a biomarker for neoadjuvant endocrine-based therapy [5, 6]. Therefore, in clinical practice, the response to ET may be evaluated by the proliferative response determined by the downregulation of Ki67 expression levels.

However, the relationship between TILs and the response to ET has rarely been investigated. In a study by Lundgren et al., 2-year adjuvant tamoxifen (TAM) was compared with no TAM in 564 premenopausal patients according to TIL levels in archival tissues [11]. The breast cancer-free interval was significantly longer in the TAM group than in the control group for patients with TILs < 50% (HR 0.63; 95% CI 0.47−0.84; p = 0.002), but this association was absent in the TILs ≥ 50% group (HR 0.84; 95% CI 0.24−2.86; p = 0.77). Similarly, the distant recurrence-free interval (DRFI) of patients treated with TAM was significantly improved compared with no TAM in ER+/HER2− postmenopausal breast cancer in the TIL-low (< 10%) group (HR 0.49; 95% CI 0.31−0.78; p = 0.002), but no significant benefit was obtained from TAM among patients in the TIL-high (≥ 10%) group [12]. These data support the hypothesis that the inferior efficacy of ET in the TIL-high group of the ER+/HER2− subtype results in a poor prognosis. In a study of neoadjuvant letrozole ± lapatinib, there was no statistically significant association between Ki67 suppression and high versus low stromal TILs (−81% vs. −66%; p = 0.513) [13]. Skriver et al. reported that among breast cancers with no pathological response, TILs were significantly increased after neoadjuvant letrozole (odds ratio [OR] 0.71; 95% CI 0.53−0.96; p = 0.02) [14]. In contrast, another study found that TILs increased significantly in responders (mean %: 5.07 ± 10.42 vs. 3.047 ± 6.859; p = 0.0071) but not in non-responders (mean %: 3.15 ± 3.648 vs. 2.425 ± 4.919; p = 0.0938) [15]. Thus, the significance of TILs, including issues of baseline measurement and of changes under ET, remains to be elucidated.

To determine the influence of the immune response on the efficacy of ET in ER+/HER2− breast cancer, we investigated TIL levels in samples obtained before and after the start of treatment in relation to the response to NET. In addition, T-cell subsets were identified immunohistochemically using cluster of differentiation (CD)8 as a marker of cytotoxic (anti-tumor) T cells and forkhead box protein 3 (FOXP3) as a marker of regulatory T cells (Tregs), which negatively regulate immune responses. To clarify the mechanisms underlying the association between TIL levels and the response to NET, changes in TIL levels were compared with immune-related peripheral blood markers, including neutrophil and lymphocyte counts.

Patient eligibility and NET

A total of 186 patients with histologically diagnosed invasive breast cancer who received NET and underwent surgery between June 2010 and December 2021 were recruited to this retrospective study.
Eligible patients were ER-positive (nuclear staining in ≥ 1% of cancer cells) and HER2-negative (immunohistochemical staining 0 or 1+, or negative fluorescence in situ hybridization for immunohistochemical staining 2+). Patients without pre-treatment TIL data (n = 5) or post-treatment Ki67 data (n = 1), or with NET of < 3 weeks (n = 10), were excluded. The ET agents included aromatase inhibitors (n = 123), ovarian function suppression plus TAM (n = 33), TAM alone (n = 11), and fulvestrant (n = 3). The median duration of NET was 5.2 months (range, 3 weeks to 92.8 months).

Cut-off of Ki67 expression levels and definition of response to NET

We evaluated the average expression of Ki67 in the nuclei of cancer cells using immunohistochemistry, with measurements made by visual estimation (eyeballing). Pre-treatment Ki67 was divided into high (≥ 20%; n = 59) and low (< 20%; n = 111) groups. The response to NET was defined by post-treatment Ki67 using 2.7% as the cut-off value, following a previous study [5], and patients were classified as responders (≤ 2.7%; n = 81) or non-responders (> 2.7%; n = 89).

Evaluation of TIL levels

TIL levels were determined in hematoxylin and eosin-stained samples obtained by core needle biopsy before NET and in surgically resected tissues after NET, as described in a previous study [16]. First, we microscopically identified lesions containing a relatively high number of invasive cancer cells and lymphocyte infiltration in a low-power field (×40). The hotspot with the highest lymphocyte infiltration was then selected in a medium-power field (×100). Excluding neutrophils, eosinophils, and macrophages, lymphocytes and plasma cells in both the peritumoral and intratumoral stromal regions were evaluated. Finally, TIL levels were calculated as the percentage of area occupied by lymphocytes and plasma cells in the entire tumor and adjacent stroma. TIL levels were examined independently by two investigators (R. F. and T. W.), and discrepancies were discussed until agreement was reached. Representative cases with low (1%) and high (30%) TIL levels are shown in Supplementary Fig. S1a and b, respectively.

Immunohistochemical staining of CD8 and FOXP3 in TILs

Paired tumor samples before and after NET were available for 42 breast cancer cases. Using these samples, we immunohistochemically analyzed the expression of CD8 and FOXP3. Cell Conditioning Solution (Ventana Medical Systems, Inc., Basel, Switzerland) for 64 min and BOND Epitope Retrieval Solution 2 (Leica Microsystems, Tokyo, Japan) for 20 min were used for CD8 and FOXP3 antigen retrieval, respectively. Primary antibodies against CD8 (no dilution; CONFIRM anti-CD8 SP57 rabbit monoclonal antibody, Roche Diagnostics K.K., Tokyo, Japan) and FOXP3 (dilution 1:500; 236A/E7 antibody ab20034; mouse monoclonal; Abcam, Cambridge, UK) were used. Cell membrane and nuclear staining of lymphocytes were considered positive for CD8 and FOXP3, respectively (Supplementary Fig. S1c, d). Positive cells were counted at ×400 magnification, and the average count across four fields was used to determine the cell count in each sample, as previously reported [17].

Measurement of absolute neutrophil and lymphocyte counts in peripheral blood during treatment

Neutrophil and lymphocyte counts in peripheral blood were measured automatically using a Sysmex XN-9000 hematology analyzer (Sysmex Corporation, Kobe, Japan). Data from paired samples obtained before and after NET were available for 151 patients. All blood samples were collected and measured within one month before the start of treatment or surgery. Neutrophil counts were calculated as the sum of the stab (band) and segmented fractions.
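Before turning to the statistical analyses, note that the responder definition from the Ki67 section above is a simple threshold rule. The following minimal Python sketch restates those cut-offs; the function and its field names are ours, for illustration only.

```python
def classify_patient(ki67_pre: float, ki67_post: float) -> dict:
    """Apply the study's cut-offs: pre-treatment Ki67 split at 20%,
    and response to NET defined as post-treatment Ki67 <= 2.7% [5]."""
    return {
        "ki67_pre_group": "high" if ki67_pre >= 20.0 else "low",
        "response": "responder" if ki67_post <= 2.7 else "non-responder",
    }

# Example: baseline Ki67 of 25% suppressed to 2.0% after NET
print(classify_patient(25.0, 2.0))
# -> {'ki67_pre_group': 'high', 'response': 'responder'}
```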
Statistical analyses

The relationship between clinicopathological factors and the response to ET was analyzed using Fisher's exact test or the Wilcoxon rank-sum test. TIL levels before and after NET and their changes were compared between responders and non-responders using the Wilcoxon rank-sum and Wilcoxon signed-rank tests, respectively. Relationships between TILs or response and CD8+ cells, FOXP3+ cells, or the FOXP3+/CD8+ cell ratio were analyzed using the Wilcoxon rank-sum or Wilcoxon signed-rank test. Changes in CD8+ cells, FOXP3+ cells, absolute neutrophil counts (ANC), and absolute lymphocyte counts (ALC) before and after NET were assessed using the Wilcoxon signed-rank test. Odds ratios (ORs) and 95% CIs for univariable and multivariable analyses were obtained using logistic regression models. All statistical analyses were performed as two-sided tests with JMP® Pro Version 15 (SAS Institute Inc., Cary, NC, USA), and statistical significance was set at p < 0.05.
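The core comparisons described above (paired pre/post tests, responder vs. non-responder tests, and a categorical association test) can be illustrated with a short SciPy sketch. The data below are hypothetical and for illustration only; the study itself used JMP.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data for illustration only (not the study's values)
til_pre = rng.uniform(1, 30, size=40)             # TILs (%) before NET
til_post = til_pre + rng.normal(2, 5, size=40)    # TILs (%) after NET
responder = rng.random(40) < 0.5                  # hypothetical labels

# Paired pre/post change in TILs: Wilcoxon signed-rank test
_, p_paired = stats.wilcoxon(til_pre, til_post)

# Post-treatment TILs, responders vs. non-responders:
# Wilcoxon rank-sum (Mann-Whitney U) test
_, p_groups = stats.mannwhitneyu(
    til_post[responder], til_post[~responder], alternative="two-sided"
)

# Categorical factor vs. response: Fisher's exact test on a 2x2 table
table = [[12, 8], [9, 11]]                        # hypothetical counts
_, p_fisher = stats.fisher_exact(table)

print(f"signed-rank p={p_paired:.3f}, rank-sum p={p_groups:.3f}, "
      f"Fisher p={p_fisher:.3f}")
```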
Table 1 shows the patients' background according to the response to NET. The number of responders was significantly higher in the Ki67-low group than in the Ki67-high group in pre-treatment samples (p = 0.001). Responses were observed more frequently in patients treated with aromatase inhibitors than in those treated with other endocrine agents (p = 0.039). There was no significant association between response and other factors, except for post-treatment progesterone receptor (PgR) levels (median, range: 1%, 0-100% for responders vs. 10%, 0-100% for non-responders; p = 0.012).

Determination of CD8+ T-cell counts, FOXP3+ T-cell counts, and the FOXP3+/CD8+ T-cell ratio according to the response to NET or changes in TILs

There was no significant association between the response to NET and CD8+ T-cell counts (p = 0.969), FOXP3+ T-cell counts (p = 0.215), or the FOXP3+/CD8+ T-cell ratio (p = 0.093) in pre-treatment breast cancers (Fig. 2a-c). In contrast, CD8+ T-cell counts (p = 0.039), FOXP3+ T-cell counts (p = 0.004), and the FOXP3+/CD8+ T-cell ratio (p = 0.007) were significantly higher in non-responders than in responders in post-treatment breast cancers (Fig. 2d-f). Although CD8+ and FOXP3+ T-cell levels did not differ significantly before and after NET in responders (p = 0.586 for CD8 and p = 0.403 for FOXP3), both CD8+ and FOXP3+ T cells increased significantly after treatment in non-responders (p = 0.019 for CD8 and p = 0.005 for FOXP3) (Supplementary Fig. S2). To clarify the mechanisms underlying the relationship between increased TILs and a poor response to NET, changes in CD8+ and FOXP3+ T cells during treatment were analyzed. Among patients with (n = 18) and without (n = 24) increased TILs, CD8+ T cells did not change after NET (median, range: 48.…) (Fig. 3b).

Changes in ANC and ALC in relation to changes in TILs during NET

Changes in ANC and ALC levels before and after NET were analyzed in relation to TIL changes (Fig. 4). Among patients without increased TILs after treatment, ANC decreased significantly compared with pre-treatment values (median, range: 3417.8, 1568-6809 for pre-treatment and 3286, 1630-11,603 for post-treatment; p = 0.026) (Fig. 4a). In contrast, ANC did not change during NET in patients with increased TIL levels (median, range: 3635, 2038-8799 for pre-treatment and 3645, 1719-7602 for post-treatment; p = 0.312). There were no significant differences in ALC before and after NET in patients with (p = 0.793) or without (p = 0.389) increased TIL levels (Fig. 4b).

Association between changes in TILs and clinicopathological factors

Breast cancer patients with increased TIL levels had larger tumors (T2, 63.2% vs. 46.1%; p = 0.030) and more frequent lymph node metastases (node-positive, 38.2% vs. 22.6%; p = 0.038) (Table 3). Other factors, including menopausal status, ER and PgR expression levels, Ki67 status, and endocrine agents, showed no significant association with increased TIL levels. Furthermore, ANC and ALC were not significantly associated with changes in TIL levels.

Discussion

In this study, we found that increased levels of TILs after NET were significantly associated with a poor proliferative response in ER+/HER2− breast cancer (p = 0.001). Multivariable analysis showed that an increase in TIL levels after NET was an independent and significant predictor of poor treatment efficacy. Among patients with increased TILs, FOXP3+ T cells were significantly upregulated (p = 0.035), but not among patients without increased TILs (p = 0.281). Furthermore, a significant decrease in peripheral blood ANC was prominent among patients without increased TILs (p = 0.026), but not among those with increased TILs (p = 0.312). Based on these data, we speculate on a mechanism by which the immunosuppressive milieu in breast cancers with increased TILs after ET results in inferior efficacy.

As described in the Introduction, the study by Skriver et al. is consistent with our observation (TILs increased in breast cancers with no pathological response) [14], whereas TILs increased in responders in Liang et al.'s report [15]. Although the detailed reasons for this discrepancy are unknown, different methods of response evaluation (pathological in Skriver et al., radiological in Liang et al.) or different ET agents (letrozole in the former study; anastrozole or fulvestrant in the latter) may be involved. In a study of 987 ER+/HER2− breast cancers, higher TIL levels were significantly associated with lymph node metastases (p = 0.003), high tumor grade (p < 0.0001), low ER levels (p < 0.0001), and high Ki67 levels (p < 0.0001) [18]. The possibility that these aggressive phenotypes of breast cancer with high TIL levels are linked to lower sensitivity to ET cannot be ruled out. However, in our study, the NET response was not associated with lymph node metastasis, nuclear grade, or ER levels; only Ki67 levels were associated (Table 1). Since a change in TIL levels was an independent predictor of the response to NET in a multivariable analysis that included Ki67 (Table 2), changes in TIL levels may not reflect sensitivity mediated through the clinicopathological features of TIL-high tumors. The different roles of TILs in ER+/HER2− breast cancers compared with HER2+ or triple-negative (TN) breast cancers seem to stem from the composition of T-cell subsets, including FOXP3+ T cells, which are more abundant in ER+ than in ER− breast cancers [19].
The immunosuppressive function of FOXP3+ T cells in ER+ breast cancers was further shown in a meta-analysis in which high tumor-infiltrating FOXP3+ T-cell levels were associated with shorter OS in ER+ (HR 0.86; 95% CI 0.77−0.96; p = 0.009), but not in ER− (HR 1.09; 95% CI 0.82−1.45; p = 0.569), breast cancers [20]. Therefore, the immunosuppressive function exerted through FOXP3+ T cells may play an essential role in the biology of ER+ breast cancer. In a previous NET study, the CD8+/Treg T-cell ratio increased significantly in responders (p = 0.001) but not in non-responders (p = 0.744) [21]. Although both CD8+ and FOXP3+ T cells increased significantly after treatment in non-responders (Supplementary Fig. S2), FOXP3+ T-cell counts and the FOXP3+/CD8+ T-cell ratio appeared to be superior to CD8+ T-cell counts for predicting efficacy. Because the increased TILs in non-responders were accompanied by upregulated FOXP3+ T cells, we speculate that FOXP3+ T cells, rather than CD8+ T cells, play an essential role in the efficacy of ET. Estrogen directly stimulates FOXP3 expression and the function of Tregs in cervical cancer [22]. Generali et al. reported a significant reduction in the number of Tregs after letrozole treatment [23]. Since this reduction in Tregs was restricted to the responder group, the response to letrozole was speculated to be mediated by Treg suppression. Similarly, among patients with ER+/HER2− metastatic breast cancer treated with CDK4/6 inhibitors and endocrine agents, a greater reduction in Tregs has been reported in responders than in non-responders [24]. These data support the idea that the reduction in Tregs induced by endocrine-based therapy plays an essential role in achieving a response.

A meta-analysis documented the relationship between a high neutrophil-to-lymphocyte ratio (NLR) and poor prognosis in early breast cancer [25]. Increased myeloid-derived suppressor cells in peripheral blood have been reported to be significantly associated with a high NLR [26]. Furthermore, a higher NLR significantly upregulates inflammatory cytokines, including IL-6 and IL-8, in colorectal cancer [27]. These data strongly support the hypothesis that an altered ratio of neutrophils to lymphocytes reflects an immunosuppressive microenvironment in the tumor.

The detailed mechanism by which the number of FOXP3+ T cells increases in non-responders after ET remains unknown. Tregs are induced by several factors, including hypoxia and transforming growth factor-β (TGF-β) signaling [28, 29]. Overexpression of the TGF-β metagene has been reported in immune-rich ER+ breast cancers [30], as has upregulation of TGF-β by treatment with an aromatase inhibitor [31]. As TGF-β signaling has been reported to be related to resistance to letrozole or TAM [32, 33], upregulation of this signaling during the development of resistance to ET might activate Treg functions. The observation that increased TILs after NET occur frequently in breast cancers with large tumor size and lymph node metastases (Table 3) may indicate an unfavorable immune microenvironment in these tumors. The results obtained here are expected to contribute not only to understanding the mechanisms of poor response to ET induced by a TIL-high microenvironment but also to identifying patients with inferior efficacy of ET.
If sensitivity to ET can be predicted more precisely by a combination of TIL changes and Ki67 suppression, such a prediction model would be clinically useful and could lead to a new treatment strategy combining agents that modulate the immune microenvironment, including immunotherapy.

The present study had several limitations. Since the evaluation of TILs in whole tumors is not feasible for pre-treatment samples, TIL counts were evaluated not as an average but at a hotspot, following the method published by Hida et al. [34]. Depending on the method used, the increased TIL levels could have resulted from a selection bias toward hotspot lesions. However, TIL increases were observed in non-responders but not in responders, which argues against this bias. In addition, defining responders as post-treatment Ki67 ≤ 2.7% seems inappropriate for tumors with Ki67 ≤ 2.7% at baseline. Although eight such cases were included in the present study, we confirmed consistent results when these cases were excluded from the analyses (data not shown). We therefore believe that these methodological issues did not influence the results obtained. Since the number of patients was limited, further studies with a larger sample size, including analyses of factors related to Treg function such as hypoxic conditions and activation of TGF-β signaling, are needed.

Conclusions

This study showed that an increase in TILs was significantly associated with a poor proliferative response to NET in ER+/HER2− breast cancers. It is speculated that the upregulation of FOXP3+ T cells after NET results in inferior sensitivity. These data indicate an essential role for ET not only in tumor suppression but also in the immune response. The poor prognosis of patients with high TIL levels in ER+/HER2− breast cancer might be partially explained by poor sensitivity to ET.

Acknowledgements The authors thank Editage (www.editage.jp) for the English language editing.

Author contributions EI and KM performed immunohistochemical staining. RF and TW evaluated TILs and the expression levels of CD8 and FOXP3 using immunohistochemical staining. RF and MN were involved in data collection. RF and YM performed the statistical analyses. YM designed the study, and SH supervised the study. RF and YF prepared the manuscript. All authors have read and approved the final manuscript.

Funding This study was supported by a grant from Hyogo College of Medicine (no grant number was provided).

Data availability Data from individual participants are unavailable because the ethics committee did not permit their publication.

Declarations

Conflict of interest YM received research funding and honoraria from Chugai, AstraZeneca, Eli Lilly, Pfizer, MSD, Daiichi-Sankyo, Kyowa-Kirin, Taiho, and Esai. MN received research funding and honoraria.

Informed consent Written informed consent was obtained from all patients whose samples were used for immunohistochemical staining. For the other patients, we retrospectively collected the clinical data, which posed no risk to the participants. The Institutional Review Board waived the need for written informed consent.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
On the relation between anticipatory ocular torsion and anticipatory smooth pursuit

Humans and other animals move their eyes in anticipation to compensate for sensorimotor delays. Such anticipatory eye movements can be driven by the expectation of a future visual object or event. Here we investigate whether such anticipatory responses extend to ocular torsion, the eyes' rotation about the line of sight. We recorded three-dimensional eye position in head-fixed healthy human adults who tracked a rotating dot pattern moving horizontally across a computer screen. This kind of stimulus triggers smooth pursuit with a horizontal and torsional component. In three experiments, we elicited expectation of stimulus rotation by repeatedly showing the same rotation (Experiment 1), or by using different types of higher-level symbolic cues indicating the rotation of the upcoming target (Experiments 2 and 3). Across all experiments, results reveal reliable anticipatory horizontal smooth pursuit. However, anticipatory torsion was only elicited by stimulus repetition, but not by symbolic cues. In summary, torsion can be made in anticipation of an upcoming visual event only when low-level motion signals are accumulated by repetition. Higher-level cognitive mechanisms related to a symbolic cue reliably evoke anticipatory pursuit but did not modulate torsion. These findings indicate that anticipatory torsion and anticipatory pursuit are at least partly decoupled and might be controlled separately.

Anticipatory smooth pursuit eye movements generally occur when target motion is predictable. Such predictions can be based on strong expectations of an upcoming motion direction (Fiehler, Brenner, & Spering, 2019). At the lowest processing level, these could be induced by repeatedly showing the same kind of stimulus, such as when trials with rightward and leftward target motion are grouped into separate blocks. Stimulus repetition primarily leads to habitual or priming responses through relatively low-level learning processes (Kowler, 1989) in combination with expectation of the upcoming motion based on trial history. Another way of inducing expectation is by presenting targets in a particular configuration that acts as a visual cue, such as when a fixation spot on the left side of the screen will always be followed by rightward target motion. Finally, higher-level symbolic cues have been particularly powerful in eliciting anticipatory pursuit, for example, when a barrier on the left indicates rightward target motion (Kowler, 1989; Kowler et al., 2014). Such symbolic cues can supersede effects of stimulus repetition or simple visual cues (Kowler, 1989; Ladda et al., 2007; Kowler et al., 2014). Different cue types interact differently with the probabilistic information they convey about target motion (Santos & Kowler, 2017). When target motion is entirely unpredictable, anticipatory pursuit can still be based on an estimate of target motion probability, derived from memory and past experience (Heinen, Badler, & Ting, 2005; de Hemptinne, Nozaradan, Duvivier, Lefèvre, & Missal, 2007; Barnes & Collins, 2008; Santos & Kowler, 2017). In summary, anticipatory pursuit eye movements can be driven by a combination of visual and cognitive factors that involve learning of perceptual configurations or simple cues and memory of past history.

A majority of studies on anticipatory smooth pursuit eye movements have used point-like stimuli. However, natural objects may have texture, spatial extent, and rotation around all axes.
Such natural objects generate smooth pursuit eye movements that use all three degrees of freedom of the eye's rotation, including a torsional component (rotation about the line of sight). Ocular torsion during pursuit is finely tuned to visual stimulus features such as rotational direction or speed (Edinger, Pai, & Spering, 2017). However, the properties and neuronal control of pursuit's torsional component are relatively poorly understood. The current study probes anticipatory torsion by using a stimulus that triggers a horizontal smooth pursuit response with a torsional component. The goal of this procedure is to investigate whether the torsional component of pursuit is decoupled from or incorporated into the known anticipatory pursuit response. On one hand, torsional eye movements are often considered reflexive, triggered by head roll (Crawford & Vilis, 1991; Demer & Clark, 2005; Hess, 2008) or image rotation (Howard & Templeton, 1964; Cheung & Howard, 1991; Farooq, Proudlock, & Gottlob, 2004; Sheliga, Fitzgibbon, & Miles, 2009; Edinger et al., 2017). On the other hand, there is evidence that torsion is under some level of voluntary control: trained observers can produce it at will (Balliet & Nakayama, 1978), and torsion might be modulated by higher-level mechanisms such as attention (Pashler, Ramachandran, & Becker, 2006; Stevenson, Mahadevan, & Mulligan, 2016). Moreover, torsional eye movements during eye-head gaze shifts seem to anticipate the terminal position of the head after gaze lands on the target, and might thus be driven by a prediction of the gaze (eye-in-head) trajectory (Tweed, Haslwanter, & Fetter, 1998). Together, these findings indicate that torsional eye movements are not purely reflexive and might be modulated by higher-level processes such as cognitive expectation.

Given the tight behavioral link between the horizontal and torsional components of smooth pursuit (Edinger et al., 2017), we hypothesize that a stimulus that moves and rotates in a predictable way will trigger anticipatory pursuit in both the horizontal and torsional direction. In three experiments, we manipulated stimulus predictability via stimulus repetition and configuration (Experiment 1) or different types of symbolic cues (Experiments 2 and 3) to investigate whether the horizontal and torsional components of pursuit are affected similarly or differently by these types of predictive signals.

Observers

We recruited 18 observers (mean age = 25.5, std = 4.9 years; seven women) with normal and uncorrected visual acuity (at least 20/20, as assessed using an Early Treatment Diabetic Retinopathy Study (ETDRS) chart) and no history of ophthalmologic, neurologic, or psychiatric disease. Overall, nine observers each were tested in Experiments 1 and 2, and five observers participated in Experiment 3. Four observers, among them authors AR and MS, participated in at least two experiments; their data did not differ systematically from those of the other observers. The University of British Columbia Behavioral Research Ethics Board approved all experimental procedures, and all observers participated after giving written informed consent.

Figure 1. Trial timeline. Each trial began with fixation on a peripheral fixation cross, shown for 450 ms, followed by an interstimulus interval of 50 ms. The rotating target was shown for 1600 ms, followed by a screen prompt to give a perceptual judgment by pressing the up (faster) or down (slower) key on a computer keyboard.
Visual stimuli and setup

Stimuli were random dot patterns (RDPs) presented within a disk of 8° diameter on a uniform white background (55 cd/m²). The RDP consisted of 400 uniformly distributed black dots (0.05 cd/m²) that were stationary within the disk, each with a diameter of 0.15°. In a given trial, the textured disk moved across the monitor to the left or right at a constant speed of 10 degrees per second (°/s) while rotating around its center in the clockwise (CW) or counterclockwise (CCW) direction at one of five rotational speeds (166, 173, 180, 187, or 194°/s); rotational speed was manipulated for the purpose of the perceptual task. Observers viewed stimuli in a darkened room on a gamma-corrected 19-in. CRT monitor set to a refresh rate of 85 Hz (ViewSonic Graphic Series G90fB; 1280 × 1024 pixels; 36.3 × 27.2 cm), with a visible range of 37.8° horizontal × 28.3° vertical, from a viewing distance of 55 cm. Each observer's head was stabilized by a bite bar custom-made from dental impression material to reduce motion and instability of the head and to achieve higher precision in eye tracking. Stimulus and procedure were programmed in MATLAB Version R2015b (The MathWorks Inc., Natick, MA) and Psychtoolbox (Version 3; Brainard, 1997; Pelli, 1997; Kleiner et al., 2007).

Procedure, design, and task

Each block started with a five-point eye-tracker calibration on targets spaced 10° apart on a 20° × 20° grid. In Experiments 1 and 2, trials began with fixation on a red cross (size 1°) at a peripheral location 8° to the left or right of the screen center, presented for 450 ms (Figure 1). After a 50-ms interval, the RDP stimulus appeared at the location of the fixation cross and moved across the screen for 1600 ms. The stimulus had the appearance of a rolling ball when rightward translational stimulus motion was combined with CW stimulus rotation (as shown in Figure 1, left), or when leftward translational motion was combined with CCW rotation; we refer to this pattern as "natural" and to the opposite pattern as "unnatural" (shown in Figure 1, middle).

In Experiment 1, horizontal target motion to the right or left was presented in separate blocks of trials. The purpose of this repetition of motion direction within each block was to trigger anticipatory pursuit. Within each series of "left" or "right" blocks, rotational motion direction (natural or unnatural) was also presented in separate blocks of trials. For example, in a "right natural" block, rightward motion direction was paired with CW rotation; in a "right unnatural" block, rightward motion direction was paired with CCW rotation. The purpose of this was to elicit anticipatory torsion. The order of blocks with stimulus rotation (left, right, natural, or unnatural first) was randomized. In each trial, observers judged whether the rotational speed of the stimulus was faster or slower than the average across all previous trials by pressing an assigned key on a computer keyboard. The purpose of this task was primarily to direct observers' attention to the rotation of the stimulus. The next trial started immediately after the observer indicated their response on the computer keyboard. We also included a baseline condition with rightward or leftward target motion and no rotation (Figure 1, right); these blocks were always presented last. In total, this experiment consisted of six blocks of 200 trials each, run in two separate sessions of no more than 60 minutes each.
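To make the stimulus geometry described above concrete, the following is a minimal Python sketch (not the original MATLAB/Psychtoolbox code) of dot positions for a "natural" pattern: rightward translation at 10°/s combined with CW rotation at 180°/s, one of the rotational speeds listed above.

```python
import numpy as np

def rdp_dot_positions(t, n_dots=400, radius=4.0,
                      speed=10.0, rot_speed=180.0, seed=1):
    """Dot positions (deg) at time t (s) for a disk translating rightward
    at `speed` deg/s while rotating clockwise at `rot_speed` deg/s."""
    rng = np.random.default_rng(seed)
    # Uniform dots inside the disk (8 deg diameter -> radius 4 deg)
    r = radius * np.sqrt(rng.random(n_dots))
    phi = 2 * np.pi * rng.random(n_dots)
    # With y pointing up, clockwise rotation = decreasing polar angle
    theta = np.deg2rad(rot_speed * t)
    x = r * np.cos(phi - theta) + speed * t   # rotate, then translate center
    y = r * np.sin(phi - theta)
    return x, y

x, y = rdp_dot_positions(t=0.5)  # dot positions 500 ms after motion onset
print(x[:3], y[:3])
```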
In Experiment 2, trials with leftward and rightward translational direction and with natural, unnatural, or no rotation were presented in randomly interleaved order within the same block of trials. Upcoming horizontal direction was 100% predictable based on the location of the fixation cross, that is, fixation on the left was always followed by motion to the right, and vice versa. Upcoming target rotation was indicated by a cue presented above or below the fixation cross for 450 ms. The cue was either an arrow indicating CW or CCW rotational direction, or a noninformative circle around fixation, providing no rotation-directional information (Figure 2). As in Experiment 1, the location of the fixation cross and cue indicated upcoming horizontal target motion reliably (100% validity). This experiment included three blocks of 200 trials each, run in one single 60-minute session. In Experiment 3, the RDP moved along one of two diagonal line segments that each had a 10° slope (Figure 3). The RDP still translated at the same speed of 10°/s, thus the horizontal speed was slightly lower (9.8°/s) than in the other experiments. The fixation cross was centered at the RDP's start position. In the first block of trials, the RDP moved leftward or rightward with no rotation; in the second block, translational motion was combined with natural stimulus rotation. In both blocks, leftward and rightward motion directions were randomly interleaved. Upcoming target direction was indicated with a 100%-valid barrier cue (4° long extension of the slope above the crossing point) presented from the onset of fixation in all blocks. An extension of the line segment tilted from the upper left to lower right part of the screen, for example, indicated upcoming motion to the right. Each block contained 200 trials, and the experiment was run in one single 30-minute session. Figure 3. Design and trial timeline in Experiment 3. Following central fixation, the RDP translated down the slope in the motion direction opposite to the indicated barrier. In block 1, the RDP did not rotate; in block 2, a natural rotation direction was always shown. The barrier cue indicated with 100% validity the upcoming target's translational direction in both blocks, and the target's rotational direction in block 2. Target presentation duration and perceptual task were identical to Experiment 1. Eye movement recordings and analysis Eye movements were recorded binocularly with a Chronos ETD (Chronos Vision, Berlin, Germany) at a sampling rate of 200 Hz. This eye tracker is a noninvasive, head-mounted, video-based system that can assess torsional rotations of the eye. It is sufficiently accurate and precise (tracking resolution <0.05° along all three axes) for the fine spatiotemporal analysis of three-dimensional (3D) eye movements. Our procedures for preprocessing and analyzing torsional eye position have been described in Edinger et al. (2017) and are reproduced here in abbreviated form for the readers' convenience. Three-dimensional eye-in-head position data were processed offline for each eye separately using the Chronos Iris software (Version 1.5) to derive horizontal, vertical, and torsional eye position data from video recordings. The principle of deriving torsional eye position data relies on interframe changes in the iris crypt landmark with each eye rotation. Following standard practice, ocular torsion was obtained from cross-correlation between iris segments across images.
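The fragment below sketches that cross-correlation principle in MATLAB (our simplification for illustration, not the Chronos Iris algorithm, which fits several iris segments and combines them as described next; the intensity profiles are hypothetical inputs): torsion is estimated as the circular shift that best aligns a polar iris intensity profile from the current frame with a reference profile.

% profRef, profCur: hypothetical 1 x 360 vectors of iris intensity sampled
% along a circle concentric with the pupil (one sample per degree).
shifts = -30:30;                        % candidate torsion angles (deg)
cc = zeros(size(shifts));
for i = 1:numel(shifts)
    shifted = circshift(profCur, [0, shifts(i)]);  % rotate the profile
    r = corrcoef(profRef, shifted);                % Pearson correlation
    cc(i) = r(1, 2);
end
[ccMax, iBest] = max(cc);
if ccMax > 0.7                 % quality criterion, as in the weighting below
    torsionDeg = shifts(iBest);         % torsional position, this frame
else
    torsionDeg = NaN;                   % unreliable frame, reject later
end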
Four segments were fitted to each eye's iris and angular eye position was calculated as a weighted average from all segments with a cross-correlation factor of >0.7. By convention, leftward, downward, and extorsion (i.e., the top of the eye moving away from the nose) of the right eye and intorsion (the top of the eye moving toward the nose) of the left eye are positive. Eye position data were then analyzed using custom-made routines in MATLAB. Eye position was differentiated to yield 3D eye velocity, and data were filtered using routines described in Edinger and colleagues (2017). Anticipatory pursuit onset was detected in a 100-ms interval around stimulus motion onset by fitting each two-dimensional position trace with a piecewise linear function, consisting of two linear segments (starting 50 ms before onset) and one breakpoint. The least-squares fitting error was minimized iteratively to identify the best location of the breakpoint, defined as the time of pursuit onset. Catch-up saccades occur naturally during pursuit and were identified using a velocity criterion. Eye velocity had to exceed 20°/s in three consecutive frames to be considered a horizontal or vertical corrective saccade and 10°/s to be considered a torsional saccade (backward saccade to reset the eye). Saccade onsets and offsets were defined as the nearest reversal in the sign of acceleration on either side of the three-frame interval. We then computed mean torsional eye velocity and mean horizontal eye velocity in the saccade-free time interval from 50 ms before stimulus onset to 50 ms after stimulus onset, yielding the magnitude of anticipatory torsion and pursuit, respectively. Manual inspection of each individual eye trace confirmed that the algorithm correctly identified all aspects of horizontal pursuit and torsion; traces with blinks, lost signals, or errors in torsion detection were flagged and excluded from further analysis, resulting in 24.3% excluded trials across observers and experiments. This exclusion rate reflects the Chronos system's reliance on a clear image of the iris to derive ocular torsion. Any obstruction of the iris due to eyelashes or eye anatomy (e.g., drooping lid) at any time during the trial results in unreliable torsional data, and therefore in rejection of the trial; rejection rates differed between observers and ranged from 4.5% for the most reliable to 43.3% for the least reliable observer. Note that we recorded 3D eye positions from both eyes for each observer. Because the number of usable trials differs between left and right eye for each observer (due to subtle inter-eye differences in iris shape, structure, and eye lid anatomy), we selected the eye that yielded a larger number of acceptable trials based on torsion data preprocessing for all analyses for each observer. Statistical analysis Our experiments were designed to test the following hypotheses: First, we expected that stimulus configurations in all experiments would reliably trigger anticipatory horizontal pursuit. Second, we hypothesized that all experimental manipulations would also trigger anticipatory torsion because it is closely linked to pursuit. For all experiments, we assessed the effect of rotational motion direction (natural, unnatural, no rotation) on horizontal and torsional eye velocity using repeated-measures analysis of variance (ANOVA) with within-subjects factor rotation; we averaged across leftward and rightward horizontal motion directions because we did not expect or observe any horizontal asymmetries.
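Before turning to the hypothesis tests, the following MATLAB sketch makes the onset-detection and anticipation measures described above concrete (our reconstruction under simplifying assumptions, not the authors' routine: we use an exhaustive breakpoint search instead of their iterative fit, a crude three-frame saccade flag, and a hypothetical desaccaded position trace sampled at 200 Hz).

fs  = 200;                       % Hz, Chronos sampling rate
t   = (-0.3:1/fs:0.3)';          % time relative to stimulus onset (s)
pos = eyePos(:);                 % hypothetical trace, same length as t

% Two-segment piecewise-linear fit: test every breakpoint and keep the one
% minimizing the summed least-squares error of the two line segments.
bestErr = inf; bestBp = NaN;
for bp = 5:numel(t)-5
    p1  = polyfit(t(1:bp),   pos(1:bp),   1);
    p2  = polyfit(t(bp:end), pos(bp:end), 1);
    err = sum((pos(1:bp)   - polyval(p1, t(1:bp))).^2) + ...
          sum((pos(bp:end) - polyval(p2, t(bp:end))).^2);
    if err < bestErr, bestErr = err; bestBp = bp; end
end
onsetTime = t(bestBp);           % estimated pursuit onset (s)

% Anticipatory velocity: mean saccade-free eye velocity from -50 to +50 ms.
vel    = gradient(pos) * fs;     % differentiate position to velocity (deg/s)
isSacc = conv(double(abs(vel) > 20), ones(3,1), 'same') >= 3;  % 20 deg/s in
                                 % 3 consecutive frames (10 deg/s for torsion)
win    = t >= -0.05 & t <= 0.05 & ~isSacc;
anticipatoryVel = mean(vel(win));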
We did not expect anticipatory responses to be modulated by rotational speed, and thus did not include speed in our hypothesis testing. We further evaluated the relation between anticipatory and visually driven torsional components. Results of the perceptual task are not reported because the purpose of this task was to direct observers' attention to the rotation of the stimulus, and not to assess the relationship between perception and torsion. All reported t-tests were two-tailed and, if applicable, Bonferroni-corrected for multiple comparisons. Statistical analyses were conducted in IBM SPSS Statistics Version 23 (IBM Corp., Armonk, NY) and MATLAB Version R2019a (The MathWorks Inc.). Direction repetition and direction cues reliably trigger anticipatory horizontal pursuit The stimulus configuration in our paradigm (fixation position to the left or right of screen center combined with centripetal target motion, or the presence of the barrier cue) made the target's horizontal motion direction predictable. As a result, observers reliably initiated anticipatory horizontal pursuit in the direction of the upcoming target, starting on average 200 ms before motion onset in both experiments. These findings are demonstrated in mean horizontal eye velocity traces for all three experiments (Figure 4). Interestingly, in Experiment 1, anticipatory horizontal pursuit velocity differed depending on whether the stimulus rotated or not (Figure 4b). This observation was confirmed by a main effect of rotation on anticipatory pursuit velocity, F(2, 16) = 25.26, p < 0.001, η² = 0.76. Anticipatory pursuit velocity was significantly reduced in comparison to the no-rotation baseline when the stimulus rotated naturally. Only stimulus repetition, not symbolic cues, elicits anticipatory torsion Importantly, we found that observers anticipated the target's rotational direction. The eyes rotated about the visual axis either CW in response to "natural" or CCW in response to "unnatural" rotation prior to target onset. Figure 5a shows mean torsional velocity traces for Experiment 1, revealing a separation of responses to natural versus unnatural rotation several hundred milliseconds prior to target motion onset. These observations are reflected in comparisons of mean torsional eye velocity during the same interval as anticipatory smooth pursuit, from 50 ms before to 50 ms after target onset, in Experiment 1 (Figure 5b). Rotational direction had a significant main effect on mean anticipatory torsional velocity, F(2, 16) = 14.6, p < 0.005, η² = 0.65, mostly driven by the difference between natural rotation and no rotation [t(8) = 3.94, p < 0.004]. The difference between unnatural rotation and the no-rotation baseline was nonsignificant when corrected for multiple comparisons [t(8) = 2.21, p = 0.15] because mean anticipatory torsion was overall weaker in response to unnatural rotation. These findings indicate that anticipation of rotational motion direction, triggered by stimulus repetition, can modulate ocular torsion, especially in response to a naturally rotating stimulus that causes stronger torsion overall (Figure 6a). By contrast, cognitive expectation triggered by a symbolic cue did not modulate ocular torsion, regardless of whether this cue was paired with a particular stimulus configuration (location of the fixation cross as a stationary visual cue, Experiment 2) or whether it was used in isolation (Experiment 3).
Results from these two experiments reveal no anticipatory torsion (Figure 5c,e) and no significant main effect of rotational direction (natural, unnatural vs. no rotation in Experiment 2, or natural vs. no rotation in Experiment 3) on torsional velocity (Figure 5d,f; both F < 1). Although the magnitude of anticipatory torsion in Experiment 1 was correlated with the magnitude of visually guided torsion, there was no such relationship between anticipatory and visually guided torsion in Experiments 2 and 3 (Figure 6b). Disentangling the effects of short-term and long-term expectation The results described so far are based on averages across all trials in a given block. We next investigated how anticipatory pursuit and anticipatory torsion built up over the course of a block of trials, and compared the temporal development for anticipatory pursuit and torsion. Figure 7 shows anticipatory eye velocity accumulated over time, that is, eye velocity at trial = 1 is the anticipatory eye velocity in trial 1 for all observers; eye velocity at trial = 10 is the eye velocity averaged across trials 1-10 for all observers. In Experiment 1, anticipatory pursuit responses built up quickly within the first five trials (Figure 7a). Accumulation profiles were similar in all conditions, despite differences in anticipatory pursuit magnitude (see Figure 7c,e). In Experiment 2, anticipatory pursuit built up faster in trials in which the stimulus rotated as compared with no-rotation trials (Figure 7c), possibly indicating the cost of decoding the neutral cue in that condition. In Experiment 3, anticipatory pursuit built up more slowly than in Experiments 1 and 2 (Figure 7e), possibly because the translational direction in Experiment 3 was only indicated by the barrier cue, not by an additional stationary cue (location of fixation spot). The temporal development of anticipatory torsion in Experiment 1 was slower than for anticipatory pursuit; anticipatory torsion took approximately 20 trials to reach its maximum (Figure 7b). There was no notable change in the anticipatory torsional velocity response in Experiment 2 (Figure 7d) or in Experiment 3 (Figure 7f). The comparison between anticipatory pursuit and torsion in Experiment 1 indicates that low-level visual signals derived from stimulus repetition or priming drive both responses, but at a different temporal rate. To isolate the effect of longer-term cognitive expectation, we randomized the order of motion directions in Experiments 2 and 3. However, it is still possible that short-term priming effects might have occurred due to recent trial history (Kowler, 1989; Heinen et al., 2005). To investigate the effect that the preceding trials might have had on anticipatory pursuit and torsion in a given trial, we conducted a tree-plot analysis for pursuit and torsion in those blocks in Experiments 2 and 3 in which translational or rotational directions were randomized. In Figure 8, we show averaged eye velocities in trial n as a function of rotational (or translational) direction in the previous two trials (n-1 and n-2). If a priming effect existed for torsion, for example, we would expect eye velocity of trials preceded by a stimulus with CW rotation to be more positive than the averaged eye velocity of trials preceded by CCW rotation. We observed no systematic priming effect for either torsion in Experiment 2, or torsion or pursuit in Experiment 3, when averaging data across all participants. However, some individual observers' data reflect effects of priming.
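A minimal MATLAB sketch of this tree-plot conditioning (our reconstruction; dirs and vel are hypothetical vectors holding each trial's direction, coded +1 for CW and -1 for CCW, and its anticipatory eye velocity):

nTrials = numel(vel);
key = nan(1, nTrials);
for n = 3:nTrials
    key(n) = 2*dirs(n-1) + dirs(n-2);   % unique code per two-back history
end
% Codes, listed as (n-1, n-2): +3 = (CW,CW), +1 = (CW,CCW),
% -1 = (CCW,CW), -3 = (CCW,CCW). A priming effect would show up as
% systematically different means across these branches.
for k = [3 1 -1 -3]
    sel = key == k;
    fprintf('history %+d: mean vel = %.3f deg/s (n = %d)\n', ...
            k, mean(vel(sel)), nnz(sel));
end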
We conducted two-way ANOVAs (factor 1: direction of current trial; factor 2: direction of previous trial; test effect of factor 2) on individual observer data, revealing significant trial history effects for zero out of nine observers' torsion in Experiment 2, two out of five observers' torsion in Experiment 3, and zero out of five observers' pursuit in Experiment 3. However, given that the majority of observers did not exhibit trial history effects, and that the overall magnitude of anticipatory torsion was very small, it is unlikely that priming played a significant role in driving anticipatory eye movements in these two experiments. Discussion In 1989, Eileen Kowler published a seminal article in which she demonstrated convincingly that "anticipatory smooth eye movement depended on both the cognitive expectations about the direction of future target motion and on the recent past history of stimulus motions" (Kowler, 1989, p. 1055). Kowler's early findings attributed anticipatory pursuit to cognitive expectations, showing that simple oculomotor learning was insufficient to explain smooth movements of the eye prior to target motion onset. These results had significant ramifications for how we view smooth pursuit eye movements: not only as the retinal-slip-driven visual response tightly linked to low-level motion processing, but also as a sensitive read-out of higher-level cognitive processes, such as predictive motion signals (Barnes, 2008; Kowler, 2012; Kowler et al., 2014). Here we show that anticipatory ocular torsion can be elicited prior to the onset of a moving and rotating visual stimulus. However, whereas anticipatory pursuit was elicited reliably across experiments employing different cue strengths, anticipatory torsion was only triggered if the same pattern of rotational motion was presented repeatedly. This anticipatory response is therefore more likely to be driven by low-level learning or adaptive processes, and not by higher-level cognitive processes. Symbolic cues, such as arrow cues (Experiment 2) or barrier cues (Experiment 3) indicating an upcoming direction, require conscious higher-level decoding of the cue's meaning, a cognitive process that appears to be decoupled from the control of ocular torsion. The cues' differential potential in driving anticipatory pursuit and torsion indicates that these two types of anticipatory responses are at least partly decoupled and controlled separately. There are several preliminary findings in the literature indicating that torsion might be under cognitive control, and our findings are in conflict with these reports. Balliet and Nakayama (1978) report that torsional eye movements can be produced at will and initiated in the complete absence of a vestibular or visual stimulus. This finding indicates the plasticity of the torsional system and its capacity for learning. However, these results were obtained in only three subjects and after many hours of training. Pashler and colleagues (2006) found that the eyes produced ocular torsion when a large sample of observers (n = 33) attended to a five-letter word rotated CW or CCW by 15° to 45°. It is important to note that torsion was not directly assessed in this study. Instead, observers were asked to adjust a reference line to match the tilt of an afterimage produced by the rotated word; tilt of the reference line was taken as evidence that the eye must have rotated.
Stevenson and colleagues (2016) assessed ocular torsion using scleral search coils (an invasive technique with high accuracy and precision) in response to a rotating stimulus that contained different frequency components in the center and periphery. The authors show that cycloversion (when both eyes rotate in the same direction) was modulated by attention, that is, higher-amplitude torsion in the direction of the attended versus the unattended frequency component. This effect was present in average results for six observers, but was based on attentional modulation found in only three observers; the other three observers' torsion was not, or was only mildly, modulated by attention. Taken together, these three studies indicate that sustained torsional eye movements might be influenced by cognitive factors, but these reports require replication with larger sample sizes or detailed eye movement measurement. Our results are consistent with the view that torsional eye movements are not purely reflexive or the mere byproduct of a gaze shift, as originally indicated by Donders' and Listing's laws. Instead, torsion might be susceptible to learning or adaptation to a given rotational motion direction. Yet, in comparison with anticipatory horizontal pursuit, anticipatory torsion does not seem to be under much cognitive control. Anticipatory smooth pursuit is commonly associated with activity in frontal brain areas, such as the frontal eye fields (Macavoy, Gottlieb, & Bruce, 1991; Fukushima, Yamanobe, Shinmei, & Fukushima, 2002), and in particular with the supplementary eye fields (Heinen & Liu, 1997; Missal & Heinen, 2004; Kim, Badler, & Heinen, 2005). However, there is no direct evidence that signals from these frontal cortical brain areas directly mediate the descending signals to the brainstem and cerebellum that are well-known to guide ocular torsion. Our findings also indicate a link between horizontal and torsional components of pursuit. Observers in our study initiated horizontal pursuit up to 200 ms prior to stimulus motion onset. This effect was stronger for baseline (no rotation) than for rotation conditions in Experiment 1, indicating that the pursuit system takes torsional eye rotation into account when computing anticipatory horizontal pursuit velocity. Our Experiments 2 and 3 provided further evidence for this link by showing similar magnitude of anticipatory pursuit across conditions in the absence of anticipatory torsion. By contrast, Murdison, Paré-Bingley, and Blohm (2013) showed that eye movement signals that result from ocular counterroll during head rotation were not taken into account when making an anticipatory pursuit movement. These authors conclude that ocular torsion is not integrated with velocity memory signals. There are several important differences between Murdison and colleagues' (2013) paradigm and our present study that could explain this discrepancy. Although torsion and pursuit were elicited by different signals in the Murdison study (vestibular signals for torsion and visual signals for pursuit), both response components were driven by the same visual stimulus in our study, resulting in torsional velocity integration in pursuit. It is noteworthy that these integration effects were observed despite the small magnitude of torsion in general, and of anticipatory torsion in particular. Visually induced torsion typically has a gain of <0.1 (Sheliga et al., 2009), similar to what we observed here.
Yet, these tiny responses appear to impact anticipatory horizontal pursuit, and might contribute to the perception of rotational motion illusions (Wu & Spering, 2019). Limitations The interpretation of our findings is limited by several factors, most notably by the small magnitude of the movement under study, and by the overall small effect sizes, even when anticipatory torsion was elicited in Experiment 1. Because of this, some experimental manipulations are not feasible. For example, it would be interesting to present moving stimuli with completely randomized translational and rotational motion directions, without a cue, to examine the isolated effects of trial history as has been done in the past for anticipatory pursuit (Kowler, 1989; Heinen et al., 2005). Yet, even a highly salient barrier cue did not reliably trigger anticipatory torsion in our experiments, rendering it unlikely that anticipatory torsion would survive complete randomization. Further, it is noteworthy that the onset of anticipatory torsion in Experiment 1 was very early. A difference between conditions could already be observed 200 ms before stimulus onset, prior to the onset of anticipatory pursuit. It is possible that torsional anticipation was not strictly time-locked to the stimulus onset, but resulted from a shift in baseline torsional activity in preparation for the upcoming stimulus. We cannot rule out this alternative explanation, although it is interesting that there was no such early baseline activity in Experiments 2 and 3. Notwithstanding the possibility of this alternative explanation, it is important to note that even a potential shift in baseline torsion occurred prior to stimulus onset, and can therefore be interpreted as being part of an anticipatory response. Conclusion Taken together, our results emphasize important differences and similarities between the pursuit and the torsional system. Smooth pursuit eye movements are visually induced but can be modulated by a large number of cognitive factors, such as expectation, attention, and reward (Barnes, 2008). Torsional eye movements, although susceptible to habit or potentiation due to trial sequence, appear less cognitively controlled. These findings have important implications for our understanding of the brain mechanisms underlying the integration of both responses, as well as the impact of these eye movements on visual perception.
2020-02-27T09:33:49.494Z
2020-02-01T00:00:00.000
{ "year": 2020, "sha1": "8f0138c114a2797a6a6901823b11728ef6ed477f", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1167/jov.20.2.4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d49b1f0578467a8ec4bc607f85e0b5c775665ad8", "s2fieldsofstudy": [ "Psychology", "Biology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
257273829
pes2o/s2orc
v3-fos-license
The Role of Serum Anti-Mullerian Hormone Measurement in the Diagnosis of Polycystic Ovary Syndrome Polycystic ovary syndrome (PCOS) is a common endocrinological disorder in women with significant reproductive, metabolic, and psychological health implications. The lack of a specific diagnostic test poses challenges in making the diagnosis of PCOS, resulting in underdiagnosis and undertreatment. Anti-Mullerian hormone (AMH) synthesized by the pre-antral and small antral ovarian follicles appears to play an important role in the pathophysiology of PCOS, and serum AMH levels are often elevated in women with PCOS. The aim of this review is to inform the possibility of utilizing anti-Mullerian hormone either as a diagnostic test for PCOS or as an alternative diagnostic criterion in place of polycystic ovarian morphology, hyperandrogenism, and oligo-anovulation. Increased levels of serum AMH correlate highly with PCOS, polycystic ovarian morphology, hyperandrogenism, and oligo/amenorrhea. Additionally, serum AMH has high diagnostic accuracy as an isolated marker for PCOS or as a replacement for polycystic ovarian morphology. Introduction Polycystic ovary syndrome is the most common endocrine disorder in women of reproductive age, with an estimated prevalence of 8-13% [1]. It is also the most common cause of anovulatory infertility [2]. It is a heterogeneous disorder with multiple different phenotypes and affects both adolescent and adult women. The syndrome was first described by two gynecologists, Irving Stein and Michael Leventhal, in 1935 [3], and there is no single diagnostic test available for PCOS to date. The syndrome encompasses gynecological symptoms such as oligomenorrhea, amenorrhea, infertility; dermatological symptoms such as hirsutism, acne, alopecia; and metabolic complications such as prediabetes, type 2 diabetes [4], obesity, dyslipidemia [5], non-alcoholic fatty liver disease (NAFLD) [6], metabolic syndrome [7], and obstructive sleep apnea [8]. There is a high prevalence of anxiety and depression in women with PCOS [9]. Women with PCOS are also at higher risk of developing endometrial hyperplasia and endometrial cancer [10,11]. The term polycystic ovary syndrome is considered as a misnomer as there are no epithelial cysts in the ovaries and the polycystic appearance comes from the antral follicles [12]. Not only is the term confusing, the pathophysiology of polycystic ovary syndrome is also complex and involves reproductive and metabolic dysfunction [13]. Aberrant GnRH pulsatility [14], abnormal gonadotropin secretion [15], and ovulatory dysfunction are some of the factors involved in the pathogenesis of PCOS in addition to hyperandrogenism and insulin resistance. Low-grade chronic inflammation with high levels of C reactive protein (CRP), endothelial dysfunction [16], and high oxidative stress [17] contribute further to the complexity of pathogenesis of polycystic ovary syndrome. Diagnostic Criteria for PCOS Since the pathogenesis is not fully understood, the diagnosis of PCOS is also neither simple nor straightforward, leading to the development of several sets of diagnostic criteria ( Figure 1). Although PCOS can present with multiple different symptomatology, at a National Institutes of Health (NIH)-sponsored conference on PCOS in 1990, attendees voted highly for hyperandrogenism and chronic anovulation as potential diagnostic features for PCOS [18]. 
Thus, the NIH criteria proposed in 1990 require the presence of oligomenorrhea and hyperandrogenism to make a diagnosis of PCOS while excluding other causes of hyperandrogenism and oligomenorrhea [19]. The common differentials are hyperprolactinemia, thyroid dysfunction, late-onset non-classic congenital adrenal hyperplasia, Cushing's syndrome, primary ovarian insufficiency, hypothalamic amenorrhea, pregnancy, etc. In 2003, at the Rotterdam consensus meeting, polycystic ovarian morphology on ultrasound was added to the above criteria considering the heterogeneous nature of PCOS and to reduce delay in diagnosis, which in turn can lead to delayed treatment [20]. In 2006, the Androgen Excess and PCOS Society defined PCOS as a disorder of hyperandrogenism and oligomenorrhea (or) polycystic ovarian morphology (or) both [21]. Exclusion of other conditions that mimic PCOS involves performing specific diagnostic tests for each of the differentials as appropriate. Challenges with the Current Diagnostic Criteria Primarily, having different sets of criteria is a source of confusion, not only for clinicians but also for patients. In addition, there are multiple challenges with each of the three diagnostic elements of PCOS: hyperandrogenism, oligomenorrhea, and polycystic ovarian morphology on ultrasound. Clinically, hyperandrogenism can manifest as hirsutism, acne, or alopecia. Acne cannot be considered as a diagnostic criterion in adolescents unless it is severe. While modified Ferriman-Gallwey (mFG) and Ludwig visual scoring systems are used to assess the severity of hirsutism and alopecia, respectively, no such universal visual scoring system is available to assess the severity of acne.
There is also marked ethnic variation in the manifestation of hirsutism, requiring consideration of different cutoffs for different ethnicities [22], and women self-treating hirsutism can further complicate the clinical assessment. Biochemically, there are issues with the laboratory assays available for measuring free and total testosterone concentrations. Nearly 99% of serum testosterone is bound to binding proteins such as sex hormone-binding globulin (SHGB) and albumin, and only 1% circulates in the unbound free form, which is the active form of testosterone. Though free testosterone is best measured by equilibrium dialysis, it is not widely available in many laboratories because of its high cost and the labor-intensive process involved. The commonly available immunoassays that directly measure free testosterone levels are less accurate. Sex hormone-binding globulin (SHBG) levels can be low in women with PCOS which can affect the measurement of serum total testosterone and calculation of free androgen index (FAI) and bioavailable testosterone [23]. In some women with polycystic ovary syndrome, androgens other than testosterone such as DHEAS (dehydroepiandrosterone sulfate) or androstenedione can be the only hormone elevated. Oligomenorrhea is physiological around menarche [24] and menopause, and a history of regular menstrual cycles does not always exclude oligo-ovulation. The definition of oligomenorrhea varies depending upon where the woman is in the reproductive spectrum. One to three years post menarche, adolescents with menstrual cycles < 21 or >45 days (or) >90 days for any one cycle are considered oligomenorrheic. In women three years post menarche up to menopause, adults with menstrual cycles < 21 or >35 days (or) less than eight cycles per year are considered oligomenorrheic [25]. The polycystic ovarian morphology criterion cannot be used in women with gynecological age of less than eight years nor in perimenopausal women. To detect ≥20 follicles per ovary in order to fulfill the criterion, a higher frequency 8 MHz transducer is required [25,26]. Though transvaginal ultrasound accurately measures ovarian volume and antral follicle count, it is not an appropriate procedure for women who have never been sexually active. Transabdominal ultrasound poses difficulty in accurately detecting antral follicles, especially in women with obesity. So, clinicians continue to face multiple challenges in making the diagnosis of PCOS, and there has been a constant search for a better or alternative diagnostic test or diagnostic criterion. Anti-Mullerian Hormone AMH is a unique dimeric glycoprotein that was first described by Alfred Jost in the 1940s [27]. It belongs to the TGFβ super family [28,29] and plays an important role in sexual differentiation [30] and regulation of folliculogenesis [31]. It derives its name because of its ability to inhibit the development of mullerian duct structures in the male fetuses [30]. AMH is composed of two identical glycoprotein subunits, each of which has a larger N-terminal prodomain and a smaller C-terminal mature signaling domain, both connected by disulfide bridges. Pre-proAMH is a precursor molecule that undergoes proteolytic cleavage, producing biologically inactive proAMH which then yields the biologically active form of AMH [32]. AMH attaches to specific receptors on cells of target tissues. 
The C-terminal of AMH binds to the extracellular domain of AMH type 1 and type 2 serine/threonine kinase receptors, producing an intracellular Smad signal, which in turn regulates target gene transcription [33,34]. Anti-Mullerian hormone plays a major role in sex differentiation ( Figure 2). The gonads are indifferent until the sixth week of fetal life. Genetic sex is determined by the sex chromosomes. The sex determining region of the Y chromosome (SRY) in male (XY) fetus allows the indifferent gonad to develop into testes. The Leydig cells secrete testosterone that stimulates development of Wolffian duct structures and Sertoli cells secrete Anti-mullerian hormone that suppresses the development of Mullerian duct structures. In female (XX) fetuses, the absence of SRY allows the gonads to develop into ovaries, and the absence of AMH in early fetal life allows Mullerian ducts to develop into fallopian tubes, uterus, cervix, and the upper third of the vagina [35]. The gene for AMH is on the short arm of chromosome 19 [36], and the gene for the AMH receptor type 2 is on the long arm of chromosome 12 [37]. AMH expression is observed in granulosa cells of primary, secondary, and small (<4 mm diameter) antral follicles while it is absent in primordial and larger (>8 mm diameter) antral follicles [38]. Primordial follicles start appearing in utero around 15 weeks of gestation in female fetuses, and during growth and development, most follicles become atretic through a hormonally regulated ligand-receptor system and associated granulosa cell apoptosis [39]. The remaining stay dormant until actively recruited into the growing pool for maturation and ovulation [40]. Primary follicles have a single layer of cuboidal granulosa cells surrounding the oocyte, and early secondary follicles acquire a second layer. The antral follicles (also called tertiary follicles) derive their name from having a fluid-filled space called the antrum and have multiple layers of granulosa cells [41]. Follicle-stimulating hormone (FSH) [42] and intraovarian regulators such as kit-ligand (SCF stem cell factor) [43] and neurotrophins [44] play an important role in initial primordial follicle recruitment. Anti-Mullerian hormone regulates ovarian folliculogenesis by inhibiting the recruitment of primordial follicles from the pool. It is considered as a marker of ovarian reserve. AMH levels decrease with chronological age and after menopause [45,46]. AMH elimination from the body follows first order pharmacokinetics and reaches approximately 90% after four days and 95% after approximately five days, and the level becomes undetectable eight days after salpingo-oophorectomy in premenopausal women. The mean terminal half-life of AMH is 27.6 ± 0.8 h with a range from 12.3 to 39.9 h in single cases [47]. Anti-Mullerian Hormone and PCOS In women with PCOS, there is increased pulsatile frequency of GnRH (gonadotropinreleasing hormone) that stimulates LH (luteinizing hormone), which in turn increases ovarian theca cell production of androgens [18]. Additionally, there is augmented androgen production due to increased activity of multiple steroidogenic enzymes in theca cells [48]. Relative FSH deficiency impairs aromatization of androgens [18]. Women with PCOS are noted to have higher levels of Anti-Mullerian hormone [49]. 
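As an aside, the elimination kinetics quoted above are easy to verify: with first-order elimination and the reported mean half-life, the fraction removed after time t is 1 - 2^(-t/t_half). A minimal MATLAB check (our illustration):

halfLife = 27.6;                          % h, mean terminal half-life [47]
tDays    = [4 5];                         % days after removal of the source
fracEliminated = 1 - 2.^(-(tDays*24)/halfLife);
fprintf('eliminated after %d days: %.1f%%\n', [tDays; 100*fracEliminated]);
% prints ~91.0% at 4 days and ~95.1% at 5 days, matching the approximate
% 90% and 95% figures quoted in the text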
Serum AMH levels are significantly higher in normogonadotropic anovulatory women (World Health Organization class 2, or WHO2), especially those with polycystic ovarian morphology compared to age-matched normoovulatory premenopausal women [50]. Mean AMH levels in in vitro ovarian granulosa cells from anovulatory PCOS women are 75-fold higher compared to in vitro ovarian granulosa cells from age matched normal ovulatory controls [51]. In cells from PCOS women, luteinizing hormone (LH) increased AMH and follicle-stimulating hormone (FSH) decreased AMH [51]. In PCOS women, mean follicular fluid AMH levels are 60-fold higher than their serum AMH levels [52]. Follicular fluid aspirated from 3-4 size matched 4-8 mm follicles from women with anovulatory PCOS has significantly higher AMH levels compared to age matched normally ovulating women, raising the possibility of increased AMH production per follicle [52]. AMH levels are reported to correlate with testosterone, free androgen index, LH, mean ovarian volume, and follicle number on transvaginal ultrasound in WHO2 patients [50].
Higher expression of AMH and AMH receptor 2 is noted in granulosa cells from PCOS women [53]. In human granulosa lutein cells, AMH inhibits FSH-induced adenylyl cyclase activation, expression of aromatase, and production of estradiol, and it reduces messenger RNA expression of FSH receptors [54,55]. However, there is no effect noted on basal aromatase expression [55,56]. AMH receptor 2 is also expressed in more than half of hypothalamic GnRH neurons in adults and during embryonic development. Increased GnRH secretion and resultant increased LH secretion are noted in mice in response to exogenous AMH, suggesting a role for AMH in regulation of GnRH neuron excitability and secretion [57]. In animal studies, AMH 2 receptor expression is seen in endothelial cells and specialized hypothalamic glial cells called tanycytes that are known to regulate GnRH secretion [58,59]. A high concentration of AMH in anovulatory PCOS could be the cause of an exaggerated inhibitory effect on follicular growth [54]. AMH also inhibits gonadotropin-stimulated CYP19A and P450scc gene expression in cultures of human granulosa lutein cells, suggesting that it may have a regulatory role in ovarian steroidogenesis as well [56]. One hypothesis proposed is that hyperandrogenism in PCOS hypersensitizes granulosa cells to FSH, resulting in excessive preantral follicular growth and AMH expression, which in turn inhibits FSH-induced aromatase expression (Figure 3). This results in altered antral follicular growth causing anovulation [60]. Increased AMH expression is also noted in response to insulin in luteinized granulosa cells from PCOS women. However, adding AMH decreases insulin-promoted aromatase expression [53]. All of the above imply that the high local concentration of AMH in addition to hyperandrogenism due to intrinsic theca cell dysfunction [48,61] could alter the follicular microenvironment and play an important role in the pathogenesis of PCOS [60].
Anti-Mullerian Hormone Levels and Assays AMH is secreted by the testicular Sertoli cells in early fetal life in male fetuses whereas the ovarian granulosa cells in female fetuses start secreting AMH around the 36th week of gestation [62]. Throughout the life cycle, AMH is typically elevated in males compared to females. AMH level in the first month of neonatal life is significantly higher in boys (median 57.2 ng/mL; 5th-95th percentile 23.8-124) than girls (median < 0.4 ng/mL; 5th-95th percentile < 0.4-1.3), and AMH continues to increase during the first year of life [63]. Maximum AMH is attained at age 15.8 in girls, and plateaus until age 25, then declines with age and becomes undetectable after menopause [46,64]. Based on a systematic review and meta-analysis of 11 studies, serum AMH level is higher in the follicular phase compared to the luteal phase in women with regular menstrual cycles [65]. AMH levels decrease during the course of pregnancy (approximate median values 1.69, 0.8, and 0.5 ng/mL during the first, second, and third trimesters, respectively) and after delivery, then levels increase over the next four postpartum days [66]. In reproductive age group women with PCOS and morbid obesity (BMI > 40 kg/m²), serum AMH level is significantly higher compared to weight-matched controls without PCOS [67]. However, in another study, obese PCOS women have significantly lower AMH levels compared to lean PCOS women, in each of the A, B, and C phenotypes (A: hyperandrogenism, oligo-anovulation, and polycystic ovarian morphology; B: hyperandrogenism and oligo-anovulation; C: hyperandrogenism and polycystic ovarian morphology) of PCOS [68]. AMH can be measured in the serum as well as the follicular fluid, and immunoassays, both manual and automated, have been available for more than two decades. In women with regular ovulatory menstrual cycles, there are multiple forms of variability for the same AMH assay, including inter-participant, inter-cycle, and intra-cycle variability [69]. When three different AMH assays (Gen II-Beckman Coulter, picoAMH-Ansh Labs, and Elecsys-Roche) were compared, the inter-assay correlation in women with PCOS was stronger in the low (<2.8 ng/mL) and high (>7.04 ng/mL) range serum AMH level subgroups [70]. The variability in antibody specificity, the presence of different biologically active AMH isoforms, the analytical interference of some assays by complements, and the unavailability of an international standard to calibrate are still some of the laboratory issues associated with AMH measurement [71]. In 2021, though the commutability data did not support 16/190 (with a content of 489 ng/ampoule) to be an international reference standard, it has been accepted as a WHO reference reagent for human recombinant AMH [72]. Since AMH levels are approximately 23% lower in women using oral hormonal contraceptives compared to non-users [73], a washout period of 2-3 months off OCPs (oral contraceptive pills) may be required to accurately assess AMH levels.
Additionally, pretreatment with medications such as Metformin can decrease AMH levels in women with PCOS [74]. AMH as an Alternative to Polycystic Ovarian Morphology (PCOM) AMH has been suggested as an alternative for polycystic ovarian morphology (PCOM) in the diagnosis of PCOS, given that there are higher levels of AMH in women with PCOM compared to those with normal ovaries [75][76][77][78]. The two components of polycystic ovarian morphology are antral follicle count (also frequently referred to as follicle number per ovary) and ovarian volume. Serum AMH correlates specifically with antral follicle count and follicle number per ovary in the context of PCOS [75]. This predictive value of AMH for the components of PCOM can be applied to the current criteria for diagnosing PCOS. We examined the studies reviewed by Anand et al. in 2022 for those that tested serum AMH as a replacement for PCOM in the Rotterdam criteria. Thus, a diagnosis of PCOS was based on having two of three features of either oligo/amenorrhea, hyperandrogenism, or AMH above a cut-off threshold. These studies demonstrated that replacement of PCOM in the Rotterdam criteria by serum AMH level can accurately predict the presence of PCOS, with area under the ROC curve (AUC) ranging from 0.927-0.994, sensitivities of 78-100%, and specificities of 88-100% (Table 1) [78][79][80][81][82][83]. While these studies examined the predictive value of serum AMH using different cut-offs, these data suggest a possible correlation of serum levels of AMH with PCOM and PCOS. Anti-Mullerian hormone appears to be more sensitive in women with classic anovulatory PCOS compared to the ovulatory and nonhyperandrogenic phenotypes [84]. Since AMH is secreted by granulosa cells of pre-antral and small antral ovarian follicles, patients with PCOM may have higher serum AMH levels. Thus, serum AMH has a potential value as an alternative for the detection of PCOM in clinical practice to diagnose PCOS. AMH as an Alternative to Hyperandrogenism Given its utility as a marker of PCOS, serum AMH may be considered as an alternative to hyperandrogenism. Various factors lead to hyperandrogenism in PCOS, which in turn causes higher serum AMH. Higher levels of serum androgens, specifically total testosterone, are associated with increased production of AMH in PCOS patients [49,[85][86][87][88][89]. The average AMH in women with PCOS with all three features of Rotterdam criteria has been shown to be higher than those with only features of oligo/amenorrhea and PCOM (without hyperandrogenism), suggesting that serum AMH correlates strongly with hyperandrogenism [90]. However, others have found that a higher cut-off of AMH was necessary when used as a substitute for hyperandrogenism in criteria for PCOS. In a study of 211 Caucasian women, a threshold of 45 pmol/L (6.3 ng/mL) but not 29 pmol/L (4.1 ng/mL) for AMH substituted for hyperandrogenism resulted in effective diagnosis of PCOS [75]. While AMH could be evaluated as a potential replacement for hyperandrogenism in PCOS criteria, the benefit of this is unclear given a higher threshold value for AMH is necessary to allow for diagnosis. Additionally, the relationship between AMH and testosterone and other biochemical markers of hyperandrogenism remains unclear based on current evidence [85][86][87][88][89][90][91][92]. No studies or analyses to date have specifically investigated the diagnostic accuracy of AMH in replacing hyperandrogenism for PCOS. 
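To make the modified criteria above concrete, here is a minimal MATLAB sketch (our illustration only, not a validated clinical tool) of the two-of-three Rotterdam logic with serum AMH above a cut-off substituted for PCOM. The cut-off and patient values are placeholders: the studies cited above used different thresholds, and PCOS mimics must be excluded before applying the criteria.

% Modified Rotterdam logic: PCOS if at least 2 of 3 criteria are met, with
% serum AMH above a study-specific cut-off standing in for PCOM.
amhCutoff        = 4.8;      % ng/mL; placeholder, not a recommendation
oligoAnovulation = true;     % hypothetical patient: oligo/amenorrhea present
hyperandrogenism = false;    % no clinical or biochemical hyperandrogenism
serumAMH         = 6.1;      % ng/mL, hypothetical measurement

nMet   = oligoAnovulation + hyperandrogenism + (serumAMH > amhCutoff);
isPCOS = nMet >= 2;          % assumes mimicking conditions already excluded
fprintf('criteria met: %d of 3 -> modified Rotterdam positive: %d\n', ...
        nMet, isPCOS);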
AMH as an Alternative to Oligo/Amenorrhea (OA) Serum AMH may also be a marker of ovulatory dysfunction, including both oligomenorrhea and amenorrhea. Fewer studies have explored the possibility of using AMH as an alternative to oligo/amenorrhea criterion in PCOS. There are some reports of increased serum AMH levels associated with oligo/amenorrhea [75,79,93]. One study of 148 women demonstrated that substituting AMH with a cut-off of 3.19 ng/mL for oligo/amenorrhea in the Rotterdam criteria could accurately diagnose PCOS, with an AUC 0.938, sensitivity 81%, and specificity 100% [79]. Additionally, AMH has the advantage of being able to distinguish oligo/amenorrhea caused by PCOS versus premature ovarian failure or hypergonadotropic hypogonadism [94]. For younger women near the age of menarche, polycystic ovarian morphology cannot be assessed for 8 years and hyperandrogenism is difficult to distinguish given the extensive presence of acne in pubescent females [91]. Thus, given the difficulties of diagnosing PCOS with other criteria, a reliable predictor of oligo/amenorrhea in the form of serum AMH could be helpful diagnostically in this age group. AMH as a Predictor of PCOS The pathogenesis of PCOS is closely linked with AMH, raising the question of whether AMH can be used on its own as a predictor of PCOS. Prior studies have found AMH was not suitable as a screening tool for PCOS independent of other diagnostic criteria, as it may lead to inaccurate diagnoses for women who only have two features of PCOS or women without PCOS who have one of the features of the Rotterdam criteria [80]. However, in a recent review and meta-analysis of the diagnostic accuracy of serum AMH in PCOS including 41 studies with a total of 13,509 subjects, Anand et al. found that AMH on its own could predict PCOS with a sensitivity of 78%, specificity of 87%, and area under the ROC curve (AUC) of 0.89 [95]. Further subgroup analysis revealed a cut-off value of 4.8 ng/mL was as accurate as a higher cut-off value for PCOS detection. Overall, this suggests that AMH has potential as a reliable and effective diagnostic test for PCOS. Additionally, further support for the utility of serum AMH comes from studies showing the rate of age-related decline in AMH being lower in PCOS compared to non-PCOS women [96], indicating that it may be a valuable marker across multiple age groups. Conclusions Since the diagnosis of PCOS is not straightforward and based on several sets of criteria and the exclusion of other differentials, there is an ongoing search for a reliable diagnostic test for PCOS. Anti-Mullerian hormone secreted by the ovarian pre-antral and small antral follicles seems to have an important role in the pathophysiology of PCOS, and there is a correlation between serum AMH level and antral follicle count on ultrasound. In this review, we discussed the studies that investigated the sensitivity and specificity of AMH both as a predictor for PCOS and as an alternative to each of Rotterdam's diagnostic criteria: polycystic ovarian morphology on ultrasound, oligo/amenorrhea, and hyperandrogenism. Overall, serum AMH alone or as a replacement for PCOM may have a high sensitivity and specificity to diagnose PCOS. However, there remains limited support for AMH as a replacement for oligo/amenorrhea or hyperandrogenism. 
Assessing the feasibility of replacing AMH for oligo/amenorrhea or hyperandrogenism with further studies, addressing the current laboratory and other challenges of AMH measurement, and further clarifying the pathophysiology of PCOS may allow serum AMH to be a useful diagnostic test for women with polycystic ovary syndrome. Conflicts of Interest: The authors declare no conflict of interest.
2023-03-02T16:22:58.891Z
2023-02-27T00:00:00.000
{ "year": 2023, "sha1": "b2c97df26840f8f1392b9668cd3fe1db2da60e21", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4418/13/5/907/pdf?version=1677510253", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "521d2c04e3bba9b758dcb916c85e0fc21b97bddd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233212035
pes2o/s2orc
v3-fos-license
An 83-year-old male with bronchopleural fistula and empyema successfully treated with multidisciplinary management of thoracostomy, endoscopic, and surgical treatment: a case report Bronchopleural fistula (BPF) with empyema is a severe complication in patients undergoing lobectomy or pneumonectomy and is associated with high morbidity and mortality rates. Although a wide variety of treatment options exist, refractory cases with larger fistulas are still difficult to cure, especially in elderly patients. Here, we report a case of an 83-year-old man with stage I squamous cell lung carcinoma who underwent minimally invasive right lower lobectomy. After an initially uneventful postoperative course, he was readmitted to our hospital due to the progression of severe cough with fever after lung resection. Chest computed tomography (CT) showed an empyema cavity containing pleural effusion and a drainage tube in the right lower thorax. Bronchoscopy confirmed the presence of a fistula between the right lower bronchial stump and the pleural cavity. On the basis of his clinical symptoms and these imaging findings, the patient was diagnosed with BPF with empyema after lobectomy. He was successfully treated with multidisciplinary management including adequate pleural drainage by open-window thoracostomy, closure of the BPF by endoscopic therapy using an Amplatzer device, and complete obliteration of the empyema cavity with a pedicled muscle flap. Multidisciplinary management combining thoracostomy, endoscopic therapy, and pedicled muscle flap transfer is a safe and effective treatment for elderly patients with larger fistulas and empyema. Introduction Bronchopleural fistula (BPF) with empyema is an uncommon but severe complication in patients undergoing lobectomy or pneumonectomy, and has high rates of morbidity and mortality (1,2). Successful management of BPF remains challenging due to difficulties relating to infection control and the frequent redevelopment of residual space and fistula. Despite the wide variety of treatment options for BPF, curing refractory cases of larger fistulas is still difficult, especially in elderly patients (3,4). In this report, we describe the case of an 83-year-old male with BPF and empyema who was successfully treated with multidisciplinary management including open-window thoracostomy, endoscopic Amplatzer device placement, and pedicled muscle flap transfer. We present the following case in accordance with the CARE reporting checklist (available at http://dx.doi.org/10.21037/atm-20-3053). Case presentation All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and national research committees, and with the Declaration of Helsinki (as revised in 2013). Written informed consent was obtained from the patient for publication of this manuscript and any accompanying images. A healthy 83-year-old man with a solitary pulmonary nodule, later identified as pT1bN0M0 lung squamous cell carcinoma, underwent minimally invasive right lower lobectomy in February 2014 (Figure 1). Three years after undergoing lung resection, the patient was admitted to our hospital due to the progression of severe cough with fever. Although the patient's fever was relieved by the administration of antibiotics and insertion of closed thoracic drainage, his cough continued, and a large air leak from the tube was observed (Figure 1).
Chest computed tomography (CT) showed an empyema cavity measuring 9 cm × 7 cm × 6 cm containing pleural effusion, as well as a drainage tube in the right lower thorax. Bronchoscopy confirmed the presence of a fistula between the right lower bronchial stump and the pleural cavity. On the basis of his clinical symptoms and imaging findings, the patient was diagnosed with right lower BPF with empyema after lobectomy, and he subsequently underwent endoscopic placement of a covered bronchial stent (Boston Scientific Corporation, Natick, MA, USA) in May 2017. Thereafter, the cough decreased, and the air leak from the tube was reduced. However, 1 week later, air leakage and purulent fluid discharge from the tube increased again, and further investigation with chest CT revealed that the stent had migrated into the empyema cavity (Figure 2). On April 7, 2020, because of these inadequate results, the patient underwent rib resection thoracostomy; the posterior aspects of the 7th and 8th ribs were resected. The stent and purulent pleural effusion in the thoracic cavity were removed by evacuation and debridement, and positive-pressure ventilation revealed a large air leak. Then, 2 drainage tubes were inserted into the cavity, the wound was closed, and the cavity was irrigated with an antibiotic solution through the tubes. Culturing of the drained pus identified the pathogens Klebsiella pneumoniae and Pseudomonas aeruginosa. Seven days later, he began to experience marked dyspnea, and subcutaneous emphysema developed; his closed drainage was subsequently converted to open-window thoracostomy (Figure 1), with regular dressing changes and application of a compressive bandage during the following months. After his general condition had improved, the patient underwent repeat endoscopic therapy, in which an Amplatzer atrial septal defect occluder (Shandong Visee Medical Devices Co., Ltd., Shandong, China) was placed between the proximal and distal ends of the stump so that its central waist and the two end discs almost completely closed the fistula from both sides of the defect. The large air leak was markedly reduced with drainage, and the patient's condition gradually stabilized. When healthy granulation tissue appeared and cultures detected no infection in the thoracic cavity, muscle flap transposition was performed. During the operation, a 20 cm × 15 cm combined pedicled muscle flap of the latissimus dorsi and serratus anterior muscles was harvested. The tip of the combined flap was sutured and fixed to the mid-lower mediastinum, and the remainder was transposed to almost completely obliterate the empyema cavity, while part of the sacrospinalis muscle was transferred to fill the posterior residual space (Figure 3A). The patient was discharged on postoperative day 58 (Figure 3B). Chest CT after surgery revealed that the BPF and empyema cavity had been successfully obliterated (Figure 4). The patient has remained healthy, and no recurrence of fistula with empyema was evident at the 5-month follow-up. Discussion BPF is an abnormal communication between the pleural space and the bronchial tree. It is a potentially fatal postoperative complication of pulmonary resection and a complex challenge for thoracic surgeons, as many patients with fistula ultimately develop refractory empyema. Managing refractory empyema is difficult, especially in elderly individuals, and the quality of life and survival of patients are severely impacted (1,2).
BPF has a reported incidence of 1.5-28% after pulmonary resection, and this variability seemingly depends on the etiology, the surgical technique, and the experience of the surgeon (3,4). The etiology of BPF with empyema is still not fully understood. Local risk factors include the stump closure technique, a long bronchial stump, residual carcinoma at the bronchial margin, disruption of the bronchial blood supply, extended resection, right-sided resection, pneumonectomy, the presence of empyema, and high-dose preoperative radiation therapy, although no single factor has been definitively identified. Systemic factors include the patient's nutritional status, diabetes mellitus, steroid use, the presence of sepsis, and preoperative chemotherapy (5,6). BPF typically presents 1 to 2 weeks after lung resection, manifesting as fever, productive cough, purulent or hemorrhagic sputum, respiratory distress, and occasionally as sepsis and acute respiratory failure. Patients typically develop subacute malaise, flu-like symptoms, low-grade fever, and weight loss, which can lead to persistent contamination and infection of the pleural space, trapped lung, aspiration into the unaffected lung, and, in severe cases, death. The mortality rate associated with BPF after pneumonectomy has been reported to be 20-70% (5,6). Diagnosis of BPF is usually confirmed by bacteriological study, chest CT, or bronchoscopy. Following the development of BPF with empyema, proper and prompt management is essential to reduce the risk of associated mortality. Curing larger BPFs in elderly patients remains challenging despite the wide variety of treatment options. Management options include thoracostomy, endoscopy, surgery, or a combination of methods, and the selection of a particular therapy is based on the clinical status, the size of the BPF, and the acuity of the empyema (2-6). Adequate pleural drainage remains the cornerstone of empyema treatment. If BPF with empyema is diagnosed, it should be drained promptly to prevent aspiration pneumonia and life-threatening sequelae such as tension pneumothorax and respiratory failure. Closed chest tube drainage has been advocated as the first step in the treatment of BPF with empyema; however, it has a high failure rate, often cannot control BPF with empyema effectively, and increases the risk of aspiration into the contralateral lung and of death. Open-window thoracostomy has proven very useful in the treatment of BPF and is credited with saving many lives. It is a simple technique that may be performed even on extremely unstable patients, like the patient in this case study (4,5). Endoscopic therapy should be performed to promote fistula closure after thoracic drainage, as it is simple, safe, and less invasive than surgical treatment. Larger BPFs cannot be treated using conventional endoscopic techniques, since neither coils nor fibrin glue is suitable for large bronchial stump fistulas owing to insufficient stability in the lesion. Despite this, many researchers have shown that Amplatzer devices are suitable for the closure of fistulas of different sizes that cannot be treated successfully with other treatment modalities, especially in high-risk patients with larger fistulas (6-8). However, failure of endoscopic treatment does not preclude subsequent successful surgical management, as it may be used as a bridge to surgical treatment, as was the case in our patient (6).
Muscle flap transfer may be used to obliterate BPF with empyema and may be applied using pedicled, free, or combined methods. Pedicled muscle flaps are ideal for filling a contaminated space because of their good blood supply and their ability to reach almost any location in the pleural space. The muscles most commonly used are the latissimus dorsi, serratus anterior, pectoralis major, pectoralis minor, and intercostals. Among these, the latissimus dorsi muscle flap is the largest and most reliable (9). The proximal part of the muscle can be pedicled on the thoracodorsal vessels or the serratus branch, so it can be elevated at full length and has bulky tissue that provides reliable closure of the BPF with empyema. However, elderly patients with a large pleural cavity, or those who are unsuitable for receiving a free muscle flap, can have two or more combined pedicled muscle flaps transferred simultaneously to obliterate the pleural cavity, as in the case of our patient (9). When a pedicled muscle flap cannot be used, a free musculocutaneous flap harvested from the vastus lateralis or rectus abdominis can be transposed to completely obliterate the cavity (10). A limitation of this study is that it is a single case report; another inherent limitation is its retrospective nature. As a result, it is difficult to draw firm conclusions from this case. Our patient remained healthy and experienced no recurrence of fistula with empyema for over 5 months following multidisciplinary management. In conclusion, in patients with BPF, it is crucial not only to completely control infection and occlude the fistula, but also to obliterate the empyema cavity. Therefore, multidisciplinary management combining open-window thoracostomy, endoscopic Amplatzer device placement, and pedicled muscle flap transfer is a useful option for treating older patients with larger fistulas and empyema. Our report indicates that this is a feasible and efficient method of management that can achieve promising results. Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Reproductive Biology and Endocrinology Effects of Leptin on In Vitro Maturation, Fertilization and Embryonic Cleavage after ICSI and Early Developmental Expression of Leptin (Ob) and Leptin Receptor (Ob-R) Proteins in the Horse Background: The identification of the adipocyte-derived obesity gene product, leptin (Ob), and subsequently its association with reproduction in rodents and humans led to speculation that leptin may be involved in the regulation of oocyte and preimplantation embryo development. In mice and pigs, in vitro leptin addition significantly increased meiotic resumption and promoted preimplantation embryo development in a dose-dependent manner. This study was conducted to determine whether leptin supplementation during in vitro maturation (IVM) of horse oocytes could affect their developmental capacity after fertilization by intracytoplasmic sperm injection (ICSI). Background Leptin, the product of the obesity (Ob) gene [1], predominantly synthesized by adipocytes, has been shown to be involved in the regulation of reproductive function [2], and recent studies in animal models such as the mouse, pig and cow have explored the potential of this hormone for improving in vitro oocyte maturation and embryo culture procedures. In the mouse, Kawamura et al. [3,4] demonstrated that leptin supplementation of the culture medium (10, 100 and 1000 ng/ml) promoted embryo development and increased the cell numbers of cultured blastocysts, with the effect preferentially observed in the trophectoderm. These findings raised the possibility that leptin might regulate mouse preimplantation embryo development through a paracrine pathway. In pigs, leptin addition to the oocyte maturation medium (10 and 100 ng/ml) significantly increased the proportion of oocytes reaching the metaphase II (MII) stage, elevated ooplasmic cyclin B1 protein content and enhanced embryo developmental potential, thus suggesting that leptin might play a role in both nuclear and cytoplasmic maturation [5]. During porcine oocyte maturation, leptin increased phosphorylated mitogen-activated protein kinase (MAPK) content by 2.8-fold, and leptin-stimulated oocyte maturation was blocked when leptin-induced MAPK phosphorylation was suppressed by a specific MAPK activation inhibitor, U0126, demonstrating that leptin enhances nuclear maturation via activation of the MAPK pathway [5]. Kun et al. [6] confirmed that 10 and 100 ng/ml of leptin in maturation medium enhanced porcine embryo development. These authors showed that the timing of leptin supplementation of the maturation medium had no effect on meiotic maturation of porcine oocytes. In cattle, Paula-Lopes et al. [7] showed that leptin supplementation (1 and 10 ng/ml) exerted positive effects during oocyte maturation, influencing blastocyst development, the apoptotic index in cumulus cells and transcript levels of developmentally important genes. Moreover, they demonstrated a role for cumulus cells in mediating leptin effects. These authors hypothesized that leptin might influence the synthesis and release of cumulus cell-derived factors, which reach the oocyte through gap junction coupling and/or the extracellular environment. Leptin acts via transmembrane receptors, which show structural similarity to the class I cytokine receptor family.
The leptin receptor (Ob-R) is produced in several alternatively spliced forms that have in common an extracellular domain of over 800 amino acids, a transmembrane domain of 34 amino acids and a variable intracellular domain characteristic of each isoform. These isoforms can be classified into three main classes: short (Ob-Ra), long (Ob-Rb) and secreted [8]. In the mouse, Ryan et al. [9], using immunohistochemistry, observed protein expression of the long form of the leptin receptor (Ob-Rb) in the ovary, with high intensities observed in oocytes, thecal cells and corpora lutea, and peak expression at ovulation. In the pig, Craig et al. [5] demonstrated that Ob-R is expressed in oocytes from all stages of follicular development and oocyte maturation, with the highest level of expression occurring in oocytes from medium follicles and at GVBD, indicating that its expression depends on follicular stage and oocyte maturation. In the horse, in vitro fertilization (IVF) was for a long time unsuccessful, for reasons attributed to incomplete in vitro oocyte maturation (IVM) [10], inefficient sperm capacitation [11] or changes in the oocyte zona pellucida [12,13]. In a recent study, McPartlin et al. [14] characterized stallion sperm hyperactivation and demonstrated that hyperactivation of capacitated sperm supported equine IVF. Intracytoplasmic sperm injection (ICSI) has been adopted as an alternative to conventional IVF because sperm injection eliminates problems related to sperm binding and penetration, but the complexity of oocyte maturation has not yet been overcome. ICSI is a valid tool for evaluating cleavage rates of in vitro-matured horse oocytes and ooplasmic maturation. Several studies have reported cleavage rates of 50-80% [15-18]. Unfortunately, only a small percentage of cleaved zygotes goes on to form blastocysts in culture (25-35%) [19,20]. This result may reflect the poor cytoplasmic maturation of equine oocytes matured in vitro [21]. In the literature, different culture media have been evaluated to improve the rate of equine oocyte maturation, including TCM199 [22-27], B2 [22] and Ham's F10 [28], supplemented with different concentrations of serum, hormones or follicular fluid. These conditions resulted in maturation rates varying from 20 to 85%, but none of them has increased the efficiency of IVF or ICSI. The presence of leptin and leptin receptor in equine oocytes has previously been demonstrated by an immunocytochemical study of compact cumulus oocytes examined immediately upon collection and after in vitro maturation (IVM) from fillies and from mares of light or heavy body weight breeds [29]. To our knowledge, studies on the effects of leptin on equine oocytes and embryos have not been reported to date. Since oocyte developmental competence is best assessed by the ability to undergo embryonic development [30], the present study investigated the effect of leptin supplementation of the IVM medium on maturation, fertilization and development of horse oocytes after ICSI. In addition, the developmental expression of Ob and Ob-R proteins in early embryo development was analyzed by immunocytochemical staining. Methods All chemicals were purchased from Sigma-Aldrich (Milano, Italy) unless otherwise indicated. Collection of oocytes The study was conducted in Southern Italy (41° North parallel).
Ovaries from mares of unknown reproductive history, obtained at two local abattoirs located at a maximum distance of 20 km (30 min) from the laboratory, were transported and processed for the scraping procedure as previously described [31]. Cumulus-oocyte complexes (COCs) were recovered from medium-sized follicles (0.5 to 2.5 cm in diameter) identified in the collected mural granulosa cells using a dissection microscope, and only healthy COCs, classified as having an intact, compact (Cp) or expanded (Exp) cumulus investment [24,31], were selected for culture; degenerating oocytes showing shrunken, dense or fragmented cytoplasm were recorded and discarded. The time between follicle scraping and the beginning of oocyte culture was less than 1 hour. The total time between slaughter and culture ranged from 2 to 4 hours. In vitro maturation In vitro maturation (IVM) was performed following the procedure described by Dell'Aquila et al., 2003 [31]. Medium TCM-199 with Earle's salts, buffered with 4.43 mM HEPES and 33.9 mM sodium bicarbonate and supplemented with 0.1 g/L L-glutamine, 2 mM sodium pyruvate, 2.92 mM calcium-L-lactate pentahydrate (Fluka 21175, Serva Feinbiochem GmbH & Co., Heidelberg, Germany, No. 29760) and 50 μg/mL gentamicin was used. After preparation, the pH was adjusted to 7.18 and the medium was filtered through 0.22-μm filters (No. 5003-6, Lida Manufacturing Corp., Kenosha, WI, USA) and stored refrigerated (4°C) until use for a maximum of one week. On the day of IVM, the medium was further supplemented with 20% (v/v) fetal calf serum (FCS). Then, gonadotrophins (10 μg/mL ovine FSH and 20 μg/mL ovine LH) and 1 μg/mL 17β-estradiol were added. The medium was filtered again and allowed to equilibrate for 1 hour under 5% CO2 in air before being used. Compact and expanded COCs were washed three times in the culture medium, and groups of up to 10 COCs with the same cumulus morphology were placed in 400 μL of medium per well of a four-well dish (Nunc Intermed, Roskilde, Denmark), covered with pre-equilibrated lightweight paraffin oil and cultured for 28 to 30 h at 38.5°C under 5% CO2 in air. The effects of recombinant human leptin (Sigma L-4146), added to the culture well, were tested at concentrations of 1, 10, 100 and 1000 ng/ml, which were reported to be effective in stimulating oocyte maturation in dose-response experiments on porcine [5,6] and bovine [7] oocytes. Oocytes cultured in the absence of leptin were used as controls. Oocyte preparation for ICSI After IVM culture, cumulus and corona cells were removed by incubation in TCM-199 with 20% FCS containing 80 IU/mL hyaluronidase and aspiration in and out of glass pipettes finely heat-pulled to the diameter of the equine oocytes. Oocyte morphology after denuding was assessed under a Nikon SMZ 1500 stereomicroscope (×60-110 magnification). Oocytes showing an intact zona pellucida, a regularly shaped perivitelline space, the 1st polar body in the perivitelline space, an intact oolemma, and regular ooplasmic shape and texture (no vacuoles) were classified as mature and morphologically normal [32-34] and underwent microinjection. Semen preparation for ICSI Fresh semen samples from a mature stallion with a reproductive history of normal fertility were used, and trials were performed in the reproductive season (February to September 2008). The stallion was located at the reproductive centre Pegasus (Department of Animal Production, University of Bari, Southern Italy) and was routinely used in artificial insemination programs.
Semen was collected using a Missouri artificial vagina with an in-line gel filter, extended with INRA 96 (IMV Technologies, Piacenza, Italy) to a concentration of 20-25 × 10^6 sperm cells/mL and used immediately. Sperm cells for ICSI were prepared by the swim-up procedure in Earle's balanced salt solution (EBSS) supplemented with 0.4% BSA and 50 μg/mL gentamicin, as previously described [31-35]. ICSI procedure Intracytoplasmic sperm injection was carried out as previously reported [31-35]. All procedures were performed at 38.5°C in Global medium (IVFonline, Ontario, Canada). Each injected oocyte was then transferred to a single 25 μL drop of fresh Global medium covered by lightweight paraffin oil and incubated at 38.5°C for 18-20 hours under 5% CO2 in air. Embryo culture and evaluation Injected oocytes were allowed to develop further in vitro for 72 hours in the same medium. On each culture day, the embryonic developmental stage was recorded and embryo quality was graded as follows: type a = blastomeres of equal size with <10% cytoplasmic fragmentation; b = blastomeres of equal size with 10 to 40% fragmentation; c = unequal blastomeres with 10 to 40% fragmentation; d = unequal blastomeres with >40% fragmentation. At the end of the culture period, embryos were removed from culture, fixed and evaluated as described below. Uncleaved ova were removed after 48 hours of culture, fixed and evaluated by the same procedures. Immunocytochemistry According to the procedures described by Kim et al. [36], with some modifications, 2-, 4- and 8-cell stage ICSI-derived embryos and fertilized and unfertilized oocytes were fixed for 4 hours in 3.7% paraformaldehyde at 4°C. Unless otherwise stated, incubations were carried out at 4°C. Oocytes and embryos were washed four times, for 20 min each, in PBS containing 1% Triton X-100 (PBS-T). First step: oocytes and embryos were placed overnight in a blocking solution consisting of 0.1 M glycine, 1% goat serum, 0.01% Triton X-100, 1% powdered nonfat dry milk, 0.5% bovine serum albumin (BSA) and 0.02% sodium azide in PBS. The Ob-R primary antibody was raised against a recombinant protein corresponding to amino acids 541-840, mapping within an internal domain of human Ob-R (sc-1834, Santa Cruz Biotechnology, Heidelberg, Germany). After blocking, oocytes and embryos were incubated overnight with the primary antibody diluted 1:100 in PBS-T. Detection was carried out with an FITC-conjugated secondary antibody for Ob-R and a TMRITC (rhodamine)-conjugated secondary antibody for Ob, so that rhodamine provided a second label alongside FITC. For each experimental trial, two to three embryos and uncleaved oocytes were used as minus-primary controls. After these steps, oocytes and embryos were stained with 2.5 μg/mL Hoechst 33258 (Sigma 1155) in 3:1 (vol/vol) glycerol/PBS, mounted on microscope slides, covered with cover slips, sealed with nail polish and kept at 4°C in the dark until observation. To avoid excess pressure being exerted on the oocytes/embryos, the cover slips were supported with thick droplets of a Vaseline-wax mixture placed at each corner. To test the specificity of the immunoreactions, histological sections of equine subcutaneous fat were used as positive controls. Nuclear chromatin evaluation Oocytes and embryos were evaluated in relation to their developmental stage under an epifluorescence microscope (Nikon Eclipse 600) equipped with a B-2A filter (346 nm excitation/460 nm emission), as previously described [35-37]. Normally cleaved embryos were defined by the presence of nuclei of regular morphology in each blastomere.
In the group of uncleaved ova, normal fertilization was defined by the presence of two polar bodies (PBs) and two pronuclei (PN). The presence of metaphase II (MII) with the 1st PB and a swollen sperm head, a single PN with signs of the sperm cell in the cytoplasm, or a tripronucleate zygote with a single extruded PB was considered to represent retarded, arrested or abnormal fertilization, respectively; such oocytes were classified and grouped as abnormally fertilized. Oocytes with one PN and an intact sperm cell were regarded as activated oocytes. Oocytes showing MII+PB with an intact sperm cell were classified as unfertilized. Fertilization rates in these trials included the oocytes that developed further into embryos as well as those that were found uncleaved but with evident signs of fertilization after staining. Evaluation of leptin and leptin receptor expression by confocal microscopy Oocytes and embryos were observed at 600× magnification under oil immersion with a laser scanning confocal microscope (C1/TE2000-U, Nikon). An argon laser at 488 nm and the B-2A filter (495 nm excitation and 519 nm emission) were used to detect the FITC-conjugated secondary antibody for Ob-R labelling. A helium/neon laser at 543 nm and the G-2A filter (555 nm excitation and 580 nm emission) were used to detect the TMRITC-conjugated secondary antibody for Ob labelling. Scanning was conducted with 25 optical sections from the top to the bottom of the oocyte with a step size of 0.45 μm to allow three-dimensional distribution analysis. Parameters related to fluorescence intensity were kept constant for all measurements. Statistical analysis The statistical significance of the results was evaluated by the chi-square test with the Yates correction for continuity and by Fisher's exact test. Fisher's exact test was used when a value of less than 5 was expected in any cell. Proportions of matured and fertilized oocytes and of cleaved embryos after ICSI were compared between each leptin-treatment group and controls. Values with P < 0.05 were considered statistically significant. Effect of leptin supplementation in IVM medium on maturation and fertilization after ICSI Five consecutive IVM/ICSI trials were performed in the reproductive season to evaluate the effects of leptin supplementation in IVM medium on maturation, fertilization and developmental potential of equine oocytes. The ovaries of 60 mares were processed, 503 follicles were scraped and 283 oocytes were recovered (2.4 oocytes/ovary; 57%, no. of recovered oocytes/no. of scraped follicles), 149 surrounded by a Cp cumulus and 134 by an Exp cumulus. After culture and cumulus removal, 262 oocytes (93%), 137 Cp and 125 Exp, were found to be morphologically normal and were analyzed for maturation (1st PB extrusion). Of these, 62 Cp and 77 Exp oocytes were found to be mature (total = 139 oocytes), were submitted to ICSI and were allowed to develop in vitro for 72 hours after sperm injection. Table 1 shows the maturation and fertilization rates, after ICSI, of oocytes cultured in the presence of leptin in IVM medium. In Exp oocytes, the maturation rate was significantly higher in 100 ng/ml leptin-treated oocytes than in controls (17/23, 74% vs 17/39, 44%; P < 0.05). In the group of Cp oocytes, the proportion of matured oocytes did not differ between leptin-treated and control oocytes.
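The two-by-two comparisons reported here (for example, maturation in 100 ng/ml-treated Exp oocytes, 17/23, versus controls, 17/39) can be reproduced with the tests named in the Statistical analysis paragraph. The following is a small illustrative sketch using SciPy; the library choice is ours, not the authors', and the counts are taken from the text.

```python
# Comparing maturation proportions between a leptin-treated group and controls,
# using the chi-square test with Yates continuity correction and Fisher's exact
# test, as described in the Statistical analysis section. Counts from the text:
# 17/23 matured (100 ng/ml, Exp) vs 17/39 matured (controls).
from scipy.stats import chi2_contingency, fisher_exact

table = [[17, 23 - 17],   # treated: matured, not matured
         [17, 39 - 17]]   # control: matured, not matured

chi2, p_chi2, dof, expected = chi2_contingency(table, correction=True)
odds_ratio, p_fisher = fisher_exact(table)  # preferred when an expected count < 5

print(f"Chi-square (Yates): chi2 = {chi2:.2f}, p = {p_chi2:.4f}")
print(f"Fisher's exact:     OR = {odds_ratio:.2f}, p = {p_fisher:.4f}")
```

A p value below 0.05 from either test corresponds to the significance threshold used throughout the Results.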
In both groups, Cp and Exp oocytes, there were no statistically significant differences between groups with respect to the percentages of normally fertilized, abnormally fertilized or activated oocytes. However, the total (normal + abnormal) fertilization rate was significantly higher in 10 ng/ml leptin-treated Exp oocytes than in controls (9/16, 56% vs 9/39, 23%; percentages of evaluated oocytes, P < 0.05). Table 2 shows the cleavage rates after ICSI of oocytes cultured in the presence of leptin in IVM medium. The addition of leptin during IVM culture had no effect on embryonic development to the 2-4 cell stage. The rates of embryos that cleaved to the 2-4 cell stage did not differ statistically between leptin-treated and control samples (percentages of 2-cell stage embryos, NS). However, leptin added at a concentration of 100 ng/ml significantly reduced the rate of embryos reaching the 4-8 cell stage (1/11, 9% vs 10/19, 53%; percentages of 2-cell stage embryos, P < 0.05). When calculated with respect to the number of evaluated oocytes, the effects of leptin did not attain statistical significance. Effects of leptin supplementation in IVM medium on in vitro embryo development Embryo quality did not differ between controls and 1, 10 and 1000 ng/ml-treated oocytes. In contrast, exposure to 100 ng/ml significantly increased the rate of embryos derived from Exp oocytes with grade b cytoplasmic fragmentation. In detail, in control oocytes, 9 (82%) of the 11 embryos from Cp oocytes and 6 (75%) of the 8 embryos from Exp oocytes were categorized as type a; in the group of oocytes treated with 1 ng/ml, 5 (83%) of the 6 Cp embryos and 6 (75%) of the 8 Exp embryos were categorized as type a; in oocytes treated with 10 ng/ml, 4 (57%) of the 7 Cp embryos and 4 (66%) of the 6 Exp embryos were categorized as type a; in oocytes treated with 100 ng/ml, 3 (60%) of the 5 Cp embryos but none of the 6 Exp embryos were categorized as type a; and in oocytes treated with 1000 ng/ml, 3 (60%) of the 5 Cp embryos and 7 (78%) of the 9 Exp embryos were categorized as type a. In all experimental groups, the remaining embryos were of type c, except one Cp embryo from the 100 ng/ml group, which was of type d. Immunolocalization of Ob and Ob-R in equine early embryos Both leptin ligand and receptor proteins were detected in embryos obtained from Cp and Exp oocytes. Both proteins were detected at the 2- (Figure 1A1, A2), 4- (Figure 1B1, B2) and 8-cell stages (Figure 1C1, C2) and were overlapping and localized in the same area (Figure 1A3, B3, C3). Figure 2 shows a representative 25-optical-plane analysis of an embryo obtained after IVM culture in the presence of 100 ng/ml leptin. At all analyzed stages, Ob and Ob-R showed a cortical distribution in each blastomere over the 25 optical planes. Moreover, a granule-like expression pattern was observed in the cytoplasm of each blastomere. Leptin receptor staining was positive in the nuclei of the 4- and 8-cell embryos (Figure 1B1, C1, D1). The addition of leptin to the culture medium did not modify the subcellular localization of Ob and Ob-R proteins in equine early embryos. The same cortical pattern was evident in mature uncleaved fertilized and unfertilized oocytes. No immunoreactivity was detected in the negative control embryos in which the primary antibodies were omitted. Moreover, the reactions of the tissues used as positive controls gave the expected results (equine subcutaneous adipose tissue; data not shown).
Discussion Our results demonstrated that the addition of leptin in the range between 10 and 1000 ng/ml increased the maturation rate of equine oocytes, although statistical significance was observed only at the concentration of 100 ng/ml. This result is in line with previous observations in other species [5-7]. The improvement in oocyte maturation rate may be related to several potential mechanisms of action exerted by leptin on oocyte cytoplasmic maturation. These mechanisms may include direct effects or indirect, cumulus cell-mediated effects such as restructuring the oocyte cytoskeleton, reprogramming protein synthesis, or inhibiting apoptosis [38]. As previously observed in cattle, it can be hypothesized that leptin may rescue oocytes that would otherwise undergo apoptosis [30]. The beneficial effect of leptin during oocyte maturation suggests a role for leptin as a survival factor minimizing cellular damage to the oocyte and/or cumulus cells. The different responses to leptin treatment observed between Cp and Exp oocytes could be due to Ob-R modifications occurring during the process of cumulus expansion and/or to different expression or activation status of the receptor in COCs of these two categories. Previous studies reporting concentration- and stage-dependent effects of leptin on embryonic development support our hypothesis [39], and it is possible that Ob-R activates the reported signal-transduction pathways [8] in different ways in Cp and Exp oocytes. Moreover, it has been suggested that leptin may induce germinal vesicle breakdown (GVBD) in vivo via its action on the theca cells [9]. In vitro, by contrast, the effects of leptin on oocyte maturation may be exerted by direct action on the oocyte or by indirect effects on cumulus cells. Leptin may influence the synthesis and release of cumulus cell-derived factors, which reach the oocyte through gap junction coupling and the extracellular environment, in different ways in Cp or Exp oocytes; consequently, it can be hypothesized that Exp cumulus cells are more responsive to leptin than Cp cumulus cells [7]. (Figure: immunocytochemical analysis of Ob and Ob-R protein expression.) Fertilization rate and embryonic developmental competence (cleavage and blastocyst rates) are widely used as indicators of oocyte quality. The enhanced fertilization rate observed in leptin-treated oocytes confirmed the stimulatory effect of leptin on oocyte quality. In contrast with data reported in other species, such as the pig [5,6] and cattle [7], in the present study leptin had no beneficial effect on cleavage rates after ICSI; rather, at the concentration of 100 ng/ml, it decreased the embryonic developmental rate and increased cytoplasmic fragmentation. Landt et al. [40] reported that plasma leptin levels differ between various strains of rats, with up to two-fold variation, suggesting that genetic background can affect circulating leptin levels. It can therefore be supposed that different thresholds of leptin sensitivity may exist in different subjects, cells and tissues, including oocytes and embryos, in different species. Little information is available about intrafollicular leptin concentrations in different species. In women with intrafollicular leptin concentrations equal to or higher than 20 ng/ml, the fertilization rate is significantly higher (85.7%) than in women with lower concentrations (16.7%, p < 0.05).
No differences were detected, however, in the quality of the embryos obtained either at the zygote stage or 48 hours after oocyte insemination [41,42]. In pigs, leptin was detected in follicular fluid pooled from follicles of different sizes as follows: small follicles, 1.21 ± 0.28 ng/ml; medium follicles, 1.24 ± 0.06 ng/ml; and large follicles, 1.13 ± 0.24 ng/ml. When leptin was added to the maturation medium at a concentration of 10 ng/ml, it significantly increased the proportion of oocytes that reached the MII stage after 48 h of IVM; this concentration can still be considered close to physiological levels [5]. To our knowledge, no data on leptin concentrations in follicular fluid are available in the horse, and it is possible that the concentration of 100 ng/ml does not reflect physiological conditions. In addition, the Ob/Ob-R system could differ significantly in the horse compared with humans, non-human primates [43] and other species. Another possible explanation could be the different types of leptin used in various experiments, as reported by Herrid et al. [39]. We used recombinant human leptin which, by bioinformatic comparison, exhibits high homology to the leptin of cattle (84%), pig (85%), horse (82.7%) and mouse (83%). Previously reported studies [5-7,30] also used recombinant human leptin. Moreover, caution must be taken when comparing different studies because of the extensively different culture conditions used for the examined species. Again, our apparently contradictory results may be due to differences in semen source (frozen-thawed versus fresh) or fertilization procedures. In our study, the ICSI procedure with fresh semen in Global medium had previously been validated as a reliable method for obtaining equine embryos [44], whereas in pigs, Kim et al. [36] and Kun et al. [6] used IVF and somatic cell nuclear transfer (SCNT) embryos in NCSU (North Carolina State University) medium, and in cattle, IVF in SOF (synthetic oviductal fluid) medium was adopted [7]. Our results demonstrated that Ob and Ob-R proteins are detectable in equine ICSI embryos throughout the early cleavage stages. This finding is in agreement with previous observations in other species [36]. Moreover, leptin has been found to be secreted by various reproductive organs, including the placenta [45,46] and ovary [47]. Leptin has been reported to be expressed at high levels in mouse oocytes at all stages of follicular development, whereas low expression levels were found in the mural granulosa, stroma, theca and corpora lutea [3]. Leptin receptor mRNA and protein were present in mouse oocytes [48] and preimplantation embryos [3]. It has been reported that cultured human blastocysts secrete leptin, and that the levels of leptin are significantly higher than those of arrested embryos [49]. In humans, leptin protein has been localized in immature oocytes and in all stages of embryonic development [50,51]. Recently, leptin protein was reported to be expressed at all stages of porcine IVF embryos [52]. This finding is also in agreement with our previous observations in equine oocytes. In oocytes at the GV stage [29], both Ob and Ob-R were uniformly distributed throughout the ooplasm, but the intensity of the reaction was lower in oocytes from light-weight mares or fillies than in oocytes from heavy-weight mares. In matured oocytes, both Ob and Ob-R were localized in the cortex and concentrated at one pole of the oocyte.
This distribution was independent of the animal group, again with lower intensity in light mares and fillies. Leptin and Ob-R proteins in equine embryos were distributed according to the same cortical and cytoplasmic granule-like distribution pattern in each blastomere. Interestingly, positive staining was also observed in the nuclei of 4- and 8-cell stage embryos. This finding is in agreement with previous observations of nuclear positivity in neurons in the rat brain [53] and perinuclear positivity in transfected Ob-R-expressing HeLa cells [54]. The latter study [54] examined the intracellular trafficking of Ob-R and reported that both isoforms of Ob-R were observed in HeLa cells at three cellular localizations: the plasma membrane, the peripheral cytoplasm and the perinuclear compartment. The perinuclear staining, localized in the trans-Golgi network area, was reported as probably consisting of newly synthesized receptors en route to the cell surface [54]. The antibody for Ob-R used in the present study detects both short and long forms of Ob-R [55]. Thus, it is not known which Ob-R isoform mediated the effect of leptin on equine oocytes during IVM and is expressed in equine embryos. Conclusion The present study demonstrated for the first time that, in the horse, the addition of leptin during IVM, in the range between 10 and 1000 ng/ml, has a beneficial effect on meiotic maturation and fertilization after ICSI, but impairs embryonic development. In addition, it was demonstrated that Ob and Ob-R proteins are expressed in equine early embryos. The presence of both ligand and receptor proteins in oocytes [29] and in ICSI embryos suggests that leptin acts as an autocrine/paracrine hormone in horse oocyte maturation, fertilization and early development. Species-specific differences may exist in oocytes/embryos with regard to leptin sensitivity.
Joint Secure Design of Downlink and D2D Cooperation Strategies for Multi-User Systems This work studies the role of inter-user device-to-device (D2D) cooperation for improving physical-layer secret communication in multi-user downlink systems. It is assumed that there are out-of-band D2D channels, on each of which a selected legitimate user transmits an amplified version of the received downlink signal to other legitimate users. A key technical challenge in designing such systems is that eavesdroppers can overhear downlink as well as D2D cooperation signals. We tackle the problem of jointly optimizing the downlink precoding, artificial noise covariance, and amplification coefficients to maximize the minimum rate. An iterative alternating optimization algorithm is proposed based on matrix fractional programming. Numerical results confirm the performance gains of the proposed D2D cooperation scheme compared to benchmark secret communication schemes. Physical-layer security techniques can provide perfect communication secrecy. The authors in [1] studied the optimization of the transmitter covariance matrix for a single-user multiple-input single-output (MISO) system in the presence of multiple eavesdroppers. Secure design for multi-user systems, in which a single base station (BS) communicates with multiple legitimate users, was investigated in [2]-[5]. These works focus on fronthaul quantization noise design [2], the application of non-orthogonal multiple access (NOMA) [3], short-packet transmission for ultra-reliable low-latency communication (URLLC) [4], and the impact of quantized channel state information (CSI) [5]. In this work, we study the role of inter-user device-to-device (D2D) cooperation for improving secret communication in multi-user downlink systems. To the best of our knowledge, this is the first work that considers secret communication for D2D-assisted multi-user downlink systems. The authors in [6], [7] investigated the advantages of enabling D2D cooperation, assuming a pairwise [6] or broadcast [7] cooperation strategy on out-of-band D2D channels, without considering the security issue. A key technical difficulty in designing D2D cooperation for secret downlink systems is that eavesdroppers can potentially overhear both BS-to-user and inter-user D2D communications. This suggests that information leakage to eavesdroppers will reduce the cooperation gain on D2D channels and degrade secrecy rates. We first describe the system model for a multi-user system with out-of-band D2D communication links (Sec. II). We then illustrate the signal processing operations at the BS and at the users selected for the D2D channels, including multi-user precoding, artificial noise injection, and amplify-and-forward (AF) D2D relaying (Sec. III). We tackle the problem of jointly optimizing the downlink precoding, artificial noise covariance, and amplification coefficients to maximize the minimum user rate under constraints on transmit powers and information leakage (Sec. IV). To deal with the non-convexity of the original optimization problem, we propose an iterative algorithm based on matrix fractional programming (FP) [8]. Numerical results confirm the performance of the proposed D2D cooperation scheme compared to conventional secret schemes (Sec. V). II. SYSTEM MODEL We consider a multi-user downlink system in which a BS with M transmit antennas serves K_L legitimate single-antenna users. There are K_E single-antenna eavesdropping users that try to overhear the messages intended for the legitimate users.
We assume that the K_L users can cooperate with one another on N out-of-band D2D cooperation links. We define the index sets K_L ≜ {1, 2, . . . , K_L}, K_E ≜ {1, 2, . . . , K_E} and N ≜ {1, 2, . . . , N}. The received signal of the kth legitimate user on the downlink channel is denoted by y_{L,k,0}, where the subscripts L, k, and 0 stand for 'legitimate', the receiving user's index, and the downlink channel, respectively. We write the signal y_{L,k,0} as y_{L,k,0} = h_k^H x_B + z_{L,k,0}, where h_k ∈ C^{M×1} denotes the channel vector from the BS to the kth user and z_{L,k,0} ∼ CN(0, σ²) is the additive noise. We assume that, on the nth D2D channel with n ∈ N, the j_n-th legitimate user transmits a signal x_{U,j_n} to the rest of the legitimate users K_L \ {j_n}. The received signal y_{L,k,n} at the k(≠ j_n)-th legitimate user is then given as y_{L,k,n} = h^{d2d}_{k,j_n} x_{U,j_n} + z_{L,k,n}, where h^{d2d}_{k,j_n} denotes the channel response from the j_n-th user to the kth user, and z_{L,k,n} ∼ CN(0, σ²) represents the additive noise. The transmit signal x_{U,j_n} is subject to the power constraint E[|x_{U,j_n}|²] ≤ P_U. We note that the downlink and D2D communication signals can be sensed and overheard by the K_E eavesdroppers. We denote the signal received by the mth eavesdropping user, m ∈ K_E, on the downlink channel as y_{E,m,0}, which is modelled by y_{E,m,0} = g_m^H x_B + z_{E,m,0}, with g_m ∈ C^{M×1} and z_{E,m,0} ∼ CN(0, σ²) representing the channel vector from the BS to the mth eavesdropper and the additive noise signal, respectively. Similarly, the received signal at the mth eavesdropper on the nth D2D channel, n ∈ N, is given as y_{E,m,n} = g^{d2d}_{m,j_n} x_{U,j_n} + z_{E,m,n}, where g^{d2d}_{m,j_n} denotes the channel gain from the j_n-th legitimate user to the mth eavesdropper, and z_{E,m,n} ∼ CN(0, σ²) represents the additive noise. III. SECURE DOWNLINK AND D2D COOPERATION In this section, we describe the signal processing for the secure downlink and D2D cooperation system. On the downlink channel, the BS transmits a superposition of linearly precoded signals and artificial noise, which can be written as x_B = Σ_{l∈K_L} v_l s_l + n_B, where s_l ∼ CN(0, 1) denotes the data signal for the lth legitimate user, v_l ∈ C^{M×1} represents the precoding vector for the signal s_l, and n_B ∈ C^{M×1} is the artificial noise signal injected to prevent the eavesdropping users K_E from decoding the signals s_1, s_2, . . . , s_{K_L}. Without claim of optimality, we assume n_B ∼ CN(0, Q_B). With this transmission model, the power constraint for the BS can be written as Σ_{l∈K_L} ||v_l||² + tr(Q_B) ≤ P_B. On the nth D2D channel, n ∈ N, the j_n-th legitimate user broadcasts an amplified version of its received signal y_{L,j_n,0} to help the rest of the legitimate users K_L \ {j_n} better decode their target signals. The transmitted signal x_{U,j_n} is thus given as x_{U,j_n} = α_{j_n} y_{L,j_n,0}. The amplification coefficient α_{j_n} satisfies the power constraint |α_{j_n}|² p_{r,j_n}(v, Q_B) ≤ P_U, where p_{r,j_n}(v, Q_B) = E[|y_{L,j_n,0}|²] denotes the power of the downlink signal received by the j_n-th user. The kth legitimate user decodes the signal s_k by using the signals y_{L,k,0} and {y_{L,k,n}}_{n∈N\{n_k}} received on the downlink and D2D channels, respectively. Here n_k denotes the index of the D2D channel on which the kth legitimate user broadcasts an amplification of its received signal; thus, n_k is an empty element if k ∉ K̂_L, where K̂_L ≜ {j_1, . . . , j_N} is the set of users selected to transmit on the D2D channels. Combining all the received signals on the downlink and D2D cooperative channels, the achievable data rate R_k for the kth legitimate user is given as R_k = I(s_k; y_{L,k,0}, {y_{L,k,n}}_{n∈N\{n_k}}). As in [9], we consider information-theoretic privacy constraints of the form I(s_k; y_{E,m,0}, {y_{E,m,n}}_{n∈N}) ≤ β for all k ∈ K_L and m ∈ K_E. The left-hand side (LHS) in (8) measures the amount of information leakage of signal s_k to the mth eavesdropper. Here we assume that the mth eavesdropper can exploit the received signal y_{E,m,0} on the downlink channel, as well as the received signals {y_{E,m,n}}_{n∈N} on the D2D channels.
Thus, the constraint in (8) imposes that the amount of information leakage for every pair of legitimate and eavesdropping users does not exceed a predetermined level β. According to information-theoretic results [10, Ch. 4, Problem 33], a bit stream of rate R^sec_k = max(R_k − β, 0) can be received securely by the kth legitimate user, while the remaining rate min(β, R_k) can be overheard by the eavesdropping users. We refer to R^sec_k as the secrecy rate of the kth legitimate user. IV. PROPOSED OPTIMIZATION The goal is to jointly optimize the precoding vectors v, the artificial noise covariance matrix Q_B, and the amplification coefficients α with the criterion of maximizing the minimum user rate R_min = min_{k∈K_L} R_k while satisfying the transmit power constraints in (5) and (6) and the privacy constraints in (8). Let R = [R_1; · · · ; R_{K_L}]. We formulate the problem at hand as the maximization of R_min over {v, Q_B, α} subject to these constraints (problem (11)). It is difficult to solve problem (11) due to the non-convexity of the constraints (11b), (11c) and (11e). To handle this issue, we restate the problem using an epigraph form [11, Ch. 4.1.3] and matrix fractional programming (FP) [8], and propose an alternating optimization approach. Applying the FP [8, Cor. 1], (11b) can be replaced by a stricter constraint that holds for any auxiliary variables γ_{k,info} and θ_{k,info} ∈ C^{(N+1)×1}, k ∈ K_L; the stricter constraint becomes equivalent to (11b) when γ_{k,info} and θ_{k,info} are set to their optimal closed-form values. Although (11b) and (11c) are of similar forms, we cannot directly apply the FP to (11c). We first restate (11c) in a form that holds for any Γ_{k,m,leak} ∈ C^{(M+K_L+N)×(M+K_L+N)} and Θ_{k,m,leak} ∈ C^{(N+1)×(M+K_L+N)}. Similar to (12), the resulting constraint (15) is biconvex with respect to {v, Q_B} and α for fixed Σ_{m,leak}, and it is equivalent to (14b) for a suitable choice of the auxiliary variables. Based on the above observations, we consider the following problem, which has the same optimal solution as (11). The detailed algorithm is described in Algorithm 1. We note that the objective minimum rate R_min monotonically increases with the iteration index t, so that it converges to a locally optimal point of problem (19). Moreover, numerical checks show that Algorithm 1 converges within a few tens of iterations. V. NUMERICAL RESULTS We assume that the BS is located at the center (0, 0) of a rectangular area of side length 100, and the K_L legitimate users are randomly located within the area. The K_E eavesdropping users are randomly located in another rectangular area of the same shape, but with center point (100, 0). We adopt the path-loss model c_0 (d/d_0)^{-η} [7], where d denotes the distance between the transmitting and receiving nodes, and c_0 and d_0 are set to 10 dB and 30, respectively. The N transmitting users in K̂_L are randomly chosen from K_L. We also assume an independent and identically distributed (i.i.d.) Rayleigh small-scale fading model. We compare the average performance over various channel realizations for the following schemes: i) Proposed D2D: {v, Q_B, α} are jointly optimized according to Algorithm 1; ii) No D2D: Algorithm 1 is executed while fixing α_{j_n} = 0 for all n ∈ N and skipping Steps 4 and 5; iii) Random D2D: each α_{j_n} is fixed as α_{j_n} ← (α̃_{j_n}/|α̃_{j_n}|)(P_U/p_{r,j_n}(v, Q_B))^{1/2}, where α̃_{j_n} ∼ CN(0, 1), and the received power p_{r,j_n}(v, Q_B) is computed using the {v, Q_B} of the no-D2D scheme. Then, {v, Q_B} are optimized for fixed α by applying Steps 2, 3, and 6.
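To convey the overall flow of Algorithm 1, the sketch below shows a generic alternating-optimization driver in Python. It is a schematic of our own, not the paper's implementation: the callables passed in are hypothetical placeholders for the closed-form auxiliary-variable updates and the convex subproblems in {v, Q_B} and α described above, and the stopping rule on the minimum rate is an assumption.

```python
# A minimal, generic driver for the alternating optimization in Algorithm 1.
# The callables are hypothetical placeholders for the paper's actual steps:
#   update_aux : closed-form update of the FP auxiliary variables
#   update_tx  : convex subproblem in the precoders v and covariance Q_B
#   update_amp : convex subproblem in the amplification coefficients alpha
#   min_rate   : evaluates R_min = min_k R_k for the current variables
def alternating_max_min(x0, update_aux, update_tx, update_amp, min_rate,
                        max_iters=100, tol=1e-4):
    v, Q_B, alpha = x0                    # a feasible starting point
    best = float("-inf")
    for _ in range(max_iters):
        aux = update_aux(v, Q_B, alpha)   # auxiliary FP variables
        v, Q_B = update_tx(aux, alpha)    # update (v, Q_B) with alpha fixed
        alpha = update_amp(aux, v, Q_B)   # update alpha with (v, Q_B) fixed
        r_min = min_rate(v, Q_B, alpha)
        if r_min - best < tol:            # objective is monotone non-decreasing
            return v, Q_B, alpha, r_min
        best = r_min
    return v, Q_B, alpha, best
```

Because each block update maximizes the same objective with the other blocks held fixed, the minimum rate never decreases across iterations, which is the monotonicity property invoked for convergence above.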
Note that the baseline schemes require lower complexity than the proposed scheme, since the coefficients α are fixed at the cost of performance. In Fig. 1, we plot R_min for increasing privacy constraint level β in a multi-user system with M = 2, K_L = 8, K_E = 2, N ∈ {1, 2} and P_B/σ² = P_U/σ² = 10 dB. The figure shows that R_min is significantly improved by D2D cooperation, either with optimized α or with random α. Furthermore, we observe that R_min increases with β. This is consistent with the fact that the maximized objective value improves when the constraint becomes looser (see (11)). In Fig. 2, we plot the minimum secrecy rate R^sec_min = min_{k∈K_L} R^sec_k versus the minimum rate R_min for a multi-user system with M = 2, K_L = 8, K_E = 2, N = 1 and P_B/σ² = P_U/σ² = 10 dB. Different points of each scheme are obtained for different privacy levels β; that is, with a larger β, we achieve a larger R_min but a smaller R^sec_min. It is also worth noting that, in order to achieve R_min = 0.3 bps/Hz, the minimum secrecy rates of the random D2D and no-D2D schemes are degraded to R^sec_min = 0, while the proposed D2D scheme achieves R^sec_min ≥ 0.14 bps/Hz. VI. CONCLUSION We have studied the advantages of enabling D2D cooperation for secret multi-user downlink systems in the presence of eavesdroppers that can overhear both the downlink and D2D cooperation signals. We have proposed an iterative alternating optimization algorithm based on the matrix FP to tackle the problem of jointly optimizing the downlink precoding, artificial noise covariance, and amplification coefficients under constraints on transmit powers and information leakage. Via numerical results, we have confirmed the effectiveness of the proposed scheme compared to baseline schemes.
Ghostly action at a distance: a non-technical explanation of the Bell inequality We give a simple non-mathematical explanation of Bell's inequality. Using the inequality, we show how the results of Einstein-Podolsky-Rosen (EPR) experiments violate the principle of strong locality, also known as local causality. This indicates, given some reasonable-sounding assumptions, that some sort of faster-than-light influence is present in nature. We discuss the implications, emphasizing the relationship between EPR and the Principle of Relativity, the distinction between causal influences and signals, and the tension between EPR and determinism. I. INTRODUCTION The recent announcement of a "loophole-free" observation of violation of the Bell inequality [1] has brought renewed attention to the Einstein-Podolsky-Rosen (EPR) family of experiments in which such violation is observed. The violation of the Bell inequality is often described as falsifying the combination of "locality" and "realism". However, we will follow the approach of other authors, including Bell [2-4], who emphasize that the EPR results violate a single principle, strong locality. Strong locality, also known as "local causality", states that the probability of an event depends only on things in the event's past light cone. Once those have been taken into account, the event's probability is not affected by additional information about things that happened outside its past light cone. Given some reasonable-sounding assumptions about causation (see Sec. III), the violation of strong locality in EPR experiments implies that there are causal influences that travel faster than light. The main goal of this paper is to give an extremely simple non-technical explanation of how EPR experiments lead to this striking conclusion. We do this by mapping the experiment onto a situation where twins are separated and then asked questions to test whether they can influence each other via faster-than-light telepathy. Since the EPR results force us to accept that nature does not respect strong locality, it is natural to ask how the results cohere with other locality principles. Is there a sense in which the results violate "local realism"? We discuss Einstein's Principle of Relativity (Lorentz invariance) and the principle that signals cannot travel faster than light ("signal locality"). We describe how signal locality arises from the Principle of Relativity, and show how it can be reconciled with EPR results, but only if we accept that nature has an ungraspable aspect, such as indeterminism or some other form of uncontrollability, that prevents the violation of strong locality from leading to faster-than-light signalling. II. OVERVIEW To make it clear how EPR experiments falsify the principle of strong locality, we now give an overview of the logical context (Fig. 1). For pedagogical purposes it is natural to present the analysis of the experimental results in two stages, which we will call the "EPR analysis" and the "Bell analysis", although historically they were not presented in exactly this form [4,5]; indeed, both stages are combined in Bell's 1976 paper [2]. We will concentrate on Bohm's variant of EPR, the "EPRB" experiment. This involves pairs of particles, typically a pair of photons in a spin singlet state. The question at hand is: what general types of theories can account for the observed behavior of these particles? Can strongly local theories do the job? Fig. 1 shows the space of theories of such particles.
The inner (red) rectangle encloses the set of strongly local theories, the ones in which the probability of an event depends only on occurrences in the event's past light cone. The upper (green) rectangle encloses the set of theories that are "deterministic", meaning that the behavior of the particles is fully determined in advance without any randomness in the laws of physics. (Figure 1 caption: Venn diagram of the space of theories and the constraints from EPRB experiments. The inner (red) rectangle encloses the set of strongly local theories. The EPR analysis concludes that strongly local theories must be deterministic; the Bell analysis concludes that strongly local theories cannot be deterministic. In combination, these analyses rule out strongly local theories.) When combined [2], the EPR analysis and the Bell analysis show that no strongly local theory, whether deterministic or indeterministic, can explain the results of EPRB experiments. We now give a brief outline of those analyses, to be expanded in later sections. The EPR analysis The EPR analysis [6] starts with the experimental observation that both photons in the EPRB setup show the same behavior when subjected to the same measurement, no matter how far apart they are. The EPR analysis then points out that if strong locality is true then this cannot be due to one photon influencing the other, so they must have been pre-programmed to agree, which requires that the photons have a deterministically evolving internal state that determines their behavior. In other words, strong locality requires determinism to explain the EPRB results. The EPR analysis therefore rules out strongly local theories that are indeterministic (vertical shading in Fig. 1). This sounds like a refutation of quantum mechanics, which is famously an indeterministic theory. However, "textbook" quantum mechanics, as taught in conventional physics courses, explicitly violates strong locality because measurement induces instantaneous collapse of the wavefunction over all of space, so EPR's analysis does not apply directly to textbook quantum mechanics. Rather, it shows that any alternate theory that was strongly local would have to be deterministic. In such a theory the result of measuring a photon's spin would not be random; it would be determined by the state of non-quantum-mechanical "hidden variables" that predetermine the behavior of the photon. The Bell analysis and the Bell inequality The second stage of analysis of the EPRB experimental data, which we call the "Bell analysis", destroys the dream of finding a strongly local and deterministic theory to replace quantum mechanics. Bell pointed out that if nature is described by a strongly local and deterministic theory then the behavior of the photon pairs has to obey a constraint called the "Bell inequality" [2,7]. In Secs. IV and V we will give an elementary explanation of the Bell inequality in terms of testing twins for faster-than-light telepathy. We will show that it arises from the fact that if someone has planned yes-or-no answers to three questions then, on two randomly chosen questions, they will give the same answer to both questions at least 1/3 of the time (a simulation sketch of this counting bound follows below). In real EPRB experiments (e.g. [1]) the results violate Bell's inequality. This shows that no deterministic and strongly local theory can explain the behavior of the photons (cross-hatched shading in Fig. 1). Taken together, the EPR and Bell analyses of the experimental data show that strong locality must be false.
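To make that counting argument concrete, here is a minimal Monte Carlo sketch, our own illustration rather than anything from the paper: each "plan" fixes yes/no answers to three questions in advance, two distinct questions are then drawn at random, and we estimate how often the two answers agree. By the pigeonhole principle, at least two of any three binary answers coincide, so the agreement rate is at least 1/3 for every possible plan.

```python
# Monte Carlo check of the counting bound behind the Bell inequality: if
# yes/no answers to three questions are fixed in advance, then two distinct
# randomly chosen questions receive the same answer at least 1/3 of the time.
import itertools
import random

def agreement_rate(plan, trials=100_000):
    """plan: a tuple of three pre-programmed answers, e.g. (True, True, False)."""
    agree = 0
    for _ in range(trials):
        q1, q2 = random.sample(range(3), 2)   # two distinct questions
        agree += plan[q1] == plan[q2]
    return agree / trials

# Exhaustively check every possible pre-programmed plan of three answers.
for plan in itertools.product([True, False], repeat=3):
    print(plan, f"agreement ~ {agreement_rate(plan):.3f}")
# Every plan yields a rate of at least ~1/3 (exactly 1/3 for plans such as
# (True, True, False)), whereas quantum mechanics predicts rates below 1/3
# for suitably chosen measurement directions -- which no plan can reproduce.
```

Quantum mechanics predicting agreement below this 1/3 floor is exactly the conflict exposed by the Bell analysis.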
If we accept the principle of common cause (Sec. III) this means that some causal influences travel faster than light.

The rest of this paper explores the EPR and Bell analyses in as much depth as is possible without mathematical formalism. In Sec. III we lay out in more detail the meaning of the key postulates of strong locality and determinism. In Sec. IV we give an intuitive non-mathematical explanation of the Bell inequality and the resultant refutation of strong locality. Sec. V applies these concepts to the real experimental setup involving photon spin measurements. This paper focuses on strong locality because it is clear, intuitively plausible, and can be cleanly defined as a factorization condition (Eq. (1)). However, other analyses of the EPRB experiment (e.g., [8,9]) do not use this definition, and hence come to different-sounding conclusions about what EPRB means for "locality". In Sec. VI we therefore explore other principles that are related to locality, such as "information cannot be transmitted faster than light" or "there is no preferred inertial reference frame" (the Principle of Relativity), and discuss how some form of locality might survive even when strong locality is violated. Sec. VII gives a summary of our discussions.

III. LOCALITY AND DETERMINISM: DEFINITIONS AND ASSUMPTIONS

The principles that play a central role in EPRB experiments are:

1. Determinism: The result of any measurement on a system is pre-determined by how the system was set up originally, taking into account any subsequent influences on it. Any apparent randomness just reflects our ignorance; there is no essentially random component to the outcome [10]. In a deterministic theory, even for a measurement that was not actually performed there is a fact of the matter about what result it would have yielded ("counterfactual definiteness").

2. Strong Locality: Once we take into account everything in its past light cone, the probability of an event is not affected by additional information about things that happened outside its past light cone (Fig. 2) [2]. This is sometimes called "factorizability" because it leads to a factorization of the probability function for space-like-separated events (Eq. (1)). As we will explain below, using a reasonable-seeming conception of "cause" it is equivalent to saying that causal influences cannot travel faster than light, so the causal influences that affect an event must be in its past light cone.

We now explain in more detail our background assumptions and the meaning of determinism and strong locality. Readers interested in getting straight to the EPR and Bell analyses can skip the rest of this section.

Background Assumptions

In our discussion we will make the following background assumptions. For a more fine-grained formulation see Ref. [5].

1. "Macro-realism": experiments have unique outcomes.

2. "Random choices": each experimenter's choice of what to measure is random, i.e., uncorrelated with the state of the particles being measured and choices made by the other experimenter.

These allow us to conclude from EPRB experiments that strong locality is violated. To make a connection between strong locality and causal influences, one needs

3. Reichenbach's principle of common cause [11]: correlations can be explained in terms of causes. If two phenomena show a correlation, either one causes the other or they have a common cause. If C is the common cause of A and B then conditioning on C factorizes the joint probability: p(A, B | C) = p(A | C) p(B | C), as illustrated in the sketch below.
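To make the factorization produced by a common cause concrete, here is a minimal Python simulation. It is our own illustration rather than anything from the experiments; the scenario, the 90% copying fidelity, and all names in it are illustrative assumptions. Two variables A and B are independent noisy copies of a common cause C: they are correlated overall, but conditioning on C removes the correlation.

```python
import random

# Toy illustration of Reichenbach's principle (illustrative assumptions
# throughout): A and B are each independent noisy copies of a common
# cause C, so they correlate overall but factorize once C is fixed.
random.seed(0)
N = 200_000
samples = []
for _ in range(N):
    c = random.randint(0, 1)                    # common cause C
    a = c if random.random() < 0.9 else 1 - c   # A: 90%-faithful copy of C
    b = c if random.random() < 0.9 else 1 - c   # B: independent 90%-faithful copy
    samples.append((a, b, c))

def p(pred):
    """Empirical probability of the event described by pred."""
    return sum(1 for s in samples if pred(s)) / len(samples)

# Unconditionally, A and B are correlated: p(A = B) is well above 1/2.
print("p(A = B) =", round(p(lambda s: s[0] == s[1]), 3))        # ~0.82

# Conditioned on the common cause, the joint probability factorizes.
pc  = p(lambda s: s[2] == 1)
pab = p(lambda s: s[0] == 1 and s[1] == 1 and s[2] == 1) / pc   # p(A,B|C)
pa  = p(lambda s: s[0] == 1 and s[2] == 1) / pc                 # p(A|C)
pb  = p(lambda s: s[1] == 1 and s[2] == 1) / pc                 # p(B|C)
print("p(A,B|C) =", round(pab, 3), "vs p(A|C)*p(B|C) =", round(pa * pb, 3))
```

Both conditional numbers come out near 0.81, which is exactly p(A|C) times p(B|C); this is the sense in which a common cause "screens off" the correlation.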
These assumptions seem reasonable but not incontrovertible [3,5,12-14]. Proponents of many-worlds-type scenarios would deny macro-realism. A superdeterminist or a believer in retrocausality would not allow us to assume that the experimenters' choices can be treated as random. Operationalists deny Reichenbach's principle. We will comment further on these viewpoints in Sec. VII.

Determinism

Determinism states that the outcome of a measurement is predetermined by the state of the system at earlier times, taking into account any external influences on it. In the context of EPRB experiments, as we will see in Secs. IV and V, determinism means that the outcome of a measurement on a particle can be reliably "preprogrammed" by physical processes that set the initial states of two particles before they are moved apart from each other.

Determinism is intimately bound up with our understanding of uncertainty. One can distinguish two ways in which we may be uncertain about the outcome of a measurement:

1. Uncertainty arising from our ignorance. The outcome of the measurement could be predicted given accurate knowledge of the initial state of the object and the laws governing its evolution, but we don't have sufficiently accurate information about these things to make an exact prediction.

2. Fundamental uncertainty: the outcome of the measurement has an essentially random component, either in the evolution of the system or in its effect on the measuring device. In a sense the system gets to "decide on its own" how to behave.

In ordinary life, and in science up until the advent of quantum mechanics, all the uncertainty that we encounter is presumed to be of the first kind, uncertainty arising from ignorance. We can't predict the weather very accurately, but the more we learn about the state of Earth's atmosphere and oceans and the laws they obey, the better our predictions become. Determinism says that all uncertainty is of the first kind, the kind that arises only from our ignorance. Determinism is a sort of scientific optimism: if we knew enough about the state of the universe we could predict the outcome of any measurement. Quantum mechanics introduced the idea that there might be uncertainty of the second type, that nature might be fundamentally non-deterministic. The EPR analysis shows that if strong locality is valid then this sort of uncertainty is in conflict with the outcome of EPRB experiments.

Strong Locality

The application of strong locality to the EPRB experiment is sketched in Fig. 2. Formally, it says that any correlation between two spacelike separated events E_1 and E_2 can only arise from each of them being correlated with events λ in their shared past light cone. Once we take into account those shared influences the joint probability distribution of E_1 and E_2 factorizes [3,5,10]:

p(E_1, E_2 | L_1, L_2, λ) = p(E_1 | L_1, λ) p(E_2 | L_2, λ),   (1)

where

E_1 is the outcome of the measurement on photon 1,
E_2 is the outcome of the measurement on photon 2,
L_1 is events in the past light cone of E_1 but not E_2,
L_2 is events in the past light cone of E_2 but not E_1,
λ is everything in both light cones, or any other state of affairs that can affect both E_1 and E_2.

(Given strong locality, determinism is the statement that p(E_1 | L_1, λ) is zero or one, and similarly for p(E_2 | L_2, λ).) As we will see in Secs. IV and V, for the EPRB experiments to falsify strong locality each experimenter must decide "at the last minute" what experiment to do on her photon.
Thus the decision of what measurement to perform on photon 1 occurs in L_1, so if strong locality is true then that decision should not affect the measurement on photon 2, and vice versa. The assumption of random choices (Sec. III) is crucial here; we assume that the choices made by the experimenters are not influenced by the events λ that determine the state of the photons, hence the random choices assumption is often called "lambda-independence" [12]. If we accept Reichenbach's principle of common cause then the violation of strong locality means that there must be some faster-than-light causal influence that allows the measurement of one photon to affect the measurement of the other [5,15,16].

IV. EPR AND BELL WITH HUMANS

In order to make the EPR and Bell analyses of the EPRB data as comprehensible as possible we now explain them using an analogy where instead of experimenting on photons we are questioning people. For related approaches see, e.g., Sec. 4.1.3 of Ref. [17], or Ref. [18].

A. Testing twins for superluminal telepathy

Imagine that someone has told us that twins have special powers, including the ability to communicate with each other using telepathic influences that are "superluminal" (faster than light). We decide to test this by collecting many pairs of twins, separating each pair, and asking each twin one question to see if their answers agree. To make things simple we will only have three possible questions, and they will be Yes/No questions. We will tell the twins in advance what the questions are.

The procedure is as follows. A new pair of twins is brought in and told what the three possible questions are. The twins are then separated, and each is asked one randomly chosen question at a location so distant from the other's that no slower-than-light influence could connect the two question-and-answer events. Suppose we find that whenever both twins in a pair happen to be asked the same question, they always give the same answer. There are two possible explanations for this same-question agreement:

1. Each pair of twins uses superluminal telepathic communication to make sure both twins give the same answer.

2. Each pair of twins follows a plan. Before they were separated they agreed in advance what their answers to the three questions would be.

The same-question agreement that we observe does not prove that twins can communicate telepathically faster than light. If we believe that strong locality is a valid principle, then we can resort to the other explanation, that each pair of twins is following a plan. The crucial point is that this requires determinism. If there were any indeterministic evolution while the twins were spacelike separated, strong locality requires that the random component of one twin's evolution would have to be uncorrelated with the other twin's evolution. Such uncorrelated indeterminism would cause their recollections of the plan to diverge, and they would not always show same-question agreement. This inference corresponds to the EPR analysis of the EPRB experiment: strong locality (the twins cannot exchange information faster than light), when combined with same-question agreement, implies determinism (each pair of twins follows a predefined plan).

The idea that twins use a deterministically-evolving internal "memory" in order to follow a plan does not seem so remarkable, but for photons this is a striking claim, because the quantum mechanical picture of a photon does not allow for any internal state that determines the outcome of measurements on a photon. The conclusion of the EPR analysis (vertical shading in Fig. 1) is that if nature obeys strong locality then only a deterministic theory can account for the agreement behavior seen in EPRB experiments.
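A short simulation can make the EPR step vivid. The sketch below is our own illustration, with an assumed per-twin deviation probability eps that is not part of the paper's argument: it shows that if each twin's answer contains an independent random component, as strong locality requires of any indeterminism, then perfect same-question agreement is lost; only eps = 0, i.e. determinism, reproduces it.

```python
import random

# Our own sketch of the EPR step: a shared plan plus *independent*
# randomness (probability eps of deviating, per twin) cannot reproduce
# perfect same-question agreement.
random.seed(1)

def agreement_rate(eps, trials=100_000):
    agree = 0
    for _ in range(trials):
        planned = random.randint(0, 1)           # pre-agreed answer (0=No, 1=Yes)
        a1 = planned ^ (random.random() < eps)   # twin 1 deviates with prob. eps
        a2 = planned ^ (random.random() < eps)   # twin 2 deviates independently
        agree += (a1 == a2)
    return agree / trials

for eps in (0.0, 0.05, 0.2):
    print(f"eps = {eps:.2f}: same-question agreement = {agreement_rate(eps):.3f}")
# Only eps = 0.00 gives agreement = 1.000; any uncorrelated
# indeterminism makes the twins' answers diverge some of the time.
```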
C. Bell inequality for the twins

In the thought experiment as described up to this point we only looked at the recorded answers in cases where each twin in a given pair was asked the same question. There are also recorded data on what happens when the two questioners happen to choose different questions. Bell noticed that this data can be used as a cross-check on our strong-locality-saving idea that the twins are following a pre-agreed plan that determines that their answers will always agree. The cross-check takes the form of an inequality:

Bell inequality for twins: If a pair of twins is following a plan then, when each twin is asked a different randomly chosen question, their answers will be the same, on average, at least 1/3 of the time. (2)

FIG. 3. In formulating a plan for how to give Yes/No answers to three questions, there are four types of plan. No matter what plan one follows, the answers to two different randomly chosen questions will be the same at least 1/3 of the time (Eq. (2)).

Fig. 3 illustrates why (2) is true. For each pair of twins, there are four general types of pre-agreed plan they could adopt when they are arranging how they will both give the same answer to each of the three possible questions. If, as strong locality and same-question agreement imply, both twins in a given pair follow a shared predefined plan, then when the random questioning leads to each of them being asked a different question from the set of three possible questions, how often will their answers happen to be the same (both Yes or both No)? If the plan is of type (a) or (d), both answers will always be the same. If the plan is of type (b) or (c), both answers will be the same 1/3 of the time. We conclude that no matter what type of plan each pair of twins may follow, the mere fact that they are following a plan implies that, when each of them is asked a different randomly chosen question, they will both give the same answer (which might be Yes or No) at least 1/3 of the time (Eq. (2)). It is important to appreciate that one needs data from many pairs of twins to see this effect, and that the inequality holds even if each pair of twins freely chooses any plan they like.

This, then, is how the Bell analysis applies to the data for the twins: strong locality (no way for the twins or questioners to influence each other while the questioning is happening) and determinism (each pair of twins follows a plan) imply a Bell inequality (2).
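Since the inequality rests on nothing more than counting, it can be checked exhaustively. The following sketch is ours and purely illustrative: it enumerates all eight possible Yes/No plans for the three questions and computes, for each, how often the two answers agree when two different questions are chosen at random.

```python
from itertools import combinations, product

# Exhaustive check of the twins' Bell inequality (Eq. (2)): for each of
# the 8 possible Yes/No plans for three questions, compute how often
# the two answers agree when two *different* questions are chosen at
# random (there are 3 unordered pairs of distinct questions).
question_pairs = list(combinations(range(3), 2))

for plan in product("YN", repeat=3):        # e.g. ('Y', 'N', 'Y')
    agree = sum(plan[i] == plan[j] for i, j in question_pairs) / len(question_pairs)
    print("".join(plan), "-> same-answer rate on different questions:", round(agree, 3))

# The two all-same plans (YYY, NNN) give 1.0 and the six mixed plans
# give 0.333..., so any population of plan-following twins agrees at
# least 1/3 of the time, exactly as the inequality states.
```

Because every individual plan already satisfies the bound, no mixture of plans, however cleverly chosen by the twins, can fall below 1/3.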
D. What if the twins violate the Bell inequality?

In real experiments, when performing the analogous experiment on photons, the Bell inequality is violated, showing that no strongly local and deterministic theory can explain the data (cross-hatched shading in Fig. 1). Let us imagine the same thing happening in our analogy. Suppose that when we analyze our results for a large sample of twins, we find that in cases where each twin was asked a different question, their answers are the same only 1/4 of the time; 3/4 of the time one twin gives a Yes and the other a No. This result violates the Bell inequality (2), and tells us that a good fraction of the population of twins was not following any predefined plan when they answered the questions.

How do we interpret this result? Our goal was to see if there was any evidence that the twins were communicating with each other using telepathic influences that travel faster than light. The fact that the twins always agree when they are both asked the same question, even when they are being interrogated at spacelike separated locations, could be explained away by assuming they were following a prearranged plan. But if their pattern of answers to different questions violates the Bell inequality then this shows that they can't be following a prearranged plan. When one twin answers the question posed to him, he needs to know what question his twin is being asked, because if his twin is being asked a different question, at least some of the time one of them will have to deviate from any pre-arranged plan, changing his answer in such a way that it differs from the answer that his brother is giving, and thereby allowing their responses to violate the Bell inequality. Unless we are willing to discard one of the background assumptions listed in Sec. III, we are forced to accept that some sort of superluminal influence connects the twins.

V. EPR AND BELL WITH PHOTONS

The testing of twins for telepathic abilities, as described in Sec. IV, is an exact analogy to the EPRB experiment, which is a modification, suggested by Bohm [19], of the original EPR experiment. In the EPRB experiment (see Fig. 4) there is a source that creates pairs of photons, analogous to twins. The photons travel out from the source in opposite directions. When they are far from each other, each photon encounters a measuring machine that can do three possible measurements. The machine contains three types of filter, call them A, B, and C, and when the photon arrives the machine flips one of the three types of filter into the path of the photon. The photon has two possible responses to the filter: it either goes through the filter ("+") or reflects off it ("−"). This is actually a measurement of the polarization of the photon: each filter consists of polaroid with a different orientation of its axis of polarization. If determinism is true then each photon has a deterministically evolving inherent polarization state that determines how it will interact with each filter.

If both machines deploy the same filter then we see "same-axis agreement": either both photons pass through or they both reflect off. As with the twins, we can immediately see two ways to explain this consistent agreement.

1. Influence: when one photon reaches its machine, and the machine decides what filter to flip up in front of it, and the photon responds to that filter, some information is superluminally transmitted to the other photon so that if the other photon gets the same filter, it will behave the same way.

2. Determinism: when the photons are created, each is formed in a state (its "polarization state") that determines how it will respond to any possible filter it might encounter. The source puts both photons into the same state, and those states evolve deterministically, ensuring that the photons always behave the same way when they encounter the same type of filter.

The EPR analysis (vertical shading in Fig. 1) concludes that in any strongly local theory, since there are no faster-than-light correlation-creating influences, agreement in same-filter (same-axis) measurements must arise from the photons having a deterministically evolving internal state that pre-determines their response to the filters that they encounter.
If, as EPR did, one takes strong locality to be valid, then the observed same-axis agreement shows that the photons are in a state that determines their behavior, which is in contradiction with the quantum mechanical picture where their state does not determine the outcome of measurements performed on them.

However, just as for the twins, there is a Bell analysis (cross-hatched shading in Fig. 1) which shows that EPR's picture, of physical objects having deterministically evolving states and strongly local interactions, can be experimentally tested. For this we look at the data for the cases when the two measuring machines deploy different filters in front of the two photons. Following the logic used in Sec. IV, we conclude that if both photons are in the same polarization state, and there is no correlation-creating influence between their spacelike-separated measurements, then, on the occasions that the detectors deploy different filters, photons 1 and 2 should show the same behavior (both bouncing off or both passing through) at least 1/3 of the time:

Bell inequality: prob(photon 1 and photon 2 show the same behavior when they encounter different filters) ≥ 1/3. (3)

In Appendix A we show how Eq. (3) is a form of Bell's original inequality.

When polarizations of pairs of spin-singlet photons are measured in real-world experiments, it is found that they do show agreement in same-axis measurements, but when we perform different-axis measurements the two photons only show the same behavior 1/4 of the time; 3/4 of the time they show different behavior: one bounces off its filter and the other passes through. This violates the Bell inequality. Such violation has now been seen in many experiments, e.g. [1,20,21].

We conclude that strong locality is violated by spin-singlet photon pairs. Either you need a strong-locality-violating influence to make the same-axis agreement happen, or, if you try to save strong locality by assuming that each photon is in a state (the same state for both of them) that pre-determines the outcome of measurements on it, then you need a strong-locality-violating influence to obtain the observed violation of Bell's inequality for different-axis measurements. Either way, a violation of strong locality is required to account for all the relevant experimental observations.
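For readers who want to see where the 1/4 comes from, here is a minimal numerical sketch. It assumes the standard quantum prediction for polarization-entangled photon pairs, namely that both photons behave the same way with probability cos²(δ), where δ is the angle between the two filter axes, and it assumes filter axes at 0°, 60° and 120°. The text does not specify the angles, so that choice is our illustrative assumption, made because it yields exactly the same-axis agreement and the 1/4 different-axis rate quoted above.

```python
import math

# Numerical sketch of the quantum violation of Eq. (3), assuming
# p(same) = cos^2(delta) for polarization-entangled pairs and filter
# axes at 0, 60 and 120 degrees (an illustrative choice of angles).
angles = {"A": 0.0, "B": 60.0, "C": 120.0}

def p_same(f1, f2):
    delta = math.radians(angles[f1] - angles[f2])
    return math.cos(delta) ** 2

same_axis = [round(p_same(f, f), 3) for f in angles]
diff_axis = [p_same(f1, f2) for f1 in angles for f2 in angles if f1 != f2]

print("same-axis agreement:", same_axis)                  # [1.0, 1.0, 1.0]
avg = sum(diff_axis) / len(diff_axis)
print(f"different-axis agreement: {avg:.3f}")             # 0.250
print(f"Bell bound from Eq. (3): {1/3:.3f} -> violated")  # 0.250 < 0.333
```

Every pair of different filters in this configuration is 60° or 120° apart, and cos² of either angle is 1/4, which is how the quantum prediction undershoots the deterministic bound of 1/3.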
VI. CONSEQUENCES FOR LOCALITY

The EPRB experiment, in combination with some assumptions that we have outlined in Sec. III, tells us that nature does not obey the principle of strong locality. If we accept Reichenbach's principle of common cause we would say that there are causal influences that travel faster than light. But this cannot be the end of the story:

• What about Einstein's theory of relativity? Are EPRB results compatible with the Principle of Relativity?

• If so, is there some "medium-strength" locality principle, implied by Relativity but weaker than strong locality, that is compatible with EPRB experiments?

• What about determinism? Do the EPR and Bell analyses leave open the possibility of deterministic theories?

We will now explain why it is believed that EPRB experiments do not violate the Principle of Relativity, and suggest that "signal locality" is a useful medium-strength locality principle, since it distills the requirements of relativity and chronology protection (the absence of causal paradoxes [22]). We will acknowledge that signal locality contains concepts such as "control" that are not usually present in physical principles, and argue that, although signal locality is compatible with determinism, nature must have some inherent elusiveness, perhaps indeterminism or perhaps some form of uncontrollability, in order for the EPRB results to be consistent with signal locality and hence with Relativity and chronology protection.

A. EPR and the Principle of Relativity

To quote Bell himself, "one of my missions in life is to get people to see that if they want to talk about the problems of quantum mechanics-the real problems of quantum mechanics-they must be talking about Lorentz invariance" [23]. In this quote, "Lorentz invariance" is just the Principle of Relativity, which states that the laws of physics are the same in all inertial reference frames, so the laws of physics are invariant under the Lorentz transformations that relate different reference frames to each other.

So, is the faster-than-light connection between distant photons that we see in EPRB experiments compatible with the Principle of Relativity? There is evidence that it is, but not in the straightforward way that one might assume. Naively one might say that of course the EPRB results are consistent with the Principle of Relativity, because they agree with the predictions of quantum mechanics, and there is a relativistic, Lorentz-invariant, formulation of quantum mechanics, namely quantum field theory. It is true that most presentations of quantum field theory seem Lorentz invariant because they focus on expectation values and do not discuss the measurement postulate (instantaneous wavefunction collapse induced by the measurement process). But textbook quantum mechanics, including quantum field theory, needs the measurement postulate to explain how unique experimental results arise from measurements (the "macro-realism" assumption of Sec. III, discussed further in Sec. VII). There is no Lorentz-invariant version of measurement-induced wavefunction collapse that is compatible with the EPRB results [3]. However, this does not rule out the possibility that there may be other Lorentz-invariant theories that can explain the EPRB results. In fact, in 2006 an example was proposed: a version of quantum mechanics where the wavefunction occasionally collapses spontaneously in a Lorentz-invariant way [24]. Whether or not this theory is a correct description of nature, it seems to provide an existence proof that EPRB results are compatible with the Principle of Relativity.

B. Different forms of locality

If the Principle of Relativity is compatible with EPRB experiments but strong locality is not, then strong locality does not follow from the Principle of Relativity. However, there is another locality principle, signal locality, that does plausibly follow from relativity combined with chronology protection (no causal paradoxes). To set the context for our discussion of signal locality, here is a quick survey of various requirements that can be thought of as expressing ideas of locality, along with a summary of how compatible each is with the EPRB experimental results:

1) Strong locality (or local causality): after taking into account everything in its past light cone, the probability of an event is not affected by additional information about things that happened outside its past light cone. As we have seen, this is disproven by EPRB experiments.

2) Information must be transmitted no faster than light.
This is also disproven by EPRB experiments, since the result of the measurement on one photon contains information about the measurement performed on the other that did not come from the backward light cone.

3) Signal locality: signals can travel no faster than light. This is compatible with EPRB experiments, but at a price, as we will describe below.

4) Energy or other conserved quantities must travel no faster than light. This is compatible with EPRB experiments, since there is no evidence that any physical substance travels from one photon's measurement site to the other's.

5) The Principle of Relativity: the laws of physics are the same for any observer who is not accelerating (any "inertial frame of reference"). As discussed above, this is compatible with EPRB experiments.

C. EPR and Signal locality

In discussing signals, the essential point is that signalling is more than the transfer of information. Sending a signal means having a controllable means of transferring information. Control, however, is based on high-level concepts such as agency and free will, and such concepts are not usually invoked in fundamental physical principles. Bell complained that signal locality "rests on concepts which are desperately vague, or vaguely applicable. The assertion that 'we cannot signal faster than light' immediately provokes the question: Who do we think we are?" [25]. In view of this concern, we will proceed by treating signal locality as a property (of theories) that we hope will be implemented by some more fundamental feature of the theory.

Theories with the property of signal locality are attractive because they can obey the Principle of Relativity (no preferred inertial frame of reference) while maintaining chronology protection, i.e. avoidance of causal paradoxes. There is danger of a causal paradox if someone can send a signal to themselves in the past, since the person could, after receiving the signal, decide (assuming free will) not to send it. (For a discussion not assuming free will see Ref. [26].) To make sure that this cannot happen, we have to ensure that the sender of controllable information (signals) is always in the past of the receiver. In a theory that obeys the Principle of Relativity, this means that signals must go slower than light, since only then will all reference frames agree on the time ordering of the sender and receiver.

We can now see that EPRB experiments, which show that information can be transferred faster than light, also require some accompanying element of uncontrollability in nature in order to make sure that this does not translate to a violation of signal locality. We can imagine two ways to preserve signal locality in the face of the results of EPRB-type experiments.

(a) Nature is indeterministic. The measurement outcomes are uncontrollable because nature is fundamentally indeterministic. In an indeterministic theory there is a random component to the evolution and/or measurement of the photons, so we have to rely on some sort of faster-than-light influence to produce the consistent agreement in the results of space-like separated same-axis measurements and also to produce the different-axis correlations that violate the Bell inequality. But this does not allow superluminal signalling because the measurement results are not determined by anything: they are inherently random and therefore uncontrollable. In this case signal locality follows from the requirement of "parameter independence" (see Appendix B). Textbook quantum mechanics is an example of an indeterministic parameter-independent theory. The instantaneous collapse of the wavefunction is the faster-than-light influence that ensures same-axis agreement.
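To illustrate why such an indeterministic theory cannot be used for signalling, the sketch below (our own illustration, using the same assumed cos²(δ) rule and filter angles as in the earlier sketch) builds the joint outcome distribution for every pair of settings and checks that the marginal statistics on one side never depend on the filter chosen on the other side. This is parameter independence at work.

```python
import math

# Sketch of parameter independence: the joint outcome distribution
# depends on *both* filter settings, but the marginal seen on one side
# never depends on the remote setting (same assumed cos^2 rule and
# angles as in the previous sketch).
angles = {"A": 0.0, "B": 60.0, "C": 120.0}

def joint(f1, f2):
    """p(o1, o2) for outcomes +/- given filter settings f1 and f2."""
    same = math.cos(math.radians(angles[f1] - angles[f2])) ** 2
    return {("+", "+"): same / 2, ("-", "-"): same / 2,
            ("+", "-"): (1 - same) / 2, ("-", "+"): (1 - same) / 2}

for f1 in angles:
    marginals = []
    for f2 in angles:
        p = joint(f1, f2)
        marginals.append(round(p[("+", "+")] + p[("+", "-")], 6))  # p(side 1 gives "+")
    print(f"side-1 setting {f1}: p(+) for remote settings A, B, C = {marginals}")
# Every line prints [0.5, 0.5, 0.5]: whatever filter the remote
# experimenter chooses, side 1 sees "+" half the time, so the remote
# choice of setting carries no usable message.
```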
(b) Nature is deterministic but uncontrollable. The measurement outcomes are uncontrollable because, even though they are determined by the states of the objects in question ("hidden variables"), those states are themselves sufficiently uncontrollable, because of some physical law, that they cannot be used to send signals. In such a scenario the violation of Bell's inequality in the different-axis measurements arises from a faster-than-light influence that allows one measurement to affect the hidden variables of the object being measured at the other, space-like separated, location, but the experimenter at one location can never control the hidden-variable states well enough to be able to control the measurement results at the other end, and thereby send a message. Bohmian Mechanics is an example of a theory that follows this pattern [3,12].

In Appendix B we describe Jarrett's more formal way of exposing this dichotomy, by analyzing strong locality into two weaker conditions [10]. Violation of one of them ("Remote Outcome Independence") corresponds to possibility (a) above; violation of the other ("Parameter Independence" or "Remote Detector Independence") corresponds to possibility (b).

In Fig. 5 we show an augmented version of the simple theory-space diagram (Fig. 1), including the set of theories that obey signal locality (enclosed by the dashed (blue) line). If signal locality is valid then physics is restricted to the unshaded (white) area of allowed theories. These all violate strong locality, as required by the EPRB experimental results, but in a way that avoids superluminal signalling as described above.

VII. SUMMARY

The EPRB experiment uses spin-singlet photon pairs to test the degree to which the laws of nature obey some sort of principle of locality. If we accept some background assumptions (Sec. III) then the EPRB experimental results bring us to the following conclusions.

• The observed behavior violates the principle of strong locality (local causality), which states that no correlation-creating influence can travel faster than light. In a nutshell, this is because in the EPR experiment either you need a faster-than-light influence to make the same-axis agreement happen, or, if you try to save strong locality by assuming that the agreement arises from the photons being in states that determine in advance that their spins will have specific values, then you need a faster-than-light influence to obtain the violation of Bell's inequality for different-axis measurements.

• The EPRB results are compatible with the Principle of Relativity (equivalence of all inertial reference frames, also called Lorentz invariance).

• In order to avoid causal paradoxes we expect nature to display the property of signal locality (signals cannot travel faster than light). This means there must be sufficient indeterminism or uncontrollability in nature to prevent the EPRB correlations from being used for signalling.
If we prefer to explain the EPRB results by adopting an indeterministic theory (such as textbook quantum mechanics) then, while accepting that strong locality is violated, we can ensure that the theory is signal-local by imposing a weaker locality principle like "parameter independence", although, as described in Appendix B, the concept of parameter independence also involves non-fundamental concepts of the type that made Bell object to signal locality as a fundamental principle. Treatments of EPRB that favor this approach (e.g. [8,9]) tend to de-emphasize the violation of strong locality and frame EPRB as forcing us to choose between determinism and a weaker form of locality.

If we wish to preserve determinism then we must come up with a theory (such as Bohmian mechanics) in which there are superluminal influences between the deterministically evolving hidden variables, but we face the challenge of constructing the theory so that it preserves signal locality by enforcing essential limits on our ability to control those variables, so they cannot be used for signalling.

In Sec. III we listed the background assumptions used in our analysis. We now briefly discuss the possibility of dropping those assumptions. For a fuller discussion see Ref. [5].

Dropping the assumption of macro-realism (experiments have unique outcomes) renders our entire analysis moot. However, an anti-macro-realist must then explain how it is that experiments always appear to have unique outcomes. Anti-macro-realist versions of quantum mechanics such as the many-worlds [27] or many-minds [28] interpretations lead to questions of how probabilistic predictions emerge and the role of decoherence [12].

It is possible to deny that the choices made by the experimenters can be treated as random. For example, a superdeterminist (e.g. [14]) would suggest that some mechanism ensures that those choices are always predetermined just so as to violate the Bell inequality. This seems difficult, given that there are many ways to design a pseudo-random number generator for each experimenter to use, including ones that are sensitive to events outside their shared past light cone λ (see Fig. 2 and Ref. [12]). Alternatively, a believer in retrocausality would suggest that the experimenters' choices could exert influence backwards in time on the state in which the particles were prepared. This calls for an explanation of why such retrocausality does not lead to violations of chronology protection [29].

Reichenbach's principle of common cause is not an essential assumption but it plays a fundamental role in science. An "operationalist" or "instrumentalist" would reject it [15,30], admitting that EPRB experiments violate strong locality, but maintaining that not all correlations can be explained in terms of causes that factorize correlations (see Sec. III), so this is not a sign of superluminal causal influences. However, since most of science consists of the search for the causes of correlations, the operationalist then has to explain which correlations call for such causal explanations and which do not. A Quantum Information theorist would also reject Reichenbach's principle, claiming that quantum entanglement can cause correlations without causing the individual events that exhibit the correlation [31].

In conclusion, the EPRB experiment exposes some of the complexity of the concept of locality. Strong locality, which seems simple and intuitively attractive, is violated by the results while the Principle of Relativity is not.
Signal locality can be preserved, but it is formulated in terms of concepts like "control" which seem out of place in a theory of physics. Quantum mechanics uses indeterminism to avoid superluminal signalling and impressively accounts for the unusual characteristics of the EPRB correlations (they are not attenuated with distance, and they only connect specific particles that were created in entangled states), but in textbook quantum mechanics the measurement-induced collapse of the wavefunction is instantaneous over all space and therefore not Lorentz invariant, so one natural goal is to find and empirically validate a Lorentz-invariant form of wavefunction collapse such as that proposed by Tumulka [24]. The EPRB results also leave open the possibility of theories that, unlike quantum mechanics, are deterministic, with signal locality ensured by limits on the controllability of the hidden variables. Bohmian mechanics is a well-known proposal, and there is an ongoing search for a Lorentz-invariant version of it [32].

FIG. 6. Strong locality can be understood as saying that the outcome at one detector is independent of both the settings of the remote detector, and the outcome of the remote measurement [10].

Appendix A

Using the notation of (A1) and defining p(AB) to be the probability that machine 1 deploys the A filter and machine 2 deploys the B filter, and so on, we can rewrite p_diff as a sum over all filter settings F ∈ {AB, BC, CA, BA, CB, AC} where the two detectors deploy different filters:

p_diff = Σ_F p(same behavior | F) p(F). (A4)

In our experiment, each filter is deployed at random, so all six combinations occur with equal probability:

p_diff = (1/6) [ p(same | AB) + p(same | BC) + p(same | CA) + p(same | BA) + p(same | CB) + p(same | AC) ]. (A5)

The labeling of photons and measuring machines as "1" and "2" is arbitrary, so with no loss of generality we can treat the BA filter deployment as being AB with the numbering of the photons and machines reversed, so p(+ − | AB) = p(− + | BA) and so on, so (A5) can be written

p_diff = (1/3) [ p(same | AB) + p(same | BC) + p(same | CA) ]. (A6)

Using Bell's original inequality (A2) we recover our inequality (A3).
Dried Root of Rehmannia glutinosa Prevents Bone Loss in Ovariectomized Rats

Dried root of Rehmannia glutinosa is a kidney-tonifying herbal medicine with a long history of safe use in traditional folk medicine for the treatment of joint diseases. This study was conducted to investigate the prevention of bone loss by a standardized dried root of R. glutinosa in an ovariectomized (OVX) rat model of osteoporosis. The OVX rats were divided into five groups treated with distilled water, 17β-estradiol (E2, 10 µg/kg, once daily, i.p.) or dried root of R. glutinosa extract (DRGE; 30, 100, and 300 mg/kg, twice daily, p.o.) for eight weeks. We measured body, organ, and uterus weights; femur and lumbar vertebrae bone mineral density (BMD); and serum alkaline phosphatase (ALP) and estradiol levels. Treatment with DRGE 300 mg/kg significantly inhibited the OVX-induced BMD decrease in the femur and lumbar vertebrae (17.5% and 16.4%, respectively, p < 0.05) without affecting body, organ, or uterus weights. Also, the serum ALP level in the DRGE 300 mg/kg treated group was significantly decreased, but the serum estradiol level in that group did not change. These results show that DRGE is able to prevent OVX-induced bone loss without influencing hormones such as estrogen.

Introduction

Osteoporosis is characterized by a reduction in bone mass and microarchitectural deterioration of bone tissue, resulting in skeletal fragility and susceptibility to fractures [1]. The most common type of osteoporosis is the bone loss associated with estrogen deficiency in postmenopausal women [2]. In addition, secondary hyperparathyroidism, associated with calcium or vitamin D deficiency, may accelerate bone loss and increase the risk of developing osteoporosis. Both estrogen deficiency and secondary hyperparathyroidism are associated with a primary increase in bone resorption and an impaired bone formation response [3]. Hormone replacement therapy (HRT) has proven to be efficacious in preventing bone loss and reducing the incidence of skeletal fractures in postmenopausal women [4]. However, long-term HRT increases the risk of breast and endometrial cancer, thromboembolic events and vaginal bleeding [5]. Concerns about the adverse side effects of HRT have led to interest in the anti-osteoporotic activity of natural products.

Rehmannia glutinosa Libosch, which belongs to the family Scrophulariaceae, is one of the earliest known and most important edible crude herbs used for various medicinal purposes in East Asia. There are two types of R. glutinosa used as medicinal herbs, named Gun-Ji-Whang (non-processed root; dried rehmannia root) and Sook-Ji-Whang (processed root; steamed rehmannia root) in Korean according to the processing method [6]. Dried or steamed root of R. glutinosa has been used to reduce fever, activate blood circulation, tonify the kidney, and treat Yin deficiency syndrome; the two forms are used in quite different therapeutic applications, and the choice between them is strictly defined in Traditional Chinese Medicine (TCM) [7]. The root of R. glutinosa has also been reported to possess anti-tumor [8], anti-stress [9], anti-thrombotic [10], and hypoglycemic [11] effects. The major active components of the root of R. glutinosa are iridoid compounds such as catalpol and dihydrocatalpol, while other components are phenol glycosides, ionones, flavonoids, amino acids, inorganic ions, and microelements, which are responsible for its diverse bioactivities [7]. It was reported that steamed root of R.
glutinosa stimulates the proliferation of osteoblasts, while inhibiting the generation and resorptive activities of osteoclasts in bone metabolism [12]. The herbal formulation Yukmi-jihang-tang, consisting of seven kidney-nourishing herbs, was reported to reduce bone resorption both in vitro and in vivo by inhibition of phosphorylation of peptide substrates [13,14]. Recently, catalpol from fresh root of R. glutinosa has been reported to promote the proliferation of MC3T3-E1 osteoblasts. Although dried and steamed root of R. glutinosa are used in quite different therapeutic applications in TCM, dried root of R. glutinosa might also have potential effects in regulating bone metabolism, because the dried and steamed roots contain related main active constituents [15]. However, dried root of R. glutinosa has not received much attention concerning bone metabolism, and prevention of bone loss by dried root of R. glutinosa in an ovariectomized (OVX) rat model has not been investigated yet.

We have previously studied the acute effects of dried and steamed root of R. glutinosa (50% EtOH extraction) in an OVX rat model (unpublished data). Our findings demonstrated that four weeks of treatment with dried root of R. glutinosa extract (DRGE) significantly decreased the BMD loss in the femur compared to the control group, whereas the BMD loss was not significantly decreased in animals given steamed root of R. glutinosa extract (SRGE); however, this could have simply been because the changes in BMD in the DRGE treated group were less variable than in the SRGE treated group. That said, DRGE was more efficacious than an equivalent dose of SRGE in the OVX rat model, so we did not use SRGE for the long-term experiments and have now focused on whether long-term DRGE treatment decreases bone loss in OVX rats. Accordingly, we performed the DRGE treatments in rats in a pre-osteoporotic state.

In the present study we examined the prevention of bone loss by a standardized dried root of R. glutinosa in an OVX rat model. Body weight and bone mineral density (BMD) of the femur and lumbar vertebrae were determined weekly using dual energy X-ray absorptiometry (DXA). Serum alkaline phosphatase (ALP) concentration was measured by a biochemistry analyzer. Serum estradiol levels were also determined by a radioimmunoassay (RIA) kit.

HPLC Chromatograms for Standardization of DRGE

Dried root of R. glutinosa extract (DRGE) was monitored at 205 nm for catalpol (Figure 1). The content of catalpol was calculated for standardization. DRGE was standardized to contain 5.4 mg/g catalpol.

Bone Mineral Density of the Femur and Lumbar Vertebrae in Treatments of DRGE

Three weeks after the OVX operation, the OVX groups showed a significant decrease in right femur bone mineral density (BMD) and lumbar vertebrae (L1-L4) BMD compared to the sham group (p < 0.05). After eight weeks of treatment, the final femur BMD of the 300 mg/kg DRGE-treated group was significantly higher than that of the OVX-control group (17.5%, p < 0.01 vs. control, Figure 2A). Also, the lumbar vertebrae BMD of the DRGE 100 and 300 mg/kg-treated groups was significantly higher compared to the OVX-control group (14% and 16.4%, respectively, p < 0.05 vs. control, Figure 2B).

Weekly Body Weight in DRGE Treatments

Body weights increased over time in all groups, but they increased significantly more in the OVX groups than in the sham group.
A significant difference in body weight was observed between the E2 10 µg/kg treated group and the OVX-control group by two weeks after initiating administration. The body weight gain of the E2 10 µg/kg treated group was also significantly less than that of the OVX-control group. However, there was no significant difference in the body weight or body weight gain of the DRGE-treated groups during the experimental period (Figure 3).

Figure 2. (A) Effects of DRGE on BMD in the right femur and (B) lumbar vertebrae (g/cm²) of OVX rats by dual energy X-ray absorptiometry (DXA). These BMD values were determined weekly during the experimental period. Data are mean ± SD values (n = 12 per group). * p < 0.05, ** p < 0.01, significant difference from the OVX-control group.

Figure 3. (A) Effects of DRGE on body weight gain and (B) body weight (g) in OVX rats. The body weight was recorded weekly during the experimental period. The body weight gain was calculated by the equation: final body weight − initial body weight. Data are mean ± SD values (n = 12 per group). * p < 0.05, significant difference from the OVX-control group.

Uterus and Organ Index in Treatments of DRGE

OVX caused atrophy of uterine tissue, indicating the success of the surgical procedure, and in the E2 10 µg/kg treated group the uterus index (mg/g) increased significantly compared to the OVX-control group. However, the DRGE-treated groups did not show an effect on the uterus index following OVX (Figure 4A). The index of heart, liver, spleen, and kidney was not significantly different in any group either (Figure 4B).

Figure 4. Effects of DRGE on uterus and organ indexes in OVX rats. Uterus and organs were dissected, washed with saline, and immediately weighed for analysis. Data are mean ± SD values (n = 12 per group). * p < 0.05, significant difference from the OVX-control group.

Serum ALP and Estradiol Concentration in Treatments of DRGE

The ALP level in the OVX-control group was significantly higher compared to the sham group. After eight weeks of treatment, the DRGE 300 mg/kg treated group displayed significantly lower serum ALP levels compared to the OVX-control group (Figure 5A). In addition, serum estradiol in the DRGE 300 mg/kg treated group was not significantly different from that in the OVX-control group (Figure 5B).

Discussion

Our findings demonstrate that eight weeks of treatment with DRGE significantly decreased the BMD loss in the femur and lumbar vertebrae and inhibited the increase in serum ALP levels compared to the OVX-control group, without the influence of hormones such as estrogen. Bone loss caused by estrogen deficiency in both humans and experimental animals is primarily due to an increase in osteoclastic bone resorption [16]. OVX rats, which exhibit most of the characteristics of human postmenopausal osteoporosis [17], have been widely used as a model for the evaluation of potential osteoporosis treatments [18]. Consistent with previous reports, OVX resulted in a significant decrease in femur and lumbar vertebrae BMD after eight weeks. This BMD loss was accompanied by a significant increase in bone remodeling, as evidenced by the enhanced bone turnover markers. An increase in serum ALP levels, the most widely used biochemical bone turnover marker [19], was observed in OVX rats [20]. Although we did not determine the 3-D architecture of trabecular bone within the distal metaphyseal femur region, oral administration of DRGE at a dosage of 300 mg/kg significantly decreased the BMD loss in the femur and lumbar vertebrae, which was reflected by the decrease in serum ALP levels compared to the OVX-control group.
These results suggest that DRGE decreases bone loss by inhibiting bone remodeling in OVX rats.

OVX dramatically increases body weight, while E2 treatment completely prevents this increase [21]. Although the mechanisms by which OVX induces an increase in body weight are not clear, estrogen deficiency induces body fat accumulation and subsequently causes an increase in body weight [22]. Heine et al. demonstrated that estrogen receptor (ER) knockout mice have higher fat mass and lower energy expenditure than wild-type mice [23]. Estrogen may be involved directly in rat energy metabolism by binding to ER within the abdominal and subcutaneous fat tissues [24]. In our results, DRGE did not affect OVX-induced body weight gain or serum estradiol concentration. These results suggest that DRGE did not display estrogen-like activity in OVX rats.

Estrogen expresses its activities by binding to different ERs, ERα and ERβ. ERβ is more abundant than ERα in bone tissue, while ERα is mainly distributed in reproductive cells and is the dominant receptor mediating the most obvious effects of E2 in breast and uterus [25]. As mentioned above, DRGE decreased bone loss without resulting in an increased uterus weight. Although measurement of uterus weight is relatively crude, these results indicate that DRGE might show a higher affinity for ERβ than for ERα, which would produce optimal action in preventing bone loss without stimulating an unwanted proliferation of the uterine tissues. Taken together with our findings from the E2 treated group, these results suggest that DRGE might have anti-osteoporotic effects in OVX rats without the influence of hormones such as estrogen. However, further mechanistic studies are needed to clarify whether the bone-loss-preventing effects of DRGE are elicited by regulating the expression of ERβ.

There have been studies on the biological activities of iridoid glycosides, which are potent antioxidants and free radical scavengers [26]. It has been demonstrated that oxidation-derived free radicals increase bone resorption by promoting osteoclastic differentiation [27]. Catalpol has also been shown to promote the proliferation and differentiation of MC3T3-E1 cells, a mouse osteoblastic cell line, in vitro. The effects of DRGE on bone thus appear to be related to its high content of iridoid glycosides such as catalpol. Kim et al. demonstrated that R. glutinosa inhibits the secretion of both interleukin-1 (IL-1) and tumor necrosis factor-α (TNF-α) from mouse astrocytes [28]. These cytokines are well-known regulators of bone metabolism. IL-1 is known as a highly potent bone resorptive cytokine [29], and TNF-α appears to synergize with IL-1β in its ability to increase bone resorption [30]. From the above reports, it is also hypothesized that R. glutinosa might have potential effects in regulating bone metabolism.

Sample Preparation and HPLC Analysis

Dried root of R. glutinosa was purchased from Yaksudang Co. (Seoul, Korea). The sample was identified by Dr. HeeSoon Shin and a voucher specimen (#NP-1031) was deposited in the Functionality Evaluation Research Group, Korea Food Research Institute, Seongnam, Korea. The dried root of R. glutinosa (300 g) was extracted with 50% ethanol (3,000 mL) for 3 h at 80 °C in a reflux apparatus. The extracts were filtered and concentrated under reduced pressure, and samples were lyophilized to yield a dark yellow powder. The yield of dried root of R. glutinosa extract (DRGE) was 13.8%.
The quantitative authentication of DRGE was performed on a high performance liquid chromatography (HPLC) system equipped with a Waters 1525 pump, a 2707 autosampler and a 2998 PDA detector. The chromatographic separation was achieved at 30 °C on a Waters Sunfire™ C18 (250 mm × 4 mm i.d., 5 μm particle size) column. DRGE was monitored at 205 nm for catalpol. The run time was set at 30 min, the flow rate was 1.0 mL/min and the sample injection volume was 10 μL. Mobile phases A and B were acetonitrile and water, respectively. Gradient elution was as follows: 0-10 min 0-10% solvent A, 10-20 min 15-45% solvent A, 20-30 min 45% solvent A. The content of catalpol was calculated for standardization. DRGE was standardized to contain 5.4 mg/g catalpol.

Animals and Treatments

Female Sprague-Dawley (SD) rats, 8 weeks old, were purchased from Samtako, Gyeonggi-do, Korea. Animals were housed two rats per cage in an air-conditioned room at 23 ± 1 °C, 55-60% relative humidity, and a 12 h light/dark cycle (07:00 lights on, 19:00 lights off), and were given a regular laboratory rodent diet. All animal experiments were carried out according to the guidelines of the Korea Food Research Institutional Animal Care and Use Committee (KFR-M-13003). After acclimatization for 1 week, 9-week-old female SD rats were anesthetized with 2% isoflurane and the ovaries were removed bilaterally. A sham operation, during which the ovaries were just touched with forceps, was performed on the sham group. After a recovery period of 1 week following surgery, rats were divided into the following six treatment groups: (1) sham + vehicle, (2) OVX + vehicle, (3) OVX + 17β-estradiol (E2, 10 μg/kg once daily, i.p.), (4) OVX + DRGE 30 mg/kg, (5) OVX + DRGE 100 mg/kg, (6) OVX + DRGE 300 mg/kg. DRGE at a dosage of 30 mg/kg in rats corresponds to 1.8 g of DRGE for a 60 kg human subject, an amount of extract obtained from approximately 13 g of the raw material. Finally, we set the dosages of DRGE at 30, 100, and 300 mg/kg, spaced at approximately threefold intervals. DRGE was dissolved in distilled water for oral administration at the desired doses in a volume of 5 mL/kg twice daily, at 08:00 am and 08:00 pm. E2 was dissolved in distilled water containing 1% dimethyl sulfoxide (DMSO) and 0.1% Tween 20. All groups were treated for eight weeks. During the experimental period, body weight and femur and lumbar vertebrae bone mineral density (BMD) were determined weekly. At the end of the treatment period, the rats were fasted for 12 h and blood was collected via the abdominal aorta. Uterus tissue and other organs were dissected, washed with saline solution, and weighed for analysis. Uterus and organ indexes (mg/g) were calculated by dividing the uterus and organ weights by the body weight.

Bone Mineral Density Measurements

The BMD of the femur was measured by a PIXImus (GE Lunar PIXImus, GE Healthcare, WI, USA) dual energy X-ray absorptiometer (DXA), equipped with appropriate software for bone density assessment in small laboratory animals. Calibration of the instrument was conducted as recommended by the manufacturer. Quality control with the BMD (0.0553 g/cm²) and percentage fat composition (16.7%) of the phantom was also performed each time the instrument was switched on. All rats were placed in the same orientation.

Serum ALP and Estradiol Analysis

The serum samples were prepared by centrifugation of the collected blood samples (1,013 × g for 15 min at 4 °C), then stored at −80 °C for biochemical determinations.
Serum ALP concentrations were measured with a VetTest 8008 (IDEXX Lab Inc., Westbrook, ME, USA). Serum hormone levels were determined by radioimmunoassay (RIA). The estradiol RIA was performed according to the instructions accompanying a Coat-a-Count kit (Diagnostic Products, Los Angeles, CA, USA).

Statistical Analysis

All data were presented as the mean ± standard deviation (SD). The effects of different treatments were compared by one-way ANOVA, followed by the post hoc Tukey test for multiple comparisons, using GraphPad Prism 5 (GraphPad Software Inc., La Jolla, CA, USA). p < 0.05 was considered statistically significant.

Conclusions

In conclusion, DRGE is able to prevent OVX-induced bone loss without the influence of hormones such as estrogen, suggesting that DRGE may be a reasonable natural alternative for the prevention of postmenopausal osteoporosis. However, further detailed mechanistic investigation of the anti-osteoporotic effects of DRGE on bone metabolism is required.
Legal Assistance for Accused and Victims in Austrian Pre-Trial Criminal Proceedings

Based on two research projects, this paper evaluates the legal assistance available to accused and victims in pre-trial criminal proceedings in Austria after the implementation of a revised law on pre-trial proceedings in 2008. The research projects combined legal and empirical research. The project for the scientific evaluation of the realization of the criminal procedure reform law (PEUS) analysed approx. 5000 pre-trial files and additionally conducted 85 interviews with police officers, prosecutors, judges and lawyers. The results of the empirical research provide insight into the extent to which, in practice, the accused in Austrian criminal proceedings has access to legal advice. The paper comes to the conclusion that, by strengthening victims' access to legal representation in Austrian criminal proceedings, in numerous cases the actual division of power to influence the proceedings has shifted from the accused to the victim.

Introduction

According to the Austrian Code of Criminal Procedure (CCP), the accused and the victim have the right to legal advice. This right is important for law enforcement, especially at the beginning of the procedure, where the course for the further proceedings is set. Police and public prosecutors are obliged to start pre-trial investigations if they find sufficient indications that a criminal offence has been committed. The Austrian criminal procedure is based on the principle of objectivity and on the principle of ex-officio investigations (§ 3 CCP). Both the accused and the victim have the right to information upon the allegation and their rights (§§ 6 (2), 10 (2) CCP). The proper understanding of the function of the law enforcement agencies requires constant critical scrutinising of investigation steps, a task which is generally supported by the actions of legal advisers, e.g. by filing motions to admit further evidence.

In the course of two empirical research projects which have been carried out during the last years in Austria, the role of legal assistance at the beginning of pre-trial proceedings has been analysed. The project for the scientific evaluation of the realization of the criminal procedure reform law (PEUS) analysed approx. 5000 pre-trial files and additionally conducted 85 interviews with police officers, prosecutors, judges and lawyers [1,2].

The project Pre-trial Emergency Defence (PED) investigated access to defence rights and legal assistance at the beginning of criminal proceedings in Germany, Croatia, Slovenia and Austria. Data was collected by means of questionnaires handed out to the parties to the proceedings (approx. 770 were returned). Furthermore, 86 interviews with policemen, prosecutors, judges, lawyers and, in Austria, also with accused were conducted [3,4].

The results of these empirical projects provide insight into the extent to which, in practice, the accused in criminal proceedings has access to a legal adviser. PEUS also explored the extent to which victims make use of legal assistance, which may be provided to the victim free of charge within the legal framework (§§ 66 (2), 67 (2) CCP).

The PEUS results regarding the accused's and the victims' right to legal assistance, as well as identified relations and special interactions, shall be presented in the following.
The Parties to the Proceedings and Their Right to Legal Assistance The Right to Legal Assistance for the Accused – An Overview The right to information about his/her rights as well as the allegations (§ 50 CCP) and, in this context, the right to bring in a defence lawyer (§ 58 CCP) or, respectively, to be provided with legal aid in the case of social indigence and/or if required "in the interests of justice" (§ 61 (2) CCP), and further the right to participate in certain investigative measures (§§ 150, 165 (2) CCP), are considered the fundamental procedural rights of the accused. Furthermore, the right to make a statement (§ 7 CCP), the right to file motions to admit evidence (§ 55 CCP), the right to legal remedies pre-trial (§§ 67, 106, 108 CCP) and the right to appeal against the judgment (§§ 281 ff CCP) should be named. The Right to Legal Assistance for the Victim – An Overview It should be noted that the CCP defines the term "victim" rather extensively (see § 65 No. 1 CCP). Thus, victims are not only direct victims of violence or sexual criminal offences, but also indirect victims, meaning certain relatives of a killed person and anyone else who may have suffered a loss or damage due to the criminal offence or whose legally protected interests may have been impaired. Based on this broad definition, all natural persons and legal entities entitled to claim indemnification may be a victim according to the CCP. § 66 (1) CCP gives an overview of the most important rights to which all different types of victims are entitled. Thus, the victim has the right to be represented by counsel (§ 73 CCP), the right to information about the subject of the procedure and his/her rights (§ 70 (1) CCP), the right to have access to the file (§ 68 CCP), the right to participate in certain investigative measures (§§ 150, 165 (2) CCP), the right to pre-trial remedies (§§ 87, 106, 195 CCP) and, to some extent, also the right to appeal against the judgement. In 2008 the possibility for the victim to appeal for nullity against a verdict of not guilty was established within narrow limits (§ 282 (2) CCP). Especially this last remedy is not undisputed, since it allows two parties to the proceedings to challenge the verdict to the detriment of the defendant, thereby possibly distorting the balance of the procedure [1]. These central rights, which are applicable to all types of victims in the criminal procedure, have to be differentiated from the psycho-social and legal accompaniment (§ 66 (2) CCP), which is classified as a special right exclusively of victims in the sense of § 65 No. 1 lit. a and b CCP. No such accompaniment is provided where the sole purpose is to assert compensation claims. However, there is a "flaw" in the system of accompaniment during proceedings. Even though the law provides for a subjective right, it lacks legal enforcement possibilities. Certain private associations are funded annually by the Federal Ministry of Justice for the implementation of psycho-social and legal accompaniment during criminal proceedings. Their services, however, are limited by these monetary funds. Ultimately, victims do not have any means to enforce their right to legal accompaniment once these financial resources have been exhausted [5].
Legal Assistance in Practice and Its Role in the Proceedings Legal Adviser for the Accused According to the results of project PEUS, less than 8% of the accused were represented by a legal adviser. Thus, in the vast majority of cases there is no legal support (Table 1). There is, however, a remarkable difference between proceedings within the jurisdiction of district courts ("BAZ-proceedings") and those before regional courts ("St-proceedings"); but even in St-proceedings, not even one fifth of all cases are supported by a lawyer. These figures are significantly lower still when looking specifically at whether a legal adviser was present during the accused's questioning. Altogether, not even 2% of all questionings of accused persons take place in the presence of a legal adviser. This figure stays as low as 3% in St-proceedings as well (see Table 2). Legal Adviser for the Victim 7.2% of the victims are represented by a legal adviser during investigative proceedings. In BAZ-proceedings the share of legal representation is higher on the victim's side than on the accused's, whereas in St-proceedings the situation is inverted (see Table 3). The reason for this might be that in less severe areas of crime (e.g. bodily injury in street traffic) many insurance companies provide a legal adviser for the victim, since the proceedings are closely associated with their obligations to pay insurance benefits. Aside from that, the higher share of legal representation in BAZ-proceedings can also be explained by the fact that corporate counsels often enforce the victim's interests where the victim is a legal entity rather than a natural person. The reason why victims as well as the accused are more likely to be represented in St- rather than BAZ-proceedings can be seen in the fact that the vast majority of serious violent and sex crimes fall within the jurisdiction of regional courts, where legal accompaniment is more common than within the jurisdiction of district courts. Legal Adviser for the Accused and the Victim in the Same Proceedings Comparing legal representation of the accused with that of the victim, there are cases in which the accused is represented by a lawyer during the investigative proceedings whereas the victim is not (see Table 4). These cases amount to 12% in St-proceedings and 2.4% in BAZ-proceedings. On the other hand, in more than 7% of St-proceedings the victim is represented while the accused has no legal adviser. In BAZ-proceedings this share accounts for 4.5%. That means that especially in St-proceedings there are some cases in which the unrepresented accused is facing a represented victim. This occurs even more so in BAZ-proceedings, which again can be explained by corporate counsels. Legal Advisers and Access to Files It is striking that in 99% of all cases the accused did not make use of his/her right to inspect the files if he/she was unrepresented (see Table 5). If the accused had a legal adviser, access to the file was requested in only one third of all St-proceedings according to project PEUS. This surprising result, however, has been put into perspective by interviews with legal practitioners, which have shown that access to the file is not always documented as thoroughly as it should be. At some public prosecutor's offices only the refusal of access to files is recorded, but not whether the right is granted.
Likewise, the evaluation of the inspection of files by victims and their legal advisers shows that in more than 90% of all 69 cases access to files was requested by the legal adviser rather than by the victim (see Table 6). Legal Remedies in Pre-Trial Investigative Proceedings It has been found that remedies do not occur frequently in pre-trial investigative proceedings. If remedial actions are taken, it is mainly done so by the legal adviser. The analysis of nearly 5000 investigative proceedings has shown that in only two of those 5000 cases objections were raised against investigative measures of the police or the prosecution (§ 106 CCP), and both were raised by a lawyer. Complaints were made in eleven cases, and again more than half of those were lodged by a lawyer. These findings lead to the assumption that there is a strong connection between legal representation and the taking of remedies. If this is true, the fact that in some cases the victim is represented whereas the accused is not takes on new significance. Conclusions and Crime Policy Proposals When strengthening the rights of victims, it has to be emphasized that the main interest of criminal and criminal procedure law is not to satisfy the victim's interests, but to clarify crimes and react to them. Indirectly, criminal law serves as a means to control the actions of all members of society. The principle of fairness in criminal procedure law demands that the accused be granted possibilities to defend him-/herself against the public prosecutor's actions (Article 6 ECHR). In order to keep the prosecutorial/police power of the state tolerable, the law enforcement agencies are required to maintain utmost objectivity, and the principle of examination of the facts by the office of its own motion (ex-officio investigation) and the "manuduction obligation" apply. When expanding the rights of victims during the last years, the legislator has tried to keep a certain "balance in relation to the rights of the accused". The different options for accessing legal support (explained, for example, by the fact that the accused has the right to legal representation only if he/she is eligible due to his/her social needs, whereas the victim may be granted legal accompaniment regardless of such social requirements) have led to the fact that in numerous cases the actual division of power to influence the proceedings has shifted from the accused to the victim. Due to the correlation between the exercise of procedural rights and the use of a legal adviser, we have to assume that ultimately the "manuduction obligation" of the law enforcement agencies cannot assure procedural fairness in the sense of a balance of interests. In order to restore the balance of all parties to the procedure, access to legal advisers has to be improved for the accused. In the interest of the proper administration of justice, access to legal aid has to be facilitated in such a way that, at least in cases where the victim is represented in pre-trial investigative proceedings, the accused should be granted legal advice too, if he/she is eligible due to his/her social needs. In addition, cases of necessary defence (or legal aid) in difficult cases should be extended to pre-trial investigative proceedings.
In the course of the planned reform of the main and appellate procedures in Austria, where a possible extension of the rights of victims will be discussed, particular attention should be paid to the balance between the parties to the procedure. Thus, the emphasis should be put on "care for the victim during the proceedings" rather than on the victim's right to remedies to enforce an "alleged right to inflict punishment". Table 2. Presence of a legal adviser during the questioning of an accused (percentages in columns). Differences between BAZ- and St-proceedings are significant at p < 0.001; number of questionings in total = values in brackets. Table 3. Victim represented by legal adviser (percentages in columns). Differences between BAZ- and St-proceedings are significant at p < 0.001; cases with victims = values in brackets.
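The table captions above report significance tests comparing categorical distributions across BAZ- and St-proceedings. As a purely illustrative sketch (the contingency counts below are invented placeholders rather than PEUS data, and the original analysis may have used a different test statistic), such a comparison of representation rates could be run as follows:

```python
# Minimal sketch of a significance test comparing representation rates across
# proceeding types; the contingency counts are hypothetical, NOT the PEUS data.
from scipy.stats import chi2_contingency

# Rows: BAZ- and St-proceedings; columns: accused represented / not represented.
table = [
    [120, 2880],  # hypothetical BAZ-proceedings
    [310, 1690],  # hypothetical St-proceedings
]

chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```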
2018-12-24T12:57:55.359Z
2012-12-13T00:00:00.000
{ "year": 2012, "sha1": "7c571013735b6decb83fd619cf0853ebc25458b3", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=25731", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "7c571013735b6decb83fd619cf0853ebc25458b3", "s2fieldsofstudy": [ "Law" ], "extfieldsofstudy": [ "Political Science" ] }
18159996
pes2o/s2orc
v3-fos-license
Rapid Perturbation in Viremia Levels Drives Increases in Functional Avidity of HIV-specific CD8 T Cells The factors determining functional avidity and its relationship with the broad heterogeneity of antiviral T-cell responses remain only partially understood. We investigated HIV-specific CD8 T-cell responses in 85 patients with primary HIV infection (PHI) or chronic (progressive and non-progressive) infection. The functional avidity of HIV-specific CD8 T cells was not different between patients with progressive and non-progressive chronic infection. However, it was significantly lower in PHI patients at the time of diagnosis of acute infection and after control of virus replication following one year of successful antiretroviral therapy. High-avidity HIV-specific CD8 T cells expressed lower levels of CD27 and CD28 and were enriched in cells with an exhausted phenotype, i.e. co-expressing PD-1/2B4/CD160. Of note, a significant increase in the functional avidity of HIV-specific CD8 T cells occurred in early-treated PHI patients experiencing a virus rebound after spontaneous treatment interruption. This increase in functional avidity was associated with the accumulation of PD-1/2B4/CD160-positive cells, loss of polyfunctionality and increased TCR renewal. The increased TCR renewal may provide the mechanistic basis for the generation of high-avidity HIV-specific CD8 T cells. These results provide insights into the relationships between functional avidity, viremia, T-cell exhaustion and TCR renewal of antiviral CD8 T-cell responses. Introduction CD8 T cells play a critical role in antiviral immunity, and a large number of studies in both human and murine models indicate that virus-specific CD8 T cells are directly involved in the control of virus replication and disease progression [1,2,3,4,5,6,7]. Functional avidity of T cells, also defined as antigen (Ag) sensitivity, is thought to be a critical component of antiviral immunity. Functional avidity reflects the ability of T cells to respond to a low Ag dose and is determined by the threshold of Ag responsiveness. There is a general consensus that high functional avidity CD8 T-cell responses are of higher efficacy against cancers [8] and acute virus infections [9]. However, their relevance in chronic persistent virus infections and established tumors [10,11,12] remains to be determined, since conflicting results were obtained in these contexts [13,14] as well as in HIV infection [15,16,17,18,19]. HIV-specific CD8 T-cell responses in non-progressive infection were associated with high avidity and superior variant recognition [11,12,20,21], whereas other studies indicated similar avidity between patients with progressive and non-progressive chronic infection [16,18,19,22,23]. In this regard, we have previously shown that polyfunctional virus-specific CD8 T-cell responses during chronic virus infections were predominantly of low functional avidity [24]. Furthermore, it is also well established that high functional avidity T-cell responses preferentially led to viral escape and T-cell clonal exhaustion [10,24,25,26]. However, the factors determining the level of T-cell functional avidity and its relationship with the phenotypic and functional heterogeneity of T-cell responses are only partially understood [15,16,17,18,19,22]. Functional avidity is based on the ability of T cells to respond following stimulation with a cognate Ag, and it is well established that responding CD8 T cells are clonally heterogeneous (i.e. oligoclonal) [27,28,29,30].
Therefore, the clonotypic composition of the responding T-cell population (and its TCR diversity) can influence functional avidity [27,28]. Indeed, we and others reported that HIV-specific CD8 T cells responding to various epitopes harbor a diverse TCR repertoire in chronically-infected patients [31,32,33]. HIV-specific CD8 T cells in primary HIV infection (PHI) are temporally associated with the initial control of viremia [1]. Lichterfeld and colleagues suggested that high-avidity HIV-specific CD8 T-cell responses are present during early infection (defined as HIV seroconversion within 6 months) and are then preferentially lost over time [33]. In the present study, we have performed a comprehensive cross-sectional characterization of HIV-specific CD8 T-cell responses in patients with PHI or chronic (progressive and non-progressive) HIV infection, both in steady-state conditions and following virus rebound. The primary observations of the present study indicate that a) the functional avidity of HIV-specific CD8 T cells is not different between patients with progressive and non-progressive chronic infection, b) the functional avidity of HIV-specific CD8 T cells is significantly lower in PHI patients as compared to patients with chronic infections, c) increased functional avidity is associated with T-cell exhaustion and lack of expression of markers of co-stimulation, and d) a great increase in functional avidity is observed after virus rebound following spontaneous interruption of antiretroviral therapy and is associated with increased TCR renewal. Lower functional avidity of HIV-specific CD8 T-cell responses during acute infection We recruited 85 HIV-infected patients and performed a cross-sectional analysis of the functional avidity of HIV-specific CD8 T-cell responses. The distinct groups included a) 37 patients at a very early stage of acute infection (i.e. prior to seroconversion and with an incomplete western blot; hereafter referred to as PHI), b) 39 patients with progressive chronic infection (i.e. typical progressors; hereafter referred to as CP) and c) 9 patients with non-progressive chronic infection (i.e. LTNP) (Table S1). We first investigated the 115 HIV-specific CD8 T-cell responses obtained in 26 untreated PHI (PHI-B) patients, 19 untreated CP (CP-B) patients and 9 LTNP (Fig. 1A-B). As described in the Methods, blood mononuclear cells were stimulated with decreasing concentrations of the cognate peptides and the peptide dose able to induce half of the maximal response (i.e. effective concentration 50%; EC50) was determined (Fig. 1A). The results of this analysis indicated that the functional avidity of HIV-specific CD8 T cells was lower in PHI-B as compared to CP-B or LTNP patients (both P < 0.0001; Fig. 1A-B), while there was no difference between CP-B and LTNP (Fig. 1A-B). However, there were no significant differences in the magnitudes of HIV-specific CD8 T-cell responses among the groups (Fig. 1C) and no significant association between the functional avidity and the magnitude of HIV-specific CD8 T-cell responses (Fig. S1A). Furthermore, the differences in functional avidity of HIV-specific CD8 T cells between the different cohorts were not influenced by distinct peptide/MHC class I associations, since these differences remained significant when common epitopes (i.e. epitopes recognized by patients from distinct cohorts) were analyzed (Fig. 1D). Of note, the B*2705-KRWIILGLNK (i.e.
KK10) epitope has been previously reported as a protective epitope [28,34,35] and it was one of the common epitopes recognized by CP-B and LTNP patients. While the functional avidity of B*2705-KK10-specific CD8 T-cell responses in CP-B and LTNP patients was almost identical, it was rather low as compared to the other HIV-specific CD8 T-cell responses from both groups (Fig. 1D). Taken together, these results indicate a lack of association between the functional avidity of HIV-specific CD8 T cells and virus control, consistent with the recent study from Chen and colleagues [22]. Furthermore, the HIV-specific CD8 T-cell responses in acute infection have lower functional avidity than in chronic infection. Characterization of HIV-specific CD8 T-cell responses disappearing after acute infection It has been previously reported that high functional avidity HIV-specific CD8 T-cell responses are selectively deleted early after acute HIV infection [33]. We addressed this issue by repeating the epitope mapping in patients with acute infection after one year of antiretroviral therapy (ART) (PHI-T1Y). Forty-five HIV-specific CD8 T-cell responses were identified in PHI-B patients using ICS. Among these 45 responses, 38 (85%) remained detectable after one year of ART whereas 7 (15%) became undetectable. Interestingly, at the time of acute infection, these 7 responses were already of lower magnitude as compared to the 38 responses which remained detectable (P = 0.03; Fig. 2A). Furthermore, the functional avidity of HIV-specific CD8 T-cell responses at the time of acute infection was not different between the 7 lost and the 38 remaining responses (P > 0.05; Fig. 2A). These results indicate that the minor proportion of HIV-specific CD8 T-cell responses selectively lost after acute infection did not have higher functional avidity. Functional avidity of HIV-specific CD8 T-cell responses after one year of ART It has been suggested that Ag load may influence the responsiveness of HIV-specific CD8 T cells [36]. To address this issue, we assessed whether the functional avidity of HIV-specific CD8 T-cell responses would change after control of virus replication, i.e. after 1 year of successful ART, in patients with acute or chronic infection. Of note, 46 additional HIV-specific CD8 T-cell responses were considered; 17 responses were identified in the initial 26 PHI patients following re-mapping after 1 year of ART and 29 responses were identified in 11 additional PHI patients only mapped after 1 year of ART. Both magnitude and functional avidity of HIV-specific CD8 T-cell responses generated during ART were similar to those measured at baseline (Fig. 2A). Furthermore, PHI patients were treated either with ART alone or with ART+Cyclosporin A (CsA), but CsA treatment had no significant impact on the magnitude or the functional avidity of HIV-specific CD8 T-cell responses (Fig. S2). The functional avidity of the same HIV-specific CD8 T-cell responses measured longitudinally either prior to ART or after 1 year of ART remained stable in both PHI and CP patients (both P > 0.05; Fig. 2B). Furthermore, the lack of a significant effect of ART on the functional avidity of HIV-specific CD8 T cells was also confirmed in non-longitudinal, independent T-cell responses from PHI or CP patients (both P > 0.05; Fig. 2C). Therefore, HIV-specific CD8 T-cell responses remained of lower avidity (P = 0.0003) in PHI-T1Y as compared to CP-T1Y patients (Fig. 2D). Consistent with the above-mentioned analyses performed in the untreated groups, differences in functional avidity of HIV-specific CD8 T cells between PHI-T1Y and CP-T1Y were not related to distinct peptide-MHC class I associations, since the differences remained significant also when common epitopes were considered (P = 0.0003; Fig. 2E). Altogether, these observations indicate that even after control of virus replication, HIV-specific CD8 T-cell responses from patients with acute HIV infection remain of lower avidity as compared to patients with chronic infection.
Author Summary CD8 T cells directed against virus are complex and functionally heterogeneous. One relevant component of CD8 T cells is their functional avidity, which reflects their sensitivity to cognate antigens, i.e. how prone T cells are to respond when they encounter low doses of antigens. In patients with chronic and established HIV infection, we observed that the sensitivity of HIV-specific CD8 T cells was not different between patients with progressive or non-progressive disease. In contrast, the sensitivity of HIV-specific CD8 T cells was significantly lower in patients with early and recent HIV infection. Furthermore, CD8 T cells of high avidity were preferentially associated with a state of functional impairment known as exhaustion. Of interest, some patients treated with antiretroviral therapy during acute infection spontaneously interrupted their treatment and experienced a rebound of virus. In these patients, the avidity of HIV-specific CD8 T cells increased, and this increase was associated with stronger cell exhaustion and greater renewal of the population of antiviral CD8 T cells, thus potentially providing the mechanistic basis for the generation of high-avidity CD8 T cells. Overall, our data suggest that rapid perturbation in viremia levels drove increases in the functional avidity of HIV-specific CD8 T cells.
Functional and phenotypic profiles of HIV-specific CD8 T cells during acute and chronic HIV infections We then assessed the functional profile of HIV-specific CD8 T cells from PHI-B, CP-B and LTNP patients. Although the magnitudes of HIV-specific CD8 T-cell responses were not significantly different between PHI-B, CP-B and LTNP (Fig. 1C), perforin expression was significantly (P ≤ 0.001) higher in HIV-specific CD8 T cells from PHI-B patients as compared to CP-B or LTNP (Fig. S1B and 3A). As previously shown [37,38], HIV-specific CD8 T cells from LTNP contained more IL-2, whereas those from CP-B patients were mostly composed of single IFN-γ-producing cells (both P < 0.0001; Fig. S1B and 3A). We then performed a phenotypic characterization of HIV-specific CD8 T-cell responses and monitored CD27 and CD28 expression to assess co-stimulation, and PD-1, 2B4 and CD160 expression to assess T-cell activation and exhaustion. For these analyses, only HIV-specific CD8 T cells detectable using cognate peptide-MHC class I multimers (Table S2) were taken into consideration. Regarding T-cell co-stimulation, HIV-specific CD8 T cells from PHI-B expressed a higher proportion of CD27+CD28+ cells than those from CP-B (P = 0.02) or LTNP (P = 0.003) patients (Fig. S1C and 3B).
Also, analyses of the expression of co-inhibitory receptors indicated that HIV-specific CD8 T cells from CP-B and LTNP were both composed of significantly higher proportions of PD-1+2B4+CD160+ (P < 0.006 and P < 0.025, respectively) or PD-1−2B4+CD160+ (P < 0.025 and P < 0.0001, respectively) cells as compared to PHI-B (Fig. S1D and 3C). HIV-specific CD8 T cells from PHI-B mostly (about 70%) lacked all three markers or expressed 2B4 alone (all P < 0.006; Fig. 3C) and expressed lower frequency and intensity of PD-1 as compared to CP-B (both P < 0.002; data not shown). These data suggest that HIV-specific CD8 T cells from patients with acute and chronic infection are functionally and phenotypically distinct. Association between the functional avidity and the phenotypic and functional heterogeneity of HIV-specific CD8 T cells We then assessed the association between functional avidity and the expression of co-stimulatory or co-inhibitory receptors. The functional avidity of HIV-specific CD8 T cells was negatively correlated with the proportion of CD27+CD28+ cells and positively correlated with the proportion of PD-1+2B4+CD160+ cells. Furthermore, we performed correlations and a rank correlation matrix to explore the partial associations of variables and to assess the dependency and potential hidden effect of confounding variables in pairwise associations. These analyses indicated that the proportions of CD27+CD28+ and of PD-1+2B4+CD160+ HIV-specific CD8 T cells were not significantly dependent on each other. This allowed us to perform a regression model analysis and to postulate that the functional avidity of HIV-specific CD8 T cells may be a linear function of the two aforementioned explanatory variables (after log10 transformation). Interestingly, this regression analysis indicated that about 28% of the functional avidity of HIV-specific CD8 T cells was explained by a combination of the proportion of CD27+CD28+ and of PD-1+2B4+CD160+ CD8 T cells (P = 0.0013; data not shown). We did not, in contrast, observe any significant correlation between the functional avidity of HIV-specific CD8 T cells and their functional profile. Overall, these observations indicate that high-avidity HIV-specific CD8 T-cell responses are preferentially composed of cells lacking the expression of co-stimulatory molecules but co-expressing high levels of co-inhibitory receptors. However, the functional avidity can only be partially predicted from the expression of co-stimulatory or co-inhibitory molecules. Qualitative changes of HIV-specific CD8 T cells in patients experiencing a virus rebound following treatment interruption We then performed a longitudinal analysis to investigate the effect of changes in viremia levels on HIV-specific CD8 T cells. To address this issue, we longitudinally monitored HIV-specific CD8 T cells in two distinct models: a) in conditions of viremia below the limit of detection, i.e. viremia <50 HIV RNA copies/ml of plasma (in patients successfully treated by ART), and b) in conditions of rapid and major changes in viremia occurring in patients experiencing virus rebound following spontaneous treatment interruption (TI) (Fig. 5A). In particular, we evaluated HIV-specific CD8 T-cell responses in PHI-T1Y and compared them with those after 5 years (PHI-T5Y) of uninterrupted successful ART or after TI (PHI-ATI) (Fig. 5A). Nine out of the 37 patients identified during acute infection spontaneously interrupted ART.
These patients were treated since PHI for ≥1 year (mean ± SE 131 ± 15 weeks) and all had undetectable viremia (<50 HIV RNA copies/ml) at the time of TI. After TI, all patients experienced a virus rebound with an average plasma viremia of 5.18 log10 HIV RNA copies/ml. The functional profile of HIV-specific CD8 T-cell responses at the time of TI was different from that at baseline. HIV-specific CD8 T cells in PHI-T1Y were mostly polyfunctional (associated with a large fraction of IL-2-producing cells and little perforin) (P < 0.0001; Fig. 5B-C) as compared to the typical effector profile (Fig. 3A) observed in PHI-B. In patients remaining on ART, HIV-specific CD8 T cells became more polyfunctional (i.e. further shifted toward IL-2 production) after 5 years of ART as compared to 1 year of ART (Fig. 5B-C). Conversely, in patients interrupting ART, as shown for patient #1023, who interrupted ART after two years of treatment and experienced a virus rebound of 122,000 HIV RNA copies/ml, the proportion of HIV-specific CD8 T cells co-producing IFN-γ and IL-2 decreased (Fig. 5A-B). Cumulative analyses confirmed the significant (P < 0.01) decrease in polyfunctionality of HIV-specific CD8 T-cell responses after TI and the significant (P = 0.03) increase in polyfunctionality of HIV-specific CD8 T-cell responses from patients who remained on ART (Fig. 5C). We then determined PD-1 (as well as 2B4 and CD160) expression in a subset of PHI-ATI and PHI-T5Y patients with known HIV-specific CD8 T-cell responses using cognate peptide-MHC class I multimers (Table S2). As shown in the representative flow cytometry profiles from patients #1017 and #1023, PD-1 expression increased in patient #1023, who interrupted ART, but not in patient #1017, who remained on ART (Fig. 5D). Along the same line, the proportion of triple-positive PD-1+2B4+CD160+ HIV-specific CD8 T cells also increased in patient #1023 but not in patient #1017 (Fig. 5F). Cumulative analyses confirmed that PD-1 expression as well as the proportion of cells co-expressing PD-1/2B4/CD160 in HIV-specific CD8 T cells were significantly increased in patients who interrupted ART (both P = 0.03; Fig. 5E and 5G). No increase in PD-1 expression or in the co-expression of PD-1/2B4/CD160, however, was observed in the patients who remained on ART (both P > 0.05; Fig. 5E and G). Finally, consistent with the differences in the functional profile of HIV-specific CD8 T-cell responses between patients who did or did not interrupt ART (Fig. 5B-C), we observed that the proportion of dual IFN-γ/IL-2-producing HIV-specific CD8 T cells was negatively correlated with the proportion of cells co-expressing PD-1/2B4/CD160 (P = 0.009; data not shown). These data indicate that major changes of viremia levels in TI patients caused a reduction of polyfunctional HIV-specific CD8 T cells and were associated with an increased level of exhaustion. Increase in functional avidity of HIV-specific CD8 T cells following TI We then analyzed the effects of virus rebound following TI on the functional avidity of HIV-specific CD8 T cells. As shown for patient #1023, the functional avidity of B*0701-GPGHKARVL- and A*0301-RLRPGGKKK-specific CD8 T-cell responses significantly increased after TI (ATI) as compared to pre-TI (PTI) (Fig. 6A). Furthermore, an additional HIV-specific CD8 T-cell response against B*0701-IPRRIRQGL, which was below the detection level PTI, was observed ATI (Fig. 6A). Cumulative analyses confirmed the increase in functional avidity of HIV-specific CD8 T cells occurring ATI (P = 0.007; Fig.
6B) and also indicated that new responses generated following virus rebound were of high avidity (P = 0.04; Fig. 6B). Consistently, no significant (P > 0.05) differences in functional avidity were observed during the same period when a similar analysis was performed in HIV-specific CD8 T-cell responses from patients who did not interrupt ART (i.e. for an average of 4 years; Fig. 6B). Furthermore, the functional avidity of HIV-specific CD8 T-cell responses ATI was in the same range as that observed in CP patients (data not shown). Of note, consistent with the increase in the co-expression of co-inhibitory molecules occurring in patients experiencing virus rebound (Fig. 5F-G), we observed a positive association (P = 0.02) between the fold increase in functional avidity of HIV-specific CD8 T cells and the fold increase in the proportion of PD-1+2B4+CD160+ HIV-specific CD8 T cells (Fig. 6C). Of interest, we performed a comprehensive statistical modeling of the changes in functional avidity of HIV-specific CD8 T cells and used mixed-effect linear models [39,40] to assess the evolution of functional avidity as a function of time and virus rebound. For this analysis, all longitudinal measures (n = 231) of functional avidity were included. The statistical model revealed that the interaction between avidity and time was not significant in steady-state conditions, i.e. neither in patients on ART (Fig. 6D; red line) nor in patients off ART (Fig. 6D; green line). In both conditions, an increase of 0.013 units per month was determined but did not reach statistical significance (P = 0.05; Fig. 6D), thus indicating that functional avidity does not significantly change under steady-state circumstances. However, we found a significant (P = 0.013) interaction between functional avidity and virus rebound. An immediate increase of functional avidity of HIV-specific CD8 T cells of about 1 order of magnitude (0.95 units) occurred directly after TI (Fig. 6D, grey dashed lines) and was not related to the duration of ART prior to TI. These observations indicated that the functional avidity of HIV-specific CD8 T cells is stable over time in steady-state conditions. Association between CDR3 renewal and increase in functional avidity of HIV-specific CD8 T cells We recently demonstrated that the global CD8 TCR repertoire of virus-specific CD8 T cells was diverse and subjected to continuous renewal [32]. We then evaluated the TCR repertoire in PHI patients experiencing a virus rebound following TI. For this purpose, we measured CDR3 diversity and the percentage of renewal of HIV-specific CD8 T cells and compared those to the changes in functional avidity of HIV-specific CD8 T cells occurring before and after virus rebound. As shown for patient #1023 (Fig. 5A), TRBV usage and CDR3 size pattern were analyzed for B*0701-GPGHKARVL-specific CD8 T cells at week (W) 18, W96 and W125 (Fig. S3A-B). Using our previously-described model to determine CDR3 renewal [32], we calculated a renewal of 76% between W18 and W96 (i.e. on ART) and a renewal of 82% between W96 and W125 (i.e. after TI). Cumulative analyses confirmed a significantly (P = 0.008) higher CDR3 renewal of HIV-specific CD8 T cells after virus rebound than in the steady-state condition, i.e. during treatment (Fig. 7A). Interestingly, the level of CDR3 renewal was directly associated (P = 0.036) with the extent of increase in functional avidity of HIV-specific CD8 T cells (Fig. 7B).
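The renewal metric reported above is defined in the Methods as the percentage of epitope-specific TCR sequences that changed between two time points. The following is a hedged sketch of one plausible, set-based operationalisation of that definition; the clonotype strings are invented, and the model of ref. [32] may weight or filter clonotypes differently:

```python
# Hedged sketch: a set-based reading of "percentage of TCR sequences that
# changed between two time points". The CDR3 strings below are invented
# examples, not sequences from this study.
def cdr3_renewal(before: set[str], after: set[str]) -> float:
    """Percentage of clonotypes at the later time point not present earlier."""
    if not after:
        return 0.0
    new_clonotypes = after - before
    return 100.0 * len(new_clonotypes) / len(after)

w96 = {"CASSLGQGAYEQYF", "CASSPTSGGELFF", "CASSFGREQFF"}
w125 = {"CASSLGQGAYEQYF", "CASRDWGGYTF", "CASSQEGTGNTIYF", "CASSPGLAGEQFF"}
print(f"renewal = {cdr3_renewal(w96, w125):.0f}%")  # 75% in this toy example
```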
Taken together, these observations suggest that increased CDR3 renewal may contribute to the increase in functional avidity of HIV-specific CD8 T cells occurring after virus rebound. Discussion T-cell functional avidity reflects the ability of T cells to respond to various concentrations of Ag and may be assessed ex vivo through quantification of a biological function such as IFN-γ production, cytotoxic activity or proliferative capacity. Several parameters concur to determine the threshold of T-cell responsiveness. These include: a) the affinity of the TCR for the peptide-MHC (pMHC) molecule, i.e. the strength of the interaction between the TCR and pMHC [41,42], b) the density of pMHC-TCR interactions (reflecting both the amount of Ag and the ability of antigen-presenting cells (APC) to present Ags) [43,44,45,46], c) the expression of co-stimulatory and co-inhibitory molecules by T cells and APC [47], and d) the T-cell distribution and composition of signaling molecules [44,48]. However, the factors determining functional avidity and the relationship between functional avidity and the heterogeneity of T-cell responses are not well understood. In the present study, we comprehensively investigated the functional avidity of HIV-specific CD8 T-cell responses in a cross-sectional study of different cohorts of HIV-infected patients. The evaluation of the functional avidity of HIV-specific CD8 T cells was based on optimal epitopes, i.e. epitopes not necessarily corresponding to the autologous virus sequences. Since pMHC/TCR affinity is one of the parameters potentially influencing functional avidity [49,50], a mismatch between the epitope sequences and the TCR or the MHC may impact the determination of avidity. However, the same strategy was used throughout all cohorts of HIV-infected patients, thus minimizing the potential biases in our observations. HIV-specific CD8 T cells generated during acute infection were of lower functional avidity as compared to those from patients with chronic progressive or non-progressive infection. These differences were not biased by distinct peptide-HLA associations and remained significant after ART-induced control of virus replication. In addition, a preferential deletion of HIV-specific CD8 T-cell responses of higher avidity was not observed, as previously described in a cohort of early HIV infection [33]. The discrepancy between our study and the previous one may be explained by differences in the individual cohorts, as well as by the fact that all 37 patients received ART at the time of diagnosis of PHI in our study, whereas only 5 of the 10 patients in Lichterfeld's study received ART [33]. Our results also indicated that the minor proportion of HIV-specific CD8 T-cell responses lost after acute infection had an initially lower magnitude rather than a higher avidity. Furthermore, consistent with previous studies [16,18,19,22,23,51], there were no significant differences in the functional avidity of HIV-specific CD8 T-cell responses between chronic progressive and non-progressive infection. These observations suggest that T-cell functional avidity does not represent a correlate of virus control, at least in the context of chronic and persistent virus infections. Along the same line, HIV-specific CD8 T-cell responses commonly associated with virus control [52], i.e. HLA-B*27-, B*57- or B*5801-restricted T-cell responses, were consistently found in the lower range of functional avidity (data not shown).
These observations do not support previous studies showing a relationship between higher-avidity T-cell responses and better virus control [11,12,20,21,53,54]. One potential explanation is that in most of these studies, specific T-cell epitopes (e.g. TW10 or KK10) were considered predominantly in individuals with non-progressive infection. Of note, consistent with our study, Chen and colleagues recently demonstrated that KK10-specific CD8 T-cell responses in elite controllers showed better virus control and broader viral recognition but similar functional avidity as compared to progressors [22]. They also confirmed the overall lack of difference in the functional avidity of HIV-specific CD8 T cells between patients with progressive and non-progressive infection [22]. Taken together, these observations suggest an association between higher-avidity T-cell responses and chronic HIV infection. Of note, we also assessed the relationship between T-cell functional avidity and the expression of markers of exhaustion. It is important to underscore that HIV-specific CD8 T cells which had higher avidity in chronic infection also expressed higher levels of exhaustion markers. Therefore, these results further indicate that higher functional avidity does not correlate with better virus control but rather with the status of cell activation/exhaustion. We also determined the impact of the expression of co-stimulation (i.e. CD27 and CD28) and exhaustion markers (i.e. PD-1, CD160 and 2B4) on the levels of functional avidity in HIV-specific CD8 T cells using a regression model. The regression model indicated that the expression of the above markers only partially accounts for the establishment of the functional avidity of HIV-specific CD8 T cells, thus indicating that additional factors may contribute to determining the levels of functional avidity. The lower avidity of HIV-specific CD8 T cells in PHI patients may also potentially be explained by the fact that patients were identified very early in the course of infection and received ART within 24 hours. Therefore, one cannot exclude the possibility that this early control of virus replication blunted the natural evolution and maturation of the immune response, as previously shown for T- and B-cell responses [55,56,57]. Consistently, it was also shown in mice that the functional avidity of antiviral CD8 T cells continuously increased (avidity maturation) during the first month of infection [58]. In this regard, when patients treated during PHI experienced a virus rebound, the functional avidity of HIV-specific CD8 T-cell responses significantly increased. The mixed-effect linear model we used indicated a punctual increase of about one order of magnitude following virus rebound. However, there was no quantitative correlation between either the peak or the steady-state level of the virus rebound and the increase in avidity.
Figure 7. Increased CDR3 renewal of HIV-specific CD8 T cells following treatment interruption and association with functional avidity. A. Percentage of CDR3 renewal of HIV-specific CD8 T cells before (under treatment) and after treatment interruption (TI). CDR3 diversity and renewal were determined as described [32]. An example of TRBV usage and CDR3 size pattern analysis of B*0702-GPGHKARVL-specific CD8 T cells in patient #1023 at weeks 18, 96 and 125 is shown in Fig. S3. B. Association between the percentage of CDR3 renewal and changes in the functional avidity of HIV-specific CD8 T cells. doi:10.1371/journal.ppat.1003423.g007
Several mechanisms have been proposed to modulate T-cell functional avidity maturation, including: 1) the formation of clusters comprising several TCRs and other molecules able to reinforce the immunological synapse [59,60,61], 2) the optimization of the signal transduction machinery, such as an increase in the amount and in the basal phosphorylation levels of signaling molecules [62,63], and 3) a selective expansion of high TCR-avidity clones and/or the loss of clones with low TCR avidity [33,64,65,66,67]. We cannot exclude that the same mechanisms may also contribute to explaining the increase in avidity observed following treatment interruption and virus rebound. Interestingly, we showed that TCR renewal was also significantly higher following virus rebound and associated with an increase in T-cell functional avidity. Therefore, our data indicate a potential role of TCR renewal in the modulation of the levels of functional avidity. However, our results do not distinguish between the recruitment of new clones, selective expansion of pre-existing high-avidity clones or depletion of low-avidity clones, since the study was performed at the population level. Taken together, these results support the following model. HIV-specific CD8 T cells of lower functional avidity are generated during primary immune responses; then, persistence of detectable viremia drives an increase in functional avidity, as supported by the major increase in functional avidity associated with the sudden increment in viremia levels; the increase in viremia levels is also associated with massive TCR renewal which, in turn, causes the generation/selection of T-cell clones with higher functional avidity. These results provide insights into the relationships between functional avidity, viremia, T-cell exhaustion and TCR renewal of antiviral CD8 T-cell responses. Ethics statement These studies were approved by the Institutional Review Board of the Centre Hospitalier Universitaire Vaudois and all subjects gave written informed consent. Study groups Seventy-six patients with primary (PHI) or progressive chronic (CP) HIV infection were enrolled. Diagnosis of PHI included the presence of an acute clinical syndrome, a negative HIV antibody test, a positive test for HIV RNA in plasma, and ≤3 positive bands in a western blot. All PHI patients started ART alone or ART+CsA within 72 h, as described [68], and were followed for up to 10 years. Patients with chronic progressive (CP) HIV infection had been infected for more than a year, were ART-naïve at the time of inclusion, had ≥400 CD4 T cells/μl and ≥5000 plasma HIV RNA copies/ml, and were directly treated with ART upon diagnosis as described [69,70]. Four CP patients were investigated both prior to ART (BSL) and after 1 year of ART (T1Y). Furthermore, 9 additional HIV-infected patients with non-progressive disease, i.e. LTNP, defined by documented HIV infection for >10 years, stable CD4 T-cell counts >500 cells/μl and plasma viremia <500 HIV RNA copies/ml, were also included. Clinical and virological characteristics of the different cohorts are detailed in Table S1. Antibodies The following antibodies were used in different combinations. IFN-γ ELISpot assay ELISpot assays were performed as per the manufacturer's instructions (BD Biosciences). In brief, 2 × 10⁵ cryo-preserved blood mononuclear cells were stimulated with 1 μg of single peptide or peptide pools in triplicate conditions as described [73].
Media only and staphylococcal enterotoxin B (SEB) were used as negative and positive controls, respectively. Thresholds for assay validation and positivity were determined as described [73]. Results are expressed as the mean number of SFU/10⁶ cells from triplicate assays. Only cell samples with >80% viability after thawing were analyzed, and only assays with <50 spot-forming units (SFU)/10⁶ cells for the negative control and >500 SFU/10⁶ cells after SEB stimulation were considered valid. An ELISpot result was defined as positive if the number of SFUs was ≥55 SFU/10⁶ cells and ≥4-fold the negative control. ICS assay Cryo-preserved blood mononuclear cells (1–2 × 10⁶) were stimulated for 6 h or overnight in 1 ml of complete medium (RPMI (Invitrogen), 10% fetal bovine serum (FBS; Invitrogen), 100 μg/ml penicillin, 100 units/ml streptomycin (BioConcept)) in the presence of Golgiplug (1 μl/ml, BD), anti-CD28 (0.5 μg/ml, BD) and 1 μg/ml of peptide, as described [74]. Staphylococcal enterotoxin B (SEB; Sigma) stimulation (100 ng/ml) served as positive control. At the end of the stimulation period, cells were stained for dead cells (4 °C for 20 min; Aqua LIVE/DEAD, Invitrogen), permeabilized (room temperature for 20 min; Cytofix/Cytoperm, BD) and then stained at room temperature for 20 min for CD4, CD8, CD3, IFN-γ, IL-2, TNF-α and perforin (clone B-D48). Cells were then fixed (CellFix, BD), acquired on an LSRII SORP and analyzed using FlowJo 8.8.2. Analysis and presentation of distributions were performed using SPICE version 5.1, downloaded from http://exon.niaid.nih.gov/spice/ [75]. The number of lymphocyte-gated events ranged between 0.6–1 × 10⁶. With regard to the criteria of positivity of the ICS, the background in the unstimulated controls never exceeded 0.03%. To be considered positive, an ICS result had to show >0.03% cytokine-positive cells after subtraction of the background (media alone) and to be >5-fold higher than the background. Determination of CDR3 renewal The analysis of CDR3 diversity and renewal was performed as described [32]. CDR3 renewal corresponds to the percentage of TCR sequences specific for a given epitope that changed between two time points. Briefly, blood mononuclear cells were stained with cognate multimers and anti-CD3, anti-CD8, anti-CD45RA and anti-CCR7 mAbs (BD Biosciences). CD45RA+CCR7+ naïve and Ag-specific (multimer+) CD8 T cells were directly sorted (FACSAria, BD Biosciences) into RLT lysis buffer (Qiagen, Hilden, Germany) containing 20 ng RNA carrier (Roche Diagnostics, Rotkreuz, Switzerland) and RNA was extracted (Qiagen). Then, cDNA preparation and amplification were performed using the SuperSMART PCR cDNA Synthesis Kit according to the manufacturer's instructions (Clontech Laboratories, Saint-Germain-en-Laye, France). Amplified cDNA was subjected to TRBV–TCR β-chain constant region (TRBC) PCR reactions as described [32]. For spectratyping, aliquots of positive samples were mixed with GeneScan-500 ROX size standards and run on an ABI 3130 capillary sequencer (Applied Biosystems, Foster City, CA). The CDR3 junction (length) was analyzed using the IMGT system as described [32]. Functional avidity Peptide stimulations were performed as described above. The functional avidity of T-cell responses was assessed by performing limiting peptide dilutions (ranging from 2 μg/ml to 1 pg/ml) in in vitro assays as described [24]. The peptide concentration required to achieve a half-maximal IFN-γ response (EC50) was determined.
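As a hedged illustration of the EC50 determination described above, the sketch below fits a Hill-type (log-logistic) dose-response curve to a limiting peptide-dilution series and reads off the half-maximal concentration. The response values are invented, and the original analysis followed ref. [24] and may have interpolated the dose-response data differently:

```python
# Hedged sketch: estimate EC50 by fitting a Hill curve to IFN-γ responses
# measured over a limiting peptide-dilution series (invented values).
import numpy as np
from scipy.optimize import curve_fit

def hill(log_conc, top, log_ec50, slope):
    """Response as a function of log10 peptide concentration."""
    return top / (1.0 + 10 ** ((log_ec50 - log_conc) * slope))

# log10 peptide concentrations in g/ml (roughly µg/ml down toward pg/ml).
log_conc = np.array([-5.7, -6.7, -7.7, -8.7, -9.7, -10.7, -11.7])
response = np.array([0.41, 0.40, 0.35, 0.22, 0.08, 0.02, 0.00])  # % IFN-γ+ CD8

params, _ = curve_fit(hill, log_conc, response, p0=[0.4, -8.5, 1.0])
top, log_ec50, slope = params
print(f"EC50 ≈ 10^{log_ec50:.2f} g/ml")
```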
HLA class I genotyping Four-digit HLA class I genotyping was performed by direct sequencing methods as described [76]. The data were analyzed and alleles were assigned using Assign-SBT version 3.5 (Conexio Genomics, Applecross, Australia). Statistical analyses Mann-Whitney and Wilcoxon matched-pairs tests were performed using GraphPad Prism version 6.00 (San Diego, CA). Analyses of the functional avidity of CD8 T-cell responses were performed on log10-transformed data using non-parametric tests. Associations among variables were assessed by Spearman test. Rank correlation matrices and linear regression analyses were performed after log10 transformation of variables using R software. Bonferroni corrections for multiple analyses were applied. Regarding SPICE analyses of the flow-cytometry data, comparison of distributions was performed using a Student's t-test and a partial permutation test as described [75]. Furthermore, mixed-effect linear models were used to assess the evolution of functional avidity as a function of time and virus rebound, as described [39,40]. In brief, let Y_ij be the measured avidity for subject i at time j (time_ij), and let rebound_ij be a covariate coded as 1 if patient i is off therapy (virus rebound) at time j and as 0 if on therapy. We fitted the following mixed-effect linear model: Y_ij = (b_0 + r_i) + b_1·time_ij + b_2·rebound_ij + e_ij, where b_0 is the global mean, b_1 the effect of time on avidity, b_2 the effect of the virus rebound on avidity, r_i the random effect which represents the individual deviation from the global intercept, and the e_ij are independent measurement errors with mean zero. The interaction between time_ij and rebound_ij was tested.
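As a hedged sketch of the mixed-effect linear model specified above, the same random-intercept model can be fitted with standard software; the snippet below uses Python's statsmodels rather than the authors' original tooling, and the data frame holds a few invented rows for illustration (the published model was fitted to all 231 longitudinal measurements):

```python
# Hedged sketch of Y_ij = (b_0 + r_i) + b_1*time_ij + b_2*rebound_ij + e_ij,
# with a random intercept per subject. Toy values for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "subject": ["p1"] * 4 + ["p2"] * 4 + ["p3"] * 4,
    "time":    [0, 6, 12, 18] * 3,                       # months since baseline
    "rebound": [0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1],     # 1 = off therapy, post rebound
    "avidity": [6.1, 6.2, 7.2, 7.3, 6.0, 6.0, 6.1, 6.2, 6.3, 6.3, 6.4, 7.4],
})

# Fixed effects for time (b_1) and rebound (b_2); random intercept r_i per subject.
model = smf.mixedlm("avidity ~ time + rebound", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```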
2017-04-19T09:15:31.706Z
2013-07-01T00:00:00.000
{ "year": 2013, "sha1": "9e7042ac03ff912f19efdb5304453768fb933238", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1003423&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9e7042ac03ff912f19efdb5304453768fb933238", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
268618137
pes2o/s2orc
v3-fos-license
Relationships between sodium, fats and carbohydrates on blood pressure, cholesterol and HbA1c: an umbrella review of systematic reviews Background The relationship between nutrition and health is complex, and the evidence to describe it broad and diffuse. This review brings together evidence for the effect of nutrients on cardiometabolic risk factors. Methods An umbrella review identified systematic reviews of randomised controlled trials and meta-analyses estimating the effects of fats, carbohydrates and sodium on blood pressure, cholesterol and haemoglobin A1c (HbA1c). Medline, Embase, the Cochrane Library and Science Citation Index were searched through 26 May 2020, with supplementary searches of grey literature and websites. English-language systematic reviews and meta-analyses were included that assessed the effect of sodium, carbohydrates or fat on blood pressure, cholesterol or HbA1c. Reviews were purposively selected using a sampling framework matrix. The quality of evidence was assessed with the A MeaSurement Tool to Assess systematic Reviews 2 (AMSTAR2) checklist, and evidence was synthesised in a narrative review and causal pathways diagram. Results Forty-three systematic reviews were included. Blood pressure was significantly associated with sodium, fibre and fat. Sodium, fats and carbohydrates were significantly associated with cholesterol. Monounsaturated fat, fibre and sugars were associated with HbA1c. Conclusion Multiple relationships between nutrients and cardiometabolic risk factors were identified and summarised in an accessible way for public health researchers. The review identifies associations, inconsistencies and gaps in the evidence linking nutrition to cardiometabolic health.
WHAT IS ALREADY KNOWN ON THIS TOPIC ⇒ There is extensive research describing the associations between diet and cardiometabolic risk factors. However, the evidence from high-quality systematic reviews to describe these effects is diverse, overlapping and dispersed, making it challenging for researchers to access up-to-date evidence across all relevant nutritional markers and cardiometabolic outcomes.
WHAT THIS STUDY ADDS ⇒ This review brings together evidence across nutrients to provide consistent quantitative estimates of the associations between nutritional intake and cardiometabolic risk.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY ⇒ This review supports the evaluation of public health policies targeting behavioural aspects of diet, particularly for population-level interventions, where randomised controlled trial evidence cannot easily be collected. The review provides a single resource that brings together evidence across nutrients and cardiometabolic risks to develop the capacity to evaluate public health dietary policies.
INTRODUCTION Suboptimal diets are estimated to be responsible for 11 million deaths globally, more than smoking tobacco. 1 Diet is a major contributory factor in the incidence of diabetes, cardiovascular disease and other non-communicable diseases, which cause a major burden on healthcare resources. The cost of cardiovascular disease alone is estimated at €210 bn/year in Europe, of which the majority (€111 bn) is healthcare costs, and the remainder is productivity losses (€54 bn) and informal care (€45 bn). 2 In order to evaluate the effectiveness of dietary policies, it is necessary to have a reliable evidence base describing the health benefits of dietary changes, particularly if the changes in nutritional intake have competing health outcomes, for example, if the policy reduces sugar intake but increases salt. Population-level dietary public health policies are often evaluated in modelling studies to estimate the potential benefits, where the health effects cannot be easily observed. Modelling studies often make simplifying assumptions, such as assuming all health benefits are captured by a single risk factor between diet and health, such as salt, 3 fruit and vegetables, 4 or calories. 5 6 While economic evaluations have modelled a variety of associations between nutrition and health, 7 few have modelled multiple nutritional components and captured food substitutions. Simulating substitutions to other food items is important to capture the overall benefit of a policy and any mitigating unintended consequences. There is a large and rich literature describing the impacts of diet on cardiometabolic health and cardiovascular disease. Systematic reviews have synthesised evidence for differing levels of individual nutrient groups, such as sodium, 8 or carbohydrates, on the risk of cardiovascular disease. 9 Changes to nutritional intake in real-world contexts often take the form of diets, which consist of multiple nutrient adjustments that impact the same cardiovascular outcomes. Researchers have addressed this by looking at dietary patterns 10 11 or food types such as whole grains 12 or red meat. 13 Navigating this evidence can be a barrier for researchers not trained in nutrition, particularly when dietary intervention outcomes are measured in nutrient intake (sugar, salt or fibre). Therefore, it is beneficial to bring together evidence for the health effects of sodium, fats and carbohydrates. Within fats, monounsaturated fatty acids (MUFA), polyunsaturated fatty acids (PUFA) and saturated fatty acids should be considered independently, as should sugars and fibre within carbohydrates, to identify positive and negative health effects. Randomised controlled trials provide a robust method to reduce biases, but their duration of follow-up, or sample size, is unlikely to identify a relationship between diet and health events, such as diabetes, cardiovascular disease and cancer. Changes in cardiometabolic measurements of blood pressure, cholesterol and blood glucose can be detected within randomised controlled trials, and can be used as markers for risks of non-communicable diseases to indirectly predict the long-term health impacts. We limited our outcomes to those measures that are typically used in cardiovascular and diabetes risk scores, 14 15 including blood pressure, cholesterol and HbA1c. Weight was excluded because energy intake was not an exposure of interest. Despite the large number of systematic reviews collating evidence for individual nutrients, no review synthesising evidence for multiple nutrient exposures was found. The aims of this study were to describe the relationships between diet composition, described by major nutrient groups, and cardiometabolic risk factors. We undertook an umbrella review of reviews to identify estimates from meta-analyses of randomised controlled trials and developed a causal pathways diagram to synthesise the findings. METHOD The protocol was registered with PROSPERO, CRD42020191611. The design of this umbrella review of reviews 16 was developed to support public health evaluation of dietary policies.
Search strategy
Database searches were performed in Medline, Embase, Cochrane Library and Science Citation Index from 1946 to 26 May 2020. Supplementary searches were conducted of key websites for relevant reports (WHO; Public Health England; Cochrane Hypertension), together with reference searching of included reviews.

Inclusion/exclusion criteria
Studies were included in the review if they assessed fats, fibre, carbohydrate, sugar or salt. We divided the fat category into fatty acids from foods (MUFA, PUFA, saturated fatty acids) and overall fat intake. Studies were included if they measured blood pressure, cholesterol (total, low-density lipoprotein (LDL) or high-density lipoprotein (HDL)) or glycaemia (HbA1c). These cardiometabolic outcomes would enable subsequent alignment with epidemiological models for diabetes and cardiovascular risk assessment.4 Studies were included in the review if they were a systematic review and meta-analysis of randomised controlled trials or natural experiments with a controlled design. Studies were included if they covered adults in general, or patients with a relevant metabolic disorder such as diabetes or hypertension.

We excluded evidence from observational cohort studies to reduce the risks of bias often identified in nutritional studies.17 Children, and patients with a health condition other than those identified above, were defined as ineligible populations for this review. Individual food products, such as nuts, meat or eggs, were excluded to enable the review to focus on nutrients rather than foods. The aims of the review were to describe the effects of nutrient composition, rather than energy intake, on cardiometabolic risks. Given the importance of energy intake for weight gain18 and the complex system of factors influencing weight gain,19 this was excluded as an outcome. Triglycerides were not included in the review because they are not included in the main risk equations under consideration for subsequent modelling work. Fasting plasma glucose was included in the study protocol but was removed during the review because data on effects on HbA1c were more commonly reported.

Study selection
Studies were screened for inclusion against the inclusion/exclusion criteria by title and abstract sifting by one reviewer (KS), and 10% were reviewed independently by a second reviewer (PB).

We developed a purposive method of study selection using a sampling framework matrix to stratify the inclusion of evidence by population, exposure (macronutrients) and cardiometabolic risks, split by population groups. The method is based on an approach taken to identify evidence for other modelling studies in which a broad scope of evidence is needed.20 The method helps to ensure that evidence is represented for all exposures and outcomes and is not overwhelmed by the dominant areas of research. The relevant reviews were labelled according to the nutrient components under investigation and the cardiometabolic risk factors. This process enabled the reviewers to map the focus of the reviews identified, and to limit extraction within each category to the most recent evidence available. Studies were selected into the sampling framework matrix by year of publication until two studies were identified for each category, or the list of included studies was exhausted.
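The selection rule just described (fill each cell of the exposure-by-outcome-by-population matrix with the most recent reviews, up to two per cell) is simple enough to express in code. The sketch below is illustrative only; the record schema and field names are assumptions, not the authors' actual data model.

```python
from collections import defaultdict

def fill_sampling_matrix(reviews, max_per_cell=2):
    """Purposively select reviews into an (exposure, outcome, population) matrix.

    `reviews` is a list of dicts with 'exposure', 'outcome', 'population'
    and 'year' keys (an illustrative schema). Reviews are taken newest
    first, up to `max_per_cell` per matrix cell, mirroring selection by
    year of publication until two studies fill each category.
    """
    matrix = defaultdict(list)
    for review in sorted(reviews, key=lambda r: r["year"], reverse=True):
        cell = (review["exposure"], review["outcome"], review["population"])
        if len(matrix[cell]) < max_per_cell:
            matrix[cell].append(review)
    return matrix

# Example: two sodium/blood-pressure reviews compete for the same cell,
# so the 2020 review is selected before the 2013 one.
reviews = [
    {"exposure": "sodium", "outcome": "blood pressure", "population": "general", "year": 2013},
    {"exposure": "sodium", "outcome": "blood pressure", "population": "general", "year": 2020},
    {"exposure": "fibre", "outcome": "cholesterol", "population": "diabetes", "year": 2018},
]
matrix = fill_sampling_matrix(reviews)
```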
The sampling framework matrix was developed to categorise studies by outcome (blood pressure, cholesterol, HbA1c) and nutritional exposure. Nutrient categories were defined as sodium/salt consumption (g), total fat reduction (% total energy intake (TEI)), fatty acid modification from diet, fatty acid modification from supplements, fatty acid modification from both, total carbohydrate reduction (%TEI), fibre (g) and sugars (%TEI). The grouping aimed to identify evidence on substitutions across macronutrient categories (fats and carbohydrate), and also substitutions within these categories, that is, substitution to MUFA from saturated fat.

Experts in nutrition were consulted to review the final study selection and to identify gaps in evidence. Where gaps were identified, additional studies were identified and included to inform these relationships.

Data extraction
Data on study characteristics were extracted, covering review methods, review inclusion criteria (population, study follow-up, study design), a summary of geographical locations, the number of papers identified and included, the number of participants, interventions, controls, planned subgroup analyses and outcomes. All study characteristics were extracted by a single reviewer (KS), with all studies checked (PB, SA, EM).

Data on the mean difference and the upper and lower CIs for each exposure and health outcome (systolic or diastolic blood pressure (mm Hg), total cholesterol (mmol/L), HDL cholesterol (mmol/L), LDL cholesterol (mmol/L), or HbA1c (%)) were extracted separately, including units of measurement. Information on dose sizes, ranges and substitution patterns was extracted. The main study outcomes were extracted unless a subgroup or sensitivity analysis reported exposure from dietary changes, as opposed to capsules or enteral nutrition. Furthermore, exposures in which TEI was not restricted to identify substitution effects were prioritised. Cholesterol effects measured in mg/dL were converted to mmol/L by multiplying by 0.02586. Effects were extracted by a single reviewer (PB) and double checked by two reviewers (SA, EM).

Quality assessment
All studies included in the review were assessed for quality using the AMSTAR2 checklist.21 Quality assessment was undertaken by one reviewer; items that were unclear were discussed. A second reviewer undertook quality assessment of a sample of 10 reviews. We did not exclude any studies on the basis of quality.

Evidence synthesis and causal pathways diagram
A novel meta-analysis for all causal factors between exposures and health outcomes was not feasible given the large number of exposures and outcomes to be analysed. A narrative synthesis of the data was performed in line with Synthesis Without Meta-analysis (SWiM) guidance.22 Full details of the method of evidence synthesis are described in the online supplemental material. A causal pathways diagram was developed to illustrate the findings, synthesise the evidence and depict the links in the nutrient-health relationship. Causal pathway diagrams are useful for summarising and organising information, and for structuring it so that findings can be validated with experts.
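As a quick check on the unit handling described above, the mg/dL to mmol/L conversion for cholesterol is a single multiplication; the helper below is a minimal sketch of that step.

```python
MGDL_TO_MMOLL = 0.02586  # conversion factor used in the review for cholesterol

def cholesterol_mgdl_to_mmoll(value_mgdl: float) -> float:
    """Convert a cholesterol effect size from mg/dL to mmol/L."""
    return value_mgdl * MGDL_TO_MMOLL

# Example: a mean difference of -10 mg/dL corresponds to about -0.26 mmol/L
print(round(cholesterol_mgdl_to_mmoll(-10), 3))  # -0.259
```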
RESULTS
Database searches identified 2575 records, and 19 studies were identified in supplementary searches of the grey literature and consultation with nutrition experts. Of these, 43 studies were selected through the process of filling the sampling framework matrix. The full details of the study selection process are detailed in figure 1. An additional study was identified to fill a gap in the review evidence for the impact of substitutions between fatty acids on cholesterol.23 The sampling framework matrix of study exposures and outcomes by subpopulation is reported in online supplemental table S1; summary characteristics of the included studies are reported in table 1. During data extraction, an updated version of a Cochrane review was identified.8 The outcomes of the AMSTAR2 critical appraisal tool assessment for all included studies can be found in online supplemental table S2. Six review studies were assessed as high quality, 4 as moderate quality, 22 as low quality and 11 as critically low.

Blood pressure
Reduced sodium intake was associated with lower blood pressure.8 24-26 The effects on blood pressure were larger for a hypertensive population (overall range: −1.50 mm Hg to −7.83 mm Hg) compared with normotensive populations (overall range: −0.66 mm Hg to −7.75 mm Hg).8 24-27 Low carbohydrate diets decreased systolic and diastolic blood pressure,9 27-31 and the results were significant in some studies and subgroup analyses.9 28 29 31 There was evidence to suggest that increased fibre is associated with a reduction in systolic blood pressure (overall range: −1.59 to −1.27 mm Hg) and diastolic blood pressure (overall range: −2.40 to −0.39 mm Hg),32-34 and the associations were statistically significant in most studies.32 33 One study found that replacing carbohydrate with fructose decreased diastolic blood pressure.35 In individuals with diabetes, replacing carbohydrates with MUFA significantly reduced systolic blood pressure (mean: −2.31 mm Hg),36 but not in general populations.37-43 There was no evidence for a significant relationship between low-fat diets, or sugars, and systolic blood pressure.35 44

Cholesterol
Sodium was associated with an increase in total cholesterol (overall range: 0.02-0.13 mmol/L).8 24 25 The relationship was statistically significant in the most recent evidence review.8 In general populations, low-fat diets were associated with reductions in total cholesterol.45-50 The difference was statistically significant in two out of five studies.45 46 Increasing MUFA to replace saturated fat was significantly associated with a reduction in total cholesterol (mean: −0.05 mmol/L).23 Two studies in patients with diabetes were not statistically significant.41 52 In general populations, increasing saturated fat to replace either carbohydrate23 51 or any foods51 was found to increase total cholesterol (overall range: 0.05-0.24 mmol/L), and the findings were statistically significant.23 51 There was evidence that low carbohydrate diets increased total cholesterol (overall range: 0.07-0.13 mmol/L) in the general population, and some estimates were statistically significant,9 28 46 but not statistically significant in diabetes populations.30 31 49 50 There is evidence for a relationship between fibre and total cholesterol (overall range: −0.15 to −0.21 mmol/L), and the association was statistically significant for total cholesterol in one study.32 There is evidence to suggest that dietary free sugars significantly increase total cholesterol (mean: 0.23 mmol/L),44 but not in patients with diabetes.53
In general populations, low-fat diets substituting fat for carbohydrate reduced HDL cholesterol (overall range: −0.01 to −0.09 mmol/L),45-48 and the relationship was significant46-48 or borderline significant.45 Increasing MUFA to replace saturated fat was significantly associated with lower HDL cholesterol (mean: −0.002 mmol/L).23 One study identified a statistically significant relationship between PUFA replacing saturated fat and lower HDL cholesterol (mean: −0.005 mmol/L),23 whereas three reported non-significant findings.39 40 54 Two studies of PUFA replacing other dietary energy in populations with diabetes reported different directions of effect for HDL,41 52 and both were statistically significant. In general populations, increasing saturated fat to replace carbohydrate or any foods was found to increase HDL cholesterol (overall range: 0.01-0.011 mmol/L), and the findings were statistically significant.23 51 There was evidence that low carbohydrate diets increased HDL cholesterol (overall range: 0.04-0.10 mmol/L),9 28-31 46 49 50 and the relationships were statistically significant in some studies or subanalyses.9 28 29 31 46 Dietary free sugars significantly increased HDL cholesterol (mean: 0.02 mmol/L).44 In a general population, substitution between sucrose, fructose, starch and glucose was not statistically significant.55 There was no evidence of a statistically significant effect for either sodium or fibre on HDL cholesterol.

In general populations, low-fat diets substituting fat for carbohydrate reduced LDL cholesterol (overall range: −0.01 to −0.11 mmol/L),45-48 and the relationship was significant in two studies.45 46 Increasing MUFA to replace saturated fat was significantly associated with lower LDL cholesterol (mean: −0.04 mmol/L).23 We found statistically significant effects for PUFA replacing saturated fat on LDL cholesterol (overall range: −0.04 to −0.48 mmol/L),23 51 but not when replacing other dietary energy.39 40 Three studies of PUFA in populations with diabetes reported non-significant findings.36 41 52 In general populations, increasing saturated fat to replace carbohydrate, or any foods, was found to increase LDL cholesterol (overall range: 0.03-0.19 mmol/L), and the findings were statistically significant in the majority of analyses.23 51 There was evidence that low carbohydrate diets increased LDL cholesterol (overall range: 0.10-0.11 mmol/L),9 28 29 46 50 56 and the relationships were statistically significant in some studies or analyses.9 28 46 50 56 There is evidence for a relationship between fibre and LDL cholesterol (overall range: −0.10 to −0.23 mmol/L).32 34 57 There is evidence to suggest that dietary free sugars significantly increase LDL cholesterol (mean: 0.17 mmol/L).44 Substitution from starch to sucrose or glucose increases LDL cholesterol,55 but substitution to fructose does not.53 There were no statistically significant effects for sodium on LDL cholesterol.

Glycaemia (HbA1c)
In populations with diabetes, there was evidence that low-fat diets substituting for carbohydrates decrease HbA1c (overall range: −0.17% to −0.47%); this was statistically significant in one study,58 but not in another.49 Increasing MUFA was associated with a significant reduction in HbA1c when substituted for carbohydrate or saturated fat (overall range: −0.09% to −0.12%) for the general population,59 and was not statistically significant in a population with diabetes when substituted for carbohydrate.36
Increasing PUFA to replace carbohydrate or saturated fat was associated with a decrease in HbA1c (overall range: −0.02% to −0.33%),41 42 52 54 59 and the relationship was statistically significant in one study.59 There is evidence for fibre consumption decreasing HbA1c in populations with diabetes (overall range: −0.61% to −0.91%), and the finding was statistically significant.57 60 The association was not statistically significant in a general population.32 There is evidence to suggest that fructose and tagatose are associated with a decrease in HbA1c in general populations61 and populations with diabetes.62 Substitutions between fructose, sucrose, glucose and starch were not associated with significant changes to HbA1c.55 There was no statistically significant effect of a low sodium diet on HbA1c.

Summary data and causal pathway diagram
A summary of the effect sizes and significance of relationships for the general population is provided in table 2, and individual study effects are reported in the online supplemental tables. Figure 2 presents the evidence in a causal pathway diagram.

DISCUSSION/CONCLUSION
Main findings of this study
The review serves the function of mapping the nutrient exposures and cardiometabolic outcomes. It has identified evidence across nutrients and cardiometabolic risk factors, and has considered variations in effects across population subgroups. The findings are illustrated in a causal pathway diagram. The review summarises current understanding of the non-weight relationships between dietary quality and cardiometabolic risks, and provides researchers with a resource to justify the health benefits of dietary change. The review has highlighted the harms of sodium on blood pressure, particularly in those with hypertension, whereas fibre and unsaturated fats can reduce systolic blood pressure. The relationships between fats and carbohydrates and cholesterol vary by the type of macronutrient, so that fibre and starch decrease cholesterol, whereas sugar and saturated fat increase cholesterol. MUFA, sugar and fibre were associated with HbA1c. Many of the studies included in the review were graded as low-quality evidence. There were many cases where the findings from reviews with similar exposures and outcomes were conflicting. This may be due to differences in study objectives and inclusion criteria, but may also be affected by changes in evidence over time. As such, the findings should be interpreted with caution. In synthesising the evidence, we considered the quality of studies, but have not excluded the findings from low-quality studies. Further research could update formal synthesis of the nutrients and cardiometabolic risks using consistent methods.

What is already known on this topic?
The directions of the relationships between macronutrients and cardiometabolic risks are consistent with national63 and international guidelines64 to restrict the consumption of salt and saturated fat and to increase consumption of fruit and vegetables to increase dietary fibre. We only identified a significant relationship between free sugars and cholesterol, and none for a relationship between sugar and HbA1c. This finding, that there are few studies identifying significant effects of sugar on cardiometabolic risks, is consistent with other reviews of the relationship between carbohydrate and health.65
However, given the review's exclusion of weight gain as a measure of metabolic health, the negative health effects of free sugars may not be fully represented within the scope of this review.

What this study adds
This umbrella review of reviews provides a comprehensive search and mapping of the literature. The findings have been combined in a narrative synthesis and causal pathway diagram to indicate the effects of various macronutrient components based on the most recent available evidence. The purposive sampling of studies through a sampling framework matrix enabled the reviewers to identify evidence for a range of dietary macronutritional components across various population groups, while also identifying gaps and uncertainty in the evidence.

The study has highlighted gaps and uncertainty in the evidence for associations between nutrients and cardiometabolic risks. Few studies have investigated the association between sugar and cardiometabolic risks. We note that, despite the large number of studies investigating the relationships between sodium and blood pressure, none have reported associations with HbA1c. Recent findings from observational studies highlight a relationship between sodium and HbA1c in a non-hypertensive population.66 There is a high degree of uncertainty in the evidence identified in this review, with inconsistent and conflicting evidence across many of the relationships we have reviewed.

Limitations of this study
A limitation of the study is that the reviews were not statistically combined, in favour of a narrative assessment of outcomes and strength of evidence. The inclusion of all relevant reviews in this field would either contain dietary interventions too heterogeneous to be combined statistically or would not add to the findings from the included reviews. The review does not illustrate dietary impacts on triglycerides, or other measures of glycaemia that may be of interest to nutritionists, epidemiologists and other health professionals, because these are not commonly used in assessing cardiometabolic risk. It was necessary to prioritise certain dietary changes and metabolic risks for this review, but further research could extend this approach to accommodate evidence on single micronutrients that have been associated with reductions in blood pressure.67 Purposive sampling may have excluded important studies and evidence that might strengthen or conflict with the summaries provided here. However, selecting more recent systematic reviews should capture the most contemporary evidence.

Figure 1 Preferred Reporting Items for Systematic Reviews and Meta-Analyses diagram of selected articles for inclusion in the review.
Table 1 Characteristics of systematic reviews examining the effect of nutritional intake on measures of metabolic health in adults.
Table 2 Description of the direction, statistical significance and certainty of reported relationships between nutrients and metabolic risks for the general population, unless otherwise indicated.
Figure 2 A causal pathway diagram illustrating the direction and strength of evidence between nutrients and metabolic markers. HDL, high-density lipoprotein; LDL, low-density lipoprotein.
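A causal pathway diagram like figure 2 can also be represented programmatically as a set of directed nutrient-to-risk-factor edges. The sketch below encodes a handful of the directions reported in the results as an illustration; it is not a complete or authoritative transcription of figure 2.

```python
# Directed edges: (nutrient, risk factor) -> direction of reported effect
# ('+' increases, '-' decreases). Illustrative subset only, drawn from the
# narrative results above.
causal_edges = {
    ("sodium", "systolic blood pressure"): "+",
    ("sodium", "total cholesterol"): "+",
    ("fibre", "systolic blood pressure"): "-",
    ("fibre", "LDL cholesterol"): "-",
    ("saturated fat", "LDL cholesterol"): "+",
    ("free sugars", "total cholesterol"): "+",
}

def effects_on(risk_factor: str) -> dict:
    """Return the nutrients with a reported effect on a given risk factor."""
    return {n: sign for (n, rf), sign in causal_edges.items() if rf == risk_factor}

print(effects_on("systolic blood pressure"))  # {'sodium': '+', 'fibre': '-'}
```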
2024-03-23T15:13:50.165Z
2024-03-21T00:00:00.000
{ "year": 2024, "sha1": "e7301c0e807d389e7fa82198ee72a78013cb5e42", "oa_license": "CCBY", "oa_url": "https://nutrition.bmj.com/content/bmjnph/early/2024/03/21/bmjnph-2023-000666.full.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b4c61623a7df8acb99a5ac2d88e3e95dfb2ee811", "s2fieldsofstudy": [ "Medicine", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
255549561
pes2o/s2orc
v3-fos-license
Molecular basis of resistance to macrolides, lincosamides and streptogramins in Staphylococcus hominis strains isolated from clinical specimens

Coagulase-negative staphylococci (CoNS) are the most frequently isolated bacteria from the blood and the predominant cause of nosocomial infections. Macrolide, lincosamide and streptogramin B (MLSB) antibiotics, especially erythromycin and clindamycin, are important therapeutic agents in the treatment of methicillin-resistant staphylococcal infections. Among CoNS, Staphylococcus hominis represents the third most common organism. In spite of its clinical significance, very little is known about its mechanisms of resistance to antibiotics, especially MLSB. Fifty-five S. hominis isolates from the blood and the surgical wounds of hospitalized patients were studied. The erm(C) gene was predominant in erythromycin-resistant S. hominis isolates. The methylase genes erm(A) and erm(B) were present in 15 and 25 % of clinical isolates, respectively. A combination of various erythromycin resistance methylase (erm) genes was detected in 15 % of S. hominis isolates. The efflux gene msr(A) was detected in 18 % of isolates: alone in four isolates, and in different combinations in a further six. The lnu(A) gene, responsible for enzymatic inactivation of lincosamides, was carried by 31 % of the isolates. No erythromycin resistance that could not be attributed to the genes erm(A), erm(B), erm(C) and msr(A) was detected. Of the S. hominis isolates, 75 % were erythromycin resistant and 84 % were clindamycin susceptible. Among erythromycin-resistant S. hominis isolates, 68 % showed the inducible MLSB phenotype. Four isolates harbouring the msr(A) gene alone displayed the MSB phenotype. These studies indicated that resistance to MLSB in S. hominis is mostly based on the ribosomal target modification mechanism mediated by erm genes, mainly erm(C), and on enzymatic drug inactivation mediated by lnu(A).

Introduction
Coagulase-negative staphylococci (CoNS) are part of the normal bacterial flora of human skin, but they have been increasingly recognized as opportunistic pathogens capable of causing various types of infections (Piette and Verschraegen 2009). Among clinically significant strains of CoNS, Staphylococcus hominis is ranked third in importance, after S. epidermidis and S. haemolyticus. S. hominis is a genetically diverse species, and it is believed that recombination plays a significant role in generating this diversity (Mendoza-Olazarán et al. 2013; Zhang et al. 2013; Szczuka et al. 2014). These bacteria can be responsible for bloodstream infections, endocarditis, peritonitis, and bone and joint infections (Kloos and Bannerman 1999; Kaufman and Fairchild 2004; Chaves et al. 2005; Sorlozano et al. 2010; Bouchami et al. 2011). As in other staphylococci, the formation of biofilm on medical devices, or on host tissues, is thought to be one of the major pathogenic factors of S. hominis (Kaufman and Fairchild 2004; Götz et al. 2006; Chokr et al. 2006; Rodhe et al. 2006; Fredheim et al. 2009; Szczuka et al. 2015). The relatively high prevalence of methicillin resistance has complicated the treatment of staphylococcal infections (Casey et al. 2007). Macrolide, lincosamide and streptogramin B antibiotics are the preferred alternative to penicillins and cephalosporins in the treatment of staphylococcal infection. Moreover, erythromycin and clindamycin are recommended as second-line drugs for patients with a β-lactam allergy (Leclercq 2002; Gherardi et al. 2009).
MLSB antibiotics are structurally distinct but functionally similar, because they inhibit protein synthesis by binding to the 50S subunit (23S rRNA) of the bacterial ribosome. In staphylococci, resistance to MLSB is generally based on three mechanisms: ribosomal target modification mediated by erm genes, active efflux of antibiotics mediated by msr(A), and enzymatic drug inactivation mediated by lnu(A) (Leclercq 2002). The lnu(A) gene encodes a lincosamide O-nucleotidyltransferase, which inactivates only lincosamides. Erythromycin resistance methylase (erm) genes encode proteins which methylate adenine residue A2058 in the peptidyltransferase region of 23S rRNA domain V, which is part of the large (50S) ribosomal subunit, and thereby prevent the binding of the antibiotic to its target site (Leclercq 2002; Novotna et al. 2005). This methylation results in cross-resistance to macrolide, lincosamide and streptogramin B antibiotics (MLSB phenotype), which can be expressed either constitutively (cMLSB) or inducibly (iMLSB). Coagulase-negative staphylococci with an iMLSB resistance phenotype are resistant to 14-membered and 15-membered macrolides, whereas CoNS with a cMLSB resistance phenotype are resistant to all MLSB antimicrobials. The msr(A) gene is involved in the active efflux of antibiotics, causing resistance to 14- and 15-membered macrolides as well as to streptogramin, but not to lincosamides (MSB phenotype). This leaves clindamycin effective as a treatment choice (Lina et al. 1999; Leclercq 2002; Vimberg et al. 2015). The main purpose of this study was to assess the molecular basis of resistance to MLSB antibiotics in clinical isolates of S. hominis.

Bacterial strains
Fifty-five isolates of S. hominis were collected from the blood and surgical wound swabs of hospitalized patients. True bacteremia was diagnosed in 36 of the patients. The isolates were identified using the VITEK 2 system (bioMérieux, France). Although tuf gene sequencing gives excellent results in the identification of this species, the VITEK 2 offers very good results as well. Because S. hominis is a genetically diverse species, we confirmed the identification of all tested S. hominis isolates using the API STAPH. In this study, we included only those isolates whose identification was beyond any doubt. The isolates were stored at −70 °C in 50 % glycerol broth (BHI) until commencement of the study.

Characterization of resistance mechanisms
Phenotypic characterization of macrolide and lincosamide resistance was determined by the double-disc test, with erythromycin (15 μg) and clindamycin (2 μg) discs applied 20 mm apart. A 10-μl inoculum of a 0.5 McFarland suspension was spotted on Mueller-Hinton agar with the antibiotic discs. After 18 h of incubation at 35 °C, blunting of the clindamycin zone of inhibition proximal to the erythromycin disc indicated the inducible type (D-shaped zone) of MLSB resistance, whereas resistance to both erythromycin and clindamycin indicated the constitutive type. The absence of a D-shaped zone in erythromycin-resistant and clindamycin-susceptible isolates was interpreted as the MSB efflux phenotype (Leclercq 2002; Aktas et al. 2007). The results were interpreted according to EUCAST recommendations. Isolates were also screened with a 30-μg cefoxitin disc and studied for the presence of the mecA gene to test for methicillin resistance (Geha et al. 1994).
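The phenotype assignment rules just described (constitutive versus inducible MLSB versus MSB efflux) amount to a small decision table. The function below is a minimal sketch of that logic for illustration only; it is not a validated diagnostic tool, and the function and parameter names are ours, not the authors'.

```python
def mls_phenotype(ery_resistant: bool, cli_resistant: bool, d_zone: bool) -> str:
    """Assign an MLSB resistance phenotype from double-disc diffusion readings.

    ery_resistant / cli_resistant: resistance to erythromycin / clindamycin.
    d_zone: blunting of the clindamycin inhibition zone proximal to the
    erythromycin disc (D-shaped zone), indicating inducible resistance.
    """
    if ery_resistant and cli_resistant:
        return "cMLSB"  # constitutive resistance to all MLSB antimicrobials
    if ery_resistant:
        # erythromycin resistant but clindamycin susceptible
        return "iMLSB" if d_zone else "MSB"  # inducible vs efflux phenotype
    return "susceptible"

print(mls_phenotype(True, True, False))   # cMLSB
print(mls_phenotype(True, False, True))   # iMLSB
print(mls_phenotype(True, False, False))  # MSB
```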
The bacterial genomic DNA was isolated from the clinical isolates using the Genomic DNA Plus kit (A&A Biotechnology, Poland). For the detection of the macrolide resistance genes (erm(A), erm(B), erm(C), msr(A), lnu(A)) and the mecA gene, PCR assays were performed as described by Lina et al. (1999), Le Bouter et al. (2011) and Geha et al. (1994). The STATISTICA software (version 10.0, StatSoft, Tulsa, OK, USA) was used for statistical analysis. The association between methicillin resistance and resistance to MLSB antibiotics was evaluated using the chi-square (χ2) test. A P value of <0.05 was considered significant.

Results
The most prevalent resistance determinant was erm(C), which was detected in 25 of the isolates (45 %), followed by lnu(A), erm(B) and erm(A), detected in 17 (31 %), 14 (25 %) and 8 (15 %) isolates, respectively. The msr(A) gene was detected alone in 4 isolates and in combination with other genes in a further 6. As Table 1 shows, 14 distinct resistance genotypes could be observed in the S. hominis strains. Fourteen isolates were negative for all screened genes. All isolates harbouring the erm(B) or erm(C) genes, alone or in combination with other genes, exhibited resistance to erythromycin. The erm(A) gene was never found alone, and all erm(A)-positive isolates were resistant to erythromycin. The fourteen isolates that were negative for all five resistance genes displayed susceptibility to erythromycin and clindamycin. No isolates resistant to clindamycin only were found. Twenty-eight erm-positive isolates were resistant to erythromycin but remained susceptible to clindamycin and exhibited the inducible MLSB phenotype. The remaining nine erm-positive isolates showed resistance to erythromycin and clindamycin, displaying the constitutive MLSB phenotype. It should be emphasized that the cMLSB phenotype was detected only in strains harbouring both erm and lnu(A) genes. The four isolates harbouring the msr(A) gene alone displayed the MSB phenotype. Methicillin-resistant S. hominis isolates were significantly more often resistant to macrolides and lincosamides (93 % to erythromycin, 77 % to clindamycin) than methicillin-susceptible isolates (50 and 22 %, respectively; p < 0.001).
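The reported association between methicillin resistance and erythromycin resistance can be checked with the same chi-square test used in the study. The sketch below uses scipy; the 2x2 counts are illustrative reconstructions loosely back-calculated from the reported percentages, since the paper gives percentages rather than the raw table.

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 table (rows: methicillin-resistant, methicillin-susceptible;
# columns: erythromycin-resistant, erythromycin-susceptible). Counts are
# hypothetical, chosen to roughly match the reported 93 % vs 50 % rates,
# not taken from the paper.
table = [[28, 2],
         [13, 12]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```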
Discussion
Coagulase-negative staphylococci have been recognized as an important cause of nosocomial infections and are the most frequently isolated bacteria from blood (Krediet et al. 2004; Hira et al. 2007; Piette and Verschraegen 2009). These pathogens have developed increased resistance to antimicrobial agents, especially to methicillin and other semisynthetic penicillins. Among CoNS, S. haemolyticus has the highest tendency to develop resistance to multiple antibiotics (Rodríguez-Aranda et al. 2009). S. hominis isolates display lower virulence than S. haemolyticus and have been recognized less frequently as significant human pathogens. However, there are reports indicating that S. hominis can be responsible for nosocomial outbreaks (Chaves et al. 2005; d'Azevedo et al. 2008; Palazzo et al. 2008; Sorlozano et al. 2010; Ruiz de Gopegui et al. 2011; Roy et al. 2014). Nevertheless, there is limited information on their resistance to antibiotics, especially to macrolides, lincosamides and streptogramin B. As mentioned above, MLSB antibiotics are used against staphylococcal infection in penicillin-allergic patients and in methicillin-resistant staphylococci (MRS)-infected patients. In particular, the use of clindamycin is regarded as a valid choice in the treatment of soft-tissue and bone infections (Lina et al. 1999; Leclercq 2002; Gherardi et al. 2009).

The present data indicate that 16 % of S. hominis strains were resistant to clindamycin, whereas 75 % displayed resistance to erythromycin. In German studies, only 19 % of S. hominis strains were erythromycin resistant (Gatermann et al. 2007). Most of those strains displayed the constitutive MLSB phenotype, in contrast to our study, in which the majority of S. hominis isolates expressed the inducible MLSB phenotype. It should be emphasized that coagulase-negative staphylococci with an iMLSB resistance phenotype are resistant to 14-membered and 15-membered macrolides, but susceptible to lincosamides, streptogramin B and 16-membered macrolides. Although iMLSB CoNS are resistant to erythromycin in vitro and sensitive to clindamycin in vitro, prescribing clindamycin may lead to treatment failure. In our study, more than half of the S. hominis isolates were resistant to methicillin. Additionally, methicillin resistance was closely associated with resistance to erythromycin, which narrows the therapeutic options. It is well known that glycopeptides are the treatment of choice for infections caused by multi-resistant staphylococci. However, due to the emergence of vancomycin-resistant staphylococci, a reduction in the use of this antibiotic has been recommended. Recently, Won and Kim (2013) reported the emergence of vancomycin-resistant S. hominis. The emergence of resistance to relatively new antibiotics, such as linezolid and quinupristin/dalfopristin, has also been noted in clinical S. hominis strains (Petinaki et al. 2005; Ruiz de Gopegui et al. 2011).

This study indicated that the resistance to macrolides and lincosamides in S. hominis is mostly based on the ribosomal target modification mechanism mediated by erm genes, mainly erm(C), and on enzymatic drug inactivation mediated by lnu(A). The erm(C) genes are predominant among coagulase-negative staphylococci from European countries, Canada and Korea (Martineau et al. 2000; Lim et al. 2002; Novotna et al. 2005; Gatermann et al. 2007; Gherardi et al. 2009). However, these data largely concern the most frequently isolated coagulase-negative species, i.e. S. epidermidis and S. haemolyticus, whereas little is known about the distribution of MLSB resistance genes in other staphylococcal species, including S. hominis. Recently, Le Bouter et al. (2011) characterized resistance to macrolides, lincosamides and streptogramin B in 72 S. saprophyticus strains isolated from urine specimens. They found that the distribution of MLSB resistance genes in S. saprophyticus is different from that generally reported for S. epidermidis and S. haemolyticus. The results of this study show that the erm(A) and erm(B) genes were present more frequently in S. hominis than in other staphylococcal species as previously described (Martineau et al. 2000; Gatermann et al. 2007; Gherardi et al. 2009). For example, in a study conducted in Korea, erm(B) genes were present in only 3.3 % of isolates (Lim et al. 2002). The efflux of macrolides due to msr(A) is a mechanism found in only a minority of S. hominis. Previously obtained data indicate that msr(A) genes were present in 11-24 % of coagulase-negative staphylococci (Aktas et al. 2007; Bouchami et al. 2007; Gatermann et al. 2007). In contrast, in S. saprophyticus, efflux mechanisms were the most common mechanisms of resistance to MLSB antibiotics (Le Bouter et al. 2011). We observed a high occurrence of the lnu(A) gene, which confers resistance to lincomycin while clindamycin remains active (Leclercq 2002).
Overall, this study suggested that S. hominis may constitute a reservoir for MLSB resistance genes, in particular erm(C) and lnu(A), among coagulase-negative staphylococci. These resistance genes are often located on plasmids or transposons and may be transferable to more pathogenic staphylococcal species (Leclercq 2002). Our results indicated that the uncommon pathogen S. hominis has a high prevalence of erythromycin resistance, and most of these strains display the inducible MLSB phenotype. Ribosomal modification and drug inactivation are the main mechanisms of MLSB resistance in S. hominis strains.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
2023-01-10T14:25:24.865Z
2015-08-09T00:00:00.000
{ "year": 2015, "sha1": "c356d193b484c55eb36e4d62615921bdb265ea72", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12223-015-0419-6.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "c356d193b484c55eb36e4d62615921bdb265ea72", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
208302262
pes2o/s2orc
v3-fos-license
Childhood obesity in urban Ghana: evidence from a cross-sectional survey of in-school children aged 5–16 years

Background Childhood obesity is a growing public health concern in many low-income urban settings, but its determinants are not clear. The purpose of this study is to assess the prevalence of childhood obesity and associated factors among in-school children aged 5–16 years in a Metropolitan district of Ghana.

Methods A cross-sectional quantitative survey was conducted among a sample of 285 in-school children aged 5–16 years. Pre-tested questionnaires and anthropometric data collection methods were used to collect data. Descriptive, bivariate, and binary and multivariate logistic regression statistical techniques were used to analyse the data.

Results Some 46.9% (42.2% for males and 51.7% for females) of the children were overweight. Of these, 21.2% were obese (BMI above the 95th percentile). Childhood obesity was higher in the private school (26.8%) than the public school (21.4%), and among girls (27.2%) than boys (19%). Factors that increased obesity risks included being aged 11–16 as against 5–10 years (aOR = 6.07; 95% CI = 1.17–31.45; p = 0.025), having a father whose highest education is 'secondary' (aOR = 2.97; 95% CI = 1.09–8.08; p = 0.032) or 'tertiary' (aOR = 3.46; 95% CI = 1.27–9.42; p = 0.015), and consumption of fizzy drinks on most days of the week (aOR = 2.84; 95% CI = 1.24–6.52; p = 0.014). Factors that lowered obesity risks included engaging in sport at least 3 times per week (aOR = 0.56; 95% CI = 0.33–0.96; p = 0.034) and sleeping for more than 8 h per day (aOR = 0.38; 95% CI = 0.19–0.79; p = 0.009).

Conclusion Higher parental (father) educational attainment and frequent weekly consumption of fizzy drinks may increase obesity risks among in-school children aged 5–16 years in the Metropolitan district of Ghana. However, regular exercise (playing sport at least 3 times per week) and having 8 or more hours of sleep per day could lower obesity risks in the same population. Age- and sex-appropriate community and school-based interventions are needed to promote healthy diet selection and consumption, physical activity and healthy lifestyles among in-school children.

Background
Childhood obesity is one of the most important child and public health issues in many parts of the world today [1]. In simple terms, obesity refers to abnormal or excessive fat accumulation resulting from an energy imbalance between calories consumed and calories expended [1]. Several factors contribute to obesity. These include a sedentary lifestyle, physical inactivity and poor eating habits, including consumption of savoury foods with hidden fats and sugars that impair metabolism [1,2]. Other factors include biophysiological causes such as genetic causes, insulin resistance, hyperinsulinism, and disruption of the normal satiety feedback mechanisms [1,2]. Globally, obesity prevalence has nearly tripled since the 1970s [1]. For instance, about 1.9 billion adults (18+ years) were overweight in 2016, of whom 650 million were obese [1]. In the same year, 41 million children under age five were overweight or obese [1]. Among children and adolescents aged 5–19, over 340 million were overweight or obese in 2016 [1]. Evidence further suggests that low-income countries harbour the majority of obese people [1]. For instance, nearly half of overweight/obese children under five are in Asia, while one quarter live in Africa [1].
In Africa in particular, the number of overweight or obese children nearly doubled, from 5.4 million in 1990 to 10.6 million in 2016 [1]. It is recognised in the current literature that many obese adults developed the condition in childhood and adolescence [1]. Childhood obesity also has additional consequences, including higher risks of premature death and disability in adulthood, increased risk of fractures, increased future risks of breathing difficulties, heart disease, hepatic impairment, diabetes, insulin resistance, vision problems and cancer, and psychological consequences such as low self-esteem [3,9,10]. The difficulty of treating childhood obesity and the social and economic burden of managing the condition are other consequences of childhood obesity [11]. In relation to the economic cost of childhood obesity, for instance, one study found $14.1 billion in additional annual healthcare costs for prescription drugs, emergency room visits and outpatient visits [12]. Also, an obese 10-year-old child who maintains weight gain throughout adulthood has lifetime medical costs $19,000 higher than a healthy-weight 10-year-old who maintains a normal weight throughout life [13]. When compared with a normal-weight child, a child who is obese for two consecutive years has a $194 higher outpatient visit expenditure, a $114 higher prescription drug expenditure, and a $12 higher emergency room expenditure [12].

While the evidence on childhood obesity and its potential adverse effects in low-income settings is growing, there is still a general paucity of data on the prevalence of childhood obesity in many African settings [14]. Specifically, there are currently limited data on the prevalence of obesity in children from 5 to 15 years in many low-income settings [1]. In Ghana, for example, a recent systematic review of the overweight and obesity epidemic in Ghana highlighted the relative lack of focus on young children and adolescents [3]. In addition, while studies have identified various factors that increase the risk of childhood obesity, including age [15–17], sex [14,16,18], educational level of parent/guardian [14,19], and unhealthy diet and physical inactivity [20–24], it is not entirely clear whether these factors are implicated in Ghana's obesity epidemic. In order to prevent childhood obesity, there is a need for context-specific studies to estimate prevalence and identify risk factors [25]. As Guh et al. have argued, one important step in preventing childhood obesity and its associated problems lies in comprehensive studies of its prevalence and associated factors [26]. This study contributes to filling this important knowledge gap by examining the prevalence of obesity and associated factors among in-school children in a metropolitan district of Ghana.

Study design
A cross-sectional school-based quantitative survey was conducted among children aged 5–16 in the Tema Metropolitan District of the Greater Accra region of Ghana. The design, implementation and reporting of results followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist for cross-sectional studies.

Study context
Empirical data collection was done in two of the largest basic schools, one public and the other private, in the Tema Metropolitan district, 30 km east of Accra, the capital city of Ghana. The metropolis is the second most populous in the Greater Accra Region.
It has a population of 403,934, with nearly everyone living in urban localities [27]. The proportion of children aged 5–16 is estimated to be 29.4% of the total population of the metropolis [27]. There are about 338 basic schools (primary and junior high schools), of which 185 are private and 153 are public. The two schools purposively chosen for this research were among the largest in terms of student numbers. The numbers of students in the private and public schools studied were 320 and 610, respectively.

Study population
We included children from the two selected basic schools who were aged 5–16 years. We excluded, however, children within the 5–16 year age bracket who had a form of physical disability that did not allow accurate determination of their true height.

Sample size and sampling
One recent cross-sectional study among 218 in-school children in northern Ghana reported a childhood obesity prevalence of 17.4% [6]. Based on this prevalence, and assuming a 95% confidence level, alpha of 0.05, a 5% worst acceptable margin of error, and power of 80%, we estimated a minimum sample of 221 using Cochran's sample size estimation formula for cross-sectional studies [28]. We adjusted the minimum sample size upward by 30% to account for non-response and to increase the power of the study. The final sample size was thus 287.

We used a multistage sampling procedure to select qualified children. In the first stage, we obtained registers of all students in each of the two schools. We screened each register to identify children who met our inclusion criteria. In total, 737 children (189 private and 548 public school) from the two basic schools met our inclusion criteria. In stage two, we allocated our total sample of 287 in proportion to the size of the population of children aged 5–16 years in each school. This was to ensure that the sample for each school was commensurate with the size of the population of eligible children. This yielded 191 children from the public school and 94 from the private school. In stage three, we entered the names of all eligible children from each school into an Excel spreadsheet and gave each a unique number (e.g. 001–548 for the public school, and 001–189 for the private school). The lists of all the numbered children were then exported into a Google-based random number generator, and the required number of children (191 for the public school and 94 for the private school) was randomly selected. We matched the randomly selected numbers to the corresponding names on our list of eligible children.

In stage four, the research team visited the schools to meet all selected children. They were told about the purpose of the study, how they were selected, and what the study procedures involved. All initial questions were addressed in these meetings. As the children were below the age of legal consent (i.e. 18 years), we wrote personalised letters to their parents/guardians. The letters explained the purpose of the study, how the children were selected and what the study procedures were. Information sheets giving further details about the study, including information about ethical approval, the rights of their child, informed consent, and the contact details of the researchers, were included with each letter. Each child delivered the letter to their parent/guardian. Each parent/guardian was given 1 week to decide on their child's participation in the study. After this period, parents/guardians who consented to their child's participation were directed to sign or thumbprint the consent form and return it to their child. Only one child did not receive parental consent to participate in the study and was subsequently dropped. All children who received parental consent were required to assent to their parents' consent. All gave their assent.
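The sample size calculation above can be reproduced with Cochran's formula for a proportion; the sketch below is a simple check under the stated assumptions (p = 0.174, z = 1.96, 5% margin of error).

```python
import math

def cochran_n(p: float, z: float = 1.96, e: float = 0.05) -> int:
    """Cochran's minimum sample size for estimating a proportion p
    with margin of error e at confidence level z."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

n = cochran_n(p=0.174)           # prevalence from the earlier Ghanaian study
n_adjusted = math.ceil(n * 1.3)  # 30% upward adjustment for non-response
print(n, n_adjusted)             # 221 288 (the paper reports a final sample of 287)
```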
Data collection
The second author (a specialist child health nurse) and two trained research assistants collected the data. Two kinds of data were collected. First, questionnaires were used to collect information on the socio-demographic, dietary and physical activity characteristics of the children. The questionnaire was developed specifically for the purposes of this study, based on extensive literature review and expert consultation in Ghana (see Additional file 1). It was pretested on a total of 30 children aged 5–16 in two other schools not included in the actual study. We tested the reliability of the instrument and found it to be reliable (Cronbach's alpha coefficient range = 0.80–0.92). The level of reliability observed, for both individual items and the entire data collection tool, is considered in the literature to be good [29]. Details of the test-retest properties are provided in Additional file 2.

Second, anthropometric information such as the weight and height of each child was collected to enable calculation of body mass index (BMI) and obesity status. The anthropometric measurement tools employed were a weighing scale (Tanita WB-3000 Digital Doctor Scale, Tanita Corporation of America, Inc., Illinois, USA) to measure the weight of each child in kilograms as they stood on it on a flat hard surface, and a height rod (HR-200 Tanita Wall-Mounted Height Rod, Tanita Corporation of America, Inc., Illinois, USA) to measure the height of children in centimeters. All data collection was done in the schools during break hours. Children were interviewed one at a time in a specially designated office where maximum confidentiality was assured. Interviews were done in English and three other local languages (Ga/Dangme, Ewe and Twi), depending on each child's preference.

Measurement
The outcome variable of the study is childhood obesity. Similar to previous studies, we used BMI as a marker of obesity [11]. While adult BMI is easily calculated by dividing body weight in kilograms by height in meters squared, for children and teenagers BMI requires adjustment for age and sex [1]. We used the WHO AnthroPlus software (version 1.0.4) for calculating BMI for children/adolescents aged 5–20 years. BMIs calculated using the WHO AnthroPlus were then compared with the CDC curves. Any child with a BMI above the 95th age-sex percentile was considered obese [30]. Our final outcome was dichotomized into 'obese' and 'not obese'.

A number of potential covariates were also measured using the questionnaires, including socio-demographic characteristics such as age, sex, educational level of parent/guardian, religion, occupation of parents and obesity history of the child's family; dietary/behavioural factors such as consumption of processed foods and fizzy drinks; and physical activity factors such as sports, means of transport to school, sleeping hours and playing of computer games rather than outdoor games. In particular, sports activity was defined as engaging in any of the following: playing football, basketball, tennis or volleyball, playing ampe (a simple jumping game played by school-aged children, mostly girls, in Ghana and neighbouring countries, usually involving two or more players and requiring no equipment), running, and cycling. Fizzy drinks were defined as non-alcoholic soft drinks that contain carbonated water, a sweetener (sugar, high-fructose corn syrup, fruit juice, a sugar substitute, or some combination of these) and a natural or artificial flavouring.
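BMI itself is a simple calculation; the age- and sex-specific classification is the part that requires reference data. The sketch below illustrates the logic only: the study used WHO AnthroPlus and CDC growth curves, whereas here the 95th-percentile cut-off is passed in as a plain number purely for illustration.

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    height_m = height_cm / 100
    return weight_kg / height_m ** 2

def obesity_status(bmi_value: float, percentile_95_cutoff: float) -> str:
    """Classify a child as obese if BMI exceeds the age- and sex-specific
    95th percentile. In practice the cut-off comes from growth reference
    tables (e.g. CDC charts) looked up by age and sex; here it is supplied
    directly as an illustrative value."""
    return "obese" if bmi_value > percentile_95_cutoff else "not obese"

child_bmi = bmi(weight_kg=55.0, height_cm=150.0)             # about 24.4
print(obesity_status(child_bmi, percentile_95_cutoff=23.0))  # obese
```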
Model specification
In this study, childhood obesity (Y) is the response variable, with two binary outcomes: Y = 1 when a child is obese (BMI falls above the 95th percentile), and Y = 0 when a child is not obese (BMI falls below the 95th percentile). The independent variables for the response variable Y are the socio-demographic, dietary and behavioural factors. If Y is the response variable, the probability that a child will be obese is Yi and the probability of the alternative outcome is (1 − Yi). Therefore, the odds in favour of a child being obese are

Yi / (1 − Yi)  (1)

Taking the natural log of the odds gives the logit model:

ln[Yi / (1 − Yi)] = β0 + βiXi + ui  (2)

where 'ln' is the natural logarithm, Yi is the probability that a child will be obese, (1 − Yi) is the complementary probability, β0 denotes the intercept parameter, Xi denotes the explanatory variables, βi denotes the coefficients to be estimated, and ui is the error term. From Eq. (2), the model specification for estimating factors that predict obesity status could be represented as follows:

ln[Yi / (1 − Yi)] = β0 + … + β5(father education level) + β6(mother education level) + β7(number of siblings) + β8(father occupation) + β9(mother occupation) + … + βnXn + ui  (3)

where ln = natural log; Yi/(1 − Yi) = the odds of being obese; β0 = intercept term (i.e. the log-odds of a child being obese if all the explanatory variables were equal to zero); β1 to βn = the explanatory variables' coefficients, holding all other variables constant; and ui = random error term.

Data processing and statistical analysis
Following completion of data collection, all questionnaires were first manually examined to check for completeness. Questionnaires were then hand-coded and entered separately into Epi Info version 7 by two research assistants. The two data entries were compared, and all data entry errors or inconsistencies were discussed and resolved with the two research assistants who performed the data entry. Following from this, a single database was created, agreed upon, and imported into STATA software for analysis. Both descriptive and inferential statistical analyses were done. For the descriptive analysis, frequency distributions and proportions were used to summarise categorical variables, and means and ranges were computed to summarise continuous variables. For the inferential statistical analysis, both the chi-square test of independence and Fisher's exact test (for observations with cell counts of less than 5) were first performed to assess associations between childhood obesity and the independent variables. This was followed by binary and multivariable logistic regression analysis to estimate odds ratios for factors that showed statistical association at the bivariate level. A 95% confidence level and statistical significance of p < 0.05 were assumed in the regression analysis.
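As an illustration of the model specification above, a binary logistic regression of this form can be fitted with statsmodels and the coefficients exponentiated to obtain adjusted odds ratios. The data below are simulated and the variable names are ours; the study's actual dataset and coding are not available.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 300

# Simulated binary predictors (illustrative names, not the study's coding)
X = pd.DataFrame({
    "age_11_16":   rng.integers(0, 2, n),
    "fizzy_most":  rng.integers(0, 2, n),
    "sport_3plus": rng.integers(0, 2, n),
})

# True log-odds loosely echo the direction of the reported associations
logit = -1.5 + 1.0 * X["age_11_16"] + 0.9 * X["fizzy_most"] - 0.6 * X["sport_3plus"]
obese = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = sm.Logit(obese, sm.add_constant(X)).fit(disp=0)
print(np.exp(model.params))  # exponentiated coefficients = adjusted odds ratios
```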
Characteristics of respondents
A total of 286 children were surveyed; one questionnaire was missing, hence 285 were used for the analysis. Table 1 shows the background characteristics of respondents. Mean age was 11.27 (SD = 4.73), and a little over half (50.5%) were female. The majority (57.5%) were aged 11–16 years. Most children reported that their fathers' (33.0%) and mothers' (38.7%) highest educational level was basic education. Table 2 shows essential anthropometric characteristics of respondents. Some 46.9% (42.2% for males and 51.7% for females) of the children were overweight for their age.

Table 3 shows the prevalence of obesity by children's background. Of the 46.9% of respondents who were overweight, 21.2% were obese. Some 26.8% of children from the private school were obese, compared with 21.4% from the public school. Childhood obesity was also higher among girls (27.2%) than boys (19%).

Factors associated with obesity
To identify factors that significantly predict childhood obesity, chi-square and Fisher's exact tests were first performed between a total of 16 theoretically relevant independent socio-demographic and dietary variables and the outcome variable; the results are shown in Tables 4 and 5. The factors significantly associated with childhood obesity were pulled into binary and multivariable logistic regression models, and odds ratios were estimated. The results are shown in Table 6.

Children who were involved in sporting activities for at least 3 days per week had a significant 42% reduction in their odds of being obese compared with children who participated in sporting activities for less than 3 days per week (cOR = 0.58; 95% CI = 0.36–0.95; p = 0.030). After adjusting for other variables identified as significant predictors of obesity, this association remained statistically significant (aOR = 0.56; 95% CI = 0.33–0.96; p = 0.034).

Further, children who consumed fizzy drinks on some days (cOR = 2.39; 95% CI = 1.06–5.38; p = 0.035) and most days (cOR = 3.36; 95% CI = 1.60–7.06; p = 0.001) had, respectively, 2.39 and 3.36 times the odds of being obese compared with children who hardly or never consumed fizzy drinks. These differences were statistically significant. After adjusting for other variables identified as significant predictors of obesity, children who consumed fizzy drinks on most days were still 2.84 times more likely to be obese compared with children who hardly or never consumed fizzy drinks (aOR = 2.84; 95% CI = 1.24–6.52; p = 0.014).

Finally, children who slept for more than 8 h per day had a 68% reduction in their odds of being obese compared with children who slept for less than 5 h (cOR = 0.32; 95% CI = 0.16–0.63; p = 0.001). This association remained significant after adjusting for other variables (aOR = 0.38; 95% CI = 0.19–0.79; p = 0.009). Furthermore, children who slept between 5 and 8 h had a 34% reduction in their odds of being obese compared with children who slept for less than 5 h (cOR = 0.66; 95% CI = 0.31–1.41; p = 0.286); this association was, however, not significant.
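The percentage reductions quoted above follow directly from the odds ratios: an odds ratio below 1 implies a (1 − OR) × 100% reduction in the odds. A one-line helper makes the arithmetic explicit.

```python
def odds_change_percent(odds_ratio: float) -> float:
    """Percentage change in odds implied by an odds ratio."""
    return round((odds_ratio - 1) * 100, 1)

print(odds_change_percent(0.58))  # -42.0: sport on at least 3 days/week
print(odds_change_percent(0.32))  # -68.0: sleeping more than 8 h/day
print(odds_change_percent(2.84))  # 184.0: fizzy drinks on most days
```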
However, childhood obesity prevalence in our study is lower than the 43% recently reported for the general adult population in Ghana [6]. This notwithstanding, the relatively high obesity prevalence in our study suggests that children are equally vulnerable to obesity in cities in Ghana. This could be related to the fact that children share the same or similar obesogenic environments with adults and are therefore increasingly exposed to risk factors such as a sedentary lifestyle in urban settings. There is also a perception in Ghana about body weight: people are perceived to be living well when they look fat [4]. Consequently, many parents may take steps to ensure that their children conform to this expectation in order to be praised for good parenting. Together with previous studies highlighting the growing prevalence of childhood obesity, our findings suggest a need for health-promoting interventions (e.g., healthy eating and physical activity) targeted not only at adults but also at children in our study context, as well as in other African settings such as Nigeria [15], Uganda [18], and Ethiopia [14], where similarly high levels of childhood obesity have been reported. Children aged 11-16 were 6 times more likely to be obese compared to those aged 5-10 years. It is not entirely clear why this difference exists; however, we believe it could be related to dietary practices as well as sedentary lifestyles. On diet, children aged 5-10 are more likely to have their dietary choices regulated than those aged 11-16. For instance, children aged 5-10 are more likely to carry home-made meals to school than children aged 11-16, who may have access to money and may therefore purchase their own meals, especially in school. This may expose children aged 11-16 to unhealthy diets compared to those aged 5-10, which may affect weight gain and subsequently BMI. As a number of studies have shown, eating outside the home, especially in fast-food eateries, is positively correlated with overweight and obesity in children [31]. In terms of sedentary lifestyle, children aged 5-10 may again be more regulated in their sleep time as well as in time spent watching TV and playing computer games within the home environment. Within the school, 5-10-year-old children may also be more involved in outdoor games and playground activities than those aged 11-16, who may spend more time doing classroom work. Combined with the possibility that children aged 11-16 are more exposed to unhealthy diets, the likelihood of weight gain and higher BMI could be greater in this age group. In line with the WHO's global action plan on physical activity 2018-2030 [2], we recommend targeted interventions such as sport and other physical activities, as well as dietary education and nutrition counselling, among children aged 11-16 to ensure that they maintain a healthy weight. Further, the risk of childhood obesity appeared to increase with increasing paternal education. This result is counter-intuitive, because better-educated parents and guardians generally have greater opportunities to obtain information on healthy dietary practices and healthy behaviour change; one would therefore expect better outcomes and lifestyle indicators for children whose parents have higher education. This appears not to be the case in our study. Though counter-intuitive, this result is nevertheless not surprising.
This is because in many low-income settings, higher education is often linked to higher socioeconomic status, including higher purchasing power. Higher purchasing power means the ability to afford, for example, a personal means of transport such as a car, which may be used to transport children to school, whereas children of less-educated parents who lack such purchasing power may walk or ride bicycles. Also, greater purchasing power may increase the consumption of processed foods, unhealthy snacking, and high-fat diets within and away from home; this link has been suggested in a number of low-income settings [18]. In addition, greater purchasing power may encourage more sedentary behaviour through access to TV, computer games, and related indoor activities that reduce physical activity. Another way that higher parental education could expose children to obesity risks is through work. Parents with higher education are more likely to be engaged in paid employment, and tight work schedules may make it difficult for them to develop healthy meal plans for their households, which could increase consumption of convenient foods that may be unhealthy. Indeed, our results support growing research evidence that whereas higher socio-economic status is inversely related to obesity in high-income settings [10], it is positively associated with obesity in low-income settings [32]. Not surprisingly, engaging in sporting activities for at least 3 days per week reduced the odds of being obese by 42% compared to participating in sporting activities for fewer than 3 days per week. Our results highlight a need for age- and sex-appropriate sports and physical activity-based interventions, such as football, basketball, tennis, volleyball, and ampe, as well as running and cycling, among school children. While the importance of diet and physical activity in relation to obesity is not a new insight, physical activity represents one side of the energy balance equation, determining whether energy is expended, supporting a healthy weight, or accumulated, promoting child obesity. Regular physical activity could therefore reduce the odds of a child being obese. Parents and teachers, who are the primary caregivers of children, should endeavour to increase children's physical activity. Physical education should be placed on school timetables more than three times per week to increase children's physical activity, and parents should encourage children to engage in outdoor games. This will, however, require local government authorities to create safe and child-friendly spaces and playgrounds within urban communities, something currently lacking in many urban settings in Ghana, to encourage more outdoor physical activity among children. Children who consumed fizzy drinks on most days were 2.84 times more likely to be obese compared with those who hardly or never consumed fizzy drinks. This is not surprising, given that fizzy drinks are typically sugar-sweetened beverages with a high fructose content [33]. Most sweetened foods are also calorie-dense, which, when combined with low energy expenditure, promotes weight gain. Indeed, previous studies have found that children who consume sweetened foods are more likely to be overweight or obese compared to those who do not [3,8,19,22]. In this regard, strategies such as developing healthy meal plans for the household could help regulate children's consumption of unhealthy foods.
Teachers and school authorities could be involved in promoting the sale and consumption of healthier foods, especially within the vicinity of educational facilities. In Ghana, most private schools provide lunch for a fee or regulate eating times; schools that provide food should develop healthy food timetables for children, and naturally prepared fruit drinks should be served in place of fizzy drinks. Parents should, as much as possible, avoid giving children extra money that could easily be used to purchase fizzy drinks, and should discourage fizzy drink consumption at home by not consuming the drinks themselves. Parents should particularly set an example by consuming the healthy and nutritious foods they expect their children to consume, as this may drive home the message of healthy eating and living. The Government could also help by placing high taxes on fizzy drinks to discourage their consumption. Also, more sleep hours appeared to reduce the risk of childhood obesity: children who slept for more than 8 h per day had a 62% reduction in their adjusted odds of being obese compared to children who slept for less than 5 h. This result supports a study among school-aged children in northern Ghana that linked shorter sleep hours to increased obesity risk [6]. In contrast, a study conducted in China found that short sleep duration was not associated with obesity [34]. It is not entirely clear why this discrepancy exists, and further research is required in different contexts to better explore this issue. However, our results do suggest a need for parents to encourage their children to get sufficient sleep. As a consensus statement of the American Academy of Sleep Medicine recommends, children aged 6-12 need between 9 and 12 h of sleep per 24 h on a regular basis to promote optimal health, whereas those aged 13-18 should sleep 8 to 10 h per 24 h [35]. Our study has some limitations. First, although the anthropometric measurement instruments (e.g., the weighing scale) were continuously calibrated and monitored, it is possible that extended use could have affected the accuracy of some measurements. Relatedly, the use of BMI alone as a screening tool for obesity has limitations; for instance, concerns have been raised regarding the reliability of weight and height measurement in research contexts [36][37][38]. Nevertheless, BMI is a valuable population-level indicator used widely in epidemiologic research. Second, there could have been recall bias, because respondents were asked about events that occurred several weeks before the interview. Some respondents may also have given socially desirable dietary and behavioural responses, such as reporting exercise, in order to present themselves as leading active lives when this may not be the case. Third, the study was conducted in only two schools and involved only 285 children in one metropolitan district; the limitations of generalizability due to the non-representativeness of the sample are therefore acknowledged. Fourth, our statistical analysis approach was largely data-driven: covariates were selected based on bivariate significance rather than a prespecified theoretical view of the potential causal structure. Consequently, some variables that may be important after conditioning on other variables may have been missed. This is a limitation of our study.
Finally, other factors that were not measured in the study, such as parental nutrition knowledge, may nevertheless have influenced respondents' dietary and behavioural practices. These limitations notwithstanding, the results provide useful evidence that could inform large-scale research as well as school-based interventions to reduce the risk of childhood obesity in Ghana. Conclusions This study has provided further insights into the prevalence of, and risk factors for, childhood obesity, with implications for interventions to reduce obesity in Ghana and similar contexts elsewhere. Though predictive sociodemographic factors such as the child's age and the father's educational attainment may not be amenable to intervention, the dietary and behavioural factors observed to influence childhood obesity could be targeted to encourage the maintenance of a healthy weight and lifestyle among children. Age- and sex-appropriate community- and school-based interventions are therefore needed to promote healthy diet selection and consumption, physical activity, and healthy lifestyles among in-school children. Finally, there is a need for large-scale community-based population studies in Ghana to estimate obesity prevalence in different contexts and identify risk factors in different populations. Such information could be vital for national policy and intervention development.
The estimated glomerular filtration rate was U-shaped associated with abdominal aortic calcification in US adults: findings from NHANES 2013–2014 Objectives The high incidence of abdominal aortic calcification (AAC) is well-documented in individuals with severe renal function decline. However, there is limited research on the relationship between estimated glomerular filtration rate (eGFR) and the risk of AAC in the general population undergoing routine medical examinations. The main objective of this study was to investigate the relationship between eGFR and AAC in the general population of the United States. Methods We performed a cross-sectional study using the National Health and Nutrition Examination Survey 2013–2014 database. Weighted multivariate linear regression models were used to estimate the associations of eGFR with AAC score. Smooth curve fitting and two-piecewise linear regression were employed to explore the potential non-linear relationship. Results A total of 2,978 participants (48.22% male) aged 40–80 years were included in this study. The fully adjusted model demonstrated a negative correlation between eGFR and AAC score (β = −0.015, 95% CI: −0.023 to −0.006). However, when applying the smooth curve fitting method, a U-shaped relationship was identified, and the inflection point was calculated at 76.43 ml/min/1.73 m2 using the two-piecewise linear regression model. Conclusions There was a U-shaped association between eGFR and AAC score in general US adults, with an inflection point at about 76.43 ml/min/1.73 m2. Introduction In patients with chronic kidney disease (CKD), cardiovascular disease (CVD) is the leading cause of mortality (1). Up to 45% of pre-dialysis CKD patients may die before reaching end-stage renal disease (ESRD), with cardiovascular disease being the primary cause of death (2). Abdominal aortic calcification is one of the main predictors of morbidity and mortality in vascular calcification-related diseases (3, 4). A cohort study involving 101 pre-dialysis CKD patients at stages 3-5 revealed that 82% of patients exhibited abdominal aortic calcification, with occurrence rates of 50% in stage 3, 83% in stage 4, and 91% in stage 5 (5). The remarkably high occurrence rate of vascular calcification in patients with CKD stages 4-5 is an important risk factor contributing to their higher incidence and mortality rates of cardiovascular disease compared to the general population (6, 7). However, limited research has been conducted on the relationship between eGFR and the risk of AAC in the general population undergoing routine medical examinations, especially after the age of 40, when GFR gradually declines (8, 9). Therefore, the aim of this study was to investigate the association between eGFR levels and the risk of AAC in the general population of the United States, in order to identify potential approaches for AAC risk assessment based on eGFR levels in individuals undergoing medical check-ups. To achieve this objective, we analyzed data from the National Health and Nutrition Examination Survey (NHANES) for the years 2013 and 2014.
Study population The NHANES is an American cross-sectional survey that collects data on the health and nutrition of the general population through stratified multistage random sampling (https://www.cdc.gov/nchs/nhanes/). A total of 10,157 subjects were enrolled in the NHANES 2013-2014. Of these, 2,978 participants were included in the current study after exclusion of individuals lacking AAC scores and those with missing eGFR data (Figure 1). The NHANES was authorized by the National Center for Health Statistics research ethics review board, and each participant provided written informed consent (10). All tests were performed at an on-site mobile examination center. Assessment of eGFR Serum creatinine (SCr) was determined by the Jaffe rate method and calibrated against standardized isotope dilution mass spectrometry. Gender, age, and SCr were used to calculate eGFR according to the CKD-EPI Creatinine Equation (2021) (11). AAC measurement To acquire and quantitatively assess AAC, dual-energy x-ray absorptiometry (DXA; Densitometer Discovery A, Hologic, Marlborough, MA, USA) was employed, targeting the lumbar spine (vertebrae L1-L4) (12, 13). DXA scans were executed by trained and certified radiology technologists at the NHANES mobile examination center. The Kauppila score system was used to evaluate the extent of AAC; higher scores indicate more severe calcification. In this study, Kauppila scores ranged between 0 and 24; the presence of AAC was defined as a score above 0 and severe AAC (SAAC) as a score above 6 (14)(15)(16). A detailed description of AAC measurements is available at https://wwwn.cdc.gov/Nchs/Nhanes/2013-2014/DXXAAC_H.htm. For various reasons, such as pregnancy or conditions self-reported at the DXA examination, certain participants were deemed unsuitable for DXA scans; as a result, only about one-third of participants ultimately had valid AAC data.
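For reference, the race-free CKD-EPI 2021 creatinine equation used in the eGFR assessment above can be computed as in the sketch below. This is an illustrative implementation written for this text, not the study's code; the function name and interface are our own.

```python
def ckd_epi_2021_egfr(scr_mg_dl: float, age: float, is_female: bool) -> float:
    """Race-free CKD-EPI 2021 creatinine equation, in ml/min/1.73 m^2."""
    kappa = 0.7 if is_female else 0.9        # sex-specific creatinine cutoff
    alpha = -0.241 if is_female else -0.302  # exponent below the cutoff
    egfr = (142.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.200
            * 0.9938 ** age)
    return egfr * 1.012 if is_female else egfr

# Example: a 60-year-old woman with SCr 0.9 mg/dl gives roughly 73 ml/min/1.73 m^2.
print(round(ckd_epi_2021_egfr(0.9, 60, True), 1))
```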
Covariates The following covariates were included: age, gender, race/ethnicity, education level, diabetes mellitus (DM), systolic blood pressure (SBP) and diastolic blood pressure (DBP), smoking status, drinking status, body mass index (BMI), waist circumference (WC), arm circumference (AC), albumin creatinine ratio (ACR), hemoglobin (HGB), apolipoprotein B (ApoB), triglyceride (TG), total cholesterol (TC), LDL-cholesterol (LDL-C), HDL-cholesterol (HDL-C), glycohemoglobin (HbA1c), albumin (ALB), total protein (TP), alkaline phosphatase (ALP), aspartate aminotransferase (AST), alanine aminotransferase (ALT), gamma glutamyl transferase (GGT), total calcium (Ca2+), phosphorus (P), potassium (K+), sodium (Na+), and uric acid (UA). Age, sex, race/ethnicity, education level, smoking status, and alcohol consumption status were self-reported by participants during the home interview. ACR, ApoB, TG, TC, LDL-C, HDL-C, HbA1c, ALB, TP, ALP, AST, ALT, GGT, Ca2+, P, K+, Na+, and UA were obtained from laboratory tests. A detailed description of the variables used in this research is available at https://www.cdc.gov/nchs/nhanes/. Statistical analysis Weighted multivariate linear regression models were used to estimate the association of eGFR with AAC score, with covariates adjusted as potential effect modifiers. Smooth curve fittings using generalized additive models were employed to capture the non-linear relationship between eGFR and AAC. The recursive partitioning method was used to identify the optimal change point with the highest likelihood, followed by segmented regression models and likelihood ratio tests for threshold effect analysis; this was performed after controlling for the same covariates as in the linear regression models. Continuous variables were described using mean ± standard deviation when normally distributed and median with interquartile range (IQR) otherwise; categorical variables were presented as percentages. We used weighted linear regression models (continuous variables) or weighted chi-square tests (categorical variables) to calculate differences among groups. To analyze the baseline characteristics of samples with missing AAC data, we treated samples with eGFR data but missing AAC data as a separate group; the results are presented in Supplementary Table S3. Analyses were performed with R (http://www.R-project.org) and EmpowerStats (http://www.empowerstats.com), with P < 0.05 considered statistically significant. Subgroup analyses Subgroup analysis was performed to further evaluate the robustness of the association between eGFR and the risk of developing AAC (Table 3). The results indicate a downward trend in the risk of developing AAC as eGFR increases relative to the Q1 subgroup, in both males and females, in the population aged over 60, in non-Hispanic White participants and other ethnic groups, and in the non-diabetic population (P for trend < 0.05). Non-linearity and threshold effect analysis in the association between eGFR and AAC score Additionally, we performed a weighted generalized additive model and smooth curve fitting stratified by sex and race/ethnicity to detect the non-linear association between eGFR and AAC score. Interestingly, a U-shaped association was detected between eGFR and AAC score (Figures 2-4), with significant inflection points observed in both males and females (Figure 3), as well as in non-Hispanic White participants (Figure 4).
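The threshold effect analysis described above (a two-piecewise linear model profiled over candidate knots, compared to a single-line model by a log-likelihood ratio test) can be sketched as below. This is a minimal sketch written for this text, not the authors' R/EmpowerStats code; the knot search is a simple grid profile, and survey weights and covariates are omitted for brevity.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def fit_two_piecewise(egfr, aac, knots):
    """Profile candidate inflection points and keep the knot with the
    highest log-likelihood for a two-piecewise linear model."""
    best_knot, best_fit = None, None
    for k in knots:
        X = np.column_stack([
            np.minimum(egfr, k),        # slope to the left of the knot
            np.maximum(egfr - k, 0.0),  # change in slope to the right
        ])
        fit = sm.OLS(aac, sm.add_constant(X)).fit()
        if best_fit is None or fit.llf > best_fit.llf:
            best_knot, best_fit = k, fit
    return best_knot, best_fit

def lr_test_vs_one_line(egfr, aac, piecewise_fit):
    """Log-likelihood ratio test of the piecewise model against one line."""
    one_line = sm.OLS(aac, sm.add_constant(egfr)).fit()
    lr = 2.0 * (piecewise_fit.llf - one_line.llf)
    return lr, chi2.sf(lr, df=1)  # P < 0.05 favors the non-linear model
```

With covariates, the same construction applies after appending the adjustment variables as extra columns of the design matrix.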
The inflection point of eGFR, calculated using the two-piecewise linear regression model, was 76.43 ml/min/1.73 m2 for the total population (log-likelihood ratio test P < 0.001); it was 73.37 ml/min/1.73 m2 (P < 0.001) in males, 80.32 ml/min/1.73 m2 (P < 0.001) in females, and 76.92 ml/min/1.73 m2 (P < 0.001) in non-Hispanic White participants (Table 4). [Figure 2. Threshold, non-linear association between eGFR and AAC score in a generalized additive model (GAM); the solid red line is the smooth curve fit, the blue bands the 95% confidence interval around the fit. Figure 3. The association between eGFR and AAC score stratified by sex. Figure 4. The association between eGFR and AAC score stratified by race/ethnicity. All three models were adjusted for the full covariate set listed in the Statistical analysis section, minus the stratification variable where applicable.] Discussion The aim of this study was to evaluate the association between eGFR and abdominal aortic calcification in the general population of the United States. In our cross-sectional study of 2,978 participants, we identified a U-shaped association between eGFR and AAC score among male and female participants, those without diabetes, and those aged over 60. Notably, we observed an inflection point at an eGFR of 76.43 ml/min/1.73 m2 in the overall population. Calcifications of large arteries and heart valves are common in patients with CKD and may contribute to a significant rise in cardiovascular risk, even among young adults with childhood-onset CKD. The findings of such studies, however, cannot be directly extrapolated to the general population with a mild decline in GFR. Some studies on the relationship between eGFR and AAC in healthy individuals are consistent with our findings. A study conducted in the UK involving 93 healthy living kidney donors (mean age 45.9 ± 1.8 years, mean GFR 88.73 ± 2.97 ml/min/1.73 m2, 50 males) investigated the prevalence and predictive factors of AAC. The results revealed that 31% of the
patients exhibited AAC (20). The occurrence of AAC was similar to that in our research across corresponding eGFR quartiles, with AAC occurrence rates of 30.34% in the Q2 group and 26.21% in the Q3 group. However, contrary to our findings, their intergroup comparison showed no statistically significant differences between individuals with and without AAC in terms of GFR, systolic blood pressure, pulse pressure, calcium-phosphorus product, or smoking; this lack of significance may be attributable to the smaller sample size of their study. A meta-analysis of over 1.4 million individuals from more than 30 cohort studies showed a U-shaped relationship between eGFR and the risk of cardiovascular mortality after adjusting for traditional cardiovascular risk factors and proteinuria (21, 22). The eGFR threshold was found to be 75 ml/min/1.73 m2, above which the risk gradient for cardiovascular mortality remained relatively stable; below this threshold, cardiovascular mortality increased linearly. Interestingly, our study found the inflection point of the U-shaped relationship between eGFR and AAC to be 76.43 ml/min/1.73 m2, remarkably similar to the eGFR threshold identified in the aforementioned meta-analysis. Given that AAC is a significant predictor of cardiovascular events, it is not surprising that both our study and the meta-analysis yielded similar U-shaped curves and inflection points for eGFR. The clinical observation that AAC scores increase significantly as eGFR decreases is consistent with practical scenarios. However, there is a less easily explained trend of increased AAC scores at higher eGFR in the overall population and in women, in whom the trend is more pronounced. To elucidate the reasons for the increased AAC score at higher eGFR, we divided the population into five groups based on eGFR (ml/min/1.73 m2): ≤60, 60-80, 80-100, 100-120, and >120, with gender as the stratification variable for population description and variance analysis; results are given in Supplementary Tables S1 and S2. In the cohort with eGFR > 120 ml/min/1.73 m2, females exhibited an increase in AAC values alongside increases in body mass index, waist circumference, arm circumference, apolipoprotein B, total cholesterol, triglyceride, and glycohemoglobin. Conversely, in males within this eGFR range, only glycohemoglobin demonstrated a significant increase, whereas AAC, body mass index, waist circumference, arm circumference, apolipoprotein B, total cholesterol, and triglyceride showed a decreasing pattern. Accordingly, we postulate that the association of higher eGFR with elevated AAC in females is primarily connected to obesity, high blood lipids, or diabetes, rather than to overestimation of eGFR due to malnutrition-induced muscle loss as proposed in other literature (21, 23-25). The trend of increased AAC scores in females at higher eGFR shapes the non-linear relationship between eGFR and AAC in the overall population.
In the curve-fitting graph stratified by race for eGFR and AAC scores, an L-shaped relationship between eGFR and AAC is evident among non-Hispanic White participants: when eGFR declines below 76.92 ml/min/1.73 m2, AAC scores rise significantly as eGFR decreases. The impact of falling eGFR on AAC in non-Hispanic White participants exceeds that in other races. This finding aligns with the results of two other studies (26, 27), indicating that non-Hispanic White individuals are more likely to develop AAC under similar conditions, which cannot be fully accounted for by traditional CVD risk factors. Age is a known risk factor for arterial calcification. The multiple regression equations in Table 2 demonstrate that, even after adjusting for age and other factors, eGFR retains an independent effect on AAC; moreover, this effect is more consistent in the population aged over 60. For the individuals with eGFR data but missing AAC values, we examined whether the distribution of renal function in this subgroup was similar to that of the overall study population. We identified 489 samples aged 40 and above with eGFR data but without AAC data and compared them to the samples with both eGFR and AAC data; the results are presented in Supplementary Table S3. The analysis revealed notable differences: the group with missing AAC data had a higher average age, higher blood urea nitrogen levels, lower eGFR values, and a higher proportion of females. Based on these findings, it can be speculated that these 489 samples with missing AAC data may be at higher risk of developing AAC, which further supports the relationship trend between AAC and eGFR observed in the main analysis. The main highlight of our study is the identification of a previously unexplored U-shaped relationship between eGFR and AAC, together with the determination of the eGFR threshold value. This novel finding adds to the existing knowledge on the link between eGFR and AAC. Our study reveals the intricate interplay between kidney function and AAC formation, emphasizing the significance of eGFR assessment as a potential early marker for identifying and managing cardiovascular risk in the general population. There are also some limitations to our study. First, the cross-sectional design restricts our ability to infer causal relationships between eGFR and AAC. Second, we excluded individuals with eGFR < 30 because severe disturbances in calcium-phosphorus metabolism in CKD stages 4-5 strongly influence AAC occurrence. Third, data on AAC and serum creatinine were collected only for participants aged 40-80 years in the NHANES 2013-2014 survey, which limits the generalizability of our findings. Lastly, there remains a possibility of bias arising from unadjusted potential confounding factors.
Despite using NHANES data that are approximately ten years old, we believe that the physiological mechanisms linking renal function and vascular calcification are unlikely to have changed significantly over this period. However, we acknowledge that other unconsidered factors could influence this relationship. We therefore plan to expand our model in future studies by incorporating additional potential influencing factors and using external data to validate our regression model, in order to determine whether it still accurately describes the relationship between eGFR and AAC. Conclusions Our study identified an overall negative correlation between eGFR levels and AAC among Americans aged 40-80. The relationship follows a U-shaped curve, with an inflection point at an eGFR of 76.43 ml/min/1.73 m2. These findings underscore the importance of early AAC monitoring in the general population and provide a health alert for individuals undergoing check-ups, reminding them to pay attention to the risk of developing AAC and to their cardiovascular health. TABLE 1 Weighted characteristics of the study population based on eGFR quartiles. Continuous variables are described as mean ± standard deviation when normally distributed and as median with interquartile range (IQR) otherwise; categorical variables are presented as percentages, with p-values from weighted chi-squared tests. AAC, abdominal aortic calcification; SAAC, severe abdominal aortic calcification. TABLE 2 Association between eGFR and AAC score. TABLE 3 Subgroup analyses of the association between eGFR and AAC. Analyses were adjusted for age, sex, race/ethnicity, level of education, diabetes status, SBP, DBP, BMI, waist circumference, arm circumference, albumin creatinine ratio, apolipoprotein B, total cholesterol, triglyceride, LDL-cholesterol, HDL-cholesterol, glycohemoglobin, albumin, total protein, alkaline phosphatase, aspartate aminotransferase, alanine aminotransferase, gamma glutamyl transferase, total calcium, phosphorus, potassium, sodium, uric acid, and hemoglobin. In subgroup analyses stratified by sex, race/ethnicity, age, or diabetes status, the model is not adjusted for the stratification variable itself. TABLE 4 Threshold effect analysis for the relationship between eGFR and AAC score using piece-wise linear regression. Model I: one-line (linear) effect; Model II: non-linear (two-piecewise) analysis; LRT: log-likelihood ratio test. P < 0.05 indicates that Model II differs significantly from Model I, i.e., a non-linear relationship. Adjustment covariates are as in Table 3; in subgroup analyses stratified by sex, race/ethnicity, age, or diabetes status, the model is not adjusted for the stratification variable itself.
Significantly Higher Prevalence Rate of Asthma and Bipolar Disorder Co-Morbidity Abstract Asthma and bipolar disorder (BD) are 2 distinct diseases that share similar pathophysiology. This study aimed to determine their relationship through a meta-analysis of articles on their comorbidity rate, examining the overall prevalence rate of BD in asthmatic patients and of asthma in BD patients compared to healthy controls. An electronic search of PubMed and ClinicalTrials.gov was performed. Articles discussing the prevalence rate of BD in patients with/without asthma and the prevalence rate of asthma in those with/without BD, as well as clinical trials in humans and case-controlled trials or cohort studies, were included; case reports or series and nonclinical trials were excluded. Using a random-effects model, meta-analyses were performed of 4 studies comparing the prevalence rate of BD in patients with/without asthma and of 6 studies comparing the prevalence rate of asthma in subjects with/without BD. There were significantly higher prevalence rates of BD in asthmatic patients than in healthy controls (P < 0.001) and of asthma in BD patients than in healthy controls (P < 0.001). Only mean patient age significantly modulated the odds ratio of the prevalence rate of asthma in BD patients (slope = 0.015, P < 0.001). Only 10 studies were included, and most were cross-sectional; the possible confounding effect of medication on BD or asthma onset was not investigated, and no possible etiology of the comorbidity was determined. This meta-analysis highlights the importance of the significantly high comorbid rate of BD and asthma and its positive association with age. Special attention must be given to the comorbidity of asthma and BD, especially in older patients. INTRODUCTION Asthma is a troublesome disease with a high mortality rate worldwide. 1 Similarly, bipolar disorder (BD) contributes to a significant socio-economic burden globally. These 2 conditions belong to distinct disease categories and do not seem to have any obvious association with each other. However, as knowledge of the comorbidities of BD improves, many more comorbid illnesses have been found, and recent reports have associated asthma with BD. This is especially important because patients with severe mental illness are believed to have more comorbidities and higher mortality rates than healthy subjects, as well as less healthy habits and fewer medical consultations. 2 Thus, public health policies must be drawn up to address these problems. Asthma has long been considered a disorder of inflammation. 3 In asthmatic patients, inflammatory cytokines such as interleukin-4 (IL-4), IL-5, and IL-13 are altered during asthma exacerbation. 4 Similarly, BD patients show inflammatory dysfunction as the disease is aggravated or subsides. 5,6 Alterations of IL-4, IL-6, and IL-12 have been demonstrated in BD patients under different emotional states. [7][8][9] As such, some researchers suggest that these 2 diseases at least share similar mechanisms in their pathophysiology. 10 Moreover, in clinical practice, some reports describe the comorbidity of asthma and BD, presenting either as first onset of asthma or of BD. Some have investigated the prevalence rate of asthma in BD patients compared to healthy controls.
[11][12][13][14][15][16][17] On the other hand, other studies have compared the prevalence rate of BD in asthmatic patients and in healthy controls. 10,[18][19][20] Lastly, medication used for one disease may cause a flare-up of the other; for example, the steroids used in asthma may aggravate the emotional changes of BD. 21 In clinical studies, a significantly higher prevalence rate of asthma in BD patients 11,[13][14][15][16][17] or a significantly higher prevalence rate of BD in asthmatic patients compared to healthy controls 10,18,20 has been reported. However, other reports found no significant association between asthma and BD. 12,19 Such inconsistency may be due to different study designs, 19,20 different latitudes and hemispheres, 12,14 or different sex proportions. 10,19 These findings have implications for clinical practice and public health policy. The present study aimed to summarize current evidence on the comorbidity rate of asthma and BD through meta-analysis and to investigate possible associations between comorbidity and clinical variables in such patients. Literature Search and Screening A previous meta-analysis protocol was followed for the present research strategy. 22 In the first stage, the identification stage, 2 independent psychiatrists performed a systematic literature search of the electronic databases PubMed and ClinicalTrials.gov. To include as many studies as possible, the simple keyword string "(asthma) AND (bipolar)" was used in the search, which was limited to articles written in English and was conducted on December 29, 2015. The 2 authors then examined the titles and abstracts in the screening stage, excluding reports not related to the prevalence rates of BD and asthma. Inconsistencies and disagreements in selection were settled by consensus after reading the full text of these studies. In the eligibility stage, the remaining studies were re-screened using the following inclusion criteria: (1) articles discussing the prevalence rate of BD in subjects with/without asthma or the prevalence rate of asthma in subjects with/without BD; (2) clinical trials in humans; and (3) case-controlled trials or cohort studies. Case reports or series and nonclinical trials were excluded. The meta-analysis was divided into 2 parts: the first covered articles discussing the prevalence rate of BD in patients with/without asthma; the second covered articles discussing the prevalence rate of asthma in subjects with/without BD. The primary outcomes were the prevalence rate of BD in patients with/without asthma and the prevalence rate of asthma in patients with/without BD. All primary outcomes and clinical variables in the studies were extracted as far as possible; if data were not available, the authors were contacted for the original data. The entire screening and selection process is shown in Figure 1. Meta-Analysis and Data Extraction In the current meta-analysis, the effect size (ES), set as the standardized mean difference based on the odds ratio and treated with a random-effects model, was defined as the difference in the prevalence rate of BD in patients with/without asthma and of asthma in subjects with/without BD.
An ES > 0 was defined as "favoring comorbidity." If actual case numbers or prevalence rates were unavailable, or there was no response from the authors, other statistical parameters such as the t or P value together with the sample size were used to calculate the ES. The meta-analysis was performed on the platform of the Comprehensive Meta-Analysis software, version 2 (Biostat, Englewood, NJ). Statistical significance was set at a 2-tailed P < 0.05. Heterogeneity in the included studies was investigated through Q statistics, related P values, and I2 statistics. Publication bias was investigated by visual examination of funnel plots and through Egger's regression analysis. 23 To evaluate possible confounding effects of clinical variables, subgroup meta-analysis and meta-regression were performed; meta-regression used the unrestricted maximum likelihood method. The clinical variables extracted for meta-regression included research duration (period of study), mean age, sex proportion (female), age of onset, and race proportion (i.e., African American, Caucasian, Asian, and Native American). The meta-analysis fulfilled the criteria of the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) 24 (Supplement Table 1 and Supplement Figure 1, http://links.lww.com/MD/A884). Ethical approval was not applicable to the current study because no personal patient data were handled, and no patients were harmed by any study procedure. Studies Included in Each Meta-Analysis Thirteen articles were initially eligible, but one lacked a control group 25 and one did not include only BD patients. 26 Of the remaining 11 articles, 2 were conducted by the same research team using the same clinical data, 11,13 so only 1 was used to avoid unnecessary duplication. Ten articles were finally included in the meta-analysis (Table 1), [10][11][12][14][15][16][17][18][19][20] comprising 4 on the prevalence rate of BD in patients with/without asthma 10,18-20 and 6 on the prevalence rate of asthma in patients with/without BD. 11,12,14-17 Meta-Analysis of the Prevalence Rate of BD in Patients With/Without Asthma This meta-analysis of 4 studies included 50,358 patients with asthma and 109,218 healthy controls. The prevalence rate of BD was significantly higher in asthmatic patients than in healthy controls (ES: 2.12; 95% confidence interval [CI]: 1.57-2.87; P < 0.001) (Figure 2A). There was no statistically significant heterogeneity within the recruited studies (Q = 5.36; df = 3; I2 = 44.05%; P = 0.147), and no significant publication bias was detected by Egger's test (t = 0.79; df = 2; 2-tailed P = 0.514) or by visual examination of the funnel plot. Meta-regression was performed only for the female proportion because of the lack of data; there was no statistically significant association between the prevalence rate of BD in patients with asthma and female sex (P = 0.953). Meta-Analysis of the Prevalence Rate of Asthma in Patients With/Without BD Six studies covering 5,750 patients with BD and 139,529 healthy controls were included in this meta-analysis. There was a significantly higher prevalence rate of asthma in patients with BD than in healthy controls (ES: 1.86; 95% CI: 1.40-2.47; P < 0.001) (Figure 2B). However, there was statistically significant heterogeneity within the recruited studies (Q = 59.88; df = 5; I2 = 91.65%; P < 0.001).
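For illustration, the random-effects pooling, heterogeneity statistics, and Egger's regression described above can be computed as in the sketch below. This is a minimal re-implementation using the standard DerSimonian-Laird estimator, not the internals of the Comprehensive Meta-Analysis software; the inputs are assumed to be per-study log odds ratios and their variances.

```python
import numpy as np
import statsmodels.api as sm

def random_effects_pool(log_or, var_log_or):
    """DerSimonian-Laird random-effects pooling of log odds ratios.
    Returns pooled OR, 95% CI, Cochran's Q, and I^2 (%)."""
    y = np.asarray(log_or, float)
    v = np.asarray(var_log_or, float)
    w = 1.0 / v                                    # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)               # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    ci = (np.exp(mu - 1.96 * se), np.exp(mu + 1.96 * se))
    return np.exp(mu), ci, q, i2

def eggers_test(log_or, var_log_or):
    """Egger's regression: standardized effect on precision; a non-zero
    intercept suggests funnel-plot asymmetry (possible publication bias)."""
    se = np.sqrt(np.asarray(var_log_or, float))
    y = np.asarray(log_or, float) / se             # standardized effects
    x = 1.0 / se                                   # precision
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    return fit.params[0], fit.pvalues[0]           # intercept, 2-tailed P
```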
There was no significant publication bias by Egger's test (t = 0.31; df = 4; 2-tailed P = 0.776) or by visual examination of the funnel plot. In the meta-regression, there was a significantly positive association between the odds ratio of asthma in patients with BD and mean age (slope: 0.015; P < 0.001), but not with the female sex proportion or the proportions of Caucasian, Asian, and Native American participants (P = 0.737, 0.797, 0.807, and 0.786, respectively). Meta-regression on the African American proportion and the duration of research was not performed because of the lack of data. DISCUSSION The current meta-analysis demonstrates that BD and asthma have a significant rate of comorbidity with each other: the prevalence rate of asthma is significantly higher in BD patients than in healthy controls, and the prevalence rate of BD is significantly higher in asthmatic patients than in healthy controls. Among the clinical variables, only mean age has a significantly positive association with the odds ratio of asthma in BD patients. This meta-analysis provides evidence of the significantly high prevalence rate of comorbid asthma and BD. Previous reports reveal that asthma and BD share common immune abnormalities, such as abnormal expression of IL-6 7,8,27,28 and tumor necrosis factor-α (TNF-α). 5,28 The high comorbidity between these 2 distinct diseases should be important for clinicians, but in clinical practice there remains a paucity of conclusive evidence regarding the comorbid prevalence rates. This meta-analysis provides the link between these 2 distinct diseases in clinical application, which is especially important because physical problems are frequently missed in patients with severe mental illness. 2 Thus, psychiatrists should be aware of possible comorbid asthma during the treatment of BD, and medications with exacerbating effects on asthmatic activity, such as beta-blockers, must be avoided. 29 At the same time, physicians should pay attention to distinguishing the symptoms of asthma from agitation during a manic attack and avoid medications, such as steroids, that can induce manic symptoms. 21,30 Although no statistically significant publication bias was detected in the present study, most patients were in countries located at higher latitudes or in the Northern Hemisphere 11,14-16 and mainly in a few countries such as Taiwan 10,18,19 and the United States, 11,15 perhaps because the health care system is widely used and well-established in these areas. This imbalance in the distribution of study countries may nevertheless introduce some publication bias into the current meta-analysis. The distribution and frequency of asthma vary with climate and air pollution in the environment. 31 Surveillance studies in areas of lower latitudes or in the Southern Hemisphere should therefore be performed. An interesting finding is the significantly positive association between mean age and the odds ratio of asthma in patients with BD. In previous reports, asthma is believed to be more severe in older than in younger patients, 32 with a higher mortality rate in patients >50 years old. 1 Moreover, the older the age at asthma onset, the more frequently steroid treatment is needed. 33 This phenomenon may be explained by trends in the alteration of specific immune cytokines, such as the reduction in interferon-gamma (IFN-γ) with aging, 34 which has been correlated with asthma severity.
35,36 At the same time, there are changes in specific immune cytokines in patients with BD, such as reduced IFN-γ 6 and increased IL-4 and TNF-α. 37 At present, no reports correlate immunity with the age of patients with BD, so further investigations on the association between the age of immune dysfunction and the onset of asthma are warranted. [Table 1 fragment; columns not fully recoverable: asthma cohorts of n = 1,860 (3.2%; global) and n = 46,558 (0.3%; mean age 47.5; Taiwan); bipolar disorder cohorts of n = 138 (14.7%; Australia) and n = 938 (15.9%; Canada).] LIMITATIONS This study has some limitations that must be considered. First, the total number of studies included is small, especially in the meta-analysis of the prevalence rate of BD in patients with/without asthma, which may influence the meta-analysis. Second, most of the included studies are cross-sectional. Mean age, where available, had a wide range in each study, and the mean ages of BD and asthma onset are distinct, which may result in under-diagnosis of these diseases. Third, the possible confounding effects of medication on the onset of BD or asthma were not investigated; as previously mentioned, some medications, such as beta-blockers or steroids, can produce symptoms of asthma or mood changes. Furthermore, although an attempt was made to extract most of the clinical variables, the related meta-regression could not always be performed because of the lack of data. Lastly, only an observational conclusion was attained: we could only demonstrate comorbidity rather than a shared etiology or pathophysiology. The comorbidity might therefore derive from similar risk factors or symptomatology, such as anxiety, rather than from the same etiology or pathophysiology; the "key" link between these 2 diseases is lacking. Clinicians therefore need to be careful when applying our findings in clinical practice. CONCLUSION The current meta-analysis highlights the importance of the significantly high comorbid rate of BD and asthma and its positive association with age. These findings remind clinicians that special attention should be given to the comorbidity of asthma and BD, especially in older patients.
Bisphosphonates as Potential Inhibitors of Calcification in Bioprosthetic Heart Valves (Review) Over the past 50 years, bisphosphonates have turned from a water-treatment agent into one of the most widely used groups of drugs for the treatment of various disorders of calcium metabolism (bone tissue resorption, oncological complications, neurodegenerative diseases, and others). Years of research on bisphosphonates have contributed to the understanding of the molecular and cellular pathways of their action. All bisphosphonates have a similar structure and common properties; however, there are obvious chemical, biochemical, and pharmacological differences between them, and each bisphosphonate has its own unique profile. This review summarizes data on the mechanisms of action of bisphosphonates and describes the experience with, and prospects for, their use in the modification of cardiovascular bioprostheses, since the problem of preventing bioprosthetic calcification has not yet been settled. Introduction Bisphosphonates (BP), formerly called diphosphonates, have been known for a long time. They were first synthesized by German chemists as early as 1865 [1,2], but they have been used for the treatment of calcium metabolism disorders only in the last 50 years. Currently, BPs are one of the most widely used groups of drugs for the treatment of Paget disease, osteoporosis, breast cancer and neoplastic bone metastases, multiple myeloma, some other rare bone diseases, and neurodegenerative diseases, and they are also used in dentistry [3][4][5][6][7][8][9][10][11]. In veterinary medicine, these drugs are used to solve the same problems in different animal species [12]. In addition, bisphosphonates are used for targeted delivery of drugs to bone: antibiotics, hormones, and anticancer drugs [13]. Since 1970, BPs have been used as radioactively labeled agents in the diagnosis of skeletal diseases [14]. The possibility of using the BP Zoledronate as an immunomodulator in the complex treatment of pneumonia caused by SARS-CoV-2 is also being considered [15]. The discovery of the biological effects of BPs goes back to the study of the mechanisms of calcification and the role of pyrophosphate in them. As early as the 1930s, polyphosphates were found to act as natural physiological regulators of calcification owing to their ability to inhibit the deposition of calcium salts. In the 1960s, biological fluids (urine and blood plasma) were found to contain a substance that inhibits the precipitation of calcium phosphate, namely pyrophosphate [16,17]. Pyrophosphate has a high affinity for calcium crystals, slows their formation and dissolution in vitro, and inhibits calcification in vivo, but when taken orally it is rapidly metabolized due to hydrolysis in the gastrointestinal tract [18,19]. The search for compounds resistant to enzymatic hydrolysis led to the bisphosphonates. Molecular structure and pharmacological efficacy of bisphosphonates Bisphosphonates are synthetic analogues of pyrophosphate with two phosphonate groups bound to a central carbon atom. The P-C-P group in the BP structure makes them resistant to enzymatic hydrolysis, in contrast to the hydrolytically unstable P-O-P bond of pyrophosphate. In addition, BPs carry two side chains per molecule that are absent in pyrophosphate.
They are called R1 and R2, respectively, and are also bound to the central carbon atom (Figure 1). Bisphosphonates bind to hydroxyapatite through chelation of calcium ions on the surface of apatite crystals by the two closely spaced phosphonate groups, which leads to the formation of a bidentate bond [25][26][27][28]. The type of side chain is an important determinant of BP properties. A hydroxyl substituent at R1 was found to increase the BPs' affinity for calcium crystals through the formation of a tridentate bond [29]; most clinically used BPs carry a hydroxyl group at the R1 position. BPs with R1 substituted by Cl− or H+ (Clodronate and Tiludronate) provide bidentate binding to calcium crystals and have a significantly lower binding affinity [30,31]. The configuration of the R2 side chain determines the antiresorptive activity of BPs in bone tissue [32,33]. On the whole, the presence of the R1 and R2 side chains makes it possible to introduce numerous substitutions and synthesize a large number of substances with different properties. According to the chemical structure of R2, BPs are subdivided into nitrogen-free and nitrogen-containing compounds (see Figure 1). The nitrogen atom increases the antiresorptive efficacy of nitrogen-containing BPs by 10-10,000 times relative to nitrogen-free ones (see the Table). A positively charged R2 group enables BPs to bind to the mineral surface of the bone, which subsequently increases the affinity of hydroxyapatite for the negatively charged phosphonate groups through electrostatic interactions [34]. Another important factor in the higher activity of nitrogen-containing BPs compared to nitrogen-free ones is the formation of a hydrogen bond between the BP amino group and the hydroxyapatite surface; Alendronate, with its free amino group, is an example of this bond [35]. This explains the strong affinity of amine-containing BPs for bone tissue and their use in the treatment of bone diseases [36][37][38][39]. Moreover, BP binding to carbonate apatite has been reported [36][37][38][39], confirming the influence of the R2 structure on the absorption, distribution, and long-term deposition of BPs in bone tissue. First-generation BPs differ from the other groups by the absence of nitrogen (nitrogen-free bisphosphonates). The scope of effects of these substances is narrower than that of aminobisphosphonates; nevertheless, they have proven highly effective in the treatment and prevention of various diseases associated with bone resorption, succeeding in the correction of hypercalcemia and in preventing the development of bone metastases. Table. Bisphosphonates used in clinical practice and their relative antiresorptive activity. Amino-containing BPs of the second generation are characterized by a wider scope of action and higher efficiency. Thus, for example, Pamidronate has proven effective in the treatment of patients with multiple myeloma and breast cancer with bone metastases, i.e., tumors characterized primarily by the development of osteolytic metastases [16,44]. Despite the proven dose-dependent effect of Pamidronate, high doses are rarely used owing to adverse effects on the gastrointestinal tract [45,46].
In patients with tumor-induced hypercalcemia, Pamidronate has exhibited an advantage over Clodronate, primarily in the duration of normocalcemia: the average duration of the effect of Clodronate is 14 days, compared to 28 days for Pamidronate [47]. Aminobisphosphonates can also be used to prevent the complications of bone metastasis; studies of Clodronate and Pamidronate revealed a significant reduction in the incidence of complications with prolonged use of Pamidronate [38]. The development of new third-generation BPs with a reduced frequency of administration (once per week or once per month) has contributed to a significant increase in adherence to treatment, optimization of therapy outcomes, and a reduction in adverse events.

Cellular and molecular mechanisms of bisphosphonate action

As more powerful BPs were synthesized, it became obvious that their biological activity cannot be explained by physical and chemical properties alone. This stimulated studies of the mechanisms of BP action at the cellular level [3,55-59]. The cellular mechanism of action of BPs is based on the inhibition of bone tissue resorption through their selective binding and adsorption on the bone mineral surface. Having a high affinity for calcium ions, BPs readily penetrate bone tissue, where their molecules concentrate around osteoclasts, creating a high concentration in resorption lacunae. In vitro studies have shown that BPs reduce the depth of resorption lacunae. Within osteoclasts, they initiate a number of changes that reduce the ability to resorb bone (loss of the brush border, cytoskeleton destruction, inability of the osteoclast to move or to bind to bone tissue). After BPs bind to osteoclasts, they impair their biochemical processes, causing apoptosis [34,60].

At the molecular level, the biochemical mechanisms of BP action also differ and depend on the structure of the compound. There are two major mechanisms. Nitrogen-free BPs of the first generation behave like pyrophosphate analogs: through the action of aminoacyl-tRNA synthetases, they are incorporated into stable, non-hydrolysable ATP analogs (e.g., adenosine-5′-(β,γ-dichloromethylene)-triphosphate) [42]. Intracellular accumulation of these metabolites in osteoclasts causes a deficiency of functional ATP and also inhibits the mitochondrial ADP/ATP translocase, which, in turn, leads to osteoclast apoptosis [61-63]. The highly active nitrogen-containing BPs of the second generation (N-BPs) are not metabolized but directly induce osteoclast apoptosis by inhibiting the biosynthesis pathway of mevalonate, which feeds the formation of cholesterol and the isoprenoid lipids, including isopentenyl pyrophosphate (IPP), farnesyl pyrophosphate (FPP), and geranylgeranyl pyrophosphate (GGPP). The main target of this group of BPs is farnesyl pyrophosphate synthase (FPPS), one of the enzymes involved in the metabolism of pyrophosphate-containing isoprenoid lipids [23,39,64]. FPP and GGPP are required for the post-translational prenylation of small G proteins such as Rab, Rac, Ras, and Rho. These key G proteins, prenylated at a cysteine residue, regulate various cellular processes of osteoclast function, including maturation and survival. Inhibition of FPPS therefore leads to loss of the resorption capacity of osteoclasts or inhibits osteoclastogenesis [36,65,66].
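For orientation, the point of attack of the N-BPs within the mevalonate pathway just described can be drawn schematically. The scheme below is a simplified, textbook-level summary added for illustration; it is not a figure from the original review:

$$\text{HMG-CoA} \;\xrightarrow{\;\text{HMG-CoA reductase (statin target)}\;}\; \text{mevalonate} \;\longrightarrow\; \text{IPP} \;\xrightarrow{\;\text{FPPS (blocked by N-BPs)}\;}\; \text{FPP} \;\longrightarrow\; \text{GGPP}$$

$$\text{FPP, GGPP} \;\longrightarrow\; \text{prenylated Rab/Rac/Ras/Rho} \;\longrightarrow\; \text{osteoclast maturation and survival}$$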
The ability to inhibit protein prenylation in osteoclasts leads to apoptosis of mature cells, as evidenced by the appearance of characteristic changes in the cell and in the structure of the nucleus [67]. At the same time, osteoclast precursor cells lose their ability to differentiate and mature, which naturally leads to a decrease in the number of osteoclasts [28]. Moreover, in vitro data indicate that, under the influence of BPs, osteoblasts reduce the secretion of the osteoclast-stimulating factor [68]. Continued administration increases the "bisphosphonate load" on the bone, which underlies a unique feature of this class of drugs: the clinical effect persists for a long time after discontinuation of therapy [29,69-72]. The mechanism of BP action is partially similar to that of statins, which also inhibit enzymes of mevalonate metabolism; statins, however, act only at one of the first steps, inhibiting HMG-CoA reductase [67]. Thus, the mechanism of BP action rests on a triple effect on the key processes of bone remodeling: physicochemical binding to hydroxyapatite, a direct effect on the resorptive activity of osteoclasts, and stimulation of new bone formation (Figure 2).

The use of bisphosphonates for the modification of cardiovascular bioprostheses

Biological prostheses have been used for the correction of cardiovascular diseases for more than 60 years [73-75]. Various xenogeneic materials are used for the production of valve and vascular prostheses: the porcine aorta, aortic valve, and pericardium, as well as the bovine pericardium, jugular vein, and internal thoracic artery. These materials differ in microstructure and in the ratio of fibrillar proteins and amino acids. Since 1967, glutaraldehyde (GA) has been used in the production of biological prostheses for tissue preservation [76-79]. GA provides a high density of collagen cross-linking and significantly increases its resistance to the action of proteolytic enzymes. At the same time, bioprosthetic materials treated with GA acquire a marked tendency toward pathological calcification [80-83].

According to modern concepts, the calcification of bioprosthetic tissue is based on the structural features of the chemical bonds between collagen and GA. Cross-links form mainly through the reaction of the ε-amino groups of lysine and hydroxylysine with polymeric GA. These bonds contain several active oxygen atoms capable of forming strong complexes with calcium cations. Calcification can also be promoted by bonds of polymerized GA molecules similar to the pyridine bases found in bone tissue collagen [84-86], the degree of mineralization being directly dependent on the density of cross-links in the collagen matrix. In addition, the level of glycosaminoglycans and proteoglycans, which are cross-linked with collagen and prevent spontaneous precipitation of calcium salts in soft tissues, decreases in the biomaterial during preservation [87-89].

For many years, researchers have been studying the mechanisms of calcification and searching for new methods of biological tissue preservation [90-96]. One avenue of investigation involves drugs of the BP group. Systemic parenteral administration of etidronic and pamidronic acids during subcutaneous implantation of the biomaterial in rats provided 97% inhibition of calcification.
However, the doses administered in these experiments significantly exceeded therapeutic doses, which caused complications such as osteomalacia and calcium imbalance. Long-term use of these drugs in experimental animals impaired general somatic growth, and short-term therapy was ineffective [97]. To avoid the complications associated with the systemic use of BPs, methods of local therapy began to be studied. The first such experience was gained with polymer matrices providing controlled release of the drug. The biomaterial and the polymer matrix were implanted in immediate proximity (thus avoiding systemic adverse effects), but the matrix was depleted rather quickly, which made it impossible to maintain a long-term therapeutic BP concentration [98-101].

Recently, local application (transcatheter delivery) of Zoledronate has been proposed to prevent calcification of the aortic valve in experimental animals developing aortic stenosis [102]. The study was conducted on a small group of New Zealand rabbits with pronounced aortic stenosis. A medicinal composition containing 500 μg/L Zoledronate was applied directly to the valve leaflets as anticalcification therapy. The experiment was completed after 28 days. Histological examination of the leaflets demonstrated a significant reduction in the area of calcium lesions, by almost 40%, in the group treated with Zoledronate compared with the control group. Despite these good results, the technique is rather complicated, and most authors still recommend the systemic use of BPs in the clinical treatment of aortic stenosis and aortic valve calcification [103-106].

A step forward was the immobilization of BP molecules on biological tissues [107], first described by Fleisch et al. as early as 1968 [108]. At the end of the last century, numerous papers confirmed the anticalcification effect of BPs immobilized on collagen biomaterials cross-linked with GA [109-116]. The nitrogen atom in R2 of an amine-containing BP molecule can covalently bind to the free groups of the bifunctional preservative that remain after completion of the cross-linking process (masking groups) [114]. However, primary, secondary, and tertiary nitrogen atoms in the amino group differ in their reactivity toward aldehyde groups. Historically, the first and best-studied BP for anticalcification modification is Pamidronate [111]; other compounds of this group have also been studied [38,112-119]. It has been established that not all BPs have the same anticalcifying effect: the structure of a BP and the presence of free phosphonic groups after immobilization on the biomaterial determine its biological and calcium-inhibiting activity. At the same time, there is no correlation between the calcium-inhibiting activity and the amount of the drug fixed on the biomaterial [120]. Pamidronate demonstrated the highest calcium-inhibiting activity upon immobilization on GA-treated biomaterial [120]. It should be noted that different BPs, at the same concentration of the working solution, are immobilized on GA-treated biological tissues in different amounts; the amount of immobilized BP depends on its structure as well as on the species and tissue origin of the material [121-123]. No relationship was found between the amount of BP associated with the preserved material and its effectiveness.
It is interesting to note that Zoledronate, which has the highest systemic efficacy among all known BPs, had the least anticalcification effect in the immobilized state [120]. Russell et al. [34] suggest that some BP molecules bind to residual aldehyde groups, while others bind directly to proteins, forming hydrogen bonds (similar to the interaction of BPs with Thr or Lys residues of FPPS [36]) with amino acids that can potentiate calcification. In both cases, the phosphonic groups remain free and can affect mineralization through the direct physicochemical binding of hydroxyapatite. It was established in [120] that, when developing a strategy for modifying biomaterials with immobilized BPs, the whole set of factors must be taken into account, chief among which are the molecular structures of the BP itself, the preservative, and the predominant protein of the connective tissue matrix. The main matrix components are collagen and elastin, the latter assembled from soluble tropoelastin molecules bound by desmosine and isodesmosine into insoluble elastin [120]. GA cross-linking of the pericardium has been shown previously [122,123] to stabilize collagen but not elastin, which can cause elastin degradation and, hence, a decrease in the elastic properties of the tissue. This occurs mainly because elastin contains very few of the free amino groups needed for cross-linking. All GA-treated biomaterials have a high calcium-binding capacity (>100 µg/mg dry tissue). Preservation with the diglycidyl ether of ethylene glycol (DEE) reduces the calcium level in the vein wall and the pericardium by 4 to 40 times but does not affect the aortic wall. Mineralization in the walls of the aorta and vein treated with GA and DEE is predominantly associated with elastin. Thus, it can be hypothesized that improved elastin stabilization would reduce calcification and increase tissue durability. BP modification reduces elastin calcification but does not completely block it. The search for an "ideal" cross-linking agent for biomaterials continues, and each xenogeneic material requires an individual protection strategy [124-126].

Conclusion

Over the half-century history of the medical use of bisphosphonates, many new compounds with various groups in the R1 and R2 positions, and accordingly with different anticalcification activity, have been synthesized. Researchers have a choice among existing drugs and ample opportunities for the synthesis of new ones. It is necessary to expand the indications for the use of bisphosphonates, especially for immobilization on xenogeneic bioprosthetic materials in order to prevent their calcification in the recipient's body. This area has not yet been sufficiently studied: the causes of the tissue specificity of different bisphosphonates, the peculiarities of their binding and their effectiveness depending on the cross-linking agent used for preservation, and the mechanisms of the anticalcifying action of immobilized bisphosphonates remain unknown. There is still no unified modification method for different bisphosphonates and different tissues. Nevertheless, the accumulated experience indicates that the prospects for using bisphosphonates as anticalcification agents in the creation of cardiovascular bioprostheses are quite real, although the problem needs further investigation.
Dynamical correlation functions of one-dimensional superconductors and Peierls and Mott insulators

I construct the spectral function of the Luther-Emery model, which describes one-dimensional fermions with one gapless and one gapped degree of freedom, i.e. superconductors and Peierls and Mott insulators, by using symmetries, relations to other models, and known limits. Depending on the relative magnitudes of the charge and spin velocities, and on whether a charge or a spin gap is present, I find spectral functions differing in the number of singularities and in the presence or absence of anomalous dimensions of fermion operators. I find, for a Peierls system, one singularity with anomalous dimension and one finite maximum; for a superconductor, two singularities with anomalous dimensions; and for a Mott insulator, one or two singularities without anomalous dimension. In addition, there are strong shadow bands. I generalize the construction to arbitrary dynamical multi-particle correlation functions. The main aspects of this work are in agreement with numerical and Bethe Ansatz calculations by others. I also discuss the application to photoemission experiments on 1D Mott insulators and on the normal state of 1D Peierls systems, and propose the Luther-Emery model as the generic description of 1D charge density wave systems with important electronic correlations.

I. MOTIVATION

Non-Fermi liquid behavior in correlated fermion systems is an exciting topic of current research. One-dimensional (1D) correlated electrons (more precisely: one-dimensional quantum systems with gapless excitations) are a paradigmatic example of non-Fermi liquids: their low-energy excitations are not quasi-particles but rather collective charge and spin density fluctuations, each obeying its own dynamics [1]. The key features of these "Luttinger liquids" [2] are (i) anomalous dimensions of operators, producing correlation functions with non-universal power laws, parametrized by one renormalized coupling constant $K_\nu$ per degree of freedom $\nu = \rho$ (charge), $\sigma$ (spin); these constants have the status of the Landau parameters familiar from Fermi liquid theory; (ii) charge-spin separation, leading to the fractionization of an electron into charged, spinless, and neutral, spin-carrying collective excitations with different dynamics, determined by velocities $v_\rho \neq v_\sigma$; and, as a consequence of both, (iii) the absence of fermionic quasi-particles. Responsible are the electron-electron interaction, which is marginal in one dimension and therefore transfers nonvanishing momentum in scattering processes at all energy scales, and the nesting properties of the 1D Fermi surface. They produce divergent $2k_F$ charge and spin density fluctuations which then interfere with Cooper-type superconducting fluctuations.

All three features clearly show up in the single-particle spectral function [3-5]

$$\rho(q,\omega) = -\pi^{-1}\,\mathrm{Im}\,G(k_F + q,\, \mu + \omega), \qquad (1.1)$$

which can be measured (within the "sudden approximation") by angle-resolved photoemission (ARPES). [With bad angular resolution, one essentially measures $N(\omega) = \sum_q \rho(q,\omega)$ and is able to probe only features (i) and (iii).] The spectral function is purely incoherent [3-5], at best with peaks at the dispersion energies of the elementary charge and spin excitations, indicating that the electron behaves as a composite particle built on more elementary excitations.
In Eq. (1.1), $G$ is the Fourier transform of the retarded electronic Green's function,

$$G(xt) = -i\,\Theta(t)\,\big\langle\{\Psi(xt),\,\Psi^\dagger(00)\}\big\rangle, \qquad (1.2)$$

$k_F$ is the Fermi wave number, and $\mu$ (= 0) is the chemical potential.
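A minimal numerical sketch of how Eq. (1.1) is read off from a Green's function may help fix conventions. The toy pole below is an assumption of this sketch (it is not the Luttinger or Luther-Emery result); it merely illustrates that a coherent quasi-particle pole yields a single Lorentzian peak, which the incoherent lineshapes discussed in this paper replace:

```python
import numpy as np

# Toy retarded Green's function with a single quasi-particle pole,
# G = 1/(w - v_F*q + i*eta); q and w are measured from k_F and mu.
v_F = 1.0    # Fermi velocity (arbitrary units, illustrative)
eta = 0.05   # small positive broadening

def G_ret(q, w):
    return 1.0 / (w - v_F * q + 1j * eta)

def rho(q, w):
    """Spectral function, Eq. (1.1): -(1/pi) Im G."""
    return -G_ret(q, w).imag / np.pi

w = np.linspace(-2.0, 2.0, 801)
print(w[np.argmax(rho(0.5, w))])   # Lorentzian peak sits at w = v_F*q = 0.5
```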
Much experimental effort has been devoted to studying and attempting to "prove" Luttinger liquid correlations in various quasi-1D systems. Examples are organic conductors of the family based on the molecule TMTSF (Bechgaard salts), where both NMR [6] and (partially) photoemission [7] have provided evidence in favor of a Luttinger liquid picture, quantum wires fabricated into semiconductor nanostructures [8], and edge states in the fractional quantum Hall effect [9]. In all cases, however, there appear to be problems with the precise values of the parameter $K_\rho$ derived, or with some other aspect of the interpretation in terms of a Luttinger liquid. It is not clear to date to what extent these discrepancies are due to the neglect of some experimentally important factor in the theory (such as three-dimensionality or electron-phonon coupling in the chain systems, or deviations from the special filling factors in the quantum Hall edge states), or indicative of more fundamental problems with either theory or experiment.

1D (organic and inorganic) charge density wave (CDW) systems could provide an alternative field of search for these typically one-dimensional correlations. Photoemission has indeed produced results [10] similar to the Bechgaard salts when performed with low angular resolution. With high angular resolution, a broad dispersing feature has been identified in $(\mathrm{TaSe}_4)_2\mathrm{I}$ [11], while two such signals have been measured in the blue bronze $\mathrm{K}_{0.3}\mathrm{MoO}_3$ [12]. Even though the actual situation in $\mathrm{K}_{0.3}\mathrm{MoO}_3$ may be slightly more complicated because there are two almost degenerate bands cutting the Fermi energy, it is clearly important to first understand the photoemission spectrum expected from the metallic phase of a single-band CDW material. Finally, while this paper was being prepared, new experiments on the organic two-chain conductor TTF-TCNQ became available which clearly show dispersing signals on both the TTF and TCNQ chains, with very unusual lineshapes [13]. Specifically, the TCNQ signals are somewhat similar to $\mathrm{K}_{0.3}\mathrm{MoO}_3$, and we know from independent experiments that there are strong $2k_F$-CDW fluctuations on this chain in the metallic state [14]. (The TTF chain exhibits strong $4k_F$-CDW fluctuations at very high temperature and is expected to be a Luttinger liquid.)

The association of the two dispersing signals of $\mathrm{K}_{0.3}\mathrm{MoO}_3$ with the charge and spin excitations of a Luttinger liquid is suggestive. As I will explain in the next section in more detail, it is incompatible, however, with the CDW transitions observed in these materials. This incompatibility motivates the consideration of the Luther-Emery model and is at the origin of the work reported here. Section II will discuss this model, its generic role as a low-energy fixed point of 1D quantum systems which have both gapped and gapless degrees of freedom, and the picture we had of its correlations prior to this work.

Recently, photoemission experiments have also been performed on the 1D Mott insulator $\mathrm{SrCuO}_2$ [15]. In Mott insulators, the charge fluctuations are gapped while the spins remain gapless. Their low-energy physics can therefore again be described by a Luther-Emery model, and our theory can be adapted to study the spectral functions of 1D Mott insulators. Earlier, angle-integrated photoemission on $\mathrm{BaVS}_3$ had been interpreted as evidence for a Luttinger liquid [16]. The behavior of the conductivity, however, is more insulator-like, and the present theory might be of interest there, too.

Section III presents the construction of the single-particle spectral function (1.1). In Section IV, I present results for the spectral functions of the spin-gapped Luther-Emery model, i.e. 1D Peierls systems and superconductors. In Section V, the spectral functions of 1D Mott insulators are presented. Section VI shows how the construction procedure of Section III can be generalized to arbitrary correlation functions of local operators. I compare my results with information from other studies in Section VII and use them for an interpretation of published experiments in Section VIII. I conclude with a short summary and a brief perspective. Partial results have been presented earlier [17,18].

II. THE LUTHER-EMERY MODEL

The Luther-Emery model extends the Luttinger model by including the backscattering of electrons across the Fermi surface. In its Hamiltonian (2.1) [19], $c_{rks}$ describes fermions with momentum $k$ and spin $s$ on the two branches ($r = \pm$) of the dispersion varying linearly [$\varepsilon_r(k) = v_F(rk - k_F)$] about the two Fermi points $\pm k_F$; $\Psi_{rs}(x)$ is its Fourier transform, and the density fluctuation operator obeys a bosonic algebra. The Luttinger model is obtained for $g_1 = 0$ and includes only forward scattering.

In one dimension, fermions can be transformed into bosons, and for the Luttinger model there is an exact operator identity relating a fermion operator $\Psi_{rs}(x)$ to the bosonic density fluctuations (2.6) [1,2]. For our purposes, the approximate expression in terms of the two phase fields $\Phi_\nu(x)$ and $\Theta_\nu(x)$, found earlier by Luther and Peschel [20],

$$\Psi_{rs}(x) \sim \frac{e^{irk_Fx}}{\sqrt{2\pi\alpha}}\, \exp\left\{-\frac{i}{\sqrt{2}}\left[r\Phi_\rho(x) - \Theta_\rho(x) + s\left(r\Phi_\sigma(x) - \Theta_\sigma(x)\right)\right]\right\}, \qquad (2.8)$$

is sufficient. This formula allows for a boson representation of the Hamiltonian and of all correlation functions.

It is important, however, first to recall the physics of the phase fields $\Phi_\nu(x)$ and $\Theta_\nu(x)$ in (2.8) [1,21,22]. The charge density fluctuation operator is related to $\Phi_\rho(x)$ by $\sum_r \rho_r(x) = -\pi^{-1}\,\partial\Phi_\rho(x)/\partial x$, and likewise for spin $\sigma$. When an additional particle is inserted into the system, a kink of amplitude $\pi$ is formed in $\Phi_\nu(x)$. These fields therefore describe the scattering phase shifts of the particles present in the system, generated by the particles added. The operators inserting the particles are exponentials of the dual fields $\Theta_\nu(x) = \int^x \Pi_\nu(x')\,dx'$, where $\Pi_\nu$ is the momentum conjugate to $\Phi_\nu$. In a general fluctuation operator whose correlation function we wish to evaluate, the prefactor of $i\Theta_\nu/\sqrt{2}$ measures the number of $\nu$-particles it inserts into the system, while the prefactor of $i\Phi_\nu/\sqrt{2}$ measures the number of $\nu$-particles it rearranges, at constant total $\nu$-particle number, to generate the desired fluctuation. By $\nu$-particle we label, in the first place, the slowly varying charge or spin part of the fermion operators $\Psi_{rs}(x)$; with phase factors reflecting the appropriate Fermi seas, these particles will describe the holons and spinons of the 1D Bethe-Ansatz-soluble models.

In boson form, the Luther-Emery Hamiltonian separates into a quadratic (Luttinger) part and the backscattering term $H_{1\perp}$ (2.14); the $\nu_r(p)$ are the operators for the charge and spin densities (2.15), and the interactions are transformed into the channel couplings $g_\nu$ accordingly.
Diagonalizing the Luttinger part (i.e. $H$ excluding $H_{1\perp}$) generates the renormalized velocities $v_\nu$ of the collective charge and spin excitations and their stiffness constants $K_\nu$; the phase fields transform accordingly. The main effect of the $g_4$-interaction is a renormalization of $v_\nu$. We therefore drop $H_4$ from explicit consideration in the following and always assume correctly renormalized velocities $v_\nu$.

For $K_\sigma - 1$ sufficiently large with respect to $|g_{1\perp}|$, backscattering is irrelevant, and the Luther-Emery model reduces to a Luttinger liquid. Its renormalized value of $K_\sigma$ can be calculated, e.g., by perturbative renormalization group [23], which is well controlled in this case, or, if applicable, fixed to unity by the requirement of spin-rotation invariance. Charge and spin excitations are gapless and, depending on the value of $K_\rho$, the dominant fluctuations are spin density waves (SDW, $K_\rho < 1$, repulsive forward scattering) or triplet pairing (TS, $K_\rho > 1$, attractive forward scattering). Charge density wave (CDW) and singlet superconducting (SS) fluctuations, respectively, are subdominant.

The backscattering Hamiltonian $H_{1\perp}$ is, for $K_\sigma - 1$ small enough compared to $|g_{1\perp}|$, a relevant perturbation and opens a gap $\Delta_\sigma$ in the spin excitation spectrum. Luther and Emery have shown that for the special value $K_\sigma = 1/2$, the interaction Hamiltonian $H_{1\perp}$ (2.14) can be represented as a bilinear in spinless fermions, using the bosonization formula (2.8) for spinless fermions (multiply the argument of the exponential by $\sqrt{2}$ and drop the $\sigma$-fields), and diagonalized [19]. On this Luther-Emery line $K_\sigma = 1/2$, the gap is computed exactly to be $\Delta_\sigma = |g_{1\perp}|/2\pi\alpha$ [$\alpha$ is an infinitesimal in (2.8), but it is often associated with a cutoff of the order of a lattice constant]. The renormalization group then allows one to derive the gap for arbitrary $K_\sigma$. The charges remain gapless.
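For later reference, it is convenient to record the renormalized parameters and the gapped spinon dispersion explicitly. The forms below follow one common bosonization convention; the convention, and hence the precise prefactors, are an assumption of this summary, and only the gapped dispersion (which I take to be what the text later cites as Eq. (2.18)) is used quantitatively below:

$$v_\nu = \sqrt{\left(v_F + \frac{g_{4\nu}}{\pi}\right)^2 - \left(\frac{g_\nu}{2\pi}\right)^2}, \qquad K_\nu = \sqrt{\frac{2\pi v_F + 2g_{4\nu} - g_\nu}{2\pi v_F + 2g_{4\nu} + g_\nu}}, \qquad \varepsilon_\sigma(q) = \sqrt{\Delta_\sigma^2 + v_\sigma^2 q^2}.$$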
The Mott insulator is the consequence of an instability in the charge channel, caused by Umklapp scattering off the lattice at commensurate band fillings. The Umklapp Hamiltonian appropriate for a half-filled band is obtained by simply replacing spin by charge in Eq. (2.14), and its coupling constant is often denoted by $g_{3\perp}$. Here the spins are gapless, while relevant Umklapp scattering opens a gap $\Delta_\rho$ in the charge channel. This generic picture applies (with little modification) to all even commensurabilities ($k_Fa = [r/s]\pi/2$, $s$ even). The situation is different for $s$ odd, where the Umklapp operator necessarily couples charges and spins [1], and we exclude these cases from our study. The Mott insulator is dominated by $4k_F$-CDW and/or SDW correlations.

While the Luther-Emery solution is essentially exact [24], it is useless for computing correlation functions, since there is no practical relation between the physical fermions and the spinless pseudofermions. Still, we have some qualitative information on the correlation functions. Several methods [25] support the idea that, in the gapped phase, correlations of the $\Phi_\sigma$-field tend towards a non-zero constant as $|x|$ or $|t| \to \infty$, while those involving exponentials of its dual field $\Theta_\sigma(x)$ decay exponentially in space (or oscillate in time). The spin gap quenches low-energy spin fluctuations; therefore SDW and TS correlations should be exponentially suppressed. With a constant asymptotic value of $\Phi_\sigma$, CDW and SS correlations are enhanced with respect to a Luttinger liquid and now dominate over SDW and TS. The opening of a spin gap is a necessary condition for the emergence of dominant SS or CDW correlations in a 1D metal. As a corollary, a Luther-Emery phase must exist in the normal state of CDW systems (or 1D superconductors) between a Luttinger liquid and the 3D-ordered low-temperature phases. One should therefore be careful in interpreting the properties of the metallic "normal state" of a CDW system (or of a 1D superconductor) in terms of a Luttinger liquid.

For the one- and two-particle spectral functions, there is a general belief that the opening of a gap affects the system for frequencies smaller than this gap, while the behavior of the ungapped system is essentially recovered at larger frequency scales. The exponential decay (resp. oscillations) of correlation functions involving operators $\exp[i(\ldots)\Theta_\sigma]$ would cut off (shift) the divergences as functions of $q$ ($\omega$) they had possessed in the Luttinger model. Possibly important power-law prefactors to the exponentials have not been discussed. There has been almost no calculation or systematic construction of such functions, in particular dynamical ones [26], and, to my knowledge, no critical check of these hypotheses by numerical simulation prior to this work [17,18].

A wide variety of models fall into the Luther-Emery universality class, and my results should be applicable there in a low-energy sector: Luttinger liquids coupled to phonons, and related models so long as they are incommensurate, have wide regions of parameter space with gapped spin fluctuations and gapless charges [31]; the negative-$U$ Hubbard model at any band filling has a spin gap [32], and the positive-$U$ Hubbard model at half-filling has a charge gap [33,34]; with longer-range interactions, charge gaps can occur at other rational band fillings, too. The $t$-$J$ model has a spin gap at low density [35]. Spin gaps occur frequently in models of two Luttinger or Hubbard chains coupled by single-particle tunneling [36,37]. Also, when a $2k_F$-CDW is established in many coupled Luttinger chains as a consequence of interchain Coulomb interaction, the system passes through a region of attractive backscattering which opens a spin gap [38].

III. CONSTRUCTION OF THE SPECTRAL FUNCTION

I now present a systematic construction of the single-particle spectral function, Eqs. (1.1) and (1.2), for the spin-gapped Luther-Emery model. The Green's function exhibits the full complexity of the problem, involving all four phase fields $\Phi_\nu$, $\Theta_\nu$, while many other correlation functions are easier [1]; they will be discussed in Section VI. Here we limit ourselves to the diagonal terms of the Green's function, both in the branch index $r$ and in the spin index $s$, and further assume spin-rotation invariance, so that $s$ is dropped altogether. This assumption, which I will make throughout the paper unless exceptions for the sake of an argument are stated explicitly, further implies $K_\sigma = 1$. With the nonvanishing expectation values of operators $\exp[i(\ldots)\Phi_\sigma]$ generated by the gap opening, finite off-diagonal terms are possible in principle, both here and in multi-particle correlation functions. They can be calculated in complete analogy to the terms discussed here, and we ignore them in the following.

Using bosonization (2.8), the retarded Green's function for right-moving fermions ($r = +$) can be represented as a product of charge and spin correlation functions,

$$G(xt) \sim e^{ik_Fx}\, g_\rho(xt)\, g_\sigma(xt). \qquad (3.2)$$

The product structure is a consequence of the charge-spin separation of the Hamiltonian (2.1). The spectral function (1.1) is then a convolution,

$$\rho(q,\omega) \propto \int dq' \int d\omega'\; g_\rho(q',\omega')\, g_\sigma(q - q',\, \omega - \omega'). \qquad (3.3)$$
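A minimal numerical sketch of the convolution (3.3) may be useful. The two inputs below are placeholders of my own with the qualitative structure described in the text (a gapless charge part peaked on $\omega = v_\rho q$ and a gapped spin part peaked on $\omega = \sqrt{\Delta_\sigma^2 + v_\sigma^2 q^2}$), not the exact Luther-Emery expressions; narrow Lorentzians stand in for the singular lines, and all parameter values are arbitrary. NumPy and SciPy are assumed:

```python
import numpy as np
from scipy.signal import fftconvolve

v_rho, v_sig, Delta, eta = 1.5, 1.0, 0.3, 0.02   # illustrative values

q = np.linspace(-2.0, 2.0, 201)
w = np.linspace(-4.0, 4.0, 401)
Q, W = np.meshgrid(q, w, indexing="ij")

def peak(x):
    """Narrow Lorentzian standing in for a singular line."""
    return (eta / np.pi) / (x**2 + eta**2)

g_rho = peak(W - v_rho * Q)                           # gapless charge branch
g_sig = peak(W - np.sqrt(Delta**2 + (v_sig * Q)**2))  # gapped spin branch

dq, dw = q[1] - q[0], w[1] - w[0]
rho = fftconvolve(g_rho, g_sig, mode="same") * dq * dw  # Eq. (3.3) on a grid
print(rho.shape, float(rho.max()))
```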
The charge part is easy and can be calculated in the Luttinger model [Eqs. (3.4)-(3.5); only the leading $\omega$- and $q$-dependence is displayed there]. Using a similar expression for the spins, one can reproduce in detail the spectral functions of the Luttinger model calculated elsewhere directly [3-5]. Notice that the divergences are stronger than for a spinless Luttinger model, ensuring that singularities remain after performing the convolution integrals. If both $K_\nu \neq 1$, the coalescence of three of the four singularities of $g_\rho(q,\omega)$ and $g_\sigma(q,\omega)$ is needed to generate a singularity in the spectral function of the Luttinger model; if one of them, e.g. $K_\sigma$, is unity, the coalescence of two singularities is sufficient.

The determination of the spin correlation function is more involved, because it has no simple representation in terms of the Luther-Emery pseudofermions, excluding any exact calculation. I now show that the leading behavior of this function can, however, be uniquely constructed from symmetries, equivalences, and known limits, if the Ansatz is made that $g_\sigma(xt)$ is a product of power laws and exponentials in $x$ and $t$. There is a variety of arguments requiring this form, and I give them in the following, together with the construction procedure. The important steps are:

(i) Representing the Hamiltonian in terms of right- and left-moving fermions requires $g_\sigma$ to be a function of $x \pm v_\sigma t$ only. In general, $g_\sigma$ will contain both power laws ($f_\pm$) and exponentials ($f_{\exp}$) of these variables,

$$g_\sigma(xt) = f_+(x - v_\sigma t)\, f_-(x + v_\sigma t)\, f_{\exp}(x^2 - v_\sigma^2 t^2). \qquad (3.6)$$

Interactions other than $g_4$ can only mix left- and right-moving excitations, producing products of $x \pm v_\sigma t$, or functions thereof, but cannot introduce new dependences on $x$ and/or $t$. This is consistent both with the boson solution of the massless Luttinger phase and with the Luther-Emery solution of the gapped phase. (The Lorentz invariance of the Luther-Emery model requires all correlation functions of Luther-Emery pseudofermions to depend on $x^2 - v_\sigma^2 t^2$ only and, by implication, all those of the physical fermions whose operators can be represented in terms of Luther-Emery fermions alone.) The exponential part $f_{\exp}$ is necessarily a function of $x^2 - v_\sigma^2 t^2$ only. All dependences on $x$ and $t$ other than through functions of $x^2 - v_\sigma^2 t^2$ must therefore be present also in the Luttinger model ($g_{1\perp} = 0$) and are necessarily of power-law form.

(ii) The limit of a vanishing gap, $\Delta_\sigma \to 0$, can also be used to constrain the function $g_\sigma(xt)$, but it is rather subtle. To make the argument clear, we momentarily relax the assumption of spin-rotation invariance, so that the spin channel of the model is described by $g_{1\perp}$ and general $K_\sigma$. (Alternatively, we can look at a Mott problem, where Umklapp scattering $g_{3\perp}$ and $K_\rho \neq 1$ are more natural.) In the limit $\Delta_\sigma \to 0$, the function $f_{\exp}(x^2 - v_\sigma^2 t^2) \to 1$, because the exponential dependences are introduced by the finite gap. Straightforwardly, one would now identify the product $f_+f_-$ with the spin part of the spectral function of the remaining Luttinger model, i.e. Eq. (3.7) below with anomalous exponents $\delta_- = (K_\sigma + K_\sigma^{-1} - 2)/8$ and $\delta_+ = \delta_- + 1/2$. This physically appealing procedure was used in an earlier paper [17] and could possibly describe the physics of a small-gap Luther-Emery model. Taking the limit $\Delta_\sigma \to 0$ to constrain eventual power laws in $g_\sigma(xt)$ involves different physics, however, and the above argument must be modified.
For a vanishing gap, $g_\sigma$ must reduce to the correlation function of the free Luttinger model ($K_\sigma = 1$), no matter what value of $K_\sigma$ would describe the hypothetical Luttinger model obtained from the Luther-Emery model (2.1) for $g_{1\perp} = 0$, i.e. independently of any assumption on spin-rotation invariance. Physically, this is so because the anomalous operator dimensions $K_\sigma \neq 1$ of the Luttinger model are a consequence of singular low-energy virtual particle-hole excitations. When there is a gap at the Fermi surface, these processes are quenched, and one is left with the exponent $K_\sigma = 1$ of the free model [27]. Notice that this argument implies that we consider a rather large gap. Accidentally, the spectral functions given earlier [17] remain correct. This, however, is due to the limitation to spin-rotation-invariant interactions there; they impose $K_\sigma = 1$ for the power-law functions $f_\pm(x \mp v_\sigma t)$ in any case. With $f_{\exp}(x^2 - v_\sigma^2 t^2; \Delta_\sigma = 0) \sim 1$, one can determine all possible power laws $f_\pm$, up to corrections varying more slowly than a power law, to be

$$f_\pm(x \mp v_\sigma t) \sim (x \mp v_\sigma t)^{-\delta_\pm} \qquad (3.7)$$

with exponents

$$\delta_+ = \tfrac{1}{2}, \qquad \delta_- = 0. \qquad (3.8)$$

These are the exponents of a free Luttinger correlation function for the spin part of a right-moving fermion. I re-emphasize that they arise because of the quenching of low-energy particle-hole excitations by the spin gap, and they hold independent of any assumption on spin-rotation invariance. (As we will see below, the corresponding result for the charge channel implies that there cannot be any anomalous dimensions in a 1D Mott insulator with spin-rotation invariance respected.)

(iii) From the equivalence of the Luther-Emery model to a classical 2D Coulomb gas [23] (using the Matsubara formalism of imaginary times $\tau = it$ and putting $y = v_\sigma\tau$) and Debye screening of the charges above the Kosterlitz-Thouless temperature, one deduces an exponential factor in $f_{\exp}$,

$$f_{\exp}(x^2 - v_\sigma^2 t^2) \sim \exp\left(-\,c\,\frac{\Delta_\sigma}{v_\sigma}\sqrt{x^2 - v_\sigma^2 t^2}\right), \qquad (3.9)$$

with an undetermined constant $c$. This equivalence quite generally excludes any decay faster than (3.9). In this picture, the perturbation Hamiltonian (2.14) generates a Coulomb gas of charges $q_e = \pm 1$, and the $\Phi_\sigma$-fields of the Green's function appear as two test charges $q_e' = \pm 1/2$ whose (bare logarithmic) interaction is modified by screening from the Coulomb gas. The gapped Luther-Emery phase corresponds to the high-temperature plasma phase of unbound charges in the Coulomb gas, and the screening can then be treated in the Debye-Hückel approximation [28], in which the effective potential between the charges is exponentially screened (3.10). The $\Theta_\sigma$-fields can then be viewed as magnetic monopoles with strengths $q_m = \pm 1/2$. Their interaction is again logarithmic, and they couple to the electric charges with $V_{em}(\mathbf{r}) \sim -\arctan(y/x)$ [29]. Clearly, the high-temperature plasma of electric charges $q_e = \pm 1$ modifies the effective monopole-monopole interaction (3.11); evaluating it with the Debye-Hückel polarization propagator and Fourier-transforming back to real space, one obtains a confining interaction (3.12), with an open constant $c' \propto c$. One observes an antiscreening effect here: in the presence of the electric charges, the magnetic monopoles are confined more strongly than without charges! Going back to real times, (3.12) produces the exponential dependence in (3.9) and, most importantly, gives additional justification (in fact, for those multi-particle correlation functions which depend only on $x^2 - v_\sigma^2 t^2$, the only firm justification) for the presence of power-law prefactors in addition to exponential terms in (3.6).
(iv) The open constant $c$ in (3.9) can be determined from a spectral representation of $f_{\exp}$ and our interpretation of the bosonization formula (2.8). Fourier transforming $f_{\exp}(x,t)$, one obtains a function (3.13) which has a gap of magnitude $c\Delta_\sigma$ in its spectrum. This gap must correspond to the excitation of $|n|$ spinons, where $n$ is the prefactor of $i\Theta_\sigma(x)/\sqrt{2}$ in the operator whose correlation function we wish to calculate. This constrains the prefactor in the exponential to $c = |n|$ quite generally. For the single-particle Green's function, $n = 1$, and we obtain $c = 1$ here.

(v) The present construction of $g_\sigma(xt)$ is not an exact calculation. It is therefore important to look for exactly known cases which can be used as tests to confirm the validity of this construction. Gulácsi has calculated explicitly the $t = 0$ Green's function of a 1D Mott insulator [39]: he finds $G(x) \sim \exp(-\Delta_\rho|x|)/|x|$, which is in complete agreement with the present theory when the $1/\sqrt{|x|}$ contribution from the ungapped channel is multiplied onto Eq. (3.14) below. That there may be a power-law prefactor in the charge part of the spectral function has also been realized, though well hidden in publications, by others [40]. In Section VI, I will discuss further tests of these rules based on two-particle correlation functions.

From the rules (i)-(v), I find

$$g_\sigma(xt) \sim (x - v_\sigma t)^{-1/2}\, \exp\left(-\frac{\Delta_\sigma}{v_\sigma}\sqrt{x^2 - v_\sigma^2 t^2}\right). \qquad (3.14)$$

Fourier transformation then gives

$$g_\sigma(q,\omega) \sim \frac{\Theta(\omega)}{\sqrt{\omega}}\left(1 + \frac{v_\sigma q}{\varepsilon_\sigma(q)}\right)\delta\big(\omega - \varepsilon_\sigma(q)\big), \qquad \varepsilon_\sigma(q) = \sqrt{\Delta_\sigma^2 + v_\sigma^2 q^2}. \qquad (3.15)$$

The comparison of (3.15) with (3.5) (after $\rho \to \sigma$ there) is interesting. The $\delta$-function translates the absence of anomalous dimensions in the gapped channel, a consequence of rule (ii) rather than of spin-rotation invariance as in the $\sigma$-version of (3.5). The change in dispersion due to the spin gap enters through this $\delta$-function. The frequency-dependent prefactor is the same as in the gapless system; however, due to the different argument of the $\delta$-function, it no longer becomes singular in the limit $q, \omega \to 0$ but now has an upper limit of $\Delta_\sigma^{-1/2}$. A similar effect occurs in the Green's function of 1D quantum antiferromagnets, where the opening of the spin gap cuts off a singularity in the prefactor of the delta function [41]. The factor in parentheses is a coherence factor translating the enhanced spin-pairing tendency at the origin of the spin gap, and one readily recognizes the same structure as in the coherence factors $u_q$, $v_q$ familiar from the theory of superconductivity.

IV. SPECTRAL FUNCTION FOR THE SPIN-GAPPED LUTHER-EMERY MODEL

We must now convolute $g_\sigma(q,\omega)$, Eq. (3.15), with the charge part, Eqs. (3.4) or (3.5). The results depend on the relative magnitudes of the charge and spin velocities. We therefore treat separately the cases of (A) repulsive interactions (in the sense that the effective forward-scattering matrix element is repulsive), implying $v_\rho > v_\sigma$, and (B) attractive interactions, implying $v_\rho < v_\sigma$. What could we expect from our knowledge of the Luttinger liquid [3]? There, the singularities at $\omega = v_{\rho(\sigma)}q$ arise from processes where the charge (spin) contributes all of the electron's momentum $q$ and the spin (charge) none. The same argument applied to the Luther-Emery model predicts signals at the renormalized spin dispersion $\varepsilon_\sigma(q)$, Eq. (2.18), and at a shifted charge dispersion

$$\tilde{\varepsilon}_\rho(q) = v_\rho q + \Delta_\sigma. \qquad (4.1)$$

Figure 1 shows the location of the signals expected from this argument. The $\Delta_\sigma$-shift in the charge dispersion comes from the fact that the zero-momentum spin fluctuation can only be excited at a cost of $\Delta_\sigma$. As will be seen below, however, the spectral functions of the Luther-Emery model never show two singularities with these dispersions.
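To make the naive expectation of Fig. 1 concrete, the sketch below tabulates the two candidate signal locations, $\varepsilon_\sigma(q)$ and the shifted charge dispersion of Eq. (4.1). The massive form of $\varepsilon_\sigma$ is an assumption consistent with Eq. (3.15), and all parameter values are arbitrary; as the text emphasizes, the actual Luther-Emery spectral functions never realize both of these as true singularities:

```python
import numpy as np

v_rho, v_sig, Delta = 1.5, 1.0, 0.3   # illustrative values

def eps_sigma(q):
    """Renormalized (gapped) spin dispersion."""
    return np.sqrt(Delta**2 + (v_sig * q)**2)

def eps_rho_shifted(q):
    """Shifted charge dispersion, Eq. (4.1)."""
    return v_rho * q + Delta

q = np.linspace(-1.0, 0.0, 5)   # occupied side, q < 0
print("eps_sigma        :", eps_sigma(q))
print("eps_rho + Delta  :", eps_rho_shifted(q))
```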
The intuitive predictions on the spectral function of the Luther-Emery model basically transcribe the standard argument that the behavior of correlation functions is modified on energy scales below the gap (correlations are suppressed there) but recovered almost unchanged on higher energy scales. Our results will show that, for dynamical, $q$- and $\omega$-dependent correlations, this argument is not trustworthy.

A. One-dimensional Peierls "insulators"

We assume $v_\rho > v_\sigma$ and $K_\rho < 1$, implying dominant CDW correlations. Calling these systems "insulators" is a misnomer, however, because the charges are gapless and the systems are metallic. More precisely, we think of the Luther-Emery model here as describing the "normal" metallic state above a CDW transition. The convolution of $g_\sigma$ and $g_\rho$, Eq. (3.3), is rather straightforward now. After executing the $\omega'$-integral, singularities are obtained from the coalescence of the two singularities carried by $g_\rho(q,\omega)$. The result of the calculation is shown schematically in Fig. 2 for $q < 0$ (unlike previous papers, we present the spectral functions as those of the occupied states, i.e. as they would be measured by a photoemission experiment). There are indeed features at the special frequencies shown in Fig. 1. On the spin dispersion $\varepsilon_\sigma(q)$, there is a true singularity, as in the Luttinger model. Here, $\alpha$ is defined as $\alpha = (K_\rho + K_\rho^{-1} - 2)/4 = 2\gamma_\rho$, since the notion of a $K_\sigma$ does not make sense in a spin-gapped system. Folklore would then predict another singularity $|\omega + \tilde{\varepsilon}_\rho(q)|^{(\alpha - 1)/2}$ (short-dashed lines in Fig. 2), which is not observed here. It is instead cut off to a finite maximum (4.3). The reason for the cutoff of the Luttinger divergence on the charge dispersion is the non-singular prefactor (for $q \to 0$) in $g_\sigma(q,\omega)$, cf. Eq. (3.15) and the subsequent discussion; the convolution makes this effect apparent on the charge dispersion $\tilde{\varepsilon}_\rho(q)$. The spin gap therefore suppresses the divergence associated with the charge dispersion, while on the renormalized spin dispersion the spectral response remains singular.

At positive frequencies, the Luther-Emery model has pronounced shadow bands. Here, the Luttinger liquid has only very small weight. The weight in the Luther-Emery model is much stronger, and the spectral function has the same overall shape as at negative frequencies. For $q < 0$, the negative-frequency part is enhanced by a coherence factor $1 - v_\sigma q/\varepsilon_\sigma(q)$, while a factor $1 + v_\sigma q/\varepsilon_\sigma(q)$ depresses its shadow. These factors translate the increased coherence due to the spin pairing and the finite spin gap, and are a consequence of the corresponding coherence factors in Eq. (3.15). Of course, as suggested by Fig. 1, one can also view the shadow bands as bending back from the Fermi (or, more precisely, the gap) energy as $k$ is increased beyond $k_F$. This view is perhaps closer to a real photoemission experiment.

B. One-dimensional superconductors

We now take $v_\rho < v_\sigma$, i.e. attractive forward scattering. This implies $K_\rho > 1$, and such a system has dominant singlet pairing fluctuations. Interestingly, two true singularities occur here, whose locations are shown in Fig. 3. There is one singularity on the renormalized spin dispersion, which is one-sided for $q > q_c$ and two-sided for $q < q_c$. Here, $q_c$ is a critical wave vector which arises in the convolution procedure from searching for the minimum of $\varepsilon_\sigma(q') + v_\rho(q - q')$ as a function of $q'$. At this wavevector, the dispersion $\tilde{\varepsilon}(q) = \varepsilon_\sigma(q_c) + v_\rho(q - q_c)$ is tangential to $\varepsilon_\sigma(q)$.
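The value of $q_c$ follows from this extremization once a form of the spinon dispersion is assumed. With the massive dispersion $\varepsilon_\sigma(q) = \sqrt{\Delta_\sigma^2 + v_\sigma^2 q^2}$ used in the sketches above (an assumption consistent with Eq. (3.15)), a one-line calculation gives

$$\frac{d}{dq'}\Big[\varepsilon_\sigma(q') + v_\rho(q - q')\Big]\bigg|_{q'=q_c} = 0 \;\Longrightarrow\; \frac{v_\sigma^2\,q_c}{\sqrt{\Delta_\sigma^2 + v_\sigma^2 q_c^2}} = v_\rho \;\Longrightarrow\; q_c = \frac{v_\rho\,\Delta_\sigma}{v_\sigma\sqrt{v_\sigma^2 - v_\rho^2}},$$

which is real precisely in the attractive case $v_\sigma > v_\rho$ considered here.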
For $q < q_c$, a divergence on this shifted charge dispersion splits off from the spin divergence. Again, there are strong shadow bands with the same functional forms as the main bands, specifically with two singularities, and with intensities controlled by coherence factors. The dispersions of the signals are displayed in Figure 3, and the shape of the spectral function is sketched in Figure 4. Notice, quite generally, that the behavior of $\rho(q, \omega \approx \pm\Delta_\sigma)$ is determined by that of the spin part close to $\Delta_\sigma$ and that of the charge part at $\omega \approx 0$. Unlike earlier conjectures [25], it is therefore not necessary to know details of the charge dynamics on a scale $\omega \approx \Delta_\sigma$, where the Luttinger description may have acquired significant corrections.

The $k$-integrated density of states then is $N(\omega) \sim \Theta(\omega - |\Delta_\sigma|)\,(\omega - |\Delta_\sigma|)^\alpha$, independent of the magnitudes of the velocities. There is no weight below the gap, and the typical gap singularity in the density of states of the spin fluctuations is wiped out by the convolution with the gapless charges. It is quite clear now that certain properties of 1D fermions (the dynamical ones involving (1+1)D Fourier transforms) are affected by the gap opening on all energy scales, contrary to common expectation, while those depending on one variable alone are modified only on scales below the gap energy. Despite the opening of a gap in the spin channel, singular spectral response remains possible in $q$- and $\omega$-dependent correlation functions.

V. SPECTRAL FUNCTION OF ONE-DIMENSIONAL MOTT INSULATORS

The spectral function of a 1D Mott insulator can be computed as a special case of the generic solution presented above. One simply has to exchange $\sigma \leftrightarrow \rho$ everywhere and put $K_\sigma = 1$ in the gapless spin channel for spin-rotation invariance (which we again assume to hold). Importantly, the exchange of $\rho$ and $\sigma$ also applies to the inequalities on the velocities $v_\nu$, where again two cases must be distinguished. Both factors $g_\nu$ in the convolution now involve $\delta$-functions. In the case of repulsive forward scattering, $v_\rho > v_\sigma$, one now finds a spectral function with two singularities, similar to the case of a 1D superconductor. Since $K_\sigma = 1$, the anomalous single-particle exponent $\alpha = 0$, i.e. one obtains two inverse-square-root singularities in the main band ($\omega < 0$ for $q < 0$). An important difference from the case of a superconductor occurs in the shadow band: since the spectral function of the gapless spin channel has no shadow band of its own, the singularity on $\tilde{\varepsilon}_\sigma(q)$ is missing in the shadow band. The shadow band therefore has a single singularity on the charge dispersion $\varepsilon_\rho(q)$, with a weight depressed by a coherence factor with respect to the weight of the main-band signals. The effect is completely analogous to the appearance of a single nonanalyticity in the (very weak) shadow bands of a Luttinger liquid with spin-rotation-invariant interactions [3-5]. The shape of this spectral function is sketched in Figure 5. The location of the singularities follows Figure 3 with the replacement $\rho \leftrightarrow \sigma$, except for the shadow bands, where the straight lines should be ignored.

The case $v_\sigma > v_\rho$ is again different. Compared to the case of the 1D Peierls "insulator", the anomalous dimension $\alpha$ on the charge dispersion drops out due to spin-rotation invariance, giving an inverse-square-root singularity on $\varepsilon_\rho(q)$. Also, the finite maximum on the shifted spin dispersion $\tilde{\varepsilon}_\sigma(q)$ does not occur. This is because the $\delta$-function has zero weight in the energy domain where the square-root prefactor in Eq. (3.15) takes its maximum. The shadow band, of course, has a single inverse-square-root singularity with the usual coherence factors.
Thus, up to coherence factors, the spectral function for this case consists of inverse-square-root singularities on $\varepsilon_\rho(q)$ in the main and shadow bands, and the density of states is again gapped.

The spectral properties of a doped Mott insulator, of course, depend on the detailed scenario emerging from a more complete theory. Work on the Hubbard model shows, however, that the upper Hubbard band qualitatively survives a finite dopant concentration [34,39]. Continuity then suggests that, as the insulating state is left by varying the band filling, spectral weight is gradually taken out of both the main and shadow bands of a spectral function such as those discussed before, and transferred into the charge and spin divergences of a Luttinger liquid signal. Although the spins are left unaffected in the transition and only a charge gap opens, both the charge and the spin signals are predicted to be shifted and strongly modified by doping. This is a direct consequence of the convolution property (3.3) of the single-particle spectral function. When superposing (to a first approximation) the two signals, care must be taken, in addition, to account for the dependence of the chemical potential on the doping level.

VI. GENERALIZATION TO OTHER CORRELATION FUNCTIONS

We now discuss the construction of other correlation functions for the Luther-Emery model. Clearly, due to charge-spin separation, they can again be written as convolutions of charge and spin correlation functions. Consider a general local operator $O_\nu^{(m,n)}$ (6.1) built from powers of $\Psi_{r\nu}(x)$, where $\Psi_{r\nu}(x)$ was introduced in Eq. (2.11) and a positive (negative) exponent is understood as a creation (annihilation) operator. Bosonizing $O_\nu$, the $\Phi_\nu$-field acquires a prefactor $(m - n)$, and $\Theta_\nu$ is multiplied by $(m + n)$, with respect to the single-particle operator $\Psi_{r\nu}$. If the gapless channel is assumed to be the charge, $\nu = \rho$ (as we have done throughout this paper except in the preceding section), the correlation function of $O_\rho$ behaves as a product of power laws (6.2), and its Fourier transform is given by (6.3).

We now turn to such an operator for spin, $O_\sigma$, in the presence of a spin gap. When the spin gap opens due to the Hamiltonian (2.14), the $\Phi_\sigma$-field develops long-range order. Its dual field, $\Theta_\sigma$, is then disordered, and its correlations will contain exponential terms similar to $f_{\exp}$, Eq. (3.9). We now have to distinguish two cases. (i) If $m = -n$, the operator $O_\sigma^{(m,-m)}$ can be represented in terms of the $\Phi_\sigma$-field alone. Since this is the ordering field, we can simply put it to a constant value, implying $R_\sigma^{(m,-m)}(xt) \sim 1$; the space-time dependence of the total correlation function is then determined by the charge part $R_\rho^{(s,t)}(q,\omega)$ alone (which may carry different powers $s$, $t$ of $\Psi_{r\rho}$, depending on the spin directions), and is given by Eq. (6.3). One can, in principle, go one step further and account for the long-wavelength fluctuations out of the ground-state value of $\Phi_\sigma$. A convenient method for this is again the mapping onto a classical 2D Coulomb gas. Since the $\Phi_\sigma$-fields of the correlation functions introduce electric test charges, we know that in the massive Luther-Emery phase their interaction is exponentially screened (cf. Section III). We then find the corresponding fluctuation contribution. I will discuss an interesting application in a moment.
However, if (ii) $m \neq -n$, the spin correlations contain the disorder field $\Theta_\sigma$ dual to $\Phi_\sigma$, and the gap opening will lead to exponential factors as in Eq. (3.9). This is the case for the Green's function, cf. Eq. (3.2). We apply the same rules (i)-(v) as in Section III. Specifically, the prefactor of the gap in the exponential is $c = |m + n|$, obtained by comparing the energy for the insertion of $|m + n|$ $\sigma$-particles into the system with the gap obtained in the spectral representation of the exponential. The power-law prefactor is that of the free Luttinger model, because there cannot be any anomalous dimensions in a gapped fermion system. In $(xt)$-space, the correlation function is then

$$R_\sigma^{(m,n)}(xt) \sim (x - v_\sigma t)^{-m^2/2}\,(x + v_\sigma t)^{-n^2/2}\, \exp\left(-|m+n|\,\frac{\Delta_\sigma}{v_\sigma}\sqrt{x^2 - v_\sigma^2 t^2}\right). \qquad (6.5)$$

This expression can be Fourier transformed and convoluted with an appropriate charge part. What the present construction cannot do, however, is give information on the magnitude, or a possible vanishing, of the prefactor of the correlation function. One example is the $2k_F$-CDW correlation function in the half-filled repulsive Hubbard model, where a naive use of the construction above would predict (in real space, at $t = 0$) a dependence $\sim x^{-1}$ which, on physical grounds, is not expected to be important in that model [19]. Qualitative information can be obtained in that situation from renormalization group studies, where one can monitor how the amplitude of a correlation function changes as one moves away from a Luttinger liquid fixed point [42]. A complete solution of this problem would presumably require an exact boson representation of the physical fermions in a Luther-Emery model, including fermion raising operators.

To conclude this section, I discuss two more test cases for my construction procedure. Consider the transverse $2k_F$ spin-correlation functions [1,25] in the Luther-Emery spin-gap regime. The spin density wave operator can also be represented in bosonized form; limiting ourselves to the spin component of the correlation function, we obtain, using Eq. (6.5), the expression (6.8), whose Fourier transformation gives (6.9). On the other hand, on the Luther-Emery line $K_\sigma = 1/2$, one can refermionize the operator in terms of spinless fermions $\Psi_r(x)$ by inverting the spinless variant [1] of the bosonization formula (2.8). The limitation of this procedure to the Luther-Emery line is inessential, because different coupling constants will only affect the magnitude of the spin gap but not the form of the excitation spectrum, so long as $\Delta_\sigma > 0$. Now one can calculate $R_\sigma^{(-1,-1)}(q,\omega)$ as the pairing correlation function of spinless fermions in a fermion representation. Such a calculation has been outlined by Lee [25], and the result derived from his expressions agrees with Eq. (6.9) both concerning the regions of nonvanishing spectral weight and the critical exponents of the singularities. Incidentally, my own expressions are more complicated than Lee's by additional terms and additional occupation functions $n(k)$ and $1 - n(k)$. They conspire with the coherence factors $[1 \pm v_\sigma q/\varepsilon_\sigma(q)]$ to produce a prefactor $v_\sigma^2q^2/(v_\sigma^2q^2 + 4\Delta_\sigma^2)$ to the leading inverse-square-root singularity, which vanishes as $q \to 0$. At $q = 0$, a subleading term $\propto \Theta(|\omega| - 2\Delta_\sigma)$ times a regular function remains. Apart from these subtle prefactors, the exact fermionic calculation reproduces the result of the construction procedure advocated here for the correlation functions of the Luther-Emery model.

A final test is provided by the charge correlations of a 1D Mott insulator. In general, the charge density operator $\hat{n}(x)$ has contributions at wavevectors $q \approx 0$, $2k_F$, $4k_F$, etc.
In a half-filled band, $4k_F = 2\pi/a$ is a reciprocal lattice vector, so that the $4k_F$-term effectively does not oscillate when measured on the lattice sites. When the Mott gap $\Delta_\rho$ opens, the field $\Phi_\rho$ orders at a finite constant value. The third term in (6.11) then translates the long-range charge order, the first term measures the long-wavelength fluctuations out of this ordered ground state, and the second term measures $2k_F$ charge fluctuations. Using the arguments at the beginning of this section (after $\sigma \leftrightarrow \rho$), we obtain from the first two terms a spectral function (6.12) consisting of a zero-frequency $\delta$-function, coming from the "$4k_F$"-part, and a high-frequency signal from the $\partial\Phi_\rho/\partial x$-term. In principle, one could also calculate the $2k_F$-part. However, experience with the Hubbard model suggests that prefactors not specified here suppress the $2k_F$-CDW fluctuations on the lattice sites [1], and we do not consider them here (similar, and nonvanishing, contributions appear, however, in $2k_F$-SDW correlation functions and in a "bond order wave", which is best described as a $2k_F$-CDW centered midway between two sites). The spectral function $R_n(q,\omega)$ has been calculated recently by Mori and Fukuyama [26]. They do not give an explicit expression which would allow one to check the critical exponents, but the region of nonvanishing spectral weight and the overall shape of the high-frequency signal are consistent with Eq. (6.12), whereas the $\delta$-function in Eq. (6.12) seems to be missing. It is present, however, in a numerical diagonalization of an extended Hubbard model [43], and provides another, though more superficial, test of our construction.

VII. RELATION TO OTHER WORK

In the preceding sections, we have discussed some tests for the dynamical correlation functions of the Luther-Emery model constructed here [25,39]. Independent verification comes from work on many models which fall into the Luther-Emery universality class. In particular, numerical studies have attempted to look into the spectral properties of correlated fermion models. Quantum Monte Carlo simulation of the 1D Hubbard model at half-filling, a prototypical Mott insulator with $v_\rho > v_\sigma$, provides evidence for pronounced shadow bands, much stronger than those of the doped systems, which form Luttinger liquids [34]. At present, the resolution is not good enough to directly visualize the two dispersing inverse-square-root singularities found here. However, recent improvements on doped Hubbard models [44] lend hope that Quantum Monte Carlo will be able, in the near future, to confirm the predictions made here. The 1D $t$-$J$ model at half-filling also forms a Mott insulator with $v_\rho > v_\sigma$, and exact diagonalization of lattices of up to 22 sites has allowed a calculation of the spectral function of this model [15]. While the location of the regions of finite spectral weight and of the singularities agrees with the present study, numerical diagonalization on such small systems does not allow one to determine the critical exponents of the divergences of the 1D Mott insulator.

Spin gaps also arise in many lattice models. E.g., for two coupled Luttinger, Hubbard, or $t$-$J$ chains, there are wide regions of parameter space where the spin fluctuations are massive, and the single-particle spectral function has been calculated occasionally [37].
Again, exact diagonalization finds important shadow bands 37, but the resolution is not good enough to separate the two dispersing divergences found in Section IV for a superconductor, not to speak of the much weaker signal on the shifted charge dispersion ε_ρ(q) predicted above for a CDW system. Evidence for such a weak signal, and for a divergent signal on the gapped spin dispersion ε_σ(q), comes, however, from exact diagonalization of a t−J−J′ model where a spin gap opens for certain values of J′ 45. These authors observe a very strong spinon signal, while the holon peak is anomalously weak, as predicted here. A Bethe Ansatz calculation of spectral functions for a 1D Mott insulator has recently been performed by Sorella and Parola (SP), based on the 1D supersymmetric t−J model 46, and it also confirms essential aspects of the present work. In their model, v_ρ < v_σ, so that we predict a single inverse-square-root singularity on ε_ρ(q). Such a singularity is also found from the Bethe Ansatz solution used by SP. When a finite magnetization is included, SP find critical exponents which explicitly depend on the momentum of the hole created. One would expect, from universality and the possibility of transforming a positive-U Hubbard model into one with negative U by a particle-hole transformation on one spin species alone, that such spectral functions should also describe spin-gapped systems with v_ρ > v_σ. We do not find such momentum dependences in the work presented here. SP's method, however, requires the calculation of the ground state and low-energy properties of the spin Hamiltonian at a finite total momentum of the spin system. These explicitly depend on the momentum and produce the momentum-dependent exponents. In the Luther-Emery model, one calculates a spinon excitation with some momentum with respect to a zero-momentum ground state. The momentum-dependent correlation exponents found by SP are certainly beyond the scope and possibilities of the present method. On the other hand, their method does not allow one to look into more subtle features than critical exponents, such as the finite maximum which we found in this case.

VIII. APPLICATIONS TO EXPERIMENTS

Importantly, our results could prove useful in the description of the photoemission properties of certain quasi-1D materials. There have been angle-resolved photoemission experiments on the 1D Mott insulator SrCuO2 with a gap 2Δ_ρ ∼ 1.8 eV 15. The lineshapes observed were anomalously broad and showed unusual dispersion. As a consequence, the authors proposed a description in terms of a system with charge-spin separation, where the broad feature would, in fact, be composed of the unresolved spin and charge signals. In addition, a strong shadow band bends back from the gap edge for k > k_F. Its dispersion is consistent with that of the charge signal for k < k_F. Clearly, these observations are fully consistent with the theory presented here, which predicts two inverse-square-root singularities beyond some critical wave vector (cf. Fig. 5) and a single one below, as are the accompanying diagonalization results on a 1D t−J model 15. More interesting in the present context are a number of unexplained ARPES results on organic and inorganic materials which undergo Peierls transitions at low temperatures. Specifically, ARPES experiments on the blue bronze K0.3MoO3 by several groups show two dispersing peaks 12. Also in the organic conductor TTF-TCNQ, anomalous lineshapes are observed 13.
Of interest here is the TCNQ band, which shows 2k_F-CDW fluctuations in the metallic state 14 and triggers a series of transitions into a low-temperature CDW phase. While some materials such as the Bechgaard salts 7, or the TTF band of TTF-TCNQ (which has strong 4k_F-CDW fluctuations 14), may well fall into the Luttinger liquid universality class, it is particularly surprising that CDW systems such as K0.3MoO3, or the TCNQ band in TTF-TCNQ, should behave as Luttinger liquids. In fact, the photoemission properties are in striking contrast to the established picture of a fluctuating Peierls insulator, which has been applied quite universally to describe the normal state of CDW systems 47. It predicts a strongly temperature-dependent, narrow [|ω| ≤ Δ_CDW(T = 0)] pseudogap, and ρ(q < 0, ω) is governed by a broadened quasi-particle peak at ω < 0 and a weak shadow at ω > 0 18,48. A Luttinger liquid interpretation for the CDW photoemission is highly suggestive but encounters problems which are all resolved in a Luther-Emery framework. (i) As has been explained before, Luttinger liquids have no dominant 2k_F-CDW correlations: for repulsive interactions (K_ρ < 1), spin density waves are logarithmically stronger than CDWs 1, and the behavior of lattice models is consistent with this picture 49. For attractive interactions, the system is dominated by superconductivity 1. A spin gap is a necessary condition for promoting CDW correlations in correlated 1D electron systems and is realized in the Luther-Emery model! (ii) 2k_F-CDWs often are due to electron-phonon coupling, and the renormalization group provides us with a detailed scenario 1,31. The dependence of the spin gap on the electron-phonon coupling λ, the phonon frequency ω_D, and K_ρ can be calculated reliably 31. A spin gap also opens if 2k_F-CDWs are caused by Coulomb interaction between chains 38. (iii) The spin susceptibility of CDW systems above the Peierls temperature decreases significantly with decreasing temperature, indicative of activated spin fluctuations. This applies both to K0.3MoO3 at temperatures from T_P to beyond 700 K 50, and to the TCNQ chain in TTF-TCNQ, where the magnetic susceptibility contributions of both chains can be separated by NMR 51. Notice in this context that at finite temperature, the density of states in the spin channel of the Luther-Emery model is essentially the same as for the Lee-Rice-Anderson theory of a fluctuating Peierls insulator 52, implying that both models will have similar χ(T). The temperature-dependent susceptibility alone therefore cannot discriminate between these two theories. Remarkably, however, in K0.3MoO3 the conductivity is metallic in the same temperature range: early experiments over a restricted temperature range find the resistance ρ(T) ∼ T 53, while very recent data taken to much higher temperatures even suggest a sublinear temperature dependence 54, not unlike the one found in Luttinger liquids with repulsive electron-electron interactions 55. In TTF-TCNQ, ρ(T) ∼ T has been found 56, but it is not known how the individual chains contribute to this dependence. The experiments are incompatible with the temperature dependence of the conductivity expected in a fluctuating Peierls insulator 18, which indeed is observed in some organic materials and also in (TaSe4)2I. (iv) For a Luttinger model, the stronger divergence in ρ(q, ω) is associated with the charge mode and disperses more quickly than the weaker signal.
In the experiment on K0.3MoO3, the quickly dispersing signal is less peaked than the slow one. On the other hand, the important feature of the Luther-Emery spectral function, Fig. 1, is that the spin gap suppresses the divergence of the charge signal, which disperses more quickly than the divergent spin contribution. (v) A CDW transition out of a Luther-Emery liquid, by opening a charge gap at the Peierls temperature, is also consistent with subtle transfers of spectral weight in regions away from the Fermi energy, observed in spectra taken through the true CDW transition 57. In these experiments, the spectral weight at the Fermi energy is essentially zero at any temperature. However, at some finite energy below E_F, the weight drops with a temperature dependence consistent with a BCS-like gap. In a naive charge-spin-separating Luther-Emery scenario, one would postulate the opening of a charge gap Δ_ρ at the Peierls temperature (as a consequence of the establishment of 3D coherence, allowing for the finite-T transition), in addition to the preexisting spin gap. Thus one expects a drop of spectral weight at the Peierls transition in an energy range between E_F − Δ_σ and E_F − Δ_σ − Δ_ρ which, on a sufficiently coarse temperature scale, would amount to a shift of the leading edge by Δ_ρ. More likely, the establishment of 3D coherence will destroy to some extent the ideal spin-charge separation of the 1D Luther-Emery model and produce a single CDW gap Δ_CDW > Δ_σ below the transition, both for charges and spins. On a quantitative level, there is one major problem for the description of the normal state of most CDW systems: the spin gap derived from an analysis of the magnetic susceptibilities is much smaller than the spin gaps derived from the peak maxima of the ARPES signals. At present it is not clear whether this indicates a fundamental problem with a Luther-Emery model (the problem would, however, not be solved by any competing theory), whether it is due to some not-yet-understood effect in the photoemission process, or whether it is due to some extrinsic sample property. Put differently, it is not clear what mechanism is responsible for apparent gaps which systematically are a sizable fraction of the valence band widths. This phenomenology is not consistent with many other theories proposed for 1D fermions. Theories based on a fluctuating Peierls insulator would have to explain the two dispersing bands seen in K0.3MoO3 as two separate bands. Two such bands indeed exist, but the implication would be that band structure calculations get one of them too narrow by a factor of 5, yet get the correct dispersion for the other one 58. Moreover, they cannot reconcile the activated susceptibility with the essentially metallic conductivity above the Peierls temperature. Standard Luttinger liquids 1,44, but also the anomalous ground states obtained from coupling Luttinger chains so long as their low-energy fixed point is a Fermi liquid 38, produce neither the CDW correlations nor the activated susceptibility. Notice, however, that both transversely coupled Luttinger liquids (Kopietz et al. 38) and the 1D Hubbard model 44 can, under some circumstances, produce spectral functions where the peak on the spinon dispersion is stronger than that on the charge dispersion. They would, however, predict a Fermi surface crossing of the photoemission signal, which is not observed experimentally, in addition to the problems listed above.
In the experiments, instead, the dispersing spectral features bend back from the Fermi energy as k is increased beyond k_F, in a manner strongly reminiscent of the shadow bands discussed before. Despite (important) quantitative problems, the Luther-Emery spectral function is consistent with the photoemission experiments on K0.3MoO3 and TTF-TCNQ, and beyond that, the model is consistent with much of the other experimental phenomenology available. I emphasize that while the agreement of the Luther-Emery spectral function with the observed photoemission lineshapes certainly is an argument in favor of this model, it is the consistency of its predictions with most other experiments available which suggests that it might be a natural starting point for a description of the low-energy physics of these CDW materials. Obviously, this suggestion is somewhat speculative, and independent support is called for. Its virtue is that it comes to grips with the puzzle that the spin susceptibilities of K0.3MoO3 and TTF-TCNQ decrease with decreasing temperature while the conductivities are metallic; that it leaves space for the good description of the optical properties as a fluctuating Peierls insulator (they only probe the charge fluctuations, which will form CDW precursors at temperatures much below the spin gap opening, presumably as a consequence of emerging 3D coherence); and that it provides an (admittedly phenomenological) description of the photoemission properties of these materials with extremely 1D electronic properties 59. As in the Bechgaard salts 7, a single-particle exponent α ∼ 1/2 ... 1 would be required, implying strong long-range electron-electron interactions, and there is at best preliminary support from transport measurements 54 for such strong correlations in K0.3MoO3. Retarded electron-phonon coupling could increase α over its purely electronic value 31. To what extent this mechanism contributes could be gauged from the measured α, which must be larger than the one derived from the enhancement of v_ρ over the band velocity (alas, strongly depending on the accuracy of band structure calculations). In TTF-TCNQ, the analysis is made difficult by the presence of two chains. There is evidence for strong long-range electron-electron interactions on the TTF chain from the observation of 4k_F-CDW fluctuations, but the situation for TCNQ is less clear. If a sizable enhancement of the dispersion of the ARPES signals over the estimated bandwidths can be interpreted as evidence for long-range electronic correlations, they would indeed be present on both chains.

IX. SUMMARY

In this paper, I have presented a construction of the dynamical correlation functions of the 1D Luther-Emery model. This model has one gapped degree of freedom and an ungapped one, and describes 1D superconductors and Peierls insulators (spin gap) and 1D Mott insulators (charge gap). It is a natural extension of Luttinger liquid theory to the peculiar phase intermediate between metal and band insulator, made possible in one dimension by the phenomenon of charge-spin separation. The dynamical correlation functions presented here show where, and to what extent, the two typically 1D features of a Luttinger liquid, charge-spin separation and anomalous dimensions of operators, survive in the presence of a gap in one channel.
Since an exact calculation of such correlation functions usually is not possible in a Luther-Emery model, our construction relied heavily on limiting cases, symmetries, and equivalences to other models. However, it successfully passed several tests in situations where exact results were available from other methods. The main emphasis of the paper was on the single-particle spectral function, which is measured in photoemission. We showed that, generically, charge-spin separation and anomalous dimensions are also visible in the spectral functions of the Luther-Emery model. Specifically, for a spin-gapped system with repulsive interactions, describing a 1D charge density wave system, the spectral function has a true singularity on the gapped spin dispersion with an anomalous exponent α − 1/2, while on the charge dispersion, the Luttinger liquid divergence is cut off to a finite maximum by the spin gap, a result which finds a straightforward explanation in terms of the convolution of charge and spin correlation functions. For attractive interactions, i.e. a 1D superconductor, two divergences with anomalous dimensions are found. For 1D Mott insulators, i.e. a charge gap, one finds one or two inverse-square-root singularities, i.e. no anomalous dimension (due to spin-rotation invariance), depending on the order of the velocities of the charge and spin fluctuations. It was also shown how these procedures can be generalized to two- and multi-particle correlation functions. Besides predicting spectral functions for the many 1D models falling into the Luther-Emery universality class, there are a few experimental situations where these results can be usefully applied. They successfully describe the photoemission spectrum of the 1D Mott insulator SrCuO2 15, to an extent leaving few questions open, the most notable one being experimental resolution. Less clearcut, but perhaps more interesting, are CDW materials such as K0.3MoO3 and TTF-TCNQ, which show very unusual photoemission spectra. These are qualitatively consistent with a Luther-Emery model, and we have proposed that these materials might, most naturally, be described in this framework. A Luther-Emery phase is necessary as an intermediate between a Luttinger liquid and a long-range-ordered CDW, and K0.3MoO3 and TTF-TCNQ are natural candidates for searching for such a strange metal. This scenario requires strong electron-electron interactions at least at high energies, and not all CDW materials need fall into this scheme. If the electron-phonon interaction is so strong as to produce CDW precursor fluctuations at very high temperature, and the electronic correlations are weak enough, the establishment of a Luttinger liquid, and the crossover to a Luther-Emery liquid at lower temperature, may be quenched, and a fluctuating Peierls insulator 47 or a bipolaron liquid 60 may be a more appropriate picture. Some CDW materials such as (TaSe4)2I 11, (perylene)2PF6 18, and (fluoranthene)2PF6 61 apparently are consistent with this picture. However, K0.3MoO3 and TTF-TCNQ are not, and the consistency of the spectral functions constructed in this paper with the published experiments, and the analysis of further experiments, indicate that, besides electron-phonon coupling, electronic correlations must be important in these CDW systems.

FIGURES

FIG. 1. Dispersion of peaks in the spectral function ρ(q, ω) of a spin-gapped Luther-Emery model with v_ρ > v_σ. The dispersion laws ε_ρ(q) and ε_σ(q) are given in the text.
The heavy solid and dashed lines give the signals in the main band [sign(ω) = sign(q)] while the light dashed lines label the shadow bands [sign(ω) = -sign(q)].
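For reference, a sketch of the dispersion laws the caption refers to, in the spin-gapped case. This is the standard Luther-Emery form consistent with the gap edges and coherence factors quoted in Section VI, not an equation copied from the text:

```latex
% Gapless charge mode and gapped spin mode of the spin-gapped
% Luther-Emery model (schematic):
\varepsilon_\rho(q) = v_\rho\,|q|\,, \qquad
\varepsilon_\sigma(q) = \sqrt{\Delta_\sigma^{2} + v_\sigma^{2}\,q^{2}}\,.
```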
2014-10-01T00:00:00.000Z
1998-06-15T00:00:00.000
{ "year": 1998, "sha1": "6e5a69554768b8b8963bb35cbaeb932b43024fa6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9806174", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6e5a69554768b8b8963bb35cbaeb932b43024fa6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
237453109
pes2o/s2orc
v3-fos-license
Combined Effects of Extracorporeal Shockwave Therapy and Integrated Neuromuscular Inhibition on Myofascial Trigger Points of Upper Trapezius: A Randomized Controlled Trial

Objective To investigate the combined effect of extracorporeal shockwave therapy (ESWT) and integrated neuromuscular inhibition (INI) on myofascial trigger points in the upper trapezius. Methods Sixty subjects aged 18-24 years with active myofascial trigger points in the upper trapezius were studied. Participants were assigned randomly to either group A, who received ESWT one session/week; group B, who received INI three sessions/week; or group C, who received ESWT in addition to INI. All groups completed 4 weeks of intervention. The following main outcome measures were evaluated at baseline and after 4 weeks of intervention: pain intensity, functional disability, pressure pain threshold (PPT), sympathetic skin response (SSR), and neuromuscular junction response (NMJR). Results Within-group analysis revealed a significant decrease in visual analog scale (VAS), Arabic neck disability index (ANDI), and NMJR, and an increase in PPT and SSR latency post-intervention (p<0.001). Multiple comparison analysis showed a substantial difference between the groups, with the major changes favoring group C (p<0.05). Conclusion Combined treatment with ESWT and INI for myofascial trigger points in the upper trapezius is more effective than using only one of the two approaches in terms of clinical, functional, and neurophysiological outcomes.

INTRODUCTION

Non-specific neck pain represents a serious economic burden that heavily impacts the health system and can lead to severe dysfunction [1]. Myofascial trigger points (MTrPs) are the main reason behind about 54% of chronic head and neck pain. Myofascial pain is commonly observed in the neck muscles, particularly in the upper trapezius muscle (34.7%) [2]. MTrPs can be defined as hyperexcitable spots in a taut band within skeletal muscle, which ache when the muscle is shortened, stretched, or activated, and which give rise to referred pain [3,4]. In general, the pathophysiology of MTrPs is still not well understood. The peripheral mechanism produced by muscle over-activity is a clear contributor to the development of MTrPs [5,6]. However, changes in the central nervous system and stimulation of the autonomic nervous system are known indicators of long-term muscular stress. Trigger points are commonly found at the neuromuscular junction, where they are expected to cause abnormal activity and biochemical alterations [7,8]. Excessive acetylcholine release or acetylcholinesterase deficiency may cause the development of a taut band that results in persistent muscle contraction [8,9]. Several management strategies for MTrPs are available. These range from non-invasive approaches like massage [10], pressure release [11], ischemic compression [12,13], and/or spray and stretch [14] to invasive techniques such as dry needling [13-16] and injections [17]. For example, extracorporeal shockwave therapy (ESWT) is a non-invasive modality commonly used in the management of musculoskeletal disorders [18]. It can affect the inflammatory phase both by inducing tissue regeneration from stem cells and by decreasing neurotransmitter levels in the affected region, promoting patient improvement [19].
ESWT may also exert a desensitizing effect on the treated area through the depletion of sensory nerve fiber neurotransmitters, in addition to enhancing fibroblast proliferation and the tissue healing process [20,21]. Integrated neuromuscular inhibition (INI) has also been proposed to alleviate neck pain, improve cervical range of motion, and eliminate neck dysfunction [22]. INI has been shown to be an effective treatment for MTrPs, combining three techniques in a single, coordinated application [23]. Myofascial pain syndrome (MPS) is one of the most common chronic pain conditions, but there are still no clear evidence-based clinical guidelines for its ideal management [24]. In clinical practice, it is unlikely that any intervention is performed in isolation. Therefore, it is crucial to consider the effects of combined therapies. Randomized controlled trials (RCTs) investigating manual treatment combined with ESWT for the management of MTrPs are lacking. Hence, this study aimed to examine the combined effects of ESWT and INI on pain intensity, pressure pain threshold (PPT), functional disability, sympathetic skin response (SSR), and neuromuscular junction response (NMJR) among subjects with MTrPs in the upper trapezius. It was hypothesized that the combined implementation of ESWT and INI could offer additional benefits compared to the isolated use of the two approaches.

MATERIALS AND METHODS

This RCT was conducted at the outpatient clinic to investigate the effects of ESWT and INI, in addition to their combined effects, on pain intensity, PPT, SSR, NMJR, and functional disability in subjects with MTrPs in the upper trapezius. The study protocol was approved by the Research Ethical Committee of the Faculty of Physical Therapy, Cairo University, Giza, Egypt (No. P.T. REC/012/002134) and registered at the Pan African Clinical Trial Registry (Registry ID PACTR 20181184486658). The study was conducted between December 2018 and January 2020.

Sample size estimation

The sample size was estimated using the G*Power analytical program (version 3.1.9.2; Franz Faul, University of Kiel, Germany) (F tests; MANOVA: repeated measures, within-between interaction; α=0.05, β=0.2, and a large effect size of f=0.42). The sample size was calculated for the main outcomes (SSR, NMJR) based on a pilot study, which indicated that n=60 was a sufficient sample size.

Subjects

The participants were 60 of the university's undergraduate students (46 females and 14 males), ranging in age from 18 to 24 years. The study methodology and objectives were thoroughly explained to all subjects, who were required to give informed consent for participation. The consenting subjects were randomly distributed into three groups of similar sizes. Subjects were included in the study after meeting certain inclusion criteria. First, medically fit men and women were included. They studied for 3 hours a day, with sufficient breaks in between. They had chronic MTrPs in the upper trapezius for more than 6 months, based on the diagnostic criteria of a taut band with a palpable nodule and referred pain when subjected to pressure [25]. Meanwhile, subjects with previous neck or shoulder pathology (e.g., fracture, surgery, inflammatory and infectious diseases), cervical disc pathology, systemic disorder, fibromyalgia, or those who had undergone physical therapy within the previous 3 months were excluded.
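As a rough cross-check of the sample-size calculation described above, a minimal sketch using the one-way ANOVA power solver in statsmodels; note that this is only an approximation of G*Power's repeated-measures MANOVA interaction test, with f=0.42, α=0.05, power=0.8, and three groups taken from the text:

```python
# Approximate sample-size cross-check. NOTE: a one-way ANOVA power solver
# is used as a stand-in for G*Power's "MANOVA: repeated measures,
# within-between interaction" test, so the result is only indicative.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.42,  # Cohen's f ("large" effect, per the text)
    alpha=0.05,        # significance level
    power=0.80,        # 1 - beta, with beta = 0.2
    k_groups=3,        # groups A, B, and C
)
print(f"total N ~ {n_total:.0f}")  # in the neighborhood of the study's n=60
```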
Subjects who met the inclusion criteria for the trial were assigned at random to one of the following: group A, who received ESWT one session/week; group B, who received INI three sessions/week; or group C, who received ESWT in addition to INI. All groups completed 4 weeks of intervention. The randomization procedure was carried out by opening non-transparent envelopes, which were set up by a single individual using random number generation.

Clinical assessment

Pain intensity

The visual analog scale (VAS) was used to assess pain intensity. A 10-cm line was labeled from "0" (zero pain) to "10" (the worst imaginable pain), and the subjects were instructed to place a vertical mark on the line to indicate their pain [26].

Pressure pain threshold

A digital force gauge with a rubber tip (Wagner FDX25 Force Gauge; capacity 25 × 0.02 lbf; Wagner Instruments, Greenwich, CT, USA) was used to measure the tenderness of the active MTrPs by determining their pressure sensitivity, assessed by holding the device tip perpendicular to the MTrPs with the patient in the supine position. Pressure was exerted at 1 kg/cm² and removed once the patient began to feel uncomfortable. This procedure was repeated three times with a 30-second interval between trials, and the average value was used in the analysis [27]. Earlier research confirmed the intra-rater (ICC 0.6-0.97) and inter-rater (ICC 0.4-0.98) reliability of PPT measurements.

[Participant flow (figure fragment): assessed for eligibility (n=150); baseline assessment (n=60); Group A (n=20) received extracorporeal shockwave therapy only, one session per week.]

Functional disability

Neck function was assessed using the Arabic version of the neck disability index (ANDI), a validated tool for assessing neck function [29] composed of 10 categories with six possible answers each; the patient chose the answer that best described his/her state. A score from 0 to 4 represents no disability, 5 to 14 mild disability, 15 to 24 moderate disability, and 25 to 34 severe disability; a score above 34 represents complete disability [30].

Sympathetic skin response

The patient was seated in a relaxed position in a silent, semi-dark room, with the temperature maintained at 24°C. A single square-wave electrical stimulus was used to stimulate the median nerve at the wrist. The surface and reference electrodes were attached to the palm of the hand. The stimulus was given three times with a minute of rest in between, and the average values of latency and amplitude over the three repetitions were used in the analysis [5].

Neuromuscular junction response

Repetitive nerve stimulation (RNS) of the spinal accessory motor nerve was used to evaluate NMJR. The stimulator was placed on the posterior boundary of the sternocleidomastoid muscle, level with the upper boundary of the thyroid cartilage, over the spinal accessory motor nerve. The surface recording electrodes were positioned 5 cm from the spinous process of the 7th cervical vertebra. A series of 10 supramaximal stimuli at 3 Hz was delivered. The difference in amplitude between the first and fourth compound muscle action potentials (CMAPs) was used to calculate the percentage of decremental or incremental change in CMAP [5].
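A minimal sketch of the CMAP percentage-change computation described here; the amplitude values in the example are hypothetical:

```python
def cmap_percent_change(cmap1_mv: float, cmap4_mv: float) -> float:
    """Percent change between the 1st and 4th CMAP amplitudes:
    negative values indicate a decrement, positive an increment."""
    return (cmap4_mv - cmap1_mv) / cmap1_mv * 100.0

# Hypothetical example: 4th CMAP of 4.5 mV against a 1st CMAP of 5.0 mV.
print(f"{cmap_percent_change(5.0, 4.5):+.1f} %")  # -> -10.0 %
```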
Intervention

Integrated neuromuscular inhibition

INI consists of ischemic compression, strain counter-strain, and the muscle energy technique. All subjects were asked to lie in the supine position to reduce upper trapezius activity, and the technique was applied three times per week for 4 consecutive weeks [4]. (1) Ischemic compression (IC): Thumb pressure was applied over the trigger point of the upper trapezius and maintained until the pain decreased; the pressure was then reapplied until pain was felt again. The whole procedure took 90 seconds and was repeated three to five times [4]. (3) Muscle energy technique (MET): Subjects were asked to raise the affected shoulder and, at the same time, bend the head sideways toward that shoulder against resistance. This isometric contraction was maintained for 7 to 10 seconds. The therapist then stretched the affected upper trapezius by bending the head to the opposite side and rotating it to the same side. The stretch was maintained for 30 seconds, and the technique was repeated three to five times [4].

Extracorporeal shockwave therapy

The patient was instructed to lie in the supine position to reduce upper trapezius activity. Four sessions of ESWT were performed using a Gymna ShockMaster 500 device (Gymna, Bilzen, Belgium) adjusted to the following settings: 1.5 bar, 8 Hz, and 1,000 shocks per trigger point, with one session per week [31].

Outcome measures

VAS, PPT, ANDI, SSR, and NMJR were evaluated both at baseline and after 4 weeks of intervention.

Data analysis

Prior to the final analysis, the data were tested for the assumptions of normality and homogeneity of variance, and no violations were found for any of the dependent variables, as assessed by the Shapiro-Wilk test and Levene's test. Descriptive statistics were computed for all participants at baseline and after 4 weeks of intervention. A mixed-model multivariate analysis of variance (MANOVA) was used to estimate variations between and within groups for the selected parameters (VAS, PPT, ANDI, SSR, and NMJR) pre- and post-intervention. The F value used was based on Wilks' lambda, and when the MANOVA indicated a significant time×group interaction effect, follow-up univariate ANOVAs (two-way mixed model) were executed. The Statistical Package for the Social Sciences (SPSS) version 25 (IBM SPSS, Armonk, NY, USA) was used to perform all statistical tests, and the significance level was set at p≤0.05.
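To illustrate the analysis pipeline just described, a minimal sketch of the univariate two-way mixed-model follow-up step in Python with the pingouin package; the long-format column names (subject, group, time, vas) and the file name are hypothetical, and this reproduces only the univariate follow-up, not the full MANOVA:

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format table: one row per subject per time point,
# with columns: subject, group (A/B/C), time (pre/post), vas.
df = pd.read_csv("trial_long.csv")

# Two-way mixed-model ANOVA for one outcome (here: VAS).
aov = pg.mixed_anova(data=df, dv="vas", within="time",
                     subject="subject", between="group")
print(aov.round(3))

# Bonferroni-adjusted pairwise comparisons, mirroring the
# multiple-comparison analysis reported in the Results.
posthoc = pg.pairwise_tests(data=df, dv="vas", within="time",
                            subject="subject", between="group",
                            padjust="bonf")
print(posthoc.round(3))
```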
RESULTS

Before the trial began, 150 participants were screened for eligibility, of whom 70 fulfilled the inclusion criteria; during the eligibility assessment, 10 subjects were excluded because they declined to participate in the study. The analysis showed a significant main effect of time as well as a significant time×group interaction effect. This interaction effect means that the variation between groups on the linear combination of outcomes differs between pre- and post-intervention. With respect to the demographic and clinical characteristics, there was no significant difference between the groups pre-intervention (p>0.05) (Table 1). After treatment, there was a statistically significant decrease in VAS, ANDI, and NMJR and an increase in PPT and SSR latency in all three groups compared to baseline (p<0.001) (Tables 2-5). The results are illustrated in Fig. 2. However, in terms of the differential effects of the three groups on VAS, PPT, ANDI, SSR latency, and NMJR post-intervention, multiple comparison analysis showed a substantial difference between the three groups, with the major changes in group C (p<0.05).

DISCUSSION

Many reports support the findings of this trial, confirming that INI can improve MTrP-related pain and disability [22,23,32,33]. IC may improve circulation, remove waste products, loosen adhesions, and normalize muscle tone and the patient's response to pain [32], whereas SCS achieves its desired effect by reflexively adjusting the muscle spindle, which can contribute to normal muscle tone [34]. In contrast, MET has hypoalgesic effects through the inhibitory Golgi tendon reflex, which is activated during isometric contraction and results in reflex muscle relaxation [35]. Similarly, ESWT has been shown to be an effective intervention for relieving pain and disability associated with MTrPs [19,36,37]. ESWT exerts its effects by increasing fibroblast proliferation and tissue healing, and it is also able to induce a desensitizing effect on the treated area by depleting sensory nerve fiber neurotransmitters [20,21]. Furthermore, ESWT affects certain hormones and proteins that control tissue function [36]. The findings of the present trial are aligned with those of many previous studies. Ji et al. [37] investigated the impact of ESWT on MTrPs in the upper trapezius and demonstrated its efficacy in reducing pain and improving the pressure pain threshold. In a study comparing the efficacy of ESWT as a noninvasive modality and trigger point injection (TPI) as an invasive modality in the treatment of MTrPs of the quadratus lumborum, Hong et al. [38] found that three sessions of ESWT were more effective than TPI in alleviating pain. Rahbar et al. [39] compared the efficacy of ESWT with conventional treatments such as ultrasound, hot packs, and self-stretching, and found that ESWT was superior in reducing discomfort. Kamel et al. [40] investigated the impact of ESWT versus regional non-steroidal anti-inflammatory medication on pain threshold and intensity for 4 weeks following neck dissection and concluded that ESWT had more significant effects on pain intensity and threshold. The findings of the present trial indicated an increase in SSR latency, which may be due to the anatomical correlation between afferent pain fibers and sympathetic fibers, given that they run parallel within the central nervous system [41]. The increased SSR latency under the combined treatment of ESWT and INI may highlight the suppressive influence of these treatments on sympathetic function. The multiple interneuron interfaces between afferent and efferent fibers in the reflex path may result in the loss or delay of the SSR [42]. MTrPs are thought to develop at the neuromuscular junction, where they cause chemical changes and irregular endplate behavior [7,8]. Excessive endplate irritation induces an extreme release of acetylcholine, and acetylcholine release or a lack of acetylcholinesterase contributes to the development of a taut band that leads to persistent localized muscle fiber contraction [8,9]. Thus, modulating the MTrPs could account for the potential NMJR normalization mechanism. Our findings are consistent with those of previous studies that confirmed the superiority of combined treatments. Lytras et al.
[33] investigated the impact of INI combined with therapeutic exercise on chronic mechanical neck pain and reported more substantial effects on pain and disability compared to therapeutic exercise alone. Similarly, Alghadir et al. [43] studied the combined effects of IC and MET with conventional therapy in the management of neck pain associated with MTrPs and found that the combined treatment was more successful than either of the two performed alone. Moreover, a pilot study carried out by Nasb et al. [44] examined the combined effects of dry cupping and IC in the treatment of MTrPs and neck pain and found that combining the two modalities had a greater impact on function and pain threshold. One drawback of this study worth mentioning is the absence of blinding: all participants were evaluated by the same investigators who implemented the intervention. Additionally, there are no follow-up data on the participants' clinical status, which would have allowed the long-term effects of the intervention to be monitored. Overall, while both ESWT and INI used individually improved pain intensity, PPT, functional impairment, SSR, and NMJR, their combined use led to more marked effects, highlighting the integrated approach as the better option.

CONFLICT OF INTEREST

No potential conflict of interest relevant to this article was reported.
2021-09-10T06:18:09.643Z
2021-08-01T00:00:00.000
{ "year": 2021, "sha1": "a514f2eb144d51813c907153d879ea6c5baf5201", "oa_license": "CCBYNC", "oa_url": "https://www.e-arm.org/upload/pdf/arm-21018.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "86b8a66dad62e7c6ee7bb5f1416cba25150b7898", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
54793074
pes2o/s2orc
v3-fos-license
Warm formaldehyde in the Oph IRS 48 transitional disk

Simple molecules like H2CO and CH3OH in protoplanetary disks are the starting point for the production of more complex organic molecules. So far, the observed chemical complexity in disks has been limited owing to the freeze-out of molecules onto grains in the bulk of the cold outer disk. Complex molecules can be studied more directly in transitional disks with large inner holes, as these have a higher potential for detection, through UV heating of the outer disk and of the directly exposed midplane at the wall. We use Atacama Large Millimeter/submillimeter Array (ALMA) Band 9 (~680 GHz) line data of the transitional disk Oph IRS 48, previously shown to have a large dust trap, to search for complex molecules in regions where planetesimals are forming. We report the detection of the H2CO 9(1,8)-8(1,7) line at 674 GHz, which is spatially resolved as a semi-ring at ~60 AU radius centered south of the star. The inferred H2CO abundance is ~10−8, derived by combining a physical disk model of the source with a non-LTE excitation calculation. Upper limits for CH3OH lines in the same disk give an abundance ratio H2CO/CH3OH > 0.3, which points to both ice formation and gas-phase routes playing a role in the H2CO production. Upper limits on the abundances of H13CO+, CN, and several other molecules in the disk are also derived and found to be consistent with full chemical models. The detection of the H2CO line demonstrates the onset of complex organic chemistry in a planet-forming disk. Future ALMA observations should be able to push the abundance detection limits of other molecules down by 1-2 orders of magnitude and test chemical models of organic molecules in (transitional) disks.

Introduction

Planets are formed in disks of dust and gas that surround young stars. Although the chemical nature of the gas is simple, with only small molecules such as H2, CO, HCO+, or H2CO detected so far, the study of molecules in protoplanetary disks has resulted in a much better understanding of the origin of planetary systems (e.g. Williams & Cieza 2011; Henning & Semenov 2013). Molecular line emission serves as a probe of disk properties, such as density, temperature, and ionization. Furthermore, simple species are the start of the growth of more complex organic and possibly prebiotic molecules (Ehrenfreund & Charnley 2000; Mumma & Charnley 2011). Molecules in disks are incorporated into icy planetesimals that eventually grow into comets and asteroids that may have delivered water and organic material to Earth. Therefore, a better understanding of the chemical composition of the protoplanetary disks in which these icy bodies are formed provides insight into the building blocks of comets and Earth-like planets elsewhere in the Universe. Protoplanetary disks have sizes of up to a few 100 AU, which makes them similar to or larger than our own solar system (∼50 AU radius). However, at the distance of the nearest star-forming regions these disks subtend less than a few arcsec on the sky, so that telescopes with high angular resolution and high sensitivity are needed to study their chemical composition. Most of the disks observed so far show little chemical complexity (e.g. Dutrey et al. 1997; Thi et al. 2004; Kastner et al. 2008; Öberg et al. 2010). In the surface layers of the disk, molecules are destroyed by photodissociation by ultraviolet (UV) radiation from the central protostar.
In the outer disk and close to the disk midplane, temperatures quickly drop to 100 K and lower, where all detectable molecules, including CO, freeze out onto dust grains at temperatures determined by their binding energies (Bergin et al. 2007). The chemical composition thus remains hidden in ices. Only a very small fraction of the ice molecules is returned to the gas phase by nonthermal processes such as photodesorption. Transition disks have a hole in their dust distribution and thus form a special class of protoplanetary disks (Williams & Cieza 2011). The hole allows a view into the usually hidden midplane composition, because the ices at the edge of this hole are directly UV-irradiated by the star, which results in increased photodesorption and thermal heating of the ices (Cleeves et al. 2011). The hole is also an indicator that the disk may be at the stage of forming planets. So far, the chemical composition of the outer regions of transitional disks appears to be similar to that of full disks, with detections of simple molecules including H2CO (Thi et al. 2004; Öberg et al. 2010). However, these data were taken with a typical resolution of >2". The study of the chemistry in protoplanetary disks has recently gained much more perspective with the impending completion of the Atacama Large Millimeter/submillimeter Array (ALMA). ALMA allows us to study astrochemistry within protoplanetary disks at an unprecedented level of complexity and on very small scales. It has the sensitivity to detect not only the dust, but also the gas inside dust gaps in transitional disks (van der Marel et al. 2013; Casassus et al. 2013; Fukagawa et al. 2013; Bruderer et al. 2014). The disk around Oph IRS 48 forms a unique laboratory for testing basic chemical processes in planet-forming zones. IRS 48 is a massive young star (M⋆ = 2 M⊙, T⋆ ∼ 10,000 K) in the Ophiuchus molecular cloud (distance 125 pc) with a transition disk with a large inner dust hole, as revealed by mid-infrared imaging, which traces the hot small dust grains (Geers et al. 2007). The submillimeter continuum data (685 GHz or 0.43 mm) from thermal emission of cold dust obtained with ALMA show that the millimeter-sized dust is concentrated on one side of the disk, in contrast to the gas and the small dust grains. Gas was detected inside the dust gap down to 20 AU radius with the ALMA data (Bruderer et al. 2014), and strong PAH emission is also observed from within the cavity (Geers et al. 2007). The continuum asymmetry has been modeled as a major dust trap (van der Marel et al. 2013), triggered by the presence of a substellar companion at ∼20 AU. The dust trap provides a region where dust grains concentrate and grow rapidly to pebbles and then to planetesimal sizes, eventually producing what may be the analog of our Kuiper Belt. Bruderer et al. (2014) presented and modeled the ALMA CO and continuum data, together with complementary data at other wavelengths, to derive a three-dimensional axisymmetric physical model of the IRS 48 disk. One important conclusion is that the dust in the disk is warm, even out to large radii, because the UV radiation can pass nearly unhindered through the central hole. This implies that the dust temperature is higher than 20 K throughout the disk, so that CO does not freeze out close to the midplane of the disk (Collings et al. 2004; Bisschop et al. 2006). The lack of a freeze-out zone of CO is important, since much of the chemical complexity in a protoplanetary disk is thought to start with the hydrogenation of CO ice.
In the absence of CO ice, gas-phase chemistry is currently the main contributor to complex molecule formation. Alternatively, the disk may have been colder in the past, before the hole was created, with the ices produced at that time now evaporating. One of the simplest complex organic molecules, H2CO, can form both through hydrogenation of CO ice (Tielens & Hagen 1982; Hidaka et al. 2004; Fuchs et al. 2009; Cuppen et al. 2009) and through gas-phase reactions. H2CO has been detected in several astrophysical environments, such as the warm inner envelopes of low- and high-mass protostars and protoplanetary disks, including transitional disks (Dutrey et al. 1997; Ceccarelli et al. 2000; Aikawa et al. 2003; Thi et al. 2004; Bisschop et al. 2007; Öberg et al. 2010; Qi et al. 2013a). In contrast with H2CO, CH3OH can only be formed through ice chemistry (Geppert et al. 2006; Garrod et al. 2006), which means that the H2CO/CH3OH ratio gives information on the H2CO formation mechanism. Furthermore, H2CO is a very interesting molecule for comparing the chemical composition of disks, comets, and our solar system, because H2CO- and CN-bearing molecules such as HCN and CN are precursors of amino acids. Synthesis of amino acids occurs in large asteroids in the presence of liquid water, for instance through the Strecker synthesis route (Ehrenfreund & Charnley 2000). In this work we present the detection of the H2CO 9(1,8)-8(1,7) line in IRS 48 down to scales of ∼30 AU, a high-excitation line originating from a level 174 K above ground. We model its abundance and discuss the implications for the origin of this molecule, in combination with the nondetection of other molecular lines. We were only able to detect this line thanks to the tremendous increase in sensitivity at these high frequencies (∼670 GHz) offered by ALMA.

Observations

Oph IRS 48 (α2000 = 16h27m37.18s, δ2000 = −24°30′35.3″) was observed using the Atacama Large Millimeter/submillimeter Array (ALMA) in Band 9 in the extended configuration in Early Science Cycle 0. The observations were taken in three execution blocks of 1.7 hours each in June and July 2012. During these executions, 18 to 21 antennas with baselines of up to 390 meters were used. The spectral setup contained four spectral windows, centered on 674.00, 678.84, 691.47, and 693.88 GHz. The target lines of this setup were the 12CO J=6-5, C17O J=6-5, CN N=6-5 (J=11/2-11/2), and H13CO+ J=8-7 transitions and the 690 GHz continuum. Each spectral window consists of 3840 channels with a channel separation of 488 kHz, and thus a bandwidth of 1875 MHz, which allows for serendipitous detection of other lines. The resulting velocity resolution is 0.21 km s−1 (for a reference frequency of 690 GHz). Table 1 summarizes the observed lines and frequencies.
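As a quick arithmetic check, the quoted 0.21 km s−1 resolution follows directly from the 488 kHz channel separation via Δv = c Δν/ν; a minimal sketch with the reference frequency and channel width taken from the text:

```python
# Velocity width of one 488 kHz channel at a 690 GHz reference frequency,
# via dv = c * dnu / nu; this should reproduce the quoted 0.21 km/s.
C_KMS = 2.99792458e5  # speed of light [km/s]

def channel_velocity_width(dnu_hz: float, nu_hz: float) -> float:
    """Channel width in velocity units [km/s]."""
    return C_KMS * dnu_hz / nu_hz

print(f"{channel_velocity_width(488e3, 690e9):.2f} km/s")  # -> 0.21
```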
For CH3OH, another transition was covered in our spectral setup at 678 GHz (4(2,3)-3(1,2)), but with the same Einstein A coefficient and E_U as the 674 GHz transition. For CN, the N=6-5, J=11/2-11/2 component covered by our observations has a rather low Einstein A coefficient (averaged over the unresolved hyperfine components). The CN lines at 680 GHz, with Einstein A values stronger by two orders of magnitude, were just outside the correlator setting, which was optimized for 12CO and C17O 6-5. We reduced and calibrated the data using the Common Astronomy Software Application (CASA) v3.4. The bandpass was calibrated using the quasar 3C 279, and fluxes were calibrated against Titan. For Titan, the flux was calibrated with a model fit to the shortest one-third of the baselines, since Titan was resolved out at longer baselines and the fit to the model would not improve. The absolute flux calibration uncertainty in ALMA Band 9 is ∼20%. The resulting images have a synthesized beam of 0.32"×0.21" or 38×25 AU (1.87×10−12 sr) and a position angle of 96° after applying natural weighting and cleaning. After extracting the continuum data and the line data of 12CO J=6-5 and C17O J=6-5, we performed a search for lines of other simple and complex molecules within our spectral setup, but apart from H2CO, no other convincing features were found. A rectangular cleaning mask centered on the detected emission peaks was used during the cleaning of the 12CO line and was adapted to the detected emission in each channel. The final channel-dependent mask was then applied at the frequencies of the targeted weaker lines, given in Table 1. The final rms level was 20-30 mJy beam−1 per 1 km s−1 channel, depending on the exact frequency. For more details on the data reduction, see the supplementary online material of van der Marel et al. (2013).

Results

The H2CO 9(1,8)-8(1,7) line was detected at the 3-8σ level in the channels between v_LSR = −0.5 and +9.5 km s−1 (the source velocity is 4.55 km s−1; van der Marel et al. 2013). The integrated flux between −0.5 and +9.5 km s−1 shows a flattened structure centered just south of the star (Figure 1). The stellar position (Table 1) was determined from the highest-velocity channels in which the 12CO emission from the same data set was detected. The emission extends across a spatial region corresponding to a rectangle [−1.0" to +1.0", −0.3" to +0.2"] with an area of 2.1×10−11 sr. Across this area, the integrated flux is ∼3.1 ± 0.6 Jy km s−1 (see Figures 1 and 2). Where detected, the emission is cospatial with the 12CO 6-5 emission in the same velocity channels, although the northern half of the emission is missing (see Figure 3). Gas orbiting a star in principle follows Kepler's laws, with the velocity v depending on the orbital radius r according to v = √(GM/r), with G the gravitational constant and M the stellar mass. The CO 6-5 emission is found to follow Keplerian motion in an axisymmetric disk (Bruderer et al. 2014). The emission in the high-velocity channels is significantly stronger than at velocities closer to the source velocity, which can partly be explained by limb brightening due to the 50° inclination of the disk.
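For orientation, a minimal sketch of the Keplerian velocity implied by v = √(GM/r) at the radius of the H2CO semi-ring, using the stellar mass quoted in the Introduction (M⋆ = 2 M⊙); the SI constants are hard-coded:

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]
AU = 1.496e11      # astronomical unit [m]

def v_kepler_kms(m_star_msun: float, r_au: float) -> float:
    """Keplerian orbital velocity v = sqrt(G M / r) in km/s."""
    return math.sqrt(G * m_star_msun * M_SUN / (r_au * AU)) / 1e3

# IRS 48: M* = 2 Msun, H2CO semi-ring at ~60 AU -> ~5.4 km/s, which
# projects to a few km/s along the line of sight of the 50-deg disk.
print(f"{v_kepler_kms(2.0, 60.0):.1f} km/s")
```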
Some H2CO and CO emission is found in various channels in the southeast corner (at velocities of +1-2 km s−1) and does not follow the Keplerian motion pattern (Bruderer et al. 2014), but because of the low S/N it is not possible to distinguish whether this emission is real or a cleaning artifact in the case of H2CO. A striking aspect of these images is that H2CO is only detected south of the star, as is also found for the submillimeter continuum, which shows a north-south asymmetry of the disk with a contrast factor of >100. The north-south contrast of H2CO can only be constrained to a factor >2 because of the low S/N of this detection. Moreover, the H2CO does not follow the submillimeter continuum exactly: at the continuum peak no H2CO emission is detected at all. The velocity channels of 12CO between 2.5 and 4.5 km s−1 suffer from absorption by foreground clouds, but the H2CO abundance and excitation in these clouds are too low to absorb the H2CO disk emission in this highly excited line. Thus, the absence of H2CO at the millimeter continuum peak is significant. The other targeted molecules remain undetected, but the nondetections provide 3σ upper limits that can be used in the models. For the 25 mJy beam−1 km s−1 rms level, we estimated the upper limit on the total flux of CH3OH as follows: for the detected H2CO emission, the total surface area dΩ with emission >3σ is ∼1.3×10−11 sr or ∼7 beams, and the detectable emission lies between −0.5 and +9.5 km s−1 (11 channels). The 3σ upper limit on the total flux is thus F < 1.2 × 3σ_rms √(N_beams N_chan) Δv, where σ_rms is the rms per beam per channel, N_beams and N_chan are the numbers of independent beams and channels covering the emission, Δv is the channel width, and the factor 1.2 is introduced to compensate for small-scale noise variations and calibration uncertainties at the high frequency of these observations. Note that this is a conservative limit because the H2CO emission is most likely narrower in the radial direction than the spatial resolution. For CN, H13CO+, and the other molecules in Table 1, we expect the emission to be cospatial with the CO emission, which covers ∼4.8×10−11 sr or ∼26 beams, integrated between −3 and +12 km s−1. The upper limit then follows from the same expression with these values.
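A minimal sketch of this upper-limit estimate, assuming the noise adds in quadrature over independent beams and channels as in the expression above (the published analysis may differ in detail):

```python
import math

def flux_upper_limit(rms_jy_per_beam: float, n_beams: float, n_chan: int,
                     dv_kms: float = 1.0, nsigma: float = 3.0,
                     fudge: float = 1.2) -> float:
    """3-sigma integrated-flux upper limit [Jy km/s], with noise added in
    quadrature over beams and channels; fudge=1.2 absorbs small-scale
    noise variations and calibration uncertainty."""
    return fudge * nsigma * rms_jy_per_beam * math.sqrt(n_beams * n_chan) * dv_kms

# CH3OH: ~7 beams, 11 channels, rms = 25 mJy/beam per 1 km/s channel.
print(f"CH3OH:  < {flux_upper_limit(0.025, 7, 11):.2f} Jy km/s")
# CN, H13CO+, ...: ~26 beams, 15 channels (-3 to +12 km/s).
print(f"CN etc: < {flux_upper_limit(0.025, 26, 15):.2f} Jy km/s")
```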
Physical structure

Analysis of the abundance and spatial distribution of the H2CO in the disk requires a physical model of the temperature and gas density as a function of radius and height in the disk. We used the best-fitting model of the gas structure from Bruderer et al. (2014), based on the 12CO and C17O 6-5 lines from the same ALMA data set and the dust continuum. The proper interpretation of the gas disk seen in 12CO 6-5 emission requires a thermochemical disk model, in which the heating-cooling balance of the gas and the chemistry are solved simultaneously to determine the gas temperature and molecular abundances at each position in the disk. Moreover, even though the densities in disks are high, the excitation of the rotational levels may not be in thermodynamic equilibrium, and there are steep temperature gradients in both the radial and vertical directions in the disk.

The DALI model

[Figure caption fragment: In the H2CO maps, the white contours indicate 20%, 40%, 60%, 80%, and 100% of the peak intensity of all channels (the peak intensity is 195 mJy beam−1). The dashed ellipse indicates the 60 AU radius ring. The CO +4 to +6 km s−1 channels are affected by foreground absorption.]

The DALI model (Bruderer et al. 2012; Bruderer 2013) uses a combination of a stellar photosphere with a disk density distribution as input. For IRS 48, the stellar photosphere is represented by a blackbody of 10,000 K. The code solves for the dust temperature through continuum radiative transfer from UV to millimeter wavelengths and calculates the chemical abundances, the molecular excitation, and the thermal balance of the gas. It was developed for the analysis of gas emission structures such as those found in transitional disks (Bruderer 2013). DALI uses a reaction network described in detail in Bruderer et al. (2012) and Bruderer (2013). It is based on a subset of the UMIST 2006 gas-phase network (Woodall et al. 2007). About 110 species and 1500 reactions are included. In addition to the gas-phase reactions, some basic grain-surface reactions (freeze-out, thermal and nonthermal evaporation, and hydrogenation such as g:O → g:OH → g:H2O, and H2/CH+ formation on PAHs) are included. The g:X notation refers to atoms and molecules on the grain surface. The photodissociation rates are obtained from the wavelength-dependent cross sections of van Dishoeck et al. (2006). The adopted cosmic-ray ionization rate is ζ = 5×10−17 s−1. X-ray ionization and the effect of vibrationally excited H2 are also included in the network. The model chemistry output will be used for comparison with the HCO+ and CN data. However, since no extensive grain-surface chemistry is included, it is not suitable for modeling the H2CO and CH3OH chemistry. The surface density profile of the best-fitting model for IRS 48 (Bruderer et al. 2014), including the gap, is shown in Figure 4. The gas disk around IRS 48 is found to have a very low mass (1.4×10−4 M⊙ = 0.15 MJup) compared with the mean disk mass of ∼5 MJup for normal disks (Williams & Cieza 2011; Andrews et al. 2013). Furthermore, the disk has a large scale height, allowing a large portion of the inner walls to be irradiated by the star. The radial gas structure has two density drops: a drop δ20 of <10−1 in the inner 20 AU (probably caused by a planetary or substellar companion) and an additional drop δ60 of 10−1 at 60 AU radius. The resulting densities, temperatures, and CO abundances of the model are given in Figure 5.

[Fig. 4 caption fragment: The blue lines indicate the gas surface density, the red lines the dust surface density. The dotted black lines show the undisturbed surface density profiles if they continued from outside inwards without depletion. The green dashed lines indicate the radii at which the depletions start.]

The gas density is 10^5-10^6 molecules cm−3 in the upper layers of the disk, increasing to 10^8 cm−3 close to the midplane near 60 AU. The gas temperature is typically a few hundred K, except in the upper layers, where it reaches several thousand K. The UV radiation field is enhanced by factors of up to 10^8 in the disk, indicated by G0 (panel 6 in Figure 5). G0 = 1 refers to the interstellar radiation field as defined in Draine (1978), ∼2.7×10−3 erg s−1 cm−2 with photon energies in the far-UV range between 6 eV and 13.6 eV. The CO abundance is ∼10−4 with respect to H2 throughout the bulk of the outer disk, and CO is not frozen out. The H2CO emission was modeled using the LIne Modeling Engine (LIME), a non-LTE spectral line radiative transfer code (Brinch & Hogerheijde 2010).

[Fig. 5 caption fragment, Bruderer et al. (2014): The panels show gas density (cm−3), gas temperature (K), CO abundance with respect to H2, dust density (cm−3), dust temperature (K), and UV field in G0 (G0 = 1 refers to the standard interstellar radiation field). The given numbers are on a log10 scale.]

The physical structure described above is used as input. The first step in the analysis is to empirically constrain the H2CO abundance by using three different trial abundance profiles guided by astrochemical considerations. The inferred abundances were then compared a posteriori with those found in full chemical models. In model 1, the abundance was assumed to be constant throughout the disk, testing abundances between 10−5 and 10−11 with respect to H2. In model 2, the abundance was taken to follow the CO abundance calculated by the DALI model, with a fractional abundance ranging between 10−3 and 10−8 with respect to CO. Model 3 was inspired by Cleeves et al. (2011), setting the H2CO abundance to zero except in a ring between 60 and 70 AU.
Model 3 assumes that the UV-irradiated inner rim has an increased chemical complexity that can be observed directly, as material from the midplane has been liberated from the ices. The abundance profile is additionally constrained by photodissociation and freeze-out. H2CO can only exist below the photodissociation height, taken as the height (z direction) at which a hydrogen column density of N(H2) = 4 · 10^20 cm^-2 is reached. At this column, the CO photodissociation rate drops significantly owing to shielding by dust as well as self-shielding and mutual shielding by H2 for low gas-to-dust ratios (Visser et al. 2009). Therefore, this height was calculated at each radius and the abundance was set to zero above it. For radii <60 AU, H2CO is photodissociated almost entirely down to the midplane because of the lower total column density. Furthermore, H2CO is expected to be frozen out on the grains at temperatures <60 K, since it has a higher binding energy than CO (Ioppolo et al. 2011); the abundance in regions below this temperature was therefore also set to zero. All three abundance profiles are shown in Figure 6.

The LIME grid was built using linear sampling with the highest grid density starting at 60 AU, using 30 000 grid points and 12 000 surface grid points, and an outer radius of 200 AU. The image cubes were calculated for 60 velocity channels at 0.5 km s^-1 spectral resolution, in 5"×5" maps with 0.025" pixels. Collisional rate coefficients were taken from the Leiden Atomic and Molecular Database (LAMDA) (Schöier et al. 2005), with references to the original coefficients as follows: H2CO (Troscompt et al. 2009), CH3OH (Rabli & Flower 2010), H13CO+ (Flower 1999), and CN (Lique et al. 2010). For the other molecules the emission was calculated in LTE only, using the parameters from CDMS (Müller et al. 2001, 2005).

For CH3OH, H13CO+, CN, and the other molecules we ran models to constrain the upper limits. For CH3OH, the same abundance profiles as for H2CO were taken, because CH3OH is expected to be cospatial with H2CO if both are formed through solid-state chemistry. For H13CO+, the initial CO abundance was taken and multiplied by factors of 10^-5 to 10^-11, as HCO+ is known to form from CO through the H3+ + CO proton-donation reaction and is indeed observed to be strongly correlated with CO (Jørgensen et al. 2004). For the other molecules the same approach as for H13CO+ was used.

The model results for H2CO are presented in Figure 7. Integrated fluxes were computed by summing the model fluxes over the same rectangular region as the H2CO observations, after subtracting the continuum. Figure 7 presents the model fluxes as a function of H2CO abundance for the three trial abundance structures.

[Fig. 6 caption: Trial abundance models 1, 2, and 3 for H2CO. The H2CO abundance is limited by photodissociation in the upper layer and by freeze-out below 60 K. Model 1 assumes a constant abundance; model 2 assumes a fractional abundance with respect to CO; model 3 assumes a constant abundance between 60 and 70 AU radius and zero abundance at other radii.]

[Fig. 7 caption: Results of the H2CO abundance models: total flux integrated over the emission rectangle of the observations for different abundances for model 1 (purple), model 2 (blue), and model 3 (red). The fractional abundances with respect to CO for model 2 have been multiplied by 10^-4 for easier comparison, as the CO/H2 abundance is typically 10^-4. The dotted line indicates the measured flux, and the gray bar indicates the uncertainty on this value based on the flux calibration uncertainty.]
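The photodissociation and freeze-out constraints described above amount to a simple masking of the trial profiles; a sketch, assuming that the overlying H2 column density and the dust temperature are available as arrays on the same grid (the array and function names are ours):

```python
import numpy as np

def apply_cuts(x_h2co, n_h2_above, t_dust, n_crit=4e20, t_freeze=60.0):
    """Zero the H2CO abundance where it cannot survive.

    x_h2co     : trial H2CO abundance on the (r, z) grid
    n_h2_above : H2 column density from the disk surface down (cm^-2)
    t_dust     : dust temperature (K)
    """
    x = np.where(n_h2_above < n_crit, 0.0, x_h2co)  # photodissociated layer
    x = np.where(t_dust < t_freeze, 0.0, x)         # freeze-out zone (<60 K)
    return x
```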
Model 1 with a constant H2CO abundance of ∼10^-8±0.15 with respect to H2, or model 2 with an abundance of ∼10^-4 with respect to CO, reproduces the total observed flux well within the error bar. There is little difference between model 1 and model 2 except for the factor of 10^4 that has been taken into account in Figure 7; this is expected, because the CO abundance in most of the defined region is 10^-4 with respect to H2. Model 3 requires an abundance a factor of 3 higher than 10^-8 in the 60-70 AU ring to give the same integrated flux. Note that the LIME model fluxes obtained with non-LTE calculations are only about 25% lower than the LIME models in LTE, owing to the high densities in the disk.

[Fig. 9 caption: Column density profiles of H2CO for the best abundance fits for models 1 (black), 2 (blue), and 3 (red). Abundances are 10^-8 for model 1, 10^-4 w.r.t. CO for model 2, and 10^-7 for model 3.]

The spectra (Figure 8) confirm that the best match to the flux for model 1 is an abundance of ∼10^-8, although the peaks at the highest velocities are up to twice as high as the data. The slight asymmetry in these model spectra is caused by the spatial integration over a rectangle, whereas the disk has a position angle of 96°. The line wings in models 1 and 2 originate from emission at radii <60 AU, which is missing in model 3, but the S/N of the data is too low to detect the difference. In the bottom panel of Figure 8 the spectra have been scaled so that the total flux exactly matches that of the observations. The current models underproduce the emission close to the central velocities, suggesting that the abundance may be even more enhanced in the central southern part of the disk at these velocities and is not constant along the azimuthal direction of the semi-ring. It is also possible that there is enhanced H2CO at larger radii than assumed here if the freeze-out zone is removed, although this does not add a significant amount of emission at the central velocities. Calculation of an abundance model without the freeze-out zone shows that this indeed increases the emission at central velocities, but the S/N of the data is insufficient to confirm or exclude emission at larger radii. Overall, it is concluded that the abundance is ∼10^-8 with respect to H2 to within factors of a few.

The final comparison between models and data is made by comparing images. To produce images from the model output cubes, the images were convolved with the ALMA beam of the observations (0.31"×0.21", PA 96°). As in Bruderer et al. (2014), the model images convolved with the ALMA beam were compared with simulated ALMA observations. An alternative method is to convert the model images to (u, v) data according to the observed (u, v) spacing using the CASA software and to reduce them in the same way as the observations. Because of the good (u, v) coverage of our observations, the two approaches do not differ measurably within the uncertainties. Figures 10 and 11 indicate that the three models show a ring-like structure similar to the observations, apart from the emission in the north that is lacking in the data.
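For the image comparison, the convolution with the ALMA beam can be sketched with a hand-built elliptical Gaussian kernel (pixel scale 0.025" as in the LIME setup; beam 0.31"×0.21" at PA 96°; the position-angle convention in this sketch is an assumption):

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_beam(bmaj, bmin, pa_deg, pix, size=64):
    """Normalized elliptical Gaussian kernel; bmaj/bmin are FWHM in arcsec."""
    fwhm2sig = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    sig_maj = bmaj * fwhm2sig / pix   # in pixels
    sig_min = bmin * fwhm2sig / pix
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    t = np.deg2rad(pa_deg)            # assumed measured east of north
    xr = x * np.cos(t) - y * np.sin(t)
    yr = x * np.sin(t) + y * np.cos(t)
    k = np.exp(-0.5 * ((xr / sig_min) ** 2 + (yr / sig_maj) ** 2))
    return k / k.sum()

beam = gaussian_beam(0.31, 0.21, 96.0, pix=0.025)
# convolved = fftconvolve(channel_map, beam, mode="same")  # per channel
```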
The differences between the models are best seen in the velocity channel maps: models 1 and 2 still show some emission within the ring at 30-60 AU, which is higher for model 1 than for model 2 because the CO abundance is somewhat lower between 40 and 60 AU. Model 3 shows no emission within the ring by design.

A possible explanation for the missing H2CO emission at the peak of the dust continuum is that the dust is not entirely optically thin. The optical depth averaged over the continuum region was calculated as τ_d ∼ 0.43 (van der Marel et al. 2013). If we assume all the H2CO line emission I_line to originate from behind the dust, the resulting intensity is

I = I_dust + I_line e^(-τ_d).    (3)

The first term is the measured dust intensity, which is subtracted from the H2CO data. The second term describes the reduction of the line intensity by continuum extinction. This extinction was calculated by multiplying the model output by the exponential of a 2D τ_d profile, where τ_d follows the continuum emission profile of the observed dust-trap area. The maximum τ_d was taken as 0.86, recovering the averaged opacity over this area. This correction represents an upper limit on the continuum extinction effect. The result is shown in the bottom panels of Figure 11. The continuum extinction decreases the H2CO emission in the south by more than a factor of 2, while some emission between the star and the dust continuum remains (for models 1 and 2). Although the strength of this emission relative to the peaks at the edges is still lower than in the observations, the model image is now more consistent with the observations. The total integrated flux is only 10% lower, because the rectangle used for the spatial integration did not cover most of the dust continuum. Both the S/N of the line data and the unknown mixing of the gas and dust prevent a more detailed analysis of this problem, but it is clear that the continuum extinction cannot be neglected. The observations show that the emission at the location of the dust peak is a factor of 4 lower than at the east and west limbs (see Figure 1), which requires a τ_d of at least 1.4. Submillimeter imaging at longer wavelengths is required to measure the dust optical depth more accurately.

Overall, the conclusion is that the current data constrain the H2CO abundance in the warm gas where H2CO is thought to reside to ∼10^-8, but the current S/N is too low to distinguish between the different assumptions on the radial distribution of H2CO. However, the three models make different predictions for the distribution, which we expect to see more clearly in future higher-S/N ALMA data.

The total model fluxes for CH3OH, H13CO+, and CN are compared with the derived upper limits in Figure 12, where all abundances have been multiplied by 10^-4 to translate the abundance w.r.t. CO into an abundance w.r.t. H2. The other targeted molecules with upper limits were compared in the same way (plots not displayed here). For the CH3OH lines, the upper limit on the integrated flux is consistent with an abundance limit in model 1 of <3 · 10^-8, thus H2CO/CH3OH > 0.3. For H13CO+, the upper limit sets the abundance at <10^-6 with respect to CO. This indicates an HCO+/CO abundance ratio of <10^-4, or an absolute abundance HCO+/H2 of <10^-8. The CN emission is poorly constrained: the upper limit sets the CN/CO abundance at <5 · 10^-4, or CN/H2 < 5 · 10^-8.
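The continuum-extinction correction of Eq. (3) and the limb-contrast argument reduce to two lines of arithmetic; a sketch:

```python
import numpy as np

def extinct_line(i_line, tau_d):
    """Line emission from behind the dust is attenuated by exp(-tau_d), Eq. (3)."""
    return i_line * np.exp(-tau_d)

# A factor-4 drop at the dust peak relative to the limbs requires
# exp(-tau_d) <= 1/4, i.e. tau_d >= ln(4):
print(np.log(4.0))   # ~1.39, the tau_d >= 1.4 quoted in the text
```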
The reason for the poorly constrained CN emission is the low Einstein A coefficient of this particular transition, which is almost three orders of magnitude lower than those of the other transitions in this study. All derived absolute abundance limits are given in Table 2.

Table 2. Derived absolute abundances and abundance limits (with respect to H2).

  Molecule    Abundance
  H2CO        1 · 10^-8
  CH3OH       < 3 · 10^-8
  H13CO+      < 1 · 10^-10
  CN          < 5 · 10^-8
  34SO2       < 1 · 10^-8
  C34S        < 1 · 10^-9
  HNCO        < 3 · 10^-9
  c-C3H2      < 3 · 10^-9
  N2D+        < 1 · 10^-10
  D2O         < 1 · 10^-9

For 34SO2 and C34S we found absolute abundances of <10^-8 and <10^-9, respectively, corresponding to abundances of <2 · 10^-7 and <2 · 10^-8 for the main isotopologues, assuming an elemental sulfur isotope ratio 32S/34S of 24 (Wilson & Rood 1994). For the deuterated molecules N2D+ and D2O the D/H ratio in these molecules is not known, so no upper limits on N2H+ or H2O can be obtained.

Origin of the H2CO emission

H2CO can be formed efficiently in the ice phase by hydrogenation of solid CO, as shown in laboratory experiments, and this can be followed by the formation of CH3OH (Hiraoka et al. 2002; Watanabe & Kouchi 2002; Watanabe et al. 2004; Hidaka et al. 2004; Fuchs et al. 2009) through successive additions of atomic hydrogen:

CO → HCO → H2CO → CH3O → CH3OH.

Because CO ice is highly abundant in the cold dusty regions of clouds and disks, this formation route is commonly assumed to be the origin of gas-phase H2CO, following thermal or nonthermal desorption. In that case, CH3OH is expected to have a similar or higher abundance than H2CO, because they are formed along the same sequence (Tielens & Hagen 1982; van der Tak et al. 2000; Cuppen et al. 2009). If all ices are thermally desorbed, gas-phase abundances can be as high as ∼10^-6 to 10^-5.

There is no known efficient gas-phase chemistry for the formation of CH3OH (Geppert et al. 2006; Garrod et al. 2006), whereas H2CO can also be formed rather efficiently in the gas phase. At low temperatures, the reaction CH3 + O → H2CO + H dominates its formation, whereas it is mainly destroyed through reactions with ions such as HCO+ and H3O+. These reactions lead to H3CO+, which cycles back to H2CO through the dissociative recombination H3CO+ + e- → H2CO + H. However, since the branching ratio to H2CO is only 0.3 (Hamberg et al. 2007), the ion reactions lead to a net destruction of H2CO. Typical dark-cloud gas-phase model abundances are a few × 10^-8 relative to H2 (McElroy et al. 2013). At low dust temperatures (<60 K), H2CO can freeze out after formation. At higher temperatures in the presence of abundant H2O (T > 100 K), formaldehyde can also form through gas-phase reactions such as CH2+ + H2O → H3CO+ + H followed by dissociative recombination. Above a few 100 K, formaldehyde is destroyed efficiently through H + H2CO + 1380 K → HCO + H2.

H2CO has been detected in warm protostellar cores together with CH3OH. Typical H2CO abundances in the warm gas are ∼3 × 10^-7 with respect to hydrogen, whereas CH3OH has an abundance higher by a factor of 5 (H2CO/CH3OH ≈ 0.2) (Bisschop et al. 2007), suggesting a main formation route through solid-state chemistry followed by sublimation.

[Fig. 12 caption: Model results for the integrated fluxes of CH3OH, H13CO+, and CN in the empirical models. The dashed line indicates the upper limit on the integrated intensity, assuming the molecule is cospatial with H2CO (CH3OH) or CO (H13CO+ and CN). Abundances were calculated with respect to the CO abundance, but in these plots they are multiplied by 10^-4 to translate the abundance w.r.t. CO to w.r.t. H2.]
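To illustrate how sharply the H + H2CO destruction channel above switches on with temperature (relevant for the north-south asymmetry discussed below), consider only the Boltzmann factor exp(-Ea/T) with Ea = 1380 K; the two temperatures here are hypothetical, chosen only to realize a factor-3 contrast, and rate prefactors are omitted:

```python
import numpy as np

ea = 1380.0                       # barrier of H + H2CO -> HCO + H2, in K
t_warm, t_cool = 300.0, 100.0     # hypothetical factor-3 temperature contrast
ratio = np.exp(-ea / t_warm) / np.exp(-ea / t_cool)
print(f"destruction ~{ratio:.0f}x faster at {t_warm:.0f} K")  # ~1e4
```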
Recent observations of these molecules in the Horsehead PDR and core show a higher abundance ratio H2CO/CH3OH of 1-2, which cannot be produced by pure gas-phase chemistry (Guzmán et al. 2013).

H2CO has been detected in several protoplanetary disks (Dutrey et al. 1997; Aikawa et al. 2003; Thi et al. 2004; Öberg et al. 2010, 2011; Qi et al. 2013a), which has usually been interpreted through the solid-state chemical path, even though gas-phase CH3OH has not been detected in any of these disks owing to sensitivity limitations. With high enough sensitivity, the combination of H2CO and CH3OH observations would clearly distinguish between ice-phase and gas-phase chemistry. For the disk around HD 163296, the rotational temperature of H2CO was measured to be <30 K by fitting the emission of several transitions, and its abundance was found to be enhanced outside the CO 'snowline' at 20 K, suggesting a solid-state route (Qi et al. 2013a). The measured H2CO/CH3OH ratio limit for Oph IRS 48 of >0.3 is similar to the limit found for LkCa 15 (Thi et al. 2004) and only slightly higher than that found in warm protostars. This low limit therefore does not exclude the solid-state formation route, but it is also consistent with a gas-phase contribution to H2CO. However, the H2CO abundance of 10^-8 is much higher than the detections and upper limits of the disk-averaged abundances in Thi et al. (2004), which are typically <10^-11. Most of this difference stems from the fact that the disks in Thi et al. (2004) are two orders of magnitude more massive and also colder, with a large portion of the CO and related molecules frozen out. This cold zone is lacking in the IRS 48 disk, and our analysis has already removed the zones in which H2CO is photodissociated or frozen out. Nevertheless, as shown below, the H2CO emission in IRS 48 is unusually strong for its disk mass, hinting at an increased richness of chemistry in transitional disks due to the directly irradiated inner rim at the edge of the gas gap, as predicted (Cleeves et al. 2011).

Another interesting aspect of the H2CO emission is its azimuthal asymmetry in IRS 48, which is similar, although not exactly cospatial, to the dust continuum asymmetry. The dust asymmetry was modeled (van der Marel et al. 2013) as a dust trap caused by a vortex in the gas distribution with an overdensity of a factor of 3, and it is possible that the increased H2CO emission at this location is in fact tracing this overdensity in the gas. The image quality and S/N of the observations are insufficient to confirm this claim, however, and no overdensity was observed in the 12CO 6-5 emission (Bruderer et al. 2014). Moreover, the H2CO emission does not peak exactly at the peak of the dust continuum emission: in fact, the H2CO emission is decreased at the continuum peak, as discussed in Sect. 4.2 (see Figure 1). It is possible that this decrease is caused by the high optical depth of the millimeter dust, which absorbs the line emission of the optically thin gas all the way from the midplane (see Figure 11 and Eq. 3). This cannot be confirmed because of the unknown vertical mixing of the gas and dust. Finally, a nondetection of an overdensity in the gas is consistent with the dust-trapping scenario: a vortex in the gas may already have disappeared while the dust asymmetry it created remains, because the time scale for smoothing out a dust asymmetry is on the order of several Myr (Birnstiel et al. 2013).
Another possible explanation for the increased H2CO emission in the south is a local decrease in temperature compared with the north. In that case H2CO is destroyed efficiently in the north through the H + H2CO + 1380 K → HCO + H2 reaction, but not in the south. This possibility is consistent with the reduced CO emission in the south in both the rovibrational lines (Brown et al. 2012) and the 12CO 6-5 lines (Bruderer et al. 2014), where Bruderer et al. (2014) suggested a temperature decrease of a factor of 3. This temperature decrease might be caused by UV shielding in the inner disk. Note that the millimeter dust concentrated in the south does not provide sufficient cooling to change the gas temperature significantly through gas-grain collisions: the small dust grains carry most of the surface area and thus dominate the gas-grain temperature exchange. If the temperature drops locally to between 20 and 30 K, the H2CO might even be an ice-phase product, but this is very unlikely. A final possibility is that the increase in H2CO is caused by the increased dust density and dust collisions in this region. If the grain-grain collisions occur at high enough speed (which depends on the turbulence in the disk), the ice molecules may become separated from the grains. However, the dust grains reached temperatures above the sublimation temperature of 60 K long before reaching the dust trap, so this is not very likely within reasonable time scales.

Comparison with chemical models

The column density of H2CO for the best-fitting model with abundance 10^-8 is ∼10^14 cm^-2 for radii >60 AU (Figure 9), which is within an order of magnitude of the predictions of chemical models for protoplanetary disks (Semenov & Wiebe 2011; Walsh et al. 2012, 2013). For the 10^-7.5 abundance upper limit for CH3OH we found a column density limit of ∼2-4 · 10^14 cm^-2 for radii >60 AU, which is well above the values derived in chemical models (Walsh et al. 2012; Vasyunin et al. 2011), typically 10^10 cm^-2. The DALI model (Bruderer et al. 2014) produces predictions for HCO+ and CN: the HCO+ abundance peaks at 5 · 10^-9, but only in a very thin upper layer of the disk where the CO+ + H2 reaction operates; the disk-averaged abundance is much lower. The predicted integrated model intensity for H13CO+ is 0.02 Jy km s^-1, far below our detection limit of 1.8 Jy km s^-1. The peak abundance for CN is 3 · 10^-7, with an integrated intensity of 1.1 Jy km s^-1, which is close to our upper limit. The other CN 6-5 lines at 680 GHz (outside the range of the spectral window of our observations) have an Einstein coefficient that is 2 orders of magnitude higher and should be readily detectable, with an integrated flux of 15 Jy km s^-1.

Comparison of upper limits with other observations

The HCO+/CO ratio of <10^-4 (or HCO+/H2 < 10^-8) for IRS 48 is consistent with values found in disks, protostellar regions, and dark clouds. The ratio is often found to be ≥10^-5, where the lower limit is due to the unknown optical depth of the observed HCO+ line (Thi et al. 2004). Our H13CO+ line does not suffer from this problem, but our inferred HCO+ abundance is not more stringent, because of the very low gas mass of IRS 48: for similar abundances, all measured fluxes would be a factor of 10-100 lower than for other full disks.
HCO+ is produced by the gas-phase reaction H3+ + CO → HCO+ + H2, which relates its abundance directly to the ionization, because the parent molecule H3+ is formed efficiently through cosmic-ray ionization. Our abundance limit cannot set a strong constraint on the ionization fraction in the disk. The ionization fraction in disks is important because the magneto-rotational instability (MRI), believed to drive viscous accretion, requires ionization to couple the magnetic field to the gas (Gammie 1996). Insufficient ionization may suppress the MRI and create a so-called dead zone, which can create dust traps at its edge where the dust grains gather (Regály et al. 2012). However, the ionization fraction needs to be lower than about 10^-12 to induce a dead zone (Ilgner & Nelson 2006). On the other hand, recent models suggest that cosmic rays may be excluded altogether from disks around slightly lower-mass stars (Cleeves et al. 2013). A detection of HCO+ at the level suggested by our models would provide direct proof of the presence of cosmic rays that ionize H2 at a rate of ∼5 × 10^-17 s^-1.

The CN upper limit is difficult to compare with literature values, because our upper limit is very high owing to the low Einstein A coefficient. Literature values give derived abundances of ∼10^-10 (Thi et al. 2004), three orders of magnitude lower than our upper limit, which is again caused by the low gas mass of our disk. As noted above, the full chemical model by Bruderer (2013) suggests abundances very close to our inferred upper limits. The CN/HCN ratio is a related tracer of photodissociation in the upper layers and at the rim of the outer disk: a high ratio indicates a strong UV field, since CN is produced by radical reactions with atomic C and N (in the upper layers) and by photodissociation of HCN, whereas CN itself cannot easily be photodissociated (Bergin et al. 2003; van Dishoeck et al. 2006). The CN/HCN ratio is generally found to be higher in disks around the hotter Herbig stars. HCN observations are required to measure this ratio for IRS 48.

Several of the other targeted molecules have been detected towards cores and protostars, such as 34SO2 (Persson et al. 2012), N2D+ (Emprechtinger et al. 2009), and HNCO (Bisschop et al. 2007). N2D+ can be used as a deuteration tracer in combination with N2H+, and therefore as a tracer of temperature evolution, but the N2H+ line was not within our spectral setup. The H2CO/HNCO abundance limit of >0.3 derived for IRS 48 is rather conservative compared with values in cores of ∼10 (Bisschop et al. 2007), but the N-bearing molecules are weak in IRS 48. The c-C3H2 molecule was recently detected for the first time in the HD 163296 disk (Qi et al. 2013b), and the derived column density of ∼10^12 cm^-2 is well below our observed limit for IRS 48 of 10^13 cm^-2.

Predictions of line strengths of other transitions

Most observations of molecules in disks report only intensities and do not derive abundances, since a physical disk model is generally not available. To compare our observations with these data, we calculated the expected disk-integrated fluxes for other transitions, using our physical model and the derived abundances or upper limits. These results are presented in Figure 13. The ALMA sensitivity limits for Band 6, Band 7, and Band 9 (230, 345, and 690 GHz) for the full ALMA array of 54 antennas and a 1-hour integration are included to investigate which lines provide the best constraints for future observations.
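The line selection behind Figure 13 amounts to comparing predicted integrated fluxes with the per-band sensitivity; a schematic sketch in which the line names, fluxes, and sensitivity numbers are placeholders rather than values from our models:

```python
# Hypothetical predicted fluxes (Jy km/s) and 3-sigma 1-hour sensitivities.
predicted = {"line A (B7)": 2.0, "line B (B7)": 0.3, "line C (B9)": 0.5}
sens_3sigma = {"B6": 0.02, "B7": 0.03, "B9": 0.3}   # placeholder numbers

for line, flux in predicted.items():
    band = line[line.find("(") + 1:-1]
    verdict = "worth targeting" if flux > sens_3sigma[band] else "too faint"
    print(f"{line}: {flux} vs {sens_3sigma[band]} Jy km/s -> {verdict}")
```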
Figure 13 shows the potential of future observations of IRS 48, indicating that with the full ALMA array much better upper limits or detections can be reached for all of these species in just one hour of integration by choosing the right lines. The H2CO flux can be measured at much higher S/N, and a wide range of abundances can be tested. Lines in Band 7 (345 GHz) generally have the highest potential. CH3OH has not been detected in other disks to date; upper limits derived for a few disks (Thi et al. 2004) give abundances <10^-10, below our limits for IRS 48. The fact that our observed H2CO fluxes are similar to those of other disks in spite of the low disk mass bodes well for future studies. Targeted ALMA observations of the strongest lines will provide much better sensitivity and are expected to reduce the abundance limits easily by 1-2 orders of magnitude. Together with searches for other complex organic molecules formed preferentially in the ice, this will allow direct tests of the mechanism of sublimation of midplane ices in transitional disks proposed by Cleeves et al. (2011).

[Fig. 13 caption: Model predictions for the integrated fluxes for H2CO (both ortho and para lines, assuming an ortho/para ratio of 3), and upper limits for A-CH3OH, H13CO+, HCO+ (triangles and diamonds, respectively), and CN, based on our best-fit models for detections and upper limits. The boxes show the 3σ upper limit of the ALMA sensitivity for Band 6 (230 GHz), Band 7 (345 GHz), and Band 9 (690 GHz), integrated over the line profile for a one-hour integration with the full array. The targeted lines are indicated in red (this study), blue (Öberg et al. 2010, 2011), and purple (Thi et al. 2004). The lines with the best potential for observation compared with the ALMA sensitivity are encircled.]

Conclusions

We observed the Oph IRS 48 protoplanetary disk with ALMA Early Science at the highest frequencies, around 690 GHz, allowing the detection of warm H2CO and upper limits on the abundances of several other molecules, including CH3OH, H13CO+, and CN, at unprecedented angular resolution.

1. We detected and spatially resolved the warm H2CO 9(1,8)-8(1,7) line, which reveals a semi-ring of emission at ∼60 AU radius centered south of the star. No emission is detected in the north. This demonstrates that H2CO, an ingredient for building more complex organic molecules, is present in a region of the disk where planetesimals and comets are currently being formed.

2. The H2CO emission was modeled using a physical disk model based on the dust continuum and CO emission (Bruderer et al. 2014), using three different trial abundance profiles. None of the profiles matched the observed data exactly, but the absolute flux indicates an abundance with respect to H2 of ∼10^-8.

3. The H2CO abundance in combination with the upper limits on the CH3OH emission indicates an H2CO/CH3OH ratio >0.3. This limit, together with the overall abundance, suggests that both solid-state and gas-phase processes occur in the disk.

4. Although the H2CO emission is located only on the southern side of the disk, just like the millimeter dust continuum, the offset from the continuum peak and the low S/N do not allow a firm claim about a relation with the dust-trapping mechanism.

5. The upper limit for H13CO+ indicates an HCO+ abundance of <10^-8, consistent with our model.
The upper limit for CN of 10^-7.3 relative to H2 is directly at the level predicted by our model. Upper limits on the abundances of the other targeted molecules are consistent with earlier observations.

6. Future ALMA observations of intrinsically stronger lines will allow abundances to be measured that are one or more orders of magnitude below the upper limits derived here. This will enable full tests of the chemistry of simple and more complex molecules in transitional disks.