Effects of Servicescape on Perceived Service Quality, Satisfaction and Behavioral Outcomes in Public Service Facilities

Identifying the significant factors that affect the behaviors of customers and occupants of physical environments, and assessing their importance, is imperative for effective architectural planning and design. This study investigated the effects of servicescape on perceived service quality and behavioral intention. The four main servicescape factors selected for this study were attractiveness, cleanliness, layout, and comfort; the two perception indicators were service quality and satisfaction; and the behavioral outcome measures were loyalty and public service facility revisit intentions. A total of 594 questionnaires were collected from the users of five public service facilities located in Seoul, Korea. Data were analyzed with SPSS 18 and Lisrel 8.54, using confirmatory factor analysis and structural equation modeling (SEM) to test the hypotheses. The results revealed that cleanliness had a significant direct impact on users' satisfaction and an indirect impact on loyalty and reuse in public service facilities. Easy layout (easy access) was also found to be an essential factor for service quality and satisfaction. The findings also support the positive effects of comfort on perceived service quality and satisfaction. Although attractiveness was expected to be an indicator, the results failed to support a relationship between attractiveness and service quality or satisfaction.

Introduction

Most people spend one-fifth of their lives indoors, which significantly affects their actions, status, abilities, and performances (Sundstrom et al., 1994). As one of the fundamental human requirements, indoor environments should provide appropriate physical conditions that allow people to do whatever they need to do comfortably (Roelofsen, 2002). In today's rapidly changing environment, it is important to understand how service quality can be measured and how service facilities should be planned, designed, and managed to improve it. Much scholarly work has been done on the relationship between the characteristics of surrounding environments and the productivity or satisfaction of employees (Sundstrom et al., 1994; Ha et al., 2002); in service industries, however, the physical environment affects not only employee satisfaction but also customers' purchase intentions, revisit intentions, and overall satisfaction. The impact of surrounding environments on customer behavior has been researched by architects, landscape architects, and environmental psychologists (Turley & Milliman, 2000).

In South Korea, increasing demand for cultural, athletic, and artistic facilities has led local governments to provide residents with a variety of public service facilities. Most districts of Seoul, the capital of South Korea, have assembly halls, youth centers, and culture and sports centers to meet the needs of local residents. However, 42.9% of users remain dissatisfied with public service facilities because the facilities are considered inappropriate (Korean Ministry of Culture, Sports and Tourism, 2003). Public service facilities are constructed with taxpayers' money, but it is difficult to tell what efforts have been made to improve their performance (Yi & Komatsu, 2010). It is highly probable that the managers of these facilities are not quite sure what users need from the facilities' services.
Facility management (FM) must perform functions that encompass a wide range of activities in order to manage built assets effectively and deliver services to customers (Amaratunga, 2000). For public service facilities to be considered useful, their services, programs, and surrounding physical environments should be well suited to the end users. Facility services should be focused on customers, and facilities should be able to offer high-quality service to support facility users (Rondeau et al., 2006). The term "servicescape," sometimes called "atmospherics" and coined by Bitner (1992), refers to several dimensions of the physical or built environment that affect the behaviors of customers and employees in service organizations; these dimensions comprise both the tangible and the intangible features that make up the service experience (Hoffman & Turley, 2002).

The purpose of this study is to identify the main factors and relevant characteristics of servicescape that promote the effective design and management of public service facilities, and to investigate the relationships between servicescape factors (attractiveness, cleanliness, layout, and comfort) and perception indicators (service quality and satisfaction), and between these indicators and outcome measures (loyalty and reuse). The results of this study show the important servicescape factors and dimensions that affect customer satisfaction and service quality.

Literature Review

2.1 Servicescape

Kotler (1973), one of the pioneers of the concept of servicescape, defined it as the "design of buying environments to produce specific emotional effects in the buyer that enhance his or her purchase probability." Servicescape evokes emotions that help determine value, which ultimately motivates customers to make a certain choice repeatedly (Arnould et al., 1998). Bitner (1992) suggests a typology of service organizations based on variations in the form and usage of the servicescape, as shown in Fig. 1. Two dimensions capture differences in the management of the servicescape. The level of interaction, the vertical dimension, denotes who mainly performs actions: the customer, the employee, or both. The level of involvement of customers and employees can be one of the important decision factors in designing a physical environment, and it may affect the goals and objectives of the organization. The physical complexity of the servicescape, the horizontal dimension, denotes the way that complex elements of the servicescape should be considered in compliance with the various needs of elaborate environments such as hotels and hospitals. Fig. 1 suggests different strategic plans, designs, and management for different types of businesses; it is imperative to produce commercially significant actions by consciously designing facilities (Arnould et al., 1998).

Servicescape Factors

Researchers in the marketing field (Bitner, 1992; Donovan & Rossiter, 1982) have focused extensively on pleasure in the servicescape. Pleasure in the servicescape can be affected by how customers perceive and feel in relation to the surrounding environment or physical spaces (Ryu & Jang, 2008), and the level of pleasure felt by customers determines their satisfaction and loyalty behaviors (Bitner, 1992; Mehrabian & Russell, 1974).
A significant relationship has been identified between servicescape manipulation and shopping behavior (Turley & Milliman, 2000), and the role of the servicescape has been found to be very important in the service delivery process (Hoffman & Turley, 2002). Ambient factors such as noise, scent, air quality, and cleanliness are not easily recognized by customers because they lie below the consumer's consciousness (Aubert-Gamet, 1997), but they contribute to a sense of pleasure in experiencing a service (Baker, 1987). Design factors such as aesthetic attractiveness, layout, and comfort are relatively more perceptible to customers than ambient factors, and thus they have more impact on customer behavior in the servicescape (Bitner, 1992; Smith & Burns, 1996). Aesthetic attractiveness refers to architectural design, décor, color, and so on. Once customers enter a facility, they often observe the interior aesthetics, which is likely to affect their attitudes toward the facility (Baker et al., 1988).

In recent years, numerous studies have attempted to identify and explore the relationship between servicescape factors and customer satisfaction in various service industries such as hotels, retail stores, hospitals, and restaurants. However, the effect of servicescape factors on the behavioral outcomes of the end users of public service facilities is not yet fully understood, and relatively few studies have been devoted to the in-depth evaluation of servicescape in such facilities. Therefore, the relationship between factors and perception indicators for public service facilities is explored in this study, because the importance of a particular servicescape factor is apt to differ across service organizations (Kotler, 1973; Bitner, 1992).

Methodology

3.1 Research Method and Procedure

The main servicescape factors for evaluation were identified and finalized through a literature review and a pre-survey of the end users of public service facilities. To date, there has been limited research regarding servicescape in public service facilities, and researchers underline the importance of variation in servicescape across different service organizations and facilities (Kotler, 1973; Bitner, 1992; Harris & Ezeh, 2008). Therefore, apart from the literature review, focus group interviews with facility managers assigned to public service facilities were also conducted to establish the detailed characteristics of each servicescape factor used for service quality evaluation. For the focus group interviews, the authors sent official request letters to eight divisions in district facility management corporations that together managed community centers, sports complexes, youth centers, and sport and culture centers. Six managers from five divisions participated in two interviews, of two hours each, to discuss the appropriateness of the items and evaluation methods for public facilities. The items were revised based on service quality measures and previous studies regarding servicescape. For cleanliness, for example, the authors discussed the important features in public facilities, checked and confirmed the wording of the items, and looked for any missing features that the divisions should consider in public facility evaluation or that managers should be concerned about. Through confirmatory factor analysis, the significant factors selected for this research were aesthetic attractiveness, cleanliness, layout, and comfort. The two perception indicators were service quality and satisfaction.
The two outcome measures were loyalty and revisit intention. The public service facilities used in this study were limited to public facilities located in Seoul, operated and maintained by the Seoul Metropolitan Facilities Management Corporation, where users make full or partial payment to use services. Among the 25 municipal districts of Seoul, four districts were selected: one each from the southeast, northeast, northwest, and southwest. The Jongro, Yeungdeungpo, Mapo, and Gangnam district facility management divisions accepted the request to help with the survey. The five public service facilities selected for this research are shown in Table 1. All had similar features in terms of the spaces and programs provided; they were multipurpose facilities that managed various programs and were used frequently by district residents. A structured questionnaire was developed, distributed to the users of the five public service facilities, and collected during business days and on weekend mornings and afternoons in order to ensure diversity among the responses. The questionnaires were initially checked on-site immediately after they were completed, and in the end 594 valid questionnaires were used for analysis in this study.

Servicescape Dimensions

Bitner (1992) suggested that the three primary dimensions of the servicescape are ambient conditions; spatial layout and functionality; and signs, symbols, and artifacts. Service factors commonly suggested by Barker et al. (1994), Bitner (1992), Brauer (1992), and Wakefield and Blodgett (1996) were layout accessibility, facility aesthetics, seating comfort, facility cleanliness, and electronic equipment and displays. Wakefield and Blodgett (1996) focused on the built environment in the servicescapes of leisure service settings and investigated the effects of layout accessibility, facility aesthetics, seating comfort, electronic equipment, and facility cleanliness on perceived quality of and satisfaction with the servicescape. The dimensions used in this study were based on combined insights from the concept of servicescape as defined by Bitner (1992). Bitner (1992) suggested that positive perception of servicescapes is likely to affect approach behaviors (attraction, staying, spending money, and returning). Wakefield and Blodgett (1996) found that servicescape (layout, facility aesthetics, seating comfort, and facility cleanliness) influenced perceived quality, which resulted in higher satisfaction. From these previous research findings, servicescape factors are likely to influence perceived quality and satisfaction. A well-designed layout has a direct effect on a customer's quality perception and an indirect effect on the customer's desire to return (Wakefield & Blodgett, 1994); comfort has a favorable impact upon the customer's emotional state as well (Greenland & McGoldrick, 2005). The surrounding physical environment, that is, the servicescape factors, is considered a main element of perceived service quality and customer satisfaction (Jang & Namkung, 2009; Ryu & Han, 2010). Some previous research findings have also revealed that the physical environment influences not only users' evaluations of service quality but also their behavioral responses (Berry & Wall, 2007; Jang & Namkung, 2009), and that customer satisfaction is a significant predictor of behavioral intention (Ryu & Han, 2011). Therefore, the following hypotheses were drawn from the literature:

H1a-H1b: There is a direct relationship between attractiveness and a) service quality and b) customer satisfaction.
H2a-H2b: There is a direct relationship between cleanliness and a) service quality and b) customer satisfaction.
H3a-H3b: There is a direct relationship between layout and a) service quality and b) customer satisfaction.
H4a-H4b: There is a direct relationship between comfort and a) service quality and b) customer satisfaction.

Wakefield and Blodgett (1996) also found that higher satisfaction resulted in higher repatronage (loyalty) and longer stays. Lucas (2002) found that servicescape factors influenced satisfaction and that satisfaction resulted in repatronage intentions, the desire to stay, and recommendations. Much of the previous research has reported a positive relationship between customer satisfaction and loyalty (Chi & Qu, 2008; Cronin et al., 2000), and the direct effect of customer satisfaction on loyalty has proven to be statistically significant (Han & Ryu, 2009). Therefore, it is reasonable to expect positive relationships among servicescape factors, customers' perceived service quality, satisfaction, and behavioral intention (loyalty and revisit intention).

H5a-H5b: Perceived service quality has a positive effect on a) loyalty and b) reuse (revisit intention).
H6a-H6b: Perceived customer satisfaction has a positive effect on a) loyalty and b) reuse (revisit intention).

Following the review of servicescape factors in the literature, the hypotheses for this research are proposed and summarized in Fig. 2. Similar to other servicescape models, the authors' model includes quality and satisfaction as key perception indicators that are affected differently by the four selected factors, and it shows how the perception indicators are associated with the outcome measures.

Data Collection and Analysis

Focus group interviews were carried out to determine the important factors that could affect the service quality of and user satisfaction with public facilities. Prior to the survey, a pilot test was conducted to ensure the clarity, readability, and ease of understanding of the questionnaire items, and the questionnaire was revised in accordance with the results. A total of 594 questionnaires were collected; items were rated on a five-point Likert scale (1 = disagree completely; 5 = agree completely). The data were analyzed using SPSS 18 and Lisrel 8.54. Confirmatory factor analysis and structural equation modeling (SEM) were conducted to test the proposed hypotheses.

Results

4.1 Respondent Characteristics

The general characteristics of the participating respondents are summarized in Table 2. More female (87%) than male (13%) respondents participated in this study. One hundred eighty-five people were in the 30-39 age range, representing 34.5%, followed by 101 people in the 40-49 age group (18.8%). Respondents younger than 20 years old were found mostly in the youth center. Nine different jobs were identified; 326 respondents were homemakers, representing 55.3%, followed by students (19.9%). Among the respondents, 47.7% were college graduates and 25.5% were high school graduates. A similar number of respondents participated from each of the five public service facilities, as shown in Table 2.

Reliability Tests and Confirmatory Factor Analysis

Confirmatory factor analysis was used to test the measurement model, employing maximum likelihood estimation. Results showed a moderately good fit for the measurement model.
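Before turning to the fit indices reported below, the hypothesized structure summarized in Fig. 2 can be made concrete with a brief sketch. The authors fitted their models in Lisrel 8.54; the following is only an illustrative re-expression of the same paths using the open-source semopy package in Python, with hypothetical column names (attract, clean, layout, comfort, quality, satis, loyalty, reuse) standing in for composite construct scores rather than the study's item-level latent variables.

```python
# Illustrative only: re-expresses the hypothesized paths of Fig. 2 in semopy.
# The study itself used Lisrel 8.54 with latent constructs measured by multiple
# items; here each construct is a single hypothetical composite column in `data`.
import pandas as pd
from semopy import Model, calc_stats

MODEL_DESC = """
quality ~ attract + clean + layout + comfort
satis   ~ attract + clean + layout + comfort
loyalty ~ quality + satis
reuse   ~ quality + satis
"""

def fit_servicescape_model(data: pd.DataFrame):
    model = Model(MODEL_DESC)
    model.fit(data)                  # maximum-likelihood estimation
    paths = model.inspect()          # path coefficients, standard errors, p-values
    fit_indices = calc_stats(model)  # chi-square, CFI, GFI, NFI, RMSEA, etc.
    return paths, fit_indices
```

In the study itself, each construct was measured with several questionnaire items and modeled as a latent variable through confirmatory factor analysis; the sketch collapses each construct to a single observed score only for brevity.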
The following indices were calculated: χ² (chi-square), CFI (comparative fit index), GFI (goodness-of-fit index), NFI (normed fit index), and RMSEA (root mean square error of approximation). The ratio of the obtained chi-square value to its degrees of freedom (χ² = 299.74, df = 158) was acceptable. Other indices were also acceptable: GFI = 0.92, CFI = 0.99, and RMSEA = 0.042 (see Table 3). The reliability coefficients (Cronbach's alphas) were all higher than 0.70 except for the comfort factor (0.617), which was still above the minimum level of 0.6 suggested by Peterson (1994). The factor loadings of each individual indicator on its respective construct were significant (p < 0.01) and had no negative values; factor loadings of .50 or greater can be considered "practically significant" (Hair et al., 1998).

Structural Model Testing and Hypothesis Testing

Once the measurement issues were addressed, the proposed model for the current study was tested using structural equation modeling (SEM). The initial model failed to show a good fit (χ² = 1019.55, df = 214, p = 0.00, GFI = 0.85, CFI = 0.95, NFI = 0.94, RMSEA = 0.087, critical N = 143.47). Among the twelve possible relationships, eight linear relationships were found to be significant direct relationships (see Fig. 2). Two servicescape variables were found to be significantly related to service quality, and three servicescape variables were found to be significantly related to satisfaction. The t-statistics from the structural model were used to examine the hypotheses, as indicated in the summary of results (Table 4). The relationship between attractiveness and service quality was not found to be significant, and attractiveness was also not related to satisfaction; Hypotheses H1a and H1b were not supported. Cleanliness was found to be positively related to satisfaction, so Hypothesis H2b was supported. The results confirmed Hypotheses H3a and H3b, which predicted that when people have easier access to facilities, they perceive better service quality and feel more satisfaction. Comfort was found to be positively related to perceived service quality and satisfaction, so Hypotheses H4a and H4b were supported. As expected, H5, which predicted that service quality is positively associated with loyalty (H5a) and reuse (H5b), was supported. H6a and H6b were also confirmed: higher satisfaction was associated with greater loyalty and stronger reuse (revisit) intentions.

Discussion

This study investigated the effects of servicescape on perceived service quality and behavioral intention. In particular, the influences of multiple elements of servicescape were explored because of the lack of empirical research on the multiple effects of servicescape. Because previous findings have emphasized specific roles for the elements of servicescapes across various organizations and service facilities, the current study explored and tested the construct of servicescape in public facilities and its relationships with service quality and behavioral outcomes. Although the attractiveness of the physical environment was expected to be an indicator of satisfaction or service quality, the results failed to support a relationship between attractiveness and service quality or satisfaction. Harris and Ezeh (2008) found that the appearance of staff in the servicescape was positively related to loyalty, and that furnishings were a determinant of loyalty.
Aesthetic attributes are proposed to be more tangible than ambient features and to be strong determinants of satisfaction and service quality; the disconfirmation of this proposition here may be the result of the characteristics of the public facilities and the users' expectations of public buildings. Compared with commercial facilities, public facilities are considered less luxurious, less ornamental, and more function oriented. Users of public facilities are less likely to be concerned about attractiveness than they are about cleanliness, easy layout, and comfort.

The effect of servicescape cleanliness is consistent with previous findings: cleanliness of the servicescape is one of the strong indicators of intention to use the service in the future. Vilnai-Yavetz and Gilboa (2010) emphasize that different levels of cleanliness will be expected and perceived as appropriate in different contexts. Especially where functionality and health concerns matter, such as in hospitals, clinics, and hotels, cleanliness is considered critical. In this study, it was also found that cleanliness was very important in public facilities such as community centers, gyms, and sporting complexes, affecting users' satisfaction directly and loyalty and reuse indirectly. It is suggested that cleanliness issues be approached as a matter of motivational factors rather than of hygiene (Herzberg, 1966; Vilnai-Yavetz & Gilboa, 2010).

Easy layout (easy access) was also found to be an important factor for service quality and satisfaction. The facilities and buildings used by the participants of this study are large, with many different kinds of rooms and spaces such as multipurpose rooms, meeting rooms, gyms, swimming pools, and libraries. In this kind of large, complex facility, easy layout and accessibility are important for users to be able to use the space; therefore, managers should pay attention to accessibility issues and wayfinding strategies. Comfort has a favorable impact upon the customer's emotional state (Greenland & McGoldrick, 2005), and the findings of this study also support the positive effects of comfort on perceived service quality and satisfaction.

Despite some limitations, the empirical findings of this research contribute to the expansion of the concept of servicescape. Future studies need to focus on various elements of servicescape and their roles in specific types of public facilities and organizations. Aside from the influence of physical environmental features, the influence of other service factors such as staff and service delivery also needs to be explored. Furthermore, in addition to users' evaluations of servicescape, it is important to identify which servicescape variables are considered and valued by other stakeholders, including employees, planners, and managers.
Clinical Features, Biochemical Profile, and Response to Standard Treatment in Lean, Normal-Weight, and Overweight/Obese Indian Type 2 Diabetes Patients

BACKGROUND: Much evidence is available on the relationship between type 2 diabetes mellitus (T2D) and obesity, but less on T2D in lean individuals.
AIM: This study was conducted in 12,069 T2D patients from northern India to find out which clinical and biochemical features are related to lean, normal-weight, and overweight/obese T2D patients.
METHODS: The study was conducted at two endocrine clinics in northern India as a retrospective cross-sectional study. The records of all patients who attended these clinics from January 2018 to December 2019 were screened. After screening 13,400 patients, 12,069 were labelled as having type 2 diabetes mellitus according to the criteria of the American Diabetes Association, 2020, and were included in the study. The patients were subdivided into three groups by body mass index (BMI): lean (BMI < 18 kg/m²), normal weight (BMI 18-22.9 kg/m²), and overweight/obese (BMI ≥ 23 kg/m²). The study evaluated how the three subgroups responded to standard diabetes management, including antidiabetic medication and lifestyle interventions.
RESULTS: Of a total of 12,069 patients, 327 (2.7%) were lean, 1,841 (15.2%) of normal weight, and 9,906 (82.1%) overweight/obese. Lean patients were younger but had more severe episodes of hyperglycemia. All three subgroups experienced significant improvements in glycemic control during follow-up; HbA1c values were significantly lowered in the overweight/obese group during follow-up compared with baseline.
CONCLUSIONS: While overweight/obese patients could benefit from the improvements in glycemic control achieved by lowering HbA1c, lean and normal-weight patients had more severe and difficult-to-control hyperglycemia.

Introduction

According to the International Diabetes Federation (IDF), there are currently 77 million people with diabetes living in India, with an increasing trend [1]. Likewise, the prevalence of overweight/obesity in the adult Indian population has doubled (from 9.0% in 1990 to 20.4% in 2016) [2]. The risk of diabetes in overweight and obese individuals in India is also significantly higher [3]. For many decades, obesity has been an important risk factor for the development of diabetes. Nevertheless, apart from the usual obesity-associated type 2 diabetes (T2D) patient stereotype, the subset of T2D patients with low or normal BMI has not been very well characterized. The key pathology in these patients appears to be deficient insulin secretion rather than the insulin resistance present in classical obesity-related diabetes [7]. Several reports have demonstrated a link between lean T2D, poor nutrition, and poverty during the early phase of life. Though there is no definitive evidence from human studies, animal models have demonstrated that protein deficiency in early life can cause a decline in beta-cell mass leading to deficient insulin secretion [8,9]. The profiles of these patients also differ from latent autoimmune diabetes of adults (LADA), as the autoimmune markers of LADA are not present in the majority of lean T2D patients [5,10]. In view of this evidence, we conducted a clinical study to characterize the clinical presentation, biochemical characteristics, and response to standard treatment of lean, normal-weight, and overweight/obese T2D patients living in northern India.
Data retrieval and analysis

We conducted a retrospective cross-sectional study in T2D patients attending two endocrine centers in northern India. One of the clinics was the Rajiv Gandhi Centre for Diabetes and Endocrinology, J. N. Medical College and Hospital, Aligarh Muslim University, Aligarh (Uttar Pradesh); the other was the Diabetes and Endocrinology Super-Speciality Centre, Aligarh (Uttar Pradesh). The records of all patients who attended these clinics from January 2018 to December 2019 were screened. After screening 13,400 patients, 12,069 patients were identified as T2D patients according to ADA criteria, 2020, and were included in the study [11]. Exclusion criteria were:
- Diagnosis of diabetes before age 20 years
- Diagnosis of diabetes mellitus other than T2D, including type 1 diabetes (T1D), gestational diabetes, fibrocalculous pancreatic diabetes, and drug-induced diabetes
Individuals diagnosed before the age of 20 were not included in the study to reduce the risk of including T1D patients.

The study was designed to analyze the differences in clinical presentation, complication profile, and response to standard treatment between lean, normal-weight, and overweight/obese T2D patients. Treatment consisted of standard medical management of T2D, including antidiabetic medication. More than half of the patients (62.2%) received two or three oral antidiabetic drugs. Metformin was the most commonly prescribed drug (>90%), followed by sulfonylureas (55.7%); metformin was received by 91% of the lean group and 94% of both the normal-weight and the overweight/obese groups. The relevant information regarding demographic and clinical parameters was obtained from the patients' records, including age, sex, income, area of residence, duration of disease, treatment (including insulin), blood sugar level (fasting and postprandial), HbA1c level, and comorbidities (hypertension, dyslipidemia, diabetic neuropathy, diabetic kidney disease, diabetic retinopathy, and coronary artery disease).

Anthropometric data, including height and weight, were taken from the records. Body mass index (BMI) was calculated as weight in kg divided by the square of height in m. Individuals were categorized according to their BMI values as "lean" (<18.0 kg/m²), "normal" (18.0-22.9 kg/m²), "overweight" (23.0-24.9 kg/m²), and "obese" (≥25.0 kg/m²) [12]. The diagnoses of diabetic retinopathy and diabetic neuropathy were made on the basis of history and clinical examination (vibration perception, 10-g monofilament, ankle jerk, pinprick, and temperature sensation). Additionally, all diabetes patients attending our clinics underwent a detailed fundus examination after dilation by a trained ophthalmologist to confirm the diagnosis of diabetic retinopathy. Diabetic kidney disease was diagnosed by the presence of albuminuria and/or reduced estimated glomerular filtration rate (according to the Cockcroft-Gault formula) in the absence of other etiologies of kidney failure [11]. Diagnosis of coronary artery disease and heart failure was based on clinical or historical data or examination by a trained cardiologist.

Statistical analysis

Statistical analysis was carried out using the Statistical Package for the Social Sciences (SPSS 21.0) for Windows and Microsoft Excel 2019. The Kolmogorov-Smirnov test was used to test for normal distribution of continuous variables, and the data were expressed as mean ± standard deviation. Categorical data are expressed as counts and percentages.
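Two of the calculations defined above, the BMI classification and the Cockcroft-Gault estimate of creatinine clearance, can be illustrated with a short sketch. This is not the authors' code: the function names and boundary handling are assumptions made for illustration; the BMI cut-offs follow the study, and the Cockcroft-Gault equation is the standard published form. The comparative statistical tests continue below.

```python
# Illustrative helpers only; names and boundary handling are the editor's assumptions.

def bmi_category(weight_kg: float, height_m: float) -> str:
    """Classify BMI using the study's cut-offs for Indian adults."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.0:
        return "lean"
    if bmi < 23.0:
        return "normal"        # 18.0-22.9 kg/m^2
    if bmi < 25.0:
        return "overweight"    # 23.0-24.9 kg/m^2
    return "obese"             # >= 25.0 kg/m^2

def cockcroft_gault_crcl(age_years: float, weight_kg: float,
                         serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance (ml/min) by the Cockcroft-Gault formula."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return 0.85 * crcl if female else crcl
```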
For comparisons of categorical variables, the chi-square test was applied; continuous data were compared using unpaired Student t-tests or ANOVA. One-way ANOVA was applied to determine the differences between the groups defined above. There were no outliers, as assessed by box plots. Homogeneity of variance was violated, as assessed by the Levene test for equality of variance; therefore, separate variance estimates and the Welch correction were used. A paired t-test was used to compare the means of the groups after treatment. A bivariate analysis between uncontrolled and controlled diabetes for all covariates and outcomes was performed (χ² test for categorical variables). Univariate and multivariate logistic regression was carried out to assess the association between uncontrolled hyperglycemia, microvascular complications, and other factors. For all analyses, a two-sided value of p < 0.05 was considered statistically significant.

Results

A total of 12,069 patients were included in the analysis. The baseline characteristics of the patients are shown in Table 1. Mean age was 49.7 ± 11.3 years, and males and females were almost equal in number; 2,891 (23.8%) subjects were aged less than 40 years. Mean duration of diabetes was 3.4 years, 9,566 (79.2%) patients had a diabetes duration of less than 5 years, 9,306 (77.1%) were from urban areas, and 3,884 (32.2%) had a positive family history of diabetes. Baseline fasting blood sugar, postprandial blood sugar, and HbA1c were 157.0 ± 68.0 mg/dl, 215.0 ± 90.0 mg/dl, and 9.1 ± 2.3%, respectively. There was a significant decline in these values during the follow-up period (Figure 1, Table 2).

(Figure 1. Comparison of glycemic status after treatment.)

Most of the patients (62.2%) were receiving two or three oral antidiabetic drugs (Table 1). Metformin was the most commonly prescribed drug (94%), followed by sulfonylureas (55.7%). In bivariate analysis, there was a significant positive correlation between HbA1c and age, the presence of nephropathy, and the presence of retinopathy. HbA1c had a significant negative correlation with the number of oral antidiabetic agents used. The American Diabetes Association guidelines were used to define the glycemic and non-glycemic targets [11], i.e.:
- Triglycerides: <150 mg/dl
- HDL cholesterol: >40 mg/dl for men and >50 mg/dl for women
- LDL cholesterol: <70 mg/dl for patients with CAD and <100 mg/dl for patients without CAD
- Blood pressure: <140/90 mm Hg

Out of the total of 12,069 patients, 327 (2.7%) were lean, 1,841 (15.2%) had normal weight, and 9,906 (82.1%) were overweight/obese. The demographic, clinical, and complication-related profiles of these patients are provided in Table 3. Lean patients were younger and had a shorter duration of diabetes compared with normal-weight or overweight/obese patients. The lean group also included more patients from rural areas and more patients with an income of less than two lakh rupees (i.e., 200,000 rupees, approximately US$ 2,700) per annum. The lean patients had a higher HbA1c at presentation and at follow-up compared with overweight/obese patients. There was no significant difference in lipid parameters among the three groups. Hypertension and a family history of diabetes were less common in lean patients. Retinopathy was more common in lean patients, but there was no significant difference in the prevalence of nephropathy and neuropathy among the groups. Coronary artery disease was less common in the lean patients. There was no significant difference in the treatment prescribed among the three groups.
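As a rough illustration of the group comparisons reported above (the authors used SPSS 21.0), the following sketch shows the kinds of tests named in the statistical analysis: a chi-square test for a categorical variable, a Welch-corrected comparison of a continuous variable between two groups, and a paired t-test for baseline versus follow-up values. The DataFrame and its column names are hypothetical.

```python
# Illustrative only; 'df' and its column names are hypothetical, not the study's data.
import pandas as pd
from scipy import stats

def compare_groups(df: pd.DataFrame) -> dict:
    # Chi-square test for a categorical variable (e.g., sex) across BMI groups
    contingency = pd.crosstab(df["bmi_group"], df["sex"])
    _, p_categorical, _, _ = stats.chi2_contingency(contingency)

    # Welch-corrected t-test (unequal variances) for a continuous variable
    lean = df.loc[df["bmi_group"] == "lean", "hba1c_baseline"]
    obese = df.loc[df["bmi_group"] == "obese", "hba1c_baseline"]
    _, p_welch = stats.ttest_ind(lean, obese, equal_var=False)

    # Paired t-test: baseline vs. follow-up HbA1c within the whole cohort
    _, p_paired = stats.ttest_rel(df["hba1c_baseline"], df["hba1c_followup"])

    return {"chi2_p": p_categorical, "welch_p": p_welch, "paired_p": p_paired}
```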
Although the use of insulin was more common in lean patients, the difference was not statistically significant.

Discussion

The prevalence of diabetes is increasing in India. ICMR-INDIAB, a population-based study, revealed that the prevalence of T2D in India is 7.3% [13]; in 2016, the prevalence was 5.2% in rural areas and 11.2% in urban areas [13]. The Southeast Asian T2D patient has a distinctive type of T2D that occurs in individuals with lower BMI, higher fat mass, higher insulin resistance, and higher inflammatory cytokine levels, and these patients are 10-20 years younger than their western counterparts [14]. These factors are indicative of more severe diabetes and possibly of complications in Southeast Asian individuals [14-16]. Previous reports have revealed that a high percentage of Indian patients with diabetes are lean or of normal weight [17], but these subgroups of patients are not very well characterized.

We conducted this retrospective, cross-sectional study in 12,069 T2D patients living in northern India. The mean age of the patients was 49.7 ± 11.3 years; 2,876 (23.7%) patients were aged <40 years, 8,389 (69.1%) were between 40 and 65 years, and 854 (7.1%) were >65 years of age. This finding is similar to previous studies conducted in India [18-20], but it differs from western populations, where elderly patients form a large proportion of the diabetic population [21]. This also shows that diabetes occurs at a younger age in Indians compared to other populations. The number of male participants was 6,091 (50.5%), which is similar to previous studies [22]; 9,901 (82.0%) patients were either overweight or obese, which is also similar to recent observations in India [23]. Of the patients, 327 (2.7%) were lean, 1,841 (15.3%) had normal body weight, and 9,901 (82%) were overweight or obese. In a study from southern India, Mohan et al. reported that 3.5%, 63.5%, and 32.9% of T2D individuals were lean, normal weight, and obese, respectively [5]. However, they used different criteria for the definition of obesity, namely BMI >27 kg/m² for men and >25 kg/m² for women. In our study, 82% of the patients were overweight or obese. This difference between the studies may be due to the different cut-off values used to define obesity; another reason could be the increased prevalence of obesity in India, as Mohan and coworkers conducted their study more than twenty years ago [5].

In our study, lean patients were younger than patients in the other two groups. This finding is in contrast to two previous studies from India [5,24]; however, patients in both of those studies had a longer duration of diabetes than in our study. Findings similar to ours have been reported from countries other than India [25], which suggests a strong genetic predisposition in these lean T2D patients. The levels of fasting blood glucose, postprandial blood glucose, and HbA1c were significantly higher in lean and normal-weight than in overweight/obese patients at presentation. This finding is similar to previous findings and indicates a more aggressive disease [5]. There was a significant decline in the levels of fasting blood glucose, postprandial blood glucose, and HbA1c during follow-up in all three groups of patients (Table 2). However, during follow-up, lean and normal-weight patients had significantly higher fasting blood glucose and HbA1c levels than overweight/obese patients.
Compared with lean and normal-weight patients, a higher number of overweight and obese patients achieved HbA1c levels of less than 7%, and a lower number had an HbA1c of more than 9%. This indicates that glycemic control was better in obese T2D than in lean and normal-weight T2D patients. These findings are similar to previous results obtained in India [5]. The lean group had a significantly higher number of patients from rural areas and lower income groups than the two other groups (Table 3).

(Table 2. Fasting blood sugar, postprandial blood sugar, and HbA1c values in lean, normal-weight, and overweight/obese patients at baseline and follow-up. Table 3. Characteristics of lean, normal-weight, and overweight/obese patients with type 2 diabetes mellitus. Data are mean ± SD or n (%) unless indicated otherwise.)

Individuals exposed to undernutrition and poverty in early life may develop abnormalities, such as decreased insulin secretion, decreased uptake of glucose by muscle, a decline in insulin-mediated glycolysis, and augmented fat deposition, earlier in life than others [26,27]. The Dutch Famine study has also shown that malnutrition in early life is associated with an increased risk of diabetes [28]. Similar findings have been reported in different Asian populations [25]. The overweight/obese group had a stronger family history of diabetes; it has been demonstrated that obese individuals with a family history of diabetes have a higher rate of diabetes than those with a negative family history [29]. The prevalence of hypertension and coronary artery disease was significantly increased in the overweight/obese group; similar findings have been reported by Mohan et al. [5].

There was no significant difference in the pattern of oral antidiabetic drug and insulin prescription between the three groups. This finding is different from earlier findings from India, which showed a higher use of insulin in the lean group [7,24]. The difference may be explained by the fact that the lean group in the above study had a mean diabetes duration of 9.2 years, while in our study it was only 2.7 years, and 50% of patients in the lean group had a duration of <1 year; with longer follow-up, their insulin requirement might increase.

There was a significant decline in fasting blood glucose, postprandial blood glucose, and HbA1c in the three groups during follow-up. Although the decrease in HbA1c was more distinct in the lean group, control of hyperglycemia was significantly better in the overweight/obese group (Figure 1). This indicates that more intensive treatment is needed in the lean and normal-weight groups to improve glycemic control. The major factor leading to hyperglycemia in lean T2D patients is impaired secretion of insulin from pancreatic β-cells [25], which could be secondary to the phenomenon of small β-cells found in autopsies of lean individuals [25]. The findings from our study indicate that, at least in the initial stage, the treatment regimen was similar in all the groups. The inverse correlation between the number of oral antidiabetic drugs and HbA1c indicates that early aggressive treatment was associated with better glycemic control.
The use of metformin was relatively equal in all the groups: >90% of patients (91% in the lean group and 94% each in the normal-weight and overweight/obese groups). This indicates that metformin was well tolerated in the lean T2D group. Previous studies have shown that metformin is effective in controlling hyperglycemia in both lean and obese T2D patients [30]. There was no significant difference in the levels of total cholesterol, triglycerides, LDL cholesterol, and HDL cholesterol between the groups, which is not in agreement with the findings of previous studies [5,24]. This may be due to the different criteria used to define obesity and the age of the subjects in those studies, as the latter were older than those in our study. There was no significant difference in the rate of diabetic neuropathy and nephropathy between the groups. Previous reports from India have described a high incidence of microvascular complications in lean T2D individuals [5,24]; this discordance of observations may be due to the younger age and shorter duration of diabetes of lean T2D subjects in our study. The incidence of retinopathy was higher in the obese group, which may be secondary to the longer duration of diabetes in this subgroup.

The strength of our study is that it was a multicenter study, with subjects from both government and private settings, which lessens the chances of an inclusion bias. Also, a large number of patients were included in the analysis, and comprehensive data regarding demography, clinical variables, biochemical parameters, and both microvascular and macrovascular complications were obtained. The limitations are that it was a hospital-based retrospective study, which carries a risk of bias, and that a cause-and-effect relationship cannot be established.

Conclusions

T2D is a heterogeneous clinical phenomenon. We analyzed the clinical features and response to standard treatment in a large cohort of lean, normal-weight, and overweight/obese T2D patients from northern India. The overall achievement of various glycemic and non-glycemic targets in T2D patients was suboptimal. Our study revealed that lean and normal-weight individuals have more severe hyperglycemia and a relatively poor response to treatment, which indicates that diabetes in such individuals may represent a more aggressive form of the disease. Patients in all three subgroups were relatively young, which suggests the need for aggressive treatment of diabetes. More prospective studies are needed to further delineate the natural history and appropriate treatment of the lean T2D patient.
Diagnosis and treatment of movement system impairment syndromes

Highlights
• Impaired movements and alignments may be associated with musculoskeletal conditions.
• Signs of impaired alignments and movements may be seen before there are symptoms.
• Treatment is based on using the findings from making the MSI diagnosis to correct the performance of daily activities.

Introduction

Since 1980, Sahrmann 1,2 and associates have been developing movement system impairment (MSI) syndromes to describe conditions that can be diagnosed by physical therapists and that guide treatment and inform prognosis. 1 The movement system was adopted as the identity of physical therapy by the American Physical Therapy Association in 2013. The definition of the movement system developed at Washington University is "a system of physiological organ systems that interact to produce movement of the body and its parts." Fig. 1 depicts the key component systems. The conceptual framework that serves as the basis for the proposed MSI syndromes is the kinesiopathologic model (KPM) (Fig. 2). A basic premise of the KPM is that repetitive movement and sustained alignments can induce pathology. MSI syndromes are proposed to result from the repetitive use of alignments and movements that, over time, become impaired and eventually induce pathoanatomical changes in tissues and joint structures. The model emphasizes the contribution of (1) the musculoskeletal system as the effector of movement, (2) the nervous system as the regulator of movement, and (3) the cardiovascular, pulmonary, and endocrine systems as providing support for the other systems while also being affected by movement. 1,2 For example, metabolic syndrome is known to be associated with insufficient physical activity. 3

The prevailing theory, for which there is some evidence, is that the sustained alignments and repetitive movements of daily activities are the inducers of change in all the systems. 4-7 The modifiers of the changes are intrinsic factors, such as the characteristics of the individual, and extrinsic factors, such as the degree and type of physical activity (work and fitness) in which a person participates. The key concept is that the body, at the joint level, follows the laws of physics and takes the path of least resistance for movement, typically in a specific direction such as flexion, extension, or rotation. Determinants of the path are (1) intra- and inter-joint relative flexibility, (2) relative stiffness of muscle and connective tissue, and (3) motor performance that becomes motor learning. 1,2 The result of a joint moving more readily in a specific direction is the development, over time, of hypermobility of accessory motion, or micro-instability. The micro-instability causes tissue microtrauma that, with repetition, can become macrotrauma. The concepts embodied in the KPM suggest not only that there are signs before there are symptoms, but also that correction of the impaired alignments and movements and the contributing factors is the most effective treatment of musculoskeletal pain conditions. The KPM places the emphasis on the cause of the tissue injury rather than on the pathoanatomy of the tissues.

Deciding on a syndrome is based first on identifying the impaired alignments and movements across a series of clinical tests. The alignments and movements typically are associated with an elicitation or increase in symptoms. The therapist then guides the patient to correct the alignments and movements to determine if the symptoms are improved. When the examination is completed, the information is used to (1) determine the syndrome, (2) identify the contributing factors, (3) determine the corrective exercises, (4) identify the alignments and movements to correct during daily activities, and (5) educate the patient about factors contributing to the musculoskeletal condition by practicing correction during activities. The following example illustrates how correcting the impaired alignments and movements addresses the cause of the pain, which is not achieved by identifying the pathoanatomical source of the symptoms. A patient is referred to physical therapy with the diagnosis of Supraspinatus Tendinopathy. Tendinopathy is the pathoanatomic source of pain. After assessing the patient's scapular and humeral alignments and movements and the associated symptom behavior, the physical therapist makes a diagnosis of insufficient scapular upward rotation with humeral anterior glide. The other components of the examination identify the contributing factors, which include (1) relative stiffness, (2) muscle strength, and (3) neuromuscular activation patterns. The idea behind the KPM is that classifying the patient according to impaired alignments and movements (i.e., Scapular Insufficient Upward Rotation, Humeral Anterior Glide) is more useful to guide physical therapy treatment than identifying a pathoanatomical problem, because these are the impairments to be corrected. Table 1 summarizes the key concepts underlying the proposed MSI syndromes. 1

(Table 1. Key concepts of the kinesiopathologic model of the movement system and the path of least resistance: Musculoskeletal pain syndromes are the result of cumulative micro-trauma from the accumulation of tissue stress and irritation resulting from sustained alignments or repeated movements in a specific direction(s) associated with daily activities. The joint(s) that is moving too readily in a specific direction is the site of pain generation. The readiness of a joint to move in a specific direction, i.e., the micro-instability, combined with relative stiffness, the neuromuscular activation pattern, and motor learning, contributes to the development and persistence of the path of least resistance. Treatment is based on correcting the impaired alignments and movements contributing to tissue irritation as well as correcting the tissue adaptations, such as relative stiffness, muscle weakness, and neuromuscular activation patterns. Training to correct impaired alignments and movements instead of training "isolated muscles" will induce appropriate neural and musculoskeletal adaptations.)

Relative intra- and inter-joint flexibility and relative stiffness

Important KPM concepts related to the proposed MSI syndromes are relative flexibility and relative stiffness. 1,2 Relative flexibility refers to a condition of the joint itself. Intra-joint relative flexibility is hypermobility of accessory motions, i.e., spin, roll, or glide; one or more of these motions occurs too readily, resulting in excessive range of motion as well as in how frequently the motion occurs. Inter-joint relative flexibility refers to motion of adjoining joints occurring more readily in one of the joints, even if the motion should be occurring in the other joint. For example, during forward bending the lumbar spine flexes more readily than the hip flexes. 8 Stiffness refers to the resistance present during passive elongation of muscle and connective tissue. Stiffness depends on the hypertrophy of muscle and the amount of collagen when considering whole muscle. 9-13
Viscosity also contributes to stiffness and is affected by the rate of movement. 14,15 Movement follows the laws of physics and takes the path of least resistance, with (1) relative flexibility, (2) relative stiffness, and (3) motor learning as determinants of the path. When movement is performed across multiple joints, the body will tend to increase the amount of movement in the joint with lower resistance to motion, or lower stiffness, compared to the joint with higher resistance to motion, or higher stiffness. For example, during hip extension, the lumbar spine will move into extension more readily than the hip joint. Relative flexibility impairments can also occur during single-joint movements, such as knee extension in sitting. If the pelvis tilts posteriorly and the lumbar spine flexes early during the knee movement, this indicates an impairment in relative flexibility of the lumbar spine, with the hamstring muscles being stiffer than the back extensor muscles. 2

Movement system impairments: inducers and modifiers

Sustained alignments and repeated movements associated with daily activities are the inducers of the tissue adaptations and the impaired alignments and movements associated with MSI syndromes. 1,2 For example, people who regularly participate in rotational demand activities have increased lumbopelvic rotation compared to people who do not participate in such activities. 16,17 Several studies have found that the repetition of movements associated with various sports leads to adaptations in different tissues, including bone, the joint and its surrounding tissues, and muscles. 4-7

The effects of sustained alignments and repeated movements on tissue adaptations and the development of symptoms are modified by several factors, including age, gender, tissue mobility, anthropometrics, activity level, and psychological factors. 18-37 Older individuals may respond differently to repeated movements than younger individuals because their joints and surrounding tissues usually have some degree of degeneration. 35,36 Older people also have different pain sensitivity compared to younger people. 21,25 Differences in alignment between men and women 28 may also influence the effect of repeated movements or sustained alignments. Men and women with low back pain show different pain-inducing alignments and movements. 32 Women have increased knee abduction during weight-bearing activities when compared to men, 22 resulting in an increased risk of patellofemoral pain 26 and anterior cruciate ligament tears. 37 Tissue mobility may also influence movement precision. People with joint hypermobility have reduced joint proprioception 33 and may be at greater risk of musculoskeletal conditions. 20 Anthropometrics should also be considered as a potential modifier. For example, women with a lower femoral neck-shaft angle are at increased risk for greater trochanteric pain syndrome. 24 Individuals with a long trunk usually have depressed shoulder alignment, which has been associated with a decreased pain threshold in the upper trapezius muscle region. 19 While appropriate activity levels may protect from musculoskeletal conditions, inadequate or excessive activity levels may increase risk. 18,27,29,38 The development of imprecise motion is also considered to be a factor in the development of musculoskeletal pain.
Psychological factors should also be considered, since they can influence pain intensity 34 and change the outcome of different musculoskeletal conditions such as tendinopathy, low back pain, and anterior cruciate ligament reconstruction. 23,30,31

Impairments of alignment and movement in people with musculoskeletal pain and healthy people

The KPM is based on restoring ideal alignment and correcting movement impairments. Although some studies have not found differences in alignment and movement patterns between healthy people and people with musculoskeletal symptoms, 39,40 others have found significant differences. 41-48 Patellofemoral pain is related to increased peak hip adduction, internal rotation, and contralateral pelvic drop. 46 Studies assessing kinematics of the shoulder complex have identified differences between people with and without shoulder pain. 44,45 Sitting alignment is related to upper quadrant musculoskeletal pain reported in sitting. 42 People with femoroacetabular impingement have different pelvic movement during hip flexion movements compared to healthy subjects. 41,43,48 People with low back pain move their lumbopelvic region to a greater extent, and earlier, during lower limb movements than people without low back pain. 17,47 Although most studies assess impairments of alignment and movement after the onset of musculoskeletal pain, there are also studies showing that some alignment and movement impairments seen in asymptomatic people may increase their risk of developing musculoskeletal pain. For example, lumbopelvic movement impairments during hip abduction 49 as well as standing in more lumbar lordosis 50 may be risk factors for the development of low back pain in prolonged standing.

Movement system impairment examination and classification

The MSI examination and process for classification 51-53 involve interpreting data from a series of tests of alignments and movements. Judgments about the timing and magnitude of movement, the degree of end-range alignment in specific joints, and the effect on symptoms are made during each test. Tests that are symptom-provoking are immediately followed by systematic corrections of the impairment to determine its role in the patient's symptoms. Correction involves (1) minimizing movement that occurs in the early part of the range of motion, or excessive movement, particularly accessory motion, in the affected joint, while increasing movement in other joints, or (2) reducing positions of end-range alignment in specific direction(s). An improvement in the symptoms indicates that the alignment or movement impairment is associated with the patient's symptoms. 53-55 MSI syndromes have been developed for all body regions, including the cervical, thoracic, and lumbar spine, shoulder, elbow and hand, hip, knee, and ankle and foot 1,2 (Table 2).

MSI syndromes: validity and reliability testing

Several studies have been performed to examine the validity of the MSI syndromes, all of them examining either the lumbar region 8,52,56-70 or the knee joint. 71 Partial construct validity has been reported for the MSI syndromes proposed for the lumbar and knee regions. 59,71 Other studies have compared movement impairments and associated signs and symptoms between different MSI syndromes. Gombatto et al. 56 showed that people with Lumbar Rotation with Extension Syndrome displayed an asymmetric pattern of lumbar movement during a trunk lateral flexion test compared to people with Lumbar Rotation Syndrome.
People with Lumbar Rotation Syndrome and people with Lumbar Rotation with Extension Syndrome displayed systematic differences in hip and lumbopelvic region movement during the test of active hip lateral rotation. 52,57 Kim et al. 8 showed that people with Lumbar Rotation with Flexion Syndrome have a greater amount of lumbar flexion during a trunk flexion test compared to people with Lumbar Rotation with Extension Syndrome. People with Lumbar Rotation Syndrome demonstrated greater end-range lumbar flexion during slumped sitting compared to people with Lumbar Rotation with Extension Syndrome. 58 The reliability of examiners' classifications also has been assessed for the lumbar spine and the knee. Clinicians are able to reliably classify people into MSI syndromes for the lumbar spine, 60,62,72---75 even if they have limited clinical experience. 75---77 Kaibafvala et al. 78 assessed reliability for the MSI syndromes for the knee region. Kappa values of intra- and inter-rater reliability for judgments of classifications ranged from 0.66 to 0.71, and 0.48 to 0.58, respectively. MSI syndromes: treatment Treatment includes patient education, analysis and correction of daily activities, and prescription of specific exercises. 1,2,79---81 Patient education refers to educating the patient about how the repetition of impaired movements and sustained alignments in a specific direction may be related to his musculoskeletal condition, and how to correct the impairments during all of his daily activities, particularly those that cause symptoms. The most important part of the program is teaching the patient to perform daily activities correctly and without symptoms. Because the sustained alignments and repeated movements are the cause of the problem, they must be corrected. 1,2 The correction also helps the patient know what contributes to the symptoms and how to decrease or limit the symptoms. Patients are advised to correct their daily activities throughout the day. Recent work has shown that, in people with low back pain (LBP), higher adherence to performing corrected daily activities, compared to adherence to exercise, is associated with greater improvement in function and pain as well as a number of other LBP-related outcomes. 84 The prescription of specific exercises is based on the patient's syndrome and contributing factors identified during the initial examination. The exercises require practicing correction of impaired alignments and movements identified during the clinical tests in the examination. For example, a patient with Hip Adduction Syndrome may present with excessive hip adduction associated with hip pain while performing a partial squat movement test. The partial squat movement test then would be used as a specific exercise, having the patient modify the amount and timing of hip adduction that occurs during the squat. The specific exercises and activities are performed during the treatment sessions and also are part of the home program. Each patient receives pictures or figures of the specific exercises and daily activities with written instructions. Videos also can be used to teach the patient how to perform the exercises and activities. The patient's ability to perform his program is assessed during clinic visits and used to progress the program. Judgments about the patient's knowledge of the key concept of each exercise or activity and independence in performance of each exercise or activity are important information used to make decisions about when and what to progress. 85
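Returning to the inter-rater agreement figures quoted above for the knee-region syndromes, the kappa statistic summarizes chance-corrected agreement between two examiners classifying the same patients. The following is a minimal, illustrative Python sketch, not the analysis used in the cited studies; the syndrome labels and data are hypothetical, and the use of scikit-learn's cohen_kappa_score is an assumption of convenience.

# Hypothetical example: agreement between two raters assigning MSI lumbar syndromes.
from sklearn.metrics import cohen_kappa_score

rater_a = ["rotation", "rotation_extension", "rotation_flexion", "rotation", "rotation_extension"]
rater_b = ["rotation", "rotation_extension", "rotation_extension", "rotation", "rotation_extension"]

kappa = cohen_kappa_score(rater_a, rater_b)   # 1.0 = perfect agreement, 0 = chance-level agreement
print(f"kappa = {kappa:.2f}")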
Clinical trials of treatment of MSI syndromes Several case reports involving treatment of MSI syndromes have been published. 79---81,86---90 The studies describe in detail the examination and treatment of people with shoulder pain, 86 low back pain, 79---81,88 abdominal pain, 90 cervicogenic headache 89 and knee pain. 87 Treatment also was described in a feasibility randomized clinical trial in people with chronic hip pain. 91 In a randomized controlled trial assessing the effect of treatment of people with chronic low back pain, 84,92 Van Dillen et al. 84 found no difference when comparing the efficacy of a Classification-Specific (CS) treatment to a non-Classification-Specific (NCS) treatment in people with chronic non-specific low back pain. Both CS and NCS treatments included some form of exercise and correction of performance of daily activities. The CS treatment involved education, exercise and daily activity correction as described for the MSI syndromes above. The NCS treatment involved education and daily activity correction emphasizing maintenance of a neutral spine. Exercise was directed at strengthening the trunk and increasing the flexibility of the trunk and lower limbs. The authors proposed that the similar improvements found in both groups occurred because both groups were prescribed correction of daily activities that emphasized maintaining a neutral spine while increasing movement of other joints when performing daily activities. The proposal also was based on the fact that people in the CS and NCS groups adhered more, and for longer, to correcting daily activities than they did to exercise. Conclusion The MSI-based classification and treatment approach allows physical therapists to diagnose and treat musculoskeletal conditions based on principles of the KPM, in which impaired alignments and movements are proposed to induce pain and pathology. MSI syndromes and treatment have been described for all body regions. The reliability and validity of the system for some anatomical regions have been partially described, 8 although the efficacy of treatment has not been tested in randomized controlled trials, except in people with chronic low back pain. 84 More randomized controlled trials are needed to assess the efficacy of treatment of MSI syndromes.
Resting heart rate: its correlations and potential for screening metabolic dysfunctions in adolescents Background In pediatric populations, the use of resting heart rate as a health index remains unclear, mainly in epidemiological settings. The aims of this study were to analyze the impact of resting heart rate on screening dyslipidemia and high blood glucose and also to identify its significance in pediatric populations. Methods The sample was composed of 971 randomly selected adolescents aged 11 to 17 years (410 boys and 561 girls). Resting heart rate was measured with oscillometric devices using two types of cuffs according to the arm circumference. Biochemical parameters triglycerides, total cholesterol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol and glucose were measured. Body fatness, sleep, smoking, alcohol consumption and cardiorespiratory fitness were analyzed. Results Resting heart rate was positively related to higher sleep quality (β = 0.005, p = 0.039) and negatively related to cardiorespiratory fitness (β = −0.207, p = 0.001). The receiver operating characteristic curve indicated significant potential for resting heart rate in the screening of adolescents at increased values of fasting glucose (area under curve = 0.611 ± 0.039 [0.534 – 0.688]) and triglycerides (area under curve = 0.618 ± 0.044 [0.531 – 0.705]). Conclusion High resting heart rate constitutes a significant and independent risk related to dyslipidemia and high blood glucose in pediatric populations. Sleep and cardiorespiratory fitness are two important determinants of the resting heart rate. Background Early life is a determinant period in the prevention [1,2] and development [3] of chronic diseases in adulthood and, therefore, the development of inexpensive tools to identify youth at an increased risk are useful in epidemiological and clinical settings. As a result of this point of view, anthropometric variables have been tested and widely used for screening those with an increased cardiovascular risk [4][5][6]. More recently, resting heart rate (RHR) has been suggested as a valuable indicator of risk. Among adults there is scientific evidence to suggest that tachycardia should no longer be viewed as an innocent clinical feature [7]. Similarly, increased values of RHR constitute a significant risk factor in the development of cardiovascular outcomes, such as heart failure, myocardial infarction, sudden cardiac death and stroke (independent of blood pressure and a variety of other risk factors) [8]. However, the literature on this topic is relatively limited in pediatric populations. A previous study [9] found a significant relationship between high RHR and elevated blood pressure in 356 male children and adolescents. Surprisingly, the authors observed that this association occurred in both obese and lean boys. Rabbia et al. [10] also found a positive association between RHR and elevated blood pressure in adolescents of both sexes. Similarly, there is a positive relationship between RHR and lipid variables in obese children and adolescents [11]. The above mentioned data is in favor of the use of RHR as an index in screening pediatric populations at an increased risk. However, since RHR has not been thoroughly studied in epidemiological studies, the determinants of RHR in pediatric populations need to be further clarified. 
Therefore, the purposes of this study were to analyze the impact of resting heart rate for screening dyslipidemia and high blood glucose and also to identify its significance in pediatric populations. Sample This was a school based study, in which the sample was composed of adolescents (11 -17 years-old) of both genders from Londrina, Brazil; which is a medium-sized city (~500,000 inhabitants) located in South Brazil with a high human development index (0.824) [12]. The minimum sample size of 554 adolescents was estimated using an equation for correlation coefficients, adopting r = 0.18 [11], power of 80% and an alpha error of 5% (sample size was increased by 100% due to design effect and by 30% for predictable losses). The sample of schoolchildren was selected in 2011, through a sampling process involving two random stages. The city was divided into five geographical regions (east, west, north, south and center) and two or three schools in each geographical region were randomly selected to participate in the survey. In each of the selected schools, individual classes were randomly selected and thereafter all students in the chosen classes were invited to participate. The inclusion criteria were: (i) self-report of health (absence of previously detected chronic diseases: high blood pressure, diabetes mellitus, any type of dyslipidemia or asthma); (ii) aged between 11-17 years-old. Initially, 1,396 adolescents of both genders agreed to participate and returned the completed, signed consent form. However, 425 boys and girls were later excluded (e.g. absence in the fasting blood sample measurement; lack of 10-12 hours of fasting; refusal to participate in the running test). Therefore, after the field work, 971 adolescents (Male: 42.2% [n = 410] and Female: 57.8% [n = 561]) composed the sample. A comprehensive verbal description of the nature and purpose of the study, as well as the clinical implications of the investigation, was provided to the participants, their parents and teachers. Written informed consent was obtained from the adolescent's parent or legal guardian and all participants gave verbal consent. This study was approved by the local ethical committees and all procedures were in accordance with those outlined by the Declaration of Helsinki. Independent variables In this study, six independent variables were taken into account: body fatness percentage (%BF), sleep pattern, sport practice, cardiorespiratory fitness, cigarette and alcohol consumption.%BF was estimated using an equation based on skinfold thickness specifically for children and adolescents [13]. Sleep pattern was assessed by the question "Do you have trouble sleeping?", with responses based on the likert scale (never [score 1], sometimes [score 2], very often [score 3] and always [score 4]). Sport practice was assessed by the score from section 2 of the Baecke questionnaire [14] and cardiorespiratory fitness was estimated by a maximal multistage 20-meter shuttle run test, in which the peak oxygen uptake (in mL/kg/min) was estimated using a specific equation [15,16]. The number of cigarettes and alcoholic drinks consumed in the previous week was computed. Resting heart rate Oscillometric devices (Omron MX3 Plus), clinically validated for measuring blood pressure in adolescents [17], were used to measure RHR (expressed as beats per minute [beats/min]) and two types of cuffs were used according to the arm circumference (6 mm × 12 mm and 9 mm × 18 mm). 
To determine which cuff would be used, the circumference of the arm of each child was measured, and a cuff with a width of approximately 40% of the arm circumference and a length of approximately 80% of the arm circumference was used [17]. All measurements were registered in a quiet room, with the adolescents resting in the sitting position for 5 minutes with their back supported and feet on the ground. Two measures were taken and the mean value of both was utilized. There are no widely accepted RHR cutoffs; therefore, RHR values were stratified into quartiles provided by a previous study [9]: <70 beats/min; 70-77.4 beats/min; 77.5-85.9 beats/min; ≥86 beats/min. The above-mentioned quartiles were adopted because they (i) were generated in a dataset of Brazilian children and adolescents and (ii) have been associated with high blood pressure independent of obesity status. Blood samples After fasting for 10-12 hours, the adolescents' blood samples were collected in tubes containing ethylenediamine-tetraacetic acid (EDTA) as an anticoagulant and antioxidant, kept on melting ice during transfer, immediately processed to obtain plasma using a refrigerated centrifuge at 4°C (Fanem), and stored at −80°C (Indrel) until the assay was performed. All blood collections (performed by nurses) took place at the participants' schools, and biochemical analyses were done at the University Hospital of the Center of Health Sciences at the Universidade Estadual de Londrina. Biochemical parameters, including serum triglycerides, total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), and glucose, were measured by a biochemical autoanalyser (Dimension RXL, Newark, NJ, USA) used in conjunction with Dade Behring-Siemens kits. Modifications in lipid profile (TC ≥170 mg/dL, LDL ≥130 mg/dL, HDL <45 mg/dL and triglycerides ≥130 mg/dL) and fasting glucose (≥100 mg/dL) were identified [18]. Potential confounders Chronological age, pubertal stage, gender and cardiorespiratory fitness were used as potential confounders and, therefore, adjusted for in the multivariable models. Chronological age was determined as a decimal variable using the difference between the birthday and the date of the assessment. Pubertal stage was identified by the peak height velocity, which was used to estimate biological maturity. The technique estimates time before or after the peak height velocity from the chronological age and anthropometric measures (height, sitting height, estimation of leg length and body weight), as described by Mirwald et al. [19]. Statistical procedures The Kolmogorov-Smirnov test analyzed the distribution of the numerical variables and, when necessary, logarithm transformation was used on variables with non-parametric distribution. Analysis of variance using Tukey's post hoc test compared numerical variables. Pearson correlation assessed the relationship between numerical variables, and a linear regression model was built with the variables that were statistically significant in the Pearson correlation (RHR treated as the dependent variable). The Chi-square test assessed association among categorical variables, and binary logistic regression (odds ratio [OR] and its 95% confidence interval [OR 95%CI]) indicated the magnitude of these associations (RHR treated as an independent variable). Gender, age and pubertal stage were used to adjust both multivariable models (linear regression and binary logistic regression).
Additionally, binary logistic regression was adjusted by cardiorespiratory fitness. The receiver-operating characteristic (ROC) curve (expressed as the area under the ROC curve [AUC]) analyzed the potential of RHR for screening metabolic outcomes. Statistical significance was set at p < .05 and statistical software BioEstat version 5.0 (BioEstat, Tefé, Amazonas) was used for all analyses. Results The sample in this study was composed of 971 adolescents aged 11 to 17 years (410 boys and 561 girls). The mean age and mean RHR were 12.9 ± 1.4 years-old and 82.7 ± 12.5 beats/min, respectively. The general characteristics of the adolescents stratified by RHR values are presented in Table 1. RHR was positively and significantly related to%BF and sleep disorders. Sport practice and cardiorespiratory fitness were positively related (r = 0.18; p = 0,001). Similarly, RHR was negatively and significantly related to cardiorespiratory fitness, sport practice and alcohol consumption. The number of cigarettes was not related to RHR values. Age (r = −0.24; p = 0.001) and pubertal stage (r = −0.09; p = 0.002) was negatively related to RHR, on the other hand, male gender (r = 0.14; p = 0.001) was significantly and positively related to RHR. In the multivariable model, independent of the other variables, only cardiorespiratory fitness and sleep disorder remained significantly related to RHR (Tables 2 and 3). Increased values of LDL-C and HDL-C were not significantly related to RHR (Table 3). On the other hand, RHR was positively and significantly related to triglycerides values. In the multivariable model, only triglycerides maintained the significant relationship with RHR, but TC and glucose did not. The ROC curve indicated significant potential for the RHR in screening adolescents at an increased value of fasting glucose (AUC = 0.611 ± 0.039) and triglycerides (AUC = 0.618 ± 0.044) ( Table 5). On the other hand, the potential for screening decreased values of HDL-C (AUC = 0.518 ± 0.026) and increased values of LDL-C (AUC = 0.525 ± 0.023) and TC (AUC = 0.539 ± 0.028) was limited. RHR was more specific than sensitive for screening the outcomes and the better cutoffs for RHR varied according to the analyzed outcome (except for TC and LDL-C where cutoff = 85.5 beats/min). Discussion The results of this study indicated that higher RHR was related to lower cardiorespiratory fitness, independent of obesity and other confounders. Moreover, the inclusion of cardiorespiratory fitness as a confounder in logistic regression made the associations of the outcomes with RHR non-significant. Previous studies have reported that the relationship between cardiorespiratory fitness and lipid variables/blood pressure in adolescents is mediated by body fatness, whereas the observed relationships with fatness are independent of cardiorespiratory fitness [20]. Our findings indicate an inverse effect of these confounders on the relationship between RHR and the metabolic outcomes (independent of fatness and strongly dependent on cardiorespiratory fitness). The close inverse relationship between cardiorespiratory fitness and RHR has been demonstrated in previous reports by other authors [10]. The recognized effect of cardiorespiratory fitness in autonomic nervous system activity and subsequent adaptations in neurohumoral control (decrease in circulating levels of catecholamines and changes in number or affinity of receptors) [21] seems to be independent of body composition [22] and could offer support to our results. 
A previous study [22] found that parasympathetic indexes of obese adults engaged in ≥2 hours per week of physical exercise were higher than those observed in sedentary adults of normal weight. Moreover, this protective effect has been identified in children. Gutin et al. [23] identified an improvement in parasympathetic activity in obese children submitted to 8 months of a physical training protocol, which decreased after subsequent detraining (changes in parasympathetic activity were not related to modifications in body fatness). In our study, cardiorespiratory fitness was negatively related to RHR (sport practice only in the univariate model) and, therefore, as previously observed for other cardiovascular and metabolic outcomes [1,2], physical activity practice during early life could be useful in the prevention of excessive weight gain [20], the promotion of a lower RHR and, hence, the prevention of cardiovascular diseases in adulthood. Additionally, a high RHR was also associated with sleep pattern. Recently, in a systematic review/meta-analysis, Gallicchio and Kalesan [24] identified that people with both shorter and longer periods of sleep are at an increased risk of all-cause mortality. However, the actual pathway by which sleep is linked to cardiovascular complications [25] is not clear, although it is plausible to believe that a pathway exists. Adolescents are prone to perform more activities at night (TV viewing and computer usage) than children and thus they are more exposed to shorter periods of sleep. Short sleep may act as an acute and chronic stressor and, therefore, affect the sympathetic activity of the organism and lead to an increase in RHR [26]. Moreover, the concentration of pro-inflammatory agents (interleukin-6, tumor necrosis factor-alpha and C-reactive protein) is increased in people with short sleep periods [26]. Our findings highlight the fact that health professionals must target the promotion of adequate sleep patterns among pediatric populations, because this harmful relationship between sleep pattern and a higher RHR seems significant from an early age. RHR has significant potential for screening increased fasting glucose values. In agreement with this, Dubose et al. [27] recently identified that RHR can be used, together with other variables, to screen American adolescents with glucose impairment. Research has indicated that insulin resistance has an important relationship with sympathetic activation [28][29][30], which significantly affects the RHR. Similarly, dysfunction in lipid metabolism was also related to a high RHR. A previous study [11] found a positive and significant relationship between RHR, triglycerides and TC among obese children and adolescents. On the other hand, the same authors point out that the causality/pathway by which a high RHR is linked to lipid dysfunction cannot be clearly determined. It is plausible to believe that insulin resistance could also be relevant in this process [28]. In fact, insulin resistance affects the process of energy production, leading to an increased use of lipids as fuel and a higher production of reactive oxygen species in the brain (through the activation of nicotinamide adenine dinucleotide phosphate [NADPH] oxidase), which increases oxidative stress in the rostral ventrolateral medulla, the region that determines basal sympathetic activity [29,30].
Apparently, this inflammatory process occurs irrespective of the presence of obesity and ratifies the potential of RHR for screening adolescents at an increased cardiovascular risk. Palatini [7] pointed out that among adults there are no doubts that an RHR ≥80 to 85 beats per minutes implies an increased risk for health. In pediatric populations this RHR range seems not to be true, because the cutoffs were different according to outcomes analyzed (ranging from 81.5 to 89). Moreover, previous studies (and also our findings) have reported a significant RHR variation according to age groups [9,11,31]. Therefore, future cutoff tables should be developed in longitudinal observations and take into account adjustments for gender and age. Our study has strengths, such as the sample size calculation and random process for selecting the schools/classes. However, the limitations must be recognized too. Initially, the absence of dietary habits related to RHR (e.g. cola intake, coffee, energy drinks) constitutes a significant weakness in our study and a target for further investigations. Our sample has a wide age range and the peak height velocity has limitations when applied to some age groups within this range. On the other hand the use of other methods to estimate pubertal stage involves ethical and logistical complications. The absence of measures related to adipokines and insulin resistance must be recognized and should be the focus of future investigations. Finally, the low magnitude of the correlation coefficient found [32] should be taken into account in further inferences, because it denotes the action of other variables in the relationship between RHR and the analyzed outcomes. Thus, mediated effect could be controlled by the simultaneous use of the RHR together with other variables (e.g. general obesity, abdominal obesity, low cardiorespiratory fitness) to screen adolescents at an increased metabolic risk in further studies [27]. Conclusions The present data indicates that a high RHR constitutes a significant and independent risk factor to screen alterations in glucose and triglycerides from early in life, but further studies of RHR cutoffs are necessary. Moreover, cardiorespiratory fitness and sleep were significantly correlated to RHR, independent of a variety of potential confounders.
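As a concrete footnote to the screening analysis reported above (ROC curves and odds ratios relating RHR to dichotomized metabolic outcomes), the sketch below shows how such quantities can be computed. It is an illustrative Python example with simulated data; the variable names, the toy relationship, and the use of scikit-learn and statsmodels are assumptions and do not reproduce the BioEstat analyses used in this study.

# Hedged sketch: ROC/AUC, a Youden-index cutoff, and a crude odds ratio for RHR vs. high fasting glucose.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve
import statsmodels.api as sm

rng = np.random.default_rng(0)
rhr = rng.normal(82.7, 12.5, 500)                      # resting heart rate, beats/min (simulated)
p = 1 / (1 + np.exp(-(-6.0 + 0.05 * rhr)))             # toy relationship, not the study's model
high_glucose = rng.binomial(1, p)                      # 1 = fasting glucose >= 100 mg/dL (simulated)

auc = roc_auc_score(high_glucose, rhr)
fpr, tpr, thresholds = roc_curve(high_glucose, rhr)
cutoff = thresholds[np.argmax(tpr - fpr)]              # cutoff maximizing sensitivity + specificity - 1
print(f"AUC = {auc:.3f}, cutoff ~ {cutoff:.1f} beats/min")

X = sm.add_constant(rhr)
or_per_beat = np.exp(sm.Logit(high_glucose, X).fit(disp=0).params[1])
print(f"Odds ratio per 1 beat/min = {or_per_beat:.3f}")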
Best Fidelity Conditions for Three Party Quantum Teleportation Using the entangled three qubit states classified by Acin et al., we find the best fidelity conditions for quantum teleportation among three parties. Recently Acin et al. showed that there are 7 different entangled states in three qubit states [4]. In this paper, we consider quantum teleportation in three parties with those entangled states. In fact Yeo considered quantum teleportation among three parties using the GHZ state and the W state [3]. So we will consider quantum teleportation among three parties using the other entangled states, except the GHZ state and the W state. For each case, we will provide the fidelity, the best fidelity condition and the teleportation protocols. This paper is organized as follows. In Section I, we first review three party quantum teleportation with the GHZ state and the W state. In Section II, we consider quantum teleportation in three parties sharing different entangled states based on Acin et al.'s classification of three qubit states, and the roles of the parties (sender, co-sender, and receiver) are determined. Also we give the best fidelity conditions for each case. In Section III, we summarize and discuss our results. I. QUANTUM TELEPORTATION IN THREE PARTIES WITH SYMMETRIC THREE-QUBIT STATES Let us review quantum teleportation among three parties. Quantum teleportation in three parties sharing a three-qubit entangled state consists of three steps: 1) First, the three parties share a three-qubit entangled state. The sender performs a Bell basis measurement on his (or her) two qubits (one is the information qubit and the other the qubit entangled with the other parties) and sends the measurement result j to the co-sender and the receiver. The Bell basis measurement makes use of the following projection operators: |Φ+⟩⟨Φ+| for j = 1, |Φ−⟩⟨Φ−| for j = 2, |Ψ+⟩⟨Ψ+| for j = 3, and |Ψ−⟩⟨Ψ−| for j = 4, where |Φ±⟩ = (|00⟩ ± |11⟩)/√2 and |Ψ±⟩ = (|01⟩ ± |10⟩)/√2. 2) The co-sender performs a single qubit measurement, according to the sender's measurement result j, and sends the measurement result k to the receiver. The single qubit measurement applies the following projections: |µ+⟩⟨µ+| for k = 1 and |µ−⟩⟨µ−| for k = 2, where |µ+⟩ = sin ν|0⟩ + e^{iκ} cos ν|1⟩ and |µ−⟩ is the orthogonal state. 3) Given the protocol provided in secret when the three parties are separated, the receiver performs local unitary operations according to the measurement results j and k. Then the party recovers the information state with a probability. If |τ⟩ denotes the receiver's reconstructed state, then the success rate is measured by the fidelity between the original information state |ψ⟩ = cos(θ/2)|0⟩ + e^{iφ} sin(θ/2)|1⟩ and |τ⟩, which is F = |⟨ψ|τ⟩|². When the GHZ state is shared by the three parties, the fidelity is F_GHZ = 2/3 + (1/3) sin 2ν, given the protocol in Table I. When the W state is shared by the three parties, the fidelity is F_W = 7/9, given the protocol in Table II. We note here that F_W > F_GHZ on average. However, if sin 2ν is greater than 1/3, then F_GHZ > F_W, and the best fidelity condition for F_GHZ is ν = π/4 + mπ. Here the best fidelity condition means that if the co-sender Bob performs his single qubit measurement, according to the sender's measurement result j, using the above projections evaluated at these parameter values, then the fidelity produces the best result. II. Quantum teleportation with asymmetric states The classification of three-qubit states by Acin et al.
is as follows: Type 2b (GHZ state), Type 3a (Tri-Bell state), Type 3b (Extended GHZ states), Type 4a, Type 4b, Type 4c, and Type 5 (Real states). Note that the tri-Bell state is equivalent to the W state. Therefore, we need to consider only the type 3-5 states. We now show all schemes of quantum teleportation in three parties sharing one of these types. Note the reference of protocols. The two W protocols are equivalent with respect to a permutation of parties. Suppose that they want to teleport the information state cos(θ/2)|0⟩ + e^{iκ} sin(θ/2)|1⟩. We know that Bob and Cindy are symmetric with respect to a permutation of them. There are four choices in determining their roles in quantum teleportation. We will use '→' to mean that a party sends the measurement result to another one via CC (one-way) and '↔' to mean that both → and ← are possible via CC (two-way). 3. Bob(sender) → Cindy(co-sender) → Alice(receiver) The fidelity is 5/9 + (2/9) cos κ sin 2ν, and the protocol is of GHZ. The best fidelity condition is κ = 2nπ, ν = π/4 + mπ. B. Quantum teleportation in three parties sharing the type 4a state Suppose that Alice, Bob and Cindy shared the following state. Since Bob and Cindy are symmetric parties, there are four cases as follows: Alice(sender) → Bob ↔ Cindy The fidelity is 2/3 and the protocol is of W. 2. Bob(sender) → Alice(co-sender) → Cindy(receiver) The fidelity is 2/3 and the protocol is of the second W. C. Quantum teleportation in three parties sharing the type 4b state Suppose that Alice, Bob and Cindy shared the following state. Since there are no symmetric parties, there are six cases as follows: 1. Alice(sender) → Bob(co-sender) ↔ Cindy(receiver) The fidelity is 1/2 + (1/6) cos κ sin 2ν and the protocol is of GHZ. The best fidelity condition is κ = 2nπ, ν = π/4 + mπ. 2. Alice(sender) → Cindy(co-sender) → Bob(receiver) The fidelity is 3/4 and the protocol is of W. Alice(sender) → Bob ↔ Cindy The fidelity is 3/4 and the protocol is of W. 2. Bob(sender) → Alice(co-sender) → Cindy(receiver) The fidelity is 1/2 + (1/6) cos κ sin 2ν and the protocol is of the second GHZ. The best fidelity condition is κ = 2nπ, ν = π/4, and the protocol is of W. E. Quantum teleportation in three parties sharing the type 5 state Suppose that Alice, Bob and Cindy shared the following state. Since Bob and Cindy are symmetric parties, there are four cases as follows: The fidelity is 2/3 and the protocol is of W. All schemes of quantum teleportation among three parties are shown in Table VI. We note here that there are only two protocols, W and GHZ, in Tables III and IV. This is due to the different entanglement structure of the W and GHZ states. In other words, the W state cannot be transformed to the GHZ state with a probability. This implies that protocols can classify quantum states like stochastic LOCC. That is, we classify the three-qubit states by the protocols they admit (Table VI); for example, for the extended GHZ state with roles Alice(sender) → Bob ↔ Cindy, the protocol is GHZ, the fidelity is 5/9 + (2/9) cos κ sin 2ν, and the best fidelity condition is κ = 2nπ, ν = π/4 + mπ.
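As a quick numerical illustration of the fidelity expressions quoted above, the short Python sketch below simply evaluates the closed-form results F_W = 7/9 and F_GHZ = 2/3 + (1/3) sin 2ν over a grid of ν (i.e., with the κ-dependent terms taken at their best case); it does not re-derive the teleportation protocols, and the parameter grid is arbitrary.

# Evaluate the quoted fidelities and locate the best-fidelity condition numerically.
import numpy as np

nu = np.linspace(0.0, np.pi, 721)            # co-sender measurement angle
f_w = 7.0 / 9.0                              # W-state protocol, independent of nu
f_ghz = 2.0 / 3.0 + np.sin(2.0 * nu) / 3.0   # GHZ-state protocol, best-case kappa

i = int(np.argmax(f_ghz))
print(f"max F_GHZ = {f_ghz[i]:.4f} at nu = {nu[i]:.4f} rad (pi/4 = {np.pi / 4:.4f})")
print(f"F_GHZ exceeds F_W (=7/9) exactly where sin(2*nu) > 1/3; fraction of grid: {np.mean(f_ghz > f_w):.2f}")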
Proportional Assist Ventilation Improves Leg Muscle Reoxygenation After Exercise in Heart Failure With Reduced Ejection Fraction Background Respiratory muscle unloading through proportional assist ventilation (PAV) may enhance leg oxygen delivery, thereby speeding off-exercise oxygen uptake (VO2) kinetics in patients with heart failure with reduced left ventricular ejection fraction (HFrEF). Methods Ten male patients (LVEF = 26 ± 9%, age 50 ± 13 years, and body mass index 25 ± 3 kg/m²) underwent two constant work rate tests to the limit of tolerance at 80% of the peak work rate from a maximal cardiopulmonary exercise test, under PAV and sham ventilation. Post-exercise kinetics of VO2, vastus lateralis deoxyhemoglobin ([deoxy-Hb + Mb]) by near-infrared spectroscopy, and cardiac output (QT) by impedance cardiography were assessed. Results PAV prolonged exercise tolerance compared with sham (587 ± 390 s vs. 444 ± 296 s, respectively; p = 0.01). PAV significantly accelerated VO2 recovery (τ = 56 ± 22 s vs. 77 ± 42 s; p < 0.05), being associated with a faster decline in Δ[deoxy-Hb + Mb] and QT compared with sham (τ = 31 ± 19 s vs. 42 ± 22 s and 39 ± 22 s vs. 78 ± 46 s, p < 0.05). A faster off-exercise decrease in QT with PAV was related to longer exercise duration (r = −0.76; p < 0.05). Conclusion PAV accelerates the recovery of central hemodynamics and muscle oxygenation in HFrEF. These beneficial effects might prove useful for improving the tolerance to repeated exercise during cardiac rehabilitation. INTRODUCTION The rate at which oxygen uptake (VO2) decreases after dynamic exercise has been used to assess disease severity and prognosis and, more recently, the effectiveness of interventions in patients with heart failure with reduced ejection fraction (HFrEF) (Guazzi et al., 2004; Dall'Ago et al., 2006; Compostella et al., 2014; Georgantas et al., 2014; Fortin et al., 2015; Bailey et al., 2018). Although oxygen (O2) delivery is usually in excess of the decreasing O2 demands during recovery from exercise in normal subjects, this might not be the case in HFrEF, a phenomenon that helps to explain why the evaluation of exercise recovery kinetics has gained popularity in the clinical arena (Kemps et al., 2009; Poole et al., 2012). Exercise recovery kinetics have been shown to be more reproducible than those at the onset of exercise, and less influenced by oscillatory breathing or the confounding effects of a prolonged "cardiodynamic" phase I (Francis et al., 2002; Kemps et al., 2007). Moreover, activities of daily living are characterized by their short-term and repetitive nature, thereby suggesting that fast recovery from effort is important for the successful completion of any subsequent task (Hirai et al., 2019). In this context, a previous study has shown that unloading the respiratory musculature with proportional assist ventilation (PAV) was associated with improved peripheral muscle oxygenation during constant-load exercise, as indicated by blunted changes in deoxyhemoglobin ([deoxy-Hb + Mb]) determined by near-infrared spectroscopy (NIRS), and longer exercise tolerance in patients with HFrEF (Borghi-Silva et al., 2008a), chronic obstructive pulmonary disease (COPD) (Borghi-Silva et al., 2008b), and HFrEF-COPD coexistence (da Luz et al.). Interestingly, inspiratory muscle training associated with whole-body training also improved the cardiorespiratory responses to exercise, leading to a faster VO2 recovery in HFrEF (Dall'Ago et al., 2006).
Based on the previous evidence indicating that post-exerciseVO 2 kinetics can be accelerated by interventions focused on improving O 2 delivery (Borghi-Silva et al., 2008a), this study hypothesized that, compared to sham ventilation, the rate of increase in muscle reoxygenation would be accelerated by PAV in HFrEF. Confirmation of this hypothesis indicates that the beneficial effects of respiratory muscle unloading on leg O 2 delivery are not limited to the onset of exercise (Borghi-Silva et al., 2008a), lending support to the notion thatVO 2 recovery kinetics are clinically useful to assess the efficacy of interventions in this patient population. Subjects and Design The current study cohort included 10 non-smoking male patients who were recruited from the HFrEF outpatient clinic of the Institution (Miocardiopathy Ambulatory, Division of Cardiology). Patients with HFrEF satisfied the following inclusion criteria: (1) diagnosis of HFrEF documented for at least 4 years; (2) three-dimensional echodopplercardiography showing left ventricular ejection fraction (LVEF) <35%; (3) New York association functional class II and III; and (4) no hospitalizations in the previous 6 months. All patients were optimally treated according to the American Heart Association/American College of Cardiology treatment recommendations for stage "C" patients (i.e., reduced LVEF and current or previous symptoms of heart failure) (Hunt et al., 2005). All patients were judged to be clinically stable and compensated on medical therapy at the time of evaluation. In addition, patients were familiarized with stationary bicycle cardiopulmonary exercise tests prior to data collection. Patients were excluded from study if they (1) demonstrate evidence of obstructive pulmonary disease [forced expiratory volume in 1 s (FEV 1 )/forced vital capacity (FVC) ratio of <70%]; (2) have a history of smoking; (3) have a history of exerciseinduced asthma; (4) have unstable angina or significant cardiac arrhythmias; (5) have anemia (hemoglobin <13 g%); (6) had myocardial infarction within the previous 12 months; (7) have primary valvular heart disease, neuromuscular or musculoskeletal disease, or other potential causes of dyspnea or fatigue; or (8) had participated in cardiovascular rehabilitation in the preceding year. Patients gave a written informed consent, and the study protocol was approved by the Institutional Medical Ethics Committee (CEP 0844/06). Study Protocol Subjects performed a ramp-incremental cardiopulmonary exercise test (CPX) on a cycle ergometer (5-10 W/min) to determineVO 2 at peak exercise. These loads were individually adjusted according to the severity of symptoms and the severity of the disease. On a separate day, subjects performed a high-intensity constant work rate (CWR) trial test at 80% peak workrate (WR) to individually select PAV's flow and volume assist levels. At a subsequent experimental visit, the patients undertook, 1 h apart, two CWR at the previously defined WR to the limit of tolerance (Tlim, s). Data were also recorded during the 5-min of passive recovery (without any muscle contraction), which followed exercise. During these tests, patients were randomly assigned to receive sham ventilation and the pre-selected levels of PAV. The patients and the accompanying physician were unaware of the ventilation strategy (PAV or sham) under use. This was accomplished by visually isolating the ventilator and its monitor from both the physician's and the patient's view. 
Vastus lateralis muscle oxygenation levels were assessed by NIRS. In addition, systemic O 2 delivery was followed by continuous monitoring of exercise cardiac output (transthoracic impedance) and metabolic and ventilatory measurements were collected breath-by-breath. Non-invasive Positive Pressure Ventilation PAV was applied via a tight-fitting facial mask with pressure levels being delivered by a commercially available mechanical ventilator (Evita-4; Draeger Medical, Lübeck, Germany). PAV is a non-invasive modality that provides flow (FA, cmH 2 O L −1 s −1 ) and volume assistance (VA, cmH 2 O/L) with the intent of unloading the resistive and elastic components of the work of breathing. PAV levels were individually set on a preliminary visit using the "run-away" method: the protocols for adaptation at rest and exercise were as previously described (Younes, 1992;Bianchi et al., 1998;Carrascossa et al., 2010). Sham ventilation was applied via the same equipment using the minimal inspiratory pressure support of 5 cmH 2 O; moreover, 2 cmH 2 O of positive end-expiratory pressure was used to overcome the resistance of the breathing circuit (Borghi-Silva et al., 2008a,b). Both PAV and sham were delivered with an O 2 inspired fraction of 0.21. Maximal and Submaximal Cardiopulmonary Exercise Testing Symptom-limited CPX was performed on a cycle ergometer using a computer-based exercise system (CardiO 2 System TM Medical Graphics, St. Paul, MN). Breath-by-breath analysis ventilatory expired gas analysis was obtained throughout the test. Incremental adjustment of work rate was individually selected (usually 5-10 W/min). The load increment was individually selected based on the symptoms of dyspnea reported by the patient for some physical activities and the experience of the research team. In patients with more severe symptoms such as dyspnea to walk on level ground, the load increase was 5 W, while those who did not report fatigue for this activity, an increase of 10 W was selected, which is considered to test completion ideally between 8 and 12 min (Neder et al., 1999). The carbon dioxide (CO 2 ) and O 2 analyzers were calibrated before and immediately after each test using a calibration gas (CO 2 5%, O 2 12%, and N 2 balance) and a reference gas [room air after ambient temperature and pressure saturated (ATPS) to standard temperature and pressure, dry (STPD) correction]. A Pitot tube (Prevent Pneumotach TM , MGC) was calibrated with a 3-L volume syringe by using different flow profiles. As a bidirectional pneumotachograph based on turbulent flow, the Pitot tube was adapted at the opening of the mask used for noninvasive ventilation. The following data were recorded:VO 2 (ml/min),VCO 2 (ml/min), minute ventilation (V E , L/min), and the partial pressure of end-tidal CO 2 (P ET CO 2 ) (mmHg). Ventilatory efficiency (V E /VCO 2 slope) was defined as the ventilatory response relative to CO 2 production. TheV E /VCO 2 slope provides the ventilatory requirements to wash out metabolically produced CO 2 (Keller-Ross et al., 2016). PeakVO 2 was the highest 15-s averaged value at exercise cessation (Neder et al., 1999). In addition, 12-lead electrocardiographic monitoring was carried out throughout testing. Subjects were also asked to rate their "shortness of breath" at exercise cessation using the 0-10 Borg's category-ratio scale, and symptom scores were expressed in absolute values and corrected for exercise duration. 
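As an illustration of the ventilatory efficiency index defined above, the VE/VCO2 slope is the slope of a straight-line fit of minute ventilation against CO2 output over the test. The sketch below shows that computation in Python on simulated breath-by-breath data; the numbers and variable names are assumptions and do not represent the CardiO2 system's actual output format.

# Hedged sketch: VE/VCO2 slope from simulated breath-by-breath gas-exchange data.
import numpy as np

rng = np.random.default_rng(1)
vco2 = np.linspace(0.4, 1.8, 300)                          # CO2 output, L/min
ve = 4.0 + 32.0 * vco2 + rng.normal(0.0, 1.5, vco2.size)   # minute ventilation, L/min

slope, intercept = np.polyfit(vco2, ve, 1)                 # least-squares straight line
print(f"VE/VCO2 slope = {slope:.1f} (higher values indicate poorer ventilatory efficiency)")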
Capillary samples were collected from the ear lobe for blood lactate measurements (mEq/L) at rest and at exercise cessation (Yellow Springs 2.700 STAT plus TM , Yellow Springs Instruments, OH, United States). Skeletal Muscle Oxygenation Skeletal muscle oxygenation profiles of the left vastus lateralis were evaluated using a commercially available NIRS system (Hamamatsu NIRO 200 TM , Hamamatsu Photonics KK, Japan) during the CWR tests with PAV and sham (Borghi-Silva et al., 2008b). Previously, the skin under the probe was shaved in the dominant thigh. The skinfold was < 12.5 mm in all patients to ensure that the amount of fat between the muscle probe did not interfere with the signals (van der Zwaard et al., 2016). The light probe was placed to the belly of the vastus lateralis muscle, approximately 15 cm from the upper edge of the patella, and firmly attached to the skin using adhesive tape (Goulart et al., 2020b) and involved in a black closed mesh with a velcro. Briefly, one fiberoptic bundle carries the NIR light produced by the laser diodes to the tissue of interest while a second fiberoptic bundle returns the transmitted light from the tissue to a photon detector in the spectrometer. The intensity of incident and transmitted light is recorded continuously and, together with the relevant specific extinction coefficients, used for online estimation and display of the changes from the resting baseline of the concentrations of [deoxy-Hb + Mb] (Borghi-Silva et al., 2008b). [Deoxy-Hb + Mb] levels were obtained second-bysecond at rest, during exercise, and 5 min of recovery. [Deoxy-Hb + Mb] has been used as a proxy of fractional O 2 extraction in the microcirculation, reflecting the balance between O 2 delivery and utilization (Sperandio et al., 2009). In order to reduce intrasubject variability and improve intersubject comparability, [deoxy-Hb + Mb] values were expressed as the percentage of the maximal value determined on a post-exercise maximal voluntary contraction (MVC) after 5-min recovery. This study used a single probe consisting of eight laser diodes operating at two wavelengths (690 and 830 nm). Due to the uncertainty of the differential pathlength factor (DPF) for the quadriceps, we did not use a DPF in the present study. The distance between the light emitters and the receiver was 3.5 cm (Goulart et al., 2020b). Central Hemodynamics Cardiac output (Q T , L/min) was measured using a calibrated signal-morphology impedance cardiography device (PhysioFlow PF-05, Manatec Biomedical, France). The PhysioFlow principle is based on the assumption that variations in impedance occur when an alternating current of high frequency (75 kHz) and low magnitude (1.8 mA) passes through the thorax during cardiac ejection (Borghi-Silva et al., 2008a). In preliminary experiments, the system detected small changes in Q T (∼0.1 L/min) with acceptable accuracy (within ± 10% for all readings) (Borghi-Silva et al., 2008a). The values were recorded as delta ( ) from baseline and expressed relative (%) to the amplitude of variation from baseline to the steady-state with sham ventilation (within ± 2 standard deviations of the local mean). Kinetics Analysis Breath-by-breathVO 2 , [deoxy-Hb + Mb], HR, and Q T data were time aligned to the cessation of exercise and the first 180 s of recovery were interpolated second by second (SigmaPlot 10.0 Systat Software Inc., San Jose, CA, United States). 
Data were analyzed from the last 30 s of exercise, to obtain a more stable baseline, and over the 180 s of recovery; i.e., only the primary component of the response was considered. Using this approach, it was assured that the same amount of data was included in the kinetic analysis of VO2, [deoxy-Hb + Mb], and QT for each intervention, minimizing model-dependent effects on the results. The model used for fitting the kinetics response was Y(t) = Y_ss + A_ρ · exp[−(t − TD_ρ)/τ_ρ], where the subscripts "ss" and "ρ" refer to steady-state and primary component, respectively, and "A," "TD," and "τ" are the amplitude, time delay, and time constant of the exponential response of interest (i.e., ~time to reach 63% of the response following the end of exercise), respectively. The overall kinetics of [deoxy-Hb + Mb] were determined by the mean response time (MRT = τ + TD) (Mazzuco et al., 2020). Statistical Analysis The required number of patients to be assessed (n = 10, crossover study) was calculated considering the τ (s) of VO2 during PAV and sham in HF patients as the main outcome (Mazzuco et al., 2020), assuming an α risk of 5% and a β of 20%. The SPSS version 13.0 statistical software was used for data analysis (SPSS, Chicago, IL, United States). According to data distribution, results were reported as mean ± SD or as median and range for symptom scores. The primary end point of the study was the change in MRT-[deoxy-Hb + Mb] with PAV compared to sham. Secondary end points included Tlim, changes in τVO2, and QT recovery kinetics. To contrast differences between PAV and sham on exercise responses and kinetic measurements, non-paired t or Mann-Whitney tests were used as appropriate. Pearson's product moment correlation was used to assess the level of association between continuous variables. The level of statistical significance was set at p < 0.05 for all tests. RESULTS All patients completed the maximal and submaximal exercise tests. Baseline characteristics of the HFrEF patients are presented in Table 1. The LVEF ranged from 22 to 26%. Peak WR and VO2 of all patients were below the age- and gender-corrected lower limits of normality (Neder et al., 1999). Eight patients were Weber class C and two were class B. As anticipated by long-term β-blocker therapy, patients presented with a reduced peak HR response. Physiological Responses at the Tlim After Sham vs. PAV The values selected for volume and flow assist during PAV were 5.6 ± 1.4 cmH2O/L and 3.0 ± 1.2 cmH2O L−1 s−1, respectively. PAV significantly improved exercise tolerance as shown by a longer Tlim compared to sham ventilation (p < 0.05, Table 2). There was no significant change in VO2 at Tlim; however, a significantly higher VCO2 was observed with PAV (p < 0.05, Table 2). In addition, ventilatory efficiency improved with PAV, as demonstrated by a significant reduction in the VE/VCO2 slope compared to sham ventilation (p < 0.05, Table 2). Off-Exercise Dynamics After Sham and PAV All fitted data were included in the kinetics analysis, as r² values ranged from 0.90 to 0.99. Off exercise, PAV accelerated VO2 kinetics when compared to sham ventilation (representative subject in Figure 1A and sample values in Table 3). In parallel, QT recovery kinetics were faster with PAV (Figure 1B and Table 3; p < 0.05). The accelerated QT kinetics were largely explained by a faster HR recovery with PAV (Table 3). Similar speeding effects of PAV were observed in relation to [deoxy-Hb + Mb] (Figure 1C and Table 3).
Consistent with these results,VO 2 , Q T , and [deoxy-Hb + Mb] MRT values were shorter with PAV compared to sham (Figure 2). The improvement in Q T dynamics with active intervention was related to enhanced exercise tolerance (p < 0.001, Figure 3). DISCUSSION The novel findings of the present study in patients with stable, but advanced, HFrEF are as follows: (1) PAV improved exercise tolerance and ventilatory efficiency; (2) PAV accelerated the recovery ofVO 2 , as well as [deoxy-Hb + Mb] (a noninvasive estimate of fractional O 2 extraction) (Barstow, 2019), and central hemodynamics; and (3) a faster recovery of central hemodynamics with PAV was associated with better exercise tolerance. These data indicate that unloading the respiratory muscles has positive effects on O 2 delivery to, and utilization by, the peripheral muscles during passive recovery from exercise in HFrEF. These results set the stage for future studies assessing a role for respiratory muscle unloading in enhancing the tolerance to repeated (interval) exercise in these patients. Effects of PAV on Muscle Reoxygenation Kinetics It is widely recognized that the skeletal muscle deoxygenation at the onset of exercise in patients with HFrEF is related to impairments of local O 2 delivery and utilization (Richardson et al., 2003). In addition, experimental evidence suggests that, as HFrEF progresses, there is a slower recovery of microvascular PO 2 (PmvO 2 ), reflected by impaired microvascular O 2 deliveryto-utilization matching in the active muscle, i.e., lower PmvO 2 (Copp et al., 2010). A lower PmvO 2 , in turn, may impair the recovery of intracellular metabolic homeostasis, delaying phosphocreatine resynthesis after exercise in HFrEF. These important metabolic changes increase muscle fatigability, likely Leg effort (0-10) 5.9 ± 3.0 5.4 ± 2.5 0.49 lactate (peak-rest, mmol/L) 2.10 ± 1.16 1.88 ± 1.14 0.67 VO 2 , oxygen consumption;VCO 2 , carbon dioxide output; RER, respiratory exchange ratio;V E , minute ventilation; P ET CO 2 , end-tidal partial pressure for CO 2 ; HR, heart rate. p < 0.05 (paired t or Wilcoxon tests for between-group differences at a given time point). Values are means ± SD. impairing the ability to perform subsequent physical tasks (Krause et al., 2005;Copp et al., 2010). In this sense, ventilatory strategies that can reduce fatigability and increase muscle recovery for a new high-intensity task would be relevant for the cardiopulmonary rehabilitation of these patients. In addition, HFrEF may be associated with redistribution of an alreadyreduced cardiac output toward the respiratory muscles, leading to lower peripheral muscle perfusion and O 2 supply. Collectively, these abnormalities may impair leg muscles' oxidative capacity with negative effects on dyspnea, leg discomfort, and exercise tolerance in these patients (Poole et al., 2012). In the present study, PAV accelerated the recovery of leg muscle oxygenation, as indicated by a faster decrease in [deoxy-Hb + Mb] ( Table 3 and Figure 2). The explanation for this finding might be multifactorial. For instance, PAV may have increased peripheral vascular conductance via lower sympathetic outflow (Olson et al., 2010) in response to a lessened respiratory muscle metaboreflex (Sheel et al., 2018). In fact, this was previously shown that at a given Q T and time, PAV was associated with increased oxygenation and higher blood flow to the appendicular musculature in patents with HFrEF, suggesting blood flow redistribution (Borghi-Silva et al., 2008b). 
Of note, Miller et al. found that decreasing the work of breathing with inspiratory positive pressure ventilation increased hindlimb blood flow out of proportion to increases in cardiac output in dogs with experimental HFrEF (Miller et al., 2007). Thus, bulk blood flow to the legs may have been enhanced by PAV despite a faster decrease in Q T , which would tend to reduce convective O 2 delivery at a given time point. The positive effects of PAV on muscle blood flow during high-intensity exercise may have persisted throughout the recovery phase, leading to more pronounced post-exercise hyperemia (Goulart et al., 2020a). A preferential distribution of local blood flow toward type II fibers, which are less efficient on O 2 utilization compared to type I fibers, is also conceivable (Barstow et al., 1996;Poole et al., 2012). Another possible mechanism demonstrated is that under hypoxia conditions, [deoxy-Hb + Mb] occurs at a lower energy output (Rafael de Almeida et al., 2019). It should also be acknowledged that the positive effects of PAV on on-exercisė VO 2 kinetics (i.e., low O 2 deficit) may have decreased O 2 debt, leading to a faster decrease in off-exerciseVO 2 (Mazzuco et al., 2020). Consistent with the current findings, this study found that respiratory muscles unloading reduced leg fatigue during high-intensity isokinetic exercise, supporting evidence that this strategy might have an adjunct role to improve patients' response to rehabilitative exercise in HFrEF . The QT off-kinetics were also accelerated with PAV (Table 3 and Figure 1). This might be related to the fact that PAV was associated with lower O 2 demands during recovery, likely due to improved muscle bioenergetics, i.e., faster PCr resynthesis (Yoshida et al., 2013). Additionally, a lower sympathetic drive with non-invasive ventilation may have prompted a faster increase in parasympathetic tonus (Borghi-Silva et al., 2008c); in fact, the quicker decrease in Q T was largely secondary to a faster HR recovery (Table 3). Interestingly, a strong correlation between faster Q T decline and increases in Tlim with PAV was found (Figure 3). Again, this might reflect a larger decrease in sympathetic efference in patients who derived greater benefit from PAV. Additional studies quantifying sympathetic neural outflow at similar exercise duration with PAV and sham ventilation are warranted to confirm (or negate) this hypothesis (Borghi-Silva et al., 2008c;Reis et al., 2014). PAV and Ventilatory Efficiency in HFrEF The present study found that respiratory muscle unloading with PAV was associated with improved ventilatory efficiency, i.e., lowerV E -VCO 2 relationship ( Table 2). Of note, however, this was not a consequence of lowerV E at a givenVCO 2 , but rather similarV E despite a higherVCO 2 . HigherVCO 2 (and, to a lesser extent,VO 2 ) at exercise cessation with PAV than sham might reflect the effects of a longer test in the former intervention during the PAV trial. This may also occur due to the dynamics oḟ VCO 2 and its relationship with the kinetics of CO 2 storage and production (Scott Bowen et al., 2012). It remains unclear, however, whyV E remained unaltered despite a higher CO 2 "load" since the respiratory neural drive, lung mechanics, or ventilation/perfusion (mis)matching was not assessed. Regardless of the mechanism, a reduction in thė V E -VCO 2 through pharmacological and non-pharmacological interventions may have relevant clinical implications, including improved survival (Paolillo et al., 2019). 
It is worth noting that dyspnea ratings at Tlim were similar between conditions despite a longer Tlim with PAV (Table 2). This might reflect the effects of an unaltered VE and/or the beneficial consequences of inspiratory muscle unloading.

Methodological Considerations and Potential Limitations

The present study focused on the effects of PAV on recovery kinetics since the presence of oscillatory ventilation in half of the patients precluded the analysis of on-exercise VO2 kinetics (Sperandio et al., 2009). Consistent with these results, previous studies showed that recovery kinetics were more reproducible, being determined with a higher degree of reliability and validity (Kemps et al., 2009, 2010).

[Figure 3: Significant inverse relationship between the difference in the limit of tolerance (PAV − sham) vs. the difference in the mean response time (MRT) of QT (PAV − sham). These data suggest that the greater the increase in Tlim with PAV, the faster the "central" cardiovascular kinetics (Pearson correlation = 0.76, p < 0.001).]

Nevertheless, the present study acknowledges that by not repeating the exercise bout, it is limited in its ability to determine the actual beneficial effects of PAV on the tolerance of any ensuing exercise. It is reasoned that a second session could influence [deoxy-Hb + Mb] due to changes in probe position, thereby decreasing the between-days comparability. Moreover, this study did not measure the work of breathing; thus, the magnitude of respiratory muscle unloading brought by PAV in individual patients remains unclear. As a non-invasive study, it relied on signal-morphology cardioimpedance to measure QT (Borghi-Silva et al., 2008a; Paolillo et al., 2019). Although this method is not free from caveats (Wang and Gottlieb, 2006), it has provided acceptable estimates of changes in QT in patients with cardiopulmonary diseases (Vasilopoulou et al., 2012; Louvaris et al., 2019).

Clinical Implications

The findings of the present study indicate that respiratory muscle unloading improves muscle oxygenation during recovery from high-intensity exercise, suggesting that non-invasive ventilation (PAV) might be used as an adjunct strategy to improve the tolerance to subsequent exercise during cardiac rehabilitation. Future studies could investigate the effects of such a strategy in cardiopulmonary rehabilitation programs. It is conceivable that such an effect would be particularly relevant to more severe patients exposed to interval training (Spee et al., 2016) or, as described before, to strength training. If the beneficial effects of PAV on muscle oxygenation prove to be associated with improved autonomic modulation (lower sympathetic drive), long-term respiratory muscle unloading may have a hitherto unexplored effect on other relevant outcomes in HFrEF, such as ventricular tachyarrhythmias, cardiac remodeling, and left ventricle afterload (Cornelis et al., 2016).

CONCLUSION

Respiratory muscle unloading promoted by PAV improves leg muscle oxygenation during the recovery from high-intensity exercise in patients with HFrEF. These results add novel evidence that the salutary consequences of PAV on the physiological responses to dynamic exercise in HFrEF (Borghi-Silva et al., 2008a,b; Carrascossa et al., 2010) extend to the recovery phase, an effect that might be of practical relevance to improve tolerance to repeated (interval) exercise.
DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation in the Institutional Repository of UFSCar. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the CEP 0844/06. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. AUTHOR CONTRIBUTIONS AB-S, CG, CC, CO, DB, DA, LN, RA, and JN: conceptualization, data curation, formal analysis, investigation, methodology, project administration, supervision, writing-original draft, and writing-review and editing. All authors contributed to the article and approved the submitted version.
Research on the Characteristics of the Population Flow Network of the Main Ethnic Minorities in Northwest China: In the context of the development of the western region and the new situation of population mobility, the role of the minority floating population, who bear the dual attributes of national culture and floating personnel, has become increasingly prominent. Based on the national floating population dynamic monitoring data, this paper examines the floating scale and network characteristics of the main ethnic minorities (Hui, Uyghur, and Tibetan) in Northwest China. The results show that there are commonalities and differences among the three main ethnic groups, and the commonality is manifested in the preference for inflows to provincial capital cities and ethnic autonomous regions, with typical economic orientation and homogenous source orientation.

Introduction

As one of the largest and most far-reaching geographic processes since the reform and opening up [1], the flow and migration of population has always been an important concern and key influencing factor for the development of population. According to the "Report on the Development of China's Floating Population" (2020) issued by the National Family Planning Commission, in 2016, China's floating population reached 247 million. Paying attention to the population movement of ethnic minorities is related to the smooth realization of ethnic equality and ethnic unity [2]. The ethnic minority floating population, with the dual attributes of national culture and floating personnel, is of great significance for activating the economy [3]. Most of the literature starts from the unique identity, relationships, culture, and other aspects of ethnic minorities [4]. Individual ethnic factors have a greater impact on urban integration and social adaptation [5]. Due to differences in language and customs, ethnic minorities have lost high-skilled, high-paying jobs [6], and more than 70% of minority migrants work much more than the standard working hours per week [7]. Xin-Zhe Zheng expounded the correlation between the migration of ethnic minorities and the relationship between urban ethnic groups [8]. The research on the ethnic minority floating population in Northwest China mainly focuses on the reasons for and characteristics of migration [9]. Most of them are young and middle-aged men; more are married; and there is a tendency toward family mobility [10]. Affected by differences in ethnic culture, religion, and psychology, the ethnic minority floating population differs from the Han floating population [11]. It is a gradual and dynamic process from social migration to social integration for minority migrants [12]. The overall situation of the floating population of ethnic minorities in the northwest is not optimistic, and it is difficult to guarantee their rights and interests [13]. Due to their uniqueness, ethnic minorities are quite different from the Han people in the direction of and reasons for migration. In terms of the characteristics of ethnic minorities' migration, on the whole, most ethnic minorities live in the west, mainly within their own province [14]; the migration of ethnic populations to the eastern region is remarkable, and inter-ethnic exchanges are becoming more frequent [15]. At the same time, the trend of ethnic mobility and migration based on policy orientation is becoming more and more obvious [16].
Historically, due to the combined effects of famine, war, and economic factors, the ethnic minorities in the northwest often migrated abnormally [17].

Study area

Northwest China is located between 73°E~110°E and 30°N~48°N. It covers the five provincial-level regions of Gansu, Ningxia, Xinjiang, Qinghai, and Shaanxi.

Data sources

The population data in this paper are mainly from the China Migrants Dynamic Survey conducted by the National Health Commission. The research scope of this paper is Gansu, Ningxia, Xinjiang, Qinghai, and Shaanxi provinces (regions). The Hui, Uyghur, and Tibetan, who account for more than 80% of the minority population flow in the northwest region, were selected as the research objects.

Geographic Concentration Index

The geographic concentration index is an important geographic index used to measure the balance of the distribution of things. In the formula [18], G is the geographic concentration index; n is the total number of regions; X i is the size of the floating population in region i; T is the total size of the floating population.

Social Network Analysis

The degree of network connectivity is used to measure the possibility of direct connection between node i and other nodes j in the floating population contact network [19]. In the formula, I ij is the number of edges connecting the i node and the j node; K ij and K ji respectively represent the number of paths of the floating population starting from the i node (j node) and finally reaching the j node (i node). Closeness centrality is used to describe the position of a node in the population flow network. In the formula [19], CC is the closeness centrality of city i; d ij is the path length between node i and node j; n is the number of nodes.

Analysis of flow scale characteristics

The flow of the three major ethnic minorities in the northwest region was counted by direction, region, and time, and the flow scale in different years was obtained, analyzed, and compared. The advantages of the northwest region are very prominent, especially for the Hui and Uyghurs: the northwest region is their main migration area, and they are also the main ethnic groups participating in the flow of ethnic minorities in the northwest region.

The temporal and spatial asymmetry of floating population

The geographic concentration index was selected for comparative calculation.

Integrity

The Hui nationality has the highest network integrity, followed by the Tibetan nationality, and finally the Uyghur nationality (Table 1). The overall network connectivity and integrity of the Hui and Tibetans have improved.

Network Centrality

The degree centrality of the three major ethnic minorities is shown in Figure 1.

Closeness centrality

We further analyze the key cities and regions in the mobile network. The status of Urumqi, Xi'an, and Changji in the Hui mobility network increased significantly. The rise in the rankings of Urumqi and Xi'an reflects the importance of provincial capitals in the Hui mobility network. The places with high closeness centrality are mostly distributed in the center of the province. In the Uyghur mobile network, Aksu has a more prominent position; cities outside Xinjiang have lower closeness centrality. There are many high-grade highways running through the whole territory.

Discussion

The total number of floating population has remained high in recent years. The population mobility network has been studied, and the problem of network hierarchy has been characterized.
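For illustration, the two indices defined in the Methods above can be computed as in the sketch below. It assumes the standard forms G = 100·√(Σ_i (X_i/T)²) for the geographic concentration index and CC_i = (n−1)/Σ_j d_ij for closeness centrality, which are consistent with the variable definitions given in the text but are not necessarily the exact expressions used by the authors; the city list and flow matrix are purely hypothetical.

```python
import numpy as np
import networkx as nx

def geographic_concentration_index(flows):
    """G = 100 * sqrt(sum_i (X_i / T)^2), where X_i is the floating population of
    region i and T is the total floating population (standard form, assumed)."""
    x = np.asarray(flows, dtype=float)
    t = x.sum()
    return 100.0 * np.sqrt(np.sum((x / t) ** 2))

# Hypothetical inter-city flow counts (directed): rows = origin, cols = destination.
cities = ["Urumqi", "Xi'an", "Lanzhou", "Xining", "Yinchuan"]
flow = np.array([
    [0, 120, 40, 10, 15],
    [80,  0, 60, 20, 25],
    [30, 90,  0, 35, 20],
    [10, 25, 45,  0,  5],
    [20, 30, 25, 10,  0],
])

# Concentration of inflows across destination cities.
inflow_by_city = flow.sum(axis=0)
print("G (inflow) =", round(geographic_concentration_index(inflow_by_city), 2))

# Closeness centrality on the unweighted flow network:
# CC_i = (n - 1) / sum_j d_ij, which is what networkx computes for a connected graph.
graph = nx.from_numpy_array((flow > 0).astype(int), create_using=nx.DiGraph)
graph = nx.relabel_nodes(graph, dict(enumerate(cities)))
for city, cc in nx.closeness_centrality(graph).items():
    print(f"{city}: closeness = {cc:.2f}")
```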
This paper finds that the main ethnic minority population flow network in Northwest China is also complex and hierarchical, but compared with developed areas, the network is less complete and less hierarchical. This paper also finds that there are commonalities and differences in the three major ethnic minority population flow networks in Northwest China, and the commonalities are manifested in significant economic orientation and homogenous source orientation.

Conclusion

Based on the national floating population dynamic monitoring data, this paper systematically examines the temporal and spatial patterns and network characteristics of population flows of major ethnic groups in Northwest China. The results show that: the scale of flow is generally limited, but its importance cannot be underestimated; the time and space of inflow and outflow are asymmetric, with the distribution of outflow more concentrated and the distribution of inflow relatively scattered; and, in terms of the complexity of connections, the Uyghur network has the worst integrity and the simplest structure, while the Hui network has the best integrity and the most complex structure.
Remaining Useful Life Prediction of Lithium-Ion Batteries Using Neural Networks with Adaptive Bayesian Learning With smart electronic devices delving deeper into our everyday lives, predictive maintenance solutions are gaining more traction in the electronic manufacturing industry. It is imperative for the manufacturers to identify potential failures and predict the system/device’s remaining useful life (RUL). Although data-driven models are commonly used for prognostic applications, they are limited by the necessity of large training datasets and also the optimization algorithms used in such methods run into local minima problems. In order to overcome these drawbacks, we train a Neural Network with Bayesian inference. In this work, we use Neural Networks (NN) as the prediction model and an adaptive Bayesian learning approach to estimate the RUL of electronic devices. The proposed prognostic approach functions in two stages—weight regularization using adaptive Bayesian learning and prognosis using NN. A Bayesian framework (particle filter algorithm) is adopted in the first stage to estimate the network parameters (weights and bias) using the NN prediction model as the state transition function. However, using a higher number of hidden neurons in the NN prediction model leads to particle weight decay in the Bayesian framework. To overcome the weight decay issues, we propose particle roughening as a weight regularization method in the Bayesian framework wherein a small Gaussian jitter is added to the decaying particles. Additionally, weight regularization was also performed by adopting conventional resampling strategies to evaluate the efficiency and robustness of the proposed approach and to reduce optimization problems commonly encountered in NN models. In the second stage, the estimated distributions of network parameters were fed into the NN prediction model to predict the RUL of the device. The lithium-ion battery capacity degradation data (CALCE/NASA) were used to test the proposed method, and RMSE values and execution time were used as metrics to evaluate the performance. Introduction Maintenance of electronic devices, physical equipment and systems is imperative for ensuring successful functioning, minimal downtime, reduced unprecedented maintenance costs, and prolonging the life of the device/system. Maintenance strategies are broadly categorized as preventive and predictive maintenance strategies. Although preventive maintenance adopts a conventional approach of regular scheduled maintenance protocols, predictive maintenance strategies are preemptive methods. With the availability of affordable sensor systems and advances in machine learning algorithms, predictive maintenance approaches promise economic benefits both for the manufacturers and end-users. The goal of predictive maintenance strategies is to predict the remaining useful life (RUL) of the device/system. RUL prediction of devices can be realized through two methods, namely, model-based methods and data-driven methods. Model-based methods employ physical or statistical models which best capture the degradation of the desired system. However, obtaining such models requires extensive experimental work and an indepth understanding of the underlying failure mechanisms. These models are effective until the system is upgraded or changed. Commonly used model-based prognostic approaches are Kalman filters (KF) [1,2], Extended Kalman filters (EKF) [3,4], and Particle filters (PF) [5,6]. 
These methods either use empirical models or incremental state-space representations of the underlying governing partial differential equations (PDE). On the other hand, data-driven methods use machine learning based approaches to characterize the failure of the desired device/system. Machine learning methods use pattern recognition to learn features from raw sensor data and identify failure patterns. Machine learning based data-driven methods successfully used for developing prognostic algorithms include regression methods [7,8], support vector machine (SVM) [9,10], relevance vector machine (RVM) [11,12], Bayesian networks (BN) [13,14], hidden Markov model (HMM) [15], and artificial neural networks (ANN) [16,17]. Among the different machine learning methods, neural network (NN) based methods are preferred by researchers due to their versatility and adaptive parameter optimization capabilities. To elaborate, NN based methods can address uncertainties in the system, such as measurement uncertainties introduced by noisy sensor data and prediction model uncertainties caused by variations in operating conditions. However, shallow neural network architectures, such as feedforward neural networks and radial bias function neural networks, face challenges in predicting the RUL with good confidence due to its dependency on large training datasets. Additionally, the complexity of the network architecture increases for highly nonlinear systems, which severely affects parameter optimization and further leads to overfitting issues. Several deep learning models are proposed in the literature to address optimization and overfitting issues, such as convoluted neural networks (CNN) [18,19], recurrent neural networks (RNN) [20,21], deep belief networks (DBN) [22], and long short-term memory networks (LSTM) [23,24]. Although deep learning models are efficient methods for RUL prediction due to their ability to learn features on-the-fly, they are computationally expensive. For instance, combining convoluted features across all time steps in CNN is timeconsuming and inhibits its application in processing large-scale time-series data. Similarly, RNN based methods are sequential wherein long-term information has to sequentially fed through all cells before being processed, subsequently causing vanishing gradient issues. LSTM based methods, on the other hand, require a large amount of memory bandwidth for processing large-scale time-series data. Additionally, there have been several attempts made to develop hybrid prognostic algorithms combining different model-based and data-driven methods to overcome the limitations arising due to dependency on accurate physical models and large amount of failure data. To name a few, Ma et al. [25] developed a hybrid prognostic model combining CNN and LSTM. The authors used CALCE and NASA datasets to evaluate their performance and used nearly 50% of data from each battery for the purpose of training. The training dataset size was optimized by using false nearest neighbor method. Even though the authors obtained good prediction accuracy, the RUL prediction can be performed only at mid/late degradation cycles which limits their applicability to safety-critical devices wherein the failures even in the early degradation cycles can lead to catastrophic results. Similarly, Wu et al. [26] developed a hybrid prognostic approach combining NN and particle filters wherein they had used the NN model equation as the system degradation model in the PF algorithm. 
In this case, the authors had assumed that they have the run-to-failure data for all the batteries available and used curve fitting results of each battery as the initial parameter guess for the PF algorithm. This limits the generalization capability of their proposed approach. This calls for the need to optimize shallow neural network models by regularizing network parameters and complexity to provide an efficient and computationally inexpensive framework for prognostic studies appropriate for real-time applications. Training Algorithms for Neural Networks Artificial neural network models have been successfully applied in prognostic studies [16,17] due to their ability to model non-linearities in degradation data and generalization capabilities. However, choosing an appropriate NN model is an arduous task as its optimization capabilities rely heavily on network architecture, training algorithms, and initial values of connection weights and bias. The NN model learns patterns from training data during the training phase and uses that information to produce the desired output. The NN model parameters (weights and bias) are modified to minimize the error between the predicted value and the desired output. A suitable backpropagation algorithm does the process of adjusting the network parameters for minimizing the error value. Thus, choosing an efficient training algorithm is imperative as it directly affects the performance of the network model. The commonly used NN training algorithms are gradient descent and Levenberg-Marquardt algorithms. However, the major drawback of using backpropagation-based training algorithms is that if the error values are multimodal, the algorithm becomes trapped at local minima. Additionally, improper/random initial parameter configuration aids the algorithm to settle at incorrect local minima. To overcome the disadvantages of backpropagation algorithms, several evolutionary algorithms have been proposed in the literature. Gudise et al. [27] proposed the particle swarm optimization (PSO) algorithm for training a simple feedforward NN model. The authors compared the performance of PSO with gradient descent (GD) algorithm. The results clearly showed that the convergence rate of PSO was much better than GD. Similarly, Karaboga et al. [28] proposed an artificial bee colony (ABC) algorithm to train a feedforward NN model. The performance of the algorithm was compared with genetic algorithm (GA) and GD. The results proved that ABC algorithm does not become trapped at local minima. However, the computational cost of the proposed method was considerably high as it took a very high number of epochs for convergence. Additionally, in all of the algorithms mentioned above, the initial parameter values were deterministic, so when applied to timeseries prediction models, they would only yield a single-point estimate for the future values. This work proposes using a Bayesian inference-based training algorithm to identify the NN model parameters and provide probabilistic RUL estimates over a single point estimate. Here, we resort to using particle filters for training a feedforward MLP model wherein the NN model equation is used in the PF algorithm as the incremental state transition function for predicting the system's future state. The PF algorithm performs state estimation to deduce the model parameters best representing the training dataset. 
Further, the PF estimated model parameters are used to configure the initial connection weight and bias values of the MLP network for other devices being monitored to predict the system's future state and subsequently for RUL estimation. Usage of PF for NN training is promising due to its ability to easily capture the highly nonlinear and non-Gaussian statistics by using different weights for different samples. It is to be noted that the particle filter weights are different from the NN weight values. However, the weighted sampling approach of PF leads to a particle collapse wherein the particle weights accumulate over a small fraction of particles known as particle degeneracy. If left unaddressed, particle degeneracy eventually leads to particle impoverishment wherein the particle weights are accumulated on a single particle, and the rest of the particles have zero weights. Particle impoverishment affects the performance of the NN model in the prognosis stage as the network model becomes configured with incorrect initial parameter weight and bias values. Although particle degeneracy can be controlled by adopting suitable resampling strategies, it is challenging to overcome particle impoverishment. In order to reduce particle impoverishment, we proposed particle roughening as a weight regularization method. The rest of the paper is organized as follows. Section 3 describes the two different lithium-ion battery capacity degradation datasets used in this work. Section 4 describes the methodology used in this work-description of multilayer perceptron, particle filters, and the proposed RUL estimation framework. Section 5 summarizes the results and discussion, and conclusions and future work are discussed in Section 6. Degradation Datasets Two different lithium-ion battery capacity degradation datasets from the CALCE and NASA repositories are used in this work for testing the performance of the proposed method [29,30]. CALCE Dataset Accelerated aging tests were performed for a set of prismatic cells (CS) with LiCoO2 cathode of 1.1 Ah capacity rating. The CS cells were charged and discharged repeatedly till the cells reached their End-of-Life (EoL). The cells were subjected to a charging profile using constant current/constant voltage protocol. The current rate was maintained at 1C till the voltage reached 4.2 V. The charge was sustained at 4.2 V till the charging current dropped to 0.05 A. The failure threshold for the CS cells was set to be at 0.88 Ah. The capacity degradation curves for three CS cells from the CALCE repository are shown in Figure 1a.
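As a side note, the end-of-life and RUL bookkeeping implied by these failure thresholds (0.88 Ah for the CS cells, 1 Ah for the NASA cells described next) amounts to finding the first threshold crossing of the capacity curve. A minimal sketch, with a synthetic degradation trace standing in for the real data:

```python
import numpy as np

def end_of_life_cycle(capacity, threshold):
    """Return the first cycle index at which capacity falls below the EoL threshold,
    or None if the threshold is never crossed (capacity is an array indexed by cycle)."""
    below = np.flatnonzero(np.asarray(capacity) < threshold)
    return int(below[0]) if below.size else None

def remaining_useful_life(capacity, threshold, current_cycle):
    """RUL in cycles from `current_cycle` to the EoL crossing (None if not reached)."""
    eol = end_of_life_cycle(capacity, threshold)
    return None if eol is None else max(eol - current_cycle, 0)

# Hypothetical degradation trace: a 1.1 Ah cell fading roughly linearly with noise.
rng = np.random.default_rng(1)
capacity = 1.1 - 0.002 * np.arange(150) + rng.normal(0, 0.005, 150)
print("EoL cycle:", end_of_life_cycle(capacity, 0.88))
print("RUL at cycle 60:", remaining_useful_life(capacity, 0.88, 60))
```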
NASA Dataset The second dataset used in this work is from the NASA Ames Prognostic Center of Excellence [26]. In this dataset, LiCoO2 cathode cells with a rated capacity of 2.1 Ah were used for generating the battery capacity degradation data. Unlike the CALCE dataset, the NASA batteries were tested under random discharge currents rather than constant discharge currents. The charge/discharge cycles, termed Random Walk (RW) cycling, were performed wherein the current profile for both charging and discharging was changed every 5 min with a current value randomly selected from 0-4 A. The randomized loading conditions were applied to generate degradation data that simulate more realistic operating conditions. The failure threshold was set to be at 1 Ah, and the battery capacity degradation curves are shown in Figure 1b. Multilayer Perceptron The most popular NN architecture used is the feedforward multilayer perceptron (MLP). We resort to using a two-layer MLP in this work. The NN architecture represents the degradation state of the device as a function of time. The input node and output node represent the input and output variables, respectively. The input node is fed with time in cycles (battery) as the input to obtain the corresponding degradation state-capacity (battery). The hidden layer connects the input and output node and is represented by nonlinear nodes called hidden neurons. Each layer transmits information forward through an activation function. A sigmoid activation function is used between the input and hidden layer, followed by a linear activation function between the hidden and output layer. The sigmoid activation function in terms of the NN parameters (weights and bias) is expressed in Equation (1), where w i (1) and b i (1) are the input weight and bias corresponding to the ith hidden neuron, k is the time index, i = 1, . . . , M is the number of hidden neurons, and h i (.) is the tan-sigmoid activation function corresponding to the input layer. The predicted capacity/light output at the output node can be represented by Equation (2), where g((w,b),k) is the output of the MLP network, w i (2) and b i (2) represent the weight and bias values at the hidden layer, and h(.) is the tan-sigmoid activation function between the input and hidden layer as shown in Equation (1). Choice of Network Architecture One of the significant challenges with NN is the choice of network architecture. Choosing an appropriate number of hidden layers and neurons dramatically affects the performance of the NN model, especially while handling prognostic applications. During the training phase, the NN model parameters (w i and b i ) are optimized to minimize the prediction error on the training patterns. Minimal error values denote a stable network, whereas high error values reflect overfitting. Using more hidden neurons might cause the network to overfit and lead to significant deviations in the predicted values. As there are no standard approaches available in the literature to deduce the optimum number of hidden neurons, we chose to use the Bayesian Information Criterion (BIC) to fix the suitable number of hidden neurons.
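A minimal NumPy sketch of the two-layer mapping just described (tan-sigmoid hidden layer, linear output, Equations (1)-(2)) is shown below. It is illustrative only: the single output bias and the placeholder parameter values are assumptions, and in practice the parameters would come from training or, as in this work, from the particle-filter estimates described later.

```python
import numpy as np

def mlp_forward(k, w1, b1, w2, b2):
    """Two-layer MLP in the spirit of Equations (1)-(2):
    hidden_i = tanh(w1_i * k + b1_i), output = sum_i w2_i * hidden_i + b2.
    `k` is the time index (cycle); w1, b1, w2 have length M (hidden neurons)."""
    k = np.atleast_1d(np.asarray(k, dtype=float))
    hidden = np.tanh(np.outer(k, w1) + b1)   # shape (len(k), M)
    return hidden @ w2 + b2                  # predicted capacity per time step

# Example with M = 3 hidden neurons and placeholder (untrained) parameters.
w1, b1 = np.array([0.02, -0.01, 0.005]), np.array([0.1, 0.0, -0.2])
w2, b2 = np.array([-0.5, 0.3, 0.2]), 1.0
cycles = np.arange(0, 100, 10)
print(mlp_forward(cycles, w1, b1, w2, b2))
```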
BIC is a widely used metric for statistical model selection owing to its computational efficiency and simplicity [31,32]. The BIC of a model can be evaluated as

BIC = −2 × LL + ln(N) × q (3)

where LL refers to the log-likelihood function of the model, N is the size of the training dataset, and q is the number of parameters to be estimated by the model. Additionally, the BIC penalizes a model based on the number of estimated parameters; hence, a complex model with more hidden neurons would be penalized and yield a poor (higher) BIC value. The model with the minimum BIC value is chosen as the best model for the purpose. The BIC analysis is shown in Figure 2. From Figure 2, it can be inferred that the NN model with three hidden neurons is suitable for the battery degradation dataset.

Particle Filters

In this work, the PF algorithm is used as a state estimation method to deduce the optimum NN model parameter values (weights and bias). PF is a recursive Bayesian algorithm in which the system is represented by a state-space model comprising a state transition model and a measurement model, where x k and x k−1 represent the current and previous degradation state of the system, z k represents the available test data at the kth time instant, k is the time index (cycles/hours), θ represents the vector with MLP model parameters (w i and b i ), ω k−1 is the process noise, and ε k is the measurement noise present in the system. f(.) represents an incremental model of the state transition function, and q(.) represents the measurement function. In this work, the measurement function used is g((w,b),k), represented by Equation (2). The implementation of the PF algorithm is explained below:

• a. Particle Initialization-At the k = 1 time instant, the initial prior distribution is populated based on prior knowledge of the system model parameters, as shown in Figure 3a.
In this case, the curve fitting coefficients corresponding to one device from each device dataset are used. The initial prior probability density function p(x 0 ) is then sampled into weighted particles.

• b. Particle Update-Whenever new measurement data are available for prediction, the weights of the particles are recursively updated based on the likelihood function, where p(z k | x k i , θ k i ) is the likelihood function, with θ k i representing the MLP network parameters at the kth time instant, z k is the available test data, x k is the current system degradation state, and w k i represents the particle weights. It is to be noted that the particle weights of the PF algorithm are different from the MLP network parameter weights.

• c. Particle Resampling-During the particle update, the weights accumulate over a few particles, and the rest of the particles carry negligible weights after a few iterations, as shown in Figure 3b. In order to enhance diversity amongst the particles, the smaller weight particles are replaced with large weight particles by a process called particle resampling. Thus, the basic idea of resampling is to maintain all the samples/particles at the same weight.

• d. State Estimation-Finally, the degradation state of the system at the (k + 1)th time instant is evaluated based on the new set of weighted particles. The process is repeated for all available measurement data, and the posterior distribution at the current step becomes the prior distribution for the next step.

Weight Regularization Methods

Although the PF is a very good candidate for non-linear degradation prognosis in general, including non-Gaussian noise components, its significant drawbacks are the particle degeneracy phenomenon followed by particle impoverishment. During particle updates, the particles with negligible weights are replaced by large weight particles. After a few iterations, small weights will cease to exist and only large weight particles are present in the distribution. This phenomenon is termed particle degeneracy, and particle impoverishment is a severe case of particle degeneracy wherein all but one particle is eliminated during resampling, i.e., a single particle carries all the weights, as shown in Figure 3c. This dramatically affects the diversity of particles, thus constraining the evolution of model parameters, and subsequently affects the accuracy of predictions. Choosing an appropriate resampling strategy is one of the simplest methods to address particle degeneracy. In this work, we have applied three different resampling strategies based on our previous work in Ref.
[33] to improve the adaptive Bayesian learning framework used in this study. The three resampling strategies considered are Multinomial Resampling (MR), Stratified Resampling (StR), and Systematic Resampling (SyR). The schematic representation of all three resampling strategies is depicted in Figure 4a. The weights of five particles after normalization are shown for illustration in Figure 4a. The length of the rectangles depicts the weights of the particles. MR is a random search approach where N independent particles are randomly selected from the particle distribution. StR divides the population into equal segments called strata, and particles are randomly selected from each stratum. SyR, on the other hand, is an extension of stratified resampling wherein one particle from each stratum, at a fixed location, is selected during resampling. Thus, SyR is a more deterministic approach compared to the other two resampling strategies. Similarly, to address particle impoverishment, we use particle roughening as a weight regularization technique [34]. Although resampling strategies try to diversify the particle weight distribution, particle roughening, on the other hand, is a compensation technique. If particle impoverishment has occurred despite choosing an appropriate resampling strategy, one way to reduce it would be to redistribute the weights centered around one particle by jittering their values. For jittering, a small random noise (roughening noise) is added to the resampled particles. The roughening noise is a small Gaussian jitter with zero mean and constant covariance. The covariance matrix is obtained from the standard deviation of the system degradation state as shown in Equation (8), where σ K i is the standard deviation of the Gaussian jitter, with σ r1 being the standard deviation corresponding to the lower limit (σ1²) and σ r2 corresponding to the upper limit (σ2²), and L K i represents the likelihood function corresponding to Equation (6). The standard deviation limits used in this work are summarized in Table 1. When particle diversity improves, the distribution of model parameters is decentralized, and hence the prediction performance of the NN model improves.

Remaining Useful Life Estimation

The proposed approach, as shown in Figure 5, is split into two stages: Stage A-Adaptive Bayesian learning framework with weight regularization, and Stage B-Prognosis using NN. Stage A is further split into three steps-Data Preprocessing, Particle Filters, and Weight Regularization-the descriptions of which are given below.

1. Data Preprocessing-An appropriate NN model is chosen based on the BIC analysis described in the previous sections.
2. Particle Filters-The chosen model is used as the measurement function in the particle filter algorithm to recursively update the model parameters using the available degradation data from the training dataset.
3. Weight Regularization-To overcome the particle degeneracy and impoverishment issues, suitable resampling strategies and roughening methods are adopted for weight regularization (see the sketch after this list).
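The sketch referenced in step 3 above is given here. It strings together one particle-weight update with a Gaussian likelihood, an effective-sample-size (ESS) check, systematic resampling, and Gaussian roughening of the resampled particles. The Gaussian likelihood, the ESS threshold of 0.5·N, and the fixed jitter scale are assumptions standing in for the paper's Equations (6)-(8) and the Table 1 sigma limits, and the toy measurement function is a placeholder for the MLP output g((w,b),k).

```python
import numpy as np

rng = np.random.default_rng(42)

def update_weights(weights, particles, measurement, measure_fn, sigma_meas):
    """Multiply prior weights by a Gaussian likelihood p(z_k | x_k^i) and renormalize."""
    pred = measure_fn(particles)                      # predicted measurement per particle
    lik = np.exp(-0.5 * ((measurement - pred) / sigma_meas) ** 2)
    w = weights * lik
    return w / w.sum()

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2); a small ESS signals particle degeneracy."""
    return 1.0 / np.sum(weights ** 2)

def systematic_resample(weights):
    """Systematic resampling: one draw per equal-width stratum at a fixed random offset."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

def roughen(particles, jitter_std):
    """Add small zero-mean Gaussian jitter to restore diversity after resampling."""
    return particles + rng.normal(0.0, jitter_std, size=particles.shape)

# Toy example: particles are candidate parameter vectors; the "measurement function"
# stands in for the MLP output g((w, b), k) at the current cycle.
particles = rng.normal(0.0, 1.0, size=(500, 2))       # 500 particles, 2 parameters each
weights = np.full(500, 1 / 500)
measure = lambda p: p[:, 0] + 0.1 * p[:, 1]           # hypothetical g(theta, k)
z_k = 0.95                                            # observed capacity at cycle k

weights = update_weights(weights, particles, z_k, measure, sigma_meas=0.05)
if effective_sample_size(weights) < 0.5 * len(weights):   # assumed threshold N_T
    idx = systematic_resample(weights)
    particles = roughen(particles[idx], jitter_std=0.01)
    weights = np.full(len(weights), 1 / len(weights))
```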
Additionally, resampling/roughening at every time step is unnecessary as it introduces additional variance in the posterior distribution. Hence, the Effective Sample Size (ESS) is introduced to regulate the resampling/roughening process. If the ESS of the distribution is lower than a predefined threshold N T , then the particles are resampled/roughened. This reduces unnecessary additional computational load. The parameters obtained at the final time step for the training dataset are the PF estimated MLP network parameters.

Figure 5. The schematic of the proposed prognostic framework using an adaptive Bayesian learning framework with weight regularization for training the MLP network model.

In Stage B, a new MLP network architecture is configured with the PF estimated network parameters, which are the initial parameter values for the corresponding weight and bias at the input and hidden layers. The available degradation data of the other devices in the lot (test dataset-devices 2 and 3) are fed to the MLP model as input. The MLP is trained using the LM algorithm for the test dataset. The informed PF estimated initial parameter values help to prevent the local minima encountered in the backpropagation learning algorithm of neural networks. The trained NN model is used to predict the future degradation state of the device till end-of-life. During backpropagation, there is a possibility that the network parameters can go astray and distort the prediction traces. A suitable success criterion is essential to filter out outliers from the degradation traces. Two success criteria are incorporated into the framework. One is to eliminate traces that lie beyond the 2σ limits of the majority of the prediction traces. The second criterion is to eliminate traces which have a monotonically increasing trend utterly different from the actual degradation data used in this work. The predicted EoL is evaluated based on the successful traces, and eventually, the RUL of the device is calculated. The performance of the proposed framework is evaluated using the percentage of successful iterations, Root Mean Squared Error (RMSE), Relative Accuracy (RA), and computational time as metrics.

RUL Estimation Using Different Resampling Strategies

The prediction results for the CALCE battery degradation dataset are discussed in this section.
The battery CS-36 was used for training the model, and batteries CS-37 and CS-38 were chosen as the test datasets. As per the BIC analysis discussed in Section 3, the network architecture which best represents the CALCE dataset consists of an MLP network with three hidden neurons. The corresponding model equation was used for generating the curve fitting coefficients, which were, in turn, used to populate the initial prior distribution in the PF algorithm. The three-neuron NN model equation (as per Equation (2)) was used as the measurement function in the PF algorithm, which recursively updates the model parameters for the training dataset (CS-36). The PF algorithm encounters weight decay issues during particle updates, as illustrated in Figure 3b. Hence, suitable resampling strategies were adopted to address the weight decay issues. 50 sets of model parameter values from the posterior distribution of the PF algorithm were fed into the MLP model as initial parameter configuration values. Available data points from the test dataset were fed as input to the MLP model for prognosis. The prediction results for CS-37 using multinomial resampling (MR) are shown in Figure 6b. Based on the assumed success criteria, the green traces in Figure 6b are considered unsuccessful and eliminated for RUL prediction. The magenta traces correspond to prediction traces with RMSE values below 1% and thus are considered the best traces amongst the 50 repetitions. It can be observed from Figure 6b that only about 78% of the prediction traces were successful. The corresponding posterior model parameter distributions obtained after state estimation are shown in Figure 6a. A significant variance in the posterior distributions indicates that the weight decay issues of the PF algorithm remain unresolved for MR. The prediction results for CS-37 using stratified resampling (StR) are shown in Figure 7b. In this case, we opted to stratify the weight distribution into N s /2 strata, where N s is the total number of particles used in the PF algorithm. During resampling, at least one particle from each stratum was picked, thereby improving particle diversity. Although StR yields a success rate of about 90%, the posterior distribution of model parameters shown in Figure 7a still shows negatively skewed distributions with no diversity. This indicates that if the PF algorithm fails to converge close to the true parameter values, then the prediction performance can be affected adversely. Further, systematic resampling (SyR) was adopted to improve the accuracy of predictions. The number of successful iterations jumped to 96%, and the posterior parameter distributions were found to be more spread out, as shown in Figure 8b and Figure 8a, respectively. The results clearly depict that resolving the weight decay issues in the PF algorithm regularizes the NN weights and bias values and, in turn, helps to improve the prediction accuracy.
The performance metrics, comprising RMSE, computational time, and the percentage of successful iterations, are summarized in Table 2. It can be inferred from Table 2 that SyR performs the best amongst the different resampling strategies explored in this work and also eliminates the particle degeneracy issues. Since SyR is deterministic in nature, it proves to be better than the other two resampling strategies. However, systematic resampling does not overcome particle impoverishment. To elucidate, we have shown the particle weight distribution plots in Figure 9a-d while using SyR. The total number of time steps till the actual EoL for the CALCE dataset is 112. The particle weight distributions at the 76th, 80th, 96th, and 112th time steps (the last time step) are shown in Figure 9. The particle weights start to lose their diversity around the 80th time step and subsequently suffer severe particle impoverishment around the last iteration. In order to overcome this issue, particle roughening strategies were adopted.

Figure 9. The particle weight distribution obtained during execution of the particle filter algorithm using systematic resampling (SyR) at the (a) 76th, (b) 80th, (c) 96th, and (d) 112th time step. The particle weight distribution in (d) depicts particle impoverishment despite adopting robust resampling strategies.

RUL Estimation Using Particle Roughening Method

For particle roughening, the standard deviation of the Gaussian jitter is the key influencing factor for improving particle diversity. Based on Ref. [34], three different sigma values were used in the literature to simulate the jittering effect, as shown in Table 1, and the Sigma-2 values were found to be the best-performing ones. The sigma limits were chosen based on the admissible values of measurement noise expected to be present in the system under consideration. Hence, we adopted Sigma-2 for generating the Gaussian jitter to be added to the resampled particles. The particles were resampled using SyR, and the resampled particles were added with a small Gaussian jitter with zero mean and standard deviation corresponding to Sigma-2. However, roughening comes with an additional computational cost, as shown in Table 2. Particle roughening takes about 8 to 10 times more time than SyR. The big advantage, though, is that the proposed method eliminates particle impoverishment. The posterior parameter distributions and the prediction traces for CS-37 using
particle roughening strategy are shown in Figure 10a,b, respectively. The number of successful iterations improves to 98%, with just one unsuccessful iteration. Additionally, the prediction traces are more intact and closer to the true values shown by the black line in Figure 9a. Moreover, the posterior distributions of parameters are more spread out, clearly indicating a better particle diversity.

Figure 10. (a) The posterior distribution of MLP network parameters estimated by the adaptive Bayesian learning framework using particle roughening and (b) the degradation prediction traces for CALCE battery (CS-37) using particle roughening. The prediction traces for 50 repetitions are shown using the gray lines, the green lines represent outliers, and the magenta lines represent the traces with minimum RMSE values.

The proposed weight regularization method has advantages over other evolutionary algorithms, such as the genetic algorithm and the particle swarm optimization algorithm, which are highly computationally expensive approaches. To the best of our knowledge, the proposed adaptive Bayesian learning with weight regularization is the first of its kind to be used to optimize the MLP parameters for prognostic applications. To check the robustness of the proposed method, we also applied the proposed prognostic approach to the NASA battery degradation dataset. RW10 was used for training the model, and battery RW11 was the test dataset. The performance results are again summarized in Table 2. The prognostic metrics used in this work are root mean squared error (RMSE), relative accuracy (RA), and computational time. The RMSE and RA values deduced in this work were evaluated using the following standard expressions in Equations (11) and (12).
In Equation (11), n is the size of the training dataset, T is the prediction starting point index value, and k is the number of cycles.

Relative Accuracy (RA) = 1 − |Predicted RUL − True RUL| / True RUL (12)

The results indicate that the proposed prognostic framework with weight regularization outperforms the standard resampling strategies in the literature. The parameter distributions from the adaptive Bayesian learning framework and the corresponding degradation traces for the NASA battery dataset are shown in Figures 11-14. Additionally, the comparison between predicted RUL and true RUL for both datasets using the different resampling and roughening methods at three different prediction starting points is shown in Figures 15 and 16, respectively, in terms of the RA metric. From Figures 15 and 16, it can be inferred that the accuracy of the resampling strategy adopted is reflected in the closeness of the predicted RUL to the true RUL value of the device under consideration. The true RUL for both datasets at different prediction starting points is represented by the magenta, red, and cyan dotted lines. For both datasets, the particle roughening method performs the best with the minimum RUL error value. Additionally, the variance in the RUL distribution can be inferred from the height of the box-plots shown in Figures 15 and 16. Thus, the results clearly show that the prediction results obtained using the particle roughening method are both accurate and precise (relatively low variance in predicted RUL compared to most other common resampling methods).
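For completeness, the two prognostic metrics can be computed as below. The exact windowing of Equation (11) (RMSE taken from the prediction starting point T to the end of the data) is an assumption, while the RA expression follows Equation (12); the capacity arrays and RUL values are hypothetical.

```python
import numpy as np

def prediction_rmse(actual, predicted, start_index):
    """RMSE over the prediction window, from the prediction starting point T onward
    (the precise windowing of the paper's Equation (11) is assumed here)."""
    a = np.asarray(actual[start_index:], dtype=float)
    p = np.asarray(predicted[start_index:], dtype=float)
    return float(np.sqrt(np.mean((a - p) ** 2)))

def relative_accuracy(true_rul, predicted_rul):
    """RA = 1 - |predicted RUL - true RUL| / true RUL (Equation (12))."""
    return 1.0 - abs(predicted_rul - true_rul) / true_rul

# Hypothetical example: capacity predicted from cycle T = 60 onward.
cycles = np.arange(120)
actual = 1.1 - 0.002 * cycles
predicted = 1.1 - 0.0021 * cycles
print("RMSE:", round(prediction_rmse(actual, predicted, start_index=60), 4))
print("RA:  ", round(relative_accuracy(true_rul=50, predicted_rul=46), 3))
```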
The results indicate that the proposed prognostic framework with weight regularization outperforms the standard resampling strategies in the literature. The parameter distributions from the adaptive Bayesian learning framework and the corresponding degradation traces for the NASA battery dataset are shown in Figures 11-14. Additionally, the comparison between predicted RUL and true RUL for both datasets using the different resampling and roughening methods at three different prediction starting points is shown in Figures 15 and 16, respectively, in terms of the RA metric.

From Figures 15 and 16, it can be inferred that the accuracy of the adopted resampling strategy is reflected in the closeness of the predicted RUL to the true RUL value of the device under consideration. The true RUL for both datasets at the different prediction starting points is represented by the magenta, red and cyan dotted lines. For both datasets, the particle roughening method performs best, with the minimum RUL error value. Additionally, the variance of the RUL distribution can be inferred from the height of the box plots shown in Figures 15 and 16. Thus, the results clearly show that the predictions obtained using the particle roughening method are both accurate and precise (relatively low variance in predicted RUL compared to most other common resampling methods).

Figure 16. Box plot showing the comparison between predicted RUL and true RUL for the NASA dataset (RW-11) for three different prediction starting points using (a) MR, (b) StR, (c) SyR, and (d) particle roughening methods.

Comparison of Prediction Results with Previous Works in the Literature

In order to evaluate the performance of our proposed method, we compared our prediction results with two other relevant methods available in the literature. The prediction results obtained from Refs. [26,35] are used for comparison. The authors of Ref. [26] developed a hybrid prognostic algorithm wherein an NN degradation model was used within the particle filter algorithm and extrapolated into the future for prognosis. In that work, the run-to-failure data of each battery were needed to determine the initial parameter guess fed to the PF algorithm. Similarly, Ref. [35] is one of our earliest works, wherein we developed a particle filter trained neural network model for prognosis. However, the efficacy of that method was limited by particle degeneracy and impoverishment issues. Hence, we adopted suitable weight regularization techniques in this work to overcome those disadvantages. The prognostic metric used in all three works is the prediction RMSE value, and the values are summarized in Table 3 below. It is evident from Table 3 that our proposed method, with weight regularization techniques incorporated into the hybrid framework, performs better than the previous works from the literature.

Table 3. Comparison of RMSE values of the proposed method with Refs. [26,35].
Conclusions

In this work, we proposed an adaptive Bayesian learning framework to train MLP models for prognostic application on multiple electronic devices. The proposed adaptive Bayesian learning framework uses a particle filter algorithm for state estimation, wherein the weight decay issues commonly encountered in PF algorithms adversely affect the convergence of MLP weight and bias values and lead to poor prognostic performance. Hence, we incorporated three different resampling strategies and a particle roughening approach into the PF framework. These strategies enable weight regularization in the MLP model used for prognosis. The proposed method was tested on the CALCE and NASA battery degradation datasets, which exhibit high non-linearity and non-monotonicity. The prediction results clearly showed that systematic resampling helped improve particle diversity in the PF algorithm and subsequently helped eliminate particle degeneracy. Additionally, including a suitable particle roughening strategy after systematic resampling helps to eliminate particle impoverishment. The results imply that the proposed adaptive Bayesian learning framework with weight regularization helps the model parameters converge closer to the true values, prevents local minima problems, and helps to improve the generalization capabilities of NN models.

In the future, we intend to modify the proposed framework and incorporate physics-informed loss functions into the MLP architecture. The purpose of including a physics-based loss function would be to apply the framework to devices with highly noisy data (and sparse good-data scenarios as well) and to underlying failure mechanisms with hidden physics. Although physics-informed machine learning approaches are state-of-the-art, the computational frameworks proposed thus far still have discrepancies in their convergence rates and suffer from vanishing backpropagation gradients. Thus, introducing a Bayesian inference-based training process can help to overcome these optimization and convergence issues.
2022-05-19T15:07:50.282Z
2022-05-01T00:00:00.000
{ "year": 2022, "sha1": "e3113257a22ce01b074009adbefb02fb3384af07", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/22/10/3803/pdf?version=1652866081", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ce6f481e853680e54387e441dccaa195d0179033", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
212732029
pes2o/s2orc
v3-fos-license
MethylNet: an automated and modular deep learning approach for DNA methylation analysis

Background DNA methylation (DNAm) is an epigenetic regulator of gene expression programs that can be altered by environmental exposures, aging, and in pathogenesis. Traditional analyses that associate DNAm alterations with phenotypes suffer from multiple hypothesis testing and multi-collinearity due to the high-dimensional, continuous, interacting and non-linear nature of the data. Deep learning analyses have shown much promise to study disease heterogeneity. DNAm deep learning approaches have not yet been formalized into user-friendly frameworks for execution, training, and interpreting models. Here, we describe MethylNet, a DNAm deep learning method that can construct embeddings, make predictions, generate new data, and uncover unknown heterogeneity with minimal user supervision. Results The results of our experiments indicate that MethylNet can study cellular differences, grasp higher order information of cancer sub-types, estimate age and capture factors associated with smoking in concordance with known differences. Conclusion The ability of MethylNet to capture nonlinear interactions presents an opportunity for further study of unknown disease, cellular heterogeneity and aging processes.

Background

Deep learning has emerged as a widely applicable modeling technique for a broad range of applications through the use of artificial neural networks (ANN) [1]. Recently, the accessibility of large datasets, graphics processing units (GPUs) and unsupervised generative techniques has made these approaches more accurate, tractable, and relevant for the analysis of molecular data [2][3][4][5][6][7]. DNA methylation (DNAm) is the addition of a methyl group to a nucleotide, typically cytosine, that does not alter the DNA sequence and occurs most frequently at cytosine-guanine dinucleotides (CpG). Methylated regions of DNA (hypermethylated) are associated with condensed chromatin and, when present near gene promoters, repression of transcription. Unmethylated regions of DNA (hypomethylated) are associated with open chromatin states and are permissive to gene transcription. DNAm patterns are associated with cell-type-specific gene expression programs, and alterations to DNAm have been associated with aging and environmental exposures [8,9]. Further, it is well-established that DNAm alterations contribute to the development and progression of cancer. The hypermethylation of tumor suppressing genes and the hypomethylation of oncogenes can lead to pathogenesis and poor prognosis. Affordable array-based genome-scale approaches to measure DNAm have potentiated Epigenome Wide Association Studies (EWAS) for testing associations of DNAm with phenotypes, exposures, and states of human health and disease. Because DNAm patterns are cell-type specific, EWAS often account for potential confounding from variation in biospecimen cell composition using reference-based or reference-free approaches to infer cell type proportions [10][11][12][13]. Measuring genome-wide DNAm in large numbers of specimens typically uses microarray-based technologies such as the Illumina HumanMethylation450 (450 K) and HumanMethylationEPIC (850 K) [14] arrays, which yield an approximation to the proportion of DNA copies that are methylated at each specific cytosine locus, reported as beta values. Preprocessing pipelines such as PyMethylProcess have simplified derivation and storage of methylation beta values in accessible data formats [15].
The scope of features from DNAm arrays is 20- to 50-fold higher than that of RNA-sequencing data sets that return normalized read counts for each gene. Though DNAm data can have a similar scope of features as genotyping array data sets, DNAm beta values are continuous (0-1), not categorical. Together, these facets of DNAm data sets pose challenges to analyses such as handling multi-collinearity and correcting for multiple hypothesis testing. To address these challenges, many downstream EWAS analyses have focused on reducing the dimensions into a rich feature set to associate with outcomes. By limiting the number of features through dimensionality reduction and feature selection, analyses become more computationally tractable and the burden of correcting for multiple comparisons is reduced. An important advancement to methylation-based deep learning analyses was the application of Variational Auto-encoders (VAE). Initial deep learning approaches for DNAm data focused on estimating methylation status and imputation, performing classification and regression tasks, and performing embeddings of CpG methylation states to extract biologically meaningful lower-dimensional features [16][17][18][19][20][21][22][23]. VAEs embed the methylation profiles in a way that represents the original data with high fidelity while revealing nuances [4,5,24]. Thereafter, researchers attempted to develop similar frameworks for extracting features for downstream prediction tasks and identify meaningful relationships revealed by VAE latent representations [25]. However, VAE models are sensitive to the selection of hyperparameters [26] and have not been optimized for synthetic data generation, latent space exploration, and prediction tasks. Many auto-encoder approaches represent the data using an encoder, and then utilize a non-neural-network model (e.g. support vector machine) to finalize the predictions. Presently, to the best of our knowledge, there is no end-to-end training approach that both extracts biologically meaningful features through latent encoding and performs predictions using the derived features. Further, existing frameworks do not output predictions for multi-target regression tasks, such as cell-type deconvolution and subject age prediction. Here, we leverage deep learning latent space regression and classification tasks through the development of a modular framework that is highly accessible to epigenetic researchers (Fig. 1). MethylNet is a modular, user-friendly deep learning framework for EWAS tasks with automation that leverages preprocessing pipelines. To discover important CpGs for each prediction we use the SHAP (SHapley Additive exPlanation) approach [27]. We highlight MethylNet as an easy-to-use command line interface that utilizes automation to scale, optimize, and simplify deep learning methylation tasks. MethylNet's capabilities are showcased here with unsupervised generative and clustering tasks, cell-type deconvolution, pan-cancer subtype classification, age regression, and smoking status classification. These analyses will pave the path for more robust deep learning prediction models for methylation data. Coupled with PyMethylProcess [15], we expect the MethylNet framework to enable rapid production-scale research and development in the deep learning epigenetic space.

Results

We show that MethylNet serves as an effective encoder for DNAm data by capturing latent features that have high fidelity to the original dataset.
This method can utilize encodings to make accurate predictions in common DNAm analysis tasks, and the CpGs important for making predictions are concordant with prior observations. Finally, we demonstrate that MethylNet can also identify CpGs consistent with a large EWAS meta-analysis.

Datasets acquired

We selected six public DNAm data sets and use cases to illustrate a range of tasks and demonstrate the ability to capture features that meaningfully encode aging, cell lineage, disease states, and exposures. The first dataset (Johansson data) was used to study both age and cell type classification and is one of the largest readily available DNAm datasets from healthy subjects with a wide age range (blood DNAm from individuals aged 15 to 95, GSE87571 [28]; Supplementary Figure 1 and Supplementary Table 1). The second dataset (The Cancer Genome Atlas, TCGA) was used to study cancer subtypes and includes 8376 samples representing 32 different cancer subtypes (Supplementary Tables 1, 2). The third dataset (Liu dataset) was used to compare blood DNAm in current smokers to never smokers among the controls from a rheumatoid arthritis study (GSE42861, subset n = 188 [29]). These three datasets were preprocessed using PyMethylProcess to yield 300 k, 200 k, and 300 k CpG features respectively and then split into 70% training, 20% testing, and 10% validation. Three additional datasets (GSE40279, GSE84207, and GSE75067) were utilized for preliminary evaluations of external validation and breast tumor subtyping.

Motivation for DNAm encoding

First, we established MethylNet as a method for DNAm encoding by demonstrating the ability to recapitulate the original DNAm signal while providing superior clustering performance over state-of-the-art clustering methods such as Recursively Partitioned Mixture Modeling (RPMM) [30] (see Supplementary Material, "Evaluation of Unsupervised Encoder Performance"; Supplementary Figures 2, 3 and 4). Given MethylNet's performance in the unsupervised domain and its ability to meaningfully encode DNAm features, we next used this framework to validate performance in typical DNAm prediction tasks of age estimation, cellular proportion estimation, and disease classification.

Fig. 1 Step-by-step description of the modular framework: a Train feature extraction network using variational auto-encoders; b Fine-tune encoder for prediction tasks; c Perform hyperparameter scans for (a) and (b); d Identify contributing CpGs; e Interpret the CpGs.

Age results

DNAm-based age estimators such as the Horvath and Hannum clocks used elastic net penalized regression to identify sets of CpGs (353 and 71, respectively) strongly associated with age [31,32]. Hannum et al. leveraged DNA methylation data from whole blood measured with the 450 K Illumina platform in 656 subjects aged 19-101. Horvath leveraged genome-scale methylation data from 51 tissue and cell types in 82 independent data sets and over 8000 samples. The resulting models provide very accurate age estimation, but the number of features and the manner in which they can be associated with age are limited. Moreover, there is recent interest in understanding what drives the observed residual between chronological age and methylation age. The difference between age and methylation age has been termed biological age or age acceleration and has itself been associated with disease risk and all-cause mortality [33][34][35].
Demonstrating consistent performance between MethylNet and established approaches motivates future use of our method to study complex states and interactions underlying aging processes. Again utilizing the Johansson data, we trained MethylNet to predict the chronological age of the individuals. MethylNet-predicted age showed excellent concordance with the actual subject age (R2 = 0.96, Fig. 2a) in the hold-out test set (n = 144) and had a mean absolute error of only 3.0 years (Fig. 2b). The overlap between CpGs important to the MethylNet predictions and the Hannum clock CpGs is shown in Fig. 2c. CpG contributions to the age predictions were estimated using Shapley scores, and these CpG contributions were compared between age groups using correlation distance, as illustrated in Fig. 2d. The connectivity between different age groups' CpG attributions in Fig. 2d using hierarchical clustering demonstrates the sharing of important CpGs by similarly aged groups. Further description of the derivation of the Shapley score estimates can be found in the supplementary materials.

Fig. 2 Age Results on Test Set (n = 144): a Age predictions derived using the Horvath, Hannum, and MethylNet estimators are compared to the true age of the individual, the predicted ages are plotted on the x-axis, the actual ages on the y-axis, and a line was fit to the data for each estimator; b Comparison of MethylNet Age estimates on Test Set (n = 144) to Horvath and Hannum Age Estimators. 95% confidence intervals for each score were calculated using a one thousand sample non-parametric bootstrap; c Bar chart depicting the overlap of CpGs important to MethylNet and Hannum age estimators where one thousand CpGs with the highest SHAP scores per 10-year age group are divided by the total number of Hannum CpGs that passed QC; d Hierarchical clustering using the correlation distance between SHAP CpG scores for age groups across all CpGs. The linkage is found between similar age groups.

We aimed to compare the CpGs contributing most to the MethylNet age predictions with those calibrated in the Hannum epigenetic clock [31]. The CpGs used by the Hannum model were most likely associated with those aged 60-80, the most prevalent ages in the cohort. Since the number of Hannum CpGs rediscovered by MethylNet appears to peak around this range, this supports evidence that MethylNet is able to recover the defining CpGs of the Hannum cohort.

Cell-type deconvolution results

Reference-based cell type estimation approaches with DNAm data use a library of cell-specific leukocyte differentially methylated regions (L-DMR) to infer cellular proportions. These cell type libraries, similar to age estimation, contain a few hundred CpG features for prediction (e.g. the 350 CpG IDOL library [12]), and current deconvolution is very accurate and fast. Although current methods like estimateCellCounts2 accurately capture cellular proportions in blood, the future of cell type deconvolution includes efforts to estimate remaining sources of cell type heterogeneity, including cellular states that currently lack L-DMR. We sought to investigate the ability of MethylNet to capture current capabilities of cellular deconvolution so that it may be applied to future unsupervised domains when the requisite amount of data is available. As such, MethylNet was tasked with estimating the cell-type proportions for six immune cell-types using the same dataset as supplied for the age analysis.
Unsupervised derivation of six latent clusters using VAE embeddings demonstrated separation of cellular proportions without training on a reference set of cellular proportions for DNAm profiles (Supplementary Table 6); this served as motivation for a supervised analysis. As compared to the other EpiDISH estimator methods that utilize the IDOL library, the prediction framework demonstrates exemplary performance on this task in R2 and mean absolute error across all cell-types save for monocytes, as demonstrated in Table 1 (Fig. 3a, b; Supplementary Table 7). Using Shapley attribution, the contribution of each CpG to the predictions of the cell-types was derived. Figure 3c shows the hierarchical clustering of these CpG attributions. The hierarchical clustering of the SHAP scores of each of the cell-types is consistent with the known cell lineage, reinforcing that cell lines that have co-evolved similarly share similar driving CpGs that are indicative of their cell-type. Some of the cell-types obtained improved concordance metrics (e.g. R2) compared to other cell types but had similar absolute errors (i.e. MAE). This is likely due to the fact that the total range of proportions of monocytes, for instance, in the collected data was small, such that these errors could make it difficult to correlate the predicted and true cell type proportions. Alternatively, issues with the purity of the reference monocytes could complicate reference-library calibration. A similar overlap test was conducted between the MethylNet SHAP CpGs and IDOL-derived L-DMR CpGs (Supplementary Figure 5). Little overlap was found between the two sets, as only the B-cells were able to capture more than 10% of the IDOL CpGs. This does not indicate that MethylNet could not identify CpGs that are cell-type specific. Rather, this finding serves to indicate that models with different optimization objectives and numbers of available features differentially attribute CpGs. To this point, we still do not know at what point CpGs, across individuals or larger groupings, reach statistical significance and thus warrant additional inspection. Some preliminary analysis can be found in Supplementary Figures 6 and 7. For the Hannum and IDOL analysis, we set this at an arbitrary cutoff value of the top 1000 CpGs per age/cell-type group, but the distribution of these Shapley scores and their fidelity to model predictions is an active area of research [36].

Pan-cancer prediction results

Finally, motivating uses of MethylNet as a mechanism to uncover sources of disease heterogeneity and the capability of the workflow to capture features that are tissue-specific, MethylNet was employed to make predictions of 32 cancer subtypes (n = 1676; one removed due to low sample size) across the pan-cancer TCGA cohort. This analysis yielded 0.97 accuracy, 0.97 precision, 0.97 recall and 0.97 F1-score, averaged across the different subtypes (Fig. 4a; training and validation performance in Supplementary Table 5). These results outperform a support vector machine (SVM)-based classification approach, over which MethylNet demonstrated a 0.15-unit (18%) increase in F1-score. A breakdown of classification accuracies for each subtype is in the supplemental results (Supplementary Tables 8 and 9). We also report on how predictive accuracy scales with dataset size in the supplementary materials (Supplementary Figures 8-9).
The latent profiles derived for the pan-cancer subtypes after training the model on this predictive task showed clustering with high concordance to known cancer-type differences. Thresholding a hierarchical clustering of the average cosine distance between cancer subtypes from the MethylNet-derived embeddings (Fig. 4b, Supplementary Table 10) indicates clustering of the test methylation profiles by eight unsupervised, biologically corresponding superclasses. The subtypes that define these larger groupings are concordant with expectations from tissue differences in cancer biology. Taken together, MethylNet not only makes highly accurate and robust classification predictions, but also extracts latent features with high fidelity to the biology of tissue or cancer type difference. The similarity between some of the subtypes may explain why and how certain subtypes did not perform as well compared to others (Supplementary Tables 8 and 10). For instance, we see that 4 KIRC and KIRP cases were conflated with each other. In addition, two cervix cases were predicted to be uterine. There were elevated rates of misclassification between the colon and rectal cancer pairings and esophageal, head and neck, and stomach cancer pairings. Finally, seven predicted glioblastoma cases were actually low-grade glioma (Supplementary Table 8). Thus, subtypes tended to be misclassified only within each superclass. The exception to this trend was the misclassification of lung squamous cell carcinomas, four of which were predicted to be its adenocarcinoma counterpart, which is consistent with the shared embedding profile and likely reflects similar biology of cellular lineage. For the cancer subtype analysis, we sought to identify concordance between the latent profiles of methylation across cancer types. Because each tumor type has a different baseline DNAm profile for its normal tissues-of-origin, and these differences are expected to contribute to the prediction, we decided not to attempt derivation of the salient CpGs for each subtype's prediction.

EWAS application, preliminary subtyping and external validation

Given the success of MethylNet to capture nonlinear interacting features that cluster, recapitulate and assist with predictions, we sought to evaluate MethylNet on the Liu data for the prediction of smoking status (current vs. never smoker) and compare the results to a prior robust EWAS meta-analysis [37]. MethylNet achieved 73% accuracy in predicting smoking status despite relatively small training (n = 139), validation (n = 19) and held-out test sets (n = 30) (Supplementary Figure 10). There was a significant correlation between the rank of the CpGs most important to the MethylNet predictions and the rank of the CpGs most significantly associated with smoking in the EWAS meta-analysis (Supplementary Figure 10). The preservation of these ranks indicates that MethylNet can form associations with outcomes that are concordant to known EWAS analyses, even though it places more emphasis on interacting features versus the traditional EWAS. We have also conducted a preliminary subtyping classification of the PAM50 subtypes of breast cancer and preliminary validation of MethylNet age prediction on an external cohort, the results of which are included in the supplementary materials (Supplementary Table 11; Supplementary Figure 11). Data were acquired from GEO accessions GSE40279, GSE84207, and GSE75067 [31,38,39].

Discussion

Here, we introduce MethylNet, a modular deep learning framework that is easy to train, apply, and share.
MethylNet employs an object-oriented application programming interface (API) and has built-in functionality to easily switch between analyses with respect to embedding, generation, classification, and regression tasks. We demonstrate MethylNet's ability to capture features that recapitulated the original DNAm data and generated accurate predictions that conform with expected biology. MethylNet extends previous approaches by fine-tuning the feature extractor and adding additional layers for prediction tasks. It also employs a robust hyperparameter search method that optimizes the parameters of the model for generalization to unseen data. The pipeline is flexible to the demands of the user. For instance, if a user only wanted to train a custom machine learning model on the latent features, the data can be extracted before the end-to-end training step. By demonstrating the ability to meaningfully encode DNAm features and predictive performance on four tasks (age prediction, cell-type deconvolution, pan-cancer subtype prediction, and concordance with the results of a known EWAS meta-analysis), we present further support of the applicability of VAEs for feature extraction, and more evidence that deep learning presents an opportunity for learning meaningful biology and making accurate predictions from feature-rich molecular data.

Fig. 3 Results on test set (n = 144) for cell-type deconvolution: a For each cell type, the predicted cellular proportion using MethylNet (x-axis) was plotted against the predicted cellular proportion using estimateCellCounts2, which has been found to be a highly accurate measure of cellular proportions and thus serving as the ground truth for comparison, a regression line was fit to the data for each cell type: B-cell, CD4T, CD8T, Monocytes (Mono), NK cells, and Neutrophils (Neu); b Grouped box plot demonstrating the concordance between the distributions of the MethylNet-estimated proportions of each cell-type and the distributions derived using estimateCellCounts2; c Hierarchical clustering using the correlation distance between two cell types' SHAP CpG scores across all CpGs. The linkage is found between cell types of similar lineage.

Fig. 4 Results on test set for pan-cancer sub-type predictions: a Comparison of MethylNet derived pan-cancer classification of test set (n = 1676) to UMAP+SVM method. 95% confidence intervals for each score were calculated using a 1000 sample non-parametric bootstrap; b Hierarchical clustering of average embedding cosine distance between all pairs of cancer subtypes. Cancer subtypes from both axes are colored by cancer superclasses, derived using the hierarchical clustering method. The clustering of similar MethylNet embeddings is concordant with known biology of tissue/cancer type difference. Skin and connective tissue cancers, and bile and liver cancers in Cluster 1. All kidney cancers in Cluster 2. Bladder, uterine and cervix cancers in Cluster 3. Pairing of colon and rectal cancers, both adrenal cancers in Cluster 4. A tie between lung adenocarcinoma and mesothelioma in Cluster 5, both of which may develop in similar locations. Pairings between stomach and esophagus cancer, and pancreas and prostate cancers in Cluster 6. Brain cancers in Cluster 7. Thymoma, Diffuse Large B-Cell lymphomas in Cluster 8. While the lung cancers were not paired together, they experienced a high degree of embedded similarity.
The connectivity between the lung squamous cell cancer and its neighboring types prevented the two cancers from being grouped together.

Strengths, limitations, and future directions

Interpretation of our high dimensional models still has challenges, partially due to the drawbacks of assigning feature attributions to high dimensional multi-collinear data. While traditional linear models can still be highly predictive, multi-collinearity has the effect of adjusting the coefficients of the predictors such that the results are not as interpretable. Shapley feature attributions are a promising method for explaining predictions by approximating complex models with simpler linear ones, as we were able to demonstrate agreement between age groups and cell lineages and concordance between ranked SHAP scores and the ranked p-values of CpGs associated with smoking status in a large EWAS meta-analysis. Our age and cell-type analyses were conducted to demonstrate the capabilities of the deep learning tool, and models were trained on a relatively small study of blood samples, only a subset of those included in the Horvath framework. Further work can capture features indicative of age acceleration, a popularized prognostic indicator tied to the residual between the predicted and actual age. Since initial publications in 2013, investigators have started using the difference between chronologic and predicted DNAm age to investigate questions related to so-called biological age or age acceleration [40]. This area of epigenetics is moving towards understanding the relation of the age residual with disease risk, and the potential to modify it through intervention (e.g. diet and exercise). More advanced treatment of the data underlying prediction of age will allow opportunities for mechanistically informed intervention studies that aim to reduce age acceleration and improve public health [9]. The MethylNet methodology presents an alternative framework to uncover functional gene regulation that accounts for biological age acceleration and goes beyond the limited set of features used to predict methylation age in Horvath, Hannum, and other DNA methylation clocks. As the biology of these clocks is still being discovered [41] and due to the non-linear relationship with both chronological age [42] and other biomarkers of epigenetic cell maturation [43], further examination of age acceleration and biology should be done through neural networks. Our analyses also only presented predictions across one type of tissue without yet accounting for differences in methylation between cell types. MethylNet was shown to capture some of the remaining sources of cellular heterogeneity, which can include differential methylation of cell subtypes and states that are known to exist, but for which we do not currently have L-DMRs. MethylNet represents an opportunity to improve reference-based and reference-free deconvolution approaches. More robust and consistent estimators that address current limitations of DNAm-based deconvolution approaches will be the focus of future applications of the MethylNet method. Prior works that have explored pan-cancer prediction in the deep learning space have limited their analyses to a small set of CpGs that do not capture a holistic understanding of interaction and regulation in the cancer context [44]. Our results demonstrate that models with a larger number of CpGs are needed to accurately capture differences in tissue/cancer subtypes.
Since MethylNet captures and confounds the biology between similar conditions, it presents an opportunity to explore similar therapeutic targets and treatments across disease types of similar tissue, within and outside cancer studies. Given the ability of MethylNet to capture the differences in the profiles between the cancer subtypes, there is great opportunity to better understand the heterogeneity of other diseases. Our analyses refrain from uncovering relationships between the discovered CpGs and functional effects because of the difficulties associated with localizing the effect of a small set of CpGs of interest. Once the salient attributions are found, CpG analyses experience common pitfalls when trying to match CpGs to their nearest gene via the found promoter region. Such analyses may ascribe the CpG's effect in the context of what gene they appear to be regulating. However, genes are also regulated at a distance in the 3D topological space by interacting with enhancer regions [45,46]. Thus, enrichment methods based on individual gene-to-CpG relationships implemented in missMethyl [47] may not be suitable for interpreting loci identified by MethylNet. Ideally, downstream approaches to add biological interpretation would take into account chromosome/genome interaction (e.g. through use of Hi-C data) and genome topological structure/organization. For instance, enrichment from chromatin state and histone modifications present in the target loci as used by ChromHMM and LOLA [48,49] might be more warranted. Some model result interpretation issues may be partially circumvented by integrating gene expression data into the model or, more structurally, by building a deep learning mechanism to predict gene expression from DNA methylation using other layers of information from the genomic context [50]. An important take-away is that, as interpretation methods for these high dimensional data are pioneered, VAE-based deep learning models will likely find CpGs that interact in ways we would not traditionally think about. While the other models were trained on a much smaller set of CpGs, MethylNet is able to make its predictions on 200-300 K CpGs, capturing complex interactions between a much larger set of CpGs. Crucial next steps should address these interpretability and confounding concerns through feature selection, covariate adjustment and more biologically interpretable informatics methods for CpG interpretation. Finally, to scale up MethylNet's deep learning workflows to production grade as well as incorporate information from Whole Genome and Reduced Representation Bisulfite Sequencing, future renditions may utilize common workflow language (CWL) [51]. In addition, new Bayesian search methods may be employed to better automate the selection of model hyperparameters and automate the construction of the ideal neural network architecture [52,53].

Conclusion

We demonstrate a modular, reproducible, and easy-to-use object-oriented deep learning framework for methylation data: MethylNet. We illustrate that MethylNet captures meaningful features that can be used for future unsupervised analyses and achieves high predictive accuracy across age estimation, cell-type deconvolution, cancer subtype, and smoking status prediction tasks. MethylNet's accuracy at these tasks was superior to, or at least equivalent to, other methods, and interpretations of the model's outputs demonstrated agreement with prior literature.
We hope that MethylNet will be used by the greater biomedical community to rapidly generate and evaluate testable biological hypotheses involving DNA methylation data through a scalable, automated, intuitive, and user-friendly deep learning framework.

Methods

Our approach uses a few simple commands, all of which can be executed for any prediction task. First, deep learning prediction models are pre-trained using variational auto-encoders, and the layers of the encoder are used to extract biologically meaningful features. These neural network layers are used to embed the data and extract features for clustering in the unsupervised setting, generating new data with high fidelity to the original source, and for prediction model pretraining. Second, prediction layers are included downstream of the encoder, which fine-tune the model's prediction and feature extraction layers end-to-end for the tasks of multi-output regression and classification. Training prediction layers optimizes the neural network for prediction tasks. Third, autonomous hyperparameter scans are performed to optimize the model parameters for the first and second tasks while generating rich visualizations of the data. Lastly, the contributions of the CpGs to each prediction at varying degrees of granularity are determined through Shapley feature attribution methods. MethylNet is implemented as a UNIX/Linux command-line tool that allows users to make deep-learning predictions on methylation data with use cases such as embedding, generation, classification and regression. With the specification of a single command-line option, MethylNet can be toggled between regression and classification tasks to address a wide breadth of problems. The modular, accessible characteristic of the MethylNet framework enables a simple procedure to train and produce results across multiple domains. In addition to predictive tasks, MethylNet can encode data into a lower-dimension space from which to perform unsupervised clustering when researchers do not have labeled DNAm data. Further, MethylNet can generate realistic synthetic data with high fidelity relative to the original samples.

Description of framework

Here, we present a description of a modular and highly accessible framework for deep learning tasks pertaining to unsupervised embedding, supervised classification and multi-output regression of DNA methylation (DNAm) data. The MethylNet pipeline is comprised of subcommands specifically pertaining to embedding, prediction, and interpretation. We have included the minimal set of commands to run the workflow in the supplementary materials under the section "Example Code to Run Pipeline". First, after preprocessing with PyMethylProcess, the dataset is split into training, validation, and testing sets using train_test_val_split from the preprocessing pipeline utilities.

Training the feature extractor to embed data

An embedding routine is used to pretrain the final prediction model by using Variational Autoencoders to find unsupervised latent representations of the data. Pretraining is an important part of transfer-learning applications. The knowledge extracted from learning an unsupervised representation of the data is used towards learning predictive tasks with a lower data requirement. Data fed into these VAEs pass through an encoder network that compresses the data, and this compressed representation is then fed into a decoder network that attempts to reconstruct the original dataset while also generating synthetic samples. The model attempts to balance the ability to generate synthetic samples with the ability of the data to be accurately reconstructed. The weight given to generation versus reconstruction can be set as a hyperparameter [54].
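The following minimal PyTorch sketch illustrates the kind of encoder-decoder model described above, with a binary cross-entropy reconstruction term and a weighted KL term. It is only a schematic stand-in, not MethylNet's actual architecture; the layer sizes, latent dimension, and beta weight are assumed values for illustration.

```python
import torch
import torch.nn as nn

class BetaVAE(nn.Module):
    """Encoder compresses beta values to a latent code; decoder reconstructs them."""
    def __init__(self, n_cpgs=300_000, hidden=500, latent=100):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_cpgs, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_cpgs), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar, beta=1.0):
    """Binary cross-entropy reconstruction loss plus a beta-weighted KL divergence term."""
    recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```

The beta argument plays the role of the generation-versus-reconstruction weight mentioned above: larger values emphasize a smooth latent space for sampling, smaller values emphasize faithful reconstruction.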
Generating synthetic training examples is important for adding noise while training a network for prediction tasks, a component which serves as a form of regularization to make the algorithm more generalizable to real-world data. While synthetic data can be generated using MethylNet via the generate_embed command, this generative process is meaningfully utilized during training, when the algorithm samples from the latent distribution of the embedded data to regularize. Nevertheless, the ability to reconstruct the original dataset is important because it governs how well the latent representations of the data capture features that properly describe the underlying signal. In order to perform embeddings on the input MethylationArray training and validation objects, the perform_embedding command is executed via the command line interface. Hyperparameters of the autoencoder model can be scanned via the launch_hyperparameter_scan command. This randomly searches a grid of hyperparameters and randomly generates neural network topologies (number of layers, number of nodes per layer), the complexity (network width and depth) of which can be weighted by the user. The framework stores the results of each training run in logs to find the model with the lowest validation loss (binary cross-entropy reconstruction loss plus KL-loss of the validation set); hyperparameters with the lowest validation loss can be found in Supplementary Table 12. Alternatively, results from the embedding routine can be input into any machine learning algorithm of choice. Embedding results are visualized through interactive 3-D plots by running transform_plot from PyMethylProcess.

Training for prediction via transfer learning

MethylNet can be used to perform classification, regression, and multi-output regression tasks via the prediction subroutine, which applies a transfer learning technique via the Python class MLPFinetuneVAE to fine-tune the encoding layers of the VAE model while simultaneously training a few appended hidden layers for prediction. A description of transfer learning has been included in the supplementary materials (see "Further Description of Transfer Learning Application"). We have also included an implementation of the multi-layer perceptron that can be trained within our framework which does not utilize transfer learning from the encoder. The make_prediction subcommand is run for these prediction tasks, and hyperparameters such as model complexity, learning rate, and schedulers are scanned via the launch_hyperparameter_scan subcommand (hyperparameters with the lowest validation loss can be found in Supplementary Table 13). The final model is chosen if it has the lowest validation loss (mean squared error for regression, cross-entropy for classification), and the output model is a snapshot at the epoch that demonstrated the lowest validation loss. The test set is also evaluated immediately after the model is trained using the training set. The results from MethylNet can be immediately benchmarked and compared for performance to other machine learning algorithms, which can be evaluated using the general_machine_learning subcommand from PyMethylProcess.
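A schematic of this transfer-learning step might look like the sketch below, in which the pretrained encoder layers are reused and a small prediction head is appended and trained end-to-end. The class name, layer sizes, and output dimensions are hypothetical and do not correspond to the actual MLPFinetuneVAE implementation; the sketch assumes a pretrained VAE with the structure shown earlier.

```python
import torch
import torch.nn as nn

class FineTunePredictor(nn.Module):
    """Pretrained VAE encoder with an appended MLP prediction head, trained end-to-end."""
    def __init__(self, pretrained_vae, latent=100, hidden=64, n_outputs=1):
        super().__init__()
        self.encoder = pretrained_vae.encoder          # reuse pretrained feature-extraction layers
        self.to_latent = pretrained_vae.mu             # deterministic embedding for prediction
        self.head = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_outputs))

    def forward(self, x):
        return self.head(self.to_latent(self.encoder(x)))

# e.g. n_outputs=1 for age regression, 6 for cell-type proportions, 32 for cancer subtypes.
# Optimizing over model.parameters() updates the encoder and the new head jointly:
# model = FineTunePredictor(BetaVAE(), n_outputs=6)
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```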
Furthermore, ROC curves and classification reports can be output using plot_roc_curve and classification_report, and regression reports are generated via regression_report. A confusion matrix of misclassifications can be generated from PyMethylProcess's plot_heatmap. Finally, the training curves for both the embedding and prediction steps can be visualized using the plot_training_curve subcommand (example prediction embedding plots found in Supplementary Figure 12; analysis training curves can be found in Supplementary Figure 13).

Interpretation of results

Predictions from MethylNet can be interrogated in two ways. The first approach uses Shapley feature attribution to assign a contribution score to each CpG based on how much it contributed to the prediction. The second approach compares learned clusters of embeddings of methylation samples (and corresponding subtypes) for biological plausibility. The Shapley value interpretations, available using methylnet-interpret, approximate the more complex neural network model using a linear model for each individual prediction, the coefficients of which are Shapley values. Shapley values represent the contributions of each CpG to the individual predictions. They are produced after the prediction model and test MethylationArray are input to the produce_shapley_data command, which dumps a ShapleyData object into memory. The Shapley coefficients can be averaged by condition to yield summary measures of the importance of each CpG to the coarser category, and the coefficients can be clustered to demonstrate the similarity between methylation subtypes and coarser conditions, which can be compared to known biology.

Description of experiment

We evaluated our MethylNet framework (hyperparameter scan, embedding, fine-tuning predictions, interpretation) using 34 datasets from n = 9500 samples for four different prediction tasks: classification (TCGA pan-cancer subtype and smoking prediction), regression (age prediction), and multi-output regression (cell-type deconvolution). PyMethylProcess was used to preprocess the data and yielded MethylationArray objects that contain a matrix of beta values for each individual and the corresponding phenotype information [15]. The MethylationArray data for each of these three experiments were split into 70% training, 20% testing, and 10% validation sets. The training set was used to update the parameters of the model. The validation set was used to terminate training early and choose hyperparameters that would be most generalizable to a test set. The test set was used for final model evaluation and interpretation. More information on model training can be found in the supplementary materials. For each score, 95% confidence intervals were computed using a one thousand sample non-parametric bootstrap. First, MethylNet's generative analysis was conducted on 8 arrays representing 8 groupings of features of the Johansson data, found by running a KMeans clustering algorithm on a UMAP clustering of CpG methylation profiles. Each group was trained using a 50-job VAE hyperparameter scan to yield the ideal embedding. The generate_embed command was used to first embed methylation profiles and then decode them to their predicted values. All of the beta values of the CpGs of the individuals of the test set were compared to those found by generating the data from the latent embeddings.
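One way to carry out this kind of recapitulation comparison is sketched below, assuming a trained VAE like the earlier sketch that returns reconstructed beta values. The scoring choices (R2 and mean absolute difference) are illustrative and not necessarily the exact metrics used in the paper.

```python
import numpy as np
import torch
from sklearn.metrics import r2_score

def recapitulation_scores(vae, x):
    """Encode then decode test beta values and compare them to the originals."""
    vae.eval()
    with torch.no_grad():
        x_hat, _, _ = vae(x)                 # forward pass of the trained VAE sketch above
    original = x.cpu().numpy().ravel()
    generated = x_hat.cpu().numpy().ravel()
    r2 = r2_score(original, generated)
    mae = float(np.mean(np.abs(original - generated)))
    return r2, mae
```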
MethylNet was then configured for regression tasks and applied to derive sample age estimates in the Johansson data, using the reported chronological age as the ground truth. These results were compared to those derived from the Hannum and Horvath clocks using cgageR [31,32,55]. The Shapley framework was employed to quantify the importance of the CpGs in making predictions for age across 8 different age groups split by 10-year increments. The CpG importance was compared between the groups through hierarchical clustering to find similarities between the age groups. The one thousand most important CpGs from each group were extracted and overlapped with CpGs defined by the Hannum model to depict the concordance of important CpGs between MethylNet and the Hannum model. For a second task, MethylNet was configured for multi-target regression to estimate cell-type proportions. First, estimateCellCounts2, using the 450 K legacy IDOL optimized library [12], was used to deconvolve the cell-type proportions from each sample to develop our best proxy to ground truth outcomes for training the model. The MethylNet model was trained on the estimateCellCounts2 estimates of cell-type proportions for six different immune cell-types. MethylNet was then compared to results derived from applying the 350-CpG IDOL-derived legacy library from FlowSorted.Blood.EPIC [56] using two different deconvolution methods, Robust Partial Correlations (RPC) and Cibersort, implemented in EpiDISH [57]. The importance of each CpG to each cell-type was then quantified through SHAP. These Shapley coefficients were compared using hierarchical clustering. A similar clustering profile would indicate these cell-types share similar driving CpGs, and recovery of the cell-lineage dendrogram would demonstrate concordance with known biology. The one thousand most important CpGs from each cell-type were extracted and overlapped with the IDOL CpGs to inspect whether the two models picked up similar cell-type-specific CpGs. Additional details regarding SHAP can be found in the supplementary material. In the next task, MethylNet was used to classify samples to cancer types. The data for the classification task are from 8891 TCGA-acquired samples, representing 32 different cancer types (Supplementary Figure 1 and Supplementary Tables 1 and 2), and were preprocessed using PyMethylProcess to yield a 200 k CpG beta matrix. The features with the highest mean absolute deviation across samples were selected to limit the computational complexity and memory of model training and to capture the highest variation in the data. The highly variable sites are assumed to be more biologically meaningful than the less variable sites. The MethylNet analysis pipeline was conducted on the pan-cancer dataset. The results from MethylNet were compared to a popular omics classification approach, a uniform manifold approximation and projection (UMAP) embedding of the samples, followed by support vector machine (SVM) classification. UMAP is an effective way to reduce the dimensionality of the data as well as preserve meaningful local and global structure in the data [58,59]. Both were performed using PyMethylProcess's general_machine_learning subcommand, which executed a hyperparameter grid search of the SVM model. Finally, the embeddings of the different cancer subtypes were compared by calculating the average cosine distance between clusters in the test samples. These distances were clustered using hierarchical clustering to form larger superclasses of cancer that demonstrate a shared embedding profile.
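The embedding comparison just described could be sketched as follows: average cosine distances are computed between the embeddings of each pair of subtypes and then hierarchically clustered into superclasses. The function names and linkage method are assumptions for illustration; the choice of eight clusters follows the number of superclasses reported in the Results.

```python
import numpy as np
from scipy.spatial.distance import cdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

def subtype_distance_matrix(embeddings, labels):
    """Average cosine distance between the embeddings of each pair of subtypes."""
    labels = np.asarray(labels)
    subtypes = sorted(set(labels))
    dist = np.zeros((len(subtypes), len(subtypes)))
    for i, a in enumerate(subtypes):
        for j, b in enumerate(subtypes):
            dist[i, j] = cdist(embeddings[labels == a],
                               embeddings[labels == b], metric="cosine").mean()
    return subtypes, dist

def superclasses(dist, n_clusters=8):
    """Hierarchically cluster the subtype distance matrix into larger superclasses."""
    dist = dist.copy()
    np.fill_diagonal(dist, 0.0)                       # treat self-distance as zero
    tree = linkage(squareform(dist, checks=False), method="average")
    return fcluster(tree, t=n_clusters, criterion="maxclust")
```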
A sensitivity analysis was conducted to understand how MethylNet scales with the number of training samples and features. The TCGA cohort dataset was utilized and split into MethylationArrays of increasing numbers of features, scaled almost logarithmically for low numbers of features and then linearly. This generated sixteen separate datasets. These datasets were trained in parallel with 100-job hyperparameter scans to yield final predictions. The sensitivity analysis on training set size split up the training set into 10% increments from 10 to 100%, and each of the 10 sets was trained using 150-job hyperparameter scans. The number of training epochs was reduced to 50 for each analysis to limit the computational time. Finally, a 100-job hyperparameter scan was conducted to predict smoking status on the Liu data. Gradient-based Shapley estimates were acquired using SHAP. The CpG SHAP scores for the test set samples were subset by the CpGs significantly associated with smoking identified by Joehanes et al. 2016. The average rank of the highest absolute SHAP score for each CpG across individuals was compared to the rank of the CpGs most significantly associated with smoking reported by Joehanes et al. 2016. Correlation of these rank orders was determined through Pearson's correlation coefficient, and a non-correlation statistical test was employed to find a p-value for the relationship.

Code availability statement

MethylNet was built using Python 3.6 and utilizes the PyTorch framework to run its deep learning models on GPUs using CUDA, although CPUs are also supported. The workflow is available as an easily installable command line tool and API via PyPI as methylnet and on Docker [60] as joshualevy44/methylnet. The Docker image contains a test pipeline that requires one line to run through the hyperparameter training and evaluation of all framework components and can run on your local personal computer in addition to high performance computing. Help documentation, example scripts, and the analysis pipeline are available in the MethylNet GitHub repository (https://github.com/Christensen-Lab-Dartmouth/MethylNet). Code Ocean is an online platform for the sharing of reproducible research, computational tools, and test pipelines amongst members of the scientific community. After providing the necessary workflow specification, researchers are able to access and execute the uploader's code to test its capabilities or run their own analyses. Tests of our pipeline's functionality can be conducted on Code Ocean at: https://doi.org/10.24433/CO.6373790.v1.

Supplementary information

Supplementary information accompanies this paper at https://doi.org/10.1186/s12859-020-3443-8. Supplementary Figure 2. a) Visual flow diagram of method used to find CpG groupings and recapitulation of DNAm profiles. First, the 300 k CpGs are projected into a 6-dimensional embedding using UMAP. Each point in the low dimensional space represents a CpG and proximity between points denotes a shared methylation profile across all of the training samples (n = 503). Then, KMeans clustering was used to find 25 clusters of CpGs with similar profiles. The number of clusters of CpG features was reduced to 8 by filtering out clusters if their variance was above 1 in the 6D space.
After that, the CpG features found in each cluster were used to select CpGs to form independent MethylationArrays across the training, validation and test sets. Finally, one autoencoder was trained per each array and the test samples were recapitulated and compared to the original input data; b) Descriptive statistics for final groupings of CpGs and recapitulation scores for each resultant set of CpGs versus the original methylation profiles input into each model. Supplementary Figure 3. Generated/recapitulated beta values versus original beta values for each CpG per individual of the held-out test set (n = 144); b)-f) corresponds to each of eight chosen clusters in order of low to high cluster variance as previously described; a) is an aggregation of the generated/recapitulated versus true beta values of all of the CpG clusters. Supplementary Figure 4. Hierarchically clustered cosine distance matrix between test samples' VAE-embedded methylation profiles of the held-out test set for the TCGA cohort, colored by: a) Labels assigned to the hierarchical clustering labels for the samples; b) Original TCGA cancer labels; c) RPMM-derived clustering labels on 20 k CpGs. Agreement scores between the RPMM and hierarchal clustering results and the original cancer subtypes were calculated using the v-measure, which takes into account the homogeneity and completeness of the labeling. Note that the clustering colors are not the same because the number of clusters is different from the number of cancer labels. Supplementary Table 3 CpGs that are associated with lower age are similar to the older age group; c) Distribution of Shapley Scores for these two age groups. CpG contributions tend to be negative for the younger age groups and positive for the older age groups. Supplementary Table 8. Confusion Matrix Pancancer Classification (Colored Superclass). Supplementary Table 9. Breakdown Pancancer Classification Results (Colored by Superclass). Supplementary Table 10. Average Cosine Distance Between Embeddings of Cancer Subtypes Supplementary Figure 8. Micro F1-Scores of the held-out test samples (n = 1676) of the TCGA cohort as they relate to: a) the fraction of training samples included for the training process, b) the number of CpGs. Test performance scales linearly with the number of training samples and logarithmically with the number of CpGs. Confidence intervals were calculated using a 1 k nonparametric bootstrap of the test results for each dataset size point in the line plot, and the resulting bootstrapped f1-scores were used to compute the confidence interval for each point in the line plots; c) performance of MethylNet, pretrained using a VAE, is compared to performance using an MLP with the same architecture; F1-Score confidence intervals were derived using a 1 k nonparametric bootstrap; validation loss for each model is compared at the first training epoch and their ultimate convergence point. Supplementary Figure 9. V-Measure scores of the held-out test samples (n = 1676) of the TCGA cohort as they relate to: a) the fraction of training samples included for the training process, b) the number of CpGs. Vmeasure scores were derived by applying and comparing hierarchical clustering on the VAE embeddings to known cancer subtype assignments and using a knee point detection algorithm to identify the ideal number of clusters for each tested dataset. In this figure, scores were smoothed using an exponential moving average smoothing technique to illustrate general trends. Supplementary Figure 10. 
Smoking EWAS study via MethylNet: a) final embeddings derived when fine-tuning the MethylNet VAE demonstrate cluster separation of the "never" versus "current" smokers; b) confusion matrix for the true and predicted "never" versus "current" smokers; c) plotted average ranks found using SHAP for the CpGs that intersected with CpGs identified by Liu et al. versus the ranks of the corresponding p-values of the EWAS meta-analysis. Supplementary Table 11. Preliminary Results for PAM50 Breast Tumor Classification (n = 1018); 95% confidence intervals of scores estimated via a 1000-sample non-parametric bootstrap. Supplementary Figure 11. Internal and External Validation Cohorts: a) Boxenplot demonstrating the distribution of ages for the internal and external cohorts; note that age for the external validation cohort is greater than that of the internal validation cohort; b) plotted MethylNet-predicted age versus actual age. Supplementary Table 12. Select Hyperparameters for Embedding Tasks. Supplementary Table 13. Select Hyperparameters for Prediction Tasks. Supplementary Figure 12. Fine-tuned embeddings (a few parts highlighted) for: a) Age Prediction, b) Cell-Type Deconvolution (colored by neutrophil cell-type proportions), and c) Pan-Cancer Classification (labeled kidney cancers, lung cancers, brain cancers). Supplementary Figure 13. Model training curves for a) Age Predictions, b) Cell-Type Predictions, c) Pan-Cancer Predictions. Note that the learning rate for the prediction curve in a) oscillates quickly, every 10 training epochs, as compared to a larger timescale.
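Supplementary Figure 2 above describes a CpG-grouping flow (UMAP projection of CpGs into a 6-dimensional space, KMeans into 25 clusters, then filtering clusters by variance). A minimal sketch of that flow is given below; the beta-value matrix and the variance threshold of 1 follow the figure legend, while variable names, the variance aggregation, and the downstream autoencoder step are assumptions rather than MethylNet's actual code.

```python
import numpy as np
import umap
from sklearn.cluster import KMeans

# betas: training beta-value matrix of shape (samples, CpGs); we embed CpGs,
# so each row fed to UMAP is one CpG's methylation profile across samples.
cpg_profiles = betas.T                                           # (n_cpgs, n_samples)

embedding = umap.UMAP(n_components=6).fit_transform(cpg_profiles)  # 6D CpG embedding

labels = KMeans(n_clusters=25, random_state=42).fit_predict(embedding)

# Keep only tight clusters: drop clusters whose variance in the 6D space exceeds 1.
kept_clusters = []
for k in range(25):
    members = embedding[labels == k]
    if members.var(axis=0).sum() <= 1.0:      # variance filter per the legend; summing across dimensions is an assumption
        kept_clusters.append(np.where(labels == k)[0])  # CpG column indices for this grouping

# Each retained grouping defines an independent MethylationArray on which one
# autoencoder would then be trained (not shown here).
print(f"Retained {len(kept_clusters)} CpG clusters out of 25")
```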
2019-07-26T08:08:02.662Z
2019-07-04T00:00:00.000
{ "year": 2020, "sha1": "5dc0493d938371325384f199d286b082397cf22c", "oa_license": "CCBY", "oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/s12859-020-3443-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f34612220c63b72ee1ece2149733ea4023df4073", "s2fieldsofstudy": [ "Computer Science", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology", "Computer Science" ] }
269861905
pes2o/s2orc
v3-fos-license
Meta-Analysis of Brain Volumetric Abnormalities in Patients with Remitted Major Depressive Disorder Introduction Major depressive disorder (MDD) is a clinical syndrome with depressed mood and anhedonia as its core symptoms. MDD leads to a severe disease burden due to its high recurrence rate and impairment of psychosocial function [1,2]. Although patients with MDD achieve remission after antidepressant treatment, >90% have at least one residual depressive symptom [3,4], such as negative affective cognition, attention deficit [5], and negative reward expectation [4,6]. Moreover, 20% of patients with remitted MDD experience a relapse of depression within 6 months of taking antidepressants [7]. Posttreatment residual symptoms are prognostic risk indicators [8] and precursors of depression recurrence [9]. Neuroimaging studies have found that patients with remitted MDD have brain structural abnormalities [10,11] and neural activity dysfunction [12,13], which are linked with residual symptoms, compared to healthy controls (HCs). Exploring brain morphological abnormalities in patients with remitted MDD would help better understand the neurobiological basis of residual symptoms of depression and achieve targeted interventions suited to the residual symptom spectrum. Previous meta-analyses confirmed changes in grey matter volume (GMV) in patients with depressive episodes [14][15][16]; however, volumetric changes in patients with remitted MDD remain unclear. Although multiple neuroimaging studies have shown brain abnormalities in patients with remitted MDD compared to HCs [11,17,18], these results are inconsistent. For example, one study showed that GMV in the right superior temporal gyrus (STG) was greater in patients with remitted MDD than in HCs [18]. In another study, patients with both remitted and current MDD showed reduced GMV in the left insula compared with that in HCs [17]. A possible reason for this is that antidepressants are only effective in the brain regions where the target neurotransmitter receptors are distributed, and antidepressants do not completely ameliorate all damaged brain regions. A previous meta-analysis reported persistent brain dysfunction in patients with remitted MDD [13]; however, no meta-analysis of volumetric changes in these patients has been reported. To better understand the structural abnormalities in remitted MDD, we performed a meta-analysis of differences in GMV between patients with remitted MDD and HCs using the anisotropic effect size version of signed differential mapping (AES-SDM). SDM can incorporate negative results and has been used in several neuroimaging meta-analyses [15,19]. Based on the results of the meta-analysis, we created a diagram of residual symptoms linked with brain morphological abnormalities in patients with remitted MDD and possible interventions, to select appropriate treatment modalities according to neurobiological evidence and prevent depression recurrence in the future. Meta-Analysis Selection The exclusion criteria were studies (1) wherein coordinates were not clearly reported; (2) that did not use VBM; (3) wherein patients had a history of alcohol or substance abuse, head trauma, or major physical or neurological illness; and (4) wherein patients had a comorbidity of schizophrenia, bipolar disorders, other major psychoses, obsessive-compulsive spectrum disorders, posttraumatic stress disorder, or cluster B personality disorders.
Data Extraction for Systematic Review. The first author's name, year of publication, details of study design, patient characteristics (including gender, age, age of onset, illness duration, number of episodes, comorbidity with anxiety disorder, and disease severity), sample size, magnetic resonance imaging (MRI) sequence acquisition parameters, and changes in VBM were collected from the eligible studies. From each included study, we selected the reported peak coordinates of GMV differences and the t-value threshold that were statistically significant at the whole-brain level. The number of patients in the longitudinal studies was the number of patients with MDD who completed the second scan, and age, age at onset, illness duration, number of episodes, and comorbidity rate with anxiety were considered baseline data. SDM Analysis. A meta-analysis of regional GMV abnormalities was conducted using AES-SDM (https://www.sdmproject.com/). AES-SDM performs voxel-wise random-effects meta-analyses by reconstructing whole-brain effect size and variance maps that combine the original statistical parametric maps and peak coordinates from both positive and negative results [19]. Including negative effects could reduce the risk of a particular voxel showing opposite effects. First, pooled analyses were conducted to investigate regional GMV differences within the remitted MDD group compared to the HC group. To obtain more accurate results, we defined the p-value threshold in the AES-SDM analysis as <0.05 and only discussed brain regions with more than 10 voxels. Whole-brain jack-knife sensitivity analyses were conducted to investigate the overlap between significant areas of heterogeneity and areas of grey matter differences. Separate simple metaregressions were performed using available potential confounders provided in a sufficient proportion of the included studies. Figures were prepared using MRIcron and BrainNetViewer. Literature Search and Sample Characteristics. A flow diagram of the study selection process is shown in Figure 1. As shown in Table 1, the meta-analysis of the cross-sectional studies included 11 whole-brain VBM studies [17,18,[22][23][24][25][26][27][28][29][30], among which negative results were also obtained. Two of these studies examined patients after electroconvulsive therapy (ECT) [18,30], and the others after antidepressant treatment. There were 11 datasets that investigated 275 patients with remitted MDD (182 women; mean age 43.57 years) versus 437 HCs (281 women; mean age 41.51 years). As shown in Table 2, the meta-analysis of longitudinal studies included 7 whole-brain VBM datasets [23][24][25][26][30][31][32] that investigated 167 patients with remitted MDD (97 women; mean age 42.97 years). Of these, two had a follow-up period of more than two years, and five had a follow-up period of ≤8 weeks. Metaregression analysis did not find significant correlations between MDD-related GMV changes and age, age of onset, female percentage, number of depressive episodes, comorbidity rate with anxiety, severity of depressive symptoms, or duration of illness.
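AES-SDM performs the jack-knife sensitivity analysis described above at the voxel level inside the SDM software. As a rough illustration of the underlying leave-one-study-out logic, the sketch below recomputes a pooled effect while discarding each study in turn and checks whether the finding survives every repetition. The effect sizes, variances, pooling model, and threshold are hypothetical placeholders, not data or settings from this meta-analysis.

```python
import numpy as np

# Hypothetical per-study effect sizes (e.g., a GMV difference in one region) and variances.
effects = np.array([0.42, 0.35, 0.51, 0.28, 0.46, 0.39, 0.33, 0.44, 0.30, 0.41, 0.37])
variances = np.array([0.04, 0.05, 0.06, 0.03, 0.05, 0.04, 0.06, 0.05, 0.04, 0.05, 0.06])

def pooled_effect(eff, var):
    """Fixed-effect inverse-variance pooling (a simplification of SDM's random-effects model)."""
    w = 1.0 / var
    return np.sum(w * eff) / np.sum(w)

full = pooled_effect(effects, variances)
print(f"Pooled effect with all studies: {full:.3f}")

# Leave-one-study-out: the result is considered robust if it keeps its sign and passes
# the threshold in every repetition, mirroring "replicable in all 11 analyses".
robust = True
for i in range(len(effects)):
    keep = np.arange(len(effects)) != i
    loo = pooled_effect(effects[keep], variances[keep])
    robust &= bool(loo > 0.2)           # hypothetical significance threshold
    print(f"Excluding study {i + 1}: pooled effect = {loo:.3f}")

print("Finding robust in all jack-knife repetitions" if robust else "Finding not fully replicable")
```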
Jack-Knife Sensitivity Analyses and Publication Bias Analysis. Whole-brain jack-knife sensitivity analyses of the cross-sectional studies revealed an increased GMV in the right striatum in all 11 analyses (Table S2 in the supplement). Increased GMV in the bilateral ACC and decreased GMV in the left IPG and right SPG were observed in 10 analyses. Other abnormal brain areas remained replicable and were found in at least 9 analyses. In the subgroup analysis, decreased GMV in the right SPG and increased GMV in the right striatum were observed in all 9 analyses (Table S3 in the supplement). Other abnormal brain areas remained replicable and were found in at least 9 analyses. Whole-brain jack-knife sensitivity analyses of the longitudinal studies revealed an increased GMV in the bilateral MCC and left striatum in 6 analyses (Table S4 in the supplement). Decreased GMV in the bilateral gyrus rectus and left SPG was observed in 6 analyses. Other abnormal brain areas remained replicable and were found in at least 5 analyses (Table S4 in the supplement). Publication bias of the cross-sectional studies is shown in Figure S2 and Figure S3. Discussion To the best of our knowledge, the current meta-analysis is the first to identify GMV abnormalities in patients with remitted MDD compared with HCs. We found that patients with remitted MDD had decreased GMV in the left insula, IPG, and right SPG compared with that in HCs. Additionally, the GMV in the bilateral gyrus rectus in patients with remitted MDD was lower at follow-up than at baseline. In contrast, the GMV increased in the bilateral ACC and MCC, right striatum, MTG, and STG in patients with remitted MDD compared with that in HCs. Moreover, patients with remitted MDD had a larger GMV in the bilateral MCC, left striatum, putamen, amygdala, hippocampus, and parahippocampal gyrus at follow-up than at baseline. Among the studies included in this meta-analysis, patients with remitted MDD received only ECT and medications. However, these therapeutic approaches do not fully cover all lesions or pathophysiological pathways. Furthermore, we proposed a schematic diagram of targeted intervention approaches for residual symptoms, according to the brain morphological abnormalities in patients with remitted MDD after ECT and medical therapy. Decreased GMV in the Neocortex in Patients with Remitted MDD. Our results showed that patients with remitted MDD had decreased GMV in the left insula, IPG, and right SPG compared with that in HCs. The left insula, IPG, and right SPG all belong to the neocortex, which processes complex cognition [33][34][35]. Previous meta-analyses have also found extensive cortical GMV reduction in the insular, prefrontal, and parietal regions in patients with MDD [15]. Micropathological changes that contribute to reduced GMV in patients with MDD include atrophy of neurons, dendritic cells, and glial cells [36], while the molecular mechanisms involve increased expression of inflammatory mediators, mitochondrial dysfunction, and decreased levels of brain-derived neurotrophic factor (BDNF) [37,38]. Structural MRI studies have shown that a high level of peripheral inflammatory markers correlates with reduced GMV in the neocortex of patients with MDD [39,40]. Both oxidative stress and reduced ATP production contribute to impaired nerve cell regeneration, mitochondrial dysfunction, and changes in neuroplasticity [41,42].
Both clinical and animal experiments have shown that decreased GMV in patients with MDD improves with symptom remission. A longitudinal study found that whole-brain cortical thickness increased in patients with MDD after remission compared to that before treatment [10]. Animal studies have shown that depression-like symptoms in rats are relieved by an increase in the number of frontal cortex cells and an extension of dendritic length [43], suggesting that dendritic cell atrophy could be reversible. Animal studies have also shown that selective serotonin reuptake inhibitors (SSRIs), such as fluoxetine, can inhibit apoptotic kinase activation, increase BDNF levels, and reverse astrocytic atrophy [44], suggesting that the decrease in GMV caused by astrocyte atrophy is reversible. In contrast, previous postmortem studies have reported that patients with MDD who died from suicide showed reduced pyramidal neuron density in the neocortex, implying apoptosis of pyramidal neurons [45]. Neurogenesis in adults occurs in the hippocampus but not in the neocortex [46,47]. In other words, GMV reduction caused by the atrophy of dendritic cells and astrocytes is reversible, but that caused by the apoptosis of neocortical pyramidal neurons is not. GMV in the Left Insula in Patients with Remitted MDD. This study demonstrated that the GMV in the left insula was lower in patients with remitted MDD than in HCs. This is consistent with the results of a previous study that reported that the cortical thickness of the insula is significantly thinner in remitted MDD patients than in HCs [48]. Previous studies have found that GMV reduction in the left insula correlates with depression severity and illness duration in patients with MDD [49,50]. Longitudinal studies have found that the cortical thickness of the insula in patients was significantly greater in the remission group than before treatment [10] and than in the nonremission group [51] after antidepressant treatment. After ECT, patients with remitted MDD also showed increased cortical thickness and surface area in the left insula compared with those before treatment [52]. This neurobiological evidence suggests that the GMV in the insula of patients with remitted MDD remains lower than that in HCs, even if medications and ECT can ameliorate the loss of GMV caused by depressive episodes.
Patients with remitted MDD have persistent negative attention bias and bradykinesia, which are linked to dysfunction of the insula [53]. The insula is involved in self-perception and self-worth judgments [54], and its impairment can lead to self-perception disorders such as negative self-evaluation [55], guilt and shame/embarrassment [56,57], borderline personality disorder [58], and suicide [59], which are risk factors for depression recurrence [60,61]. A previous study reported that cognitive behavioral therapy could weaken insular activity during emotion perception [62]. A meta-analysis showed that insula activation in patients with MDD was significantly reduced after psychotherapy [63], suggesting that psychotherapy could be effective in improving the residual self-cognitive impairment of depression in remission. GMV in the Frontal-Parietal Attention Network in Patients with Remitted MDD. Compared with HCs, patients with remitted MDD showed reduced GMV in the left IPG and right SPG. This result is consistent with previous multicenter research that showed reduced cortical thickness in the parietal lobe in patients with current MDD [64]. The left IPG and right SPG belong to the frontoparietal attention network [65], which is involved in selective attention [66,67] and executive functions [68,69]. Reduced activation in the bilateral IFG in patients with remitted MDD is linked to impairment of cognitive reappraisal ability for attention goals [70]. Abnormal functional activity in the frontoparietal network is correlated with executive dysfunction and working memory deficits in patients with remitted MDD [71][72][73]. The bilateral dorsolateral prefrontal cortex (DLPFC) is a key node of the frontoparietal attention network [74]. Transcranial magnetic stimulation (TMS) of the DLPFC can regulate brain activity in the parietal nodes of the frontoparietal attention network through interactions between brain networks [75,76] and has been recommended as a treatment for MDD [77]. Previous studies have shown that repetitive TMS (rTMS) could improve attention [78], working memory [79], executive function [80], and cognitive control [81] in patients with MDD. Additionally, functional activity in the parietal cortex is useful for predicting TMS outcomes in patients [82]. Thus, patients with remitted MDD require rTMS therapy if they have residual attention and executive dysfunction.
GMV of the Gyrus Rectus in Patients with Remitted MDD Was Lower than Those without Remission. Our study suggested that GMV in the gyrus rectus was reduced from baseline to follow-up with SSRI treatment, suggesting that 5-HT-ergic drugs were ineffective for the gyrus rectus. As part of the anterior cingulate gyrus extending into the frontal lobe, the gyrus rectus receives projections from the hypothalamus and brain stem and is involved in sensory integration [83]. The expression of the 5-HT(2A) receptor in the gyrus rectus decreased with increasing age starting at 20 years [84]. Therefore, the nonresponse of the gyrus rectus to SSRIs may be due to an insufficient distribution of 5-HT(2A) receptors. Compared to that in HCs, reduced GMV in the gyrus rectus has been reported in patients with MDD, schizophrenia, and bipolar disorder [85,86], suggesting that the decreased GMV may have a more complex pathological basis. Research using deep brain stimulation revealed that the gyrus rectus is an effective target for treatment-resistant depression [87]. Neuroregulatory therapies (TMS, DBS) can target cortical sites [77,88,89], whereas SSRIs mainly act on subcortical structures along the distribution of 5-HT-ergic receptors [3]. However, both have their own advantages. If a patient shows signs of neocortical impairment, future treatment should consider neuroregulatory therapy such as TMS as an equally important first-line treatment along with drugs, rather than as a second-line treatment only for treatment-resistant depression. Larger Reward Circuit GMV in Patients with Remitted MDD than in HCs and at Baseline. Our meta-analysis showed that patients with remitted MDD had greater GMV in the striatum than HCs and than at baseline. However, reduced striatal GMV in medication-naïve patients with first-episode MDD was found in previous meta-analyses [14,16]. Clinical studies have also found an increase in striatal GMV in patients with MDD after ECT [31]. Machine learning studies have demonstrated that morphological changes in the ventral striatum predict improvement with serotonin and norepinephrine reuptake inhibitors [90]. Therefore, elevated GMV in the striatum may indicate relief of depressive symptoms in patients with MDD. Moreover, the meta-analysis of cross-sectional studies found that patients with remitted MDD had greater GMV in the bilateral ACC and MCC than HCs, and the meta-analysis of longitudinal studies found that patients with remitted MDD had greater GMV in the bilateral MCC than before remission. Notably, increased cortical thickness in the ACC has also been observed in unmedicated patients with first-episode MDD [91]. One possible explanation for this phenomenon is that GMV increases due to activated microglia and reactive astrogliosis [92,93] when the activated immune response system (IRS) and compensatory IRS (CIRS) pathways are elevated during the acute phase of depressive episodes [94,95]. However, the sensitive IRS and CIRS responses did not return to homeostasis after the remission of MDD [96]. Thus, increased GMV of the ACC in patients with remitted MDD may indicate an active compensatory immune response.
The increase in GMV in the reward circuit does not equate to complete recovery of function. Both the striatum and ACC are core nodes of the reward circuit [97] and participate in the encoding and processing of reward anticipation information. Damage to the reward circuit contributes to anhedonia [98], which is a core symptom of major depressive disorder. The response of the corticostriatal network is still lower in patients with remitted MDD when they perform a reward expectation prediction task [6]. When individuals in remission from depression perform a reward prediction error task, the functional activity of the bilateral striatum is lower than that of the HC group [99], which also indicates that individuals in remission from depression still have a negative cognitive model of reward expectation despite the improvement of depressive mood. Compared to HCs, the functional activation of the striatum was increased in patients with remitted MDD during the execution of stressful tasks [100], implying that patients with remitted MDD need to expend more energy to solve stress problems and have worse stress resistance. Dysfunction of the reward circuits is highly correlated with inflammation [101]. The low response of the ventral striatum to a reward-anticipation task is related to a high level of peripheral blood leukocyte reactivity in patients with depression [102]. Functional connectivity between the striatum and ventromedial prefrontal cortex (vmPFC) mediates the relationship between peripheral C-reactive protein (CRP) levels and anhedonia [103]. However, clinical trials have shown that anhedonia is negatively correlated with the strength of functional connectivity between the striatum and vmPFC only in patients with CRP levels > 2 mg/L [98], suggesting that examination of inflammation-related indicators in patients is necessary to distinguish depression subtypes and choose the appropriate treatment [104]. Although there is a lack of neuroimaging evidence for the effects of anti-inflammatory drugs on MDD, clinical studies have shown that anti-inflammatory drugs, such as nonsteroidal anti-inflammatory drugs, omega-3 fatty acids, and statins, are effective in improving depressive symptoms [105]. GMV Abnormalities in the Right Temporal Lobe in Patients with Remitted MDD. The pooled meta-analysis revealed that patients with remitted MDD showed increased GMV in the right MTG and STG extending to the temporal pole compared with that in HCs, but the results of the subgroup analysis did not show these abnormalities after excluding the ECT studies. A previous study reported that both patients with current and remitted MDD showed decreased GMV in the right STG relative to that in HCs and that the GMV of the right STG was associated with the severity of depressive symptoms [106]. Another study found that the thickness of the temporal pole and insular cortex increased after ECT compared to that before ECT [107], suggesting that the increase in GMV in the temporal lobe might be an effect of ECT. Additionally, a meta-analysis also reported increased GMV in the right medial temporal lobe, the amygdala, and the hippocampus in patients with multiple mental disorders after ECT [108,109].
GMV in the right STG was positively correlated with rumination in patients with MDD [110]. Additionally, trauma in patients with MDD is negatively correlated with the cortical thickness of the left MTG [111] and bilateral MTG activity [112]. Brain activity in the right MTG was also reportedly reduced in patients with remitted MDD [113]. Moreover, behavioral experiments have confirmed that rumination is positively correlated with auditory hallucination [114], and neuroimaging studies have revealed that rumination and auditory hallucinations are both correlated with structural and functional abnormalities of the STG [115,116]. This suggests that improving the local neural activity of the STG may ameliorate rumination. Given that low-frequency temporoparietal junction (TPJ) TMS has been successfully used to treat auditory hallucinations [117], we hypothesized that TPJ-TMS would be equally effective against rumination, even though current neuromodulation studies have all reported the left DLPFC as the stimulation target for rumination [118,119]. This meta-analysis of longitudinal studies revealed that GMV increased in the hippocampus, parahippocampal gyrus, and amygdala in patients with remitted MDD compared with baseline, whereas the meta-analysis of cross-sectional studies showed decreased GMV in the hippocampus, parahippocampal gyrus, and amygdala in patients with remitted MDD compared to HCs, suggesting that patients did not fully return to normal despite treatment response. The amygdala is involved in emotional responses and is easily overactivated by negative emotional stimuli in the depressed state, corresponding to negative emotional sensitivity in patients with MDD [120]. SSRI treatment normalizes the overactivation of the amygdala to negative stimuli [62,121]. The hippocampus is densely innervated by 5-hydroxytryptaminergic fibers [122], and neurogenic dysregulation of the dentate gyrus in the hippocampus occurs in MDD [123]. Animal studies have shown that SSRIs not only promote cell proliferation and differentiation in the hippocampus [124] but also affect gamma-aminobutyric acid and glutamatergic neurotransmission [125]. Therefore, increased GMV in the hippocampus, parahippocampal gyrus, and amygdala from baseline to remission is an imaging feature of improvement in depression. Schematic Diagram of Brain Morphological Abnormalities Linked with Residual Symptoms in Patients with Remitted MDD. Figure 4 shows a schematic diagram of possible treatments for residual symptoms of depression. Treatment for residual symptoms of depression mostly involves psychotherapy [126,127], while clinical guidelines recommend rTMS for treatment-resistant depression [77]. Considering the presence of GMV abnormalities in patients with remitted MDD, rTMS has potential therapeutic value for residual symptoms of depression. Here, we list the potential interventions for residual symptoms, including executive dysfunction, rumination, negative self-perception, and negative reward anticipation [4,6,8,99], which are located in the key hubs of related neural networks, according to GMV abnormalities in patients with remitted MDD. Executive dysfunction is associated with dysfunction of the frontal-parietal network [74], and TMS of the DLPFC can improve executive dysfunction [76,80]. rTMS of the precuneus has been performed to improve working memory [128], suggesting that the parietal nodes of the frontoparietal network may also be a novel target for improving executive function.
Rumination is correlated with impairment of the temporal lobe in patients with depression, and low-frequency rTMS of the TPJ is a potential treatment for rumination [117,118]. Negative self-perceptions, such as self-blame and self-guilt, are linked to insular impairment, and insular dysfunction in patients is significantly improved after psychotherapy [63]. Thus, psychotherapy for negative self-cognition is still needed, even after the remission of depression. Negative reward expectation is associated with impairment of the reward circuit in patients with remitted MDD [99,100]. The efficacy of immunomodulation in depressive disorders has been confirmed in clinical trials, despite a lack of direct neuroimaging evidence [105,129]. 4.5. Limitations. This study has some limitations. First, the meta-analysis of cross-sectional studies included only 11 datasets, and the meta-analysis of longitudinal studies included only 7 datasets. To reduce the heterogeneity of the data analysis methods, we only extracted the results of VBM studies, which may have limited the number of included studies. A previous meta-analysis of the fMRI features of remitted MDD included 18 datasets [13]; the number of fMRI studies is higher than that of VBM studies. A possible reason is that brain structural changes are not obvious after a short duration of treatment. The minimum duration of depression remission included in this review was 4 months [25], which was different from that in most functional imaging studies (1-2 months) [13]. In addition, there was a lack of information on the duration of remission in the included studies, and a longitudinal comparison from the acute phase to symptom remission was added in the meta-analyses of longitudinal studies, in which only Lemke et al. included patients with lower mean HDRS scores at baseline. However, patients with acute-phase episodes remained in the baseline group. Moreover, because a meta-analysis of GMV in patients with current MDD has been conducted previously [14][15][16], this was not performed in the present study. This meta-analysis included negative results: no significant GMV abnormalities between patients with remitted MDD and HCs were reported in two studies that recruited patients with illness duration lasting more than 10 years [22,29]. Thus, the influence of illness duration cannot be ignored, even though the regression analysis was inconclusive. Notably, one study reported a significant change in grey matter density but not in GMV in patients with remitted MDD [22], suggesting that relying on only one indicator is limited. Although the search strategy we used did not limit the treatment modalities, only medication and ECT were found in the included studies of GMV changes in this population. Two researchers independently searched for neuroimaging studies associated with depression remission after psychotherapy but only found fMRI studies, which reported that improvements in depressive symptoms after psychotherapy were associated with changes in the functional activity of the prefrontal and limbic cortices [63,130,131]. Although the results of this study were dominated by drug- and ECT-induced changes in GMV, we also discussed the therapeutic potential of psychotherapy for residual symptoms of depression.
Additionally, this study only discussed residual symptoms that may be associated with the abnormal brain areas found in the current meta-analysis and did not review all residual symptoms. Some residual symptoms involving extracerebral systems, such as tension involving overactivity of the hypothalamic-pituitary-adrenal axis, should be the focus of future work [132]. Finally, owing to the inherent limitations of meta-analysis, the possibility of publication bias cannot be completely ruled out, despite our best efforts to search for more original and appropriate literature, including studies with negative outcomes. Conclusion Overall, this meta-analysis demonstrated that patients with remitted MDD exhibited reduced GMV in the insula and frontal-parietal network and increased GMV in the reward circuit and temporal lobe after receiving medications and undergoing ECT. Our findings provide new insights into targeted treatment for residual symptoms based on neurobiological evidence, combined with anti-inflammatory medication, TMS, psychotherapy, and other treatment modalities. Criteria. Two researchers independently screened the studies according to the inclusion criteria. If there was any inconsistency in study selection, a consensus was reached through discussion. The inclusion criteria were as follows: (1) studies investigating adults diagnosed with MDD according to the Diagnostic and Statistical Manual of Mental Disorders or International Classification of Diseases, Tenth Revision criteria; (2) cross-sectional studies comparing patients with remitted MDD with HCs, and longitudinal studies comparing baseline and follow-up data of patients with remitted MDD; (3) grey matter research using whole-brain voxel-based morphometry (VBM); (4) studies with patients aged 18-70 years; (5) studies reporting coordinates in a standard space such as the Talairach space or the Montreal Neurological Institute (MNI) space. Figure 1: Flow chart of literature search and selection in the meta-analysis. Abbreviations: HCs: healthy controls; MDD: major depressive disorder; n: number. Figure 2: Meta-analysis results of GMV differences in patients with remitted MDD compared with HCs. The areas of decreased GMV compared with HCs are displayed in blue, and the areas of increased GMV are displayed in red. Abbreviations: ACC: anterior cingulate cortex; AMY: amygdala; GMV: grey matter volumes; HCs: healthy controls; IFG: inferior frontal gyrus; INS: insula; IPG: inferior parietal gyri; L: left; MCC: median cingulate cortex; MDD: major depressive disorder; MTG: middle temporal gyrus; R: right; SPG: superior parietal gyrus; STG: superior temporal gyrus; STR: striatum. Figure 3: GMV changes in patients with remitted MDD in longitudinal studies. The areas of lower GMV in patients with remitted MDD than in the nonremission state are displayed in blue, and the areas of increased GMV are displayed in red. Abbreviations: AMY: amygdala; GMV: grey matter volumes; GR: gyrus rectus; HCs: healthy controls; HP: hippocampus; L: left; MCC: median cingulate cortex; MDD: major depressive disorder; PHP: parahippocampal gyrus; PUT: putamen; R: right; STR: striatum. Table 1: Sample characteristics and MRI procedures of included cross-sectional studies. Table 2: Sample characteristics and MRI procedures of included longitudinal studies. Table 3: Regional differences in GMV in patients with remitted MDD in the meta-analysis of cross-sectional and longitudinal studies.
2024-05-19T15:57:19.079Z
2024-05-15T00:00:00.000
{ "year": 2024, "sha1": "30d48d7ef81fe10754cff75818abfdf68d76b771", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/da/2024/6633510.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "47817d433bf2873989d58a830933ae4b388339e1", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [] }
26200552
pes2o/s2orc
v3-fos-license
Recurrent Violent Behavior: Revised Classification and Implications for Global Psychiatry INTRODUCTION Interpersonal violent acts are a worldwide problem. Deterrence of violence has been, and remains, a complicated and often unsuccessful enterprise, given that violence is multi-factorial and occurs in a variety of contexts (e.g., domestic violence; serial killing; mass slaying; terrorism; collective violence), and on a global scale within diverse socio-cultural populations (1). Given this broad impact, there have been, and continue to be, public and political calls, if not demands, to employ research findings to both better identify and mitigate or prevent causes and occurrences of violent behavior (2)(3)(4). Current research and interventions in violent behavior are mainly focused upon four domains: (a) attention to the trauma and needs of victims (5), (b) recidivism of violent crime (6), (c) violence as an expressed trait of certain neuropsychiatric disorders (7), and (d) seeking predictive variables (8). Previously, we have suggested that recurrent violent behavior (RVB) should be considered as a psychiatric classifier that can be employed to initiate further medical and public safety interventions within an interdisciplinary approach, as we acknowledge that RVB is often multi-etiologic and involves biological as well as psychosocial factors (9). In this light, we have explored the viability of using RVB in DSM and/or ICD frameworks, which could be applicable in developed as well as developing and non-developed countries as a component of the proposed global mental health (GMH) plan (10).
We believe that this could yield novel opportunities for studying chronic violence as an international mental health issue. However, we also note the potentially controversial nature of such classification, and acknowledge the need to prudently address validity, viability, benefits, risks, and harms that could be incurred by using neuroscience and neurotechnology to assess and intervene against recurrent violence (11). In such regard, we support that when evaluating the relative normality of a trait or characteristic, intensity and/or frequency must be considered (12,13). Intensity of the violent act is mainly represented by the level of harm, and such acts generally incur response by legal authorities. However, "non-visible" harm, often viewed as "less intense" conduct (e.g., certain forms of psychological/domestic violence), frequently does not prompt legal or even medical intervention because "…medical professionals do not perceive that it is part of their role to investigate suspected domestic/psychological violence…" (14)(15)(16). Thus, we posit that frequency (i.e., recurrence) should be viewed as a hallmark feature of a harmful pattern of actions that is interpersonally disruptive and deviant from prescribed social norms and broadly expected behavioral control (9). Given this definition, we assert that the paucity or absence of medical/psychiatric care of persons with RVB jeopardizes mental health and public safety, and undermines the ethical probity of medicine as an individual and public good. So, if typified as a psychiatric classifier, RVB (as a behaviorally overt trait) could be observed, depicted, assessed, diagnosed, and medically engaged, regardless of intensity and/or etiology. GMH AND HUMAN RIGHTS: RE-CLASSIFYING RVB AS IMPORTANT TO A COMBINED AGENDA Our proposed aim of instating RVB as a psychiatric classifier is to leverage medical and psychosocial interventions in order to improve both mental health care and public safety, consistent with the stated directives of Article 25 of the United Nations Universal Declaration of Human Rights (17) and the World Health Organization (WHO) Mental Health Action Plan 2013-2020 (18). At present, interpersonal RVB is regarded as an expressed trait of certain neuropsychiatric disorders (e.g., conduct disorder, psychosis, personality disorders, brain tumor, etc.) (12,13), but we feel that this diagnostic classification is limited and constrains (1) access to psychiatric care, (2) the mental health of the recurrently violent individual, and (3) efforts in and provision of public safety. However, we acknowledge that simply instituting RVB as a psychiatric classifier does not necessarily guarantee proper, if any, medical assessment or intervention, and this may be especially true in developing and undeveloped countries. At present, 90% of global neuropsychiatric research is performed in 10% of the mental health population of westernized, educated, industrialized, rich, and democratic (WEIRD) countries (19), while 85% of the global population lives in low- and middle-income (LAMI) countries (20). This discrepancy in research populations misrepresents global psychiatric epidemiology and creates a significant gap in knowledge of mental health, and in the viability, delivery, and effectiveness of assessment and care of mental disorders.
The GMH initiative specifically seeks to "…improve services for people living with mental health problems and psychosocial disabilities worldwide, especially in low- and middle-income countries… based on scientific evidence and human rights… not only for treatment, but for prevention and promotion of mental well being" (10). We believe that for RVB to be included in the GMH project as a psychiatric classifier, international research efforts will be required to adapt current and newly developing assessment and treatment tools to particular socio-cultural ecologies. To do so will necessitate review and revision of the mental health resources that are available within, and for, different (developed, developing, and non-developed) countries. This will require collaboration in global research and clinical efforts. As well, the identified global burden of RVB, which undergirds its prioritization as both a mental health and public safety issue, further necessitates detailed consideration of which (and what extent of) resources should be allocated and dedicated to evaluation and intervention. This is particularly true if emerging neurotechnological tools and systems of medical surveillance are to be employed. Here, a concern is that WEIRD and more progressively developing countries may exploit economic advantages and relationships to foster greater and more sustainable efforts in such pursuits. In this scenario, it is possible, if not probable, that the public health and safety burdens incurred by the occurrence, prevalence, and effects of RVB would become increasingly manifest in LAMI countries. This would induce evident mental health, social, economic, and political instability, which would only exacerbate extant discrepancies in GMH. Therefore, we propose that a subsequent, logical step would be to incorporate a definition, and recommendations for resources and services necessary for research, assessment, and intervention of RVB as a psychiatric classifier, within the scope and activities of the GMH project. ETHICAL ISSUES RELEVANT TO GMH We have already sought to identify basic ethico-legal and social issues (ELSI) generated by establishing RVB as a psychiatric classifier (9). However, if positioned within the larger agenda of the GMH project, additional ELSI can, and likely will, arise. For example, cultural neurocognitive diversity may render certain forms of violent behavior(s) representative of longstanding and generally tolerated cultural actions. While respecting socio-cultural variability, we posit that, like many other mental health conditions, diagnostic criteria for RVB could be validated and legitimized across cultures even in light of socio-cultural variance. Thus, RVB may represent a widely accepted abnorm when: (1) considered to be maladaptive or threatening to the wellbeing of others within cultural standards of ethics and/or law, and (2) viewed relative to the WHO identification of violence as an urgent public health concern worthy of United Nations International Children's Emergency Fund (UNICEF) and United Nations Office on Drugs and Crime (UNODC) efforts in individual and public safety (1,12,13,21,22). In this way, if considered a GMH issue, RVB could be formally leveraged so as to facilitate psychiatric assessment and intervention to mitigate, if not prevent, further and increasing occurrence. However, this may also pose a risk of additionally medicalizing, if not pathologizing, social behavior(s).
There has been, and continues to be, debate about the utilization of psychiatry as a tool that can be employed to advance public or political interests in establishing criteria for cognitive, emotional, and behavioral normality and abnormality (23)(24)(25). Concepts such as psychopolitics or political psychiatry have emerged in an attempt to describe the use of psychiatric knowledge, assessment, and interventions for public policy, prosecution, and "rehabilitation" of individuals who hold certain beliefs and/or effect particular actions (26). Clearly, this has potential for significant abuse and harm, and the use of diagnostic labels as substantiation for individuals' behaviors and/or to justify legal action has been discussed elsewhere (9,27). Hence, caution must be taken when formulating specific criteria for the definition and use of RVB in particular (ethno)psychiatric contexts within a GMH agenda. Calls for an ethically sound GMH are noteworthy and of value (19), but there is much work to be done if this is to be enacted in practice, and such efforts will require both dedicated resources and the fiscal support to sustain them. CONCLUSION Establishing RVB as a psychiatric classifier could be a viable and valuable approach toward enabling more accurate assessment and intervention, which could afford both medical and social benefit(s). Moreover, instantiating RVB as a classifier within the ICD framework axiomatically prompts consideration of those ways that this new classification could, and should, be leveraged within a GMH agenda. Inclusion within the GMH project could improve and enable research, availability, access, and quality of resources and services for assessing and treating RVB. Yet, attempting to medically approach a potentially harmful human conduct poses questions about meanings of normality and socio-cultural variance, and can foster misuse of diagnoses for legal and/or political ends. As well, discrepant economics of WEIRD and LAMI countries may incur issues of unequal and/or inequitable availability and affordability of mental health services, and non-sustainability of care and social benefit. Therefore, we believe that global efforts to engage this approach, and others, to proactively foster mental health and social safety should be an opportunity for collaborative dialogues between developed, developing, and non-developed countries, and should reflect ongoing consideration and articulation of ELSI fostered in and by such efforts. Thus, a more comprehensive discourse, involving internationally representative participants, is needed to most effectively foster and advance such initiatives. AUTHOR CONTRIBUTIONS KH-F provided substantial contributions to the conception, design, drafting of the work, and revising it critically for important and significant intellectual content, as well as final approval of the version to be published. She has agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. JG provided substantial contributions to the conception, design, drafting of the work, and revising it critically for important and significant intellectual content, as well as final approval of the version to be published. He has agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
2017-08-28T05:27:15.790Z
2017-08-28T00:00:00.000
{ "year": 2017, "sha1": "e2d2fa2bf611b9f1a7a9814da81d3c21c89f9c58", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2017.00151/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e2d2fa2bf611b9f1a7a9814da81d3c21c89f9c58", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
229348893
pes2o/s2orc
v3-fos-license
Robotic Process Automation -- A Systematic Literature Review and Assessment Framework Robotic Process Automation (RPA) is the automation of rule-based routine processes to increase efficiency and to reduce costs. Due to the utmost importance of process automation in industry, RPA attracts increasing attention in the scientific field as well. This paper presents the state-of-the-art in the RPA field by means of a Systematic Literature Review (SLR). In this SLR, 63 publications are identified, categorised, and analysed along well-defined research questions. From the SLR findings, moreover, a framework for systematically analysing, assessing, and comparing existing as well as upcoming RPA works is derived. The discovered thematic clusters suggest further investigations in order to develop an even more detailed structural research approach for RPA. Introduction In our continuously changing world, it is indispensable that business processes are highly adaptive (Reichert and Weber 2012) and become more efficient and cost-effective (Lohrmann and Reichert 2016). As a consequence, companies demand an increasing degree of process automation to stay competitive in their markets. In this context, the use of software robots (bots for short) mimicking human interaction, also denoted as Robotic Process Automation (RPA), constitutes a 'highly promising approach' (Cewe, Koch, and Mertens 2017), and more and more companies rely on this cutting-edge technology (Asatiani and Penttinen 2016) to optimise and implement their internal business processes. Problem Statement RPA constitutes an emerging technology raising high expectations in industry (Auth and Bensberg 2019). For companies, however, it is still difficult to grasp the fundamental concepts of RPA, to understand the differences in comparison to other methods and technologies (e.g., Business Process Management, BPM), and to estimate the effects the introduction of RPA will have on the company and its employees. The question arises whether it is worthwhile to adapt it. Therefore, in a third step, we want to systematically understand RPA effects on humans and their work life as well as on the companies implementing RPA projects. This results in our third research question: RQ 3: What are RPA effects? In a fourth step, we investigate how far research has taken up RPA. Particularly, we are interested in methods that aim to foster RPA implementation. This leads to our fourth research question: RQ 4: Are there methods for improving the implementation of RPA projects? Finally, the growing importance of AI in many areas raises the question to what degree AI plays a role in connection with intelligent process automation. The fifth research question addresses the topic of combining AI with RPA: RQ 5: Is AI used in combination with RPA? Formulation of the Search String We elaborate the search string iteratively based on our knowledge of the topic, the pre-specified research questions, and pilot searches. The search string is refined to retrieve a maximum number of different publications. The pilot searches are inspected to ensure that all relevant publications are found. The final search string for the SLR is as follows: 'robotic process automation' OR 'intelligent process automation' OR 'tools process automation' OR 'artificial intelligence in business process' OR 'machine learning in business process' OR 'cognitive process automation'. Note that the abbreviation 'RPA' is not included, as the search then would yield around 31,000 results.
RPA not only serves as an acronym for Robotic Process Automation, but also for Recombinase Polymerase Amplification in the field of DNA chemistry, among others. Though we omit the acronym RPA, all relevant publications are still included in the results. Identification of Data Sources We apply the search string to different data sources to find relevant publications. Five electronic libraries are identified as relevant for conducting the SLR as they cover scientific publications in Computer Science: ACM Digital Library, Science Direct (Elsevier), IEEE Xplore Digital Library, SpringerLink, and Google Scholar. Additionally, we consider literature cited by the retrieved publications by performing a backward reference search (Jalali and Wohlin 2012). Finally, Google Scholar alerts are analysed during the SLR procedure and the writing process to get notified about newly emerging publications on the topic. Definition of Inclusion and Exclusion Criteria To identify relevant publications, we define the following inclusion and exclusion criteria. Inclusion Criteria: 1.) The publication deals with the topic of RPA and contributes answers to at least one of the research questions. 2.) The title and the abstract seem to contribute to our research questions and contain terms such as robotic/intelligent/cognitive process automation, virtual assistant, process intelligence, business process model automation, intelligent business process management, or software bot. Exclusion Criteria: 1.) The publication is not written in English. 2.) The title and abstract do not seem to contribute to our research questions and contain words such as business process management, business intelligence, analytics, multi-agent system, big data, or process mining. 3.) The publication is a patent, master thesis, or web page. 4.) The publication is not electronically accessible without payment. 5.) All relevant aspects of the publication are included in another publication. 6.) The publication only compares existing research and has no new input. A publication is included if both inclusion criteria are met, and it is excluded if any of the exclusion criteria is fulfilled. Elaboration of Quality Assessment Questions RPA is a relatively new research area (cf. Figure 1 in Section 3). The topic is mostly driven by industry. Thus, applying rigid quality assessment questions would probably exclude relevant publications. Therefore, we decide against the introduction of additional quality criteria. Selection of Publications The search string (cf. Section 2.2) is applied to the identified data sources (cf. Section 2.3), which yields 1510 results (Inclusion Criterion 1). To select relevant publications, the metadata is loaded into Microsoft Excel. It includes title, author, year, abstract, and keywords. In a first step, duplicates and publications not written in English (Exclusion Criterion 1) are excluded, resulting in 1045 publications. Then, publications whose title does not indicate any contribution to one of the research questions are excluded, leaving 289 publications (Inclusion Criterion 2, Exclusion Criterion 2). Following this, the abstracts of the remaining publications are scanned, leading to 201 publications (Inclusion Criterion 2, Exclusion Criterion 2). We then exclude publications corresponding to patents, theses, or web pages, resulting in 142 relevant publications (Exclusion Criterion 3). Thereof, 125 are accessible without payment (Exclusion Criterion 4) and 85 are not included in another publication (Exclusion Criterion 5). Finally, 39 publications provide new input to the research questions and are included in the final publication list (Exclusion Criterion 6). Through backward referencing, one additional publication is identified and included. The initial search was performed on June 6th, 2019. Since then (until June 2020), the alerts from Google Scholar have revealed 1206 new publications. 23 of them meet the inclusion criteria, but do not fulfil any of the exclusion criteria. Thus, they are added to our final publication list, leading to 63 relevant publications.
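The screening steps just described (deduplication, language filter, then title and abstract keyword checks) lend themselves to a simple scripted pass over the exported metadata. The sketch below is only an illustration of that filtering logic, not the authors' actual Excel-based procedure; the file name, column names, and keyword lists are assumptions.

```python
import pandas as pd

# Hypothetical export of search results with columns: title, authors, year, abstract, language.
df = pd.read_excel("search_results.xlsx")

include_terms = ["robotic process automation", "intelligent process automation",
                 "cognitive process automation", "virtual assistant", "software bot"]
exclude_terms = ["business intelligence", "multi-agent system", "big data", "process mining"]

def mentions(text, terms):
    """Case-insensitive check of whether any of the given terms occurs in the text."""
    text = (text or "").lower()
    return any(t in text for t in terms)

# Step 1: drop duplicates and non-English publications (Exclusion Criterion 1).
df = df.drop_duplicates(subset=["title"])
df = df[df["language"] == "en"]

# Step 2: keep publications whose title suggests a contribution (Inclusion Criterion 2).
df = df[df["title"].apply(lambda t: mentions(t, include_terms))]

# Step 3: scan abstracts, removing clear off-topic hits (Exclusion Criterion 2).
df = df[~df["abstract"].apply(lambda a: mentions(a, exclude_terms))]

print(f"{len(df)} candidate publications remain for full-text assessment")
```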
Data Extraction Method To each of the 63 relevant publications, a data extraction process is applied in order to answer the research questions derived in Section 2.1. We extract the following information: 1.) General information, i.e., title, author, publication year, publication venue, number of citations, and publication type, 2.) Definitions provided for RPA (RQ 1), 3.) Differences between RPA and related technologies, e.g., intelligent automation, BPM, etc. (RQ 1), 4.) Criteria for selecting suitable business processes for RPA (RQ 2), 5.) Concrete business processes automated in specific business areas with an explicitly mentioned automation tool (RQ 2), 6.) RPA effects on humans, work life, and companies (RQ 3), 7.) Methods to improve RPA projects (RQ 4), 8.) Combination of RPA with AI (RQ 5), and 9.) Significant information outside the scope of the derived research questions. Data Analysis Method After having extracted relevant data from all selected publications, we cluster the obtained data. For each research question, we scan the relevant information and build groups based on matches and differences. Concerning RQ 1, for example, we study all definitions provided by the publications, identify different aspects, e.g., 'software-based solution', 'mimics human behaviour', or 'rule-based nature', and label the publications according to the aspects they cover. The same procedure is applied to bundle differences to other technologies (RQ 1), process selection criteria (RQ 2), and effects (RQ 3). Depending on the publication type, different data analysis methods are then applied. For case studies, we investigate the business area, the concerned business process, and the used automation tool. Then, we cluster these case studies (RQ 2). Method papers are related to the stage of the RPA project they aim to improve, in order to identify common points (RQ 4). Finally, research papers answering RQ 5 are treated separately to group approaches for combining RPA with AI. Results In this chapter, we analyse the 63 publications identified by the SLR to answer the research questions described in Section 2.1. The answers are structured along the research questions and the seven discovered thematic clusters. In general, we have noticed a growing interest in RPA. Figure 1 shows the distribution of the publications over recent years; it started with one to seven publications per year in the years 2014 to 2017. In 2018, 15 relevant publications appeared, and in 2019, 21 works were published. In 2020, until June, 11 publications could be identified. Concerning the publication venue, there is no clear majority visible. RPA is important in a variety of areas covered by different conferences and journals. Regarding authorship, two researchers are dominating: M. Lacity and L. Willcocks are both (co-)authors of eight publications each. The 63 publications comprise 15 case studies, 22 methods, two reviews, and 24 research papers.
[Figure 1 about here.] The definition from P60 addresses two aspects. First, RPA corresponds to a software-based solution (cf. P33, P51, P58, P59). Second, it mimics human behaviour (cf. P43, P45, P51, P53, P59). Most of the other definitions in the literature pick up these aspects and expand them by mainly two other characteristics. Instead of the term 'software-based solution', terms like 'software robot' (P40, P43) or 'virtual assistant' (P02) are used. Mimicking human behaviour is also expressed by phrases like 'enters data, just as a human would' (P40), 'mimic human actions' (P06), or 'operate [...] in the way a human would do' (P56). Furthermore, some publications emphasise the non-invasiveness of RPA, meaning that RPA does not change the underlying application systems (P06, P46, P51). In 2017, the IEEE Standards Association defined RPA as follows (IEEE 2017): 'A preconfigured software instance that uses business rules and predefined activity choreography to complete the autonomous execution of a combination of processes, activities, transactions, and tasks in one or more unrelated software systems to deliver a result or service with human exception management.' This definition includes the aspects software-based, rule-based, and non-invasive. The other aspects, namely mimicking human behaviour and handling routine tasks with structured data, are not covered. Moreover, this definition includes the goal of implementing RPA ('deliver a result or service'), and it emphasises that humans are needed to handle exceptions. Note that these two aspects are not addressed by any other definition. Differences of RPA to Related Technologies. In the following, we analyse the differences between RPA and Robotic Desktop Automation (RDA), Intelligent/Cognitive RPA, and BPM. These technologies are the ones most frequently mentioned in the results of the SLR. The major difference between RDA and RPA is that RDA does not have its own identity and, therefore, acts via the IT infrastructure of its users with the same roles and authorisations, whereas RPA works autonomously in the background on a central server structure (P40). Furthermore, RDA is attended, whereas RPA is unattended (P40). Additionally, scripting and screen scraping are locally deployed from the user's desktop and can be seen as RDA, differing from RPA, which is enterprise-safe, meeting IT requirements such as security, scalability, auditability, and change management (P58). In P51, stand-alone automation includes macros, office program automation, and mouse/keyboard emulation. Most publications distinguish between intelligent and cognitive automation. Intelligent or enhanced RPA, also called self-learning RPA, uses data to learn how a user interacts with the system and mimics these interactions, including human judgement (P06, P19, P51). Machine learning and process mining techniques (van der Aalst 2011) are used to build knowledge of the process to better automate it (P51, P54). Cognitive RPA, in turn, uses advanced machine learning and natural language processing to augment human intelligence and to learn to perform tasks in a better way (P06, P43, P54). The main differences between rule-based automation and intelligent automation are summarised in Table 3. [Table 3 about here.] Many publications emphasise the differences between RPA and BPM. Figure 2 illustrates these differences graphically. The x-axis indicates the number of process variants (i.e., the complexity of the business process). The y-axis displays the case frequency of all process variants of the business process.
The tasks on the left are best suited for BPM, the ones in the middle are candidate tasks for RPA, and the ones on the right can only be performed by humans (cf. Figure 2). To understand the difference between lightweight and heavyweight IT (Table 4, row 3), we summarise characteristics of suitable tasks for both types of automation. Lightweight IT automates tasks that involve multiple systems, have a high volume, and provide a stable user interface (UI). Heavyweight IT automates tasks that work in a single system, have a very high volume, and are characterised by a stable back-end system architecture (P07). P56 emphasises another differentiation: RPA versus Straight Through Processing (STP). STP refers to processes that can be performed without any human involvement, whereas RPA is an 'outside-in' approach, which uses existing information systems and shall be robust to changes of these systems. 3.2. RQ 2: Which business processes can be automated with RPA and which tools are used for automation? Process Selection Criteria. The most frequently mentioned criterion in the literature is repetitiveness, i.e., the process to be automated by robots shall have a high volume of transactions or a large number of process executions (P06, P10, P14, P28, P31, P50, P58, P59, P62, P63). Regarding the predictability of the process volumes, P06 states that processes with unpredictable peaks are suited for RPA implementations. However, P31 emphasises that the volumes should be predictable. Another important criterion concerns the rule-based character of the process. Consequently, the process to be automated shall be standardised, run in a stable environment, and only require limited exception handling (P02, P06, P14, P28, P31, P46, P62, P63). The next criterion is to check whether the process requires high manual efforts and, thus, is prone to errors (P06, P14, P28, P51). Furthermore, digitisation gaps in processes might fulfil this criterion as they indicate the need for human work. P51 even states that 'any activity that a person performs with mouse and keyboard can be carried out by a software robot.' The complexity of the process itself, or as a result the complexity of its implementation, constitutes another important selection criterion. All publications agree that the lower the complexity, the better the process is suited for RPA (P17, P50, P58, P59). Further, the duration of process execution can serve as a criterion. Processes to be automated shall have a high expenditure of time (P14, P17). Additionally, the following criteria are mentioned by a few publications: the inputs and outputs are digital and structured (P10, P28, P46), the process only requires a limited number of human interventions, the process accesses multiple applications, the effects of a business failure are high (P14, P28), and the transaction has a great influence on the business (P14, P63). P06 proposes choosing processes for RPA automation that are not a priority for the IT department. Use Cases. Table 5 shows the 15 case studies, indicating in which Business Area RPA was applied, which Business Process was automated, and which Tool was used for automation. Most automated processes are swivel-chair processes, i.e., 'processes where humans take inputs from one set of systems (for example email), process those inputs using rules, and then enter the outputs into systems of record (for example Enterprise Resource Planning (ERP) systems)' (P60).
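The selection criteria listed above lend themselves to a simple screening score that can be computed per candidate process. The sketch below is purely illustrative: the attribute names, thresholds, and weights are assumptions rather than values taken from any of the cited publications. In practice, such a score would only complement the qualitative assessments reflected in the case studies discussed next.

from dataclasses import dataclass

@dataclass
class ProcessProfile:
    """Illustrative process attributes derived from the criteria named in the literature."""
    transactions_per_month: int    # repetitiveness / transaction volume
    rule_based: bool               # standardised, limited exception handling
    manual_effort_hours: float     # manual effort per month (proneness to errors)
    complexity: int                # 1 (low) .. 5 (high); lower is better suited
    digital_structured_io: bool    # inputs and outputs are digital and structured

def rpa_screening_score(p: ProcessProfile) -> float:
    """Toy score: the higher, the better suited for RPA (weights are assumptions)."""
    score = min(p.transactions_per_month / 1000, 3.0)  # reward high volume, capped
    score += 2.0 if p.rule_based else 0.0
    score += min(p.manual_effort_hours / 40, 2.0)      # reward high manual effort
    score += (5 - p.complexity) * 0.5                   # reward low complexity
    score += 1.0 if p.digital_structured_io else 0.0
    return score

# Example: a high-volume, rule-based invoice entry process of low complexity.
print(rpa_screening_score(ProcessProfile(5000, True, 120, 2, True)))  # -> 9.5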
For ten case studies, the automation tool used was mentioned in the corresponding publication. Four used Blue Prism (P30, P53, P58, P59), two used UiPath (P03, P17), and one case study each used Redwood (P32), Bluepond (P50), Workfusion (P63), and Roboplatform (P40). The latter is a self-made tool that was built in-house. P23 compares different automation tools, namely UiPath, Automation Anywhere, and Blue Prism, based on criteria such as openness of the platform, future scope, or performance. Their recommendation is to use UiPath because it 'triumphs all' (P23). [Table 5 about here.] 3.3. RQ 3: What are the effects of RPA on humans, work life, and companies? The answer to RQ 3 is divided into two aspects: the first one deals with the RPA effects on humans and their work life, whereas the second one deals with positive, controversially discussed, and negative effects on the company. According to (P02, P13, P17, P18, P30, P55), employees fear losing their jobs. They consider the robots competitors for their jobs (P02, P18) and are afraid of learning to use the new technology (P13, P14). Hence, acceptance problems might arise (P18). P29 and P54 propose combined human-robot teams, where each team member performs the task he or she can do best. In P30, myths about RPA are demythologised, e.g., 'RPA is only used to replace humans with technology'. This myth is refuted by the fact that more work can be done with the same number of people and humans are not replaced by technology. According to P61, staff reduction is one effect of RPA implementations. According to (P14, P32, P54), there will be fewer tasks for humans, especially regarding low-level tasks not requiring any specific qualification. P11 and P12 emphasise that even knowledge workers are affected by lay-offs due to RPA. On the one hand, this has an impact on jobs in low-cost countries (P32). P30 proposes to automate offshore processes and keep them offshore, whereas P03 stresses that humans are needed to trigger the robot. On the other hand, organisational structures change. Nowadays, most companies are structured like a pyramid, having many less-skilled workers and fewer highly skilled workers (cf. Figure 3a). P32 predicts the change from that pyramid structure to a diamond structure, meaning that employees at the bottom of the pyramid will be replaced by robots (cf. Figure 3b). P42 goes further and predicts that the pyramid structure will be replaced by a pillar structure regarding the human workforce (cf. Figure 3c). Robots will fill up the structure such that the overall organisation structure remains a pyramid. Some effects of RPA projects are controversially discussed. P43 and P46 criticise that RPA is unable to make decisions, whereas P50 argues that it provides full transparency of all decisions. The latter means that if the bot fails, an employee can still perform the task manually. Section 3.5 discusses how AI is used to expand the limits of RPA, including decision making. The costs of RPA are another point of discussion: in P14 and P55, budget constraints are seen as a challenge to realising RPA projects, while many publications highlight its cheapness, cost reduction, and high return on investment (P03, P06, P13, P18, P31, P32, P33, P35, P46, P47, P50, P54, P59). P18 differentiates between implementation and maintenance: the former is characterised by low costs, whereas the latter can be costly and tedious. The non-invasiveness of RPA is also seen differently: P03 and P46 criticise that RPA presumes an existing infrastructure and depends on the stability, availability, and performance of the systems.
On the other, P54 considers the non-invasiveness as a benefit. P04 starts a discussion on possible RPA effects on enterprise architectures and argues that RPA might become invasive, i.e., RPA enables new work flows, requiring a modelling functionality in RPA systems, which contradicts the basic RPA idea. P46 emphasises that RPA is unable to adapt to a changing environment, whereas P02 and P62 notice that RPA is easily modifiable and flexible. Negative effects or limitations of RPA are seldom mentioned. Only P19 characterises RPA solutions as workarounds and P02 and P18 point out that RPA is a temporary solution. According to P03, there are software platforms, e.g., special forensic software, which are not compatible with current RPA solutions. Furthermore, P18 criticises that know-how and skills are required, and RPA solutions are not robust in respect to evolving user interfaces. P28 adds that RPA implementations require greater IT involvement than initially thought. 3.4. RQ 4: Are there methods for improving the implementation of RPA projects? To analyse the publications that introduce methods for RPA projects, we oriented ourselves on the software development life cycle (SDLC) (Royce 1987). We assigned the methods to the corresponding stage in the life cycle for the sake of better illustration (cf. Figure 4). In the following, the methods are described shortly. [ Figure 4 about here.] Analysis Stage. The approaches to improve the Analysis Stage are roughly clustered into three areas: process insights, process standardisation, and process selection. P16 uses process mining to get insights into the process, e.g., its automation rate. In P38, textual process descriptions are used to classify the tasks into the categories Manual Task, User Task, and Automated Task. The goal is to automatically detect tasks suited for RPA. To achieve this goal, P38 uses feature computation for prediction and a Support Vector Machine (SVM) to classify the process descriptions based on the features. The aim of P35 is to develop a new process mining technique, which can deal with RPA and automatically discover process models. The approach is to discover constraints within an event log, extract corresponding feature vectors, and label constraint violations. P35 uses clustering methods to identify correlations between activation and target payloads. In a subsequent publication of the same authors, i.e. P37, a tool ('Action Logger') is developed, which records UI logs that can directly serve as inputs to process mining tools and contain information relevant for RPA implementation. P34, again by the same authors, develops an idea how to discover data transformations from UI logs. In P16, process mining is used to standardise existing processes. Another standardisation technique is proposed in P22, which emphasises the importance of not automating the as-is process, but to optimise it before. Thus, the authors propose a framework for process re-engineering. The most difficult task in the analysis stage is to select the process to automate. Different approaches are proposed: P16 sticks to process mining for prioritising activities. P25 also uses process mining to discover processes, with a method focusing on creating event logs from screen monitoring data. P05 analyses UI logs to discover deterministic actions. As basic idea, 'a routine is automatable if its first action is always triggered when a condition is met [...] 
and the value of each parameter of each action can be computed from the values of parameters of previous actions' (P05). P39 develops a four-step method to analyse a business process based on its criteria (cf. Section 3.2): first, to be eligible for RPA, the process has to be mature and standardised (Step 1). Step 2 assesses the RPA potential of the process based on human interaction with software and its rule-based nature. Step 3 evaluates the RPA relevance based on the volume of transactions and the degree of complexity of the process. Finally, based on Steps 2 and 3, the process is classified. P39 recommends selecting processes with high relevance and high potential. In turn, P48 follows a similar approach and develops a multi-criteria process evaluation model, which assesses technical feasibility and business potential criteria to find suitable business processes for RPA. The technical criteria include the degree of rule-basedness, human intervention, digitalisation, and the structuredness of data. The potential criteria evaluate labour intensity, the number of systems involved, the number of process exceptions, the number of process steps, current costs, and process maturity. P57 proposes a method to prioritise processes while maximising RPA benefits. Based on different indicators of the process, i.e., execution frequency, execution time, degree of standardisation, stability (i.e., a small number of exceptions in the process), failure rate, and automation rate, the automation potential of the process is assessed. Furthermore, the profitability of process automation is measured through fixed and variable costs of human labour and fixed and variable costs of RPA. Finally, P57 maximises the economic value and provides recommendations to support the decision of selecting appropriate processes for RPA initiatives. Product Design Stage. P44 highlights advantages and challenges of organising RPA in local business units. On the positive side, enthusiasm for digitalisation and local ownership are built. On the negative side, there is a lack of control mechanisms and end-to-end process views. P44 proposes to loosely couple the IT department and the RPA team. Coding Stage. P07 suggests a method for implementing RPA projects in an agile way: instead of documenting a process completely with clicks and text-based descriptions, the users record themselves performing the task and store the video in the backlog. The developer creates a test case for this video and checks whether the current solution passes the test (Test Driven Development). If not, he modifies the RPA solution until the test case is fulfilled. Then, he moves on to the next video. P27 proposes the use of digital twins for RPA development. A digital twin is, in this context, a virtual shadow of an IT system. The idea allows developing RPA externally without having access to the real system. Testing Stage. P35 has the vision to develop a method 'to automatically train the RPA bots'; however, research has not yet progressed far enough. P08 proposes a method for automated testing in RPA projects, which has been tested with a prototype. The approach is to modify the RPA life cycle. Compared to the life cycle model depicted in Figure 4, the third stage is called development and not coding, operation is named monitoring, and a fourth stage, i.e., deployment, is inserted before the testing phase. The modified life cycle not only includes design in the second stage, but test environment construction as well.
During development, automatic testing can be performed serving as new input for the analysis phase. P24 extends the approach of P08 by providing technical details on test cases and the algorithm as well as by evaluating the approach of automatically generating a testing environment. Operation Stage. P16 mentions that process mining can be used to monitor the results of an RPA project. P21 proposes a middle ware system for controlling the execution of multiple RPA bots. The system includes a job-scheduling algorithm to efficiently distribute multiple tasks among available bots. In turn, P52 solves an optimisation problem to determine the optimal number of required bots while minimising costs. Then, the optimal task assignment among the bots is solved. Some publications cannot be assigned to solely one stage and are, therefore, placed in the middle of Figure 4. P15 and P36 cover the first three stages, i.e., Analysis, Product Design, and Coding. To be more precise, P15 presents an end-to-end approach that allows deducing RPA rules from user behaviour. The idea is based on the Form-to-Rule approach: First, tasks of the user are identified by observing interactions with systems and identifying forms used within the systems. Second, rules are deduced from relations between the different tasks. Third, RPA is implemented based on those rules. P36 combines the approaches presented in P34, P35, and P37 and proposes a Robotic Process Mining pipeline. After recording UI logs, noise filtering, segmentation, and simplification steps are applied to identify candidate routines. In these routines, executable (sub)routines are discovered and compiled to obtain RPA scripts. P36 emphasises that there are still many challenges to successfully apply the proposed pipeline. Stages Product Design, Coding, and Testing are addressed by P49. A framework is developed to transform a human-centred routine into a robot-automated one. The framework of routine automation can be empirically applied to different areas, including RPA, and provides implementation guidelines. One publication, i.e., P20, addresses the complete life cycle of RPA and proposes a framework to introduce RPA in auditing. The first stage is the process selection based on the evaluation of different criteria, e.g., RPA criteria, process complexity, and data compatibility. Second, the process is modified, e.g., considering data standardisation. In a third step, the process is implemented and, finally, evaluated and operated. The last step consists of evaluating effectiveness, assessing detection risk (i.e. the risk that auditing 'will not detect a misstatement' (P20)), and monitoring the RPA operations. 3.5. RQ 5: Is AI used in combination with RPA? To get an idea whether AI is already used in combination with RPA, our insights from literature are summarised in the following. Some works briefly mention the use of AI and its potentials. P40 and P43 state that with AI it becomes possible to understand semi-structured data. P56, in turn, emphasises that AI helps interpreting changing user interfaces and improving the robustness of RPA solutions. Using chat bots, P43 presumes that the interaction between humans and computer systems becomes facilitated. First AI-based applications have emerged in the RPA field: P26 presents a Cognitive Automation Robots Platform, which is able to understand data, generate insights, and use the latter as learning experiences. P33 uses the cognitive virtual agent 'Amelia', which understands chat messages. 
In P19, a cognitive RPA prototype is presented. It can automatically identify, extract, and process data. Once the classification model is trained (for details see P19, pp. 68-69), new unseen documents are classified and relevant objects, e.g., address fields, are detected and extracted. We discovered four publications that combine AI with RPA in greater detail. P41 provides building blocks for intelligent process automation by explaining and providing implementations on how to extract intent from audio, classify emails, detect anomalies, find cross-correlations in time series, and understand traffic patterns. P62 describes how machine learning methods contribute to further improving RPA, e.g., using image processing to scan letters or invoices or using classification algorithms to label documents. The task of classifying emails correctly is picked up by several publications: P45 proposes the use of an SVM and a Text Rank Algorithm to read emails and to automatically process them. P09 develops an algorithm, named Sure-Tree, for email classification, which produces a minimum of false positives to ensure that an incorrect action is never triggered. Deriving a Framework for Analysing and Comparing RPA Publications This section synthesises the results obtained by the SLR. More precisely, we present a framework for Analysing and Comparing existing as well as upcoming Publications in RPA (ANCOPUR for short). ANCOPUR gathers the results along the defined research questions (cf. Section 2.1). Table 6 depicts the schema of our ANCOPUR framework: the first column shows the main aspect for comparison, e.g., definition, process selection criteria, or use case. In the following columns, the aspect is detailed further. A publication can be assigned to several rows depending on the aspects it covers. If a new feature is found, it can be added to ANCOPUR as well. To demonstrate its usefulness and applicability, all 63 publications from the SLR are categorised with ANCOPUR. Note that this facilitates the comparison of any new publication with existing knowledge. We illustrate and explain ANCOPUR by assigning P17 to it as an example. This publication was part of the results of the SLR. Furthermore, a randomly chosen publication that was excluded from the SLR results is assigned to the framework to evaluate it (Flechsig, Lohmer, and Lasch 2019). Publication P17 is read to detect information corresponding to the aspects in the first column of the ANCOPUR framework. We discover the aspects Process Selection Criteria, Effects, and Use Case. Concerning the criteria for processes to be automated, P17 emphasises '1) the processes should be simple enough so that the robots could be implemented quickly and 2) improved process efficiency resulting from RPA implementation should be clearly visible.' (Hallikainen, Bekkhus, and Pan 2018) Therefore, the processes are selected depending on their complexity and the duration of process execution. Regarding ANCOPUR, P17 is indeed assigned to those two rows in the Process Selection Criteria column. Regarding the case study, the aspects Business Area, Business Process, and Automation Tool are all covered by P17: the general business area is BPO and the concrete processes are '1) new employment relationships and 2) changes in employee payment details', which are both swivel-chair processes. UiPath was used for automating the business processes. Therefore, P17 is added as a reference to the rows BPO, swivel-chair process, and UiPath in the use case section of ANCOPUR.
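The assignment steps just illustrated for P17 can be mirrored in a simple data structure. The sketch below only illustrates the classification idea, it is not the published framework, and the aspect and row labels are abbreviated assumptions; the effects found in P17 are discussed next.

from collections import defaultdict

# ANCOPUR-style schema: aspect -> feature row -> set of publication identifiers.
ancopur = defaultdict(lambda: defaultdict(set))

def assign(pub_id: str, aspect: str, feature: str) -> None:
    """Register a publication in one aspect/feature cell of the framework."""
    ancopur[aspect][feature].add(pub_id)

# Assignments for P17 as described in the text.
assign("P17", "Process Selection Criteria", "complexity")
assign("P17", "Process Selection Criteria", "duration of process execution")
assign("P17", "Use Case", "BPO")
assign("P17", "Use Case", "swivel-chair process")
assign("P17", "Use Case", "UiPath")

# Comparing a new publication with existing knowledge then reduces to lookups,
# e.g. all works that report the same automation tool:
print(sorted(ancopur["Use Case"]["UiPath"]))  # -> ['P17']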
The following wording is found for RPA effects: 'there were some fears about losing jobs [...] people would no longer have to carry out the boring work and could concentrate on more interesting tasks' (P17). The first statement expresses a negative effect on humans and is assigned to fear to lose the job in ANCOPUR. The second statement describes positive effects on humans and covers both aspects in ANCOPUR, namely relieved from non-value adding tasks and focus on cognitively more demanding tasks. To evaluate ANCOPUR, we assign the work presented by (Flechsig, Lohmer, and Lasch 2019) to it. For Definition, Use Case, Effect, and Combination with AI, either no new aspects or no information at all are found. Concerning Differences of RPA to Related Technologies, BPM is compared to RPA. All aspects in ANCOPUR are covered; only the formulations differ slightly, e.g., 'Redesign of extensive processes with high strategic relevance and added value' (Flechsig, Lohmer, and Lasch 2019) is assigned to the row changes 'how' work is done. Regarding Process Selection Criteria, we find several questions that aim to find processes suitable for an RPA implementation (Flechsig, Lohmer, and Lasch 2019, p. 111). These criteria include repetitive, rule-based, duration of process execution, and high effects of business failure. Therefore, (Flechsig, Lohmer, and Lasch 2019) can be added to the corresponding rows in the ANCOPUR framework. Additionally, (Flechsig, Lohmer, and Lasch 2019) proposes choosing processes relevant for compliance, an aspect not considered yet. ANCOPUR can be expanded by this process selection criterion. (Flechsig, Lohmer, and Lasch 2019) suggests a method for combining BPM and RPA, which can be assigned to the Product Design Stage with a new row, namely combination of BPM and RPA. The idea is to have a common Analysis Stage for BPM and RPA projects as well as to decide in the Product Design Stage whether to implement a BPM or an RPA solution. [Table 6 about here.] Altogether, ANCOPUR uses criteria and sub-criteria to classify RPA publications. The framework is useful for systematically analysing, assessing, and comparing existing as well as upcoming RPA works. Related Work Scientific works on RPA have been analysed in Section 3. This section gives a short overview of other literature research approaches, highlighting the differences to our work. (Gotthardt et al. 2019) examines the current state of RPA as well as fundamental challenges in accounting and auditing. For this purpose, a literature review, interview results, and case studies are presented to summarise key factors. Unlike our work, no SLR is presented. Instead, (Gotthardt et al. 2019) follows a domain-specific approach by focusing on accounting and auditing, with a special emphasis on the role of AI. A systematic mapping study is conducted in (Enriquez et al. 2020) to analyse the current state-of-the-art of RPA. The main focus is to evaluate 14 commercial RPA tools regarding the coverage of 48 functionalities mapped to RPA life cycle phases. As a major result, the Operation phase is covered by over 80% of the RPA tools, whereas support for the Analysis phase is below 15%. An SLR is presented in (Riedl and Beetz 2019). Its aim is to derive an evaluation model to identify business processes which can be subjected to RPA in parts or entirely. The main focus of this SLR is to derive selection criteria for assessing the RPA suitability of business processes as well as to develop a corresponding evaluation method.
(Riedl and Beetz 2019) apply the SLR method described in (Kitchenham 2004). However, research questions, search strings, data sources, inclusion and exclusion criteria, and data analysis differ from the ones described in our work. Only 25 scientific research articles, case studies, and professional reports are considered, compared to the 63 in our SLR. Moreover, the results are differently clustered due to the focus on different research questions. (Güner, Han, and Juell-Skielse 2020) presents a literature review on RPA cases to answer the question how RPA as a routine capability advances BPM practices. The results show that RPA, as a routine capability, advances practices at individual, organisational, and social levels. (Santos, Pereira, and Vasconcelos 2019) provide an approach to evaluate RPA development in business organisations and industries. A conceptual model on relationships between RPA topics, identified in a literature review, is presented. The model consists of three steps, i.e., definition of strategic goals, process assessment, and tactical evaluation and factors for a successful RPA implementation. Influencing factors include benefits, disadvantages, selection criteria, future challenges, and future opportunities. (Ivančić, Vugec, and Vukšić 2019) presents another SLR on RPA. Some of the research questions in (Ivančić, Vugec, and Vukšić 2019) sound similar to ours, e.g., 'How is RPA defined (RQ2-1)' reads like RQ 1 (cf. Section 2.1). Through examining search string, data sources, and inclusion as well as exclusion criteria, the differences become visible. The search results in 27 publications compared to 63 in our SLR. Definition and benefits of RPA, and differences to BPM are shortly mentioned, whereas our paper goes into detail and reveals many aspects undiscovered by previous SLRs. Tools used for automation (RQ 2), effects (RQ 3), methods (RQ 4), and the combination with AI (RQ 5) are completely ignored by this SLR. Furthermore, no framework utilising SLR results for assessing and comparing newly upcoming works has been developed. (Syed et al. 2020) identifies contemporary RPA-related themes and challenges for future research by presenting an SLR. The first two research questions overlap slightly with RQ 1 and RQ 3 as presented in this article. However, (Syed et al. 2020) focuses on the description of RPA readiness/RPA maturity in literature, the potential of RPA, an effective RPA methodology, and current and future technologies for RPA. In contrast, our paper emphasises differences between RPA and related technologies, methods for improving the implementation of RPA projects, and the combination of RPA with AI. (Syed et al. 2020) uses the results of the SLR to highlight key research challenges for future RPA research, whereas we derive a framework for evaluating and comparing RPA publications in a structured way. Hence, to the best of our knowledge, no other publication addresses the problems presented in Section 1.1. Discussion The presented results enable us to answer the five research questions. In the following, the results are discussed and interpreted along the seven discovered thematic clusters. More precisely, we identified, categorised, and analysed 63 publications belonging to the following seven clusters: RPA Definition, Differences of RPA to Related Technologies, Process Selection Criteria, RPA Use Cases, RPA Effects, RPA Project Methods, and Combination of RPA with AI. 
As the main result, we obtain the ANCOPUR framework, which enables a structured overview of the SLR results. More specifically, the framework provides a fast and easy way to identify and categorise publications in the RPA area. In particular, comparing new works with existing knowledge becomes much simpler and more structured. Moreover, ANCOPUR can be easily expanded. If new publications reveal unconsidered aspects, those can be added to evolve the framework and keep it up to date. In detail: 1. RPA Definitions. It is emphasised that RPA is a software-based solution mimicking human behaviour. These aspects are important to indicate the difference of RPA to hardware bots. 2. Differences of RPA to Related Technologies. Most papers emphasise the differences between RPA and Intelligent Automation as well as between RPA and BPM. 3. Process Selection Criteria. Best suited for RPA automation are repetitive, rule-based business processes of low complexity that demand high manual effort. 4. Use Cases. The majority of use cases stem from business areas such as BPO and Shared Services. Note that this is reasonable, as those areas possess many repetitive, rule-based business processes such as, for example, the generation of payment receipts (Aguirre and Rodriguez 2017). Nevertheless, it would be interesting to encounter more RPA projects in knowledge-intensive business areas, e.g., in research and development or in healthcare. Furthermore, the literature covers only successful RPA projects, leaving room for further research on failed projects. Concerning the RPA tools used in the case studies, Blue Prism and UiPath are dominating. According to (Gartner 2019), however, there are other tools that should be considered: Automation Anywhere, EdgeVerve Systems, NICE, Workfusion, Pegasystems, and Another Monday. The application of the different tools to one concrete use case as well as tool performance should be compared in further research studies. 5. RPA Effects. The positive effects are widely discussed in the literature. Only a minority of publications is critical towards RPA. One reason can be the novelty of RPA (cf. Figure 1 in Section 3), due to which the technology is hyped and negative effects tend to be overlooked. It is emphasised that employees are relieved from non-value-adding tasks and can instead focus on cognitively more demanding tasks. Finally, business processes become faster, more available, more compliant, and of higher quality. 6. RPA Project Methods. Most methods for improving the implementation of RPA were published in 2019 and 2020 (16 of 22 papers). The vast majority of methods try to improve the analysis stage; only some publications address the other life cycle stages. The analysis stage is the one that differs most from other software development projects. Product design, coding, and testing do not differ that much between RPA projects and other software projects. We expect that more publications dealing with analysis will appear, as well as methods to fully automate the detection of RPA-suitable processes. Furthermore, the operation stage should be addressed, e.g., it should be monitored whether the bots are accepted or whether employees fear losing their jobs and therefore refuse to use the bots. 7. Combination of RPA with AI. The use of AI in the context of RPA is still at a very early stage. Six publications deal with this combination from a general point of view and emphasise that it might create a big impact.
Only four concrete use cases are discovered; the majority of them focus on the problem of classifying emails correctly. While the use cases are still scientific in nature, it would be interesting to see more industry-driven approaches and projects. The publications are from recent years only; therefore, we hope for more research in the coming years. In general, research on RPA is still at its beginning. Though RPA is increasingly present in industry, scientific works on this topic are rather scarce and mainly consider qualitative issues. Moreover, it is noteworthy that quantitative research is missing. We expect that there will be many more publications in the coming years. In order to assess and compare those publications with the existing body of knowledge, the present paper provides a fundamental framework based on concepts of RPA. Summary and Outlook RPA is a novel technology that started to emerge in 2015. By means of an SLR, we provide an overview of the most relevant publications until June 2020. We discovered seven thematic clusters answering fundamental questions such as 'What is RPA?', 'Which business processes can be automated with RPA?', and 'What are the RPA effects?'. Furthermore, we investigate the differences between RPA and related technologies, methods for improving the implementation of RPA projects, and whether AI is used in combination with RPA. Additionally, we provide a review of case studies including the business area, the process, and the automation tool. The paper describes ANCOPUR, a framework for analysing and comparing publications in the RPA area. With the help of criteria, publications can be classified. The framework provides a robust and expandable systematics to categorise and evaluate trends and further developments in the RPA area. Therefore, it will help both scientists and users from industry to assess and compare upcoming RPA publications. As discussed, due to the novelty of RPA, the research focus lies on analysing and understanding the RPA technology. The combination of AI with RPA and the development of methods for RPA implementation are still at an early stage. Regarding the publication dates of the respective publications, a clear trend in this direction is visible: nine of ten publications combining RPA and AI and 20 of 22 method papers were published in 2018, 2019, and 2020.
2020-12-23T02:15:57.663Z
2020-12-22T00:00:00.000
{ "year": 2020, "sha1": "28fd740ecc418aab38b586dd0eec04986a11a08d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "28fd740ecc418aab38b586dd0eec04986a11a08d", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
252246956
pes2o/s2orc
v3-fos-license
Ablution Skills in Early Childhood: The Effect of Big Book Media This study aims to determine the effectiveness of big book media on early childhood ablution skills. This study uses a quantitative approach with experimental methods with a pre-experimental research design type one group pretest-posttest design. The research sample amounted to 14 children. The data collection technique used a checklist for ablution skills. Data analysis technique using a t- test. The results showed that the average child’s ablution skills increased from the pre-test to 10.85, after being given big book media treatment, there was an increase from the average post-test result to 30.21. This is also reinforced by the results of hypothesis testing at a significant level of 5%, which got a t-count of 36.39 greater than the t-table. It can be said that the use of big book media is effective in improving early childhood ablution skills. after ablution. Therefore, this study examines the Effectiveness of Big Book on Early Childhood ablution Skills. Introduction Children are a precious trust from God (Anisa & Murniyetti, 2022). Because of children, parents are required to educate children since they are still in their mother's womb until they are adults (Nisak et al., 2022). Every newborn child is always in a state of purity. So, when return to the owner (Allah SWT) must also be pure, without stains and sins. That's why education for children in the view of Islam is obligatory (Aini & Fitria, 2021;Aulia & Amra, 2021;Nuha & Munawaroh, 2022). Early childhood education is a formal education forum that can facilitate various aspects of child development by providing appropriate stimulation according to the level of achievement of child development and age services (Saleha et al., 2022). The aspect of the development of moral and religious values must be facilitated to develop (Zainuddin et al., 2022). The purpose of Early Childhood Education is to develop the whole potential of the child so that later he can function as a complete human being according to the philosophy of a nation (Azzahra et al., 2021;Sabri et al., 2020;Warmansyah, 2020). Childhood is a part of life. This period is called the golden age (Khaironi, 2017;Salsabilafitri & Izzati, 2022). The child's brain's ability to think is growing up to 80% (Amalina, 2020;Nabighoh et al., 2022;Priyanti & Warmansyah, 2021). This is the main basis why the importance of education for early childhood as at the stages of child development (Yulianingsih et al., 2020). This period is not prepared to face life in the future but is limited to optimizing potential (Husna, 2022). During this period, the child easily absorbs or accepts positive and negative things which will shape the child's character (Nasir et al., 2019). To shape the character of children in a positive direction, it is necessary to inculcate religious values (Faiz et al., 2021). (Amini & Suyadi, (2020) explained that forming a positive child's character can be done through inculcating religious values with proper parenting from parents at home, then continued by teachers and the wider community. Therefore, children need to understand ablution activities according to Islamic teachings and the application of these ablution activities is adjusted to the child's level of development so that they can be used as the foundation of their religious life in the future. 
The essence of developing religious values is faith and worship education, meaning that from an early age, the problem of faith must be embedded in children. Likewise, religious practices have also been familiarized by educators by being trained on children (Yuliana et al., 2022). The kindergarten age is the best time for teachers to lay the foundations of worship practices (Ivrendi, 2011). Although the role of parents is large in building the basis for worship practices for their children, the role of the Kindergarten teacher is also not small in laying the foundation for worship activities for a child, because Kindergarten children like to obey their teacher's orders (Warmansyah et al., 2022). Based on Permendikbud No. 137 of 2014 describes the standard level of development of religious values, namely: 1) Knowing God through the religion he adheres to; 2) Imitating worship movements according to their religion, such as ablution and prayer; 3) Saying a prayer before or after doing something. Children aged 48-60 months can say short prayers and perform worship according to their religion. The basic competency indicators that will be developed recognize daily worship activities and carrying out daily worship activities according to the guidance of teachers or adults. Thus, children aged 5-6 years can perform the simplest worship activities, namely ablution (Artha et al., 2020). Children from an early age should get habitual ablution (Suryani, 2020). This is the responsibility of parents at home and teachers at school (Chomariyah, 2019). With the habit of ablution, it is hoped that it can improve the skills of ablution in children from an early age (Syahrizal & Suratno, 2021). So that children can do it well, introducing the practice of ablution to children is first done in a family environment, because the family is a place to be educated before entering the education level. Children are introduced to ablution in educational institutions such as schools. One of them is in Harapan Ibu Lima Kaum Islamic Kindergarten, which teaches the practice of ablution to children who, in the learning process, use the 2013 curriculum. Ablution is the most important act. Praying is not accepted by Allah if it is not preceded by ablution. Ablution must be performed when praying. Ablution according to language means clean or beautiful (Nurhayati et al., 2022). Ablution is cleaning the limbs with purified holy water based on certain conditions and pillars to eliminate minor hadas. Small hadas means people who have not performed ablution or people who do not have water for ablution. Ablution must be done but must be in order. One indicator of the acceptance of prayer is perfection in ablution. Asking children as family members to perform ablution is an obligation for parents, especially fathers. God's command to parents to carry out ablution is difficult, just ordered, and requires a short time. It implies many other commands related to the child's education process that is not free from obstacles and challenges, and require a long time. Through this verse and hadith, it is explained that parents have an obligation to their children to practice their ablution skills (Masruroh, 2018). Based on observations at the Islamic Kindergarten of Harapan Ibu Lima Kaum on July 30, 2019, it was found that out of 14 children aged 5-6 years in group B1, 9 children did not know the procedure for ablution when they wanted to pray. When asked by the teacher to perform ablution, they still looked confused. 
Guidance for ablution worship is still limited, the teacher only invites children to practice ablution once a week. Children in the class are taught how to perform ablution and movements and rules for ablution, so there are still sufficient children who do not know how to practice ablution. The results of interviews with teachers at the Harapan Islamic Kindergarten of Ibu Lima Kaum found teachers have not used the Big Book media in learning ablution. Therefore, there are still sufficient children who are wrong in the order of how to perform ablution. The child is still wrong in pronouncing the intention before performing ablution, and the child is not yet skilled in practicing the correct way of ablution such as after intending to wash his hands between the fingers but the child does not wash his hands according to existing provisions. Apart from that, what children often do wrong is when they wash their hands, not to the elbows, and after that, the child rubs the head only at the end of the hair. The environment plays an important role in the development of a child's life. This environment begins with the family environment. The family can be an example in terms of ablution before praying. However, in reality, sufficient parents are found to be lacking in teaching the practice of ablution to their children as a habit at home. This can be found in most children, especially those aged 5-6 years, who should already be familiar with worship and direct practice in terms of ablution. Therefore, the use of media is needed for learning ablution, so that children can be skilled in performing ablution in a manner and can recite ablution prayers. Therefore, children must be trained and accustomed to performing ablution before praying as a provision for them when they enter adulthood so that implementing worship required by Allah SWT does not become a heavy burden for their daily lives. For children to be diligent and serious in performing ablution, a media in learning that is interesting and creative is needed so that children can concentrate and also focus on digesting the learning that is being done. Media means intermediary or introduction. Media is an intermediary or delivery of messages from the sender to the recipient of the message (Fitria, 2014). Learning media is a tool that can help the teaching and learning process and serves to clarify the meaning of the message conveyed so that it can achieve learning objectives better and more perfectly (Pangestika et al., 2021). Media that can convey the message is one of them by using the big book. The researcher argues that the Big Book media is one of the learning tools in early childhood ablution skills. Media big book is a version of a large storybook, measuring 14 x 20 inches. This large size helps children to see illustrations and text writing more clearly and encourages greater involvement in this story (Andriana et al., 2017). The Characteristic Big book is attractive pictures, colors a predictable plot, words that can be repeated, and a rhythmic text pattern to sing about (Oktaviana et al., 2021). Previous research has stated that the big book media has an influence on various aspects of child development, such as literacy (Setyorini et al., 2019;Wandini et al., 2021;Yansyah et al., 2021), moral behavior (Ardayani & Suarjana, 2021), ability Receptive language (Fitriani et al., 2019), Tolerance Character (Purnamasari & Wuryandani, 2019), Empathy (Maranatha & Putri, 2021), speaking ability (Anggraeni et al., 2019), and other aspects. 
From several previous studies, big book media has a positive impact on aspects of child development. However, no studies have been found on the impact of big book media on early childhood ablution skills. Therefore, the novelty of this research lies in examining the impact of big book media on early childhood ablution skills. This study is motivated by the above background, as well as by the observation that interesting media have not been used in practicing ablution skills and that children are bored during learning. Apart from that, children's ablution skills are still categorized as unskilled, because children still do not know the intention for ablution, the correct sequence of ablution, or the prayer after ablution. Therefore, this study examines the Effectiveness of Big Book Media on Early Childhood Ablution Skills. Methodology The type of research used in this study is quantitative research using the experimental method to examine how much influence X has on Y. Experimental research is a research model that provides a stimulus and then observes the effect or consequences of changes in the stimulated object. The data were obtained as a comparison after treatment through big book media. A pre-experimental, one-group pretest-posttest design was used. In this design, a pretest was carried out before the treatment was given. The researcher chose this design to obtain accurate results through several tests, with a pretest (before treatment) and a post-test (after treatment). This research was conducted at the Islamic Kindergarten of Harapan Ibu Lima Kaum, Tanah Datar Regency, using a simple random sampling technique with a sample of 14 children aged 5-6 years. Data were collected using a checklist of ablution skills comprising the children's skills in performing purification activities: 1) stating the intention; 2) washing both palms; 3) gargling; 4) washing the nose; 5) washing the face; 6) washing both hands up to the elbows; 7) wiping the hair; 8) washing the ears; 9) washing the feet; and 10) reading the prayer after ablution, with categories ranging from Already Skilled (ST), value 4, to Not Skilled (BT), value 1. To assess the ablution skills of young children taught with big book media, a difference test (t-test) was carried out. Result and Discussion Table 1 illustrates that all children experienced an increase in their ablution skill scores. The children's average ablution skill score was 10.85 at the pre-test; after being given the big book treatment, the average post-test score rose to 30.21. Normality Test The normality test is used to determine whether the data from each variable are normally distributed. With the assistance of the statistical software SPSS version 20 for Windows, the normality test results are shown in Table 2. Based on the one-sample Kolmogorov-Smirnov output, the value obtained is 0.089 > 0.05, a difference of 0.039; therefore, the data are normally distributed. In the Shapiro-Wilk test, the value obtained is 0.229 > 0.05, so the data are likewise normally distributed. Homogeneity Test The homogeneity test should indicate that two or more groups of sample data come from populations that have the same variance. With the help of the statistical software SPSS version 20 for Windows, the homogeneity test results are shown in Table 3.
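For reference, the same analysis chain (normality check, homogeneity check, then a paired-samples t-test on pre- and post-test scores) can be reproduced outside SPSS with standard statistical libraries. The sketch below uses made-up scores for 14 children rather than the trial data, and assumes Levene's test for the homogeneity check.

import numpy as np
from scipy import stats

# Hypothetical pre-test and post-test ablution-skill scores for 14 children.
pre = np.array([10, 11, 12, 10, 9, 11, 12, 10, 11, 12, 10, 11, 12, 11])
post = np.array([29, 30, 31, 30, 28, 31, 30, 29, 31, 32, 30, 29, 31, 30])

# Normality of the scores (Shapiro-Wilk, as reported in the paper).
print(stats.shapiro(pre))
print(stats.shapiro(post))

# Homogeneity of variances between the two measurements (Levene's test).
print(stats.levene(pre, post))

# Paired-samples t-test; a t-value above the critical t-table value at the
# 5% level supports the alternative hypothesis.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")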
Based on the output of homogeneity of Variances, the sig value (significance) of 0.053 is greater than 0.05 (0.053 > 0.05), so the variation for each sample is the same (homogeneous). Hypothesis Testing The next step is to analyze the data from the treatment by conducting statistical tests, to see whether or not there is a significant increase in the ablution Skills in early childhood through the big book media. In this case, the t-test analysis is carried out as shown in the table 4. Based on the research that has been done, shows that the alternative hypothesis (ha) is accepted and the null hypothesis (ho) is rejected. The alternative hypothesis is accepted because the t-count is greater than the t-table at a significance level of 5% = (36.39> 2.16). This shows that the big book media can affect children's ablution skills. It can be seen that, at the time of the pretest, the children's ablution skills were in the unskilled category of 14 people. At the time of the posttest, the children's ablution skills increased to the category of 10 skilled people, and 4 more people were skilled. Big book media can improve children's ablution skills because, in its implementation, big book media can show cut pictures of the ablution procedures. Apart from that, the researchers also practiced with children for proper ablution procedures. Big book media is also influential to be used as learning media because big book media is made by pasting pictures so that it makes children interested in seeing it. This is in line with research conducted stating that the big book media has a large size, attractive image shape, and striking color so that it can attract the attention of children (Kurniaman & Sismulyasih, 2019). According to Piaget by using picture books, it can be said that children have played symbolic games, which have the function to provide pleasure and are like mental images in their efforts to imitate reality (Mahayanti et al., 2017). Therefore, the use of big book media can provide significant benefits for children, and also get learning by using big book media. Using interactive media such as big books allows teachers to explain, disseminate and provide learning more easily rather than just relying on words (Sitepu et al., 2021). This is also supported by the findings of Grove, (2017) that big books offer a wide space for children to connect with experience, giving children the opportunity to interpret texts meaningfully. The illustrations in the big book help the reader understand the story, besides that the illustrations also provide details of the setting or show the mood and tone of the book (Hilda Hadian et al., 2018). Researchers can conclude that big book media is one of the educational game tools that provides a million benefits for children, in using big book media teachers are also required to be creative in compiling and also taking pictures that will be posted on the big book. Researchers have proven that big book media can improve children's ablution skills, apart from children's ablution skills, there are still many studies that have discussed big book media for other early childhood skills. Conclusion Using big book media brings a significant influence in improving early childhood ablution skills. Growing ablution skills in children will be effective through habituation and coupled with the use of learning media such as big book media. So, teachers, parents, and adults around should provide examples and facilities for improving children's ablution skills. 
Therefore, Big Book is an alternative media that can be used by teachers in learning ablution to replace conventional methods that have been used in the classroom.
2022-09-15T15:55:43.628Z
2022-08-31T00:00:00.000
{ "year": 2022, "sha1": "b2140e6436e56aae2c94bde8a6e3bf6bd65abdfb", "oa_license": "CCBYSA", "oa_url": "https://obsesi.or.id/index.php/obsesi/article/download/3185/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3341fee89b449fb05b1afa9110a21ba728fb741e", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
4247811
pes2o/s2orc
v3-fos-license
A Molecular Assay to Quantify Male and Female Plasmodium falciparum Gametocytes: Results From 2 Randomized Controlled Trials Using Primaquine for Gametocyte Clearance Summary A sensitive molecular assay was developed to quantify male and female Plasmodium falciparum gametocytes. Its application in 2 clinical trials demonstrates that the early effects of primaquine may be due to gametocyte fitness rather than sex ratio. Artemisinin combination therapies (ACTs) have contributed substantially to declines in the burden of falciparum malaria in the last 15 years [1], due to their rapid clearance of asexual stage parasites and activity against immature gametocytes [2,3]. ACTs reduce the posttreatment transmission potential of parasites from infected humans to mosquitoes more than nonartemisinin treatments [4,5]. However, because of their incomplete activity against mature gametocytes, patients may remain infective to mosquitoes for up to 2 weeks after ACT treatment [6][7][8]. The only currently available drug able to clear mature sexual stage malaria parasites is the 8-aminoquinoline primaquine (PQ) [9], which has been used historically for gametocyte clearance as a single dose of 0.75 mg base/kg PQ in combination with a schizonticide [5,[8][9][10][11][12]. Concerns about hemolysis in individuals with glucose 6-dehydrogenase (G6PD) deficiency have contributed to the recent revision of the recommended dose for Plasmodium falciparum gametocyte clearance to 0.25 mg base/kg. The World Health Organization recommends that this low dose be provided alongside standard ACT without prior G6PD status screening, for the prevention of P. falciparum transmission in areas with ACT-resistant parasites, or areas close to elimination [13]. Though the aim of PQ treatment for P. falciparum is a reduction in gametocyte infectivity, there is limited direct evidence on mosquito infection prevalence in individuals treated with the currently recommended low dose [9,14,15]. In a recent trial where PQ was combined with a current ACT, the number of individuals infecting mosquitoes dropped from 93.3% to 6.7% over a time window when gametocyte prevalence and density appeared unaffected by PQ [14]. This pattern appears consistent; a review of P. falciparum transmission after PQ treatment shows that gametocyte infectivity is generally diminished prior to observable changes in gametocyte abundance [9]. An as yet untested hypothesis is that PQ may disproportionally affect male gametocytes and thus sterilize infections while gametocyte densities, largely determined by the more abundant female gametocytes [16][17][18][19], remain stable [20]. In vitro studies with Plasmodium berghei show that male gametocytes are more sensitive to a range of antimalarials [21], but PQ requires bioactivation in the liver, so in vitro studies are not possible. Because posttreatment gametocyte densities are commonly below the microscopic threshold for detection, molecular sex-specific gametocyte assays are required to test this hypothesis in clinical trials. The only previously published male gametocyte marker is insufficiently sensitive to quantify low-density male gametocytes [19]. Here, we present results from 2 randomized controlled trials of PQ in combination with dihydroartemisinin-piperaquine (DP), conducted in Kenya and Mali. While this is the first report on the trial in Kenya, gametocyte dynamics and transmission data (but not gametocyte sex ratio) were previously presented for the trial in Mali [14]. 
In this report, we assess the effect of PQ on gametocyte sex ratio using quantitative reverse-transcription polymerase chain reaction (qRT-PCR) with sex-specific RNA markers [19], including a highly sensitive novel male marker selected using sex-specific P. falciparum transcriptomic data (Pf3D7_1469900) [22]. Our analysis allowed the first direct assessment of the effect of PQ on gametocyte sex ratio and subsequent infectivity to mosquitoes. Study Design and Participants The Kenyan trial was conducted in Mbita Point, western Kenya, between September 2014 and September 2015. The study area is characterized by moderate malaria transmission [23]. During visits to local schools, 5- to 15-year-olds providing written consent for screening were tested for malaria infection by examination of 100 fields of a thick blood smear. Participants providing written informed consent were eligible if they were patent gametocyte carriers (1 gametocyte per 500 white blood cells [WBCs] in a thick film), 5-15 years of age, with P. falciparum monoinfection. Exclusion criteria were hemoglobin density of <9.5 g/dL, asexual parasite density of >200 000 parasites/µL, body mass index of <16 kg/m 2 or >32 kg/m 2 , tympanic temperature of >39°C, antimalarial treatment taken within 2 days, recent treatment with drugs known to be metabolized by the cytochrome P450 (CYP) 2D enzyme family, history of adverse reaction to study drugs, blood transfusion within 90 days, history or symptoms of chronic illness, or family history of any condition associated with extended QTc interval. The trial protocol received ethical approval from the Kenya Medical Research Institute Ethics Review Committee (#439), and the London School of Hygiene and Tropical Medicine Observational/Interventional Research Ethics Committee (#7323). Procedures All participants received a 3-day course of DP (Eurartesim, Sigma-Tau, Italy) alone or with a single low dose of PQ (0.25 mg base/kg) on the third day of DP treatment (day 2 of the trial). Enrollees and trial staff other than the trial pharmacist, who was involved only in randomization and drug administration, were blinded to treatment arm allocation. Primaquine (Sanofi-Aventis US LLC, Bridgewater, New Jersey) was prepared as previously described by dissolving crushed 1-mg tablets into distilled water and mixing with a taste-masking solution to a final concentration of 0.25 mg/kg child weight [8,24]. Participants assigned to the placebo arm received the same total volume of water and masking solution. Participants were examined at the study clinic on days 0, 1, 2, 3, 7, and 14 after enrollment. Blood samples were taken at all time points except for day 1, either by finger prick (day 0, day 2, and day 14) or venipuncture (day 3 and day 7). At all sampling points, hemoglobin levels were quantified by Hemocue photometer (HemoCue AB, Sweden), and asexual parasites and gametocytes were enumerated by examination of a Giemsa-stained thick film counting against 200 or 500 WBCs, respectively, and converted to densities per microliter assuming 8000 WBCs/µL. Two assays were used for gametocyte detection: quantitative nucleic acid sequence-based amplification (QT-NASBA) was used to determine gametocyte prevalence at enrollment and days 2, 3, 7, and 14 following treatment; qRT-PCR was used for gametocyte quantification [25] and sex ratio. 
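The conversion from thick-film counts to parasite densities described above is simple arithmetic. The following minimal sketch illustrates it; the function name and example counts are illustrative only and are not taken from the trial data.

```python
def density_per_ul(parasite_count, wbc_counted, assumed_wbc_per_ul=8000):
    """Convert a thick-film count against a known number of WBCs to parasites/uL,
    assuming 8000 WBCs/uL as in the trial protocol."""
    return parasite_count * assumed_wbc_per_ul / wbc_counted

# Hypothetical example counts: asexual parasites counted against 200 WBCs,
# gametocytes counted against 500 WBCs.
print(density_per_ul(40, 200))  # 1600.0 asexual parasites/uL
print(density_per_ul(3, 500))   # 48.0 gametocytes/uL
```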
To validate our sex-specific qRT-PCR assays, we first confirmed the ability to generate populations of male and female gametocytes using a recently published PfDynGFP/P47mCherry reporter line (Supplementary Figures 1-3). Then, we developed and optimized sex-specific qRT-PCR assays amplifying messenger RNA (mRNA) specific to the Pfs25 (female marker) and Pfs230p (male marker) genes [19] and more sensitive male targets (Supplementary Table 1). Pfs230p also showed limited sensitivity in our preliminary analyses (Supplementary Tables 2 and 3), and in our baseline surveys (gametocytes were undetectable in 9/112 baseline samples using Pfs230p). We thus designed a new qRT-PCR based on a more abundant gene transcript specific to male gametocytes: Pf3D7_1469900 (hereafter, PfMGET for male gametocyte-enriched transcript) [22]. In the Supplementary Data, we provide details of target selection (Supplementary Table 2), the development and validation of the PfMGET qRT-PCR assay (Supplementary Tables 3-6), and details on molecular gametocyte assay methodology for the Kenya and Mali trials. Pfs25 and PfMGET assays showed similar sensitivity, and a threshold for positivity was set at 1 gametocyte per sample (0.002/µL) for both assays. At day 3 and day 7, blood was provided to mosquitoes for direct membrane feeding assays, as previously described [26]. Low infectivity (38 oocyst-positive mosquitoes out of 8686 dissected, from 2 individuals in the DP arm) prompted additional feeds (n = 32) to be conducted with serum replacement at days 0, 3, and 7. Mosquitoes remained uninfected in these conditions. Later it was discovered that the Anopheles gambiae sensu stricto colony was infected with Microsporidia species, which have been shown to inhibit Plasmodium survival in mosquitoes [27]; this precluded meaningful assessments of infectivity in this trial. To test our findings in an independent dataset, and explore the relationship between sex ratio and infectivity, we performed a subsidiary analysis of blood samples from participants of a single-blind, dose-ranging, randomized trial of DP with PQ conducted near Ouelessebougou, Mali [14]. Primaquine was provided with the first dose of DP at baseline, at doses of 0.0625 mg/kg (n = 16), 0.125 mg/kg (n = 16), 0.25 mg/kg (n = 15), and 0.5 mg/kg (n = 17) (control with DP only, n = 16). The main outcomes of this trial were previously reported [14]. For the current study, blood samples were available from baseline and from 2, 3, 7, and 14 days after the administration of PQ. Direct membrane feeding assays for the assessment of infectivity to A. gambiae mosquitoes were successfully performed at baseline and at days 2 and 7 [14,26]. Sample Size Calculations Sample size calculations were based on the anticipated primary outcome, the prevalence of infectivity to mosquitoes in membrane feeding assays. A previous study in the same setting showed that 30% of submicroscopic gametocyte carriers and 80% of patent gametocyte carriers infected mosquitoes 7 days after DP treatment [7]. Assuming a conservative estimate of 30% infection after DP and assuming PQ would reduce this to <5% [28], a sample size of 60 participants per treatment arm, allowing for 10% loss to follow-up, was considered sufficient to detect this difference in infection rate with 90% power at a significance level of .05. 
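The stated sample size can be checked against a standard two-proportion power approximation. The sketch below uses Cohen's arcsine effect size with a normal approximation; the exact method used by the investigators is not described, so this is illustrative only, and the 30% and <5% infection proportions are the assumptions quoted above.

```python
from math import asin, sqrt
from scipy.stats import norm

p_dp, p_pq = 0.30, 0.05        # assumed proportions infectious: DP alone vs DP + PQ
alpha = 0.05
enrolled, loss = 60, 0.10
n = enrolled * (1 - loss)      # ~54 evaluable participants per arm

# Cohen's effect size h for two proportions (arcsine transform)
h = 2 * (asin(sqrt(p_dp)) - asin(sqrt(p_pq)))

# Approximate power of a two-sided two-sample test with n participants per group
z_crit = norm.ppf(1 - alpha / 2)
power = norm.cdf(h * sqrt(n / 2) - z_crit)
print(f"h = {h:.2f}, approximate power with {n:.0f} per arm: {power:.2f}")  # exceeds 0.90
```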
Figure 1. Trial profile: 5525 patients were assessed for eligibility, 161 had P. falciparum gametocytes, and 120 gametocyte carriers were enrolled and randomly assigned to receive either DP + placebo (n = 60) or DP + PQ (n = 60).

Data Analysis The primary efficacy endpoint for the Kenyan trial was the mean gametocyte clearance time (ie, number of days until gametocytes become undetectable by QT-NASBA [24] in the DP-PQ arm compared to the DP arm), calculated using previously presented mathematical models that allow clearance time to be extrapolated beyond the period of follow-up [29]. Secondary endpoints were the area under the curve (AUC) of QT-NASBA-based gametocyte density over time (gametocytes/µL × days) [5], analyzed using log 10 gametocyte density in linear regression models with adjustment for baseline gametocyte density, and qRT-PCR gametocyte prevalence, density, and sex ratio (proportion of total gametocytes that were male) at days 3 and 7. Stata software version 12.0 (StataCorp, College Station, Texas) and SAS software version 9.3 (SAS Institute, Cary, North Carolina) were used for statistical analysis. Baseline measures were compared between treatment arms using Student t test, Wilcoxon rank-sum tests, or χ 2 tests. Differences between treatment arms in gametocyte prevalence and (log 10 -transformed) density after baseline were assessed with linear and logistic models, after adjusting for gametocyte density at baseline. Differences in the proportion of male gametocytes between treatment arms were compared with Wilcoxon rank-sum tests at baseline, and linear models adjusted for baseline gametocyte density after treatment. Proportions within treatment arms were compared to paired baseline measures with Wilcoxon signed-rank test. The accuracy of gametocyte sex ratio estimates depends on the total number of gametocytes detected. To ensure robust sex ratio estimates, ratio analyses were restricted to samples with total qRT-PCR estimated counts of >16 gametocytes per sample [30], resulting in minimum gametocyte per microliter thresholds of 0.032 for Kenyan samples and 0.32 for Malian samples. RESULTS A total of 5525 children were screened, and 120 gametocyte carriers were enrolled into the Kenyan trial (Figure 1). Sixty were assigned to receive DP with a placebo, and 60 were assigned to receive DP-PQ. Four participants in the DP arm and 2 in the DP-PQ arm were lost to follow-up or excluded from the trial. The lowest hemoglobin concentration recorded during the trial was 6.9 g/dL, observed in 1 individual prior to PQ administration who was excluded from the trial. Minimum recorded hemoglobin level in all other participants was 8.6 g/dL. Gametocyte density at baseline was not significantly different between treatment arms by microscopy (P = .705) or qRT-PCR (females: P = .692; males: P = .784; combined males and females: P = .823) (Table 1). The median proportion of male gametocytes at baseline was 0.33 (IQR, 0.22-0.49) in the DP arm, and 0.32 (IQR, 0.17-0.53) in the DP-PQ arm (P = .547); that is, a sex ratio of approximately 1 male to 2 females (Table 2). All gametocyte density and sex ratio estimates were based on qRT-PCR. As a consequence of the higher input material for qRT-PCR in the Kenya trial (500 µL of whole blood for extraction, eluted in 10.5 µL water) compared to QT-NASBA (50 µL of whole blood for extraction eluted in 50 µL water), qRT-PCR gametocyte prevalence was considerably higher. 
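The sex ratio definition and minimum-density thresholds described under Data Analysis above follow directly from the sample volumes (roughly 500 µL in Kenya and 50 µL in Mali, as implied by the thresholds given in the text). The short sketch below restates these calculations; the function names and example densities are illustrative only.

```python
def proportion_male(male_per_ul, female_per_ul):
    """Proportion of total gametocytes that are male."""
    total = male_per_ul + female_per_ul
    return male_per_ul / total if total > 0 else float("nan")

def min_density_for_ratio(min_gametocytes, sample_volume_ul):
    """Minimum density (gametocytes/uL) at which a sample is expected to contain
    at least min_gametocytes, the cutoff used for sex ratio estimation."""
    return min_gametocytes / sample_volume_ul

# Sex ratio analyses were restricted to samples expected to contain >16 gametocytes:
print(min_density_for_ratio(16, 500))  # 0.032 gametocytes/uL (Kenya, ~500 uL extraction)
print(min_density_for_ratio(16, 50))   # 0.32 gametocytes/uL (Mali, ~50 uL extraction)

# Hypothetical densities corresponding to roughly 1 male per 2 females:
print(round(proportion_male(0.5, 1.0), 2))  # 0.33
```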
At day 3, total gametocyte density determined by qRT-PCR was decreased to 24.81% (IQR, 9.54%-48.29%) of its level at baseline in the DP arm, and 35.73% (IQR, 14.39%-80.82%) in the DP-PQ arm (P value after adjustment for baseline density = .03). By day 7, total gametocyte density was decreased to 22.31% (IQR, 7.38%-78.22%) of its baseline level in the DP arm, and 1.43% (IQR, 0.22%-8.19%) in the DP-PQ arm (P < .001). Gametocyte AUC was significantly lower in the DP-PQ arm (P = .018). Effect of DP and PQ on Male and Female Gametocytes By day 3, female gametocyte density was decreased to a median of 9.1% (IQR, 1.3%-33.5%) of its baseline level in the DP arm, and 14.0% (IQR, 2.9%-65.2%) of its baseline level in the DP-PQ arm (P = .152). At day 7, the density of gametocytes was significantly reduced in the DP-PQ arm relative to the DP arm (females: 0.05% [IQR, 0.0-0.7%] of baseline; males: 3.4% [IQR, 0.4%-32.9%] of baseline; P < .001). In the DP arm, sex ratios were lower than at day 3 but still higher than baseline (median proportion male: 0.44 [IQR, 0.26-0.68]; P = .002). Sex ratios among the low densities of gametocytes in the DP-PQ arm were significantly more male-biased than in the DP arm (median proportion male: 0.98 [IQR, 0.89-1.00]; P < .001 for matched measures at baseline, and group comparison with DP arm). Mosquitoes in an Independent Trial in Mali Pfs25 qRT-PCR-based gametocyte prevalence, density, and infectivity to mosquitoes for the trial in Mali have been reported elsewhere [14]. At baseline the median proportion of male gametocytes was 0.15 (IQR, 0.09-0.27) overall, and did not differ between any DP-PQ arms and the DP arm (P = .414-.996) (Figure 4). At day 2 (48 hours after PQ administration), there was a significant decrease in the proportion of individuals infecting mosquitoes and the proportion of mosquitoes these individuals infected at all PQ doses of >0.125 mg/kg (Figure 4). The median proportion of gametocytes that were male at day 2 was 0.28 (IQR, 0.14-0.53) in the DP arm. Adjusted for baseline gametocyte density, the proportion of male gametocytes was borderline significantly different in the 0.0625 mg/kg PQ arm (0.14 [IQR, 0.09-0.27]; P = .052), but not in any higher-dose PQ arms (proportion male: 0.15-0.55; P = .085-.434). The proportion of male gametocytes was similar between DP and DP-PQ arms at day 3 (P ≥ .358), but became highly male-biased at later time points (Table 7). Within each treatment arm, the proportion of male gametocytes at day 7 was significantly higher relative to baseline in the DP (P = .016) and the DP-PQ 0.125 mg/kg (P = .033) and 0.5 mg/kg (P = .043) dose groups, but not in the 0.0625 mg/kg (P = .477) or 0.25 mg/kg (P = .080) dose groups. There was no significant difference in the proportion of male gametocytes between infectious and noninfectious individuals at baseline (infectious/noninfectious: 55/26; P = .964), at day 2 (infectious/noninfectious: 21/57; P = .531), or at day 7 (infectious/noninfectious: 5/71; P = .244) (Figure 5). DISCUSSION Our findings support the effectiveness of a single dose of PQ (0.25 mg/kg) for shortening gametocyte carriage following ACT treatment [13]. Our sex-specific qRT-PCR suggests that PQ may not preferentially clear male gametocytes. At early time points when PQ has been shown to substantially decrease infectiousness to mosquitoes (24-48 hours) [14,20], the proportion of gametocytes that were male was comparable between the DP-PQ and DP-only arms. 
Before treatment, there is a nonlinear relationship between gametocyte density and infectivity to mosquitoes [31][32][33]. After PQ treatment, gametocyte density has no obvious relationship with infectivity, and infections appear to be sterilized [14,20,24]. White and colleagues highlighted these phenomena in historic studies and urged that trials of PQ efficacy based on gametocyte density measures be interpreted with caution [20]. In a recent PQ efficacy trial, it was postulated that because gametocytes were quantified using Pfs25-based molecular assays, only female gametocytes were counted, while infections may have been sterilized by clearance of the smaller undetected male population [14]. If total clearance of one sex was the cause of PQ's rapid sterilizing effects, a highly sensitive sex-specific assay would be capable of predicting posttreatment infectivity. Though we believe the assay presented in the current manuscript meets these requirements, our data indicate that PQ treatment has a similar effect to ACT treatment in terms of absolute sex ratio in the days after treatment when infections are sterilized. The extent of the effect varied between the 2 trials we describe in the current study and may relate to the timing of PQ and ACT administration and to the limited sample size of treatment arms in the Malian trial. However, neither trial provides evidence of the hypothesized increase in female bias after PQ treatment; the Kenyan trial indicates that male gametocytes may actually be cleared more slowly than females by both DP and DP-PQ. While gametocyte sex ratio is an important determinant of transmissibility and may be associated with gametocyte density, we observed no relation between posttreatment gametocyte sex ratio and infectivity. Collection of RNA samples from a larger number of gametocyte donors participating in mosquito feeding studies conducted prior to drug administration would allow more rigorous assessment of gametocyte sex ratio and its effect on the likelihood of onward transmission in natural infections. Our findings do not exclude the possibility that PQ's sterilizing effects are sex specific. The P. berghei-based in vitro dual gamete formation assay has shown that male gametocytes are more susceptible to a range of drugs with different modes of action [21]. The advantage of this system is that its endpoint (exflagellation in males, production of the translationally repressed Pfs25 protein in activated females) is based on gametocytes' ability to activate (ie, their fitness) rather than the presence of their mRNA, which may remain detectable in nonfunctional intact gametocytes. Quantifying gametocytes based on mRNA transcript numbers has limitations [34]. We used automated RNA extraction to minimize variation in extraction efficiency and assessed mRNA transcript numbers during gametocyte maturation. mRNA transcripts of Pfs25 and PfMGET increased sharply from stage II to stage V gametocytes, after which we found no evidence for age-dependent transcription patterns that might explain our findings of a faster reduction of female-specific transcripts following treatment (Supplementary Figure 4). Our findings suggest that the early sterilizing effect of PQ may be a consequence of reduced fitness, rather than immediate clearance of 1 or both gametocyte sexes. Delves et al. showed that percentage inhibition of activation by dihydroartemisinin was approximately 14 times higher for male gametocytes than for females [21]. 
Though this appears to conflict with our observation that male gametocytes are cleared more slowly by DP-PQ, both observations may be valid if (male) gametocytes were rendered nonfunctional by DP-PQ but remained in the circulation during sampling. As such, our study supports the notion that functional assays (be it gametocyte fitness or infectivity) are essential to determine the transmission-blocking properties of antimalarial drugs. Observations of microscopy's insensitivity [6,[35][36][37][38] and the significance of submicroscopic gametocyte densities for transmission [39][40][41] have placed great importance on quantifying the submicroscopic gametocyte reservoir. The recent realization that Pfs25 is transcribed specifically, or in far greater abundance, in female gametocytes affects previous interpretations of the Pfs25 readout as a measure of gametocyte density [19], as the male component of gametocyte biomass will have gone largely undetected and thus total gametocyte density will have been underestimated. The results of the current study shed some light on PQ's early sterilizing activity. Determining gametocyte sex ratio may have significant utility for examining the relationship between gametocyte density, sex, and infectivity to mosquitoes, and for the assessment of drugs causing clearance of one sex. However, our findings demonstrate that the preferential clearance of one gametocyte sex does not provide an explanation for the rapid sterilizing effect of PQ. Trials assessing transmission-blocking effects of drugs such as PQ should thus continue to rely on functional transmission read-outs such as mosquito feeding assays.

Figure 4. Infectiousness to mosquitoes, quantitative reverse-transcription polymerase chain reaction (qRT-PCR)-based male and female gametocyte density, and proportion male in the Malian study. A, Prevalence of infectiousness to mosquitoes among the study population in the direct membrane-feeding assay at days 0, 2, and 7. Asterisks (*) indicate significantly different infectiousness in logistic regressions relative to control. Mosquito infection was determined as the presence of any number of oocysts in the mosquito mid-gut 7 days after feeding. B, Proportion of total gametocyte density that is male (males per µL / [males per µL plus females per µL]), presented as individual data points, with the median and interquartile range (IQR). Proportion male was only presented or included in analyses if total gametocyte density was estimated to be ≥16 gametocytes per sample (16 gametocytes/50 µL = 0.32 gametocytes/µL). Data from each dose arm and time point are not included in the graph if proportion male was calculable for ≤4 individuals per arm. Asterisks (*) indicate significantly different proportion male in logistic regressions relative to control, adjusted for baseline density. C, Gametocyte density determined by qRT-PCR at baseline (day 0), and 24 (day 3) or 120 (day 7) hours after dihydroartemisinin-piperaquine (DP) or DP plus primaquine (PQ). Female gametocytes were quantified by extrapolating Pfs25 messenger RNA abundance from standard curves of known quantities of female gametocytes, and vice versa. Density is presented as median, IQR, and 10th-90th percentiles of gametocytes/µL for gametocyte-positive individuals only. Median female gametocyte density was calculated from <5 individuals in some dose arms at day 7 (0.5 mg/kg, n = 4) and day 14 (0.5 mg/kg, n = 2). Median male gametocyte density was calculated from <5 individuals in some dose arms at day 14 (0.25 mg/kg, n = 3; 0.5 mg/kg, n = 4).

Figure 5. A, Proportions of gametocytes that were male in samples taken at baseline from individuals whose whole blood was infectious or noninfectious to mosquitoes in the direct membrane feeding assay. Mosquito infection was determined as the presence of any number of oocysts in the mosquito mid-gut 7 days after feeding. B, Proportions of gametocytes that were male in samples taken at day 2 (48 hours after the first dose of dihydroartemisinin-piperaquine [DP] in both the DP and DP plus primaquine [PQ] arms, and the only dose of PQ in the DP-PQ arms) from individuals whose whole blood was infectious or noninfectious to mosquitoes in the direct membrane-feeding assay. "n" indicates the number of individuals for which proportion male was calculable, and for which mosquito feeding assays were conducted.
Cryo-electron microscopy structures and progress toward a dynamic understanding of KATP channels Puljung reviews recent cryo-EM KATP channel structures and proposes a mechanism by which ligand binding results in channel opening. Introduction The recent revolution in cryo-electron microscopy (cryo-EM), enabled by direct electron detection, fast camera readouts, and sophisticated computational methods, has produced an embarrassment of new structures (at or near atomic resolution) of proteins resistant to characterization by x-ray crystallography (Cheng, 2015;Cheng et al., 2015). Ion channels and transporters with sufficient mass to provide good contrast in cryo-EM were the immediate beneficiaries of this improved technology. To that end, the ATP-sensitive K + (K ATP ) channel, a massive (>800 kD) heteromultimeric complex, was an attractive target; the recent publication of four cryo-EM structures of K ATP by three different groups demonstrates just how attractive (Li et al., 2017;Martin et al., 2017a,b). K + currents inhibited by millimolar ATP concentrations were first recorded from heart muscle (Noma, 1983). Although K ATP is one of the most abundant channels in the cardiac sarcolemma, its role in the heart is relatively obscure. Its main function appears to be to attenuate the rise in intracellular Ca 2+ during ischemia, thus limiting the degree of cell death and tissue damage (Nichols, 2016). Perhaps better understood is the role of K ATP in triggering insulin secretion. The rise in blood sugar after a meal shuts K ATP channels in the plasma membrane of pancreatic β cells (Ashcroft et al., 1984). The sudden reduction in K + permeability, coupled with the high input resistance of β cells, depolarizes their plasma membranes, opening voltage-dependent Ca 2+ channels and thus enabling the exocytosis of insulin granules (Rorsman and Trube, 1985;Arkhammar et al., 1987;Nelson et al., 1987). The central role of K ATP in β cell excitability makes it an attractive target for pharmacological intervention for type 2 diabetes mellitus. Indeed, the commonly prescribed antidiabetic sulfonylurea (SU) drugs are K ATP antagonists (Ashcroft et al., 2017). The pore of K ATP comprises four inward rectifier K + channel subunits (Kir6.1 or Kir6.2; Fig. 1 A). Each Kir is associated with an SU receptor (SUR) subunit (SUR1, SUR2A, and SUR2B; Inagaki et al., 1997;Shyng and Nichols, 1997). The SUR is unusual in that it is part of a subgroup of the ATP-binding cassette (ABC) exporter family of proteins (ABCC) but lacks any intrinsic transport activity (Tusnády et al., 1997). Rather, it has evolved as a modulatory accessory subunit for the K ATP pore. The β cell isotype of K ATP is formed by Kir6.2 and SUR1 (Inagaki et al., 1995;Sakura et al., 1995). The two subunits are closely associated even at the chromosomal level (both residing at position 11p15.1) and must properly coassemble to exit the ER/Golgi and traffic to the plasma membrane. The channel is regulated by nucleotides acting at three classes of nucleotide-binding site (NBS; 12 sites in all) on the cytoplasmic side of the channel (Vedovato et al., 2015). The first class of site is located directly on Kir6.2 (Tucker et al., 1997). Binding of adenine nucleotides (ATP and ADP) to this site, in a reaction that does not require Mg 2+ as a cofactor, inhibits K ATP . The other two NBSs are located on SUR1 (Bernardi et al., 1992;Gribble et al., 1998). Like other ABC transporters, SUR1 has two cytoplasmic nucleotide-binding domains (NBDs; Aguilar-Bryan et al., 1995). 
As in bona fide ABC transporters, the NBDs of SUR1 form a head-to-tail dimer in the presence of Mg 2+ and nucleotides, with two NBSs formed at the dimer interface (Smith et al., 2002;Masia and Nichols, 2008;ter Beek et al., 2014). Binding of Mg nucleotides (ATP or ADP) to these sites activates K ATP . The tension between the inhibitory and stimulatory inputs from the three classes of NBS determines K ATP 's response to metabolic changes. Indeed, mutations in any NBS can result in diseases of insulin secretion, including neonatal diabetes or persistent hyperinsulinemic hypoglycemia of infancy (PHHI; Quan et al., 2011;Ashcroft et al., 2017). In addition to its modulation by adenine nucleotides, K ATP gating is also activated by anionic phospholipids. As for every other eukaryotic inward rectifier, binding of phosphoinositides to K ATP , in particular phosphatidylinositol 4,5-bisphosphate (PIP 2 ), is required to maintain channels in the open state (Fan and Makielski, 1997). This results in the unfortunate (from the electrophysiologist's perspective) tendency for K ATP currents to "run down" in excised membrane patches because of the activity of membrane-embedded lipid phosphatases and phospholipases (Hilgemann and Ball, 1996;Proks et al., 2016). This effect can be reduced to a great extent by the addition of EDTA in the bath solution (Lin et al., 2003). In addition to PIP 2 , K ATP activity is increased by the binding of several other anionic lipids, including PI 3,4,5-P 3 and PI-3,4-P 2 (Rohács et al., 2003), PI-4-P (Fan and Makielski, 1997), phosphatidic acid (Fan et al., 2003), and long-chain acyl-coenzyme A esters (Bränström et al., 1998;Rohács et al., 2003;Schulze et al., 2003). The influence of ligand binding on the open-closed equilibrium of K ATP is schematized in Fig. 2. This model is an adaptation of the modular gating scheme proposed by Horrigan and Aldrich (2002) to describe the influence of different domains on the gating of BK channels and is presented here as a heuristic to describe the convergence of agonist and antagonist influences on the open probability (P o ) of K ATP . The model incorporates elements of those previously used to discuss the influence of PIP 2 or nucleotides on K ATP gating (Enkvetchakul and Nichols, 2003;Vedovato et al., 2015). The pore domain is simplified here to undergo a single open-closed transition described by the equilibrium constant L. At the single-channel level, K ATP exhibits bursting behavior with at least two closed states: a short intraburst closed state and a longer-lived interburst closed state. K ATP agonists and antagonists primarily affect gating by prolonging or shortening the duration of the interburst closures (Rorsman and Trube, 1985;Ashcroft et al., 1988;Fan and Makielski, 1999;Li et al., 2002). The inhibitory NBSs exist in two states: a nucleotide-free "permissive" state and a nucleotide-bound "inhibited" state with an equilibrium affinity constant, K IB . Occupancy of each inhibitory site affects the open-closed transition of the pore by a coupling factor D (where D < 1). The PIP 2 site exists in an unoccupied "resting" state and a PIP 2 -bound "activated" state, with an affinity constant, K PIP . Each PIP 2 -binding event favors the open-closed transition by a factor C. Finally, the NBDs are considered as a single site with an equilibrium affinity constant K NBD . Upon dimerization, the NBDs transition from resting to activated, favoring channel opening by a factor of E for each occupied dimer. 
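One way to make the scheme in Fig. 2 quantitative is to write it as an allosteric partition function, assuming four independent, identical sites of each class per channel, each binding with affinity K when the pore is closed and f·K (f = C, D, or E) when it is open. The sketch below is a minimal illustration of this formulation with placeholder parameter values; it is not a fitted model of K ATP gating, and the specific numbers carry no experimental meaning.

```python
def p_open(L, pip2, anp, mg_anp,
           K_pip=1.0, K_ib=1.0, K_nbd=1.0,   # association constants (placeholder units)
           C=10.0, D=0.1, E=10.0):           # coupling factors: C, E > 1 activate; D < 1 inhibits
    """Open probability for a pore (closed<->open equilibrium L) coupled to three
    classes of four identical ligand-binding sites, as in the scheme of Fig. 2."""
    closed = (1 + K_pip * pip2) ** 4 * (1 + K_ib * anp) ** 4 * (1 + K_nbd * mg_anp) ** 4
    opened = (L * (1 + C * K_pip * pip2) ** 4
                * (1 + D * K_ib * anp) ** 4
                * (1 + E * K_nbd * mg_anp) ** 4)
    return opened / (closed + opened)

# Placeholder illustration: PIP2 raises Po, inhibitory nucleotide lowers it,
# and Mg-nucleotide binding at the NBDs partially restores it.
print(round(p_open(L=0.1, pip2=0, anp=0, mg_anp=0), 3))    # unliganded Po
print(round(p_open(L=0.1, pip2=10, anp=0, mg_anp=0), 3))   # + PIP2: Po rises
print(round(p_open(L=0.1, pip2=10, anp=10, mg_anp=0), 3))  # + inhibitory nucleotide: Po falls
print(round(p_open(L=0.1, pip2=10, anp=10, mg_anp=10), 3)) # + Mg-nucleotide at the NBDs: Po recovers
```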
SUR1 affects the pore-forming subunit in several distinct ways: (a) SUR1 increases the unliganded P o of the pore domain (possibly a direct effect on L; Babenko and Bryan, 2003;Chan et al., 2003;Fang et al., 2006); (b) SUR1 increases the apparent affinity for nucleotide inhibition at Kir6.2 (affecting either K IB or D; Tucker et al., 1997); (c) SUR1 confers Mg-nucleotide activation on Kir6.2 (coupling factor E; Tucker et al., 1997;Gribble et al., 1998); (d) SUR1 confers sensitivity to pharmacological inhibitors (SUs) and activators (affecting L, E, or both; Tucker et al., 1997); and (e) SUR1 may increase the affinity for PIP 2 or the ability of PIP 2 to stabilize the open channel pore (affecting K PIP or C; Pratt et al., 2011). Through truncation, mutation, and pharmacological manipulation, many or perhaps all of these processes have been shown to be functionally separable. However, a complete understanding of the complicated pas de deux between Kir6.2 and SUR1 must begin with an understanding of the structure of each subunit and the nature of their assembly into a complex.

Fig. 1 B. Side view of the 3.63-Å structure of K ATP (PDB accession no. 6BAA) in the presence of ATP and glibenclamide. For clarity, the pore domains of two of the Kir6.2 subunits have been removed, and only one SUR1 subunit is shown. Kir6.2 subunits are shown in yellow and brown. SUR1 is color-coded as follows: lavender, TMD0; orange, L0; blue, TMD1-NBD1; and green, TMD2-NBD2. The ochre blocks represent the approximate location of the lipid bilayer.

Below, I will discuss the new cryo-EM structures of K ATP , which represent the channel complex in a drug-inhibited state and two preopen conformations with Mg nucleotides bound at SUR1. I will describe in detail the protein-ligand interactions at Kir6.2 and SUR1, the resulting conformational changes in these subunits, and how a rearrangement of the interface between SUR1 and Kir6.2 may result in opening of the channel pore. Structure and assembly of K ATP Until recently, only low-resolution structural information was available for the K ATP complex, including a negative-stain EM structure of concatenated SUR1-Kir6.2 (18 Å), the EM structure of a tetrameric SUR2B (21 Å), and small-angle x-ray scattering and nuclear magnetic resonance studies of the NBDs (Mikhailov et al., 2005;Park and Terzic, 2010;de Araujo et al., 2011, 2015;López-Alonso et al., 2012;Fotinou et al., 2013). The ∼6-Å structure of the pancreatic β cell K ATP (Kir6.2/SUR1) was initially solved in the presence of the SU glibenclamide by two different groups using cryo-EM (Table 1; Protein Data Bank [PDB] accession numbers 5WUA and 5TWV; Li et al., 2017;Martin et al., 2017b). The two structures were remarkably similar, despite the presence of an (unmodeled) GFP tag on the C terminus of Kir6.2 in one of the constructs used for structure determination (Li et al., 2017) and the presence of ATP bound to Kir6.2 in the other (Martin et al., 2017b). Subsequent optimization of sample freezing conditions improved the image contrast significantly and allowed for the refinement of the inhibited K ATP structure down to 3.63-Å resolution (Table 1; PDB accession no. 6BAA; Figs. 1 B and 3; Martin et al., 2017a). The resulting structural model confirms many of the features expected from homologous Kir and ABC structures and years of structure-function studies on K ATP . Kir6.2 is a very typical inward rectifier (Fig. 1 B). 
It has two transmembrane helices (M1 and M2) separated by a reentrant pore helix and loop, the latter of which contains a K + channel signature sequence (TIGFG), conferring ion selectivity (Heginbotham et al., 1994). The cytoplasmic half of the pore is primarily lined by residues from the M2 helix, and residues at the bottom of this helix (in particular F168) come together to form a tight seal, indicating that the pore domain is closed in the SU-inhibited structure. N- and C-terminal extensions from the pore form a large, basket-shaped, cytoplasmic domain crowned by the G-loop gate, which is also closed. There are extensive interactions between subunits in the cytoplasmic domain (Figs. 1 B and 3). Mutations expected to disrupt these interactions destabilize the open state of the channel, causing inactivation (Shyng et al., 2000;Lin et al., 2003;Borschel et al., 2017). This includes several mutations associated with PHHI (Lin et al., 2008).

Fig. 2. The open-closed transition of the pore domain of K ATP (symbolized by the equilibrium constant L) is energetically coupled to three classes of ligand-binding site: four inhibitory nucleotide (either ADP or ATP, symbolized as ANP) binding sites on Kir6.2 (equilibrium association constant K IB ); four stimulatory PIP 2 binding sites on Kir6.2 (equilibrium association constant K PIP ); and four stimulatory MgANP sites formed by the dimerization of the NBDs of SUR1 (equilibrium association constant K NBD ). C, D, and E are coupling factors describing the interaction of the PIP 2 site, the inhibitory ANP site, and the NBDs with the channel pore, respectively.

Typical of ABC exporters, SUR1 has two six-helix transmembrane domains (TMDs; TMD1 and TMD2; Figs. 1 B and 4), each followed by an NBD. The ABC core of SUR1, as in other ABC proteins, is domain swapped, such that each half transporter is composed of helices from TMD1 and TMD2 (ter Beek et al., 2014). The first comprises TM6-8, TM11, and TM15-16; the second comprises TM9-10, TM12-14, and TM17. A short helix separates the domain swapped helices of each half-transporter module (TM15-16 in the first half transporter and TM9-10 in the second half transporter) and contacts a groove on the opposite NBD; the helix between TM15-16 contacts NBD1, and the helix between TM9-10 contacts NBD2 (Fig. 4). In a typical reaction cycle, ABC transporters transition between nucleotide-free, inward-facing states (access to the binding site of carried substrate on the cytoplasmic side) and nucleotide-bound, outward-facing states (release of substrate to the extracellular milieu; Higgins and Linton, 2004). In the SU-inhibited K ATP structure, the NBDs are unoccupied (ATP, but not Mg 2+ , was included in the sample) and spaced far apart (Fig. 4), resembling the "inward-facing" apo structures of the ABC transporters P-glycoprotein (PDB accession no. 4M2T; Li et al., 2014) and MsbA (PDB accession no. 3B5W; Ward et al., 2007), as well as the fellow ABCC family members transporter associated with antigen processing (TAP; PDB accession no. 5U1D; Oldham et al., 2016), multidrug-resistance protein 1 (MRP1; PDB accession no. 5UJ9; Johnson and Chen, 2017), and the unphosphorylated state of the cystic fibrosis transmembrane conductance regulator (CFTR; PDB accession nos. 5UAR and 5UAK; Zhang and Chen, 2016;Liu et al., 2017). 
In addition to being physically separated, there is a misalignment of the NBDs, which is not present in the symmetric P-glycoprotein or MsbA structures but is visible to a lesser extent in the unphosphorylated structure of CFTR. SUR1 has a bundle of five transmembrane helices (TMD0) N terminal to the ABC core ( Fig. 1 B). A similar motif is present in the structure of MRP1 (Johnson and Chen, 2017). However, the resolution in this region of MRP1 was not sufficient to build a complete de novo atomic model. Interestingly, whereas there does appear to be some degree of conservation of the overall protein fold, there is little to no sequence conservation between MRP1 and SUR1 in this region. TMD0 is connected to TMD1 via an intracellular loop known as L0 (sometimes called the CL3 linker). The structure of the C-terminal two thirds of this loop is conserved in MRP1 and is also present at the N terminus of CFTR, where it is known as the lasso domain (Zhang and Chen, 2016;Liu et al., 2017;Zhang et al., 2017). In the SU-inhibited structure of K ATP , the primary contact between Kir6.2 and SUR1 is mediated via a series of hydrophobic interactions between the M1 (outer) helix of Kir6.2 and the TM1 helix of the TMD0 domain ( Fig. 1 B). Additional contacts between L0 and the cytoplasmic domain of Kir6.2 are also evident. In the presence of glibenclamide, the ABC core domains of K ATP are tilted in the membrane plane and splayed outward from the pore, with NBD2 farthest away. Our understanding of K ATP structure and assembly was recently enhanced by the publication of two further structures of a concatenated SUR1-Kir6.2 construct (Ser-Ala-Ser-Ala-Ser-Ala linker) in the presence of Mg 2+ , ATP, and dioctanoyl PIP 2 (diC 8 PIP 2 ; Table 1 and Fig. 3; Lee et al., 2017). These represent active (nucleotide-bound) conformations of SUR1 and suggest a mechanism by which occupancy of the NBSs of SUR1 may be related to the channel pore. Modulation of Kir6.2 by PIP 2 and ATP Central to K ATP 's function as a metabolic sensor is its inhibition by increased intracellular ATP concentrations after glucose uptake by pancreatic β cells. The inhibitory action of ATP is independent of Mg 2+ and was shown to be intrinsic to Kir6.2 by expression of a C-terminally truncated subunit (Kir6.2-ΔC) that traffics to the plasma membrane in the absence of SUR (Tucker et al., 1997). Also intrinsic to the Kir6.2 subunit is PIP 2 modulation (Fan and Makielski, 1997;Shyng et al., 2000). Binding of PIP 2 antagonizes ATP inhibition (Baukrowitz et al., 1998;Shyng and Nichols, 1998;Fan and Makielski, 1999). Channel rundown, likely caused by degradation of PIP 2 by endogenous lipid phosphatases and phospholipases, increases the apparent affinity for ATP (Baukrowitz et al., 1998;Ribalet et al., 2000). The effects of PIP 2 and ATP are well modeled by a scheme that assumes that binding of PIP 2 and ATP are mutually exclusive (Enkvetchakul and Nichols, 2003). However, it remains a possibility that the two ligands antagonize each other allosterically (i.e., through changes in channel P o ; changing L in Fig. 2). Putative PIP 2 -binding residues in Kir6.2 can be identified through structural and sequence alignment with Kir3.2 (GIRK2), the structure of which has been solved in the presence of PIP 2 (PDB accession no. 3SYA; Whorton and MacKinnon, 2011). 
Phospholipid-binding residues are located on regions of Kir6.2 at the membrane-water interface, including the ends of the M1 and M2 helices, the loop connecting the slide helix (an N-terminal amphipathic helix) to M1, and the helix immediately after M2. Many of these residues (Fig. 5,A and B,green) are conserved between Kir3.2 and Kir6.2. The exceptions are N41 (lysine in Kir3.2) and H175/R176 (both lysines in Kir3.2). R177 of Kir6.2 has also been implicated in PIP 2 binding (Fan and Makielski, 1997;Shyng et al., 2000), but it seems to be oriented away from the putative PIP 2 -binding surface in the structure (Fig. 5 B). Whereas diC 8 PIP 2 was included by Lee et al. (2017) in their samples, no density in the putative PIP 2 -binding site was detected in their structures. This is likely the result of antagonistic binding of ATP at the inhibitory site, which may be responsible for the relative constriction in the PIP 2 -binding pocket compared with that of Kir3.2. Alignment of the Lee et al. (PDB accession nos. 6C3O and 6C3P) structures with the SU-inhibited structure determined in the absence of PIP 2 (PDB accession no. 6BAA) reveals no significant differences at the putative PIP 2 site. In Kir3.2, PIP 2 binding is accompanied by a 15° clockwise rotation (when viewed from the intracellular side) of the cytoplasmic domain, which pulls the pore open (model derived from PDB accession no. 3SYQ; Whorton and MacKinnon, 2011). Both Martin et al. (2017b) and Li et al. (2017) report a subset of molecules in their micrographs for which two (of four) cytoplasmic domains of the Kir6.2 subunits are rotated 9-14° relative to their position in the majority of the molecules used for structural analysis. Li et al. identified a small density in the putative PIP 2 site of this subset of particles that they speculate may have been endogenous PIP 2 that copurified with the channel protein. However, the resolution of this structure was quite limited (8.5 Å), so any such assignments are tentative and should be interpreted with caution. In all but one of the K ATP structures determined (the Li et al. structure was determined in the absence of ATP; Li et al., 2017), there is strong density corresponding to ATP bound at the inhibitory site on Kir6.2, encompassing residues from the cytoplasmic N and C termini (Fig. 5 C;Lee et al., 2017;Martin et al., 2017a,b). As for the PIP 2 site, alignment of the inhibitory ATP-binding sites in all three structures reveals no significant changes. ATP adopts an unusual conformation in these structures with the phosphates curled back toward the purine ring (Fig. 5, C and D). This configuration of ATP resembles that in the noncanonical ATP binding site on P2X4 (Hattori and Gouaux, 2012). The γ phosphate of ATP is exposed to solvent and adopts different rotamers in the different structures (PDB accession no. 6BAA vs. 6C3O; Fig. 5 D). Such flexibility may explain why ADP, as well as nucleotides with substitutions at this position, still bind to and inhibit K ATP (Ämmälä et al., 1991;Wang et al., 2002;Proks et al., 2010). ATP-binding residues are highlighted in red in Fig. 5 (A and B) and shown as yellow sticks in Fig. 5 C. Several of these have been identified by previous site-directed mutagenesis experiments (Drain et al., 1998;Tucker et al., 1998;Proks et al., 1999;Ribalet et al., 2003;Antcliff et al., 2005). 
In particular, substitutions at positions G334 and I182 have been shown to almost completely disrupt nucleotide binding without affecting the intrinsic open-closed transition of the Kir6.2 pore (Li et al., 2000, 2005). G334D, which is nearly completely insensitive to ATP inhibition at concentrations <10 mM, has been extensively exploited to study nucleotide activation of K ATP in the absence of any inhibition (Drain et al., 1998;Li et al., 2002;Proks et al., 2010, 2014). I182 and G334 mutations have been interpreted as exerting their effects by preventing nucleotide binding (affecting K IB ). Inspection of the structure suggests that this interpretation is correct. The overall structure of the ATP-binding site and the majority of the ATP-binding residues identified in this structure are conserved in Kir3.2, with the exception of I182 (conservatively substituted to valine), K185 (threonine in Kir3.2), and G334 (H in Kir3.2). Why is Kir3.2 not inhibited by ATP? The increased bulk at the position equivalent to 334 in Kir3.2 (H357) would be sufficient to prevent binding of ATP. Furthermore, whereas an isoleucine to valine substitution at the position equivalent to 182 in Kir3.2 (V205) may be conservative, even conservative substitutions at position 182 (e.g., I182L) disrupt ATP binding to Kir6.2 (Li et al., 2000). Two other residues, E179 and R201, were suggested from previous modeling and mutagenesis to contribute directly to ATP binding (Shyng et al., 2000;Antcliff et al., 2005). The structure of the ATP binding site (Fig. 5 C) clearly demonstrates that neither residue binds nucleotide. R201 is a potential hydrogen bonding partner for the backbone carbonyl of F333 and thus may stabilize the short helix encompassing F333 and the critically important G334. E179 appears to form a salt bridge with R176 of the PIP 2 site, so the effect of mutating E179 on ATP inhibition is likely to be allosteric. Strikingly, the inhibitory ATP-binding site is positioned very close to the putative PIP 2 site (Fig. 5, A and B). Residue N41 from the PIP 2 site is located on the same N-terminal loop of Kir6.2 as residues N48 and R50 of the ATP-binding site. K185, which coordinates the phosphates of ATP, is just downstream of the helix that contains H175, R176, and R177 of the PIP 2 site. The distinct nature of these binding sites eliminates the suggestion that ATP and PIP 2 may directly compete for the same site (MacGregor et al., 2002). However, it is tempting to speculate that binding of one ligand (e.g., ATP) can distort the binding site for the other, indicating a direct interaction between the sites, rather than the two being allosterically coupled solely through the pore domain (i.e., via "?" in Fig. 2, rather than changes in L). Indeed, it has been pointed out that the constriction of the PIP 2 site in ATP-bound Kir6.2 relative to Kir3.2 is caused by ATP-binding residues N48 and R50. Further experiments that directly measure nucleotide and/or PIP 2 binding to intact channels are required to test this model and determine whether the two ligand-binding sites function as a "PIP switch" to activate or shut the channel.

Fig. 5. (A) Alternate subunits are colored yellow and brown. The inhibitory nucleotide binding site is colored red. Putative PIP 2 -binding residues are colored green. (B) Close-up view of the boxed region from A with putative PIP 2 -binding residues (green sticks) labeled. ATP is shown in cyan. ATP-binding residues are shown as red sticks. (C) Close-up view of the ATP-binding site. Residues that contact ATP directly are shown as yellow sticks. Amino acids in white are those previously identified as affecting the apparent affinity for ATP but that do not contribute directly to ATP binding in the structure. R176 from the putative PIP 2 site is colored green. Residues in parentheses are contributed by the adjacent subunit (brown in B). The EM density shown for these residues was contoured at 1.5 σ. (D) EM density (contoured at 3 σ) for ATP bound to Kir6.2 in the inhibited structure (PDB accession no. 6BAA, top) and quatrefoil structure (PDB accession no. 6C3O, bottom).

Contribution of SUR1 to ATP/PIP 2 binding and the intrinsic gate of Kir6.2 In all of the available K ATP structures, the major contact between Kir6.2 and SUR1 is between the M1 helix of Kir6.2 and the first transmembrane helix of TMD0. What is the functional consequence of this interaction? Coexpression of SUR1 with Kir6.2-ΔC increases the intrinsic P o (Fig. 2, L). Unexpectedly, the increase in P o is accompanied by an increase in the apparent affinity for inhibition by ATP rather than a decrease that would be expected for a ligand that stabilizes the closed state of a channel (change in K IB ; Tucker et al., 1997;Babenko and Bryan, 2003;Chan et al., 2003;Fang et al., 2006). This suggests that residues from SUR1 may contribute directly to the ATP- or PIP 2 -binding sites. Fig. 6 shows that residues from TMD0 and L0 can be found in close apposition to the ATP-binding site. In particular, residues Q52 at the end of the slide helix of Kir6.2 and E203 from L0 are within 5 Å of one another. When both residues are mutated to cysteine, they can form a disulfide bond that locks the channels closed, indicating that interactions in this region directly impact channel gating (Pratt et al., 2012). R176 and R177 are also located close to this interface. Both of these residues have been implicated in transduction of nucleotide binding signals from SUR1 to Kir6.2 (John et al., 2001). A mutation (E128K) in TMD0 prevents the increase in P o that accompanies coexpression of SUR1 with Kir6.2. This residue faces away from the binding interface of SUR1 and Kir6.2 but may affect gating indirectly by upsetting the interface between TM3 and the L0 loop (Pratt et al., 2011). E128K channels do not respond to PIP 2 stimulation, suggesting that this domain may increase P o by stabilizing interactions between Kir6.2 and PIP 2 (Fig. 2, K PIP or C). One could speculate that the interactions described between TMD0-L0 and Kir6.2 in Fig. 6 stabilize ATP binding to the inhibitory site. However, this interpretation does not account fully for results from the expression of "mini K ATP " channels formed by Kir6.2-ΔC and TMD0 or TMD0/L0. Kir6.2-ΔC has a low intrinsic P o (0.09-0.15; Babenko and Bryan, 2003;Chan et al., 2003). Coexpression with TMD0 (residues 1-195) enhances the P o by increasing the single-channel burst duration and lowers the apparent affinity for ATP inhibition (Babenko and Bryan, 2003). The decrease in apparent affinity is the expected result if ATP is considered to be an allosteric blocker that preferentially stabilizes the closed state of the channel (or destabilizes the open state; Li et al., 2002). Coexpression of Kir6.2-ΔC with TMD0 and the first portion of L0 (residues 1-232) results in channels that are nearly constitutively active (P o > 90%) and much less sensitive to ATP (Babenko and Bryan, 2003). As this construct encompasses the entire site depicted in Fig. 
6, it is unlikely that such interactions directly stabilize ATP binding. However, it should be noted that coexpression of Kir6.2-ΔC with TMD0-L0 constructs longer than 232 amino acids progressively decreases P o , although not to the level of Kir6.2-ΔC alone (Babenko and Bryan, 2003). Residues in NBD2 and the distal C terminus of SUR1 and SUR2A have been shown to contribute to the increased apparent affinity for inhibition of K ATP by ATP (Babenko et al., 1999b). Whether this represents a direct effect on ATP binding (K IB in Fig. 2) or an allosteric effect on the binding site is not known. The interactions between TMD0-L0 and Kir6.2 highlighted in Fig. 6 may explain the increased P o resulting from coexpression of Kir6.2 with SUR1, but the nature of the increased apparent affinity for ATP inhibition remains a mystery. Nucleotide binding to the NBDs of SUR1 Lee et al. were able to resolve two different structures of K ATP in the presence of Mg 2+ , ATP, and PIP 2 (Fig. 3). These structures, termed "propeller" (PDB accession no. 6C3P, 5.6 Å) and "quatrefoil" (four leafed, PDB accession no. 6C3O, 3.9 Å), based on their appearance when viewed perpendicular to the membrane plane, differ mainly in a rigid-body movement of the ABC core domain. In both structures, the NBDs are dimerized with Mg nucleotides bound to each site. Fig. 7 shows the NBDs of the quatrefoil structure. The NBSs of SUR1 are asymmetric. Like other ABCC family members, SUR1 has one catalytically (ATP hydrolysis) competent NBS (NBS2, the consensus site) and one site that is unable to catalyze hydrolysis of ATP (NBS1, the degenerate site; ter Beek et al., 2014;Vedovato et al., 2015). Interestingly, whereas the samples were prepared in the presence of ATP, NBS2 is occupied by MgADP rather than MgATP in both the propeller and quatrefoil structures. This suggests that in the time between sample preparation and flash freezing, the ATPase activity of NBS2 was sufficient to hydrolyze bound ATP. Alternatively, the ADP may have been present as an impurity in the original ATP stock, and occupancy of NBS2 by ADP may simply reflect a higher binding affinity for ADP over ATP . Absent any direct measurement of real-time binding affinity of the two NBSs, it is difficult to differentiate between these two possibilities. The overall structure of the NBD dimer is similar to that of other ABC transporters and to a previous homology model of the NBDs based on the structure of MJ0796 (Smith et al., 2002;Masia and Nichols, 2008). The two NBSs are formed at the dimer interface with portions of each site contributed by each NBD. The A loop (orange), Walker A , and Walker B motifs (red) of NBS1 are contributed by NBD1, whereas the ABC signature sequence (magenta) of NBS1 is contributed by NBD2. Likewise, the A loop and Walker motifs of NBS2 are contributed by NBD2, and the signature sequence of NBS2 is located on NBD1. The purine rings of ATP/ADP are coordinated via π-stacking interactions with aromatic residues of the A loop of each binding site (W688 in NBS1 and Y1353 of NBS2). The Walker A lysine residues (K719 in NBS1 and K1384 in NBS2) coordinate the β and γ phosphates of ATP in NBS1 and the β phosphate of ADP in NBS2 as expected (Smith et al., 2002;ter Beek et al., 2014). The Walker B domains contain pairs of acidic residues (D853/D854 in NBS1 and D1505/ E1506 in NBS2). The first residue of each pair coordinates Mg 2+ . 
The second binds and polarizes the attacking water molecule in the hydrolysis reaction in typical ABC transporters (ter Beek et al., 2014). The ABC signature sequence of NBS2 (LSGGQ, residues on NBD1) is intact, but it is replaced by FSQGQ in NBS1, perhaps explaining why that site is unable to hydrolyze ATP (Fig. 7; Matsuo et al., 1999). Each NBD is composed of a RecA subdomain (678-781 in NBD1 and 1,343-1,434 in NBD2) and a helical subdomain (782-877 in NBD1 and 1,435-1,498 in NBD2). In typical ABC transporters, the RecA and helical subdomains within each NBD rotate toward one another upon nucleotide binding, assuming a more compact conformation (ter Beek et al., 2014). When NBD1 and NBD2 of the quatrefoil form of SUR1 are aligned with the NBDs of the SU-inhibited structure at their respective helical subdomains, it becomes apparent that NBD2 adopts a more compact conformation in the presence of agonist, but NBD1 does not. This results in a large gap between MgADP and the signature sequence in NBS2. Lee et al. (2017) speculate that this gap may be sufficient to allow nucleotide dissociation from NBS2, allowing MgADP to equilibrate at this binding site from solution, even when the NBDs are dimerized. ABC exporters use the hydrolysis of ATP as a "power stroke" to change conformation and move solutes across the plasma membrane (Higgins and Linton, 2004). Previous work, including electrophysiological studies and photoaffinity labeling experiments, suggests that K ATP is activated by ATP binding (with or without Mg 2+ ) to NBS1 and MgADP binding to NBS2 (Vedovato et al., 2015). It is widely believed that K ATP activation by MgATP requires hydrolysis of ATP to ADP (Zingman et al., 2001). However, whereas NBS2 of SUR1 is competent to hydrolyze ATP (de Wet et al., 2007), it remains unclear whether ATP hydrolysis occurs on the timescale of channel gating. The measured specific hydrolysis rate of the concatenated SUR1-Kir6.2 construct used by Lee et al. for structural characterization was 0.02 s −1 (50 s for one reaction cycle to occur), a value close to that previously reported for SUR1 (0.03 s −1 ; de Wet et al., 2007). The specific hydrolysis rate for the bona fide ABC transporter P-glycoprotein (with two active ATPase sites) was much faster (0.62 s −1 ) when measured using the same technique used by Lee et al. (Kim and Chen, 2018). Interestingly, the hydrolysis rate for purified CFTR, an ABCC family member that is also an ion channel, was 0.06 s −1 , only threefold faster than that for K ATP . It is important to note, however, that ATP hydrolysis by CFTR results in the termination of opening bursts (Csanády et al., 2010), not channel activation, as has been proposed for K ATP . The hydrolysis rate for the SUR1-Kir6.2 concatemer was much slower than the rate at which macroscopic K ATP current appears after MgATP application to Kir6.2-G334D/SUR1 channels (which are not inhibited by nucleotides; Proks et al., 2010). However, it should be noted that the time course of the increase in K ATP current after MgATP application depends on multiple rate constants (binding, dissociation, conformational changes, and hydrolysis, if present), so direct comparison of the macroscopic activation rate to the ATP hydrolysis rate is not possible absent further information. K ATP channel current is directly activated by MgADP binding in the absence of ATP, indicating that the act of hydrolysis in and of itself is not required to drive an activating conformational change in SUR1 (Proks et al., 2010). 
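For orientation, the specific hydrolysis rates quoted above convert to approximate cycle times by simple reciprocal arithmetic; the short sketch below merely restates the numbers cited in the text.

```python
# Specific ATPase rates (s^-1) quoted above for each protein
rates = {"SUR1-Kir6.2 concatemer": 0.02, "SUR1 alone": 0.03,
         "P-glycoprotein": 0.62, "CFTR": 0.06}

for name, k in rates.items():
    print(f"{name}: ~{1 / k:.1f} s per hydrolysis cycle")

print(f"CFTR vs the SUR1-Kir6.2 concatemer: {0.06 / 0.02:.0f}-fold faster")  # threefold, as noted above
```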
Saturating concentrations of ATP activate Kir6.2-G334D/SUR1 channels to the same extent as ADP, and differences in the relative ability of the two ligands to gate the channel can be explained by differences in their binding affinities (Proks et al., 2010;Vedovato et al., 2015). The CFTR gating cycle involves hydrolysis of ATP. As a consequence, single-channel analysis of CFTR currents indicates a step in the gating process that is not at equilibrium (i.e., irreversible; Csanády et al., 2010). In contrast to this, single-channel analysis of K ATP gating shows no evidence of nonequilibrium gating (Choi et al., 2008). Thus, it remains an open question whether ATP must be hydrolyzed to ADP to activate Kir6.2 via the NBDs of SUR1, and further investigation is required. Glibenclamide binding and inhibition of K ATP The higher-resolution inhibited structure of K ATP shows clear density for glibenclamide bound in the transmembrane region of SUR1 (Fig. 8 A; Martin et al., 2017a). The drug is sandwiched between residues on helices TM7, TM8, and TM11 on one side and TM15 and TM17 on the other (Fig. 8 C). Most of the amino acid residues at this binding interface are conserved between SUR1 and SUR2A with the exception of S1238 (Y in SUR2A) and T1242 (S in SUR2A). These substitutions may explain the relative insensitivity of Kir6.2/SUR2A channels to glibenclamide and tolbutamide compared with Kir6.2/SUR1 (Venkatesh et al., 1991). Previous studies using SUR1/SUR2A chimeras, taking advantage of this difference in SU apparent affinity, implicated TM16-TM17 as part of the binding site (Ashfield et al., 1999;Babenko et al., 1999a). In the structure, residue S1238 of TM16 is positioned close to the cyclohexyl moiety at one end of glibenclamide (Fig. 8 C). Substitution of S1238 with tyrosine (the equivalent SUR2A residue) in SUR1 reduces the apparent SU binding affinity, whereas replacing the tyrosine in SUR2A with serine confers higher sensitivity to the SU gliclazide (Ashfield et al., 1999;Proks et al., 2014). The structure suggests that the introduction of a bulkier residue at that position may create a steric clash that destabilizes SUs at this site. The SU tolbutamide has a butyl chain at the same position as the cyclohexyl moiety of glibenclamide. This aliphatic chain would also be expected to clash with a tyrosine residue at a position analogous to S1238 in the SU binding site (i.e., as in SUR2A). Many of the residues in close apposition to glibenclamide in Fig. 8 C had not previously been implicated in drug binding. As an additional test of their structure, Martin et al. (2017a) used site-directed mutagenesis coupled with patch-clamp and flux assays to verify the contribution of these residues to SU inhibition. SUs affect K ATP channels formed by Kir6.2/SUR1 in two distinct ways: they prevent nucleotide-dependent stimulation, and they reduce the intrinsic P o of K ATP (Proks et al., 2014). These two effects may be physically separable. In Kir6.2/SUR2A, SUs only affect the P o in the absence of nucleotides (L), not nucleotide stimulation (Fig. 2, E or K NBD ; Proks et al., 2014). The structure of K ATP bound to glibenclamide suggests that SUs may prevent nucleotide stimulation of K ATP by wedging themselves in between the two half-transporter domains of the ABC core, thus preventing association of the NBDs. However, this observation does not explain the effect of SUs on the intrinsic P o of K ATP . Fig. 8 B shows a surface representation of the glibenclamide binding site. 
Glibenclamide is nestled in a pocket that follows its contours very closely. By comparison, when the same region is shown in the propeller form of K ATP , the site collapses, such that it can no longer accommodate drug binding. This is an expected consequence of a model in which glibenclamide prevents NBD association by disrupting the movement of TMD1 and TMD2 and explains the observation that nucleotide binding to the NBDs destabilizes SU binding (Bernardi et al., 1992;Ueda et al., 1999). Nucleotide binding has the opposite effect on a class of K ATP -selective K + channel openers (KCOs), slowing their dissociation from SUR2A/B (Reimann et al., 2000). Binding studies have implicated TM16-TM17 and the loop between TM13 and TM14 in KCO binding (Uhde et al., 1999;Hambrock et al., 2004). Mutagenesis and binding studies describe a near-continuous surface in L0 that was proposed to form an SU-binding site (Mikhailov et al., 2001;Vila-Carriles et al., 2007;Ashcroft et al., 2017). In light of this information, Li et al. (2017) initially suggested that there was density in their 5.6-Å structure that might correspond to bound glibenclamide at this position. The 3.63-Å structure of K ATP bound to glibenclamide revealed this density to be previously unmodeled residues. The effects of mutating L0 on glibenclamide binding are likely to be either allosteric through an effect on channel opening (L0 has been shown to affect intrinsic P o , i.e., L, of Kir6.2; Babenko and Bryan, 2003) or via an indirect destabilization of the glibenclamide site by L0. SUR1 is not a bona fide ABC exporter Whereas SUR1 has no known transport function, it remains a possibility that it transports some as-yet-unidentified ligand. However, comparison of the nucleotide-bound (quatrefoil) structure of the ABC core of SUR1 with that of the ABC transporter Sav1866 in the presence of ADP (PDB accession no. 2HYD) suggests that SUR1 is not capable of moving solutes across the plasma membrane (Fig. 9; Dawson and Locher, 2006). In the presence of ADP, Sav1866 adopts an obviously outward-facing conformation, which is necessary to release transported solute molecules into the extracellular space. In contrast to this, the quatrefoil form and propeller form of SUR1 are closed on the extracellular side, despite association of the NBDs. Therefore, any transport function for SUR1 remains unlikely. Conformational changes in K ATP : The pore domain The acquisition of K ATP structures in three distinct conformations (inhibited, propeller, and quatrefoil) allows for some speculation about the conformational changes associated with channel gating. All of these structures were solved in the presence of ATP, which was resolved in the inhibitory site of Kir6.2 of each form (Martin et al., 2017a,b). Therefore, one might reasonably expect the pore domain to be closed in all three structures. Fig. 10 A clearly demonstrates that whereas there may be global differences in the conformation of the three different structures of K ATP (Fig. 3), there are no apparent structural changes in the pore domain. The channel gate is clearly closed in all three structures, with F168 occluding the pore at the M2 bundle crossing. The cytoplasmic domain of Kir6.2, on the other hand, rotates 11.5° clockwise (when viewed from the intracellular side) between the inhibited and quatrefoil structures (Fig. 10 B). A similar rotation (15° clockwise) underlies the putative opening transition of Kir3.2 (Whorton and MacKinnon, 2011).
A comparison of various crystal forms of KirBac3.1 highlights a 23° rotation of the cytoplasmic domains (Clarke et al., 2010). In two structures of KirBac3.1 in the nontwist form (clockwise rotation of the cytoplasmic domains relative to the twist form, when viewed from the intracellular side), the selectivity filter is in a conductive conformation (i.e., there is density for four K + ions). Structures in which the cytoplasmic domain is in the twist form all have changes in ion occupancy, which the authors believe to reflect blocked or subconductance states. Interestingly, these changes in ion occupancy occur with no concomitant change in the opening of the bundle-crossing gate. The model of the quatrefoil state of K ATP (PDB accession no. 6C3O) features three K + ions in the selectivity filter. The other structural models in the PDB (accession nos. 6BAA and 6C3P) do not include any ions in the selectivity filter, but inspection of the EM density in the pore region (contoured at 5 σ) shows that there may, in fact, be ions in the selectivity filters of all three structures (Fig. 10 C). It is tempting to speculate that these putative changes in ion occupancy may result from rotations in the cytoplasmic domains and that the pore structure in the propeller and quatrefoil forms may be described as adopting a preopen closed state. However, it should be noted that cytoplasmic domain of a mutant KirBac3.1 (S129R) with an open bundle crossing gate is in the twist form, arguing against the idea that a clockwise rotation of the cytoplasmic domain results in channel opening, at least in prokaryotic Kirs (Bavro et al., 2012). Conformational changes in K ATP : SUR1 The ABC core domain of SUR1 in the presence of glibenclamide adopts an inward-facing conformation with the NBDs spaced very far apart and offset relative to one another (Figs. 4 and 11). Subsequent to binding Mg nucleotides, the NBDs of SUR1 align, forming a dimer, and stabilizing a conformational change that brings TMD1 and TMD2 closer together (Fig. 11). This "activated" conformation of the SUR1 ABC core is very similar in the propeller and quatrefoil forms. Currently, there is no structure for a nucleotide-and SU-free apo state of SUR1. It remains a possibility that the inhibited structure may represent distortions introduced by drug binding between the two half transporters of the core domain. Thus, Fig. 11 also includes a speculative apo state of SUR1, based on the apo structure of the bacterial transporter TM287/288 (PDB accession no. 4Q4H;Hohl et al., 2014). This structure was chosen because the NBDs of TM287/288, like SUR1, contain one consensus NBS and one degenerate site. In the apo structure of TM287/288, the NBDs remain partially associated in an open-dimer conformation, even in the absence of nucleotides. A similar structure was observed for the transporter MsbA. The crystal structure of MsbA was solved in two different apo states, one of which (closed apo, PDB accession no. 3B5X) is inward facing, but with the NBDs in close apposition (Ward et al., 2007). In this closed apo state, the NBDs are offset as in the inhibited structure of K ATP , introducing a twist at the TMDs. It has also been suggested that the NBDs of CFTR remain partially associated throughout many gating cycles, with ATP bound to NBS1 (Basso et al., 2003). Whether SUR1 adopts a similar structure in the apo state is an open question. 
Conformational changes in K ATP : Subunit rearrangements All of the structures to date of the K ATP complex represent closed channels with ATP bound to the inhibitory site on Kir6.2. As such, the open state of K ATP remains elusive. The interface formed between TMD0 and M1 of Kir6.2 is essentially unchanged in the three different structures (inhibited, propeller, and quatrefoil). Therefore, the mechanism by which TMD0/L0 modulates the intrinsic P o of Kir6.2 (represented by L in Fig. 2) or increases the apparent ATP affinity at the inhibitory site (affecting K IB or D) is still unknown. However, a comparison of the three available structures suggests a possible mechanism by which nucleotide occupancy of the NBDs of SUR may be communicated to the pore domain (Fig. 2, coupling factor E). The overall topology of K ATP is very similar in the inhibited state and propeller structure (Figs. 3 and 12). The NBDs of SUR align and dimerize upon nucleotide binding, with no substantial rearrangement of the channel complex. The transition from the propeller to quatrefoil structures, however, involves a large rotation and translation of ABC core domain of SUR1 (Figs. 3 and 12). When viewed from the cytoplasm, the NBD dimer rotates nearly 90°, bringing NBD2 in close apposition with the cytoplasmic domain of Kir6.2. This creates a new putative polar interface that may stabilize the 11.5° rotation of the cytoplasmic domain of Kir6.2 (Fig. 13 A). Whereas much of L0 is unresolved in the quatrefoil structure, it is likely that such a large rotation of the ABC core domain would also disrupt the interface between L0 and Kir6.2, which could potentially promote channel opening (Pratt et al., 2012). Is there any evidence for such a large rotation of the ABC core domain? Is the observed polar interface formed between Kir6.2 and NBD2 physiologically relevant? It is certainly possible that the quatrefoil structure is distorted by the rather short linker (only six amino acids) between the C terminus of SUR1 and the N terminus of Kir6.2. Lee et al. (2017) provide some evidence that their concatenated construct forms functional channels that are inhibited by ATP and tolbutamide and activated by diazoxide. Introduction of the Kir6.2-G334D ATP-binding mutant into their concatenated construct allowed them to demonstrate activation by 1 mM MgATP, so the functional connection between Kir6.2 and SUR1 was at least partially intact. However, it should be noted that a similar concatenated construct (6-glycine linker) showed reduced ATP sensitivity compared with wild-type channels (Cartier et al., 2001;Shyng and Nichols, 1997). What evidence is there for a new interface between NBD2 and the cytoplasmic domains of Kir6.2? Whereas the resolution in the NBDs of the quatrefoil form is lower than that of the TMDs, Fig. 13 A shows that there is reasonably good density for many of the amino acid residues at this putative interaction site (in particular, H276 and H278 of Kir6.2 and R1352 of SUR1). Earlier structure-function studies demonstrated that mutations at this interface affect nucleotide activation of K ATP . A nearby mutation on the external surface of NBD2, G1400R (based on the numbering in the quatrefoil structure; Fig. 13 A), completely abolishes nucleotide activation of K ATP (de Wet et al., 2012). Mutations of R1352 are associated with PHHI (R1352P; Verkarre et al., 1998;Saint-Martin et al., 2015) and leucine-sensitive hypoglycemia (R1352H; Magge et al., 2004). 
R1352P channels do not traffic properly to the plasma membrane (Saint-Martin et al., 2015), whereas R1352H affects channel function by reducing the extent of MgADP and diazoxide activation compared with wild-type (Magge et al., 2004). Other studies have more broadly supported the existence of interactions between the cytoplasmic regions of SUR1 and Kir6.2. Babenko et al. (1999b) showed that the distal C terminus of SUR1 contributes to ATP inhibition, suggesting proximity to the cytoplasmic domain of Kir6.2. Lodwick et al. (2014) used thermodynamic mutant cycle analysis to identify a salt bridge between K338 in Kir6.2 and E1318 in NBD2 of SUR2A. Breaking this salt bridge potentiates the effects of the KCO pinacidil and antagonizes the effects of glibenclamide. The equivalent position on SUR1 (D1354 in the quatrefoil structure) is ∼20 Å from K338, which is too far away for a salt bridge. Still, the ability to form a salt bridge, even transiently, between NBD2 and the cytoplasmic domains of Kir6.2 in functional channels suggests that a rotation of the ABC core domain of SUR1 relative to Kir6.2 of the scale suggested by the quatrefoil structure is possible during the K ATP gating cycle. The quatrefoil form also predicts the formation of a novel interface between TMD0 and TMD2. Fig. 13 B shows the interactions (mostly hydrophobic) between helices TM2-3 of TMD0 and TM15-16 of TMD2. TM15-16 are the two helices that are domain swapped in the ABC core structure and the helix between TM15 and TM16 directly contacts NBD1 (Fig. 4), suggesting a potential pathway by which occupancy of the NBSs may be transmitted through TMD0 to the pore. Further investigations are necessary to test whether these new interfaces form during K ATP gating and whether such interactions are associated with channel opening. Conclusions and open questions The structures of K ATP discussed in this review are both corroborative and revelatory. As expected, the structure of Kir6.2 is very similar to other published structures of inward rectifiers and SUR1 is a fairly typical ABC exporter. However, several features of these models are novel and/or unexpected. The structure of TMD0 was unknown before the publication of the first inhibited-state structures. It was well accepted that TMD0 could directly associate with Kir6.2. However, the precise nature of this interaction was unclear (Chan et al., 2003). Localization of interacting sites between TMD0/L0 and Kir6.2 helps explain how these residues may contribute to the increase in P o when coexpressed with the pore subunit (changes in L; Fig. 2). Site-directed mutagenesis and careful electrophysiology had determined the approximate location of the inhibitory ATP binding site on Kir6.2 (reflected in K IB ; Fig. 2), but the details of binding were not well modeled, probably because of the unexpected and unusual conformation adopted by ATP (Antcliff et al., 2005;Lee et al., 2017;Martin et al., 2017a). The new structures reveal the precise location of the inhibitory site relative to the putative PIP 2 -binding site (K PIP ) and suggest a mechanism by which the two ligands may interact (either directly or through changes in pore gating; Fig. 2, parameters C and D). Perhaps most exciting is that comparison of the structures provides a new speculative model to describe activation of K ATP by Mg nucleotides (Fig. 2, E) and how the increase in P o brought about by nucleotide binding can be distinct from the direct increase in P o (L) from interactions between TMD0 and Kir6.2.
Several functionally important regions of K ATP remain disordered or otherwise unresolved in all of the structures. The N terminus of Kir6.2 is unresolved up to position 32 in all of the structures. The first 14 amino acids of Kir6.2 may be important for coupling to SUR1. Deleting or disrupting this region abrogates the increase in apparent affinity for nucleotide inhibition and high-affinity SU block usually conferred by SUR1 (Babenko et al., 1999a;Giblin et al., 1999;Reimann et al., 1999). Both the C terminus of Kir6.2 and the loop between TMD1 and NBD1 of SUR1 contain RKR ER-retention motifs (Zerangue et al., 1999). Neither region is resolved in any of the available structures. However, based on the available structural evidence, the two regions are very far apart, disfavoring a mechanism by which the two regions mutually obscure one another to allow exit from the ER. This is consistent with data suggesting that the RKR of Kir6.2 is masked by SUR1, whereas the RKR of SUR1 binds 14-3-3 proteins to enable forward trafficking (Heusser et al., 2006).

Figure 12. Putative conformational changes in the K ATP complex upon SUR1 activation. Cartoon representation showing the transition from the inhibited state of K ATP through the propeller form to the quatrefoil form. In the inhibited form, the NBDs are out of alignment and spaced far apart. In the presence of Mg 2+ and nucleotides, the NBDs first dimerize (propeller form), but the overall conformation of the complex remains unaffected. In the quatrefoil form, the dimerized NBDs rotate (along with TMD1 and TMD2) such that a new interface is formed between NBD2 and the cytoplasmic domain of Kir6.2. The interface between TMD0 and Kir6.2 remains largely unaffected.

Very few protein structures answer more questions than they raise. In keeping with this, the K ATP structures provide ample fodder for further experiments as many key questions remain unanswered. What does the apo state of K ATP look like? How is the structure of SUR2A different from that of SUR1 and why is its regulation by metabolism and its interaction with Kir6.2 different? How does SUR contribute to the increased apparent affinity of Kir6.2 for inhibitory nucleotide binding? How do SUs affect the intrinsic P o of Kir6.2? Most importantly, all of the structures to date represent closed channels. How do the various domains of K ATP rearrange to allow for channel opening? Isolating the K ATP complex in a stable open state may prove to be difficult. Most open-state structures of Kirs were solved using channel mutations that stabilize opening (Cuello et al., 2010a,b;Whorton and MacKinnon, 2011;Bavro et al., 2012;Zubcevic et al., 2014). It is possible that the presence of a positively modulating, MgADP-bound SUR1 and specific KCOs may allow for a wild-type open K ATP structure to be solved, as the presence of a Ca 2+ -bound RCK domain has allowed for the solution of an open-state structure of MthK (Jiang et al., 2002). However, the introduction of mutations that either reduce inhibitory binding of nucleotides to Kir6.2 (e.g., G334D; Drain et al., 1998) or increase the intrinsic P o (e.g., Kir6.2-C166S; Trapp et al., 1998) may prove necessary if an open-state structure is to be obtained. The locations of all three classes of NBS have been unequivocally established along with the inhibitory SU site and a putative site for PIP 2 binding.
However, understanding the details of ligand binding alone does not illuminate the mechanism by which these ligands affect P o , how different binding events converge energetically on the channel pore to influence its activity, or how binding of one ligand may influence that of another. Future efforts may produce cryo-EM structures of still more gating intermediates of K ATP . However, to fully understand the behavior of such a motley protein complex with several different modules acting in concert to change P o , structural snapshots must be supplemented with lower-resolution approaches that can follow the dynamics of functional channels. Now that multiple interacting sites between Kir6.2 and SUR1 have been described structurally, directed attempts may be made to stabilize or disrupt these interfaces to determine which properties these sites confer from SUR1 to Kir6.2 (e.g., increase in intrinsic P o , enhanced nucleotide inhibition, nucleotide activation, and PIP 2 stabilization). It is time to emerge from the liquid ethane, dust off those old voltage clamps, cross-linkers, and FRET pairs, and get to work.

Figure 13. New interfaces formed in the quatrefoil structure (PDB accession no. 6C3O). (A) Potential polar contacts between NBD2 (green) and Kir6.2 (yellow) shown as sticks. MgADP is shown as cyan spheres. The EM density for the residues at the interface has been contoured at 3 σ. (B) Putative interface formed between TMD0 and TMD2 in the quatrefoil structure. On the left, helices TM3 and TM15 are in the foreground. The image on the right is rotated 160° to show interactions between TM16 and TM2.
Immunohistochemical staining of lipid droplets with adipophilin in paraffin-embedded glioma tissue identifies an association between lipid droplets and tumour grade

Background: Cytoplasmic lipid droplets are important in cancer metabolism and a clear relationship has been established between their accumulation and increased tumour grade in glioma. The development of the novel immunohistochemical marker adipophilin has proven to be a useful method of detecting lipid droplets in paraffin embedded tissue from many diseases. Our aim was to assess the distribution of adipophilin stained lipid droplets in paraffin embedded glioma tissue and to evaluate whether it is a useful indicator of lipid droplets in brain tumours. Methods: Immunohistochemical staining for adipophilin was undertaken in a tissue microarray containing 65 paraffin embedded gliomas of varying grade. The number of tumour cells containing adipophilin positive lipid droplets was then quantified and statistically analysed. Results: We found a statistically significant accumulation of lipid droplets in high grade glioblastoma compared to low grade astrocytomas when we quantified the percentage of tumour cells containing adipophilin-positive lipid droplets (p<0.001). A significant positive correlation (rs=0.83) was detected between increasing tumour grade and the percentage of tumour cells containing lipid droplets, p=0.0001. Conclusions: We have determined that adipophilin is a useful immunohistochemical marker of lipid droplets in brain tumours. The ability to detect lipid droplets within paraffin embedded gliomas will greatly facilitate the evaluation of this tumour characteristic which is related to grade and prognosis.

Introduction Cytoplasmic lipid droplets are increasingly regarded as an important cellular component in both normal tissue [1] and in disease. Recent evidence suggests that the intracellular accumulation of lipid droplets is increased in a diverse range of diseases [2,3]. Once thought to be simple storage compartments for neutral lipids, these organelles are now recognised as important regulators of metabolic function and cell signalling [2][3][4][5]. The importance of lipid droplets in cancer metabolism is gaining clinical significance, with many tumours, including glioma, found to have altered lipid profiles that change with grade and in response to treatment [3,6,7]. Lipid accumulation has been detected in high grade gliomas using ex-vivo nuclear magnetic resonance (NMR) techniques, and evidence from electron microscopy and fluorescent labelling with Nile Red has determined that cytoplasmic lipid droplets are the major contributor to this lipid [8,9]. In addition, in-vivo magnetic resonance spectroscopy studies of both paediatric and adult brain tumours have reported increased lipids at diagnosis as a marker of poor prognostic outcome [10][11][12]. However, lipid droplets are not routinely assayed in tumour biopsies as histological detection methods were thought to be limited to frozen tissue. Whilst frozen tissue is used for histological diagnosis, it is not as routine as the formalin fixed paraffin embedded (FFPE) tissue on which nearly all diagnostic pathology is performed. Recent advances have identified a specific immunohistochemical marker localised to lipid droplets that can be applied to FFPE tissue, adipocyte differentiation related protein (ADRP), also known as adipophilin [13].
The sub-cellular immunohistochemical expression of adipophilin has been found to be a useful marker of lipid accumulation in both tumours and non-neoplastic disease [4,[13][14][15][16]. As intracellular lipids are a marker of both prognosis and treatment response in brain tumours [10,12,17], the ability to routinely detect lipid droplets in FFPE biopsies taken from tumours at diagnosis may allow their detection to be more readily translated into clinical practice. As such, we have investigated adipophilin expression using immunohistochemistry in a series of tissue microarrays (TMAs) containing glioma tissue of varying grade and tumour type. By undertaking this study we aim to evaluate whether adipophilin is a useful indicator of lipid droplets in brain tumours.

Materials and methods We obtained quality-controlled, high-density commercial FFPE brain tumour microarrays from US Biomax (Rockville, MD, USA). These contained 65 glioma cases of varying diagnoses and grade along with normal brain tissue and normal tissue from other regions as controls for antibody validation purposes (Table 1). Each case was represented by two 1.5 mm cores with high resolution interactive H & E images available for each. All cases were evaluated by pathologists and cores were selected to contain representative tumour rather than necrosis. To validate the adipophilin antibody prior to use, we stained test TMAs containing brain tumour and control tissue, along with a commercial neuroblastoma cell line (BE2M17) that was grown in our laboratory as previously described [18] and processed into a paraffin embedded cell block. Previous work has shown that this cell line contains many large lipid droplets [18]. Immunohistochemical staining for adipophilin (AP125, Progen, Germany) was undertaken at a 1:50 dilution overnight at 4°C following heat mediated antigen retrieval. A Dako Envision polymer labelling system (K4065, Dako UK Ltd) was used to visualise the antibody, with diaminobenzidine as the chromogenic label, followed by counterstaining with haematoxylin. Appropriate positive and negative tissue controls were included in all runs. Quantification of the percentage of tumour cells containing cytoplasmic positivity for adipophilin-stained lipid droplets was undertaken independently by two individuals including a pathologist (IC). Rare nuclear staining was also noted; however, only cytoplasmic staining was considered positive in tumour cells. Only staining within tumour cells was quantified; any adipophilin positive macrophages were not scored. Where there were discordant scores between the two observers, the cases were reviewed and a consensus agreed. Scores from each core were averaged to provide a single score for each case. Averages were then obtained across tumour type and grade with statistical comparisons between groups undertaken with ANOVA, followed by the Student's t-test for post hoc comparisons. Correlations were undertaken using Spearman's rank correlation test.

Results Adipophilin stained lipid droplets were identified in similar distribution patterns in the BE2M17 FFPE cell line as previously determined using Nile red [18], providing evidence that the adipophilin antibody is correctly labelling lipid droplets (Figure 1). Adipophilin positive lipid droplets were identified within tumour cells in 75% of all glioma cases analysed. This ranged from cases with very few tumour cells being stained (<5%) to cases with very high numbers of positive tumour cells (>90%).
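Before the grade-by-grade comparisons that follow, the statistical workflow described in the Methods (ANOVA across grades with post hoc tests, plus Spearman's rank correlation against grade) can be sketched in Python as below. The percentages used here are invented placeholders rather than data from this study, and the scipy library is assumed to be available.

from scipy import stats

# Hypothetical per-case percentages of adipophilin-positive tumour cells,
# grouped by WHO grade (placeholder values only; not data from the study).
scores_by_grade = {
    1: [0, 2, 5, 8, 1],
    2: [3, 6, 10, 4, 12],
    3: [15, 22, 30, 18, 25],
    4: [45, 60, 72, 55, 90],
}

# One-way ANOVA across the four grades.
f_stat, p_anova = stats.f_oneway(*scores_by_grade.values())

# Spearman's rank correlation between grade and percentage of positive cells.
grades = [g for g, vals in scores_by_grade.items() for _ in vals]
percents = [v for vals in scores_by_grade.values() for v in vals]
rho, p_spear = stats.spearmanr(grades, percents)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")
print(f"Spearman: rs = {rho:.2f}, p = {p_spear:.4g}")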
The mean percentage (±standard error) of tumour cells expressing adipophilin positive lipid droplets in grade four glioblastoma was 56.19±5.69% (with a range from 30-93%), which is significantly higher than in grade one and two astrocytoma, p<0.0001 (Figure 2). The mean percentage for grades one and two astrocytoma, respectively, was 3.21±2.02% (with a range from 0-11%) and 5.64±1.58% (with a range from 0-26%). Grade three astrocytoma had significantly fewer adipophilin positive cells (19±2.46%, range from 0-35%) than grade four glioblastoma (p=0.01) but significantly more than grades one (p=0.001) and two (p=0.04). A statistically significant correlation was detected between tumour grade and the percentage of tumour cells containing lipid droplets, rs=0.83, p<0.0001. Strong positive expression was largely limited to cases of high grade glioblastoma, whilst weak positive staining was predominant in low grade pilocytic astrocytoma (Figure 3). There was also an increase in lipid droplets in high grade anaplastic oligodendroglioma (14.16±2.57%) compared to low grade oligodendrogliomas (3.33±2.04%), although case numbers were too small for formal statistical comparisons. Similar values of adipophilin positive tumour cells were found in the anaplastic astrocytomas and anaplastic oligodendrogliomas, with no statistically significant difference between the tumour types. Cancer-adjacent normal tissue cases were negative for adipophilin expressing cells; however, occasional cytoplasmic and nuclear adipophilin positivity (not counted during scoring) was found within neurons of normal brain. The scores for the two individual cores were within 5% of each other for the majority of cases (and less than 10% for all cases), suggesting that adipophilin expression can be reliably detected within a tissue microarray according to established standards [19,20].

Discussion We have evaluated a series of brain tumours of varying grade for the expression of the lipid droplet marker adipophilin. To our knowledge this is the first study to examine lipid droplets in FFPE histological sections from gliomas of varying grade using an immunohistochemical marker. We have identified a significant increase in the percentage of tumour cells expressing adipophilin labelled lipid droplets in high grade tumours, suggesting that there is a relationship between the accumulation of lipid droplets and increasing tumour grade. A similar finding has been reported in Burkitt lymphoma, where adipophilin was shown to be a sensitive marker of lipid accumulation in high grade cases and may be a useful diagnostic discriminator in otherwise difficult cases [15]. As lipids are a marker of poor prognosis in gliomas [10][11][12], the ability to detect them in diagnostic biopsies is likely to be of clinical relevance. Interestingly, there were fewer adipophilin positive tumour cells in all three cases of giant cell glioblastoma when compared to grade four glioblastoma. As the giant cell variant is commonly regarded as less aggressive [17], this provides further support for the relationship between high lipid and worse prognosis. A similar trend was noted in the low grade oligodendrogliomas when compared to the anaplastic oligodendrogliomas; however, these still had far fewer lipid droplet containing tumour cells than grade four glioblastomas. An exception to the relationship between grade and lipid in brain tumours may be grade two pleomorphic xanthoastrocytomas (PXA).
These often contain cytoplasmic lipids with a generally favourable prognosis [21,22]. However as this is a rare variant, usually restricted to childhood, it was not included in our series from which only tumours from adults were analysed. High grade glioblastoma in particular is known to be a heterogeneous tumour. Hence it is possible that analysis of a limited number of cores from each case within a tissue microarray may not accurately represent the entire tumour. However, our analysis found that the adipophilin scoring in two cores taken from different regions in each case was very consistent. Research using different markers in glioma tissue microarrays reported that whilst in some instances immunohistological scoring from single cores may not reflect the whole tumour section on an individual case basis, that when considered across a series of tumours, scores were not significantly different between microarrays and whole sections [23,24]. This suggests that like other tumour types, accurate histological measures can still be undertaken in a microarray despite tumour heterogeneity [23,25]. There is a well known association between lipids and necrosis in many tumour types [6]. The tissue cores selected for this study were taken from tumour only rather than the necrotic core demonstrating that lipid is also detected within the cytoplasm of tumour cells and is not only a marker of necrosis within brain tumours. Strong positive expression of adipophilin was generally only found in grade four tumours suggesting not only that a greater number of tumour cells contain lipid droplets, but also that there is a greater number and/or an increase in size of these organelles within the cell. Previous work from our group has found that the size of lipid droplets can vary across several neural-derived tumour cell lines and that both size and composition can change in response to treatments that target metabolic pathways [26,27]. Disruption of lipid metabolism in glioma has also been identified as a potential therapeutic target [7,28]. As lipid accumulation appears to be an important indicator of increased metabolic state within high grade malignancies there is likely to be biological and diagnostic value to observing lipid droplets in biopsied brain tumours. Cytoplasmic adipophilin positivity was detected within the neurons of normal brain in only three cases suggesting that there are few lipid droplets present in normal brain tissue. Nuclear lipid droplets were also detected within occasional neurons. Although most studies have focused on cytoplasmic lipid droplets [3,4], recent evidence using confocal and electron microscopy in cultured hepatocytes suggests that neutral lipids within the nucleus also form into spherical lipid droplets of unique composition and size and that these may be involved with nuclear lipid homeostasis [29,30]. Alternatively it has also been proposed that cytoplasmic lipid droplets may be mistaken as nuclear in instances where they have a close association with the nuclear envelope [31]. Although recent evidence in Alzheimer's and Huntington's disease has reported that cytoplasmic lipid droplets do occur in neurons in a disease state [32], it is unclear as to the significance of this finding within normal neurons in this study and this requires further investigation. 
An important advantage of establishing adipophilin immunohistochemical staining in paraffin embedded gliomas is the ability to investigate lipids both retrospectively in archival glioma tissue as well as prospectively in future studies. As increased lipids have been shown to have prognostic value, it will be important to directly link the presence of lipid droplets to survival and outcome in glioma. Of additional clinical significance will be linking lipids to molecular prognostic markers such as IDH1 and ATRX mutations and MGMT promoter methylation, all of which are known to influence survival in glioma. Further studies are also required to investigate the relationship between adipophilin-stained lipid droplets and measures of in vivo and ex-vivo lipids from MR spectroscopy as well as their links to survival. Conclusions In summary, we have shown that adipophilin is a valuable marker of lipid droplet status in paraffin embedded glioma tissue and used the method to establish the relationship between the accumulation of cytoplasmic lipid droplets and increased tumour grade in these tumours. The availability of a robust method for determining lipid droplet status in paraffin embedded tissue will greatly facilitate the further evaluation of lipid droplets as a biomarker of grade and prognosis and its translation into routine clinical practice.
MicroRNA-Regulated Rickettsial Invasion into Host Endothelium via Fibroblast Growth Factor 2 and Its Receptor FGFR1 Microvascular endothelial cells (ECs) represent the primary target cells during human rickettsioses and respond to infection via the activation of immediate–early signaling cascades and the resultant induction of gene expression. As small noncoding RNAs dispersed throughout the genome, microRNAs (miRNAs) regulate gene expression post-transcriptionally to govern a wide range of biological processes. Based on our recent findings demonstrating the involvement of fibroblast growth factor receptor 1 (FGFR1) in facilitating rickettsial invasion into host cells and published reports suggesting miR-424 and miR-503 as regulators of FGF2/FGFR1, we measured the expression of miR-424 and miR-503 during R. conorii infection of human dermal microvascular endothelial cells (HMECs). Our results revealed a significant decrease in miR-424 and miR-503 expression in apparent correlation with increased expression of FGF2 and FGFR1. Considering the established phenomenon of endothelial heterogeneity and pulmonary and cerebral edema as the prominent pathogenic features of rickettsial infections, and significant pathogen burden in the lungs and brain in established mouse models of disease, we next quantified miR-424 and miR-503 expression in pulmonary and cerebral microvascular ECs. Again, R. conorii infection dramatically downregulated both miRNAs in these tissue-specific ECs as early as 30 min post-infection in correlation with higher FGF2/FGFR1 expression. Changes in the expression of both miRNAs and FGF2/FGFR1 were next confirmed in a mouse model of R. conorii infection. Furthermore, miR-424 overexpression via transfection of a mimic into host ECs reduced the expression of FGF2/FGFR1 and gave a corresponding decrease in R. conorii invasion, while an inhibitor of miR-424 had the expected opposite effect. Together, these findings implicate the rickettsial manipulation of host gene expression via regulatory miRNAs to ensure efficient cellular entry as the critical requirement to establish intracellular infection. Introduction Pathogenic Rickettsia species include obligate intracellular and vector-borne Gram-negative α-proteobacteria known to cause spotted fever and typhus rickettsioses in humans. As such, bacteria within the genus Rickettsia are divided into the spotted fever, typhus, ancestral, and transitional groups. As the respective etiologic agents of Rocky Mountain spotted fever in the Americas and Mediterranean spotted fever in the Europe and Asia, Rickettsia rickettsii and R. conorii represent two major pathogenic species belonging to the spotted fever group of rickettsiae. During infection of their mammalian hosts, rickettsiae primarily target microvascular endothelial cells (ECs) lining the small and medium-sized blood vessels, triggering host responses characterized by endothelial activation and sequelae associated with the loss of endothelial barrier integrity, leading to fluid imbalance in vital organ systems, including the skin, lungs, and brain, and thrombotic complications such as disseminated intravascular coagulation in severe cases of disease [1,2]. MicroRNAs (miRNAs) are a family of small noncoding RNAs (about 20-24 nucleotides long) capable of regulating a wide array of biological processes, including cellular development, differentiation, and proliferation; regulation of the cell cycle and metabolism; and pathways of degradation such as autophagy and apoptosis [3]. 
The predominant function of miRNAs is to regulate protein translation by binding to complementary sequences in the 3' untranslated region (3' UTR) of target messenger RNAs (mRNAs), resulting in translational repression and mRNA decay [4]. Recently, miRNAs have been shown to execute important biological roles in host-pathogen interactions, and alterations in miRNA expression are being increasingly recognized as an integral component of the host response to infection by bacterial pathogens as well as a novel molecular strategy exploited by bacteria to manipulate the mechanisms governing host defense pathways. The miRBase, one of the prominent databases of human miRNAs, currently lists more than 2500 miRNAs that have been predicted to regulate about 60% of protein-coding genes [5]. Among these, there are a number of immunologically relevant miRNAs with well-defined roles as the regulators of immune cell function and activation as well as the resolution of immune responses, establishing their contributions as important determinants of innate and adaptive immunity [6][7][8][9][10]. Furthermore, the complexity of the mechanisms underlying such fine regulation of host immune responses at the molecular level is evident in the fact that a single miRNA can interact with and bind to a number of different mRNAs, and conversely, that a single mRNA may be subjected to regulation by several miRNAs acting cooperatively to govern post-transcriptional control of mRNA translation and protein output [11]. Abundant evidence now suggests a role for host miRNAs in the replication and propagation of viruses. A generalized emerging theme in this context is the manipulation of host miRNAs to escape an antiviral response and/or to promote viral infection [12]. Similarly, virus-encoded miRNAs carrying sequences similar to or completely different than host miRNAs have also been implicated in the regulation of important biological processes, such as modulation of the viral life cycle, pathogenesis, and latency [13,14]. Along this line, a number of important bacterial pathogens, for example, Helicobacter pylori, Listeria monocytogenes, Salmonella enterica serovar Typhimurium, and Mycobacterium tuberculosis, among others, have also been documented to alter host miRNAs [15][16][17][18][19]. Taken together, these studies suggest that bacterial pathogens also exploit host miRNAs to ensure and prolong their survival within the host. We have recently reported on the differential expression of miRNAs and utilization of fibroblast growth factor receptor 1 (FGFR1) as one of the host cell surface receptors to facilitate entry during R. rickettsii and R. conorii infection of cultured human ECs [20,21]. In the present study, we have identified two host miRNAs that experience downregulated expression in response to rickettsial infection of human dermal, pulmonary, and cerebral ECs, in correlation with induced expression of FGF2 and FGFR1. Our results further suggest an identical pattern of alterations in the expression of these miRNAs and FGF2/FGFR1 in the lungs as a target organ system in a murine model of infection and the potential utility of these miRNAs as diagnostic biomarkers of rickettsial diseases. Endothelial Cell Culture Human dermal microvascular endothelial cells (HMECs) were obtained from the Centers for Disease Control and Prevention (Atlanta, GA). 
HMECs were cultured in MCDB131 medium (Caisson's Laboratories) supplemented with Fetal Bovine Serum (FBS) (10% v/v; Aleken Biologicals), epidermal growth factor (10 ng/mL, Thermo Fisher Scientific, Waltham, MA, USA), L-glutamine (10 mM, Thermo Fisher Scientific), and hydrocortisone (1 µg/mL, Sigma) [22]. Human cerebral microvascular endothelial cells (HCECs) were kindly provided by R. K. Yu and S. S. Dasgupta, Institute of Molecular Medicine and Genetics, Medical College of Georgia, Augusta, GA. These immortalized cell lines, which display typical morphological, phenotypic, and functional characteristics of microvascular endothelium, were grown in culture as recommended [23,24]. Primary human lung microvascular ECs (HLMECs) were purchased from Lonza and maintained in culture according to the manufacturer's instructions. All cell cultures were incubated and maintained at 37°C in an incubator with 5% CO 2 . Cell Infection and Transfection R. conorii (strain Malish 7) and R. rickettsii (Sheila Smith) were grown in cultured Vero cells, purified by differential centrifugation as described previously [25], and the stocks were aliquoted as volumes of ≤500 µL and kept frozen at −80°C to avoid freeze-thaw cycles. The infectivity titers of purified stocks were estimated by citrate synthase (gltA)-based quantitative PCR and plaque formation [26]. ECs were seeded at a dilution to achieve 80% to 90% confluence and infected with approximately 6 × 10 4 plaque forming units (pfu) for every cm 2 of culture surface area with R. conorii or R. rickettsii to achieve approximately 5 intracellular rickettsiae per cell according to our standard established procedures [20,22]. At different times post-infection, culture medium was removed by gentle aspiration and the cells were directly lysed in TRI Reagent ® (Molecular Research Center). In all experiments, the viability of both mock controls and Rickettsia-infected ECs was ascertained microscopically. The mimics and inhibitors for miR-424 and miR-503, along with the negative controls (mirVana™ miRNA mimic and inhibitor negative controls), were purchased from Applied Biosystems/Thermo Fisher Scientific. The miRNA mimics were transfected for 24 h, while the miRNA inhibitors were transfected into ECs for 72 h using Lipofectamine RNAiMAX according to the manufacturer's recommendations prior to infection with R. conorii for 6 h. RNA Preparation Total RNA was extracted from R. conorii-infected and corresponding mock control ECs according to our standard TRI Reagent ® protocol optimized in accordance with the manufacturer's recommendations as described previously [21]. For RNA isolation from blood, the MagMAX mirVana Total RNA Isolation kit (Thermo Fisher Scientific) was used. The resultant RNA preparations were subjected to treatment with DNase I to remove contaminating genomic DNA and quantified using a MultiSkan™ Go Spectrophotometer (Thermo Scientific). The RNA quality was then assessed by visualization of 18S and 28S RNA bands on an Agilent Bioanalyzer 2100 (Agilent Technologies, Santa Clara, CA, USA). The electropherogram for each sample was used to determine the 28S:18S ratio and the RNA integrity number (RIN) [27]. RNA preparations with a RIN ≥9.0 were used in further experiments.
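For the infection conditions described above, the total inoculum simply scales with the growth area of the culture vessel at roughly 6 × 10 4 pfu per cm 2 . The short Python sketch below only restates that arithmetic for a few common vessel formats; the surface areas are nominal manufacturer values and are assumptions for illustration, not figures from the study.

# Rickettsial inoculum at ~6e4 pfu per cm^2 of culture surface, for some
# common vessel sizes (nominal growth areas; placeholder values).
PFU_PER_CM2 = 6e4

vessel_area_cm2 = {
    "96-well plate well": 0.32,
    "24-well plate well": 1.9,
    "6-well plate well": 9.6,
    "T-25 flask": 25.0,
    "T-75 flask": 75.0,
}

for vessel, area in vessel_area_cm2.items():
    total_pfu = PFU_PER_CM2 * area
    print(f"{vessel}: {area:g} cm^2 -> {total_pfu:.1e} pfu per vessel")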
Quantitative Real-Time PCR TaqMan ® two-step RT-PCR assays containing primers for both miRNA-specific reverse transcription and quantitative PCR were obtained from Applied Biosystems. Total RNA (1 µg) for each sample was reverse-transcribed using the TaqMan MicroRNA cDNA synthesis kit (Applied Biosystems) and miRNA-specific primers for miR-424 and miR-503 as well as oligo (dT) primers for concurrent analysis of 18S (18S ribosomal RNA) and Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) expression as a housekeeping control. The expression of miRNAs was analyzed by real-time PCR using the TaqMan ® assay specific for each microRNA (Applied Biosystems). 18S RNA was employed as an endogenous control and used to normalize for miRNA expression. The mRNA expression of FGF2 and FGFR1 was measured using the gene-specific TaqMan primers, and GAPDH was utilized as an endogenous control to normalize the mRNA expression between different samples [20]. The ΔCt values for experimental (infected) samples were compared to the baseline mock control cells, which were assigned a value of 1, and the relative expression was determined by comparative Ct (ΔΔCt method) as described earlier [20]. Briefly, we measured the amplification of the target and housekeeping genes in infected and control samples, and the Ct values for the target genes were normalized to the housekeeping gene using the StepOne™ Plus software version 2.3. We next determined the relative quantitation by comparing the normalized target quantity in each experimental (infected) sample to the normalized target quantity in mock controls (uninfected). For determining the rickettsial copy number, total DNA (host and rickettsial) was extracted using the DNeasy Blood and Tissue Kit (Qiagen, Germantown, MD, USA) according to the manufacturer's instructions and quantified by a spectrophotometer (Thermo Fisher Scientific). Quantitative PCR was performed using the rickettsial outer membrane protein A (ompA) primer pair RR190.547F and RR190.701R for spotted fever group rickettsiae [28].
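The comparative Ct calculation described above reduces to two subtractions and an exponentiation. The following Python sketch uses illustrative Ct values only (none are measurements from this study): the target Ct is first normalized to the endogenous control, the infected ΔCt is then referenced to the mock control, and relative expression is reported as 2 raised to −ΔΔCt, so the mock control equals 1.

def relative_expression(ct_target_inf, ct_ref_inf, ct_target_mock, ct_ref_mock):
    """Relative expression of a target gene by the comparative Ct (2^-ddCt) method."""
    d_ct_infected = ct_target_inf - ct_ref_inf   # normalize to the endogenous control
    d_ct_mock = ct_target_mock - ct_ref_mock
    dd_ct = d_ct_infected - d_ct_mock            # reference to the mock control
    return 2 ** (-dd_ct)

# Illustrative Ct values (not measured data): a target mRNA vs. GAPDH in
# infected and mock-control endothelial cells.
fold_change = relative_expression(
    ct_target_inf=24.0, ct_ref_inf=16.0,    # infected sample
    ct_target_mock=26.0, ct_ref_mock=16.5,  # mock control (defined as 1.0)
)
print(f"Relative expression (infected vs. mock): {fold_change:.2f}-fold")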
In Vivo Model of Infection All animal experiments were performed in accordance with the research protocol approved by the Institutional Animal Care and Use Committee. The University has an approved Assurance Statement (#A3314-01) on file with the Office of Laboratory Animal Welfare. C3H/HeN mice (Charles River) were infected with 2.25 × 10 5 pfu of R. conorii per animal administered intravenously (IV). The control animals received an IV injection of the identical volume of saline. Four animals were used per group in two independent experiments, i.e., n = 8. On day 3 post-infection, mice were anesthetized by inhalational isoflurane for collection of blood by cardiac puncture, after which the mice were euthanized and the lungs were removed aseptically and preserved in an RNAlater solution for isolation of total RNA and analysis by q-RT-PCR using miRNA-specific Taqman assays (Applied Biosystems, Waltham, MA, USA). Statistical Analysis All experiments were performed at least three times with technical triplicates to calculate the results as the mean ± standard error (SE). Statistical analysis for differentially expressed miRNAs in R. conorii-infected and mock control groups was performed by one/two-way ANOVA with Dunnett's post-test using GraphPad Prism 4.00. The p value for statistical significance among experimental conditions being compared was set at ≤0.05.

Results A recently published study and miRNA databases report fibroblast growth factor (FGF2) and its receptor (FGFR1) as validated targets for miR-424 and miR-503 [29]. Also, we have recently demonstrated the involvement of fibroblast growth factor receptor 1 (FGFR1) in the internalization of R. conorii into host endothelium in vitro and during R. conorii infection in vivo [21]. Therefore, we investigated the levels of expression of miR-424 and miR-503 in R. conorii-infected human ECs. The tropism for the vascular endothelium lining of small and medium-sized vessels in vivo is intriguing, considering that pathogenic rickettsiae are capable of infecting a wide range of cultured cell types in vitro. During natural infections, rickettsiae enter through the skin, primarily affecting the lungs and the brain and causing pulmonary and cerebral edema due to compromised vascular permeability. We, therefore, chose to study all three types, i.e., dermal (HMECs), cerebral (HCECs), and lung (HLMECs) ECs, to measure the expression of miR-424 and miR-503. Confluent ECs were infected with R. conorii for various times (0.5, 1, 1.5, 3, and 6 h), total RNA was isolated, and miR-424/503 expression was measured by qRT-PCR using Taqman miR-specific primers. As compared to the baseline in mock control HMECs, there was a dramatic decrease in the expression of miR-424 and miR-503 as early as 30 min post-infection (83.3 ± 3% and 78.4 ± 4%, respectively, p ≤ 0.01, Figure 1A). Significant and nearly identical downregulation of the expression of both miRNAs was observed up to 6 h post-infection as compared to the mock controls. In the lung microvascular ECs (HLMECs), expression of miR-424 and miR-503 was also significantly downregulated (86.9 ± 5% and 94.4 ± 3% at 3 h and 92.6 ± 3% and 95.7 ± 3% at 6 h, respectively, p ≤ 0.01) in infected cells as compared to the mock controls (Figure 1B). Similar results were obtained in the cerebral microvascular ECs (HCECs), where the expression of miR-424/503 in infected cells was downregulated by 94.8 ± 3% and 96.3 ± 2% at 3 h and 94.3 ± 5% and 97.4 ± 2% at 6 h post-infection, respectively (p ≤ 0.01) (Figure 1C). Overall, all three types of ECs demonstrated a similar pattern of dramatic downregulation of both miRNAs in infected cells. In addition, HMECs infected with R. rickettsii, another spotted fever group Rickettsia species, also displayed a similar pattern of regulation of expression for both miRNAs, where miR-424 depicted 96 ± 6% and miR-503 showed 93 ± 4% downregulation 6 h post-infection as compared to the mock controls.

Figure 1. HMECs (A), HLMECs (B), and HCECs (C) were infected with R. conorii (Rc) for various time periods up to 6 h. RNA was extracted and qRT-PCR assays were performed to measure the expression of miR-424 and miR-503. The data was normalized to 18S RNA, and relative expression was calculated by the ΔΔCt method. The results are presented as the mean ± standard error (SE) of three independent experiments. The asterisk indicates statistically significant change (p ≤ 0.01). HMECs: human dermal microvascular endothelial cells; HLMECs: human lung microvascular endothelial cells; HCECs: human cerebral microvascular endothelial cells; Con: control.

As a follow-up to changes in miR-424 and miR-503 expression, we next measured the expression of FGF2 and its receptor FGFR1 in R. conorii-infected ECs.
In HMECs, we observed a time-dependent increase in the expression levels of both FGF2 and FGFR1, with the maximum increase at 6 h post-infection (3.06 ± 0.2- and 3.01 ± 0.5-fold, respectively, p ≤ 0.01) (Figure 2A). In HLMECs, there was a similar time-dependent increase in the steady-state mRNA levels for both FGF2 and FGFR1, as evidenced by a 4.65 ± 0.7- and 3.5 ± 0.4-fold increase (p ≤ 0.01) in their expression, respectively (Figure 2B). Lastly, HCECs also displayed a similar, but much larger increase in the mRNA expression levels of both FGF2 and FGFR1 at 6 h post-infection, when the infected cells exhibited about a 7.87 ± 1.0-fold increase in FGF2 and a 6.83 ± 0.6-fold increase in FGFR1 mRNA levels as compared to the mock controls (p ≤ 0.01) (Figure 2C). A plausible explanation for only a modest increase in the expression of FGF2/FGFR1 levels in HMECs is that these cells display comparatively higher basal expression of FGF2 and FGFR1. Again, R. rickettsii infection also resulted in a similar pattern of induced FGF2/FGFR1 mRNA expression in HMECs, where we observed a 3.21 ± 0.6-fold increase in FGF2 expression and a 2.84 ± 0.2-fold increase in FGFR1 expression in infected cells compared to the mock controls. As an important corollary to in vitro findings, we further analyzed the expression level of miR-322 (mouse orthologue of miR-424) and miR-503 in the lungs of mice infected with R. conorii. qRT-PCR revealed about 61.5 ± 4.4% and 54.5 ± 6.8% reduction in the levels of miR-322 and miR-503, respectively, on day 3 post-infection as compared to the lungs of mock control mice (p ≤ 0.01 for both) (Figure 3A). Subsequent determination of FGF2 and FGFR1 mRNA levels demonstrated an 18.2 ± 4.4-fold increase in the steady-state expression of FGF2 and a 12.3 ± 2.3-fold increase in FGFR1 in the lungs of infected mice in direct comparison to their basal expression in the lungs of mock control animals (Figure 3B). Since miRNAs are also increasingly recognized for the potential to be used as biomarkers in various disease conditions including infection, we next determined the levels of these two miRNAs in the blood of mice infected with R. conorii. Our findings clearly demonstrate about a 72.1 ± 5% and 69.4 ± 5% reduction in the expression of both miR-322 and miR-503, respectively, in the blood of infected mice, representing a significant change (p ≤ 0.01) in comparison to their basal levels in a corresponding cohort of mock control animals (Figure 3C). Together, these findings recapitulate in vitro changes observed in cultured ECs for both miRNAs and expression levels of FGF2 and FGFR1 in an established mouse model of rickettsial infection.
To ascertain whether or not miR-424 and miR-503 function as the regulators of FGF2/FGFR1 mRNA during R. conorii infection, we next performed a series of gain- and loss-of-function experiments using miRNA-specific mimics or inhibitors. We conducted these studies using HCECs, based on highly significant changes in both miRNAs and FGF2/FGFR1 expression in this particular cell type in response to rickettsial infection. To this end, miRNA-mimics and inhibitor sequences specifically targeting miR-424 and miR-503, along with a negative control (mirVana™ miRNA mimic negative control), were transfected into HCECs using Lipofectamine ® RNAiMAX, and their effects on miR-424, miR-503, and FGF2/FGFR1 expression were determined by qRT-PCR. As expected, introduction of the mimics resulted in a dramatic increase in miR-424 and miR-503 expression. Interestingly, infection with R. conorii was able to counteract the effects of miR-mimics, resulting in reduced miR-424 and miR-503 expression in HCECs transfected with the mimic when compared to the corresponding mock controls (Figure 4A). Conversely, mRNA expression of FGF2/FGFR1 was significantly downregulated in cells transfected with miR-424 and miR-503 mimics alone and those infected with R. conorii following the delivery of mimics for both miRNAs. In contrast to our findings with the mimics, the miR-424 and miR-503 inhibitors reduced the cellular miRNA levels by about 75%, while R.
conorii infection further reduced the miR-424 and miR-503 expression by about 50% in the presence of the inhibitors specific to these miRNAs ( Figure 5A). Accordingly, opposite effects on FGF2/FGFR1 mRNA levels were also clearly evident when the inhibitors of miR-424 and miR-503 were used in these experiments ( Figure 5B). Together, these findings yield evidence for the direct involvement of miR-424 and miR-503 in the regulation of FGF2/FGFR1 expression during rickettsial infection of host ECs. Based on our recent findings implicating FGFR1 in rickettsial internalization into ECs and evidence for miR-424 and miR-503-mediated regulation of FGF2/FGFR1 expression, we next investigated the effects of miR-424 and miR-503 mimics and inhibitors on R. conorii internalization into ECs. ECs transfected with the mimics or inhibitors of miR-424 and miR-503 were infected with R. conorii and the copy number of internalized rickettsiae was determined ( Figure 6). Our results suggest that miR-424 and miR-503 mimics significantly inhibit R. conorii internalization, whereas inhibitors of both miRNAs have an opposite enhancing effect of facilitating rickettsial entry into ECs. These results corroborate our earlier findings that FGF2/FGFR1-mediated entry of R. conorii into host ECs is regulated by miR-424 and miR-503.
Figure 6. miRNA-mediated rickettsial internalization into host endothelial cells: ECs were transfected with a miR-424 mimic (1 nM) or inhibitor (200 nM) prior to infection with R. conorii for 6 h. Cells were lysed and DNA was isolated using a Qiagen kit for determination of the copy number of rickettsiae by qPCR. The data are presented as the mean ± SE of three separate experiments. The asterisk (* p ≤ 0.01) indicates statistically significant change.
Discussion MicroRNAs play a major role in human diseases, with the aberrant expression of miRNAs capable of interacting with several oncogenes and tumor suppressors now reported in all cancers [30]. In addition, miRNA regulatory networks comprised of either a single miRNA or those executing their effects as clusters consisting of either related family members or disparate miRNAs can impact the mechanisms responsible for normal physiology as well as nonmalignant disorders [31]. Rapidly accumulating evidence implies important roles for miRNAs in the modulation of inflammatory responses, cell penetration, innate and adaptive immunity, and tissue remodeling consequent to infection as critical attributes of intricate and complex interactions between bacterial pathogens and their hosts.
For example, miR-146 and miR-155 represent two well-studied miRNAs, due in large part to their roles in the regulation of inflammation and immunity during bacterial infections [17]. Yet, another emerging theme is the possibility of their exploitation as circulating biomarkers for bacterial infections (for example, pulmonary tuberculosis and H. pylori-associated gastritis) and the potential for their application as novel therapeutic targets [32,33]. Pathogenic Rickettsia species are known to target the microvascular endothelial lining of small and medium-sized blood vessels during human infections and to exploit redundant mechanisms to gain entry, for release into the cytoplasm as free intracytoplasmic energy parasites, and for survival inside the host cell [1,2]. As a ubiquitous and multifunctional regulator of the proliferation, differentiation, and angiogenic potential of different mammalian cells, FGF2 is an important protein belonging to the large family of fibroblast growth factors. It is involved in both morphogenic and mitogenic pathways and regulates a variety of important cellular functions underlying developmental processes by binding to and activating the receptor tyrosine kinases FGFR1-FGFR4 [34]. A fifth receptor, FGFR5 (also known as FGFRL1) can also bind FGF2, but is devoid of a tyrosine kinase domain and may negatively regulate signaling [35]. Derived from a single mRNA, there are five different isoforms of FGF2 (34, 24, 22.5, 22, and 18 kDa), of which the four high-molecular-weight forms arise from the upstream CUG codons, whereas the 18 kDa isoform arises from the downstream AUG codon [36,37]. The human FGFRs, FGFR1 through 4, are a subfamily of receptor tyrosine kinases associated with the activation of multiple cell signaling cascades and responses such as proliferation, differentiation, and survival. Although all of these FGFRs are expressed on various cells and tissues at varying levels and FGF2 interacts with them all, FGFR1 has the highest affinity for FGF2 and is primarily responsible for FGF2-induced signaling in ECs, which determines multiple vascular endothelial functions, including growth, migration, and angiogenesis [38][39][40]. Published reports have implicated miR-424 and miR-503 in the regulation of expression of FGF2 and FGFR1 in ECs [29,41], and our laboratory has recently identified FGFR1 as one of the host cell receptors exploited by spotted fever rickettsiae for internalization into host ECs [21]. These findings served as the rationale for further investigation of the expression of miR-424/503 in human dermal, pulmonary, and cerebral microvascular ECs during R. conorii infection in vitro [42]. Our results for this aspect of the study convincingly demonstrate a dramatic downregulation of both miRNAs and significantly increased mRNA expression of FGF2/FGFR1 in infected ECs over the basal levels of expression in mock controls. Because miR-424 (miR-322 in rodents) and miR-503 are co-transcribed as a polycistronic primary transcript (pri-miRNA) and comprise the miR-424(322)/503 cluster [43,44], it is not surprising that both miRNAs exhibit an identical pattern of transcriptional regulation. In addition, both miR-424 and miR-503 regulate the expression of FGF2 and FGFR1 by binding to the 3 -UTR sequences [41,45] and FGF2 upregulates the expression levels of mature miR-424, clearly establishing a regulatory loop between miR-424 and FGF2 [29]. 
Importantly, as a follow-up to our in vitro findings, we have further ascertained the decreased expression of both miRNAs and increased expression of FGF2/FGFR1 mRNAs in the lungs, one of the major target organs, to illustrate that altered miRNA/mRNA levels in three different types of cultured ECs potentially correlate with in vivo changes in an established murine model of spotted fever rickettsiosis. The R. conorii mouse model of infection has routinely been used for in vivo investigations because R. conorii infection in susceptible C3H/HeN mice closely mimics the disseminated endothelial infection that is the major feature of pathogenesis and displays the overall pathology of Rocky Mountain Spotted Fever (RMSF) and Mediterranean Spotted Fever (MSF) in humans. Also, a direct and authenticated R. rickettsii mouse model is not yet available, mainly due to the resistance of a number of mouse strains to pathogenic R. rickettsii [46,47]. Innate immunity is the first line of defense against invading pathogens, and the importance of miRNAs as determinants of host-pathogen interactions now represents a rapidly emerging area of enquiry, due mainly to their involvement in the modulation of the host cell transcriptome and of host immune responses towards microorganisms. In ECs, several miRNAs have been shown to be involved in the control of a variety of physiological and pathological functions, including angiogenesis, regulation of oxidative stress and antioxidant mechanisms, nitric oxide release, vascular inflammation, and mediation of intercellular communication [42]. As a single layer of cells lining the entire vascular tree throughout the body, the microvascular endothelium plays an important role in the regulation of hemostasis and functions as a transport gatekeeper for the exchange of substances such as nutrients, hormones, and metabolic waste. It is also important to consider, however, that microvascular ECs in the capillary beds of different tissues are endowed with distinct structural, phenotypic, and functional attributes. Accordingly, organ-specific ECs have distinct expression patterns of gene clusters to support functions that are unique and critical to the development of that particular organ system, and tend to display distinct barrier properties, angiogenic capabilities, and metabolic profiles. Because pulmonary and cerebral edema are prominent pathologic features of human rickettsial infections, suggesting critical involvement of the microvasculature of the lungs and brain in disease pathogenesis, we compared the expression levels of miR-424 and miR-503 in dermal, pulmonary, and cerebral ECs. Interestingly, our findings reveal a similar pattern of significant downregulation for both miRNAs and a concordant increase in FGF2/FGFR1 expression in these ECs, with the most striking changes in cerebral ECs. These results are in general agreement with our previous findings that both macro- as well as microvascular ECs infected in vitro with spotted fever rickettsiae display relatively similar responses in regard to the activation of signal transduction cascades, expression and secretion of cytokines and chemokines, and induction of oxidative stress and consequent antioxidant mechanisms [22]. FGF2 and FGFR1 are primary regulators of EC proliferation and angiogenesis, and FGF2 exerts its proangiogenic effects via the activation of FGFR1.
Therefore, it is possible that in addition to facilitating the process of pathogen internalization, miRNA-governed enhancement of FGF2 and FGFR1 expression may promote endothelial proliferation, providing the host cellular niche critical for the survival, growth, and replication of pathogenic rickettsiae, being intracellular parasites. In addition, FGF2 also resides in the extracellular matrix (ECM), where it is tightly bound to heparan sulfate proteoglycans, which protects it from proteolysis and limits its diffusion through the extracellular matrix to potentiate its regulatory effects via signaling through FGFRs [48,49]. Also, a variety of microbes interact with different ECM proteins to effectively establish an infection, evade immune responses, and spread from cell to cell [50]. Since the ECM is intimately involved in cell adhesion and cell-to-cell communication, FGF2 may also play a role in facilitating the intercellular spread of rickettsiae. In addition, truncated forms of FGFR1 have been demonstrated to freely circulate in the blood [51], lending support to the possibility of its involvement in the systemic dissemination of rickettsiae during infection of the mammalian hosts. The miRNA profiling for markers of human diseases has been performed with success in biological samples such as cerebrospinal fluid, peripheral blood cells, plasma, serum, and whole blood. The discovery of circulating miRNAs in peripheral blood and the evidence for their stability in the blood has led to the completion of several investigations confirming the differential expression of specific miRNAs and their potential use as diagnostic and prognostic markers of human disease. The findings of this study reveal substantial downregulation of miR-322/503 expression in the serum of infected mice as compared to the corresponding control subjects, suggesting the usefulness of these two miRNA candidates as potential biomarkers of human rickettsial infections. In summary, the present study illustrates that miR-424/503 are significantly downregulated in three different types of ECs representing the primary targets of infection in humans, and that such downregulation may promote high levels of FGFR1 expression to facilitate subsequent pathogen invasion and/or dissemination. Further functional analysis to determine the precise roles of miR-424 and miR-503 in the host-pathogen interplay and pathophysiology should lead to the development of novel diagnostics and/or therapeutics to combat the scourge of human spotted fever rickettsioses. Author Contributions: A.S. and S.K.S. conceived the idea and designed the experiments; A.S., H.P.N. and J.P. performed the experiments; A.S. analyzed the data; and A.S. and S.K.S. wrote the manuscript. Funding: This work was supported in part by an exploratory research grant R21 AI117483 from the National Institute of Allergy and Infectious Diseases at the National Institutes of Health, Bethesda, MD, USA, to S.K.S. and A.S. Conflicts of Interest: The authors declare no conflict of interest.
2018-12-12T19:54:02.958Z
2018-12-01T00:00:00.000
{ "year": 2018, "sha1": "8a2af7bbb49e6296de8ef2b0edbb6edfef146a26", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/cells7120240", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8a2af7bbb49e6296de8ef2b0edbb6edfef146a26", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
239459908
pes2o/s2orc
v3-fos-license
Analysis of the status of treatment of benign thyroid diseases — a public health problem aggravated in the COVID-19 pandemic era Highlights • The COVID-19 pandemic has negatively impacted the surgical treatment of goiters. • The postponement of surgical treatment was worsened by the closure of hospital beds and poor public management. • With safety protocols, surgeries for goiter and benign thyroid conditions can still be performed. • Restarting surgeries for goiters will reduce the negative economic and patient health impact. Introduction ''Economics is not about things and tangible material objects; it is about men, their meanings and actions'' Ludwig Heinrich Edler von Mises (1881---1973). The COVID-19 pandemic forced the WHO to recommend postponing all elective surgeries worldwide, a recommendation supported by national and international specialty societies, impacting the surgical treatment of benign thyroid conditions, which include cysts, goiters, toxic goiters, adenomas, and thyroiditis. 1 The real problems resulting from all these necessary postponements have been little evaluated from the standpoint of the standard of care in this cohort of patients. This review article focuses only on the impact of the COVID-19 pandemic on the treatment of benign conditions of the thyroid gland and its implications, as of June 2021, in an emerging country such as Brazil. The present text aims to inform and to raise further important questions, which are the basis of a healthy scientific evolution to be later used in favor of patients. Contemporary review Benign thyroid diseases More than 90% of the nodules detected in the thyroid are indolent benign lesions, a very common finding in day-to-day clinical practice, and only about 4%---7% are actual carcinomas. In 2%---6% of cases, benign lesions are diagnosed by palpation on clinical examination, by ultrasound findings in 19%---35%, and by incidental post-mortem findings in 8%---65%. 2,3 Most nodules diagnosed today are approximately 1.0 cm in diameter, are either barely palpable or not palpable at all, and are accompanied by an absence of thyroid dysfunction. Based on previous studies, the prevalence of benign thyroid lesions is high: 6.4% in women and 1.5% in men, occurring in 59.2% of a Brazilian population and in 15.85 patients/1000 inhabitants in Korea. 4---7 Current studies demonstrate that these nodules present slow, progressive growth, limited to approximately 5 mm over 5 years for the main nodule in cases of multinodular disease. The majority of patients are asymptomatic, do not require treatment, and can simply be referred to clinical follow-up under ''wait-and-see'' policies for these benign thyroid nodules, in either a non-pandemic or a pandemic situation. 2 However, other studies report gradual growth of the nodules and note a progressive increase in gland volume in the form of multiple nodules, which correlates with a progressively growing goiter and an increased risk of hyperthyroidism; in such cases the benign condition can become symptomatic, also increasing the cardiologic risk due to arrhythmias and the overall surgical risk. In particular, patients with a family history of thyroid nodules and a high dietary intake of iodine were found to be susceptible; these studies concluded that the gradual increase in thyroid function in such cases is directly related to the increase in goiter volume, a worrying situation in the COVID-19 era.
8 Regarding treatment, in a study of 488 patients who underwent surgery for goiters over 15 years, about 25% of goiters were classified as large (between 106 and 176 g) and 75% as small (between 18 and 37 g); obesity and black race were found to be risk factors associated with goiter growth. 9 In another study, patients with goiter had a high risk of having grade-III obesity, with a strong causal link due to insulin resistance and increased leptin, leading to thyroid dysfunction and stimulation of thyroid parenchyma growth. 10 The onset of dysphagia, decubitus dyspnea, foreign-body sensation or globus pharyngeus, Pemberton's sign (a late sign of cervicothoracic goiter with vascular compression), and a multinodular goiter on palpation, whether associated with hyperthyroidism or not, indicates goiter compression of the respiratory-digestive tract. The associated condition of obstructive sleep apnea syndrome and other comorbidities also leads to an increased risk of mortality in these patients. 11 This requires prompt surgical treatment because of the risk of aspiration with recurrent pneumonia and difficult clinical management. Compressive goiters may also lead to difficult orotracheal intubation in the emergency room. The main indications for goiter surgery are compression of the digestive tract or airways, intrathoracic growth, marked growth during the follow-up period, vascular compression, cosmetic deformity, and risk of malignancy. 12 Although these are life-threatening nonmalignant thyroid conditions, their surgical treatment has been delayed during the COVID-19 pandemic, when only selected cases with malignant histology have been operated on, with an evident negative impact on overall patient health, as discussed below. Regarding the type of surgery, in a Brazilian study of 1789 patients who underwent goiter surgery, the goiter was found to be benign in 62.4% (n = 1116) of patients undergoing total thyroidectomy and 37.6% (n = 673) of those undergoing partial thyroidectomy. The authors concluded that total thyroidectomy is effective, showing benefit over partial thyroidectomy with the same rate of complications, i.e., 12.2% transient hypoparathyroidism, 1.6% definitive hypoparathyroidism, 1.9% transient lower laryngeal nerve injury, and 0.35% definitive lower laryngeal nerve injury. 13 Thyroidectomy is the formal treatment indication in medium to large goiters, and total thyroidectomy is superior to partial operation. This surgery is considered safe in experienced hands, with low complication rates: less than 1% for definitive dysphonia due to injury of the laryngeal nerve and about 1% for hypoparathyroidism. 14 Although most surgeons --- endocrine surgeons, otorhinolaryngologists, head and neck surgeons, and general surgeons --- are familiar with neck anatomy, thyroid anatomy, and the technical principles of thyroid surgery, and believe that thyroid surgery is a ''safe procedure'', during the COVID-19 pandemic this perception was severely challenged, as more patients with ''bigger, complicated disease'' were submitted to surgery, increasing the overall complication rate, which is of relevant concern. The COVID-19 pandemic aspects The World Health Organization (WHO) was notified of the first cases of an atypical pneumonia in Wuhan, China, on December 31, 2019; the new virus was officially named SARS-CoV-2 on February 11, 2020; the disease it causes was named coronavirus disease 2019 (COVID-19); and the WHO declared COVID-19 a pandemic on March 11, 2020.
The virus can spread by direct, indirect, or close contact (up to 1 m), through aerosol or micro-aerosol particles and salivary and respiratory secretions, when talking, coughing, or sneezing. People become infected when the virus comes into contact with the mucous membranes of the mouth, nose, or eyes. According to data from the Johns Hopkins University Coronavirus Resource Center, updated in real time, as of July 03, 2021, 183,274,120 people worldwide had been infected with SARS-CoV-2, with 3,966,575 deaths overall; 18,687,469 of those infected were located in Brazil. The country had recorded 521,952 deaths, with an incidence in June 2021 of 65,165 cases/day and a daily mortality of 1,879 patients, resulting in a mean mortality rate of 2.46%, ranging from 1.66% in the Federal District (the administrative area of the national capital city, Brasília) to 5.64% in the State of Rio de Janeiro. 15 As the pandemic evolved, knowledge accumulated, and the health safety policies officially implemented in Brazil included social distancing, personal protective equipment (PPE; use of face masks, alcohol-based hand sanitizer, etc.), and vaccination. Hospitals were also forced to redirect and reschedule surgeries. Only time-sensitive surgeries, such as oncological surgeries and cases with an imminent risk to life (urgency or emergency), remained indicated and were performed, postponing surgery for benign thyroid conditions as per recommendations published by the Brazilian Society of Head and Neck Surgery (SBCCP) and by others. 15---18 Recently, new SARS-CoV-2 variants have been detected worldwide, first in Africa (B.1.351) and England (B.1.1.7), and later in Brazil (strains P1, P2, and B.1.1.33, carrying the N501Y and E484K mutations in the viral spike protein). Variant P1 was dubbed the ''Brazilian variant'' and is characterized by a greater capacity for dissemination and contamination, a higher mortality index, and a higher growth speed relative to the common SARS-CoV-2 strains. The P1 strain resulted in a second wave of the pandemic that caused chaos and health system collapse in the Northern Brazilian city of Manaus in January 2021. 19,20 The direct consequence of these variants is that the effects of the disease are no longer as limited to elderly and vulnerable patients (those with comorbidities) as with the original SARS-CoV-2. Now young patients without comorbidities are also becoming rapidly and severely ill, requiring longer hospitalization times in both regular and ICU beds. Some of these cases result in death, with a mortality rate of almost 80% for patients who required orotracheal intubation and assisted ventilation. In Brazil, this situation has resulted in an unprecedented occupation of hospital beds, including ICU beds, and increased consumption of hospital supplies (anesthetics, antibiotics, anticoagulants, corticoids, etc.) by both the public health system (the Brazilian Unified Health System, or SUS in the Portuguese acronym) and the Private Health System (PHS). This collapse --- longer hospital stays, more occupied beds, and increased consumption of hospital supplies --- led to the indefinite postponement of all surgery for benign thyroid conditions in favor of cancer patients only; this policy was not officially stated by the federal government but was adopted by the states, justified in part by the pandemic, although the immense failures of the Brazilian health system, in both SUS and the PHS, are also partly to blame.
A similar hospital bed occupancy situation was observed in the U.S. from November 2020 to early February 2021, but the rates decreased after that period, probably reflecting the initiation of systematic mass vaccination. 15 The enormous physical and emotional stress of the entire team of health professionals who directly and indirectly care for these cases should also be mentioned, including physicians of various specialties, nurses, nursing assistants, physical therapists, and psychologists. 21---24 The Public and Private Health System aspects In a technical study by the Brazilian National Confederation of Municipalities, it was reported that over a period of 10 years (2008---2018), more than 40,000 hospital beds were lost, with more closures (23,091 beds) than openings (18,000 new PHS beds) in the case of SUS, even before the pandemic. It was also found that the national average is 2.1 beds/1000 inhabitants, which is below the WHO recommendation of 2.5 to 3 beds/1000 inhabitants. Additionally, these beds are unevenly distributed between SUS and the PHS, and also between Brazilian states and municipalities (Fig. 1). 25 The decrease of hospital beds over the years, the shrinking of the health sector, corruption in the health system, embezzlement of public funds, a decrease in the population's income, with about 12 million patients migrating from the PHS to SUS, and many other factors all culminated in a health crisis, overwhelming the facilities for patient care in both health systems. This further resulted in a bottleneck at hospitals, reducing vacancies for both hospitalization and surgery. 26---29 Another issue is the persistent decrease in reimbursement from SUS to hospitals, up to 77% for the cost of surgeries, a fact that may justify the administrative decisions of managers to reduce admissions to contain the financial crisis in public hospitals. 30 Another aspect is the difference in medical remuneration paid by SUS when total thyroidectomy is performed for cancer vs for benign goiters, a distortions of the SUS remuneration table that needs to be corrected. 31 The difference is very large and in favor of oncological cases, even though it is often more laborious to surgically operate on a large goiter than a thyroid cancer. This difference in remuneration for both medical fees and hospital services makes surgeons and hospitals reluctant to treat patients with benign thyroid diseases. The SBCCP is aware of this issue and has been participating in a group that proposes corrections in the SUS remuneration table, but this is a long and arduous task. Informally, based on unpublished but dynamic data that vary for each head and neck surgery department in Brazil, it has been observed that the duration of care (defined as the time from the first consultation to the hospital discharge after surgery) of patients with benign thyroid diseases (goiter, adenomas) used to vary nationwide from 1 year and 6 months to up to 3 years before pandemic, worsening during COVID-19 era. As cited, the WHO have come up with a recommendation to postpone all elective surgeries worldwide, supported by national and international class societies; however, head and neck surgery was not included in the initial recommendation. 32,33 During the COVID-19 pandemic, the safety of the patient and the head and neck surgery team should be paramount, and several guidelines have been posteriorly published in this regard. 17,34 However, to date, there is no consensus on the best conduct in cases of goiter and benign diseases. 
However, the results of some guidelines that helped in the selection of patients to be submitted to surgery during the COVID-19 pandemic can be extrapolated. 35 Other specific guidelines have oriented the ideal moment selection of patient to surgery, taking into account the capacity of the hospital network and the sufficient availability of hospital supplies. 18 The prioritization of head and neck surgeries during the pandemic was discussed in a Stanford University article suggesting a 30---90 day postponement of goiter surgeries considered less urgent, i.e., in cases with no signs of airway commitment. 36 It was noted that out of an estimated worldwide volume of 4,845,604 head and neck surgeries scheduled during the pandemic, 3,950,551 (81.5%) were cancelled. In Brazil, almost 247,444 of total surgeries were cancelled in a period of 12 weeks, and no specifically thyroid surgery number were cited; thus, generating a large social impact, which will have a long-term negative effect in health and economic terms, also be harmful for the patients by worsening their disease. 37 The SBCCP recommendations for the safe resumption of surgical procedures are a landmark in this specialty, by guiding surgeons about surgery indications during the pandemic. The recommendations include the postponement of goiter and benign thyroid surgeries (Item 2), except for ''goiters with airway compression and evident respiratory symptoms, and Graves' disease with contraindications to clinical treatment''. 16,38 In this respect, the SBCCP took an excellent step by not mitigating the consequences of the postponement of benign thyroid surgeries, understanding the harmful effects of this postponement, and always weighing the risk/benefit ratio during the COVID-19 pandemic. Similar to other manifestations regarding the cancellation or postponement of head and neck surgeries, 39 the present article aims to alert professionals involved in the treatment of these patients (surgeons, endocrinologists, and multidisciplinary teams) about the risks of further postponing benign thyroid surgeries, even considering all the other associated hindrances mentioned above. There is a real risk that the postponement of surgery for benign conditions as a result of the pandemic will cause a deterioration in patients' physical and mental health status, increasing work disabilities and burdening society by increasing the social cost. This could be catastrophic in emergent countries where this increased disease-related social expenditure on surgical treatment may increase the risk of national impoverishment. 37,40 There are those who argue, not without reason, that the slow growth of goiter and benign thyroid conditions are not an aggravating factor for the patient, and it is possible to wait until safer times. However, this line of argument does not take into consideration the fact that goiters with ''borderline'' indication for surgery today will, in a short time, either grow, clinically deteriorate or cause compression, thus making the surgical act more exhaustive, more time-consuming, and with a higher risk of complications; almost 35% of growing goiters becoming substernal (grade II---III) will require sternotomy, with high risk to dysphonia (OR = 14.29) and transient hypoparathyroidism (OR = 4.48); 41---45 the risk of surgical morbidity rate in toxic goiter is nearly 37% 44 and the mortality can be high as 3.1% in the postoperative period. 
43 For goiters that already have surgical indication due to compressive symptoms or hyperthyroidism, every effort should be made for their prompt surgical resolution, while following the due safety measures mentioned before. Thus, surgeries can be performed during the pandemic, as long as safety protocols for both the patient and the surgical team are adopted, to decrease the impact of postponing surgeries, both from the patient's health and from the economic point of view, as reported by other surgical departments, respecting, of course, the main present law stablished at the time. 46,47 Proposed suggestions We believe the following suggestions can help guide the surgical outcome in cases of goiter and benign thyroid conditions, although they are not intended to provide an extensive coverage of the subject. The authors also present a flowchart with the propositions for better management of goiter, retrosternal goiter, toxic goiter, and benign thyroid cases, classified at the physical and radiologic exam of the thyroid gland (Fig. 2). The goiter were defined as retrosternal according to the Eschapase's definition (3 cm below the sternal manubrium), 44 or grade II and III degree of extension according to the cross-section imaging CT system, 42 with good correlation to intraluminal compression. The severity of clinical decompensation was divided in: Stable ---when all clinical and laboratory findings were the same as previous clinical patient data; Mild ---when there were laboratory alterations as low TSH with normal Thyroid hormones concentrations or mild symptoms of respiratory discomfort; and Severe ---when there were severe clinical signals and symptoms alterations, as weight of loss, palpitations, hypertension, tremor of extremities, fatigue, anxiety, shortness of breath, evident airway and pharynx worsening compression, low TSH and high thyroid hormones concentrations compared to previous exams. Cases of patients with goiter and benign pathologies should be placed in a separate list of thyroidectomies, excluding carcinomas, to give an estimate of the number of patients. Once patients are selected, they may be recalled and re-evaluated for symptoms, imaging, and laboratory tests, in an attempt to re-select those with worse symptoms or conditions to receive immediate treatment. If there is doubt regarding the severity of a case in this new selection, a clinical meeting of the department can be held to discuss the case, similarly to the ''tumor boards'' for oncologic cases. Cases should then be reclassified according to severity as either stable, mild decompensation, and severe decompensation. This will allow the medical team to establish the degree of urgency required in the treatment process: immediate action, short-term scheduling, or long-term scheduling. Once this new list is obtained, the feasibility of treating cases must be jointly discussed with the hospital manager or head of the department, while assessing the current situation of beds, staff, and supplies needed for surgery, as well as the current situation of COVID-19 in that hospital and municipality, aiming at patient and staff safety. Signature of a free and informed consent form. Negative COVID19 tests taken 48---72 h before surgery for all patients must be submitted. Admission to an isolation ward, following the safety standards for COVID-19. Surgery with minimal aerosols and precautions for the anesthesia and surgical team. 
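The case re-evaluation described above amounts to a simple decision rule: reclassify severity (stable, mild, or severe decompensation) and map it to a scheduling priority. The sketch below encodes that flow only as an illustration of the proposed flowchart (Fig. 2); the data structure, field names, and function names are our own assumptions and are not part of the published protocol.

```python
from dataclasses import dataclass

@dataclass
class GoiterCase:
    """Minimal description of a benign thyroid case awaiting surgery (illustrative fields)."""
    airway_compression: bool       # evident respiratory symptoms / worsening airway compression
    thyroid_hormones_high: bool    # high thyroid hormones with low TSH vs. previous exams
    tsh_low_only: bool             # isolated low TSH with normal hormone concentrations
    mild_respiratory_discomfort: bool

def classify_decompensation(case: GoiterCase) -> str:
    """Reclassify severity as described in the text: stable, mild, or severe."""
    if case.airway_compression or case.thyroid_hormones_high:
        return "severe"
    if case.tsh_low_only or case.mild_respiratory_discomfort:
        return "mild"
    return "stable"

def scheduling_priority(severity: str) -> str:
    """Map severity to the proposed degree of urgency in the treatment process."""
    return {
        "severe": "immediate action",
        "mild": "short-term scheduling",
        "stable": "long-term scheduling",
    }[severity]

case = GoiterCase(airway_compression=False, thyroid_hormones_high=False,
                  tsh_low_only=True, mild_respiratory_discomfort=False)
severity = classify_decompensation(case)
print(severity, "->", scheduling_priority(severity))  # mild -> short-term scheduling
```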
Conclusion It can be concluded that surgeries for goiter and benign thyroid conditions can still be performed during the COVID-19 pandemic, as long as safety protocols are followed for the patient and the medical team. This will help in reducing the negative economic impact as well as the impact on patient health. Conflicts of interest The authors declare no conflicts of interest.
2021-10-23T13:11:03.343Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "bee727e5f9c4c41ea65699c6a924202bcb89acf6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.bjorl.2021.08.008", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3b54a86ab636a2537a1529365cd22bcbf3eeb4cf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
148571537
pes2o/s2orc
v3-fos-license
Return to the Sea, Get Huge, Beat Cancer: An Analysis of Cetacean Genomes Including an Assembly for the Humpback Whale (Megaptera novaeangliae) Abstract Cetaceans are a clade of highly specialized aquatic mammals that include the largest animals that have ever lived. The largest whales can have ∼1,000× more cells than a human, with long lifespans, leaving them theoretically susceptible to cancer. However, large-bodied and long-lived animals do not suffer higher risks of cancer mortality than humans—an observation known as Peto’s Paradox. To investigate the genomic bases of gigantism and other cetacean adaptations, we generated a de novo genome assembly for the humpback whale (Megaptera novaeangliae) and incorporated the genomes of ten cetacean species in a comparative analysis. We found further evidence that rorquals (family Balaenopteridae) radiated during the Miocene or earlier, and inferred that perturbations in abundance and/or the interocean connectivity of North Atlantic humpback whale populations likely occurred throughout the Pleistocene. Our comparative genomic results suggest that the evolution of cetacean gigantism was accompanied by strong selection on pathways that are directly linked to cancer. Large segmental duplications in whale genomes contained genes controlling the apoptotic pathway, and genes inferred to be under accelerated evolution and positive selection in cetaceans were enriched for biological processes such as cell cycle checkpoint, cell signaling, and proliferation. We also inferred positive selection on genes controlling the mammalian appendicular and cranial skeletal elements in the cetacean lineage, which are relevant to extensive anatomical changes during cetacean evolution. Genomic analyses shed light on the molecular mechanisms underlying cetacean traits, including gigantism, and will contribute to the development of future targets for human cancer therapies. Introduction Cetaceans (whales, dolphins, and porpoises) are highly specialized mammals adapted to an aquatic lifestyle. Diverging from land-dwelling artiodactyls during the late Paleocene or Traits evolved for life in the ocean, including the loss of hind limbs, changes in skull morphology, physiological adaptations for deep diving, and underwater acoustic abilities including echolocation make these species among the most diverged mammals from the ancestral eutherian (Berta et al. 2015). One striking aspect of cetacean evolution is the large body sizes achieved by some lineages, rivaled only by the gigantic terrestrial sauropod dinosaurs (Benson et al. 2014). Cetaceans were not limited by gravity in the buoyant marine environment and evolved multiple giant forms, exemplified today by the largest animal that has ever lived: the blue whale (Balaenoptera musculus). Based on evidence from fossils, molecules, and historical climate data, it has been hypothesized that oceanic upwelling during the Pliocene-Pleistocene supported the suspension feeding typical of modern baleen whales, allowing them to reach their gigantic sizes surprisingly close to the present time (Slater et al. 2017). Although the largest whales arose relatively recently, large body size has evolved multiple times throughout the history of life (Heim et al. 2015), including in 10 out of 11 mammalian orders (Baker et al. 2015). 
Animal gigantism is therefore a recurring phenomenon that is seemingly governed by available resources and natural selection (Vermeij 2016), where positive fitness consequences lead to repeated directional selection toward larger bodies within populations (Kingsolver and Pfennig 2004). However, there are tradeoffs associated with large body size, including a higher lifetime risk of cancer due to a greater number of somatic cell divisions over time (Peto et al. 1975;Nunney 2018). Surprisingly, although cancer should be a body mass-and age-related disease, large and long-lived animals do not suffer higher cancer mortality rates than smaller, shorter-lived animals (Abegglen et al. 2015). This is a phenomenon known as Peto's Paradox (Peto et al. 1975). To the extent that there has been selection for large body size, there likely has also been selection for cancer suppression mechanisms that allow an organism to grow large and successfully reproduce. Recent efforts have sought to understand the genomic mechanisms responsible for cancer suppression in gigantic species (Abegglen et al. 2015;Caulin et al. 2015;Keane et al. 2015;Sulak et al. 2016). An enhanced DNA damage response in elephant cells has been attributed to $20 duplications of the tumor suppressor gene TP53 in elephant genomes (Abegglen et al. 2015;Sulak et al. 2016). The bowhead whale (Balaena mysticetus) is a large whale that may live more than 200 years (George et al. 1999), and its genome shows evidence of positive selection in many cancer-and aging-associated genes including ERCC1, which is part of the DNA repair pathway (Keane et al. 2015). Additionally, the bowhead whale genome contains duplications of the DNA repair gene PCNA, as well as LAMTOR1, which helps control cellular growth (Keane et al. 2015). Altogether, these results suggest that 1) the genomes of larger and longer-lived mammals may hold the key to multiple mechanisms for suppressing cancer, and 2) as the largest animals on Earth, whales make very promising sources of insight for cancer suppression research. Cetacean comparative genomics is a rapidly growing field, with 13 complete genome assemblies available on NCBI as of late 2018, including the following that were available at the onset of this study: the common minke whale (Balaenoptera acutorostrata) (Yim et al. 2014), bottlenose dolphin (Tursiops truncatus), orca (Orcinus orca) (Foote et al. 2015), and sperm whale (Physeter macrocephalus) (Warren et al. 2017). In addition, the Bowhead Whale Genome Resource has supported the genome assembly for that species since 2015 (Keane et al. 2015). However, to date, few studies have used multiple cetacean genomes to address questions about genetic changes that have controlled adaptations during cetacean evolution, including the evolution of cancer suppression. Here, we provide a comparative analysis that is novel in scope, leveraging whole-genome data from ten cetacean species, including six cetacean genome assemblies, and a de novo genome assembly for the humpback whale (Megaptera novaeangliae). Humpback whales are members of the family Balaenopteridae (rorquals) and share a recent evolutionary history with other ocean giants such as the blue whale and fin whale (Balaenoptera physalus) ( Arnason et al. 2018). They have an average adult length of more than 13 m (Clapham and Mead 1999), and a lifespan that may extend to 95 years (Chittleborough 1959;Gabriele et al. 2010), making the species an excellent model for Peto's Paradox research. 
Our goals in this study were 3-fold: 1) to provide a de novo genome assembly and annotation for the humpback whale that will be useful to the cetacean research and mammalian comparative genomics communities; 2) to leverage the genomic resource and investigate the molecular evolution of cetaceans in terms of their population demographics, phylogenetic relationships and species divergence times, and the genomics underlying cetacean-specific adaptations; and 3) to determine how selective pressure variation on genes involved with cell cycle control, cell signaling and proliferation, and many other pathways relevant to cancer may have contributed to the evolution of cetacean gigantism. The latter has the potential to generate research avenues for improving human cancer prevention, and perhaps even therapies. Results and Discussion Sequencing, Assembly, and Annotation of the Humpback Whale Genome We sequenced and assembled a reference genome for the humpback whale using high-coverage paired-end and mate-pair libraries (table 1, NCBI BioProject PRJNA509641) and obtained an initial assembly that was 2.27 Gb in length, with 24,319 scaffolds, a contig N50 length of 12.5 kb and a scaffold N50 length of 198 kb. Final sequence coverage for the initial assembly was ~76×, assuming an estimated genome size of 2.74 Gb from a 27-mer spectrum analysis. HiRise scaffolding using proximity ligation (Chicago) libraries (Putnam et al. 2016, table 1, NCBI BioProject PRJNA509641) resulted in a final sequence coverage of ~102×, greatly improving the contiguity of the assembly by reducing the number of scaffolds to 2,558 and increasing the scaffold N50 length 46-fold to 9.14 Mb (table 2). The discrepancy between estimated genome size and assembly length has been observed in other cetacean genome efforts (Keane et al. 2015), and is likely due to the highly repetitive nature of cetacean genomes (Arnason and Widegren 1989). With 95-96% of near-universal orthologs from OrthoDB v9 (Simão et al. 2015) present in the assembly, as well as 97% of a set of core eukaryotic genes (Parra et al. 2009), the estimated gene content of the humpback whale genome assembly suggests a high-quality genome with good gene representation (table 1). To aid in genome annotation, we carried out skin transcriptome sequencing, which resulted in 281,642,354 reads (NCBI BioProject PRJNA509641). These were assembled into a transcriptome that includes 67% of both vertebrate and laurasiatherian orthologs, and we predicted 10,167 protein-coding genes with likely ORFs that have BLAST homology to SwissProt proteins (UniProt Consortium 2015). The large number of missing genes from the transcriptome may be due to the small proportion of genes expressed in skin. Therefore, we also assessed homology with ten mammalian proteomes from NCBI and the entire SwissProt database, and used ab initio gene predictors (see Materials and Methods, supplementary Methods, and supplementary fig. 1, Supplementary Material online) for gene calling. The final genome annotation resulted in 24,140 protein-coding genes, including 5,446 with 5′-untranslated regions (UTRs) and 6,863 with 3′-UTRs. We detected 15,465 one-to-one orthologs shared with human and 14,718 with cow.
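The contiguity statistics quoted above have a simple definition: the N50 is the length L such that contigs (or scaffolds) of length L or greater contain at least half of the total assembly, and coverage is total sequenced bases divided by the estimated genome size. A minimal sketch of both calculations follows; the scaffold lengths are invented, and the read total is back-calculated from the stated ~76× coverage on a 2.74 Gb genome, so this is illustrative rather than the actual humpback whale data.

```python
def n50(lengths):
    """Return N50: the length L such that pieces >= L cover >= 50% of the assembly."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length

def expected_coverage(total_sequenced_bases, genome_size):
    """Average sequencing depth; e.g., ~208 Gb of reads over a 2.74 Gb genome is ~76x."""
    return total_sequenced_bases / genome_size

# Toy scaffold lengths (bp); real assemblies contain thousands of entries
scaffolds = [9_140_000, 5_000_000, 2_500_000, 1_200_000, 800_000, 350_000]
print("N50:", n50(scaffolds))
print("coverage: %.1fx" % expected_coverage(208e9, 2.74e9))
```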
When we compared gene annotations across a sample of mammalian genomes, the humpback whale and bottlenose dolphin genome assemblies had on average significantly shorter introns (P = 0.04, unpaired t-test, supplementary table 1, Supplementary Material online), which may in part explain the smaller genome size of cetaceans compared with most other mammals (Zhang and Edwards 2012). We estimated that between ~30% and ~39% of the humpback whale genome comprised repetitive elements (table 3). Masking the assembly with a library of known mammalian elements resulted in the identification of more repeats than a de novo method, suggesting that clade-specific repeat libraries are highly valuable when assessing repetitive content. The most abundant group of transposable elements in the humpback whale genome was the autonomous non-long terminal repeat (LTR) retrotransposons (long interspersed nuclear elements or LINEs), which comprised nearly 20% of the genome, most of which belong to the LINE-1 clade as is typical of placental mammals (Boissinot and Sookdeo 2016). Large numbers of nonautonomous non-LTR retrotransposons in the form of short interspersed nuclear elements (SINEs) were also detected; in particular, over 3% of the genome belonged to mammalian-wide interspersed repeats (MIRs). Although the divergence profile of de novo-derived repeat annotations in humpback whale included a decreased average genetic distance within transposable element subfamilies compared with the database-derived repeat landscape, both repeat libraries displayed a spike in the numbers of LINE-1 and SINE retrotransposon subfamilies near 5% divergence, as did the repeat landscapes of the bowhead whale, orca and dolphin, suggesting recent retrotransposon activity in cetaceans (supplementary figs. 2 and 3, Supplementary Material online). Slow DNA Substitution Rates in Cetaceans and the Divergence of Modern Whale Lineages We computed a whole-genome alignment (WGA) of 12 mammals including opossum, elephant, human, mouse, dog, cow, sperm whale, bottlenose dolphin, orca, bowhead whale, common minke whale, and humpback whale (supplementary table 2, Supplementary Material online), and employed human gene annotations to extract 2,763,828 homologous 4-fold degenerate (4D) sites. A phylogenetic analysis of the 4D sites yielded the recognized evolutionary relationships (fig. 1A), including reciprocally monophyletic Mysticeti and Odontoceti. When we compared the substitutions per site along the branches of the phylogeny, we found a larger number of substitutions along the mouse lineage and comparatively few along the cetacean branches, which may be attributed to long generation times or slower mutation rates in cetaceans (Jackson et al. 2009). Germline mutation rates are related to somatic mutation rates within species (Milholland et al. 2017); therefore, it is possible that slow mutation rates may limit neoplastic progression and contribute to cancer suppression in cetaceans, which is a prediction of Peto's Paradox (Caulin and Maley 2011). We also obtained 152 single-copy orthologs (single-gene ortholog families or SGOs, see Materials and Methods and supplementary Methods, Supplementary Material online) identified in at least 24 out of 28 species totaling 314,844 bp, and reconstructed gene trees that were binned and analyzed using a species tree method that incorporates incomplete lineage sorting (see Materials and Methods, Zhang et al. 2018). The species tree topology (supplementary fig.
4, Supplementary Material online) also included full support for the accepted phylogenetic relationships within Cetacea, as well as within Mysticeti and Odontoceti. Lower local posterior probabilities for two of the internal branches within laurasiatherian mammals were likely due to the extensive gene tree heterogeneity that has complicated phylogenetic reconstruction of the placental mammalian lineages (Tarver et al. 2016). We estimated divergence times in a Bayesian framework using the 4D and SGO data sets independently in MCMCtree (Yang and Rannala 2006), resulting in similar posterior distributions and parameter estimates, with overlapping highest posterior densities for the estimated divergence times of shared nodes across the 4D and SGO phylogenies (supplementary figs. 5 and 6 and tables 4 and 5, Supplementary Material online). We estimated that the time to the most recent common ancestor (TMRCA) of placental mammals was 100-114 Ma during the late Cretaceous, the TMRCA of cow and cetaceans (Cetartiodactyla) was 52-65 Ma during the Eocene or Paleocene, the TMRCA of extant cetaceans was 29-35 Ma during the early Oligocene or late Eocene (between the two data sets), the TMRCA of baleen whales was placed 9-26 Ma in the early Miocene or middle Oligocene, and the TMRCA of humpback and common minke whales (family Balaenopteridae) was 4-22 Ma during the early Pliocene or the Miocene ( fig. 2A). Arnason et al. (2018), we estimated that the largest humpback whale population sizes were !2 Ma during the Pliocene-Pleistocene transition, followed by a steady decline until $1 Ma. The PSMC trajectories of the two humpback whales began to diverge $100,000 years ago, and the estimated confidence intervals from 100 bootstraps for each PSMC analysis were nonoverlapping in the more recent bins. Both humpback PSMC trajectories suggested sharp population declines beginning $25,000-45,000 years ago. However, interpreting inferred PSMC plots of past "demographic" changes is nontrivial in a globally distributed species connected by repeated, occasional gene flow such as humpback whales (Baker et al. 1993;Palsbøll et al. 1995;Jackson et al. 2014). The apparent changes in effective population size may represent changes in abundance, interocean connectivity or a combination of both (Hudson 1990;Palsbøll et al. 2013). Several genetic and genome-based studies of cetaceans have demonstrated how past large-scale oceanic changes have affected the evolution of cetaceans (Steeman et al. 2009), including baleen whales ( Arnason et al. 2018). Although the population genetic structure of humpback whales in the North Atlantic is not fully resolved, the level of genetic divergence among areas is very low (Larsen et al. 1996;Valsecchi et al. 1997). Therefore, the difference between the two humpback whale PSMC trajectories may be due to recent admixture (Baker et al. 1993;Palsbøll et al. 1995;Ruegg et al. 2013;Jackson et al. 2014), intraspecific variation and population structure (Mazet et al. 2016), as well as errors due to differences in sequence coverage (Nadachowska-Brzyska et al. 2016). Segmental Duplications in Cetacean Genomes Contain Genes Involved in Apoptosis and Tumor Suppression Mammalian genomes contain gene-rich segmental duplications (Alkan et al. 2009), which may represent a powerful mechanism by which new biological functions can arise (Kaessmann 2010). 
We employed a read-mapping approach to annotate large segmental duplications (LSDs) !10 kb in the humpback whale genome assembly and ten additional cetaceans for which whole-genome shotgun data were available (see Materials and Methods, supplementary Methods, and supplementary table 6, Supplementary Material online). We found that cetacean genomes contained on average 318 LSDs (656 SD), which comprised $9.9 Mb (61.8 Mb) and averaged $31 kb in length (62.4 kb). We identified 10,128,534 bp (0.4%) of the humpback whale genome assembly that comprised 293 LSDs averaging 34,568 bp in length. Fifty-one of the LSDs were shared across all 11 cetacean genomes (supplementary fig. 9, Supplementary Material online). In order to determine the potential role of segmental duplications during the evolution of cetacean-specific phenotypes, we identified 426 gene annotations that overlapped cetacean LSDs, including several genes annotated for viral response. Other genes on cetacean LSDs were involved in aging, in particular DLD in the bowhead whale and KCNMB1 in the blue whale; this may reflect relevant adaptations contributing to longevity in two of the largest and longest-lived mammals (Ohsumi 1979;George et al. 1999). Multiple tumor suppressor genes were located on cetacean LSDs, including 1) SALL4 in the sei whale; 2) TGM3 and SEMA3B in the orca; 3) UVRAG in the sperm whale, North Atlantic right whale, and bowhead whale; and 4) PDCD5, which is upregulated during apoptosis (Zhao et al. 2015) and was found in LSDs of all 11 queried cetacean genomes. PDCD5 pseudogenes have been identified in the human genome, and several Ensembl-hosted mammalian genomes contain one-to-many PDCD5 orthologs; however, we annotated only a single copy of PDCD5 in the humpback whale assembly. This suggests that in many cases, gene duplications are collapsed during reference assembly but can be retrieved through shotgun read-mapping methods (Carbone et al. 2014). We annotated fully resolved SALL4 and UVRAG copy number variants in the humpback whale genome assembly, and by mapping the RNA-Seq data from skin to the genome assembly and annotation (see Materials and methods), we found that three annotated copies of SALL4 were expressed in humpback whale skin, as were two copies of UVRAG. We also found that $1.45 Mb (6923 kb) of each cetacean genome consists of LSDs not found in other cetaceans, making them species-specific, which averaged $24.4 kb (614.6 kb) in length (supplementary table 7 and fig. 9, Supplementary Material online). The minke whale genome contained the highest number of genes on its species-specific LSDs (32). After merging the LSD annotations for the two humpback whales, we identified 57 species-specific LSDs for this species, comprising $977 kb and containing nine duplicated genes. Humpback whale-specific duplications included the genes PRMT2, which is involved in growth and regulation and promotes apoptosis, SLC25A6 which may be responsible for the release of mitochondrial products that trigger apoptosis, and NOX5, which plays a role in cell growth and apoptosis (UniProt Consortium 2015). Another tumor suppressor gene, TPM3, was duplicated in the humpback whale assembly based on our gene annotation. However, these extranumerary copies of TPM3 were not annotated on any humpback whale LSDs, lacked introns, and contained mostly the same exons, suggesting retrotransposition rather than segmental duplication as a mechanism for their copy number expansion (Kaessmann 2010). 
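Read-mapping approaches of the kind described above generally call candidate duplications from windows whose mapped read depth greatly exceeds the genome-wide background, since extra copies in the sequenced genome attract proportionally more reads when mapped to a single reference copy. The sketch below illustrates that idea on made-up numbers; it is not the LSD-detection pipeline used in the study, and the window size, fold-change threshold, and minimum run length are assumptions.

```python
import statistics

def duplicated_windows(window_depths, min_fold=2.0, min_run=10):
    """Flag runs of consecutive windows whose read depth suggests extra copies.

    window_depths : mean mapped read depth per fixed-size window along a scaffold
    min_fold      : minimum depth relative to the genome-wide median (2.0 ~ >=2 copies)
    min_run       : minimum number of consecutive elevated windows
                    (e.g., 10 x 1 kb windows ~ a 10 kb candidate duplication)
    """
    background = statistics.median(window_depths)
    calls, start = [], None
    for i, depth in enumerate(window_depths):
        if depth >= min_fold * background:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_run:
                calls.append((start, i))
            start = None
    if start is not None and len(window_depths) - start >= min_run:
        calls.append((start, len(window_depths)))
    return calls

# 1 kb windows: ~100x background with a 12-window stretch near 230x
depths = [100] * 40 + [230] * 12 + [100] * 40
print(duplicated_windows(depths))  # [(40, 52)] -> one candidate duplication ~12 kb long
```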
According to the RNA-Seq data, all seven copies of TPM3 are expressed in humpback whale skin. Duplications of the tumor suppressor gene TP53 have been inferred as evidence for cancer suppression in elephants (Abegglen et al. 2015; Caulin et al. 2015; Sulak et al. 2016). During our initial scans for segmental duplications, we noticed a large pileup of reads in the MAKER-annotated humpback whale TP53 (data not shown). We PCR-amplified, cloned, and sequenced this region from a humpback whale DNA sample, inferring four haplotypes that differ at two bases (supplementary Methods, Supplementary Material online). After manually annotating TP53 in the humpback whale, we determined that these nucleotide variants fell in noncoding regions of the gene; one occurred upstream of the start codon whereas the other occurred between the first and second coding exons. Other genomic studies have concluded that TP53 is not duplicated in cetaceans (Yim et al. 2014; Keane et al. 2015; Sulak et al. 2016). We consider the possibility of at least two TP53 homologs in the genome of the humpback whale, although more data are required to resolve this. Regardless, cancer suppression likely arose in different mammalian lineages via multiple molecular etiologies. Overall, our results reveal several copy number expansions in cetaceans related to immunity, aging, and cancer, suggesting that cetaceans are among the large mammals that have evolved specific adaptations related to cancer resistance.

Accelerated Regions in Cetacean Genomes Are Significantly Enriched with Pathways Relevant to Cancer
In order to determine genomic loci underlying cetacean adaptations, we estimated regions in the 12-mammal WGA with elevated substitution rates that were specific to the cetacean branches of the mammalian phylogeny. These genomic regions departed from neutral expectations in a manner consistent with either positive selection or relaxed purifying selection along the cetacean lineage (Pollard et al. 2010). We successfully mapped 3,260 protein-coding genes with functional annotations that overlap cetacean-specific accelerated regions, which were significantly enriched for Gene Ontology (GO) categories such as cell-cell signaling (GO:0007267) and cell adhesion (GO:0007155) (table 4). Adaptive change in cell signaling pathways could have maintained the ability of cetaceans to prevent neoplastic progression as they evolved larger body sizes. Adhesion molecules are integral to the development of cancer invasion and metastasis, and these results suggest that cetacean evolution was accompanied by selection pressure changes on both intra- and extracellular interactions. Cetacean-specific genomic regions with elevated substitution rates were also significantly enriched in genes involved in B-cell-mediated immunity (GO:0019724), likely due to the important role of regulatory cells, which modulate the immune response not only to pathogens but perhaps to tumors as well. In addition, cetacean-specific acceleration in regions controlling complement activation (GO:0006956) may have provided better immunosurveillance against cancer and further protective measures against malignancies (Pio et al. 2014).
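Fold enrichments and P-values of the kind reported for these GO categories are typically obtained from a 2×2 contingency test against the genomic background. A minimal sketch using SciPy's Fisher's exact test; the gene counts below are hypothetical, not the study's actual numbers.

from scipy.stats import fisher_exact

def pathway_enrichment(hits_in_path, hits_total, path_size, background_size):
    """Fisher's exact test plus fold enrichment for one pathway or GO term.

    hits_in_path    : selected genes (e.g., dN/dS > 1 or accelerated) in the pathway
    hits_total      : all selected genes
    path_size       : pathway genes present in the background set
    background_size : all genes tested
    """
    table = [
        [hits_in_path, hits_total - hits_in_path],
        [path_size - hits_in_path,
         background_size - hits_total - (path_size - hits_in_path)],
    ]
    odds, p = fisher_exact(table, alternative="greater")
    expected = hits_total * path_size / background_size
    return hits_in_path / expected, p

# Hypothetical numbers only.
fold, p = pathway_enrichment(hits_in_path=8, hits_total=435, path_size=90, background_size=19000)
print(f"fold enrichment = {fold:.1f}, P = {p:.1e}")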
We also found that accelerated regions in cetacean genomes were significantly enriched for genes controlling sensory perception of smell (GO:0007608), perhaps due to the relaxation of purifying selection in olfactory regions, which were found to be underrepresented in cetacean genomes (Yim et al. 2014).

Selection Pressures on Protein-Coding Genes during Cetacean Evolution Point to Many Cetacean Adaptations, Including Cancer Suppression
To gain further insight into the genomic changes underlying the evolution of large body sizes in cetaceans, we employed phylogenetic targeting to maximize statistical power in pairwise evolutionary genomic analyses (Arnold and Nunn 2010). This resulted in maximal comparisons between 1) the orca and the bottlenose dolphin and 2) the humpback whale and common minke whale. Despite their relatively recent divergences (e.g., the orca:bottlenose dolphin divergence is similar in age to that of the human:chimpanzee divergence, see fig. 2), the species pairs of common minke:humpback and orca:dolphin have each undergone extremely divergent evolution in body size and longevity (fig. 3). Humpback whales are estimated to weigh up to four times as much as common minke whales and are reported to have almost double the longevity, and orcas may weigh almost 20 times as much as bottlenose dolphins, also with almost double the lifespan (Tacutu et al. 2012). In order to offset the tradeoffs associated with the evolution of large body size, with the addition of many more cells and longer lifespans since the divergence of each species pair, we hypothesize that necessary adaptations for cancer suppression should be encoded in the genomes, as predicted by Peto's Paradox (Tollis et al. 2017). For each pairwise comparison, we inferred pairwise genome alignments with the common minke whale and orca genome assemblies as targets, respectively, and extracted protein-coding orthologous genes. We then estimated the ratio of nonsynonymous substitutions per nonsynonymous site to synonymous substitutions per synonymous site (dN/dS) for each pair of orthologs. Among an estimated 435 genes with dN/dS > 1 in the common minke:humpback pairwise comparison, we detected eight genes belonging to the JAK-STAT signaling pathway (3.9-fold enrichment, P = 1.1E-3, Fisher's exact test) and seven involved in cytokine-cytokine receptor interaction (4.1-fold enrichment, P = 1.7E-2, Fisher's exact test), suggesting positive selection acting on pathways involved in cell proliferation. These genes included multiple members of the tumor necrosis factor subfamily such as TNFSF15, which inhibits angiogenesis and promotes the activation of caspases and apoptosis (Yu et al. 2001). A dN/dS > 1 was also detected in seven genes involved in the negative regulation of cell growth (GO:0030308, 3.1-fold enrichment, P = 8.03E-3, Fisher's exact test), and five genes involved in double-strand break repair (GO:0006302, 4.0-fold enrichment, P = 8.03E-3, Fisher's exact test). Although these results suggest the evolution of amino acid differences since the split between common minke and humpback whales in genes affecting cell growth, proliferation, and maintenance, the GO category enrichment tests did not pass significance criteria after Bonferroni corrections for multiple testing. We found 18 genes that are mutated in cancers according to the COSMIC v85 database (Forbes et al. 2015)
in the common minke:humpback comparison, including a subset of five annotated as tumor suppressor genes, oncogenes, or fusion genes in the Cancer Gene Census (CGC; Futreal et al. 2004). These results are consistent with our accelerated region analysis based on the WGA, which showed accelerated evolution in immunity pathways (above, see table 4). For instance, eight genes (CD58, CD84, KLF13, SAMSN1, CTSG, GPC3, LTF, and SPG21) annotated for immune system process (GO:0002376) were found in cetacean-specific accelerated genomic regions and also had a pairwise dN/dS > 1 in the orca:dolphin comparison, mirroring other recent genomic analyses of immunity genes in orcas (Ferris et al. 2018). Our results also suggest that the evolution of gigantism and long lifespans in cetaceans was accompanied by selection acting on many genes related to somatic maintenance and cell signaling. As a more accurate assessment of selection pressure variation acting on protein-coding genes across cetacean evolution, we conducted an additional assessment of dN/dS using branch-site codon models implemented in codeml (Yang 1998). We employed extensive filtering of the branch-site results, including both false discovery rate (FDR) and Bonferroni corrections for multiple testing (see Materials and Methods), and conservatively estimated that 450 protein-coding genes were subjected to positive selection in cetaceans. These include 54 genes along the ancestral cetacean branch, 12 along the ancestral toothed whale branch, 84 along the ancestral baleen whale branch, 74 in the ancestor of common minke and humpback whales, and 212 unique to the humpback whale branch (fig. 4A). Cetacean positively selected genes were annotated for functions related to extensive changes in anatomy, growth, cell signaling, and cell proliferation (fig. 4B). For instance, in the branch-site models for humpback whale, positively selected genes are enriched for several higher-level mouse limb phenotypes including those affecting the limb long bones (MP:0011504, 15 genes, FDR-corrected P-value = 0.001), and more specifically the hind limb stylopod (MP:0003856, seven genes, FDR = 0.024) (Thewissen et al. 2006). Enriched mouse phenotypes are also related to the unique cetacean axial skeleton (MP:0002114, 25 genes, FDR = 0.016), most notably in the skull, including craniofacial bones (MP:0002116, 17 genes, FDR = 0.018), teeth (MP:0002100, nine genes, FDR = 0.004), and the presphenoid (MP:0030383, three genes, FDR = 0.003). Past analyses of the cetacean basicranial elements revealed that the presphenoid was extensively modified along the cetacean lineage (Ichishima 2016). Positively selected genes unique to the humpback whale were significantly enriched for a single biological process: regulation of cell cycle checkpoint (GO:1901976; 18.57-fold enrichment, P = 0.02 after Bonferroni correction for multiple testing), suggesting positive selection in pathways that control responses to endogenous or exogenous sources of DNA damage and limit cancer progression (Kastan and Bartek 2004). We detected a significant number of protein-protein interactions among humpback whale-specific positively selected genes (number of nodes = 204, number of edges = 71, expected number of edges = 51, P = 0.004; supplementary fig. 10, Supplementary Material online), including genes that are often coexpressed and involved in DNA repair, DNA replication, and cell differentiation.
For instance, we identified significant interactions between DNA2, which encodes a helicase involved in the maintenance of DNA stability, and WDHD1, which acts as a replication initiation factor. Another robust protein interaction network was detected between a number of genes involved in the genesis and maintenance of primary cilia. The highest scoring functional annotation clusters resulted in key words such as ciliopathy (seven genes) and cell projection (16 genes), and GO terms such as cilium morphogenesis, cilium assembly, ciliary basal body, and centriole. The primary cilia of multicellular eukaryotes control cell proliferation by mediating cell-extrinsic signals and regulating cell cycle entry, and defects in ciliary regulation are common in many cancers (Michaud and Yoder 2006). Our branch-site test results indicated that the evolution of cetacean gigantism was accompanied by strong selection on many pathways that are directly linked to cancer (fig. 4C). We identified 33 genes that are mutated in human cancers (according to the COSMIC database) that were inferred as subjected to positive selection in the humpback whale lineage, including the known tumor suppressor genes ATR, which is a protein kinase that senses DNA damage upon genotoxic stress and activates cell cycle arrest, and RECK, which suppresses metastasis (Forbes et al. 2015). Multiple members of the PR domain-containing gene family (PRDM) evolved under positive selection across cetaceans, including the tumor suppressor genes PRDM1, whose truncation leads to B-cell malignancies, and PRDM2, which regulates the expression and degradation of TP53 (Shadat et al. 2010) and whose forced expression causes apoptosis and cell cycle arrest in cancer cell lines (Fog et al. 2012). In baleen whales, ERCC5, which is a DNA repair protein that partners with BRCA1 and BRCA2 to maintain genomic stability (Trego et al. 2016) and suppresses UV-induced apoptosis (Clément et al. 2006), appeared to have been subjected to positive selection as well. Among the cancer-related genes subjected to positive selection in cetaceans, we identified two with identical amino acid changes among disparate taxa united by the traits of large body size and/or extreme longevity. Specifically, PRDM13 is a tumor suppressor gene that acts as a transcriptional repressor, and we found identical D→E amino acid substitutions in this gene in sperm whale, dolphin, orca, and humpback but also manatee (Trichechus manatus) and African elephant (Loxodonta africana), which are large-bodied afrotherian mammals that have been the focus of cancer suppression research (Abegglen et al. 2015; Sulak et al. 2016). Secondly, POLE is a cancer-related gene that participates in DNA repair and replication, and we observed one I→V substitution shared among orca, dolphin, bowhead, humpback, and common minke whale, but also elephant, as well as a second I→V substitution shared with these cetaceans and the little brown bat (Myotis lucifugus). Vesper bats such as M. lucifugus are known for their exceptional longevity relative to their body size, and have been proposed as model organisms in senescence and cancer research (Foley et al. 2018). Parallel changes in cancer-related genes across these phylogenetically distinct mammals suggest natural selection has acted on similar pathways that limit neoplastic progression in large and long-lived species (Tollis et al. 2017).
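Screening for parallel amino acid replacements of the kind described above (e.g., the shared D→E change in PRDM13) can be done by scanning alignment columns for a residue fixed in the convergent taxa but different from the residue fixed elsewhere. A toy Python sketch with a made-up alignment; real analyses must also account for ancestral state reconstruction and alignment quality.

def shared_substitutions(alignment, focal_taxa, outgroup_taxa):
    """Find columns where all focal taxa share one residue that differs from
    the residue fixed in the outgroup taxa (a crude screen for parallel changes).

    alignment : dict of taxon name -> aligned amino acid sequence (equal lengths)
    """
    length = len(next(iter(alignment.values())))
    hits = []
    for i in range(length):
        focal = {alignment[t][i] for t in focal_taxa}
        other = {alignment[t][i] for t in outgroup_taxa}
        if len(focal) == 1 and len(other) == 1 and focal != other and "-" not in focal | other:
            hits.append((i, other.pop(), focal.pop()))   # (column, background residue, shared residue)
    return hits

# Toy alignment with invented sequences; column 2 shows a shared D -> E change.
aln = {
    "orca":     "MKEVL",
    "humpback": "MKEVL",
    "elephant": "MKEVL",
    "mouse":    "MKDVL",
    "human":    "MKDVL",
}
print(shared_substitutions(aln, ["orca", "humpback", "elephant"], ["mouse", "human"]))  # [(2, 'D', 'E')]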
Peto's Paradox and Cancer in Whales and Other Large Mammals
Large body size has evolved numerous times in mammals, and although it is exemplified in some extant cetaceans, gigantism is also found in afrotherians, perissodactyls, and carnivores (Baker et al. 2015). Our results suggest that cancer suppression in large and long-lived mammals has also evolved numerous times. However, none of these species is completely immune to cancer. Elephants have at least a 5% lifetime risk of cancer mortality (Abegglen et al. 2015), which is far less than humans, but detecting cancer, and estimating cancer incidence and mortality rates, in wild cetaceans is more challenging. Mathematical modeling predicting the lifetime risk of colorectal cancer in mice and humans yielded a rate of colorectal cancer at 50% in blue whales by age 50, and 100% by age 90. This high rate of cancer mortality is an unlikely scenario, and taken with our genomic results presented here it suggests that cetaceans have evolved mechanisms to limit their overall risk of cancer. Among baleen whales, benign neoplasms of the skin, tongue, and central nervous system have been reported in humpback whales, and ovarian carcinomas and lymphomas have been detected in fin whales (Newman and Smith 2006). (From the fig. 4 caption: rectangle size reflects the semantic uniqueness of each GO term, measuring the degree to which the term is an outlier when compared semantically to the whole list of GO terms; in fig. 4C, cancer gene names and functions from COSMIC found to be evolving under positive selection in the cetacean branch-site models carry superscripts T, tumor suppressor gene, O, oncogene, F, fusion gene, and asterisks indicate P-values following FDR correction for multiple testing: **P < 0.01, ***P < 0.001, ****P < 0.0001.) Among smaller cetaceans, one unusually well-documented case study concluded that 27% of beluga whales (Delphinapterus leucas) found dead in the St. Lawrence estuary had cancer, which may have contributed to 18% of the total mortality in that population (Martineau et al. 2002). The authors suggested that the high degree of polycyclic aromatic hydrocarbons released into the estuary by nearby industry may have contributed to this elevated cancer risk (Martineau et al. 2002). By contrast, the larger baleen whale species in the Gulf of St. Lawrence appear to have lower contaminant burdens, likely due to ecological differences (Gauthier et al. 1997). Interestingly, unlike in human cells, homologous recombination is uninhibited in North Atlantic right whale lung cells following prolonged exposure to the human lung carcinogen particulate hexavalent chromium (Browning et al. 2017), suggesting adaptations for high-fidelity DNA repair in whales. In this study, we provide a de novo reference assembly for the humpback whale, one of the more well-studied giants living on Earth today. The humpback whale genome assembly is highly contiguous and contains a comparable number of orthologous genes to other mammalian genome projects. Our comparisons with other complete cetacean genomes confirm the results of other studies which concluded that rorqual whales likely began diversifying during the Miocene (Slater et al. 2017; Arnason et al. 2018).
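The expectation that cancer risk should scale with cell number and lifespan, which underlies the modeling cited above, can be illustrated with a naive multistage calculation. The sketch below is purely illustrative: the mutation rate, division count, and number of required driver hits are invented, and the model ignores clonal expansion, selection, and tissue architecture.

def lifetime_cancer_risk(n_cells, divisions, mu, k_hits):
    """Naive multistage model: probability that at least one cell lineage
    accumulates k_hits driver mutations, given a per-division, per-gene
    mutation rate mu and a fixed number of divisions per lineage.
    All parameter values used below are purely illustrative.
    """
    p_hit_per_lineage = (mu * divisions) ** k_hits      # crude independence assumption
    p_no_cancer = (1.0 - p_hit_per_lineage) ** n_cells
    return 1.0 - p_no_cancer

# Same per-cell parameters, bodies differing only in cell number:
for species, cells in [("mouse", 3e9), ("human", 3e13), ("blue whale", 3e17)]:
    risk = lifetime_cancer_risk(cells, divisions=100, mu=2e-7, k_hits=3)
    print(f"{species:>10}: predicted lifetime risk ~ {risk:.3f}")

With identical per-cell parameters, the naive prediction climbs toward certainty in a whale-sized body, which is the intuition behind Peto's Paradox; the observed rarity of whale cancers implies compensatory adaptations of the kind discussed above.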
We found indications of positive selection on many protein-coding genes suggestive of adaptive change in pathways controlling the mammalian appendicular and cranial skeletal elements, which are relevant to highly specialized cetacean phenotypes, as well as in many immunity genes and pathways that are known to place checks on neoplastic progression. LSDs in cetacean genomes contain many genes involved in the control of apoptosis, including known tumor suppressor genes, and skin transcriptome results from humpback whale suggest many gene duplications, whether through segmental duplication or retrotransposition, are transcribed and hence likely functional. We also use genome-wide evidence to show that germline mutation rates may be slower in cetaceans than in other mammals, which has been suggested in previous studies (Jackson et al. 2009), and we suggest as a corollary that cetacean somatic mutation rates may be lower as well. These results are consistent with predictions stemming from Peto's Paradox (Peto et al. 1975; Caulin and Maley 2011), which posited that gigantic animals have evolved compensatory adaptations to cope with the negative effects of orders of magnitude more cells and long lifespans that increase the number of cell divisions and cancer risk over time. Altogether, the humpback whale genome assembly will aid comparative oncology research that seeks to improve therapeutic targets for human cancers, as well as provide a resource for developing useful genomic markers that will aid in the population management and conservation of whales.

Materials and Methods

Tissue Collection and DNA Extraction
Biopsy tissue was collected from an adult female humpback whale ("Salt," NCBI BioSample SAMN1058501) in the Gulf of Maine, western North Atlantic Ocean using previously described techniques (Lambertsen 1987; Palsbøll et al. 1991) and flash frozen in liquid nitrogen. We extracted DNA from skin using the protocol for high-molecular-weight genomic DNA isolation with the DNeasy Blood and Tissue purification kit (Qiagen). Humpback whales can be individually identified and studied over time based on their unique ventral fluke pigmentation (Katona and Whitehead 1981). Salt was specifically selected for this study because of her 35-year prior sighting history, which is among the lengthiest and most detailed for an individual humpback whale (Center for Coastal Studies, unpublished data).

De Novo Assembly of the Humpback Whale Genome
Using a combination of paired-end and mate-pair libraries, de novo assembly was performed using Meraculous 2.0.4 (Chapman et al. 2011) with a kmer size of 47. Reads were trimmed for quality, sequencing adapters, and mate-pair adapters using Trimmomatic (Bolger et al. 2014). The genome size of the humpback whale was estimated using the short reads, by counting the frequency of kmers of length 27 occurring in the 180-bp data set, estimating the kmer coverage, and using the following formula: genome size = total kmers ÷ kmer coverage.

Chicago Library Preparation and Sequencing
Four Chicago libraries were prepared as described previously (Putnam et al. 2016). Briefly, for each library, ~500 ng of high-molecular-weight genomic DNA (mean fragment length >50 kb) was reconstituted into chromatin in vitro and fixed with formaldehyde. Fixed chromatin was digested with MboI or DpnII, the 5′-overhangs were repaired with biotinylated nucleotides, and blunt ends were ligated. After ligation, crosslinks were reversed and the DNA purified from protein.
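Returning to the k-mer based genome size estimate above (genome size = total kmers ÷ kmer coverage), the following toy Python sketch illustrates the calculation; the simulated reads and the simple histogram-peak estimate of k-mer coverage are stand-ins for what dedicated k-mer counters such as Jellyfish report.

import random
from collections import Counter

def estimate_genome_size(reads, k=27):
    """Estimate genome size as total k-mers / k-mer coverage, where k-mer
    coverage is taken as the peak of the k-mer count histogram (low-count
    error tail ignored)."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    total_kmers = sum(counts.values())
    histogram = Counter(counts.values())
    kmer_coverage = max((c for c in histogram if c > 1), key=lambda c: histogram[c], default=1)
    return total_kmers / kmer_coverage

# Toy simulation: a 2 kb random "genome" sequenced to ~30x with 100 bp reads.
random.seed(1)
genome = "".join(random.choice("ACGT") for _ in range(2000))
reads = [genome[s:s + 100] for s in (random.randrange(0, 1900) for _ in range(600))]
print(round(estimate_genome_size(reads, k=27)))   # roughly 2,000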
Biotin that was not internal to ligated fragments was removed from the purified DNA. The DNA was then sheared to ~350 bp mean fragment size and sequencing libraries were generated using NEBNext Ultra (New England BioLabs) enzymes and Illumina-compatible adapters. Biotin-containing fragments were isolated using streptavidin beads before PCR enrichment of each library. The libraries were sequenced on an Illumina HiSeq 2500 platform.

Scaffolding the De Novo Assembly with HiRise
The input de novo assembly, shotgun reads, and Chicago library reads were used as input data for HiRise, a software pipeline designed specifically for using Chicago data to scaffold genome assemblies (Putnam et al. 2016). Shotgun and Chicago library sequences were aligned to the draft input assembly using a modified SNAP read mapper (http://snap.cs.berkeley.edu). The separations of Chicago read pairs mapped within draft scaffolds were analyzed by HiRise to produce a likelihood model for genomic distance between read pairs, and the model was used to identify putative misjoins and to score prospective joins. After scaffolding, shotgun sequences were used to close gaps between contigs.

Assessing the Gene Content of the Humpback Whale Assembly
The expected gene content of the assembly was evaluated using the Core Eukaryotic Genes Mapping Approach (Parra et al. 2009), which searches the assembly for 458 highly conserved proteins and reports the proportion of 248 of the most highly conserved orthologs that are present in the assembly. We also used the Benchmarking Universal Single Copy Orthologs (BUSCO v2.0.1; Simão et al. 2015), which analyzes genome assemblies for the presence of 3,023 genes conserved across vertebrates, as well as a set of 6,253 genes conserved across laurasiatherian mammals.

Transcriptome Sequencing and Assembly
In order to aid in our gene-finding efforts for the humpback whale genome assembly and to measure gene expression, we generated transcripts from skin tissue by extracting total RNA using the QIAzol Lysis Reagent (Qiagen), followed by purification on RNeasy spin columns (Qiagen). RNA integrity and quantity were determined on the Agilent 2100 Bioanalyzer (Agilent) using the manufacturer's protocol. The total RNA was treated with DNase using DNase mix from the RecoverAll Total Nucleic Acid Isolation kit (Applied Biosystems/Ambion). The RNA library was prepared and sequenced by the Genome Technology Center at the University of California Santa Cruz, including cDNA synthesis with the Ovation RNA-Seq system V2 (Nugen) and RNA amplification as described previously (Tariq et al. 2011). We used 0.5-1 mg of double-stranded cDNA for library preparation, sheared using the Covaris S2 and size-selected for 350-450 bp using an automated electrophoretic DNA fractionation system (LabChipXT, Caliper Life Sciences). Paired-end sequencing libraries were constructed using the Illumina TruSeq DNA Sample Preparation Kit. Following library construction, samples were quantified using the Bioanalyzer and sequenced on the Illumina HiSeq 2000 platform to produce 2 × 100 bp sequencing reads. We then used Trinity (Grabherr et al. 2011) to assemble the adapter-trimmed RNA-Seq reads into transcripts.
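Expression levels derived from such RNA-Seq data are summarized as transcripts per million (TPM) in the Analysis of Gene Expression subsection below. A minimal sketch of that normalization; the counts, lengths, and copy labels here are made up for illustration.

def transcripts_per_million(counts, lengths):
    """Convert read counts per transcript into TPM.

    counts  : dict of transcript id -> mapped read count
    lengths : dict of transcript id -> transcript length in bases
    TPM first normalizes counts by transcript length (reads per kilobase),
    then rescales so the values sum to one million.
    """
    rates = {t: counts[t] / (lengths[t] / 1_000.0) for t in counts}
    total = sum(rates.values())
    return {t: rate / total * 1_000_000 for t, rate in rates.items()}

# Hypothetical example with three transcripts (e.g., three expressed gene copies).
counts = {"copy_1": 900, "copy_2": 300, "copy_3": 50}
lengths = {"copy_1": 3000, "copy_2": 3000, "copy_3": 1000}
print(transcripts_per_million(counts, lengths))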
Genome Annotation We generated gene models for the humpback whale using multiple iterations of MAKER2 (Holt and Yandell 2011) which incorporated 1) direct evidence from the Trinity-assembled transcripts, 2) homology to NCBI proteins from ten mammals (human, mouse, dog, cow, sperm whale, bottlenose dolphin, orca, bowhead whale, common minke whale, and baiji) and UniProtKB/Swiss-Prot (UniProt Consortium 2015), and 3) ab initio gene predictions using SNAP (11/29/2013 release;Korf 2004) and Augustus v3.0.2 (Stanke et al. 2008). A detailed description of the annotation pipeline is provided in the supplementary Methods, Supplementary Material online. Final gene calls were annotated functionally by BlastP similarity to UniProt proteins (UniProt Consortium 2015) with an e-value cutoff of 1e-6. Repeat Annotation and Evolutionary Analysis To analyze the repetitive landscape of the humpback whale genome, we used both database and de novo modeling methods. For the database method, we ran RepeatMasker v4.0.5 (http://www.repeatmasker.org, accessed August 21, 2017) (Smit et al. 2015a) on the final assembly, indicating the "mammalia" repeat library from RepBase (Jurka et al. 2005). For the de novo method, we scanned the assembly for repeats using RepeatModeler v1.0.8 (http://www.repeatmasker.org) (Smit et al. 2015b), the results of which were then classified using RepeatMasker. To estimate evolutionary divergence within repeat subfamilies in the humpback whale genome, we generated repeat-family-specific alignments and calculated the average Kimura-2-parameter divergence from consensus within each family, correcting for high mutation rates at CpG sites with the calcDivergenceFromAlign.pl RepeatMasker tool. We compared the divergence profile of humpback whale and bowhead whale by completing parallel analyses, and the repetitive landscapes of orca and bottlenose dolphin are available from the RepeatMasker server (http:// www.repeatmasker.org/species, accessed August 21, 2017). Analysis of Gene Expression Using RNA-Seq Splice-wise mapping of RNA-Seq reads against the humpback whale genome assembly and annotation was carried out using STAR v2.4 (Dobin et al. 2013), and we counted the number of reads mapping to gene annotations. We also mapped the skin RNA-Seq data to the database of annotated humpback whale transcripts using local alignments with bowtie v2.2.5 (Langmead and Salzberg 2012), and used stringtie v1.3.4 (Pertea et al. 2015) to calculate gene abundances by transcripts per million. Analysis of Segmental Duplications in Cetacean Genomes In order to detect LSDs in several cetacean genomes, we applied an approach based on depth of coverage (Alkan et al. 2009). To this end, we used whole-genome shotgun sequence data from the current study as well as from other cetacean genomics projects. All data were mapped against the humpback whale reference assembly. A detailed description of the segmental duplication analysis is provided in the supplementary Methods, Supplementary Material online. Whole-Genome Alignments We generated WGAs of 12 mammals (supplementary table 2, Supplementary Material online). First, we generated pairwise syntenic alignments of each species as a query to the human genome (hg19) as a target using LASTZ v1.02 (Harris 2007), followed by chaining to form gapless blocks and netting to rank the highest scoring chains (Kent et al. 2003). The pairwise alignments were used to construct a multiple sequence alignment with MULTIZ v11.2 (Blanchette et al. 2004) with human as the reference species. 
We filtered the MULTIZ alignment to only contain aligned blocks from at least 10 out of the 12 species (81% complete).

Phylogenetic Reconstruction Using Single-Copy Orthologs
We downloaded the coding DNA sequences from 28 publicly available mammalian genome assemblies (supplementary table 9, Supplementary Material online) and used VESPA (Webb et al. 2017) to obtain high-confidence SGOs (supplementary Methods, Supplementary Material online). For phylogenetic analysis, we filtered the SGO data set to include only loci that were represented by at least 24 out of the 28 mammalian species (86% complete) and reconstructed each gene tree using maximum likelihood in PhyML v3.0 (Guindon et al. 2010) with an HKY85 substitution model and 100 bootstrap replicates to assess branch support. The gene trees were then binned and used to reconstruct a species tree using the accurate species tree algorithm (ASTRAL-III v5.6; Zhang et al. 2018). ASTRAL utilizes the multispecies coalescent model that incorporates incomplete lineage sorting, and finds the species tree stemming from bipartitions predefined by the gene trees. Branch support for the species tree was assessed with local posterior probabilities, and branch lengths were presented in coalescent units, where shorter branch lengths indicate greater gene tree discordance (Sayyari and Mirarab 2016).

Rates of Molecular Evolution and Divergence Time Estimation
We used multiple approaches on independent data sets to estimate rates of molecular evolution and the divergence times of the major mammalian lineages including six modern whales with complete genome assemblies. We first focused on 4-fold degenerate (4D) sites, which are positions within codon alignments where substitutions result in no amino acid change and can be used to approximate the neutral rate of evolution (Kumar and Subramanian 2002). We used the Ensembl human gene annotation to extract coding regions from the 12-mammal WGA using msa_view in PHAST v1.4 (Hubisz et al. 2011). We reconstructed the phylogeny with the 4D data as a single partition in RAxML v8.3 (Stamatakis 2014) under the GTRGAMMA substitution model and assessed branch support with 10,000 bootstraps. Rates of molecular evolution were estimated on the 4D data set with the semiparametric penalized likelihood (PL) method implemented in r8s v1.8 (Sanderson 2002, 2003). A detailed description of the PL method is given in the supplementary Methods, Supplementary Material online. We also used the approximate likelihood calculation in MCMCtree (Yang and Rannala 2006) to estimate divergence times using independent data sets: 1) the above-mentioned 4D data set derived from the WGA, as well as 2) a set of the SGOs that included 24 out of 28 sampled taxa (86%) and was partitioned into three codon positions. We implemented the HKY85 substitution model, multiple fossil-based priors (supplementary table 10, Supplementary Material online; Mitchell 1989; Benton et al. 2015; Hedges et al. 2015), and independent rates ("clock = 3") along branches. All other parameters were set as defaults. For each MCMCtree analysis, we ran the analysis three times with different starting seeds and modified the Markov chain Monte Carlo (MCMC) length and sampling frequency in order to achieve proper chain convergence, monitored with Tracer v1.7. We achieved proper MCMC convergence on the 4D data set after discarding the first 500,000 steps as burn-in and sampling every 2,000 steps until we collected 20,000 samples.
We achieved proper MCMC convergence on the SGO data set after discarding the first 500,000 steps as burn-in and sampling every 10,000 steps until we collected 10,000 samples.

Demographic Analysis
We used the pairwise sequentially Markovian coalescent (PSMC; Li and Durbin 2011) to reconstruct the population history of North Atlantic humpback whales, including the individual sequenced in the current study (downsampled to ~20× coverage) and a second individual sequenced at ~17× coverage in Arnason et al. (2018). A detailed description of the PSMC analysis is provided in the supplementary Methods, Supplementary Material online.

Nonneutral Substitution Rates in Cetacean Genomes
In order to identify genomic regions controlling cetacean-specific adaptations, we used phyloP (Pollard et al. 2010) to detect loci in the 12-mammal WGA that depart from neutral expectations (see supplementary Methods, Supplementary Material online). We then collected accelerated regions that overlapped human whole gene annotations (hg19) using bedtools intersect (Quinlan and Hall 2010) and tested for the enrichment of GO terms using the PANTHER analysis tool available at the Gene Ontology Consortium website (GO Ontology database, last accessed June 2017) (Gene Ontology Consortium 2015).

Detection of Protein-Coding Genes Subjected to Positive Selection
In order to measure selective pressures acting on protein-coding genes during cetacean evolution, with an emphasis on the evolution of cancer suppression, we estimated the ratio of nonsynonymous to synonymous substitutions (dN/dS). To maximize statistical power in pairwise comparisons given the number of available cetacean genomes (six, last accessed September 2017), we implemented phylogenetic targeting (Arnold and Nunn 2010) assuming a phylogeny from a mammalian supertree. To select genome assemblies most suitable for assessing Peto's Paradox, we weighted scores for contrasts with large changes in the same direction for both body mass and maximum longevity. Trait values were taken from panTHERIA (Jones et al. 2009), and we selected maximal pairings based on the standardized summed scores. We then generated pairwise genome alignments as described above based on the phylogenetic targeting results. For each pairwise genome alignment, we stitched gene blocks in Galaxy (Blankenberg et al. 2011) according to the target genome annotations, producing alignments of one-to-one orthologs, which were filtered to delete frameshift mutations and replace internal stop codons with gaps. We then estimated pairwise dN/dS for every orthologous gene pair with KaKs_Calculator v2.0 (Wang et al. 2010). To link genes with dN/dS > 1 to potential phenotypes, we used orthologous human Ensembl gene IDs to collect GO terms in BioMart (Kinsella et al. 2011) and tested for enrichment of overrepresented GO terms. We also used codon-based models to test for selective pressure variation along branches of the cetacean phylogeny in comparison to other mammal lineages, also known as the branch-site test (Yang 2007). First, the known species phylogeny (Morgan et al. 2013; Tarver et al. 2016) was pruned to correspond to the species present in each SGO family. SGO nucleotide alignments that contained more than seven species were analyzed for selective pressure variation; this reduces the risk of detecting false positives (Anisimova et al. 2001, 2002).
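The multiple-testing filters applied to the branch-site likelihood ratio tests (Bonferroni and Benjamini-Hochberg FDR corrections, described below and in the Results) amount to simple thresholding rules on the raw P-values. A minimal Python sketch with toy P-values; the alpha level and the number of tests are placeholders.

import numpy as np

def correct_pvalues(pvals, alpha=0.05):
    """Bonferroni and Benjamini-Hochberg (FDR) corrections for a set of
    likelihood ratio test P-values. Returns boolean arrays marking the
    genes retained under each criterion."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    bonferroni_keep = p < alpha / m                      # family-wise error control
    order = np.argsort(p)
    ranked = p[order]
    bh_thresholds = alpha * (np.arange(1, m + 1) / m)    # step-up thresholds
    below = ranked <= bh_thresholds
    fdr_keep = np.zeros(m, dtype=bool)
    if below.any():
        cutoff = ranked[np.max(np.where(below))]         # largest P passing its threshold
        fdr_keep = p <= cutoff
    return bonferroni_keep, fdr_keep

# Toy P-values from hypothetical likelihood ratio tests.
pvals = [1e-6, 3e-4, 0.004, 0.02, 0.2, 0.6]
bonf, fdr = correct_pvalues(pvals)
print("Bonferroni keeps:", bonf.sum(), "genes; FDR keeps:", fdr.sum(), "genes")

As the sketch shows, the Bonferroni threshold is stricter than the FDR step-up rule, which is why the study reports the Bonferroni-filtered gene set as the conservative one.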
In general, the branch-site test is a powerful yet conservative approach (Gharib and Robinson-Rechavi 2013), although model misspecification and alignment errors can greatly increase the number of false positives (Anisimova et al. 2001(Anisimova et al. , 2002. Recent studies have concluded that many published inferences of adaptive evolution using the branch site test may be affected by artifacts (Venkat et al. 2018). Therefore, extensive filtering is necessary in order to make reasonably sound conclusions from results of the branch site test. A detailed description of all tested models and the filtering process are given in the supplementary Methods, Supplementary Material online. In total, 1,152 gene families were analyzed. We carried out the branchsite test using PAML v4.4e (Yang 2007). The following five branches were assessed as foreground: humpback whale, the most recent common ancestor (MRCA) of the common minke and humpback whales, MRCA of baleen whales, MRCA of toothed whales, and the MRCA of all whales (cetacean stem lineage). For each model, we kept all genes that met a significance threshold of P < 0.05 after a Bonferonni correction for multiple hypothesis testing using the total number of branch genes (five foreground branches*1,152 genes). We also corrected the raw P-values from the likelihood ratio tests of every gene by the FDR correction where q ¼ 0.05 (Benjamini and Hochberg 1995). The Bonferroni correction is more conservative than the FDR but sufficient for multiple hypothesis testing of lineage-specific positive selection; in these cases, the FDR results in higher probabilities of rejecting true null hypotheses (Anisimova and Yang 2007). Therefore, we used Bonferroni-corrected results in downstream analyses but also report FDR-corrected P-values in figure 4C. Genes were identified based on the human ortholog (Ensembl gene ID), and we performed gene annotation enrichment analysis and functional annotation clustering with DAVID v6.8 (Huang et al. 2009a(Huang et al. , 2009b, as well as semantic clustering of GO terms using REVIGO (Supek et al. 2011). We also searched for interactions of positively selected proteins using STRING v10.5 (Szklarczyk et al. 2017) with default parameters, and tested for the enrichment of overrepresented GO terms as above and for associated mouse phenotypes using modPhEA (Weng and Liao 2017). Data Availability All data that contributed to the results of the study are made publicly available. The genomic sequencing, RNA sequencing, as well as the genome assembly for humpback whale (GCA_004329385.1) are available under NCBI BioProject PRJNA509641. The gene annotation, orthologous gene sets, positive selection results, segmental duplication annotations, and whole-genome alignments used in this study are available at the Harvard Dataverse (https://doi.org/10.7910/DVN/ ADHX1O). Supplementary Material Supplementary data are available at Molecular Biology and Evolution online.
2019-05-10T13:05:45.412Z
2019-05-09T00:00:00.000
{ "year": 2019, "sha1": "fa35b24da54137f98a394bbdfbbdfb529cb0f65a", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/mbe/article-pdf/36/8/1746/29001220/msz099.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "fa35b24da54137f98a394bbdfbbdfb529cb0f65a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
55601300
pes2o/s2orc
v3-fos-license
Asteroseismic Constraints on the Models of Hot B Subdwarfs: Convective Helium-Burning Cores Asteroseismology of non-radial pulsations in Hot B Subdwarfs (sdB stars) offers a unique view into the interior of core-helium-burning stars. Ground-based and space-borne high precision light curves allow for the analysis of pressure and gravity mode pulsations to probe the structure of sdB stars deep into the convective core. As such asteroseismological analysis provides an excellent opportunity to test our understanding of stellar evolution. In light of the newest constraints from asteroseismology of sdB and red clump stars, standard approaches of convective mixing in 1D stellar evolution models are called into question. The problem lies in the current treatment of overshooting and the entrainment at the convective boundary. Unfortunately no consistent algorithm of convective mixing exists to solve the problem, introducing uncertainties to the estimates of stellar ages. Three dimensional simulations of stellar convection show the natural development of an overshooting region and a boundary layer. In search for a consistent prescription of convection in one dimensional stellar evolution models, guidance from three dimensional simulations and asteroseismological results is indispensable. Hot Subdwarf B stars Subdwarf B (sdB) stars are a class of hot (T eff = 20, 000−40, 000 K) and compact (log g = 5.0−6.2) stars with very thin hydrogen envelopes (M H < 0.01 M ) [1,2]. They form the so-called extreme horizontal branch (EHB) in the Hertzsprung-Russell diagram, where most of them quietly burn helium in their cores for ∼10 8 years. A violent process stripped them of most of their hydrogen envelope before they underwent the helium flash near the tip of the red giant branch (RGB). As a consequence they will not climb up the asymptotic giant branch (AGB) once their core helium is exhausted, but rather evolve directly to become white dwarfs. Although some sdB stars appear to be single and others occur in wide binaries, surveys have concluded that the majority are in close binaries with white dwarf or lowmass main sequence companions [3][4][5]. A small fraction of the latter exhibit eclipses, offering insight into the orbital parameters and masses of the systems. These HW Virginis stars, named after the discovery system, play a crucial role in determining the mass distribution of sdB stars. The median mass of sdB stars estimated from these binaries is 0.469 M [6]. The formation scenarios for sdB stars have to explain their existence in both single and binary systems as well as introduce a process by which the hydrogen envelope of the progenitor stars can be lost. Common envelope ejection (CE) or stable Roche lobe overflow (RLOF) during binary evolution result in systems with close and wide e-mail: jtschindler@email.arizona.edu separations, respectively, [7]. Population synthesis models can well explain the occurrence of binary systems and the mass distribution of the primary and secondary components [8]. To explain the existence of single sdB stars, extreme mass loss on the RGB [9][10][11] as well as white dwarf mergers [12] were suggested. However, the masses from the white dwarf merger channel over-predict the median mass of observed sdB stars [6]. After the discovery of Earth-sized bodies around sdB stars [13][14][15][16], it seems possible that sub-stellar companions may play a role in shaping single sdB stars. 
Our goal in this article is to discuss the latest results of asteroseismology and their implications for stellar evolution of core helium-burning stars. Therefore we will continue by introducing sdB asteroseismology in Section 1.1 and the state-of-the-art of sdB stellar evolution in Section 1.2. In Section 2 we focus on our main discussion of convection in helium-burning cores in light of asteroseismological results. We present a conclusion to the discussion in Section 3. For a full review of the field of Hot Subdwarfs we refer the interested reader to the publications of Heber [17,18]. sdB Asteroseismology Short period (80−600 s) and longer period (2000−14000 s) stellar pulsations are found in a significant fraction of the sdB star population. The shorter pressure (p)-mode pulsations were predicted by S. Charpinet et al. [20] and independently discovered by Kilkenny et al. [21] at nearly the same time. Currently more than 60 p-mode pulsators, termed V361 Hya stars, have been found among the hotter sdB stars. These short pulsations are due to low-order, low-degree acoustic waves. The longer pulsations have their origin in mid to high order, low degree gravity waves (g-modes [22]). There are about 50 known g-mode pulsators, or V1093 Her stars, on the cooler end of the sdB distribution. Furthermore, at the intersection of these two populations on the EHB exists a narrow range in which hybrid pulsators can be found [23], showing both p-and g-mode oscillations. The non-radial pulsations in sdB stars are opacity driven (κ-mechanism) [24,25] by partial ionization of iron group elements in their stellar envelopes. The balance between gravitational settling and radiative levitation creates a region with an overabundance of these iron group elements (especially Fe and Ni, [26]) in the envelopes of these stars, leading to an opacity bump. The inclusion of both diffusion processes is therefore not only important to explain the low atmospheric helium abundances, but also to create the driving region for the pulsations. Since the pulsation regions are by no means pure, inefficient stellar winds have been proposed to destroy the diffusive balance in the driving region [27]. As can be seen in Figure 1, p-modes can only propagate in the outer part of the star (log q < 0.4). They are reflected back to the surface before reaching the convective core. G-modes on the other hand can penetrate into the deep interiors (log q 0.1). The steep chemical transitions between the hydrogen-rich envelope and the helium mantle and between the base of the mantle and the convective C-O-He core result in spikes in the Brunt-Vaisälä-frequency (see Figure 1). Pulsational modes having nodes (filled circles in Figure 1) close to these transition regions are either partially trapped above or confined to the lower part of the star. This leaves clear signatures on the period spacing of those modes. The p-mode oscillations are therefore sensitive to the transition between the helium mantle and hydrogen envelope, whereas g-modes also probe the transition of the He mantle and the convective core. Thus asteroseismological analysis of sdB g-mode pulsators provides an exclusive window into the interior structure of core helium-burning stars. Quantitative asteroseismological analyses of pulsating sdB stars have been carried out using a forward modeling method [28][29][30][31]. 
The technique uses a neural network of structural sdB models to create theoretical oscillation spectra, which are then compared to the observed frequencies of a given star. The best possible match in the given parameter space holds information about the structural parameters of the star. Pulsational frequencies derived from light curves of p-and g-mode pulsators using groundbased as well as space-borne (e.g. CoRoT and Kepler) instruments allowed for detailed asteroseismological analysis. Also, measurements of the g-mode period spacing in Kepler light curves of sdB stars [32,33] enable comparisons with pulsation properties of stellar evolution models (e.g. see [34]). Using the forward modeling method [e.g. 35,36] stellar masses, envelope masses, surface gravities and effective temperatures were constrained and agree remarkably well with measurements from other techniques such as light curve modeling of eclipsing binary systems and spectroscopic analyses ( [37] Table 3, [38]). As pure structural models, these results are independent of the uncertainties in stellar evolution physics (e.g. convection, nuclear reaction rates or wind mass loss). Hence, a comparison between stellar evolution models and these asteroseismic measurements may help to identify deficiencies in our current understanding. The dashed line shows the zero age extreme horizontal branch (ZAEHB) for our evolutionary models of M ini = 1.0 M . The spectroscopic data points (small black dots) for sdB stars [40] agree very well with the open and filled squares with error bars derived from eclipsing binary and asteroseismology analyses, respectively [6]. Stellar Evolution of Subdwarf B stars After stars evolve up the red giant branch and start core helium-burning, losing a substantial amount of their hydrogen envelope in the process, they become horizontal branch stars. Subdwarf B stars on the extreme horizontal branch have hydrogen envelope masses too low to sustain hydrogen shell-burning. It is unsurprising that the first systematic stellar evolution studies relevant to sdB stars targeted horizontal branch stars [e.g. [41][42][43]. The detailed physics of the envelope mass stripping due to Roche lobe overflow or common envelopes are mostly neglected in one dimensional stellar evolution models. These processes occur on dynamical timescales of ∼10 3 years and are usually mimicked by extreme mass loss by stellar winds on the RGB, until only a tiny hydrogen envelope M env 0.01 M remains. Very recently, physical arguments for common envelope evolution have been introduced to regulate the extreme mass wind loss in [44]. Similarly, older stellar evolution calculations could not handle the violent helium core flash that occurs in M 2.2 M stars. Stellar structures at the RGB tip were modified to restart the evolution after the helium flash. The He-core material was artificially enriched by 3%−7% of carbon over a region believed to be plausible for the convective mixing during the He-core-flash (see also [9]). The most widely adopted models of sdB stars were created using this technique [e.g. 7, 29, 43] and show only slight differences from more recent models which follow the evolution through the He-flash [45]. Stellar evolution models of sdB stars [7,43] have been successful in explaining the general distribution of the population in the log g−T effdiagram and their atmospheric parameters. 
With the inclusion of physical diffusion processes later studies were further able to self-consistently predict the instability strips for sdB stars [46]. However, tension exists between recent asteroseismological analyses of sdB stellar structures and stellar evolution modeling [34,39]. As this is also the main focus of this short review, we will expand this discussion in Section 2. In the following paragraphs we highlight some important aspects of sdB stellar evolution models in more depth. The He-flash Modern state-of-the-art stellar evolution codes are capable of evolving stars from the pre-main-sequence through the Helium flash to the ZAEHB [e.g. 45,47]. However, two-and three-dimensional simulations of the He-flash [48,49] have shown that the extent of the convective region during the He-flash, as predicted by standard 1D stellar models, is incorrect. Turbulent entrainment on both edges of the convection zone lead to rapid growth of the convective region on a dynamic timescale. As a consequence the main core flash might not be followed by subsequent mini-flashes leading up to core heliumburning. In the current standard algorithms of convection using Mixing Length Theory (MLT, [50]) turbulent entrainment is not included. The convective region in the standard MLT picture only extends throughout the super-adiabatic region, whereas the convective flows in three dimensional simulations naturally extend into sub-adiabatic regions, where the plumes decelerate ("overshooting"). Additions to the standard algorithms that mimic the physical "overshooting" use prescriptions with a free tunable parameter. Therefore one has to take great caution in interpreting the details of the He-flash calculated by one dimensional stellar models. The Importance of Physical Diffusion Processes To explain the p-mode pulsations in sdB stars, Charpinet et al. [20] proposed that radiative levitation creates an overabundance of iron group elements in the sdB envelope. Partial ionization of these metals can then lead to the driving of pulsation via the κ-mechanism. This was confirmed using static models of sdB stars in diffusive equilibrium [24]. The approach of using static models for the instability calculations was later validated [51], showing that diffusive timescales are short compared to the stellar evolution. In order to explain the g-mode pulsations, Fontaine et al. [25] argued that the same κ-mechanism was at play. However, the models exhibiting g-mode pulsations were several thousand Kelvin too cool compared to the distribution of observed pulsators. This mismatch was termed the blue-edge problem. The discrepancy could be reduced by using artificially enhanced Fe and Ni abundances in the envelope [26] and the OP [52] instead of the OPAL [53] opacities. In a sequence of pioneering studies, Hu et al. [54][55][56] solved the diffusion equations for gravitational settling, thermal diffusion, concentration diffusion and radiative levitation in stellar evolution models of sdB stars. These diffusive processes naturally produce the Fe and Ni enhancements. Bloemen et al. [46] finally solved the blue-edge problem by calculating the pulsation properties for stellar models that included the necessary diffusion physics [55]. The inclusion of these diffusive processes is therefore crucial for any study focusing on the atmospheric abundances or pulsation properties of sdB stars. 
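The g-mode period spacing referred to in Section 1.1, and used below to test core sizes, follows to first order the asymptotic relation ΔΠ_l = 2π² / (√(l(l+1)) ∫ N/r dr), where N is the Brunt–Väisälä frequency across the g-mode cavity. A small numerical sketch in Python; the cavity extent and the flat N profile are invented for illustration, not taken from a real sdB model.

import numpy as np

def asymptotic_period_spacing(r, n_bv, l=1):
    """Asymptotic g-mode period spacing (seconds) for spherical degree l:
        DeltaPi_l = 2*pi**2 / ( sqrt(l*(l+1)) * integral(N/r dr) )
    r    : radius grid over the g-mode propagation cavity [cm]
    n_bv : Brunt-Vaisala frequency N on that grid [rad/s]
    Sharp spikes in N at chemical transitions shrink the spacing locally and trap modes.
    """
    integral = np.trapz(np.asarray(n_bv) / np.asarray(r), np.asarray(r))
    return 2.0 * np.pi ** 2 / (np.sqrt(l * (l + 1)) * integral)

# Illustrative cavity spanning part of a compact core-He-burning star (made-up values).
r = np.linspace(1e9, 1e10, 500)       # cm
n_bv = np.full_like(r, 0.03)          # flat N of 0.03 rad/s
print(f"Delta Pi_1 ~ {asymptotic_period_spacing(r, n_bv):.0f} s")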
Asteroseismology and Convection in Helium-burning Cores

A Short Introduction to Convective Mixing in One Dimensional Stellar Evolution
Most stellar evolution codes treat mixing of convective flows as a "diffusive" process. A "diffusion" operator is chosen for mathematical convenience [57], based on mixing length theory (MLT, [e.g. 50,58]). The extent of the dynamically unstable regions is often assessed by the Schwarzschild or Ledoux criteria for instability. The Ledoux criterion for convective instability reads
∇_rad > ∇_ad + (Φ/δ) ∇_µ,
where ∇_rad and ∇_ad are the radiative and adiabatic temperature gradients, ∇_µ = d ln µ / d ln P is the mean molecular weight gradient, and Φ and δ are thermodynamic derivatives of the equation of state. If the composition term (Φ/δ) ∇_µ is omitted, one obtains the Schwarzschild criterion for convective instability. Using the Schwarzschild criterion one refers to the unstable region as super-adiabatic (∇_rad > ∇_ad) and to the stable region as sub-adiabatic (∇_rad < ∇_ad). When the Ledoux criterion is used, composition gradients are able to stabilize regions that would be unstable to the Schwarzschild criterion. A receding convection zone during hydrogen core-burning on the main sequence leaves helium-enriched material, stable to mixing if the Ledoux criterion is applied, but unstable to the Schwarzschild criterion. However, the situation is more complicated and double diffusive processes lead to mixing in the seemingly stable region. This is famously called semiconvective mixing and is a needed addition to the standard formulation of MLT whenever the Ledoux criterion is used [e.g. 59]. Another addition to canonical convection is overshoot mixing. It refers to the transport of energy and material across the boundary from the dynamically unstable into the stable region. Since MLT is a local theory, the deceleration and turning of the convective flow is not captured and the convective zone only extends to the edge of the super-adiabatic region. To remedy this, overshooting algorithms [e.g. 60,61] are used to extend the convection region beyond the boundary defined by the instability criterion.

The Physics of Convection during Core-helium-burning
After the helium flash has lifted the degeneracy of the He-core, core-helium-burning starts. First the triple-alpha process fuses helium into carbon, and once a significant carbon abundance has been reached the ¹²C(α, γ)¹⁶O reaction dominates the energy generation. With the start of core-helium-burning a core convection zone develops. Three dimensional simulations of convection [48, 62-65] show how plumes of hotter material are accelerated by buoyancy forces in the super-adiabatic region and start to rise up. Once they reach the sub-adiabatic region, buoyancy braking decelerates the plumes until the flow turns and the material descends back down. The plumes transport kinetic energy and thermal energy upwards and mix the convective region. The flow is highly turbulent and the kinetic energy of the flow is converted into internal energy by the physics of the Kolmogorov cascade. A thin boundary layer develops at the top of the core convection zone. When the convective flow turns, its velocity is perpendicular to the radial coordinate. The material outside the convective zone is not moving and Kelvin-Helmholtz instabilities can develop which slowly entrain material from outside the boundary [65]. The nuclear fusion in the core proceeds to produce carbon and oxygen, which have higher free-free opacities than helium [66]. Therefore the opacity in the convective core slowly rises with the energy production, leading to an increase of the super-adiabatic gradient.
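Per mass shell, the instability criteria above reduce to simple inequalities between the local gradients. A minimal Python sketch that also flags the semiconvective case (Schwarzschild-unstable but Ledoux-stable); the numerical gradient values are illustrative only.

def convective_stability(grad_rad, grad_ad, grad_mu, phi=1.0, delta=1.0, criterion="ledoux"):
    """Evaluate convective instability for one mass shell.

    grad_rad, grad_ad : radiative and adiabatic temperature gradients
    grad_mu           : d ln(mu) / d ln(P), the mean molecular weight gradient
    phi, delta        : equation-of-state derivatives (both 1 for an ideal gas)
    Returns "convective", "semiconvective" (Schwarzschild-unstable but
    Ledoux-stable), or "radiative".
    """
    schwarzschild_unstable = grad_rad > grad_ad
    ledoux_unstable = grad_rad > grad_ad + (phi / delta) * grad_mu
    if criterion == "ledoux":
        if ledoux_unstable:
            return "convective"
        return "semiconvective" if schwarzschild_unstable else "radiative"
    return "convective" if schwarzschild_unstable else "radiative"

# A shell just outside a growing He-burning core, with a stabilizing mu-gradient:
print(convective_stability(grad_rad=0.42, grad_ad=0.40, grad_mu=0.05))   # semiconvective
print(convective_stability(grad_rad=0.42, grad_ad=0.40, grad_mu=0.0))    # convective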
If helium rich material is entrained, carbon-rich (and later oxygen-rich) material is mixed outward of the convective zone increasing the opacity in the stable region in the process. As a result the boundary region becomes super-adiabatic and the convection zone slowly grows. Secondly, the downwardmixed helium replenishes the nuclear fuel and the phase of core-helium-burning is prolonged. In one dimensional models, convective instability is determined by the Ledoux or Schwarzschild criterion. Mixing length theory (MLT) predicts the convective velocity locally and calculates the convective energy flux for the energy balance. However, fluxes of the convective velocity across zone boundaries are not included in this framework and therefore deceleration of material (physical overshooting) cannot exist in this picture. Without the addition of "overshoot" prescriptions, an extremely sharp abundance gradient develops at the boundary where convective neutrality (∇ rad = ∇ ad for the Schwarzschild criterion) is violated. This framework does not allow for entrainment of material at the boundary and as a consequence the convective zone is unable to grow. Historically [66] overshoot prescriptions are used to circumvent this problem. However, they usually introduce a free tunable parameter and therefore the stellar evolution model loses its "predictive" power. Alternatively, concentration diffusion has been shown to soften the abundance gradient and naturally lead to a growth of the convection zone [67]. Yet, the rate of entrainment through diffusion might not necessarily be the same as for convective entrainment. Both approaches do satisfy convective neutrality at the boundary. This boundary problem is closely related with another phenomenon. It has been shown that the super-adiabatic gradient develops a minimum during core-helium-fusion. [68][69][70][71]. Over time the super-adiabatic gradient can decrease until the minimum reaches convective neutrality. Since MLT defines convection to occur in regions with super-adiabatic excess, the definition of the convective zone becomes ambiguous at this point. In consequence standard algorithms will force the convection zone to split. As a solution the material beyond the minimum was mixed with physically motivated but ad-hoc schemes to satisfy convective neutrality in that region. Many similar dedicated algorithms with different approaches and terminology have been developed to achieve this (e.g. "induced semiconvection" [68], "semiconvection" [69], "partial mixing" [72], "maximal overshooting" [34]). We will refer to this process as "induced semiconvection" for the rest of this article. However, the convection zone does not necessarily split. In some of our models, which include concentration diffusion to enable core growth, the super-adiabatic minimum never decreases to convective neutrality (e.g. see the model with the solid blue line Figure 3 [39]). Core convection during helium-burning remains a complex problem of stellar evolution. We have briefly mentioned the 3D picture of convection and then discussed core convection during core-helium-burning in one dimensional models. It is time we turned towards observational evidence for guidance and constraints. Tension Between 1D Stellar Evolution Models and Asteroseismology The advent of quantitative asteroseismological analyses in the era of satellite-borne precision photometry opens the possibility to study the deep interior of stellar structures. 
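The overshoot prescriptions with a free tunable parameter mentioned above are often implemented as an exponentially decaying diffusion coefficient beyond the convective boundary (a Herwig-style exponential scheme, as used by codes such as MESA). A sketch with illustrative numbers; D_conv,0, f_ov, and H_p are placeholders, not values from the models discussed here.

import math

def overshoot_diffusion(d_conv0, z, f_ov, h_p):
    """Exponentially decaying overshoot mixing coefficient beyond the
    convective boundary:

        D_ov(z) = D_conv,0 * exp(-2 z / (f_ov * H_p))

    d_conv0 : MLT diffusion coefficient just inside the boundary
    z       : radial distance past the convective boundary
    f_ov    : dimensionless overshoot parameter (the free, tunable number)
    h_p     : local pressure scale height
    """
    return d_conv0 * math.exp(-2.0 * z / (f_ov * h_p))

# Illustrative values: D drops by a factor e^2 for every f_ov*H_p travelled.
for z in (0.0, 0.01, 0.02, 0.05):           # in units of H_p
    print(z, f"{overshoot_diffusion(1e14, z, f_ov=0.02, h_p=1.0):.3e}")

Because the mixed region depends exponentially on the tunable f_ov, the resulting core sizes are a fitted outcome rather than a prediction, which is the loss of "predictive" power noted above.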
The results of asteroseismology allow us to contrast stellar evolution models with measurements from real stars and can help to constrain the inadequacies of our current modeling prescriptions. Here we will focus on results regarding core-helium-burning convection, but asteroseismology also yields measures of the internal rotation. Subdwarf B stars In contrast with red clump or horizontal branch (HB) stars, sdB stars can sustain neither hydrogen shell-burning nor an outer convective hydrogen envelope. This allows for direct observations of g-mode pulsations in core-helium-burning stars, which makes them unique probes of the core convection zone. Out of the ∼15 known g-mode pulsators for which there are sufficient photometric observations for asteroseismology, three have been analyzed using the forward modeling technique described above. The results directly constrain the extent of the core convection zone as well as the nature of the abundance gradient at the boundary. The analyses for the three pulsators estimate the convection zone to extend out to M cc = 0. . In the last case two equally probable solutions were found. The abundance gradients at the convection zone boundary indicate that all three stars are significantly less than halfway through their He-burning lifetimes, having consumed only about 20%-40% of the helium in their cores. We conducted a study to test whether we could reproduce the interior structures inferred from asteroseismology using standard algorithms in one dimensional stellar evolution models [39]. We carried out these calculations with the Modules for Experiments in Stellar Astrophysics (MESA, [47,76]). While our standard model used concentration diffusion to allow for convective core growth (see also [67]), it was not able to reproduce the larger core sizes for these three stars. Only extreme values for additional (exponential) overshoot (as implemented in MESA [47]) were able to bring the extent of the convective core into marginal agreement with the asteroseismological results. We display the growth of the convective cores for a range of model physics in Figure 3 and contrast them with results of Sweigart et al. [42] using overshoot and induced semiconvection. Our "standard" model shows a monotonically growing convective core that reaches the same extent as the model with induced semiconvection. The convective boundary seems to fluctuate for our models with overshoot, indicating an unstable behavior that might well be numerical in nature. Figure 3. The extent of the convective cores as a function of time for four MESA models [39] (gray and blue curves) and two older models from Sweigart et al. [42] (black curves). The Sweigart models include either overshoot or induced semiconvection. A comparison of the basic and standard MESA models (solid curves) shows the effect of including diffusion in the absence of overshoot. The last two MESA models are the same, except that overshoot is now included. The largest cores result from either overshoot or concentration diffusion to induce core growth. Large values of overshoot do not produce core growth, just larger cores. The rapid fluctuations in the models with overshoot suggest that the mixing algorithm experiences numerical instabilities at the boundary. Finally, we contrast the simulations with asteroseismic results (ovals) from the analysis of three g-mode pulsators [73][74][75]. (Legend: model with overshoot (Sweigart + 1987); model with induced semiconvection (Sweigart + 1987); basic (OPAL type II opacities); standard (OPAL type II opacities, diffusion); basic + f ov = 0.02; standard + f ov = 0.02.)
However, none of the convective core sizes match the results from asteroseismology, indicated by the colored error ellipses. Assumptions inherent in the parameterization of the stellar structures in the forward modeling may also introduce systematic effects. In particular, estimates of the abundance gradient are very sensitive to its assumed shape. While this could introduce uncertainties in the core He-burning lifetime of those three stars, the standard algorithms of convection still seem inadequate to reproduce the inferred stellar structures. Horizontal Branch and AGB Stars The period spacing of g-modes is very sensitive to the boundary of the convective helium-burning core, as we discussed above (Section 1.1). In contrast to sdB stars, red clump stars [6] are core He-burning stars which retained their hydrogen envelopes. Light curves of these stars measured by the NASA Kepler mission allowed the g-mode spacing to be inferred from mixed modes for hundreds of these stars [77][78][79]. Bossini et al. [80] constructed stellar evolution models using multiple stellar evolution codes and studied the influence of different convective prescriptions in comparison to the observational data. The goal was to find the convection prescription that would match the AGB luminosity at the AGB bump as well as describe the measured period spacing. While standard models (including induced semiconvection and overshoot) would reproduce the AGB bump luminosity, they could not describe the period spacing. However, models with extreme overshoot reproduced the period spacing but failed to match the AGB bump luminosity. The authors then outlined a candidate model with a moderate overshooting region characterized by adiabatic stratification to fit both constraints, for use in further studies. Another study conducted by Constantino et al. [34,81] tested how well a range of stellar models with different convection prescriptions could reproduce the g-mode period spacing as well as the relative numbers of stars on the AGB and RGB in clusters. They demonstrated that the period spacing could only be matched using their new "maximal overshoot" mixing scheme, which produces the largest convective cores possible. Yet, they also stressed that mode trapping can bias the observationally inferred value of the period spacing. If so, the standard model using induced semiconvection would also suffice. Both solutions could explain the cluster counts, if mixing beneath the Schwarzschild boundary during subsequent early-AGB evolution occurred in the "maximal overshoot" models or if models with induced semiconvection could avoid helium core breathing pulses. Neither study provides a clear way forward for the use of convection prescriptions during this phase of stellar evolution. But they do identify important constraints for one dimensional stellar evolution models. White Dwarfs Ongoing asteroseismic analyses of the chemical abundance structure of white dwarfs with carbon-oxygen cores (CO-WD) might deliver further clues to understanding convection during core-helium-burning (N. Giammichele, private communication). The oxygen abundance profile of a CO-WD is set by the interplay of core-helium-burning and the convection zone as well as the outward moving helium-burning shell and thermal pulses on the AGB.
Estimates of the abundance profile in comparison with 1D stellar evolution calculations could provide new constraints on commonly used convection prescriptions. On The Issue of Core Breathing Pulses An interesting phenomenon encountered in low mass stars at the end of the helium-burning lifetime, when the helium abundance is very low, are the so-called "breathing pulses" [82,83]. During "breathing pulses" the convective zone grows rapidly, mixes new helium fuel down into the core and collapses again. In most cases this effect revives the core and prolongs the helium-burning lifetime considerably. In the late stage of core-helium-burning, the reaction rates depend strongly on the helium abundance. Mixing convective boundary layers with a sharp abundance gradient will lead to a sudden increase in the energy generation rate and can explain the fast expansion of the convective core. Whether this situation occurs in "real" stars is doubtful. Dorman et al. [72] argued that this behavior is exclusively numerical in nature. In the more recent study by Constantino et al. [81], breathing pulses are found in stellar models with overshoot which exhibit a sharp abundance gradient at the convective boundary. The authors show that a monotonically decreasing helium abundance in models with partial mixing, induced semiconvection or maximal overshoot ensures the stability of the convective core. In fact, they point out that non-local treatments of overshooting (see also [84]) seem to avoid avoid breathing pulses naturally [62]. A variety of studies comparing observational constraints to theoretical models [81,[84][85][86] have led to the conclusion that breathing pulses do not occur in natural environments. It is the nature of the helium abundance gradient and the rate of entrainment at the convective boundary during core helium-burning that allow for breathing pulses to occur. Their non-existence provides a strong constraint on the prescriptions of convective mixing and the treatment of the convective boundary. The 12 C(α, γ) 16 O Nuclear Rate and Convection The newest measurements of the 12 C(α, γ) 16 O nuclear reaction rate (NACRE II, [87]) have reduced the uncertainty on its value. However, in order to reach the conditions for this reaction in low mass stars, the measured rate has to be extrapolated to lower energies, thereby introducing new uncertainties. Constantino et al. [34] considered the effect of the 12 C(α, γ) 16 O nuclear reaction rate on the g-mode period spacing, and found that doubling the rate coefficient did not help the stellar models to achieve the larger period spacing observed in red clump stars. Yet, they noted that a higher 12 C(α, γ) 16 O rate increased the convective core mass. Asteroseismological studies of white dwarfs [88,89] inferred the central oxygen abundance to draw conclusions on the effective 12 C(α, γ) 16 O nuclear reaction rate over the star's lifetime. However, great care has to be taken since diffusion [90] or convection can alter the stellar model's pulsation spectrum and may lead to wrong conclusions. In fact, core convection during core-helium-burning and the 12 C(α, γ) 16 O reaction feature a complex interplay [91]. The growth of the convective core depends on the entrainment of more opaque material into the stable region to incite instability. The increase in opacity in turn is due to the carbon and oxygen yields of the reactions of heliumburning. So a higher 12 C(α, γ) 16 O nuclear rate leads to a higher energy production rate and a larger oxygen yield. 
Both of these lead to a growing convective core if entrainment or some sort of boundary mixing is included. On the other hand, the rate of entrainment determines the amount of helium brought into the burning region and therefore influences the rate of the 12 C(α, γ) 16 O reaction as well. It can be easily shown that the central oxygen abundance depends on the entrainment rate or on the specified amount of "overshoot" at the convective boundary. These effects complicate studies that like to infer the 12 C(α, γ) 16 O nuclear reaction rate from asteroseismology by introducing uncertainties that depend on the adopted algorithm of convective entrainment and vice versa. Conclusion Space-borne precision photometry has stimulated asteroseismological analysis for a variety of stars. Regarding sdB stars much progress has been made in the last fifteen years. It has become clear that the physics of radiative levitation and gravitational settling are needed to reproduce the pulsational instabilities through iron element opacity bumps in their envelopes. Analyses of g-mode pulsations constrained the extent of the convective core, the nature of the abundance gradient as well as internal rotation profiles. While the instability strips can now be reproduced, difficulties for coherent stellar evolution models still persist. In the binary evolution channels, the prospective sdB star loses most of its hydrogen envelope mass during the common envelope or the stable Roche lobe overflow phase. Current stellar evolution models utilize some form of extreme mass loss prescription to mimic this dynamical event. After the star is stripped of most of its hydrogen envelope, it undergoes the He-flash. Although many recent stellar evolution codes can follow the evolution through this violent event, three dimensional simulations show that the one dimensional picture is not fully correct. Lastly the standard algorithms employed to simulate convection in stellar evolution models seem to be inadequate during the phase of core-helium-burning. Asteroseismology of g-mode sdB pulsators indicates larger convective cores than can be reproduced in the current framework of mixing algorithms. Recent studies compared the g-mode period spacing found in red clump stars to stellar evolution models. The authors demonstrate that the current, mostly ad-hoc, treatments of overshooting and/or induced semiconvection can reproduce the observations in a few cases. The difficulties can be traced back to the treatment of physical overshooting and the entrainment at the convective boundary. Unfortunately no consistent prescription of core-helium-burning convection using the standard algorithms has emerged yet. This also applies to other stages of stellar evolution. With recent asteroseismological analyses constraining the convective cores during main-sequence evolution [92,93], a consistent picture might well emerge in the coming years. Where the asteroseismological results constrain the long term behavior of convection on the stellar structure, three dimensional models of convection offer insight into the dynamics of the physical mechanisms in play. These simulations show the development of a boundary layer and the strong fluctuations of the turbulent convective flow. Physical overshooting and entrainment occur naturally when the moving convective plumes enter the subadiabatic region, are slowed down and turn. 
In order to develop a consistent prescription for convection in evolutionary models, one has to undertake the dangerous procedure of integrating over the turbulent fluctuations. One possibility is to use the 3D simulations to provide closure to the Reynolds averaged Navier-Stokes equations. Approximations on the basis of these equations might allow for implementation in a stellar evolutionary code without calibrations to astronomical data [65]. The recent insight into stellar structures from asteroseismology and convective behavior from 3D simulations has highlighted the known inadequacies in our stellar evolution models, but also provided constraints to tackle them. Still, convection, especially in core-helium-burning stars, remains a fundamental problem of modern astrophysics for now.
Self-Supervised Learning for Time-Series Anomaly Detection in Industrial Internet of Things : Industrial sensors have emerged as very important devices for monitoring environmental conditions in manufacturing systems. However, abnormal behavior of these smart sensors may cause failures or potential risks during system operation, thereby threatening the high availability of the entire manufacturing process. An anomaly detection tool in an industrial monitoring system must detect any abnormal behavior in advance. Recently, self-supervised learning has demonstrated performance comparable with other methods while eliminating manual labeling from training. Moreover, this technique decreases the complexity of the trained model on lightweight devices, reducing processing time while accurately detecting the health of equipment assets. Therefore, this paper proposes an anomaly detection method using a self-supervised learning framework on a time-series dataset to improve model performance in terms of high accuracy with a lightweight model. Using time-series data augmentation to generate pseudo-labels, a classifier based on a one-dimensional convolutional neural network (1DCNN) is applied to learn the characteristics of normal data. The output of this classification model effectively measures the degree of abnormality. The experimental results indicate that our proposed method outperforms classic anomaly detection methods. Furthermore, the model is deployed in a real testbed to illustrate the efficiency of the self-supervised learning method for time-series anomaly detection. Introduction In recent times, many traditional industries have started their digital transformation journey toward Industry 4.0. The Industrial Internet of Things (IIoT) is a key component of future industrial systems. It provides smart industrial systems with intelligent connectivity through sensors, instruments, and other Internet of Things (IoT) devices. IIoT dramatically improves automation and productivity in critical industries, such as manufacturing, energy harvesting, and transportation. Many IIoT applications are based on the development of edge devices and wireless networks that primarily focus on data collection, information retrieval, and robust data communications for industrial operations. Edge devices provide significant compute resources for IIoT applications, allowing for real-time, flexible, and speedy decision-making, which has aided the growth of Industry 4.0. However, the failure or erroneous operation of process industries has been triggered by the abnormal operation or malfunction of IIoT nodes. For instance, in smart factory scenarios, industrial devices serving as IIoT nodes which exhibit anomalous behavior (e.g., abnormal traffic and irregular reporting frequency) may disrupt industrial production, resulting in significant economic losses for manufacturers. Edge devices typically collect sensory data from IIoT nodes, particularly time-series data, to evaluate and capture the IIoT node behavior and operational conditions using edge computing. As a result, this sensor time-series data may be used to detect anomalous IIoT node actions. Anomaly detection is the process of identifying data values or sequences that deviate from the majority of other observations under consideration, which are referred to as normal data.
In IIoT systems, anomalies can be generated from various causes, including sensor faults with potential mechanical problems (e.g., overload, parts breaking, environmental effects, etc.), software anomalies (e.g., misoperation, program exceptions, transmission errors, etc.), unusual pollutants, and human influences. Generally, anomalies are classified into two types: a point anomaly is a single data point that appears with an unusual value, whereas a collective anomaly is a continuous sequence of data points considered anomalous as a whole, even if an individual data point may not differ from the expected range [1]. Time-series anomaly detection aims to isolate anomalous subsequences of varying lengths in a time-series dataset. Thresholding is one of the simplest detection techniques. It detects data points that are outside their normal range. Unfortunately, many anomalies do not cross any boundaries; for example, they may have "normal" values but are unusual at the time they occur (i.e., contextual anomalies). These anomalies are difficult to detect because the context of a signal is frequently ambiguous. Currently, many anomaly detection methods have been presented for solving IIoT device abnormality problems [1][2][3][4]. Various statistical methods have been proposed to enhance thresholding, such as Statistical Process Control [5], where a data point is detected as an anomaly if it fails to pass statistical hypothesis testing. However, a huge amount of human knowledge is still necessary to set prior assumptions for the models. Some researchers have also investigated various anomaly detection approaches based on unsupervised machine learning (ML). One popular method includes segmenting a time series into subsequences (overlapping or otherwise) of a certain length and applying clustering algorithms to find anomalies. Other studies have focused on evaluating sensor time-series data to detect anomalous behavior of IIoT devices using deep anomaly detection (DAD) [6] techniques. DAD approaches can learn hierarchical discriminative features from historical time-series data. A model that either predicts or reconstructs a time-series signal is used, and then the real and predicted or reconstructed values are compared [7]. High prediction or reconstruction errors indicate the presence of anomalies. Despite their success in anomaly detection, existing DAD techniques cannot be immediately applied to IIoT scenarios with dispersed edge devices for quick and accurate anomaly detection. Because most DAD models in traditional approaches are too inflexible and the edge devices lack dynamic and automatically updated detection models for various contexts, they are unable to effectively forecast regularly updated time-series data in real time. Moreover, due to the nature of the problem, it is difficult to obtain a large amount of anomalous data, either labeled or unlabeled. The time-consuming labeling of IIoT data, which requires considerable effort and expertise, also becomes a challenge in successfully applying deep learning approaches. To alleviate the aforementioned challenges, we provide a novel solution for automatic time-series anomaly detection on edge devices. This paper introduces an efficient real-time framework with two phases: offline training and online inference. Our proposed offline training framework selects historical data from the database for model training.
In this phase, a deep learning method is provided for automatically detecting the anomalies without labeling samples using the self-supervised learning (SSL) model. Specifically, only the normal IIoT data is learned in the training process to explore the features of the supposedly "normal" time series by our algorithm. In online interference, our proposed method is employed in real time where the IIoT time-series sequence is continuously monitored, and the model is updated if the number of abnormal samples is greater than a specified threshold. Finally, we evaluate the effectiveness of the proposed SSL framework using different datasets and demonstrate the enhancement of the detection precision. Also, the learning model is deployed with real data collected from IIoT sensors and update the model based on a monitoring system. Our main contributions can be summarized as follows: • We first introduce an IIoT architecture for real-time data collection based on edge computing. In this way, the historical data can be collected for the offline phase and continuously detect anomalies for every new input data in a real testbed. • We further propose an efficient SSL method based on the normal IIoT sensory data to detect any anomalous pattern. The self-labeled data is generated corresponding to the augmentation data based on rotation and jittering technique. Then, the convolution neural network is presented to classify the timeseries for anomaly detection on the IIoT system. • We conduct extensive experiments in industrial sensor datasets acquired from the real environment to verify the effectiveness of the proposed framework and performance enhancement of SSL in anomaly detection. The comprehensive experimental evaluations indicate that the proposed framework performs significantly better than well-known existing anomaly detection approaches in terms of processing time and detection accuracy. The remainder of this paper is structured as follows. Section 2 provides a brief overview of the frameworks for detecting time-series anomalies. Section 3 describes the proposed system for IIoT data collection and early data processing. Section 4 provides a detailed analysis of our proposed system, which includes our SSL model for extracting time-series data representations and detecting anomalies based on learned data. Section 5 provides an assessment of the framework and experiments in our testbed. Finally, Section 6 presents the conclusions of this article. Related Works The related work in this section is divided into two parts: the traditional anomaly detection method and SSL for anomaly detection. Traditional Anomaly Detection Method Recently, many efforts [1,[8][9][10][11] in ML have been put forward to solve the problem of detecting outliers or anomalous effectively. In most anomaly detection tasks, we assume that only normal data is provided for the training process, and then the model predicts whether a test sample is normal during testing. The variety of papers for anomaly detection based on DAD is generally divided into three categories: • Supervised Deep Anomaly Detection: Typically, a supervised DAD uses the labels of normal and abnormal data to train a deep supervised binary or multiclass classifier. In a multi-cloud environment, Salman et al. [2] employed Linear Regression and Random Forest for anomaly detection and their categorization. Furthermore, Watson Jia et al. 
[3] have proposed an anomaly detection method using supervised learning based on Long Short-Term Memory (LSTM) along with the statistical properties of the time-series data. However, the supervised DAD method lacks labeled training data, and the performance of the model will be poor due to the imbalanced samples used to detect an anomaly. • Semi-supervised Deep Anomaly Detection: Semi-supervised learning takes into account the problem of classification when only a small portion of data has a corresponding label. For example, Wulsin et al. [4] employed Deep Belief Nets in a semi-supervised paradigm to model electroencephalogram (EEG) waveforms for classification and anomaly detection. Shen Zhang et al. [12] proposed two semi-supervised models based on the generative feature of variational autoencoders (VAE) for bearing anomaly detection. The semi-supervised DAD approach is popular as it can use only a single class of labels to detect anomalies. However, semi-supervised learning still requires that the relationship between labeled and unlabeled data distribution holds during data collection. This makes the model difficult to extend in the future when this distributional similarity is uncertain in the IIoT system. • Unsupervised Deep Anomaly Detection: In the unsupervised DAD approach, the system will be trained using the normal data, so when data falls outside some boundary condition, it is flagged as anomalous. To employ unsupervised DAD, the models such as autoencoder (AE) [6,13,14], LSTM [15][16][17], and Generative Adversarial Network (GAN) [18][19][20] are trained to generate normal data on the training dataset. Subsequently, the models either predict or reconstruct time-series data, identifying outliers based on a comparison of the real and predicted or reconstructed values. Perera et al. [20] employed One-class GAN (OCGAN) to improve robustness using a denoising autoencoder and learn latent space that exclusively represents a given class utilizing two discriminators. Meanwhile, Wu et al. [15] proposed LSTM-Gauss-NBayes in IIoT for anomaly detection. Specifically, the stacked LSTM model was employed to forecast the tendency of the time series, and the Naive Bayes model was used to detect anomalies based on the prediction result. However, because these methods can fit data, there is a risk that they could also fit anomalous data. Moreover, LSTM based on anomaly detection methodology is time-consuming and cannot be used for anomaly detection in real time. Self-Supervised Learning for Anomaly Detection SSL has been widely exploited in different domains, including computer vision [21][22][23][24], audio/speech processing [25], and time-series analysis [26,27]. The algorithm is presented to extract useful features from large-scale unlabeled datasets to generate labels without the need for human annotation. Generally, the SSL process is separated into two successive tasks: pretext task training and downstream task training. The model operates by training the unlabeled dataset through a self-supervised pretext task and, subsequently, transfers the learned parameters to downstream task training. In image data, the pretext task is used to learn representations with rotation prediction and distribution-augmented learning to extract useful features. Specifically, several research generates pseudo labels based on colorization, placing, corrupt, crop and resize, and so on. 
Those algorithms have demonstrated state-of-the-art performance in extracting useful features and efficiently applying semantic anomaly detection. Inspired by the SSL approach for anomaly detection, we propose a new solution to detect anomalies with feature extraction from time-series data. The idea of our algorithm is based on the augmentation of time-series data, in which we transform time series into different sequences using the rotation and jittering methods before training a classifier to distinguish the transformations applied to the time-series data. Because the classifier is trained on normal IIoT data, the inconsistency of its predictions on abnormal data can be used as the degree of abnormality. To the best of our knowledge, we are the first to apply principles of SSL to time series for anomaly detection. This algorithm will not only find anomalies with high reliability but also reduce the complexity and training time compared with other DAD methods. In this paper, we evaluate our algorithm on different time-series datasets to prove the effectiveness of SSL for time series. Furthermore, our model is deployed on an edge device of an IIoT monitoring system to evaluate the algorithm's reliability in real-world scenarios where many factors can impact the model, necessitating frequent model updates. Data Preparation The overall architecture of the system is presented in Figure 1, from which we collect our time-series dataset for the proposed method. Specifically, an industrial-grade sensor was used to measure hydrostatic pressure in a smart factory at different locations. Subsequently, the sensor data was collected by a programmable logic controller (PLC) using the Modicon Communication Bus (Modbus) remote terminal unit (RTU) protocol before being sent to the edge server. Our PLC runs the IEC 61131-3 standard for task management and local data backup with a memory card. The IEC 61131-3 standard consists of five specific programming languages: ladder diagram, structured text (ST), instruction list, function block diagram, and sequential function chart (SFC) [28]. In our implementation, the ST and Function Block Diagram (FBD) programming languages are used for reading data from the sensor and establishing a connection with the edge server using the Message Queueing Telemetry Transport (MQTT) protocol. These techniques enable IIoT data to be transported in real time while maintaining scalability, stability, and reliability.
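The PLC-to-edge data path just described, in which sensor readings are published over MQTT and stored on the edge server for later training and monitoring, can be sketched as follows. This is a minimal sketch, not the authors' implementation: the broker address, topic name, and payload fields are assumptions made for illustration, and the MongoDB storage step reflects the database setup described in the next paragraph.

```python
# Minimal sketch of the MQTT -> MongoDB ingestion path on the edge server. Broker host,
# topic, and payload layout are assumptions for illustration only.
import json
import paho.mqtt.client as mqtt
from pymongo import MongoClient

BROKER_HOST = "localhost"               # assumed: the MQTT broker runs on the edge server
TOPIC = "factory/line1/pm_sensor"       # assumed topic published by the PLC

collection = MongoClient("mongodb://localhost:27017")["iiot"]["pm_readings"]

def on_message(client, userdata, msg):
    # Payload assumed to be JSON such as {"ts": ..., "pm05": ..., "pm10": ..., "pm25": ...}
    reading = json.loads(msg.payload.decode("utf-8"))
    collection.insert_one(reading)      # stored for offline training and online monitoring

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```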
We deployed an edge server based on the NVIDIA Jetson Nano Developer Kit, which is a powerful and lightweight computer running on a Linux Operating System (OS) for deploying applications. This server is situated close to the data resources, removing concerns about latency and bandwidth demands, which were previously causing cloud performance issues. Moreover, the edge server is optimized in computing ability for faster data processing and deployment of intelligent services. The published topic containing sensor values from the PLC was subsequently sent to an MQTT broker on the edge server. After converting raw data into usable data, a Mongo database (MongoDB) is built for data storage, which addresses high-availability database requirements and provides flexible queries to access the IIoT measurement system. To perform our proposed anomaly detection method, we first downloaded IIoT data from MongoDB for the offline training model and then employed the model to find anomalous points in a real-time monitoring system. The data was recorded at a frequency of one second with three different types of particulate-matter (PM) values. The dataset, which consists of approximately 326,000 results for each feature, was collected from the industrial-grade sensor PSU650 over 4 days. These raw data were stored in a database in real time, from which we exported them into a CSV file to facilitate model reading. Then, the multiple sensor values are integrated into a single multivariate time series that measures the level of air quality for different sizes of PM (0.5, 1.0, and 2.5 µm). The aggregated data can reveal potential information between different variates at the same time, as shown in Figure 2. Also, the statistical description of our dataset before data preprocessing is detailed in Table 1. Data Collection and Preprocessing In the pretext task training of our proposed method, the historical time-series data is extracted from the database for data preprocessing. In the IIoT sensor scenario, data preprocessing is primarily aimed at converting the raw sensor time-series data into a format that the ML model can process. Given raw data in which each sample i has M features, with S the total number of samples in the collection, the main steps for data preprocessing are as follows: • Convert timestamps into the same interval: In the IIoT time-series data collection process, inconsistencies in the timestamps may occur due to the effect of network delay. Furthermore, when conducting anomaly detection, a failure that occurs at a specific time is often caused by a variety of factors simultaneously. Thus, the various types of sensor feature data must be converted into the same time interval; • Clean data: The data collection may contain some missing values due to various fault types and impacts [8]. Moreover, the alignment of data timestamps also causes missing values.
There are many methods to impute missing values, e.g., forward filling and backward filling. Accordingly, k-nearest neighbor imputation [29] is used to fill the missing values because of the robustness and sensitivity of this method; • Integrate multiple sensor features into a single multivariate time series: Multiple PM sensors are typically used for the condition measurement of an industrial site, particularly in a cleanroom environmental monitoring system. Event anomalies are commonly caused by multiple factors. Therefore, the various characteristics must be integrated for the model to uncover potential information between distinct variables and reduce the computation time. • Scale multivariate time-series data: To achieve a sustainable learning process, the input data should be scaled before fitting the model. The StandardScaler is employed in this paper to scale the values of the features to mean 0 and standard deviation 1 to prevent the different scales of the data from affecting the training. The formula for this function is as follows: x i m (scaler) = (x i m − µ(x))/σ(x), where x i m (scaler) denotes the scaled value of the mth feature; x i m , the mth feature of sample i; and µ(x) and σ(x), the mean and standard deviation values of the feature among the samples of the whole dataset. Sliding Sample After completing the above data preprocessing for raw IIoT data, we applied a sliding window to generate time-series data containing the time dependence. Typically, the preprocessed data is converted to a multivariate time-series dataset X TS = {x 1 , x 2 , . . . , x t , . . .}; each data sequence has T time steps, so that x t ∈ R N×T denotes the N dimensions of measurements at time step t. The goal of time-series anomaly detection is to find a set of anomalous time sequences A seq = {a 1 seq , a 2 seq , a 3 seq , . . . , a k seq }, where a i seq is a continuous sequence of data points in time that shows anomalous values within the segment, appearing different from the expected temporal behavior of the training data [26]. Figure 3 presents an anomalous time sequence that contains unusual values inside the multivariate time-series dataset. In our implementation, the size of the sequence window (timestep) was set to T = 300, which contains 5 min of IIoT data collection. The ML model is driven by data; therefore, the good quality of the input data can determine the upper limit of the model performance. Our work on data preparation can not only exploit the potential information of the data but also ensure the stability of the anomaly detection model when collecting real-time data from the edge server.
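As a concrete reference for the scaling and sliding-window steps above, a minimal sketch is shown below. The window length T = 300 follows the paper; the non-overlapping stride is an assumption, since the stride is not stated.

```python
# Standardize each feature and cut the multivariate series into windows of T = 300 steps.
# The stride of 300 (non-overlapping windows) is an assumption; the paper does not state it.
import numpy as np

def make_windows(data: np.ndarray, T: int = 300, stride: int = 300) -> np.ndarray:
    """data: array of shape (num_samples, num_features) after cleaning and alignment."""
    mu, sigma = data.mean(axis=0), data.std(axis=0)
    scaled = (data - mu) / sigma                         # StandardScaler-style scaling
    starts = range(0, len(scaled) - T + 1, stride)
    return np.stack([scaled[s:s + T] for s in starts])   # (num_windows, T, num_features)

# Example: three PM channels sampled once per second for one hour -> 12 windows of 5 min.
x_ts = make_windows(np.random.rand(3600, 3))
print(x_ts.shape)   # (12, 300, 3)
```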
Methodology The proposed framework was divided into two phases: offline training and online monitoring. Offline training corresponds to self-supervised pretext task training, which uses historical IIoT time-series data. In this phase, the time-series data was first fed into our preprocessing scheme, after which we deployed data augmentation based on the jittering and rotation methods to obtain the pseudo labels. Following that, the features of each time-series sample were fed into a classification model to identify which transformation had been applied. Because the classifier was trained on normal time-series data, this model was expected to yield a high loss when presented with an anomalous sequence. Accordingly, the inconsistency of the identification model can be used as a measurement of the degree of anomaly. To demonstrate the efficiency of our model, a comparison of SSL with other anomaly detection methods is conducted on different datasets, including our testbed data. Furthermore, we ran our model in an online phase, which uses what was learned in the offline phase for downstream tasks. This process was repeatedly conducted to evaluate the improvements of our proposed framework. Self-Supervised Learning Paradigm The goal of the SSL model is to learn useful representations of the input data without the need for human annotations. Inspired by the SSL approach for anomaly detection on image data, we specifically employed a new architecture for the time-series dataset, where the pseudo label is generated based on the jittering and rotation methods, and an identification model using deep learning predicts the transformation applied to the time-series data. The input data for our proposed method is a normal time-series dataset, which is defined as X TS . Based on these data, we performed time-series data augmentation using the jittering method ζ A and the rotation method ζ B .
Our experiment considered several methods for generating time-series data, such as scaling, permutation, and magnitude warping, but they are either time-consuming for producing data or difficult to distinguish among patterns. Accordingly, we realized that jittering and rotation exhibited high performance in the classification process for time-series data due to the characteristics of these data augmentation methods. In detail, jittering (adding noise) presupposes that noisy time-series patterns are common for a given dataset and can be defined as follows: z t = x t + ε, where ε denotes Gaussian noise added at each time step t and ε ∼ N(0, σ 2 ). The standard deviation σ of the added noise is a hyperparameter that needs to be pre-determined. Adding noise to the inputs is a well-known method for increasing the generalization of neural networks [13,14] by effectively creating new patterns with the assumption that the unseen test patterns are only different from the training patterns by a factor of noise. Meanwhile, the rotation method can change the class associated with the original sample, which supports the creation of plausible patterns for time-series recognition. In our framework, rotation is defined as follows: z t = R x t , where R denotes an element-wise rotation matrix that flips the sign of the original value x t . This time-series data augmentation inspired us to deploy the SSL method using jittered and rotated data, in which the normal time series after preprocessing is added with noise and flipped by ζ A and ζ B . Specifically, given time-series data represented by a matrix X TS , each sequence of data x seq was transformed into new sequences z A seq by the jittering equation ζ A (x seq ) and z B seq by the rotation equation ζ B (x seq ). The newly formed sequences were finally gathered to form new time-series data Z A seq and Z B seq as the result of the transformation processing. As presented in Figure 4, rotation is intended to produce a temporal irregularity, while jittering produces a new sequence with a small amount of noise; each serves as a new label. Consequently, a new dataset (Z A seq , Z B seq ) is generated based on the original data X TS , which has the pseudo labels identifying rotated data and jittered data.
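A minimal sketch of this pseudo-label generation step is given below. The noise level σ and the use of a simple element-wise sign flip as the "rotation" are assumptions that merely follow the description above; the paper does not state their exact values.

```python
# Pseudo-label generation from normal windows: class 0 = jittered copy, class 1 = rotated
# (sign-flipped) copy. The noise level sigma is an assumed hyperparameter.
import numpy as np

def jitter(x: np.ndarray, sigma: float = 0.03) -> np.ndarray:
    """z_t = x_t + eps with eps ~ N(0, sigma^2), applied element-wise."""
    return x + np.random.normal(loc=0.0, scale=sigma, size=x.shape)

def rotate(x: np.ndarray) -> np.ndarray:
    """Element-wise sign flip, the simplest 'rotation' consistent with the description."""
    return -x

def build_pretext_dataset(windows: np.ndarray):
    """windows: array of shape (num_windows, T, num_features) holding normal data only."""
    z_a = np.array([jitter(w) for w in windows])          # pseudo-label 0
    z_b = np.array([rotate(w) for w in windows])          # pseudo-label 1
    x = np.concatenate([z_a, z_b], axis=0)
    y = np.concatenate([np.zeros(len(z_a)), np.ones(len(z_b))]).astype(int)
    return x, y
```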
In our implementation of pseudo-labeled input data classification, we deployed a one-dimensional convolutional neural network (1DCNN) for the feature recognition and extraction of time-series data. In general, the convolutional neural network is widely adopted for encoding certain properties of images into the architecture. Structurally, a 1DCNN is nearly identical to a CNN, having convolution, pooling, and fully connected layers but a one-dimensional convolutional kernel. In summary, the input data from the IIoT monitoring system learned normal features from jittered and rotated data using a convolution network f θ with a trainable parameter θ. The self-labeled dataset was established based on the different input data, and the convolution network g γ was expected to correctly identify which transformation method had been applied to the time-series data. Specifically, two CNN blocks are used for the classification model, each of which has a 1D convolutional layer with the number of feature maps set to 64 and the size of the convolutional kernels set to 7. Activation functions in all hidden convolutional layers in the classification network were set to the Rectified Linear Unit (ReLU) non-linearity, described as follows: ReLU(x) = max(0, x). This function allowed the deep neural networks to converge faster. Subsequently, the feature maps generated from the last 1D convolutional layer were fed into a global average pooling layer to learn global information on each feature map. The final output of the convolutional network g γ consisted of two different values, each representing the probability of jittered data z A seq and rotated data z B seq . The deep learning framework was optimized using an Adam optimizer with a learning rate set to 0.0001. The model was trained to minimize the "sparse_categorical_crossentropy" loss function by comparing the difference between the classifier output and the ground-truth data transformation represented by a one-hot vector. The detailed architecture of the final deep learning framework is presented in Table 2. Anomaly Detection Once the convolutional classifier g γ was trained on the self-labeled normal IIoT time-series data, the transformations of normal data were expected to be correctly identified by the classifier. In contrast, abnormalities with distributions different from the trained data would most likely mislead the classifier into predicting the probabilities of jittered and rotated data with a higher loss, or into incorrectly identifying the data transformation for the transformed anomalous time-series data. Hence, the difference or discrepancy between the output predicted by the classifier and the ground truth of the input data can be utilized to indicate the degree of abnormality for incoming time-series data acquired in real time in a monitoring system. Formally, for any new data x (i) seq , the time-series data augmentation technique ζ is applied to generate a new dataset ζ(X TS ) with pseudo labels based on each transformation method. The ground-truth transformation for this new dataset is represented by the corresponding one-hot vector y. Following that, the inconsistency between the classifier output f θ (ζ(X TS )) and the ground-truth value of the pseudo label can be calculated by a certain measurement L( f θ (ζ(X TS )), y), where the measurement L is the cross-entropy between f θ (ζ(X TS )) and y.
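A minimal Keras sketch of the classifier just described is shown below: two 1D convolutional blocks with 64 feature maps and kernel size 7, ReLU activations, global average pooling, a two-way softmax output, Adam with learning rate 1e-4, and sparse categorical cross-entropy. This is a sketch under the stated assumptions, not the authors' exact model; any detail the paper does not specify (for example, pooling between the two blocks, epochs, or batch size) is either omitted or marked as assumed.

```python
# Keras sketch of the pretext classifier; only layers and settings named in the text are
# included, so unspecified details (e.g., intermediate pooling) are intentionally omitted.
import tensorflow as tf

def build_classifier(T: int = 300, n_features: int = 3) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(T, n_features)),
        tf.keras.layers.Conv1D(64, kernel_size=7, activation="relu"),
        tf.keras.layers.Conv1D(64, kernel_size=7, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(2, activation="softmax"),   # P(jittered), P(rotated)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage with the pretext dataset from the augmentation sketch:
# model = build_classifier()
# model.fit(x, y, epochs=10, batch_size=64)   # epochs and batch size are assumed values
```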
Consequently, the degree of anomaly for the real-time collection of IIoT data can be calculated using Equation (5): L = − ∑ i y i log( ŷ i ), (5) where y i denotes the ground-truth value of the pseudo label, and ŷ i is the predicted label. To correctly identify anomalous sequences, we set the threshold for anomaly detection based on the maximum value of the cross-entropy loss function on the training dataset. Each sequence with a loss value higher than the threshold is treated as an anomalous sequence. This provides the monitoring system with the capability to find abnormal points based on continuously anomalous sequences within a certain time step. To further improve the reliability of our method, we evaluated the SSL model for anomaly detection on different open datasets, which will be discussed in detail in Section 5. Deployment on Edge Devices Given the requirement specifications of the real-time IIoT monitoring system, an edge device integrated with an anomaly detection algorithm should be considered to accelerate prediction. Furthermore, after training, a deep learning model often has a large size and computational load, which is a challenge for Central Processing Unit (CPU) performance. Furthermore, edge devices use less-complex hardware to reduce manufacturing costs. These specifications necessitate keeping the processing load low to preserve processing time and equipment life; otherwise, the factory must consider paying for costly services to deploy a deep learning model. Because of its low complexity relative to other methods, our proposed SSL method significantly reduces the size of the deep learning model. Evaluation In this section, we conducted two types of experiments to demonstrate the effects of our proposed framework. The first experiment was designed to evaluate our anomaly detection using the SSL algorithm on multiple time-series datasets. Meanwhile, the second experiment was carried out to evaluate the improvement of our method in a real-time IIoT monitoring system. Experiment Dataset To measure the performance of our SSL model for anomaly detection, we evaluated it on multiple time-series datasets. In total, three datasets were collected across different application domains, in addition to our real testbed dataset. Specifically, this paper employed the Numenta Anomaly Benchmark (NAB) with multiple types of time-series data introduced by Numenta. This dataset consists of streaming data on various topics, including Internet traffic, advertisement, cloud services, and automotive traffic. In this benchmark, anomaly windows are defined around each labeled anomaly, with a window size of 10% of the length of a data file divided by the number of anomalies in the given dataset. Our dataset was collected from the IIoT system, which is specifically presented in Section 3. As previously stated, the SSL method does not rely on labels to train an anomaly detection model. Therefore, the labels are used only for the purpose of performance evaluation. In this paper, we provide a performance evaluation on one artificial and three real-world time series: (a) Artificial With Anomaly, with artificial data; (b) NAB Machine Temperature, collected from a temperature sensor unit; (c) NAB New York City (NYC) Taxi, analyzing taxi passengers in New York City; and (d) our dataset from the proposed IIoT system. We assume that the training size for the NAB datasets is 1000 samples and that, for our dataset, it is 25% of the total historical data (1 day).
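Before turning to the evaluation, the scoring rule of Equation (5) and the thresholding step described above can be sketched as follows. The aggregation over the two transformed copies and the noise level are assumptions; the threshold chosen as the maximum training loss follows the text.

```python
# Sketch of the anomaly score from Equation (5): cross-entropy between the classifier's
# output on transformed copies of a window and the known pseudo-labels, thresholded at
# the maximum loss seen on the (normal) training windows. Averaging over the two
# transformed copies is an assumption; the paper leaves this aggregation implicit.
import numpy as np

# Same transforms as in the augmentation sketch above (sigma is an assumed value).
jitter = lambda x, sigma=0.03: x + np.random.normal(0.0, sigma, x.shape)
rotate = lambda x: -x

def anomaly_score(model, window, eps: float = 1e-12) -> float:
    x = np.stack([jitter(window), rotate(window)])   # shape: (2, T, n_features)
    y_true = np.array([0, 1])                        # pseudo-labels: jittered, rotated
    probs = model.predict(x, verbose=0)              # shape: (2, 2) class probabilities
    return float(-np.mean(np.log(probs[np.arange(2), y_true] + eps)))

# Threshold and decision rule as described in the text:
# threshold = max(anomaly_score(model, w) for w in train_windows)
# is_anomalous = anomaly_score(model, new_window) > threshold
```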
Across these datasets, our SSL-based anomaly detection method for time series can be evaluated against other approaches. The data processing and model training in our experiment are executed in Python version 3.9.8, using TensorFlow to construct the overall neural network. Evaluation Metrics To evaluate the performance of an anomaly detection model when the data labels are available, accuracy is commonly used to provide an overview of the detection task. However, it is not a meaningful measurement of performance when the number of anomalies is only a small fraction of the total dataset, resulting in an imbalanced data problem. Therefore, three metrics are used as indicators of the model's performance: precision, recall, and F-score. The formulations of these performance measurements are, respectively, Precision = TP/(TP + FP) (6), Recall = TP/(TP + FN) (7), and F1 = 2 × Precision × Recall/(Precision + Recall) (8). In these definitions, true positive (TP) refers to the number of anomalies that are truly (or correctly) detected as anomalies, whereas false positive (FP) refers to the number of normal data points that are falsely (or incorrectly) detected as anomalies. A true negative (TN) refers to the number of normal data points that are truly detected as normal data, whereas a false negative (FN) refers to the number of anomalies that are falsely detected as normal data. Specifically, the precision measures the positive predictive rate, whereas the recall returns the TP rate. The F1 score combines precision and recall by taking their harmonic mean. Experimental Results Figures 5-7 present the anomaly detection results of our SSL method on the three NAB datasets, respectively. Figure 8 describes the performance evaluation results on our IIoT dataset. The left plot of these figures shows the actual time series and the anomaly points, which are defined by consecutive anomalous sequences and marked in red. We conducted our evaluation with different window sizes for each dataset. As observed, the anomaly points in the three NAB datasets are isolated from the normal data. Specifically, in the Artificial With Anomaly dataset, when the threshold is set to the maximum of the training loss (0.001155), the anomalous sequence is easily identified with very high performance. Although this method also indicates that the accuracy of anomaly detection is quite high, a threshold set from the training process is critical, because the training set may be insufficient for data with higher levels of complexity. On our dataset, we trained with approximately 200,000 samples and used the remaining time-series data for testing. The SSL model still has the capability of detecting anomalies based on anomaly scores, which strongly distinguish normal and abnormal data, as presented in Figure 8. However, due to data limitations in the edge area, the diversity of the dataset was decreased, allowing for wrong anomaly recognition in the validation data. The performance of the proposed model nonetheless shows high accuracy in finding outliers across the entire data.
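For reference, the precision, recall, and F1 metrics of Equations (6)-(8) used throughout these comparisons can be computed directly from the confusion-matrix counts; the counts below are purely illustrative and are not results reported in the paper.

```python
# Precision, recall, and F1 exactly as in Equations (6)-(8), computed from raw counts.
def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts only, not results reported in the paper:
print(precision_recall_f1(tp=90, fp=10, fn=5))
```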
Although SSL can detect anomalies with good performance, it still has some stability issues during training. On the fourth day of the test dataset, several time-series sequences trigger false alarms because their patterns had not been learned from the training data. Besides, the threshold generated from the loss function on the training dataset becomes a critical learning parameter for accurately finding abnormal patterns. Therefore, we considered many different parameters to create the best model for the respective data. Table 3 presents the evaluation results of our proposed method compared with other DAD techniques for time-series anomaly detection, including a traditional LSTM, an autoencoder, an LSTM autoencoder, and the Autoregressive Integrated Moving Average (ARIMA). As observed, our method outperforms the others in terms of precision, recall, and F1-score for NAB Artificial, NAB Machine Temperature, and our dataset. This is because this method can easily isolate abnormal sequences by the score of the classification model in the downstream task, whereas the other methods risk overfitting the data after training. Although ARIMA still exhibits higher precision (0.94067) and F1 score (0.97229) on the NYC taxi dataset, the SSL-based anomaly detection method can also reach quite a high value (precision = 0.93665 and F1 score = 0.96729). The results also show that a statistical technique such as ARIMA can detect anomalies accurately, but it takes a long time to tune it to fit the data, and the model often converges toward the mean value in long-term prediction. Meanwhile, SSL and other neural network methods can reach high precision by learning data patterns and detecting anomalies when a pattern differs from the predicted value. In particular, SSL is more reliable than the others due to the usefulness of the self-labels generated from the normal dataset. Model Comparison The completed model was fed directly into the IIoT monitoring system for a real-time evaluation. In our proposed scheme, Apache Kafka [30] is used, which is an event streaming platform that provides a scalable, reliable, and elastic real-time platform for messaging anomaly points to end-users. Based on these powerful aspects, we can consider how much our proposed method has improved compared with others in terms of processing time.
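The Kafka messaging step mentioned above could look like the following sketch; the broker address, topic name, and message fields are illustrative assumptions rather than values given in the paper.

```python
# Sketch of publishing detected anomalies to Apache Kafka for end-users. The broker
# address, topic name, and message fields are illustrative assumptions only.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",               # assumed broker on the edge network
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_anomaly(window_start_ts: str, score: float, threshold: float) -> None:
    """Send an alert message whenever a window's anomaly score exceeds the threshold."""
    if score > threshold:
        producer.send("iiot.anomalies",
                      {"ts": window_start_ts, "score": score, "threshold": threshold})
        producer.flush()
```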
Our model is deployed on an NVIDIA Jetson Nano Developer Kit running Ubuntu 18.04 for edge computing, which continuously receives MQTT messages from a WAGO 750-8212 PLC connected to a PSU650 sensor for particulate matter measurement. The time-series data with three different variates are then fed into the database for storage and real-time anomaly detection. Following that, our proposed SSL method is compared with the LSTM, autoencoder, and ARIMA algorithms. The results indicated that the model size of our SSL method was only 709 KB, whereas the LSTM model was 8,971 KB and the ARIMA model 13,914 KB. This shows how lightweight our model is compared with the others. Although the autoencoder model, at 408 KB, is smaller and less complex than our SSL model, its anomaly detection accuracy was lower. Therefore, our method is a suitable solution for model deployment on edge devices in IIoT systems.

Conclusions

In this paper, we introduced SSL for time-series anomaly detection in an IIoT system. The proposed SSL framework consists of two augmentation techniques for time-series data that capture two different patterns of the original samples before feeding them to the classifier. The output of this framework was used to generate anomaly scores to detect anomalies in multiple datasets. The experimental results indicated our method's ability to significantly improve the performance of anomaly detection in real time. The F1 score, precision, and recall of our anomaly detection method reach higher values than those of traditional DAD methods such as LSTM, autoencoder, LSTM autoencoder, and ARIMA, which were evaluated in this paper. The proposed algorithm was also deployed on an edge device, and the method was found to be compatible with less-complex devices owing to its lightweight model size. Future work may focus on improving the stability of SSL training and on integrating incremental learning to increase the accuracy of the method by updating the model online with the most recent data.
2022-07-10T15:02:49.628Z
2022-07-08T00:00:00.000
{ "year": 2022, "sha1": "aa8a1ee5324e1e87e6203523915b3d66356cb324", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-9292/11/14/2146/pdf?version=1657284079", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c218cc0153f908335a5661b8bff2d586bbd8607b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
251288125
pes2o/s2orc
v3-fos-license
Parental bonding, depression, and suicidal ideation in medical students Background The psychological condition of university students has been the focus of research since several years. In this population, prevalence rates of depression, suicidal ideation, anxiety disorders and substance abuse are higher than those of the general population, and medical students are more likely to have mental health issues than other students. Aims This study deals with the psychological condition of medical students, with a focus on correlations between depression, suicidal ideation and the quality of the perceived parenting style. Gender differences were also considered. Methods A cross-sectional study was conducted on a population of medical students, with an online questionnaire consisting of a personal data sheet for demographic and anamnestic data, and of three self-rating scales: the Beck Depression Inventory II (BDI-II), for the screening of depressive symptoms; the Beck Hopelessness Scale (BHS), to assess suicidal ideation; the Parental Bonding Instrument (PBI), to investigate the memory of the attitude of one’s parents in the first 16 years of life. Two main affective dimensions were considered by PBI: “care” (affection and empathy) and “protection” (intrusiveness, controlling and constraint). Four different patterns of parenting styles are so evidenced: Neglectful Parenting (low care/low protection), Affectionless Control (low care/high protection), Optimal Parenting (high care/low protection), and Affectionate Constraint (high care/high protection). Results Overall, 671 students (182 males and 489 females) participated. Females, compared to males, experienced more distress and self-injurious behaviors, while males experienced more drugs or alcohol abuse. The BHS and BDI-II scores correlated positively with the PBI score for “protection” and negatively with that for “care.” Affectionless Control and Neglectful Parenting were associated with higher medians of BHS and BDI-II scores. Conclusion The study confirms that the undergraduate medical student population has higher prevalence of depression and suicidal ideation than those detectable in the general population (respectively, 50.2% and 16.7% vs. 15–18% and 9.2%) and that some specific parenting styles correlate with these two clinical variables. The impact of Affectionless Control and Neglectful Parenting on suicidal ideation and depressive symptomatology was more pronounced in females than in males. For males, the role of the father seemed to have less impact on the affective roots of suicidal thoughts and depression. Introduction The transition between high school and university is a crucial period in biological, psychological, social development, through the growth of new bonds, a new sense of self, and a rise in autonomy and responsibility (Taylor et al., 2014). Based on current estimates, 35% of college students met the diagnostic criteria for at least one common mental health illness or a related health issue (WHO World Mental Health International College Student project -Auerbach et al., 2018), with prevalence rates higher than those of the general population of depression, suicidal ideation, anxiety disorders, and substance use or abuse. The most common disorder among college students is depression (21% lifetime prevalence), followed by generalized anxiety disorder (18%-16%), panic disorder (5%) and bipolar disorder (3.5%; Auerbach et al., 2018). 
Suicidal ideation among university students is around 6.7%, while suicide plans and attempts are, respectively, 1.6% and 0.5% (Downs and Eisenberg, 2012). 9.5% of university students screened positive for an eating disorder (Eisenberg et al., 2011) and about 44% for binge drink; 12.5% suffer from alcohol dependence and 7.8% abuse it (although the percentages fluctuate globally); about 23% of male students and 16% of female students are current marijuana users; as a whole, drug use disorder affects about one student out of 20 (Pedrelli et al., 2015). Admission to medical school and the period leading up to graduation are extremely competitive and demanding. Medical school students are more likely to experience mental health problems than other students (Rotenstein et al., 2016) and they are at greater risk of developing mental disorders or using illegal substances (Mousa et al., 2016;Moutinho et al., 2019). These symptoms are associated with decreased academic performance, poor quality healthcare, and higher medical errors (Agnafors et al., 2021). Female sex, exposure to recent stressful life events, excessive smartphone use, and poor sleep quality are all risk factors for the development of mental disorders in this student population (Lemola et al., 2015). Additionally, medical students with mental health issues seek help infrequently (Gold et al., 2015): more than half of medical students who meet the diagnostic criteria for a mental disorder are reticent to seek professional help due to the fear of getting stigmatized (Mehta and Edwards, 2018). Furthermore, after graduation, the concern of stigma, as well as financial and professional repercussions, is a substantial obstacle to seeking assistance between doctors (Pingani et al., 2016;Deb et al., 2019;Thornicroft et al., 2019). The prevalence rate of depression is higher in undergraduate medical students than in the general population (Lim et al., 2018), and the overall prevalence of depression or depressive symptoms among medical students is 27.2% (Rotenstein et al., 2016), with the higher rates (33%) in the first-year students (Puthran et al., 2016). The prevalence of suicidal ideation, reported as having occurred over the past 2 weeks to the past 12 months, was 11.1% (Rotenstein et al., 2016). The assessment of hopelessness among university students, through the use of the Beck Hopelessness Scale, detect an average value of 3.26 with a range of 1. 16-7.63. In studies on American samples women scored higher than men, unlike studies on non-American samples where men scored higher than women (Lester, 2013). The overall prevalence among medical students of anxiety disorders, the rate ranged from 29.2 to 38.7%, is higher than in the general population. The prevalence rate of eating disorders risk among medical students was found to be 10.4%, higher than in the general population, where it is about 5% (Treasure et al., 2010;Jahrami et al., 2019). The most commonly used drugs by medical students are mainly alcohol (24%), tobacco (17.2%), and cannabis (11.8%), followed by hypnotic and sedative drugs (9.9%), stimulants (7.7%), cocaine (2.1%) and opiate (0.4%; Roncero et al., 2015). Male medical students presented a tendency to consume more of all types of drugs than females, with the exception of tranquilizers (Candido et al., 2018). Furthermore, it is reported that about half of the students experienced burnout during their undergraduate years (Ishak et al., 2013). Among Italian medical students, according to Sampogna et al. 
(2020), the results showed a high prevalence of substance use, especially alcohol (range 13-86%) and cigarettes (range 15-31%), compared to Italian students from other degree programs. Cigarette use is also slightly higher than in the general population in Italy, which is around 20% (Lugo et al., 2017). The prevalence rate of depression is around 20%, with depressive symptoms reported more frequently by female medical students. The prevalence of suicidal thoughts is around 17%, and is higher in men (Sampogna et al., 2020). The evidence and importance of these data make it necessary to further research the possible bio-psycho-social factors that intervene in these age groups in determining the onset of psychological distress and psychic disorders. With the purpose of assessing determinants implicated in youth distress, we decided to focus on attachment theory in our work, as it is increasingly considered and constantly evolving. Particularly, we dwelt on Parker's construct of parental bonding, widely observed in clinical practice given its correlations with several disorders. Investigations into the relationship between childhood experiences and subsequent adult psychopathology suggest that negative parenting styles create a diathesis for emotional and psychiatric dysfunction. Following a biopsychosocial approach to the etiology and pathogenesis of psychopathology, parental bonding is one of the factors that might influence how a psychiatric disorder may develop. This diathesis, nevertheless, can be modified by a series of social and inter-personal experiences that have the capacity to neutralize the risk variable (Parker and Gladstone, 1996). Two main affective dimensions were highlighted among the characteristics of the parental educational style observed in practice: care and control (Roe and Siegelman, 1963; Schaefer, 1965; Raskin et al., 1971). The first of these two dimensions concerns the care and affection expressed by the parent, while the second one includes all the aspects of control, intrusiveness, and protection, understood as concern not related to affectionate feelings. Four different patterns of behavior and affective parenting styles are evidenced: low care/low protection (neglectful parenting), low care/high protection (affectionless control), high care/low protection (optimal parenting), and high care/high protection (affectionate constraint; Parker et al., 1979; Favaretto and Torresani, 1997; Figure 1). The variety of studies investigating the correlation between PBI and psychiatric disorders agrees that, of all the parental bonding styles, the one most implicated in psychopathology is affectionless control, characterized by poor parental care and high protection-control. This parental style is closely related to impaired formation of positive Internal Working Models (IWM), since, due to low care and high parental control, the child struggles to develop a competent and worthy self-model and a reliable and supportive model of others (Otani et al., 2016). These compromised models will persist into adulthood, making the individual more susceptible to the development of psychiatric disorders (Parker and Gladstone, 1996). Several studies have highlighted the relationship between poor parental bonding and suicidal ideation and suicidal behavior (Miller et al., 1992; Adam et al., 1994; Martin and Waite, 1994; McGarvey et al., 1999; Yamaguchi et al., 2000; Lai and McBride-Chang, 2001; Diamond et al., 2005; Dale et al., 2010; Freudenstein et al., 2011).
Figure 1. Parenting styles based on the combination of the dimensions "care" and "protection".
Recently, Siqueira-Campos et al. (2021), in a sample of medical students, showed that there are significant independent associations between maternal affectionless control and depression, between maternal negligent parenting and depression, and between paternal affective constraint and suicidal ideation. This study also suggests that depression influences the association between maternal affectionless control and anxiety and the association between maternal affectionless control and suicidal ideation. Regarding maternal bonding, low care and, more often, affectionless control have been found to be significantly associated with higher levels of suicidality, leading to the conclusion that this type of parental bonding could be considered a specific (direct or indirect, it is not yet clarified) risk factor for suicidality. Concerning paternal bonding, the data are less consistent, suggesting that here too low care and often affectionless control are associated with an increased risk of suicide (Goschin et al., 2013). Discrepancies between mother and father may be related to cultural differences in the father's role in the upbringing and control of the offspring. We can therefore conclude that affectionless control by the parent who is more invested in the child's growth is correlated with an increased suicide risk. Given this background, we chose to evaluate the relationship between the quality of the perceived parenting style and the psychological well-being of medical students in Italy, since there was no Italian study with the same characteristics in the literature. We decided to give gender a certain relevance in our study since it is often not taken into account in the literature regarding attachment, and furthermore the data in the field are few and discordant, probably due to limitations and design of the studies. Adam et al. (1994) found that in suicidal female adolescents both father and mother are perceived as affectionless and overcontrolling, whereas in males only the mother. Kovess-Masfety et al. (2011), however, reported that males with suicidal ideation and attempts perceived their mothers as affectionless controlling, while fathers only as low caring, while McGarvey et al. (1999) reported that among suicidal males paternal affectionless control was reported more than maternal affectionless control. Other studies have shown that in suicidal females the perception of an affectionless control style is related to both parents (Goldney, 1985; Yamaguchi et al., 2000) or only to the mother (Diamond et al., 2005). Under these premises, we set up a cross-sectional study with the aim of exploring the psychological condition of medical students, with a specific emphasis on depression and suicidal ideation, in order to identify correlations between these two clinical variables and the quality of parental bonds. Gender differences were also considered in the descriptive and correlational analysis since literature data on the topic were few and discordant. Therefore, we hypothesized that our sample of medical students would have higher rates of suicidal ideation and depression than the general population, especially the students who had a parental bonding characterized by low care and high control (affectionless control) or low care and low control (neglectful parenting).
Moreover, we assume that gender differences relative to the impact of parental bonding on suicidal ideation and depression could be found. Participants The sample of this study includes students (first to sixth year, including out-of-course students) of the Faculty of Medicine of the University of Ferrara, Italy. The study was publicized through the use of electronic media (university email and student message group) and the data collection was based on a questionnaire created in Google Forms. The choice of the Google Forms platform was based on the simplicity of its use (thus reducing compilation errors to zero), and because we expected that an online data collection would guarantee a higher number of participants, creating in this way an adequate sample size. Participation in the research was guaranteed by anonymity: data collection on Google Docs generated a database with numerically encoded information, without any possibility of tracing the identity of the respondent. Recruitment of respondents began on July 20, 2021, and ended on September 12, 2021, in order to ensure more students responded to the questionnaire given the exam period and summer vacation. To ensure that only students enrolled in the medical school would fill it out, we made sure that only those who had received the email from the educational coordinator had access to the questionnaire. Students could only access the questionnaire through exclusive use of their credentials accepted by the University of Ferrara's IT management system. In addition, no underage subjects were included among the participants, as compulsory schooling in Italy lasts one year longer than in other countries, and consequently students access university when they reach the age of majority. The study received approval from the Ethics Committee of the University of Ferrara (Italy). Procedure The study was conducted by the Neurological, Psychiatric, and Psychological Sciences Section of the Department of Neuroscience and Rehabilitation, Faculty of Medicine, Pharmacy and Prevention, University of Ferrara (Italy). Demographic and anamnestic data and psychometric measures were retrieved from an online questionnaire, which was delivered through the Google Forms platform. Before beginning the questionnaire, there was a brief description of the study and its instruments to inform and guide the respondent through the completion. Measures Personal data sheet For each participant, the following information was collected: age, sex, year of course, age of father and mother, presence or absence of siblings in the family and some anamnestic data on psychological health: whether they had experienced psychological distress with impairments in quality of life, poor performance in daily activities or obstacles in life choices; whether they had used psychopharmaceutical drugs or had undergone psychotherapy sessions; whether they had ever used drugs or alcohol, and whether they were currently abusing drugs or alcohol. It was also assessed whether any participants had experienced self-injurious behaviors or suicide attempts. The last two questions in the personal data sheet were about recent life events that have negatively affected psychological condition and quality of life, as follows: "In the last six months, has an event occurred that has negatively affected your psychological condition and quality of life?"
and "If YES, which one or ones?, " allowing respondents to choose from the following answers: "The death of a loved one, " "a failure in my academic career, " "a problem in my family of origin, " "an economic problem, " "a sentimental problem, the end of an emotional relationship, " "the separation from my family and/or my country" and "other. " Parental bonding instrument The PBI is a self-administered questionnaire that consists of two forms, one for the father and one for the mother, each with 25 items: 12 items assess the "care" dimension, which implies an affectionate and empathetic attitude, while the remaining 13 assess the "protection" dimension, which implies controlling and constraining behaviors. Based on how children remember their parents in their first 16 years of life, they will assign a rating to the different statements contained in the items. The score is assigned according to a Likert scale with values from 0 to 3 for each statement, so the total score will have a range 0-36 for the "care" dimension and 0-39 for the "protection" dimension. For the "care" dimension, if the score is 24 or higher (for the father) or 27 or higher (for the mother) it will be called "high care, " if lower "low care. " For the dimension "protection, " a score equal to or greater than 12.5 (for the father) and 13.5 (for the mother) indicates "high protection, " if lower "low protection. " There are four possible patterns of behavior and affective parenting style depending on the combinations of the two dimensions: low care/low protection (neglectful parenting), low care/high protection (affectionless control), high care/low protection (optimal parenting) and high care/high protection (affectionate constraint). The PBI is an instrument that has shown excellent test-retest reliability and durability over time, even at 10 and 20 years Parker, 1982;Warner and Atkinson, 1988;Mackinnon et al., 1989;Wilhelm et al., 2005;Murphy et al., 2010). In the current study it was used the Italian version of the PBI, which reports in the sample of university students the following mean values of "care" and "protection": for the mother the "care" mean score was 29.81 (±6.15) and for the father 26.80 (±7.87); regarding "protection, " the mean scores were 13.79 (±7.38) for the mother and 12.41 (±6.96) for the father. In male students, the mean scores were for "mother care" 31.21 (±4.59) and "father care" 27.9 (±7.56); while for "mother protection" 12.17 (±6.01) and "father protection" 11.89 (±5.65). In female students, the mean scores were for "mother care" 27.65 (±7.55) and "father care" 25.1 (±8.13); while for "mother protection" 16.3 (±8.6) and "father protection" 13.55 (±8.61; Scinto et al., 1999). In the sample of students of the validation study, Cronbach's alpha was 0.88 and 0.86 (respectively, care and protection) for the mother, while 0.91 and 0.83 (respectively, care and protection) for the father. In our sample, Cronbach's alpha was 0.92 and 0.88 (respectively, care and protection) for the mother, while 0.92 and 0.87 (respectively, care and protection) for the father. Beck hopelessness scale The BHS is a 20-item self-rating scale that detects and quantify "hopelessness, " that is a negative attributional attitude about future possibilities, included in Beck's cognitive model of depression (Beck, 1967), associated with increased suicidal risk and related to negative feelings about the future, loss of motivation, and loss of expectations (Beck et al., 1974). 
This scale assesses the severity of negative expectations about the future in both the short and long term. It evaluates the respondent's feelings over the previous week using "True/False" responses corresponding to a score of 0 or 1. The total score ranges from 0 to 20 and higher scores indicate a higher prevalence of suicidal ideation. Of the 20 true-false statements, 9 are FALSE-keyed and 11 are TRUE-keyed to indicate the presence of pessimism about the future. The BHS takes 5-10 min to complete, and where required can also be administered orally by the examiner. It is recommended for individuals over the age of 17 (Pompili et al., 2009). This instrument has demonstrated particular utility as an indirect indicator of suicide risk in depressed individuals or individuals who have attempted suicide, and although it was not developed as an instrument to determine hopelessness in adolescents and adults in the general population, it has nevertheless been used for these purposes as well (Greene, 1981; Durham, 1982). In this study we used the Italian version of the BHS (Pompili et al., 2009). The internal consistency reliability of the BHS measured using the KR-20 index (Kuder-Richardson Formula, analogous to Cronbach's alpha for dichotomous measures) ranges between 0.87 and 0.93 for the original version (Beck and Steer, 1993). In the Italian version, the KR-20 index ranges between 0.75 (university student sample) and 0.89 (psychiatric sample; Pompili et al., 2009). In our sample, Cronbach's alpha was 0.73. Beck depression inventory-II The BDI-II is the most widely used instrument for detecting the existence and severity of depressive symptoms, taking into account affective, cognitive, somatic, and vegetative domains (Beck et al., 1996). It is based on Beck's theory that depressed patients are characterized by a negative triad, i.e., negative representations of themselves, of the present and of the future (Beck, 1967). It is a self-administered tool containing 21 items, each using a 4-point scale, that takes around 5-10 min to complete. It can be used with individuals aged 13 and above (Beck et al., 1996). The patient is asked to consider each statement relating to the way he or she has felt over the past 2 weeks. The following domains are evaluated: sadness, pessimism, past failure, loss of pleasure, guilty feelings, punishment feelings, self-dislike, self-criticalness, suicidal thoughts or wishes, crying, agitation, loss of interest, indecisiveness, worthlessness, loss of energy, changes in sleeping pattern, irritability, changes in appetite, concentration difficulty, tiredness or fatigue, and loss of interest in sex. Scores in each item range from 0 (absence of symptoms) to 3 (severe symptoms) and the total score ranges from 0 to 63. Higher scores indicate more severe depressive symptoms. Through the questionnaire we are able to obtain information related to individual items, which can be of help to the clinician. One item that we decided to take into consideration is item #9 "Suicidal thoughts," which has been shown to be indicative of suicidal ideation and suicide risk (Green et al., 2015). In our study we used the Italian version of the BDI-II (Ghisi et al., 2006), using a cut-off score ≥ 14 as the threshold for detecting a clinically significant presence of depressive symptoms, with the following score ranges to quantify the severity of the depression: 0-13 minimal, 14-19 mild, 20-28 moderate, and 29-63 severe (Beck et al., 1996).
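As a compact illustration of the scoring rules just described, the following Python sketch (our own illustrative helper, not part of the study's analysis code) classifies a PBI parenting style from the "care" and "protection" scores using the cut-offs reported above, and maps a BDI-II total score onto the reported severity bands.

def pbi_parenting_style(care, protection, parent):
    # PBI cut-offs: "high care" >= 24 (father) or >= 27 (mother);
    # "high protection" >= 12.5 (father) or >= 13.5 (mother).
    high_care = care >= (24 if parent == "father" else 27)
    high_protection = protection >= (12.5 if parent == "father" else 13.5)
    if high_care and high_protection:
        return "affectionate constraint"
    if high_care:
        return "optimal parenting"
    if high_protection:
        return "affectionless control"
    return "neglectful parenting"

def bdi_ii_severity(total_score):
    # BDI-II bands: 0-13 minimal, 14-19 mild, 20-28 moderate, 29-63 severe;
    # a total of 14 or more is treated as clinically significant.
    if total_score <= 13:
        return "minimal"
    if total_score <= 19:
        return "mild"
    if total_score <= 28:
        return "moderate"
    return "severe"

# Example: a father rated 20 on care and 18 on protection is classified as affectionless control.
print(pbi_parenting_style(20, 18, "father"), bdi_ii_severity(21))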
The BDI has proven to be an excellent case-finding screen for depression in a variety of adult samples. However, in the general population the cut-scores should be adapted to the sample because those previously listed refer to a population of patients with a diagnosis of major depression, and therefore are designed to have few false negatives (Hubley, 2014). Furthermore it should be noted that when interpreting the results we should always keep in mind that we are using a screening instrument, and therefore the diagnosis of depression requires further analysis and that we may have response bias (with over-and under-reported symptoms; Hubley, 2014). Cronbach's alpha for the BDI-II is 0.87 and the split-half reliability coefficient is 0.77. In our sample, Cronbach's alpha was 0.92. Data analysis Data were presented as absolute numbers, percentages, mean ± Standard Deviation (SD) if normally distributed, or median and interquartile ranges (IQR) as appropriate on the basis of data distribution. Comparisons were performed using a two-tailed, independent samples student t-test or Mann Whitney U test as appropriate, according to the data distribution for continuous variables. Dichotomous variables were compared using the Chi squared test. The correlation between variables was tested by calculating the Spearman's correlation coefficient. To compare BDS and BHS scores across different categories of parental bonding, we used the Kruskal-Wallis test. To identify variables independently associated with the probability of scoring positive either on BDS or BHS, we calculated the odds ratio (OR) and 95% confidence interval (CI) by means of multivariable logistic regression analysis. In two logistic regression models, dichotomous BDS and BHS were entered as dependent variable. The two models included: sex, age, year of course, "Mother Care, " "Father Care, " "Mother Protection, " and "Father Protection. " Age and year of course were entered as continuous variables, the others as dichotomous variables. Moreover, the same analyses were carried out separately for males and females. The program used for the analysis was SPSS version 25. Descriptive analysis The total number of students registered in the medical school at the university was 1982, while the number of students who responded to the questionnaire was 671. The response rate (RR) was therefore 34%. Of these 671, 489 were females (72.9%) and 182 were males (27.1%). The average age of the sample was 22.75 (±3.525). Regarding the year of the course, the students were divided as follows in the sample: 176 of the first year (26.2%), 199 of the second (29.7%), 66 of the third (9.8%), 71 of the fourth (10.6%), 69 of the fifth (10.3%), 35 of the sixth (5.2%) and 55 out-of-class students (8.2%). Average age of the father in the sample was 57.82 (±6.02) and of the mother 54.78 (±5.3). To the question "Are you an only child?" 548 participants answered "No" (81.7%) and 122 with "Yes" (18.2%; Figure 2). About the descriptive analysis of the anamnestic portion of the first battery of questions, the results are as follows (Figure 3). Within our sample, to the question "Have you ever experienced psychological distress such that you felt your quality of life was significantly altered, encountered obstacles in your life choices, and poor performance in your activities?" 440 participants (65.6%) answered "Yes" while 231 "No" (34.4%). 
Among the "Yes" respondents, more women (68.1% among females) than men (58.8% among males) reported this distress, and the difference was statistically significant (Chi square = 5.09; p < 0.05). When asked "Have you ever had psychotropic drug therapy in the past?" 603 participants (89.9%) said "No, " whereas 68 (10.1%) answered "Yes. " We found no significant gender differences in this item. To the question "Have you had psychotherapy interviews in the past?" 416 participants (62%) said "No," while 255 or 38%, answered "Yes." Again, we found no significant gender differences. Regarding substance use and abuse, to the question "Have you ever used drugs or alcohol?" 49.9% of respondents (335) answered "No, " while 50.1%, or 336 respondents, answered "Yes. " In this case, as evidenced by the contingency table, we found a statistically significant gender difference (Chi square 26.9; p < 0.01), with 66% of male respondents versus 44% of female respondents reporting drug or alcohol use. Sample data. Anamnestic data. Frontiers in Psychology 08 frontiersin.org To the question "Do you currently tend to use drugs or alcohol with consequences for your performance ability?" 6.9% of the sample (46 individuals) responded positively, while 93.1% (625 individuals) responded negatively. Also here, as in the previous question, and as evidenced by the contingency table, we found a statistically significant gender difference (Chi square 8.6; p < 0.01), with 11.5% of males answering affirmatively to the item, compared to 5.1% of females. When asked "Have you ever experienced self-injurious behaviors supported by suicidal ideation?" 581 respondents (86.6%) answered "No, " while 90 (13.4%) answered "Yes. " As evidenced by the contingency table, we found a statistically significant gender difference in responses to this item (Chi square 11.7; p < 0.01), with 16.2% of female respondents responding positively, compared with 6% of males. Regarding the last two items, to the question "In the last six months did an event occur that negatively affected your psychological condition and quality of life?, " 52.5% of respondents (352 individuals) answered "No, " while 47.5% (319ss) answered "Yes. " We found a statistically significant gender difference in this item as well (Chi square 7.3; p < 0.01), with more females (50.7% of females) than males (39% of males) reporting a recent negative event. Among those who answered "Yes" to the last item ("In the last six months did an event occur that negatively affected your psychological condition and quality of life?"), it was also asked to select which of the events on the list had negatively impacted quality of life and psychological status. We left the option of selecting more than one response for this item. These were the percentages of the selected items: "Other" (42%; 136ss), followed by "A failure in my academic career" (33.3%; 108ss), "A problem with my family of origin" (22.5%; 73ss), "A sentimental problem, the end of an affective story" (21%; 68ss), "The death of a beloved person" (18,2%; 59ss), "An economic problem" (10.2%; 33ss) and finally "The separation of my family and/or my country" (4%; 13ss). About the PBI, the results are as follows. Referring to the father, the average score on the dimension "care" was 22.38 (±8.42), while on the dimension "protection" it was 11.94 (±7.14), whereas considering the mother, the mean score related to the dimension "care" was 27.47 (±7.4), while for the dimension "protection" it was 14.09 (±7.70). 
In the BHS, and referring to the score, the median was 4 with an Interquartile Range (IQR) 2-7. No statistically significant gender differences were evident in this case. Concerning the cut-off, participants who had a score greater than or equal to 9 were 112 (16.7%), while 8 or less were 559 (83.3%). Again, no statistically significant gender differences were found regarding the cut-off. In the BDI-II, referring to the score, the median was 14, with an interquartile range 7-22 (Table 3). In this case, however, we found a statistically significant difference (p < 0.01) in gender, as the median of females (14; IQR 9-23) was higher than the median of males (9; IQR 5-17.25). When we refer to the cut-off, 50.2% of the sample (337ss) showed a possible depressive disorder because they scored 14 or higher, while 49.8%, (334 respondents) scored 13 or lower (Table 3). We again showed a statistically significant gender difference (p < 0.01): 54.19% of females exceeded the cut-off, in contrast to 39.56% of males. On question #9 of the BDI-II, "Suicidal thoughts, " 17.73% of our sample (119 responses) scored 1 or higher, indicating suicidal ideation, while 82.27% (552) had a score of 0. We found no statistically significant gender differences. Because BHS also assesses suicidal ideation, we wanted to compare this item with median scores, and the #9 BDI-II cut-off. Among respondents who scored 1 or higher on item #9 of the BDI-II the median BHS score was 8 (IQR 4-13), whereas among those who scored 0 the median BHS score was 3(IQR 1-6). This difference was statistically significant (p < 0.01). BHS and BDI scores significantly differed in the four categories of PBI. For Parenting Father categories, we found a significant difference for both BHS (p < 0.001) and BDI-II (p < 0.001) scores. At the post hoc analysis, regarding BHS, the significant differences were between the category "Optimal parenting" and "Affectionless control" (p < 0.001) and between "Optimal parenting" and "Neglectful parenting" (p < 0.005). In fact, for the "optimal parenting" group the median BHS score was 3 (IQR 1-5) while for the "Affectionless control" category the median was 4 (IQR 2-8) and for "Neglectful parenting" it was 4 (IQR 2-7; Table 5). By analyzing separately the two genders, we found the above significant differences only for women (p < 0.001 for both BHS and BDI). There were no statistically significant differences regarding male sex. Regarding Parenting Mother categories, we again found a significant difference for both BHS (p < 0.0001) and BDI-II (p < 0.0001) scores. The differences remained statistically significant analyzing men (p < 0.05 for BHS, and p < 0.0001 for BDI) and women (p < 0.0001 for both scores). Discussion Through this discussion, by taking up and summarizing the results and comparing them with the existing literature on these topics, we will try to dwell both on the confirmations we have received as they add statistical relevance to the concepts, and on the novelty aspects that characterized the work. To begin with, the results obtained from the personal data sheet showed us significant gender differences. More females than males suffer a distress affecting their quality of life, with selfinjurious behaviors sustained by suicidal ideas and reported a negative event affecting their psychological condition. On the other hand, more males than females reported use or abuse of drugs or alcohol. 
This last finding is in line with the current literature, which shows a higher percentage of males than females in alcohol and drug use/abuse among medical students (Candido et al., 2018). Another interesting finding was "a failure in my academic career" as the main specific negative event impacting the psychological status and quality of life of participants. The finding is consistent and understandable given the specificity of the sample studied but should not be underestimated. Results in our sample using the BHS to assess suicidal ideation are consistent with the findings in Lester's (2013) review of hopelessness in college students. Nonetheless, the percentage of participants who exceeded the cut-off value indicative of suicidal ideation (16.7%) was higher than that reported in the meta-analysis by Rotenstein et al. (2016) regarding the prevalence of suicidal ideation among medical students (11.1%) and also exceeded the overall lifetime prevalence of suicidal ideation, which is 9.2% (Nock et al., 2008). Assessment with item #9 of the BDI-II likewise found the presence of suicidal ideation in nearly one in five students. However, our data concur with those of Sampogna et al. (2020) about the percentage of suicidal thoughts in Italian medical students, which was reported to be around 17%. Because our finding agrees with the Italian data but differs from those abroad and in the general population, the evidence adds statistical significance to the psychological condition of medical students in Italy. Regarding depressive symptoms in medical students, our study reports high prevalence rates, with more than half of the sample exceeding the BDI-II cut-off. This differs from Rotenstein et al. (2016), which reported that the median summary prevalence among medical students was 32.4% (95% CI, 25.8-39.7%) for the Beck Depression Inventory (BDI) with a cut-off score of 10 or greater (the cut-off of 10 in the BDI is comparable to that of 14 in the BDI-II). As seen earlier regarding suicidality, again the Italian finding is higher than the foreign one. Our findings also confirm the well-known evidence of gender differences in reporting the presence of depressive symptoms, with female gender at higher risk of developing depression (Malhi and Mann, 2018). Our sample therefore showed higher prevalence rates of suicidal ideation and depression than the data in the literature and the general population. This finding is also relevant as it could be related to the mental health consequences of the SARS-CoV2 pandemic, even though our data collection occurred in the summer, a season with lower percentages of Covid19 cases and less psychological distress due to restrictions. About parenting styles, our study showed mean scores in the two dimensions of "Care" and "Protection" quite in line with those found on a sample of Italian university students in Scinto's study (Scinto et al., 1999), with the evidence of a trend already highlighted by the literature that mothers are rated as more caring and protective than fathers (Parker, 1983; Truant et al., 1987). With respect to the impact that the quality of parental bonds has on suicidal ideation and depression, we received several confirmations, plus some new evidence. First, in the BHS data analysis, the score correlates positively with the BDI-II score and with the "protection" dimension, and negatively with the "care" dimension.
Similarly, we found that the BDI-II score correlates positively with the BHS score and with the "protection" dimension, and negatively with the "care" dimension. These correlations agree with the current literature, where we saw that affectionless control, the pattern of parenting characterized by low care and high control, is related to suicidal ideation (Goschin et al., 2013) and major depression (Parker, 1979; Enns et al., 2002; Heider et al., 2006; Visioli et al., 2012). Furthermore, the correlation between BDI-II and BHS confirms how depressive symptoms play a crucial role in suicidal ideation (Klonsky et al., 2016). Secondly, considering the associations of BHS and BDI-II scores with the four Parental Bonding styles evidenced by the PBI, we found significant differences. Regarding the father, "Affectionless control" and "Neglectful parenting" related to higher BHS and BDI-II scores than "Optimal parenting." Once the same analysis was done individually on the two genders, these differences were no longer relevant, but remained only among females. About the mother, with respect to BHS and BDI-II, we found more statistically significant differences between the four patterns than for the father. As for the BHS, in addition to "Affectionless control" and "Neglectful parenting," we also detected a higher average BHS score in "Affectionate constraint," perhaps indicating the importance of the role of the "Protection" dimension in the onset of suicidal ideation. Likewise, the average scores for the BDI-II were higher in "Affectionless control" and "Neglectful parenting" than in "Affectionate constraint" and "Optimal parenting." If we consider the gender comparison, findings highlighted significant differences in males and females. It appeared to us that males were less sensitive to the father's parenting with regard to suicidal ideation and depression. Furthermore, the mother's parenting, in both males and females, seemed to have a greater impact than the father's in relation to the BHS and BDI-II scores. This remark agrees with those made in the review of Goschin et al. (2013). Thirdly, we found that the "Care" dimension of father and mother was a "protective factor" toward suicidal ideation in the overall sample, but once the analysis is differentiated for gender, this dimension remains "protective" only in females. As for depressive symptoms, female gender was the most relevant predictor, along with "Mother protection" as well. Again, the "care" dimension of father and mother proved to be a "protective factor" for depressive symptoms. In the two-gender analysis, however, the dimension "care" of father and mother is no longer "protective" in males, but only in females. This discordance between males and females on how the quality of their parental bond affects suicidal ideation and depression suggests that perhaps males are less sensitive than females to Parental Bonding. This point in particular is a novelty in our work, which could be a starting point for future studies on several grounds: on one hand, to find confirmation of this statement, and on the other, to investigate its reasons and evolution. The significance of the finding is possibly related to sociocultural factors and the predominant role that the mother has played and still plays in child growth in the Italian context, although there have been changes in the last years. A study similar to ours in the next years might in fact show changes that would help us understand and analyze more carefully the factors involved in psychopathology.
This study has some limitations. In the sample, males are substantially less represented than females (27.1% vs. 72.9%). In addition, cross-sectional studies are not adequate to test etiological hypotheses but only to formulate them, and are susceptible to biases such as responder bias, recall bias, interviewer bias and social acceptability bias. The use of self-rated psychometric instruments may be susceptible to cognitive bias, bias of overestimation or underestimation of symptoms, or, in the case of the PBI, since it is retrospective, even to memory bias. Finally, another limitation of this study is the absence of a control group. Conclusion These data are partial and preliminary, so more studies are necessary in order to further expand knowledge on this topic. Nevertheless, we may consider some clinical implications of the work, and suggest some recommendations to students, families, universities, and society. From a clinical perspective, the work suggests that we should pay attention, in assessing depressive symptomatology and suicidal ideation, to the patient's relationship with his or her family. Moreover, in focusing on this determinant, it is good to take into account the gender differences that might be observed. Furthermore, even in our study we emphasized the higher impact of the mother's bad parenting on the susceptibility and onset of depression and suicidal ideation. Nevertheless, it is good to take into consideration the figure of the father, whose impact on psychopathology might become stronger over the years as a result of sociocultural changes concerning the roles and functions of the father figure in the family setting and in the specificity of the Italian context. This is precisely why the PBI could help, as an agile tool that allows us in a relatively short time to have important insights into both parenting styles. The recommendations to be made are therefore many. As for students, the decision to undertake a long and difficult course of study such as Medicine should be more individual and independent and less related to family or cultural heritage. Moreover, it is interesting to point out that the choice to pursue this type of career is probably linked to original narcissistic wounds, and consequently to reparative drives ("caring for others to care for oneself") that may manifest themselves in the choice of a "helping profession," particularly the medical one. It would be a good idea, therefore, to activate counselling services starting from high schools, so as to wisely guide students to a more conscious choice for their future. With regard to parents, however, the importance of care and of limiting control is significant and should also be combined with better harmony and cooperation between parental figures. Universities should try to strengthen specific counselling services, and to conduct student evaluations that are as integrated and comprehensive as possible. Given the onset of major psychiatric illnesses precisely in this stage of life, it is necessary to intercept the specific vulnerabilities of this juvenile population early, to capture the individual reasons underlying psychological distress, and to diagnose the existence of psychiatric disorders. Strengthening these interventions would allow us to intervene preventively, limiting the recurrence and the amplitude of negative outcomes.
So, it might be worthwhile in the first two years of the degree to submit all students to psychiatric screening evaluations by means of quickly and easily administered psychometric self-assessment instruments, which, although limited in their reliability, can provide interesting and useful information in order to prevent the onset of psychological distress. Concerning society, the study confirms that in Italy the percentages of depressive symptoms and suicidal ideation in medical students are higher than in other countries. This could stimulate a reflection on what the differences are in the conditions of our students compared to those abroad, trying to draw lessons from other realities, which could increase students' wellbeing. Our study also found that, among students, a high percentage (33%, 1 in 3 students) reported an academic failure as an event that worsened their quality of life and negatively impacted their psychological health. The sometimes more social than personal need and drive to "succeed" could be another key to interpreting these results. University students, today more than in the past, have a specific vulnerability to the psychological stress due to the performative demands they are subjected to. This, along with the reality that the onset of many psychiatric disorders occurs precisely between adolescence and early youth, prompts us to dwell on the importance for universities, families, and society to focus on the quality of life and mental health of their students. While it is important to maintain a meritocratic mechanism of "rewarding" work through grades, at the same time, the necessity of supporting students in their academic journey with proper attention to their mental condition must be emphasized. Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Ethics statement All procedures performed in the study were in accordance with the 1964 Helsinki declaration and its later amendments or comparable ethical standards, and were approved by the Ethics Committee of the University of Ferrara (Italy). All patients took part on a voluntary basis and were not remunerated for their participation. They were assured of the anonymity and confidentiality of the information provided and were informed that they could stop completing the questionnaire at any time if they so wished. They were also assured that the collected data would be used only for the purposes of the study. Author contributions ST and JS contributed to conception and design of the study, planned the research project, and organized the database. JS wrote the first draft of the manuscript and was responsible for the review of the literature. ST contributed to the preparation of the manuscript and wrote sections of the manuscript. IC performed the statistical analysis. SC contributed to conception of the study, supervised the design of the study, and critically reviewed the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
2022-08-04T13:45:23.502Z
2022-08-04T00:00:00.000
{ "year": 2022, "sha1": "a0adf80398d3af0a8db22862ec1983ab10c592e7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "a0adf80398d3af0a8db22862ec1983ab10c592e7", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
3045566
pes2o/s2orc
v3-fos-license
HIV-1 Quasispecies Delineation by Tag Linkage Deep Sequencing Trade-offs between throughput, read length, and error rates in high-throughput sequencing limit certain applications such as monitoring viral quasispecies. Here, we describe a molecular-based tag linkage method that allows assemblage of short sequence reads into long DNA fragments. It enables haplotype phasing with high accuracy and sensitivity to interrogate individual viral sequences in a quasispecies. This approach is demonstrated to deduce ∼2000 unique 1.3 kb viral sequences from HIV-1 quasispecies in vivo and after passaging ex vivo with a detection limit of ∼0.005% to ∼0.001%. Reproducibility of the method is validated quantitatively and qualitatively by a technical replicate. This approach can improve monitoring of the genetic architecture and evolution dynamics in any quasispecies population. Introduction Many viruses have such high replication and mutation rates that they exist as a quasispecies in vivo [1]. A viral quasispecies population contains a variety of genotypic variants that are related by similar mutations and exist in varying abundance depending on their relative fitness within the host environment. In this report, we refer to viral quasispecies as the whole population of genotypic variants, whereas viral sequence is defined as the individual viral variant within quasispecies population. Viral sequence variation in the quasispecies population can be rapidly generated by point mutation and/or recombination [1,2]. Mutation rates can be as high as in the order of one per replication cycle, in which the progeny virus is unlikely to be identical to its parental template. This diverse array of viral sequences permits robust adaptation and evolution. Often, genotypes with a particular set of mutations gain a significant fitness advantage through synergistic phenotypic effect among multiple mutations, which is also known as epistasis. Epistasis has an important role in host adaptation and may drive evolution towards drug resistance and immune evasion [3][4][5][6][7]. In many cases, virus drug resistance requires two or more mutations in concert, especially when multiple drugs are applied simultaneously [7][8][9]. Therefore, monitoring individual viral haplotypes in the quasispecies populations within patients is important to estimate the risk of viral rebound and further provide customized treatment [10]. Characterizing the population structure of viral quasispecies in the host also helps to understand the evolutionary landscape and cis-interactions among genetic elements. Clonal sequencing has been frequently employed to examine the genetic makeup of individual viruses within a quasispecies population. However, clonal sequencing has a low throughput and a high sequencing cost per nucleotide. It limits the number of viral sequences, hence haplotype variants, being genetically interrogated. On the other hand, next generation sequencing (NGS) technology provides enough throughput and sensitivity to detect very rare viral mutations. Nevertheless, the short read lengths of NGS pose a challenge in reconstruction of individual viral sequences within a viral quasispecies. First of all, it is often difficult to distinguish rare mutations that exist in the quasispecies population with sequencing errors from NGS. Secondly, haplotype phasing is extremely challenging when mutations are sporadic and are separated by long, highly conserved or even completely identical regions. 
These technical challenges make it extremely difficult to reconstruct viral quasispecies from NGS data. Existing methods for reconstructing viral quasispecies from NGS platforms rely heavily on computational tools, including the development of read graph-based or probabilistic-based algorithms that utilize the information from overlapping reads [11][12][13][14][15][16][17][18][19][20]. Although they provide an approximation of the haplotype information present in a viral quasispecies, the sensitivity and accuracy vary depending on sequencing error rate and quasispecies diversity. As a result, it is critical to develop a viral quasispecies reconstruction method with higher sensitivity and accuracy in both mutation calling and haplotype phasing. In order to genetically define a viral quasispecies population, we developed a novel analytical technique to assemble short Illumina amplicon sequence reads derived from individual viral sequences. In contrast to algorithmic-based methods for quasispecies reconstruction, the tag linkage approach is a molecular-based approach. To the best of our knowledge, this is the first experimental approach specialized for quasispecies reconstruction. The methodology consists of three key steps: 1) assigning unique tags to individual viral sequences to distinguish each variant within the viral quasispecies, 2) controlling the complexity of the library during amplification to ensure sufficient coverage for sampled viral sequences, and 3) using a tag linkage strategy to deduce the full-length templates from non-overlapping amplicons. Here, we provide a proof-of-concept study by utilizing this approach to genetically characterize an HIV-1 quasispecies population under two conditions: an isolated in vivo virus population and the virus population derived from the same chronically infected HIV-1 patient passaged ex vivo in cell culture. We achieve a detection limit of ~0.005% to ~0.001%. The reproducibility is validated with a technical replicate. Overall, this approach enables accurate haplotype phasing with very high sensitivity. Library Preparation for Sequencing The underlying rationale is to assign a unique tag to individual viral sequences within the quasispecies and to distribute the tag to every sequencing read originating from the same viral sequence (Figure 1A). Individual viral sequences within the quasispecies can be assembled by grouping sequencing reads that share the same tag. As a result, the tag linkage approach described in this study permits reconstruction of individual viral sequences from NGS reads despite the lack of overlap. The workflow for sequencing library preparation is summarized in Figure 1B-F. Briefly, individual DNA molecules are assigned a unique tag by PCR (Figure 1B). The tag consists of a 13-nucleotide degenerate "N" sequence that allows distinguishing 4^13 ≈ 70 million molecules. After tagging individual DNA molecules within the pool, the complexity of the pool is controlled. Complexity is defined as the number of tagged DNA molecules being processed after the first round of PCR. Thus, the more tagged molecules are being processed, the higher the complexity becomes. If complexity is too high, individual tagged molecules will not be covered repeatedly, leading to a failure to assemble individual DNA molecules (Figure S1A in File S1). On the other hand, if complexity is too low, sequencing capacity will be wasted due to redundant sequencing coverage of the individual tagged DNA molecules being processed (Figure S1B in File S1).
Nonetheless, for quasispecies determination, it is more detrimental if the complexity is too high versus too low, because excessive complexity will abolish the sequence assembly process (Figure S1 in File S1). In general, the relationship between complexity and expected coverage for an individual viral sequence can be calculated from the expected sequencing output: expected coverage = sequencing capacity / (complexity × length of region of interest). In this formula, sequencing capacity and length of region of interest can be predetermined. Therefore, complexity is estimated solely based on the desired coverage of each tagged DNA molecule. For example, if the region of interest is 1 kb and 1 Gb of sequencing output is expected, then a complexity of 100,000 gives on average 10-fold coverage for individual tagged DNA molecules being processed. With sufficient coverage for an individual viral sequence, we can distinguish sequencing error from true mutation as described previously [21], in addition to haplotype phasing. Therefore, complexity control represents a critical step in our experimental design. After controlling the complexity, a PCR is performed to generate multiple copies of individually tagged DNA molecules (Figure 1C). The resultant DNA pool is then divided into a series of PCRs to generate products with different lengths (Figure 1D). For every pool, the resultant PCR products contain two different restriction sites, one at each end. Next, restriction enzyme digestions generate two sticky ends and remove the constant region used for PCR in the earlier step. A self-ligation step follows with the addition of a short insert (Figure 1E). The short insert can serve as a barcode for multiplex sequencing. This ligation step circularizes the DNA, resulting in different sequence regions being proximal to the tag and further allowing linkage formation between any distal region and the tag - another key step in our experimental design. In the final step, a short amplicon (∼200 bp) is recovered for NGS (Figure 1F). Each NGS read, from 5′ to 3′, will cover a tag for short read assembly within a quasispecies sample, a barcode for quasispecies sample identification, and a particular region of interest on the targeted viral sequence. NGS reads sharing the same tag belong to the same DNA molecule. Therefore, haplotypes of individual viral genomes within the quasispecies population can be interrogated. A more detailed schematic representation of the key steps in our approach is shown in Figure S2 in File S1. Assembly of Two HIV-1 Viral Quasispecies Virus derived from a chronically infected HIV-1 patient was analyzed before (in vivo) and after (ex vivo) cell culture passaging for 10 weeks. The in vivo virus sample represented the viral quasispecies within the HIV-1 infected patient. In the ex vivo passaging, virus from the same patient was passaged serially in primary CD4+ T lymphocytes from an HIV-1-uninfected donor and reflected the evolution of the viral quasispecies population in the absence of intra-patient selection pressure. We limited the complexity by processing roughly 300,000 viral sequences to ensure sufficient coverage (∼50-fold) in all regions for any given viral sequence (Figure 1B). Twelve non-overlapping amplicons, which cover a 1,295 nucleotide stretch and encompass most of the gag and a portion of the pol genes of the HIV-1 genome, were prepared. Sequencing was performed using an Illumina HiSeq 2000 machine. Sequencing coverages in different regions were similar (Figure 2A). The numbers of unique tags in different regions were also comparable (Figure 2B).
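Returning to the complexity-control design: the relationship stated above can be expressed as a small calculation. The following Python sketch is illustrative only (function and variable names are hypothetical, not taken from the study's scripts); it encodes the stated relationship between sequencing output, complexity, and per-molecule coverage, together with the size of the 13-nucleotide random tag space.

# Illustrative sketch of the complexity/coverage relationship described above.
# Names are hypothetical; the study's own scripts are not reproduced here.

TAG_LENGTH = 13
TAG_SPACE = 4 ** TAG_LENGTH  # ~6.7e7 (~70 million) distinguishable tags

def expected_coverage(sequencing_output_bp, complexity, region_length_bp):
    """Average coverage per tagged molecule: output / (complexity * region length)."""
    return sequencing_output_bp / (complexity * region_length_bp)

def complexity_for_coverage(sequencing_output_bp, desired_coverage, region_length_bp):
    """Number of tagged molecules to process for a desired per-molecule coverage."""
    return sequencing_output_bp / (desired_coverage * region_length_bp)

# Worked example from the text: 1 Gb output over a 1 kb region at complexity 100,000
print(expected_coverage(1e9, 100_000, 1_000))   # -> 10.0 (10-fold coverage)
print(TAG_SPACE)                                # -> 67108864

The same functions can be run in reverse to pick a complexity for a target coverage, which is the design decision the complexity-control step implements.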
The absence of apparent coverage bias confirmed the quality of sequencing library preparation. For each region, tags with fewer than three occurrences were filtered and removed to adequately apply the error correction algorithm. This filter eliminated 35-57% of tags depending on region. For a complete viral sequence to be assembled, sequences of all 12 amplicon regions sharing the same tag had to be available (Figure S1 in File S1). We successfully assembled 54,583 viral sequences in the in vivo viral quasispecies and 228,936 viral sequences in the ex vivo quasispecies, thus validating the complexity control procedure (Figure S1 in File S1). However, about ∼30-40% of the tags were present in only one or two regions, which we attributed to PCR or sequencing errors at the tag region (Figure 2C). To further evaluate the data quality, the appearance of stop codons in gag was examined. Given that viable virus requires translation of a full-length Gag polyprotein, stop codons would likely represent PCR errors. While ∼0.4% (in vivo) and ∼0.2% (ex vivo) of the assembled sequences contained a stop codon, this number dropped dramatically (∼0.05%) after we filtered out the sequences with just one occurrence (Figure 2D). Further increasing the cutoff stringency, however, did not significantly suppress the stop codon occurrence frequency. These rare viral sequences were likely to be non-functional virus within the viral quasispecies population generated by hypermutation [22][23][24]. 47,083 assembled viral sequences from the in vivo viral quasispecies and 223,966 assembled viral sequences from the ex vivo viral quasispecies passed this quality filter, yielding 2,672 and 1,983 unique viral sequences, respectively. The number of unique viral sequences we successfully assembled represented a >20-fold increase as compared to that of the previously reported algorithm-based quasispecies assembly methods [11][12][13][14][15][16][17]. Additionally, the detection limits of rare viral sequences in this study (∼0.005% and ∼0.001% for the in vivo and ex vivo viral quasispecies, respectively) also significantly exceeded those reported for the algorithm-based techniques, which were in the range of ∼0.1% to ∼1% [15][16][17][18]. Comparison with Algorithmic-based Approach To the best of our knowledge, the existing quasispecies reconstruction approaches are algorithmic-based inference methods. In contrast, the tag linkage approach is a molecular-based, direct interrogation method. It is devoid of any inference error that is intrinsic to algorithmic-based approaches. Consequently, it enables a much higher accuracy in quasispecies reconstruction than conventional algorithmic-based approaches. We compared the performance of our tag linkage method with two algorithmic-based approaches: 1) the state-of-the-art ShoRAH tool [13], and 2) a recently published approach, QuasiRecomb [12], which takes natural recombination events into account. To implement the algorithmic-based approaches, a single-read DNA sequencing library of the in vivo quasispecies sample was prepared by standard DNA fragmentation. We also employed the tagging strategy here to distinguish true mutations from sequencing errors as previously described (see materials and methods) [21]. As a result, quasispecies reconstructions by ShoRAH and QuasiRecomb were minimally confounded by sequencing error. To provide a reference for comparison, we conducted traditional clonal sequencing for the in vivo quasispecies population.
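The tag-based filtering and assembly criteria described earlier in this passage (tags with fewer than three reads per region are discarded, and a viral sequence is assembled only when all 12 regions are present for one tag) can be sketched as follows. The read records and the per-region consensus step are simplified placeholders under stated assumptions, not the published pipeline.

from collections import defaultdict

N_REGIONS = 12
MIN_READS = 3  # tags with fewer than three occurrences in a region are removed

def assemble_viral_sequences(reads):
    """reads: iterable of (tag, region_index, corrected_read_sequence).
    Returns {tag: assembled full-length sequence} for fully covered tags."""
    per_tag = defaultdict(lambda: defaultdict(list))
    for tag, region, seq in reads:
        per_tag[tag][region].append(seq)

    assembled = {}
    for tag, regions in per_tag.items():
        # a complete viral sequence requires all 12 regions for the same tag,
        # each supported by enough reads
        if len(regions) == N_REGIONS and all(len(v) >= MIN_READS for v in regions.values()):
            assembled[tag] = "".join(consensus(regions[r]) for r in sorted(regions))
    return assembled

def consensus(reads_in_region):
    """Placeholder for the error-correction step: take the most frequent read."""
    return max(set(reads_in_region), key=reads_in_region.count)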
In this experiment, a 1106 bp region in the gag gene was considered. A total of 20 randomly selected clones were sequenced, which represented 14 different haplotypes. ShoRAH reconstructed 252 viral sequences from the in vivo quasispecies sample. However, none of the 14 haplotypes was reconstructed (Figure 3). For those 14 haplotypes, the respective closest viral sequence deduced by ShoRAH had an edit distance ranging from 1 to 12. QuasiRecomb, on the other hand, reconstructed 1343 viral sequences and was able to identify 1 out of 14 haplotypes from clonal sequencing. This haplotype had an estimated occurrence frequency of 0.8% from QuasiRecomb, while it accounted for 7 out of 20 clones in clonal sequencing. This implied that haplotype frequency estimation by QuasiRecomb was inaccurate and that a significant fraction of the haplotypes reconstructed by QuasiRecomb were false positives. QuasiRecomb can also be run in a conservative mode, in which only major haplotypes are reconstructed. Under this running mode, only 6 haplotypes were reconstructed and none of them overlapped with the 14 haplotypes obtained by clonal sequencing. In contrast, 9 out of 14 haplotypes from clonal sequencing were included in the quasispecies reconstructed by our tag linkage approach (Figure 3). The most abundant haplotype from clonal sequencing matched the most abundant reconstructed haplotype from the tag linkage approach in this region of interest. The other 8 identified haplotypes were estimated to have an occurrence frequency from 0.002% to 0.02%. This highlighted the sensitivity and accuracy of our tag linkage approach in reconstructing rare haplotypes. The five missing haplotypes were 1-3 edit distances away from their respective closest viral sequence in the quasispecies reconstructed by our tag linkage approach. Overall, the tag linkage approach achieved a significant improvement over algorithmic-based approaches in quasispecies reconstruction, both qualitatively and quantitatively. Diversity Comparison between in vivo and ex vivo HIV-1 Quasispecies We next examined the sequence diversity in both in vivo and ex vivo quasispecies populations. The most frequent viral sequence represented 8.1% of the in vivo viral quasispecies, whereas the most dominant viral sequence represented 32.5% of the ex vivo viral quasispecies (Figure 4A). The two most dominant viral sequences in the ex vivo sample comprised more than half of the total viral quasispecies, while the in vivo viral quasispecies was much more diverse. At the amino acid level, 80% of the in vivo viral quasispecies were represented by four protein sequences, with a total of 42 unique protein sequences in the population (Figure 4B). In contrast, while only two protein sequences represented 80% of the ex vivo viral quasispecies, there were 201 unique protein sequences. Table S1 in File S1 provides a summary of these data. A phylogenetic tree analysis demonstrated the effect of differential selection pressures on viral quasispecies evolution from in vivo to ex vivo, in which two distinct sub-population clusters could be observed (Figure 4C and D). Recombination Pattern of HIV-1 Quasispecies HIV-1, as a diploid retrovirus, is capable of generating recombinant proviral transcripts via a template switching event during the reverse transcription step of viral replication. This facilitates further diversification for adaptation [2]. The depth and comprehensiveness of our data permit an investigation of this viral recombination as a linkage disequilibrium pattern.
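As an aside on the haplotype comparison above: the reported distances between each clonally sequenced haplotype and its nearest reconstructed counterpart are edit distances. A generic sketch of such a comparison is given below; the Levenshtein implementation is a standard one and is not taken from the paper.

def edit_distance(a, b):
    """Levenshtein distance by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # (mis)match
        prev = curr
    return prev[-1]

def distance_to_closest(clonal_haplotype, reconstructed):
    """Edit distance from a clonally sequenced haplotype to its nearest reconstruction."""
    return min(edit_distance(clonal_haplotype, h) for h in reconstructed)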
Here, we employed the r² correlation to measure linkage disequilibrium. r² was computed between 38 SNPs that had an occurrence frequency above 0.1% in either the in vivo or ex vivo viral quasispecies (Figure 5). Several strong correlations (r² > 0.5) were observed in both the in vivo and ex vivo viral quasispecies. Nonetheless, the linkage disequilibrium was more pervasive and spanned a larger region in the in vivo viral quasispecies than in the ex vivo viral quasispecies. From the in vivo viral quasispecies, we observed two linkage disequilibrium blocks, a ∼200 nucleotide block from position 900 to 1100 and another from nucleotide position 1400 to 1600. The presence of two closely spaced recombination nucleotide blocks suggests that there is a recombination hotspot between positions 1100 and 1400, which is located at the p24 region of the gag gene. Another possibility is that certain haplotypes provided a fitness advantage and were positively selected. Further characterization would be needed to dissect the underlying mechanism. Reproducibility from a Technical Replicate To assess the reproducibility, a technical replicate was performed for the ex vivo viral quasispecies population (Figure 1C-E). The technical replicate was repeated for all steps beginning at the stage of generating amplicons of varying length (Figure 1C) - a key step of our approach. The majority of the viral sequences in the replicate (replicate 2) overlapped with the original data set (replicate 1) (Figure 6A). However, a number of viral sequences were covered by only one of the replicates, but those represented a small fraction, ∼3% to 9%, of the viral quasispecies (Figure 6B and C). Viral sequences that were observed in only one of the two replicates typically had an occurrence frequency below 0.01% (Figure 6B). This suggests that the difference between replicates was due to a sampling limit, where viral sequences with a low occurrence were more likely to be unsampled by one of the replicates. Replicate 2 covered 97% of the viral quasispecies in the first replicate, whereas replicate 1 covered 91% of the viral quasispecies in the second replicate (Figure 6C). The genetic composition of the viral quasispecies reconstructed from replicate 2 was comparable to that of replicate 1 (Figure 4A and 6D). Occurrence frequency for individual viral sequences exhibited a correlation of 0.87 (Pearson correlation at normal scale) between replicates (Figure 6E). These results provided further validation of the tag linkage technique in both qualitative and quantitative manners. Discussion With the advancement of sequencing technology, NGS continues to increase read length and throughput. Nonetheless, the trade-off between read length and throughput still exists [25]. Sequencing platforms with long reads such as Pacific Bio and 454 pyrosequencing have a relatively low throughput. NGS machines with higher throughput such as Illumina and SOLiD do not afford long reads. Despite currently having the highest throughput, the short read length of Illumina creates a challenge in assembling reads into continuous long sequences. This study describes an amplicon-based tag linkage approach to characterize viral quasispecies population structures and provides a proof-of-concept example showing a very high detection sensitivity. Unlike algorithm-based approaches, the accuracy of our amplicon-based molecular tag approach is independent of viral quasispecies population diversity.
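For reference, the linkage disequilibrium measure used in the analysis above is the r² correlation defined later in the Methods, r² = (P_AB − P_A·P_B)² / (P_A·P_B·(1 − P_A)·(1 − P_B)). A compact sketch of the pairwise calculation over reconstructed haplotype frequencies is shown below; the data structures are illustrative assumptions, not the study's actual code.

def r_squared(p_a, p_b, p_ab):
    """Linkage disequilibrium r^2 between two SNPs, from haplotype frequencies."""
    d = p_ab - p_a * p_b
    return d * d / (p_a * p_b * (1.0 - p_a) * (1.0 - p_b))

def pairwise_ld(snp_carriers, frequency):
    """snp_carriers: {snp_position: set of haplotype ids carrying that SNP}
    frequency:    {haplotype id: occurrence frequency in the quasispecies}"""
    ld = {}
    positions = sorted(snp_carriers)
    for i, a in enumerate(positions):
        for b in positions[i + 1:]:
            p_a = sum(frequency[h] for h in snp_carriers[a])
            p_b = sum(frequency[h] for h in snp_carriers[b])
            p_ab = sum(frequency[h] for h in snp_carriers[a] & snp_carriers[b])
            ld[(a, b)] = r_squared(p_a, p_b, p_ab)
    return ld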
In addition, it incorporates an error correction step to identify NGS platform errors, resulting in a dramatic increase in the sensitivity to detect rare haplotypes [21]. Algorithm-based approaches for viral quasispecies reconstruction can usually handle 10 to 100 viral sequences at various levels of statistical confidence. In contrast, our tag linkage approach can reconstruct close to 1000 sequences with high confidence, as indicated by our replicates. It achieves a significant improvement in accuracy and sensitivity over the algorithm-based approaches [11][12][13][14][15][16][17][18][19][20]. The major limitation of our approach is the length of the deduced sequence, which is restricted by the upper limit of PCR (typically 10 kilobases). Another potential pitfall is PCR recombination. In our protocol, we tried to minimize this artifact by using a high-processivity, high-fidelity DNA polymerase for PCR [26]. In addition, a long PCR extension time was used to ensure extension completion of the amplicon to minimize PCR recombination [27]. Our technical replicate control shows that a majority of the viral quasispecies population content (>90%) is captured in both repetitions, including rare variants, indicating that any artifact from PCR recombination is minimal. Additionally, the high correlation of occurrence frequency for individual viral sequences between replicates confirms reproducibility. Overall, our control experiments and concurrent analyses validate the amplicon-based tag linkage approach as a highly sensitive methodology for viral quasispecies assembly. By reconstructing individual sequences within the viral quasispecies, we are able to detect linkage disequilibrium throughout the region of interest. Genome recombination is a frequent process occurring intra-patient for diversification and adaptation [28][29][30][31][32]. Recombinant generation is a non-random process, as recombination coldspots and hotspots have been reported in HIV-1 [33][34][35][36]. In this study, we observed a more pervasive linkage disequilibrium in the in vivo viral quasispecies compared to that of the ex vivo, suggesting that there may be genetic interactions within the linkage disequilibrium block that are important for chronic infection. Alternatively, this observation may also be attributed to a higher recombination frequency during ex vivo passaging due to an increase in co-infection occurrence. We demonstrate the power of our tag linkage approach in capturing linkage disequilibrium in a viral quasispecies, which can be further utilized to examine genetic interactions and to identify functional residues. Our technique provides a sensitive and accurate tool to study the evolutionary trajectory of viral quasispecies. It permits the monitoring of multi-drug resistance (MDR) viral sequences and epistasis within viral quasispecies - an important factor in viral evolution and adaptation [37,38]. Highly active antiretroviral therapy (HAART) is a common treatment to suppress HIV progression by utilizing a drug cocktail designed to target viral proteins at multiple essential stages of the viral life cycle. However, viral rebound can be caused by MDR HIV with extremely low occurrence frequency [9,[38][39][40]. In addition, as most drug resistant mutations compromise viral fitness, drug resistant viruses often carry additional mutations to compensate for this fitness cost [6,10,[41][42][43].
The tag linkage approach provides an important tool to survey the genetic makeup of viral quasispecies and to estimate the risk of viral rebound and virulence by surveillance of pair-wise or even higher-order genetic interactions between mutations. (Figure 4 caption, panels C and D: neighbor-joining phylogenetic trees of viral nucleotide sequences with occurrence frequencies above 0.05%, color coded by frequency as in Figure 4A; panel D replots a segment of this tree together with sequences at occurrence frequencies from 0.001% to 0.05%.) Although this study is based on HIV quasispecies samples, the tag linkage approach is not limited to HIV and can potentially be applied to other viral quasispecies, such as hepatitis B virus (HBV), hepatitis C virus (HCV) and influenza virus. For example, the tag linkage approach can be applied to study multi-drug resistance that is also found in naturally occurring HBV, as in the case of HIV [44,45]. This technique is also suitable for studying cis-elements that are prevalent in HCV due to its intrinsic replication property [46]. In addition, the tag linkage approach can be utilized to examine permissive and compensatory mutations that are shown to be important in the evolution of influenza virus [47][48][49]. This technique can also be extended beyond the monitoring of viral quasispecies. One application is to examine the dynamics of CD4+ and CD8+ cells in the immune system during viral infection. They have an active role in virus detection and clearance during both acute and chronic infection. During the establishment of persistent viral infections, the immune system co-evolves with the virus [50]. A complex dynamic occurs between the heterogeneous immune populations and the evolving viral quasispecies. The medical significance of this virus-host dynamic is highlighted by a recent study describing the rise of a broadly neutralizing HIV-1 antibody from co-evolution with acute phase virus [51]. The methodology we describe here offers the research community an approach to understand the dynamic interplay between the host and virus in exquisite detail at the population level. Ethics Statement The study was approved by the UCLA IRB. A chronically infected HIV-1 patient not undergoing antiretroviral therapy was recruited from the Los Angeles area and provided written informed consent. Subjects and Specimen Collection Total peripheral blood mononuclear cells (PBMCs) were isolated from the patient's whole blood sample by standard Ficoll gradient. The plasma viral load at the time of collection was 130,234 viral copies/ml. Recovery of Virus from PBMCs and Virus Passaging Ex vivo passaging was conducted as previously described [52]. Briefly, virus was passaged serially in primary CD4+ T lymphocytes from an HIV-1-uninfected donor [53]. After each passage of ∼7 days, supernatant virus was collected, titered, and used to infect fresh cells with an MOI of 1. The concentration of the tagged DNA sample was measured using a NanoDrop 1000 spectrophotometer (Thermo Fisher Scientific). This concentration was used as a reference to calculate the dilution-fold in the subsequent complexity control step. In the complexity control step, ∼300,000 copies of the tagged DNA sample were used as the input for PCR using the primer set: 5′-CAC ATA GAT ACT ATG CGG CCG C-3′ and 5′-GTT TAA CTT TTG GGC CAT CCA TTC CTG GC-3′.
This complexity was calculated based on a ∼50-fold coverage for an individual viral sequence with 30 Gb expected sequencing output per viral quasispecies sample. This was followed by 12 PCRs using the universal forward primer, 5′-CAC ATA GAT ACT ATG CGG CCG C-3′, and the reverse primers as stated in Table S2 in File S1 to add the XhoI restriction enzyme site on the 3′ end, using the product of the complexity control step as template. Consecutive PCR pools should have a different product size approximately corresponding to the sequencing read length minus 80 bp (Tables S3 and S4 in File S1). From this step forward, the 12 pools were processed independently until sample combination at the high-throughput sequencing step. The products were then subjected to double digestion by NotI and XhoI. NotI and XhoI were chosen because their sites were not present in the consensus sequence of the target DNA template region. A small insert, which could serve as the population ID, was prepared by annealing 5′-GGC CCG ACG TAA CGA T-3′ and 5′-TCG AAT CGT TAC GTC G-3′, each with a phosphate group attached at the 5′ end. Ligation of the small insert to the DNA sample was performed at the molar ratio stated in Table S2 in File S1. One unit of T4 DNA ligase (Life Technologies) was used in each ligation reaction. The reaction conditions followed the manufacturer's instructions. All ligations were performed overnight at 20 °C in a 100 µL total reaction volume. The ligated products were used as the templates for PCR to add the 5′ flow cell adapters and the reverse read Illumina sequencing priming site using the universal forward primer, 5′-AAT GAT ACG GCG ACC ACC GAG ATC TAC ACT CTT TCC CTA CAC GAC GCT CTT CCG-3′, and the reverse primers as stated in Table S2 in File S1. The 3′ Illumina flow cell adapters were then added by PCR using the primer set: 5′-AAT GAT ACG GCG ACC ACC G-3′ and 5′-CAA GCA GAA GAC GGC ATA CGA GAT CGG TCT CGG CAT TCC TGC TGA ACC GCT CTT CCG-3′. The resultant amplicons from all 12 pools were then mixed. High-throughput sequencing was done by an Illumina HiSeq 2000 machine with an equivalent of 0.75 lane per sample and 2×100 bp paired-end reads. All PCRs in this study were performed using KOD DNA polymerase with 1.5 mM MgSO4, 0.2 mM of each dNTP (dATP, dCTP, dGTP, and dTTP) and 0.4 µM of forward and reverse primers. PCR extensions were performed with 50 seconds per kb at 68 °C. The annealing temperature for a given PCR was 5 °C below the lowest melting temperature of the pair of primers. All primers in this study were designed to target conserved regions within the quasispecies, which were determined by clonal sequencing of the sampled viral sequences. This sequencing library preparation could potentially be adapted to study viral RNA using a reverse transcription primer tag as described by Jabara et al. [54]. Raw sequencing data have been submitted to the NIH Short Read Archive under accession number: SRP032753. Clonal Sequencing After recovering the DNA by PCR as described above, the amplicon was inserted into the target p83-2 plasmid using the In-Fusion kit (Clontech). Twenty clones were randomly selected and subjected to capillary sequencing (Laragen). Data Analysis Sequencing reads were mapped by BWA with 8 mismatches allowed [55]. Paired-end reads containing two or more short inserts (barcodes) were discarded. Error-correction was performed as described previously to distinguish true mutations from sequencing errors [21].
The error-correction step grouped all reads sharing the same tag and mapped to the same region into a read cluster, which was further conflated into an 'error-free' read. As described in Kinde et al. [21], most reads sharing the same tag should share the same mutation pattern after mapping. In contrast, a sequencing error would have a low occurrence frequency within a read cluster and could be distinguished from true mutations. Through this process, sequencing errors were corrected to generate an 'error-free' read. Read clusters with fewer than three reads were discarded to increase the confidence in generating an 'error-free' read. Since intermolecular concatenation at the ligation step was observed, a mutation that existed in 45% of the reads within a conflated read cluster that also shared the same tag was considered a true mutation. The correlation between technical replicates indicated that intermolecular concatenation did not pose a major barrier to the accuracy of viral quasispecies assembly. Nonetheless, further applications should adjust the ligation reaction volume to decrease intermolecular concatenation during ligation (circularization step). Next, 'error-free' reads that shared the same tag were assembled into a contiguous sequence, which represented a single viral sequence. Data processing and analysis were conducted by custom Python scripts. All scripts are available upon request. Phylogenetic Tree Construction ClustalX was used to create the neighbor-joining phylogenetic tree [56]. The phylogenetic tree was midpoint-rooted and displayed by FigTree. Linkage Disequilibrium We used the r² correlation to quantify linkage disequilibrium between two SNPs. r² was computed as per convention. Briefly, r² = (P_AB − P_A × P_B)² / (P_A × P_B × (1 − P_A) × (1 − P_B)), where P_AB represented the occurrence frequency of viral sequences that carry both SNP A and SNP B; P_A represented the occurrence frequency of viral sequences that carry SNP A; P_B represented the occurrence frequency of viral sequences that carry SNP B. DNA Library Preparation for Error-free Sequencing The gag-pol region was PCR amplified using the primer set: 5′-GAC TAG CGG AGG CTA GAA GGA GAG AG-3′ and 5′-CAT GTT CTT CTT GGG CCT TAT CTA TTC-3′. The resultant DNA product was sheared to around 200 bp to 600 bp by sonication using the Sonic Dismembrator Model 100 (Fisher Scientific). The Dismembrator was set to power level four and samples were pulsed three times for 10 seconds. Samples were kept on ice for 45 seconds in between pulses. End repair and 3′ dA-tailing were performed using the end repair module and dA-tailing module, respectively (New England BioLabs). The DNA product was then ligated to a Y-shape adaptor carrying a nine-nucleotide tag of random 'N' sequence. As a result, each ligated product contained an 18-nucleotide tag, nine from each of the 5′ and 3′ ends. The Y-shape adaptor was prepared by annealing two oligonucleotides: Quasispecies Reconstruction by ShoRAH and QuasiRecomb 'Error-free' reads were generated as described above. Here, a mutation that existed in 95% of the reads within a conflated read cluster that also shared the same tag was considered a true mutation. Reads were mapped by BWA with 8 mismatches allowed [55]. All reads were treated as single-end reads. 'Error-free' mapped reads were processed by ShoRAH version 0.6 with a window size of 40, a window shift of 1 and default settings for other parameters [13]. Quasispecies reconstruction by QuasiRecomb was performed with default settings [12].
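Stepping back to the error-correction step described at the start of this passage, it can be sketched as a per-position consensus over one read cluster. The code below is a simplified, assumption-laden illustration (aligned, equal-length reads; the 45% threshold follows the text's choice made because of intermolecular concatenation), not the published pipeline.

from collections import Counter

def error_free_read(cluster_reads, reference, mutation_threshold=0.45):
    """Collapse reads sharing one tag and one region into an 'error-free' read.

    cluster_reads: list of aligned, equal-length read strings for one tag/region.
    A non-reference base is kept only if it is supported by at least
    `mutation_threshold` of the reads; sporadic sequencing errors revert to the
    reference. Clusters with fewer than three reads are discarded (returns None).
    """
    if len(cluster_reads) < 3:
        return None
    n = len(cluster_reads)
    corrected = []
    for pos, ref_base in enumerate(reference):
        counts = Counter(read[pos] for read in cluster_reads)
        called = ref_base
        for base, count in counts.most_common():
            if base != ref_base and count / n >= mutation_threshold:
                called = base  # well-supported mutation within the cluster
                break
        corrected.append(called)
    return "".join(corrected)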
Due to the huge memory requirement of QuasiRecomb, 500,000 mapped reads were randomly sampled and processed. Further increasing the number of input reads generated memory errors. To limit the false positive rate, a refinement reconstruction was performed using the 'refine' option. We employed the '-conservative' option for high confidence haplotype reconstruction to identify major haplotypes. Supporting Information File S1 Figures S1 and S2 and Tables S1-S4. Figure S1. Concept of complexity control. In this graphical demonstration, we employ a simple example with five amplicons and 30 reads sequenced. A total of nine viral sequences are present in the viral quasispecies, with the genotype being A or B. The colored boxes represent the tag for distinguishing an individual viral sequence within the viral quasispecies. Different colors represent different nucleotide sequences in individual tags. The white boxes represent individual viral sequences. During the amplicon generation and sequencing step, each column of amplicons represents one genomic region of the viral quasispecies. (A) Complexity is too high (complexity = 9), where each viral sequence is not sufficiently covered. (B) Complexity is too low (complexity = 1), where each viral sequence is excessively covered and, therefore, there is a waste of sequencing capacity. (C) Complexity is well-controlled (complexity = 3) such that individual viral sequences are sufficiently covered for sequencing error correction and for sequence assembly. Figure S2. Key steps in the experimental design. (A) A detailed representation that shows the cassette sequence in Figure 1B. (B) A detailed representation that shows the cassette sequence after ligation. (PDF)
2016-05-12T22:15:10.714Z
2014-05-19T00:00:00.000
{ "year": 2014, "sha1": "fef00845be483e430abb1b725d80c0ac188c9304", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0097505&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "19fc04d8346626eb299a6c2085ea3a3155495d7e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
216226570
pes2o/s2orc
v3-fos-license
On a heavy path – determining cold plasma-derived short-lived species chemistry using isotopic labelling Cold atmospheric plasmas (CAPs) are promising medical tools and are currently applied in dermatology and epithelial cancers. While understanding of the biomedical effects is already substantial, knowledge on the contribution of individual ROS and RNS and the mode of activation of biochemical pathways is insufficient. Especially the formation and transport of short-lived reactive species in liquids remain elusive, a situation shared with other approaches involving redox processes such as photodynamic therapy. Here, the contribution of plasma-generated reactive oxygen species (ROS) in plasma liquid chemistry was determined by labeling these via admixing heavy oxygen 18O2 to the feed gas or by using heavy water H218O as a solvent for the bait molecule. The inclusion of heavy or light oxygen atoms by the labeled ROS into the different cysteine products was determined by mass spectrometry. While products like cysteine sulfonic acid incorporated nearly exclusively gas phase-derived oxygen species (atomic oxygen and/or singlet oxygen), a significant contribution of liquid phase-derived species (OH radicals) was observed for cysteine-S-sulfonate. The role, origin, and reaction mechanisms of short-lived species, namely hydroxyl radicals, singlet oxygen, and atomic oxygen, are discussed. Interactions of these species both with the target cysteine molecule as well as the interphase and the liquid bulk are taken into consideration to shed light onto several reaction pathways resulting in the observed isotopic oxygen incorporation. These studies give valuable insight into underlying plasma–liquid interaction processes and are a first step to understand these interaction processes between the gas and liquid phase on a molecular level. Introduction Cold atmospheric pressure plasmas (CAPs) have recently transitioned from laboratories to clinics, offering a safe and effective application directly to the patient's body. [1][2][3] Major applications of medical plasmas are wounds, skin-derived diseases and, as off-label use, palliation in cancer patients. [4][5][6][7][8][9][10][11][12] CAPs offer a well-documented efficacy in inactivating bacteria 13 that may contribute to the stimulation of wound healing processes. 14,15 Plasma treatment influences cell or tissue physiology at various levels, including metabolism, signaling, and cell fate, leading to immunomodulation, angiogenesis, tissue proliferation, or migration. [16][17][18][19] High intensity treatment results in the shutdown of cellular processes and cell death by apoptosis or necrosis-like processes. 20,21 These effects, especially in combination with immune cell modulation, are currently investigated for cancer treatment. [22][23][24][25] Medical plasmas are multicomponent systems containing electrons, ions, electric fields, and a multiplicity of reactive oxygen and nitrogen species (ROS/RNS). Depending on plasma source, feed gas composition, and distance to the target, ROS/RNS generation varies. [26][27][28][29][30] A major role of these plasma-derived ROS/RNS is assumed, 31,32 but incongruity regarding the mode of action exists. It was argued that cell membrane-associated proteins pick up and translate the induced signals 33 or that species can cross the cell membrane using specific pore proteins 34 and trigger intracellular responses via cytosolic sensor proteins 35 or the mitochondria.
36 With regard to the lifetime of most reactive species described for CAPs, it must be acquiesced that only a small fraction can indeed diffuse into a cell or the cell's vicinity, leaving the question of the ultimate mechanism still open. 37,38 With the controllability of plasma treatments in biomedical applications becoming increasingly relevant to improve safety and efficacy, 39 knowledge of the relevant players in the plasma-target interaction, their respective trajectories and, most importantly, their (bio)chemistry is mandatory. It can be concluded that another route of interplay between the plasma-derived species and the biological system is the covalent modification of biomolecules and the subsequent change of their activity or biological value. It has been shown that plasma-derived ROS/RNS are capable of oxidizing amino acids, 40 proteins, [41][42][43][44][45] or lipids. 46,47 Thiol groups are one of the major targets for plasma-generated species. [48][49][50] Hence, CAP impact can yield (non-enzymatic) post-translational modifications (PTMs), some of which carry significant influence in cellular signaling. [51][52][53][54] However, so far no specific member of the various ROS or RNS could be identified as the major driver of these reactions. One of the reasons is the limited knowledge that is present on plasma-derived liquid chemistry. Hydrogen peroxide is a frequently reported product of the plasma liquid interaction, yet its reactivity is too low to oxidize biomolecules significantly. 55,56 The presence of other species, such as hydroxyl radicals, superoxide anion radicals, or nitric oxide can be shown in liquids by electron paramagnetic resonance, 57 assumed from gas phase distributions, 27,58,59 or inferred from the detection of organic or inorganic targets modified by the plasma treatment. 51,60 Given their high reactivity, the contribution of short-lived ROS and RNS to the modification of biomolecules is presumably significant. It is a major challenge in plasma chemistry that has so far not been fully met: to determine the short-lived species reactivity and to distinguish between species stemming from primary reactions in the gas phase and species created in secondary or tertiary reactions at or in the target - a liquid, a gel, or a tissue. 26 Gorbanev et al., 61 and Benedikt et al. 62 presented first indications by showing the activity of gas phase-derived species in aqueous model systems. In this work, the stable oxygen isotope 18O was used in the gas (18O2) and liquid phase (H218O) to shed light onto the behavior and reactivity of reactive oxygen species (ROS) following CAP treatment. Predominantly, the argon-driven atmospheric-pressure plasma jet kINPen with shielding gas 63 was utilized to treat cysteine as a chemical probe, while selected experiments included the use of the helium-driven COST microplasma jet as a reference source. 64 The chemical impact on cysteine was assessed using high-resolution mass spectrometry with a special focus on isotope distribution patterns.
These results allow insight into the trajectories of plasma-generated ROS hitting a liquid surface and their reaction with organic tracer molecules, indicating that both gas phase-derived species and liquid phase-derived species have a biochemical potential. Short-lived species generation (plasma sources) The argon-driven atmospheric-pressure plasma jet kINPen 09 (neoplas) 63 was used together with a curtain gas device 65 to provide defined atmospheric conditions for the experiments. The kINPen was powered by 1.1 W at a frequency of 1 MHz. Gas flux was kept constant for all conditions at 3 standard liters per minute (slm) of pure, dry argon (5.0, Air Liquide) with the curtain gas set to 5 slm of nitrogen (5.0, Air Liquide). Besides pure argon, 1% oxygen (purity 4.8, Air Liquide, Ar/O2) was used for the experiments as these conditions offered promising oxidative thiol modification potential. 53 The COST-jet 64 was powered by a constant 300 mW at a frequency of 13.56 MHz. Total gas flux was kept constant at 1 slm of pure dry helium (5.0, Air Liquide) with 1% oxygen admixture (purity 4.8, Air Liquide, He/O2). For the experiments with the kINPen, either light oxygen or heavy isotope oxygen (purity 99%, 18O2, Sigma-Aldrich) was used. Due to costs, only light oxygen was used for the control experiments with the COST-jet. All connections were flushed with nitrogen prior to switching from one oxygen variant to the other. Sample preparation and treatment Cysteine (L-cysteine, Sigma-Aldrich) was dissolved in double-distilled water (MilliQ) or water with isotopically labelled oxygen (H218O, 97% purity, Eurisotop) to a final concentration of 300 mM. Simultaneous treatments with 18O labeled water and 18O labeled gas were not performed. Treatments of the cysteine solutions were performed with the different gas compositions in 24-well plates using 750 µl of solution per sample and a distance between the jet nozzle and the liquid surface of 9 mm. All treatments were performed for 60 s and the resulting samples were stored on ice and directly measured. High-resolution mass spectrometry Analysis. Mass spectrometry was carried out on a TripleTOF 5600 system (Sciex). Samples were diluted 1:1 with alkaline buffer (0.3% ammonium hydroxide in methanol) and directly infused using an electronically controlled syringe pump. Each sample was acquired for 1 min using identical system settings for all samples (capillary temperature 150 °C, curtain gas: 35 psi N2, ion source gas 1: 20 psi N2, ion source gas 2: 25 psi N2, ion spray voltage: −4 kV). To identify the structures of all observed masses, each peak of interest was isolated, fragmented, and the resulting fragment masses acquired (MS/MS, collision energy −24 eV, declustering potential −10 kV) and annotated. To allow a relative quantification of observed signals, an internal standard (IS) was mixed into the sample directly in front of the mass spectrometer emitter using a mixing tee connector. Here, the amino acid valine (L-valine, Sigma-Aldrich) was used due to its mass difference to other expected signals and little interference with the rest of the spectrum. All measurements were performed in triplicates. Data analysis and branching calculation. After acquisition, samples were processed using the Analyst software (Analyst TF 1.7, Sciex). First, background noise was determined and 300 counts were subtracted from the full spectrum (15 times background) to increase signal-to-noise quality.
Aerwards, molecule structures were identied using the acquired MS/MS data and the "Formula Finder" as well as "Mass Calculator" functions of the PeakView soware (PeakView 1.2.0.3, Sciex). The areas of all isotope peaks were calculated and normalized on the internal standard area to allow quantitative comparison between measuring runs or each identied structure, the theoretical isotope pattern was calculated. Intensities for all observed peaks were adjusted to remove naturally occurring 13 C isotope intensities to prevent interference with isotope signals stemming from integrated 18 O. Further isotope traces identied for each compound in treatments with pure argon in unlabeled water were considered as controls for impurities. Therefore, the values of isotopic masses identied in treatments with argononly were subtracted from each corresponding isotopic mass identied in experiments with labeled oxygen. The error estimation was done by considering biological triplicates for each condition and technical duplicates for each sample, for six measurements total. Other several potential systematic errors were considered in the presented analyses. First, both isotopic labeled gas and water were not of 100% purity. Therefore, an additional error of 1% had been taken into account for all quantications using 18 O 2 as well as 3% using H 2 18 O. Furthermore, evaporation had to be considered when working with isotopically labeled water. Aer treatment of 60 s, 20 ml of the 750 ml were evaporated. Evaporated heavy water might be dissociated in the discharge, thereby becoming a primary species while erroneous considered as a tertiary species. Therefore, an additional systematic error of 4% has to be considered. In total, expected systematic error were 1% for treatments with 18 Results and discussion Cysteine as model compound Cysteine and derivatives have been suggested as model systems to estimate the liquid phase chemistry of plasma sources, to compare the impact of discharge parameter variations such as working gas composition, and to facilitate the standardization of treatment procedure of plasma discharges for biomedical applications. 48,50 The products resulting from the reaction between the CAP-derived species and cysteine were analyzed by mass spectrometry (Fig. 1). As reported previously, covalent changes to the cysteine (structure 1) by the plasma-derived species were observed, with cystine (RSSR, structure 2), cysteine sulnic acid (RSO 2 H, structure 3), cysteine sulfonic acid (RSO 3 H, structure 4), and cysteine-S-sulfonate (RSSO 3 H, structure 5) and the sulte (SO 3 2À ) and sulfate (SO 4 2À ) ions as dominant products. These compounds were chosen for their relevance in the transformation pathway of cysteine under certain redox conditions. The presence of RSO 3 H or SO 4 2À indicates a strongly oxidizing environment (oxygen in the feed gas, long treatments), whereas the presence of RSSR, RSO 2 H, or RSSO 3 H reveals weakly oxidizing conditions (short treatments, nitrogen shielded feed gas). 53 Here, this model was used to investigate transport processes at the interface between the gas phase (the effluent) and the target (cysteine solutions). To monitor such reactions, the chosen cysteine model with its multiple oxidation states of the thiol moiety seemed to be superior to the phenol model. Gas phase and liquid phase-derived species contribute to plasma liquid chemistry To trace the reactive species, the 18 Fig. 2). 
Hence, a differentiation between gas phase and liquid phase-derived reactive species could be made. Indeed, the transfer of gas phase 18O species into the liquid was observed for a number of products (e.g. RSO2H and RSO3H). These observations were in agreement with data published by Benedikt et al. for a phenol model investigating a micro atmospheric plasma jet (APPJ). 62 Additionally, a strong role of the plasma treated target (cysteine solution in water) as an additional source of reactive species was identified. The compound cysteine-S-sulfonate (RSSO3H) almost exclusively contained liquid phase-derived oxygen. Most other products did not show such a clear-cut oxygen incorporation, indicating a mixed attack of gas and liquid phase-derived species. Using principal component analysis (Fig. 3), general differences in product formation and oxygen incorporation due to the various treatment conditions were easily observable in the two principal components explaining the largest differences between samples (39.6% and 34.9%, respectively). Ar/O2 (kINPen) and He/O2 (COST-jet) treatment were found in close proximity to each other, indicating similar products and isotope distributions after treatment, which was in good agreement with previous works. The loadings of the principal components (Fig. 3b) indicated a significant impact due to the presence or absence of incorporated 18O. The products cysteine sulfinic and sulfonic acid. Both molecules are created by CAP treatment due to the stepwise oxidation of the thiol moiety, 40,50 and were observed for all direct plasma treatment conditions. Without molecular oxygen admixture (kINPen Ar-only), small yields of sulfonic acid indicate the strong role of gas phase-derived oxygen species for its formation (Fig. 4). Additionally, a limited incorporation of liquid phase species was observed that might stem from the initiation of the oxidative chain in the liquid (18O from the solvent water as opposed to 16O molecular oxygen in the gas phase). Here, water is cleaved either by the impact of electrons, UV photons, or energy-rich (metastable) noble gas species (reaction (1)). Atomic oxygen (O(3P)) has been suggested as another potential reactant for water cleavage (reaction (2)); 62,67 however, in conditions with the highest levels of O(3P) present (COST jet He/O2), 68 the lowest overall inclusion of aqueous 18O species into RSO2H and RSO3H was found (Fig. 4 and Table 1). With that, reaction (2) is not a major pathway in the existing conditions and O(3P) predominantly reacts directly with the present organic molecules without a detour via liquid phase-derived OH radicals. A similar formation rate of cysteine sulfinic acid was observed for the kINPen Ar-only in comparison to Ar/O2, while RSO3H yields were only 25% of those of Ar/O2. This suggests either a low conversion rate of RSO2H into RSO3H in the Ar-only treatment, or that the formation of RSO3H follows a different reaction path than RSO2H. The incorporation of 18O (water) was increased in the argon-only case, indicating that a proportionally larger number of 18OH radicals from the solvent were created when limited amounts of gas phase ROS (O(3P), O2(1Δg)) were available. Hence, the lysis of water is achieved according to reaction (1) by electrons (in the case of the kINPen), Ar or He higher energy states (discussed in ref. 26), and (V)UV photons, which are highest if no molecular gas admixture is made.
69 Taken together, a mixture of gas phase-derived and liquid phase secondary species attacks cysteine. Concordantly, reactive molecular dynamics simulations indicate that a proton abstraction from the thiol moiety by one hydroxyl radical, followed by addition of another hydroxyl radical yielding cysteine sulfenic acid, paves the way for all further modifications. 49 A liquid phase-localized hydroxyl radical seems to perform this initial attack predominantly: R-SH + •OH → R-S• + H2O (3) Further reaction to sulfonic acid seems to be dominated by gas phase-derived oxygen species. For the kINPen argon-only treatment of cysteine in H218O, the 16O/18O ratio of 2:1 indicated that precisely one oxygen atom out of the three included in RSO3H stems from the liquid, as suggested by reactions (3) and (4). In contrast, the ratios for kINPen Ar/O2 and COST He/O2 (16O/18O 4:1) showed that on average less than one atom derived from water lysis, underlining the relevance of gas phase-derived species for the production of the products and providing evidence that O(3P) plays an important role in introducing the observed modifications. O(3P) interacts with free thiols with a rate constant of about k = 1 × 10^12-13 cm^3 mol^-1 s^-1, 70 indicating high modification efficacies. It is capable of oxidizing a thiol moiety directly to sulfenic acid. 71 Especially the final oxidation step from sulfinic to sulfonic acid requires a gas phase-derived oxygen species, with O(3P) and O2(1Δg) as potential candidates, as shown from gas phase measurements and 0D/2D model simulations (reactions (5)-(9)). 27,72-75 O2(1Δg) is the other species of interest, since it is capable of producing many of the observed products on its own (reactions (6)-(9)). 77,78 While it reacts with free thiols with a rate constant of k = 8.3 × 10^6 M^-1 s^-1, it can penetrate significantly further into the liquid bulk due to its much longer half-life 79 as compared to O(3P), thereby offsetting its lower reaction rate. In the light of their biological impact in cell models and therapy, both O(3P) 51,80 and O2(1Δg) 81 must receive significant attention when interpreting CAP effects in biomedical research or (re)designing plasma sources for the application. R-S• + O(3P) → R-S-O• (4) The products S-sulfonate and sulfate. RSSO3H and SO4^2- were produced in significant amounts, with the COST-jet being more effective than the kINPen (Fig. 5). Different pathways may generate RSSO3H: the photolytic or radical-driven cleavage of a C-S bond of the intermediately formed cystine (RSSR) and subsequent oxidation of the outer sulfur moiety, the oxidative cleavage of oxidized cystine derivatives, or formation of RSSO3H from the attack of an •SH-derived species, e.g. sulfite (SO3^2-), on cysteine, cystine, or the thiyl radical R-S•. [82][83][84][85][86] While a single reaction pathway cannot be determined with the current data, and several reactions ultimately yield the same product, the following proposed reactions will contribute to the formation of RSSO3H. Further oxidation leads to the destruction of the S-S bond of RSSO3H, yielding SO4^2- again: [87][88][89] R-SH → R• + •SH or R-S• + •H and R-S-S-R → R-S-S• + •R (10) R-S-SO3^- + 2 •OH → R-S-OH + SO4^2- + H^+ (18) While the cleavage of disulfides by sulfite ions is well-described, 89,90 its contribution must be debated.
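The interpretation of the 16O/18O ratios given above for RSO3H (2:1 corresponds to exactly one of the three oxygen atoms stemming from the solvent; 4:1 to less than one on average) follows from simple bookkeeping, sketched here for illustration under the assumption that solvent-derived atoms are 18O and gas phase-derived atoms are 16O.

def solvent_derived_oxygens(ratio_16_to_18, n_oxygens):
    """Average number of oxygen atoms per product molecule stemming from H218O."""
    fraction_18o = 1.0 / (1.0 + ratio_16_to_18)
    return n_oxygens * fraction_18o

# RSO3H carries three oxygen atoms:
print(solvent_derived_oxygens(2.0, 3))  # 1.0 -> exactly one atom from the liquid (Ar only)
print(solvent_derived_oxygens(4.0, 3))  # 0.6 -> less than one on average (Ar/O2, He/O2)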
In contrast to RSO2H and RSO3H, between 80% and 90% of the oxygen added to RSSO3H stemmed from liquid phase species (Fig. 5 and Table 1). The gas phase-derived ROS are not directly involved in its formation, precluding the formation of sulfite ions from SH radicals via reactions (11) and (12), or from the cleavage of a C-S bond in cysteine sulfinic or sulfonic acid (R-SO-O^-/R-SO2-O^-, reactions (5)-(9)). However, sulfite ions found after the plasma treatment contain a 50% mix of gas and liquid-derived oxygen, indicating that reactions not involving gas phase-derived oxygen also lead to its formation. While no information is available on the reaction mechanisms of hydroxyl radicals with sulfhydryl radicals in liquids, their impact is the key to the observed isotope pattern of RSSO3H, following reaction (1). 91 Additionally, the COST jet emits negligible VUV radiation, 64 yet yielded the highest levels of RSSO3H. In contrast, when water photolysis is strongest (kINPen Ar plasma), by far the lowest amounts of RSSO3H were detected (Fig. 5a). 86 This suggests that (a) the local production of OH radicals from photolysis in the interface zone does not favour the production of RSSO3H or (b) that RSSO3H generated in the interfacial zone immediately decays again. With that, it must be assumed that RSSO3H is produced in the bulk by the action of OH radicals generated from the liquid. The question is how they get there: given their short lifetime, they cannot penetrate that far. Interestingly, it is the O(3P) atom that may act as an intermediate carrier of chemical energy: according to recent experiments, the atom can penetrate a measurable distance in aqueous solutions, 76 and following reaction (2) can lead to the formation of OH radicals distant from the interface region. 67 In this way, the number of accessible precursors increases while the local OH radical density is comparably low, reducing the decay of RSSO3H once it has formed via reaction (18) and other short-lived gas phase species. However, the cleavage of water according to reaction (2) yields a 1:1 mixture of 16O- and 18O-containing OH radicals. It can be formed from numerous precursors by breaking the C-S bond both before and after oxidation events (reactions (19)-(22)). Besides the chemical breakage, VUV radiation may contribute, as the binding energy of C-S bonds (272 kJ mol^-1, 2.8 eV) lies well within the range of the emitted photons. 94 In addition, a chemical cleavage via three- or four-atom transition states is possible, potentially with the contribution of radical species. 95 It was observed that all cysteine products decay with further treatment, with RSO3H as a final product. 49 Ultimately, its accumulation also stops, indicating that consumption processes leading to the formation of SO4^2- set in (Tables 1 and 2). Origin of oxygen in oxidized cysteine derivatives: gas phase versus liquid phase species. Using the measured abundances of each product and its respective isotope pattern, the distribution of 18O incorporation between gas phase- and liquid phase-derived origins was determined (Tables 1 and 2). The overall pattern indicated that there is no 100% contribution of either gas-phase-derived species or liquid-phase-derived species. However, there are clear indications for diametric origins of the incorporated oxygen atoms in some products. The extremes were RSO3H, which incorporated a majority of gas phase-derived oxygen (up to 83.8%), and RSSO3H, which predominantly included liquid phase-derived oxygen (maximum 91.1%).
The other products showed a more equally shared origin of the oxygen atoms from gas phase and liquid phase, especially the sulfite SO3^2- ion with an almost 50:50 distribution in all experiments. The assumed oxidation end product SO4^2- also shows a commensurate isotope distribution, but with a significant tilt towards gas phase-derived oxygen. kINPen versus COST-jet: atomic vs. singlet oxygen? kINPen (Ar/O2) and COST-jet (He/O2) yielded similar amounts of the major products that also share a similar oxygen isotope distribution. However, the COST-jet featured slightly higher oxygen incorporation from the gas phase compared to that of the kINPen. In addition to the ubiquitous hydroxyl radicals produced by both sources under all conditions, 30,57 other components of a discharge might affect radical formation in the liquid. Currently, electrons influencing the liquid surface are discussed as initiators for various liquid chemistry processes. 96 However, electrons play a minor role for the two sources used here. The COST-jet features an electric field perpendicular to the gas flux, 64 thereby preventing electrons from leaving the electrode area. While the kINPen is ignited using a linear electric field, the discharge is relatively remote from the treatment zone (12 mm in total, taking into account liquid displacement due to the gas flux). A known difference between both plasma sources is the generation of O(3P) and O2(1Δg), and the observed isotope pattern differences seem to be related to these species. The COST-jet produces high amounts of O(3P) 64 with about 8 × 10^14 cm^-3 atoms at the working distance of 4 mm. 97 In comparison, calculated O(3P) densities for the kINPen reach a similar level (5 × 10^14 cm^-3). 58 However, TALIF spectroscopy measurements indicate a highly dynamic density of O(3P) in the kINPen's effluent. Starting as high as 3.5 × 10^15 cm^-3, densities quickly decrease at a rate of 0.5 × 10^15 cm^-3 mm^-1 along the z-axis of the effluent, resulting in lower O(3P) levels at the gas-liquid interphase under normal conditions (9 mm nozzle-liquid distance). In contrast to O(3P), O2(1Δg) densities in the effluent of both sources are comparable, with a tendency to higher production rates in the COST jet (1 × 10^15 cm^-3 in normal conditions, up to 6 × 10^15 cm^-3 at high power settings 98) than in the kINPen (8 × 10^14 cm^-3 at standard conditions 27). O2(1Δg) has a significantly longer half-life. For the kINPen, O2(1Δg) was still measured at 192 mm away from the nozzle. However, the loss starting from 100 mm was significant (2/3 of the initial value). Both jets produce ozone, especially at high oxygen admixtures and long distances from the nozzle. Due to limitations in solubility and negligible effects in control experiments using an ozonizer (data not shown), a limited role with respect to liquid chemistry can be assumed. Taken together, the higher inclusion of gas phase species in COST-jet treated cysteine might be due to higher levels of O(3P) interacting either directly with the cysteine molecule or with the liquid, yielding OH radicals. Summary and conclusions Heavy oxygen (18O2) was used to track the fate of plasma-generated ROS in a cysteine model using mass spectrometry. Furthermore, H218O was used in a reverse experiment to observe the role of liquid-derived reactive species under the same conditions.
It became apparent that gas and liquid phase species play different roles: while some products are mostly driven by gas phase species, others require the presence of liquid phase species derived from the solvent system (Fig. 6). The observed isotope distribution pattern allows the assumptions that (a) gas phase-derived short-lived reactive species are active at the gas-liquid interphase, (b) short-lived reactive species are generated in the liquid phase, especially from gas phase-derived ROS (e.g. atomic oxygen), and (c) the formation of liquid phase species occurs both at the interface and in deeper layers. The dominant gas phase-derived species was found to be O(³P), and OH radicals dominate in the liquid phase. Concerning the application of CAP in the clinic, these results suggest a significant role of the target in the treatment efficacy: in a humid environment such as the mucosa or during surgery, target-derived species such as OH radicals intensify the oxidative impact of the CAP. When treating dry tissue such as intact skin, gas phase-derived species dominate and an overall milder impact of the CAP results. Recent data on the oxidation of complex lipids by CAP corroborate these conclusions. 99

Conflicts of interest

There are no conflicts to declare.

Fig. 6 Labeled oxygen branching ratios for the monitored species. Branching ratios are indicated for the kINPen (bold) and the COST-jet (italic). Oxygen can stem either from the water in the treated target (blue, left side of the arrows) or from the plasma (red, right side of the arrows). For clarity, the actual protonation/deprotonation states are not reflected.
2020-03-19T10:46:57.785Z
2020-03-17T00:00:00.000
{ "year": 2020, "sha1": "66cbd14d81281ca9abc241f991902080bc5c2a3a", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/ra/c9ra08745a", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "01c9f6f8399e396dd1f05ff94d66755a81308375", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
149566321
pes2o/s2orc
v3-fos-license
How Saint Clare of Assisi Guided Her Sisters. Impulses for Today's Leadership Context

Saint Clare and leadership? A lot of research on her person has been done in recent years. However, her importance for today's management has not been taken into account. In this article, we look more closely at her understanding of leadership and at how the medieval saint led the community of her sisters. To do this, we first look at biographical reports and written testimonies (about her and written by her) that characterize her leadership actions and behavior. First and foremost, it was her endeavor to lead a life according to Jesus Christ under the privilege of poverty. In this presentation, the excerpts from the canonization process and passages of her order's rule are of central importance. These testimonies provide valuable information on her understanding of leadership and her leadership style. Her biography, her leadership, and the values that shaped her actions provide valuable insights into today's leadership challenges. Through her example, St. Clare can help us to train ourselves as authentic leaders and to reflect on our own leadership and values. She can sensitize people to cultivate an appreciative inner attitude in dealing with others and thus to develop their own effect as (leadership) personalities.

Introduction

The authors of this volume want to show the diversity and relevance of Franciscan spirituality for postmodern society and the globalized world. This article presents a feminine perspective of Franciscan spirituality and shows the importance of St. Clare of Assisi for today's management and leadership. Clare of Assisi is one of the great women of the Christian and Franciscan tradition. In this article we want to inform about Clare's understanding of leadership. We focus on the specific attitudes of her spirituality, in light of the practical question of how to lead others, especially the community of her sisters.

In the life, spirituality and work of Clare of Assisi we can still find an answer to many questions and challenges in today's world of work and leadership. Clare's biography, letters, writings and the Acts of the Process of Canonization are intended to bring us closer to her understanding of management and leadership. The description of the main aspects of her vita and work, with a focus on central values based on evangelical poverty, is presented first. In a second step we focus on the leadership understanding and management style of St. Clare. It becomes clear that leadership means being a servant, having a sense of responsibility, being courageous and being ready to fight for one's own goals. In the final step we draw conclusions for today's leadership context: first and foremost, we learn from St. Clare about essential virtues and values for the attitude of our own leadership.

Background

Who is Clare of Assisi? Clare was born in 1193/1194 as the eldest daughter of Favarone di Offreduccio and his wife, Ortulana, in the upper town of Assisi. The son of a merchant, Francis of Assisi (1182-1226), lived in her hometown; his "scandalous way of life" in turning to the poor and socially excluded must have deeply impressed her (see Carney 1993; Dalarun 2005; Kreidler-Kos 2003; Kreidler-Kos et al. 2015). He abandoned a life of luxury for a life devoted to Christianity after reportedly hearing the voice of God ordering him to rebuild the Christian church and live in poverty.
St. Francis represented a radical way of life. He established an attitude based on love for people and God, and considered it his mission in life to live the gospel. Clare felt called to follow the gospel as Francis taught and lived it. On the night of Palm Sunday 1212, she joined Francis and his community. It was a conscious decision against family policy, deliberately against a predetermined path to wealth and the security of the residential tower. Instead, she chose a life of seclusion; in the evangelical vision she received from the preaching of Francis of Assisi, she rejected the prestige and privileges offered by her noble family. This is why she chose to live in worship, building a new kind of monastic life which can be described as " . . . an enclosed contemplative life in an urban setting with highest poverty" (Bekker 2005, p. 2). During her lifetime, she founded a monastic religious order for women in the Franciscan tradition. She is the first female leader in the history of Christianity to write a religious rule sanctioned by pontifical approval (Gregg 2017, p. 35).

On 11 August 1253, two days after the pontifical approval of her rule, Clare died. Within two months of Saint Clare's death, Pope Innocent IV issued the papal bull Gloriosus Deus (18 October 1253), in which he entrusted Bishop Bartholomew of Spoleto with the responsibility of promoting the Cause of her canonization. On 24 November 1253, they visited the convent of San Damiano in Assisi and officially interviewed under oath thirteen of the sisters who had lived with Saint Clare. Two other sisters, one of whom was in the infirmary, were interviewed on 28 November 1253, and, on the same day, Sister Benedetta, the Abbess of San Damiano, spoke on behalf of the entire community and declared the willingness of all the sisters to report on the holiness of Saint Clare. Two years later, on 12 August 1255, Clare was canonized by Pope Alexander IV (Armstrong 2006).

The religious community that Clare founded spread rapidly throughout the rest of Italy, Europe and the world. The Poor Clares or Poor Sisters (as the Order is called today) boast, 800 years later, over 20,000 active members in 76 countries (Bekker 2005, p. 2).

Aim of the Study

What does St. Clare have to do with leadership? How did she lead her sister community? In the research literature of recent years, these questions have so far attracted only limited attention. In the American research landscape, there are some approaches that associate Clare with servant leadership, especially her letters to Agnes of Prague (Bekker 2005; Karecki 2008; Christenson 2013; Till and Petrany 2013). There are also explanations of Clare's incarnated leadership (Self 2008) as well as a connection between Clare and the leadership of the Sacred Heart (Burchard 2012). In German-speaking countries, there is a study on Clare's potential as a mentor (Löser and Zimmerbauer 2010) and a few smaller studies on Clare and leadership (Dienberg 2016; Gerundt 2018).

With this article, we want to contribute to this discussion and show what distinguishes St. Clare of Assisi in her leadership actions and behavior. For this purpose, different sources (The Legend of Saint Clare, the Acts of the Process of Canonization as well as the Form of Life of the Poor Ladies, her Testament, her Blessings and excerpts from her correspondence with Agnes of Prague) are used to draw an authentic picture of her understanding of management and her leadership style.
It should be noted: her community has grown rapidly and her personality also has an enormous impact on other great personalities, especially women of that time.One example is princess Agnes of Prague (1203-1282) (Van den Goorbergh and Zweerman 2000;Mueller 2001;Ledoux 2003) who took the example of St. Clare.She exchanged her life of aristocratic privilege for a life in contemplation and prayer as a "poor lady" (Armstrong 2006, p. 39). Embracing the Poor Christ In the truest sense of the word, leading and guiding people means accompanying people and the community on their way to God, creating and securing the framework conditions so that individual and the community succeed in leading a life pleasing to God.Central to the understanding of the leadership of St. Clare is the knowledge of the source from which she lives and through which she can lead.It is her trust in a reliable foundation: Clare of Assisi wrote about her intention to live her life exclusively out of Lord's vocation: "Observing the holy gospel" (FLCl 1:1) became a life's work for her, but also for St. Francis and future generations.Clare only wanted to be dependent on Christ and to live every day to serve and share the gospel."Gaze upon him, consider him, contemplate him, as you desire to imitate him", she wrote to her confidante Agnes of Prague (2 LAg 18:19).To embrace the poor Christ, that was St. Clare's program.She aligned herself with Jesus Christ and let herself be guided by him.Her life plan was about God, about God-seeking, about meeting God."Her whole life is founded in God" (Proc XI: 2).She was held and supported by God.Clare gained strength through daily prayer.She drew from this source.As a dying woman Clare cheered God, even in a time when her life's work was not yet finished.She has a source that gave her support and orientation and at the same time gave her enough security to evolve herself further, to develop her own personality and to dare something new. Clare seemed to be aware of her vocation: she did not speak of herself in a small or simple way-although these attributes have always been attributed to her.In her Testament, she stated that the Lord had "called us to such great things" (TestCl 21).She had the confidence to go out and to find out how she could live her vocation and how she could "please God".Considering the fact that Clare had no earthly model for her purpose, she did "pioneering work". Her courage and determination were nourished by her trust in God: Clare could make a difference because she was moved by God.She followed the footsteps of Jesus Christ and experienced herself as led by God.Clare first received in her life and then accepted the "call of Jesus".She followed Jesus and became an example for others.She encouraged people to follow a new path.She herself was a source of inspiration for many more women of her time, who also wanted to get away from the usual life and to follow another, higher purpose.Thus, her "religious leadership was used by others in the women's religious movement in the thirteenth century as an example setting forth a timeless model of female sanctity" (Gregg 2017, p. 35). 
Clare and Her Community of the Poor Ladies From the beginning of May 1212, a new community was built around Clare of Assisi in San Damiano (Mueller 2010;Kuster 2013).Clare and her biological sister and first companion Agnes were accompanied, in the course of time, by new followers.Without fixed structures, the women first went an innovative way: together they led a religious, unmarried and poor life, with sedentary-contemplative and social-charitable elements, (Kreidler-Kos et al. 2015, p. 96) which faced the world and at the same time were protected by an inner form, and were bound to a fixed place and renounce an evangelical wandering life (Kreidler-Kos et al. 2015, p. 101). What was special about Clare and her sister congregation was the Privilege of Poverty (Mueller 2006;2010, p. 50).Clare thus made an unimaginable request to the Pope and the Church.But she managed to defend this way of life without mighty support, economic, or financial security, privileges or property.It is this very concrete form of poverty that has its roots in the incarnation and life of Jesus Christ.It expresses itself in lifestyle, manual labor, simple houses, and the renunciation of all possession.Poverty is therefore more than renouncement or renunciation of material things; it is a basic attitude, a habit of life: to recognize oneself as dependent, not to make oneself larger than one is, to experience life as a gift, to be open to what one encounters in the world and other people, to search and find God in the simple things of daily life.The attitude of being poor, emptying oneself and being open to the better, the greater and the different is a fundamental attitude in the Franciscan-Clarian spiritual life.This openness on the other hand, leads to a possible transformation in the encounter with the world and with others.It can happen that this openness transforms the individual.Something happens to him or her-and the individual must let it happen.Thus, the community in San Damiano does not fit into any of the previously established patterns (Kreidler-Kos et al. 2015, p. 99) and Clare's way of life was for a woman at this time no easy task and a novelty in church history. The period from 1216 to 1226 (death of Francis) is regarded as the stage of the formation of the young women's community.In the beginning St. Clare and her companions did not have a written rule to follow beyond a very short formula vitae given by St. Francis.This form of life was a guideline in the initial phase.Clare wrote the final rule, the "Form of life of the Order of the Poor Sisters" and fought until the end of her life for papal recognition.After the pope had issued a new regulation for women's communities, which in turn did not address the core of their vocation, Clare began to write her own rule.The Rule presents poverty as the key to the life of a sister of the Poor Ladies. From the beginning it was recognized that the focus of a sister's life was "to observe the Holy Gospel of our Lord Jesus Christ, by living in obedience, without anything of one's own" (Mueller 2010). Her Form of Life is a transcript of lived experience and her attention is directed to the shaping of life in the community of sisters.Above all, the living relationship of the sisters to one another and to the brothers, which should be characterized by love.For some of the phrases, St. 
Clare uses the Benedictine Rule and takes the title of abbess, but fills this position with attributes that Francis uses for the Minister General. She therefore did not embody the typical role of the abbess as described in the institutional documents in force at the time, such as the Rule of Hugolino or that of Innocent IV. These rules operated with a hierarchical understanding of monastic leadership and patronage that Clare did not emphasize in her own rule (Till and Petrany 2013, p. 48). Sisterly ministry, mutual benefit, and love were central features of Clare's understanding of community and leadership. The way of life of the poor sisters is primarily based on Clare's own way of life and takes it as an example. She therefore offers specific guidelines on how to lead and guide the sisters of San Damiano.

Clare's Understanding of Management and Leadership: The Abbess is a Servant (for All)

Clare's Process of Canonization contains a wealth of material about Clare's life in the monastery of S. Damiano. It consists of sworn testimony about the life of Clare. From these Acts we learn that she first had to grow into her role as leader and at first even refused to take over the leadership of the community (Proc I: 6). After a fierce controversy with Francis, she finally assumed the leadership responsibility from 1214, without ever claiming the title "abbess" for herself or calling herself as such. She tried to avoid the title at all costs and presented herself as servant and maid of others (for example to Agnes of Prague: 1 LAg 2; 1 LAg 33; 3 LAg 2; 4 LAg 2). Clare had a clear vision and gave a living testimony in her discipleship and commitment to poverty. She also expected this trust in Jesus Christ and his message from each individual sister. In her Testament she wrote: "For the Lord Himself has placed us as a model, as an example and mirror not only for others, but also for our sisters whom the Lord has called to our way of life as well, that they in turn might be a mirror and example to those living in the world. Since the Lord has called us to such great things that those who are a mirror and example to others may be reflected in us, we are greatly bound to bless and praise God and to be strengthened more and more to do good in the Lord" (TestCl; see also: FLCl 8:10.14-16). But it was Clare's resistance and refusal to accept the markings and privileges of the religious and temporal forces of her time, her courage, her determination, and her energy that made a difference. The community around her went its own way with a common goal in mind. Attitudes of humility and awe, solidarity, gratitude, love, and kindness were of central importance.

What does that say about the woman who led her sister community for 40 years, and how was she perceived in her leadership? The sisters described the Saint's (leadership) actions as maternal and caring. Love for and service to other people was at the center. Thus, Clare presented anything but a lordly style of leadership reflecting the feudal power structure (Kreidler-Kos et al. 2015, p. 114). She lived a radical understanding of service. For the rest of her life, she did not want to be in the foreground. This is why she placed herself at the service of others, and of God. This basic understanding forms the maxim of how to lead and live: she was a servant for each individual sister and cared for the benefit of the whole community. This was also an expression of her humility.
From the files of the Process of Canonization we learn about Clare's practical approach to mentality and everyday virtues.Clare, although abbess, considered herself the servant of all.She washed the feet and cleaned the commodes of the sick, and covered her sisters at night from the cold. (Proc VI: 7,24).Clare embodied an attitude that manifested itself in concrete actions and deeds, less through words than through gestures and signs that touched and showed that she cared for each individual sister, her search for God, and the whole community.Clare lived a "foot-washingleadership" (Karecki 2008). This paper uses Clare's Form of Life (Rule) as a central reference to describe her understanding of management and leadership.Especially Chapter 4 gives relevant statements for today's management and leadership, as it is about the structure of the community.Clare stated that the abbess should look after her sisters like a mother."She should with discernment provide them clothing according to the diversity of persons, places, seasons, and cold climates, as it shall seem expedient to her by necessity" (FLCl 2:16)."Abbess and mother" are mentioned in the same breath (FLCl 4:7; TestCl 63). She expressed that sisters who had an office, especially the abbess, should be guided by her example and love, not by authority and status: "I beg that sister who will have the office [of caring for] the sisters to strive to exceed others more by her virtues and holy life than by her office so that encouraged by her example" (TestCl 61). Again it becomes clear that the "success for the community" depends on one's own attitude.If this is characterized by love and altruistic behavior, the sisters willingly followed their life in the community and make it obedient (TestCl 61-70). Several sisters (witnesses) in the Process of Canonization emphasized Clare's great honesty, kindness, humility, compassion, gentleness, righteousness, and patience (Proc I: 1,3; Proc II: 2,8; Proc IV: 3.9; Proc VII: 11,22-24; Proc VIII: 1,1-3; Proc X: 2,4-6; Proc XII: 6).These virtues always apply to strengthen each other and are aimed at welfare and empowerment.In return, Clare empowered her sisters and companions to act in accordance with their values-this is the opportunity for transformation.Clare did not claim "power" alone, and did not commit to a title.Instead, she consistently described herself as servant and maid (1 Ag2, 1 Ag 33).She acted in the service of love (Mertens 2011).Finally, one of the most important things about her leadership is the fact that she serves as an example for someone who wants to live all life according to the Gospel (Gregg 2017, p. 35). Clare's Leadership Style Clare's leadership style is at first strikingly democratic and fraternal.In the Acts of the Process of Canonization, Sister Pacifica (witness 1) mentions the great importance that the community held for Clare.It represents a pleasant and very demanding form of leadership (Proc I: 14).Sister Cecilia (witness 6) reports that Clare was very gentle and "highly attentive".She drew a "motherly picture" of Clare and emphasized that the community was something precious for her. 
The sixth witness testified: "God chose her as mother of the virgins, as the first and principal abbess of the Order, so that she guarded the flock and strengthened the other sisters of the Order with her example in the goal of the holy Order.She was certainly most diligent about encouraging and protecting the sisters, showing compassion toward the sick sisters.She was solicitous about serving them, humbly submitting herself to even the least of the serving sisters, always looking down upon herself" (Proc VI: 2,7-9) Clare fought for the fellowship with loving zeal.That is why she received the love and appreciation of the sisters (LegCl 38).Clare felt responsible but also grateful for her concrete community (Maier 2011).Especially in the early years of the constitution, the solidarity and friendship of the women who risked a life in poverty together gave Clare power to hold on (Kreidler-Kos et al. 2015, p. 109).Clare, and especially the first sisters, defended the "privilege of poverty" as their most fundamental spiritual treasure.Finally, with her Form of Life, Clare gave her community a solid foundation and orientation in their way of life. In the following step, we want to shed more light on single passages of the Rule in which she defines the core ideas for her life-program and the life of future generations.In the first lines of Chapter 4, Clare described the abbess's election procedure and the circumstances that required a new election (4:1-7).An admonition to the abbess (4:8-14) followed.Finally, it was clarified to what extent the community was involved in decisions and had the opportunity to participate (vv.15-24) (Maier 2011).Clare formulated codes of conduct for living together, which can be sources of inspiration for our way of life.In FLCl 4:8-10 she writes: "Let whoever is elected reflect upon the kind of burden she has undertaken on herself and to Whom she must render an account of the flock committed to her.Let her also strive to preside over the others more by her virtues and holy behavior than by her office, so that, moved by her example, the sisters may obey her more out of love than out of fear.Let her avoid exclusive loves, lest by loving some more than others she give scandal to all." Clare emphasized the enormous responsibility demanded of the leading sister and described the task of an abbess as a burden for which the respective sister must be accountable.The abbess should act for the community through her example.She has to be convincing in everyday life and finally she has to follow words with deeds.The persons entrusted should be convinced of "love" and of a consequent lifestyle, not by the demonstration of power or perseverance in the formal function (TestCl 59-66).It is a matter of an encounter at eye level and in mutual respect.The abbess should always remember to remain objective and impartial, to renounce personal sympathies in order not to survive or discriminate against anyone.Humility is one of the most cardinal leadership values for her. Clare was committed to authenticity: the abbess must live the central values and serve the community.Authenticity requires you to be with oneself.It requires to know oneself well, to be self-reflexive with oneself and to seek phases of peace and silence.The retreat allowed Clare to (re)focus herself and to concentrate on the essentials. 
Clare wanted to remain silent and pray.The cultivation of the relationship with God is of enormous importance to her.Within the monastery, she created a place where she normally prayed (TestCl 59-66).Clare's neighbor and distant relative, Sister Pacifica, who was one of the first sisters to join Clare in her penitential life, explained that Clare had spent much of the night in vigilant prayer (Proc I:7).The undisturbed and personal prayer was a special value for her-for example in her rule she set the prayer times for her sister community and prayed during periods of illness and crisis.Already on her deathbed she reminded her sisters to "stay in prayer" (Proc X: 10,43).The ability to immerse oneself intensively in prayer, to free oneself from everyday life and worries, gave Clare a different presence, a certain clarity: "She was vigilant in prayer and sublime contemplation.At times, when she returned from prayer, her face appeared clearer than usual and a certain sweetness came from her mouth."(Proc VI: 3,10) and "When she returned from her prayer, the sisters rejoiced as though she had come from heaven".(Proc I:28) By cultivating her prayer life and spirituality, Clare was more easily able to be there and take care of herself and others.It allowed her to have a more intense form of presence for her sisters.Several sisters testified in the Process of Canonization and pointed out that the most important fruits of Clare's prayer were the words of consolation and guidance that she was able to give her sisters (Proc I:9; IV:4; VI:3f.)(Mueller 2010, p. 63). In addition, her leadership style lived on empowerment, her respected maturity, and the self-responsibility of each sister: The abbess was democratically elected: "In order to preserve the unity of mutual love and peace, let all who hold offices in the monastery be chosen by the common agreement of all the sisters" (FLCl 4:22). With this determination, Clare trusted the sisters that they could judge when an abbess is harming the community."If at any time it should appear to the entire body of sisters that she is not competent for their service and common welfare, the sisters are bound as quickly as possible to elect another as Abbess and mother" (FLCl 4:8).In this way, the abbess who did not fulfill her "obligation" could be voted out and replaced, so that no sister could claim a permanent office and rely on it.Life in the unity of mutual love and peaceful coexistence was at the center; this should be preserved and protected.All who held offices had a ministry function and the abbess should be the maid of all the sisters (FLCl 10:5).The abbess should know the needs of each of the sisters.The abbess is expected to be attentive to these needs.She must be obedient by listening to the needs of her sisters. Clare took up the term "obedience": obedience involves the willingness to listen to someone else, to confine oneself to one's own desires for the sake of the other.Obedience is the effort to learn to distinguish all voices that speak.Obedience also means engaging in a common journey.It is not the monastery that creates the community, but the willingness to listen to one another and renounce one's own for the sake of the other.The abbess should inspire the sisters through her virtue and holy way of life.In this way the sisters could obey her more out of love than of fear (FLCl 4:7).Clare explained that the abbess's task was to comfort the advocate and to be the last refuge for the suffering and the restless . 
Listening to and nurturing the sisterly conversation was an essential part of understanding the role of an abbess and leader.She advised to be a "refuge" (FLCl 4:11-12).It encouraged people to perceive the needs and feelings of other people, especially the sisters, to be ready to talk, and to be responsive to worries and needs.This required a reception on the other side and the attitude of being with the heart of the "listener". Closely related is a wording from the tenth chapter: The abbess must exhort the sisters, correct them humbly and charitably, and be aware of each other's independence (see FLCl 10:1).When Clare talked about exhortation and correction, we need to think about how she wanted to be understood and how she thought exhortation should be done.For admonition is not identical to criticism-it is more about assistance, encouragement and comfort.It is about solidarity and meeting at eye level.In the correction, the abbess is not the mother of her daughters, but a sister among sisters.As such, she should discipline them with humility and charity.As a sister among sisters, every correction is made by an abbess with the intention of inviting a sister to greater fidelity.Only charity could, of course, convince a sister to feel secure enough to put aside any excuse, listen carefully to the abbess, and reform her conduct.The aim of teaching, personal interaction, and correction of the sisters is fidelity to the forma vitae of the sisters-the life to which God has inspired them (Mueller 2010, p. 249). For example, Clare knew exceptions in the practice of fasting for the weak and the sick.She considered their individuality into account and did not abandon their worries and needs: "If she needs it, the sister may use it; otherwise, let her in all charity give it to a sister who does need it."(FLCl 8:10).For those who could not maintain the hardness of life, she allowed a moderate lifestyle: "Those who are ill may lay on sacks filled with straw and may use feather pillows for their heads; those who need woolen stockings and quilts may use them" . She accordingly called the companions to harmony with one another, goodness, and humility in the monastery.As mentioned earlier, Clare established a democratic leadership model for her sister community.She emphasized the personal responsibility of each sister for the benefit of the community.In the fourth Chapter of the Rule she writes : "The abbess is bound to call her sisters together at least once a week in the chapter, where both she and her sisters should humbly confess their common and public offenses and negligences.There let her consult with all her sisters concerning whatever concerns the welfare and good of the monastery, for the Lord frequently reveals what is better to the youngest." Clare formulated how the other sisters are involved in responsibility and decision making.She called her sisters together and held a weekly Chapter.There it was possible to discuss important topics with the entire convent, seek advice, or consult with each other. 
First of all, the attitude with which the sisters came together was important for Clare: she described a humble attitude and an honest confession that it is each individual's responsibility how perfect life or unity is.It is therefore necessary to reflect on the contribution each individual can make to the success of relationships, projects, or tasks.It also resonates with the idea of being at peace with oneself, in one's own relationship with God and the sisters.It is necessary to order one's thoughts before talking about community.First of all, the necessary conditions and terms are to be created, only then dialogue and exchange takes place.The exchange of information is consciously addressed to all, not only to a closed circle of sisters.In this way, the entire potential of ideas, creativity and knowledge can be captured, succeed and used for the company.This is why Clare initiated discussions with everyone and sought the opinion of younger people.She sought forms of sisterly dialogue that were characterized by humility and willingness to listen, but also by an inspired and committed approach to authentic conviction. Discussion After examining Clare's life through her biographies and testimonies and reading what can be deducted from her life, we collected characteristic components of her management understanding and leadership style.The following table gives an overview of the essential components (Table 1).For Clare, it was a development process: she had to grow into her leadership role.It should be noted that there was no direct role model for her to imitate or be guided by Kreidler-Kos (2011, 2013).At first, she was only a companion of Francis, a member with a nonspecific role.Then she was forced to take over the government of the sisters.When she was urged by Francis to take over the leadership, she refused.At first sight, leading and serving seemed incompatible to her.It seemed to be a conflict of interest for her, because she only wanted to be a servant of Jesus Christ and a servant of her sisters.According to the legend, Clare rejected the name and office of abbess and wanted to serve her sisters, but three years after her conversion, pushed by Francis, she accepted the government of the sisters.She defined her leadership as an office of ministry and service.Later on, her attitude toward this office was the acceptance of a deeper sense of service.As abbess, she carried out family tasks: washing the sisters' hands, serving those who sat at the table, and waiting for those who ate.She rarely gave an order, but did what is necessary spontaneously, preferring to do things herself rather than to command her sisters (LegCl 12). When we ask if and how the Clarian spirit can be translated into today's leadership, it is advisable to look at approaches such as Greenleaf's concept of Servant Leadership.Clare was a leader who focused on service.She embodied the essential elements that would later include Robert Greenleaf's work on Servant Leadership.He defined the concept as follows (Greenleaf 1998, pp. 18-19): "The servant-leader is servant first . . .It begins with the natural feeling that one wants to serve, to serve first.Then conscious choice brings one to aspire to lead.He or she is sharply different from the person who is leader first, perhaps because of the need to assuage an unusual power drive or to acquire material possessions." 
Subsequent studies and research derive further criteria and categories from Greenleaf's work.Larry Spears for example identifies ten characteristics of serving leaders in Greenleaf's writings: listening, empathy, healing, awareness, persuasion, conceptualization, foresight, stewardship, commitment to the growth of others, and building community (Spears 2010, pp. 25-30;1998;2002)."Leadership experts" such as Bolman, Deal, Covey, Fullan, Sergiovanni, and Heifitz also describe these characteristics as essential components of effective leadership.Leadership is, first and foremost, serving people.Service becomes an attitude of life. The examples of Clare's life story explicitly show that everything depends on a person's attitude.It is not the techniques, the methods, and the concepts that are important, but the attitude towards oneself, one's fellow (wo)men and entrusted goods.Franciscan-Clarian-that is the attitude that aims nothing other than to live the Gospel of Jesus Christ and make it accessible to the world.Clare's leadership was essentially based on her trust and clear orientation towards Jesus Christ, which she recognized in radical poverty.Clare lived her life through her relationship with God and prayer.The cultivation of the prayer life supported her in daily management of ministry and service; it gave her and her sisters the necessary strength, but also structure.From this source, Clare, despite her monastic lifestyle, could act courageously and persistently, building relationships and networks. As mentioned in the introduction, there are few studies on the understanding of leadership and management of Saint Clare of Assisi, especially in the German-speaking world.Internally, we have also concentrated our research activities more intensively on Francis of Assisi as well (Dienberg 2009;Gerundt 2012;Warode and Gerundt 2013, 2014, 2015;Dienberg and Warode 2015;Warode 2016). The feminine side of Franciscan Spirituality has recently moved into our focus (Gerundt 2018).The task for the future will be to strengthen and put into practice the essential characteristics of a Franciscan-Clarian understanding of leadership.Similarities, but also the essential differences in the understanding of the two Saints should be worked out.The effort to find, develop and put into practice sustainable leadership concepts is still unbroken.Our task is "to look into the past to inform our present as we discern the direction of our future" (Swan 2014, p. 8). Conclusions: Impact on Leadership Today Finally, what can we learn about leadership from this medieval woman?Are her life and message applicable in today's situations?Can she talk to the people of the 21st century? Clare's life and work were marked by a firm inner attitude that made her mature into an authentic leader.She characterized a leadership that is fuelled by the unconditional following of Jesus Christ in poverty, humility, and sisterhood (living like brother and sister).She understood her ministry as a service to others.Clare was fully committed to the vision of the Gospel in evangelical poverty which she had fashioned with Francis and then implemented with the sisters at San Damiano.Without this kind of commitment, the community would have failed and threatened its mission to glorify God through its way of life .Being faithful to the original vision is the most important form of service that she and every other leader can provide. 
What can the ideal of evangelical poverty teach us today? What can we learn from it? Humility and modesty would be useful for many of us today. In contrast to a constant quest for successful self-empowerment, the life and work of St. Clare were marked by a commitment to values and virtues such as solidarity, justice, freedom, and sustainability. It points out to today's leaders that they should be humble towards the responsibility given to them. Power, leadership, and responsibility should always be associated with humility. It reminds us how each one of us is a small part of a big whole. Therefore, it is essential to look after the common good and to subordinate one's own interests to the overriding purpose. This does not prohibit the appearance of self-confidence, as long as the intention is to do the best for the cause and for the people one leads.

We can learn that humility depends on our own attitude towards ourselves and our fellow human beings. Although Clare did not want to be a leader, she took responsibility without ever being addressed by her official title. This attitude fit with her understanding of poverty and humility. She wanted to follow the "poor" Christ: a life of simplicity, sharing of goods, and solidarity with the poor. In San Damiano, Clare founded a revolutionary way of living together, without claims to power and property. The renunciation of power and property gave Clare and her community freedom. Her attitude of poverty reflects her trust in God, the Father and Creator of all things.

Adopting an attitude of service in today's leadership context and focusing on the question "What can I do for others so that they can personally evolve and achieve the common goals?" is worthwhile in many ways. The underlying idea: only what serves everyone ultimately serves ourselves. A serving leadership is an invitation to change perspectives in order to lead companies even more successfully into the future and, at the same time, to bring more humanity and a deeper sense of meaning into the working world. A community or organization will only prosper if individual gifts are developed for the common good and put into the service of the group's mission. Without a clear focus, there can be growth, but it will not coincide with the original purpose for which the group was formed.

This includes discussion and awareness of one's own life resources and self-direction, from which one can draw in both the professional and the private context. Leaders have the task of becoming aware of the sources of their actions. The attitude of listening is central. The cultivation of one's own spirituality sensitizes us to listen to ourselves (inner perspective) and to listen to what is brought to us from outside (dialogue orientation towards the outside). From St. Clare's biography it becomes clear that the focus on God and on prayer helped her to be present, which means more than mere presence. By preparing herself internally for what was to come and ensuring an inner availability, she could fully concentrate on her counterpart, her sisters.
Clare's perseverance and courage teach us how important it is to stand up for our beliefs and ideals, even when it is unpleasant. A solid dose of courage, determination and assertiveness is essential for successful leaders. Those who want to lead effectively and sustainably must not shy away from conflicts and risks. A leader must be prepared to think continuously about his or her own ideas, expectations, and goals in order to be able to assert their own position wisely and persistently, even against resistance. Clare showed the necessity of becoming independent and of developing a style according to one's own convictions. A certain willingness to compromise is part of it. In her behavior and actions, Clare leads us to think in new directions, to be visionary and to go new ways. It becomes clear that everyone who wants to achieve goals must position oneself and communicate clearly.

The biography of St. Clare tells us that it is necessary to cultivate and shape relationships, and to build a sustainable network. We need people who accompany us on our way, who support us, who are mentors and friends. Clare could not look to any earthly role model, but she herself was a counselor and role model for her sisters. Networks are most important for working life, especially for women. From Clare we can learn to present ourselves with a healthy self-confidence, not to make ourselves "small", to be courageous, to think in big contexts and to build a strong network to realize our own goals. To do this, Clare had to break a few rules and "break through societal barriers for women of her time" (Gregg 2017, p. 49). A leader who acts in a serving way will also take care of the individual members of the organization. There is no substitute for constant attention to the needs of others.

In short, Clare's leadership is inspiring for our day, because she shows us a different way of leading. Clare offers an alternative leadership concept through her writings and wealth of thought. It is a practical concept that has been lived in monasteries for more than 800 years and can have positive effects for today's leaders. Despite all the parallels that have been identified in the Franciscan-Clarian leadership concept, the limits of transferability may not be disregarded. The respective context, the historical background and the circumstances must always be considered. Any attempt to transfer Clare into today's leadership must not hide the diversity and plurality of life horizons with their experiences. Not all of the prescriptions that Clare formulated can be transferred into today's leadership challenges; this expectation must be abandoned. But to explain these aspects in detail is not the subject of this article.
Table 1. Essential components of St. Clare's leadership understanding

Empathy
- to accept the uniqueness and greatness of the others; to serve others for the sake of serving [FLCl 10:4-5]
- to be grateful to others [Proc IV: 3,9]
- to console those who are afflicted [FLCl 4:11-12]
- to be the last refuge for those who are troubled [FLCl 4:11-12]

Humility
- not over-valuing ourselves: Clare rejects the title and refuses to be addressed as an abbess [Proc I: 6]
- it enables one to respect the worth of all persons [FLCl 4:15-18]
- unassuming behavior of being humble [1 LAg 2; 2 LAg 1; 2 LAg 2; 3 LAg 2; 4 LAg 2]
- realizing that you do not have the answer to everything, which leads one to seek the advice of others and listen to it [FLCl 4:15-18]
- the ability to be vulnerable and humble [Proc VIII: 1,1-5]
- to have awe for the responsibility: to be an abbess/leader is a burden and demands justification [FLCl 4:9]

Trust and faith
- first of all: trust in God [FLCl 1]
- to be guided by God (and his Holy Spirit) [TestCl 2-6]
- trust in one's talents and abilities [FLCl 4:7.24]
- trust in others and treat them with respect and goodwill [FLCl 8:12-13; FLCl 4:15-18]
- to have integrity and acceptance/respect

Recognition of responsibility/stewardship
- to have respect for the responsibility: to be an abbess/a leader is a burden and demands justification [FLCl 4:8]
- to be a servant: you have to try your best for the community [LAg; FLCl 10:4-5; BlCl 5]
- the necessity to be impartial and objective [FLCl 4:10]
- to be an authentic person [FLCl 4:8-9; Proc I: 1,3; Proc II: 2,8; Proc IV: 3,9; Proc VII: 11,22-24; Proc VIII: 1,1-5; TestCl 19-21]
- to hold all in sacred trust for the greater good of all people and things [FLCl 2:10; 4:19; 7:1]

Spirit of serving
- not to center attention on one's own accomplishments, but rather on other people; concentrating on service limits the negative effects of self-interest
- Clare lives a radical understanding of service: in word and deed [Proc VI: 7,24]
- she is appreciative, caring and turned towards the people [FLCl 8:12-13]
- servant and handmaid in all things [1 LAg 2; 2 LAg 1; 2 LAg 2; 3 LAg 2; 4 LAg 2]

In particular, she shows us how to:
• develop a respectful inner attitude
• reflect on our own leadership behavior and our own values
• develop our own effect as a (leadership) personality
• perceive the patterns of personality, communication and dealing with conflicts in daily leadership and take conscious action
• trust one's own competencies and use one's strengths effectively
• gain natural authority and integrate our personality into the leadership style
• create trust, a feedback culture and open communication, and
• find access to the interlocutor.
2019-05-12T14:24:10.014Z
2018-11-06T00:00:00.000
{ "year": 2018, "sha1": "1b1e5093788183c9aa96d4de3911ac67de253272", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-1444/9/11/347/pdf?version=1542624180", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "1b1e5093788183c9aa96d4de3911ac67de253272", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [ "Sociology" ] }
53568205
pes2o/s2orc
v3-fos-license
Some Linguistic Neutrosophic Cubic Mean Operators and Entropy with Applications in a Corporation to Choose an Area Supervisor

In this paper, we combined entropy with linguistic neutrosophic cubic numbers and used it in daily life problems related to a corporation that is going to choose an area supervisor, which is the main target of our proposed model. For this, we first develop the theory of linguistic neutrosophic cubic numbers, which express indeterminate and incomplete information by truth, indeterminacy and falsity linguistic variables (LVs) for the past, the present, as well as the future time very effectively. After giving the definitions, we initiate some basic operations and properties of linguistic neutrosophic cubic numbers. We also define the linguistic neutrosophic cubic Hamy mean operator and the weighted linguistic neutrosophic cubic Hamy mean (WLNCHM) operator with some properties, which can handle multi-input agents with respect to different time frames. Finally, as an application, we give a numerical example in order to test the applicability of our proposed model.

Definition 10. Let g = (p(α,α), p(β,β), p(γ,γ)) be an LNCN defined on the LTS p. Then, the score function, the accuracy function and the certain function of the LNCN g are defined as follows: (i) ... (ii) ... (iii) ... Now, with the help of the above-defined functions, we introduce a ranking method based on these functions.

Entropy of LNCSs

Entropy is used to quantify the uncertainty in different sets such as the fuzzy set (FS), the intuitionistic fuzzy set (IFS), etc. In 1965, Zadeh [37] first defined the entropy of an FS to determine its ambiguity in a quantitative manner. This notion of fuzziness plays a significant role in system optimization, pattern classification, control and some other areas, and he also gave some indications of its effects in system theory. Later, non-probabilistic entropy was axiomatized by Luca et al. [38]. The intuitionistic fuzzy sets are intuitive and have been widely used in the fuzzy literature. The entropy G of a fuzzy set H satisfies the conditions of that axiomatization; differences occur in Axioms 2 and 3. Kaufmann [39] suggested a distance measure of soft entropy. A new non-probabilistic entropy measure was introduced by Kosko [40]. In [41], Majumdar and Samanta introduced the notion of two single-valued neutrosophic sets and their properties, and also defined the distance between these two sets; they further investigated the measure of entropy of a single-valued neutrosophic set. The entropy of IFSs was introduced by Szmidt and Kacprzyk [42]. A fuzziness measure in terms of the distance between a fuzzy set and its complement was put forward by Yager [43]. The LNCS is examined by managing undetermined data with the truth, indeterminacy and falsity membership functions. For the neutrosophic entropy, we follow the Kosko idea for the fuzziness calculation [40]. Kosko proposed to measure this information feature by a similarity function between the distance to the nearest crisp element and the distance to the farthest crisp element. For neutrosophic information, we refer to the work by Patrascu [45], where he has given the following definition, comprising Equations (30) to (33). It states that the two crisp elements are (1, 0, 0) and (0, 0, 1). We consider the following vector: B = (µ − ν, µ + ν − 1, w). For (1, 0, 0) and (0, 0, 1), this results in B_Tru = (1, 0, 0) and B_Fal = (−1, 0, 0).
We will now compute the distances as follows; the neutrosophic entropy is then defined by the similarity between these two distances. The similarity E_c and the neutrosophic entropy V_c are defined as follows.

Definition 15. Suppose that H = { (x_i, p(αH,αH)(x_i), p(βH,βH)(x_i), p(γH,γH)(x_i)) | x_i ∈ X } is an LNCS; we define the entropy of the LNCS as a function G˚k: k(X) → [0, t], where t + 1 is an odd cardinality. The following are some of the required conditions: ... 3. H is less uncertain than I; depending on the entropy value in Equation (34), we can obtain G˚k(H) ≤ G˚k(I). If ... is an LNCS in U, then the entropy of U is given accordingly.

The Method for MAGDM Based on the WLNCHM Operator

In this section, we discuss MAGDM based on the WLNCHM operator with LNCNs. Let U = {U1, U2, . . . , Um} be the set of alternatives, V = {V1, V2, . . . , Vn} be the set of attributes and w = (w1, w2, . . . , wn)^T be the weight vector. Then, using LNCNs drawn from the predefined linguistic term set ϕ = {ϕj | j ∈ [0, t]} (where t + 1 is an odd cardinality), the decision makers are invited to evaluate the alternatives Ui (i = 1, 2, . . . , m) over the attributes Vj (j = 1, 2, . . . , n). The DMs can assign the uncertain LTS to the truth, indeterminacy and falsity linguistic terms and the certain LTS to the truth, indeterminacy and falsity linguistic terms in each LNCN, based on the LTS used in the linguistic evaluation of each attribute Vj (j = 1, 2, . . . , n) on each alternative Ui (i = 1, 2, . . . , m). Thus, we obtain the decision matrix S = (sij)m×n with entries gij = (p ...). Based on the above information, the MAGDM procedure based on the WLNCHM operator is described as follows:

Step 1: Formulate the decision making problem.
Step 2: Calculate gi = WLNCHM(si1, si2, . . . , sin) to obtain the collective approximation value for alternative Ui over the attributes Vj (j = 1, 2, . . . , n).
Step 5: In this step, we determine the ranking order of the alternatives Ui (i = 1, 2, . . . , m). According to the ranking method of Definition 8, the alternative Ui with the greater score function ϕ(S) ranks higher. If the score functions are equal, then the alternative Ui with the larger accuracy function ranks higher. Furthermore, if both the score and the accuracy function are equal, then the alternative Ui with the larger certain function ranks higher.

Numerical Applications

A corporation intends to choose one person to be the area supervisor from four candidates (U1-U4), to be further evaluated according to three attributes, which are shown as follows: ideological and moral quality (V1), professional ability (V2) and creative ability (V3). The weights of the indicators are w = (0.5, 0.3, 0.2).

Procedure

Case 1: If the weights of the attributes are completely unknown, then we use the suggested technique to solve the above problem, in which the decision making steps are as follows:
Step 1: Let U = {U1, U2, . . . , U4} be the set of alternatives and V = {V1, V2, V3} be the set of attributes. Let S = (sij)4×3 be the decision matrix, in which each alternative is evaluated with respect to the given attributes.
Step 2: Calculate sij = WLNCHM(si1, si2, . . . , sin) to obtain the overall assessment value for alternative Ui with respect to attribute Vj.
Step 3: We utilize the entropy of LNCSs to calculate the weights of the attributes; i.e., let sj = (p(αj,αj), p(βj,βj), p(γj,γj)) be the LNCN and let G˚k(sj) determine the weight of attribute j.
Step 5: We find the values of the score function ϕ(S) as: ... = 0.657.
Step 6: According to the values of the score function, the ranking of the candidates can be confirmed, i.e., S4 ≻ S2 ≻ S1 ≻ S3, so S4 is the best alternative.

Case 2: If the DM gives information about the attribute weights and the weight vector is w = (0.1, 0.5, 0.4), then the score functions ϕ(Si) (i = 1, 2, 3, 4) of Case 2 can be obtained as follows: ϕ(S1) = 0.451, ϕ(S2) = 0.435, ϕ(S3) = 0.504, ϕ(S4) = 0.492. The ranking based on these score functions is S3 ≻ S4 ≻ S1 ≻ S2. Thus, due to the different attribute weights, the ranking of Case 2 differs from that of Case 1.

In the MADM method, the attribute weights reflect the relative importance of the attributes in the decision process. However, due to issues such as data loss, time pressure and the incomplete field knowledge of the DMs, the information about attribute weights is often only partially known or completely unknown. Suitable methods are therefore needed to derive the weight vector of the attributes before ranking the alternatives. In Case 2, the attribute weights are determined based on the DMs' opinions or preferences, while Case 1 uses the entropy concept to determine the weight values of the attributes and thereby reduce the influence of subjective factors. Therefore, the entropy of LNCSs is applied in the decision process to give each attribute a more objective and reasonable weight.

Comparison Analysis

From the comparison analysis, one can see that the proposed method is more appropriate for articulating and handling indeterminate and inconsistent information in linguistic decision making problems, and thus overcomes a shortcoming of several linguistic decision making methods in the existing work. In fact, most decision making problems based on different linguistic variables in the literature cannot express inconsistent and indeterminate linguistic results, whereas the linguistic method suggested in this study is a generalization of existing linguistic methods and can handle and represent linguistic decision making problems with LNN information. We also see that the proposed method carries much more information than the existing methods in [26,32,44]. In addition, the best and worst alternatives identified in [26,32,44] are the same as ours, while the overall rankings differ from our method. The reason for the difference between the given literature and our method may lie in the decision thought process: some initial information may be lost during the aggregation process, and consequently the conclusions differ. Different aggregation operators may be used [32]; our methods are consistent within the chosen aggregation operator yet yield a different order. However, [32] may have some limitations regarding the attributes: the weight vector is given directly, and the positive and negative ideal solutions are absolute. Apart from this, the rankings in the literature [26,32,44] differ from the proposed method. The reason for the difference may be the uncertainty in the LNN membership, since information is inevitably distorted in LIFNs. Our method develops the neutrosophic cubic theory and a decision making method under a linguistic environment and provides a new way of solving linguistic MAGDM problems with indeterminate and inconsistent information.
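As a rough illustration of the weighting-and-ranking logic used in Case 1 above, the sketch below derives normalized attribute weights from entropy values and orders the alternatives by their score function. The entropy values, the simple (1 − entropy) weighting rule and the score numbers are hypothetical placeholders; the actual WLNCHM aggregation of the linguistic neutrosophic cubic numbers is not reproduced here.

# Minimal sketch: entropy-based attribute weights and score-based ranking.
# All numeric values are hypothetical placeholders.

def entropy_weights(entropies):
    """Turn per-attribute entropy values into normalized weights
    (lower entropy -> higher weight)."""
    divergences = [1.0 - e for e in entropies]
    total = sum(divergences)
    return [d / total for d in divergences]

def rank_by_score(scores):
    """Rank alternatives by descending score function value."""
    return sorted(scores, key=scores.get, reverse=True)

attribute_entropy = [0.40, 0.55, 0.70]            # hypothetical entropy values
weights = entropy_weights(attribute_entropy)      # -> [0.44, 0.33, 0.22]
scores = {"U1": 0.52, "U2": 0.54, "U3": 0.49, "U4": 0.66}   # placeholder scores
print("weights:", [round(w, 2) for w in weights])
print("ranking:", " > ".join(rank_by_score(scores)))        # U4 > U2 > U1 > U3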
Conclusions In this paper, we worked out the idea of LNCNs, established their operational laws and basic properties, and defined the score, accuracy and certain functions used for ranking LNCNs. We then defined the LNCHM and WLNCHM operators, introduced the entropy of LNCNs, and applied it to determine the attribute weights. On this basis, we developed a MAGDM method based on the WLNCHM operator for solving multi-attribute group decision making problems with LNCN information, and we illustrated the developed method with a numerical example.
Investigating Anti-mutagenic Activities of Lantana camara L. (Verbenaceae) Applying Salmonella typhimurium and the Ames Test Introduction: Genetic mutations have a significant role in causing cancers, and plants are effective on cancer recovery by producing metabolites. In this regard, the present study aimed to evaluate the Lantana camera anti-mutation effects applying Salmonella typhimurium in the tissues. 3 Although the third method offers highly particular and targeted therapy, it is limited and highly expensive. 6 Further, cancers recur after treatment. 3,6 Recently, new methods have been used to find new compounds with anticancer effects from difficult resources for controlling the harmful effects of anticancer medicines and finding better compounds. 3 Medicinal plants have long been a natural resource for the treatment of many ailments. According to the World Health Organization report, many plants are currently used for medical purposes. 7 Additionally, the metabolites of plants are useful for different therapeutic aims, 8 and plant compounds have biological roles such as pain reliever, along with antiinflammatory and antimicrobial activities. 3 Further, they are the resources of nearly 25% of therapeutic drugs 9 and more than 60% of anticancer drugs are derived from the plants. 10 As discussed earlier, it is essential to develop newer, safer, and more effective substances for treating cancer. Plant compounds are beneficial materials for developing other medicines with high performance while fewer side effects. 10 The Verbenaceae plant family includes various plant genus and species, most of which have been traditionally utilized as remedies for some disease. 11 Lantana camera from the plant Verbenaceae family is endemic of Africa and America. The leaves of this plant are effective on the treatment of bellyache, wounds, rheumatism, pain in a tooth, pneumonia, and other ailments. 12 Furthermore, L. camara has several biologically active compounds. Moreover, many terpenes, fatty acids, and flavonoids have been extracted from this plant in phytochemical studies. 13,14 Additionally, this plant is claimed to have anti-protozoal, 12 anti-bacterial, anti-fungal, 12,13 antioxidant, 14 insecticidal, 15 and anti-viral 16 activities, as well as allelopathic properties. 17 Similarly, the major essential compounds of L. camara are γ-curcumin (6.3%), Davanone (7.3%), germacrene D (10.9%), α-humulene (11.5%), and β-caryophyllene (23.3%). 18 Considering the above-mentioned explanations, this research aimed to investigate the anti-mutagenic activities of the L. camera extract applying mutant Salmonella typhimurium through the Ames test. Plant Material Different parts of L. camara were prepared from the National Botanical Garden of Iran (Tehran Iran) in Spring 2018. Plant Extract Different parts of the plant were prepared and dried in shadow. Then, they were powdered, and 50 g of them were converted to the extract by adding alcohol (Methanol 80%) using the percolation method. In addition, the extracts were concentrated by a rotary system at 40°C (the concentrated extract was about 5 g), dehydrated in the oven (40°C), and finally, their anti-mutagenic activities were investigated based on the aim of the study. 19 Bacterial Strains The histidine auxotrophic mutant strains (His-) of Salmonella typhimurium (TA100) were obtained from the Laboratory of Microbiology of Kharazmy University (Tehran, Iran) and used to determine the occurrence of base-pair mutations. 
These mutant strains cannot grow on a minimal mineral medium, and only those bacteria having mutated to wild (His+) type by the reverse mutation in the presence of a mutagen (Sodium azide, NaN 3 ) can grow on this medium. Therefore, the presence of an anti-mutagenic substance (e.g., a plant methanolic extract), along with the mutagen (i.e., NaN 3 ) can reduce the rate of the reverse mutation. Anti-mutagenic Activity Assay The anti-mutagenic effect of the extract was evaluated by the Ames method using the mutant strain of Salmonella typhimurium (TA100) in the presence of NaN 3 and counting grown colonies indicating the incidence of a reverse mutation. The mutant Salmonella typhimurium strain (TA100) that requires histidine for growing in minimal media is suitable for measuring the antimutagenic activity of mutagenic substances. [20][21][22] In this phase, the anti-mutagenic effect of the extract was evaluated by adding S 9 (The sterile extract of the mouse liver containing microsomal enzymes). The cytochrome oxidase enzyme (P450), which inactivates oxidant and toxic compounds, can be found in the membrane of liver cells, especially the endoplasmic reticulum membrane). Thus, the metabolic and antimutagenic activities of compounds are strengthened in the presence of the microsomal extract of the liver (S 9 ). The concentration of 1% or 1 μg/mL of the concentrated extract was used because of its suitability for assaying antimutagenic activity without killing the bacteria. Then, the anti-microbial activity of the methanolic extract against Salmonella typhimurium was assessed by the microbial culture and the disk diffusion method to obtain the minimum inhibitory concentration, which was obtained 6.25 μg/mL for both leaf and flower extracts. Further, Dimethyl sulfoxide was considered as the solvent. 19 Next, the anti-mutagenic test was performed by adding the plant extract (0.5 mL) to the fresh overnight culture (0.5 mL) and the Histidine-Biotin solution (0.5 mL) containing top agar (10 mL) and NaN 3 (1.5 μg) in a test tube. The contents of this tube were shaken for 3 seconds by a shaker and then evenly spread on the entire surface of the minimal glucose agar medium. Then, the experiment was repeated three times, and petri dishes were placed in an incubator (at 37°C for 24 hours). 21,22 The positive control contained NaN 3 (1.5 μg) as a mutagen per plate, and plates without any NaN 3 or the plant extract, which only consisted of 0.5 mL sterile distilled water, were considered as the negative control. After the incubation, grown colonies were counted per plate. [21][22][23] In the second experiment, 0.5 mL of the S 9 compound (prepared from the laboratory complex of the Islamic Azad University of Tehran, Science and Research Branch) was added to all plates. Calculating the Percentage of Mutation Inhibition The average number of grown colonies per plate was determined, and the mean mutation inhibitory activity was calculated by the "Ong" formula. 24 This formula computes the percentage of mutation inhibition based on the number of the grown colonies per plate as follows: Percentage of inhibition = [1-T/M] ×100 where T denotes the numbers of the grown colonies each petri with the mutagen and the plant extract, and M represents the number of grown colonies in the plates of the positive control. The mutagenicity of NaN 3 without the extract (positive control) was considered as 100% growth (i.e., 0% mutation inhibitory activity). 
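The Ong formula above is easy to apply directly to plate counts. The following minimal sketch, using illustrative colony counts rather than the measured data, averages the three replicate plates per condition and returns the inhibition percentage.

```python
# A minimal sketch of the "Ong" formula quoted above: inhibition (%) = (1 - T/M) * 100,
# where T is the mean revertant colony count with mutagen + extract and M is the mean
# count of the positive control (mutagen only). The counts below are illustrative,
# not the measured data of this study.

def percent_inhibition(treated_counts, positive_control_counts):
    t = sum(treated_counts) / len(treated_counts)                      # mean colonies, extract + NaN3
    m = sum(positive_control_counts) / len(positive_control_counts)    # mean colonies, NaN3 only
    return (1 - t / m) * 100

if __name__ == "__main__":
    positive_control = [820, 830, 825]   # hypothetical counts, three replicate plates
    leaf_extract = [200, 205, 198]       # hypothetical counts, three replicate plates
    print(f"Inhibition: {percent_inhibition(leaf_extract, positive_control):.1f}%")
```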
21 Finally, the anti-mutagenic activity was categorized as moderate (25%-40%) or strong (>40%). 18,21,25 Analysis of Data The findings are presented as the mean ± standard deviation of three replicates per sample in each experiment. Any significant difference between the mean numbers of grown colonies per petri dish was analyzed with SPSS statistical software (version 22) using one-way analysis of variance, with the significance level set at P < 0.05. Results The present study assessed the anti-mutagenic activities of the plant extracts using the mutated Salmonella typhimurium (TA100) strain. Positive control plates containing sodium azide (NaN3) were used to induce reverse mutations; NaN3 converts a number of the mutant bacteria into wild types, which can grow on the minimal mineral medium without histidine. Negative control plates containing only distilled water without NaN3 were used to assess spontaneous mutations; the resulting colonies indicated that a few bacteria in the medium spontaneously mutated and became wild. In this case, the number of colonies was extremely low compared with the positive control (Figure 1, a-b) and is negligible. The number of grown colonies in plates containing the extracts was lower than in the positive control, owing to the anti-mutagenic activities of the plant extracts, which inhibited the reverse mutation of the bacteria in the presence of NaN3 (Figure 1, c-f). The numbers of grown colonies and the percentages of mutation inhibition calculated by the Ong formula 24 are presented in Table 1 (table note: S9 is the sterile extract of the mouse liver containing microsomal enzymes; * significant difference at P = 0.026 at the 5% level; ** significant difference at P = 0.018 at the 5% level). Anti-mutagenic Activity of Lantana camara Leaf Methanolic Extract The statistical results for the leaf methanolic extract in the absence of S9 showed that the mean number of grown colonies (201.66 ± 4.83 CFU) significantly decreased compared with the control (P < 0.05), and the mutation inhibition percentage was calculated as 75.59 ± 0.73. Additionally, strong anti-mutagenic activity (above 40%) was observed, based on the inhibition percentage obtained with the standard "Ong" formula. 24 In addition, the mean number of grown colonies for this extract in the presence of S9 showed a significant decrease (125.66 ± 1.37 CFU, P < 0.05) compared with the positive control, with a mutation inhibition percentage of 84.79 ± 0.17 (Figures 2 and 3). As shown in Figure 2, the anti-mutagenic effect with S9 was higher than without S9. Anti-mutagenic Activity of Lantana camara Flower Methanolic Extract The mean number of grown colonies significantly decreased (416.66 ± 3.38 CFU) in the presence of the flower methanolic extract and the absence of S9 (P < 0.05) compared with the positive control. The mutation inhibition percentage was 49.57 ± 0.55, and the anti-mutagenic activity was above 40%, although it was significantly lower than that of the leaf extract (P = 0.018). In the presence of S9, the mean number of grown colonies for the flower extract showed a significant decrease (311.33 ± 1.68 CFU, P < 0.05) compared with the control. The percentage of mutation inhibition was 62.32 ± 0.23, and the anti-mutagenic activity was also above 40%, although it was significantly lower than that of the leaf extract (P = 0.026).
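For illustration only, the classification thresholds and the statistical comparison described above can be sketched as follows; the study itself used SPSS, and the replicate counts shown here are hypothetical.

```python
# Hedged sketch of the analysis described above: classify the inhibition percentage
# (moderate 25-40%, strong >40%) and compare mean colony counts with a one-way ANOVA
# at the 5% level. scipy stands in for SPSS purely for illustration.
from scipy.stats import f_oneway
import statistics as st

def classify(inhibition_pct):
    if inhibition_pct > 40:
        return "strong"
    if inhibition_pct >= 25:
        return "moderate"
    return "weak/none"

positive_control = [820, 830, 825]   # hypothetical replicate counts (NaN3 only)
leaf_extract = [200, 205, 198]       # hypothetical replicate counts (NaN3 + extract)

print(st.mean(leaf_extract), st.stdev(leaf_extract))   # mean and SD over the three plates
print(classify(75.6))                                  # e.g. the reported leaf inhibition -> "strong"
f_stat, p_value = f_oneway(positive_control, leaf_extract)
print("significant at the 5% level:", p_value < 0.05)
```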
Discussion The bacterial reverse mutation assay is a simple, rapid, and inexpensive assay for detecting the mutagenic and anti-mutagenic activities of different substances. Damage to DNA by mutagens may be the main cause of most genetic defects and cancers, and the anti-mutation and anti-cancer activities of plants are attributed to their secondary metabolites. 26 The plant structures of L. camara contain many of these compounds, which account for several medicinal properties used in treating diseases such as cancers, measles, chickenpox, asthma, edema, high blood pressure, eczema, eye infections, tetanus, and malaria. 27 The present findings showed that L. camara methanolic extracts had anti-mutation activity in the Salmonella typhimurium reverse mutation assay (Ames test), which is in line with the results of Zare et al on the anti-mutation and anti-cancer activities of two species from the Verbenaceae family (Lippia genus), namely Lippia citriodora and Lippia nodiflora, activities that were attributed to their flavonoids and essential oil components. 28 Furthermore, Begum et al reported the existence of flavonoids among the components of L. camara. 29 Our results also agree with those of Ghasemian et al, who demonstrated the effects of secondary metabolites on the anti-mutagenic and anti-oxidant activities of the pomegranate peel extracts of two Iranian cultivars using the Ames test, and who suggested that the flavonoid compounds in these plants were responsible for these activities. 21 Additionally, Ruberto and Baratta reported the anti-oxidant and anti-cancer activities of phenolic compounds. 30 Phenolic compounds are major constituents of the plants of the Verbenaceae family, including L. camara, 31 which can explain its anti-mutagenic activity. In another study, Vicuña et al concluded that essential oils or fatty compounds (e.g., terpenoids) in the Verbenaceae family are responsible for anti-tumor and anti-carcinogenic effects by augmenting DNA repair mechanisms. 32 Moreover, Sefidkon indicated that the vegetative and reproductive parts of L. camara planted in Iran contained essential oils and fatty compounds including β-caryophyllene (14.0% and 22.5%), sabinene (16.5% and 7.3%), 1,8-cineole (10.0% and 6.0%), humulene (6.0% and 10.8%), and bicyclogermacrene (8.1% and 18.5%). 33
Therefore, the presence of these essential oils may also partly account for the observed anti-mutagenic and anti-cancer activities of L. camara. According to our results, the leaf extract demonstrated the highest anti-mutagenic effect in the presence of S9 (+S9). The effective compounds (essential oils, flavonoids and the like) are probably more abundant in the leaves than in the flowers, or the types of compounds present in the flowers may differ from those in the leaves; this needs to be studied and analyzed in future work. Conclusion Overall, the findings of this research showed that the methanolic extracts of the flowers and leaves of L. camara had potent anti-mutagenic activity against Salmonella typhimurium. These activities are probably related to the flavonoids and various fatty compounds present in this plant. The anti-mutagenic activity was higher in the leaf extract than in the flower extract. It is therefore suggested that future studies directly investigate the anti-cancer activities of this plant on human and animal cancer cell lines. Ethical Approval Not applicable; ethical approval was not required because only bacterial samples were used in this study. Conflict of Interest Disclosure The authors declare that there is no conflict of interest.
Design Methodology for a Magnetic Levitation System Based on a New Multi-Objective Optimization Algorithm Multi-objective (MO) optimization is a developing technique for increasing closed-loop performance and robustness. However, its applications to control engineering mostly concern first or second order approximation models. This article proposes a novel MO algorithm, suitable for the design and control of mechanical systems, which does not require any order reduction techniques. The controller parameters are determined directly from a special type of rapid analysis of simulated transient responses. The case study presented in this article consists of a magnetic levitation system. Certain difficulties such as the nonlinearity identification of the magnetic force and duo magnetic field sensor scheme were addressed. To point out the advantages of using the developed approach, the simulations as well as the experiments performed with the help of the created algorithm were compared to those made with common MO algorithms. Introduction Any engineering system goes through a design stage. A general rule is to try to resolve as many obstacles as possible during the design stage-this includes controller tuning. Certain circumstances may exist that interfere with online tuning. For example, the plant may have to be put offline. This may cause unnecessary stalls for a production line, which this particular system is a part of, which, depending on the process at hand, could be costly. Another case may be a long loop time-some processes in industry take a matter of days or longer. During the tuning process, it is naturally required to have several step-responses for a better quality of the tune. Finally, it is beneficial to have an idea of a good tune before the final implementation, which may lead to a better optimization overall. The problem of applying and optimizing a proper controller is one of the central tasks of engineering. Whether this is a classic PID controller or a nonlinear one, the goal is to help the physical process at hand run within the needed boundaries and properties. A revisit of the classic control theory and a re-evaluation of its possibilities with modern computing powers provides for a new controller tuning method and a powerful tool for design optimization. Our method of controller tuning deals with a plant in the design stage. An algorithm was developed which performs a search in the parameter space with all the main transient response characteristics being rapidly estimated by it. It is based on fast computations of the inverse Laplace transform and curve analysis of the continuously generated stochastic Laplace images. The idea behind this method appeared while working with a magnetic levitation system. A demand for a more efficient tune of a PID controller during the design stage led to the development of the algorithm described in this work. PID controllers are widely used in various industrial applications. The effectiveness of such controllers depends on tuning, which is essentially an optimization problem. To address this task, a number of tuning methods have been developed ever since the appearance of these controllers [1,2]. Materials and Methods: The Magnetic Levitation System The magnetic levitation set-up involves a well-known phenomenon often used for control system studies. Recently, it has become important in a very wide range of industrial applications where magnetic suspension techniques can be profitably applied. 
The best known ones are high-speed ground transportation [9,10] and high-speed bearings with reduced noise and friction [11,12]. Levitation can in general be achieved in two ways. The first one is using AC in a primary coil. As a result a current is induced in a secondary coil, which is repelled from the primary one [13]. The height of the levitation can be controlled by amplitude and/or frequency of the AC. The second method is using DC current in the primary coil and a permanent magnet or a piece of ferromagnetic material as the levitated object. If a permanent magnet is being used, both attraction and repulsion are feasible. If however a ferromagnetic material is being used for levitation, only attraction is possible. From a control theory point of view it makes a difference whether attraction or repulsion is used in a magnetic levitation project (see Figure 1). [9,14]. In the case of repulsion, the open loop system is stable. If, however, attraction is being used, the system is open loop unstable and is useless without a proper closed-loop design. In order to study magnetic levitation control, various experimental setups have been used [15,16] but the most commonly used experimental setup is the one shown in Figure 2 [17,18]. The main goal of such systems is to make the permanent magnet levitate at a desired height. Various approaches known from control theory have been used for this purpose (root locus [19,20], state space [21], disturbance rejection control [22], fuzzy control [23], sliding mode control [24], fuzzy sliding mode control [25], robust control [26], neural network control [27], various nonlinear approaches [28,29]). The authors of [30] deal with dynamical uncertainties and exterior perturbations in a magnetic levitation system using a real-time prescribed performance control. This allows for chattering reduction and faster convergence to the equilibrium point. In [31], an analytical method using Lagrange equations for the analysis of magnetic levitation (MagLev) systems is proposed. This provides for an interesting MagLev model which distinguishes the primary and induced currents and also the equilibrium height of the levitating object on the input voltage through the mutual inductance of the system. Before we discuss the designed controller tuning and system design method, let us describe the constructed magnetic levitation system. Let us start with the development of the system's model. To do that, we use the classic block diagram representation shown in Figure 3. The z d is the desired position of the permanent magnet (input signal), while z is the actual position (output signal). The G c , G a , G o and G s are the transfer functions of the corresponding parts of the experimental setup. Application of block diagrams leads to very descriptive relationships, especially in the case of automatic feedback systems. To acquire the model of the system we need to determine all of the expressions behind these blocks. Object By far the most difficult element to model is the object. Its input signal is the voltage U c on the coil and the output signal is the position of the magnet z. First, we should analyze the force acting upon a permanent magnet in a magnetic field. Determination of the Magnetic Force To determine the magnetic force F m , we used the experimental setup shown in Figure 4. An electromagnetic coil is a length of wire wound in a joined sequence of concentric rings through which an electric current i flows. 
The magnetic field B (or its component B z ) of a single ring can be obtained by the application of the Biot-Savart law, where µ 0 = 4π · 10 −7 H/m is the magnetic permeability of vacuum and R is the radius of the coil's winding. A permanent magnet can be modeled as a collection of many microscopic current loops (magnetic dipoles). The net effect of these small current loops is a surface current i m , which is called the Amperian current [32]. Let the current loop (magnetic dipole) have a magnetic moment of µ = µ x , µ y , µ z and be in a uniform magnetic field B = B x , B y , B z . If the loop is small enough, then the torque acting on such a loop is given by a simple expression [33]: therefore, the force F acting on a magnetic dipole in this field is the gradient of the potential energy associated with this torque: As magnetic dipole moment µ has only a vertical component (µ z in this case), which is constant, we can write the vertical component of the force F z as follows: The magnet was attached to the plastic pedestal using adhesive tape. The pedestal was made with a screw so that it is easy to put the magnet at a desired height z. The pedestal was put on scales. When there is an attractive force between the electromagnet and the permanent magnet, the reading on the scales is lower. In this way, a matrix of possible heights z (measured from the lower end of the electromagnet) and electrical currents i through the coil was formed. The results are given in Table 1. The relationship between these variables can be obtained using any available curve fitting tool. The data from Table 1 must be imported and then fitted using a custom function f in the form of F m = f (i, z), where i and z are independent variables of current and coordinate and F m is the dependent variable of magnetic force. In many papers dealing with the magnetic levitation, the force of the coil is approximated by the formula F m = const · i/z 2 . This yields acceptable results in some cases but in this research we paid extra attention to the accuracy of the acquired expression. For a real magnetic coil, it is often quite difficult to acquire an explicit expression of the magnetic field. One could be surprised by the scale of difficulties appearing on this path. For objects with an infinite dimension, explicit formulas usually do exist since it is possible to perform a limit passage. Another possibility is the geometry of the given current contour having a symmetry axis (like the solenoid when the field is calculated at some point on the coaxial line of the magnet). One of the obstacles to consider is the fact that the majority of the formulas are for a magnetic force applied to a material point. In the case of a magnetic force applied to a permanent magnet things are even more complicated due to the shapes of the interacting magnetic objects as effects such as mutual inductance have to be taken into account. In the case of non-simplest volumetric bodies it is often impossible to integrate the expression (1) analytically. Numerous approaches exist to simplify this process such as Maxwell's Method [34]. This is why, in engineering, numerical approaches such as the finite elements method or the boundary integral equations method are often implemented. However, in our research we require an explicit expression for the force applied to the permanent magnet. 
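As an illustration of the fitting step just described, the sketch below fits two candidate force models to (i, z, F) samples with scipy and compares them by the sum of squared errors. The sample points are placeholders (Table 1 is not reproduced here), and the second model is only an assumed dipole-gradient form suggested by Equations (1) and (4); the paper's exact expression (5) may differ.

```python
# Hedged sketch of the curve-fitting step described above. The (i, z, F) samples are
# placeholders, and f_dipole is an assumed single-ring dipole-gradient form implied by
# Equations (1) and (4); the paper's expression (5) may differ from it.
import numpy as np
from scipy.optimize import curve_fit

R = 0.037  # mean winding radius, m (given in the text)

def f_simple(X, a):
    # The commonly used approximation F = a * i / z**2 mentioned above.
    i, z = X
    return a * i / z**2

def f_dipole(X, a):
    # Assumed dipole-gradient form: F = a * i * |z| / (R**2 + z**2)**2.5
    i, z = X
    return a * i * np.abs(z) / (R**2 + z**2) ** 2.5

# Placeholder measurements: currents (A), heights below the coil (m), forces (N).
i = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 0.2, 0.4, 0.6, 0.8, 1.0])
z = np.array([-0.020] * 5 + [-0.030] * 5)
F = np.array([0.010, 0.020, 0.030, 0.041, 0.051, 0.006, 0.012, 0.018, 0.025, 0.031])

for model in (f_simple, f_dipole):
    (a_hat,), _ = curve_fit(model, (i, z), F, p0=[1e-3])
    sse = np.sum((model((i, z), a_hat) - F) ** 2)   # sum of squared errors, as in Table 2
    print(model.__name__, "a =", a_hat, "SSE =", sse)
```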
It is safe to assume that a model with an explicit formula representing a closer geometry to the original set up would provide a better fit for the given measured data; we show that in Table 2. Here, we establish that one of the best fits for our data is, in fact, the expression using (4): where µ 0 = 4π · 10 −7 H / m is the magnetic permeability of vacuum, µ z is a magnetic dipole moment of the permanent magnet and a is a constant related to the length of the coil L and the winding turns per unit length n. The length of the coil is L = 54 mm, while the mean radius of the coil's winding is R = 37 mm. The permanent magnet has a form of a cylinder with 4 mm radius and 5 mm height. For a neodymium magnet of this size, the vertical component of the magnetic moment can be estimated to have a value of |µ z | = 0.49 A · m 2 . As a means of comparison we chose the commonly used functions: Using curve fitting software, we determined the value of parameter a and also the plausibility of the fit as a sum of squared errors (SSE). The results are summarized in Table 2. The fit using expression (5) has a sum of squared errors of 5.6 × 10 −5 , which is by two orders of magnitude lower than the most used fit ax/y 2 . Therefore, expression (5) provides for a much better approximation. Experimental Determination of Coil Resistance and Inductance The electromagnet used in our experiment was a reel of PU enamelled, unjacketed copper wire. Its length was 230 m and the cross sectional area was 0.246 mm 2 . The resistance of the coil R c can easily be measured using a multimeter. In the case of our coil it was R c = 16.3 Ω. In order to determine the inductance of the coil L c , a resistor R r = 468 Ω was added in series to the coil as shown schematically in Figure 5. The ratio between output voltage U out (t) and input voltage U in (t) can be obtained by a simple voltage divider rule, The ratio of amplitudes of the output and input voltages is, therefore, given by: By a simple algebraic manipulation, we get the following result: The actual waveforms for both U in (t) and U out (t) are shown in Figure 6. For the signal U in = 5.12 V and ω = 6280 rad/s we get the U out = 2.87 V. Finally, using Equation (9) we get L c = 52.1 mH. The Transfer Function of the Object With all the parameter values known, we can now create the model of the object, the input of which is the voltage while the output is the current i(t). If we neglect the viscous resistive forces of air, the only forces acting on the magnet are the gravity and the magnetic force. The movement in the vertical direction (z-axis) can then be described by the following differential equation: where m = 3 grams is the mass of the magnet, g is the gravitational acceleration and the force F z is given by the expression (5). Letẑ(t) be the relative change of coordinate z from the initial state z 0 : Zero initial valueẑ(+0) = 0 : satisfies the rule for differentiation of originals for the Laplace Transform [5]. It is necessary to linearize the force F m in Equation (11). We select the operating point to be the equilibrium state at z 0 = −25 mm. Using (5) to calculate the initial current from equation We expand the function F z in a Taylor series at the point (i 0 , z 0 ), whereî = 0,ẑ = 0. Since the operating point is chosen in a way that mg = F z (i 0 , z 0 ) we rewrite the Equation (11) as where and The Laplace Transform of the Equation (13) is Note that, with this system and the direction of the Z-axis, a and b must be greater than 0. 
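Returning to the inductance measurement described above, the amplitude ratio can be inverted for L_c in a few lines. The sketch assumes the divider output is taken across the coil (its resistance R_c in series with L_c); under that assumption it reproduces the reported value of about 52.1 mH.

```python
# Minimal sketch of the inductance calculation described above, assuming the divider output
# is measured across the coil (R_c in series with L_c). If the output were taken across the
# series resistor instead, the expression for L_c would change.
import math

R_c = 16.3               # coil resistance, ohm
R_r = 468.0              # series resistor, ohm
omega = 6280.0           # angular frequency, rad/s
U_in, U_out = 5.12, 2.87 # measured amplitudes, V

k2 = (U_out / U_in) ** 2
# |Z_coil|^2 / |Z_total|^2 = k^2  =>  (omega*L)^2 = (k^2*(R_r+R_c)^2 - R_c^2) / (1 - k^2)
omega_L = math.sqrt((k2 * (R_r + R_c) ** 2 - R_c ** 2) / (1 - k2))
L_c = omega_L / omega
print(f"L_c = {L_c * 1e3:.1f} mH")   # ~52.1 mH, matching the value reported above
```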
The transfer function of Equation (13) is, therefore: Combined with the expression (10) the resulting transfer function of our magnetic levitation system is Actuator As shown in Figure 3, an actuator is an element between the controller and the object. In our case, its primary role was to supply proper voltage and current for the electromagnet. It is shown schematically in Figure 7. It was made of an optocoupler (LED-phototransistor pair and a MOSFET driver) and a MOSFET. The MOSFET was in series connection with the electromagnet. A snubber diode was added in order to suppress the transients, which appeared due to the pulse width modulated input signal U PW M (which is the output voltage of the controller shown in Figure 7). The maximum value of U PW M is equal to 3.33 V. The voltage U cc was selected to be 16.2 V, so that the maximum current through the electromagnet is 1 A. The transfer function of the actuator is, therefore, equal to: Sensor The role of the sensor is to detect the actual position of the permanent magnet. In general it can be detected in two ways as shown in Figure 8. The first and the most common one is the application of a photo-emitter and a photoreceiver [17,25,28,[35][36][37] or some similar arrangement based on optical means [22,38] The second one is the application of one or two Hall sensors [39,40]. Sometimes an inductive sensor is used [41,42]. In our case, a pair of SS49E Hall sensors have been used. The idea behind using a Hall sensor is to detect the magnetic field of the permanent magnet. When it moves closer to the electromagnet, an increase in the magnetic field can be detected. The problem is, however, that the electromagnet generates a magnetic field of its own as well. When the current through the electromagnet is changed due to the varying control signal, the sensor cannot tell if the magnetic field changed due to the changed current through the electromagnet or due to the movement of the permanent magnet. If a pair of such sensors is used (one at the upper and one at the lower end of the electromagnet), their output signals can be subtracted and the resulting signal is due to the magnetic field of the permanent magnet alone. In order to get this signal the circuit shown in Figure 9 is used. It is composed of two operational amplifiers. The first one is used for subtraction and the second one for amplification. The circuit output for various distances of the permanent magnet is given in Table 3. The dependence between the voltage U s and the permanent magnet position can then be obtained in a similar way as in the case of the magnetic force F m . Using a curve fitting software, we get a reasonably good approximation with: At this stage, we could also make linearization and get the G s denoted in Figure 3, but it is easier to include z = f −1 (U s ) in the controller. Controller The role of the controller is played by the Arduiuno Due microcontroller. In reality, the block diagram of the whole system shown in Figure 3 should be modified as shown in Figure 10. The microcontroller has two inputs. One is the sensor voltage U s , which can be used to obtain the permanent magnet position z using The other one is the desired position of the permanent magnet z d , which can be entered into a microcontroller via a Serial Monitor (serial communication with a PC). With the subtraction of z from z d we get the error signal. The error signal is the input for G c (block Contr. Alg. in Figures 3 and 10). The control algorithm is the common PID algorithm. 
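A minimal sketch of this control loop is given below: the sensor voltage is mapped back to a position by inverting the calibration curve (here by interpolation, since the fitted formula is not reproduced), the error z_d − z drives a parallel-form PID law, and the output is clamped to the PWM voltage range. The gains and calibration points are placeholders, not the values used on the real rig.

```python
# Hedged sketch of the control loop described above. Calibration samples and gains are
# placeholders; the clamp corresponds to the 0-3.33 V PWM range mentioned in the text.
import numpy as np

# Hypothetical calibration samples (position z in mm, sensor voltage U_s in V), assumed monotone.
z_cal = np.array([-40.0, -35.0, -30.0, -25.0, -20.0, -15.0])
u_cal = np.array([0.4, 0.7, 1.1, 1.6, 2.3, 3.1])

def position_from_sensor(u_s):
    # z = f^-1(U_s): invert the calibration table by interpolation (u_cal must be increasing).
    return np.interp(u_s, u_cal, z_cal)

class PID:
    def __init__(self, kp, ki, kd, dt, u_max=3.33):
        self.kp, self.ki, self.kd, self.dt, self.u_max = kp, ki, kd, dt, u_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, z_d, u_s):
        error = z_d - position_from_sensor(u_s)      # error signal z_d - z
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(u, 0.0), self.u_max)          # clamp to the PWM voltage range
```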
The control signal (output of the controller) is a pulse width modulated signal. Problem Statement Providing the underlying calculation method is fast enough, one can look at the tuning problem from a different angle. While the classical methods do exist and provide the user with satisfying results, sometimes they seem a little bit locked on to their sequence of actions. It is always a good idea to let the simulation "run free" in terms of possible perturbations, irregularities and parameter values. Real systems tend to yield slightly different results to simulations. This raises an important question-what is the "optimal" controller tuning? Should we stop when have reached acceptable transient response characteristic or is it beneficial to keep going to explore the system's behavior over an area of controller parameters? By area we mean certain intervals within the parameter space (rectangles on a plane, certain cubic areas in 3-dimensions parameter space). In the literature, the transient response analysis mostly comes down to analyzing the known characteristics of a first-or second-order-like mechanical system [43][44][45][46][47]. Usually, systems of higher order are approximated to it by known techniques. The convenience of the second-order-like system approximation, the simplicity of the action-result methodology of adjusting loop gains led to other techniques of step-response analysis being overlooked. Higher order systems do not have such an intuitive correlation between transient response parameters and controller gains. Therefore, a second-order approximation is commonly used. In this section, we describe a new optimization algorithm and test it in a case of the magnetic levitation of a small cylinder with a PID controller as means of keeping it at a desired height. PID controllers are still one of the most commonly used controllers in industrial applications [1]. The idea behind them is intuitive-with the help of examples, one can understand the core principles without referring to the Laplace Transform. There are numerous ways of tuning a PID controller. Many of these methods involve the stepresponse method either by using a process model or an experiment. The output is measured or calculated as a function of time. By analyzing it, a new set of controller parameters is chosen. Many strategies exist in properly applying a PID controller. Ref. [48] suggests a heuristic algorithm using wavelets for online tuning of a gain adapted PID-controlled linear actuator. Permanent magnets are implemented as excitation with the aim of soft landing which increases reliable functionality and component life. In [49], another technique is utilized to achieve tracking and soft landing for electromagnetic actuators. Pre-action is employed to enable the system to avoid power saturation. For constrained processes, it was demonstrated that a PID with anti-windup is able to provide similar or even better results than model predictive control when certain solutions are considered [43]. Conditions on nonlinearity and uncertainty are addressed in [50] so that a high order affine-nonlinear system under an extended PID controller can be semi-globally stabilized with a fast rate of regulation error convergence. First, let us unite the actuator and object into a single plant block. The resulting transfer function of the plant would be G ao (s) = G a (s) · G o (s), and G ao (s) = 3723 Due to negative coefficients of the polynomial in the denominator the system is unstable, hence a controller is needed. 
As seen in Figure 10, since G s (s) · G −1 s (s) = 1, we have a unity feedback control system. The transfer function of the PID controller in a parallel form is G c (s) = K p + K i /s + K d · s, where K p , K i and K d are positive real values. The resulting transfer function of the controlled magnetic levitation system T(s) ends up being where A = 3723, a 3 = 1, a 2 = 312.9, a 1 = −783.3, a 0 = −2.45 · 10 5 . This transfer function is going to be the subject for testing our algorithm. In order to obtain a step response of our system, first we multiply the transfer function (21) by 1/s and then perform the inverse transformation. Since the function (21) is in the form of a polynomial divided by a polynomial of degree lesser than the one in the nominator, then the inverse Laplace transformation of such a function is given by a partial fraction expansion [5]. Therefore, the explicit expression of the signal f (t) and its derivative f (t) at any given point in time is presented. We are going to show how, for this magnetic levitation system represented as a transfer function, our algorithm will provide numerous stable solutions while recording all the necessary step-response characteristics. An array with such data is created in the process which is later used for analysis and optimization. While we do provide the reader with a mathematical background, it is not required from a user to go in-depth into the inner workings of the inverse Laplace transform. Description of the Algorithm Mathematical optimization is a process of finding the best selection from a set of available options such as minimizing a function by choosing different input values. Usually, the input values are bounded. The type of domains and criteria for the best choice can vary largely depending on the type of problem. Therefore, these tasks are a significant part of applied mathematics. The algorithm starts with a calculation of random controller parameters within the limits. • Let M be the number of simulations we would like to perform with a given transfer function. • Assume (K p,min , K p,max ), (K p,min , K p,max ) and (K p,min , K p,max ) to be the limits of the controller parameters, then K p = K p,min + (K p,max − K p,min ) · ρ 1 , where ρ 1 , ρ 2 and ρ 3 are uniformly distributed random numbers. This algorithm can scan any region of parameter space of interest, however for the initial search it is safe to assume that K p,min = K i,min = K d,min = 0. The upper limit, however, requires more attention since the equipment at hand may not handle too large values of the controller parameters. For example, in our case the upper limit of the electrical current in the coil was around 1A, or, in terms of voltage-16.3 V. It is easy to calculate the upper limits with the inverse Laplace transform of the expression where ∆z is the height (difference) a magnet needs to move up by. By substituting the t = 0 in the resulting expression of f (t), one may find whether the modeled response would correspond to that of a real plant. For the given set of controller parameters, the algorithm continues by finding the roots α m of the polynomial in the denominator P(s), which in our case is s(a 3 s 4 + a 2 s 3 + (a 1 + AK d )s 2 + (a 0 + AK p )s + AK i ). If any of the roots are in the right-half plane, then the solution is unstable and we move to a new simulation. However, if all the roots are in the right-half plane the algorithm proceeds with the analysis of the resulting step-response signal. 
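The stability screening step described above can be sketched as follows: random gain triples are drawn within limits, the closed-loop characteristic polynomial is formed from the quoted coefficients, and a gain set is kept only if all of its roots lie in the left half-plane (negative real parts). The ranges below are taken to be the hardware-limited ranges quoted later in the text for K_p, K_i and K_d.

```python
# Sketch of the stability screening described above. The closed-loop characteristic
# polynomial is a3*s^4 + a2*s^3 + (a1 + A*Kd)*s^2 + (a0 + A*Kp)*s + A*Ki; a gain set is
# stable only if every root has a negative real part. Gain ranges assumed to be Kp <= 200,
# Ki <= 250, Kd <= 10, per the limits quoted later.
import numpy as np

A, a3, a2, a1, a0 = 3723.0, 1.0, 312.9, -783.3, -2.45e5

def is_stable(kp, ki, kd):
    poly = [a3, a2, a1 + A * kd, a0 + A * kp, A * ki]
    return np.all(np.real(np.roots(poly)) < 0)

rng = np.random.default_rng(0)
stable = []
for _ in range(10_000):
    kp, ki, kd = rng.uniform(0, 200), rng.uniform(0, 250), rng.uniform(0, 10)
    if is_stable(kp, ki, kd):
        stable.append((kp, ki, kd))
print(len(stable), "stable gain sets found, e.g.", stable[:1])
```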
Before we start, let us introduce the variables and their initial values (see Figure 11). The key parameter of this process is the current recorded reference timet. The two mechanisms for when this value changes are described (see the algorithm's description). • tcurrent time that the algorithm uses for calculations. Starts from zero; • f (t)-the value of the step response signal at the current time t; • f (t)-the value of the step response signal's derivative at the current time t; • ∆t-the time step. This value changes as the algorithm progresses, depending on the step response. You can safely start with a very low value as the algorithm quickly finds the suitable time step for your process. For example, if one can ignore the changes occurring on the time scale of 1 ns, then the initial value can be ∆t = 1 ns. •t-key parameter, the current recorded reference time. The two mechanisms for when this value changes are described later; • n = 0-how many decimal amplitude values the signal has crossed. An auxiliary counter needed for the first mechanism for detectingt; • k = 0-an auxiliary counter needed for the second mechanism for detectingt. Number of local extremums; • t min -the minimum out of all recorded reference timest. Needed for the calculation of the time step ∆t. This parameter helps the algorithm to distinguish the possible rapid oscillations (see Figure 12); • N = 100-a positive integer regulating how fine do we divide the shortest reference time t min to calculate the ∆t. Increase this parameter for additional accuracy; Now, using pseudo code, let us describe the inner workings of the algorithm (see Algorithm 1). The algorithm scans the controller parameters' space while performing the fast inverse Laplace transform calculations. It provides us with an explicit formula for the response of the initial system to a unit step signal. At the same time, it determines important signal parameters which are the number of oscillations before settling time, peak value, overshoot peak time, etc. For example, these signal data allow us to sort out highly oscillating responses. One important feature of the designed algorithm is the fact that it automatically takes into account the Nyquist-Kotelnikov-Shannon theorem. This theorem specifies that a sinusoidal function in time or distance can be regenerated with no loss of information as long as it is sampled at a frequency greater than or equal to twice the frequency of oscillation. As shown above, the second mechanism of determiningt ensures the appropriate level of time resolution. In is important to keep this in mind while using some of the commercially available software. Let us show how, using a software without directly controlling the sampling rate may result in a wrong representation of a step response of a given system. To demonstrate the announced effect, we crafted a special transfer function of the same type as before (21), but A = 4.028 × 4.86 × 10 5 , a 3 = 1, a 2 = 207.7, a 1 = −1257, a 0 = −2.61 × 10 5 ; and K p = 14, K i = 1.6, K d = 30. Assume one needs to know the step response characteristics of the current process represented as this transfer function. For this demonstration, the time step ∆t was manually controlled (see Figure 12). If we select the time step to be larger, like on the first plot-the transient response is a smooth curve with little to no overshoot. However, once we start decreasing the time step, oscillations emerge. At first these oscillations are angular. 
This indicates that the time moments at which we calculate the signal response miss some of the oscillations. This becomes obvious when, after the time step of 81 microseconds (the last two plots), new oscillations stop emerging and no new peaks appear. This means that we have reached the needed accuracy to truly represent this system's step response. On its own, the algorithm chose the appropriate time step in less than 81 microseconds (for this process the last recorded time step was ∆t = 2 microseconds). This feature makes the developed algorithm adaptive to different time scales. Algorithm 1 Processing of the given signal. Using the following algorithm, we gather large statistical data Part 1. while T s = 0 continue until the settling time T s is determined t ← t + ∆t. increasing time by ∆t The first mechanism of recordingt (orange dots on Figure 11): if n ≤ [ f (t)/0.1] < n + 1 then where [ ] represents the floor function n + +;t = (2t + ∆t)/2; when the signal crosses decimal values a reference time is recorded (orange dots on Figure 11). if n + k = 1 then if this is the first a reference timet is recorded, then t min =t; t max =t; assign this value to both t min and t max else t max =t; renewing only the maximum reference time end if end if ∆t = t min /N; calculate time step See Part 2 for further explanation. Part 2. The second mechanism of recordingt involves an extremum search (green dots on Figure 11). Since the function f (t) is continuous, a moment of time when the derivative f (t) changes sign implies a local extremum. if f (t) · f (t − ∆t) < 0 then f (t) changes sign, so a local extremum occurs in (t − ∆t, t) t ex k = (2t + ∆t)/2; k + +; record the local extremum time and the number of them f OA max = max(OA k , OA max ) recording the maximum OA end if if k = 1 then if this is the first extremum. This is done so that the t max does not increase uncontrollably. t max = max(t max , t ex k ); the condition k = 1 is needed so that t max does not increase uncontrollably elset = t ex k − t ex k−1 ; record the time between oscillations for additional accuracy (Nyquist theorem) t min = min(t min ,t); t max = max(t max ,t); in case this time is lower that the current t min or larger than the current value of t max end if end if ∆t = t min /N; calculate time step At last, let us describe the test for finding T s . Algorithm 1 Cont. if f (t) ∈ [0.97, 1.03] then the signal appeared in the 3% range t 3% = t; m = 0; record the time and assign zero to the logic parameter m needed for the following cycle while t ≤ t 3% + t max AND m = 0 do starting from t 3% for the time length of t max we check for the 3% criteria 1.03] then the signal exited the 3% area m = −1; therefore, the test for settling time stops and the algorithm resumes its work end if end while if m = −1 then if the 3% condition did not break once during this test T s = t 3% we have found the settling time end if end if Figure 12. A showcase of modelling a mechanical system without satisfying the Nyquist theorem. For some mechanical systems it may be of great importance to distinguish oscillatory step responses. Such oscillations may, for example, result in unnecessary sideways oscillations which may result in complete loss of control over the object. The developed algorithm, however, requires no prior knowledge about the time scale of the process involved as it determines the appropriate time step ∆t = 2 microseconds automatically. 
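The settling-time test at the end of Algorithm 1 can be summarized in a few lines: once the response enters the 3% band it must remain there for a further window of length t_max, otherwise the search resumes. The sketch below samples an explicit response on a fixed grid purely for illustration, rather than using the adaptive time step of the algorithm.

```python
# Hedged sketch of the 3%-band settling-time test described in Algorithm 1 above.
import numpy as np

def settling_time(f, t_end, dt, t_max, band=0.03):
    t = np.arange(0.0, t_end, dt)
    y = f(t)
    inside = np.abs(y - 1.0) <= band          # within the 3% band around the final value
    for k in range(len(t)):
        if inside[k]:
            window = inside[(t >= t[k]) & (t <= t[k] + t_max)]
            if window.all():                  # the band condition never broke during the window
                return t[k]
    return None

# Example: a decaying oscillation standing in for the computed step response f(t).
f = lambda t: 1.0 - np.exp(-3.0 * t) * np.cos(20.0 * t)
print(settling_time(f, t_end=5.0, dt=1e-4, t_max=1.0))
```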
Multi-Objective Optimization The developed algorithm returns the values of settling time T s , percentage overshoot PO, number of local extremums k and the largest amplitude between the oscillations OA max in the form of Table 4. Using this table, one can easily arrange the column with a certain stepresponse characteristic (overshoot, for example) in ascending order to chose those controller parameters that yielded the preferred value (in our case, the lowest overshoot). The user is provided with enough information to sort out oscillating responses or understand their magnitude, using the parameter OA max . The reason we try to avoid too much overshoot and oscillations is due to the nonlinearity of the system. A large overshoot may result in the inadequacy of the linearized model and instability of the real system. Table 4. Step-response characteristics provided by the tuning algorithm. The value of OA max provides us with the scale of oscillations. If the number of local extremums k ≤ 1, then this value is obviously not defined. In order to compare the effectiveness of the developed algorithm, we used common minimum search methods. One of them is the NSGA-II (Non-dominated Sorting Genetic Algorithm) with overshoot PO, % and settling time T s as the two objective functions. NSGA-II deals with a set of solutions simultaneously which improves the computational speed. Quite often this feature allows it to select several solutions of Pareto set in a single run of the algorithm [53]. To investigate this further, a search using the PSO (Particle Swarm Optimization) algorithm was performed with overshoot PO, % being the single objective function. PSO is an optimization method, which iteratively tries to improve a solution with regard to a given measure of quality. A particle is usually an element in some vector space-in our case (K p , K i , K d ). PSO performs searching via a swarm of particles that updates each iteration. Using simple formulae, each particle moves in a direction depending on its previous best position and the best position among all of the particles in the whole swarm. Optimization ends if the relative change in the objective value over the last iterations is less than a defined function tolerance [54]. For the NSGA-II and PSO the Global Optimisation Toolbox in Matlab has been used. The decision variables of the search (the controller parameters) were limited to K p ∈ [0, 200], K p ∈ [0, 250], K p ∈ [0, 10] due to hardware limitations (23). Figure 13 shows that NSGA-II outputs one point on the Pareto-front. This result stays the same after around one thousand total function evaluations. Usually in these situations, a rule of thumb is to find the single-objective minima to start the algorithm search from. By doing so, one may be able to obtain a wider Pareto set by avoiding a potential local minima. It is possible, however, that one still ends up with a single point, which means there is only one feasible Pareto point. In this study, starting from various points in the controller parameter space did not yield a different result. The search using PSO yielded the same solution as seen in Figure 13. With this, we may conclude that there is no tradeoff curve (Pareto front) because all the objectives are minimized at the same point. The developed algorithm, on the other hand, was not affected by any local minima. 
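Given the table of characteristics returned by the algorithm, the non-dominated (Pareto) gain sets with respect to overshoot PO and settling time T_s can be extracted directly, for example as in the sketch below; the rows shown are placeholders rather than simulation results.

```python
# Illustrative extraction of the non-dominated rows of the results table, minimizing both
# overshoot PO and settling time Ts. The rows are placeholders, not values from Table 4.

def pareto_front(rows):
    """rows: list of dicts with keys 'Kp','Ki','Kd','PO','Ts'. Returns non-dominated rows."""
    front = []
    for r in rows:
        dominated = any(
            (o["PO"] <= r["PO"] and o["Ts"] <= r["Ts"]) and
            (o["PO"] < r["PO"] or o["Ts"] < r["Ts"])
            for o in rows
        )
        if not dominated:
            front.append(r)
    return sorted(front, key=lambda r: r["PO"])

rows = [  # hypothetical entries
    {"Kp": 120, "Ki": 40, "Kd": 5.0, "PO": 12.0, "Ts": 0.80},
    {"Kp": 150, "Ki": 45, "Kd": 6.2, "PO": 8.0,  "Ts": 0.95},
    {"Kp": 180, "Ki": 60, "Kd": 7.5, "PO": 15.0, "Ts": 0.90},
]
print(pareto_front(rows))
```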
While it is not a genetic-based algorithm the computational time was around the same as for the other two methods (simulations showed that all of the three algorithms required around 10 3 function evaluations). Experimental Verification To provide an experimental confirmation, as well as to prove the effectiveness of the designed algorithm, we compared its results to the measurements using the magnetic levitation system, thoroughly described in the corresponding section. First, we made the magnet levitate steadily at z 0 = −25 mm with an empirical tune of (K p , K i , K d ) = (150, 45, 6.25). There are two main reasons behind this. First is to standardize the initial conditions before the start of each experiment. Second is for the system to be more comparable with the linearization. Then, we inserted new PID parameters in the microcontroller software to make sure the system is still stable. If it was, we forced the permanent magnet to rise ∆z = +1 mm, while recording the transient response. The comparison between the experiments and the simulations performed by the algorithm can be seen in Figure 14. The transients are between −25 mm and −24 mm. Both states are very close to the point of linearization, so the system performs fairly well and the results agree with the modeling. The relative values of overshoots related to the measured curves correspond with those obtained by the algorithm. The step responses with larger overshoot are expected to diverge more from the modeled ones. The divergence increases as the permanent magnet travels further from the equilibrium point at which the magnetic force function was expanded into the Taylor series. For this research, as was shown before, Formula (5) has an approximation with the sum of squared errors lower by several orders of magnitude than the most commonly used formulas. However, the linearization still adds some inaccuracy. The real magnetic force changes with distance z by a different law. It is quite difficult to carefully estimate a finite size coil's magnetic force applied to a permanent magnet, when the dimensions of the coil are comparable to the distance to the permanent magnet. Another obstacle would be taking into account the shape of the permanent magnet. These are serious mathematical problems that go beyond this research's goal which is to demonstrate the capabilities of the designed controller tuning and system design algorithm. For the purpose of this research, it is sufficient to use the approximation we presented in Section The Magnetic Levitation System- Figure 14 reflects that. We also discuss the possibility of improving the accuracy of the modeling in the following section. The Multi-Objective Problem for Optimal Coil Parameters Here we show how, by taking into account more information about the magnetic levitation system and varying one or more of the coil's parameters, one may find their optimal values. It is possible to use the data provided by the algorithm as a criterion for optimization. One may look at it as a Monte-Carlo method of sorts. Here we discuss an example of this method's realization. Before, as an approximation we used formula (5) for the magnetic field of the coil: This is, however, the magnetic field caused by just one of the rings within the coil. For that reason, let us perform an integration over the length of the coil. We should point out, though, that any mistake in the formulas at this stage will result in a wrong analysis overall. 
So extra caution should be taken to keep the expressions correct. In our setup, the permanent magnet is situated below the coil, which means z < 0. That is why where L is the length of the coil and n is the density of winding turns per unit length. To get the explicit expression of the magnetic force in (4), the final step is to perform the differentiation of this expression. The linearization of the process involved requires a Taylor transformation, after which Newton's equation is converted to where Now the acquired transfer function has the length of the coil L, the radius of the coil R and the density of winding turns per unit length n within the transfer function. This opens a whole new mathematical problem of optimizing these parameters for best performance. For this, our algorithm with its rapid calculations is a great tool for optimization. This simulation data can be the basis for a multi-objective optimization to determine the best values for these parameters. First, we postulate an optimization criterion which may consist of one feature of the step response or several of them. Then, the algorithm provides characteristics of thousands of modeled step responses that can be visually represented as in Figure 13 using radar charts or a Pareto-optimal front. Conclusions In this study, we presented a framework for designing control systems. For this purpose, an algorithm was created which determines the key features of simulated transient responses. With modern computing power, this algorithm is suitable for multi-objective optimization tasks such as nominal parameters of complicated mechanical systems. The expected curve form of regular stable transient responses opens a possibility of fast curve analysis using the developed algorithm. This, in turn, opens the possibility of collecting large simulation data samples, which can be applied to solve different multi-optimization problems such as optimal coil parameters. A comparison with common genetic search methods showed the competitiveness of the designed algorithm. With the crafted magnetic levitation system as a means of testing, we were able to show the algorithm's versatility. Specific features were discussed, such as linearizing the magnetic levitation force on the permanent magnet as well as curve fitting of the measured data. In this case, the main focus was reducing the overshoot, for which the proposed method was able to provide numerous suitable sets of controller parameters. The transient response behavior, as well as values of overshoot related to the measured curves, corresponded well with those obtained by the simulation. Therefore, this method proved to be highly effective for pre-production purposes. The main benefits of the developed algorithm can be summarized. Coding, applicable for any programming language. An important feature is its immunity to the effect of local minima trapping. As shown in Figure 13, the developed algorithm in the given conditions proved to be versatile and provided multiple suitable solutions to the multi-objective problem between objective functions of overshoot and settling time. In addition, we showed how the algorithm automatically adjusts the size of the time step of the simulated signal to meet the conditions of the Nyquist-Kotelnikov-Shannon theorem. 
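A numerical version of this integration is straightforward and avoids committing to a closed-form expression: the coil's on-axis field is summed from single-ring contributions (Equation (1)) and the force follows from F_z = µ_z ∂B_z/∂z (Equation (4)) via a finite difference. The geometry follows the text (magnet below the coil, z < 0 measured from the coil's lower end); the number of turns is only a rough estimate from the quoted wire length and winding radius, and the example current is illustrative.

```python
# Hedged numerical sketch of the integration over the coil length described above.
import numpy as np

MU0 = 4e-7 * np.pi                     # vacuum permeability, H/m
R = 0.037                              # mean winding radius, m
L = 0.054                              # coil length, m
n_turns = 230.0 / (2 * np.pi * R)      # ~989 turns, rough estimate from 230 m of wire
mu_z = 0.49                            # magnet's vertical dipole moment, A*m^2

def B_z_coil(z, i, n_rings=200):
    """On-axis field at height z (z < 0 below the coil's lower end) for current i."""
    ring_positions = np.linspace(0.0, L, n_rings)   # rings stacked along the coil axis
    dN = n_turns / n_rings                          # turns represented by each ring
    d = z - ring_positions                          # axial distance to each ring
    return np.sum(MU0 * dN * i * R**2 / (2.0 * (R**2 + d**2) ** 1.5))

def F_z(z, i, h=1e-5):
    """Vertical force on the magnet via a central-difference derivative of B_z."""
    return mu_z * (B_z_coil(z + h, i) - B_z_coil(z - h, i)) / (2.0 * h)

print(F_z(-0.025, 0.5))   # force at z0 = -25 mm for an illustrative current of 0.5 A
```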
As we mentioned, this feature is particularly important for open-loop unstable systems (such as the magnetic levitation system) since unsupervised large oscillations may result in unwanted sideways irregularities in motion and instability of the whole controlled system. This method is a basis for solving a multi-objective problem of optimal coil dimensions and proportions. Additional accuracy in the nonlinearity identification process will lead to a wider optimization problem of not only the controller parameters but also key features of any magnetic levitation system.
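As an illustration of the kind of post-processing that the algorithm's output enables, the following is a minimal Python sketch of extracting overshoot and settling time from a simulated step response and keeping only the Pareto-optimal candidate controller tunes. The function and variable names are ours, the placeholder `simulate_step` stands in for the actual closed-loop model of the levitation system, and none of this reproduces the paper's implementation.

```python
import numpy as np

def step_features(t, y, y_final=1.0, settle_band=0.02):
    """Overshoot (percent) and settling time of one simulated transient."""
    overshoot = max(0.0, (np.max(y) - y_final) / abs(y_final)) * 100.0
    outside = np.abs(y - y_final) > settle_band * abs(y_final)
    settling_time = t[np.flatnonzero(outside)[-1]] if outside.any() else t[0]
    return overshoot, settling_time

def pareto_front(points):
    """Indices of candidates not dominated in (overshoot, settling time); smaller is better."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Usage sketch (requires a simulate_step(Kp, Ki, Kd) -> (t, y) model of the
# closed-loop levitation system, which is not reproduced here):
#   gains = [(Kp, Ki, Kd) for Kp in ... for Ki in ... for Kd in ...]
#   feats = [step_features(*simulate_step(*g)) for g in gains]
#   best  = [gains[i] for i in pareto_front(feats)]
```

A brute-force dominance check of this kind is adequate for the few thousand simulated responses discussed above; for much larger samples a sorted sweep would be preferable.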
2023-01-18T16:02:39.035Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "204875ce8ae478b4f4dfaa4a27c262c99f406735", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/23/2/979/pdf?version=1673687882", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cf4db3f3d65478d28d554e8bbfa66166e1969afa", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
52348867
pes2o/s2orc
v3-fos-license
Inhibition of autophagy enhances the antitumour activity of tigecycline in multiple myeloma Abstract Accumulating evidence shows that tigecycline, a first-in-class glycylcycline, has potential antitumour properties. Here, we found that tigecycline dramatically inhibited the proliferation of the multiple myeloma (MM) cell lines RPMI-8226, NCI-H929 and U266 in a dose- and time-dependent manner. Meanwhile, tigecycline also potently impaired the colony formation of these three cell lines. Mechanistic analysis showed that tigecycline led to cell cycle arrest at G0/G1, with down-regulation of p21, CDK2 and cyclin D1, rather than inducing apoptosis in MM cells. Importantly, we found that tigecycline induced autophagy, and the autophagy inhibitor bafilomycin A1 further amplified the tigecycline-induced cytotoxicity, suggesting that autophagy plays a cytoprotective role in tigecycline-treated MM cells. Analysis of the mechanisms modulating autophagy showed that tigecycline enhanced the phosphorylation of AMPK, without decreasing the phosphorylation of Akt, thereby inhibiting the phosphorylation of mTOR and its two downstream effectors p70S6K1 and 4E-BP1. Tigecycline effectively inhibited tumour growth in a xenograft tumour model of RPMI-8226 cells. Autophagy also occurred in tigecycline-treated tumour xenografts, and the autophagy inhibitor chloroquine and tigecycline had a synergistic effect against MM cells in vivo. Thus, our results suggest that tigecycline may be a promising candidate for the treatment of MM. | INTRODUCTION Multiple myeloma (MM) is characterized by the accumulation of malignant plasma cells in the bone marrow and is usually accompanied by the secretion of monoclonal immunoglobulins that are detectable in serum or urine. 1 Combined with autologous stem cell transplantation and improvements in supportive care, the employment of novel drugs such as proteasome inhibitors, immunomodulatory agents and monoclonal antibodies has effectively improved response rates and substantially enhanced overall survival over the past decade. [2][3][4] However, drug resistance resulting in relapse commonly occurs, and MM remains an incurable disease. Therefore, novel therapies are urgently needed. Tigecycline, approved by the FDA in 2005, is the first member of a new generation of tetracyclines called glycylcyclines; it is a broad-spectrum antibiotic used for the treatment of bacterial infections. Its mechanism of action is inhibition of bacterial protein synthesis by binding to the 30S ribosomal subunit. 5 Beyond its role as an antimicrobial, accumulating evidence shows that tigecycline has anticancer properties. It can inhibit the growth and metastasis of multiple types of tumour cells, including acute myeloid leukaemia, 6 gastric cancer, 7 melanoma, 8 neuroblastoma, 9 cervical squamous cell carcinoma 10 and glioma. 11 The anticancer mechanism of tigecycline appears to vary in different tumour types. Besides the inhibition of mitochondrial protein synthesis, other mechanisms, including autophagy, have been found to be involved in its antitumour effects. 7 Autophagy, or cellular self-digestion, is a cellular process by which the cell ensures sufficient metabolites by breaking down its own organelles and cytosolic components when nutrients become limiting. 12 A growing body of evidence demonstrates that autophagy is involved in development, differentiation and tissue remodelling in various organisms. 13 Autophagy is also implicated in certain human diseases, including inflammation, neurodegeneration and cancer. 14
Paradoxically, autophagy can contribute to cell damage but may also serve to protect cells. When autophagy occurs, microtubule-associated protein light chain 3-I (LC3-I) is converted to the membrane-bound form (LC3-II), which is associated with autophagic vesicles and exhibits the classical punctate distribution, serving as a classical protein marker of autophagy. 15 Meanwhile, p62/sequestosome-1 (SQSTM1) is degraded as autophagic flux increases, and this protein therefore serves as another classical hallmark. 16 Mammalian target of rapamycin (mTOR), an evolutionarily conserved serine/threonine kinase, has two structurally and functionally distinct complexes, termed mTOR complex 1 (mTORC1) and mTOR complex 2 (mTORC2), which can tightly regulate autophagy. 17 AMP-activated protein kinase (AMPK) is one of the major stress-sensing enzymes and can actively regulate metabolism and cell proliferation. Prominently, AMPK is also a critical regulator of autophagy. Phosphorylation of AMPK results in inhibition of mTOR, which activates autophagy. 18 In this study, we have demonstrated that tigecycline significantly inhibits the proliferation and colony formation of the MM cell lines RPMI-8226, NCI-H929 and U266 by inducing cell cycle arrest at G0/G1 phase. Additionally, autophagy plays a cytoprotective role in tigecycline-treated MM cells, and the combination of chloroquine and tigecycline synergistically inhibits tumour cell growth in a mouse xenograft model of RPMI-8226 cells. | Cell viability assay Human MM cell lines RPMI-8226, NCI-H929 and U266 were cultured in RPMI-1640 medium supplemented with 8% fetal bovine serum in a humidified atmosphere containing 5% CO2 at 37°C. Cell viability was determined using the Cell Counting Kit-8 (CCK-8) assay according to the manufacturer's protocol (Dojindo, Kumamoto, Japan). Briefly, RPMI-8226, NCI-H929 or U266 cells were seeded at a density of 8 × 10^3 cells/well in 96-well plates. | Colony formation assay Multiple myeloma cells were seeded at about 1 × 10^4 cells/well in 6-well plates with or without tigecycline (20 μmol/L) in methylcellulose medium (MethoCult H4034, Stemcell Technologies, Vancouver, BC, Canada). After incubation for 7 days in a 5% CO2 atmosphere incubator at 37°C, the cells were examined using an inverted microscope equipped with a CCD camera. A colony was defined as a cluster of at least 60 cells, and visible colonies were counted. Cells were then washed with PBS twice and counted using a haemocytometer. | Electron microscopy assay After treatment with 20 μmol/L tigecycline for 48 hours, RPMI-8226 cells were fixed in PBS (pH 7.4) containing 2.5% glutaraldehyde at 4°C for more than 2 hours. The cells were postfixed in OsO4 at room temperature for 60 minutes, then stained with 1% uranyl acetate, dehydrated through graded acetone solutions and embedded. Finally, autophagosomes were observed under a transmission electron microscope (H-7500, Hitachi, Japan). | Western blot analysis After treatment with different concentrations of tigecycline with or without bafilomycin A1 (Baf A1), the cells were collected and lysed immediately using RIPA lysis buffer (Beyotime Institute of Biotechnology) supplemented with PMSF and Halt protease and phosphatase inhibitor cocktail (Pierce, Rockford, IL).
The protein was boiled for 8 minutes in 1× loading buffer and subjected to Western blot analysis using antibodies against SQSTM1/p62, LC3, p-mTOR, mTOR, p-AMPKα, AMPKα, p-p70S6K, p70S6K, p-4E-BP1, 4E-BP1 or GAPDH, as reported previously. 19 The bands were visualized with an enhanced chemiluminescence reagent (Thermo Fisher, Fremont, CA), and the optical densities of the bands were analysed using ImageJ software (NIH, Bethesda, MD). When tumour volume reached approximately 300 mm^3, the mice were randomized into four groups. The vehicle group was given saline, and the treatment groups were injected with tigecycline twice daily (75 mg/kg by intraperitoneal injection), chloroquine daily (50 mg/kg by intraperitoneal injection) (Sigma-Aldrich), or both drugs, respectively. Tumour length and width were measured every 2 days, and the volume was calculated using the formula: volume = length × width^2 × 0.5236. All mice were sacrificed after 14 days. Animal procedures were carried out in accordance with institutional guidelines after the Wenzhou Medical University Animal Care and Use Committee approved the study protocol. | Statistical analysis The data are presented as mean ± SEM and were analysed by one-way ANOVA followed by a post hoc Tukey's test to determine the differences between the groups. Differences were considered significant at P < 0.05. | Tigecycline inhibits the proliferation and colony formation in MM cells Tigecycline has been reported to possess potent antitumour activity against multiple solid and haematological malignancies, which prompted our interest in investigating whether tigecycline has a similar antitumour effect on MM. As expected, tigecycline dramatically impaired the viability of the three MM cell lines tested (RPMI-8226, NCI-H929 and U266) in a time- and dose-dependent manner (Figure 1A). The soft agar clone formation assay is a technique widely used to assess the survival and tumorigenic capabilities of tumour cells. 20 Parallel effects were observed in soft agar assays with the above three cell lines. Tigecycline at 20 μmol/L considerably inhibited colony formation, characterized by small colony size, compared with vehicle-treated cells (Figure 1B). Both colony number and total cell number were significantly reduced by tigecycline in all three MM cell lines tested (Figure 1C). These data suggested that tigecycline potently inhibits proliferation and colony formation in MM cells. | Tigecycline induces cell cycle arrest at G0/G1 phase in MM cells As it has been reported that tigecycline impairs cell viability mainly through inducing cell cycle arrest rather than apoptosis, 11 we first analysed the effect of tigecycline on the cell cycle of MM cells and found that tigecycline treatment led to an increase in the G0/G1 phase with a diminished S phase (Figure 2A), suggesting that tigecycline is capable of inducing G0/G1 arrest to decelerate the cell cycle and prevent the cells from entering the S phase and proliferating. As CDK2 is one of the key kinases controlling the G1/S transition and DNA replication, and p21 is a critical regulator of CDK2, we measured these two proteins and found that tigecycline markedly decreased the levels of CDK2 and p21 in the three MM cell lines RPMI-8226, NCI-H929 and U266 (Figure 2B). Cyclin D1, a major cyclin driving the G1/S phase transition, 21 was also dramatically decreased by tigecycline in all three MM cell lines tested (Figure 2B).
We next analysed whether tigecycline induces apoptosis of MM cells and found that tigecycline induced almost no apoptosis in RPMI-8226 cells (Figure S1). These results strongly indicated that tigecycline impairs the viability of MM cells mainly through cell cycle arrest at G0/G1 phase rather than through apoptosis induction. | Tigecycline induces cytoprotective autophagy in MM cells Tigecycline has been demonstrated to induce autophagy in gastric cancer cells. 7 To explore whether autophagy is also functionally involved in tigecycline-induced cytotoxicity, we analysed the expression levels of LC3 and SQSTM1/p62 and found that tigecycline | AMPK/mTOR signalling pathway is involved in tigecycline-induced autophagy in MM cells The mTOR signalling pathway serves as a central regulator of cell metabolism, growth, proliferation and survival. Recent studies have also revealed that mTOR signalling is tightly related to autophagy. 23 The downstream targets of the mTOR kinase include the eukaryotic initiation factor 4E-binding protein 1 (4E-BP1) and p70S6K1. Inhibition of mTOR signalling leads to dephosphorylation of 4E-BP1 and p70S6K1, which induces autophagy. Therefore, we first evaluated the phosphorylation of these three proteins and found that tigecycline dramatically inhibited the phosphorylation of mTOR and of its two downstream effectors 4E-BP1 and p70S6K in the three MM cell lines RPMI-8226 (Figure 4A), NCI-H929 (Figure 4B) and U266 (Figure 4C). As an upstream positive regulatory role of Akt in mTOR activation has been reported, 19 we evaluated Akt phosphorylation and found that tigecycline slightly elevated Akt phosphorylation in NCI-H929 cells (Figure S2), suggesting that tigecycline-induced inhibition of mTOR signalling does not act through direct modulation of Akt signalling. We subsequently analysed the phosphorylation of AMPK, another important signalling pathway that can regulate mTOR signalling, 24,25 and found that tigecycline significantly enhanced the phosphorylation of AMPK in all three MM cell lines tested (Figure 4). These findings indicate that tigecycline promotes autophagy in MM cells mainly through regulation of AMPK/mTOR signalling rather than Akt signalling. | DISCUSSION The prognosis of patients with MM is still dismal despite improved remissions with novel agents. Tigecycline, an FDA-approved antibiotic, can inhibit the synthesis of bacterial proteins and mitochondrial protein translation. Recently, it has been reported that tigecycline, alone or in combination with other therapeutic agents, can effectively kill, and even eradicate, multiple solid tumours and haematological malignancies. 26,27 In the present study, we demonstrated that tigecycline potently impaired the proliferation of MM cells in a dose-dependent manner. Furthermore, the soft agar assay showed that tigecycline dampened the survival and self-renewal of MM cells in vitro. [Figure legend: RPMI-8226 cells were injected subcutaneously into the right flank of female NOD/SCID mice (n = 5 for each group). When the tumours reached approximately 300 mm^3 in volume, intraperitoneal injections of tigecycline (Ti; 75 mg/kg, twice a day), chloroquine (CQ; 50 mg/kg, daily), saline (Veh), or both Ti and CQ were administered for 14 d; tumour volume and body weight were measured every other day. C, Size and weight of the xenograft tumours were evaluated after the mice were killed. D, The levels of LC3 and SQSTM1/p62 in tumour tissues were determined by Western blot. E, The levels of p21, cyclin D1 and CDK2 in tumour tissues were determined by Western blot analysis. Results are expressed as mean ± SEM, and the images shown are representative of at least three independent experiments. *P < 0.05, **P < 0.01, vs the tigecycline alone group.] apparatus in plasma cells. 29 To ascertain the role of tigecycline-induced autophagy in MM cells, we used Baf A1 as an in vitro autophagy inhibitor and chloroquine as an in vivo autophagy inhibitor. These results revealed that autophagy plays a cytoprotective role in tigecycline-treated MM cells. The combination of an autophagy inhibitor with tigecycline has a stronger antitumour effect on MM cells than tigecycline alone. The molecular mechanism of autophagy involves several conserved Atg proteins. In the present study, we simply detected the level of LC3-II and its degradation substrate SQSTM1/p62 to confirm the occurrence of autophagy. We mainly clarified the upstream mechanisms of autophagy induction and found that the activation of AMPK is implicated in tigecycline-induced autophagy. In contrast to normal cells, MM cells rely heavily on glycolysis and mitochondrial respiration to meet their high demands for energy production and metabolism. [30][31][32] Tigecycline, as a mitochondrial protein translation inhibitor, leads to energy deprivation, and AMPK is a crucial cellular energy sensor. The activation of AMPK subsequently results in the inhibition of mTOR, which regulates protein synthesis, cell growth and proliferation through its downstream effectors 4E-BP1 and p70S6K1. Targeting mTOR has been proved to be effective for MM treatment. 33 Concurrently, the increased activation of Akt may be a consequence of the marked decrease in mTOR activity induced by tigecycline in MM cells. In conclusion, we first showed that tigecycline inhibited MM cell proliferation and growth both in vitro and in vivo through inducing cell cycle arrest at G0/G1 phase. Second, we found that autophagy did occur and exerted a cytoprotective role in tigecycline-treated MM cells, and that an autophagy inhibitor could enhance the efficacy of tigecycline. Therefore, the combination of an autophagy inhibitor and tigecycline might be a promising therapeutic strategy for MM.
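For readers who prefer a scripted version of the statistics described above, the following is a minimal Python sketch of the tumour-volume formula and a one-way ANOVA with a Tukey post hoc test. The group readings are invented purely to make the example runnable, and the helper name is ours; the original analysis was not performed with this code.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def tumour_volume(length_mm, width_mm):
    # volume = length x width^2 x 0.5236, as given in the Methods
    return length_mm * width_mm**2 * 0.5236

# Hypothetical final tumour volumes (mm^3) for the four treatment arms
volumes = {
    "vehicle":     np.array([920, 870, 1010, 950, 890], float),
    "tigecycline": np.array([610, 580, 650, 600, 630], float),
    "chloroquine": np.array([850, 820, 880, 860, 840], float),
    "combination": np.array([380, 350, 410, 400, 360], float),
}

f_stat, p_value = f_oneway(*volumes.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

endog = np.concatenate(list(volumes.values()))
groups = np.repeat(list(volumes.keys()), [len(v) for v in volumes.values()])
print(pairwise_tukeyhsd(endog, groups, alpha=0.05))
```

The Tukey table then reports, for each pair of groups, the mean difference and whether the null hypothesis of equal means is rejected at the chosen alpha.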
2018-09-26T06:26:21.523Z
2018-09-24T00:00:00.000
{ "year": 2018, "sha1": "493a1ae4db7770d6886ff36de532494c5fa6c8d3", "oa_license": "CCBY", "oa_url": "https://www.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jcmm.13865", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "493a1ae4db7770d6886ff36de532494c5fa6c8d3", "s2fieldsofstudy": [ "Biology", "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
2374596
pes2o/s2orc
v3-fos-license
Laparoendoscopic Single-Site Isobaric Hysterectomy in Endometrial Cancer This case highlights the feasibility of laparoendoscopic single-site surgery and isobaric hysterectomy for early clinical stage, low-risk endometrial cancer. INTRODUCTION Laparoscopy is becoming a standard surgical treatment for early stage endometrial cancer, resulting in survival rates similar to laparotomy. 1 Gynecological surgery is a minimally invasive approach, able to both reduce postoperative pain and improve cosmetics in a safe manner. In this context, several laparoendoscopic single-site (LESS) hysterectomies have been described in recent literature for the surgical management of early stage endometrial cancer 2,3 Preliminary data show the feasibility of this approach and the benefits in terms of postoperative pain. Isobaric laparoscopy is a valid alternative to pneumoperitoneum in patients with contraindications to its induction. The isobaric technique allows surgeons to avoid the risks of intraabdominal pressure and to retain the advantages of minimally invasive access. 4 Therefore, the use of the isobaric technique eliminates postoperative shoulder pain, typical of pneumoperitoneum classic laparoscopy. In addition, the isobaric technique also allows the surgeon to perform an epidural anesthesia and continuous suction of blood loss or peritoneal fluids, without compromising the stability of the work chamber. 4 The technique can be applied to any patient, even in the presence of pneumoperitoneum-contraindicating diseases. Several studies have demonstrated that pneumoperitoneum may have hemodynamic, metabolic, and neurologic effects. Pneumoperitoneum causes positive pressure in the abdominal cavity and a reduction of venous return to the heart, resulting in peripheral venous stasis. Moreover, large quantities of CO 2 used for the pneumoperitoneum induction pass in the blood stream causing hypercapnia. As a consequence, the reduced cardiac preload and the strong secretion of stress hormones by the kidneys and adrenals determine heart failure in predisposed patients. Venous stasis increases the thromboembolism risk in patients with blood dyscrasias. Hypercapnia due to insufflation of CO 2 with subsequent respiratory acidosis can affect patients suffering from respiratory pathologies. Hence, the use of pneumoperitoneum has contraindications in high-risk patients affected by severe cardiovascular insufficiency, advanced chronic obstructive bronchitis , glaucoma, blood dyscrasias, obesity, and neurologic diseases. 5 To our knowledge, this is the first report that describes a LESS hysterectomy performed via isobaric technique on a conscious patient who received epidural anesthesia and a TAP (Transversus Abdominis Plane) block anesthesia. The TAP block was developed for postoperative pain control in gynecological and abdominal surgery, but can also be used for analgesia during surgical procedures as described by our team. 6,7 CASE STUDY A 39-y-old woman with untreated chronic hypertension and a body mass index (BMI) equal to 34, affected by early clinical stage endometrioid endometrial cancer, was admitted for isobaric LESS-hysterectomy plus bilateral salpingooophorectomy. The patient underwent a staging magnetic resonance imaging (MRI) and a transvaginal ultrasound with a clinical FIGO result (International Federation of Gynecology and Obstetrics) stage IA. 8 The gasless technique was preferred to standard pneumoperitoneum due to the patient's severe obesity and untreated chronic hypertension. 
The team of anesthesiologists chose TAP-block analgesia because isobaric laparoscopy allows the use of local anesthesia. This technique is known for its positive outcomes in gynecologic surgery, as it minimizes intra- and postoperative opioid use, length of stay, and postoperative nausea and vomiting. 7 The procedure was performed via a multichannel single port (Olympus Winter & IBE GMBH, Hamburg, Germany) inserted into the umbilicus through an open access. The Laparotenser (Lucini Surgical Concept, Milan, Italy) was the surgical instrument used to replace classic pneumoperitoneum-inducing devices: it elevates and retracts the abdominal wall, creating a large intraabdominal operative space. The procedure was performed with the patient in the Trendelenburg position of up to 30 degrees. Initially, 2 curved needles (Pluriplan) with blunt tips were introduced subcutaneously through 2 very small (2-mm) incisions in the supra-pubic skin. The Laparotenser has 2 particular devices: a lifting device that allows the operator to lift with minimum effort and movement, and a divaricator device composed of needles in the terminal part, which can appropriately distribute the tissue tensions during the procedure. The divaricator exploits the elastic properties of the tissue that lies between the needles. In addition, the blunt tips apply no cutting force to the tissue, avoiding trauma. The needles are suspended from a mechanical arm attached to a rigid pillar, and the arm is elevated as far as needed to obtain optimal exposure without using pneumoperitoneum (Figures 1, 2). Intraabdominal visualization was obtained with a 5-mm, 30° telescope with flexible handling (Olympus Winter & Ibe GmbH). Standard straight 5-mm instruments, such as graspers, cold scissors, suction/irrigation instruments, and a multifunctional device that grasps, coagulates, and transects simultaneously (PKS Cutting Forceps; Gyrus ACMI, Hamburg, Germany), were used. Following coagulation of the tubes, an intrauterine manipulator was used. Coagulation and section of the round ligaments were performed to allow entry into the retroperitoneal space (Figure 3). The ureter was visualized, and a hemostatic clip was positioned bilaterally at the origin of the uterine artery and at the ovarian vessels. An adequate margin of the vagina was ensured before the colpectomy was performed using a bipolar hook (Figure 4). The uterus and adnexa were taken out through the vagina with the manipulator in situ. The frozen section of the uterus confirmed the diagnosis of well-differentiated endometrioid endometrial cancer infiltrating < 50% of the myometrium, so pelvic and aortic lymphadenectomy was not performed, in accordance with our internal guidelines. 9 During surgery the patient was conscious and felt no pain. On a scale from 0 to 10, her degree of pain was rated as 0 according to the Numeric Rating Scale. 10 No vascular or visceral injuries and no intraoperative port-site bleeding occurred during surgery. No hematoma or subcutaneous injuries were found upon removal of the Laparotenser tool. The operation time was 150 min, with an estimated blood loss of 30 mL. No wound hematoma, wound infection, or delayed bleeding was observed postoperatively. The patient reported complete satisfaction with the cosmetic appearance and postoperative pain control. During the hospital stay, the patient quantified pain using the Numeric Rating Scale: she declared "2" in the first 6 h after the surgery and "1" in the following hours.
She did not experience the shoulder pain typical of standard laparoscopy, and she never asked for routine analgesic drugs during the postoperative period. She was mobilized and discharged the day after the surgical procedure with optional analgesic therapy. No postoperative complications were reported in the first 30 d. Definitive histology confirmed the frozen section diagnosis. She was declared disease-free after a 9-mo follow-up. DISCUSSION Laparoscopy is becoming a standard surgical treatment for early-stage endometrial cancer, resulting in survival rates similar to those of laparotomy. 1 The advantages of performing total laparoscopic hysterectomy or laparoscopic-assisted vaginal hysterectomy, such as a shorter hospital stay, decreased pain, and quicker resumption of daily activities, have previously been reported. 11,12 Efforts have been made to further reduce the invasiveness of the laparoscopic approach through single-port surgery and so-called mini-laparoscopy, in which instruments of ≤ 3 mm are used. Preliminary studies have confirmed the feasibility of this new technique. 2,3,13 We describe the first total single-port isobaric hysterectomy performed on an early-stage endometrial cancer patient under TAP-block analgesia. The patient declared complete satisfaction with the cosmetic appearance, postoperative pain control and quality of life. No postoperative complications were reported in the first 30 d, and after 9 mo of follow-up, she is disease free. Isobaric laparoscopy could be very useful in endometrial cancer patients with a high BMI and associated comorbidities (cardiovascular disease, advanced chronic obstructive bronchitis, and neurologic diseases). This population of women with early clinical stage endometrial cancer is often excluded from a minimally invasive, pneumoperitoneum-based laparoscopic approach because of pneumoperitoneum-related contraindications. 14,15 The combination of LESS surgery and the isobaric technique might be the solution in all these cases. The present case highlights the feasibility of LESS and isobaric hysterectomy under TAP-block anesthesia for early clinical stage, low-risk endometrial cancer. Intraoperative parameters, such as operation time and estimated blood loss, and postoperative outcomes, such as hospital stay, cosmetic result, and disease-free survival, are comparable to those obtained through standard laparoscopy. We also observed decreased opioid use during surgery and the absence of postoperative anaesthesia-related neuro-vegetative symptoms. These additional benefits make this technique an attractive alternative to classic laparoscopy, which does not allow the use of TAP-block analgesia. Larger prospective studies are needed to confirm these results and to compare gasless LESS procedures with conventional gas-based LESS hysterectomy.
2016-05-31T19:58:12.500Z
2013-04-01T00:00:00.000
{ "year": 2013, "sha1": "29018652ba3727f0efe7e7d4c979a2f201acff82", "oa_license": "CCBYNCND", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3771810/pdf/jls354.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "29018652ba3727f0efe7e7d4c979a2f201acff82", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235168712
pes2o/s2orc
v3-fos-license
Long non-coding RNA FEZF1-AS1 promotes the proliferation and metastasis of hepatocellular carcinoma via targeting miR-107/Wnt/β-catenin axis Hepatocellular carcinoma (HCC) is a public health problem around the world, with the molecular mechanisms being still incompletely clear. This study was carried out to explore the role and mechanism of long-noncoding RNA (lncRNA) FEZF1-AS1 in HCC progression. RNA sequencing and quantitative real time polymerase chain reaction (qRT- PCR) were applied to identify differently expressed lncRNAs in HCC tissues and adjacent normal tissues. CCK8 assay was adopted to test cell proliferation and flow cytometry was taken to detect cell apoptosis. Wound healing assay and transwell experiment were performed to determine cell migration and invasion. To validate the function of lncRNA FEZF1-AS1 in vivo, tumor-burdened models were established. The results showed that lncRNA FEZF1-AS1 level was prominently enhanced in HCC tumor specimens and overexpression of FEZF1-AS1 promoted the proliferation, migration and invasion of HCC cells. In mechanism, overexpression of FEZF1-AS1 reduced the expression of miR-107 which inhibited the activation of Wnt/β-catenin signaling. Overexpression of β-catenin promoted cell proliferation, migration and invasion which were inhibited by FEZF1-AS1 downregulation. In conclusion, our study demonstrated that FEZF1-AS1 promoted HCC progression through activating Wnt/β-catenin signaling by targeting miR-107, which provided a novel target for the therapy of HCC. INTRODUCTION Hepatocellular carcinoma (HCC) is a leading cause of cancer death. It is the main type of primary liver cancer [1,2]. Although HCC therapy is in fast development, such as tissue resection, liver translation, chemotherapy, radiotherapy and biotherapy, patients with HCC still present with unfavorable prognosis and low 5-year survival rates [3,4]. In recent years, the molecular mechanisms underlying HCC have attracted more and more attentions, and many genes play a critical role in HCC development [5]. However, it's still a long road to elucidate fully the molecular mechanisms of HCC. It is of significance to find novel biological markers for the diagnosis and treatment of HCC. Evidence has identified that miRNAs are frequently deregulated in cancers and involve in cancer development by altering the expression of target genes [16,17]. For instance, miR-143 was found to be downregulated in bladder cancer [18] and breast cancer [19]. LncRNAs have been shown to usually act as sponges of miRNAs and then contribute to the development of cancers [20]. Using the bioinformatics methods, miR-107 is shown to be a predict target of lncRNA FEZF1-AS1, but their interaction in HCC waits to be verified. We aimed to detect the regulatory pattern of FEZF1-AS1 in HCC in this study. RESULTS The HCC tumorigenesis always accompanied with alteration of genes, and growing evidence suggested that lncRNAs are critical for cancer procession [21]. To explore the expression profiles of lncRNAs in HCC, the Arraystar Human lncRNA Microarray was performed in 3 paired HCC tissues and the neighboring normal tissues. A prominent alteration in FEZF1-AS1 level was found in the lncRNA profile ( Figure 1A, 1B). The expression level of FEZF1-AS1 was found to be upregulated in HCC tissues as compared to the adjacent normal liver tissues by qRT-PCR ( Figure 1C). We then detected the FEZF1-AS1 expression level in HCC cell lines. 
FEZF1-AS1 showed a higher expression pattern in SUN-182 cells and a lower expression pattern in Hep3b cells among the 4 cells, SNU-182, SNU-398, SNU-449 and Hep3b ( Figure 1D). To investigate the role of FEZF1-AS1 in HCC, it was overexpressed and downregulated in SNU-398 and SNU-449 because these two cells showed moderate level of EFZF-AS1 among the 4 cells ( Figure 1D). qRT-PCR was used to detect the FEZF1-AS1 expression ( Figure 2A) and CCK8 assay was used to assess the cell proliferation. Overexpression or downregulation of FEZF1-AS1 promoted or suppressed cell proliferation of SNU-398 and SNU-449 cells ( Figure 2B, 2C). The flow cytometry demonstrated that FEZF1-AS1 overexpression suppressed the apoptosis of SNU-398 and SNU-449 cells ( Figure 2D-2F). The results of transwell chamber and wound healing assays showed that FEZF1-AS1 overexpression significantly enhanced the invasion ( Figure 2G-2I) and migration ( Figure 3A, 3B) of HCC cells, and downregulation of FEZF1-AS1 caused opposite results. It is believed that EMT is a main cause of cancer cell migration and invasion. Therefore, we assessed FEZF1-AS1 role in HCC cell EMT. Results of western blot showed that FEZF1-AS1 overexpression induced EMT in HCC cells with increased expression levels of ICAM1 and Vimentin and decreased expression level of E-cadherin ( Figure 3C, 3D), while downregulation of FEZF1-AS1 decreased ICAM1 and Vimentin expression and increased E-cadherin expression. To verify the function of FEZF1-AS1 in HCC in vivo, SUN-398 cells with FEZF1-AS1 stable downregulation and/or β-catenin stable overexpression were transplanted into nude mice. Figure 7A showed the histopathological changes of the mice tissues and no cachexia was found after the nude mice were sacrificed. Downregulation of FEZF1-AS1 inhibited tumor growth with smaller volumes and less weights as compared with the sh-NC or normal saline group, whereas overexpression of βcatenin rescued the reduction of the tumor growth caused by shFEZF1-AS1 ( Figure 7B, 7C). Additionally, FEZF1-AS1, wnt3a, β-catenin, ICAM1 and Vimentin expressions were repressed and E-cadherin and miR-107 levels were increased in the sh-FEZF1-AS1 group, whereas these tendencies were neutralized by β-catenin overexpression ( Figure 7D, 7E). DISCUSSION HCC is a malignant cancer which is characterized by low survival rate [24]. Although great progress has been made in improving the therapeutic method of HCC in the past years, the efficacy is still unsatisfied. It's important to investigate the oncogenic mechanisms in HCC. We proved in this study that FEZF1-AS1 had a high expression level in HCC. Also, we found that FEZF1-AS1 contributed to the malignant alterations of HCC via targeting miR-107/Wnt/β-catenin axis, as well as EMT, which is a main cause of cancer metastasis [25]. It was believed that lncRNAs played vital roles in HCC metastasis [26]. EZF1-AS1 is dysregulated in several cancers and contributes to carcinogenesis by targeting miRNAs or specific signaling pathway [13,[27][28][29]. Previous studies demonstrated that FEZF1-AS1 had an impact on the proliferation and invasion of HCC cells [15,30]. Herein, we confirmed a high expression pattern of FEZF1-AS1 in HCC and FEZF1-AS1 overexpression promoted proliferation, migration and invasion of HCC cells. Further, we found miR-107 was another target of FEZF1-AS1 in HCC. 
From the luciferase gene reporter and qRT-PCR assays, we observed that FEZF1-AS1 could bind to miR-107. MiRNAs play vital roles in controlling cell proliferation, apoptosis, migration and invasion. Dysfunction of miRNAs contributes to the occurrence and development of cancers [31]. MiR-107, a widely studied miRNA, has been implicated in the development of many kinds of cancers. However, the function of miR-107 is controversial. For instance, some studies identified miR-107 as a tumor suppressor in gastric cancer, NSCLC, breast cancer and bladder cancer [32][33][34][35], while others found that miR-107 was an oncogene in gastric cancer, bladder cancer and breast cancer by targeting different pathways [36][37][38]. Wang et al. [39] found that miR-107 could inhibit cell proliferation in HCC by binding to the 3'-untranslated region of HMGA2. In contrast, some studies demonstrated that miR-107 promoted HCC cell proliferation [40]. We found that the miR-107 level was downregulated in HCC. miR-107 upregulation led to a significant inhibition of Wnt/β-catenin signaling, with decreased expression of β-catenin and wnt3a and an increased level of p-GSK-3β. However, miR-107 was reported to inhibit cell proliferation, colony-forming ability and tumorigenicity in osteosarcoma, with an increased expression level of β-catenin [23]. We speculate that differences in cellular context may be a main cause of the different roles miR-107 plays in regulating β-catenin expression. Furthermore, we explored whether Wnt/β-catenin signaling was regulated by FEZF1-AS1 in HCC. We found that β-catenin overexpression greatly enhanced cell proliferation and metastasis in HCC, which were inhibited by FEZF1-AS1 downregulation. This result suggested that downregulation of FEZF1-AS1 inhibited HCC progression through inhibition of the Wnt/β-catenin pathway. However, the role of miR-107 in this process is not clear, and we intend to clarify it in our further study. As a whole, our study identified that FEZF1-AS1 accelerated the malignant behavior of HCC via the miR-107/Wnt/β-catenin axis. Tissue obtainment HCC tissues and the adjacent normal liver tissues were harvested from HCC patients in Shanghai Jiao Tong University Affiliated Sixth People's Hospital between 2013 and 2015. Patients who participated in this study received no other therapy before surgery. This study was conducted in accordance with the World Medical Association Declaration of Helsinki and was approved by the Research Ethics Committee of Shanghai Jiao Tong University. Written informed consent was obtained from each patient. Cell growth and apoptosis assay Cell proliferation was tested using a CCK-8 kit (Beyotime, Beijing, China). Cells were seeded in 96-well plates at a density of 2000 cells/cm^2 and were transfected/infected with plasmids/lentivirus after 24 hours of incubation at 37 °C. Following 24, 48, 72, 96 or 120 hours of incubation after inoculation, 10 μl of CCK-8 solution was added to each well. The cells were incubated for 2 hours, and the absorbance was then measured with a microplate reader (model 680; Bio-Rad, Hertfordshire, UK) at 450 nm. Flow cytometry was used to determine the apoptosis rates of cells transfected/infected with sh-FEZF1-AS1, OE-FEZF1-AS1, sh-NC, OE-NC, mimic-NC, and mimic-miR-107. An Annexin V(FITC)/PI Apoptosis Detection Kit (BioLegend, CA, USA) was used according to the manufacturer's instructions. Wound healing assay and transwell assay Cells were seeded into six-well plates at a density of 3000 cells/cm^2.
After cell transfection/infection with plasmids/lentivirus, wounds were scratched with pipette tips. Wound images were captured at 0 and 24 hours of incubation at 37 °C with a DM2500 bright-field microscope (LEICA, Wetzlar, Germany), and the migration distance was measured with ImageJ. For the transwell assay, the upper chamber was coated with Matrigel and cells were seeded at a density of 3000 cells/cm^2; the bottom chamber was filled with DMEM containing 10% FBS and 1% penicillin/streptomycin. Invaded cells were fixed with 4% paraformaldehyde (PFA) and stained with crystal violet, and the number of invaded cells was counted with ImageJ. In vivo assay A mouse xenograft model was established using male BALB/c nude mice (Shanghai Slac Laboratory Animal Company, Shanghai, China) at an age of six weeks. Animals were housed in a controlled environment (50% humidity, 23 ± 2 °C, 12-h light-dark cycle) with free access to water and food. 2 × 10^6 SNU-398 cells stably transfected with sh-FEZF1-AS1 and/or OE-β-catenin were resuspended in 100 μL of normal saline. The cells were then injected subcutaneously into the flanks of the mice; cells in normal saline served as the negative control. Four weeks later, all animals were euthanized, tumor xenografts were harvested, and tumor weights were measured. All experiments involving animal handling were approved by the Animal Care and Use Committee of Shanghai Jiao Tong University Affiliated Sixth People's Hospital and performed following the Guide for the Care and Use of Laboratory Animals. Statistical analysis All experiments were performed in triplicate and repeated three times. Data were analyzed with SPSS software (version 24.0) and are shown as mean ± standard deviation. Student's t-test and one-way analysis of variance (ANOVA) were used for statistical comparisons between two groups or among multiple groups, respectively. A value of p < 0.05 was considered statistically significant.
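As a small illustration of how CCK-8 readings such as those described above are commonly converted into relative viability, the following Python sketch normalizes blank-corrected OD450 values to the mean of the control group. The readings are hypothetical and the helper name is ours; this is not the analysis pipeline used in the study.

```python
import numpy as np

def relative_viability(od_treated, od_control, od_blank):
    """Blank-corrected OD450 of treated wells, normalized to the mean of control wells."""
    treated = np.asarray(od_treated, dtype=float) - od_blank
    control = np.mean(np.asarray(od_control, dtype=float) - od_blank)
    return treated / control

# Hypothetical triplicate OD450 readings at a single time point
od_blank   = 0.08
od_oe_nc   = [1.02, 0.98, 1.05]   # OE-NC control wells
od_oe_fezf = [1.35, 1.30, 1.41]   # OE-FEZF1-AS1 wells
print(f"relative viability: {relative_viability(od_oe_fezf, od_oe_nc, od_blank).mean():.2f}")
```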
2021-05-25T06:16:13.598Z
2021-05-23T00:00:00.000
{ "year": 2021, "sha1": "18fb0c7d63c5b9b82eb827eab300d6b843e46595", "oa_license": "CCBY", "oa_url": "https://doi.org/10.18632/aging.202960", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2615d9f86e7d1588d3667a3fbc9242284aa55475", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
18510172
pes2o/s2orc
v3-fos-license
A complete reduction of one-loop tensor 5- and 6-point integrals We perform a complete analytical reduction of general one-loop Feynman integrals with five and six external legs for tensors up to rank R=3 and 4, respectively. An elegant formalism with extensive use of signed minors is developed for the cancellation of inverse Gram determinants. The 6-point tensor functions of rank R are expressed in terms of 5-point tensor functions of rank R-1, and the latter are reduced to scalar four-, three-, and two-point functions. The resulting compact formulae allow both for a study of analytical properties and for efficient numerical programming. They are implemented in Fortran and Mathematica. Introduction At the proton-proton collider LHC and the planned e + e − collider ILC, a large number of particles per event may be produced. The hope is to discover one or several Higgs bosons or supersymmetric particles, which are typically expected to be quite heavy. The interest is also directed to the study of known massive particles like the W and Z bosons or the top quark. Since the production rates are large, a proper description of the cross-sections will typically include one-loop corrections to n-particle reactions, where some of the final state particles may be massive. The Feynman integrals for reactions with up to four external particles have been systematically studied and evaluated in numerous studies. We just want to mention here the seminal papers [1] and [2] and the Fortran packages FF [3] and LoopTools [4], which represent the state of the art until now. The treatment of Feynman integrals with a higher multiplicity than four becomes quite involved if questions of efficiency and stability become vital, as it happens with the calculational problems related to high-dimensional phase space integrals over sums of thousands of Feynman diagrams with internal loops. In this article, we will concentrate on the evaluation of massive one-loop Feynman integrals with n external legs and some tensor structure, where the denominators c j have indices ν j and chords q j , We will study in the following the cases n = 5 with R ≤ 3 and n = 6 with R ≤ 4, and we will conventionally assume q n = 0. The space-time dimension is d = 4 − 2ǫ. There are several strategies one might follow. One is the reduction of higher-point tensor integrals to tensor integrals with less external lines and/or lower tensor rank [5,6,7,8]; a second approach is essentially numerical [9,10] or semi-numerical [11,12,13]. A third one rests on the unitarity cut method [14,15,16,17]. In this case, a one-loop amplitude is evaluated as a whole, by using Cutkosky rules, instead of computing loop integrals from each of the Feynman diagrams. It is impossible to give here a comprehensive survey of recent activities, and we would like to refer to e.g. [18,19,20,21] for recent overviews on the subject. Here, we will advocate yet another approach and reduce the tensor integrals algebraically to sums over a small set of scalar two-, three-and four-point functions, which we assume to be known. Whether such a complete reduction is competitive with the other approaches might be disputed. Evidently, this depends on the specific problem under investigation. 
For a study of gauge invariance and of the ultraviolet (UV) and infrared (IR) singularity structure of a set of Feynman diagrams, it is evident that a complete reduction is advantageous, and it may also be quite useful for a tuned, analytical study of certain regions of potential numerical instabilities. We have chosen a strictly algebraic approach and will rely heavily on the algebra of signed minors which was worked out in detail by Melrose in [22]. One of the basic observations of Melrose was that in four dimensions all the scalar integrals can be reduced to scalar 4-point functions and simpler ones. In [23], a representation of arbitrary one-loop tensor integrals in terms of scalar integrals was derived. The representation includes, however, scalar integrals with higher indices ν j and higher space-time dimensions d + 2l. The subsequent reduction to scalar integrals with only the original indices and the generic space-time dimension d is possible with the use of integration-by-parts identities [24] and generalizations of them with dimensional shifts. The latter have been derived in [25], and a systematical application to one-loop integrals may be found in [26]. 1 Basically, the reduction problem has been solved this way for n-point functions. There was one attempt to use the Davydychev-Tarasov reduction for the description of one-loop contributions to the process e + e − → Hνν [27], and the numerical problems due to the five-point functions were discussed in some detail. To a large extent they root in the appearance of inverse powers of Gram determinants. This feature of the Davydychev-Tarasov reduction was identified as disadvantageous soon after its derivation, e.g. in [28], where a strategy for avoiding these problems was developed. Besides the problem of inverse powers of the Gram determinant of the corresponding Feynman diagram, there are additional kinematical singularities related to sub-diagrams. This will not be discussed here; we refer to e.g. [28,29,5,6,7,8,13,17] and references therein. In this article, we investigate the reduction of tensor integrals with five and six external legs which are of immediate importance in applications at the LHC. In Section 2 we represent tensor integrals by scalar integrals in shifted space-time dimensions with shifted indices. Section 3 and Section 4 contain our main result. In Section 3 we go one step further in the reduction of five-point tensors compared to [26] and demonstrate how to cancel all inverse powers of the Gram determinant appearing in the Davydychev-Tarasov reduction. Earlier results for tensors of rank two may be found in [30]. Section 4 contains the reduction of tensorial six-point functions to tensorial 5-point functions. The corresponding Gram determinant is identically zero [26,6,8], and the reduction becomes quite compact. Some numerical results and a short discussion are given in Section 5. The numerics is obtained with two independent implementations, one made in Mathematica, and another one in Fortran. The Mathematica program hexagon.m with the reduction formulae is made publicly available [31], see also [32] for a short description. For numerical applications, one has to link the package with a program for the evaluation of scalar one-to four-point functions, e.g. with LoopTools [4,33,3], CutTools [34,12], QCDLoop [35]. 
Appendices are devoted to some known, but necessary details on Gram determinants and the algebra of signed minors and to a short summary about the reduction of dimensionally shifted four-and five-point integrals. Representing tensor integrals by scalar integrals in shifted space-time dimensions At first we give the reduction of tensor integrals to a set of scalar integrals for arbitrary n-point functions. Following [23,26], assuming here the indices of propagators to be equal to one, ν r = 1, one has: n,i , (2.1) is an operator shifting the space-time dimension by two units and where [d+] l = 4 + 2l − 2ǫ (observe that p is the number of scalar propagators of the "p-point function" and that equal lower and upper indices cancel, p ≤ n). In (2.2-2.4), the coefficients n ij , n ijk and n ijkl were introduced. These stand for the product of factorials of the number of equal indices: e.g. n iiii = 4!, n ijii = 3!, n iijj = 2!2!, n ijkk = 2!, n ijkl = 1! (indices i, j, k, l all different from each other). Of particular relevance are the following relations for the successive application of recurrence relations to reduce higher dimensional integrals: where ν ij = 1 + δ ij , In the next step the integrals in higher dimension have to be reduced to integrals in generic dimension. Here particular attention has to be paid to I (3.14) R = 3 tensor integrals The tensor integral of rank 3 can be written as: We will now rewrite this into another representation, thereby avoiding Gram determinants () 5 in the denominators of the new tensor coefficients E ijk , E 00k : The symbol ′ in these equations denotes a sum 5 s,t,u=1 in terms proportional to I stu 2 , 5 s,t=1 in terms proportional to I st 3 , and 5 s=1 in terms proportional to I s 4 . Concerning the symmetrization in (3.22), we point out that the original expression (3.18) is obviously symmetric under (i ↔ j), while this is not explicitly seen in (3.22) anymore. Later on, however, this symmetry will become apparent again. All terms with factors of the type i j 5 can be considered, due to (3.6), as belonging to some g µν term. For other terms we have to use (3.21), which yields terms with () 5 to be cancelled. These are explicitly given in the coefficients of I . (3.27) Next we will use : As trivial as this relation may look, it plays the crucial role of splitting off i k 5 in order to produce g µν terms. It might also have been written as: but then it would not fulfill its purpose. The first term at the rhs. of (3.28) cancels a () 5 , while the second term enters the g µν -terms, all of which are collected in (3.36). The complete coefficient of I s 4 in (3.16) is thus given by: Finally we have to investigate the last line of (3.22), being left with the factor 0 k At the end we can determine the g µν terms from the above by collecting all terms containing factors of the type i j where the square bracket means symmetrization of the included indices, and use has been made of (3.6). Collecting all terms of type i j It turns out to be useful for the simplification of the coefficients of I st 3 and I stu 2 in (3.36). For the coefficient of I st 3 , we apply relation (3.37) with µ = 0. The last term on the r.h.s. of (3.37) is combined with the term on the third line of (3.36) using (3.31): , After summation over s and t, the last term on the r.h.s. will vanish. Furthermore we apply (3.24) is symmetric in s, t and u, we consider the sum over all permutations of any fixed set of values of s, t and u. We find that so that the two last terms on the r.h.s. 
of (3.37) can be dropped in this case. Thus we have: Collecting all contributions, our final result for the tensor of rank 3 can be written as: Hexagons The 6-point function has the nice property that the tensors of rank R can be reduced to a sum of six 5-point tensors of rank R − 1. This property has also been derived in [5]; an earlier demonstration of this property, however, has been given already in [26]. The simplification in this case is due to the fact that () 6 ≡ 0, which has extensively been discussed in [26]. Beyond that, in our approach, the above results for the 5-point tensors can be directly used, thus reducing the 6-point tensors of up to rank R = 4 to scalar 4-and 3-and 2-point integrals. Particularly simple results are thus obtained for the 6-point tensors using the results of Appendix A and Sections 3.1 and 3.2. What was missing in [26] is exactly this simplification, which comes with the cancellation of the Gram determinant () 5 ; see Appendix A of that paper. Scalar and vector integrals According to (I.33) we write (see [22] and also (I.55)): and (3.2) now reads: Here we see already the general scheme of reducing 6-point functions to 5-point functions: In general, in any signed minor (· · · ) 5 a further column r r is scratched, resulting in a (· · · ) 6 and in the scalar functions a further propagator is scratched. As in (3.3) and (3.4), with the use of (I.57), we obtain: 3) While in (3.4) the first part vanishes in the limit d → 4, here its disappearance is due to (I.61): Indeed (4.5) will play a crucial role for the higher tensor reduction. The resulting form in (4.4) is already the generic form for the higher tensors too! Therefore it appears useful to introduce the vector, applying further (A.15) of [22] and (I.61): summing over all 5 (dependent) vectors. v r projected on these vectors reads: With this definition we can write in a compact way: R = 2 tensor integrals The equation (2.2) reads in this case: . (4.10) We consider the limit d → 4 and use (I.67): Writing it like in (3.5), we obtain by using (3.4): to be compared with (4.4). For completeness we specify E r i , which we read off from (3.4) to be: and finally: We remark that due to (4.14), E r i = 0 for r = i and correspondingly this will be the case for all higher tensors such that limitations like r = i could be dropped but are convenient to keep in numerical programs. R = 3 tensor integrals Equation (2.3) reads in this case: and with (I.60) we have: (4.17) The first term on the r.h.s. is eliminated due to (4.5) and the next two terms cancel due to (4.11). Taking into account I Using again (I.57) and the definition we obtain: From (3.10) and (4.19) I r 5,ij reads: so that we get: where in the second term we can drop the limitation r = i, j since it is automatically fulfilled due to the numerator i r j r 6 , vanishing for r = i and r = j. Thus summation over i and j is possible, using (4.19), with a result: or: with I µ ν ,r R = 4 tensor integrals The tensor integral in (2.4) contains three different integrals in higher dimension, which have to be reduced or to be eliminated. We begin with I (1 + δ is + δ ks )I . Now, making use of n ijkl = ν ij ν ijk ν ijkl , we see that due to (4.5) the first part in (4.31) drops out after insertion into (2.4). The second contribution of (4.31) yields: We have: (4.35) Using (I.67) we have for d = 4: and we see that this contribution is canceled by the last three terms of the type I [d+] (x−1) n,jk in (2.4). 
The first three terms of this type are evaluated by means of (I.59) to yield: Inserting this into (2.4), the first part yields a vanishing contribution due to (4.5) . The second term yields, again due to (4.5): which cancels the last term in (2.4) and the total contribution thus reads: g µν q λ i q ρ j + g µλ q ν i q ρ j + g νλ q µ i q ρ With (4.19) it is now easy to see that the square bracket in (4.40) cancels out the second part in (4.39) and using the definition: we obtain: and with the same argument like the one used after (4.23) we obtain the final result: with: (4.47) Numerical results and discussion In order to illustrate the numerical results which can be obtained with the described approach, we will evaluate a representative collection of tensor coefficients. We rely on two implementations of the formalism, one has been established in Fortran, and the other one in the Mathematica package hexagon.m. In the following, we denote the scalar five-point function by E 0 and the scalar six-point function by F 0 . The tensor decompositions of pentagons E and hexagons F read: Please observe the difference of E 0 , F 0 and E 0 , F 0 in the following. The kinematics is visualized in Figure 5.1. Deviating from the first sections, we have chosen here q 0 = 0 in order to stay close to common conventions of other numerical packages. For the evaluation of the scalar two-, three-and four-point functions, which appear after the complete reduction, we have implemented two numerical libraries: • For massive internal particles: Looptools 2.2 [4,33]; • If there are also massless internal particles: QCDLoop-1.4 [35]. We observed that Looptools may become unstable in the presence of massless internal particles, while QCDLoop seems to be generally slower. Our Mathematica package has an implementation of only Looptools. For completeness, we would like to mention also other publicly available Fortran packages for tensor functions, which we found useful for comparisons: • Six-point tensors with massive internal particles: none; • Five-point tensors with massive internal particles: Looptools [4,33] ; • Five-point tensors with both massive and massless particles: none; • Five-and six-point tensors with only massless internal particles: golem95 [37]. The two independent numerical implementations have been checked in several ways: • By internal comparisons of the two codes, relying on the formulae presented in this article; With alternative, direct representations of the tensor integrals with sector decomposition 3 [38] and Mellin-Barnes representations [39,40]; • By simplifying the numerator structures algebraically and subsequent evaluation of the resulting integrals of lower rank; • By direct comparison with other tensor integral packages [33,37]. Some of the comparisons were documented in [32]. 3 We used a Mathematica interface to the GINAC package sector decomposition in order to have a convenient way to evaluate tensor Feynman integrals. We restrict ourselves to a few phase-space points, see Tables 5.1 to 5.3. The first configuration corresponds to the reaction gg → ttqq, with external momenta generated by Madgraph [41,42]. The second configuration comes from [37], while the third is a slight modification of the first one. The kinematical input is completed by adding the masses of internal particles. We begin with massive six-point tensors. For the kinematics introduced above, we determine the tensor components with our Fortran pacakge as shown in Tables 5.4 to 5.6. They are complex, finite numbers. 
Only the independent components of the tensors are shown; all the remaining ones are obtained by permutations of indices. Selected tensor coefficients of five-point tensors for the case of massive internal particles are shown in Table 5.7. (Note that we show here five-point tensor coefficients, while in the case of six-point tensors we have shown tensor components. The tensor components are representation independent and should be preferred as numerical output. For the five-point tensors with massive internal particles, however, we have arranged for a one-to-one correspondence with the output of LoopTools 2.2, so it might be interesting to have, in this case, the tensor coefficients instead.) The coefficients have been compared with LoopTools 2.2 and indeed we agree. For the massive six-point functions, there is no alternative package publicly available.

In the presence of massless internal particles, we face potential infrared singularities. Then the loop functions are Laurent series in ε, starting with a term proportional to 1/ε², and one has to care about the normalization conventions compared to our basic definition (1.1). A popular measure is [35,37]:

When discussing Feynman integrals with a dependence on inverse powers of ε, there appears a dependence of their constant terms on these conventions. For the convenience of the reader, the tables are produced with the normalization introduced in (5.8), with the choice µ = 1. For the case of six-point and five-point functions with only massless internal particles, we show only a few sample coefficients in Table 5.9 and Table 5.8, which are produced with our Fortran package. The phase-space point chosen here is defined in Table 5.2. We checked that, within double precision, we completely agree with the corresponding numbers produced with golem95. Finally, to complete the list of relevant results, we also show sample tensor coefficients for the case of both massive and massless internal particles, for five-point tensors in Table 5.10 and for six-point tensors in Table 5.11. For this case with mixed internal masses, there is no other publicly released code available.

Table 5.2: The external four-momenta for the six-point numerics, with p_6 = −(p_1 + p_2 + p_3 + p_4 + p_5) and m_1 = · · · = m_6 = 0.0; all internal particles are massless. This set of momenta comes from [37]. For five-point functions, we shrink line 2 and fix p_1 + p_2 → p_1 in order to retain momentum conservation.

To summarize, we have presented in this article tensor integrals of rank R ≤ 3 for five-point functions and of rank R ≤ 4 for six-point functions. This is sufficient for the calculation of e.g. four-fermion production at the LHC with NLO QCD corrections. There are further reactions of interest which will need higher-point functions and higher ranks of five- and six-point functions. The details of their reductions have been left for a later investigation.

Table 5.5: Tensor components for a massive rank R = 3 six-point function; kinematics defined in Table 5.1.

A. Gram determinants and algebra of signed minors

In this section relations are derived which will turn out to be indispensable in our tensor reductions. We begin with some notational remarks on Gram determinants G_{n−1},

G_{n−1} = |2 q_j q_k|, j, k = 1, · · · , n − 1.
(A.1)

The modified Cayley determinant of a diagram with n internal lines with chords q_j is:

()_n = |C_{jk}|, j, k = 0, · · · , n. (A.2)

From our choice q_n = 0 it follows that both determinants are related:

and we will usually call ()_n the Gram determinant of the Feynman integral. Signed minors [22] are determinants (with a sign convention) which are obtained by excluding rows and columns from the modified Cayley determinant ()_n. They are denoted by the symbol

(j_1 j_2 · · · j_m | k_1 k_2 · · · k_m)_n, (A.5)

labelling the rows j_1, j_2, · · · , j_m and columns k_1, k_2, · · · , k_m which have been excluded from ()_n. The sign of a signed minor is defined by

(−1)^{j_1+j_2+···+j_m+k_1+k_2+···+k_m} × Signature[j_1, j_2, · · · , j_m] × Signature[k_1, k_2, · · · , k_m], (A.6)

where Signature gives the sign of the permutation needed to place the indices in increasing order. This agrees e.g. with the definition of the operator Signature[List] in Mathematica. As an example may serve the quantity ∆_n:

We will now derive two relations between signed minors. Let us introduce the integral I, which is UV- and IR-finite. Beyond that, as is done frequently [8], I can also be used as a "master integral" (see e.g. (3.14)) without reduction to the generic dimension.
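The sign convention (A.6) is easy to reproduce numerically. The following Python sketch is our own illustration (the function names are not taken from any of the packages mentioned in this article): it builds a signed minor of a given modified Cayley matrix by deleting the listed rows and columns and attaching the sign of (A.6), with the permutation signature obtained by counting inversions.

import numpy as np

def signature(indices):
    """Sign of the permutation that puts the indices in increasing order."""
    sign = 1
    idx = list(indices)
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                sign = -sign
    return sign

def signed_minor(cayley, rows, cols):
    """Signed minor of the modified Cayley determinant ()_n, cf. (A.5)-(A.6)."""
    sign = (-1) ** (sum(rows) + sum(cols)) * signature(rows) * signature(cols)
    keep_r = [i for i in range(cayley.shape[0]) if i not in rows]
    keep_c = [j for j in range(cayley.shape[1]) if j not in cols]
    sub = cayley[np.ix_(keep_r, keep_c)]
    return sign * np.linalg.det(sub)

# Example: a random symmetric 5x5 "Cayley" matrix and the minor obtained by
# scratching row 0 and column 2.
m = np.random.rand(5, 5)
cayley = m + m.T
print(signed_minor(cayley, rows=[0], cols=[2]))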
2008-12-11T12:39:50.000Z
2008-12-11T00:00:00.000
{ "year": 2008, "sha1": "53faff4488880f45c3b28cad1c06fac22bd7d816", "oa_license": null, "oa_url": "https://bib-pubdb1.desy.de/record/91419/files/PhysRevD.80.036003.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "33aeec17298111735f65edcee526becefa78f325", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
151732793
pes2o/s2orc
v3-fos-license
Validating strengths use and deficit correction behaviour scales for South African first-year students

Orientation: It is well known that the first year at university can be very challenging and stressful for students. While some students mainly depend on the university to assist them through this time, other students want to proactively manage this stressful period themselves by focusing on their strengths and developing in their areas of weakness. Two new scales measuring proactive strengths use and deficit correction behaviour have recently been developed for employees. However, the psychometric properties of these new scales have not yet been tested on first-year students in the South African context.

Introduction

As is the case with new recruits and newly appointed employees in organisations, first-year students face many challenges adjusting to a new academic environment. Some of these challenges include exposure to independent living, academic pressure, emotional vulnerability, social adaption and problems managing time and finances (Darling, McWey, Howard, & Olmstead, 2007; Fairbrother & Warn, 2003; Misra, Mckean, West, & Russo, 2000). The university environment also poses stressors of its own, including adapting to an academic environment (Awino & Agolla, 2008; Ongori, 2007), a new semester system and often inadequate resources.

The introduction of these two specific types of proactive behaviour is based on the notion that the ultimate challenge for positive psychology is to synthesise positive and negative aspects of human behaviour and to develop a combined focus on strengths and deficits, rather than an exclusive focus on one or the other. Therefore, it is important to develop and eventually overcome weaknesses as well as to nurture strengths (Linley, Joseph, Harrington, & Wood, 2006; Lopez, Snyder, & Rasmussen, 2003; Seligman, Parks, & Steen, 2004). Indeed, several recent studies have demonstrated that both strengths use and deficit correction behaviour can be related to valuable outcomes (Meyers et al., 2015; Peterson & Seligman, 2004; Van Woerkom et al., 2016).
Originally, the strengths use and deficit correction scales were introduced as additional forms of proactive behaviour and were conceptualised and measured in the organisational context (Stander & Mostert, 2013;Van Woerkom et al., 2016).However, the constructs of strengths use and deficit correction behaviour seem valuable to apply to first-year students.Strengths use behaviour is positively associated with well-being and vitality (Park, Peterson, & Seligman, 2004) and enables individuals to achieve success by fulfilling their potential (Govindji & Linley, 2007).Therefore, when first-year students demonstrate behaviour in which they use their strengths by adapting to new circumstances and their study environment, it could instil positive emotions and behaviour.This will allow them to tap into their personal resources (Frederickson, 2001) and increase their confidence in their abilities to succeed in their studies (Kaslow, Falender, & Grus, 2012).Also, when students work on improving their weaknesses or deficits, it can foster behaviour to identify ways of overcoming obstacles in pursuit of study goals, can ultimately lead to personal mastery and growth (Senge, 1990) and can lead to improvement in their performance (Dunn & Shriner, 1999;Ericsson, Nandagopal, & Roring, 2009). Focusing on behaviours that emphasise strengths use and deficit correction is also important for universities as institutions (Luthans, Avolio, Avey, & Norman, 2007) because this type of behaviour from first-year students will help build resilience and promote adjustment, enhancing academic success and help lowering the high drop-out rate of first-year university students (DeRosier, Frank, Schwartz, & Leary, 2013).Furthermore, this could ultimately result in successful, educated and well-adjusted individuals equipped with knowledge, skills and competencies that will enable them to excel in the future (Pidgeon, Rowe, Stapleton, Magyar, & Lo, 2014;Wang, 2009). Based on the discussion above, it is clear that studying strengths use and deficit correction behaviour of first-year students is important.However, the scales measuring these two constructs have been developed and validated in the organisational context and have not yet been validated and tested in a sample of first-year students. Research objective The goal of the present study is to validate the proactive strengths use and deficit correction scales for South African first-year university students.More specifically, this study aims to test the factorial validity, measurement invariance, reliability and convergent and criterion validity of these two scales. Proactive strengths use and deficit correction behaviour Proactivity means the anticipation of both problems and opportunities and then to act upon them by taking a longterm view and then search actively for feedback (Balluerka, Gorostiaga, & Ulacia, 2014).Crant (2000, p. 
436) explains that by using proactive behaviour, the role of taking initiative is to 'improve one's current circumstances and challenge the status quo rather than to passively adapt to current conditions'.Proactive behaviour is also closely related to personal initiative, defined as a proactive and persistent behaviour form that individuals initiate to achieve work goals (Frese & Fay, 2001;Frese, Kring, Soose, & Zempel, 1996).Relevant proactive behaviours include taking charge (Morrison & Phelps, 1999), employing personal initiative (Frese & Fay, 2001), undertaking flexible role orientations (Parker, Wall, & Jackson, 1997), suggesting ideas for future improvements, self-started problem-solving, implementing change initiatives and social network-building (Grant & Ashford, 2008).Van Woerkom et al. (2016) argue that it would also be valuable to measure strengths use and deficit correction as forms of proactive behaviour.Although several studies focus on the identification of strengths (e.g., the StrengthsFinder, Rath, 2007; the values in action inventory of strengths, Peterson & Seligman, 2004;and StandOut, Buckingham, 2011), recent studies have showed that it is the use of strengths that leads to favourable outcomes, including performance (Van Woerkom et al., 2016), well-being and greater self-esteem (Govindji & Linley, 2007;Harzer & Ruch, 2013;Wood et al., 2011).Strengths use behaviour is the active looking for opportunities to use one's strengths and refers to the initiative that students may take to use their strengths in their study environments.Individuals who use their strengths can experience significant increase in their personal growth initiative, hope and resilience and ultimately their performance (Luthans et al., 2007;Meyers et al., 2015). Students may also take the initiative to overcome, develop or correct their areas of weaknesses or deficits.This is in line with goal orientation theory (Van de Walle, 1997).One may argue that during this phase of a students' life, there are several new challenges and obstacles that they have to overcome.It is likely that students in a new university environment may show the desire to develop themselves by acquiring new skills and improving their competencies, specifically students with learning goal orientation competence (Dweck & Leggett, 1988).Therefore, deficit correction behaviour is the active looking for opportunities to correct or develop one's deficits or weaknesses and refers to the initiative that students may take to develop or correct their shortcomings in their study environment.Van Woerkom et al. (2016) developed the strengths use and deficit correction behaviour scales as part of the four-factor Strengths Use and Deficit Correction Questionnaire (SUDCO) -a questionnaire that measures strengths use and deficit correction from both the organisational and individual perspective.Because the first two scales are specifically developed for the organisational context and the items refer to the organisation's support, these scales are not applicable to students.Therefore, only the two individual proactive behaviour scales will be examined in this study. Factorial validity: Two studies confirm the four-factor structure of the SUDCO (Stander & Mostert, 2013;Van Woerkom et al. 
2016) comprising the following factors: perceived organisational support for strengths use, perceived organisational support for deficit improvement, strengths use behaviour and deficit correction behaviour.An exploratory factor analysis in the study of Van Woerkom et al. (2016) clearly showed a four-factor structure, where the four factors explained 64.73% of the variance.Confirmatory factor analyses (CFAs) were also used in these two studies to confirm the factor structure of the SUDCO (Stander & Mostert, 2013;Van Woerkom et al. 2016).Four competing models were tested, including a four-factor model, a one-factor model (including all four dimensions), a two-factor model (distinguishing between strengths use and deficit improvement) and another twofactor model (differentiating between organisational and individual dimensions).The results of these studies showed that the four-factor model showed a significantly better fit compared to the competing models.Although all four factors were included in these studies, it is clear that proactive strengths use and deficit correction behaviour are two separate, although related, constructs.Based on these results, it is expected that a two-factor model will show a significantly better fit compared to a one-factor model (Hypothesis 1). Measurement invariance: Measurement invariance refers to the level of comparability of scores across cultures (He & Van de Vijver, 2012, 2013;Van de Vijver & Tanzer, 2004) and investigates if measurement operations yield measures of the same attribute under different conditions (Horn & McArdle, 1992).Therefore, members from different groups who have the same standing on a particular construct should score the same on a test and ascribe the same meaning to measurement items (Schmitt & Kuljanin, 2008;Steenkamp & Baumgartner, 1998).Researchers will only be able to unambiguously interpret group differences when the measurement invariance of an instrument has been confirmed (Horn & McArdle, 1992;Steenkamp & Baumgartner, 1998).Van de Vijver and Tanzer (2004) identified three levels of invariance.Firstly, configural invariance occurs when the model fits the data satisfactorily in all groups.When all nonzero factor loadings are significantly and substantially different from zero, and any correlations between the factors are significantly below a unity of one, one can indicate that there is discriminant validity between the (sub) factors comprising the above-mentioned construct (Byrne, Shavelson, & Muten, 1989).Secondly, metric invariance (also known as equal factor loadings) indicates that the units of measurement are similar across the groups tested.Metric invariance is an essential condition when comparing across groups and for all levels of measurement equivalence.Thirdly, scalar invariance indicates that subjects who have the same value on the latent construct should show equal values on the observed variable (Byrne et al., 1989).Van Woerkom et al. 
(2016) investigated measurement invariance of the SUDCO.This was performed by means of configural, metric and scalar models for tests of invariance (Preti et al., 2013) based on age and gender in a multi-group analytical framework.The results showed strong measurement invariance, which indicates that male and female subjects, as well as employees from the different age groups perceive the items of the four dimensions in a similar way.Based on these results, it is expected that the proactive strengths use and deficit correction behaviour scales will also demonstrate measurement invariance between different campuses and language groups of first-year students (Hypothesis 2). Reliability: Adequate reliability scores have been found in previous studies for the proactive strengths use and deficit correction scales.Van Woerkom et al. (2016) and Stander and Mostert (2013) reported Cronbach's alpha values of α > 0.90 for both scales.Therefore, it is hypothesised that the strengths use and deficit correction behaviour scales will be reliable (α ≥ 0.70; Hypothesis 3). Convergent validity: Convergent validity was investigated by relating the proactive strengths use and deficit correction behaviour scales to theoretically related constructs (Campbell & Fiske, 1959), including a general proactive behaviour measure (Belschak, Den Hartog, & Fay, 2010) and a general Strengths Use Scale (Govindji & Linley, 2007).As strengths use and deficit correction behaviours are considered to be forms of proactive behaviour, it can be assumed that these scales would correlate with a general scale measuring proactive behaviour.The Strengths Use Scale (Govindji & Linley, 2007) was included to assess the extent to which students use their strengths.Although it is argued that strengths use and deficit behaviour is positively related, a stronger correlation is expected between proactive strengths use behaviour than proactive deficit correction behaviour.Therefore, it is expected that strengths use and deficit correction behaviour will be related to general proactive behaviour and general strengths use (Hypothesis 4). Criterion validity: In order to establish criterion validity of the proactive strengths use and deficit behaviour scales, the empirical association with external criterion that might be consequences of strengths use and deficit correction behaviour will be examined (DeVellis, 2011).This study will focus on potential outcome variables, including student burnout, student engagement and life satisfaction. 
Students' experience of burnout manifest in feelings of exhaustion because of 'excessive studying and too many demands', that could leave them feeling incompetent with a cynical and detached outlook towards their studies.On the other hand, student engagement refers to a positive and fulfilling state of mind where students experience high levels of energy and are dedicated towards their studies (Schaufeli, Salanova, González-Romá, & Bakker, 2002).When students show proactive behaviour and initiate more favourable circumstances for themselves (Crant, 2000) by searching and using opportunities to apply their strengths and correct or develop their deficits, it could lead to feelings of fulfilment, accomplishment and competence, leading to increased levels of energy, motivation and enthusiasm (Erickson & Grove, 2007;Langelaan, Bakke, Schaufeli, & Van Doornen, 2006;Schaufeli & Salanova, 2007) and ultimately reduced feelings of burnout (Linley & Harrington, 2006;Seligman, Steen, Park, & Peterson, 2005).Also, the extent to which students apply proactive behaviour will determine the effort they put into educationally purposeful activities (Hu & Kuh, 2001).Coates (2007) states that when students use their strengths, they will choose to partake in learning and challenging academic activities, engage in formative communication with academic staff, become involved in enriching educational experiences and actively seek support from the university's learning entities.This self-starting behaviour from students promotes a sense of accomplishment (Kuh, Kinzie, Buckley, Bridges, & Hayek, 2007) that can lead to engagement (Coates, 2009).In addition, students can be engaged by improving their deficits by means of challenging themselves to learn (Coates, 2005), trying out new ideas and practicing their current skills.Also, when students selfassess, they refocus their own responsibility to remain engaged in the learning process (Krause, 2005).The findings of Van Woerkom et al. (2016) support this notion, showing that strengths use behaviour is strongly and positively related to vigour and dedication, while deficit correction behaviour was negatively related to cynicism.Based on these results, it is expected that strengths use behaviour and deficit correction behaviour will be negatively related to burnout (Hypothesis 5) and positively related to engagement (Hypothesis 6). Life satisfaction can be seen as a subjective self-assessment of an individual's quality of life defined by feelings of contentment, fulfilment and happiness (Diener, Emmons, Larsen, & Griffin, 1985;Hamarat & Steele, 2002).Researchers agree with Seligman's findings that strengths use is not only a predictor of subjective well-being among students but also of life satisfaction (Forest et al., 2012;Linley, Nielsen, Gillett, & Biswas-Diener, 2010;Proctor et al., 2011).By using one's strengths, it is possible to enhance a fulfilling and satisfying life (Isaacowitz, Vaillant & Seligman, 2003;Seligman, 2002).In a similar fashion as with engagement, developing and improving one's weaknesses could also enhance general satisfaction with one's life (Rust, Diessner, & Reade, 2009).Therefore, it is expected that both strengths use and deficit correction behaviour will be positively related to life satisfaction (Hypothesis 7). 
Research approach A quantitative cross-sectional research design was used.Struwig and Stead (2001) describe the quantitative design as a form of conclusive research involving large representative samples and structured data collection procedures.Using the cross-sectional research design, the data were gathered by means of an electronic survey, making it possible to study participants at an exact point in time (Du Plooy, 2002).This approach is economical, cost-effective and saves time for the study. Research participants and procedure A convenient sample of first-year students studying at a South African tertiary institution with different campuses was used (N = 776).After permission was obtained from the university, data collection took place.The survey was web-based, and a link was sent to the respondents through e-mail.The e-mail explained the purpose and goal of the study and stated the possible value it can add to the university and its students.The participants were also ensured of the confidentiality and anonymity of their information and results.Participation was strictly voluntary.The proposed time-frame for completing the questionnaire was approximately 25-30 minutes.A reminder of completion was sent after 2 weeks of receiving access to the link. Measuring instruments A socio-demographic questionnaire was administered and included questions on age, gender, race, language, campus, faculty and degree.In addition, the following questionnaires were administered: Proactive strengths use and deficit correction behaviour: It was measured with the two individual sub-scales of the Strengths Use and Deficit Correction (SUDCO) questionnaire (Van Woerkom et al., 2016).Five items that related best to the student context were chosen for proactive strengths use behaviour (e.g.'I use my strengths proactively') and five items to measure deficit correction behaviour (e.g.'I make an effort to improve my limitations').All the items were measured on a 7-point Likert-type scale ranging from 0 (Almost never) to 6 (Almost always).Van Woerkom et al. (2016) found the scales to be reliable (Cronbach's α for strengths use behaviour = 0.92; Cronbach's α for deficit correction behaviour = 0.93). General strengths use: It was measured with the Strengths Use Scale (Govindji & Linley, 2007).The scale consists of 14 items that enquire about the extent to which individuals use their strengths, which are then rated on a scale ranging from 1 (strongly disagree) to 7 (strongly agree).The items in this scale were developed from a review of positive psychology literature (Wood, Linley, Maltby, Kashdan, & Hurling, 2011) and are the only measure available to assess strength use rather than the presence of strength.The Strength Use Scale has good psychometric properties including a clear onefactor structure, high loading items, high internal consistency (Cronbach's α > 0.90) and test-retest reliability of r = 0.84, as well as criterion and predictive validity with various indices of well-being (Govindji & Linley, 2007;Wood et al., 2011). 
Proactive behaviour: It was measured by means of an adapted scale of Belschak, Den Hartog, and Fay (2010).The scale consists of 11 items that are measured on a seven-point scale, ranging from 1 (disagree strongly) to 7 (agree strongly).The first seven items measure students' behaviour within a study group (e.g.'When working in a study group, you personally take the initiative to help share knowledge with group members').The second set of items consist of four items referring to students' personal preference towards studying and career-enhancing methods (e.g.'On a personal level, when you study you find new approaches to execute your tasks so that you can be more successful').The alpha coefficient for the scale is 0.80 (Belschak & Den Hartog, 2010). Student burnout: It was measured with the Maslach Burnout Inventory-Student Survey (MBI-SS) (Schaufeli et al., 2002), measured as one factor (De Beer & Bianchi, in press) using items from the core components of burnout, exhaustion and cynicism (Schaufeli & Taris, 2005).Exhaustion was measured with five items (e.g.'I feel emotionally drained by my studies') and cynicism by means of four items (e.g.'I have become less enthusiastic about my studies').Items were scored on a sevenpoint frequency rating scale ranging from 0 (never) to 6 (always).The MBI-SS has been validated internationally (Schaufeli et al., 2002) and in South Africa (Mostert, Pienaar, Gauche, & Jackson, 2007).Mostert et al. (2007) reported Cronbach's α values of 0.74 for exhaustion and 0.68 for cynicism. Student engagement: It was measured with the Utrecht Work Engagement Scale-Student Survey (UWES-S) (Schaufeli et al., 2002), also as one factor, using items from the core components of engagement, vigour and dedication (Llorens, Schaufeli, Bakker, & Salanova, 2007;Van Wijhe, Peeters, Schaufeli, & Van den Hout, 2011).Vigour was measured with five items (e.g.'When I study, I feel like I am bursting with energy').Dedication was also measured with five items (e.g.'I am enthusiastic about my studies').Items were scored on a seven-point Likert scale ranging from 0 (never) to 6 (every day).The UWES-S has been validated internationally (Schaufeli et al., 2002).In South Africa, Mostert et al. (2007) also reported acceptable Cronbach's α of 0.70 for vigour and 0.78 for dedication. Life satisfaction: The Satisfaction with Life Scale (Diener et al., 1985) was used to measure life satisfaction on a 7-point Likert-type scale ranging from 1 (strongly disagree) to 7 (strongly agree).Five questions were used (e.g.'So far I have gotten the important things I want in life').The internal consistency of the scale was found to be reasonable (α = 0.67; Diener et al., 1985). 
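The internal consistencies quoted for these instruments are Cronbach's alpha values, which can be reproduced directly from the raw item scores. A minimal Python sketch (our own illustration, not SPSS or Mplus code; the toy data are invented) is shown below.

import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha; rows = respondents, columns = items of one scale."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_variances / total_variance)

# Toy data: five 7-point Likert items answered by four students.
scores = np.array([[6, 5, 6, 5, 6],
                   [2, 3, 2, 2, 3],
                   [5, 5, 4, 5, 5],
                   [3, 2, 3, 3, 2]])
print(round(cronbach_alpha(scores), 2))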
Statistical analysis Mplus 7.2 (Muthén & Muthén, 2014) was used to determine the psychometric properties of the adapted questionnaire.To determine the factorial validity, CFA was used.The maximum likelihood estimator was used with the covariance matrix as input (Muthén & Muthén, 2014).To assess fit of the measurement and structural models, the following fit indices were considered: χ² statistic, the comparative fit index (CFI), the Tucker-Lewis index (TLI), the root mean square error of approximation (RMSEA) and the standardised root mean square residual (SRMR).Acceptable fit is considered at a value of 0.90 and above for the CFI and TLI (Byrne, 2001;Hoyle, 1995).For the RMSEA, a value of 0.05 or less indicates a good fit, whereas values between 0.05 and 0.08 are considered to be an acceptable model fit (Browne & Cudeck, 1993).The cut-off point for SRMR was set at 0.05 (Hu & Bentler 1999).The Akaike information criterion (AIC) and Bayesian information criterion (BIC) were also used to compare the fit of competing models (i.e. the lowest AIC and BIC value indicated the best fitting model; Van de Schoot, Lugtig, & Hox, 2012).Cronbach's α coefficients were used to determine the reliability of the constructs. Measurement invariance was investigated based on campus and language groups.This was performed in Mplus by ascertaining the significance of the configural (similar factor structure), metric (similar loadings) and scalar (similar intercepts) models compared against each other.In instances where invariance tests are applied, a p > 0.05 is sought for the chi-square difference test to show that the models do not differ significantly. Pearson product-moment correlation coefficients were used to investigate the relationship between the latent variables.In terms of statistical significance, the cut-off value was set at the 95% level (p ≤ 0.05).Effect sizes were used to decide on the practical significance of the correlations (Steyn, 1999).A correlation of 0.30 and larger indicates a medium effect, whereas a correlation of 0.50 and larger indicates a large effect.Regressions were also added to create a structural model in order to investigate the hypothesised relationships between proactive SUDCO behaviour, burnout, engagement and life satisfaction. Results This section focuses on reporting the results for testing the factorial validity, measurement invariance based on campus and language groups, reliability and convergent and criterion validity.Results are presented in tables, followed by a description after each table. Factorial validity In order to determine the factorial validity of the proactive strengths use and deficit behaviour scales for students, CFA was used to test two competing measurement models.The first model was the hypothesised two-factor model consisting of strengths use behaviour (specified as the first dimension with five items loading on this factor) and deficit correction behaviour (specified as the second factor with five items loading on this factor).Competing was a one-factor model, where one factor was specified -the five strengths use and five deficit correction behaviour items loaded on a single factor.Table 1 displays the results after comparing the two-factor and one-factor measurement models. 
The results presented in Table 1 show that the two-factor model was the best fit for the data.This model fitted the data significantly better compared to the one-factor model (∆χ 2 = 508.89;∆df = 1; p < 0.05).These results offer support for Hypothesis 1 -that a two-factor structure will fit the data significantly better compared to a one-factor structure.Table 2 presents the results for the standardised loadings of the items for the latent variables. Table 2 indicates that the items loaded sufficiently on the respective factors.Standard errors were small, which indicates accurate estimations.For the strengths use behaviour factor, the smallest factor loading was for item 2 (0.62; 'I focus on the things I do well'), while the largest loading was for item 3 (0.79; 'I make the most of my strong points').For deficit correction behaviour, the smallest loading was for item 6 (0.61; 'I concentrate on my areas of development'), while the largest proved to be for item 7 (0.78; 'I focus on developing the things I struggle with'). Measurement invariance testing Invariance was tested between the different campuses and language groups.Three campus groups were included in the sample.The participants of each campus consisted of the following: Campus 1 (396 participants), Campus 2 (296 participants) and Campus 3 (73 participants).Because 73 participants from Campus 3 were not sufficient for a CFA model, invariance was only tested between Campus 1 and Campus 2. Invariance among the 12 language groups in the present study could not be determined, as there were not enough participants in each language group.Instead, the participants were divided into two groups.The first group, consisting of 335 individuals, was labelled 'Western Germanic'.This group consisted of English-and Afrikaans-speaking students.The second group were labelled 'African' and consisted of 443 participants.The results of the measurement invariance tests are reported in Table 3. As can be seen in Table 3, the two scales showed strong measurement invariance across all campuses, indicating no significant difference between metric against configural invariance (p = 0.35), scalar against configural invariance (p = 0.49) or scalar against metric invariance (p = 0.60).The two scales also showed strong measurement invariance for both Germanic and African language groups, indicating no significant difference between metric against configural invariance (p = 0.29), scalar against configural invariance (p = 0.16) and scalar against metric invariance (p = 0.16).These results confirm Hypothesis 2 -that the proactive SUDCO behaviour scales will demonstrate measurement invariance between different campuses and language groups of first-year students. Reliability coefficients, convergent validity and relationships with outcome variables Table 4 displays the Cronbach's α coefficients and the correlation matrix for the latent variables of the research model. Convergent validity was established, since significant and positive relationships were found -both strengths use behaviour and deficit correction behaviour were statistically significantly correlated with strengths use (r = 0.74; r = 0.56) and also with proactive behaviour (r = 0.51; r = 0.47).These results provide support for Hypothesis 4. 
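The model comparison and invariance results above rest on chi-square difference tests between nested models, and the RMSEA quoted for each model follows directly from its chi-square, degrees of freedom and sample size. A small Python illustration is given below; this is not Mplus output, and the RMSEA example uses hypothetical fit values only to show the call signature.

from scipy import stats
import numpy as np

def chi_square_difference_p(delta_chi2, delta_df):
    """p-value of the chi-square difference test for nested models."""
    return stats.chi2.sf(delta_chi2, delta_df)

def rmsea(chi2, df, n):
    """Root mean square error of approximation."""
    return np.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Two-factor vs. one-factor comparison reported in the text.
print(chi_square_difference_p(508.89, 1) < 0.05)     # True

# Hypothetical chi-square and df, with the sample size of this study.
print(round(rmsea(chi2=150.0, df=60, n=776), 3))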
Criterion validity To test for criterion validity, a structural model was tested where strengths use behaviour and deficit correction behaviour predicted student burnout, student engagement and life satisfaction.The fit of the structural model was satisfactory (χ² = 2035.762;CFI = 0.90; TLI = 0.89; RMSEA = 0.06; SRMR = 0.05).Table 5 displays the results. Discussion This study argues that two recently developed scales, proactive strengths use behaviour and proactive deficit correction behaviour, could be valuable to examine among first-year university students.However, these two scales were developed and validated for employees working in organisations and have not yet been validated in a sample of students.Therefore, the objective of this study was to validate the proactive SUDCO scales (Van Woerkom et al., 2016) in a sample of South African first-year university students.More specifically, the study aimed at providing evidence by investigating the factorial validity, measurement invariance, scale reliability, and convergent and criterion validity of these two scales. To examine the factorial validity, two competing measurement models were tested (a one-factor model vs. a two-factor model).The hypothesised two-factor model showed a significantly better fit to the data compared to a one-factor model, indicating that the two forms of proactive behaviour are distinct, although related.This result is in line with validation studies on employees, which also showed that these two scales are two distinct factors (Stander & Mostert, 2013;Van Woerkom et al., 2016). Measurement invariance is seen as a requirement for any study in a cross-cultural situation (He & Van de Vijver, 2012) and focus on the level of measurement at which scores across different groups can be compared (Van de Vijver & Tanzer, 2004).This study focused on the measurement invariance across two unique campuses and between two main language groups.The configural, metric and scalar models were compared against each other respectively.The language groups consisted of the Germanic and African groups, and students from two distinct campuses were included as part of the invariance testing.No significant differences were found.These results provide preliminary evidence that these two scales have the potential to be administered successfully to students from different groups in cross-cultural studies, particularly for campus and language groups.Furthermore, the conclusions concerning similarities and differences found in these types of studies can be considered valid with more confidence and not discriminatory towards a specific language group or between different campuses (He & Van de Vijver, 2012;Van de Vijver, 2011). In order to determine whether the proactive strengths use and deficit scales were reliable, Cronbach's α coefficients were calculated.Cronbach's α coefficients ≥ 0.70 were found for strengths use behaviour (α = 0.84) and for deficit correction behaviour (α = 0.84).Supporting results were found in the studies by Van Woerkom et al. (2016) and Stander and Mostert (2013), who found Cronbach's α values ≥ 0.90 for all four SUDCO scales.These results show promise that items consistently will measure the extent to which students apply proactive behaviour towards strengths use and deficit correction behaviour.The results can also be used for further studies aiming to investigate these constructs reliably among first-year students in a tertiary educational environment. 
The next objective was to determine the convergent validity of the proactive SUDCO behaviour scales by investigating the relationship between theoretically similar constructs (i.e.general strengths use and proactive behaviour).Pearson product-moment correlation coefficients showed that both strengths use behaviour and deficit correction behaviour were moderately to strongly and positively related to strengths use and proactive behaviour.As expected, the relationship between proactive behaviour towards strengths use and general strengths use was much stronger compared to the relationship between proactive behaviour towards deficit correction and general strengths use, while both scales were related with about equal strength to proactive behaviour. Finally, the criterion validity was examined by testing a structural model specifying the direct impact of proactive SUDCO behaviour on three relevant student outcomes, including student burnout, student engagement and life satisfaction.All the regression paths in the structural model were significant and in the expected direction. The results showed that both scales were significantly negatively related to burnout, with strengths use behaviour showing a stronger relation with burnout (β = -0.26)than deficit improvement behaviour (β = -0.16).It has been shown in previous studies that individuals' use of their strengths is associated with lower stress levels (Buick & Muthu, 1997;Proctor, Maltby, & Linley, 2011;Wood et al., 2011).This may be because individuals experience a higher level of perceived competence to perform in their studies when using their strengths.When students are able to use their strengths, they tend to feel more content and good about themselves and are therefore more motivated to fulfil their potential (Linley & Harrington, 2006;Seligman et al., 2005).In addition, when individuals improve and develop their perceived deficits, it may create a sense of mastery or accomplishment.Performing tasks that fall within one's area of deficits and improving these deficits can have a positive effect on goal achievement, which, in turn, increases feelings of competence, which can reduce the effects of burnout (Erickson & Grove, 2007;Maslach, 2006;Schaufeli & Peeters, 2000). With regards to the relationship with engagement, the results indicated that both SUDCO behaviours were significantly positively related to engagement.Interestingly, deficit correction behaviour had a stronger relationship with engagement (β = 0.34) compared to strengths use behaviour (β = 0.24).When students take the initiative to engage in activities that require continuous learning and place themselves in the position to practise skills and tasks in which they usually underperform, they will experience a sense of accomplishment in their studies, which in turn can lead to increased motivation and engagement.Wang, Cullen, Yao and Li (2013) found that first-year students who behave proactively in a university environment experienced higher engagement levels.This may be because it is arguable that educational settings are focused and organised in such a way as to enhance students' strengths and overcome their weaknesses.The authors viewed it as essential for students to work proactively on overcoming 'pessimistic tendencies' in order to become more engaged in their educational and social environment.Also, studies have shown that employees' engagement is directly related to the use of their strengths (Lopez, Hodges, & Harter, 2005). 
Finally, both SUDCO behaviours showed a significant and positive relationship with life satisfaction, although the relationship between strengths use behaviour and life satisfaction was much stronger (β = 0.38) than the relationship between deficit correction behaviour and life satisfaction (β = 0.16).The first-year student sample in the present study, who displayed self-starting behaviour to proactively use their areas of strengths, were most probably able to deal with challenges associated with the university environment and as a result experience higher levels of life satisfaction.This suggests that students who utilise their strengths and improve on their deficits will not only be able to deal with university challenges and stressors but also can have meaningful personal and study experiences (Seligman, 2011), which heighten their levels of life satisfaction.The study of Stander, Diedericks, Mostert and De Beer (2015) provides support for this finding by also demonstrating a positive predictive relationship between strengths use and life satisfaction.Additionally, the student sample studied by Rust et al. (2009) experienced significant increases in life satisfaction when improving their character strengths and weaknesses against a compared group who was not assigned to work on strengths and/or weaknesses.The group was required to keep weekly logs on how they used their strengths and tapped into opportunities to improve on weaknesses.The success of the second group was measured by the feasibility of the plans made in order to achieve this result, and the number of times they sought weekly feedback from trustees.Those who performed the abovementioned activities frequently experienced higher levels of life satisfaction. To conclude, the results of this study confirmed the validity of the proactive SUDCO scales, including factorial validity, measurement invariance, reliability and convergent and criterion validity.Both scales were significantly related to important student outcomes, including burnout, engagement and life satisfaction.Interestingly, compared to deficit correction behaviour, strengths use behaviour was more strongly related to burnout and life satisfaction, while deficit correction behaviour was more strongly related to engagement, compared to strengths use behaviour.This indicates that the two scales have different relationships with outcome variables.In general, this study provides a good foundation for future studies that want to examine proactive SUDCO behaviour among university students, specifically first-year students. 
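As a rough, purely illustrative analogue of the standardised paths discussed above (the study estimated them as latent-variable paths in Mplus, which the sketch below does not reproduce), standardised beta weights can be computed on observed scale scores by z-scoring the variables and fitting ordinary least squares; the data here are simulated.

import numpy as np

def standardised_betas(predictors, outcome):
    """OLS betas on z-scored variables; columns of `predictors` are the scales."""
    X = (predictors - predictors.mean(axis=0)) / predictors.std(axis=0, ddof=1)
    y = (outcome - outcome.mean()) / outcome.std(ddof=1)
    betas, *_ = np.linalg.lstsq(X, y, rcond=None)
    return betas

# Simulated scale scores: strengths use, deficit correction -> engagement,
# with generating coefficients chosen close to the reported paths.
rng = np.random.default_rng(1)
su, dc = rng.normal(size=(2, 200))
engagement = 0.24 * su + 0.34 * dc + rng.normal(scale=0.8, size=200)
print(standardised_betas(np.column_stack([su, dc]), engagement))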
Limitations and recommendations Although the present study makes valuable contributions to the measurement of SUDCO behaviour of first-year university students, some limitations and recommendations for future research should be mentioned.Firstly, the main focus of the study was on first-year university students.Future studies should also include students from different higher education tertiary institutions and students from different academic years.Secondly, a cross-sectional research design was used, which implies that the present study was restricted from exploring causal statements about the hypothesised relations to outcome variables.In order to draw more specific conclusions about the relation of SUDCO behaviour to student burnout, student engagement and life satisfaction, longitudinal research studies are recommended (Govindji & Linley, 2007).Thirdly, the present study could only investigate relations to three outcome variables, including burnout, engagement and life satisfaction.As the field of strengths use and deficit improvement is still relatively new (especially among students), it would also be valuable to investigate causal relationships of the SUDCO scales with other important outcome variables relevant to the student context, such as flourishing, well-being and academic performance.A fourth limitation was the use of a single selfreport questionnaire since common method variance between predictor and outcome variables might have occurred (Malhotra, Kim, & Patil, 2006).Future studies could consider using mixed methods to obtain richer data, including interviews, reflection diaries and focus groups. Practical implications Literature is readily available on first-year university student drop-out rates and the challenges they face.However, literature is limited on the role that students' SUDCO behaviour may have on their success and wellbeing.The findings of the present study can be used to help students obtain knowledge about the outcomes of being proactive in using strengths and deficits.The findings of the present research will also add value to universities and educators by providing a better understanding of what proactive behaviour towards SUDCO entails and whether students are demonstrating this behaviour.Universities may develop supporting structures and interventions and work in collaboration with educators to provide first-year students with opportunities to apply their strengths and develop their weaknesses and thereby enhance the process of adapting and coping with a new academic environment.The results of the present study can serve as a basis for programs aimed at (1) providing academic, social and personal support in the first year; (2) involving students in activities to help familiarise them with the university, and thus become effective learners (e.g.guiding students to connect to university life and committees in order to develop a sense of belonging, Tinto, 1999Tinto, , 2000;;Pitkethly & Prosser, 2001); (3) exposing students to the university's diverse groups in order to enhance their learning experience (Pitkethly & Prosser, 2001); (4) promoting effective, proactive and healthy ways to deal with university stress and demands; and (5) promoting increased performance, resilience, effective coping skills and positive reinforcement.These programmes might even be adapted to suit the needs of senior university students.This may lead to improved conditions for tertiary educators and students, as well as enhanced wellbeing and academic success among students. 
TABLE 1: Results of the measurement models.
TABLE 2: Standardised factor loadings of the items for the latent variables. *, p < 0.001; no cross-loadings of items between the different factors. SE, standard error.
TABLE 3: Results of the invariance testing based on campus and language.
TABLE 4: Cronbach's alpha coefficients and correlation matrix for the latent variables.
TABLE 5: Regression results for the structural model.
2018-12-11T17:35:14.505Z
2017-01-27T00:00:00.000
{ "year": 2017, "sha1": "3e9b642ab9a90f7740f26772c014163f17db5217", "oa_license": "CCBY", "oa_url": "https://sajip.co.za/index.php/sajip/article/download/1395/2071", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "3e9b642ab9a90f7740f26772c014163f17db5217", "s2fieldsofstudy": [ "Psychology", "Education" ], "extfieldsofstudy": [ "Psychology" ] }
246285793
pes2o/s2orc
v3-fos-license
Cold and Hot gas distribution around the Milky-Way-M31 system in the HESTIA simulations

Recent observations have revealed remarkable insights into the gas reservoir in the circumgalactic medium (CGM) of galaxy haloes. In this paper, we characterise the gas in the vicinity of Milky Way and Andromeda analogues in the HESTIA (High resolution Environmental Simulations of The Immediate Area) suite of constrained Local Group (LG) simulations. The HESTIA suite comprises a set of three high-resolution AREPO-based simulations of the LG, run using the Auriga galaxy formation model. For this paper, we focus only on the z = 0 simulation datasets and generate mock skymaps along with a power spectrum analysis to show that the distributions of ions tracing low-temperature gas (H I and Si III) are more clumpy in comparison to warmer gas tracers (O VI, O VII and O VIII). We compare to the spectroscopic CGM observations of M31 and low-redshift galaxies. HESTIA under-produces the column densities of the M31 observations, but the simulations are consistent with the observations of low-redshift galaxies. A possible explanation for these findings is that the spectroscopic observations of M31 are contaminated by gas residing in the CGM of the Milky Way.

INTRODUCTION

Our understanding of the tenuous gas reservoir surrounding galaxies, better known as the circumgalactic medium (CGM), has dramatically improved since its first detection, back in the 1950s (Spitzer Jr 1956; Münch & Zirin 1961; Bahcall & Spitzer Jr 1969). The CGM is a site through which pristine, cold intergalactic medium (IGM) gas passes on its way into the galaxy, and it is also the site where metal-enriched gas from the interstellar medium (ISM) gets dumped via outflows and winds (Anglés-Alcázar et al. 2017; Suresh et al. 2019). CGM gas is often extremely challenging to detect in emission due to its low column densities. Therefore, most of our knowledge about its nature stems from absorption-line studies (Tumlinson et al. 2017) of quasar sightlines passing through the CGM of foreground galaxies. Numerous studies through the last decade involving quasar absorption-line studies of various low and intermediate ions tracing a substantial range in temperatures and densities have revealed the complex, multiphase structure of the CGM (Nielsen et al. 2013; Tumlinson et al. 2013; Bordoloi et al. 2014; Richter et al. 2016; Lehner et al. 2018). Lehner et al. (2020) have gone a step further in quasar absorption-line studies by obtaining multi-ion deep observations of several sightlines heterogeneously piercing the CGM of a single galaxy (M31). Recent studies conclude that a significant percentage of galactic baryons could lie in the warm-hot virialized gas phase (Peeples et al. 2014; Tumlinson et al. 2017), increasingly emphasizing the importance of high ions in describing the CGM mass budget (Tumlinson et al. 2017). O VI, which is an important tracer of the warm-hot CGM (T ∼ 10^5.5 K), has been detected in gas reservoirs around star-forming galaxies in the far UV (Tumlinson et al. 2011). Even hotter CGM gas, traced primarily by O VII and O VIII, has been detected around galaxies in X-ray studies (Das et al. 2019b; Das et al. 2020). Apart from these high ions, Coronal Broad Lyman alpha absorbers could also contribute towards constituting the hot CGM (Richter 2020).
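Because the CGM is probed almost exclusively in absorption, the quantity compared between simulations and observations throughout this paper is the ion column density, i.e. the ion number density integrated along a sightline. The short Python sketch below only illustrates this bookkeeping; the densities and path lengths are placeholder values, and in practice the ion fractions of the simulation cells come from ionisation modelling that is not shown here.

import numpy as np

def column_density(n_ion, path_length):
    """N_ion = sum over cells of ion number density [cm^-3] times path [cm]."""
    return np.sum(np.asarray(n_ion) * np.asarray(path_length))

# Placeholder sightline: ion densities and path lengths per intersected cell.
kpc_in_cm = 3.086e21
n_OVI = np.array([1e-9, 5e-10, 2e-10])          # cm^-3, hypothetical values
dl = np.array([10.0, 25.0, 40.0]) * kpc_in_cm   # cell path lengths in cm
print("log10 N(OVI) =", np.log10(column_density(n_OVI, dl)))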
Significant progress is also being made via systematic CGM studies targeting diverse galaxy samples which provide insightful views into the synergy between the CGM and the evolution of its host galaxy. The presence of warm gas clouds around late-type galaxies at low redshift (Stocke et al. 2013), the impact of starbursts (Borthakur et al. 2013) and AGN (Berg et al. 2018), evidence of a bimodal metallicity distribution in the form of metal-poor, pristine and metal-rich, recycled gas streams (Lehner et al. 2013) have given us a peek into the interplay between the CGM and its parent galaxy. The theory of galactic winds injecting metal-rich gas from the ISM out to the CGM (Hummels et al. 2013;Ford et al. 2013a) is now being supported by observational evidence (Rupke et al. 2019). Despite all the advancements in the past few years, limited sightline observations and our technological inability to probe substantially lower column densities in the CGM of other galaxies indicate that we cannot yet fully rely solely on these studies to give us a complete picture of the workings of the CGM (Tumlinson et al. 2017). Therefore, studying the MW and the LG CGM (which will always have better CGM datasets as compared to those for non-LG galaxies) assumes a great importance in this context. High-velocity, warm O gas has been observed extensively around the MW Savage et al. 2003;Wakker et al. 2003). HST UV spectra of a list of low and intermediate ions have further helped us track the expanse of high-velocity clouds (HVCs) around our galaxy (Lehner et al. 2012;Herenz et al. 2013). A low-velocity, cool-ionized CGM component has also been detected recently around our MW (Zheng et al. 2019;Bish et al. 2021). Additionally, a long hypothesised hot, diffuse galactic gas phase (Gupta et al. 2012) has been observed using the highly ionized O and O ions (Miller & Bregman 2015;Das et al. 2019a). While the observations of our own galaxy's CGM certainly provide us with more sightlines and enable us to detect slightly lower column densities as compared to other galaxies' CGMs, galactic CGM observations are fraught with a greater possibility of contamination from sources lying in the line-of-sight of our observations, thereby masking the true nature of our galaxy's CGM. With the advent of the above observations, complementary studies with regards to the CGMs around the galaxies, generated using cosmological galaxy formation simulations, started gaining momentum (Vogelsberger et al. 2020). Cosmological simulations, in general, have been extremely successful in replicating many pivotal observational properties central to the current galaxy formation and evolution model (Vogelsberger et al. 2014a,b). These include galaxy morpholo-gies (Ceverino et al. 2010;Aumer et al. 2013;Marinacci et al. 2014;Somerville & Davé 2015;Grand et al. 2017), galaxy scaling relations (Booth & Schaye 2009;Angulo et al. 2012;Vogelsberger et al. 2013), * / halo relationship (Behroozi et al. 2010;Moster et al. 2013), and star formation in galaxies (Behroozi et al. 2013;Agertz & Kravtsov 2015;Sparre et al. 2015;Furlong et al. 2015;Sparre et al. 2017;Donnari et al. 2019). Like observations, cosmological simulations provide different approaches to quantify the typical baryon and metal budgets of galaxies (Ford et al. 2014;Schaye et al. 2015;Suresh et al. 2016;Hani et al. 2019;Tuominen et al. 2021). 
They reveal how the motion of gas manifests itself in various forms like inflow streams from the IGM, replenished outflows from the galaxy out to its CGM, stellar winds, or supernova and AGN feedback (Wright et al. 2021; Appleby et al. 2021). Given that the computational studies of the CGM have provided an enormous insight into the evolution of galaxies, it is worthwhile to look back to our local environment, i.e. the Local Group (LG). Apart from tracking the formation history of MW-M31 (Ibata et al. 2013; Hammer et al. 2013; Scannapieco et al. 2015) and the accretion histories of MW-like galaxies (Nuza et al. 2019), our LG, over the past decade, has proved to be an ideal site for studies involving ΛCDM model tests (Klypin et al. 1999; Wetzel et al. 2016; Lovell et al. 2017), dwarf galaxy formation and evolution (Tolstoy et al. 2009; Garrison-Kimmel et al. 2014; Pawlowski et al. 2017; Samuel et al. 2020), effects of environment on star formation histories of MW-like galaxies, local universe re-ionization (Ocvirk et al. 2020), and the cosmic web (Nuza et al. 2014b; Forero-Romero & González 2015; Metuki et al. 2015). Observational constraints of the Local Universe have resulted in an emergence of constrained simulations, where the large-scale structure resembles the observations (Nuza et al. 2010; Libeskind et al. 2011; Knebe et al. 2011; Di Cintio et al. 2013; Nuza et al. 2013). It is also worthwhile to note that such LG constrained simulations might be the setups best equipped to separate out any sources of possible contamination towards the MW CGM. A simulation of a Milky-Way-like galaxy in a constrained environment was done by the CLUES (Constrained Local UniversE Simulations) project, which was one of the first cosmological simulations to include a realistic local environment within the large-scale LG structure. Nuza et al. (2014a) carried out a study on the z = 0 gas distribution around MW and M31 in the CLUES simulation to characterize the effect of cosmography on the LG CGM. They analysed the cold and hot gas phases, computed their masses and accretion/ejection rates, and later compared their results with the absorption-line observations from Richter et al. (2017). We build upon the approach adopted in Nuza et al. (2014a) by analysing the constrained HESTIA LG simulations (Libeskind et al. 2020), which in comparison to the original CLUES simulations have better constrained initial conditions. In HESTIA we, furthermore, use the Auriga galaxy formation model (Grand et al. 2017), which produces realistic Milky-Way-mass disc galaxies. In comparison to the previous CLUES simulations, we carry out a more extensive analysis to predict column densities of a range of tracer ions (H I, Si III, O VI, O VII and O VIII) selected to give a complete view of the various gas phases in and around the galaxies. This helps us, for example, with the interpretation of absorption studies of the LG CGM gas.

The aim of this paper is to provide predictions for absorption-line observations of the gas in the LG. We achieve this by studying the gas around LG galaxies in the state-of-the-art constrained magnetohydrodynamical (MHD) simulations, HESTIA (High resolution Environmental Simulations of The Immediate Area). The comparison between HESTIA and some of the recent observations makes it possible to constrain the galaxy formation models of our simulations. This paper is structured as follows: § 2 describes the analysis tools and the simulation.
We present our results in § 3, which include Mollweide projection maps ( §3.1), power spectra ( §3.3) and radial column density profiles ( §3.4). We compare our results with some of the recent observations and other simulations in §3.5 and 3.6. Further, we discuss the implications of our results in the context of current theories about CGM and galaxy formation and evolution in §4. We also analyse the possibility of MW's CGM gas interfering with M31's CGM observations in §4.1. Finally, we sum up our conclusions and provide a quick note about certain caveats and ideas to be implemented in future projects ( §5). METHODS We use three high resolution realizations from the suite, a set of intermediate and high resolution cosmological magnetohydrodynamical constrained simulations of the LG, analysed only at the present time ( = 0). The project is a part of the larger CLUES collaboration Libeskind et al. 2010;Carlesi et al. 2016), whose principal aim is to generate constrained simulations of the local universe in order to match the mock observational outputs with real observations from our galactic neighbourhood. The following subsection summarises the technical specifications of these simulations. A more extensive description of the simulations can be found in the official release paper (Libeskind et al. 2020). Initial Conditions The small scale initial conditions are obtained from a sampling of the peculiar velocity field. The CosmicFlows-2 catalog ), used to derive peculiar velocities, provides constraints up to distances farther than that was available for the predecessor CLUES simulation. Reverse Zel'dovich technique (Doumler et al. 2013) handles the cosmic displacement field better, hence offering smaller structure shifts. A new technique, bias minimisation scheme , has been employed for simulations to ensure that the LG characteristic objects (e.g. Virgo cluster) have proper mass. The above mentioned new elements (see for further details) in conjunction with the earlier aspects of constrained realization (Hoffman & Ribak 1991) and Wiener Filter (Sorce et al. 2013) offer a clear edge over the previous generation CLUES simulations. Low-resolution, constrained, dark-matter only simulations are the fields from which halo pairs resembling our LG were picked up for intermediate and high resolution runs. Note that only the highest resolution realizations (those labelled 09−18, 17−11 and 37−11) are used for our analysis in this paper. The first and second numbers in the simulation nomenclature represent the seed for long and short waves, respectively, both of which together constitute to the construction of the initial conditions. Two overlapping 2.5ℎ −1 Mpc spheres centred on the two largest = 0 LG members (MW and M31) represent the effective high resolution fields which are populated with 8192 3 effective particles. The mass resolution for the DM particles (gas cells) in the high-resolution simulations is 1.5 × 10 5 M (2.2 × 10 4 M ), while the softening length ( ) for the DM is 220 pc. While the entire process of selecting cosmographically correct halo pairs involves handpicking MW-M31 candidates with certain criteria (halo mass, separation, isolation) that lie within the corresponding observational constraints, there are yet a few other bulk parameters ( * vs halo , circular velocity profile) and dynamical properties (total relative velocities) which are organically found to agree well with observations (Guo et al. 2010;Van der Marel et al. 2012;McLeod et al. 2017). 
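The pair selection sketched above (handpicking MW-M31 candidates with criteria on halo mass, separation and isolation) can be illustrated with a short filter over a halo catalogue. The mass window, separation range and isolation radius below are placeholders for illustration, not the actual criteria used for the constrained runs.

```python
# Illustrative sketch of an LG-like pair selection over a low-resolution halo
# catalogue, as described above. The thresholds are placeholders, not the
# actual selection criteria used for the simulations.
import numpy as np

def select_lg_pairs(pos, m200, mass_range=(8e11, 3e12),
                    sep_range=(0.5, 1.2), isolation_radius=2.0):
    """Return index pairs (i, j) of haloes that form an LG-like pair.

    pos  : (N, 3) comoving positions in Mpc
    m200 : (N,) halo masses in Msun
    """
    candidates = np.where((m200 > mass_range[0]) & (m200 < mass_range[1]))[0]
    pairs = []
    for a, i in enumerate(candidates):
        for j in candidates[a + 1:]:
            sep = np.linalg.norm(pos[i] - pos[j])
            if not (sep_range[0] < sep < sep_range[1]):
                continue
            # Isolation: no halo at least as massive as the lighter member
            # within `isolation_radius` of the pair midpoint.
            mid = 0.5 * (pos[i] + pos[j])
            d = np.linalg.norm(pos - mid, axis=1)
            neighbours = d < isolation_radius
            neighbours[[i, j]] = False
            if not np.any(m200[neighbours] >= min(m200[i], m200[j])):
                pairs.append((i, j))
    return pairs
```

In practice such a filter would be run on the low-resolution, dark-matter-only constrained fields mentioned above, before re-simulating the selected pairs at higher resolution.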
Galaxy formation model The moving-mesh magneto-hydrodynamic code, (Springel 2010;Pakmor et al. 2016), has been employed for the higher resolution runs. , which is based on a quasi-Lagrangian approach, uses an underlying Voronoi mesh (in order to solve the ideal MHD equations) that is allowed to move along the fluid flow, thus seamlessly combining both Lagrangian as well as Eulerian features in a single cosmological simulation. The code follows the evolution of magnetic fields with the ideal MHD approximation (Pakmor et al. 2011;Pakmor & Springel 2013) that has been shown to reproduce several observed properties of magnetic fields in galaxies (Pakmor et al. 2017;Pakmor et al. 2018) and the CGM . Cells are split (i.e. refined) or merged (i.e. de-refined) whenever the mass of a particular mesh cell varies by more than a factor of two from the target mass resolution. We adopt the Auriga galaxy formation model (Grand et al. 2017). A two-phase model is used to describe the interstellar medium (ISM), wherein a fraction of cold gas and a hot ambient phase is assigned to each star-forming gas cell (Springel & Hernquist 2003). This twophase model is enabled for gas denser than the star formation threshold (0.13 cm −3 ). Energy is transferred between the two phases by radiative cooling and supernova evaporation, and the gas is assumed to be in pressure equilibrium following an effective equation of state (similar to fig. 4 in Springel et al. 2005). Stellar population particles are formed stochastically from star-forming cells. Black holes (BH) formation and their subsequent feedback contributions are also included in the Auriga framework. Magnetic fields are included as uniform seed fields at the beginning of the simulation runs ( = 127) with a comoving field strength of 10 −14 G, which are amplified by an efficient turbulent dynamo at high redshifts (Pakmor et al. 2017). Gas cooling via primordial and metal cooling (Vogelsberger et al. 2013) and a spatially uniform UV background (Faucher-Giguère et al. 2009) are included. Our galaxy formation model produces a magnetized CGM with a magnetic energy, which is an order of magnitude below the equipartition value for the thermal and turbulent energy density . In our galaxy formation model, the CGM experiences heating primarily from sources such as SNe Type II, AGN feedback (see fig. 17 in Grand et al. 2017), stellar winds and time-dependent spatially uniform UV background. Stellar and AGN feedback are especially important since they heat and deposit a substantial amount of metals as well as some baryonic material into the CGM (Vogelsberger et al. 2013;Bogdán et al. 2013). We do not include extra-planar type Ia SNe or runaway type II SNe. We expect the uncertainty due to not including these in our physics model to be extremely small with respect to that due to treating the ISM with an effective equation of state (see for example fig. 10 in Marinacci et al. 2019). Quasar mode feedback is known to suppress star formation in the inner disc of galaxies (particularly relevant at early times) while the radio mode feedback is known to control the ability of halo gas to cool down efficiently at late times (hence relevant in the context of this study). In general, radio mode feedback is instrumental in Table 1. Properties of MW and M31 analogues at = 0 for the three LG simulations. The simulations are referred to as 09−18, 17−11 and 37−11, following the nomenclature used in (Libeskind et al. 2020, see also § 2.1.1). 
We show the LG distance (defined as the distance of a galaxy from the geometric centre of the line that connects MW and M31), the mass in stars and gas bound to each galaxy, and 200 and 200 of each galaxy. SFR is the star formation rate for all the gas cells within twice the stellar half mass radius. SFR / is the SFR-weighted gas metallicity, normalized with respect to the solar metallicity. We also list the observational estimates for MW from Bland-Hawthorn & Gerhard (2016) We use the halo finder (Springel et al. 2001;Dolag et al. 2009;Springel et al. 2021) to identify galaxies and galaxy groups in our analysis. When the simulations were run, black holes were seeded in haloes identified by . Our Table 1 lists key properties for the three realizations of the MW-M31 analogues, which we consider in this paper. We define 200 as the radius within which the spherically averaged density is 200 times the critical density of the universe. 200 is the total mass within 200 . The overall 200 , * , gas and 200 values for our MW-M31 analogues are broadly consistent with typical observational estimates (see fig. 7 in Libeskind et al. 2020; see also Bland-Hawthorn & Gerhard 2016;Yin et al. 2009). Among the two most massive galaxies in each of our LG simulations, the galaxy with a larger value of 200 is identified as M31, while the other galaxy is identified as MW. Global properties of the analogues The MW analogues reveal BH values an order of magnitude larger than that stated in the observations of Bland-Hawthorn & Gerhard (2016). This does not, however, necessarily mean that the AGN feedback has been too strong during the simulations, because we see realistic MW stellar masses at = 0. For the CGM, which we study extensively in this paper, the overestimated MW BH masses therefore do not necessarily indicate too strong AGN feedback. We also note that our MW analogues are still consistent with MW-mass galaxies (see fig. 5 of Savorgnan et al. 2016). Similarly, the SFR at = 0 is also comparable to or larger than observed. We note that the SFR of M31 is larger by a factor of a few in in comparison to observations. The generation of winds is closely tied to the SFR in our simulations, so it is possible that the role of outflows is over-estimated by in comparison to the = 0 observations of M31. Integrated over the lifetime of the galaxies, does, however, produce realistic stellar masses at = 0 1 . We, therefore, do not regard the discrepancy between the = 0 SFR as more problematic than the uncertainty already in place by using an effective model of winds, or, for example, by the simulated M31 galaxies having different merger histories or disc orientations than the real M31. In comparison to the SFR values from Bland-Hawthorn & Gerhard 2016, other observational studies report slightly larger SFR values for MW (1-3 M yr −1 , 3-6 M yr −1 , 1-3 M yr −1 , 1.9 ± 0.4 M yr −1 : McKee & Williams 1997;Boissier & Prantzos 1999;Wakker et al. 2007Wakker et al. , 2008Chomiuk & Povich 2011), but these are nevertheless lower than the values. We also notice that the MW analogue in the 09−18 simulation exhibits a substantially higher SFR than others. However, all * /SFR values (except those for the MW analogue in 09−18 simulation) for our sample are still well within the observational constraints of normal star-forming galaxies with masses comparable to the MW and M31 (see fig. 8 in Speagle et al. 2014). 
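The comparison of M*/SFR against normal star-forming galaxies mentioned above can be illustrated with a short specific-star-formation-rate check. The galaxy values and the reference main-sequence sSFR below are placeholders (Table 1 itself is not reproduced here), and the published comparison relies on fig. 8 of Speagle et al. (2014) rather than this simple threshold.

```python
# Sketch of the M*/SFR sanity check described above. Galaxy values and the
# reference main-sequence sSFR are illustrative placeholders only.
import numpy as np

galaxies = {
    # name: (stellar mass [Msun], SFR [Msun/yr])  -- placeholder values
    "MW_09-18":  (6.0e10, 8.0),
    "M31_09-18": (1.0e11, 6.0),
    "MW_17-11":  (5.5e10, 3.0),
}

ssfr_ms = 1.0e-10     # assumed typical z = 0 main-sequence sSFR [yr^-1]
tolerance_dex = 0.5   # allow a factor ~3 scatter around the main sequence

for name, (mstar, sfr) in galaxies.items():
    ssfr = sfr / mstar
    offset = np.log10(ssfr / ssfr_ms)
    print(f"{name}: M*/SFR = {mstar / sfr:.2e} yr, "
          f"sSFR offset = {offset:+.2f} dex, "
          f"within main-sequence scatter: {abs(offset) < tolerance_dex}")
```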
Thus, overall, the galaxies seem to be slightly more star-forming in comparison to the observations, but this does not induce larger uncertainties in our analysis than those already present due to the multiple other factors highlighted earlier. (A too-high star formation rate could, in principle, be compensated by too strong AGN feedback; this would result in a stellar mass consistent with observations but at the same time a too massive BH mass. Addressing such a hypothesis would require running additional simulations, which is beyond the scope of this paper.) We also note that the SFR-averaged gas metallicity is consistent with the M31 measurement in Sanders et al. (2012).

Analysis

In this section, we describe the methodology adopted to compute the ion fractions in the CGM, the underlying assumptions, their possible effects on the interpretation of our results, and the process of creating Mollweide maps from the computed ion fractions. We make use of the photo-ionization code Cloudy to obtain ionization fractions for the tracer ions H I, Si III, O VI, O VII and O VIII, and we generate Mollweide projection maps using the healpy package to create mock observations.

Ionization modelling with Cloudy

Two principal ionization processes in the CGM and IGM are collisional ionization and photo-ionization (Bergeron & Stasinska 1986; Prochaska et al. 2004; Turner et al. 2015). An equilibrium scenario is generally assumed for both processes, resulting in collisional ionization equilibrium (CIE) and photo-ionization equilibrium (PIE). Such a two-pronged approach to the ionization modelling has to date proved sufficient to explain the co-existence of both high and low ions in different phases at the same time within a common astrophysical gas environment. Generally, high ions (e.g., the higher ionization states of O, Ne and Mg) are found to be better modelled via CIE, while the low and intermediate ions (e.g., low ionization states of Fe, N and S) lend themselves better to PIE, owing to the temperatures in the various gas phases and the strength and shape of the UV background field. CIE, which assumes that the ionization is mainly carried by electrons, can be well characterised using the relation

f_{\rm H\,I,coll} = \frac{\alpha_{\rm H}(T)}{\alpha_{\rm H}(T) + \Gamma_{\rm H}(T)},

where f_{\rm H\,I,coll} is the neutral hydrogen fraction in CIE, \alpha_{\rm H}(T) is the temperature-dependent recombination rate of hydrogen and \Gamma_{\rm H}(T) is the collisional ionization coefficient, both for hydrogen. PIE, on the other hand, assumes photons to be the primary perpetrators and can be better described as

f_{\rm H\,I,photo} = \frac{n_{\rm e}\,\alpha_{\rm H}(T)}{\Gamma_{\rm H\,I}},

where f_{\rm H\,I,photo} is the neutral hydrogen fraction in PIE, n_{\rm e} is the electron density and \Gamma_{\rm H\,I} is the photo-ionization rate. We determine the ionization fractions using the Cloudy code (version C17; Ferland et al. 2017), which is designed to model photo-ionization and photo-dissociation processes by including a wide combination of temperature-density phases for a list of elements, in order to simulate complex astrophysical environments realistically and produce mock parameters and outputs. The temperature of each gas cell is given as input to Cloudy (in practice, we use lookup tables to speed up the calculation, see below), which determines the ionization state in post-processing. For star-forming gas cells we directly set all atoms to be neutral, because most of the mass is in the cold phase. We include both CIE and PIE in the modelling code. The UV background from Faucher-Giguère et al. (2009) is used. Self-shielding prescriptions, in particular for H I gas in denser regions, are adopted from Rahmati et al. (2013). We do not include AGN continuum radiation for the sake of simplicity.
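As a concrete illustration of the two equilibrium relations above, the following sketch evaluates the CIE and PIE neutral-hydrogen fractions. The analytic rate coefficients and the assumed photo-ionization rate are rough approximations used only for illustration, not the detailed atomic data employed by Cloudy in the paper.

```python
# Minimal sketch of the CIE and PIE neutral-hydrogen fractions given by the
# two relations above. The rate coefficients are common analytic
# approximations (assumptions for illustration only).
import numpy as np

def alpha_H(T):
    """Approximate case-B recombination coefficient [cm^3 s^-1]."""
    return 2.59e-13 * (T / 1.0e4) ** -0.7

def gamma_coll_H(T):
    """Approximate collisional ionization rate coefficient [cm^3 s^-1]."""
    return 5.85e-11 * np.sqrt(T) * np.exp(-157809.1 / T)

def f_HI_cie(T):
    """Neutral fraction in collisional ionization equilibrium."""
    return alpha_H(T) / (alpha_H(T) + gamma_coll_H(T))

def f_HI_pie(T, n_e, Gamma_phot=1.0e-12):
    """Neutral fraction in photo-ionization equilibrium (highly ionized limit).
    Gamma_phot is an assumed z~0 UV-background photo-ionization rate [s^-1]."""
    return n_e * alpha_H(T) / Gamma_phot

for T in (1e4, 1e5, 1e6):
    print(f"T = {T:.0e} K: f_HI(CIE) = {f_HI_cie(T):.2e}, "
          f"f_HI(PIE, n_e = 1e-4 cm^-3) = {f_HI_pie(T, 1e-4):.2e}")
```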
While excluding the AGN radiation might affect the ion fractions in regions close to the galaxy (e.g. ISM), it is much less likely to have any dominant impact in the CGM. Our modelling is identical to that introduced in Hani et al. (2018), with the only difference that we use a finer resolution grid for the output tables. In our analysis, we impose a metallicity floor of 10 −4.5 to avoid metallicity values lower than those present in our tables. Note that we do not include photo-ionization from stars or AGN in this work. For this paper, we focus on the five tracer ions listed in Table 2 for which we generate mock observables; two of which are largely representative of the cold and cool-ionized ( ∼ 10 4 − 10 5 K) gas (H and Si ) and the three ions representative of the warm-hot ( > 10 5.5 K) gas (O , O and O ). These five ions have a host of robust corresponding observational CGM data as well (e.g. Liang & Chen 2014;Werk et al. 2014;Johnson et al. 2015;Richter et al. 2016Richter et al. , 2017Lehner et al. 2020). Si may also be produced by photoionization at a much lower temperature than 10 5 K. However, neither does our ionization modeling include photoionization from stars nor is it optimal in describing gas colder than 10 4 K. Therefore, this remains an uncertainty in our ionization modelling. The overall ion abundances are naturally depending on the gas metallicity distribution in . In Appendix A we, therefore, derive radial gas metallicity profiles for the simulated MW and M31 galaxies (see Fig. A1). We conclude that the disc gas metallicity in is up to 3 times higher than realistic MW-and M31-mass galaxies (Sanders et al. 2012;Torrey et al. 2014). The gas metallicity profile of the CGM of MW and M31 is not well constrained observationally, but we speculate the might as well have a slightly too high gas metallicity there. We will keep this in mind when comparing our simulations to observations (see Sec. 3.5.3). Small numbers indicate the location of galaxies other than M31 and MW. All dense H absorbers with HI > 10 20 cm −2 are associated with a galaxy. The distributions of the oxygen ions tracing warmer gas reveal a less clumpy and more spherical distribution around the massive galaxies. Note that the actual spatial extent spanned by galaxy number 7 in the 17−11 realization is far smaller as compared to its projected spatial extent in the corresponding skymap ( Fig. 1; see also § 3.2). Generation of skymaps The analysis in this paper extensively uses skymaps showing the column density distribution of the different ions. To define the unit vectors characterising each sightline, we use the Mollweide projection functionality from the package (Zonca et al. 2019), associated to the HEALP -scheme (Górski et al. 2005). Each HEALP sphere consists of a set of pixels (12 pixels in three rings around the poles and equator) that give rise to a base resolution. The grid resolution, side , denotes the number of divisions along the side of each base-resolution pixel. The total number of equal-area (Ω pix ) pixels, pix , can be expressed as, pix = 12 2 side . The area of each pixel is, Ω pix = /(3 2 side ) and the angular resolution per pixel is, pix . We select side = 40 for all the Mollweide projection plots in this paper. This yields the total number of pixels (which we hereafter refer to as sightlines) pix = 19, 200, and an angular resolution Θ pix = 1.46 • . 
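A minimal sketch of the sightline construction summarised above and detailed in the next paragraph: the n_side = 40 pixels define the directions, each sightline is sampled on evenly spaced gridpoints, and the ion density at every gridpoint is taken from the nearest gas cell before being summed into a column density. The use of healpy's pix2vec and scipy's cKDTree here is an assumed, convenient implementation of the pixelization and nearest-neighbour lookup named in the text.

```python
# Sketch of the sightline construction: 12 * n_side^2 pixels define the
# directions, each sightline is sampled on evenly spaced gridpoints, and the
# nearest gas cell supplies the ion density at every gridpoint.
import numpy as np
import healpy as hp
from scipy.spatial import cKDTree

KPC_TO_CM = 3.0857e21

n_side = 40
n_pix = 12 * n_side**2                                    # 19,200 sightlines
ang_res_deg = np.degrees(np.sqrt(np.pi / 3.0) / n_side)   # ~1.46 degrees

def column_density_map(cell_pos_kpc, cell_ion_density_cm3,
                       length_kpc=700.0, n_grid=50_000):
    """HEALPix map of ion column densities [cm^-2], as seen from the origin
    of `cell_pos_kpc` (already shifted to the chosen observer position)."""
    tree = cKDTree(cell_pos_kpc)
    radii = np.linspace(0.0, length_kpc, n_grid)   # grid spacing ~14 pc
    dr_cm = (radii[1] - radii[0]) * KPC_TO_CM
    cmap = np.zeros(n_pix)
    for ipix in range(n_pix):
        unit_vec = np.array(hp.pix2vec(n_side, ipix))
        gridpoints = radii[:, None] * unit_vec[None, :]
        _, nearest = tree.query(gridpoints)        # nearest-cell lookup
        cmap[ipix] = np.sum(cell_ion_density_cm3[nearest]) * dr_cm
    return cmap
```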
Each sightline starts in ( , , ) = (0, 0, 0) (we shift our coordinates to our desired origin; see § 3 for further discussion) and ends 700 kpc away in the direction of the unit vector defined by the HEALPix-pixel. A sightline is binned into 50,000 evenly spaced gridpoints, so we get a grid-size of 14 pc. At each gridpoint we set the ion density equal to the value of the nearest gas cell (we use the -function, KDTree, to determine the nearest neighbour). The projected ion column density for a sightline is then calculated by summing the respective ion number densities over the grid-points constituting a sightline. Skymaps We create skymaps centred on the geometric centre of the LG, which we define to be the midpoint between MW and M31. Based on such skymaps we will later compute projected column density profiles of M31 (see § 3.4), which makes it possible to directly compare our simulations to observations. A similar frame-of-reference also proved useful for Nuza et al. (2014a); though they used it to obtain plots for studying the entire LG but did not produce whole skymaps from this point. (Table 1). All ions reveal an over-density centred on the MW and M31. In this order, the ions trace gradually warmer gas, and it is, therefore, not surprising that we see a gradually more diffuse distribution in the projection maps. H and Si are much more centred on the inner parts of the haloes in comparison to O , O and O , the latter filling the space all way out to 200 (and even beyond). 200 is shown as dashed circles around MW and M31 -note, that a circle in a Mollweide projection appears deformed. We see that all dense gas blobs with HI > 10 20 cm −2 are associated with regions overlapping with the galaxies from our catalogue (see Appendix B). It fits well with the expectation that such high column densities are typically associated with the ISM of galaxies. The CGM regions of the MW and M31 analogues show a rich structure of H -features. In 09−18, the M31 analogue, for example, reveals a bi-conical structure, characteristic of galaxy outflows. Many of the extended, diffuse gas streams (particularly in Si , but also in H ) go far beyond the haloes of the MW and M31 analogues. We see varying distributions of Si gas across the three simulations in corresponding skymaps. While the 17−11 and 09−18 simulations show an excess of higher column density, clumpy Si , 37−11 shows an excess of lower column density, diffuse Si (See § 3.3.3 and 3.4 for further discussion). Smaller stellar mass and 200 values for MW-M31 in case of 37−11 (in comparison to the other two simulations) could be one possible reason for such a heterogeneity across the Si distributions. Satellite galaxies in the LG The satellite galaxies in the simulations have been marked with a galaxy number in each panel, and their properties are summarised in the catalogue tables in Appendix B. We include those galaxies which have gas > 0 (as identified by the halo finder) and are within 800 kpc of the LG centre. The 800 kpc cutoff is slightly larger in comparison to the 700 kpc cutoff, used when generating the skymaps; we have chosen this slightly larger cutoff for the satellite galaxy catalogue to ensure that all galaxies contributing to the skymaps are included. Below, we show that all dense H blobs are associated with a galaxy from our catalogues, so a 800 kpc cutoff sufficiently selects all the galaxies contributing to the skymaps. The satellite galaxies are generally more prominent in H and Si in comparison to the higher ions. 
Galaxy 12 from the 37−11 simulation does, however, reveal significant amounts of O , O and O . This satellite has a stellar mass of * = 3.2 × 10 9 , which is comparable to the LMC galaxy in the real LG (it has * = 3 × 10 9 following D'Onghia & Fox 2016). Recently, it has been suggested that the LMC galaxy may have a warm-hot coronal halo (Wakker et al. 1998;Lucchini et al. 2020) that is responsible for the presence and spatial extent of the Magellanic Stream. Adams et al. (2013) presents a study of 59 ultra compact highvelocity clouds (UCHVCs) from the ALFALFA H survey while Giovanelli et al. (2013) reports the discovery of a low-mass halo in the form of a UCHVC. From both these studies, a common conclusion emerges: low-mass, gas-rich halos (detected in the form of Compact HVCs/UCHVCs), lurking on the fringes of the CGMs of massive galaxies in our Local Volume (MW-M31, for example), are more likely to be discovered through their baryonic content (traceable primarily via H ). Other observational papers (de Heij et al. 2002;Putman et al. 2002;Sternberg et al. 2002;Maloney & Putman 2003;Westmeier et al. 2005b,a), based on objects detected around MW and M31, also support this hypothesis. We find similar H column densities (∼ 10 19 cm −2 ) at proj 200 ( 200 kpc), as reported in the above observations. One can also very clearly notice the presence of such small halos in our skymaps. Hence, we can safely conclude that our results also support the existence of low-mass halos at circumgalactic distances. Ram pressure stripping in the LG Many of the satellite systems show disturbed H and Si gas distributions (see the satellite galaxies with galaxy numbers 4 and 13 for simulation 09−18; 2, 5 and 7 for simulation 17−11; 2 for simulation 37−11) to varying degrees. The satellite galaxies' proximity to either of MW or M31 certainly plays a pivotal role (as do their own kinematic motions through their surroundings) in producing ram pressure stripping in their ISMs as well as generating asymmetries in their respective CGMs Hausammann et al. 2019). The ions tracing the warmer gas appear to be less sensitive to such disturbances. In the context of galaxy clusters, ram-pressure stripping of the ISM gas is an important process in quenching galaxies (Gunn & Gott 1972;Abadi et al. 1999). Jellyfish galaxies are examples of galaxies experiencing such stripping, where the ram-pressure from intracluster gas strips and disturbs the ISM of star-forming galaxies (Poggianti et al. 2017;Cramer et al. 2019). Such galaxies have long extended tails, which are stabilised by radiative cooling and a magnetic field (Müller et al. 2020). Given the many disturbed galaxies with extended tails in our simulations, we argue that observations of such galaxies may provide insights into the same processes, which are usually studied in jellyfish galaxies in galaxy clusters. It would specifically be interesting, if such examples of jellyfish galaxies in the LG could be used to provide insights into the growth of dense gas in the galaxy tails. The growth of dense gas in such a multiphase medium has recently been intensively studied in hydrodynamical simulations of a cold cloud interacting with a hot wind (Gronke & Oh 2018;Li et al. 2019;Sparre et al. 2020;Kanjilal et al. 2021;Abruzzo et al. 2022). We further discuss the possibility of constraining gas flows in the tails of LG satellites in § 4. 
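The survival and growth of cold clouds moving through hot surroundings, referenced above in the context of jellyfish-like tails, is commonly characterised by the cloud-crushing time. A back-of-the-envelope sketch with purely illustrative numbers (none of them taken from the simulations) is given below.

```python
# Back-of-the-envelope sketch of the cloud-crushing time-scale used in the
# cold-cloud/hot-wind literature referenced above. All inputs are
# illustrative placeholders.
import numpy as np

KPC_TO_KM = 3.0857e16
SEC_PER_MYR = 3.156e13

def cloud_crushing_time_myr(r_cloud_kpc, v_wind_kms, chi):
    """t_cc = sqrt(chi) * r_cloud / v_wind, with chi the cloud-to-wind
    density contrast (see e.g. Gronke & Oh 2018 and references therein)."""
    t_cc_s = np.sqrt(chi) * r_cloud_kpc * KPC_TO_KM / v_wind_kms
    return t_cc_s / SEC_PER_MYR

# Example: a 100 pc cloud with density contrast 100 in a 200 km/s wind
print(f"t_cc ~ {cloud_crushing_time_myr(0.1, 200.0, 100.0):.1f} Myr")
```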
Cartesian projections The inescapable nature of skymap projections often teases one into a likely misinterpretation of the angular extent spanned by objects within them. This misinterpretation, however, is circumvented by Cartesian projection plots. An example for this is the case of satellite galaxy number 7 in the 17−11 realization. Its distance to the LG midpoint is only 114 kpc (see Table B2 in Appendix B), which is much smaller in comparison to M31 (338 kpc). In the H skymap, this galaxy appears much larger in comparison to the Cartesian projection, which we present in Fig. 2. This galaxy, hence, appears to be visually dominant in the Mollweide projection map simply because of its proximity to the LG centre, and its location on the skymap, where it appears to be in the direction of the M31 analogue. Thus, this example demonstrates that it is much harder to distinguish galaxies in the skymap in comparison to a Cartesian projection, which should be kept in mind when it comes to the visual interpretation of the skymaps. In our analysis, all large and dense H blobs with HI > 10 20 cm −2 are associated with a galaxy from our galaxy catalogue. There is a minor blob at < −600 kpc in the 37−11 H projection map (Fig. 2), which is not included in the catalogue, because its distance to the LG centre is larger than our cutoff value of 800 kpc -hence it is not present in the skymaps, but only visible in the Cartesian projection. We also see that the satellite galaxies, which we described in § 3.1 as having disturbed gas distributions according to the skymaps, also look disturbed in Fig. 2. Indeed, the deformed nature seems even more pronounced in the Cartesian projection. Power Spectra In the previous subsections, we have clearly seen that the low ions largely follow a clumpy distribution while the high ions follow a much smoother profile. One way to neatly quantify such distribution patterns is by creating power spectra for each ion and capture the scales over which the corresponding ion exhibits most of its power. Formalism The spatial scales contributing to a skymap can be quantified by a power spectrum. First, the column density of a given ion is decom-posed into spherical harmonics as where is a pixels unit vector, is the multipole number, and is the coefficient describing the contribution by the mode corresponding to a spherical harmonics base function ( ). The angular power spectrum is then defined as, We use the function anafast to compute for each of the column density skymaps. We have subtracted the monopole and dipole moments, and we constrain the power spectrum to ≤ 2 side = 80, because contributions at higher may be dominated by noise (following the documentation for the anafast function). In Fig. 3, we show the power spectra for the different ions. We show the power relative to = 2, which makes the -dependence for the different ions easy to compare. We have scaled the by a factor of ( + 1), so the plot shows the total power contributed by each multipole. The angular scale corresponding to each multipole number is estimated as 180 • / . Contributions from odd and even modes We start by characterising the 09−18 simulation. The modes with even -values are systematically larger in comparison to the modes with odd -values. 
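In standard notation, the spherical-harmonic decomposition and the angular power spectrum referred to in the Formalism paragraph above read as follows (this is the textbook definition, written out here because the symbols did not survive extraction):

```latex
N(\hat{n}) = \sum_{\ell} \sum_{m=-\ell}^{\ell} a_{\ell m}\, Y_{\ell m}(\hat{n}),
\qquad
C_{\ell} = \frac{1}{2\ell + 1} \sum_{m=-\ell}^{\ell} \left| a_{\ell m} \right|^{2}.
```

A minimal healpy sketch of the anafast-based procedure described there is given below; the input map is a placeholder (for instance the output of the column-density sketch above), while the monopole and dipole subtraction, the cut at two times n_side, the scaling by l(l+1) and the normalisation to l = 2 follow the text.

```python
# Sketch of the anafast-based power spectrum described above; `column_map`
# is a placeholder HEALPix map.
import numpy as np
import healpy as hp

def scaled_power_spectrum(column_map, n_side=40):
    lmax = 2 * n_side                        # l <= 80, as in the text
    m = hp.remove_dipole(column_map)         # subtract monopole and dipole
    cl = hp.anafast(m, lmax=lmax)
    ell = np.arange(cl.size)
    scaled = ell * (ell + 1) * cl
    return ell[2:], scaled[2:] / scaled[2]   # normalise to the l = 2 value
```

The alternation between even and odd multipoles seen in such spectra, produced by the MW and M31 lying in nearly opposite directions on the sky, is discussed next.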
This zigzagging could easily be misinterpreted as an effect of noise, but we remark that it has a physical origin caused by the MW and M31 having a similar angular extent, a similar column density and being located in opposite directions (as seen from the skymap-observer's position). These two galaxies, hence, contribute with an approximate reflection-symmetric signal. Due to the identity, (− ) = (−1) ( ), only the modes with even contribute to a reflection-symmetric map, so this explains the domination of even modes. A domination of even modes is especially visible for 10 for all ions in all three simulations. For 09−18 the domination is also present for higher for all ions, but for 17−11, the signal vanishes at 10 for O and O . The angular coherence scale From the behaviour of the power spectra for 09−18, we see that the H skymaps have more structure on small scales of 5 • (relative to a larger scale of = 2) in comparison to the other ions. The amount of power on this angular scale ( 5 • ) is indeed gradually decreasing from H , Si , O , O to O (with the only exception being O in 37−11, which shows higher power on this scale than O and O ). This is completely consistent with the picture that we get from visually examining the different skymaps in Fig. 1, where the ions tracing the coldest gas also seem to have the clumpiest distribution on small angular scales. The behaviour of the power spectra for 17−11 and 37−11 are broadly consistent with this picture. H has more power at smaller scales ( 20) across all simulations in comparison to the other four ions. For 37−11, O shows more power on small scales in comparison to O and O , which is most likely an effect of the O ion being influenced by outflows (this ion, for example, reveals a bi-conical outflow for MW for 37−11 in Fig. 1). For 37−11, the H spectrum reveals the most power on small angular scales -this fits well with our scenario that H gas is clumpy Figure 3. We show power spectra generated based on the ion skymaps (Fig. 1). The power spectra are normalised to the = 2 value. The ions tracing the coldest gas (H and Si ) have more power on small angular scales ( 10 • ) in comparison to the high ions O , O and O . This fits well with the visual impression from the skymaps in Fig. 1. The power spectra reveal a preference for modes with even -values. This is because the skymaps have a contribution from a reflective component, with MW and M31 being in opposite directions (as seen from the frame-of-reference of the skymap's observer). on small scales. For the warmer ions such as O the power is a decreasing function of (if we ignore the fluctuations caused by even modes having more power in comparison to odd modes), implying that fluctuations on large angular scales are dominating. Similar trends are found in the other simulations. Intriguingly, the Si power spectra for 09−18 and 17−11 show an increasing trend at small scales ( 10 • ), while 37−11 Si power spectra shows a decreasing trend at similar scales. This pattern is, indeed, coherent with our observations regarding the Si skymaps (see § 3.1). We discuss this aspect a bit further in § 3.4, where we introduce the column density distributions. Column density profiles Radial column density profiles are often used as an observational probe of the spatial distribution of the CGM in galaxies. In Fig. 4, we show the M31 radial profiles for our ensemble of ions with a particular focus on the median and 16-84th percentile of the distributions. The background points show all our sightlines. 
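A minimal sketch of how the median and 16th-84th percentile profiles of Fig. 4 can be assembled from the per-sightline column densities is given below; the bin edges, the radial range and the array names are placeholders.

```python
# Sketch of the radial column density profiles described above: per-sightline
# column densities are binned in projected radius and summarised by the
# median and the 16th-84th percentiles. Inputs are placeholders.
import numpy as np

def radial_profile(r_proj_over_r200, column_density, n_bins=20, r_max=3.0):
    edges = np.linspace(0.0, r_max, n_bins + 1)
    centres = 0.5 * (edges[1:] + edges[:-1])
    med = np.full(n_bins, np.nan)
    lo = np.full(n_bins, np.nan)
    hi = np.full(n_bins, np.nan)
    for i in range(n_bins):
        mask = (r_proj_over_r200 >= edges[i]) & (r_proj_over_r200 < edges[i + 1])
        if np.any(mask):
            lo[i], med[i], hi[i] = np.percentile(column_density[mask],
                                                 [16, 50, 84])
    return centres, med, lo, hi
```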
Overall trends As expected, the median column density is a declining function of radius for all ions in all simulations. The scatter is, however, behaving differently. Ions tracing the warm-hot gas (O , O and O ) have a much lower scatter in comparison to the ions characteristic of dense-cold gas (H and Si ). The profiles of the former ions are well-behaved and the column density profiles can be well-described as a monotonic decreasing function of projected distance (this feature is well documented for O ; Werk et al. 2013;Liang et al. 2018) with a scatter of 0.1-0.2 dex. On the other hand, H and Si reveal extreme outliers. In simulation 17−11 galaxies 4, 11 and 21 (see Fig. 1), for example, contribute with high H column densities ( 10 20 cm −2 ) at a projected radius of proj = 1.5 ± 0.5 200 . This shows that the H column density is clumpy and influenced by satellite galaxies. Similarly, Si show multiple clumps, but their correlation scales seem slightly larger in comparison to H , which is consistent with our power spectrum analysis. Despite the clumpy nature of Si , we still find the mean of the projected column density profile to be decreasing (as, for example, is also seen in the observed sample of galaxies from Liang & Chen (2014)). These trends are also applicable to the projected column density profiles of the MW, which we show in the Appendix Fig. C1. H is again influenced by individual satellite galaxies, and there is generally an increased scatter for ions tracing low-temperature gas in comparison to the high ions. Origins of the clumpy CGM at large radii The presence of H sightlines mimicking Lyman limit-like column densities (∼ 10 17 cm −2 ) in Fig. 4 as well as Si sightlines lying above 10 12 cm −2 , out to 200 in our simulations, indicate that the cool-clumpy CGM extends to large distances up to virial radii. In fact, using VLT/UVES and HST/STIS data, Richter et al. (2003) as well as Richter et al. (2009) have identified such a population of Lyman-limit like optical and UV absorption systems in the Milky Way halo at high radial velocities, most likely representing the observational counterpart of CGM clumps far away from the disk. It is then worthwhile to contemplate about the physical origins of this clumpy CGM gas. A comparison with corresponding H data from Liang & Chen (2014) (henceforth, LC2014) reveals that the cool, clumpy CGM (log (H ) > 10 16 cm −2 ) has a similar spatial extent (> 2-3 proj / 200 ) as seen in our data. It is important to note that most of this clumpy CGM gas is not associated with the ISM of the satellite galaxies because those regions have far greater densities (a factor of ∼ 4-5 times higher) than that being discussed here. However, ram pressure stripping from the motions of many of the satellite galaxies within the 200 of MW-M31 can deposit such intermediate-column density cool gas at these distances. We elaborate on ram pressure stripping and its effects on the CGM of LG in § 4.2. Gas accretion mechanisms onto the host galaxy, in itself could be a potential source for cool, slightly under-dense gas clumps manifesting as cold CGM at large distances. Galactic fountain flows have long been hypothesised as a possible means to efficiently circulate gas, metals and angular momentum A distinct blob of high column density H absorbers, which can be seen at a distance of ∼ 1.5 proj / 200 in the H profile for 17−11, can be correlated with satellite galaxies numbered 4 and 11 in the corresponding skymap (H skymap for 17−11 in Fig. 1). 
between the ISM and the CGM (Fraternali et al. 2013;Fraternali 2017). Thermal instabilities arising from cold gas parcels from the ISM regions moving outward rapidly through the warm ambient CGM regions can result in the growth of intermediately dense cool gas. However, it is not immediately clear which of the above three processes could be the most dominant. While carrying out an elaborate tracer particle analysis or delving deeper into the ram pressure stripping processes could provide better clarity about the root cause of this distant cold CGM, it is beyond the scope of this paper. The bi-modal distribution of Si Interestingly, Si column density distributions show a strong bimodality, with a higher sequence of sightlines clustered around ∼ 10 14 cm −2 and another lower sequence clustered around ∼ 10 8 cm −2 . This bimodality is expected due to the bi-conical outflows, which we identified in Fig. 1. This bimodal feature is indeed most prevalent for M31 in 09-18 and 17-11, where the bi-conical outflows were most visible. However, it is practically highly unlikely to detect the lower sequence of Si column densities in near future; hence this bi-modal feature will not show up in the Si observational datasets. Comparison with observations While the previous subsections primarily dealt with the theoretical interpretations of our results from the power spectra and column density profiles, this subsection is dedicated to analysing how well these results match with data from observations and other simulations. We base our comparison on three different observational datasets: • M31 observations from the Project AMIGA (Absorption Maps in the Gas of Andromeda; Lehner et al. 2020). Project AMIGA is a UV HST program studying the CGM of M31 by using 43 quasar sightlines, piercing through its CGM at different impact parameters ( proj = 25 to 569 kpc). Such a large number of sightlines for the Andromeda galaxy enables a constraining quantitative comparison to the corresponding mock data from our simulations. • Absorption-line measurements of Si from LC2014. They present a study of low and intermediate ions in the CGMs of a sample of 195 galaxies in the low-redshift regime. However, 50% of the LC2014 sample consists of dwarf galaxies. To enable a fair comparison, we select only galaxies in a comparable mass range to our M31 simulations. We specifically only include their galaxies with 10 10.6 M ≤ * ≤ 10 11.1 M . In our context, the data pertaining to Si (1206 Å) ion is relevant. Since this is an absorption-line study, they measure all ion abundances in terms of equivalent width (EW). In order to translate their EW measurements into column density values, we plot a corresponding curve of growth for different " " parameters. From the curve of growth it is clear that for log (Si ) < 12.0 and log (Si ) > 18.0, translating EW into column densities is straightforward. However, in the 12.0 < log (Si ) < 18.0 regime, -parameter degeneracy sets in and a single EW measurement can result in different values for column densities depending on theparameter adopted. For this reason we exclude the sightlines from LC2014 at distances / 200 < 1 (where this degeneracy is present). • O ion measurements from Johnson, Chen & Mulchaey (2015) (henceforth J2015). They present a study of distribution of heavy elements of sight-lines passing galaxies with different impact parameters. 
Like LC2014, the eCGM galaxy sample in J2015 also comprises of galaxies spanning a range of stellar masses (log M * /M = 8.4-11.5), so we again apply a mass cut of 10 10.6 M ≤ * ≤ 10 11.1 M and we also only include late-type galaxies. In Fig. 5 we compare the projected Si and O profiles for M31 from to these observational datasets, and we also show the EAGLE simulations (Oppenheimer et al. 2017) and the FIRE-2 simulations . We discuss the comparison to the other simulations in Sec. 3.6. Comparing to Si observations At low impact parameters, proj 200 , the observed range of Si column densities in AMIGA and our simulations are consistent 2 . For sightlines probing proj 1.5 200 , our simulations under-predict the observed column densities. Some of the shown observational data points are upper limits, implying that the observations leave the possibility for individual sightlines with column densities as low as ours, but the simulations generally fall short by at least an order of magnitude at proj 1.5 200 . On the other hand, is perfectly consistent with the upper limits from LC2014. Indeed, there are tensions between the high Si column densities reported by Project AMIGA (in M31) and the upper limits from LC2014. A possible reason for this could be contamination of gas from the MW halo or Local Group environment for the M31 observations. We will further assess this hypothesis in Sec. 4.1. Comparing to O observations When comparing the O column density in AMIGA and we again see larger values in the former. At the same time, reveals larger column densities in comparison to the J2015 observations of galaxies from the low-redshift Universe. The offset between the observed J2015 and AMIGA might again be caused by contamination of MW absorption in the latter dataset, or an alternative possibility for the offset is that one dataset is the mean of a sample of low-redshift galaxies and the other only takes into account a single galaxy's profile (M31). The normalization of the metallicity profile in In Appendix A we show that the gas metallicity in the disc of the galaxies is up to a factor of 3 higher in comparison to observations. The naïve expectation is that the CGM gas metallicity is too high by a similar factor, and this would cause the column density profiles in Fig. 5 to be overestimated by up to 0.5 dex. If we scale the M31 Si and O column density profiles down by 0.5 dex, the agreement with LC14 and J2015 improves, whereas the tension between the AMIGA observations and becomes stronger. This supports our conclusion that is well consistent with these observations of low-redshift galaxies. Johnson et al. (2015). The Si profile from is consistent with the LC14 upper limits, but there is an inconsistency between these and the Project AMIGA observations. Similarly, the J2015 and Project AMIGA observations of O are inconsistent, and is only in reasonable agreement with J2015. In Fig. 6 we discuss that a likely explanation for the offset between and the AMIGA observations is contamination of gas from the MW to the AMIGA dataset. Comparison with other simulations These are zoom simulations with non-equilibrium cooling. The corresponding average 200 for this subsample is 195 kpc (see fig. 2 in Oppenheimer et al. 2016). This dataset is at = 0.2, since the authors compare it with the COS-Halos data which covers the same redshift. For the FIRE-2 simulation comparison we compare to the m12i halo (log 200 12 M at = 0) using the FIRE-2 model with cosmic ray feedback (their simulation data is taken from fig. 
17 in Lehner et al. 2020). Further details about the simulations and the CGM modelling in FIRE-2 can be found in the original FIRE-2 publications. The HESTIA simulations show many similar trends to EAGLE and FIRE-2, and all of them, furthermore, under-predict the AMIGA column densities of Si III and O VI at R_proj of roughly 1.0 R_200 and beyond. On the other hand, all the simulations are broadly consistent with the observational datasets we have compiled based on LC2014 and J2015.

Convergence test

In Appendix D we compare the high-resolution simulations analysed in Fig. 5 with intermediate-resolution simulations having an eight times larger dark matter particle mass. This convergence test does not challenge our derived column density profiles. Using the same simulation code and galaxy formation model as in our paper, van de Voort et al. (2019) showed that increasing the spatial resolution significantly boosts the H I column density in the CGM. Idealised simulations furthermore reveal the possibility of gas fragmenting down to the cooling scales (McCourt et al. 2018; Sparre et al. 2019), which for dense gas lie significantly below our resolution limit. Exploring the resolution requirement in the CGM of cosmological simulations is, however, still a field of ongoing research, so it remains possible that the idealized simulations over-estimate the needed spatial resolution. We note that Si III and O VI trace warmer gas in comparison to H I, so these ions are expected to be less affected by resolution issues than H I. Even though our convergence test does not reveal any signs of a lack of convergence, it is still a possibility that our column densities are affected by a too low spatial resolution.

Biased column density profiles caused by the MW's CGM?

We have found that observations of low-redshift galaxies disagree with the column densities of M31 observed by Project AMIGA. A possible explanation for this finding could be observational biases, for example, caused by gas clouds in the CGM of the MW contributing to the projected column density profile of M31. Such a bias does not play a role in our previous skymap analysis, because the skymaps are created by an observer in the geometric centre of the LG, and hence, the MW's CGM does not contribute to the sight-lines towards M31. We now turn to addressing the role of such a bias in the three realizations of the simulations. We re-analyse our simulations with an observer located in the MW centre, and create skymaps of the different ions as before.

Figure 6. We demonstrate how the gas in the MW's CGM may influence the observationally derived median column density profiles around M31. We have generated skymaps centred on the MW (instead of the LG, as done in previous figures), where we remove gas lying within a radial cutoff ranging from 10 to 150 kpc from the MW (solid lines). For the dashed lines we additionally constrain gas to be within 100 km s^-1 of M31. As in Fig. 5, the data points from the Project AMIGA survey (filled grey markers), LC2014 (orange downward markers) and J2015 (filled yellow markers) have been overplotted. Even when only including gas within 100 km s^-1 of M31, the Si III profile of 09-18 and 37-11 is increased to 10^15 cm^-2 by clouds within 10-120 kpc from the MW centre. For 17-11, a velocity selection of gas very well removes gas within 150 kpc of the MW. For O VI, the contamination from the MW's CGM is also significantly changing the profiles in 09-18; here gas residing within 150 kpc of the MW may boost the column density by 1.0 dex.
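A minimal sketch of this re-analysis step is given below: gas cells within a chosen distance of the MW centre are removed and, optionally, only gas within a line-of-sight velocity window around M31 is kept before the column densities are recomputed (the specific cutoff radii and the 100 km s^-1 window are quoted in the next paragraph). The array names and the plain numpy masking are assumptions of this sketch.

```python
# Sketch of the MW-contamination test described above: exclude gas within a
# cutoff radius of the MW centre and optionally keep only gas within a
# line-of-sight velocity window around M31. Array names are placeholders.
import numpy as np

def contamination_mask(cell_pos_kpc, cell_vel_kms, mw_centre_kpc,
                       los_unit_vec, v_m31_kms,
                       r_cut_kpc=120.0, dv_max_kms=100.0,
                       apply_velocity_cut=True):
    """Boolean mask selecting the gas cells that are kept."""
    d_mw = np.linalg.norm(cell_pos_kpc - mw_centre_kpc, axis=1)
    keep = d_mw > r_cut_kpc
    if apply_velocity_cut:
        v_los = cell_vel_kms @ los_unit_vec
        keep &= np.abs(v_los - v_m31_kms) < dv_max_kms
    return keep
```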
In order to incorporate the larger distance from the MW to M31 (as opposed to the smaller distance from the LG centre to M31 in earlier analysis), we use longer sightlines (each 1400 kpc in length). To ensure grid-size uniformity with respect to the earlier analysis, we increase the number of gridpoints from 50,000 to 100,000. To determine the role of the MW's CGM, we create skymaps excluding gas within 10, 30, 60, 90, 120 and 150 kpc of the MW's centre. The corresponding projected radial column density profiles are seen as solid lines in Fig. 6. The three different realizations show a significant amount of Si and O residing in the MW's CGM at a distance of 10-120 kpc from the MW's centre. Observationally, a hint of the gas clouds' spatial origin can be obtained by looking at its line-of-sight velocity. In Fig. 6 we also construct profiles, where we exclude gas clouds with a line-of-sight difference (|Δ |) exceeding 100 km s −1 from M31's velocity (see dashed lines in Fig. 6). From our different realizations we see a different behaviour. For 09−18 and 37−11, the column density profiles of Si and O increase up to 10 15 cm −2 and by 1.0 dex, respectively (this is the difference between dashed lines indicating a cutoff of 10 kpc and 120 kpc in Fig. 6), caused by gas residing between 10-120 kpc of the MW's CGM. For 17−11, the situation is less extreme, and the inferred column density profile of M31 is unaffected by the MW, when a velocity cut in the line-of-sight velocity is applied. This analysis shows that the MW's CGM can substantially bias the inferred projected column densities of M31. For Si , the potential bias is stronger in comparison to O . For O in 17−11, a velocity cut alone is successful in completely removing MW contributions. As seen from the lower middle panel in Fig. 6, this still gives us a small discrepancy (∼ 0.5 dex) with AMIGA observations. This means that our 17−11 analogues inherently do not produce enough O to completely match the AMIGA observational trends. However, the opposite is true for the other two simulations where we clearly see our results matching fairly well with the AMIGA observations, when we include the contribution of gas from the MW halo. Overall, we infer that the biases estimated by our MW centred skymaps provide a likely explanation for the differences between the simulations and the AMIGA observations (seen in Fig. 5). At the same time, it also provides a likely explanation for the differences between the low-redshift galaxy samples (LC2014 and J2015) and the Project AMIGA 3 . In reality, contamination of the gas from the Magellanic Stream (MS) to the M31 CGM observations is also a possibility. The MS passes just outside of the virial radius ( vir = 300 kpc) of M31 (see fig. 1 in Lehner et al. 2020). For the purpose of ascertaining the level of MS contamination, Lehner et al. (2020) use Si as their choice of ion (mainly because it is most sensitive to detect both weak as well as strong absorption). However, they do not remove entire sightlines merely on the suspicion of possible MS contamination. Instead, they analyze individual components and find that 28 out of 74 (38%) Si components are within the MS boundary region (and having Si column density values larger than 10 13 cm −2 ). These are not included in the sample from then on. For the remaining non-MS contaminated components, they find a trend of higher Si column density at regions away ( MS > 15 • ) from the MS main axis ( MS = 0 • ). 
This shows that the MS contamination is negligible for these components. However, they do find a fraction (4/22) of dwarf galaxies out of their M31 dwarf galaxies sample falling in the MS contaminated region. This means that while they do take utmost care to avoid any MS contamination in their results, there could still be some residual contributions (especially in the cold gas observations of M31's CGM) from the MW CGM. These could manifest in the form of slightly enhanced column densities in observations at regions beyond M31's virial radius. Gas stripping in the Local Group A characteristic that appears across all our realizations is the distorted nature of the CGMs of many satellite galaxies. High-velocity infall motions of dwarf galaxies through complex gravitational potential fields, typical in galaxy groups and clusters results in the dwarf galaxy CGM becoming structurally disturbed. In some extreme cases this can also result in trailing stripped gas tails (Smith et al. 2010;Owers et al. 2012;Salem et al. 2015;McPartland et al. 2016;Poggianti et al. 2017;Tonnesen & Bryan 2021). While a few very clear examples of such galaxies have been described in detail in §3.1, there are certainly many more. The role of stripped gas from the CGMs of satellite galaxies towards augmenting the pre-existing gas reserves of the host galaxy and thereby influencing the CGM of the host galaxy is rather well known from the observations of the MS, which emanates from the interaction of the Small and Large Magellanic Clouds on their approach towards the MW (e.g. Fox et al. 2014;Richter et al. 2017). However, a scarcity of deep observations means that very little is known about the part played by the diffuse gas from other satellite galaxies in our LG. Few studies pertaining to such observations reveal low neutral gas abundances around dwarf galaxies, though they might still harbour sizeable reserves of ionized gas (Westmeier et al. 2015;Emerick et al. 2016;Fillingham et al. 2016;Simpson et al. 2018). By carefully analysing the gas flow kinematics across time-frames for these dwarf galaxies within , it will be possible, in future studies, to obtain not just their mock proper and bulk gas motions, but also various parameters regarding their stripped gas such as its spatial extent, cross-section and physical state. The Gaia DR2 proper motions of MW and LG satellites (Pawlowski & Kroupa 2020), along with corresponding comprehensive UV, optical and X-ray datasets from HST-COS, UVES, Keck and Chandra, can then provide us with clues regarding which realizations are most likely to produce these real observations. Furthermore, implementing similar sightline analysis, done in this paper for MW-M31, for multiple satellite systems over a range of their respective impact parameters, can yield extensive mock datasets that could then prove useful in the wake of future surveys that will be sensitive to even lower column density gas. Physical modelling of the CGM In recent years, our understanding of the CGM has dramatically improved, and it is encouraging that our simulations are broadly consistent with observations. This is despite of our relatively simple physics model. Theoretical work has for example suggested that parsec-scale resolution, which is so far unattainable in cosmological simulations like , may be necessary to resolve the cold gas in galaxies (Mc-Court et al. 2018;Sparre et al. 2019;Hummels et al. 2019;van de Voort et al. 
2019, -we note, however, that these results are so far only suggestive and the need for parsec-scale resolution has so far not been demonstrated, yet this could be a potential reason for the offset). Results from van de Voort et al. (2019) proved that ∼ 1 kpc resolution in the CGM boosts small-scale cold gas structure as well as covering fractions of Lyman limit systems; this might also hold true for slightly less dense but slightly more ionized cool gas. McCourt et al. (2017) proposed a cascaded shattering process via which a large cloud experiencing thermal instability can cool a couple of orders of magnitude (from ∼ 10 6 K to ∼ 10 4 K), mainly as a result of continued fragmentation within the larger cloud. They compute the characteristic length scale, associated with shattering, to be ∼ 1−100 pc. Multiple observations also show that cool gas is indeed present in form of small clouds out to ∼ vir in galaxy haloes (Lau et al. 2016;Hennawi et al. 2015;Stocke et al. 2013;Prochaska & Hennawi 2008). Using Cloudy ionization models, Richter et al. (2009) have determined the characteristic sizes of the partly neutral CGM clumps in the MW halo based on their HST/STIS absorption survey, leading to typical scale lengths in the range 0.03 to 130 pc (see tables 4 & 5 in Richter et al. 2009). From their absorber statistics, these authors estimated that the halos of MW-type galaxies contain millions to billions of such small-scale gas clumps and argue that these structures may represent transient features in a highly dynamical CGM. Thus, it is clear that the length scales involved in these processes are still at least an order of magnitude below what is currently achievable in the highest resolution zoom-in simulations. It is also worth mentioning that have recently introduced a novel framework modelling multiphase winds, which may be relevant for future cosmological simulations of the CGM. Lehner et al. (2020) discusses feedback processes, which may also affect how gas and metals are transported to large radii. The role of cosmic ray feedback in influencing the CGM has recently gained interest from multiple research groups (Salem et al. 2014(Salem et al. , 2016Buck et al. 2020;Hopkins et al. 2020), and it has been shown to significantly alter gas flows in the CGM of simulations. CR-driven winds from the LMC ) as well as those from the resolved ISM (Simpson et al. 2016;Girichidis et al. 2018;Farber et al. 2018) have been shown to change both the outer and inner CGM properties, respectively. Similarly, magnetic fields have been shown to influence the physical properties of the CGMs of simulated galaxies, thereby modifying the metal-mixing in the CGM (van de Voort et al. 2021). Despite of agreeing relatively well with the observations, we note that there are still some important challenges for future galaxy formation models in terms of understanding physical processes in the CGM. CONCLUSIONS We have analysed the gas, spanning a range of temperatures and densities, around the MW-M31 analogues at = 0 in a set of three simulations. These LG simulations use the quasi-Lagrangian, moving mesh code, along with the comprehensive Auriga galaxy formation model. We have set our frame of reference to the LG geometrical centre and generated ion maps for a set of five ions, H , Si , O , O and O . Some important conclusions have emerged from our study: • We have created mock skymaps of the gas distribution in the LG. 
All dense gas blobs with HI > 10 20 cm −2 are associated with a galaxy; either a satellite galaxy or MW/M31 themselves. The skymaps of H and Si reveal strong imprints of satellite galaxies, whereas the tracers of warmer gas (O , O and O ) are mainly dominated by the haloes of MW and M31. The projected column density profiles of the latter ions are, indeed, well-described by monotonic decreasing functions of the impact parameter. In comparison, the projected H -and Si -profiles have a much higher scatter caused by blobs associated with the satellite galaxies. • A power spectrum analysis of the skymaps shows that H , Si , O , O and O have a gradually higher coherence angle on the sky -ions tracing the coldest gas are most clumpy. This confirms the impression we get by visually inspecting the skymaps, and it is also consistent with the behaviour of the column density profiles. • The visual inspection of the simulated skymaps reveal multiple satellite galaxies with disturbed gas morphologies, especially in H and Si . These are LG analogues of jellyfish galaxies. Future simulation analyses and observations can give a unique insight to the physical processes in the ISM and CGM of these galaxies. • For the M31 analogues we compare the Si and O column density profiles to observations of M31 and low-redshift galaxies. The spectroscopic observations of M31 and low-redshift galaxies reveal remarkably different column density profiles. Using our simulations, we find that the gas residing in the Milky Way may contaminate the sight-lines towards M31, such that the M31 column densities are boosted. For Si and O we see this contamination boosting the column density profiles up to as much as 10 15 cm −2 and by 1.0 dex, respectively, even when only including gas within a 100 km s −1 of the M31 velocity. Contamination of gas from the MW, hence, provides one of the likely explanations for the offset between observations of M31 and low-redshift galaxies. • The M31 analogues from have Si and O column density profiles broadly consistent with low-redshift galaxy constraints. If we include a contamination from MW gas, then in 2 out of 3 M31 realizations we can also reproduce the large column densities observed in the direction of M31 in Project AMIGA. DATA AVAILABILITY The scripts and plots for this article will be shared on reasonable request to the corresponding author. The code is publicly available (Weinberger et al. 2020). APPENDIX A: RADIAL GAS METALLICITY PROFILES We obtain the radial gas metallicity profiles in spherical shells equally spaced in the logarithmic radius (log ) for the galaxies in Fig. A1. Overall, the gas metallicities for MW and M31 look similar. The galaxies are metal-rich in the inner disc regions (3-10 times the solar metallicity inside 10 kpc), after which the metallicity drops sharply out to the CGM regions (as low as 0.2 times solar metallicity at 500 kpc). Beyond this point, the metallicities rise again due to the presence of the pairing galaxy at those distances. As observed by Conroy et al. 2019, we also see our galaxies exhibiting a turn-over from being metal-rich (at < 10 kpc) to metal-poor (at > 30 kpc). For the MW in 17-11 and M31 in 37-11 the central gas metallicities reach values as high as 10 . These values are clearly a factor of 2-3 higher than for M31 observations (Sanders et al. 2012), and these also exceed our expectations for MW-like galaxies (see fig. 10 in Torrey et al. 2014 for a compilation of observations of MWmass galaxies). 
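Appendix A builds the gas metallicity profiles in spherical shells equally spaced in log r; a minimal sketch of that construction is given below, where the mass weighting, the radial range and the array names are assumptions of the sketch rather than details taken from the text.

```python
# Sketch of the Appendix A construction: gas metallicity profiles in
# spherical shells equally spaced in log(r). Inputs are placeholders.
import numpy as np

def metallicity_profile(r_kpc, gas_mass, gas_metallicity,
                        r_min=1.0, r_max=700.0, n_shells=30):
    edges = np.logspace(np.log10(r_min), np.log10(r_max), n_shells + 1)
    centres = np.sqrt(edges[1:] * edges[:-1])       # geometric shell centres
    profile = np.full(n_shells, np.nan)
    idx = np.digitize(r_kpc, edges) - 1             # shell index per gas cell
    for i in range(n_shells):
        in_shell = idx == i
        if np.any(in_shell):
            profile[i] = np.average(gas_metallicity[in_shell],
                                    weights=gas_mass[in_shell])
    return centres, profile
```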
We therefore conclude that the simulations produce a disc metallicity which is up to a factor of 3 higher than expected from observations. There are no strong observational constraints on the MW and M31 CGM metallicity, but when comparing to observations we keep in mind the possibility that our simulations might have a CGM metallicity that is up to a factor of 3 too high in comparison to real galaxies. APPENDIX B: A LISTING OF THE MOST RELEVANT PARAMETERS FOR THE MOST MASSIVE GALAXIES IN EACH REALIZATION In Tables B1, B2 and B3 we show properties of the satellite galaxies in each of the simulations. The galaxy numbers appear in Fig. 1 of the main paper, and we see that all the dense H regions are associated with one of the galaxies listed in the tables. APPENDIX C: COLUMN DENSITY PROFILES FOR THE MW In Fig. C1 we show the radial column density profile of the simulated MW for the different ions. This is complementary to the M31 column density profiles in Fig. 4. APPENDIX D: CONVERGENCE TEST We perform a convergence test, in which we compare the high-resolution simulations presented in the main paper to intermediate-resolution simulations with an eight times lower mass resolution for the dark matter particles. In Fig. D1, we test whether the column density profiles of Si and O are converged. In simulation 09-18, the column densities at 200 are higher in the intermediate-resolution simulation than in the high-resolution simulation. For 17-11 and 37-11, we see the opposite trend: the highest column densities occur in the high-resolution simulations. The median profiles of O are only slightly affected by resolution, with the difference between intermediate and high resolution being less than a factor of two. We conclude that, on the whole, the column density profiles are well converged. This paper has been typeset from a TeX/LaTeX file prepared by the author. Table B1. A list of properties for the most massive galaxies in the 09-18 realization. Galaxy no. 0 corresponds to M31, while galaxy no. 9 corresponds to the MW. The remaining galaxies can be matched to their respective galaxy numbers in Fig. 1. Figure A1. Radial gas metallicity profiles for the galaxies. The profiles show two distinct regimes: metal-rich in the inner disc regions (< 10 kpc) and metal-poor in the CGM regions (> 30 kpc). The rise in metallicity at > 500 kpc occurs due to the presence of the pairing galaxy at these distances. Figure D1. Convergence test of Fig. 5. The thick red line shows the median of the high-resolution simulations, which was also shown in Fig. 5. The blue line and contour show the median and 16-84 percentiles, respectively, of intermediate-resolution simulations with an eight times lower mass resolution (dark matter particles have an eight times higher mass) than the high-resolution simulations. Examination of the median profiles does not indicate a lack of convergence, so our column density profiles are well converged.
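The radial metallicity profiles described in Appendix A are obtained by binning gas in spherical shells equally spaced in the logarithm of the radius. As a rough, non-authoritative sketch of such a binning step (not the pipeline actually used here), assuming the gas-cell positions, masses and metallicities are available as plain numpy arrays, one could write:

import numpy as np

def radial_metallicity_profile(pos_kpc, mass, metallicity, centre_kpc,
                               r_min=1.0, r_max=1000.0, n_shells=30):
    # Mass-weighted gas metallicity in spherical shells equally spaced in
    # log10(radius); inputs are hypothetical numpy arrays of gas-cell data.
    r = np.linalg.norm(pos_kpc - centre_kpc, axis=1)
    edges = np.logspace(np.log10(r_min), np.log10(r_max), n_shells + 1)
    shell = np.digitize(r, edges) - 1
    radii, profile = [], []
    for i in range(n_shells):
        sel = shell == i
        if sel.any():
            radii.append(np.sqrt(edges[i] * edges[i + 1]))  # geometric bin centre
            profile.append(np.average(metallicity[sel], weights=mass[sel]))
    return np.array(radii), np.array(profile)

The turn-over from a metal-rich disc to a metal-poor CGM discussed in Appendix A would then appear directly in the returned profile.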
2022-01-27T02:16:00.097Z
2022-01-26T00:00:00.000
{ "year": 2022, "sha1": "01009d876a8be948d742458f10d7364aa043379e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2201.11121", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "01009d876a8be948d742458f10d7364aa043379e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
11598354
pes2o/s2orc
v3-fos-license
Myelin injury induces axonal transport impairment but not AD-like pathology in the hippocampus of cuprizone-fed mice Both multiple sclerosis (MS) and Alzheimer's disease (AD) are progressive neurological disorders with myelin injury and memory impairment. However, whether myelin impairment could cause AD-like neurological pathology remains unclear. To explore neurological pathology following myelin injury, we assessed cognitive function, the expression of myelin proteins, axonal transport-associated proteins, axonal structural proteins, synapse-associated proteins, tau and beta amyloid and the status of neurons, using the cuprizone mouse model of demyelination. We found the mild impairment of learning ability in cuprizone-fed mice and the decreased expression of myelin basic protein (MBP) in the hippocampus. And anti-LINGO-1 improved learning ability and partly restored MBP level. Furthermore, we also found kinesin light chain (KLC), neurofilament light chain (NFL) and neurofilament heavy chain (NF200) were declined in demyelinated hippocampus, which could be partly improved by treatment with anti-LINGO-1. However, we did not observe the increased expression of beta amyloid, hyperphosphorylation of tau and loss of neurons in demyelinated hippocampus. Our results suggest that demyelination might lead to the impairment of neuronal transport, but not cause increased level of hyperphosphorylated tau and beta amyloid. Our research demonstrates remyelination might be an effective pathway to recover the function of neuronal axons and cognition in MS. IntroductIon Multiple sclerosis (MS) is a progressive neurological disorder of young adults, characterized by myelin destruction and neurodegeneration in the central nervous system (CNS) [1]. Similar to Alzheimer's disease (AD), memory impairment is a common symptom of MS [2,3]. Previous imaging studies have demonstrated that altered hippocampal measures and decreased hippocampal volume correlate with memory impairment in MS patients [4][5][6]. And similar results are also found in AD patients [7][8][9]. Postmortem studies have shown that 53 to 79% of MS hippocampi are detected demyelination and demyelinated hippocampi consist with significant decreases in neuronal proteins essential for the function of neurons [10][11][12]. Myelin is an important structure in the CNS, which contributes to the fast and effective brain function [13,14]. Demyelination may cause the change of neuronal proteins and dysfunction of neurons, and lead to cognitive impairment [15]. MS is one of common demyelinated diseases in the CNS. Besides MS, myelin impairment is also detected in the brain of AD patients [16]. Previous image research has demonstrated that the white matter is injured in the brain of AD patients [8,17,18]. And the autoantibodies (antibodies against myelin associated proteins) are observed in significant higher titers in AD patients, compared with healthy controls [19]. Furthermore, postmortem studies have shown that demyelination is coexisted with amyloid plaques in the brain of AD cases [20,21]. And in the AD transgene mice, myelin impairment precedes the appearance of beta amyloid and hyperphosphorylation of tau [22]. However, the relationship between demyelination and AD-like pathology is still unclear. The cuprizone model is one of the most common experimental demyelination animal models of MS, which is induced through a non-inflammation way [23]. 
The cuprizone is a drug special to impair the mature oligodendrocyte leading to demyelination, but it does not influence the viability of the neuroblastoma cell line SH-SY5Y cells, microglia, and astrocytes, and the proliferation and survival of oligodendrocyte precursor cells (OPC) [24]. Previous research has showed that cuprizone could induce significant demyelination in the corpus callosum, hippocampus, and so on [25,26]. Therefore, in this research, we used cuprizone-fed mice as the demyelination model to detect whether demyelination could induce AD-like pathology. LINGO-1 is a transmembrane protein, which is specifically expressed in oligodendrocytes and neurons [27]. Numerous studies, both in vitro and in vivo, have showed that antagonist the function of LINGO-1 can promote myelin formation of oligodendrocyte [9,[28][29][30]. LINGO-1 antibody can promote remyelination and functional recovery in experimental autoimmune encephalomyelitis (EAE) mice [29,31]. However, LINGO-1 deficiency has no effect on inflammation [29]. All these research has demonstrated that LINGO-1 antagonist is one of the important ways to promote remyelination in the CNS. In the research, we found that the cuprizone model had mild spatial learning impairment with significant demyelination in the hippocampus. And anti-LINGO-1 had slightly improved the ability of learning and partly increased the expression of myelin basic protein (MBP) in the hippocampus. We also found kinesin light chain (KLC), neurofilament light chain (NFL) and neurofilament heavy chain (NF200) were declined in the cuprizone model, and anti-LINGO-1 treatment could partly improve the expression of KLC, NFL and NF200. Furthermore, the synaptic protein spinophilin was decreased in the hippocampal cortex and it was slightly increased after anti-LINGO-1 treatment. However, we did not find the increased level of beta amyloid, abnormal phosphorylation of tau and neuronal loss, which are important hallmarks of AD. Our research suggests that demyelination may lead to the impairment of neuronal transport, and decreased expression of neurofilament proteins and synaptic protein, but not cause AD-like hyperphosphorylated tau, increased level of beta amyloid and neuronal loss. Furthermore, remyelination may be an effective pathway to recover the function of neuronal axons and cognition. results the behaviors of cuprizone-fed mice The mice were experienced cuprizone administration for about ten weeks, and the behaviors were tested at weeks 9 to 9.5, including the elevated plus maze (EPM), open field test, sucrose preference test and Morris water maze test (MWT). LINGO-1 antibody treatment was began in the third week and continued to the end of behavioral tests without stopping cuprizone administration. The results showed that the cuprizone-fed mice had no anxiety and depression-like behaviors, displayed as that there was no difference between the cuprizonefed mice and control mice in the open field test ( Figure 1A-1C), elevated plus maze (EPM) ( Figure 1D-1E), and sucrose preference test ( Figure 1F). In MWT, during the training days, cuprizone-fed mice performed worse than the controls, showing as the latency to reach the platform in cuprizone-fed mice were longer than that in the control mice on the fourth day ( Figure 1G). And after the LINGO-1 antibody treatment, the latency in cuprizone-fed mice was similar to that in control mice ( Figure 1G). 
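To make the water-maze comparison concrete, a small sketch of how per-group escape latencies on a given training day can be summarised as mean ± SEM is given below; the group names and numbers are hypothetical placeholders, not the data behind Figure 1G.

import numpy as np

# Hypothetical day-4 escape latencies (s), one value per mouse
latencies_day4 = {
    "control": [18.2, 22.5, 15.9, 20.1, 24.3],
    "cuprizone": [35.4, 41.2, 29.8, 38.6, 33.1],
    "cuprizone + anti-LINGO-1": [21.3, 26.7, 19.5, 24.8, 28.2],
}

def mean_sem(values):
    v = np.asarray(values, dtype=float)
    return v.mean(), v.std(ddof=1) / np.sqrt(v.size)

for group, vals in latencies_day4.items():
    m, sem = mean_sem(vals)
    print(f"{group:>25s}: {m:5.1f} +/- {sem:4.1f} s")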
On the detecting day, neither the distance travelled nor the time spent in the platform quadrant differed between the cuprizone-fed mice and control mice (Figure 1H and 1I). demyelination in the hippocampus In the hippocampal cortex, we examined the myelin-associated proteins MBP, 2',3'-cyclic nucleotide 3'-phosphodiesterase (CNPase) and proteolipid protein (PLP). The expression of MBP was significantly decreased in the cuprizone-fed mice compared with the control group and was increased in the LINGO-1 antibody-treated group (Figure 2A). Other myelin-associated proteins, CNPase and PLP, were reduced in the cuprizone-fed mice, but not significantly (Figure 2B and 2C). By immunofluorescence staining, we also found decreased expression of MBP in subregions of the hippocampal cortex, including the cornu ammonis 1 (Ca1), Ca3 and dentate gyrus (DG), in the cuprizone-fed mice, and the LINGO-1 antibody could partly restore the level of MBP (Figure 2D-2L). reduction of proteins essential for axonal transport in the hippocampus Efficient axonal transport is essential to maintain neuronal function [32]. KLC is responsible for binding cargo during fast anterograde transport, whereas dynein (Dyn) is an important protein involved in retrograde transport [32]. The level of KLC was significantly decreased in the hippocampus of the cuprizone-fed mice and was slightly, but not significantly, increased after LINGO-1 antibody treatment (Figure 3A). In contrast, no significant change in the Dyn level was measured among the three groups of mice (Figure 3B). By immunofluorescence staining, we also found decreased expression of KLC in subregions of the demyelinated hippocampal cortex, including Ca3 and Ca1, and the LINGO-1 antibody could partly restore the level of KLC (Figure 3C-3H). However, the expression of KLC in the DG was similar among the three groups (Figure 3I-3K). The neurofilaments (NFs) are a major component of the neuronal cytoskeleton; they provide structural support for the axon, regulate axon diameter and are essential for the formation of axonal networks [33]. In the hippocampal cortex, the expression of NF200 was lower in the cuprizone-fed mice than in the control mice, and after six weeks of LINGO-1 antibody treatment NF200 was increased significantly (Figure 4A). NFL was also significantly reduced in the cuprizone-fed mice and was increased, but not significantly, after six weeks of LINGO-1 antibody treatment (Figure 4B). By immunofluorescence staining, we also found decreased expression of NF200 in subregions of the demyelinated hippocampal cortex, including Ca3, Ca1 and DG, and the LINGO-1 antibody could increase the level of NF200, consistent with the western blot findings (Figure 4C-4K). (Figure legend fragment) *denotes statistical significance compared with controls (P < 0.05). #denotes statistical significance compared with the cuprizone-fed mice without treatment (P < 0.05). Images were captured from stained frozen sections using a fluorescence microscope equipped with 10× objectives. Scale bar, 1000 µm. (Figure legend fragment) Immunofluorescence staining of KLC in subregions of the hippocampus, including Ca3 (C-E), Ca1 (F-H) and DG (I-K). *denotes statistical significance compared with controls (P < 0.05). Images were captured from stained frozen sections using a fluorescence microscope equipped with 40× objectives. Scale bar, 200 µm.
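The group comparisons of band intensities reported above follow the one-way ANOVA plus post hoc testing described in the Methods. A minimal sketch of that workflow, assuming hypothetical loading-control-normalised densitometry values rather than the study's actual measurements, could be:

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical MBP/beta-actin ratios per animal (placeholder values)
data = {
    "control": [1.00, 0.94, 1.08, 0.97],
    "cuprizone": [0.52, 0.61, 0.48, 0.57],
    "cuprizone + anti-LINGO-1": [0.81, 0.74, 0.88, 0.79],
}

values = [np.asarray(v, dtype=float) for v in data.values()]
print(f_oneway(*values))                                   # one-way ANOVA across the three groups

labels = np.repeat(list(data.keys()), [len(v) for v in values])
print(pairwise_tukeyhsd(np.concatenate(values), labels))   # Tukey's post hoc comparisons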
no increase of hyperphosphorylated tau and betaamyloid in the hippocampus Hyperphosphorylation of tau is one of the pathological hallmarks of AD. In the research, we detected the level of phosphorylated and total tau. Tau hyperphosphorylation at the Ser396 and Thr231 epitopes was not found in the cuprizine-fed mice, displayed as the level of tau hyperphosphorylation at the Ser396 and Thr231 episodes in the cuprizine-fed mice was similar to that in the control mice, and after LINGO-1 antibody treatment, the phosphorylation of tau was also not changed ( Figure 5A and 5B). Moreover, total level of tau, tau-5 was similar among the three groups ( Figure 5C). Beta-amyloid accumulation is another pathological hallmark of AD. And beta-amyloid peptide is derived from proteolytic cleavage of the amyloid protein precursor (APP) in the axon [34]. In the research, we used western blotting to detect the level of APP, and we found that there was no difference in the level of APP among the three groups ( Figure 5D). And we also detected the level of beta-amyloid in the hippocampus and found that the level of beta-amyloid was similar among the three groups ( Figure 5E). decreased level of synaptic protein in the hippocampus Postsynaptic density proteins (PSD95 and PSD93) and Spinophilin are the neural proteins essential for synaptic plasticity [35]. In the research, we found no significantly different in the expression of postsynaptic density proteins (PSD95 and PSD93) among the three groups ( Figure 6A and 6B). And the expression of Spinophilin, associated to spines, was markedly decreased after ten-week cuprizine-fed and was slightly increased but not significantly in the LINGO-1 antibody treated groups ( Figure 6C). no obvious neuronal loss in the hippocampus Neuronal loss is also one of the main characteristics of AD. In the research, we detected the expression of NeuN, a marker of neuron, using western blot, and found no significant difference in the expression of NeuN among the three groups in the hippocampus ( Figure 7A). Also, neuronal status in subregions of the hippocampus was determined by NeuN immunofluorescence staining. And the results were consistent with that of western blot ( Figure 7B-7M). dIscussIon In the present study, we found the mild spatial learning impairment accompanied by the reduction of MBP in the hippocampus of the cuprizone-fed mice. After LINGO-1 antibody treatment, the learning ability of the cuprizone-fed mice was improved with an increased level of MBP. Meanwhile, the anterograde transport protein KLC, neurofilament proteins NF200 and NFL were significantly declined in the cuprizone-fed mice. After the treatment with LINGO-1 antibody, the proteins of KLC, NF200 and NFL were also increased. However, the abnormal hyperphosphorylated tau, increased beta amyloid and neuronal loss were not observed in demyelinated hippocampus of the cuprizone-fed mice. Our results suggest that demyelination might lead to the impairment of axonal transport but not cause abnormal hyperphosphorylated tau and increased beta amyloid. Cognitive impairment is one of the common symptoms in MS patients [1]. Previous research has demonstrated that cognitive impairment is correlated to demyelination in the brain of MS patients and EAE mice [31,[36][37][38]. We found in our study that the cuprizonefed mice displayed mild spatial learning deficits with the decline of MBP and the ability of learning was improved after myelin repair. 
Combined with previous research, our study suggests that demyelination might cause the cognitive impairment which could be reversed by promoting remyelination. MS patients also have suffered the abnormal emotion, such as depression and anxiety. Up to 60% MS patients have depression and over 30% patients have anxiety [1,39]. However, in our research, the depression and anxiety-like behaviors were not detected in cuprizone-fed mice. It might be caused by the hyposensitivity of tests, which were used to evaluate depression and anxiety-like behaviors. Furthermore, to our acknowledgement, the variance within every group is fairly high in our research, and this may mean that we underestimate the significance of differences among groups. Myelin wraps around neuronal axons and it is essential to functional integrity and long-term survival of the neuronal axons [40,41]. The impairment of myelin can cause different type of axonal pathology. Previous research has demonstrated that there is a tight relationship between the myelin and axonal transport [12,42]. Kinesins are known as the main molecular motors that drive cargoes from the neuronal cell bodies to the distal nerve terminals along the microtubule [43]. KLC is one of components of kinesin-I and may involve in coupling cargo to the heavy chain or modulating its ATPase activity www.impactjournals.com/oncotarget Figures c.-K. were displayed the immunofluorescence staining of NF200 in subregions of the hippocampus, including Ca3 (C-E), Ca1 (F-H) and DG (I-K).*denotes statistical significance compared with controls (P < 0.05). #denotes statistical significance compared with the cuprizine-fed mice without treatment (P < 0.05). Images were captured from stained frozen sections using a fluorescence microscope equipped with 20×objectives. Scale bar, 200µm. Figure 5: no increase of hyperphosphorylated tau and beta amyloid in the hippocampus. The phosphorylated-tau antibodies PT231 A. and PS396 b. as indicated were used to measure the alteration of tau among the three groups in the hippocampus. c. Total level of tau, tau-5 expression was compared among the three groups. The level of APP d. and beta-amyloid e. was measured by Western-blot. *denotes statistical significance compared with controls (P < 0.05). [43,44]. Previous study has suggested that demyelination may lead to the impairment of axonal transport in the MS [12]. We previously find that there is a decreased level of KLC in the demyelinated parahippocampal cortex (PHC) in the EAE mice, and it can be restored after remyelination treatment [31]. In the present study, we also found the reduction of KLC in demyelinated hippocampus of cuprizone-fed mice, which could be partly restored by LINGO-1 antibody. NFs are major components of the neuronal cytoskeleton, which are the basis for specialized axonal structures and required for axonal transport [33,43,45,46]. Previous study has demonstrated that the neurofilament content is increased in the myelinated axonal segment, compared with the unmyelinated part [47]. The reduction of axonal NFs is observed in demyelinated PHC of EAE mice [31]. Furthermore, the impaired NFs can be recovered accompany with the repair of myelin [31]. In the present study, we found the decline of NF200 and NFL in demyelinated hippocampus of cuprizone-fed mice. And also increased level of NF200 and NFL was observed in accordance with myelin repair. Axonal transport associated proteins and normal axonal structure are crucial to the axonal transport in neurons [43]. 
In our research, we observed that both axonal transport associated protein (KLC) and axonal structural proteins (NF200 and NFL) were decreased in demyelinated hippocampus of cuprizone-fed mice and were partly increased after myelin repair. Furthermore, we did not find neuronal loss in demyelinated hippocampus of the cuprizine-fed mice, which suggests that the decline of KLC, NF200 and NFL was not simply due to the loss of neurons in the hippocampus. Our results suggest that demyelination might lead to the impairment of axonal transport in the hippocampus, which could be ameliorated by remyelination. The hyperphosphorylation of tau and beta-amyloid accumulation are important hallmarks of AD. Much research, including clinical and preclinical research, has detected the level of hyperphosphorylated tau and beta amyloid in MS patients and animal models. However, the results are inconsistent [48][49][50][51]. In our research, we neither found the hyperphosphorylation of tau at pS396 and pT231, nor observed the increased level of beta amyloid in demyelinated hippocampus of the cuprizonefed mice. Based on our finding and the previous research, we infer that demyelination alone can not induce AD-like pathology, such as hyperphosphorylated tau and increased beta-amyloid, in a short time. In conclusion, our results suggested demyelination might cause dysfunction of axonal transport. However, the impairment of myelin alone did not lead to hyperphosphorylation of tau, increase of beta amyloid and neuronal loss, which are important hallmarks of AD. Animals Adult (C57BL/6, eight-week-old, male) mice were purchased form the animal research center of shanghai laboratory and were housed at 22°C-24°C. Food and water were available ad libitum. Animals were cared for in accordance with the National Institutes of Health Guidelines for Animal Care. All experimental procedures were approved by Animal Care and Use Committee in Southeast University. At the beginning of the research, the mice were randomly divided into three groups. Three groups included the control group (n = 17), the non-pharmacologic treatment Cuprizine-fed group (n = 17) and the pharmacologic treatment Cuprizine-fed group (n = 17). 8-Week-old C57/Bl6 mice were fed with 0.2% (w/w) cuprizone (bis-cyclohexanone oxaldihydrazone) (Sigma) in ground breeder chow for about ten weeks. lInGo-1 antibody treatment The LINGO-1 antibody in the research was generated based on the method of Mi et al [29], but using the BALB/c strain of mice. In our previous research, we have demonstrated that the LINGO-1 antibody is specifically binding to the LINGO-1 protein [31]. LINGO-1 antibody treatment was begun in the third week, for significant demyelination is detected in the third week in the Cuprizine-fed mice [26]. For systemic drug delivery, the mice in the treatment group received intraperitoneal injections of 10 mg/kg LINGO-1 antibody once every six days like our previous research [31]. The mice, in the other two groups, were administered 0.9% NaCl once every six days. During LINGO-1 antibody treatment, the cuprizine was continued to feed the mice and it did not stop until the mice were killed. behavioral analyses At weeks 9 to 9.5, we tested the behaviors of mice. Before the behavioral tests, the mice were taken to the new environment to acclimate for two days. 
The order of the behavioral tests was from low-stimulation experiments to high-stimulation, as follows: the elevated plus maze, open field test, sucrose preference test (low-stimulation experiments), and Morris water maze (high-stimulation one). the elevated plus maze (ePM) The elevated plus maze (EPM) is an experiment, which is widely used to assess anxiety in rodents. The EPM test was conducted following the previously described way [31]. Open field test The open field test is used to assess the general locomotor activity and anxiety of rodents. The test was conducted following the previously described way [31]. Each mouse was placed in the center of the open field apparatus (50 cm x 50 cm x 60 cm) and can move freely for 5 min. The average speed and time/distance in the center was recorded to measure the locomotor activity and anxiety levels. Between each trial, the maze was wiped clean with a damp sponge and dried with paper towels. sucrose preference test The sucrose preference test is a way, used to test the level of anhedonia in mice. The test was conducted following the previously described way [31]. The mice were habituated to 2% sucrose solution for one day prior to the start of the experiment. On the test day, the mice were housed singly with ad libitum food and two bottles-one with water and the other with 2% sucrose solution-for 24 hours. The bottles were reversed halfway through the time to avoid a side preference. The weights of the two bottles were recorded to calculate the sucrose consumption. The preference for the sucrose solution was calculated as a percentage of total liquid consumed. The sucrose preference rate was calculated using the following formula: sucrose preference rate = sucrose consumption / (water consumption + sucrose consumption) × 100%. Morris water maze The water maze test was a good way to test spatial learning and memory ability [52]. The maze consisted of a 1.2-m diameter circular pool filled with water (22 °C) that was made opaque by the addition of non-toxic, water-based white food coloring. A circular Plexiglas escape platform (10 cm in diameter) was located in the center of one of the quadrants of the pool. The experiment consisted of two phases including five consecutive training days and one detecting day. The animals underwent four trials over the training days with the platform submerged 1.5 cm below the surface of the water (60s maximum trial duration; 20-30 min interval). The latency to reach the platform was analyzed to assess learning in the mice. On the last day, mice were tested with a single trial without the platform, starting from the opposite quadrant of the platform for 60 s. The percentage of the distance and time in the platform quadrant was measured to evaluate the memory performance. Western blot Mice were killed after the behavioral tests. Mice from each group were deeply anesthetized with chloral hydrate and perfused transcardially with ice-cold 0.9% saline. The brains were dissected from the skulls, and the dorsal hippocampus was dissected under an anatomical microscope based on the stereotaxic coordinate. Immunofluorescence Mice were killed after the behavioral tests. Mice from each group were deeply anesthetized with chloral hydrate and perfused transcardially with ice-cold 0.9% saline, followed by 4% paraformaldehyde. The brains were dissected from the skulls, post-fixed with 4% paraformaldehyde over night, followed by 10%, 20% and 30% sucrose solutions, each for at least 16 hours. 
Brain tissue was embedded in Tissue Freezing Medium (Leica, Germany), frozen at −80°C and cut with a Leica microtome into 20-μm coronal sections. Frozen sections were used to observe the expression of MBP, KLC and NF200. Neuronal status in the hippocampus was determined by NeuN immunofluorescence. Sections were incubated over night at 4°C with primary antibodies: anti-MBP (anti-rat monoclonal; 1:200; Abcam), anti-KLC (anti-rabbit monoclonal; 1:50; Abcam), anti-NF200 (anti-rabbit polyclonal; 1:200; Sigma), anti-NeuN (antirabbit polyclone; 1:200; EMD millipore). Following the incubation with primary antibodies, sections were washed and incubated for 2 h at room temperature with secondary antibody: donkey Alexa Fluor 488 F(ab) anti-rat IgG, goat Alexa Fluor 488 F(ab) anti-rabbit IgG, or goat Alexa Fluor 592 F(ab) anti-rabbit IgG. Images were captured from stained frozen sections using a fluorescence microscope. statistical analysis The data were presented as the means ± SEM. A one-way ANOVA with Tukey's post hoc test or least significance difference (LSD) test was used to determine statistical significance. P < 0.05 was set as the cutoff for statistical significance. The statistical analyses were performed using GraphPad Prism 4 software and SPSS 18.
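The sucrose-preference calculation given in the behavioural methods above reduces to a one-line formula; a small sketch (with hypothetical bottle weights, not study data) is:

def sucrose_preference_rate(sucrose_consumed_g, water_consumed_g):
    # Implements the formula from the Methods:
    # preference (%) = sucrose consumption / (water + sucrose consumption) * 100
    return 100.0 * sucrose_consumed_g / (water_consumed_g + sucrose_consumed_g)

# Hypothetical 24-h bottle-weight differences (g) for one mouse
sucrose = 120.0 - 92.0    # sucrose bottle: start weight - end weight
water = 118.0 - 104.0     # water bottle: start weight - end weight
print(f"sucrose preference: {sucrose_preference_rate(sucrose, water):.1f} %")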
2018-04-03T00:21:54.265Z
2016-04-25T00:00:00.000
{ "year": 2016, "sha1": "e778d8d1918b8bb80a43b03c4cb80c023fb98c29", "oa_license": "CCBY", "oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=27205&path[]=8981", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e778d8d1918b8bb80a43b03c4cb80c023fb98c29", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
146404883
pes2o/s2orc
v3-fos-license
Study of the Residual Stress in 2A97 Al-Li Alloy Using Different Methods The contour method and X-ray diffraction have been used to measure the quenching stress in 2A97 Al-Li alloy. The surface stresses were measured by X-ray diffraction, and the 2-D distribution of the stresses was obtained by the contour method. The differences between the two methods can be more than 100 MPa. In addition, a finite element model suitable for the evaluation of residual stresses in this material was established. There is a very good correlation between the contour method and the finite element model (FEM); the differences are less than 50 MPa. The model therefore predicts the residual stress in 2A97 Al-Li alloy with high precision. Introduction With the development of the aviation industry, new materials with high specific strength are in great demand [1]. The 2A97 Al-Li alloy has been widely used in aircraft structural parts because of its low density and improved specific strength [2,3]. In this high-strength alloy, residual stress is an important aspect that can influence processability and service performance [4,5]. Residual stresses in materials are self-equilibrating [6,7] and may be ignored during material processing [8]. However, residual stresses can seriously affect the processing behaviour and service performance of materials, for example by causing deformation and cracking and by reducing fatigue strength [9,10]. It is therefore necessary to characterize residual stresses over the life cycle of materials. In this work, residual stress was introduced into 2A97 Al-Li alloy by solution treatment (i.e. quenching). X-ray diffraction, the contour method and a finite element model were used to evaluate the residual stresses from the surface to the core of the sample. The differences between the results of these methods are explained explicitly, and a finite element model of the residual stress in 2A97 Al-Li alloy was established. X-ray diffraction The residual stresses in the X-direction were calculated from {311} crystal plane diffraction. Experiments were performed on an X-ray stress analyzer (ST, China) with Cr-Kα radiation. The irradiated area was 2 mm in diameter. The test points are shown in Figure 1. Figure 3. The fixing of the specimen. Contour method The sample was cut in half. A CMM (Coordinate Measuring Machine) was used to measure the profile of the two cut surfaces. The measurement was carried out on a HEXAGON Performance CMM with an accuracy of about 3 μm. X-ray diffraction X-ray diffraction was used to measure the residual stress after quenching. The measured positions are shown in Fig. 1: four of them are located at the centres of the four surfaces, and the others are spaced at 14.5 mm intervals from the midpoints. The residual stress direction is parallel to the X-axis. Table 2 shows the residual stress results. Most of them are compressive stresses. During the solution treatment, the thermophysical properties of the material change with temperature, which manifests as surface compressive stress. The second position shows an anomalous tensile stress, which is attributed to differences in the cooling rate across the sample during quenching [11]. Figure 4 illustrates the range of the CMM data. The range of the X-axis is 0~60 mm, the Y-axis 0~30 mm and the Z-axis -0.08~0.02 mm. On the two surfaces, the points are spaced 2 mm apart in the y direction and 0.5 mm apart in the x direction. (Figure caption) CMM data of the cutting surfaces. Theoretically, the two surfaces are symmetric; however, because of the errors of cutting and of the CMM, they are asymmetric.
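Because the cutting errors described above are largely anti-symmetric between the two opposing cut faces, the contour method removes them by averaging the two measured height maps. A minimal, non-authoritative sketch of that averaging step, assuming the CMM points have already been interpolated onto a common (x, y) grid (hypothetical arrays), is given below; the mirroring convention depends on how the two faces are registered.

import numpy as np

def average_cut_surfaces(z_face_a, z_face_b):
    # z_face_a, z_face_b: 2-D height maps (mm) of the two opposing cut faces
    # sampled on the same (x, y) grid.  One face is mirrored about the cut
    # plane so both maps share the same in-plane axes; averaging then cancels
    # anti-symmetric contributions such as shear-induced cutting errors.
    z_face_b_aligned = np.fliplr(z_face_b)   # convention-dependent mirroring
    return 0.5 * (z_face_a + z_face_b_aligned)

In the standard contour-method procedure the averaged, smoothed surface is then applied with reversed sign as a displacement boundary condition in the FEM back-calculation, which is the role of the ANSYS step described next.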
Thus, the surfaces should be averaged and fitted. In this work, cubic spline fitting was used and the calculation was implemented in MATLAB. Figure 5 illustrates the calculation results. The random errors and the anti-symmetric cutting errors were removed by this step, and the smoothed surface was used for the stress reconstruction simulated in ANSYS. In order to solve the stress problem, the heat conduction problem must be solved first [12]. According to Fourier's law and the law of energy conservation, heat conduction is described by the transient heat-conduction equation ρc ∂T/∂t = ∇·(λ∇T), where ρ is the density, c the specific heat capacity and λ the (temperature-dependent) thermal conductivity. There are two key aspects of the heat conduction problem: the initial condition and the boundary conditions [13]. In this problem, the initial condition was the temperature at the beginning of quenching, and the equilibrium temperature reached during quenching was used as the boundary condition. A model of the sample was established. Because the temperature gradient across the sample was small during the quenching process, the finite element model was constructed with a uniform mesh; the element size was set to 2 mm × 2 mm × 2 mm. The grid partition of the finite element model is illustrated in Fig. 7. The thermophysical properties of the material change with temperature, so the analysis of the temperature and stress fields during quenching is a nonlinear transient problem; if the changes in the material parameters were ignored, the errors would be large. Table 3 shows the properties of the alloy as a function of temperature. Simulation results The results of the FEM are illustrated in Fig. 10. From the surface to the core, the residual stress increases from about -40 MPa at the surface to about 80 MPa in the core, which is related to the thermophysical properties of the material. The result of the finite element model agrees well with the contour method; the maximum tensile stress in the material is about 80 MPa. The surface stresses, however, can differ: near the sample surface (Z = 0, Z = 32) the differences are more than 60 MPa. This is related to the construction of the finite element model, in which the flow and the temperature change of the quench medium were ignored and second-phase precipitation was neglected; these simplifications cause the differences between the two methods. Discussions X-ray diffraction, the contour method and finite element modelling can all evaluate the residual stress, but each has its own limitations. Measurement of residual stress In X-ray diffraction, the measurement is based on the sin²ψ method [14]. In this method, the diffraction angle (2θ) depends only on the crystal orientation (ψ), and the stress follows from the linear relation σ = K · ∂(2θ)/∂(sin²ψ), where K is the stress constant derived from the X-ray elastic constants (XEC). K is determined by the material and the diffracting crystal plane and can be obtained from a look-up table [14]. In general, the condition for the above formula is that the material is isotropic within the irradiated area. This linear relationship can be disturbed by large grain size and rolling texture [15], so materials with large grains or strong texture are not well suited to the sin²ψ method. Figure 11 shows the relationship between 2θ and sin²ψ; the results for all the test points satisfy the formula, so the results are reliable. The contour method is well suited to measuring the residual stress in the core of the material, and it provides the 2-D distribution of the residual stress [16]. However, the method requires cutting the material, which is not acceptable for some precious samples. The uncertainty of the test results is related to the cutting and to the surface fitting [17]. During cutting, the cut width can be hundreds of microns.
Thus, the contours of the two cut surfaces are asymmetric. This anti-symmetric error can be averaged out during data processing [18]. However, the cutting parameters fluctuate near the material surface, so the cut width is also abnormal there; this error cannot be avoided. In the subsequent data processing, different fitting functions were compared to enhance the reliability of the results. Chebyshev, Fourier, sigmoid, quadratic spline and cubic spline functions are the common methods used to fit the surface; among these, the cubic spline function is the best [19], as it restores the contour of the surface most faithfully. Fig. 9 illustrates the differences between X-ray diffraction and the contour method. At some positions, the differences are more than 100 MPa, but the distributions of the two results are in agreement. The differences could be caused by the following: 1) The choice of XEC: a theoretical parameter was used, which might deviate from the actual situation. 2) The cutting error in the contour method: the cut width could be abnormal and lead to volatility of the test results. 3) The problem of spatial resolution: the penetration depth of the X-rays might be tens of microns, and the results are averaged over this range; in the contour method, however, the point spacing is 0.5 mm from the surface to the core, so the spatial resolution of the contour method is far coarser than the micron level. Simulation of residual stress Finite element simulation of heat treatment involves energy exchange theory, heat transfer theory, phase transition theory, elastoplastic theory and so on, and all of them should in principle be considered [13]. In existing models, however, some of them are used to solve the problem and the others are ignored, which increases the limitations of the simulation and reduces its accuracy. In this study, neglecting second-phase precipitation in 2A97 Al-Li alloy is unlikely to influence the distribution of macroscopic residual stress; the problems of heat conduction and deformation are the ones that should be considered while building the model. Conclusions For the 2A97 Al-Li alloy, the quenching stresses at the surface are compressive and those in the core are tensile. 1) Compared with the contour method, X-ray diffraction is more suitable for the measurement of the surface residual stress. 2) There is a very good correlation between the contour method and the finite element model; the differences are less than 50 MPa. 3) For the contour method, the cutting technology and the data processing can be improved to reduce the cutting error at the material surface and increase the reliability of the results.
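As an illustration of the sin²ψ evaluation discussed above, the surface stress follows from a linear fit of 2θ against sin²ψ multiplied by the stress constant K. The sketch below uses hypothetical tilt-series values and a placeholder K, not the measurements of this study.

import numpy as np

def stress_from_sin2psi(two_theta_deg, psi_deg, k_mpa_per_deg):
    # sin^2(psi) method: sigma = K * d(2theta)/d(sin^2 psi),
    # with K the stress constant derived from the X-ray elastic constants.
    x = np.sin(np.radians(psi_deg)) ** 2
    slope, _ = np.polyfit(x, np.asarray(two_theta_deg, dtype=float), 1)
    return k_mpa_per_deg * slope

# Hypothetical {311} tilt series (Cr-K-alpha) and a placeholder K value
psi = np.array([0.0, 15.0, 25.0, 35.0, 45.0])
two_theta = np.array([139.42, 139.45, 139.49, 139.53, 139.58])
print(f"surface stress ~ {stress_from_sin2psi(two_theta, psi, -92.0):.0f} MPa")

A negative result, as in this placeholder example, corresponds to a compressive surface stress, consistent with the qualitative trend reported above.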
2019-05-07T14:16:11.655Z
2019-04-10T00:00:00.000
{ "year": 2019, "sha1": "317712d07ad8d816e55d85bc1a1233fb51f44111", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/490/2/022078", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "bd85cfcb27f31446493a51ed792149a67139feb6", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
27950779
pes2o/s2orc
v3-fos-license
Computer-Aided Recognition of ABC Transporters Substrates and Its Application to the Development of New Drugs for Refractory Epilepsy : Despite the introduction of more than 15 third generation antiepileptic drugs to the market from 1990 to the moment, about one third of the epileptic patients still suffer from refractory to intractable epilepsy. Several hypotheses seek to explain the failure of drug treatments to control epilepsy symptoms in such patients. The most studied one proposes that drug resistance might be related with regional overactivity of efflux transporters from the ATP-Binding Cassette (ABC) superfamily at the blood-brain barrier and/or the epileptic foci in the brain. Different strategies have been conceived to address the transporter hypothesis, among them inhibiting or down-regulating the efflux transporters or bypassing them through a diversity of artifices. Here, we review scientific evidence supporting the transporter hypothesis along with its limitations, as well as computer-assisted early recognition of ABC transporter substrates as an interesting strategy to develop novel antiepileptic drugs capable of treating refractory epilepsy linked to ABC transporters overactivity. Refractory epilepsy, Transporter hypothesis. Drug Resistant Epilepsy: Definition and Current Explanations Epilepsy is the most frequent chronic brain disorder, affecting about 50 million people worldwide [1]. While drug therapy is the treatment of choice and can successfully treat (i.e. provide sustained seizure freedom) about 70% of people with epilepsy [1], the remaining 30% of the patients suffer from drug resistant, refractory or intractable epilepsy [2], i. e. the failure to achieve seizure freedom through adequate trials of two tolerated appropriately chosen antiepileptic drug (AED) schedules [3] (note that a clear, universal definition of refractory epilepsy will be fundamental to approach the limitations of the neurobiological explanations to refractory epilepsy later in this article). This scenario has not changed substantially in spite of the introduction of more than 15 third generation AEDs from 1990 to the moment [4], a fact that has led to a decrease in the industrial interest in the development of new compounds for epilepsy [5]. *Address correspondence to this author at the Department of Biological Sciences, Faculty of Exact Sciences, University of La Plata, La Plata, Argentina; Tel: +542214235333 ext 41; E-mail: lbb@biol.unlp.edu.ar Biological mechanisms underlying drug resistant epilepsy have not been fully elucidated yet [5], though there exist to the day five hypotheses that try to explain the nature of this phenomenon: the transporter hypothesis [6,7], the target hypothesis [7,8], the neural network hypothesis [9], the gene variant hypothesis [10] and the intrinsic severity hypothesis [11]. Historically speaking, the transporter and target hypotheses have been conceived earlier and have therefore been more extensively explored from an experimental viewpoint. The transporter hypothesis sustains that drug resistant epilepsy may be a consequence of the local overactivity of ATP-Binding Cassette (ABC) transporters at the blood-brain barrier (BBB) and/or the epileptic foci. A more detailed overview on the evidence and limitations of this hypothesis is provided in the next subsection. The target hypothesis proposes that the reduced sensitivity to AEDs might be linked to acquired modifications in the structure and/or functionality of AED targets. 
While some years back constitutive alterations of drug transporters or targets were also considered within the scope of the transporter and target hypothesis, respectively [8,12], leading experts in the field now seem to prefer to categorize intrinsic alterations (genetic variants) of drug targets within the gene variant hypothesis [5]. The gene variant hypothesis, however, also covers other L.E. Bruno-Blanch possible genetic causes of drug resistance (e.g. polymorphic variants of drug biotransformation enzymes). In the frame of this novel classification scheme, only acquired modifications in drug transporters or targets would be contained by the transporter or target hypothesis. Though at first sight this distinction may appear as a trivial classification problem, the nature of the pharmacokinetic or pharmacodynamic alteration could have a profound impact on the clinical approach to the drug resistance issue. While a genetic cause of pharmacoresistance might currently be detected through simple diagnostic tests even before starting the treatment, acquired modifications linked to the pathophysiology of the disease are more difficult to prove and still require more invasive procedures (e.g. surgery resection). Recently, a possible role of epigenetics in drug resistance epilepsy has also been suggested (establishing a sixth hypothesis for refractory epilepsy that might serve to expand the biological basis to other previous hypotheses such as the transporter and the target hypotheses), though to the moment the experimental basis supporting this mechanism remains scarce [13]. The neural network hypothesis maintains that recurrent episodes of excessive neural activity lead to plastic alterations and remodeling of the neural network; abnormal networking might in turn relate to the drug resistance phenomena. The hypothesis is supported by the fact that surgical resection of the seizure focus frequently results in seizure freedom [5]. As a matter of fact, epilepsies of structural cause are linked to drug resistance and abnormal brain imaging, and a good surgical outcome (seizure freedom) is associated to complete surgical resection of the epileptic focus or lesion, pre-surgical presence of lesions in MRI scans and the network complexity [14]. The differences between the alterations in brain plasticity in responsive and non-responsive patients are yet to be elucidated [9]. At last, the intrinsic severity hypothesis relies on epidemiologic studies showing that the single most important factor linked to the prognosis of epilepsy is the number of seizures at the epilepsy onset. Again, the biological basis of disease severity are however no fully understood to the moment [15], so currently the influence of the intrinsic severity hypothesis on treatment choice or treatment development is limited. It has been pointed out that none of the previous hypotheses provides a full or universal explanation to nonresponsive patients with epilepsy: a given hypothesis might be applicable to a particular subgroup of patients or, alternatively, some patients could require multiple hypotheses to explain their non-responsiveness [5,12,16]. The network hypothesis seems so far as the more holistic explanation to drug resistance, since some of the other explanations (e.g. the target hypothesis) could be applied in its context [5]. It is worth underlining that the treatment approach should be highly dependent on the drug resistance mechanisms present in a particular patient. 
Strengths and Weaknesses of the Transporter Hypothesis of Drug Resistant Epilepsy In eukaryotes, ABC transporters are transmembrane efflux transporters (they export their substrates from the cell) and they are characterized by broad-substrate specificity/ polispecificity [17,18]. They are preferentially expressed in barrier tissues (gut, BBB) and elimination organs (liver, kidneys), limiting the absorption and biodistribution and favoring the elimination of their substrates. Besides their role in the traffic of physiologic compounds (e.g. cholesterol or amyloid beta) they also take part in multi-drug resistance phenomena or pathogenesis in a diversity of disorders [18][19][20]. Though most of the research on ABC transporters has focused on the first historically identified member of the superfamily (P-glycoprotein, Pgp or P-gp or ABCB1 or MDR1), attention of the scientific and medical community has more recently been given to other members, prominently MRPs (ABCCs) and Breast Cancer Resistance Protein (BCRP, ABCG2). Preclinical validation of the transporter hypothesis for drug resistant epilepsy has been achieved, since drug resistance in animal models of refractory epilepsy has been reverted by co-administration of ABC transporters inhibitors. The first steps towards a proof of concept of the potential role of ABC transporters was delivered by Potschka et al. back in 2001 [21]. Using in vivo microdialysis in rats, the authors showed that the levels of carbamazepine in the extracellular fluid of the cerebral cortex could be enhanced through local perfusion of Pgp inhibitor verapamil and MRP1/2/5 inhibitor probenecid. Some time later, the same researchers proved that co-administration of probenecid (50 mg/kg) and phenytoin (6.25 mg/kg) resulted in a clear increase of phenytoin anticonvulsant effect in electrically kindled rats (a 90% increase in the threshold for generalized seizures) [22]. Neither 50 mg/kg probenecid nor 6.25 mg/kg phenytoin exerted significant anticonvulsant effect when given alone. It was discussed that such raise in the seizure threshold was unlikely to result from additive effects of the chosen subanticonvulsant doses. Interestingly, the inhibition of MRP2 (which is located in the apical membrane of endothelial cells and thus opposes to drug penetration in the brain) results in a significant increase of drug levels in the brain which was not secondary to alterations in peripheral drug pharmacokinetics. Similar results were later obtained in the focal pilocarpine model of limbic seizures [23]: while ip administration of oxcarbazepine 100 mg/kg to rats did not prevent seizures, co-administration of verapamil or probenecid resulted in complete protection. The severity of pilocarpine-induced seizures severity was not affected by perfusion of any of the inhibitors alone. Though highly valuable, this initial works had two important limitations: a) the use of first-generation, weak and unspecific modulators of ABC transporters and; b) experiments were performed on animal groups with no discrimination between responder and non-responder subgroups. The first issue was later solved through the use of third-generation, specific inhibitor of Pgp tariquidar. Van Vliet et al. used a chronic epileptic rat model and showed that, while phenytoin alone did not achieve complete suppression of spontaneous seizures, coadministration of phenytoin and tariquidar led to an almost complete seizure control [24]. 
Inhibition of Pgp by tariquidar increased the phenytoin brain-to-plasma ratio; it was also shown that the maximal administered doses of tariquidar did not exert anticonvulsant activity per se. The effect of coadministration of tariquidar on seizure control reverted after four days, suggesting that tolerance to tariquidar was developed. Definitive preclinical proof of concept was obtained by co-administration of tariquidar to epileptic drug resistant animals associated to up-regulated Pgp expression [25]. The key innovation in this study was the introduction of a protocol to discriminate responsive and nonresponsive animals to phenobarbital (Fig. 1). When co-administering tariquidar, five out of six non-responders became seizurefree or displayed a reduction in seizure of at least 50%. Similar results were obtained in the 3-mercaptopropionic acid model of refractory epilepsy, which is associated to Pgp upregulation at the BBB, astrocytes and neurons [26]. While 3-mercaptopropionic acid epileptic rats showed significantly lower hippocampal phenytoin concentrations compared to the control group, pre-treatment of such animals with the Pgp inhibitor nimodipine led to enhanced hippocampal phenytoin bioavailability (the drug hippocampal bioavailability was, in fact, even higher than in control animals, suggesting other possible interactions between phenytoin and nimodipine). It should be noted, however, that verapamil add-on therapy failed to enhance seizure control in a study on 11 phenobarbital resistant dogs; in fact, some animals showed a tendency to worsening of seizure control [27]. These results highlight the potential importance of inter-species differences and the necessity to validate the transporter hypothesis using appropriate clinical trials. Even so, clinical proof-of-concept remains elusive, as will be discussed next. Regarding clinical data, plenty of evidence has accumulated over the years showing high expression levels of ABC transporters at the neurovascular unit of nonresponder patients [28][29][30][31][32][33][34][35]. It should be commented, however, that most of these studies compared brain samples from patients with intractable epilepsy that had been subjected to surgical removal of the epileptic focus with specimens of human brain with no history of seizures. While brain tissue from epileptic drug-responsive patients would be a more suitable control, such control samples are usually unavailable since the invasive procedure to attain them is ethically unacceptable in responsive epileptic patients. This limitation has fortunately been overcome in more recent studies using positron emission tomography (PET) scans [36,37] which showed that the plasma-to-brain transport rate constant K 1 for [ 11 C]verapamil and (R)-[ 11 C]verapamil tends to be lower in different brain regions of drug resistant epileptic patients compared with seizure-free patients and healthy individuals. Whole brain K 1 was increased in both healthy subjects and pharmacoresistant patients after tariquidar administration. The results from the study using (R)-[ 11 C]verapamil [36] are particularly relevant since some of the limitations emerging from the diversity of factors affecting radiotracer kinetics [38,39] are addressed. These are extremely important steps toward validation of the transporter hypothesis, though definite proof would require reversal of drug resistance (improvement in seizure control) after blocking ABC transporters. 
Anecdotal cases of refractory patients who have shown improvement when AEDs were co-administered with verapamil have been reported [40][41][42][43], but it is not clear yet if the observed results could be a consequence of the intrinsic anticonvulsant activity of verapamil (e.g. through modulation of calcium influx in neurons) or another effect of this drug on the AED pharmacokinetics. More recently, a pilot study was conducted on seven children with drug resistant epilepsy [44]. The patients received verapamil as add-on therapy to baseline AED. Three individuals with genetically determined Dravet syndrome showed a partial response to adjunctive verapamil; another patient with Dravet syndrome but no known mutation showed partial seizure control during 13 months followed by seizure worsening. Two subjects with structural epilepsy and one with Lennox-Gastaut displayed no improvement. Though the number of patients that took part in the study is very limited, the results are in line with the idea that some therapeutic interventions might be more effective in certain subgroups of non-responders. Later, a double-blind, randomized, single-centered trial (initial sample size = 22) showed mild benefits of verapamil in comparison to placebo as add-on therapy for refractory epilepsy for a subset of the participants [45]. Randomized multi-centered control trials and studies addressing the effect of selective inhibitors of P-gp with no intrinsic activity are still necessary to gain definitive clinical evidence for the transporter hypothesis. Regarding the association between genetic variants of ABC transporters and drug resistant epilepsy, studies outcomes are controversial or sometimes inconclusive; while former meta-analysis failed to establish and association between ABCB1 variants and refractory epilepsy [46], subgroup analysis in more recent ones have suggested associations in Asian and Caucasian subjects [47][48][49], thus contributing to the validity of the hypothesis. The main argument against the transporter hypothesis seems to be that not all AEDs are in fact Pgp substrates. Although apparently conflictive evidence exists regarding which AEDs are substrates and which are not [50,51], results are highly dependent on the experimental setting, including type of assay (in vivo, ex vivo or in vitro, human versus animal models, concentration equilibrium transport assay or non-equilibrium conditions). Still, it seems safe to say that at least some AEDs are unlike Pgp substrates. Some considerations should be taken into account to reach a conclusion regarding the assigned category (substrate or non-substrate). Possible inter-species variability in substrate specificity should not be excluded. In relation to in vitro permeability assays, bi-directional transport assays in presence and absence of a selective Pgp inhibitor might lack sensitivity since directional transport might be masked by the contribution of passive diffusion. The magnitude of this effect depends on the substrate assayed concentration/s, the transporter expression levels in the cell culture, the affinity between the drug and the transporter and the physicochemical features of the test drug (e.g. permeability), among other factors [52]. Starting the assay with identical concentrations of drug on both sides of the cell monolayer (concentration equilibrium transport assay, CETA) removes the concentration gradient, eliminating net diffusion and enhancing the assay sensitivity [53,54]. 
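The sensitivity argument above is easier to follow with the quantities that bidirectional Transwell assays actually report: the apparent permeability in each direction and their ratio. A minimal sketch with hypothetical numbers (not data from the cited studies) is:

def apparent_permeability(dq_dt_ng_per_s, area_cm2, c0_ng_per_ml):
    # Papp (cm/s) = (dQ/dt) / (A * C0); 1 mL = 1 cm^3, so ng/mL equals ng/cm^3
    return dq_dt_ng_per_s / (area_cm2 * c0_ng_per_ml)

def efflux_ratio(papp_basolateral_to_apical, papp_apical_to_basolateral):
    # Ratios well above ~2 are commonly read as evidence of active efflux
    return papp_basolateral_to_apical / papp_apical_to_basolateral

# Hypothetical example: receiver-side appearance rates for the two directions
p_ab = apparent_permeability(0.006, 1.12, 500.0)   # apical -> basolateral
p_ba = apparent_permeability(0.015, 1.12, 500.0)   # basolateral -> apical
print(f"Papp A->B = {p_ab:.2e} cm/s, Papp B->A = {p_ba:.2e} cm/s, "
      f"efflux ratio = {efflux_ratio(p_ba, p_ab):.1f}")

In a CETA set-up the same drug concentration is loaded on both sides, so any asymmetry that builds up over time can be attributed to active transport rather than to passive diffusion down a concentration gradient.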
Even if it has been demonstrated in appropriate models that some available AEDs are not Pgp substrates, does this undermine the transporter hypothesis as explanation drug resistance in epilepsy? Not necessarily. First, Pgp is one among many other efflux transporters possibly involved in refractory epilepsy. While most of the studies to determine the directional transport of AEDs have focused on Pgp, some of the antiepileptic agents are recognized and translocated by other members of the ABC superfamiliy. For instance, the role of ABCG2 in the drug resistance phenomena to AEDs might have been overlooked: while previous work seemed to suggest that several AEDs were not recognized by ABCG2 [49], more recent studies using double knock-out Mdr1a/1b(−/−)/Bcrp(−/−) mice and the CETA model suggest otherwise [55,56]. It is also relevant to note that proteomics studies have shown ABCG2 as the transporter with highest basal expression levels at the BBB of healthy subjects [57,58], which underlines the convenience of assessing recognition by other ABC transporters apart from Pgp when designing novel AEDs. Furthermore, due to the partial overlapping of the substrate specificity of different members of the superfamiliy (which together with reported co-expression and co-localization patterns points to a cooperative role in the disposition of common substrates) [59][60][61], the role of a certain ABC transporter might be concealed due to the function of others, requiring complex models to study the phenomena. The difficulties to quantify the levels of expression of a given transporter in different regions of the brain of an epileptic patient who has not been subjected to a surgical intervention/resection, and the uncertainties regarding the ability of experimental models to reflect the absolute and relative expression levels of the different ABC efflux transporters at the epileptic foci and the BBB (expression levels which might well be highly patientdependent and highly dynamic) contribute to the difficulty of assessing unequivocally the influence of a given transporter in the regional AED bioavailability in the brain. Apart from the need to contemplate the separate and concerted contributions of different ABC transporters to the efflux of AEDs from CNS, the current definition of refractory epilepsy itself suggest that the transporter hypothesis may hold even if known AEDs are recognized by ABC transporters. Since the definition indicates that a patient should be considered unresponsive after failure of two well tolerated and appropriately chosen and used AED trials, the key to the preceding reasoning lies in what is considered an appropriate drug choice. The definition of drug resistant epilepsy weakens the transporter hypothesis if and only if one of the two appropriate therapeutic interventions was in fact a non-substrate for ABC transporters. Presently, in absence of definitive clinical proof of the transporter hypothesis, it is not standard protocol to try at least one AED not recognized by ABC transporters; thus, to the moment the quality of substrate or non-substrate is not related to the appropriateness. 
If the transporter hypothesis were validated in at least a subgroup of the unresponsive patients, then a method for patient selection capable of identifying patients who may benefit from therapeutic strategies targeting efflux transport would be necessary; furthermore, patient selection should also be considered when designing clinical trials to study the clinical relevance of transporter-associated resistance [62], excluding other sources of drug resistance as possible confounders.

Possible Therapeutic Solutions to Transporter-mediated Refractory Epilepsy

There are a number of possible therapeutic solutions that could be, and are being, explored in line with the transporter hypothesis. Inhibition of ABC transporters by adding on transporter inhibitors has already been proposed as a possible therapeutic solution to efflux-mediated drug-resistant cancer. However, clinical trials to support this approach have so far been disappointing [16, 18, 62 and refs therein] owing to severe safety issues. The reader should bear in mind the physiological role of ABC transporters as a general detoxification mechanism and their involvement in the traffic of endogenous substrates, which discourages the use of add-on inhibitors in the context of long-term drug treatments (such as those used in epilepsy). The potential effects of such inhibitors on the pharmacokinetics of other drugs should also be considered in a polymedication scenario, owing to the high probability of adverse drug interactions. The connection between ABC transporter dysfunction and neurodegenerative diseases such as Parkinson's and Alzheimer's diseases can be quoted as an example of the potential risk posed by chronic inhibition of these efflux systems [63][64][65]. Moderate or weak inhibitors of ABC transporters thus emerge as possible solutions, as do therapeutic agents directed at the signaling cascades that regulate efflux transporter expression [62]. Such an option might prove useful to prevent or ameliorate drug- or disease-induced up-regulation of transporter function, e.g. through activation of nuclear receptors or through pro-inflammatory signals, respectively. An extensive review of such approaches can be found in the excellent articles by Potschka [62,66].

Secondly, one may mention the use of a Trojan horse stratagem to deliver therapeutic levels of ABC transporter substrates to the epileptic focus, avoiding recognition by the efflux pumps. Particulate delivery systems (mainly, pharmaceutical nanocarriers) can be included in this category [67,68]. Interestingly, this approach allows encapsulating efflux-pump-substrate AEDs of clinical use within advanced delivery systems. Thus, the transfer of such technologies to clinical practice is expected to be more straightforward than for the other alternatives described in this section. Provided that safe delivery vectors are used, and since the safety and efficacy of the pharmaceutical active ingredient have already been demonstrated, this strategy implies better chances of surviving clinical trials. In line with the preceding approach, we can mention the design of prodrugs of AEDs either lacking affinity for ABC transporters or displaying affinity for influx transporters that could compensate for the influence of the efflux pumps on BBB permeability. Though a diversity of prodrugs of approved AEDs have been conceived, the interaction of most of them with efflux pumps has not been assessed yet [66,68].
The design of novel AEDs that are not recognized by ABC transporters, and the early screening during drug development to discard substrates (thus considering efflux pumps as anti-targets), constitute interesting but underexplored alternative solutions. A scheme illustrating the different strategies overviewed in this subsection is presented in (Fig. 2). The following section will overview recent studies focused on this last approach. Screening during CNS drug development guarantees that high-affinity substrates of BBB efflux transporters are not selected as lead compounds.

IN SILICO SCREENING TO FIND THERAPEUTIC SOLUTIONS TO DRUG RESISTANT EPILEPSY

The Medicinal Chemistry group from the National University of La Plata has implemented a cascade protocol integrating in silico (ligand- and structure-based), in vitro and in vivo models to detect potential new treatments for efflux transporter-associated refractory epilepsy. The protocol starts with high-throughput, cost-efficient in silico screening tools and progressively advances to more expensive models with lower throughput, ending in preclinical models of drug-resistant epilepsy that will not be discussed in detail here. A schematic flow diagram of the protocol is displayed in (Fig. 3). Each step is covered separately under the corresponding subheading below.

Fig. (2). Summary of the therapeutic strategies overviewed in section 1.3.
Fig. (3). Cascade protocol to screen for potential therapeutics for drug-resistant epilepsy related to P-gp upregulation.

In silico Models to Identify ABC Transporter Substrates. Applications to AED Screening

A large number of computational models to detect potential substrates of ABC transporters have been reported, from pharmacophores to machine learning algorithms [69-71 and refs therein]. Many of those models have been derived from structurally homologous series, thus being capable of making accurate predictions in their local chemical space but lacking general applicability. Others lack experimental validation of their predictive ability. While initially attention was drawn to P-gp and, particularly, the prediction of inhibitory activity, more recently, as the relevance of other efflux pumps is recognized and the inhibitors entering clinical trials fail, the focus has been gradually shifting towards other members of the ABC superfamily and the prediction of transport [71,72]. Most of these models show accuracy around 80%, which reflects the challenge posed by the multiplicity of binding mechanisms, the related polyspecificity and the high experimental variability of available data; such a demanding modeling problem has led some authors to propose ensemble learning as a potential solution [70,73,74]. Reports of applications of such models to the selection of novel AEDs as potential treatments for drug-resistant epilepsy are, to date, scarce. In this regard, back in 2011 Di Ianni and colleagues reported a 3-model ensemble of 2D QSAR classifiers capable of differentiating P-gp substrates from non-substrates [75]. For this purpose, a 250-compound dataset including 104 P-gp substrates and 146 non-substrates was compiled from the literature. Random sampling was applied to split this dataset into a 125-compound training set and a 125-compound test set. Linear discriminant analysis was conducted to select conformation-independent models from different subsets of Dragon descriptors. Later, simple data fusion schemes were used to combine the individual models in order to optimize specificity.
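As a rough illustration of this kind of ensemble workflow, the sketch below trains three linear discriminant classifiers on different descriptor subsets and fuses their scores. It is a minimal sketch, not the published model: the random descriptor matrix, labels, subset boundaries and averaging fusion rule are placeholders, whereas the original work used Dragon descriptors computed for a literature-compiled 250-compound set and selected the fusion scheme to optimize specificity.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 40))        # placeholder descriptor matrix (compounds x descriptors)
y = rng.integers(0, 2, size=250)      # placeholder labels: 1 = substrate, 0 = non-substrate
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0, stratify=y)

# Each individual classifier sees a different descriptor subset
subsets = [slice(0, 15), slice(10, 28), slice(22, 40)]
models = [LinearDiscriminantAnalysis().fit(X_tr[:, s], y_tr) for s in subsets]

# Simple data fusion: average the predicted probabilities, then apply a score threshold
scores = np.mean([m.predict_proba(X_te[:, s])[:, 1] for m, s in zip(models, subsets)], axis=0)
y_pred = (scores >= 0.5).astype(int)  # the threshold can be shifted to favour specificity
print("test-set accuracy:", (y_pred == y_te).mean())  # meaningless here (random data); ~90% reported for the real model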
Receiver Operating Characteristic curves were applied to compare model performance and to select the best data fusion scheme. The ensemble showed 90% accuracy in the classification of the substrates, though of course a different balance between sensitivity and specificity could easily be achieved by selecting a different score threshold. Cascade application of the preceding ensemble, together with structure-based approaches and a ligand-based classificatory model capable of identifying drugs with anticonvulsant effects in the MES test (see the next section for details on the molecular docking), in a virtual screening campaign of the ZINC and DrugBank databases led to the identification of anticonvulsant compounds predicted as non-substrates for P-gp [76]. These in silico filters were also applied to an in-house library of anticonvulsant compounds previously reported by the same group, including the antimicrobial propylparaben (compound VII) and the non-nutritive sweetener acesulfame potassium (compound VIII) [77,78]. The anticonvulsants discovered through this protocol are displayed in (Fig. 4). The same group has recently reported a linear model ensemble capable of identifying substrates for wild-type human ABCG2 [74,79], which might well be integrated with the previously described in silico filters.

Structure-based Approaches

Structure-based approaches are valuable tools in drug design, since they provide atomic details of the interactions between the target and ligands. In particular, docking simulations propose possible binding geometries of the complexes and quantify, in some way, their binding energies through their scoring functions. This information allows the structural optimization of ligands to improve their interactions with the targets (or to avoid them in antitargets). Additionally, docking scores provide a numerical variable to discriminate between binders and non-binders (of a defined target) in structure-based virtual screening campaigns.

Fig. (4). Anticonvulsant drug candidates predicted as non-substrates of P-gp by joint application of structure- and ligand-based approaches.

Regarding P-gp, structure-based methods must deal with the lack of experimental information about the 3D architecture of the human protein. Therefore, the target is usually modeled by comparative analysis (homology modeling techniques) using templates of mouse P-gp, which shares more than 80% sequence identity with human P-gp [79]. The glycoprotein is composed of two sets of transmembrane segments (TMs 1-3, 6, 10, 11 and TMs 4, 5, 7-9, 12), which generate an internal cavity that contains multiple binding sites (Fig. 5) [80 and refs therein]. Experimental data have proved the capacity of P-gp to interact at the same time with more than one substrate, and the existence of new potential sites of interaction with small molecules on the exterior of the P-gp cavity [80]. This information is employed in docking protocols to find P-gp substrates/inhibitors. These have, in general, less accuracy than ligand-based approaches [81,75,82]. However, there are successful examples of the identification of P-gp substrates/inhibitors by means of docking simulations, and some of them provide information about the conformations of the binders in the active site [79 and refs therein]. To exploit the potential of both ligand- and target-based methods, a docking-based filter was coupled to a ligand-based search for anticonvulsant compounds with no P-gp interactions [76].
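Where a continuous model score has to be converted into a substrate/non-substrate call, the ROC analysis mentioned above also provides a way to pick the operating threshold. The short sketch below uses hypothetical scores and labels (not the published dataset) and selects a cut-off with scikit-learn; shifting the cut-off trades sensitivity for specificity, exactly as noted in the text.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1])                      # hypothetical experimental labels
scores = np.array([0.9, 0.8, 0.45, 0.7, 0.3, 0.55, 0.85, 0.2, 0.6, 0.35, 0.5, 0.75])  # hypothetical ensemble scores

fpr, tpr, thresholds = roc_curve(y_true, scores)
print("AUC:", roc_auc_score(y_true, scores))

# One common choice: maximise Youden's J (sensitivity + specificity - 1);
# a stricter threshold could instead be chosen when rejecting substrates is the priority.
best_threshold = thresholds[np.argmax(tpr - fpr)]
print("selected threshold:", best_threshold)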
As mentioned before, a ligand-based model ensemble was initially applied to the ZINC and DrugBank databases, with the aim of identifying new anticonvulsants predicted as non-substrates of P-gp. The best 380 candidates were then submitted to docking simulations, and the compounds presumed to interact strongly with the glycoprotein were discarded. The candidates were docked into a homology model of human P-gp built on the mouse P-gp structure as template (PDB code: 3G61) [80]. Several scoring functions and conditions were analyzed to select the best model. The ability of the simulations to reproduce experimental data, as well as to discriminate known P-gp binders from non-binders, was tested. A flexible receptor model with the scoring function of AutoDock Vina was able to reproduce experimental conformations of mouse P-gp complexes and to predict 85% of the binders and 77% of the non-binders [80]. Fig. (5) shows the docking solution for the binding of saquinavir to the P-gp active site, as an example of the characteristic interactions predicted for known binders. The active sites within the P-gp cavity are mostly hydrophobic and are composed of residues with lipophilic or aromatic side chains. Of the 380 candidates selected by ligand-based screening, 275 structures were considered non-binders by docking, evidencing a high level of consensus between the two protocols. As mentioned before, some of the anticonvulsants identified with this sequential screening are shown in Fig. (4).

IN VITRO EXPERIMENTAL VALIDATION OF THE PREDICTIONS

Parent and MDR1-transfected Madin-Darby canine kidney epithelial cells were obtained from the Netherlands Cancer Institute (Amsterdam, The Netherlands). Cells were grown in 25-cm² culture flasks using DMEM with 10% fetal bovine serum, 1% L-glutamine, 1% non-essential amino acids, penicillin and streptomycin at 37°C in a 5% CO₂ atmosphere. Cells were split twice a week at 70 to 80% confluence at a ratio of 1:20 or 1:30 using a Trypsin-EDTA solution (0.25%). All transport assays were done with cells from passages 19 to 43. Cells were kept at 37°C in 5% CO₂. The cells were seeded in 6-well Costar Snapwell plates with polycarbonate membrane inserts at a density of 50,000 cells per insert (1.12 cm²) and grown for 4 days in culture medium. The medium was replaced every day. The apical medium volume was 0.5 ml, and the basal volume was 2 ml. The integrity of the cell monolayers was determined by measuring the trans-epithelial electrical resistance (TEER, Ω·cm²) using an epithelial voltohmmeter (Millicell-ERS; Millipore Corporation). In addition, integrity was checked using atenolol (ATOL). The apparent permeability coefficients (Papp) of atenolol across MDCKII-MDR1 cell monolayers were typically 1-5×10⁻⁷ cm/s. The expression of P-gp was checked by Western blot analysis and by transport assays with trimethoprim, a substrate of P-gp [83]. On the day of the experiment, culture medium was removed and cells were washed three times with transport medium (HBSS, Hanks' balanced salt solution, pH 7.4, Gibco-BRL). The filter inserts containing the cell monolayers were placed in an Ussing chamber and were maintained at 37°C under constant gassing with carbogen. Test compounds were added to the donor side (4 ml for the apical and basal chambers). At 20, 40, 60, 80, 100 and 120 min, samples (400 µl) were taken from the receiver compartment, followed by the addition of 400 µl of transport medium.
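From receiver-compartment samples of this kind, the apparent permeability coefficient is typically computed as Papp = (dQ/dt)/(A·C0). The sketch below illustrates that calculation; the measured concentrations and the donor concentration are hypothetical values, while the 4 ml chamber volume, the 400 µl sample/replacement volume and the 1.12 cm² insert area are taken from the protocol described above.

import numpy as np

times_min = np.array([20, 40, 60, 80, 100, 120])
conc_uM   = np.array([0.8, 1.6, 2.5, 3.3, 4.1, 5.0])   # hypothetical receiver concentrations; 1 uM = 1 nmol/ml
v_receiver_ml, v_sample_ml = 4.0, 0.4                   # Ussing-chamber volume per side and sampled volume
area_cm2, c0_uM = 1.12, 100.0                           # insert area; hypothetical donor concentration

# Cumulative amount in the receiver side (nmol), adding back what earlier samples removed
removed_nmol = np.concatenate(([0.0], np.cumsum(conc_uM[:-1] * v_sample_ml)))
amount_nmol = conc_uM * v_receiver_ml + removed_nmol

slope_nmol_per_s = np.polyfit(times_min * 60.0, amount_nmol, 1)[0]   # dQ/dt by linear regression
papp_cm_per_s = slope_nmol_per_s / (area_cm2 * c0_uM)                # C0 in nmol/ml = nmol/cm^3
print(f"Papp = {papp_cm_per_s:.2e} cm/s")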
For the inhibition experiments, cell monolayers were incubated with amiodarone hydrochloride (50 µM) [84] for 1 h in both the apical and basolateral chambers before adding the test compound. For P-gp substrates, absorption should be decreased and secretion increased in cell lines over-expressing P-gp [85]; in the presence of a specific inhibitor of P-gp, the efflux ratio (ER) should be approximately 1. The samples were quantified with an HPLC system: a Dionex Ultimate 3000 UHPLC (Thermo Scientific, Sunnyvale, CA) configured with a dual-gradient ternary pump (DGP-3000) and a DAD-3000 diode array detector. Student's t-test for two samples assuming equal variances was conducted for statistical comparisons. The values obtained with TMP indicated that the ER value is 2.5 times higher in the absence of the inhibitor than in its presence, demonstrating expression of P-gp. None of the four drugs evaluated showed significant differences when calculating the ER in the presence and absence of amiodarone (Fig. 6), indicating that under the experimental conditions studied P-gp efflux does not influence the transport of these drugs, which seems consistent with the predictions of the models. Nevertheless, it should be noted that compound VIII presents an ER significantly different from 1, suggesting that it might be recognized by efflux transporters other than P-gp, which illustrates the importance of complementing P-gp models with models to identify efflux by other ABC transporters.

CONCLUSIONS

Though it is unlikely that a single mechanistic hypothesis will account for a phenomenon as complex as multidrug resistance, the body of preclinical, clinical and pharmacogenomic evidence suggests a role for ABC transporters in refractory epilepsy. Most studies so far have focused on the potential role of P-gp in drug-resistant epilepsy; where initial studies have disregarded the effect of other ABC transporters in epilepsy, they should be re-examined in light of recent advances in the field, including more recent and complex in vitro and in vivo models (e.g. the CETA assay and double and triple knock-out animal models, which may help to explore, respectively, the influence of passive diffusion on drug permeability and the cooperative function of ABC pumps). Although some existing AEDs are not likely to be transported by ABC carriers, current clinical criteria to diagnose refractory epilepsy do not actually exclude the possibility that a refractory patient could respond to available medications: in the absence of definitive clinical validation of the transporter hypothesis, standard management of epilepsy does not consider whether previously administered (and unsuccessful) drugs are or are not substrates of ABC transporters. Nor does it contemplate assessment of ABC transporter function in a particular patient. In relation to novel therapeutic approaches to drug-resistant epilepsy, the use of nano-pharmaceutical delivery systems and the design of new AEDs that are not recognized by ABC transporters (an antitarget approach) represent safer options compared with co-administration of ABC transporter inhibitors. These are still, however, underexplored alternatives for the management of refractory epilepsy. We have discussed a screening protocol based on the antitarget approach, which integrates in silico, in vitro and in vivo tools to select AED candidates oriented to the therapy of drug-resistant epilepsy linked to upregulation of ABC transporters.
CONFLICT OF INTEREST

The author(s) confirm that this article content has no conflict of interest.
Meeting Personal Health Care Needs in Primary Care: A Response From the Athletic Training Profession

Context: Review of the origins, history, and attributes of primary care demonstrates continued challenges for the future of primary care and care delivery. The profession of athletic training may benefit from a critical self-review to examine its readiness to assist in reinventing primary care.
Objective: To explore parity between primary care attributes and athletic training practice and promote a timely and relevant discussion of primary care and public health integration native to athletic training practice, competency-based education with an emphasis on milestones, and the development of clinical specialists to prepare a well-trained workforce.
Background: General practitioners developed educational reforms through graduate medical education that resulted in primary care as it is known today. Graduate medical education has refined its assessment of students to include milestones for the purpose of describing the progression of clinical competence with identifiable behaviors. The development of future clinical specialists in primary care will also involve competence in public health.
Recommendation(s): Practicing clinicians and educators should begin to critically explore the congruencies between the primary care attributes and athletic training practice. It is important to conceptualize traditional models of care within the frameworks of primary care and public health, given that athletic training practice routinely engages patients at personal, community, and environmental levels. The athletic training skill mix should be purposefully presented within interprofessional health care teams in primary care so that stakeholders can appropriately integrate athletic trainers (ATs) at the point of first contact. It is plausible that continued structural changes in the traditional practice settings will be required to facilitate integration of ATs into primary care.
Conclusion(s): The impact of ATs in ambulatory settings and primary care possesses a foundation in the current literature. The ATs are uniquely suited to create a symbiotic pattern of care integrating both primary care and public health for improved outcomes.

INTRODUCTION

Primary Care and Athletic Training: Shared Paths in Education and Professional Evolution

The profession of medicine and the profession of athletic training traversed similar terrains in their collective pursuits for both the education of students and the care of patients. Initial materials for the first-ever certification exam in athletic training were drawn from disciplines such as occupational therapy and nursing, with only a few questions developed that were specifically related to athletic training. 1 The curriculum available to aspiring athletic trainers (ATs) was that which was available within schools of physical education and health, and skills and behaviors were picked out that might "match" the behaviors and skills that the AT was expected to apply practically. 1 So too, the profession of medicine progressed from would-be physicians first serving as apprentices to formal medical schools with irregular curricular structures. 2 The Flexner report 2 was the impetus for the current system of medical education that we are familiar with today: four years of medical school followed by postgraduate training.
As the profession of medicine continued to evolve, scientific advancement outpaced the physician's ability to successfully apply those advancements to patient care. 3 Science had outrun medical practice, and the growing number of physicians practicing in hospitals gradually began to produce opportunities for specialty practice. The profession of medicine was simply growing too rapidly to be mastered by a single physician. 3 As this shift toward specialization continued, those physicians practicing outside of the hospital setting were left without resources to advance care for patients, which resulted in a perception of poor care provided by general practitioners. Conversely, patients were growing more concerned that the increasing number of specialty physicians lacked the skills to treat them comprehensively as a whole person. 3,4 A proposed answer to salvaging the reputation of the general practitioner and ensuring whole-person care for patients was residency training for the general physician. John Millis would undertake this task of creating residency training for the newly named "primary physician." 3

The discussion of the origins of primary care is important for athletic training education and practice because it draws an intentional historical parallel between primary care medicine and the beginnings of athletic training. The Certification Committee and the Professional Education Committee worked diligently to promote and create standards for the first-ever athletic training program, but athletic training programs were rejected from schools of health due to a cultural identity defined within athletics and sport science. 1 As athletic training education and practice evolved, apprenticeship programs evolved into curriculum programs certified by the Board of Certification, Inc. Advancing skills and knowledge within athletic training that emerged from the point of care have now propelled entry-level education to the graduate level. 5 The athletic training profession now finds itself with a task that is similar to that undertaken by John Millis: producing didactic and clinical experience beyond entry level for the preservation and vitality of the profession. 3,6 As with the development of residencies in primary care, these are driven by the needs of the patient population and paired with the AT's skill set. The purposeful comparison of the evolution of the primary care physician (PCP) with the history of athletic training validates our professional history as normative within health care, because medicine has previously traveled this path. A narrative review was constructed to further examine the parity between athletic training and primary care, because ATs are excellent candidates for moving team-based primary care forward in the future of the American health care system. The AT is routinely found at that point of first contact, and characteristics of ATs' daily practice find them executing the attributes of primary care. Importing the skills of the AT into the point of first contact requires a discussion of the attributes of primary care and a working definition to establish what it means. 7,8 International consensus 9 found that hospital-based care did not translate well into environments where preventable diseases were treated by non-health care workers.
In 1978, the Institute of Medicine (IOM) published a report entitled "A Manpower Policy for Primary Health Care: Report of a Study." 7 This report advanced the premise that primary care is a services-based branch of medicine broken down into 5 attributes. According to the 1978 IOM report, "The five attributes essential to the practice of good primary care are accessibility, comprehensiveness, coordination, continuity, and accountability." 7(p16) Accessibility refers to the responsibility of the provider team to assist the patient or the potential patient to overcome temporal, spatial, economic, and psychologic barriers to health care. 7 Reasonably fast responses to requests for service were also included within the accessibility attribute. 7 Yoon et al 10 found that a 10-point increase in timely access to primary care decreased emergency department visits for nonemergent conditions by 7%. Timely access can result in cost savings to the patient because patients with serious illness are less common in primary care. 11 Comprehensiveness of care refers to the willingness of the primary care team to handle the great majority of health problems arising in the population that it services. It is important to note that comprehensiveness of care can be limited to a specific age group or sex, but primary care providers should be able to handle the majority of health concerns that arise within that group. 7 Coordination of care includes arranging contact and referral between the patient and the specialist, seeking the opinion of specialists, explaining diagnosis and treatment, and ensuring that the plan of care is congruent with the patient's economic situation and personal desires. 7 Continuity of care generally involves having the same provider care for a patient from one visit to another with transfer of information that is consistent from one provider to another. 11 Continuous care at its best should also be longitudinal, whereby the same source of care is used over time. 11 Accountability refers to the continual process of collection and documentation of practice outcomes with continual efforts by all members of the primary care team to improve the services provided both in number and quality. 7 The IOM again revisited the topic of primary care in a 1996 report entitled "Primary Care: America's Health in a New Era," 8 in which considerations were made regarding how the community may interface with primary care. As such, the 1996 IOM report defines primary care as "the provision of integrated, accessible health care services by clinicians who are accountable for addressing a large majority of personal health care needs, developing a sustained partnership with patients, and practicing in the context of family and community." 8(p31) This definition was reaffirmed by the IOM 12 as recently as 2012 and was cited in textbooks on primary care as recently as 2015, with no other definitions identified upon review of the literature. 13 The outcomes that define the success of primary care are quality of care, efficiency of care, and equity of care. 14 If athletic training is to contribute to those care outcomes, professionals in the field must continue to educate and train both students and clinicians for that purpose using the full strength of scope of practice.

COMPETENCIES, SUBCOMPETENCIES, AND MILESTONES

The end goal of medical education is to produce clinicians who can go and care for the health needs of the patient in the 21st century.
15 Expressions such as "graded patient responsibility," "increased clinical competence," and "integration of basic concepts" were common at the writing of the Millis report, but they lacked an actionable structure for measuring competence in a sequential manner. 4 Competency-based education has become the most recent focus of medical education to ensure that the graduate medical student possesses the requisite knowledge and skill to practice independently for the overall benefit of the patient. 15 The Accreditation Council for Graduate Medical Education (ACGME) has broken down the content of all medical specialty education into 6 competencies: patient care, medical knowledge, systems-based practice, practice-based learning and improvement, professionalism, and interpersonal and communication skills. Reviews of the components and structure of competency-based education have also been applied to medical residents in training. [16][17][18][19][20] Those competencies have been further broken down into subcompetencies, with milestones as a measure of progress and content mastery. 3,15,17,19 Competency-based education has more formally been measured with the Dreyfus model, which proposes that a learner will pass through 5 stages of learning from novice to expert. 20 The Dreyfus model has been preferred for describing the progression of a novice learner to that of an expert because the performance of the skill and the demonstration of knowledge are both contained in each stage. 19 A modified Dreyfus model containing an "absolute beginner" stage has also been described to represent a critical deficiency in the learner, as demonstrated in the internal medicine milestones published by the ACGME. 19 A modified Dreyfus model demonstrating the relationship between knowledge and behavior in mastery of a given subcompetency is represented in Table 1. A milestone further describes and focuses the expected behaviors or outcomes of a resident who progresses along the continuum of novice to expert. 18 Friedman et al 18 found that shifting to a milestone model in the evaluation of residents resulted in more discriminating analysis of skill acquisition over time during the course of a 3-year training program. Because a milestone specifically focuses on the inherent behavior within an acquired skill, it is possible that evaluators are more easily able to determine the level of a learner through those criteria. 18 A representation of the milestone method of evaluation, with its foundation within the Dreyfus model, is pictured in Figure 1. To date, milestone projects have been completed in all 28 specialty areas as listed by the ACGME. It is fully expected that specialties will continue to move forward with milestone methods of evaluation of residents, because they have been required to document the progress of residents. 17 Family medicine and internal medicine are the focus herein for the sake of this review of the ACGME milestones. Internal medicine was one of the 6 areas of concentration for the primary physician as described in Millis' original report in 1966; family medicine has maintained itself as the specialty to advocate for and promote the importance and characteristics of primary care. 3,21 A key characteristic of the internal medicine milestone project involves interprofessional collaboration with other specialties. The internal medicine resident is given a high level of independence in both interprofessional collaboration and consulting for various problems.
19 In contrast, the family medicine milestones note the importance of disease prevention and health promotion as well as the development and sustainment of partnerships at both the patient and community levels. 17 Appropriate discussion of integrating the patient and community levels of health leads directly into an informed discussion of the integration of primary care and public health. 12

INTEGRATION OF PRIMARY CARE AND PUBLIC HEALTH

Public health has been defined as what we do as a society to ensure the conditions in which everyone can be healthy. 12 The main metric for improving the health outcomes of the population has been identified as the health indicator. 22 Healthy People 2020 was a campaign 22 based on the recommendations of the Federal Interagency Workgroup, and it is this report that identifies those health indicators. An updated list of leading health indicators for 2030 is soon to be released, and recent objectives for identification of a new set of health indicators have been published by The National Academies Press. 23 Whereas it is true that previous reports 22 have focused on biological markers of health and disease, health behaviors, and health outcomes, the upcoming 2030 leading health indicators will focus more on environmental factors and their impact on overall well-being. 23 Examples of proposed indicators that could be particularly affected by athletic training include lowering the heat vulnerability index and reduction of hospital discharges for ambulatory care sensitive conditions. 23 Recent reports on the evolving nature of public health have called for the use of treatment approaches that extend outside the traditional clinical setting and into the community. 7,19 This concept has been formalized into a call for the integration of primary care and public health. 12 In its 2012 report, 12 the IOM recognized that the nation was ill-equipped to meet the needs of the patient in terms of health promotion and prevention services despite an excellent biomedical and specialty medical infrastructure. Primary care had begun to develop a strategy to deal with chronic health concerns in patients via the chronic care model (CCM) developed by Wagner. 12, 24 The CCM encompassed 6 different tools designed to carry the care received by the patient out into the community. Those elements are communities and policies, health care organization, self-management support, delivery-system design, decision support, and clinical information systems. 12 Patients being treated for chronic conditions often receive treatment that requires components of personal effort, time, and resources that must be allocated to improve health outcomes. This results in work for the patient that may create a treatment burden when personal resources and ability are outpaced by the demands of treatment. 24 Previous applications of minimally disruptive medicine have attempted to ease this burden with regular home visits, offering transportation to appointments, and similar services. 24 A recent systematic review and thematic analysis 24 of the application of the CCM to patients with multi-morbidity found that the CCM may not adequately address the practical needs of patients with multi-morbidity or ease the treatment workload experienced by these patients. 24 As a result, the patient must choose between necessary life roles and tasks and pursuit of appropriate care in a timely manner, a decision that may negatively affect health.
24 It is for this reason that primary care and public health must integrate: to assist the patient in minimization of treatment burden to promote better health. The primary care provider can only make better recommendations at an individual level with the input of the public health workforce, and many efforts have been proposed to link primary care and public health via collaboration and training. 12

P4 SYSTEMS MEDICINE: A PATH TOWARD INTEGRATION?

If ideal integration includes the goal of expanding the care of the patient outside of the traditional encounter and into the patient's environment, then a requisite level of knowledge and self-determination on the part of the patient about what constitutes health is necessary. 25 A framework for understanding how that level of education, awareness, and participation could be made manifest is P4 systems medicine (P4SM). The P4 stands for medical care that is predictive, preventive, personalized, and participatory. 26 The P4SM takes into account genetic, personal, and environmental factors with the aid of measurable patient data to define the optimal state of health for each person. 26,27 Predictive medicine involves the potential use of genetic markers and specific tools to estimate the patient's response to treatment or injury. 26 Preventive medicine has been described as an approach to prevent a problem that has been predefined via the individual collection and analysis of a patient's family, personal, and genetic data. 26 Personalized medicine involves use of all available patient information (genetic data, personal previous and current history, and family history) to formulate treatment plans for presenting clinical problems. It is important to note that personalized care assumes the collection of a varied yet comprehensive panel of patient information to make those decisions. 26 Participatory medicine involves patients by turning them into educated consumers of information regarding their health, condition, and treatment and giving them primary responsibility for carrying out the plan of care. 26 Exercise prescription has been the obvious, cost-effective means of treatment and patient engagement to promote participatory medicine, with the addition of proper nutrition and healthy sleep habits. 26, 28 The P4SM has been viewed as fundamentally changing the practice of primary care by honing a precision approach to each patient to minimize error, harm, and waste; it also acknowledges that a holistic approach to health cannot be fully realized without healthy social environments and behaviors. This systems approach to health seeks to view the patient as an integrated whole with a bidirectional relationship between themselves and the environment. 27

OPPORTUNITY FOR ATHLETIC TRAINING

Athletic training has begun to use frameworks for behavior change that function at both the personal and environmental levels. One such example is the socioecological framework. 29 This framework attempts to address health behavior change by directing educational interventions at the intrapersonal, interpersonal, environmental, and society and policy levels. The most notable example of this is within concussion education. 29 The interpersonal and intrapersonal levels can be easily executed within the realm of primary care, whereas the environmental and policy dimensions fall within the purview of public health interventions.
ATs apply their scope of practice within the public health arena with the production and implementation of position statements and other key publications. In order to expand the reach of our expertise with increased relevance for all Americans, the role of the AT as an agent of behavior change needs to be explored. 24,28,30 Although exercise prescription has been identified as an obvious tool for affecting the health of the population, the recognition that those tools can be applied to healthy individuals and, with slight modification, to those with chronic conditions such as type 2 diabetes and cardiovascular disease may not be fully appreciated by many ATs. 28,30 Craddock et al 30 provided a review of various health behavior-change interventions that could possibly be used to increase compliance with recommended physical activity guidelines. The health behavior model, theory of planned behavior, and others were reviewed with the overall intention of applying them to patient encounters to remove barriers and facilitate habits of regular exercise. 30 The AT's experience in coordinating care and modifying activity may also be useful in decreasing the overall possibility of treatment burden for patients, to assist in diminishing the stress associated with balancing self-care for chronic illness and basic life tasks. 24 As the AT works to learn and execute this role, a quality improvement (QI) approach to addressing health needs will be necessary. 31 The QI approach involves identification of a problem or gap in quality of care, a specific plan to address the problem or quality gap, and evaluation of the results of the plan to determine directions for future change; this has also been referred to as the plan, do, study, act cycle. 32 Shanley et al 33 used a QI framework in a cohort of approximately 67 000 student-athletes. The informed use of patient data resulted in prevention and strengthening programs to prevent muscular injury and shoulder pathology in pitchers and allowed them to make recommendations for safe return to activity after anterior cruciate ligament reconstruction. 33 The programs based on a QI initiative also resulted in a $250 000 reduction in secondary insurance claim costs. 33 The ideal use of population health data should result in informed patients who have the ability to play a proactive role in their own health, and it should provide specific, evidence-based information for a specific pathology or concern. In any setting, the presence of an AT who is involved in a continuous quality improvement process within a population creates immediate access to health care. The goal of health care is to improve health outcomes for the patient and the population. A patient who has experienced an improved health outcome as a part of a QI initiative has also experienced a narrowing of a personal- or population-based health disparity, because access to care is being filtered through external criteria independent of personal barriers to care or insurance coverage. An illustration of the interplay between health care access and QI initiatives at the population level is pictured in Figure 2.

THE AT AS A PRIMARY CARE PROVIDER: BUILDING A CASE

An appropriate discussion of the AT's role in primary care should be formed after a thorough explanation of the following factors: (1) the potential impact of the current health care climate on athletic training practice and (2) the skill mix that the AT contributes to the primary care team.
Starfield 11 has advocated for a capacity-process approach to measuring how well primary care is practiced. The IOM 7,8 and Starfield 11,34 have discussed attributes of primary care and further elaborated on the capacity-process approach for measuring primary care. The 4 structural elements of primary care are access, the range of services provided, definition of the eligible population, and continuity. The ATs who conceptualize their practice within the construct of primary care must determine how their practice location will serve patients: when care will be available, which services will be offered, which patients are eligible to receive care, and how continuity of care will be maintained through documentation in an electronic health record. 11 The reframing of the AT's point of view to see daily interactions and tasks as primary care attributes is a paradigm shift, but it is not out of reach. Hajart [35][36][37] has commented regarding the implications that previous health care reforms have had on the practice of athletic training. Specifically, Hajart stated that ATs may be well positioned to succeed in an accountable care organization-type environment, where incentives are given for high-quality, low-cost care. 34 Starfield 38 reported that those nations that have highly developed systems of primary care typically rank high in cost containment when compared with those that do not. In addition, the United States was characterized 38 as having a poor orientation toward primary care as of 2004. More patients have been filtered into primary care clinics due to an increased emphasis on primary care and the continued growth of health maintenance organizations, further substantiating need but exacerbating a long-standing shortage. 36,39,40 As we are currently working in the midst of a shortage of 91 500 PCPs, with further estimates of 139 160 by the year 2030, creative use of health care resources will be required. 36,39 It has also been proposed that the AT could be very well suited to assist in addressing that shortfall. 40 For the AT to fulfill such a role in team-based primary care, skills in virtual consultation, extended hours, and a walk-in care model may be used to ensure expanded access and cost savings. 41 Perhaps most important, an increase of 1 PCP per 10 000 people resulted in better health outcomes. 42 Advancing the idea that the AT can serve as a primary care provider requires actual data showing that providers other than physicians are engaged in primary care. Although this may seem obvious, literature on skill mix and task shifting may provide insight that care processes within primary care are changing. 36,43,44 Skill mix has been conceptualized as the presence of health care providers of different disciplines within a practice setting. 43 Task shifting has been operationally defined as the surrendering of tasks usually performed by physicians to nonphysicians (traditionally nurses and physician assistants) with the expectation that those providers have the capacity to complete them. 43,44 Whereas task shifting has not been formally discussed within athletic training apart from the direct supervision of a physician, investigating the value and hiring patterns of ATs within ambulatory settings may provide a possible metric of an emerging skill mix within the profession. Frogner, Westerman, and DiPietro 40 conducted a nationwide survey of ATs employed in ambulatory care settings. Of those ATs surveyed, 60% practiced in multi-specialty practices.
40 Of those in multispecialty practices, 27% were described as working in primary care. It is interesting that the individuals most commonly served by ATs in ambulatory care settings were under the age of 18 years and over the age of 65 years. 40 Data regarding patients outside of those demographics were not disclosed. Because it has been established that an athletic training-related skill mix does exist within primary care, common themes between subcompetencies and athletic training practice domains will be explored. The patient care and medical knowledge competencies within graduate medical education in family practice reflect a large degree of similarity to 4 of the 5 athletic training practice domains. 17,45 Figure 3 illustrates some of these comparisons. The athletic training practice domains resemble the family medicine subcompetencies through similar language but also promote wellness and health promotion. 17,45 Whereas the language is broader in the family practice milestone document, the athletic training domains appear to represent more focused perspectives on the role of the primary care provider. The Practice Analysis, 7th edition, uses the term primary health care professionals 45 when describing the AT's role in the management of acute and emergency conditions. In addition, the musculoskeletal diagnosis and management skill set possessed by ATs further substantiates the need for that skill set within primary care, given that 1 of every 7 consultations in primary care is for a musculoskeletal condition. 46 Physicians supervising ATs within ambulatory care settings report being very well satisfied with the musculoskeletal skill set possessed by ATs. 40 It is interesting that physiotherapists in the United Kingdom were able to deliver independent musculoskeletal care within primary care after brief training regarding interventions for chronic health conditions. 47 Outcomes were good, with patients reporting increased function, decreased health care costs due to primary care visits, and appreciation of the increased time spent with personally tailored advice. 47 This may be a feasible model for ATs to adopt in a team-based primary care setting. Finally, the therapeutic intervention skills possessed by ATs may serve as a mechanism adding value to the primary care experience, increasing patient satisfaction, and lowering costs. The current trajectory and need within health care call for a team-based approach to primary care. This will involve a broad array of skills accompanied by careful and accountable task shifting from physicians to midlevel providers, including ATs. The presence of ATs at the point of first contact in many settings calls for a broadened perspective with measured progress by all clinicians. 40 As such, a milestone project for primary care within athletic training is currently under way. Milestones for the specialty of primary care are in development by the AT Milestones project team. 48 The intentional production of milestones within this area will socialize students and learning professionals into primary care and position them for independent clinical interaction in the care of patients with a broader array of clinical concerns in sustained partnership with physicians. In harnessing the specific practice domains of athletic training, the health needs of the population can be addressed at the point of care and those interventions also transitioned into the community for larger impact.
As this project develops, an intentional goal has been established to develop an operational definition of primary care within athletic training practice.

MOVING UPMARKET: DISRUPTING ATHLETIC TRAINING PRACTICE FOR THE SAKE OF PRIMARY CARE

Innovative models for athletic training practice continue to emerge. Laursen 49 has discussed a patient-centered model for athletic training practice that moves athletic training services out of an athletic department and transitions them toward an independent and interprofessional clinical unit. This is a novel approach that has been adopted by several college and university practices, resulting in fewer hours worked, direct supervision by physicians, and reported increased recognition of the athletic training profession among fellow clinicians. 49 This transition out of the traditional athletics model creates an opportunity not only for collaboration, but also for expansion of primary care into the traditional settings in which ATs work. The usual mechanism for accomplishing this in the college and university practice setting has been through student health services, whereas the secondary school practice setting is seeing the emergence of school-based health centers (SBHCs) with the possibility for contribution by ATs in that setting. [49][50][51] Recent work by Noel-London, Breitbach, and Belue 51 demonstrated a 20% increase in the number of clinic visits within an SBHC and a change in perception of the SBHC when the services of the AT were included. It is important to note that there are differences in the composition of SBHCs that may be attributable to socioeconomic status. 51,52 The reason that these transitions out of a traditional athletic training practice model are important is that they provide a structure that is amenable to recognition as a patient-centered medical home (PCMH). The PCMH was first introduced by the American Academy of Pediatrics in 1967 and has been proposed to provide patient-centered care that reduces costs while creating a sustained relationship between the patient and provider. 52, 53 The National Committee for Quality Assurance is the largest and most well-known accreditation body for PCMHs in the United States. The organization has set forth 6 concepts with 19 competencies that define the criteria that make up a PCMH. 53 The 6 concepts are (1) team-based care and practice organization, (2) knowing and managing your patients, (3) patient-centered access and continuity, (4) care management and support, (5) care coordination and care transitions, and (6) performance measurement and quality improvement. 53 Whereas it is true that currently only PCPs, physician assistants, and nurse practitioners can be recognized as personal clinicians under current PCMH standards, ATs can play a substantial role in promoting transition to PCMH status by facilitating team-based care, coordinating care, and promoting evidence-based strategies based on population-specific criteria. 53 Merging ATs, school nurses, and counselors into a cohesive SBHC is a sensible beginning to establishing a PCMH at the point of care. It is also worth noting that some SBHCs did not have a physician on-site at all times; however, a physician is still expected to have a panel of patients within a PCMH. Recent work 52 has described the state of PCMH recognition in SBHCs throughout the country. The majority of those SBHCs had no recognition as a PCMH, and the majority of SBHCs employed a PCP at less than 1 full-time equivalent.
52 The creative and intentional disruption of athletic training practice within the traditional setting creates an instant interprofessional team that can move forward much more readily, not only in achieving PCMH status but also in providing care for a diverse and comprehensive set of health needs within the population. In general, as the poverty level within a school increased, the likelihood of recognition as a PCMH decreased. 52 It is plausible that ATs may be able to readily benefit SBHC revenue through the creative and intentional use of student accident, secondary, and gap insurance policies. Whereas these are typically used in athletics, they may provide added financial benefit to the SBHC due to potentially higher reimbursement rates and multiple payers. ATs have multifaceted experience and skill to directly benefit PCMH status within traditional practice settings. It is important to understand the historical challenges that primary care has faced in order for ATs to respond to the primary care workforce shortage. The creative and honed skill of the AT positions them to have a substantial effect by meeting the need for access to comprehensive care throughout the communities in which athletic trainers serve patients. In order to help solve health disparities in access to primary care, ATs must continue to use the platform of health promotion and prevention in population health to simultaneously identify pathology and promote health. The comparison of an AT's scope of practice to primary care attributes is an intentional demonstration of readiness and ability to respond to health care needs and create change. This comprehensive paradigm shift will require both clinical and administrative leaders with experience in primary care. The athletic training primary care milestones are the conduit not only for preparing clinicians to meet the personal health care needs of patients, but also for identifying those leaders, expert in primary care, who will innovate and advocate for continued change and new solutions.

LEARNING AND DOING: EXECUTING PRIMARY CARE ATTRIBUTES IN THE CLASSROOM AND IN PRACTICE

The crucial intersection of the primary care attributes with athletic training education and practice involves the intentional pairing of the practice domains with primary care attributes. The requirement that an athletic training student experience multiple clinical environments with varied patient populations, with presumed differences in resources and socioeconomic status, 54 should create intentional questions about access to athletic training services and health care for these populations. Exploring this in a reflective journal or case series could be an excellent way to prepare for the realities of clinical practice, in which access does indeed vary, along with possible strategies to address lack of access. 54 Finding ways to explore and remove barriers to care is an intentional display of the primary care attribute of access. Comprehensiveness of care involves the recognition of a wide variety of health needs within a patient population. 11 Although it is obvious that an AT may not be able to provide care for all of these entities, comprehensive care still occurs when appropriate referral resources are identified and used. Whereas athletic training students and clinicians are not expected to provide care for every pathology that may present to them, there should be enough contact with these clinical problems for students and clinicians to remain competent.
11,45,54 Students and clinicians should become comfortable reviewing patient documentation and previous medical histories. Coordination of care involves knowledge of past medical history after careful review of information. 11 Using mock or real-time exercises involving review of preparticipation exams and medical histories may allow students to maximize their ability to make decisions about medical eligibility or to coordinate care with the appropriate specialist when concerns with patient health do arise. A novel exercise known as previsit planning involves critical examination of a patient's medical history, previous labs, and other clinical information before a scheduled appointment with a provider in order to formulate a known history with known comorbidities. This allows the student to gain familiarity with clinical medicine, associated lab tests, and terminologies in order to appropriately communicate, think, and promote efficient care. This will allow for thorough communication with the physician and result in student learning regarding how these factors may affect options for patient care. Most important, practicing clinicians often assume the role of patient advocate when coordinating care for patients using this available knowledge. 11 Care that is continuous involves use of the same source of care over a period of time. 11 For ATs working in traditional practice settings, this primary care attribute is easily attainable, because patients often receive care in one location for a number of years. A longitudinal relationship should develop as the clinician and student maintain competence in triage of multiple organ systems in order to formulate care as it becomes person-focused. 11,54 Accountability of care revolves around proper documentation and ethical interactions in patient care. 54 Primary care continues to evolve amid ever-increasing health care spending. 55 Musculoskeletal disorders lead all causes of health care spending in those aged 20 to 64 years, ahead of diabetes and other conditions. 55 This is reflective of payments made by private and public insurers as well as out-of-pocket costs. 55 It is time for the profession of athletic training to leverage its history of innovation, work ethic, and skills to provide an answer at the point of first contact for patients and the communities served.
2020-12-31T09:02:14.323Z
2020-10-01T00:00:00.000
{ "year": 2020, "sha1": "a3662227a27268f342e9886b43a67e7f337e4332", "oa_license": null, "oa_url": "https://meridian.allenpress.com/atej/article-pdf/15/4/278/2695192/i1947-380x-15-4-278.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "177ed8d311adf311734a0a5c8b5322cefca6b356", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology" ] }
17789242
pes2o/s2orc
v3-fos-license
Affine Toda Solitons and Automorphisms of Dynkin Diagrams Using Hirota's method, solitons are constructed for affine Toda field theories based on the simply-laced affine algebras. By considering automorphisms of the simply-laced Dynkin diagrams, solutions to the remaining algebras, twisted as well as untwisted, are deduced. Introduction Recent work has shown that soliton solutions can be constructed for affine Toda field theories based on the a (1) n and d (1) 4 algebras [1] as well as the c (1) n algebra [2] when the coupling constant is purely complex. In the case of the a (1) n theory N-soliton solutions have been constructed, whereas for d (1) 4 and c (1) n only static single solitons. The purpose of this paper is to construct static single solitons for all of the remaining algebras, both twisted and untwisted. This is achieved by considering a generalisation of the field ansatz used in [1], although as in [1] a special decoupling of the equations of motion is considered. It is found that once the soliton solutions for d (1) n , e 6 , e 7 and e (1) 8 are constructed, solutions for theories based on the other algebras follow by folding the simplylaced Dynkin diagrams. The solutions for e (1) 6 and e (1) 7 have been obtained independently by Hall [3]. For the simply-laced algebras the number of static solitons is found to be equal to the rank of the corresponding algebra. However by the method that will be employed here, for the non-simply-laced algebras a lesser number is found (as in [2] for c (1) n ). Also, for all the theories the mass ratios of the solitons ‡ can be calculated and are found to coincide with the mass ratios of the fundamental particles in the real-coupling affine Toda theory (i.e. those obtained by expanding the potential term of the Lagrangian density about its minimum [4] [9]). The paper concludes with a discussion of some aspects of topological charge. The equations of motion The Lagrangian density of affine Toda field theory can be written in the form n j (e βα j ·φ − 1). The field φ(x, t) is an n-dimensional vector, n being the rank of the finite Lie algebra g. The α j 's, for j = 1, ..., n are the simple roots of g; α 0 is chosen such that the inner ‡ For g (1) 2 and c (1) n only one soliton is found and so mass ratios cannot be considered. products among the elements of the set {α 0 , α j } are described by one of the extended Dynkin diagrams. It is expressible in terms of the other roots by the equation where the n j 's are positive integers, and n 0 = 1. Both β and m are constants, β being the coupling constant. The inclusion of α 0 distinguishes affine Toda field theory from Toda field theory. Toda field theory is conformal and integrable, its integrability implying the existence of a Lax pair, infinitely many conserved quantities and exact solubility [4][5] [6] (for further references see [7]). The extended root is chosen in such a way as to preserve the integrability of Toda field theory (though not the conformal property), with the enlarged set of roots {α 0 , α j } forming an admissible root system [4]. Setting the coupling constant β to be purely complex, i.e. β = iγ, the equations of motion are Extending the idea of [1], consider the following substitution for the field φ(x, t) § , D x and D t are Hirota derivatives, defined by § For the simply-laced algebras the choice of η i coincides with that of [1], namely η i = 1. However, for the remaining algebras it is other choices of η i which yield soliton solutions. 
It will be assumed (cf [1]) that Q j = 0 ∀j, although this is not the most general decoupling. (The existence of n + 1 τ -functions (compared to the n-component field φ) is due to the relationship between affine and conformal affine Toda theories [2].) Therefore, In the spirit of Hirota's method for finding soliton solutions [8], suppose , σ, v and ξ are arbitrary complex constants. The constant p j is a positive integer and ǫ an infinitesimal parameter. The method employed is to solve (2.2) at successive orders in ǫ, and then absorb ǫ into the exponential. At first order in ǫ, it is easily shown that Defining the matrices, 1 , ..., δ (1) n ) T is an eigenvector of the matrix K where As K and NC are similar, they share the same eigenvalues. Indeed for the a,d and e theories it has been shown [10] that the squared masses of the fundamental Toda particles are eigenvalues of NC. For the non-simply-laced theories, the eigenvalues of NC are also eigenvalues of a simply-laced theory and so are related to the squared masses of the nonsimply-laced theory. As will be seen in section 5 this leads to the ratios of static energies of the solitons being equal to the ratios of the unrenormalized masses of the fundamental particles described by the Lagrangian fields. It is straightforward to show that for τ j to be bounded as x → ±∞, In all cases, η j is chosen to be since this choice of η j causes each τ j to be raised to a non-negative integer power in the equations of motion (2.2). So, for the simply-laced cases η j = 1 and for single soliton Finally, it is unnecessary to consider the solution corresponding to λ = 0, as it is always φ = 0. Affine Toda solitons for simply-laced algebras The length of the longest roots will be taken to be √ 2 for all cases. It is necessary fix the root lengths in this way, otherwise the parameters m and β in the equations of motion have to be rescaled. Also under this convention, the soliton masses are found to satisfy one universal formula. The a (1) n theory The Dynkin diagram for a (1) n is shown in Figure 3.1a. The eigenvalues of the matrix NC are λ a = 4 sin 2 πa n + 1 . With η j = 1 ∀j, the equations of motion are i.e. those of [1]. For the single soliton solutions p 0 = 1, giving where ω is an (n+1) th root of unity. There are n non-trivial solutions [1] (equal to the number of fundamental particles) with ω a = exp 2πia/(n+1) where 1 ≤ a ≤ n. These n solutions to a (1) n can be written in the form It was shown in [1] that φ (a) (1 ≤ a ≤ n) can be associated with the a-th fundamental representation of a (1) n , and that different values of Im ξ give rise to different topological charges. The topological charges are found to be weights of the particular representation. Therefore, strictly speaking the results presented here correspond to representatives from each class of solution, as the value of ξ and so the topological charge, is not specified. The d (1) n theory The equations of motion for d (1) 4 , whose Dynkin diagram is shown in Figure 3.2a, are slightly different to those for d (1) n≥5 and so will be considered separately. If λ=6, one solution is obtained: The Dynkin diagram for d (1) n (n ≥ 5) is shown in Figure 3.2b. In this case the eigenvalues of the matrix NC are and λ n−1 = λ n = 2. With η j = 1 ∀j, the single soliton has p j = n j ∀j and satisfies the following equations For λ = 2 it is found that theory The Dynkin diagram for e (1) 6 is shown in Figure 3.3a. 
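Before turning to the e_6^(1) case, the eigenvalue statements above lend themselves to a quick numerical cross-check. The sketch below is not part of the original text; it assumes the standard conventions that C is the extended Cartan matrix and N = diag(n_j) is built from the Kac labels, and verifies that the eigenvalues of NC for a_n^(1) are 4 sin^2(pi a/(n+1)) and that for d_4^(1) they reproduce the values lambda = 2 and lambda = 6 quoted above.

```python
import numpy as np

def affine_cartan_a(n):
    """Extended Cartan matrix of a_n^(1) (n >= 2): n+1 nodes arranged on a cycle."""
    C = 2.0 * np.eye(n + 1)
    for i in range(n + 1):
        C[i, (i + 1) % (n + 1)] = C[i, (i - 1) % (n + 1)] = -1.0
    return C

# a_n^(1): all Kac labels n_j = 1, so NC = C; eigenvalues should be 4 sin^2(pi*a/(n+1)).
n = 6
eigs = np.sort(np.linalg.eigvalsh(affine_cartan_a(n)))
pred = np.sort([4 * np.sin(np.pi * a / (n + 1)) ** 2 for a in range(n + 1)])
print(np.allclose(eigs, pred))                            # True

# d_4^(1): central node (Kac label 2) attached to four outer nodes (Kac label 1).
C = 2.0 * np.eye(5)
for outer in (0, 1, 3, 4):
    C[outer, 2] = C[2, outer] = -1.0
N = np.diag([1, 1, 2, 1, 1])
print(np.sort(np.linalg.eigvals(N @ C).real).round(6))    # [0. 2. 2. 2. 6.]
```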
The eigenvalues of the matrix NC are given by and As in the other simply-laced cases η j = 1 and p j = n j ∀j giving the equations of motion where (a,b)=(0,2),(1,3) and (6,5). A summary of the δ-values for the six single soliton solutions is given in Table 3.3. With reference to Table 3.3, the vector δ (1) = (δ 6 ) T is an eigenvector of the matrix K, which is conjugate to NC. The terms δ (b) a (b ≥ 2) are coefficients of e bΦ in τ a . As usual,the δ-values corresponding to λ = 0 have not been included as they lead to a trivial solution. For the e 7 theory, whose Dynkin diagram is shown in Figure 3.4a, the non-zero eigenvalues of the matrix NC are 3 sin 7π 18 sin 4π 9 , λ 5 = 8 sin 2 4π 9 λ 6 = 8 √ 3 sin 5π 18 sin π 9 , λ 7 = 8 sin 2 π 9 . A summary of the δ-values for the seven soliton solutions for e 7 is given in Table 3.4. Folding and the non-simply-laced algebras With the construction of the soliton solutions in the previous section, enough information has been gathered to deduce solutions to the non-simply laced algebras. From a (1) n to c (1) n . The Dynkin diagram for c (1) n is shown in Figure 4.1a. This is the origin of the idea of 'folding', discussed in [11]: the diagram for a (1) 2n−1 has been 'folded' using its symmetry under the reflection 0 → 0, i → 2n − i of the nodes. Generally, suppose the Dynkin diagram of a simply-laced algebra has some symmetry. The equations of motion then also have this symmetry, so that solitons with this symmetry as an initial condition preserve it as they evolve. Thus a solution for a (1) 2n−1 can be written in terms of {α ′ i } if τ i = τ 2n−i . As will be seen these are solutions for c (1) n . For c (1) n the η ′ j 's (all quantities relating to c (1) n will be denoted by a prime) are given by so that for the single soliton solution p ′ j = 1 ∀j. The equations of motion are then This set of equations is that for a 2n−1 with and so the solutions to c (1) n are those for a (1) 2n−1 with the conditions (4.1.2) imposed. This leads to the requirement that The only a satisfying this equation is a = n, giving ω a = −1, i.e. the only non-trivial soliton of a 2n−1 surviving the folding procedure is that corresponding to the n-th spot on the Dynkin diagram (the trivial solution corresponding to the zeroth spot also survives). Therefore, giving the soliton solution to c (1) n as In fact, with the identification of roots in (4. 2n−1 . This turns out to be a common feature of solitons to the non-simplylaced theories -they are equal to a soliton of the corresponding simply-laced algebra. From 2n−1 , d n+1 , a 2n , and g (1) 2 Turning first to the b (1) n theory, which has Dynkin diagram shown in Figure 4.2a, the set of roots {α ′ i } are expressible in terms of the roots {α i } of d With τ ′ i = τ i , τ ′ n = τ n = τ n+1 , the equations of motion for d (1) n+1 reduce to those for b (1) n . The number of solutions is found to be n − 1 with eigenvalues of NC equal to λ a = 8 sin 2 aπ 2n (1 ≤ a ≤ n − 1). In this case all the soltions of d (1) n+1 survive except those corresponding to the Dynkin spots n and n + 1. Solutions to theories based on twisted algebras such as a (2) 2n−1 , shown in Figure 4.2b, need to be handled slightly differently. The roots of a (2) 2n−1 are obtainable from those of d (1) 2n . However, if we apply the previous procedure and identify τ 's in the equations of motion for d (1) 2n , they are found to be slightly different from those of a (2) 2n−1 , in that the coefficient of m 2 differs. 
This is because the twisted algebras are obtained from symmetries of the simply-laced diagrams which involve the extended root, which is thus rescaled by folding. 2n by It is necessary, therefore, to consider the equations of motion of d (1) 2n with the following identification of τ -functions: . As a result, solutions of a 2n−1 are those of d (1) 2n with eigenvalue λ (sl) , satisfying (4.2.1) and With this root convention λ (sl) = 2λ (tw) , λ (tw) being an eigenvalue of the extended Cartan matrix for a (2) 2n−1 . As a result, the case a (2) 2n−1 has n solutions corresponding to The solitons of d (1) 2n lost through folding are those corresponding to the k-th spot (1 ≤ k ≤ 2n − 1, k odd) and one of the solitons corresponding to the (2n-1)-th and 2n-th spots. This procedure generalises to the other twisted algebras. Solitons for the d (2) n+1 and a (2) 2n theories are obtained from the d (1) n+2 and d (1) 2n+2 theories respectively, whereas g In a similar manner to the previous two subsections, solitons can be obtained for f Soliton mass In [1] it was shown that the masses of the a (1) n solitons are given by Since the masses of the fundamental Toda particles equal √ λ, the ratios of the soliton masses are equal to the ratios of the fundamental particles. By considering the soliton momentum, it is straightforward to confirm (case-by-case) that (4.1) holds for the solitons of the remaining simply-laced algebras. Consider now the solitons belonging to the other algebras. For the untwisted algebras, as each soliton is also a solution of one of the simply-laced cases, equation (4.1) holds though with the Coxeter number equal to that of the simply-laced algebra (it is easily shown that the Coxeter number of an untwisted non-simply-laced algebra is equal to the Coxeter number of the algebra from which it is folded). Hence, (4.1) holds with the mass ratios being those of the fundamental particles. By relating a solution of a twisted algebra to a solution of the corresponding simply-laced algebra, the masses of the twisted solitons are readily seen to satisy (4.1) also. Topological charges The topological charge of a soliton is defined as, Previous work [1] has shown that for a (1) n the topological charge of the soliton φ (a) is found to be a weight of the a-th fundamental representation. Different choices of Im ξ give rise to different topological charges. For the representations associated with the roots α 1 and α n all weights occur as topological charges whereas for the other representations a lesser number are found. We shall use Dynkin labelling for representations, based on the diagrams in figures 3.1a and 3.2a. As an example consider the case a 3 made up of φ (1) and φ (3) has been studied and is found to have topological charges filling the adjoint representation (1,0,1). Some consideration has also been given to the φ (1) -φ (2) static double soliton which has topological charges, not previously found, occuring as weights of the second fundamental representation. However, a study of this case is not yet complete. Similar consideration has been given to d It is clear that this aspect of the solitons requires a great deal more study.
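As a closing cross-check of the folding discussion above, the following sketch (again not part of the original text) illustrates two of the claims made there, under the assumption, as in [1], that the single-soliton data of a_{2n-1}^(1) is delta_j^(a) = omega_a^j with omega_a = exp(2 pi i a/2n): imposing the folding identification tau_j = tau_{2n-j} singles out a = n (i.e. omega_a = -1), and the Coxeter number of each untwisted non-simply-laced algebra coincides with that of the simply-laced algebra from which it is folded. The Coxeter numbers used are the standard ones; this is an illustrative consistency check, not a derivation.

```python
import cmath

def surviving_solitons(n, tol=1e-9):
    """Folding a^(1)_{2n-1} -> c^(1)_n identifies tau_j with tau_{2n-j}.
    With the single-soliton ansatz delta_j^(a) = omega_a**j, omega_a = exp(2*pi*i*a/(2n)),
    return the labels a for which the identification holds at every node j."""
    h = 2 * n
    survivors = []
    for a in range(1, h):
        omega = cmath.exp(2j * cmath.pi * a / h)
        if all(abs(omega ** j - omega ** ((h - j) % h)) < tol for j in range(h)):
            survivors.append(a)
    return survivors

for n in (2, 3, 4, 5):
    print(n, surviving_solitons(n))           # [2], [3], [4], [5]: only a = n survives

# Standard Coxeter numbers; folding leaves the Coxeter number unchanged.
cox_a = lambda n: n + 1
cox_b = cox_c = lambda n: 2 * n
cox_d = lambda n: 2 * n - 2
cox_e6, cox_f4, cox_g2 = 12, 12, 6
for n in range(2, 7):
    assert cox_c(n) == cox_a(2 * n - 1)       # c_n from a_{2n-1}
    assert cox_b(n) == cox_d(n + 1)           # b_n from d_{n+1}
assert cox_g2 == cox_d(4)                     # g_2 from d_4
assert cox_f4 == cox_e6                       # f_4 from e_6
print("Coxeter numbers agree under folding")
```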
2014-10-01T00:00:00.000Z
1992-08-24T00:00:00.000
{ "year": 1992, "sha1": "19bd53f2e1bc2ab6078609587e5946a31563a4ed", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/9208057", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "9bc6e355c06a49d17b8d8d092733bb23bba99537", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
14556373
pes2o/s2orc
v3-fos-license
Homotopy of posets, net-cohomology and superselection sectors in globally hyperbolic spacetimes We study sharply localized sectors, known as sectors of DHR-type, of a net of local observables, in arbitrary globally hyperbolic spacetimes with dimension $\geq 3$. We show that these sectors define, has it happens in Minkowski space, a $\mathrm{C}^*-$category in which the charge structure manifests itself by the existence of a tensor product, a permutation symmetry and a conjugation. The mathematical framework is that of the net-cohomology of posets according to J.E. Roberts. The net of local observables is indexed by a poset formed by a basis for the topology of the spacetime ordered under inclusion. The category of sectors, is equivalent to the category of 1-cocycles of the poset with values in the net. We succeed to analyze the structure of this category because we show how topological properties of the spacetime are encoded in the poset used as index set: the first homotopy group of a poset is introduced and it is shown that the fundamental group of the poset and the one of the underlying spacetime are isomorphic; any 1-cocycle defines a unitary representation of these fundamental groups. Another important result is the invariance of the net-cohomology under a suitable change of index set of the net. Introduction The present paper is concerned with the study of charged superselection sectors in globally hyperbolic spacetimes in the framework of the algebraic approach to quantum field theory [17,18]. The basic object of this approach is the abstract net of local observables R K , namely the correspondence which associates to any element O of a family K of relatively compact open regions of the spacetime M, considered as a fixed background manifold, the C * −algebra R(O) generated by all the observables which are measurable within O. Sectors are unitary equivalence classes of irreducible representations of this net, the labels distinguishing different classes are the quantum numbers. The study of physically meaningful sectors of the net of local observables, and how to select them, is the realm of the theory of superselection sectors. One of the main results of superselection sectors theory has been the demonstration that in Minkowski space M 4 , among the representations of the net of local observables it is possible to select a family of sectors whose quantum numbers manifest the same properties as the charges carried by elementary particles: a composition law, the alternative of Bose and Fermi statistics and the charge conjugation symmetry. The first example of sectors manifesting these properties has been provided in [10,11], known as DHRanalysis, where the authors investigated sharply localized sectors. Namely, a representation π of R K is a sector of DHR-type whenever its restriction to the spacelike complement O ⊥ of any element O of K is unitary equivalent to the vacuum representation π o of the net, in symbols Although no known charge present in nature is sharply localized, the importance of the DHR-analysis resides in the following reasons. First, it suggests the idea that physically charged sectors might be localized in a more generalized sense with respect to (1). Secondly, the introduction of powerful techniques based only on the causal structure of the Minkowski space that can be used to investigate other types of localized sectors. A relevant example are the BF-sectors [3] which describe charges in purely massive theories. 
BF-sectors are localized in spacelike cones, which are a family of noncompact regions of M 4 . In curved spacetimes the nontrivial topology can induce superselection sectors, see [1] and references quoted therein. However, up until now, the localization properties of these sectors are not known, hence it is still not possible to investigate their charge structure. In the present paper we deal with the study of sectors of DHR-type in arbitrary globally hyperbolic spacetimes. Because of the sharp localization, sectors of DHR-type should be insensitive to the nontrivial topology of the spacetime, and their quantum numbers should exhibit the same features as in Minkowski space. However, the first investigations [16,27] have provided only partial results in this direction, and, in particular, they have pointed out that for particular classes of spacetimes the topology might affect the properties of sectors of DHR-type. The aim of the present paper is to show how this type of sectors and the topology of spacetime are related and that they manifest the properties described above also in an arbitrary globally hyperbolic spacetime. We want to stress that the results of this paper are confined to spacetimes whose dimension is ≥ 3. Before entering the theory of DHR-sectors in globally hyperbolic spacetimes, a key fact has still to be mentioned. The DHR-analysis can be equivalently read in terms of net-cohomology of posets, a cohomological approach initiated and developed by J.E. Roberts [25], (see also [26,27] and references therein). Such an approach makes clear that the spacetime information which is relevant for the analysis of the DHR-sectors is the topological and the causal structure of Minkowski space (the Poincaré symmetry enters the theory only in the definition of the vacuum representation). In particular, the essential point is how these two properties are encoded in the structure of the index K as a partially ordered set (poset) with respect to inclusion order relation ⊆. Representations satisfying (1) are, up to equivalence, in 1-1 correspondence with 1-cocycles z of the poset K with values in the vacuum representation of the net A K : O → A(O). Here A(O) is the von Neumann algebra obtained by taking the bicommutant π o (R(O)) ′′ of π o (R(O)). These 1-cocycles, which are nothing but the charge transporters of DHR-analysis, define a tensor C * −category Z 1 t (A K ) with a permutation symmetry and conjugation. The first investigation of sectors of DHR-type in a globally hyperbolic spacetime M has been done in [16]. First, the authors consider the net of local observables R K⋄ indexed by the set K ⋄ of regular diamonds of M: a family of relatively compact open sets codifying the topological and the causal properties of M. Secondly, they take a reference representation π o of R K⋄ on a Hilbert space H o such that the net A K⋄ : K ⋄ ∋ O → A(O) ≡ π o (R(O)) ′′ satisfies Haag duality and the Borchers property (see Section 4). The reference representation π o plays for the theory the same role that the vacuum representation plays in the case of Minkowski space. Examples of physically meaningful nets of local algebras indexed by regular diamonds have been given in [32]. Finally, the DHR-sectors are singled out from the representations of the net R K⋄ by generalizing, in a suitable way, the criterion (1). 
As in Minkowski space, the physical content of DHR-sectors is contained in the C * −category Z 1 t (A K⋄ ) of 1-cocycles of K ⋄ with values in A K⋄ , and when K ⋄ is directed under inclusion, there exist a tensor product, a symmetry and conjugated 1-cocycles. The analogy with the theory in the Minkowski space breaks down when the K ⋄ is not directed. In this situation only the introduction of a tensor product on Z 1 t (A K⋄ ) and the existence of a symmetry have been achieved in [16], although the definition of the tensor product is not completely discussed (see below). There are two well known topological conditions on the spacetime, implying that not only regular diamonds but any reasonable set of indices for a net of local algebras is not directed: this happens when the spacetime is either nonsimply connected or has compact Cauchy surfaces (Corollary 2. 19 and Lemma 3.2). There arises, therefore, the necessity to understand the connection between net-cohomology and topology of the underlying spacetime. Progress in this direction has been achieved in [27]. The homotopy of paths, the net-cohomology under a change of the index set are issues developed in that work that will turn out to be fundamental for our aim. Moreover, it has been shown that the statistics of sectors can be classified provided that the net satisfies punctured Haag duality (see Section 4). However, no result concerning the conjugation has been achieved. To see what is the main drawback caused by the non directness of the poset K ⋄ , we have to describe more in detail Z 1 t (A K⋄ ). The elements z of Z 1 t (A K⋄ ) are 1-cocycles trivial in B(H o ) or, equivalently, path-independent on K ⋄ . The latter means that the evaluation of z on a path of K ⋄ depends only on the endpoints of the path. When K ⋄ is directed any 1-cocycle is trivial in B(H o ), but this might not be hold when K ⋄ is not directed. The consequences can be easily showed: let ⊗ be the tensor product introduced in [16]: for any z, z 1 ∈ Z 1 t (A K⋄ ), it turns out that z ⊗z 1 is a 1-cocycle of K ⋄ with values in A K⋄ , but it is not clear whether it is trivial in B(H o ) (we will see in Remark 4.18 that this 1-cocycle is trivial in B(H o )). Now, we know that the nonsimply connectedness and the compactness of the Cauchy surfaces are topological obstructions to the directness of the index sets. The first aim of this paper is to understand whether these conditions are also obstructions to the triviality in B(H o ) of 1-cocycles. This problem is analyzed in great generality in Section 2. We introduce the notions of the first homotopy group and fundamental group for an abstract poset P (Definition 2.4) and prove that any 1-cocycle z of P, with values in a net of local algebras A P indexed by P, defines a unitary representation of the fundamental group of P (Theorem 2.8). In the case that P is a basis for a topological space ordered under inclusion, and whose elements are arcwise and simply connected sets, then the fundamental group of P is isomorphic to the fundamental group of the underlying topological space (Theorem 2.18). This states that the only possible topological obstruction to the triviality in B(H o ) of 1-cocycles is the nonsimply connectedness (Corollary 2.21). Before studying superselection sectors in a globally hyperbolic spacetime M, we have to point out another problem arising in [16,27]. Regular diamonds do not need to have arcwise connected causal complements. 
This, on the one hand creates some technical problems; on the other hand it is not clear whether it is justified to assume Haag duality on A K⋄ : the only known result showing that a net of local observables, in the presence of a nontrivial superselection structure, inherits Haag duality from fields makes use of the arcwise connectedness of causal complements of the elements of the index set [16,Theorem 3.15]. We start to deal with this problem by showing that net-cohomology is invariant under a change of the index set (Theorem 2.23), provided the new index set is a refinement of K ⋄ (see Definition 2.9 and Lemma 2.22a). In Section 3.2 we introduce the set K of diamonds of M. K is a refinement of K ⋄ and any element of K has an arcwise connected causal complement. Therefore adopting K as index set the cited problems are overcome. In Section 4, we consider an irreducible net A K satisfying the Borchers property and punctured Haag duality. The key for studying superselection sectors of the net A K , namely the C * −category Z 1 t (A K ), is provided by the following fact. We introduce the causal puncture K x of K induced by a point x of M (17) and consider the categories for any x ∈ M admits an extension to a 1-cocycle z ∈ Z 1 t (A K ) if, and only if, a suitable gluing condition is verified (Proposition 4.2). A similar result holds for arrows (Proposition 4.3), and can be easily generalized to functors. These results suggest that one could proceed as follows: first, prove that the categories Z 1 t (A Kx ) have the right structure to describe the superselection theory (local theory, Section 4.2); secondly, check that the constructions we have made on Z 1 t (A Kx ) satisfy the mentioned gluing condition, and consequently can be extended to Z 1 t (A K ) (global theory, Section 4.3). This argument works. We will prove that Z 1 t (A K ) has a tensor product, a symmetry and that any object has left inverses. The full subcategory Z 1 t (A K ) f of Z 1 t (A K ) whose objects have finite statistics has conjugates (Theorem 4.15). In Appendix A we give some basics definitions and results on tensor C * −categories. Homotopy and net-cohomology of posets After some preliminaries, the main topics are discussed in full generality in the first three sections: the first homotopy group of a poset; the connection between homotopy and net-cohomology; the behaviour of net-cohomology under a change of the index set. The remaining two sections are devoted to study the case that the poset is a basis for the topology of a topological space. We stress that the results obtained in the first three sections in terms of abstract posets can be applied, not only to sharply localized charges which are the subject of the present investigation, but also to charges like those studied in [3,1]. Preliminaries: the simplicial set and net-cohomology In the present section we recall the definition of simplicial set of a poset and the notion of net-cohomology of a poset, thereby establishing our notations. References for this section are [26,16,27]. The simplicial set -A poset (P, ≤) is a partially ordered set. This means that ≤ is a binary relation on a nonempty set P, satisfying It is clear that ∆ 0 is a point, ∆ 1 is a closed interval etc... The inclusion maps d n i between standard simplices are maps d n i : ∆ n−1 −→ ∆ n defined as d n i (λ 0 , . . . , λ n−1 ) = (λ 0 , λ 1 , . . . , λ i−1 , 0, λ i , . . . λ n−1 ), for n ≥ 1 and 0 ≤ i ≤ n. 
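Since the relations satisfied by the maps d^n_i are only alluded to in this copy, it may help to note that they are presumably the standard cosimplicial identities d^n_i o d^{n-1}_j = d^n_j o d^{n-1}_{i-1} for j < i, which induce the corresponding relations among the boundaries del_i f = f o d^n_i introduced below. A minimal sketch (not from the paper) that checks this identity mechanically:

```python
from itertools import product

def d(i, x):
    """The inclusion map d_i: insert a 0 at position i of the tuple x."""
    return x[:i] + (0,) + x[i:]

# Check the cosimplicial identity  d^n_i o d^{n-1}_j = d^n_j o d^{n-1}_{i-1}  for j < i.
# Tuples of length n-1 stand in for points of Delta^{n-2}; each map raises the length by one.
n = 4
samples = list(product(range(3), repeat=n - 1))
ok = all(d(i, d(j, x)) == d(j, d(i - 1, x))
         for i in range(n + 1) for j in range(i) for x in samples)
print(ok)   # True
```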
Now, note that a standard n-simplex ∆ n can be regarded as a partially ordered set with respect to the inclusion of its subsimplices. A singular n-simplex of a poset P is an order preserving map f : ∆ n −→ P. We denote by Σ n (P) the collection of singular n-simplices of P and by Σ * (P) the collection of all singular simplices of P. Σ * (P) is the simplicial set of P. The inclusion maps d n i between standard simplices are extended to maps ∂ n i : Σ n (P) −→ Σ n−1 (P), called boundaries, between singular simplices by setting ∂ n i f ≡ f • d n i . One can easily check, by the definition of d n i , that the following relations hold. From now on, we will omit the superscripts from the symbol ∂ n i , and will denote: the composition ∂ i • ∂ j by the symbol ∂ ij ; 0-simplices by the letter a; 1-simplices by b and 2-simplices by c. Notice that a 0-simplex a is nothing but an element of P; a 1-simplex b is formed by two 0-simplices ∂ 0 b, ∂ 1 b and an element |b| of P, called the support of b, such that ∂ 0 b, Given a 0 , a 1 ∈ Σ 0 (P), a path from a 0 to a 1 is a finite ordered sequence p = {b n , . . . , b 1 } of 1-simplices satisfying the relations The startingpoint of p, written ∂ 1 p, is the 0-simplex a 0 , while the endpoint of p, written ∂ 0 p, is the 0-simplex a 1 . We will denote by P(a 0 , a 1 ) the set of paths from a 0 to a 1 , and by P(a 0 ) the set of closed paths with endpoint a 0 . P is said to be pathwise connected whenever for any pair a 0 , a 1 of 0-simplices there exists a path p ∈ P(a 0 , a 1 ). The support of the path is the collection |p| ≡ {|b i | | i = 1, . . . , n}, and we will write |p| ⊆ P if P is a subset of P with |b i | ∈ P for any i. Furthermore, with an abuse of notation, we will write The symbol δ stands for ∂. Causal disjointness and net of local algebras -Given a poset P, a causal disjointness relation on P is a symmetric binary relation ⊥ on P satisfying the following properties: Given a subset P ⊆ P, the causal complement of P is the subset P ⊥ of P defined as Note that if P 1 ⊆ P , then P ⊥ ⊆ P ⊥ 1 . Now, assume that P is a pathwise connected poset equipped with a causal disjointness relation ⊥. A net of local algebras indexed by P is a correspondence associating to any O a von Neumann algebras A(O) defined on a fixed Hilbert space H o , and satisfying where the prime over the algebra stands for the commutant of the algebra. The category of 1-cocycles -We refer the reader to the Appendix for the definition of C * −category. Let P be a poset with a causal disjointness relation ⊥, and let A P be an irreducible net of local algebras. A 1-cocycle z of P with values in A P is a field z : z(∂ 0 c) · z(∂ 2 c) = z(∂ 1 c), c ∈ Σ 2 (P), and the locality condition: z(b) ∈ A(|b|) for any 1-simplex b. An intertwiner t ∈ (z, z 1 ) between a pair of 1-cocycles z, z 1 is a field t : Σ 0 (P) ∋ a −→ t a ∈ B(H o ) satisfying the relation and the locality condition: t a ∈ A(a) for any 0-simplex a. The category of 1-cocycles Z 1 (A P ) is the category whose objects are 1-cocycles and whose arrows are the corresponding set of intertwiners. The composition between s ∈ (z, z 1 ) and t ∈ (z 1 , z 2 ) is the arrow t · s ∈ (z, z 2 ) defined as (t · s) a ≡ t a · s a , a ∈ Σ 0 (P). Note that the arrow 1 z of (z, z) defined as (1 z ) a = ½, for any a ∈ Σ 0 (P), is the identity of (z, z). Now, the set (z, z 1 ) has a structure of complex vector space defined as for any α, β ∈ C and t, s ∈ (z, z 1 ). 
With these operations and the composition "·", the set (z, z) is an algebra with identity 1 z . The category Z 1 (A P ) has an adjoint * , defined on as the identity, z * = z, on the objects, while the adjoint t * ∈ (z 1 , z) of on arrows t ∈ (z, z 1 ) is defined as where (t a ) * stands for the adjoint in B(H o ) of the operator t a . Now, let be the norm of B(H o ). Given t ∈ (z, z 1 ), we have that t a = t a 1 for any pair a, a 1 of 0-simplices because P is pathwise connected. Therefore, by defining t ≡ t a a ∈ Σ 0 (P) it turns out (z, z 1 ) is a complex Banach space for any z, z 1 ∈ Z 1 (A P ), while (z, z) is a C * −algebra for any z ∈ Z 1 (A P ). This entails that Z 1 (A P ) is a C * −category. Two 1-cocycles z, z 1 are equivalent (or cohomologous) if there exists a unitary arrow t ∈ (z, z 1 ). A 1-cocycle z is trivial if it is equivalent to the identity cocycle ι defined as ι(b) = ½ for any 1-simplex b. Note that, Equivalence in B(H o ) and path-independence -A weaker form of equivalence between 1-cocycles is the following: z, z 1 are said to be equivalent Note that the field V is not an arrow of (z, z 1 ) because it is not required that V satisfies the locality condition. A 1-cocycle is trivial in B(H o ) if it is equivalent in B(H o ) to the trivial 1-cocycle ι. We denote by Z 1 t (A P ) the set of the 1-cocycles trivial in B(H o ) and with the same symbol we denote the full C * −subcategory of Z 1 (A P ) whose objects are the 1-cocycles trivial in B(H o ). Triviality in B(H o ) is related to the notion of path-independence. The evaluation of a 1-cocycle z on a path p = {b n , . . . , b 1 } is defined as z is said to be path-independent on a subset P ⊆ P whenever z(p) = z(q) for any p, q ∈ P(a 0 , a 1 ) such that |p|, |q| ⊆ P. ( As P is pathwise connected, a 1-cocycle is trivial in B(H o ) if, and only if, it is path-independent on all P [16]. For later purposes, we recall the following result: assume that z is a 1-cocycle trivial in for any path p with ∂ 1 p, ∂ 0 p ⊥ O [16, Lemma 3A.5]. The first homotopy group of a poset The logical steps necessary to define the first homotopy group of posets are the same as in the case of topological spaces. We first recall the definition of a homotopy of paths; secondly, we introduce the reverse of a path, the composition of paths and prove that they behave well under the homotopy equivalence relation; finally we define the first homotopy group of a poset. The definition of a homotopy of paths ( [27], p.322) needs some preliminaries. An ampliation of a 1-simplex b is a 2-simplex c such that ∂ 1 c = b. We denote by A(b) the set of the ampliations of b. An elementary ampliation of a path p = {b n , . . . , b 1 }, is a path q of the form Consider now a pair {b 2 , b 1 } of 1-simplices satisfying An elementary deformation of a path p is a path q which is either an elementary ampliation or an elementary contraction of p. Note that a path q is an elementary ampliation of a path p if, and only if, p is an elementary contraction of q. This can be easily seen by observing that if c ∈ Σ 2 (P), then c ∈ A(∂ 1 c) and c ∈ C(∂ 0 c, ∂ 2 c). This entails that deformation is a symmetric, reflexive binary relation on the set of paths with the same endpoints. However, if P is not directed, deformation does not need to be an equivalence relation on paths with the same endpoints, because transitivity might fail. Given a 0 , a 1 ∈ Σ 0 (P), a homotopy of paths in P(a 0 , a 1 ) is a map h(i) : {1, 2, . . . 
, n} −→ P(a 0 , a 1 ) such that h(i) is an elementary deformation of h(i − 1) for 1 < i ≤ n. We will say that two paths p, q ∈ P(a 0 , a 1 ) are homotopic, p ∼ q, if there exists a homotopy of paths h in P(a 0 , a 1 ) such that h(1) = q and h(n) = p. It is clear that a homotopy of paths is an equivalence relation on paths with the same endpoints. We now define the composition of paths and the reverse of a path. Given p = {b n , . . . , b 1 } ∈ P(a 0 , a 1 ) and q = {b ′ k , . . . b ′ 1 } ∈ P(a 1 , a 2 ), the composition of p and q is the path p * q ∈ P(a 0 , a 2 ) defined as Note that p 1 * (p 2 * p 3 ) = (p 1 * p 2 ) * p 3 , if the composition is defined. The reverse of a 1-simplex b, is the 1-simplex b defined as So, the reverse of a path p = {b n , . . . , b 1 } ∈ P(a 0 , a 1 ) is the path p ∈ P(a 1 , a 0 ) defined as p ≡ {b 1 , . . . , b n }. It is clear that p = p. Furthermore Proof. The reverse of a 2-simplex c is the 2-simplex c defined as So, let h : {1, . . . , n} −→ P(a 0 , a 1 ) be a homotopy of paths. Then maps h : {1, . . . , n} −→ P(a 1 , a 0 ) defined as h(i) ≡ h(i) for any i is a homotopy of paths, completing the proof. A 1-simplex b is said to be degenerate to a 0-simplex a 0 whenever We will denote by b(a 0 ) the 1-simplex degenerate to a 0 . Lemma 2.3. The following assertions hold: Proof. By Lemma 2.1 it is enough to prove the assertions in the case that ) and whose support |c 2 | equals |b|. Then The other identity follows in a similar way. We now are in a position to define the first homotopy group of a poset. Fix a base 0-simplex a 0 and consider the set of closed paths P(a 0 ). Note that the composition and the reverse are internal operations of P(a 0 ) and that b(a 0 ) ∈ P(a 0 ). We define where ∼ is the homotopy equivalence relation. Let [p] denote the homotopy class of an element p of P(a 0 ). Equip π 1 (P, a 0 ) with the product . * is associative, and it easily follows from previous lemmas that π 1 (P, a 0 ) with * is a group: the identity 1 of the group is . Now, assume that P is pathwise connected. Given a 0-simplex a 1 , let q be a path from a 0 to a 1 . Then the map is a group isomorphism. On the grounds of these facts, we give the following Definition 2.4. We call π 1 (P, a 0 ) the first homotopy group of P with base a 0 ∈ Σ 0 (P). If P is pathwise connected, we denote this group by π 1 (P) and call it the fundamental group of P. If π 1 (P) = 1 we will say that P is simply connected. We have the following result Proposition 2.5. If P is directed, then P is pathwise and simply connected. One can easily deduce from these relations that ∂ 02 c i = a 0 for any i = 2, . . . , n − 1. By Lemmas 2.1, 2.2 and 2.3, we have completing the proof. Connection between homotopy and net-cohomology Let us consider a pathwise-connected poset P, equipped with a causal disjointness relation ⊥, and let A P be an irreducible net of local algebras. In this section we show the relation between π 1 (P) and the set Z 1 (A P ). To begin with, we prove the invariance of 1-cocycles for homotopic paths. Lemma 2.6. Let z ∈ Z 1 (A P ). For any pair p, q of paths with the same endpoints, if p ∼ q, then z(p) = z(q). Proof. It is enough to check the invariance of z for elementary deformations. that is an elementary ampliation of p. By definition of A(b j ) and the 1cocycle identity we have The invariance for elementary contractions follows in a similar way. Proof. (a) Let c(a 0 ) be the 2-simplex degenerate to a 0 , that is follows from (a), Lemma 2.6 and Lemma 2.3b. 
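To make the combinatorics of paths and their evaluation concrete, here is a small self-contained toy, not taken from the paper and with several simplifications: the poset is a six-element family of open arcs forming a cover of the circle, ordered by inclusion, and the "1-cocycle" takes values in U(1) phases rather than in a net of von Neumann algebras, so the locality condition plays no role. The point is purely illustrative: the assignment satisfies the 1-cocycle identity on every 2-simplex, yet its evaluation on a closed path winding once around the circle is a non-trivial phase, hence it is not path-independent. This is the mechanism, formalized in the next subsection, by which a 1-cocycle represents the fundamental group of the poset and by which non-simple connectedness of the underlying space can obstruct triviality in B(H_o).

```python
import cmath
from itertools import product
from math import pi

def ang_diff(a, b):
    """Representative of a - b in (-pi, pi]."""
    d = (a - b) % (2 * pi)
    return d - 2 * pi if d > pi else d

def leq(small, big):
    """Order relation of the toy poset: inclusion of open arcs (center, half-width),
    all arcs being shorter than a half-circle."""
    return abs(ang_diff(small[0], big[0])) + small[1] <= big[1] + 1e-12

# Three large arcs covering the circle and three small arcs in the pairwise overlaps.
U = [(0.0, 1.2), (2 * pi / 3, 1.2), (4 * pi / 3, 1.2)]
V = [(pi / 3, 0.1), (pi, 0.1), (5 * pi / 3, 0.1)]
P = U + V

# 1-simplices b = (d1_b, d0_b, |b|): both faces below the support (b runs from d1_b to d0_b).
simp1 = [(d1, d0, s) for d1, d0, s in product(P, repeat=3) if leq(d1, s) and leq(d0, s)]

lam = 0.5   # any non-integer value gives non-trivial holonomy

def z(b):
    """Toy 1-cocycle: phase of the angle increment from d1_b to d0_b, measured inside |b|."""
    d1, d0, _ = b
    return cmath.exp(1j * lam * ang_diff(d0[0], d1[0]))

# 1-cocycle identity z(d0 c) z(d2 c) = z(d1 c): brute force over all triples of 1-simplices
# that fit together as faces of a 2-simplex (matching vertices, common upper bound for supports).
violations = 0
for e0, e1, e2 in product(simp1, repeat=3):        # e2: a0->a1, e0: a1->a2, e1: a0->a2
    if e2[0] == e1[0] and e2[1] == e0[0] and e0[1] == e1[1]:
        if any(all(leq(e[2], s) for e in (e0, e1, e2)) for s in P):
            if abs(z(e0) * z(e2) - z(e1)) > 1e-9:
                violations += 1
print("cocycle identity violations:", violations)   # 0

# Evaluate z on a closed path winding once around the circle: the result is a
# non-trivial phase, so z is not path-independent on this non-simply connected cover.
loop = [(V[0], V[1], U[1]), (V[1], V[2], U[2]), (V[2], V[0], U[0])]
hol = 1.0
for b in loop:                                       # z(p) = z(b_n) ... z(b_1)
    hol = z(b) * hol
print(abs(hol - cmath.exp(2j * pi * lam)) < 1e-9)    # True: holonomy exp(2*pi*i*lam) != 1
```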
We now are in a position to show the connection between the fundamental group of P and This definition is well posed as z is invariant for homotopic paths. Proof. First, recall that the identity 1 of π 1 (P) is the equivalence class [b(a 0 )] associated with the 1-simplex degenerate to a 0 . By Lemma 2.7 we have that π z (1) = 1 and that π z ( ) · u a 0 . Now, let π be a unitary representation of π 1 (P) on H o . Fix a base 0-simplex a 0 , and for any 0-simplex a, denote by p a a path with ∂ 1 p a = a and ∂ 0 p a = a 0 . Let Given a 2-simplex c, we have Hence z π satisfies the 1-cocycle identity but in general z π ∈ Z 1 (A K ) because z π (b) does not need to belong to A(|b|). However note that if we consider π z 1 for some z 1 ∈ Z 1 (A K ), then This entails that if π z is equivalent to π z 1 , then z is equivalent in B(H o ) to z 1 . Finally, assume that π 1 (P) = 1, then z(p) = ½ for any closed path p. This entails that z is path-independent on P, hence z is trivial in B(H o ). Change of index set The purpose is to show the invariance of net-cohomology under a suitable change of the index set. To begin with, by a subposet of a poset P we mean a subset P of P equipped with the same order relation of P. Definition 2.9. Consider a subposet P of P. We will say thatP is a re- Lemma 2.10. Let P be a locally relatively connected refinement of P. (a) P is pathwise connected if, and only if, P is pathwise connected. (b) If ⊥ is a causal disjointness relation for P, then the restriction of ⊥ to P is a causal disjointness relation. Proof. (a) Assume that P is pathwise connected. It easily follows from the definition of a locally relatively connected refinement that P is pathwise connected. Conversely, assume that P is pathwise connected. Given a 0 , a 1 ∈ P, letâ 0 ,â 1 ∈ P be such thatâ 0 ≤ a 1 andâ 1 ≤ a 1 , and letp be a path in P fromâ 0 toâ 1 . Then, b 1 * p * b 0 is a path from a 0 to a 1 , where b 0 , b 1 are 1-simplices of P defined as follows: is a symmetric binary relation satisfying the property (ii) of the definition (2). Let O ∈ P. Since ⊥ is a causal disjointness relation on P, we can find Let P be a pathwise connected poset and let ⊥ be a causally disjointness relation for P. Let A P be an irreducible net of local algebras indexed by P and defined on a Hilbert space H o . If P is a locally relatively connected refinement of P, then, by the previous lemma, P is pathwise connected and ⊥ is a causal disjointness relation on P. Furthermore, the restriction of A P to P is a net of local algebras A P| P indexed by P. Let Z 1 t (A P| P ) be the category of 1-cocycles of P, trivial in B(H o ), with values in the net A P| P . Notice that A P| P might be not irreducible, hence it is not clear, at a first sight, if the trivial 1-cocycleι of Z 1 t (A P| P ) is irreducible or not. This could create some problems in the following, since the properties of tensor C * −categories whose identity is not irreducible are quite complicated (see [21,31,5]). However, as a consequence of the fact that P is a refinement of P, this is not the case as shown by the following lemma. Lemma 2.11. Let A P be an irreducible net of local algebras. For any locally relatively connected refinement P of P, the trivial 1-cocycleι of Proof. Lett ∈ (ι,ι). By the definition ofι we have thatt ∂ 1b =t ∂ 0b for any 1-simplexb of P. Since P is pathwise connected, we have thattâ =tâ 1 for any pairâ,â 1 of 0-simplices of P. 
By the localization properties oft, it turns out that if we define T ≡tâ for some 0-simplexâ of P, then T ∈ A( O) for any O ∈ P. Now, observe that given O ∈ P, by the definition of causal disjointness relation, there is We now are ready to show the main result of this section. Theorem 2.12. Let P be locally relatively connected refinement of P. Then the categories Z 1 t (A P ) and Z 1 t (A P| P ) are equivalent. Proof. For any z ∈ Z 1 t (A P ) and for any t ∈ (z, z 1 ) define It is clear that R is a covariant and faithful functor from . We now define a functor from Z 1 t (A P| P ) to Z 1 t (A P ). To this purpose, we choose a function f : P −→ P satisfying the following properties: ) whose support is contained in |b|, this is possible because P is a locally relatively connected refinement of P. For anyẑ ∈ Z 1 t (A P| P ) we define For any c ∈ Σ 2 (P), by using the path-independence ofẑ we have Hence F(ẑ) satisfies the 1-cocycle identity, and it is trivial in . Now, we show that the pair R, F states an equivalence between Z 1 t (A P ) and Z 1 t (A P| P ). Givenẑ ∈ Z 1 t (A P| P ), for anyb ∈ Σ 1 ( P), we have that The proof follows once we have shown that the functor F • R is naturally isomorphic to 1 Z 1 t (A P ) . To this end, for any a ∈ Σ 0 (P) let b(f(a), a) ∈ Σ 1 (P) defined as for any a ∈ Σ 0 (P). This means that the mapping u : and (F • R), completing the proof. The poset as a basis for a topological space Given a topological Hausdorff space X. The topics of the previous sections are now investigated in the case that P is a basis for the topology X ordered under inclusion ⊆. This allows us both to show the connection between the notions for posets and the corresponding topological ones, and to understand how topology affects net-cohomology. Homotopy In what follows, by a curve γ of X we mean a continuous function from the interval [0, 1] into X. We recall that the reverse of a curve γ is the curve γ defined as γ(t) ≡ γ(1 − t) for t ∈ [0, 1]. If β is a curve such that β(1) = γ(0), the composition γ * β is the curve Finally, the constant curve e x is the curve e x (t) = x for any t ∈ [0, 1]. Definition 2.13. Given a curve γ. A path p = {b n , . . . , b 1 } is said to be a poset-approximation of γ (or simply an approximation) if there is a partition 0 = s 0 < s 1 < . . . < s n = 1 of the interval [0, 1] such that . . , n (Fig.3). By App(γ) we denote the set of approximations of γ. Since P is a basis for the topology of X, we have that App(γ) = ∅ for any curve γ. It can be easily checked that the approximations of curves enjoy the following properties where β(1) = σ(0), ∂ 0 q = ∂ 1 p. The symbol δ stands for ∂. Definition 2.14. Given p, q ∈ App(γ), we say that q is finer than p whenever p = {b n , . . . , b 1 } and q = q n * · · · * q 1 where q i are paths satisfying We will write p ≺ q to denote that q is a finer approximation than p (Fig.4) Note that ≺ is an order relation in App(γ). Since P is a basis for the topology of X, (App(γ), ≺) is directed: that is, for any pair p, q ∈ App(γ) there exists p 1 ∈ App(γ) of γ such that p, q ≺ p 1 . As already said, we can find an approximation for any curve γ. The converse, namely that for a given path p there is a curve γ such that p is an approximation of γ, holds if the elements of P are arcwise connected sets of the topological space X. 
Concerning the relation between connectedness for posets and connectedness for topological spaces, in [16] it has been shown that: if the elements of P are arcwise connected sets of X, then an open set X ⊆ X is arcwise connected in X if, and only if, the poset P X defined as is pathwise connected. Note that the set P X is a sieve of P, namely a subfamily S of P such that, if O ∈ S and O 1 ⊆ O, then O 1 ∈ S. Now, assume that P is a sieve of P. Then P is pathwise connected in P if, and only if, the open set X P defined as is arcwise connected in X. We now turn to analyze simply connectedness. Lemma 2.15. Let p, q ∈ P(a 0 , a 1 ) be two approximations of γ. Then p and q are homotopic paths. for s, t ∈ [0, 1]. In general γ i and β i might not have the same endpoints. So let σ 1 , σ 2 be two curves such that σ 1 (0) = γ(s 1 ) σ 1 (1) = β(s 1 ) and Observe that γ ∼ τ 3 * τ 2 * τ 1 , and that for i = 1, 2, 3 the curve τ i has the same endpoints of β i . Furthermore, by construction we have and that τ 2 and σ 2 are contained in the support of c. So, by Lemma 2.16 τ 1 ∼ β 1 and τ 3 ∼ β 3 . Moreover τ 2 ∼ β 2 because the support of c is simply connected. Hence γ ∼ β, completing the proof. S t is nonempty. In fact t ∈ S t because p t is an approximation of γ t . Moreover S t is open. To see this assume p t = {b n , . . . , b 1 }. By definition of approximation there is a partition 0 = s 0 < s 1 < . . . < s n = 1 of [0, 1] such that for i = 0, . . . , n − 1. By continuity of h we can find ε i > 0 such that for any i = 0, . . . , n − 1. So, if we define ε ≡ min{ε i | i ∈ {0, . . . , n − 1}} we obtain that p t is an approximation of γ l for any l ∈ (t − ε, t + ε), hence S t is open in the relative topology of [0, 1]. Now, for any t ∈ [0, 1], let I t ⊆ S t be an open interval of t. Note that for any l ∈ I t , p t is a approximation of γ l . By compactness we can find a finite open covering I t 0 , I t 1 , . . . , I tn of [0, 1], where 0 = t 0 < t 1 < . . . < t n = 1. We also have I t i ∩ I t i+1 = ∅ for any i = 0, . . . , n − 1. This entails that for any i = 0, . . . , n − 1 there is l i such that t i ≤ l i ≤ t i+1 and that p t i , p t i+1 are approximations of γ l i . By Lemma 2.15 we have that p t i and p t i+1 are homotopic, completing the proof. Theorem 2.18. Let X be a Hausdorff, arcwise connected topological space, and let P be a basis for the topology of X whose elements are arcwise and simply connected subsets of X. Then π 1 (X) ≃ π 1 (P). Proof. Fix a base 0-simplex a 0 and a base point x 0 ∈ a 0 . Define where p is an approximation of γ. By (12) and Lemma 2.17, this map is group isomorphism. Corollary 2.19. Let X and P be as in the previous theorem. If X is nonsimply connected, then P is not directed under inclusion. Proof. If X is not simply connected, the by the previous theorem P is not simply connected. By Proposition 2.5, P is not directed. Net-cohomology Let X be an arcwise connected, Hausdorff topological space. Let O(X) be the set of open subsets of X ordered under inclusion. Assume that O(X) is equipped with a causal disjointness relation ⊥. Definition 2.20. We say that P ⊆ O(X) is a good index set associated with (X, ⊥) if P is a basis for the topology of M whose elements are nonempty, arcwise and simply connected subsets of M with a nonempty causal complement. We denote by I(X, ⊥) the collection of good index sets associated with (X, ⊥). Some observations are in order. First, note that I(X, ⊥) can be empty. However, this does not happen in the applications we have in mind. 
Secondly, we have used the term "good index set" because it is reasonable to assume that any index set of nets local algebras over (X, ⊥) has to belong to I(X, ⊥). This, to avoid the "artificial" introduction of topological obstructions because, by Theorem 2.18, π 1 (P) ≃ π 1 (X) for any P ∈ I(X, ⊥). Given P ∈ I(X, ⊥), let us consider an irreducible net of local algebras A P defined on a Hilbert space H o . The first aim is to give an answer to the question, posed at the beginning of this paper, about the existence of topological obstructions to the triviality in B(H o ) of 1-cocycles. To this end, note that if X is simply connected, then by, Theorem 2.18, π 1 (P) = C · ½. Hence as a trivial consequence of Theorem 2.8 we have the following On the grounds of this result, we can affirm that there might exists only a topological obstruction to the triviality in B(H o ) of 1-cocycles: the nonsimply connectedness of X. "Might" because we are not able to provide here an example of a 1-cocycle which is not trivial in B(H o ). The next aim is to show that net-cohomology is stable under a suitable change of the index set. Let us start by observing that the notion of a locally relatively connected refinement of a poset, Definition 2.9, induces an order relation on I(X, ⊥). Given P 1 , P 2 ∈ I(X, ⊥), define P 1 P 2 ⇐⇒ P 1 is a locally relatively connected refinement of P 2 . (15) One can easily checks that is an order relation on I(X, ⊥). Lemma 2.22. The following assertions hold. (a) Given P ∈ I(X, ⊥), let P 1 be a subfamily of P. If P 1 is a basis for the topology of X, then P 1 ∈ I(X, ⊥) and P 1 P. (b) (I(X, ⊥), ) is a directed poset with a maximum P max . Proof. (a) follows from the Definition 2.9 and from Lemma 2.10. (b) Define P max ≡ {O ⊆ X |O ∈ P for some P ∈ I(X, ⊥)} It is clear that P max ∈ I(X, ⊥). By (a), we have that P P max for any P ∈ I(X, ⊥). Hence P max is the maximum. As an easy consequence of Theorem 2.12, we have the following Theorem 2.23. Let A Pmax be an irreducible net, defined on a Hilbert space H o , and indexed by P max . For any pair P 1 , P 2 ∈ I(X, ⊥) the categories Remark 2.24. Some observations on this theorem are in order. (1) The Theorem 2.23 says that, once a net of local algebras A Pmax is given, the category Z 1 t (A Pmax ) is an invariant of I(X, ⊥). (2) Once an irreducible net A P indexed by an element P ∈ I(X, ⊥) is given, then it is assigned a net indexed by P max . In fact, P is a basis for the topology of X, therefore by defining A(O) ≡ (∪{A(O 1 ) |O 1 ∈ P, O 1 ⊆ O}) ′′ , for any O ∈ P max , we obtain an irreducible net A Pmax such that A Pmax|P = A P . (3) Concerning the applications to the theory of superselection sectors, we can assume, without loss of generality, the independence of the theory of the choice of the index set. Good index sets for a globally hyperbolic spacetime In the papers [16,27] the index set used to study superselection sectors in a globally hyperbolic spacetime M is the set K ⋄ of regular diamonds. On the one hand, this is a good choice because K ⋄ ∈ I(M, ⊥). But on the other hand, regular diamonds do not need to have pathwise connected causal complements, and to this fact are connected several problems (see the Introduction). A way to overcome these problems is provided by Theorem 2.23: it is enough to replace K ⋄ with another good index set whose elements have pathwise connected causal complements. The net-cohomology is unaffected by this change and the mentioned problems are overcome. 
In this section we show that such a good index set exists: it is the set K of diamonds of M. The net-cohomology of K will provide us important information for the theory of superselection sectors. We want to stress that throughout both this section and in Section 4, by a globally hyperbolic spacetime we will mean a globally hyperbolic spacetime with dimension ≥ 3. Preliminaries on spacetime geometry We recall some basics on the causal structure of spacetimes and establish our notation. Standard references for this topic are [24,34,13]. A spacetime M consists of a Hausdorff, paracompact, smooth, oriented manifold M, with dimension ≥ 3, endowed with a smooth metric g with signature (−, +, +, . . . , +), and with a time-orientation, that is a smooth timelike vector field v, (throughout this paper smooth means C ∞ ). A curve γ in M is a continuous, piecewise smooth, regular function γ : I −→ M, where I is a connected subset of R with nonempty interior. It is called timelike, lightlike, spacelike if respectively g(γ,γ) < 0, = 0, > 0 all along γ, whereγ = dγ dt . Assume now that γ is causal, i.e. a nonspacelike curve; we can classify it according to the time-orientation v as future-directed (f-d) or past-directed (p-d) if respectively g(γ, v) < 0, > 0 all along γ. When γ is f-d and lim t→sup I γ(t) exists (lim t→inf I γ(t)), then it is said to have a future (past) endpoint. Otherwise, it is said to be future (past) endless; γ is said to be endless if neither of them exist. Analogous definitions are assumed for p-d causal curves. The chronological future I + (S), the causal future J + (S) and the future domain of dependence D + (S) of a subset S ⊂ M are defined as: A subset S of M is achronal (acausal) if for any pair x 1 , x 2 ∈ S we have x 1 ∈ I(x 2 ) (x 1 ∈ J(x 2 )). Two subsets S 1 , S 2 ⊆ M, are said to be causally disjoint, whenever A (acausal) Cauchy surface is an achronal (acausal) set C verifying D(C) = M. Any Cauchy surface is a closed, arcwise connected, Lipschitz hypersurface of M. Furthermore all the Cauchy surfaces are homeomorphic. A spacelike Cauchy surface is a smooth Cauchy surface whose tangent space is everywhere spacelike. It turns out that any spacelike Cauchy surface is acausal. A spacetime M satisfies the strong causality condition if the following property is verified for any point x of M: any open neighborhood U of x contains an open neighborhood V of x such that for any pair x 1 , x 2 ∈ V the set J + (x 1 ) ∩ J − (x 2 ) is either empty or contained in V . The spacetime is said to be globally hyperbolic if it satisfies the strong causality condition and if for any pair x 1 , x 2 ∈ M, the set J + (x 1 ) ∩ J − (x 2 ) is either empty or compact. It turns out that M is globally hyperbolic if, and only if, it admits a Cauchy surface. We recall that if M is a globally hyperbolic spacetime, for any relatively compact set K we have: 5. J + (cl(K)) is closed; 6. D + (cl(K)) is compact; by the properties 4. and 5. we have that 7. J + (cl(K)) = cl J + (K) . Although, a globally hyperbolic spacetime M can be continuously ( smoothly ) foliated by (spacelike) Cauchy surfaces [9], for our purposes it is enough that for any Cauchy surface C the spacetime M admits a foliation "based" on C, that is there exists a 3-dimensional manifold Σ and a homeomorphism F : R × Σ −→ M such that Σ t ≡ F (t, Σ) are topological hypersurfaces of M, Σ 0 = C, but, in general, for t = 0 the surface Σ t need not be a Cauchy surfaces [8]. Proof. Let F be the foliation of M based on C as described above. 
Let (τ (x), y(x)) ≡ F −1 (x) for x ∈ M. Note that is a deformation retract. Hence π 1 (M) is isomorphic to π 1 (C). Let h 1 (t, s) ≡ h(t, γ(s)). Then curve γ(s) = h 1 (0, s) is homotopic to the curve β(s) ≡ h 1 (1, s) lying in C. Given x ∈ M with x = β(1), β(0). It is clear that, as C is 3-dimensional surface, β is homotopic in C to a curve σ lying in C \ {x}. Now, note that the relation ⊥, defined by (16) The set of diamonds Consider a globally hyperbolic spacetime M. We have already observed that the set of regular diamonds K ⋄ of M is an element of the set of indices I(M, ⊥) associated with (M, ⊥), where ⊥ is the relation defined by (16). We now introduce the set of diamonds K of M. We prove that K is a locally relatively connected refinement of K ⋄ , and that diamonds have pathwise connected causal complements. The last part of this section is devoted to study the causal punctures of K induced by points of the spacetime. Since G is simply connected, by Lemma 3.1, D(G) is simply connected. Moreover, note that G is relatively compact in C. As C is closed in M, G is relatively compact in M. By 6., D(G) is relatively compact in M. Finally, K ⊂ K ⋄ (see definition of K ⋄ in [16]). As K is a basis for the topology of M, then K is a locally relatively connected refinement of K ⋄ and K ∈ I(M, ⊥) (see Section 2.5.2). The next aim is to show that the causal complement O ⊥ of a diamond, which is defined as (see Section 2.1) is pathwise connected in K. To this end, by (14), it is enough to prove that As claimed at the beginning of Section 3, we have established that K is a locally relatively connected refinement of K ⋄ , and that any element of K has a pathwise connected causal complement. From now on we will focus on K, because this will be the index set that we will use to study superselection sectors. Causal punctures The causal puncture of K induced by a point x ∈ M, is the poset K x defined as the collection Considered as a spacetime M x is globally hyperbolic [29]. An element O ∈ K x does not need to be a diamond of the spacetime M x . However, K x is a basis for the topology of M x . Furthermore as M x is arcwise connected, K x is pathwise connected. Now, for any O ∈ K x we define namely, the causal complement of O in K x . Proof. Note that O ⊥ | Kx is a sieve, hence its enough to prove that ). This is an arcwise Note that K x | O is a sieve of K. Proof. O is a globally hyperbolic spacetime, therefore there is a spacelike If this holds, since K x | O is sieve and D(C \ {x}) is arcwise connected, by (14), K x | O is pathwise connected. We obtain the proof of this equality in two steps. First. By Lemma 3.8 we have that Then, any f-d endless causal curve through x 2 meets C in C\{x}, therefore x 2 ∈ D(C\{x}) and D(C\{x}) = (M\J(x))∩O. This and ( * ), entail that As the last issue of this section, consider the set K x × K x and endow it with the order relation defied as Net-cohomology Before studying the net-cohomology of K, it is worth showing how the topological properties of the spacetime stated in Lemma 3.1 are codified in the poset structure of K. Lemma 3.11. The following properties hold. (a) π 1 (K) ≃ π 1 (M) ≃ π 1 (C) for any Cauchy surface C of M. (b) Consider a path p ∈ P(a 0 ) where a 0 ∈ Σ 0 (K) is, as a diamond, based on a spacelike Cauchy surface C 0 . Let x ∈ C 0 such that cl(a 0 ) ∩ x = ∅. Then p is homotopic to a path q = {b n , . . . , b 1 } ∈ P(a 0 ) such that |b i |, as a diamond, is based on C 0 and |b i | ∩ x = ∅ for any i. Proof. 
(a) follows from Theorem 2.18 and from Lemma 3.1. (b) As observed in Section 2.5, since the elements of K are arcwise connected sets of M, there exists a curve γ : [0, 1] −→ M, with γ(0) = γ(1) ∈ a 0 ∩ C 0 , and such that p ∈ App(γ). By Lemma 3.1 γ is homotopic to a closed curve β lying in C 0 \ {x}. This allows us to find a path q ∈ App(β) such that the elements q, as diamond, are based on C 0 . Lemma 2.17 completes the proof. Let A K be an irreducible net of local algebras defined on a Hilbert space H o . Let Z 1 (A K ) be the set of 1-cocycles of K with values on A K and let use denote by Z 1 t (A K ) those elements of Z 1 (A K ) which are trivial in B(H o ). As a trivial application of Corollary 2.21, we have that if M is simply connected, then Z 1 (A K ) = Z 1 t (A K ). This result answers the question posed at the beginning of this paper, saying that the compactness of the Cauchy surfaces of the spacetime is not a topological obstruction to the triviality in B(H o ) of 1-cocycles. As already observed, the only possible obstruction in this sense is the nonsimply connectedness of the spacetime. The next proposition will turn out to be fundamental for the theory of superselection sectors because it provides a way to prove triviality in B(H o ) of 1-cocycles on an arbitrary globally hyperbolic spacetime. Proposition 3.12. Assume that z ∈ Z 1 (A K ) is path-independent on K x for any point x ∈ M. Then z is path-independent on K, therefore z ∈ Z 1 t (A K ). Proof. Let p ∈ P(a 0 ) and let C 0 be the Cauchy surface where a 0 is based. Let us take x ∈ C 0 such that cl(a 0 ) ∩ x = ∅. By Lemma 3.11b p is homotopic to a path q ∈ P(a 0 ) whose elements are based on C 0 \ {x}. This means that q ∈ K x . z(p) = z(q) = ½ because p and q are homotopic and because z is path-independent on K x for any x ∈ M. Superselection sectors We begin the study of the superselection sectors of a net of local observables on an arbitrary globally hyperbolic spacetime M, with dimension ≥ 3. We start by describing the setting in which we study superselection sectors. Afterwards we explain the strategy we will follow, which consists in deducing the global properties of superselection sectors from the local ones. We refer the reader to the appendix for all the categorical notions used in this section. Let K be the set of diamonds of M. We consider an irreducible net of local algebras defined on a fixed infinite dimensional separable Hilbert space H o . We assume that A K satisfies the following two properties. • Punctured Haag duality, that means that for any x ∈ M, where K x is the causal puncture of K induced by x (17). • The Borchers property, that means that , with values in A K . Then, the superselection sectors are the equivalence classes [z] of the irreducible elements z of Z 1 t (A K ). From now on, our aim will be to prove that Z 1 t (A K ) is a tensor C * −category with a symmetry, left-inverses, and that any object with finite statistics has conjugates. Note, that by the Borchers property, Z 1 t (A K ) is closed under direct sums and subobjects. We now discuss the differences between our setting and that used in [16,27]. First, we have used the set of diamonds K, instead of the set of regular diamonds K ⋄ , as index set of the net of local algebras. Secondly, we assume punctured Haag duality while in the cited papers the authors assume Haag duality, that is for any O 1 ∈ K. Punctured Haag duality was introduced in [27]. 
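Schematically, and in the form standard in [16,27], Haag duality requires that the algebra of each diamond exhausts the commutant of the algebra generated by its causal complement, while punctured Haag duality imposes the analogous relation relative to every causal puncture K_x; up to the precise conventions of those references this reads:

```latex
% Haag duality for the net A_K (standard form, as in [16,27]):
\[
  \mathcal{A}(O_1) \;=\; \mathcal{A}(O_1^{\perp})' \qquad \text{for any } O_1 \in K .
\]
% Punctured Haag duality: the analogous identity is required for each x in M,
% with the causal complement now computed within the punctured index set K_x,
% so that every restricted net A_{K_x} again satisfies Haag duality.
```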
Both the existence of models satisfying punctured Haag duality and the relation of this property to other properties of A K have been shown in [29]. It turns out that punctured Haag duality entails Haag duality and that A K is locally definite, namely for any x ∈ M. The reason why we assume punctured Haag duality will become clear in the next section. Remark 4.1. It is worth observing that in [29], punctured Haag duality has been shown for the net of local algebras F K⋄ , indexed by the set of regular diamonds, and associated with the free Klein-Gordon field in the representation induced by quasi-free Hadamard states. One might wonder if this property holds also for the net of fields F K⋄|K obtained by restricting F K⋄ to K. The answer is yes, because the net F K⋄ is additive 3 . As observed in Section 3, K ∈ I(M, ⊥) and K K ⋄ . Then, it can be easily checked that punctured Haag duality for F K⋄ entails punctured Haag duality for F K⋄|K . Presheaves and the strategy for studying superselection sectors The way we study superselection sectors resembles a standard argument of differential geometry. To prove the existence of global objects, like for instance the affine connection in a Riemannian manifold, one first shows that these objects exist locally, afterwards one checks that these local constructions can be glued together to form an object defined over all the manifold. Here, the role of the manifold is played by the category Z 1 t (A K ) and the objects that we want to construct are a tensor product, a symmetry and a conjugation. To see what categories play the role of "charts" of Z 1 t (A K ) some preliminary notions are necessary. is the algebra associated with the causal complement of O (see Section 2.1). The stalk in a point x is the Note that A ⊥ (x) is also equal to the C * −algebra generated by the algebras is a net of local algebras over the poset K x . By local definiteness and punctured Haag duality, it can be easily verified that the net A Kx is irreducible and satisfies Haag duality. Furthermore, A Kx inherits from A K the Borchers property. Now, let Z 1 t (A Kx ) be the C * −category of the 1-cocycles of K x , trivial in B(H o ), with values in A Kx . Observe that the category Z 1 t (A K ) is connected to Z 1 t (A Kx ) by a covariant functor defined as endomorphism of the net A Kx , but it is not clear whether this is extendible to an endomorphism of A ⊥ (x): since K x might not be directed, A ⊥ (x) might not be the C * −inductive limit of A Kx . This problem can be overcome by applying, in a suitable way, a different procedure which makes use of the underlying presheaf structure [28]. where p is path in K x such that ∂ 1 p ⊂ O and ∂ 0 p = a. This definition does not depend on the path chosen and on the choice of the starting point ∂ 1 p, as the following lemma shows. Proof. Note that z(p) · A · z(p) * = z(q) · z(q * p) · A · z(q * p) * · z(q) * , for any A ∈ A(O ⊥ ). q * p is a path in K x whose endpoints are contained in O. This means that the endpoints of q * p belong to K x | O , see (19). As K x | O is pathwise connected, Lemma 3.10, we can find a path q 1 in K x | O with the same endpoints of q * p. By path-independence we have that z(q * p) = z(q 1 ). But z(q 1 ) ⊆ A(O) because the support |q 1 | is contained in O. Therefore z(q * p) · A = A · z(q * p) for any A ∈ A(O ⊥ ), completing the proof. . This means that the collection is a morphism of the presheaf . It then follows that y z (a) is extendible to an endomorphism of A ⊥ (x) (see the definition of A ⊥ (x) (24)). 
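For orientation, the action of y_z(a) can be read off from the computations in the lemmas that follow (in particular the proof of the gluing lemma in Section 4.5 uses the formula y_z(a)(A) = z(p)·A·z(p)* explicitly); a schematic restatement, with the hypotheses stated in the text above:

```latex
% Schematic form of the morphism y_z(a) associated with a 1-cocycle z:
\[
  y_z(a)(A) \;=\; z(p)\, A\, z(p)^{*}, \qquad A \in \mathcal{A}(O^{\perp}),
\]
% where p is a path in K_x with \partial_0 p = a and \partial_1 p \subset O.
% The lemma that follows shows that this does not depend on the choice of p
% nor on the starting point \partial_1 p, and the compatibility with the
% presheaf structure lets these maps extend to an endomorphism of A^\perp(x).
```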
Lemma 4.5. The following properties hold: (a) y z (a) : (e) y z (a)(A(a 1 )) ⊆ A(a 1 ) for any a 1 ∈ K x with a ⊆ a 1 . Proof. (a) is obvious from the Definition (28). Hence ∂ 1 p, ∂ 0 p ⊥ a 1 . As the causal complement of a 1 is pathwise connected in K x (Lemma 3.9), the proof follows by (4). (c) and (d) follow by routine calculations. (e) follows by (b) because A Kx fulfils Haag duality. Note that {y z (a) | a ∈ Σ 0 (K x )} is a collection of endomorphisms of the algebra A ⊥ (x) which are localized and transportable in the same sense of the DHR analysis: Lemma 4.5b says that y z (a) localized in a; Lemma 4.5c says that y z (a) is transportable to any a 1 ∈ Σ 0 (K x ). Tensor structure The tensor product on Z 1 t (A Kx ) is defined by means of the localized and transportable endomorphisms of A ⊥ (x) associated with 1-cocycles. To this end some preliminaries are necessary. Let for any z, z 1 , z 2 , z 3 ∈ Z 1 t (A Kx ), t ∈ (z, z 2 ) and s ∈ (z 1 , z 3 ). The tensor product in Z 1 t (A Kx ), that we will define later, is a particular case of ×. Proof. (a) By using Lemma 4.5c and Lemma 4.5d we have where the equality ∂ 0 p 1 = ∂ 1 p 2 has been used. We now are ready to introduce the tensor product. Let us define for any z, z 1 , z 2 , z 3 ∈ Z 1 t (A Kx ), t ∈ (z, z 1 ) and s ∈ (z 2 , z 3 ). . By Lemma 4.5e we have that (z ⊗ z 1 )(b) ∈ A(|b|). Given c ∈ Σ 2 (K x ), by applying Lemma 4.6b with respect to the path {∂ 0 c, ∂ 2 c} we have proving that z ⊗ z 1 satisfies the 1-cocycle identity. By Lemma 4.6b it follows that (z ⊗ z 1 )(b n ) · · · (z ⊗ z 1 )(b 1 ) = z(p) · y z (∂ 1 p)(z 1 (p)), for any path p = {b n , . . . , b 1 }. Therefore as z and z 1 are path-independent in K x , (z ⊗ z 1 ) is path independent in K x . Namely, z ⊗ z 1 ∈ Z 1 t (A Kx ). If t ∈ (z, z 2 ) and s ∈ (z 1 , z 3 ), then by Lemma 4.6a it follows that t ⊗ s ∈ (z ⊗ z 1 , z 2 ⊗ z 3 ). The rest of the properties that ⊗ has to satisfy to be a tensor product in Z 1 t (A Kx ) can be easily checked. Symmetry and Statistics The following lemma is fundamental for the existence of a symmetry. Lemma 4.8. Let p, q be a pair of paths in K x with ∂ i p ⊥ ∂ i q for i = 0, 1. Theorem 4.9. There exists a symmetry ε in where p 1 = p * b and q 1 = q * b and it is trivial to check that p 1 and q 1 satisfy the properties written in the statement. Given t ∈ (z, z 2 ), s ∈ (z 1 , z 3 ), and two paths p, q with ∂ 1 p = ∂ 1 q = a, ∂ 0 p ⊥ ∂ 0 q, by Lemma 4.6 we have The rest of the properties can be easily checked. Now, in order to classify the statistics of the irreducible elements of Z 1 t (A Kx ) we have to prove the existence of left inverses (see Appendix). To this end, consider a sequence {O n } n∈N of diamonds of K such that For any n let us take a n ∈ Σ 0 (K x ) such that a n ⊂ O n . We get in this way an asymptotically causally disjoint sequence {a n } n∈N : for any a ∈ Σ 0 (K x ) there exists k(a) ∈ N such that for any n ≥ k(a) we have a n ⊥ a. This is enough to prove the existence of left inverses. Following [16,27], given z ∈ Z 1 t (A Kx ) and a ∈ Σ 0 (K x ), let p n be a path from a to a n . Let be a Banach-limit over n. φ z a : A ⊥ (x) −→ B(H o ) is a positive linear map and, it can be easily checked, that and that for any b ∈ Σ 1 (K x ) we have where φ z a is defined by (33). Then, the collection φ z ≡ {φ z z 1 ,z 2 | z 1 , z 2 ∈ Z 1 t (A Kx )} is a left inverse of z. Since . The other identity follows by replacing, in this reasoning, b by b. Proposition 4.12. Let z be a simple object. 
Then, y z (a) : is an automorphism, for any a ∈ Σ 0 (K x ). Proof. Let O ∈ K with x ∈ O and O ⊥ a. As the causal complement of a in K x is pathwise connected, Lemma 3.9, there is a path q of the form b * p, where b is a 1-simplex such that ∂ 0 b = a and ∂ 1 b ⊥ a; p is a path satisfying Now, observe that by Lemma 4.5b we have that y z (a)(z(p)) = z(p) and that y z (∂ 1 p)(A) = A for any A ∈ A(O ⊥ ). By using these relations and the previous lemma, for any A ∈ A(O ⊥ ) we have That is y z (a) z(q) * · A · z(q) = A for any A ∈ A(O ⊥ ). This means that A ⊥ (x) ⊆ y z (a)(A ⊥ (x)), that entails that y z (a) is an automorphism of A ⊥ (x). Assume that z is a simple object of Z 1 t (A Kx ). Let us denote by y z−1 (a) the inverse of y z (a). Clearly, y z−1 (a) is an automorphism of A ⊥ (x) localized in a. Let We claim that z is the conjugate object of z. The proof is achieved in two steps. Proof. Within this proof, to save space, we will omit the superscript z from y z (a) and y z−1 (a). First we prove the relations written above in the case that p is a 1-simplex b. For any A ∈ A ⊥ (x) we have Using this relation we obtain z(b) · y −1 ( completing the first part of the proof. We now proceed by induction: let p = {b n , . . . , b 1 } and assume that the statement holds for the path q = {b n−1 , . . . , b 1 }, then The other relation is obtained in a similar way. Lemma 4.14. Let z be a simple object of Z 1 t (A Kx ). Then z ∈ Z 1 t (A Kx ) and is a conjugate object of z. Proof. By Lemma 4.5e we have that z Where the relations ∂ 00 c = ∂ 01 c ∂ 10 c = ∂ 02 c have been used. Finally by Lemma 4.13 z is path-independent in K x because z is path-independent in K x . Therefore, z is trivial in B(H o ), thus is an object of Z 1 t (A Kx ). Now we have to prove that z is the conjugate object of z (see definition in Appendix). We need a preliminary observation. Let y z (a) be the endomorphisms of A ⊥ (x) associated with z. Then , which proves (38). Now, by (38) and by Lemma 4.13 we have if we take r = r = ½, then r and r satisfy the conjugate equations for z and z, completing the proof. According to the discussion made at the beginning of this section we have Global theory We now turn back to study Z 1 t (A K ). The aim of this section is to show that all the constructions we have made in the categories Z 1 t (A Kx ) can be glued together and extended to corresponding constructions on Z 1 t (A K ). Given z ∈ Z 1 t (A K ) let us denote by y z x (a) the morphism of the algebra (29). For any a ∈ Σ 0 (K) we define We call y z (a) a morphism of stalks because it is compatible with the presheaf structure, that is given O ∈ K, for any pair of points x, . This is an easy consequence of the following for any pair x 1 , x 2 ∈ M with O ∈ K x 1 ∩ K x 2 . Let p be a path in K for which there exist a pair of points x 1 , x 2 ∈ M with |p| ⊂ K x 1 ∩ K x 2 . Then y z x 1 (a)(z(p)) = y z x 2 (a)(z(p)). Proof. By (28) and (29), for A ∈ A(O) we have that y z x i (a)(A) = z(p i ) · A · z(p i ) * , for i = 1, 2, where p i is a path in K x i such that ∂ 0 p i = a, ∂ 1 p i = a i and a i is contained in some diamond O i such that x i ∈ O i and O i ⊥ O for i=1,2. Note that p 2 * p 1 is a path from a 1 to a 2 and that a 1 , a 2 ⊥ O. By (4) we have , which proves (40). Now, let p and x 1 , x 2 be as in the statement. By applying (40) we have y z x 1 (a)(z(p)) = y z x 1 (a)(z(b n )) · · · y z x 1 (a)(z(b 1 )) = y z x 2 (a)(z(b n )) · · · y z x 2 (a)(z(b 1 )) = y z x 2 (a)(z(p)), completing the proof. 
for any pair of points x 1 , x 2 with O ∈ K x 1 ∩ K x 2 . In fact, by using the gluing lemma for y z (a) we have for any A ∈ A(O), which proves ( * ). Within this proof we have used the identities: Both the identities derive from the Lemma 4.5e, and from the fact that y z x (a) is an automorphism of A ⊥ (x). Now, recall that the conjugate z . Given b ∈ Σ 1 (K), by applying ( * ) we have that for any pair of points x 1 , x 2 with |b| ∈ K x 1 ∩ K x 2 . Therefore, by defining for some point x with |b| ∈ K x , by Proposition 4.2 we have that z ∈ Z 1 t (A K ). Furthermore, by (38) we have y z−1 (a) = y z (a), where y z (a) is the morphism of stalks associated with z. To prove that z is the conjugate of z it is enough to observe that for any b ∈ Σ 1 (K) we have (z ⊗ z)(b) = (z x ⊗ x z x )(b) = ½, (z ⊗ z)(b) = (z x ⊗ x z x )(b) = ½ (see within the proof of Lemma 4.14) for some x ∈ M with |b| ∈ K x . By defining r = r = ½ we have that r and r satisfy the conjugate equations for z and z, completing the proof. Concluding remarks (1) The topology of the spacetime affects the net-cohomology of posets. We have shown that the poset, used as index set of a net of local algebras, is nondirected when the spacetime is either nonsimply connected or has compact Cauchy surfaces. In the former case, furthermore, there might exist 1-cocycles which are nontrivial in B(H o ). In spite of these facts the structure of superselection sectors of DHR-type is the same as in the case of the Minkowski space (as one can expect because of the sharp localization): sectors define a C * −category in which the charge structure manifests itself by the existence of a tensor product, a symmetry, and a conjugation. An aspect of the theory, not covered by this paper, and that deserves further investigation is the reconstruction of the net of local fields and of the gauge group from the net of local observables and the superselection sectors. The mathematical machinery developed in [12] to prove the reconstruction theorem in the Minkowski space does not apply as it stands when the index set of the net of local observables is nondirected. (2) In Section 2 we presented net-cohomology in terms of abstract posets. The intention is to provide a general framework for the theory of superselection sectors. In particular, we also hope to find applications in the study of sectors which might be induced by the nontrivial topology of spacetimes. It has been shown in [1] that the topology of Schwartzschild spacetime, a space whose second homotopy group is nontrivial, might induce superselection sectors. However, as observed earlier, it is not possible, up until now, to apply the ideas of DHR-analysis to these sectors since their localization properties are not known. However the results obtained in this paper allow us to make some speculations in the case that the spacetime is nonsimply connected: the existence of 1-cocycles nontrivial in B(H o ), might be related to the existence of superselection sectors induced by the nontrivial topology of the spacetime. In fact, these cocycles define nontrivial representations of the fundamental group of the spacetime (theorems 2.8 and 2.18). However, what is missing in this interpretation is the proof that these 1-cocycles are associated with representations of the net of local observables. We foresee to approach this problem in the future. Finally, we believe that this framework could be suitably generalized for applications in the context of the generally locally covariant quantum field theories [4], [7]. 
(3) Some techniques introduced in this paper present analogies with techniques adopted to study superselection sectors of conformally covariant theories on the circle S 1 . In these theories, the spacetime is the circle S 1 ; the index set for the net of local observables is the set J of the open intervals of S 1 ; the causal disjointness relation is the disjointness: given I, J ∈ J, then, I ⊥ J if I ∩ J = ∅. The analogies arise because, referring to Section 2, the poset formed by J with the inclusion order relation, is nondirected, pathwise connected, and nonsimply connected. It is usual in these theories to restrict the study of superselection sectors to the spacetime S 1 /{x} for x ∈ S 1 , i.e. the causal puncture of S 1 in x (see for instance [6,14,15]); the same idea has been used in [2] to study superselection sectors over compact spaces 4 . The punctured Haag duality is strictly related to strong additivity (see [20] and references therein). Finally, in [14] in order to prove that endomorphisms of the net are extendible to the universal C * −algebra, the authors' need to check the invariance of these extensions for homotopic paths (this definition of a homotopy of paths is a particular case of that given in [27] p.322). (4) The way we define the first homotopy group of a poset is very similar to some constructions in algebraic topology. We are referring to the edge paths group of a simplicial complex [30] and to the first homotopy group of a Kan complex [23]. Although similar they are different. Indeed, the simplicial set Σ * (P) of a poset P is not a simplicial complex. Furthermore, if P is not directed, then Σ * (P) is not a Kan complex. A Tensor C * −categories We give some basics definitions and results on tensor C * -categories. References for this appendix are [22,21]. Let C be a category. We denote by z, z 1 , z 2 , . . . the objects of the category and the set of the arrows between z, z 1 by (z, z 1 ). The composition of arrows is indicated by "·" and the unit arrow of z by 1 z . Tensor C * −categories -A category C is said to be a C * -category if the set of the arrows between two objects (z, z 1 ) is a complex Banach space and the composition between arrows is bilinear; there should be an adjoint, that is an involutive contravariant functor * acting as the identity on the objects and the norm should satisfy the C * -property, namely r * r = r 2 for each r ∈ (z, z 1 ). Notice, that if C is a C * -category then (z, z) is a C * -algebra for each z. Assume that C is a C * -category. An arrow v ∈ (z, z 1 ) is said to be an isometry if v * · v = 1 z ; a unitary, if it is an isometry and v · v * = 1 z 1 . The property of admitting a unitary arrow, defines an equivalence relation on the set of the objects of the category. We denote by the symbol [z] the unitary equivalence class of the object z. An object z is said to be irreducible if (z, z) = C · 1 z . C is said to be closed under subobjects if for each orthogonal projection e ∈ (z, z), e = 0 there exists an isometry v ∈ (z 1 , z) such that v · v * = e. C is said to be closed under direct sums, if given z i i = 1, 2 there exists an object z and two isometries w i ∈ (z i , z) such that w 1 · w * 1 + w 2 · w * 2 = 1 z . A strict tensor C * -category (or tensor C * -category) is a C * -category x" (see Section 4.2.2) which is sufficient for the analysis of the categories Z 1 t (A Kx ). 
The possible statistics of z are classified by the statistical phase χ(z) distinguishing para-Bose (1) and para-Fermi (−1) statistics and by the statistical dimension d(z) giving the order of the parastatistics. Ordinary Bose and Fermi statistics correspond to d(z) = 1. The objects with d(z) = 1 are called simple objects. The following properties are equivalent ( [28]): z is simple ⇐⇒ ε(z, z) = χ(z) · 1 z⊗z ⇐⇒ z ⊗n is irreducible ∀n ∈ N. Conjugation is a property stable under, subobjects, direct sums, tensor products and, furthermore, it is stable under equivalence. It turns out that z has conjugates ⇒ z has finite statistics. The full subcategory of objects with finite statistics C f has conjugates if, and only if, each object with statistical dimension equal to one has conjugates (see [11,19]). First we observe that if each irreducible object of C f has conjugates, then any object of C f has conjugates, because any object of C f is a finite direct sum of irreducibles, and because conjugation is stable under direct sums. Secondly, note that if z is an irreducible object with statistical dimension d(z), then there exist a pair of isometries v ∈ (z 0 , z ⊗ d(z) ) and w ∈ (z 1 , z ⊗ d(z)−1 ) where z 0 is a simple object. Assume given z 0 and a pair of arrows s, s which solve the conjugate equations for z 0 and z 0 . Let φ z is a standard left inverses of z. Setting z ≡ z 1 ⊗ z 0 ; r ≡ d(z) 1/2 · (1 z ⊗ w * ⊗ 1 z 0 ) · v ⊗ 1 z 0 · s; r ≡ d(z) · φ z ι,z⊗z (r ⊗ 1 z ), one can easily show that r, r solve the conjugate equations for z and z ( [11]).
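For completeness, the conjugate equations invoked throughout Section 4 and in this appendix have the following standard form, up to the conventions of the references cited here: a conjugate of z is an object z̄ together with arrows r ∈ (ι, z̄ ⊗ z) and r̄ ∈ (ι, z ⊗ z̄) such that

```latex
% Conjugate equations (standard form):
\[
  (\bar r^{\,*} \otimes 1_{z}) \cdot (1_{z} \otimes r) \;=\; 1_{z},
  \qquad
  (r^{*} \otimes 1_{\bar z}) \cdot (1_{\bar z} \otimes \bar r) \;=\; 1_{\bar z}.
\]
% For an irreducible z with a standard solution one also has
% r^* \cdot r = \bar r^{\,*} \cdot \bar r = d(z)\, 1_\iota,
% where d(z) is the statistical dimension.
```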
2014-10-01T00:00:00.000Z
2004-12-05T00:00:00.000
{ "year": 2004, "sha1": "aac36bb298f82e4ff7ee2f0c7528ce66bd4c633a", "oa_license": null, "oa_url": "http://arxiv.org/pdf/math-ph/0412014", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "aac36bb298f82e4ff7ee2f0c7528ce66bd4c633a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
235656811
pes2o/s2orc
v3-fos-license
Optimization method for accurate positioning seeding based on sowing decision Variable seeding by sowing decision can improve the utilization rate of resources. In order to achieve more accurate position control and sowing rate control, precision sowing decision control system was developed, and the integral separation Proportional Integral Derivative (PID) control algorithm of metering disc speed and grid based dead reckoning method were proposed. In order to test the performance of the system, the experiment of the influence of integral switching on step response and the experiment of seeding response based on simulated sowing decision were carried out. The results showed that average lag distance based on dead reckoning was 72.2 cm less than that of non-dead reckoning; system response distance of integral separation PID control algorithm was 43.1 cm shorter than that of ordinary PID control algorithm. The field experiment showed that the error of the monitoring sowing rate relative to the actual sowing rate was 3.5%, the average transition distance within the speed range of 3-9 km/h was 139.5 cm, and the standard deviation was 12.8 cm. The developed seeding control system improves the accuracy of seeding based on sowing decision, and provides a technical reference for low-cost sowing decision based control system in China. Introduction  Accurate sowing decision is a technology that uses modern agricultural information technology to comprehensively analyze information including soil, environment, and historical yield to get detailed planting density of sowing plots and make accurate sowing. This technology combines positioning technology and variable rate application (VRA) technology for agricultural decision operations. There are many researches on variable rate fertilization [1,2] and accurate pesticide spraying [3][4][5][6] . In recent years, more and more applications have been made in the sowing process [7,8] . Accurate sowing requires the corn seeder to sow the seeds into the soil according to certain row spacing during the journey. Due to uneven ground, machine vibration and other reasons will affect the uniformity of seed-metering, many scholars have studied the variable control system. Through experimental research, they have explored the influence of vehicle speed [9,10] , seed filling [11,12] , vibration [13] and other factors on the uniformity of seed-metering, greatly improving the response speed and stability of the rotating speed of the seed-metering plate. The control system has been also made many improvements, including the change from single chip microcomputer control system to electronic control unit (ECU) integrated control system, the change of system communication mode from serial port communication to controller area network (CAN) communication mode [9,14] , and the gradual development of seed-metering plate drive mode from ground wheel drive to electric drive [14,15] . The speed stability of the electric-driven seed-metering plate is directly related to the characteristics of the drive motor and the control algorithm. In the accurate sowing process based on sowing decision, large decision level difference or sudden change of driving speed will cause fluctuation or even oscillation of the motor speed [14] , affecting the effect of decision-based sowing. Therefore, good step response of motor speed [16] is the key to accurate sowing decision. In this article, the closed-loop control algorithm of motor speed is optimized. 
Accurate sowing based on sowing decision requires not only uniformity of sowing, but also good analytical ability of the control system to the decision, i.e., small discrimination error of sowing decision. In the process of control execution, the system is affected by mechanical transmission, system inertia and global positioning system (GPS) information receiving period, and the system has certain delay. For this reason, scholars have also carried out a large number of researches. Wei et al. [5] analyzed the reasons for the large error of prescription value in the process of variable prescription spraying through interpretation simulation tests, and quantitatively analyzed the interpretation time of prescription diagram and grid discrimination error. Chen et al. [6] analyzed the influence of GPS positioning accuracy and period on network positioning discrimination error in prescription spraying. In order to improve the positioning accuracy, Yu et al. [17] proposed a method of positioning variable fertilizer applicator with sensor instead of GPS, and used sensor ranging to eliminate accumulated error. Gao et al. [18] adopted delay algorithm to optimize addressing to improve the implementability and accuracy of the interpretation system when designing the real-time interpretation system of unmanned aerial vehicle (UAV) variable spray prescription diagram. Li et al. [19] used Gehash coding to design a preliminary location algorithm, which greatly improved the efficiency compared with the traditional longitude and latitude-based query. He et al. [20] developed a sowing lag compensation algorithm, which can obtain a shorter lag distance. As can be seen from that summary of the literature, during decision-based operations, GPS positioning accuracy [21] , decision interpretation time and GPS information period are the main reasons for system time delay. At the same time, GPS positioning accuracy and grid positioning discrimination accuracy are the main reasons that affect the analytical value. GPS positioning accuracy and period can be improved by hardware, while grid positioning discrimination accuracy is not only affected by positioning accuracy but also affected by positioning identification algorithm. In order to further improve the accuracy of grid positioning discrimination, this article designs a method to correct the lag distance through the lag model. The purpose of this article is to explore a control optimization method to reduce the speed fluctuation of the seed-metering motor caused by the large difference of sowing decision level, and propose a dead reckoning method to improve the accuracy of grid positioning discrimination, so as to improve the stability of system response and the accuracy of grid positioning discrimination. Variable sowing system based on sowing decision Precision variable seeder is mainly composed of seed-metering unit, tablet PC Geshem PPC-GS0792T (Shenzhen Dehang Intelligent Technology Co., Ltd., Shenzhen Guangdong, China), seed-metering drive integrated control unit ECU HYDAC-TTC32, Zhejiang AKELC AQMD3620NS-A motor driver, seed-metering motor (12 V brush DC motor), Hall speed sensor. The tablet PC is mainly used for sowing decision control operation, sending control instructions to the ECU, which is used for receiving instructions sent by the tablet PC and data from sensors, and converting sowing speed values into voltage signals through control algorithms. 
Voltage signal is sent to each motor driver through CANBUS, and each sowing unit motor driver receives the rotating speed data signal according to its different CAN ID number to adjust the rotating speed of the seed-metering plate. The seed-metering motor adopts brushless DC motor, and Hall sensor is installed at the shaft end to realize real-time measurement of seed-metering speed. The seeder controller supports CAN bus and ISO11783 protocol for reading, analyzing and controlling sowing decisions, and adopts closed-loop proportional-integralderivative (PID) algorithm to control the value of motor speed. The control system records the seed-metering speed and driving speed in real time and stores them in Microsoft Access database. The variable sowing system based on sowing decision integrates GPS positioning receiver MH16-L3 (Shanghai Maihong Electronic Technology Co., Ltd., Shanghai, China) on the basis of accurate variable seeder to provide the geographical location of the system. The control interface on the tablet PC adds the functions of reading, analyzing and displaying sowing decisions. The overall design of the control system is shown in Figure 1. The basic principle of the system is to copy the operation prescription vector file based on environmental information such as soil moisture content and climate to the vehicle-mounted computer through mobile storage, and the vehicle-mounted computer displays the vector file to the electronic map according to the coordinate information of the vector file. The system receives NMEA0183 format navigation message of GPS through RS232 serial port, obtains GPS positioning data in real time and analyzes planting density data at a specific position. The analyzed variables are sent to the seeder controller in the form of character strings. The seeder controller converts the planting density data into control amount, and the sowing actuator adjusts the variable sowing in real time. The variable seeder based on sowing decision is shown in Figure 2. The machine is a 4-row no tillage seeding and fertilizing integrated machine. Each row of sowing unit adopts independent drive, that is, each sowing unit is driven by a motor. The encoder is installed on the ground wheel to measure the speed. The vehicle computer is equipped with a seeding sowing -making system, which is installed in the cab, and the GPS antenna is installed on the roof. Software design of sowing decision and control system The sowing decision applied in this article is provided by the cooperative organization. In order to visualize the sowing prescription data and facilitate the reading of the control system, each shape file format prescription chart file is defined: Integer field ID, floating-point field Volum1 which represents sowing amount per hectare, floating-point field Volum2 which is used to store decision coding information corresponding to field Volum1, double-precision floating-point field Long which stores longitude coordinates of prescription chart, and double-precision floating-point field Lat which stores latitude coordinates. Based on the accurate sowing decision, the starting point is to make the seeds have the same nutrient supply. Referring to the agronomic requirements of maize sowing in Xuchang, Henan, the planting density is 57 000-75 000 plants/hm 2 . The decision information in the airborne control terminal uses decision codes 1-5 to represent the level of planting density, where 1 represents the lowest planting density and 5 represents the highest planting density. 
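As a concrete illustration of how such a prescription file can be read and turned into per-position planting densities, the following sketch uses the geopandas library. The attribute names (ID, Volum1, Volum2, Long, Lat) are the ones defined above; the file name and the code-to-density table are illustrative assumptions (the field test below fixes only codes 1, 3 and 5 at 57 000, 66 000 and 75 000 plants/hm², so codes 2 and 4 are interpolated here).

```python
# Sketch: load a sowing-prescription shapefile and look up the planting
# density for a GPS fix. Field names follow the definition in the text;
# "prescription.shp" and the code-to-density table are assumptions.
import geopandas as gpd
from shapely.geometry import Point

# Decision codes 1-5 -> planting density (plants/hm^2); codes 2 and 4 are
# linearly interpolated as an assumption.
CODE_TO_DENSITY = {1: 57_000, 2: 61_500, 3: 66_000, 4: 70_500, 5: 75_000}

def load_prescription(path="prescription.shp"):
    """Read the prescription map; each feature carries ID, Volum1 (density),
    Volum2 (decision code), Long and Lat attributes."""
    return gpd.read_file(path)

def density_at(prescription, lon, lat):
    """Return the planting density of the prescription feature nearest to
    (lon, lat); works whether the features are grid polygons or points."""
    distances = prescription.distance(Point(lon, lat))
    row = prescription.loc[distances.idxmin()]
    code = int(row["Volum2"])
    # Fall back to the stored Volum1 value if the code is not in the table.
    return CODE_TO_DENSITY.get(code, float(row["Volum1"]))
```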
The sowing decision code is transmitted to the control system through universal serial bus (USB) to CAN. After the operating position grid is recognized, the system software based on GPS positioning analyzes the sowing decision data to obtain the planting density information at the current position. According to the ISO11783 bus communication protocol, the initialization of CAN communication is realized. ECU receives the speed command issued by the controller in real time through the CAN bus and controls the speed of the seed-metering motor through PID control method to realize the variable sowing operation control function. The sowing decision workflow is shown in Figure 3. As the human-computer interaction interface of the system, the control system software is developed based on VS2012 platform (Microsoft Corporation, New Mexico, USA), and is responsible for displaying and storing operating prescription diagrams and relating operating process data. C# language is used based on Microsoft Foundation Classes (MFC) framework, and the PC human-computer interaction interface is shown in Figure 4. Figure 4a is the main interface of the software. It can be divided into monitoring parameters g area, sowing decision display area, control display area, sowing decision value display area, system control area, sowing decision list display area. Monitoring parameters area can display the tractor speed, longitude and latitude information in real time. Sowing decision display area is used to display the current position and sowing decision in Baidu maps, and the operation trajectory is available during operation. Control display area can zoom and pan the display image. Sowing decision value area is used to display the sowing decision value read from the current point in real time. System control area is mainly used for the operation of prescription chart file, including SHP and JSON format file reading. Sowing decision list is used to read multiple files and display them in a list. After reading the prescription map file, it is marked and displayed on the map according to its coordinate position. Figure 4b is control setting interface, which mainly used for the parameter setting, display and communication debugging, which is divided into monitoring parameters area, parameter setting area, sending and receiving information display area. Parameter monitoring area is used to display the speed and rotation speed of seed metering plate. Parameter setting area is used for communication parameter setting and seeding parameter setting. Sending and receiving information display area is used to display the receiving and sending information during debugging. The embedded geographic information system software component SuperMap (Beijing Chaotu Software Co., Ltd., Beijing, China) is adopted to realize the functions of importing, editing and displaying the sowing decision diagram of vector data format (SHP) on the airborne operating control terminal. In order to obtain the target rotating speed of the seed-metering plate, the width, the number of holes in the seed-metering plate and the row spacing need to be calculated according to Equation (1). where, n is the target rotating speed of the seed-metering plate, r/min; v is the vehicle speed, km/h; k is the number of holes in the seed-metering plate; Q is the planting density for sowing decision, kg/hm 2 ; D is the row spacing, cm. 
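The display of Equation (1) is not reproduced above, but the relation can be recovered from the operating figures quoted in the surrounding text (9.5-37.6 r/min over 3-9 km/h and 57 000-75 000 plants/hm², with 18 holes and 60 cm row spacing). The sketch below implements that reconstruction; the factor 60 000 collects the km/h, cm and hm² unit conversions, and Q is interpreted in plants/hm², consistent with the rest of the paper.

```python
# Sketch of the target seed-metering plate speed of Equation (1),
# reconstructed from the operating figures quoted in the text:
#   n [r/min] = Q * v * D / (60000 * k)
# with Q in plants/hm^2, v in km/h, D (row spacing) in cm, k holes per plate.
def target_plate_speed(Q, v, D=60.0, k=18):
    """Target rotating speed of the seed-metering plate, r/min."""
    # Seeds needed per minute:
    #   Q/10000 [plants/m^2] * (v*1000/60) [m/min] * (D/100) [m]
    seeds_per_min = Q / 10_000 * (v * 1000 / 60) * (D / 100)
    return seeds_per_min / k

# Consistency check against values given in the text:
#   75 000 plants/hm^2 at 6 km/h -> 25.0 r/min
#   57 000 plants/hm^2 at 3 km/h ->  9.5 r/min
#   75 000 plants/hm^2 at 9 km/h -> 37.5 r/min (quoted as 37.6)
if __name__ == "__main__":
    for Q, v in [(75_000, 6), (57_000, 3), (75_000, 9)]:
        print(Q, v, round(target_plate_speed(Q, v), 1))
```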
In this study, according to the requirements of sowing row spacing, the row spacing is a fixed value of 60 cm, the number of holes in the selected seed-metering plate is 18, and the vehicle speed is 3-9 km/h. The control software calculates the target rotating speed of the current seed-metering plate in real time according to the analyzed current vehicle speed and planting density. Optimization of location delay model and decision recognition algorithm The current coordinates obtained by GPS positioning system are taken as the coordinate values of the sowing decision, and the coordinate values are taken as the theoretical target seed falling position. In fact, due to the layout of the hardware system, the spatial position is shifted, and the relative distance between the target position and the actual seed falling position is shown in Figure 5. Relative distance mainly includes the distance L 1 (cm) of the GPS antenna relative to the seed-metering port, the distance L 2 (cm) of the seed falling from the seed-metering port and moving in the horizontal direction, showing as follows: (2) where, L is lag distance, cm; L 2 can be calculated from the seed-metering process of the picker finger seed drill, m. The system has a certain response time from obtaining the sowing position to the seed falling, which can be expressed as: 4 (3) where, T is System response time, s; t 1 is time for GPS positioning analysis, s; t 2 is the acceptance time when the control terminal sends an instruction to the controller, s; t 3 is the response time of the driver and the seed-metering motor, s; t 4 is the time when seeds fall to the ground from the seed-metering opening, s. The lag distance Δd of the sowing position is Δd = L -T· V t (4) where, Δd is lag distance, m; V t is traval speed, m/s. In order to achieve the purpose of accurate positioning and sowing, the lag distance model is used as the adjustment basis for airborne control terminals. The accurate sowing system based on sowing decision control is implemented according to the decision quantity of operating grid. The position of the machines and tools provided by GPS in the sowing decision is an important factor affecting the target fertilization amount, but GPS has poor dynamic response capability. In order to further improve the accuracy of grid position analysis in the sowing decision process, a dead reckoning based on velocity sensor was carried out. Encoders E38S6G5-100B-G5N (OMRON Corporation, Kyoto, Japan) with a resolution of 500 pulses per turn. The A-phase pulses of the encoder are received through TTC32 controller's frequency count I/O port. The ground wheel velocity V t (m/s) can be expressed as where, n is the number of pulses per revolution of the limited-depth wheel; r is the radius of the limited-depth wheel, m; f is the pulse frequency value recorded at a certain time, Hz; A coordinate system xOy is established for the plot. The x axis of the coordinate system is parallel to the latitude line and the y axis is parallel to the longitude line. If the rotation of the xOy coordinate makes the x axis parallel to the boundary of the working plot, the geodetic coordinate value obtained by the receiver can be converted into the longitude and latitude coordinate value in the x′Oy′ space rectangular coordinate according to Equation (6). 
x′ = x·cos α + y·sin α,  y′ = −x·sin α + y·cos α   (6)
where x and y are the longitude and latitude coordinate values measured by the receiver, m; x′ and y′ are the corresponding coordinate values in the x′Oy′ coordinate system, m; and α is the included angle between the two coordinate systems, (°).
The machine travels at a certain speed, and the heading angle is obtained from the GPS data; the sowing decision value is then analyzed by dead reckoning. As shown in Figure 6, there is a large difference between the sowing decision levels of grids A and B and that of grid C. At the sampling point P0, identified after the tractor enters grid B, the coordinates of Pt are calculated from the sampling point and the lag distance Δd (Equation (7)), where p_tlng is the longitude of Pt in the x′Oy′ coordinate system, m; p_tlat is the latitude of Pt in the x′Oy′ coordinate system, m; p_0lng is the longitude of P0; p_0lat is the latitude of P0; lng is the longitude value of the current operation position, m; R is the radius of the earth (the path within the plot is taken as a straight line), m; and β is the heading angle of the machine in the space rectangular coordinate system, which is 0° or 180°.
In order to eliminate the influence of the lag distance on sowing, the grid to which the calculated coordinates belong is determined from the dead-reckoned coordinates, and the grid decision is analyzed in advance.
Optimization method of sowing speed control
The response characteristic of the working speed of the seed-metering plate is one of the key factors for the sowing system to realize accurate positioning and sowing. To improve the accuracy of the seed-metering plate control, a closed-loop PID control algorithm was adopted in this article. However, when the tractor accelerates or decelerates in the field during sowing, or the planting density decision increases or decreases sharply, the output of the seed-metering motor deviates strongly from the target for a short period of time, which causes integral accumulation in the PID computation and leads to overshoot or oscillation. For this reason, earlier work calculated the corresponding start-up jump values by fitting equations under different velocity gradients [15]. This article introduces an integral separation algorithm into the PID algorithm: when the deviation is large, the integral action is cancelled to avoid increasing the overshoot, while when the deviation is small, the integral action is introduced to eliminate the static error and improve the control accuracy. The control algorithm can be expressed as:
U(k) = kp·err(k) + β·ki·Σ_{j=0}^{k} err(j) + kd·[err(k) − err(k−1)]   (8)
where U(k) is the adjustment value at the kth sampling; kp is the proportional coefficient; ki is the integral coefficient; kd is the differential coefficient; err(k) is the deviation at the kth sampling; and β is called the integral switching coefficient, whose value is
β = 1 when |err(k)| ≤ ε,  β = 0 when |err(k)| > ε   (9)
where ε is the threshold on the difference between the set speed and the actual speed. In this study, the monitored sowing speed is used as feedback, so ε represents the threshold on the difference between the measured speed and the target speed. From Equation (9) it can be seen that the selection of a proper ε value is difficult: if ε is too large, the effect of integral separation is not achieved, while if ε is too small, the integral region is rarely entered. Therefore, the ε value needs to be determined.
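A minimal sketch of the integral-separation PID law of Equations (8) and (9) is given below, using the gains reported in the test-design section (kp = 0.155, ki = 0.35, kd = 0.07); the default switching threshold and the sampling loop are illustrative assumptions.

```python
# Integral-separation PID for the seed-metering plate speed (Eqs. (8)-(9)).
# Gains follow the values quoted in the test-design section; the default
# epsilon is illustrative (Section 3.1 identifies 18 r/min as the critical
# step above which the integral term causes oscillation).
class IntegralSeparationPID:
    def __init__(self, kp=0.155, ki=0.35, kd=0.07, epsilon=18.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.epsilon = epsilon      # error threshold (r/min) enabling the integral term
        self.err_sum = 0.0          # accumulated error for the integral term
        self.err_prev = 0.0         # previous error for the derivative term

    def update(self, target_rpm, measured_rpm):
        err = target_rpm - measured_rpm
        beta = 1.0 if abs(err) <= self.epsilon else 0.0   # Eq. (9): cut the integral on large errors
        if beta:
            self.err_sum += err
        u = (self.kp * err
             + beta * self.ki * self.err_sum
             + self.kd * (err - self.err_prev))            # Eq. (8)
        self.err_prev = err
        return u
```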
Test design 2.5.1 Determination of ε value ε value needs to be set according to specific objects and requirements; otherwise, the control effect of integral separation cannot be achieved and the control accuracy is affected. Firstly, PID parameters were adjusted several times when the speed difference was small. The proportional coefficient k p , integral coefficient k i and differential coefficient k d of the closed-loop system were determined to be 0.155, 0.35 and 0.07, respectively. Then, the speed step response was carried out by setting the sowing decision level, and the integral switching coefficient was determined. During the test, the theoretical rotating speed n(t) (r/min) of the motor was calculated by setting the planting density (57 000-75 000 plants/hm 2 ) of the controller and maintaining the vehicle speed at 6 km/h during operation. The target rotating speed N'(t) (r/min) of the seed-metering plate was calculated by the product of the theoretical rotating speed and the reduction ratio. The seeder controller obtains the feedback rotating speed N(t) (r/min) by monitoring the drive motor shaft through the encoder. In this article, 18 holes were selected, the row spacing was 60 cm, and the calculated rotating speed range of the seed-metering plate was 9.5-37.6 r/min. Range of rotation is 0-38 r/min. Start from 10 r/min, and increase the speed by 4 r/min in each test until it reaches to 34 r/min. The value was determined by monitoring the output analog quantity change of the controller. Finally, the speed step test of 0-38 r/min was carried out on the integral separation control algorithm added with the integral switching. The feedback speed value was monitored and recorded by CAN data analyzer to investigate the performance of integral separation PID control. Field experiments simulating sowing decisions In order to test the performance of the system, a test area was selected. The test area was located in Wanzhuang Village (125°30ʹE, 44°96ʹN), Chencao Town, Xuchang City, Henan Province, the main corn producing area in China. Xianyu 335 corn seeds were selected for the test. In order to investigate the influence of dead reckoning algorithm and integral separation algorithm on sowing, a simulated sowing decision experiment was carried out. Figure 7 is the distribution diagram and division of the experimental field. The experimental field covers an area of 57.6 m60 m. Three groups were set up. The first group is no dead reckoning area and ordinary PID control area (NDR-PID, abbreviated as NDP), the second group is dead reckoning area and ordinary PID control area (Dr & PID, abbreviated as DP), and the third group is no dead reckoning area and integral separation PID (NDr & ISPID, abbreviated as NDIP). Each group set the planting density Q as 0-66000-0-57000-66000-75000 plants/hm 2 (sowing decision level 0-3-0-1-3-5), each decision level is 10 m and is represented by different colors as shown in Figure 7a. During the test, the unit traveled at a constant speed of 3, 6 and 9 km/h along each operation zone, and the on-board computer recorded the latest decision analysis time and the currently executing planting density prescription value in real time. The encoder installed on the drive shaft of the seed-metering plate fed back the frequency signal, and used the frequency divider to output two frequency signals, one as the speed feedback of ECU and the other as the monitoring signal. At the same time, the system automatically recorded the coordinates of the current operation. 
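Since the comparison below hinges on the dead-reckoning look-ahead of Section 2.3, a minimal sketch of that look-ahead may be useful here: the current fix is advanced along the heading by the lag distance Δd of Equation (4) and the prescription grid is queried at the projected point rather than at the raw fix. The wheel-speed formula follows Equation (5); the numeric lag parameters, the wheel radius, and the density_at helper from the prescription sketch above are illustrative assumptions.

```python
import math

# Sketch of the dead-reckoning look-ahead (Section 2.3). The numeric
# parameters (L, T, wheel radius) are placeholders; in the paper they come
# from the position-lag model and the measured response times.
def wheel_speed(pulse_freq_hz, pulses_per_rev=500, wheel_radius_m=0.25):
    """Ground-wheel speed from the encoder, Eq. (5): V = 2*pi*r*f/n."""
    return 2 * math.pi * wheel_radius_m * pulse_freq_hz / pulses_per_rev

def lag_distance(v_mps, L_m=1.5, T_s=0.4):
    """Eq. (4): geometric offset L minus the distance travelled during the
    system response time T (sign convention as in the text)."""
    return L_m - T_s * v_mps

def lookahead_fix(x_m, y_m, heading_deg, v_mps):
    """Project the current position forward along the heading (0 deg or
    180 deg for straight passes in the rotated plot frame of Eq. (6))."""
    d = lag_distance(v_mps)
    return (x_m + d * math.cos(math.radians(heading_deg)),
            y_m + d * math.sin(math.radians(heading_deg)))

# Usage: query the prescription at the projected point, not the raw fix, so
# the grid decision is analyzed in advance of the seed-drop point. (A real
# implementation converts the plot-frame coordinates back to longitude and
# latitude before calling density_at.)
# density = density_at(prescription, *lookahead_fix(x, y, heading, v))
```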
After four experiments, each group had 16 crop rows. Five rows per group were randomly selected and all seeds were picked out by hand to collect the data; 15 plant spacings were measured continuously in each row. All test data were recorded in the form of the seed spacing d. If the error between the measured seed spacing and the target seed spacing was less than 10%, the target seed spacing was regarded as reached. As shown in Figure 8a, in the deceleration process under the condition of 75 000 plants/hm², the target plant spacing is 22.2 cm; a measured value of 23.4 cm has an error of less than 10%, whereas a plant spacing of 17.3 cm is considered not to have reached the target seed spacing. The distance from the boundary of the decision level area to the point where the target seed spacing is reached is defined as the lag distance (LD), and the mean lag distance is
MLD = (LD1 + LD2 + LD3 + LD4 + LD5)/5   (10)
where LD1-LD5 are the lag distances of the five sampled rows, cm; MLD is the mean of these five lag distances, cm.
According to Figures 8b-8c, when the sowing decision changes from level 1 to level 3 (57 000 plants/hm² to 66 000 plants/hm²), the change of seed spacing is difficult to observe directly from the measured data. Therefore, the measured data were imported into Matlab 2012 to draw the seed-spacing change curve, and the lag of the seed spacing was determined from the change of the curve.
Figure 7 (a) Control group area design; (b) test site zoning. Note: green represents a planting density of 0 plants/hm²; orange, 66 000 plants/hm²; straw yellow, 75 000 plants/hm²; red, 57 000 plants/hm². NDP is the no dead reckoning and ordinary PID control area; DP is the dead reckoning and ordinary PID control area; NDIP is the no dead reckoning and integral separation PID area.
According to the soil and environmental information, the sowing decision reference was formulated from the agronomic model, as shown in Figure 9. During the test, the sowing parameters under different vehicle speeds were recorded in the corresponding plot areas of the decision reference, and the actual sowing performance based on the sowing decision was investigated. In the actual sowing process, the zoned operation was carried out at target speeds of 3, 5, 7 and 9 km/h respectively. Using the data recording method described above, the recorded data were imported into ArcGIS 10.4.1 to draw the actual vehicle speed distribution map and the motor monitoring speed map, and then the monitored sowing quantity distribution map was drawn. The actual plant spacing in each decision code area and at each boundary was measured with a hand-held GPS. Data processing and evaluation were based on the national standard of China, Test Method for Single Grain (Precision) Seeder (GB/T 6973-2005), and the qualified index of seed spacing and the coefficient of variation were taken as the indexes to evaluate the performance of the control system.
Figure 9 Sowing decision reference (different colors represent different planting densities)
Influence of integral term on step response
The statistics of the controller voltage output for speed steps of 4, 16 and 24 r/min are shown in Figure 10. According to the output voltage waveform of the controller, during the speed change from 10 to 14 r/min the voltage steps from 3.4 V to a maximum of 4.75 V and settles within 1.2 s.
During the speed step of 10-26 r/min, the voltage steps from 3.4 V to 5 V and tends to be stable within 1.4 s. During the speed step of 10-34 r/min, the voltage steps from 3.4 V to 5 V, and the voltage oscillates to the peak value of 5 V many times, which is an important factor causing system instability. Through statistics of different speed steps, it is found that when the speed step is more than 18 r/min, it is the critical point for the integral term to cause oscillation. The results of speed step test at 0-38r/min are as Figure 11. a. Ordinary PID b. Integral Separation PID Figure 11 Monitoring rotational speed of seed-metering plate The improved system (Figure 11b) takes 1.3 s time from issuing operation instructions to stabilizing the seed-metering motor, which reduces 0.7 s and has a faster response. At the same time, the overshoot of seed-metering speed is reduced. The overshoot of ordinary PID seed-metering speed is 28.9%, whereas the overshoot of integral separation PID is 10.5%, thus reducing the instantaneous impact on the seed-metering plate and improving the stability. System response test based on simulated sowing decision 3.2.1 Influence of dead reckoning on sowing lag distance Statistics were made on the lag distance under different speeds and decision levels. Lag distance between NDP area and NDIP area was compared. The results are shown in Figure 12. Note: Dr is dead reckoning . Figure 12 Comparison of lag distance between dead reckoning and no dead reckoning under different speeds and decision levels As can be seen from Figure 12, the overall lag distance of no dead reckoning is significantly higher than that with dead reckoning algorithm. The overall average lag distance based on dead reckoning is 63.4 cm, while that of no dead reckoning is 135.6 cm, which shows that dead reckoning reduces the lag distance by 72.2 cm. The lag distance basically shows a trend of increasing with the increase of vehicle speed, which indicates that the higher the vehicle speed, the farther the distance the system travels in the response period. At the same time, the fluctuation of lag distance with dead reckoning algorithm is smaller than that of no dead reckoning, which shows that dead reckoning can effectively reduce the influence of GPS acceptance period and velocity sampling interval on lag distance. This conclusion is consistent with the research by Li et al [19] . Comparing the lag distance between decision level 0 to 3 and 3 to 0, it is found that there is no obvious rule between the lag distance and sowing decision level, which indicates that decision level has no effect on the lag distance. Influence of integral separation PID on sowing response The application of integral separation PID algorithm is proposed to avoid the oscillation of the control system when the rotating speed difference of the seed-metering plate is large. The rotating speed of the system changes when the sowing decision level changes. The response distance is determined by the response time from the beginning of response to the entry of the stable state. The response distance results of NDP area and NDIP area are shown in Figure 13. As can be seen from Figure 13, when the sowing decision level changes in the range of 0-3-0-1, the system average response distance with the integral separation PID control algorithm is 71.2 cm, whereas that with the ordinary PID control algorithm is 114.3 cm, which shows that the integral separation PID algorithm reduces the response distance by 43.1 cm. 
The response distance of PID control system using integral separation is obviously lower than that of ordinary PID control system when the decision level changes in the range of level 0-3-0. When the sowing decision level is small (1-3-5), taking the vehicle speed of 6 km/h as example, the change of the sowing speed is 19-22-25 r/min, and the change of the speed is small. According to the above values, the test shows that the response distance under different travel speed has no obvious difference between with and without integration term, which is consistent with the results shown in the Figure 13. Comparing the response distance of under sowing decision level of 0-3-0 with that under sowing decision level of 1-3-5, it can be seen that the greater the sowing decision level changes, the more obvious the effect of integral separation PID on shortening the response distance of the system. The greater the sowing decision level changes, the more obvious the effect of vehicle speed on shortening the response distance of the system, which is due to the double influence of vehicle speed and sowing decision level on the sowing speed. Figure 13 Comparison of response distances between integral separation algorithm and non-integral separation algorithm under different speeds and decision levels System performance test based on actual sowing decision The rotating speed of the seed-metering plate is affected by the travel speed and the sowing decision level. In order to better display different rotating speeds, 9 intervals were drawn for the rotating speed. According to the monitoring data of vehicle speed and seed-metering plate speed, the distribution map was drawn on the operation plot as shown in Figure 14, where the seed-metering plate speed is the average value of the four-way seed-metering device speed. According to the distribution diagram of vehicle speed, it can be seen that the vehicle speed in different speed regions fluctuates in a certain extent when operating according to the target speed. According to the statistics of monitoring values, it is found that the target vehicle speed fluctuates most in the area of 9 km/h, with an average vehicle speed of 9.2 km/h and a variance of 0.54 (km/h) 2 , which meets the needs of vehicle speed control. Figure 14b shows that the rotational speed of the seed-metering plate is correlated with the vehicle speed. The rotational speed of the seed-metering plate is affected by the decision levels, and there is a phenomenon of fluctuation in some areas. For example, in the black border area with the target vehicle speed of 5 km/h, the vehicle speed distribution is uniform but the rotating speed fluctuates from 18.2 to 25.9 r/min. The main reason is that there are more stubble in the area and the uneven surface causes the vibration of the sowing unit, thus affecting the uniformity of rotating speed. The maximum monitoring speed for the target speed areas of 3, 5, 7 and 9 km/h are 3.4, 5.3, 7.6 and 8.8 km/h respectively. The corresponding theoretical rotational speed and monitoring rotational speed are shown in Table 1. As can be seen from Table 1, with the increase of vehicle speed, the relative error between the theoretical rotational speed and the monitoring rotational speed of the seed-metering plate increases continuously. Namely, with the increase of sowing decision, this difference is gradually obvious, which is determined by the working characteristics of the seeder. The higher the sowing decision level, the greater the planting density. 
According to the monitoring data of the sowing situation in each decision-level area and at different operating speeds, the distribution of the monitored sowing quantity is drawn in Figure 15. The monitored distribution of sowing quantity agrees very well with the sowing decision. Based on the statistics of the monitored planting density in each decision-level area, the average error of the monitored planting density relative to the actual planting density is 3.5%. The grain spacing after actual sowing was also measured and analyzed. Because the difference between actual decision levels was 2, the influence of the level difference can be ignored according to the above analysis. The variation of lag distance and response distance with vehicle speed is shown in Figure 16 (variation of (a) lag distance and (b) response distance with vehicle speed). Speed clearly has a significant influence on both lag distance and response distance. The standard deviation of the lag distance gradually decreases as vehicle speed increases, mainly because the speed feedback from the vehicle speed sensor is unstable at low speed. The sum of the lag distance and the response distance is taken as the transition distance when the sowing decision level changes, and it can serve as an index of the overall performance of the system. The average transition distance is 139.5 cm with a standard deviation of 12.8 cm.

Conclusions

A variable-rate control system based on a CAN bus was built, and a control interface based on sowing decisions was developed to read and analyze the sowing decision. A position lag model was established, and a dead-reckoning algorithm based on real-time Hall-sensor speed measurement and an integral-separation PID control algorithm for the seed-plate speed were proposed. The step-response experiment on the influence of the integral term determined that the rotating-speed threshold for including or excluding the integral term was 18 r/min. The integral-separation control method shortened the system response by 0.7 s and reduced the seed-metering speed overshoot from 28.9% to 10.5%, lessening the instantaneous impact on the seed-metering plate and improving stability. The experiment based on simulated sowing decisions showed that the average lag distance with dead reckoning was 63.4 cm, 72.2 cm less than without dead reckoning; the system response distance with the integral-separation PID algorithm was 71.2 cm, 43.1 cm less than with the ordinary PID algorithm. The experiment based on actual sowing decisions showed that the error between the monitored and actual planting density was 3.5%, and that the average transition distance over the vehicle speed range of 3-9 km/h was 139.5 cm with a standard deviation of 12.8 cm.
Tracheostomy in Pediatric Intensive Care Unit: Experience from Eastern India

Objective: Tracheostomy is one of the most commonly performed surgical interventions in sick children in the intensive care unit. The literature in the pediatric population is limited; we therefore conducted this study to evaluate the indications, timing, complications, and outcomes of tracheostomy among children at our center. Methods: This retrospective study was conducted from January 2016 through December 2019. Data were collected from the patients' records and analyzed. Results: During the study period, 283 children were ventilated, of whom 26 (9.1%) required tracheostomy; 73% were boys. The median age of the children who underwent tracheostomy was 6.32 y. The most common indication for tracheostomy was prolonged mechanical ventilation [24 cases (92%)], followed by upper airway obstruction [2 cases (8%)]. The average time to tracheostomy was 11.65 d (range 1-21 d). Complications were seen in 14 patients (55%); the most common were accidental decannulation, tube occlusion, pneumothorax, and granulation tissue. Twenty-one (80%) patients were successfully discharged, of whom 16 (61%) were discharged after decannulation and 5 (21%) were sent home with a tracheostomy tube in situ. Overall mortality in the present study was 11.5%; none of the deaths was directly related to tracheostomy. Conclusions: The indication for tracheostomy has changed from an emergency to a more elective one. Prolonged mechanical ventilation is the most common indication. Although the timing of tracheostomy is not fixed, two weeks is reasonable, and the procedure can be performed safely at the bedside in pediatric intensive care.

Introduction

Tracheostomy is one of the most commonly performed surgical interventions in critically sick children in the intensive care unit. Children need a tracheostomy for various reasons, either as an emergency or as an elective procedure. Pediatric tracheostomy is more challenging because of the small, pliable trachea, the limited extension of the surgical field, and the risk of anesthesia. The morbidity and mortality of pediatric tracheostomy are around two to three times higher than in adult patients [1-4]. The indications for tracheostomy have changed significantly over the last few decades, from upper airway obstruction following infections to prolonged mechanical ventilation [5]. With the advent of vaccination against Haemophilus influenzae type B and Corynebacterium diphtheriae and improvements in pediatric intensive care, the number of tracheostomies for upper airway disease has been reduced [1]. Nowadays, pediatric tracheostomy is commonly performed for prolonged ventilation, upper airway obstruction, trauma, and neurological diseases [6]. In contrast to adults, there is no consensus guideline on the timing of tracheostomy in children [7]. The literature in the pediatric population is limited. Therefore, this study was conducted to analyze the indications, complications, and outcomes of tracheostomy at a tertiary care pediatric intensive care unit in Eastern India.

Material and Methods

This retrospective study was conducted from January 2016 through December 2019. A total of 26 tracheostomies were performed during this period. Data regarding demography, indication, timing, complications, and the outcome of tracheostomy were collected and analyzed. After discharge, the patients were followed up at the hospital every 2 mo for at least 6 mo.
All tracheostomies were carried out by otolaryngologists in the presence of an anesthetist and a pediatric intensivist, either in the pediatric intensive care unit (PICU) or in the operation theatre. A standard tracheostomy procedure was used in all cases. The indication and timing of tracheostomy were decided by the pediatric intensivist. The decannulation protocol of the authors' institution includes downsizing of the cannula followed by its gradual occlusion. The decision for decannulation was taken jointly, but mainly by the treating physicians. Once a child was hemodynamically stable, on minimal or no oxygen, and off inotropes, decannulation was planned. A laryngoscopy was performed only when there was difficulty in the decannulation process. The frequency of downsizing depended on the age of the patient and the type of tracheostomy tube used initially. Parents and caregivers were involved in the care of the tracheostomy patient. They were taught routine care of the tracheostomy, including suctioning and changing of tubes, by demonstration, and were also familiarized before discharge with the necessary equipment, such as the suction catheter, suction machine, and pressure set-up.

Results

During the study period, 283 children were ventilated, of whom 26 (9.1%) required a tracheostomy; 73% of these were boys. The median age of the children who underwent tracheostomy was 6.32 y; the youngest child was 4 mo old and the eldest was 16 y. Seven children were ≤1 y. In 19 (73%) patients the tracheostomy was performed at the bedside in the pediatric intensive care unit. The most common indication for tracheostomy in the present study was prolonged mechanical ventilation secondary to neuromuscular problems, 24 cases (92%), followed by upper airway obstruction (UAO), 2 cases (8%). The prolonged mechanical ventilation group was further divided into four subgroups: neuromuscular (7 children), neurological (10), traumatic brain injury due to road traffic accidents (5), and respiratory (2) (Table 1). The average timing of tracheostomy was 11.65 d (range 1-21 d). In 18 (70%) children it was done within 2 wk and in only 8 (30%) cases after 2 wk; in the authors' experience, this delay was because of parental anxiety, stress, and fear about the care of the tracheostomy. In the 2 cases of upper airway obstruction, an emergency tracheostomy was performed in the operation theatre, and an emergency tracheostomy was performed in one child in the pediatric intensive care unit because of severe respiratory distress after decannulation (Table 1).

Complications from tracheostomy were seen in 14 patients (55%): 2 patients had accidental decannulation, 2 had tube occlusion, 1 had a cardiac arrest, 2 developed pneumothorax, 3 developed granulation tissue, 1 had maggots and infection at home, another died at home due to occlusion, and 1 patient each developed stomal site infection and subglottic stenosis. Twenty-one (80%) patients were successfully discharged, of whom 16 (61%) were discharged after decannulation and 5 (21%) were sent home with a tracheostomy tube in situ. Of those 5, 2 were decannulated on follow-up, 1 child died of tube occlusion at home, and 2 remain on tracheostomy for more than 1 y. Three patients were discharged against medical advice; of these, one died on the way home, one died at home after 2 wk due to tube occlusion, and one was lost to follow-up. Overall mortality in the present study was 11.5% (Fig. 1 and Table 1).
Discussion

Nowadays, tracheostomy is one of the most commonly performed surgical procedures in the pediatric intensive care unit. Over the last 50 y, the indication for tracheostomy has changed from acute inflammatory airway obstruction to prolonged mechanical ventilation, a change attributable to the introduction of new vaccines and improvements in neonatal and pediatric intensive care [1,5]. In the present study, the rate of tracheostomy was 9.1%, similar to that reported by Kamit Can et al. [8]. Reported unit tracheostomy rates vary from 2 to 7%, but the rates for different co-morbidities are not clear [9-11]. The most common indication for tracheostomy in the present study was prolonged mechanical ventilation (92%), followed by UAO (8%), which is similar to many recent studies [8-11]. Douglas et al. reported 111 children who underwent tracheostomy and found that the most common indication was prolonged mechanical ventilation (32%), followed by craniofacial anomaly causing UAO (18%) and subglottic stenosis (14%) [12]. Contrary to the present study, Schweiger et al. found that the most common indication for tracheostomy was upper airway obstruction [13], and Mahadevan et al. from New Zealand also reported that UAO accounts for the majority of tracheostomies [14].

The average timing of tracheostomy in the present study was 11.65 d (range 1-21 d). As per the unit's protocol, any child who required ventilation for more than 1 wk was evaluated for tracheostomy. Studies from the United States that examined the duration of intubation before tracheostomy suggested that prolonged mechanical ventilation is associated with increased ICU morbidity and length of stay, and recommended early tracheostomy within 14 d [16]. Although there is consensus that tracheostomy should be performed within 1 or 2 wk of ventilation in adult patients, no established criteria currently exist regarding the timing of tracheostomy in children, and timing should be individualized for each patient [7,17]. Pediatric patients are known to tolerate intubation for longer periods than adults; however, early tracheostomy not only reduces the work of breathing, ventilator-associated complications, sedation requirements, the length of ICU stay, and cost, but also improves quality of care and patient comfort [17].

Complications of tracheostomy were seen in 14 patients (55%). Pneumothorax and minor bleeding were two important perioperative complications; early postoperative complications were occlusion of the tube and accidental decannulation, whereas subglottic stenosis and granulation tissue were late complications. One child on tracheostomy had a blocked tube at home and died. Mortality was 11.5%; none of the deaths was because of the tracheostomy. Kamit Can et al. found a complication rate of 25.3% in the pediatric intensive care unit and 11.1% at home, with no patients dying of tracheostomy-related complications, which shows that tracheostomy is a relatively safe intervention in the pediatric ICU [8]. The study by Mahadevan et al. found a complication rate of 51% [14]. The average duration of tracheostomy in the present study was 48 d (5-180 d). In four patients, the tracheostomies could not be closed to date: 3 patients with severe traumatic brain injury (TBI) and 1 with Guillain-Barré syndrome (GBS) are still on the tube. A study by Schweiger et al. showed that the time to decannulation ranged from less than one mo to 7 y (median 5 mo) [13].
The rate of successful decannulation in the present study was 18 patients (69%); 2 patients are still awaiting closure and 2 need tracheal reconstruction surgery. Studies show that decannulation rates vary around 35-75% [1,2,12,14,18], and a study by Sharma and Vinayak reported a decannulation rate of 82% [19]. The mortality rate of tracheostomy patients is relatively high, between 14 and 19% [2,6,14]; in the present study it was 11.5%. Schweiger et al. reported mortality of 32%, attributable to the underlying disease rather than to the tracheostomy itself [13].

Conclusions

Tracheostomy is nowadays one of the most commonly used procedures in the PICU. The indication for tracheostomy has changed from an emergency to a more elective one, and the most common indication in the present study was prolonged mechanical ventilation. Although the timing of tracheostomy is individualized for each patient, two weeks seems reasonable. Tracheostomy can be performed safely at the bedside in the pediatric intensive care unit, but patient selection should be made carefully.
Geographical Characterization of Olive Oils from the North Aegean Region Based on the Analysis of Biophenols with UHPLC-QTOF-MS

Olive oil is renowned for its nutritional properties and beneficial health effects. The exceptional properties of virgin (VOO) and extra virgin olive oil (EVOO) are credited to the bioactive constituents of their polar fraction, the phenolic compounds. The concentration and composition of biophenols can be influenced by the geographical origin, the cultivar, and several agronomic and technological parameters. In this study, an ultra-high-performance liquid chromatography coupled to quadrupole-time-of-flight tandem mass spectrometry (UHPLC-QTOF-MS) method was used to determine biophenols in Greek EVOOs from five islands of the North Aegean Region (Chios, Fournoi, Ikaria, Lesvos, and Samos) through target and suspect screening. In total, 14 suspect and 5 target compounds were determined in the analyzed EVOOs. The quantitative and semiquantitative results were compared to investigate discriminations between different regions. Significant differences were found between the islands based on the overall phenolic content as well as on the concentration levels of individual compounds. In the case of Lesvos, the territory was separated into subdivisions (zones), and each zone was studied individually.

It is important to highlight that the biophenols, and mainly the secoiridoids such as oleuropein aglycone and oleocanthal, are responsible for the organoleptic characteristics of EVOOs, specifically the bitter and pungent taste. Moreover, these compounds relate to the oxidative stability of VOO and contribute to its long shelf life compared with other edible vegetable oils [1,5-9]. Another reason for which the biophenols are of great interest is their biological and pharmacological properties. Phenolics are the main antioxidants in EVOOs [10]. Compounds of EVOO with strong antioxidant activity are hydroxytyrosol and oleocanthal [9,11]; other biophenols that act against oxidative stress are luteolin, tyrosol, vanillin [12], acetoxypinoresinol, and pinoresinol [11].

As for the MS parameters, the electrospray negative ionization (ESI-) mode was applied. The capillary voltage was set at 3500 V, the nebulizer pressure was 2 bar (N2), the end plate offset was set at 500 V, the drying gas flow rate was 8 L/min (N2), and the drying temperature was set at 200 °C. A cluster solution consisting of 10 mM sodium formate in 2-propanol:water (1:1, v/v) was used for external calibration. Additionally, for internal calibration, the cluster solution was injected at the beginning of each run, in a segment between 0.1 and 0.25 min. Full-scan mass spectra, with a scan rate of 2 Hz, were recorded in the range 50-1000 m/z. MS/MS data were collected in data-dependent acquisition mode based on the fragmentation of the five most abundant precursor ions per scan (AutoMS, otofControl, Bruker Daltonics). The instrument provided a typical resolving power (full width at half maximum, FWHM) between 36,000 and 40,000 at m/z 226.1593, 430.9137, and 702.8636.

Samples

All samples were acquired from the Northeastern Aegean islands of Chios, Fournoi, Ikaria, Lesvos, and Samos during the 2017-2018 harvesting period. Overall, 452 Greek EVOOs were gathered. Table S2 summarizes the main olive tree varieties and the number of EVOO samples for each island.
In the case of Lesvos, due to the size of the island and the larger number of samples, the island was divided into 7 geographical zones (Figure 1) based on the locations where the olive trees were cultivated. Table S3 presents the number of samples corresponding to each zone. After collection, the samples were stored at 4 °C throughout the study.

Sample Preparation and Quality Control

A validated in-house liquid-liquid extraction (LLE) method [50] was used to isolate the biophenols from the EVOOs. First, 0.5 g of the sample was weighed into an Eppendorf tube and spiked with internal standard (syringaldehyde) at a concentration of 1.3 mg/L. For the extraction, 0.5 mL of water:methanol (20:80, v/v) was used. The mixture was then vortexed for 2 min and centrifuged for 5 min at 13,400 rpm. In the next step, the upper phase was filtered through a 0.22 µm filter (CHROMAFIL® RC, Macherey-Nagel, Düren, Germany). The extract was kept at −80 °C prior to analysis, and 5 µL of this solution was injected into the analytical system. In order to ensure the accuracy and reliability of the results, quality control (QC) was performed using a mixture of the standards apigenin, vanillin, oleuropein, tyrosol, and hydroxytyrosol at a concentration of 1 mg/L. At the beginning of the analysis, the mixture of these compounds was injected six times to condition the equipment, and it was then injected after every ten injections during the analysis. The QC results, specifically the % relative standard deviation (%RSD) of the peak areas, retention times (tR), and Δm errors (n = 12) of these five analytes for one laboratory day, are presented in Table S4, demonstrating the good performance of the instrument during the analysis. A procedural blank was also prepared in order to detect any potential contamination.

Target and Suspect Screening-Data Processing

A large number of biophenols have been identified in EVOO and in other organs of Olea europaea such as olive leaves, drupes, stems, and roots [2,12,25,51-56]. In this work, the study focused on the biophenols identified in VOO in the previous work of our group [42]. For this purpose, a target list of 14 biophenols and a suspect list of 15 bioactive compounds were used; they can be found in Tables S5 and S6, respectively. The tables include information on the compounds such as the molecular formula, m/z, tR, and MS/MS fragments, as reported by Kalogiouri et al. [42]. The data were processed with Bruker software, TASQ 1.4 and DataAnalysis 4.4. The determination of the compounds was based on mass accuracy, retention time, isotopic pattern, and MS/MS fragmentation. In particular, extracted ion chromatograms (EICs) were acquired, and the following parameters were applied in the case of target screening: mass accuracy ≤ 2.5 mDa, retention time tolerance below ±0.2 min, isotopic fit ≤ 100 mSigma, ion intensity ≥ 500, peak area ≥ 2000, and signal-to-noise (S/N) ≤ 3. In the case of suspect screening, the parameters applied were: mass accuracy ≤ 2.5 mDa, ion intensity ≥ 800, peak area ≥ 3000, and isotopic fit ≤ 100 mSigma.
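The acceptance criteria listed above can be viewed as a simple pass/fail filter over candidate peaks. The sketch below is illustrative only: the peak and library records, their field names and the example values are hypothetical, and the actual processing was carried out in Bruker TASQ 1.4 and DataAnalysis 4.4 as described.

```python
# Illustrative filter implementing the screening acceptance criteria listed above.
# The peak/library records and their field names are hypothetical; the actual
# processing was performed in Bruker TASQ 1.4 and DataAnalysis 4.4.
TARGET_CRITERIA  = dict(mass_error_mda=2.5, rt_tol_min=0.2, msigma_max=100,
                        min_intensity=500, min_area=2000)
SUSPECT_CRITERIA = dict(mass_error_mda=2.5, rt_tol_min=0.2, msigma_max=100,
                        min_intensity=800, min_area=3000)

def passes_screening(peak, library, criteria):
    """Return True if a detected peak matches a library compound within the criteria."""
    mass_err_mda = abs(peak["mz"] - library["mz"]) * 1000.0
    rt_err = abs(peak["rt"] - library["rt"])
    return (mass_err_mda <= criteria["mass_error_mda"]
            and rt_err <= criteria["rt_tol_min"]
            and peak["msigma"] <= criteria["msigma_max"]
            and peak["intensity"] >= criteria["min_intensity"]
            and peak["area"] >= criteria["min_area"])

# Example with made-up values for a hydroxytyrosol-like peak ([M-H]- m/z 153.0557)
peak    = dict(mz=153.0561, rt=3.12, msigma=35, intensity=1.2e4, area=8.5e4)
library = dict(mz=153.0557, rt=3.10)
print(passes_screening(peak, library, TARGET_CRITERIA))   # True
```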
The experimental tR of each analyte was compared with the tR found in VOO samples (Table S6) according to Kalogiouri et al. [42], and the difference was set not to exceed ±0.2 min. In both cases, target and suspect screening, the m/z fragments presented in Tables S5 and S6 were compared with the experimental m/z fragments in the MS/MS spectra of the samples.

Statistical Analysis

ANOVA was applied using the Data Analysis tool of Microsoft Excel (Microsoft, Redmond, WA, USA). One-way ANOVA was performed to identify statistically significant differences, comparing the results at a 95% confidence level.

Target and Suspect Screening Results

From the initial target list of 14 compounds (Table S5), five biophenols were determined in the EVOOs: hydroxytyrosol and tyrosol from the group of phenolic alcohols, luteolin and apigenin from the group of flavonoids, and the lignan pinoresinol. The target screening results are summarized in Table 1. Fourteen suspect compounds were determined in the EVOOs; from the initial suspect list (Table S6), only the secoiridoid oleoside was not detected. The suspect screening results are summarized in Table 2.

Quantification and Semiquantification Results

Standard calibration curves were prepared for the quantification or semiquantification of the biophenols detected in the EVOOs using commercial standards. Mixed standard working solutions of 0.1, 1.0, 2.5, 5, 8, 10, and 12 mg/L were prepared every day. Indicative normalized standard calibration curves (on the basis of 1.3 mg/L of syringaldehyde) constructed for tyrosol, hydroxytyrosol, oleuropein, apigenin, luteolin, and pinoresinol for one laboratory day are presented in Table S7. The quantified and semiquantified analytes are summarized in Table 3. The secoiridoids oleuropein aglycone and ligstroside aglycone were the main biophenols detected in the EVOOs, with average concentrations of 254 mg/kg and 127 mg/kg, respectively, followed by oleacein (20.1 mg/kg) and oleocanthal (13.9 mg/kg). Concerning the class of phenolic alcohols, the results showed that the average concentrations of tyrosol and hydroxytyrosol fluctuated at similar levels (2.53 mg/kg and 2.16 mg/kg, respectively). The flavonoids apigenin and luteolin were determined in all samples; however, the average concentration of apigenin (2.64 mg/kg) was higher than that of luteolin (1.87 mg/kg). From the class of lignans, 1-acetoxypinoresinol had a high average concentration (21.5 mg/kg) in the samples. All compounds apart from three were detected in 90-100% of the samples; syringaresinol and 1-hydroxypinoresinol were detected in about 80% of the samples, and the hydroxylated form of elenolic acid was detected in a few EVOOs (<40% of the samples) at low concentrations (≤1.21 mg/kg). As shown in Table 3, summing the individual compounds determined, the bioactive content of the samples was 475 mg/kg on average, denoting their high quality and significant nutritional value. Various analytical methods have been developed for the determination of the phenolic content and/or individual compounds, making the comparison of results complicated because they are calculated in different ways. Table 4 presents the results of recent works reported in the literature regarding the bioactive content and the concentrations of tyrosol, hydroxytyrosol, apigenin, luteolin, and pinoresinol determined in EVOOs/VOOs originating from Spain, Italy, Greece, Tunisia, Morocco, and Turkey.
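The semiquantification relies on normalized calibration: analyte peak areas are divided by the syringaldehyde internal-standard area (1.3 mg/L spike) before the linear curve is fitted and applied. A minimal sketch with hypothetical peak areas follows; it does not reproduce the calibration data of Table S7.

```python
# Semiquantification from a normalized calibration curve: analyte peak areas are
# divided by the syringaldehyde internal-standard area before the linear fit is
# applied. All areas, and therefore the slope and intercept, are hypothetical.
import numpy as np

def fit_normalized_curve(std_conc_mg_per_l, std_areas, istd_areas):
    """Fit normalized area (analyte/IS) against standard concentration."""
    y = np.asarray(std_areas) / np.asarray(istd_areas)
    slope, intercept = np.polyfit(std_conc_mg_per_l, y, 1)
    return slope, intercept

def quantify(sample_area, sample_istd_area, slope, intercept):
    return ((sample_area / sample_istd_area) - intercept) / slope

levels = [0.1, 1.0, 2.5, 5, 8, 10, 12]                    # mg/L working standards
areas  = [0.9e4, 8.7e4, 21.5e4, 43.0e4, 69.2e4, 86.0e4, 103.5e4]
istd   = [7.5e5] * len(levels)                            # syringaldehyde peak areas
slope, intercept = fit_normalized_curve(levels, areas, istd)
print(quantify(5.6e4, 7.4e5, slope, intercept))           # concentration in the extract, mg/L
```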
The advantage of this work compared to the literature is that the UHPLC-QTOF-MS methodology enables the identification of analytes for which no commercial standards are available and which therefore cannot be determined by traditional chromatographic methodologies. Consequently, HRMS is a powerful technique that leads to a thorough phenolic characterization of EVOOs.

Phenolic Content

• Comparison between the islands

As shown in Figure 2, the EVOOs from the islands of Lesvos, Samos, and Chios had the same variances in phenolic content, which was also verified by ANOVA (p = 0.25). The EVOOs originating from Ikaria and Samos presented the highest phenolic content on average (646 mg/kg and 526 mg/kg, respectively), followed by the EVOOs from Lesvos (470 mg/kg) and Chios (431 mg/kg). In the case of Ikaria and Fournoi, no reliable comparison could be made using statistical analysis because of the small number of samples (n ≤ 12) compared to the other islands. However, according to Figure 2 and Table 5, both Ikaria and Fournoi were within the range of the other three islands; the samples from Ikaria exhibited higher levels of phenolic constituents (380-939 mg/kg) and the samples from Fournoi had lower levels (155-222 mg/kg). Table 5 presents the statistical parameters median, mean, standard deviation (SD), and range of the phenolic content for each island.

• Comparison between the zones of Lesvos

Figure 3 shows that the EVOOs from Zone 5 and Zone 6, which are located in the northern part of the island of Lesvos (Figure 1), had a lower phenolic content on average (343 mg/kg and 352 mg/kg, respectively) than the EVOOs originating from the other geographical zones of the island. In the southern part of Lesvos (Zones 1-4 and 7), the average phenolic content of the EVOOs ranged from 446 mg/kg (Zone 2) to 576 mg/kg (Zone 4). The EVOOs from Zone 6 had a statistically significant difference compared to all other zones (p < 0.05), except for Zone 5 (p = 0.93). Moreover, the results of the statistical analysis demonstrated that the EVOOs from Zone 5 presented a statistically significant difference compared to the EVOOs from Zones 1, 3, and 4 (p < 0.05). The statistical parameters median, mean, SD, and range of the phenolic content for each zone are presented in Table 6.
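The island and zone comparisons in this section rely on one-way ANOVA at the 95% confidence level, run in the Excel Data Analysis tool. An equivalent comparison can be sketched as follows, with placeholder phenolic-content vectors standing in for the measured data.

```python
# One-way ANOVA comparison of total phenolic content between islands at the 95%
# confidence level. The vectors below are placeholders, not the measured data.
from scipy import stats

lesvos = [470, 512, 389, 455, 601, 433]    # mg/kg, illustrative only
samos  = [526, 498, 560, 471, 549, 505]
chios  = [431, 402, 466, 419, 450, 440]

f_stat, p_value = stats.f_oneway(lesvos, samos, chios)
if p_value < 0.05:
    print(f"statistically significant difference between islands (p = {p_value:.3f})")
else:
    print(f"no significant difference at the 95% level (p = {p_value:.3f})")
```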
Secoiridoids

In the EVOOs of this work, the most abundant group of biophenols was the secoiridoids, followed by lignans, phenolic alcohols, and flavonoids. The percentage (%) of each group in the phenolic fraction is shown in Figure S1. This trend was observed both between the islands of the North Aegean Region and between the zones of Lesvos. The island of Fournoi and Zone 5 of Lesvos were exceptions, as the percentage of phenolic alcohols was higher than the percentage of lignans. However, secoiridoids constituted the prevalent category of biophenols in all EVOOs, as their percentage was above 85% in all cases. The major secoiridoids found in the EVOOs of this study were oleuropein aglycone, ligstroside aglycone, oleacein, and oleocanthal, as shown in Table 3. Based on the literature, the most abundant biophenols in VOOs are aglycones derived from the secoiridoids of the drupes. These substances, such as oleuropein aglycone and ligstroside aglycone [66], have a significant role in the stability of VOO [10]. The decarboxymethylated dialdehyde form of oleuropein aglycone, oleacein, and the decarboxymethylated dialdehyde form of ligstroside aglycone, oleocanthal, are also very important compounds. Oleocanthal is responsible for the bitterness of VOO and also presents high anti-inflammatory activity [67], while oleacein presents antioxidant properties [1].

• Oleuropein aglycone

As shown in Figure 4, the highest average concentration of oleuropein aglycone was found in the EVOOs originating from Ikaria (347 mg/kg). On the other hand, there were EVOOs from the islands of Lesvos and Samos that presented higher concentrations of oleuropein aglycone, with maximums of 756 mg/kg and 708 mg/kg, respectively. The average concentration of oleuropein aglycone in the EVOOs from Lesvos (256 mg/kg) and Samos (259 mg/kg) did not differ statistically (p = 0.91). However, the EVOOs from Chios, whose average concentration of oleuropein aglycone was 180 mg/kg, had a statistically significant difference compared to the EVOOs originating from Lesvos and Samos (p < 0.05). The EVOOs from Fournoi had the lowest average concentration of this secoiridoid (99.2 mg/kg).
• Ligstroside aglycone

The EVOOs originating from Chios and Ikaria presented the highest average concentrations of ligstroside aglycone (208 mg/kg and 198 mg/kg, respectively), followed by Samos (164 mg/kg) and Lesvos (116 mg/kg). Ligstroside aglycone was detected in low concentrations in the EVOOs from Fournoi, as its concentration ranged between 45.7 and 68.4 mg/kg (Figure 5). Moreover, the results of the statistical analysis showed no significant difference between the EVOOs from Samos and Chios (p = 0.23). However, the samples originating from Lesvos presented a statistically significant difference compared to the samples from Samos and Chios (p < 0.05).

• Decarboxymethyl oleuropein aglycone (Oleacein)

The EVOOs from the islands of Lesvos, Ikaria, and Samos exhibited high average concentrations of oleacein (21.5 mg/kg, 21.4 mg/kg, and 18.3 mg/kg, respectively). The highest concentration of oleacein was recorded in an EVOO from Lesvos (64.6 mg/kg), as shown in Figure 6. Furthermore, the average concentration of oleacein was 9.33 mg/kg in the EVOOs from Fournoi and 3.15 mg/kg in the EVOOs from Chios. The differences between the EVOOs from Lesvos and Samos were not statistically significant (p = 0.11). On the contrary, the EVOOs from Chios presented a statistically significant difference compared to the EVOOs from Lesvos and Samos (p < 0.05).
• Decarboxymethyl ligstroside aglycone (Oleocanthal)

As presented in Figure 7, the average concentration of oleocanthal was higher in the EVOOs from Samos (16.5 mg/kg), Lesvos (14.3 mg/kg), and Ikaria (13.2 mg/kg). The EVOOs originating from Chios and Fournoi had lower average concentrations of oleocanthal (4.75 mg/kg and 4.99 mg/kg, respectively); however, the concentration levels ranged between 1.12 and 15.6 mg/kg in the case of Chios and between 4.20 and 6.01 mg/kg for the island of Fournoi. The differences in the concentration of oleocanthal were not statistically significant between the EVOOs from Lesvos and Samos (p = 0.053). On the contrary, the EVOOs originating from Chios presented a statistically significant difference compared to the EVOOs from Lesvos and Samos (p < 0.05).

Major Secoiridoids-Differences between the Zones of Lesvos

• Oleuropein aglycone

The EVOOs from Zones 1, 3, 4, and 7 exhibited high average concentrations of oleuropein aglycone (296 mg/kg, 290 mg/kg, 308 mg/kg, and 274 mg/kg, respectively). Moreover, Figure 8 shows that the average concentration of oleuropein aglycone was 235 mg/kg in the EVOOs from Zone 2, 216 mg/kg in the EVOOs from Zone 5, and 204 mg/kg in the EVOOs from Zone 6. The highest concentrations of this secoiridoid were found in EVOOs originating from Zone 7 (756 mg/kg) and Zone 2 (732 mg/kg), while the lowest concentrations were recorded in EVOOs from Zone 3 (1.44 mg/kg) and Zone 2 (7.87 mg/kg). No statistically significant difference was observed between the samples from Zones 1, 3, 4, 5, and 7 (p > 0.05).
In addition, the EVOOs from Zones 2, 5, and 6 showed no statistically significant differences among themselves (p > 0.05).

• Decarboxymethyl oleuropein aglycone (Oleacein)

The EVOOs from Zone 1 exhibited the highest average concentration of oleacein (26.9 mg/kg) and the EVOOs from Zone 5 the lowest (13.0 mg/kg). As for the EVOOs from the other zones, the average concentration of oleacein ranged from 18.1 mg/kg (Zone 6) to 23.6 mg/kg (Zone 3). As shown in Figure 10, the highest concentrations of oleacein were noted in EVOOs from Zone 3 (64.6 mg/kg) and Zone 4 (60.7 mg/kg). The differences between the samples from Zones 1, 2, 3, 4, and 7 were not statistically significant (p > 0.05), and the ANOVA result was the same for the EVOOs from Zones 5, 6, and 7.

• Decarboxymethyl ligstroside aglycone (Oleocanthal)

As shown in Figure 11, the highest average concentration of oleocanthal was found in the EVOOs originating from Zone 1 (16.9 mg/kg). Oleocanthal was detected in low concentrations in the EVOOs from Zone 5, where the average concentration of this secoiridoid was 6.71 mg/kg and the maximum value did not exceed 12.0 mg/kg. ANOVA showed that the EVOOs from Zone 5 presented a statistically significant difference compared to the EVOOs from the other zones (p < 0.05). The EVOOs from Zone 6, which had the second lowest average concentration of oleocanthal (11.2 mg/kg), did not differ statistically only from the EVOOs from Zone 7 (p = 0.11). On the other hand, the samples from Zones 1, 2, 3, 4, and 7 presented no statistically significant differences between them (p > 0.05).
Phenolic Profile

The phenolic profile of VOOs/EVOOs constitutes a significant factor, as differences in the distribution of the biophenols can lead to discriminations between samples. For this reason, the present work was extended to the study of each compound individually. The figures of this section (Figures 12-15) present the % ratio of each compound detected in the EVOOs to the overall phenolic content, for the four main categories of biophenols (secoiridoids, phenolic alcohols, lignans, and flavonoids). It was considered appropriate to study the percentage of each compound, rather than its concentration in mg/kg, in order to compare the contribution of each compound to the phenolic content of the EVOOs.

Phenolic Profile-Comparison between the Islands

As for the class of phenolic alcohols, as shown in Figure 12 (% ratio of (a) phenolic alcohols, (b) flavonoids, and (c) lignans to the phenolic content of the EVOOs from the five islands of the North Aegean Region), hydroxytyrosol ranged between 0.32 and 0.59% in the EVOOs of the islands, apart from the samples from Chios, whose percentage of hydroxytyrosol was very low (0.05%). Hydroxytyrosol acetate was also low in the EVOOs from Chios (0.76%). On the other hand, the EVOOs originating from Lesvos and Fournoi exhibited high levels of hydroxytyrosol acetate (2.40% and 2.43%, respectively), followed by the EVOOs from Samos (1.75%) and Ikaria (1.45%). Moreover, tyrosol was lower in the EVOOs from Ikaria and Fournoi (<0.5%) and higher in the EVOOs from Chios (1.04%).
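The percentages discussed throughout this section are simply each compound's share of the summed phenolic content of the same sample. A minimal sketch of that normalization follows, using the Table 3 average concentrations as example inputs for a single hypothetical sample.

```python
# Percentage contribution of each biophenol to the total phenolic content of one
# EVOO sample; the concentrations below are the Table 3 averages, used here only
# as example inputs for a single hypothetical sample.
sample_mg_per_kg = {
    "oleuropein aglycone": 254.0, "ligstroside aglycone": 127.0,
    "oleacein": 20.1, "oleocanthal": 13.9, "1-acetoxypinoresinol": 21.5,
    "tyrosol": 2.53, "hydroxytyrosol": 2.16, "apigenin": 2.64, "luteolin": 1.87,
}
total = sum(sample_mg_per_kg.values())
profile = {name: 100.0 * conc / total for name, conc in sample_mg_per_kg.items()}
for name, pct in sorted(profile.items(), key=lambda kv: -kv[1]):
    print(f"{name:>22s}: {pct:5.2f} %")
```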
Both flavonoids, apigenin and luteolin, were higher in the EVOOs from Chios (1.10% and 0.76%, respectively). Apigenin was also high in the EVOOs from Fournoi (1.14%). The EVOOs originating from Ikaria presented a low ratio of flavonoids (0.46% for apigenin and 0.35% for luteolin), while the EVOOs from Lesvos and Samos were in the middle of the scale for both flavonoids. In all the islands, the percentage of apigenin was higher than that of luteolin. The differences between the islands were slight for the lignans 1-hydroxypinoresinol and syringaresinol; however, as presented in Figure 12, syringaresinol was not detected in the EVOOs originating from Fournoi. Furthermore, 1-acetoxypinoresinol and pinoresinol were higher in the EVOOs from Samos (6.74% and 1.22%, respectively) and lower in the EVOOs from Fournoi (0.91% and 0.66%, respectively).

As for the secoiridoids, according to Figure 13, oleuropein aglycone did not show significant differences between the islands, as it ranged between 47.8 and 54.1%, with the only exception being the EVOOs originating from Chios (40.3%). On the contrary, the EVOOs from Chios presented the highest percentage of ligstroside aglycone (45.0%), whereas the percentage ranged between 23.7 and 30.7% for the other islands. Oleacein and oleocanthal were lower in the EVOOs originating from Chios (1.06% and 1.59%, respectively); the former was higher in the EVOOs from Lesvos (4.94%) and Fournoi (4.96%), and the latter was higher in the EVOOs from Samos (3.81%) and Lesvos (3.49%). Moreover, 10-hydroxy-10-methyl oleuropein aglycone was higher in the EVOOs from Lesvos (1.72%) and lower in the EVOOs from Fournoi (0.28%), whereas methyl oleuropein aglycone was higher in the EVOOs from Fournoi (2.31%) and lower in the EVOOs from Chios (0.48%). In the case of 10-hydroxy oleuropein aglycone and 10-hydroxy decarboxymethyl oleuropein aglycone, the fluctuations between the islands were slight (≤0.25%).

Phenolic Profile-Comparison between the Zones of Lesvos

As presented in Figure 14, the phenolic alcohols hydroxytyrosol and tyrosol fluctuated at similar levels (0.30-0.67%) across the seven geographical areas of Lesvos. The former showed a lower percentage in the EVOOs from Zone 3 and higher percentages in the EVOOs from Zones 1 and 6, whereas the latter exhibited a lower ratio in the EVOOs from Zone 5 and a higher ratio in the EVOOs from Zones 1 and 2. Moreover, hydroxytyrosol acetate was lower in the samples from Zone 4 (1.95%) and higher in the samples from Zone 5 (2.74%) and Zone 6 (2.77%). The EVOOs from Zone 4 presented the lowest percentages of both flavonoids (apigenin 0.56% and luteolin 0.38%); luteolin was also low in the EVOOs from Zone 3 (0.41%). On the other hand, apigenin was higher in the EVOOs from Zone 5 (0.96%) and Zone 6 (1.00%), and luteolin was higher in the EVOOs originating from Zone 2 (0.67%). The differences between the seven zones of Lesvos were not significant for the lignans 1-hydroxypinoresinol and syringaresinol, as shown in Figure 14. 1-Acetoxypinoresinol was higher in the EVOOs from Zone 1 (7.44%) and Zone 2 (6.93%), and very low in the EVOOs from Zone 5 (0.33%). Furthermore, pinoresinol was lower in the EVOOs from Zone 4 and Zone 7 (0.76% and 0.88%, respectively), and higher in the EVOOs from Zone 5 and Zone 6 (1.33% and 1.30%, respectively).
Figure 15 shows that oleuropein aglycone was higher in the EVOOs from Zone 5 and Zone 6 (61.4% and 57.1%, respectively). In addition, the EVOOs from those geographical areas had the lowest percentages of ligstroside aglycone (18.6% and 19.3%, respectively). […] low in the EVOOs from Zone 4 (3.97%) and Zone 7 (4.38%). As for the other secoiridoids, specifically methyl oleuropein aglycone, 10-hydroxy-10-methyl oleuropein aglycone, 10-hydroxy decarboxymethyl oleuropein aglycone, and 10-hydroxy oleuropein aglycone, the differences were slight between the seven zones. The only exception was Zone 5, where the samples exhibited a strikingly high percentage of 10-hydroxy-10-methyl oleuropein aglycone (4.44%) compared to the EVOOs from the other geographical zones of the island of Lesvos.

Conclusions

Nineteen biophenols were determined in the analyzed EVOOs originating from the islands of the North Aegean Region in Greece (Lesvos, Samos, Chios, Ikaria, and Fournoi). The sum of the determined biophenols was calculated to establish a trend between the EVOOs and the geographical origin. The phenolic content of the EVOOs originating from Lesvos ranged between 32 and 1368 mg/kg, between 20 and 1304 mg/kg for those originating from Samos, and between 52 and 1146 mg/kg for the EVOOs originating from Chios. The EVOOs originating from Ikaria also exhibited a high phenolic content, as the sum of the individual quantified and semiquantified biophenols ranged from 380 to 939 mg/kg, while the EVOOs originating from Fournoi presented a lower bioactive content (155-222 mg/kg).
Concerning the four major secoiridoids, the samples from Lesvos, Samos, and Ikaria exhibited higher average concentrations of oleacein and oleocanthal, with EVOOs from Lesvos recording the highest values. Furthermore, the phenolic profile of EVOOs presented differences between the five islands. The EVOOs from Chios exhibited the lowest percentage of the phenolic alcohols hydroxytyrosol and hydroxytyrosol acetate, and the highest tyrosol percentage. The samples from Ikaria presented low percentages of flavonoids. Moreover, the EVOOs from Lesvos exhibited a high percentage of 10-hydroxy-10-methyl oleuropein aglycone, while the EVOOs from Fournoi presented a high percentage of methyl oleuropein aglycone and a low percentage of the lignan 1-acetoxypinoresinol. The latter recorded the highest percentage in the case of EVOOs from Samos.

As for the zones of Lesvos, the results showed that the EVOOs from the southern part of the island had a higher bioactive content than the EVOOs from the northern part (Zone 5 and Zone 6). Furthermore, the EVOOs from Zone 5 and Zone 6 exhibited the highest percentages of oleuropein aglycone and the lowest percentages of ligstroside aglycone, compared to the other geographical zones. On the other hand, the EVOOs from Zone 4 presented low percentages of the flavonoids luteolin and apigenin, whereas the percentage of 1-acetoxypinoresinol was high in the EVOOs from Zone 1 and Zone 2, and very low in the EVOOs from Zone 5. In conclusion, the observed differences in the phenolic content and profile can lead to discriminations of EVOOs both between the islands of the North Aegean and between the different locations of Lesvos. Therefore, taking into account the results of the present study, it is strongly supported that the geographical origin affects the levels of biophenols in EVOOs.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/foods10092102/s1, Figure S1: Types of phenolic compounds (%) in EVOOs between the (a) islands; (b) zones of Lesvos, Table S1: Gradient program of mobile phase, Table S2: The main olive tree varieties and the number of EVOO samples in the five islands of the North Aegean Region, Table S3: Number of samples in each geographical zone of the island of Lesvos, Table S4: QC results, Table S5: List of target compounds, Table S6: List of suspect compounds, Table S7: Standard calibration curve and coefficient of determination (r²) for tyrosol, hydroxytyrosol, oleuropein, apigenin, luteolin, and pinoresinol.

Funding: This research was funded by the North Aegean Region through the research program "Novel wide-scope research for the promotion of N. Aegean olive oil and olive products through the designation of their unique characteristics and bioactive content" (Funding number: 14704).
A quantitative approach on environment-food nexus: integrated modeling and indices for cumulative impact assessment of farm management practices Background Best management practices (BMPs) are promising solutions that can partially control pollution discharged from farmlands. These strategies, like fertilizer reduction and using filter strips, mainly control nutrient (N and P) pollution loads in basins. However, they have secondary impacts on nutrition production and ecosystem. This study develops a method to evaluate the cumulative environmental impacts of BMPs. It also introduces and calculates food’s environmental footprint (FEF) for accounting the total environmental damages per nutrition production. Methods This study combines the soil and water assessment tool (SWAT) for basin simulation with the indices of ReCiPe, a life cycle impact assessment (LCIA) method. By these means, the effectiveness of BMPs on pollution loads, production yields, and water footprints (WFs) are evaluated and converted as equivalent environmental damages. This method was verified in Zrebar Lake, western Iran. Here, water consumption, as WFs, and eutrophication are the main indices that are converted into equivalent health and ecological impairments. Two methods, entropy and environmental performance index (EPI), are used for weighting normalized endpoints in last step. Results Results showed that using 25–50% less fertilizer and water for irrigation combined with vegetated filter strips reduce N and P pollution about 34–60% and 8–21%, respectively. These can decrease ecosystem damages by 5–9% and health risks by 7–14%. Here, freshwater eutrophication is a more critical damage in ecosystem. However, using less fertilizer adversely reduces total nutrition production by 1.7–3.7%. It means that BMPs can decline total ecological damages and health risks, which threatens nutrition production. FEF presents a tool to solve this dilemma about the sustainability of BMPs. In the study area, a 4–9% decrease in FEF means that BMPs are more environmental friendly than nutrition menacing. Finally, this study concludes that SWAT-ReCiPe with FEF provides a quantitative framework for environment-food nexus assessment. However, due to the uncertainties, this method is recommended as a tool for comparing management strategies instead of reporting certain values. INTRODUCTION Best management practices (BMPs) are promising solutions for controlling pollution discharged from non-point sources (NPS), including agriculture (Liu et al., 2017). Phosphorous (P) and nitrogen (N) compounds are typical pollutants transported in basins from farmlands (Hanief & Laursen, 2019). Water quality degradation and eutrophication are the possible results of these emissions. Filter strips (FS) (Merriman et al., 2019), fertilizer reduction (FR) (Geng, Yin & Sharpley, 2019), no-tillage farming (Plunge, Gudas & Povilaitis, 2022), tracing and fencing (Sheshukov et al., 2016), constructed wetlands (Li et al., 2021b), straw mulching (Jang, Ahn & Kim, 2017), or changing crop patterns and land-uses (LUs) (Plunge, Gudas & Povilaitis, 2022) are some examples of BMPs. Although these strategies have positive regional impacts on pollution transport (Stubbs, 2016), they may affect other ecosystems (Čuček, Klemeš & Kravanja, 2015), farmers' income (Imani et al., 2017), and even nutrition production. Therefore, assessing the effectiveness of BMPs needs detailed studies in basin scale with combined methods. 
Some research has recently evaluated the effectiveness of BMPs. In the Great Lakes, Merriman et al. (2019) concluded that multiple BMPs combined with FS can reduce nutrients and sediment more significantly than single BMPs. Here, total phosphorus (TP) and total nitrogen (TN) removal could reach about 20% (Merriman et al., 2019). The results of Liu et al. (2019) similarly showed that combined BMPs with FS are more effective on reducing pollution load than individual BMPs. They recommended that modeling tools for cost-effective analysis can create a more sustainable framework for water quality improvement in agricultural basins (Liu et al., 2019). Imani, Delavar & Niksokhan (2019) also recommended to set priorities for BMPs in critical areas according to their TN and TP reduction and related costs (Imani, Delavar & Niksokhan, 2019). Modeling with field surveys verified that BMPs can reduce 25% nutrient pollution in a basin while sediment entrapment in the riparian zone can develop organic nutrient removal to about 60%. FS can solely act as an effective BMP with 20% TP removal (Sheshukov et al., 2016). Nonetheless, BMPs may reduce the runoff and adversely concentrate pollutants downstream (Jang, Ahn & Kim, 2017). Farmers may also be reluctant to apply BMPs due to economic reasons. Therefore, an integrated knowledge about farm characteristics and the environmental attitudes of farmers is required before adopting BMPs (Liu & Brouwer, 2022). Dai et al. (2018) proposed a combined model to create a series of BMPs placement schemes based on nutrients reduction and related costs. They concluded that nutrient load discharged into the lake and tributaries could be decreased to an acceptable level with a proper tradeoff between costs and risks (Dai et al., 2018). In a brief, recent studies imply that pollution reduction, applicability, and economic issues are the main concerns in BMP assessment. Nonetheless, their probable impacts on larger ecosystems and nutrition production require further evaluation. Most of the literature has shown that the soil and water assessment tool (SWAT) was the main technique for integrated basin modeling. By this tool, the direct impacts of BMPs can be evaluated in hydrological response units (HRUs) and receiving water bodies (Jamshidi, Imani & Delavar, 2020). However, this simulation tool cannot account both direct and indirect cumulative environmental impacts (CIAs). The question of which BMP has the least total impacts on the ecosystem and food production still remains. Life cycle assessment (LCA) has the potential of answering this question through a data inventory that quantifies main ecological indices. These indices can translate data into ecological damages. It provides a framework for comparing strategies quantitatively based on their CIAs. For example, Xu et al. (2017) compared the CIAs of different low impact development BMPs as treatment systems (Xu et al., 2017). Comparing the sludge-dredging methods in Baiyangdian Lake, northern China (Zhou et al., 2021), treatment systems for Yangtze River rehabilitation (Yao et al., 2021), and sea water desalination (Mannan et al., 2019) are other applications of LCA in water quality management. Eutrophication is also a critical subject among the midpoint indices in life cycle impact assessment (LCIA) (Cosme & Hauschild, 2017;Rosenbaum et al., 2017). 
TN and TP concentrations in water directly affect this problem (Chapra, 2008), while other features, such as water consumption, are also effective on freshwater ecosystems, aquatic habitat or eutrophication intensification (Damiani et al., 2019). Since it is difficult to evaluate the eutrophication potential of agricultural systems, a combined method is required for the CIA of nutrients release from farmlands (Ortiz-Reyes & Anex, 2018). The main purpose of this study is to develop a combined method based on SWAT-LCIA to evaluate and compare the CIAs of BMPs in a basin. The developed framework also introduces a state-of-the-art index for quantifying the food environmental footprint (FEF). This approach accounts related environmental damages of nutrition production in a basin and develops environmental perspective in water-food nexus problems. For these purposes, a lake basin is used as the study area to verify the proposed methodology. Here, the SWAT outputs are the main inventory of related midpoint indices in ReCiPe, a developed LCIA method (Huijbregts et al., 2016). Health, eutrophication, water consumption, aquatic and terrestrial ecosystems are the affected environments. Their CIA is normalized afterwards and evaluated by endpoint indices as ecological and health damages in ReCiPe. In addition, this research considers WF as the driving index for water consumption in LCIA and uses two different methods in calculations for weighting indices. Methodology This study follows a 4-step combined methodology. In the first two steps, data is gathered and a basin is simulated by the SWAT model with the perspective of water quality and quantity. Here, the effectiveness of different BMPs on exporting pollution loads (kg/ha), pollutants concentration in lake (mg/L), crop production yields (ton/ha), nutrition production (Kcal/yr), and water footprint (m 3 ) are evaluated. Thus, the modeling provides a quantitative framework for further environmental-food analysis in basin. In this study, the first two steps, except the nutrition production, follows the previously developed SWAT model by Jamshidi, Imani & Delavar (2020). In the third step and to quantify the CIAs of BMPs, a combined method is developed to convert the modeling results into equivalent environmental damages. An excel-base LCIA method according to ReCiPe (2016 v1.1) is used including related characterization midpoints (water consumption and eutrophication) and endpoints (human health and ecosystem damages) with normalization coefficients. In this step, some new approaches are considered to develop LCIA analysis. For example, the embodied water consumption, directly analyzed by the SWAT model (WF), is introduced as a reliable water consumption index for the LCIA of food crops. This is due to the fact that food crop's WF includes both consumed (blue and green) and polluted (grey) water. These items fit to life cycle assessment of available water in the ecosystem (Bigdeli Nalbandan et al., 2022). In addition, this step considered two different weighting approaches for integrating health and ecosystem damages (endpoints) as a single index. The entropy analysis uses a mathematical equation to calculate the weights of health and ecosystem, while environmental performance index (EPI) applies predefined weights for the two endpoints. In final step, a state-of-the-art index is introduced as "environmental footprint of food production" (FEF) that calculates the cumulative environmental damages of nutrition production in basins. 
This new index is applicable for quantifying the equivalent environmental damages related to food production. It also compares the cumulative impacts of BMPs and farm management practices by considering different perspectives like WF, pollution emissions, crop nutrition, and ecosystem protection. The main contribution of this research is in its combined method, particularly the third and fourth steps. Here, an environment-food nexus analysis compares the cumulative impacts of BMPs in a basin. The methodology steps are illustrated in Fig. 1. Zrebar Basin, western Iran, was chosen as the study location to verify the method. It doesn't mean this approach is developed for a specific basin. On the contrary, the SWAT-ReCiPe is applicable in any basin for comparing farm management strategies. Study area The proposed methodology is verified in Zrebar Lake basin, western Iran. Zrebar basin encompasses 90 km 2 including 20 km 2 of irrigated and rain-fed farmlands (22%). Its lake meets eutrophication problem mainly due to the agricultural discharges, particularly irrigated farmlands (Imani, Delavar & Niksokhan, 2019). Main rain-fed (RF) crops in this area are wheat, barley, grape and peas with average nutrition values of 3,640, 3,540, 670 and 420 cal/kg, respectively. The irrigated crops include tomato, tobacco, alfalfa, apple with average nutrition values of 180, 0, 230, and 520 cal/kg, respectively in addition to irrigated wheat and barley. Figure 2 shows the dominant LUs in the study area with its geographical conditions. Simulation-calibration In the proposed methodology, the SWAT model is used for basin simulation before accounting environmental damages and footprints of agricultural productions. This model can simulate complicated systems by considering management practices in farmlands, interactions between water quality and quantity, pollution transport, and production yields (Abbaspour et al., 2015;Arnold et al., 2012b;Rivas-Tabares et al., 2019). Therefore, required data such as topography, soil properties, LU type, management practices, and weather/climate were inputted to the model (Table 1). The basin was split into 26 sub-basins and 1,100 HRUs. This model was calibrated and validated based on available data (2006-2013) of monthly lake inflow, nitrate and phosphate concentrations simultaneously. Production yields and evapotranspiration rates were also controlled with the observation data (Jamshidi, Imani & Delavar, 2020). Table 2 outlines the calculated regression coefficient (R 2 ) and RMSE-observations standard deviation ratio (RSR) in the calibrated model. It is noteworthy that the main idea of this research is to develop an integrated method for accounting environment-food nexus. Accordingly, authors used the outputs of the already calibrated SWAT model previously developed for BMP and WF assessment in the study area (Jamshidi, Imani & Delavar, 2020). Thus, simulation-calibration details are skipped here as they can be fully retrieved in the cited reference. BMP scenario This study uses the SWAT outcomes for BMP analysis in three scenarios as defined in Table 3. Base is the scenario without using any BMPs. In BMP1, the application of fertilizers, manure and chemical, and water for irrigation are reduced 25% for farmlands. In BMP2, the reduction equals 50%. In both BMP scenarios, FS is assumed to be implemented in the vicinity of lake. Slim FS represents 10-12 m width, while moderate FS has 20-25 m width. All scenarios are analyzed by the SWAT model in the same period from 2007-2013. 
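As a side note on the calibration statistics reported in Table 2, the two metrics can be reproduced from paired observed and simulated series with a few lines of code. The function below is only a generic sketch of the usual definitions (R2 as the squared Pearson correlation and RSR as RMSE divided by the standard deviation of the observations); it is not part of the SWAT calibration workflow used in the study.

```python
import math

def calibration_metrics(observed, simulated):
    """Return (R2, RSR) for paired observed/simulated values.

    R2 is the squared Pearson correlation coefficient; RSR is the RMSE
    divided by the standard deviation of the observations."""
    n = len(observed)
    mean_obs = sum(observed) / n
    mean_sim = sum(simulated) / n
    cov = sum((o - mean_obs) * (s - mean_sim) for o, s in zip(observed, simulated))
    var_obs = sum((o - mean_obs) ** 2 for o in observed)
    var_sim = sum((s - mean_sim) ** 2 for s in simulated)
    r2 = cov ** 2 / (var_obs * var_sim)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)
    rsr = rmse / math.sqrt(var_obs / n)
    return r2, rsr

# Made-up monthly lake inflows (m3/s), for illustration only:
obs = [1.2, 0.8, 2.5, 3.1, 1.9, 0.7]
sim = [1.0, 0.9, 2.2, 3.4, 1.7, 0.8]
print(calibration_metrics(obs, sim))
```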
Water footprint

The WFs of agricultural productions are calculated by the standard method and include the three main elements of green, blue and grey water (Franke, Boyacioglu & Hoekstra, 2013; Hoekstra et al., 2011). It should be noted that WFs calculate the direct embedded water of farmlands and exclude the indirect water embodied in further processing of agricultural productions. In these equations, GnWF, BWF and GWF are the green, blue and grey water footprints (m3), respectively. ETa refers to the evapotranspiration from soil and vegetation when there is no irrigation (mm). ETb includes the total evapotranspiration during irrigation (ETb > ETa). Thus, the SWAT model simulates evapotranspiration twice, with and without irrigation. It uses the climatic data of minimum and maximum daily temperature with precipitation, as mentioned in Table 1. Afterwards, it estimates the actual evapotranspiration for crops by the Hargreaves equation (Kisi, 2007; Majidi et al., 2015). L is the exported pollution load (ton/ha) of pollutant i to the receiving water body. In the modeling, the output.hru file shows the pollution loads for each HRU and sub-basin. L is the net pollution load transported from LUs into the 26th sub-basin, the Zrebar Lake in this study (Arnold et al., 2012a). Cmax is the maximum allowable concentration of pollutants. Cnat equals the pollutant concentration in the receiving water under the condition of no human interference. Here, the Cmax values of TN and TP are assumed constant according to the global limits for controlling the trophic state of lakes (Jamshidi, 2021) and equal 1.5 and 0.035 mg/L, respectively. The Cnat values of TN and TP are also assumed to be 0.4 and 0.01 mg/L, respectively (Jamshidi, Imani & Delavar, 2022).

Environmental impact assessment

The quantification method of environmental damages in the basin is compatible with the indices of LCIA. In the current research, LCIA characterization coefficients are derived according to the ReCiPe method, which was previously developed by some collaborations in Europe (Huijbregts et al., 2017). In this method, normalized data at the European and global level are available for 16 midpoint and three endpoint indices. In later updates, ReCiPe considered several conversion coefficients based on the global scale, instead of the European scale. However, it preserved the possibility of using these coefficients on the continental and country scales. Another feature of ReCiPe is to expand environmental damages for evaluating the impacts of water consumption on human health, aquatic and terrestrial ecosystems (Huijbregts et al., 2017). However, the current study proposes to use WF for accounting the water consumption of food crops in LCIA. This is due to the ability of WF to calculate water consumption including both water quality and quantity. In this method, all effective environmental factors derived from the SWAT model are initially converted into equivalent units. Table 4 illustrates the eutrophication midpoint coefficients that convert NO3, NO2, NH3 and PO4 to the equivalent environmental damages. The average WFs of crops are considered (m3) for the water consumption midpoint in aquatic, terrestrial and marine ecosystems. Equation (5) shows how the conversions are carried out. Q is the midpoint index, T represents the output of the SWAT model such as water footprint or pollutant concentration, M is the conversion coefficient, and j is the environmental component such as aquatic, terrestrial, and marine.
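The numbered equations referred to above are not legible in this version of the text. Based on the definitions just given and the standard water footprint methodology of Hoekstra et al. (2011), they presumably take forms similar to the reconstruction below; the cultivated area A (ha) and the factor of 10 (converting a water depth in mm over one hectare into m3) are assumptions of this sketch rather than symbols defined in the source.

$$\mathrm{GnWF} = 10\, ET_{a}\, A \qquad (1)$$
$$\mathrm{BWF} = 10\,(ET_{b} - ET_{a})\, A \qquad (2)$$
$$\mathrm{GWF}_{i} = \frac{L_{i}}{C_{max,i} - C_{nat,i}} \qquad (3)$$
$$\mathrm{WF} = \mathrm{GnWF} + \mathrm{BWF} + \mathrm{GWF} \qquad (4)$$
$$Q_{j} = \sum_{i} T_{i}\, M_{i,j} \qquad (5)$$

In this reading, Eq. (5) simply states that each midpoint index Q_j is a characterization-weighted sum of the relevant SWAT outputs T_i.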
By this equation, it is possible to calculate the equivalent environmental effects of each pollutant in the life cycle period of the product or activity. It should be noted that these coefficients represent average values. It means that they do not need supplementary conversion coefficients for shallow or deep waters as they are free from adjustment for the conditions with different vegetation or trophic state. Moreover, pollution discharge to freshwater has indirect impacts on other ecosystems in long-term. Thus, marine impacts are also considered in calculation even the pollution is not directly discharged to the sea. Since the midpoint indices are calculated based on equivalent units, such as kgN-eq or m 3 water consumed, it is necessary to accumulate these environmental impacts with different units under a single index. This is the most challenging step in conventional CIA. ReCiPe uses equivalent damage-based indices for integrating midpoints into endpoints by Eq. (6). Here, the calculated midpoint indices (Q) are converted into endpoint damage-based indices (D) according to conversion coefficient of E (see Table 5). Here, human health and ecosystem (non-human) damages are two endpoint indices. The former is based on disability-adjusted life years (DALY) and the latter is based on probable number of harmed species in year (species.yr). DALY represents the equivalent years of human life lost by death or being disabled due to illness caused by existing pollutants in the environment. On the other hand, the unit of measuring ecosystem damage is the total number of species lost over time. Table 5 shows the conversion coefficients that turn each equivalent midpoint indices into the two endpoints. ReCiPe model also recommends that endpoints (D) should be normalized by specific coefficients that turn the calculated damages into dimensionless indices per person (Sleeswijk et al., 2008). Normalization and weighting Calculated endpoints are normalized by Eq. (7) on a global scale based on reference coefficients (Table 5). They are finally aggregated according to their weights by Eq. (8). In this study, entropy and EPI are the weighting methods of normalized endpoints. where, C is the annual environmental damage per person, W is the weight of each endpoint, N represents the normalization value and R is the normalized endpoint. Weights can be calculated based on different mathematical methods, such as entropy or fuzzy (Chen et al., 2019; Zeng, Luo & Yan, 2022), or based on expert opinions and references (Chen et al., 2022). In this study, EPI determines health and ecosystem weights as 0.4 and 0.6, respectively (Hsu & Zomer, 2016), whereas entropy method (W En. ) calculates the weights of endpoints through a probabilistic function as Eq. (9). In which, t is the number of available data. In entropy, factors with more data dispersion gain higher weights (Imani, Delavar & Niksokhan, 2019). Here, the weights of endpoints (R) are evaluated based on C variations from 2007-2013 in each BMP scenario. Accordingly, the ecosystem and health endpoints weigh 0.44 and 0.56, respectively in the entropy method. Environmental-food index This study introduces a new index for food and nutrition production in farmlands. It is quantified based on the environmental damages calculated by the SWAT-ReCiPe. This index quantifies the CIA per food production in any area or BMP as Eq. (10). In this equation, FEF is a dimensionless index that represents the CIA of food production. 
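As with Eqs. (1)-(5), the expressions for Eqs. (6)-(11) did not survive extraction; the following is a tentative reconstruction from the verbal description above (for Eq. (11), from the definitions of TCal, B and P given in the next paragraph), with the entropy weights written in their common textbook form rather than quoted from the source.

$$D_{k} = \sum_{j} E_{j,k}\, Q_{j} \qquad (6)$$
$$R_{k} = \frac{D_{k}}{N_{k}} \qquad (7)$$
$$C = \sum_{k} W_{k}\, R_{k} \qquad (8)$$
$$p_{k,y} = \frac{R_{k,y}}{\sum_{y=1}^{t} R_{k,y}}, \quad e_{k} = -\frac{1}{\ln t}\sum_{y=1}^{t} p_{k,y}\ln p_{k,y}, \quad W_{k}^{\mathrm{En.}} = \frac{1 - e_{k}}{\sum_{k}(1 - e_{k})} \qquad (9)$$
$$\mathrm{FEF} = \frac{C}{S} \qquad (10)$$
$$S = \frac{T_{Cal}}{B \cdot P} \qquad (11)$$

Here k indexes the two endpoints (human health and ecosystem), y indexes the t years of available data, and the remaining symbols are as defined in the text.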
In other words, FEF is the environmental footprint of nutrition production. It can be calculated by the proposed method for comparing major environmental concerns in food production, including the water-food nexus. A low FEF (~0) means that the strategies used for food production are rather clean, while a higher FEF (>1) indicates their destructive condition. C was defined earlier and denotes the environmental damages (CIA), and S is calculated by Eq. (11), in which TCal is the daily total nutrition (calories) of food production in the study area, B equals the malnutrition baseline of humans, assumed to be 2,000 cal/day, and P is the global population (7.75 billion), used to convert and normalize S per person on the global scale.

RESULTS AND DISCUSSIONS

SWAT outcomes

The basin simulation by the SWAT model could calculate the annual pollution loads exported by HRUs in the different management scenarios. Figures 3 and 4 show the cumulative N and P loads discharged by farmlands in the three scenarios. Here, the annual variations are due to (1) the precipitation variation influencing pollution transport from RF farms, and (2) temporary water transfers from upstream for irrigation development. The average N pollution of all irrigated and RF farms ranges between 1,176 and 3,985 kg yr−1. This value for P is between 20 and 82 kg yr−1. For the 20 km2 farming area, the average export coefficients for N and P are 0.6-2 kg ha−1 yr−1 and 0.01-0.04 kg ha−1 yr−1, respectively. This range implies that nutrient export coefficients in wet years can be 3-4 times greater than in dry periods in the study area. On average for 2007 to 2013, BMP1 can reduce the N and P pollution exported from all agricultural LUs by 33.8% and 7.7%, respectively. BMP2 can improve these removals to 59.9% and 20.9% for N and P, respectively. It implies that the basin response to BMPs is not linear. In addition, P removal requires stricter BMPs than N removal. Yet, nutrient pollution reduction may have different ecological impacts on marine, aquatic and terrestrial systems, which are accounted for through the combined method. BMPs are also effective on crop production yields, WFs and nutrition production (Table 6).

Environmental impacts

For the base scenario, the combined method calculates the environmental midpoint impact (Q) of farming in Zrebar basin as in Fig. 5. It shows that freshwater eutrophication is the most critical item during the study period. The embodied water consumed is also significant for damaging the terrestrial ecosystem and human health. This conclusion remains unchanged in BMP1 and BMP2 despite 25-50% FR (Fig. 6). Since the TP concentration in the lake is the main driver of freshwater eutrophication, consuming less ammonium-based fertilizer can hardly solve the eutrophication problem in the short term in the combined method. On the contrary, controlling erosion and sediment transport by FS from upstream is more efficient for TP mitigation in the lake. Figure 7 shows the cumulative ecological damages. Their values in Zrebar basin are relatively larger than the health problems in all management scenarios. It is noteworthy that human health indices are mostly reliant on toxins and heavy metals. These pollutants were hardly traced in the study area. The results show that farm management strategies can mitigate the average ecological impacts from 1.41E−6 to 1.34E−6 (4.9%) for BMP1 and 1.28E−6 (9.2%) for BMP2. Likewise, these strategies can diminish the human health risk from 2.58E−7 to 2.4E−7 (6.8%) for BMP1 and 2.22E−7 (13.9%) for BMP2.
It means that using 50% less fertilizer with a FS in this area may reduce ecological damages by about 9% and health risks by about 14% in total (Fig. 8). Here, the cumulative impacts are low but not negligible, as they range between 1E−6 and 1E−7 per person. However, these values are meaningless unless they are used as quantitative tools for comparative analysis. Figure 8 summarizes the impacts of BMPs on food production (S) in addition to the normalized environmental impacts (per person) on the ecosystem and health. Since nutrition production is intrinsically a favorable action from an environmental perspective, its related impacts are negative. The overall environmental impact of farming and related management practices is finally calculated by the weighted average of the normalized ecosystem and health damages. The weighting step is carried out with different methods. Since EPI gives higher weights to ecological items, the related results are relatively higher than those of the entropy method. Despite the different weighting approaches, the overall CIA (C) reduction for BMP1 ranges between 5-8%, while it ranges between 10-13% for BMP2. It implies that using strict BMPs may not necessarily yield significant improvement. On the contrary, S is reduced by 1.66% and 3.73% by BMP1 and BMP2, respectively.

(Figure 8: The impacts of management practices on endpoints and food production, on average for the study period 2007-2013. Agricultural production in this area has adverse impacts on ecosystem and health, while having constructive impacts on food and nutrition values. BMPs reduce adverse impacts but lose some nutrition production.)

It points to an important conclusion: although farm management practices may reduce environmental damages, they can adversely reduce nutrition production. This conclusion highlights an environment-food nexus index for a more comprehensive understanding of management impacts. Figure 9 draws the environmental footprint of nutrition production (FEF) in Zrebar Basin. Here, the conventional pattern (base scenario) quantitatively generates 0.61 (entropy) and 0.78 (EPI) environmental impacts. In other words, 0.61-0.78 environmental units would be damaged for one unit of nutrition production. This is a footprint mainly accounted for by the impacts of consumed water and eutrophication in the study area, originated by agriculture. Using BMP1 and BMP2 can reduce FEF by 6.5-9.1% (entropy) and 4-6.4% (EPI), respectively. It means that 50% FR combined with FS (BMP2) can reduce FEF by 6.4-9.1% in Zrebar basin. Obviously, this new index is more helpful for policy makers than the conventional approach of focusing only on pollution reductions in a basin. For example, this index can present criteria to compare two alternatives of implementing vegetated FS or changing crop patterns in a basin. The first alternative only reduces pollution loads and consequently environmental impacts, while the second option emphasizes nutrition improvement despite pollution discharges.

DISCUSSION

What makes this research stand out from the previous literature is the combination of SWAT-ReCiPe for accounting the new damage-based index of FEF. This idea provided a quantitative solution to include water quality issues within water-energy-food nexus problems (Heal et al., 2021). It is verified in a lake basin with different irrigated and rain-fed farming. Results showed that N and P pollution removal by BMPs in Zrebar Lake basin varies between 34-60% and 8-21%, respectively.
These ranges are comparable with the recent literature.

(Figure 9: Average environmental footprint of food production under different BMPs and weighting methods for the study period 2007-2013. The overall negative and positive impacts of agricultural production on the environment and food production are combined within FEF. The two weighting methods present different values for this footprint.)

The current study also implied that BMPs may have secondary impacts due to long-term terrestrial and aquatic pollution transport, water consumption, or changing LUs. A similar conclusion has recently been reached by McAuliffe, Zhang & Collins (2022). They highlighted that direct short-term water quality rehabilitation, such as TN and TP reduction, may not necessarily end in a sustainable strategy. From the perspective of integrated environmental management, on-farm intervention strategies have side effects that should be considered in decision-making (McAuliffe, Zhang & Collins, 2022). The proposed method can more or less consider these impacts via LCIA. However, the midpoint indices can differ depending on the basin specifications. For example, water consumption and eutrophication are the main environmental issues in the current study. In different regions, other environmental issues like global warming, LU change, and even air pollution have related indices in ReCiPe (Huijbregts et al., 2017). Variety in the midpoint indices does not limit the application of SWAT-ReCiPe and FEF. On the contrary, its multidisciplinary specification extends its purpose to calculating a broader range of environmental damages for integrated monitoring and problem solving. For example, it is conventionally believed that hydropower systems in water reservoirs are a renewable energy source and environmentally friendly. Nevertheless, it has recently been noted that these systems can be significant sources of greenhouse gas emissions due to their long-term secondary limnological and ecological impacts (Gemechu & Kumar, 2022). Čuček, Klemeš & Kravanja (2015) recommended the LCA method for environmental assessment because of the possibility of using footprints, such as the carbon footprint, biodiversity footprint, ecological footprint, etc. (Čuček, Klemeš & Kravanja, 2015). These indices can help account for the cumulative environmental footprint of production within LCIA, similar to the method used for WF in this study. It is noteworthy that the recent literature has also focused on developing social LCA indices (Bouillass, Blanc & Perez-Lopez, 2021; Siebert et al., 2018). This perspective aims at integrating social with environmental-based LCA indices. In other words, the safe and healthy living conditions of farmers, their employment, social fairness, and public commitment to sustainability would also be important in decision-making (Kühnen & Hahn, 2017). Thus, we recommend future studies to assess the cumulative impacts of BMPs based on the combined social-environmental indices of LCIA before FEF evaluation. This study also applied the SWAT model for basin simulation. It could present a reliable framework for integrated LCIA, WF and water quality assessment under different BMPs. It is noteworthy that using WF in LCIA is more efficient than using typical water consumption. There are two reasons: (1) WF is the embodied water in production, so it is compatible with other LCA indices, as both consider indirect impacts; (2) WF includes the equivalent water pollution, in the form of GWF, within the consumed water.
GWF is an exceptional index for LCIA, as it bridges water pollution to the water that becomes unavailable for health or ecosystem consumption. It means that GWF is the only functional index that enables LCIA to include the indirect impacts of water pollution on destroying water resources. Recent studies have further developed the understanding of GWF. In the new definition, regional ecological impairments are decisive for GWF calculations (Jamshidi, Imani & Delavar, 2022). The interactions between water resources and ecological assessment are not limited to WF assessment. A recent study used environmental Kuznets curves with the SWAT model, in which researchers assessed the relationship between environmental degradation and developing agriculture (Golzari et al., 2022). However, the current study could develop a quantifiable method for integrated water and environmental assessment based on the WF, LCIA and FEF indices. The proposed method can also be applicable to climate change studies; previous work showed the applicability of the SWAT model for water accounting during both wet and dry periods of climate change. On the other hand, the effectiveness of 171 BMPs in reducing TN and TP was previously analyzed by an LCA approach (Chiang et al., 2012). Despite the abilities of the SWAT model for basin water quality modeling (Bigdeli Nalbandan et al., 2022; Li et al., 2021a), this method has limits on the accurate simulation of some pollutants. Toxins, heavy metals, and microbial pollution require accurate simulation, as their impacts on the health and ecological midpoint indices are critical in LCIA. However, they are sensitive to contamination transport and environmental conditions. Erosion, sediment adsorption and re-suspension, biomass accumulation, and volatilization are different transport processes that increase the uncertainties of both field samples and simulated results (Du, Shrestha & Wang, 2019; Ouyang et al., 2018). Further studies can focus on finding proper tools for simulating these pollutants in combination with LCIA. Since uncertainty is the main drawback of the proposed method, the authors recommend it as an applicable tool for comparing the effectiveness of different strategies with respect to their CIAs and FEFs.

CONCLUSIONS

Pollution control is only one pillar of BMPs' sustainability assessment. Their impacts on larger ecosystems are also crucial for integrated decision-making. This study developed a method that combines basin simulation and life cycle impact assessment (LCIA). The soil and water assessment tool (SWAT) simulates the basin, while ReCiPe uses the modeling results as an inventory for LCIA. This approach has some advantages for the sustainability assessment of BMPs: it is a quantitative tool based on various environmental indices; the cumulative environmental impact accounts for possible aquatic, terrestrial and marine impairments, which simplifies integrated evaluations and the comparison of BMPs; and it is flexible enough to include new or integrated indices. The food environmental footprint (FEF) is a state-of-the-art index that quantifies the total environmental damages of one unit of nutrition production. In a nutshell, FEF can add an environmental footprint to water-food-energy nexus problems. In the study area, fertilizer reduction and filter strips were effective in controlling nutrient pollution without notable negative impacts. BMPs reduced FEF and the water footprint (WF) and improved the eutrophication problem. However, uncertainties were the main limits and drawbacks. These uncertainties are mainly related to the LCIA coefficients and the modeling of pollution transport.
Thus, this idea is recommended as a tool for comparing strategies instead of reporting certain results. Future studies can focus on upgrading this method; developing new indices, variable midpoints and footprints, as well as social indices, are some possible research areas.

ADDITIONAL INFORMATION AND DECLARATIONS

Funding

The authors received no funding for this work.
Uniform Cu/chitosan beads as a green and reusable catalyst for facile synthesis of imines via oxidative coupling reaction

A nonprecious metal and biopolymer-based catalyst, Cu/chitosan beads, has been successfully prepared by using a software-controlled flow system. Uniform, spherical Cu/chitosan beads can be obtained with diameters on the millimeter scale and a narrow size distribution (0.78 ± 0.04 mm). The size and morphology of the Cu/chitosan beads are reproducible due to the high precision of the flow rate. In addition, the application of the Cu/chitosan beads as a green and reusable catalyst has been demonstrated using a convenient and efficient protocol for the direct synthesis of imines via the oxidative self- and cross-coupling of amines (24 examples) with moderate to excellent yields. Importantly, the beads are stable and could be reused more than ten times without loss of the catalytic performance. Furthermore, because of the bead morphology, the Cu/chitosan catalyst has greatly simplified recycling and workup procedures.

Introduction

The development of green and effective heterogeneous catalysts has long been a focus for many researchers due to their sustainable advantages in terms of recyclability and waste reduction. One of the key efforts in the field is to stabilize metal active centers using supporting materials such as inorganic oxides or organic polymers. In particular, biopolymers have emerged as promising catalytic supports because they are cheap, biodegradable, and non-toxic. Chitosan is the N-deacetylated derivative of chitin found in crustacean shells and exoskeletons of insects. The structure of chitosan contains hydroxyl and amino functional groups, rendering it useful for metal coordination as a supramolecular ligand.1 In addition, the solubility of chitosan provides flexibility for casting into powders, fibers, films, and beads.2-4 Transition metal functionalized chitosan has recently received tremendous attention for heterogeneous catalysis applications.5 Even though the use of heterogeneous catalysts in the liquid phase can aid considerably in the separation and reuse of the catalysts, the recovery process by filtration or centrifugation can be labor-intensive, and the loss of catalyst mass during the separation is inevitable for nanoscale particles. Therefore, heterogeneous catalysts in the form of pellets or beads are preferred, especially for industry. Millimeter-sized chitosan beads can be readily prepared by manually dropping an acidic chitosan solution into a basic solution. However, this procedure limits practical application because of the difficulty in scaling up, the low production rate, and the challenge of controlling a narrow size distribution and uniform shape. A flow system with computer control should be able to circumvent these problems. Chitosan-incorporated copper species have been reported as excellent catalysts for reactions such as the N-arylation of amines,6 borylation of α,β-unsaturated acceptors,7 C-S and C-N coupling reactions,8-10 and azide-alkyne cycloaddition.11-13 In these reports, chitosan was functionalized to create coordination sites for copper centers, and the resulting chitosan-supported copper catalysts were in the form of powders. Therefore, it is of interest to prepare Cu/chitosan beads by employing the high-precision flow rate generated by a flow system.
Recently, we have reported a study on the development of a simple reversible-flow method comprising a reversible-flow syringe pump with a 3-port switching valve and a holding coil for the preparation of micron-size Cu/chitosan beads.14 A uniform distribution of size and shape of the catalyst beads with a good production rate could be achieved. Thus, this preparation method and the obtained Cu/chitosan beads deserve further investigation. Imines are not only crucial intermediates for the synthesis of organic compounds exhibiting biological activities, but can also be used as dyes, fragrances, catalysts, and polymer stabilizers.15,16 Conventionally, an imine can be synthesized from the condensation of a primary amine and a carbonyl compound in the presence of an acid catalyst. This acid-catalyzed method provides good yields, but often requires harsh reaction conditions and prolonged reaction times. Moreover, carbonyl compounds, especially aldehydes, are reactive and not easy to handle or store. The oxidative coupling of primary amines is one of the promising alternatives to synthesize various imine products.17 Homogeneous and heterogeneous catalysts based on V,18 Mn,19,20 Co,21,22 Ru,23 Pd,24 Ir,25 Au,26-28 and Ce29 have been proposed to catalyze this reaction. Over the past decades, researchers have been very much interested in copper-based catalysts for the oxidative coupling of primary amines.30-44 However, many problems have not been resolved effectively. For example, the reactions require the use of additives, harsh reaction conditions, long reaction times, and complicated catalyst preparation. Therefore, it is of immense interest to develop a new heterogeneous copper-based catalyst that is efficient, convenient, cheap, eco-friendly, and reusable for the oxidative coupling of amines to form a variety of imine compounds. For these reasons, our research effort has been focused on the development of Cu/chitosan as a simple, eco-friendly, and inexpensive heterogeneous catalyst. Herein, we report an automated system to prepare stable Cu/chitosan beads, and the preparation conditions were examined. These Cu/chitosan beads were well characterized and employed as an efficient and recyclable heterogeneous catalyst for the oxidative coupling of various amines. The beads could be easily separated from the reaction medium by simple decantation, and the catalyst can be reused at least ten times with no significant loss of activity. This convenient approach for the preparation of catalyst beads could be beneficial to both synthetic chemistry and chemical industrial processes.

Preparation of Cu/chitosan beads

Chitosan (170 mg) was dissolved in 0.15 M HCl (10 mL), and the solution was stirred at room temperature for 30 min. A 2 mL aqueous copper solution (0.25, 0.50, or 0.75 M of Cu(OAc)2, CuCl2, CuO, or CuI) was gradually added to the acidic chitosan solution, and the obtained viscous solution of Cu/chitosan was stirred for 45 min to allow complexation between the copper species and chitosan. The freshly prepared Cu/chitosan solution was first drawn into the holding coil by the syringe pump operating in the reverse-flow direction. The Cu/chitosan solution was then slowly pushed by the syringe pump into an aqueous NaOH bath, leading to the formation of Cu/chitosan hydrogel droplets (Fig. 1). The computer-controlled sequential operational steps of the reversible-flow system are summarized in Table S1.†
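The step table itself (Table S1) is not reproduced here; purely as an illustration of how such a reversible-flow sequence can be scripted, the sketch below encodes aspirate/dispense steps as data and hands them to a user-supplied pump driver. The Step and run names, the volumes, and the flow rates are hypothetical and are not taken from the study.

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str        # "aspirate" (reverse flow) or "dispense" (forward flow)
    volume_ml: float   # volume handled in this step
    rate_ml_min: float # syringe pump flow rate
    port: str          # 3-port valve position, e.g. "holding_coil" or "NaOH_bath"

# Hypothetical two-step sequence mimicking the draw-then-drop operation:
SEQUENCE = [
    Step("aspirate", 2.0, 5.0, "holding_coil"),  # draw Cu/chitosan solution
    Step("dispense", 2.0, 0.5, "NaOH_bath"),     # push droplets into the NaOH bath
]

def run(sequence, execute):
    """Execute each step with a user-supplied callable (a real pump driver)."""
    for i, step in enumerate(sequence, start=1):
        execute(step)
        print(f"step {i}: {step.action} {step.volume_ml} mL "
              f"at {step.rate_ml_min} mL/min via {step.port}")

run(SEQUENCE, execute=lambda step: None)  # dry run; replace the lambda with hardware calls
```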
Finally, the resulting hydrogel beads were washed five times with DI water, filtered, and dried at room temperature to obtain black Cu/chitosan beads.

(Fig. 1: The flow system with software controller using a reversible-flow syringe pump with a 3-port switching valve and a holding coil for the preparation of Cu/chitosan beads.)

Characterization

The crystal structure was determined by powder X-ray diffraction (XRD) using a Bruker D8 Advance diffractometer with Cu Kα (λ = 1.5406 Å) X-rays for 2θ = 20.0-80.0°. The morphology of the Cu/chitosan beads was examined by scanning electron microscopy (SEM, Hitachi S-2500) operated at 15 kV. Energy-dispersive X-ray (EDX) analysis was employed to obtain elemental mapping using a silicon drift detector coupled with the SEM system. X-ray photoelectron spectroscopy (XPS) was carried out on an Axis Ultra DLD spectrometer at 15 kV, 10 mA, and 150 W. The spectra were calibrated by referencing the C 1s line at BE = 285 eV. Electron spin resonance (ESR) spectra were taken in the X-band using a Bruker ELEXYS 500 instrument. The ESR signals were registered at a microwave power of 20 mW and a modulation amplitude of 4.0 G in the field range of 2500-4000 G with a sweep time of 40 s. Thermogravimetric analyses (TGA, SDT 2960) were carried out in a temperature range of 30-800 °C with a heating rate of 20 °C min−1 under air flow. The elemental composition of the samples was determined by inductively coupled plasma-mass spectrometry (ICP-MS, Perkin-Elmer SCIEX-ELAN 600) with an emission line at 324.75 nm. The samples were digested by nitric acid to a clear solution before the measurement. The elemental calibration curves were prepared in the range of 1-100 ppm.

Oxidative coupling of amines

In a typical reaction, Cu/chitosan beads, amine derivatives, TBHP, hexamethylbenzene as an internal standard, and solvent were added to a reaction tube. For the cross-coupling reaction, another amine derivative was added to the reaction. The mixture was heated at an appropriate temperature for a desired reaction time. Then, the reaction mixture was quickly cooled to room temperature, and the catalyst beads were removed. The liquid mixture was extracted with ethyl acetate (3 × 2 mL), and the volume was adjusted to 10 mL. The obtained solution was analyzed by gas chromatography-mass spectrometry (GC-MS, Agilent Technologies 7890A instrument with 5975C inert XL MSD) and/or 1H nuclear magnetic resonance spectroscopy (1H NMR in CDCl3, Bruker DPX-400) to obtain conversion and yield. In the recycle experiment, the Cu/chitosan beads were reused after washing with ethyl acetate, without drying.

Results and discussion

Preparation of Cu/chitosan beads

Even though at the macroscopic level all resulting Cu/chitosan beads appear black and spherical after drying (Fig. 2), the microscopic morphology of the beads depends on the type of copper precursor used in the preparation, as illustrated by the SEM images in Fig. 3. This is because the viscosity of the Cu/chitosan solution plays a crucial role in the formation of the spherical beads. Under the conditions studied, the beads derived from Cu(OAc)2 have the most spherical shape compared to the others. In addition, the preparation of Cu/chitosan beads by our flow system described in Fig. 1 led to greater uniformity of particle shape with a narrower size distribution than the manual dropping method (0.78 ± 0.04 mm vs. 0.92 ± 0.09 mm) due to the greater precision of flow rate control in the production of the Cu/chitosan hydrogel droplets.
SEM-EDX analysis also confirmed the uniform elemental distribution throughout the bead surface (Fig. 4). In addition to the type of copper precursor, other parameters such as the concentrations of the chitosan solution and the NaOH bath as well as the flow rate can affect the formation of the beads. An effective concentration of chitosan impacts gel formation. A low concentration of the acidic chitosan solution (i.e., 0.50% w/v) failed to form hydrogel droplets due to its low viscosity. On the other hand, a high viscosity of the chitosan solution (i.e., 2.0% w/v) restricted flow-through of the chitosan solution into the small-i.d. tubing of the flow system. The concentration of the NaOH receiving bath affected the rate of gelation and solidification which, in turn, influenced the size distribution of the beads; a 1 M NaOH solution provided the narrowest size distribution. However, the size distribution of the beads did not change significantly with increasing flow rate, while the mean size of the beads increased with the flow rate. We could control the amount of Cu in the beads by varying the concentration of the Cu solution (0.25-0.75 M). This parameter did not affect the mean particle size. Nevertheless, the Cu content influenced the mechanical strength of the beads. At higher concentrations (>0.5 M) of the Cu solution, the obtained beads became more brittle and could be crushed by a magnetic bar during stirring in a liquid medium. From these results, we propose the most suitable condition to produce Cu/chitosan beads.

Characterization of Cu/chitosan beads

The thermal property of the Cu/chitosan beads was examined by TGA as shown in Fig. 5. The thermal decomposition behavior was similar to that of chitosan, with CuO as the final residue in all cases. The copper contents calculated from the residue weight loss were 12, 12, 14, and 14% wt for the beads prepared from Cu(OAc)2, CuO, CuCl2, and CuI, respectively. These numbers are in good agreement with the values obtained from the ICP-MS technique (11, 11, 13, and 12% wt, respectively). The surface property of the beads was studied by XPS. The results revealed that the copper content on the beads prepared from Cu(OAc)2 was 10% wt. This value is close to that found by the ICP-MS technique (11% wt), implying a good distribution of copper in the bulk and on the surface of the beads. High resolution XPS spectra in the regions of C 1s, N 1s, O 1s, and Cu 2p are displayed in Fig. 6. The C 1s spectrum could be fitted with binding energies of 284.7 eV (aliphatic carbon C-C, C-H, and C=C) and about 286 eV.47 These results suggested that the nitrogen and oxygen atoms of chitosan could chelate to the copper species in the beads. The Cu 2p spectrum revealed two peaks at binding energies of 932.8 eV (Cu 2p3/2) and 938.9 eV (the Auger spectrum of the Cu LMM transition), corresponding to copper(II) species.47 ESR spectroscopic results further confirm the presence of copper(II) species. The ESR spectra of the Cu/chitosan beads exhibit a pattern with g∥ = 2.268-2.273 and g⊥ = 2.090-2.093 (Fig. 7). These g-values are comparable to those reported for the chitosan complexes formed with Cu(NO3)2 (g∥ and g⊥ values of 2.264 and 2.091, respectively).49 XRD patterns reveal the presence of crystalline CuO (JCPDS card no. 45-0937), as illustrated in Fig. 8. Even the sample prepared from CuI exhibited the XRD peaks of CuO. It is possible that the Cu(I) species were oxidized to Cu(II) by ambient air under the conditions employed in the preparation of the beads.
The sample prepared from Cu(OAc)2 afforded the highest crystallinity compared to the others. The better crystallinity, or faster crystallization rate, of CuO in this sample may be responsible for the more spherical appearance of the beads (Fig. 3). To probe the functional groups of the Cu/chitosan beads, all samples were subjected to FTIR spectroscopic studies. The FTIR spectra of all beads display signal patterns similar to that of pure chitosan, as shown in Fig. S2.†

Oxidative coupling of amines by Cu/chitosan beads

Cu/chitosan beads were applied as a catalyst in the oxidative coupling of primary amines to their corresponding imines. Benzylamine was selected as a model starting substrate to investigate the effects of the copper precursor and the reaction parameters including solvent, temperature, and oxidizing agent on the catalytic performance (Scheme 1: the oxidative self-coupling reaction of benzylamine). The oxidative coupling of benzylamine was performed in acetonitrile at 80 °C for 30 min with TBHP as the oxidizing agent. In the absence of the Cu/chitosan catalyst, only 18% of benzylamine could be converted to the imine (Table 1, entry 1). Likewise, if only chitosan beads were present, 18% conversion could be obtained (entry 2). However, in the presence of Cu/chitosan beads, up to 98% yield of the imine could be achieved under the same reaction conditions (entries 3-8). Therefore, the oxidative coupling of benzylamine requires Cu species to catalyze the reaction. The Cu content in the beads could be controlled through the initial concentration of the Cu solution used in the preparation of the Cu/chitosan beads. When the Cu content was higher, a higher imine yield could be obtained (entries 3-5). Interestingly, the copper precursor strongly influences the product yield. With similar Cu loading, the beads prepared from Cu(OAc)2 provided the highest yield of 98%, followed by the ones from CuCl2 (87%), CuO (81%), and CuI (69%), respectively. It is possible that the residual acetate anion acts as a Lewis base and enhances the product yield in addition to supporting the spherical bead structure (Fig. 2). To further confirm the effect of the acetate anion on the catalytic performance, sodium acetate was added to the reaction mixture with Cu/chitosan beads prepared from CuCl2, and the result shows that the imine yield could be raised from 86 to 94% (entries 8 and 9). For comparison, Cu(OAc)2 as well as the in situ mixture of Cu(OAc)2 and chitosan beads were used as homogeneous analogs. Here, under the same reaction conditions and time, full conversion of benzylamine was observed with 17-20% yield of the imine product (Table 1, entries 10 and 11). The low yield in these cases was caused by the formation of a by-product, benzaldehyde, from the hydrolysis of the imine after prolonged heating of the reaction mixture. Thus, the homogeneous Cu(OAc)2 catalyst provided a higher reaction rate for the oxidative coupling of benzylamine. However, the homogeneous catalyst is difficult to separate from the reaction mixture and may cause contamination of the environment. When the Cu/chitosan beads were crushed to powders and then used as a heterogeneous catalyst (entry 12), the reaction rate was also higher than that of the beads. These results are in good agreement with the low surface area (<5 m2 g−1) and total pore volume of the Cu/chitosan beads, suggesting that the reaction occurred mainly on the external surface.
Nevertheless, the workup process for the powder catalyst was much more tedious and time-consuming, as may be anticipated from Fig. 9. The reaction parameters including solvent, temperature, and oxidizing agent can also affect the catalytic performance of the Cu/chitosan beads. Cu/chitosan beads prepared from Cu(OAc)2 with 11% copper content were employed in the evaluation of the reaction parameters summarized in Table S2.† Under solvent-free conditions, the oxidative coupling of benzylamine at 80 °C using TBHP as the oxidizing agent afforded 98% yield in just 10 min, but fracture of the catalyst beads could be observed. Likewise, when using toluene, DMSO, EtOAc, and water as the solvent, the beads were misshapen. Among the studied solvents, the reaction in acetonitrile proceeded smoothly and provided the best catalytic results (Table S2,† entry 3). The effect of the oxidizing agents is also demonstrated in Table S2,† entries 9-11. Under static ambient air and oxygen bubbling, the coupling reaction in acetonitrile at 80 °C was slow, with 12 and 14% yield of the imine at 30 min. Hydrogen peroxide gave a moderate yield (46%) of the imine; however, the beads were shattered into small pieces during the reaction. Then, the reaction was carried out with the mild and versatile oxidizing agent tert-butyl hydroperoxide (TBHP), which improved the reaction yield to 98%, and the spherical shape of the catalyst beads was preserved. The reaction temperature was also varied from 30 to 90 °C, as illustrated in Table S2,† entries 12-14. As expected, the yield could be raised with increasing temperature. Lastly, the effect of the catalyst loading was studied (Table S2,† entries 15-17). Increasing the catalyst loading from 5 to 10 mg, the yield at 30 min rose significantly from 73 to 98%. Further addition of the Cu/chitosan bead catalyst did not improve the yield at 30 min. Therefore, the most suitable reaction condition is to perform the oxidative coupling of benzylamine in acetonitrile at 80 °C for 30 min with TBHP as the oxidizing agent and 10 mg of Cu/chitosan beads derived from Cu(OAc)2 as the catalyst, to obtain a 98% yield of the imine product. To evaluate the versatility of the Cu/chitosan bead catalyst, the scope of substrates was extended. A variety of structurally diverse primary amines efficiently underwent oxidation to provide the corresponding imine products (Table 2). Excellent catalytic activities were observed for benzylamine derivatives (entries 1-6). When a methyl substituent was present at different positions on the aromatic ring, similar results were obtained (entries 2-4), so steric effects on the benzylic ring do not much affect the activity. Furthermore, the oxidative couplings of heteroatom-containing and more sterically hindered amines could proceed selectively to afford the products in 74-92% yield (entries 7-13). The oxidative cross-coupling of various amines is an important process to further expand the oxidative coupling synthetic methodology for the preparation of diverse imines, particularly the unsymmetrical ones. Cu/chitosan beads were subsequently utilized as a catalyst in the oxidative cross-coupling of amines to imines (Table 3). The reaction between benzylamine and aniline gave 95% yield of the target product, as shown in entry 1. However, the catalytic activities between aniline derivatives or an aliphatic amine and benzylamine provided only moderate yields, as depicted in entries 2-6. The steric hindrance of the aniline derivatives influences the reactivity more than the substituent on benzylamine.
Therefore, at the same reaction time, higher yields were observed for the coupling between aniline and benzylamine derivatives (entries 7-11). The steric effect around the reacting site should be the main factor for the declining reactivity. The substrate with a substituent at the para position provided a higher yield than those at the meta and ortho positions, respectively (entries 7-9).

Reusability of Cu/chitosan beads

The reusability of Cu/chitosan beads in the oxidative self-coupling reaction of benzylamine was tested, and the results are shown in Fig. 10. The Cu/chitosan beads used in each reaction cycle were separated by simple decantation. After washing with ethyl acetate (3 × 2 mL), the catalyst beads were charged with a new batch of reactants. The separation was very convenient; there was no need to centrifuge or dry the beads before using them in the next reaction cycle. This process demonstrates the advantage of the catalyst beads over powders. Moreover, the results showed that Cu/chitosan beads can be reused for more than ten cycles without losing their performance. SEM images (Fig. S4†) reveal that the Cu/chitosan beads still retained the bead structure after use, even though the surface texture of the beads appeared rougher after ten cycles. In addition, from ICP-MS results, the copper content in the used catalyst beads was similar to that in the fresh ones (11%). This evidence verified the stability of the beads, and no copper species leached into the reaction. A hot-filtration test was performed in the self-coupling reaction of benzylamine with TBHP at 80 °C, as demonstrated in Fig. 11. After the catalyst removal at 15 min, no more product was obtained, suggesting that there was no leaching of any active species during the reaction. Therefore, the catalysis was heterogeneous in nature.

Comparison of catalytic activity

Among the results for the catalytic oxidative self-coupling of benzylamine, the best condition is to carry out the reaction at 80 °C for 30 min by using 10 mg of Cu/chitosan beads derived from Cu(OAc)2 in the presence of TBHP as the oxidizing agent and acetonitrile as the solvent. The reaction achieved 99% conversion and 98% yield. In Table 4, the catalytic efficiency is illustrated and compared with other copper catalysts (entries 2-9) as well as other metal-based catalysts (entries 10-16). Even though it is difficult to directly compare the results due to the different reaction conditions, our Cu/chitosan bead system exhibited satisfactory catalytic activity with a short reaction time and could be reused for many cycles with no loss of activity.

Conclusions

In conclusion, we have developed a nonprecious-metal, biopolymer-based catalyst, Cu/chitosan beads, by using a software-controlled flow system. The size and morphology of the Cu/chitosan beads are reproducible due to the high precision of the flow rate. Furthermore, the Cu/chitosan beads can catalyze the oxidation of diverse types of amines to the corresponding imines in moderate to high yields within a short reaction time. Because of the bead morphology, these catalysts greatly simplify the operation and workup procedures. In the recycling process, after decantation and washing, the Cu/chitosan beads could be reused without drying. Importantly, the beads could be recycled more than ten times without an appreciable loss of catalytic performance. Finally, the procedure to prepare Cu/chitosan beads is expected to contribute to the development of new catalyst systems and their application to other reactions.
Conflicts of interest There are no conflicts of interest to declare. a TOF is the number of moles of reactant consumed per mole of Cu per unit of time. b "-" = data not available.
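The TOF definition in the footnote above can be turned into a small worked example. The Python sketch below is illustrative only: the amount of benzylamine is a hypothetical placeholder, while the 99% conversion, 10 mg of beads and 11 wt% Cu echo figures quoted in the text.

# Illustrative TOF (turnover frequency) calculation following the footnote's
# definition: moles of reactant consumed per mole of Cu per unit of time.
# The 0.5 mmol substrate amount is a hypothetical placeholder.
CU_MOLAR_MASS = 63.55  # g/mol

def turnover_frequency(mol_reactant_consumed, catalyst_mass_g, cu_weight_fraction, time_h):
    """Return TOF in h^-1."""
    mol_cu = catalyst_mass_g * cu_weight_fraction / CU_MOLAR_MASS
    return mol_reactant_consumed / (mol_cu * time_h)

# Example: 0.5 mmol benzylamine, 99% conversion, 10 mg beads with 11 wt% Cu, 0.5 h.
consumed = 0.5e-3 * 0.99
tof = turnover_frequency(consumed, catalyst_mass_g=10e-3, cu_weight_fraction=0.11, time_h=0.5)
print(f"TOF ~ {tof:.1f} h^-1")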
2020-06-04T09:12:35.972Z
2020-05-27T00:00:00.000
{ "year": 2020, "sha1": "60de122bbe8c0b7f060b0a03227e09314ab0722d", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/ra/d0ra03884a", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ef34e5f3848ee97f889e57a451bc6809694ced5d", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
247499581
pes2o/s2orc
v3-fos-license
ANRIL as a prognostic biomarker in colon pre-cancerous lesion detection via non-invasive sampling
Shadi Sadri 1, Leili Rejali 1, Mahrooyeh Hadizadeh 2, Hamid Asadzadeh Aghdaei 1, Chris Young 3, Ehsan Nazemalhosseini-Mojarad 4*, Mohammad Reza Zali 4 and Maziar Ashrafian Bonab 2
Long non-coding RNAs have been proposed as biomarkers for the detection, prevention and screening of various malignancies. In this study, two lncRNAs (ANRIL and BANCR) were assessed for biomarker application in the early detection of colorectal cancer (CRC) through stool specimen testing, as a non-invasive and cost-effective methodology. A total of 40 stool samples were collected from patients referred to the hospital with colorectal cancer or adenomatous polyps as pre-cancerous lesions; patients were diagnosed using colonoscopy and pathology reports were available. Twenty control samples were also obtained from healthy subjects for comparison. RNA extraction and cDNA synthesis were followed by real-time PCR to evaluate lncRNA expression. The up-regulation of ANRIL in 20% of samples taken from polyp patients, combined with up-regulation in 65% of patients with CRC, confirmed the potential usefulness of ANRIL as a prognostic biomarker (AUC 0.95; P < 0.0001). BANCR relative expression analysis illustrated significant up-regulation in polyp (P < 0.04) and tumoural participants (P < 0.03) compared with normal control individuals. The expression patterns of ANRIL and BANCR in polyp cases were significantly correlated according to correlation analysis (r = 0.45, P < 0.045). ANRIL expression patterns in stool specimens of polyp and tumour cases supported the use of ANRIL as a prognostic biomarker for screening patients in the early stages of CRC. Up-regulation of BANCR in pre-cancerous lesions as well as down-regulation of ANRIL may also be a specific marker pair for easy, convenient and fast CRC prognosis. INTRODUCTION In 2020, approximately 1.93 million individuals globally were diagnosed with colorectal cancer, and nearly one million were estimated to have died from this cancer (Xi and Xu, 2021). In a recent study, CRC was rated as the third highest cancer type in men and women in the USA, accounting for around 9% of all cancers, and 17,930 cases and 3,640 deaths were anticipated in 2020 among individuals aged less than 50 years (Siegel et al., 2020). Although developing countries are classed as low-risk for CRC compared to developed countries, the rate of onset has been increasing in recent decades. More recently, the occurrence of this malignancy has been rising among younger people, highlighting CRC as a major public health burden (Ahmadi Lari, 2020). Although the chance of recovery for patients in the early stages of CRC is more than 90% (Sun et al., 2016a), colorectal cancer is unfortunately most commonly detected at more advanced stages (Karthik et al., 2014). Therefore, early detection of malignancy is important, and the use of non-invasive methods is preferable (Das et al., 2017; Rejali et al., 2021). Long non-coding RNAs (lncRNAs) have been found to play a crucial role in diverse biological processes through interactions with other cellular molecules, including DNA, RNA and proteins, via multiple pathways (de Bony et al., 2018; Ming et al., 2021).
Dysregulation of lncRNAs has been reported in a variety of cancers including CRC (Fang and Fullwood, 2016;Yang et al., 2017;Cao et al., 2021;Melixetian et al., 2021;Wang et al., 2021). Since genetic/ epigenetic and environmental factors are involved in the development and progression of CRC (Toiyama et al., 2014), lncRNAs have become targets of interest for diagnostic, prognostic and therapeutic applications . Recent studies on lncRNAs have also revealed that they can act as tumour suppressors or oncogenes, and gene expression can become activated or be inhibited by lncRNA functions (Xie et al., 2016). Therefore, defining lncRNA function in tumourgenesis is now a priority. Clinical diagnostic biomarkers such as CEA and CA199 have been proposed previously for CRC, but have not demonstrated sufficient sensitivity or specificity for early detection in CRC (Zou et al., 2017). The evaluation of efficient biomarkers to improve screening and early detection in CRC is therefore now a priority (Nissan et al., 2012;Yang et al., 2017). The lncRNA ANRIL (antisense non-coding RNA in the INK4 locus) is located on chromosome 9 in humans (9p21.3) and consists of 21 exons within the CDKN2B-CDKN2A gene cluster. CDKN2B and CDKN2A have well established roles in cell proliferation, apoptosis, senescence and aging (Pasmant et al., 2007). This region is a notable genetic susceptibility locus for several cancers (Green et al., 1996). ANRIL plays a crucial role in gene regulation and histone modification, and is thought to participate in the tumour microenvironment through its involvement in extracellular matrix remodelling, thereby aiding metastasis (Mehta-Mujoo et al., 2019). Along with ANRIL, the other lncRNA evaluated in the present study is BRAF-activated non-coding RNA (BANCR), a lncRNA with four exons located on chro-mosome 9 (9q21). BANCR is closely associated with V600EBRAF, the most frequent mutation type of the BRAF gene, which has also been detected in approximately 5-22% of CRC cases (Brose et al., 2002;Yang et al., 2014). BANCR is frequently deregulated in various human cancers . In colorectal cancer studies, contradictory results indicate that BANCR can act as an oncogene or a tumour suppressor gene, based on contrasting evidence (Brose et al., 2002;Liao et al., 2017). lncRNAs have previously been sampled and investigated in human body fluids such as blood, ejaculate and urine (Zhang et al., 2013). However, this is the first study to examine lncRNA expression in CRC stool samples. In this study, we evaluated the expression level of two lncRNAs, ANRIL and BANCR, in stool specimens. Stool samples provide a very good indication of the colon area, with the possibility of CRC tumoural cell presence (Davies et al., 2005). Furthermore, there are certain advantages for stool analysis: it is a non-invasive sampling technique; there is no need for bowel preparation; it enables screening of the entire length of the colon and rectum; and it produces specimens that are easily transportable (De Maio et al., 2014). Taking these considerations together, the discovery of molecular biomarkers in stool specimens could offer beneficial new options in providing early continuous surveillance of CRC. RESULTS Evaluating the expression of BANCR and ANRIL in tumour and polyp faecal samples qRT-PCR was performed to measure BANCR mRNA expression levels in 20 stool samples collected from CRC patients, 20 from individuals diagnosed with adenomatous polyps and 20 from healthy normal faecal samples. 
Patients in each disease category were divided into two groups of up-regulated and down-regulated expression of lncRNAs (BANCR, ANRIL) by calculating relative quantification (RQ) and mean of RQ. The expression level of the lncRNA BANCR was significantly up-regulated in polyp samples (P < 0.0003; Table 1). Besides, the correlations between BANCR expression in tumour or polyp samples and the patients' clinicopathological parameters (including sex, age, location, history of colon disease, family history of CRC, diabetes, smoking and alcohol consumption) were measured to determine their clinical significance ( Table 2). The expression level of BANCR in polyp and tumoural specimens was significantly correlated with history of colon disease (P < 0.01, P < 0.04), and specifically in tumour samples with diabetic history and age (P < 0.04, P < 0.03). BANCR was up-regulated in 70% (14/20) of the patients with adenomatous polyps and 35% (7/20) of CRC patients (Table 1). Furthermore, mRNA expression analysis of ANRIL in specimens collected from confirmed polyp diagnosis participants and CRC patients demonstrated the upregulation of ANRIL in tumour patients' faecal samples (Table 1). This was in contrast to down-regulation in faeces samples from individuals with a family history of CRC in polyp and diabetic history in tumoural cases ( Table 2). Although 20% of polyp samples illustrated up-regulation, 80% exhibited down-regulation while 35% were down-regulated via qRT-PCR non-parametric analysis. Relative expression of ANRIL and BANCR in normal, polyp and tumoural samples were compared (Fig. 1). There was a significant difference between normal and polyp samples (P < 0.03, P < 0.04, respectively), and also polyp and tumour specimens (P < 0.02, P < 0.03, respectively), in both ANRIL and BANCR expression levels. A significant difference was also seen between tumour and normal samples in BANCR (P < 0.03). AUROC evaluation of BANCR and ANRIL in polyp and tumour stool specimens For evaluating the characteristics of BANCR and ANRIL as effective biomarkers for polyp detection, area under ROC curves were estimated for RQ < 0.25 (Fig. 2). Correlation evaluation between ANRIL and BANCR in tumour and polyp samples To elucidate the association between the expression of ANRIL and of BANCR, the relative expression values of the lncRNAs were com-pared in each sample set. A significant association with positive correlation was observed in polyp specimens (r = 0.45; P < 0.045). There was a negative relationship between the relative expression levels of ANRIL and BANCR in tumour faeces samples, but no significant correlation was found (r = -0.08; P < 0.76) (Fig. 3). DISCUSSION Early detection of colorectal cancer (CRC) is a priority goal for screening high-risk patients and increasing longterm survival. The quality of CRC screening is currently insufficient for global utilization, with low sensitivity and specificity of testing via conventional stool-based screening. Furthermore, accelerated expenditure with low participation compliance in colonoscopy points to the need for novel biomarker development. Long non-coding RNAs represent an excellent biomarker candidate in stool specimens; this sampling method is non-invasive and involves no risk of colonoscopy side effects for patients. Conventional stool tests are easy, require no preparation, and can be repeated at short intervals with low cost, and should therefore also result in an elevation in compliance rates. 
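The AUROC and correlation analyses described above can be reproduced with standard Python tooling. The sketch below assumes scikit-learn and SciPy are available; the per-sample RQ values and labels are invented placeholders rather than the study's data.

# Sketch of the AUROC and Spearman-correlation evaluations described above.
# The RQ values below are made-up placeholders, not the study's per-sample data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

labels   = np.array([1, 1, 1, 1, 0, 0, 0, 0])            # 1 = polyp/tumour, 0 = healthy control
rq_anril = np.array([0.10, 0.22, 0.15, 0.31, 0.95, 1.10, 0.88, 1.02])
rq_bancr = np.array([2.4, 3.1, 1.9, 2.8, 0.9, 1.1, 1.0, 0.8])

# Lower ANRIL RQ indicates disease in this toy example, so score with -RQ.
auc_anril = roc_auc_score(labels, -rq_anril)
rho, p_value = spearmanr(rq_anril, rq_bancr)
print(f"AUROC (ANRIL) = {auc_anril:.2f}, Spearman r = {rho:.2f} (P = {p_value:.3f})")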
ncRNAs play key roles in various cellular functions such as proliferation, differentiation, migration, angiogenesis and apoptosis. Recent studies have shown that abnormal expression of ncRNAs is correlated with different cancers, including CRC. The finding that ncRNAs are stable in stool, blood plasma and serum highlights the opportunity for developing novel innovative procedures using ncRNAs as early diagnostic biomarkers in CRC. lncRNAs can be considerably more sensitive and specific for diagnosis than genomic DNA, mRNA or protein biomarkers (Slaby, 2016). lncRNA function and expression patterns are diverse in cancerous and pre-cancerous lesions. Hence, lncRNA expression evaluation in individuals with confirmed adenomatous polyps and colon cancer is a potentially important biomarker, implementable through non-invasive methodologies for early detection of CRC. To our knowledge, the data presented here constitute the first investigation to evaluate the lncRNAs BANCR and ANRIL in faecal samples of CRC patients and diagnosed polyp cases. However, similar studies have been performed on colorectal tumour tissue, melanoma and lung cancer to investigate BANCR's function in biological processes including proliferation, migration and invasion (Yang et al., 2014;Jiang et al., 2015). BANCR and ANRIL expression as well as clinicopathological parameters of the analysed samples were evaluated. BANCR was up-regulated in faecal samples taken from both adenomatous polyp patients and CRC patients, but the AUROC plots for detecting an association between lncRNA expression in the two sample groups revealed no significant correlation. BANCR lncRNA up-regulation was associated with lymph node metastasis and poor survival rate among colorectal cancer patients by Shen et al. (2017), but no evidence of a correlation with age was reported. Lou et al. (2018) reported significant overexpression of BANCR in breast tumour tissues relative to para-carcinoma normal tissues. They showed that patients who overexpressed BANCR had a poor prognosis compared with patients having low expression of BANCR. In addition, Zhou and Gao (2016) described a key role for up-regulated BANCR in the occurrence and development of hepatocellular carcinoma and the prognosis of affected patients, proposing its application as an lncRNA biomarker for early diagnosis of cancer and prognosis screening. A recently published meta-analysis reported the association of an undesirable prognosis for most cancer patients with elevated BANCR expression (Fang et al., 2020). Sun et al. (2016b) showed higher ANRIL expression in CRC tissue compared with adjacent non-tumour tissues. The overexpression was significantly correlated with a reduction in survival rate. They further predicted that ANRIL may be a primary participant in the advancement of CRC, which agrees with our own data presented here evaluating the expression of ANRIL in polyps and tumours of defined samples. The up-regulation of ANRIL lncRNA in polyps, which are pre-cancerous lesions, confirmed the hypothesis of ANRIL playing a role in the early stage of the disease. Furthermore, the significant correlation between ANRIL relative expression in samples from polyp patients and CRC patients verified ANRIL as a candidate prognostic biomarker in CRC. Uniquely in our study, a trend for down-regulation of ANRIL in samples of CRC patients was found, but not observed to be significant. 
On the other hand, pathology reports of CRC patients noted that all patients with CRC who entered our study were diagnosed in the early stages of disease. Hence, ANRIL expression may be confined to the later stages of CRC; however, expression may also be variable in different sample types. These findings illustrate the complex and disparate roles of ANRIL in CRC. Using ANRIL lncRNA as a prognostic biomarker for detecting adenomatous polyps from faecal samples will shed light on the early detection of CRC via non-invasive, cost-effective and simple methodology, improving patient outcomes. CONCLUSIONS Different diagnostic and prognostic methodologies for the early detection of colorectal cancer are available. Most routine methods are, however, invasive and expensive. Since the deregulation of lncRNAs in cancerous tissues has been observed in faecal samples, these factors are recognized as non-invasive and preferable biomarker candidates, with acceptable tolerance for affected or at-risk patients compared with biopsy administration. Further research is required into the fundamental mechanisms and functions of BANCR and ANRIL lncRNAs in CRC. MATERIALS AND METHODS Patients Forty stool samples, including 20 from CRC patients, plus 20 from adenomatous polyp patients referred to the hospital with histological confirmation of the disease, were randomly collected. A further 20 samples from healthy individuals who underwent routine screening examination were included as control samples. Patients with a record of chemotherapy or radiotherapy in their history were omitted from the survey. Informed consent was obtained from all subjects involved in the study, which was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board Institutional Research Ethics Committee of Taleghani Hospital, Tehran (ethical approval number: IR.SBMU.RIGLD.REC.1397.187). Collected samples were kept in EDTA buffer at − 80 °C until the time of extraction. The clinical features of patients were verified by an expert pathologist. All relevant information including age, sex, weight, body mass index, smoking and alcohol consumption, blood in faeces, diabetes and a history of colon cancer and chronic inflammatory disease was recorded by questionnaire. Stool RNA extraction Total RNA was extracted from stool samples using the Qiagen RNeasy Plus Mini Kit according to the manufacturer's instruction, with attention to usage of DNase/RNase-free devices and solutions. After extraction, the quantity and quality of RNA was measured using a NanoDrop 1000 spectrophotometer (Thermo Fisher Scientific) and electrophoresis in a 1% agarose gel. The optical density 260/230 nm and 260/280 nm ratios were determined using a NanoDrop application to ensure RNA purity. RNAs were then reverse transcribed into cDNA using the Reverse Transcription Kit (Yekta Tajhiz). Quantitative real-time PCR (qRT-PCR) qRT-PCR reactions were carried out using 75 ng of cDNA, 10 μl of 2 × SYBR Green (Takara) and 200 nM forward and reverse primers in an ABI 7500 Real-Time PCR instrument (Applied Biosystems). 18S rRNAs were included as internal controls. Statistical analysis Statistical analyses were carried out using SPSS software version 23 (SPSS) and Graph-Pad Prism 8.0. The Mann-Whitney test was applied to compare the expression of BANCR and ANRIL in faecal specimens of cancerous patients, and of patients diagnosed with colorectal adenomatous polyps, with that in healthy normal stool samples. 
Student's t-test or one-way ANOVA was used to determine the correlation between BANCR or ANRIL expression and clinicopathological variables. The Spearman method was used for reporting (r) in correlation analysis. The resulting data are reported as mean ± standard deviation (SD) of RQ. Statistical significance was defined as P < 0.05.
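A minimal sketch of the quantification and group comparison just described: relative quantification against the 18S rRNA internal control (the common 2^(−ΔΔCt) form is assumed here, since the exact formula is not spelled out in the text), followed by a Mann-Whitney comparison with P < 0.05 as the threshold. All Ct and RQ numbers are invented for illustration.

# RQ via an assumed 2^(-ΔΔCt) scheme with 18S rRNA as internal control, followed
# by a Mann-Whitney U comparison of patient vs control RQ values (P < 0.05 cutoff).
# All Ct and RQ values below are invented placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

def relative_quantification(ct_target, ct_18s, ct_target_calib, ct_18s_calib):
    dd_ct = (ct_target - ct_18s) - (ct_target_calib - ct_18s_calib)
    return 2.0 ** (-dd_ct)

ct_bancr = np.array([27.1, 26.8, 27.9, 26.5, 27.3])       # hypothetical polyp samples
ct_18s   = np.array([12.0, 12.2, 12.1, 11.9, 12.3])
rq_polyp = relative_quantification(ct_bancr, ct_18s, ct_target_calib=29.0, ct_18s_calib=12.1)

rq_control = np.array([0.9, 1.1, 1.0, 0.8, 1.2])           # placeholder control RQs

stat, p = mannwhitneyu(rq_polyp, rq_control, alternative="two-sided")
print(f"median RQ (polyp) = {np.median(rq_polyp):.2f}, U = {stat:.1f}, P = {p:.4f}, significant = {p < 0.05}")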
2022-03-18T06:23:24.356Z
2022-03-17T00:00:00.000
{ "year": 2022, "sha1": "dbe7b187967dfd00e9bb1d5c41d60e5ab096f8af", "oa_license": "CCBY", "oa_url": "https://www.jstage.jst.go.jp/article/ggs/advpub/0/advpub_21-00102/_pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c1bfe344bd5d89d8d941d2cf154463197aac69a4", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
117752659
pes2o/s2orc
v3-fos-license
Observation of $K^*(892)^0\bar K^*(892)^0$ in $\chi_{cJ}$ Decays $K^*(892)^0\bar K^*(892)^0$ signals from $\chi_{cJ} (J=0,1,2)$ decays are observed for the first time using a data sample of 14 million $\psi(2S)$ events accumulated at the BES II detector. The branching fractions ${\cal B}(\chi_{cJ}\to K^*(892)^0\bar K^*(892)^0)$ $(J = 0,1,2)$ are determined to be $(1.55 \pm 0.35 \pm 0.30)\times 10^{-3}$, $(1.58 \pm 0.32 \pm 0.29)\times 10^{-3}$, and $(4.67 \pm 0.55 \pm 0.85)\times 10^{-3}$ for the $\chi_{c0}$, $\chi_{c1}$ and $\chi_{c2}$ decays, respectively, where the first errors are statistical and the second are systematic. The significances of these signals are about 4.2$\sigma$, 4.3$\sigma$, and 7.5$\sigma$, respectively. I. INTRODUCTION Exclusive quarkonium decays constitute an interesting laboratory for investigating perturbative quantum chromodynamics (QCD). In the case of P-wave charmonium χcJ decays to a pair of pseudoscalars, one finds that the lowest Fock state, the color-singlet contribution, alone is not sufficient to accommodate the data. Indeed, it turns out that the color-octet contribution from the next higher Fock state contributes at the same level as the color-singlet one. Its inclusion yields good agreement with experimental data [1,2]. The calculation of the partial width of χcJ → pp̄, taking into account the color-octet mechanism [3], also obtains results in reasonable agreement with measurements [4]. Nevertheless, a recent measurement of χcJ → ΛΛ̄ [5] only agrees marginally with this prediction. At present there are no predictions for the majority of the hadronic decay modes. In addition, few two-body decays have been measured. A consistent set of predictions for the branching fractions, as well as more precise experimental measurements, for a number of the two-body decays may lead to further insight into the nature of these ^3P_J cc̄ bound states. II. BES DETECTOR BES II is a large solid-angle magnetic spectrometer that is described in detail in Ref. [6]. Charged particle momenta are determined with a resolution of σ_p/p = 1.78% √(1 + p²) (p in GeV/c) in a 40-layer cylindrical drift chamber. Particle identification is accomplished by specific ionization (dE/dx) measurements in the drift chamber and time-of-flight (TOF) measurements in a barrel-like array of 48 scintillation counters. The dE/dx resolution is σ_dE/dx = 8.0%; the TOF resolution is σ_TOF = 180 ps for Bhabha events. Outside of the time-of-flight counters is a 12-radiation-length barrel shower counter (BSC) comprised of gas proportional tubes interleaved with lead sheets. The BSC measures the energies of photons with a resolution of σ_E/E ≃ 21%/√E (E in GeV). Outside the solenoidal coil, which provides a 0.4 T magnetic field over the tracking volume, is an iron flux return that is instrumented with three double layers of counters that are used to identify muons. In this analysis, a GEANT3-based Monte Carlo simulation package (SIMBES) with detailed consideration of detector performance (such as dead electronic channels) is used. The consistency between data and Monte Carlo has been checked in many high-purity physics channels, and the agreement is quite reasonable. III. EVENT SELECTION The selection criteria described below are similar to those used in a previous BES analysis [7].
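The detector resolutions quoted above are simple functions of momentum and energy; the short sketch below just evaluates them at a few arbitrary sample values to show the scale involved.

# Evaluate the quoted MDC and BSC resolution formulas at sample values
# (the sample momenta/energies are arbitrary, chosen only for illustration).
import math

def sigma_p_over_p(p_gev):      # MDC momentum resolution: 1.78% * sqrt(1 + p^2)
    return 0.0178 * math.sqrt(1.0 + p_gev**2)

def sigma_e_over_e(e_gev):      # BSC energy resolution: ~21% / sqrt(E)
    return 0.21 / math.sqrt(e_gev)

for p in (0.5, 1.0, 1.5):
    print(f"p = {p:.1f} GeV/c -> sigma_p/p = {sigma_p_over_p(p) * 100:.2f}%")
for e in (0.1, 0.5, 1.0):
    print(f"E = {e:.1f} GeV   -> sigma_E/E = {sigma_e_over_e(e) * 100:.1f}%")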
A. Photon identification A neutral cluster is considered to be a photon candidate when the angle between the nearest charged track and the cluster is greater than 15°, the first hit is in the beginning six radiation lengths, and the difference between the angle of the cluster development direction in the BSC and the photon emission direction is less than 30°. The photon candidate with the largest energy deposit in the BSC is treated as the photon radiated from the ψ(2S) and used in a four-constraint kinematic fit to the hypothesis ψ(2S) → γπ⁺π⁻K⁺K⁻. B. Charged particle identification Each charged track, reconstructed using the MDC information, is required to be well fit to a three-dimensional helix, be in the polar angle region |cos θ_MDC| < 0.80, and have the point of closest approach of the track to the beam axis be within 2 cm of the beam axis and within 20 cm from the center of the interaction region along the beam line. For each track, the TOF and dE/dx measurements are used to calculate χ² values and the corresponding confidence levels for the hypotheses that the particle is a pion, kaon or proton (Prob_π, Prob_K, Prob_p). C. Event selection criteria Candidate events are required to satisfy the following selection criteria: (1) The number of charged tracks is required to be four with net charge zero. (2) The sum of the momenta of the two lowest momentum tracks is required to be greater than 650 MeV; this removes contamination from ψ(2S) → π⁺π⁻J/ψ events and some of the ρ⁰ππ background. A combined probability determined from the four-constraint kinematic fit and particle identification information is used to separate γπ⁺π⁻π⁺π⁻, γK⁺K⁻K⁺K⁻, and the different possible particle assignments for the γπ⁺π⁻K⁺K⁻ final states. This combined probability, Prob_all, is defined as the χ² probability Prob_all = Prob(χ²_all, ndf_all), where χ²_all is the sum of the χ² values from the four-constraint kinematic fit and those from each of the four particle identification assignments, and ndf_all is the corresponding total number of degrees of freedom. For an event to be selected, Prob_all of the γπ⁺π⁻K⁺K⁻ hypothesis must be larger than those of the other possibilities. In addition, the particle identification probability Prob_ID of each charged track must be > 0.01. The invariant mass distribution for the π⁺π⁻K⁺K⁻ events that survive all the selection requirements is shown in Fig. 1. There are clear peaks corresponding to the χcJ states. The highest mass peak corresponds to charged-track final states that are kinematically fit with an unassociated, low-energy photon. [Fig. 1: The π⁺π⁻K⁺K⁻ invariant mass spectrum.] For the events in the χcJ mass region (3.30, 3.65) GeV, after requiring that the mass of either (or both) Kπ pair lies between 0.836 and 0.956 GeV, the mass distribution of the other Kπ pair, shown in Fig. 3, is obtained; there is a strong K*(892) signal. The distribution is fitted with a background polynomial plus a P-wave relativistic Breit-Wigner function, with a width Γ(m) = Γ₀ (m₀/m)(p/p₀)³ (1 + r²p₀²)/(1 + r²p²), where m is the mass of the Kπ system, p is the momentum of the kaon in the Kπ system, Γ₀ is the width of the resonance, m₀ is the mass of the resonance, p₀ is p evaluated at the resonance mass, r is the interaction radius, and (1 + r²p₀²)/(1 + r²p²) represents the contribution of the barrier factor. [FIG. 2: Scatter plots of K⁻π⁺ versus K⁺π⁻ invariant masses for selected γπ⁺π⁻K⁺K⁻ events with π⁺π⁻K⁺K⁻ mass in the χc0, χc1, and χc2 mass regions, respectively.]
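A short sketch of the combined-probability selection described above, assuming the definition Prob_all = Prob(χ²_all, ndf_all): the χ² contributions from the kinematic fit and the four particle-identification assignments are summed and converted to a probability with SciPy's χ² survival function. All χ² values and degrees of freedom below are invented placeholders.

# Combined-probability selection sketch: sum kinematic-fit and PID chi-squares,
# then convert to a probability with the chi-square survival function.
# The chi-square values and degrees of freedom are invented placeholders.
from scipy.stats import chi2

def combined_probability(chi2_terms, ndf_terms):
    chi2_all = sum(chi2_terms)
    ndf_all = sum(ndf_terms)
    return chi2.sf(chi2_all, ndf_all)   # Prob(chi2_all, ndf_all)

# e.g. 4C kinematic fit plus PID chi-squares for pi+, pi-, K+, K-
prob_pipiKK = combined_probability([6.2, 1.1, 0.8, 1.5, 1.9], [4, 2, 2, 2, 2])
prob_4pi    = combined_probability([6.9, 1.0, 0.9, 7.4, 8.1], [4, 2, 2, 2, 2])
print(prob_pipiKK, prob_4pi)  # the assignment with the larger probability is kept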
The fit of Fig. 3 gives an r value of (3.4 ± 2.6) GeV⁻¹, with a large error due to the low statistics. Therefore, in the later analysis (mainly in the efficiency calculation), we use the value (3.4 ± 0.6 ± 0.3) GeV⁻¹ measured by the K⁻π⁺ scattering experiment [8] for r. In this paper, the number of K*(892)⁰K̄*(892)⁰ events and the corresponding background are estimated from the scatter plot of K⁻π⁺ versus K⁺π⁻ invariant masses, as shown in Fig. 4. The signal region is shown as a square box (solid line) at (0.896, 0.896) GeV with a width of 60 MeV. From a Monte Carlo study, a large background comes from ψ(2S) → γχcJ → γK₁(1270)K̄ (or K₁(1400)K̄), which decays to the γπ⁺π⁻K⁺K⁻ final state via the K₁ → K*(892)π intermediate decay. This background shows up as the horizontal and vertical bands at m(K*(892)) in the m(K⁻π⁺) versus m(K⁺π⁻) scatter plots of Fig. 2. Hence, backgrounds are estimated from sideband boxes, which are taken 60 MeV away from the signal box and shown as four dashed-line boxes in Fig. 4. Background in the horizontal or vertical sideband boxes is twice that in the signal region. B. Fit of the mass spectrum After sideband subtraction, the K*(892)⁰K̄*(892)⁰ mass spectrum between 3.20 and 3.70 GeV is fitted using a χ² method with three Breit-Wigner functions folded with Gaussian resolutions, where the mass resolutions are fixed at their Monte Carlo predicted values [(12.2 ± 0.4) MeV, (12.3 ± 0.3) MeV and (12.2 ± 0.3) MeV for χc0, χc1 and χc2, respectively] and the widths of the three χcJ states are set at their world average values [4]. A χ² probability of 70% is obtained, indicating a reliable fit. The numbers of events determined from the fit are 26.1 ± 5.8, 26.9 ± 5.4 and 55.1 ± 6.3 for χc0, χc1, and χc2, respectively. The statistical significances of the three states are 4.2σ, 4.3σ and 7.5σ, calculated from Δχ², where Δχ² is the difference between the χ² values of the fits determined with and without the signal function. Fig. 6 shows the fit result, and the fitted masses are 3416.2 ± 3.6 MeV, 3507.8 ± 3.6 MeV and 3553.6 ± 1.8 MeV for χc0, χc1 and χc2, respectively, in agreement with the world average values [4]. A Monte Carlo simulation is used to determine the detection efficiency. The angular distribution of the photon emitted in ψ(2S) → γχc0 is taken into account [9]. The K*(892) is generated as a P-wave relativistic Breit-Wigner with r set to 3.4 GeV⁻¹ [8]. For each case, 50,000 Monte Carlo events are simulated, and the efficiencies are estimated to be ε_χc0 = (3.15 ± 0.09)%, ε_χc1 = (3.25 ± 0.09)%, and ε_χc2 = (2.96 ± 0.08)%, where the error is the statistical error of the Monte Carlo sample. Note that for the efficiency estimation, the events in the four sideband boxes are subtracted from the events in the signal region of the scatter plot, similar to the treatment of data. The branching fraction of ψ(2S) → γχcJ, χcJ → K*(892)⁰K̄*(892)⁰ is calculated using B = n^obs_χcJ / (N_ψ(2S) · ε_χcJ · f²), where the factor f is the branching fraction of K*(892)⁰ to the charged Kπ mode, which is taken as 2/3. Using the numbers obtained above and the total number of ψ(2S) events, 14.0 × 10⁶ × (1.00 ± 0.04), the corresponding branching fractions are obtained, where the errors are statistical only. C.
Systematic errors The systematic errors in the branching fraction measurement associated with the efficiency are determined by comparing ψ(2S) data and Monte Carlo simulation for very clean decay channels, such as ψ(2S) → π + π − J/ψ, which allows the determination of systematic errors associated with the MDC tracking, kinematic fitting, particle identification, and efficiency of the photon ID [11]. Other sources of systematic error come from the uncertainties in the number of ψ(2S) events [10], the efficiency estimation using simulated data, the background, the χ cJ and K * (892) 0 mass resolutions, the binning and fit range, etc. Background subtraction In Section IV A, the backgrounds are estimated using the sidebands shown as the four dashed-line boxes in Fig. 4. Moving the sideband boxes 20 MeV away from or closer to the signal region, or varying the background number by one standard deviation, the largest changes of the branching fractions for the χ c0 , χ c1 and χ c2 are about 7.4%, 5.0% and 5.5%, respectively, obtained by re-fitting the K * (892) 0K * (892) 0 mass spectrum and reestimating the efficiency. 3. χcJ and K * (892) 0 mass resolutions Differences between data and Monte Carlo for the mass resolutions of the χ cJ or K * (892) 0 also give uncertainties in the determination of the branching fractions. The maximum possible difference for χ cJ is about 1 MeV. Such a change results in about 4.5%, 2.5% and 2.0% variations in the fitted number of χ c0 , χ c1 , and χ c2 events. If we change the K * (892) 0 window to [0.836 + 0.002, 0.956 -0.002] GeV and [0.836 -0.002, 0.956 + 0.002] GeV, the efficiency variations of the χ c0 , χ c1 and χ c2 are 1.5%, 2.5% and 2.4%, respectively. By varying the width of χ c0 by 1σ, 0.8 MeV, there is almost no change in the final fit result. We use total systematic errors of 5%, 3.5% and 3.5% for this uncertainty.
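As a rough numerical illustration of the branching-fraction relation used in the previous subsection (our reading of it being N_obs = N_ψ(2S) · B · ε · f², with f = 2/3 and the square appearing because both K*(892)⁰ mesons must decay to the charged Kπ mode), the sketch below plugs in the χc0 numbers quoted in the text. The result is indicative only and should not be read as a published central value.

# Branching-fraction arithmetic sketch, assuming B = n_obs / (N_psi2s * efficiency * f^2)
# with f = 2/3 for K*(892)0 -> charged K pi. chi_c0 inputs are taken from the text.
def branching_fraction(n_obs, n_psi2s, efficiency, f=2.0 / 3.0):
    return n_obs / (n_psi2s * efficiency * f**2)

b_product_chic0 = branching_fraction(n_obs=26.1, n_psi2s=14.0e6, efficiency=0.0315)
print(f"B(psi(2S)->gamma chi_c0) x B(chi_c0->K*0 K*0bar) ~ {b_product_chic0:.2e}")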
2019-04-14T02:27:20.747Z
2004-08-06T00:00:00.000
{ "year": 2004, "sha1": "c43b57c7a416638d45a36e398080e10b94c6afbb", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ex/0408012", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c43b57c7a416638d45a36e398080e10b94c6afbb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
244741194
pes2o/s2orc
v3-fos-license
The Theodorus Variation The Spiral of Theodorus, also known as the "root snail" from its connection with square roots, can be constructed by hand from triangles made from paper with scissors, ruler, and protractor. See the Video Abstract. Once the triangles are made, two different but similar spirals can be made. This paper proves some things about the second spiral; in particular that the open curve generated by the inner vertices monotonically approaches a circle, and that the vertices are ultimately equidistributed around that inner circle. Introduction In this note we study what Susan Gerofsky, a Canadian mathematics educator, in [10] calls the reverse Wurzelschnecke. "Wurzelschnecke" is German for "root snail"; this will make sense very soon. This object, this "root snail backward," is closely related to the Spiral of Theodorus, an ancient figure recently given that name in [8, p. 33], which still attracts the attention of mathematicians [9] and artists [15] alike. To construct the Spiral of Theodorus, Figure 2, we start with the isosceles right triangle △OT₁T₂ with |OT₁| = |T₁T₂| = 1. The hypotenuse is therefore of length √2. Pythagoras' theorem is seen¹ to hold: (√2)² = 1² + 1². [¹ That is, straightforward for practiced geometers and other mathematicians. The point of doing this with elementary students who may just be learning the Pythagorean theorem is to get them to appreciate this fact. Even to get the students to appreciate that the shorthand √n applies to all of the triangles in turn, starting from n = 2, n = 3, and so on, is important. . . .] Theodorus of Cyrene [14] lived in the fifth century BCE and was "mentioned by Proclus along with Hippocrates as a celebrated geometer." [12, p. 202] Sir Thomas Heath (1861–1940), a British classicist, in his book A History of Greek Mathematics writes: In Plato's Theaetetus we have the story of Theodorus lecturing on surds and proving separately, for the square root of every non-square number from 3 to 17, that it is incommensurable with 1. [12, p. 22] Later in the text, Heath explains: Plato gives no hint as to how Theodorus proved the propositions attributed to him, namely that √3, √5, . . ., √17 are all incommensurable with 1; there is therefore a wide field open for speculation, and several conjectures have been put forward. [12, p. 204] The modern reader is probably familiar with the argument that the identity a² = n · b², for a, b and n being integers, implies, from unique factorization, that every prime divisor of n appears to an even power, and hence that n is a perfect square. We observe that this "modern" proof is practically the same reductio ad absurdum argument that the Pythagoreans used to prove that √2 is irrational. Keeping in mind that Theodorus's proof appeared several decades after the discovery of the irrationality of √2, it is a natural question to ask about the value of Theodorus's approach. Heath discusses this question at some length by presenting a few hypotheses about the nature of Theodorus's proof. One of those hypotheses, which Heath attributes to Friedrich Hultsch (1833–1906), a German classical philologist, was that Theodorus actually, to use a modern term, visualized the Pythagoreans' reductio ad absurdum argument. [12, pp. 204–205] Fast forward to the 21st century: Jonathan Borwein (1953–2016), a Canadian mathematician, as an example of experimental mathematics accessible at the high school level, in multiple publications used an image that depicts the fact that an assumption that √2 is rational leads to a contradiction.
See, for example, [2,4,1,3]. In the spirit of experimental mathematics², Borwein (and his coauthor) advise the reader to basically go along the Spiral of Theodorus³ and create visual proof(s) that nonperfect squares do not have rational square roots. For more about connections between the Spiral of Theodorus and a technique that Borwein called minimal configurations [3], a technique that emphasizes the visual side of the fact that the magnitudes √n, where n ∈ N is a nonperfect square, and 1 are incommensurable, see [10,13]. [² To use technology to gain insight and intuition, to visualize mathematical principles, to discover new relationships, and to test possibly false conjectures. ³ Borwein never explicitly mentions the Spiral of Theodorus.] A continuous version Philip J. Davis (1923–2018), an American applied mathematician, has in [8, p. 38] the following, most mysterious, formula: f(x) = ∏_{k=1}^{∞} (1 + i/√k) / (1 + i/√(k + x − 1)). [Davis actually uses x instead of x − 1 there; we make this shift to accommodate the notation in this paper.] Here i is the square root of −1. This is an infinite product, but note that if the variable x happens to be a positive integer, then something interesting happens, namely, most of the terms in the product cancel out. Let us do the computation explicitly if x = 4. Note that the infinite product must be defined as a limit, say f(4) = lim_{N→∞} ∏_{k=1}^{N} (1 + i/√k)/(1 + i/√(k + 3)); because all of the terms except the first 3 in the numerator and the last 3 in the denominator cancel, or "telescope," this limit can therefore be seen to exist, and equal (1 + i/√1)(1 + i/√2)(1 + i/√3). A similar thing happens if x is any positive integer; at the end, we are left with a finite product of complex numbers, and indeed f(x + 1) = (1 + i/√x) f(x), and f(1) = 1. Great, you say. What has this to do with the spiral of Theodorus? The connection is as follows: embed the spiral in the complex plane, with the origin at 0 = 0 + 0i. Then vertex T₁ = (1, 0) = 1 + 0i, while T₂ = (1, 1) = 1 + i. Note that our function has f(1) = 1 and f(2) = 1 + i/√1 = 1 + i. Now think of the multiplication of complex numbers: as Richard Feynman (1918–1988), an American theoretical physicist, observed famously, to a physicist complex multiplication is very simple: you multiply the magnitudes, and you add the angles. The angle θ of the complex number 1 + i/√n has tan θ = 1/√n, which is exactly the central angle of the nth Theodorus triangle; and the magnitude of f(2) is |f(2)| = √2, which is the length of the hypotenuse of the first Theodorus triangle (and the base of the second). Since f(x + 1) = (1 + i/√x) f(x) we have |f(x + 1)| = |1 + i/√x| · |f(x)| = √(1 + 1/x) · |f(x)|, and since |f(1)| = 1 we have by induction that |f(n)| = √n. Notice that this is exactly the length |OTₙ|. What we have just proved is that the complex curve f(x) parameterized by the real parameter x exactly interpolates the Theodorus spiral (provided, of course, that the infinite product defining f(x) when x is not an integer actually exists; we don't do this here, because it demands knowledge of the convergence of infinite products which is not usually part of the standard curriculum at least until later; but the proof that it does exist, which is intelligible assuming the reader does have that background, is in [8]). We do, however, show that this works in Maple, in section 3. This analysis is very similar to Euler's interpolation of the gamma function, which indeed this is modelled on; see [5] or, better yet, the magnificent (Chauvenet prize-winning) paper [7].
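The telescoping of the product is easy to check numerically. The sketch below truncates the infinite product and compares f(4) against the finite product (1 + i)(1 + i/√2)(1 + i/√3); the symbol f and the truncation level are our own choices for this illustration.

# Numerical check of the telescoping product discussed above. The truncation
# error of the product shrinks roughly like 1/sqrt(terms).
def f(x, terms=100_000):
    prod = 1 + 0j
    for k in range(1, terms + 1):
        prod *= (1 + 1j / k**0.5) / (1 + 1j / (k + x - 1)**0.5)
    return prod

finite = (1 + 1j) * (1 + 1j / 2**0.5) * (1 + 1j / 3**0.5)
print(f(4), finite)          # truncated product is close to the telescoped finite product
print(abs(f(7)), 7**0.5)     # |f(n)| is close to sqrt(n)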
Walter Gautschi, a Swiss-American mathematician, starts from here and writes in [9] the complex argument of ( ) as the integral of a sum, so that As pointed out in [7] and in [9], this sum converges, but only "painfully slowly". Taking a million terms only gets you three significant figures of accuracy, if you work naively. This is also true of the Theodorus constant, defined as (1) or ...regarding its slope as it crosses the 0 0 ray as a fundamental world constant and calling this world constant in honor of Theodorus, I wanted to determine to about eight or ten figures to the right of the decimal point. -Philip J. Davis, [8, p. 40] Davis then goes on to discuss two different methods to do so. Some experiments in Maple However, we can evaluate that very slowly-converging sum in Maple extremely simply. Everything is done for us, internally, by the Levin's -transform [16]. [This is similar in effect to the first technique discussed by Davis, namely the use of the Euler-Maclaurin sum formula. But it's built-in, and works very well on this example.] > → evalf (Sum( 1 which gives the polar angle ( ) of ( ) = √ ( ) . While this is perfectly possible in Maple by using evalf/Int, and it is rapid and accurate for small values of , as increases this takes increasingly more time to do. For = 2 using 15 Digits the computation takes less than 2 seconds, but for = 16 this takes about 8 seconds (on a 2021 vintage 1.3GHz Microsoft Surface Pro). Gautschi goes on in [9] to find a definite integral expression for ( ), which is both faster and more elegant than the integration of the accelerated sum above, in terms of special functions. In Maple's notation, this is ( This clever formulation involving the erfi function (also known as Dawson's Integral) is less computationally expensive to evaluate, for large , than the integral of the accelerated sum above: while the sum takes about a fifth of a second to evaluate (at 15 Digits) at each value of , the integral of ( ) where ( ) is evaluated by evalf/Sum takes about 8 seconds when = 16; whereas the integral of the special function takes only about 2 seconds. Having both methods available in Maple is good: the agreement of the two results leads to increased confidence. However, a much faster way to evaluate for a succession of values, such as you might need to plot the continuous Spiral of Theodorus, is to solve the differential equation ′ ( ) = ( )/2 numerically, over the range 1 ≤ ≤ where is the "return time" of the map, namely the value such that ( ) = 2 . This numerical solution takes about 2 seconds in Maple using the default rkf45 method, but thereafter the value of ( ) is available essentially instantaneously for any 1 ≤ ≤ by evaluating the computed polynomial interpolant. This raises the question of how accurate some piecewise polynomial interpolant of the discrete spiral would be; that is a question we reserve for another day. By use of fsolve, which takes about 5 minutes using the numerical quadrature of the evalf/Sum form of ( ), we find = 17.6459044714280 to 15 Digits, with ( ) = 2 . This computation is much faster using the Gautschi definite integral, taking less than 30 seconds. The use of the numerical differential equation solver, with event handling, should take only about 2 seconds (essentially the same as just solving the system). The computations in this section are available at This Maple Cloud Link. 
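A discrete counterpart of the "return time" computed above (T ≈ 17.646 for the continuous curve) is obtained by simply accumulating the triangle angles arctan(1/√k) until the total exceeds 2π. The few lines below are only an illustrative analogue in Python, not the Maple computation from this section.

# Discrete analogue of the return time: sum the Theodorus triangle angles
# arctan(1/sqrt(k)) until the running total exceeds a full turn of 2*pi.
import math

total, k = 0.0, 0
while total < 2 * math.pi:
    k += 1
    total += math.atan(1 / math.sqrt(k))
print(k, total)   # the discrete spiral needs 17 triangles to complete its first full turn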
Experiments with physical triangles led to a surprise Gerofsky materialized the Spiral of Theodorus by creating various models of it from materials that included pieces of furniture, cardboard cutouts, and even edible pastry. [10] During one of her workshops while re-arranging triangles used to build a model of the Spiral of Theodorus, Gerofsky placed the right triangles in reverse order, i.e. with the hypotenuse of the subsequent triangle set on the leg of the previous one, rather than on its hypotenuse as it the case with the Spiral of Theodorus. See Figure 4. In this new arrangement of triangles, the vertices opposite to the leg of length 1, annotated by 0 , 1 , 2 , . . . in Figure 5, form a spiral-like object which she called the reverse Wurzelschnecke. In what follows we analyze some of the properties of the reverse Wurzelschnecke, both analytically and computationally. We also provide an elementary proof of the fact conjectured in [10], i.e. we prove that, when increases, the points approach the boundary of a certain circle. We determine the centre and the radius of that circle. Reverse Wurzelschnecke In this section we draw the reverse Wurzelschnecke in the complex plane and denote its points by , ∈ {0, 1, 2, . . .}. For ≥ 1 we denote the other two vertices of the -determined characteristic triangle by −1 and . See Figure 6. Code Implementation The Maple code used for the creation of Figure 3 did not have to go very far; only 17 nodes were needed. But if one wants to compute many triangles (as we did before we discovered that the inner curve approached a circle), a faster code is needed. Indeed, it was written first, and in Python. The code written in Python can calculate and plot the reverse Wurzelschnecke, as well as show how it overlaps with the Spiral of Theodorus. It can be found on Ewan Brinkman's GitHub here. Running the Code All of the required packages are listed in the file requirements.txt. Python version 3 should be used. When run, two options will appear in the terminal. Note that whenever the program ask for input, pressing the enter key will select a default option. To plot data, type 1 and press enter. The program will look for a data file with the name DATA_FILE (set in settings.py), in the data folder. Options when plotting are shown in Section 7. Figure 9 shows an example plot of the first 100 triangles. To calculate plotting data, type 2 and press enter. Next, input how many triangles should be calculated. A negative will calculate triangles forever, until the program is interrupted. Finally, enter how often it should save calculated triangles. The default, 1, saves every triangle. A value of 2 will save every other triangle, while entering 1000 will save every thousandth triangle. When saving the calculated data, the program will look for a data file with the name DATA_FILE (set in settings.py), in the data folder. If no file is found, a new file with the name DATA_FILE will be created. Options for calculating triangles are shown in Section 7. The Algorithm Each triangle is calculated using a previous triangle's data. Given an th triangle, the program will use that triangle's outside right vertex, rotation and triangle number plus 1 to calculate the next triangle. For example, given the second triangle, the program will use its outside right vertex of ( 3 ), its rotation of arctan √ 2 2 , and a triangle number of 3. First, the new triangle's inside leg length is calculated by taking the square root of the triangle number. 
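The two geometric primitives used in the algorithm described here — taking the inside leg length as the square root of the triangle number, and rotating a vertex clockwise about the vertex shared with the previous triangle — can be written compactly with complex numbers. The sketch below is a rough illustration of these two steps only, not the repository's actual calculate_triangle method; the function names and example values are ours.

# Two illustrative primitives for the triangle-placement algorithm described above:
# the inside leg of the n-th triangle, and a clockwise rotation about a shared vertex.
import cmath
import math

def inside_leg_length(triangle_number):
    """Inside leg of the n-th triangle: sqrt(n), as described in the algorithm."""
    return math.sqrt(triangle_number)

def rotate_clockwise(point, center, angle):
    """Rotate `point` clockwise by `angle` radians about `center` (complex numbers)."""
    return center + (point - center) * cmath.exp(-1j * angle)

print(inside_leg_length(3))                              # ~1.732 for the third triangle
print(rotate_clockwise(2 + 1j, 1 + 1j, math.pi / 2))     # quarter turn about 1+1j -> (1+0j)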
See 7 for using a custom function for calculating triangle side lengths. Next, the new triangle is connected to the previous triangle, without being rotated. Figure 10 shows what the third triangle would look like at this stage. Next, the outside right vertex and inside vertex of the triangle are rotated clockwise around the vertex connected to the previous triangle. The rotation is the previous triangle's rotation, plus Figure 11 shows what the third triangle would now look like. This new rotation value will then be passed onto the next triangle, as well as the triangle's outside right vertex and one more than its triangle number. Each triangle, an instance of the class Triangle from the file triangles.py, performs these calculations using the class method calculate_triangle. Settings The constants below are located in the file settings.py. Settings for plotting: • SHOW_TRIANGLES: set to True to plot the reverse Wurzelschnecke in full. Set it to False to only plot a single vertex of each triangle. The vertex to be plotted can be set using PLOT_TRIANGLE_POINT (see below). • PLOT_TRIANGLE_POINT: if SHOW_TRIANGLES is False, this determines which vertex of the triangles is plotted. The options are: "inside" for the inner vertices which form the circle, or either "outside␣left" or "outside␣right" for the other triangles' vertices. • ANIMATE_SPIRAL_OF_THEODORUS: set to True to play an animation showing how the Spiral of Theodorus overlaps with the reverse Wurzelschnecke. See more in the section 7. • SPIRAL_OF_THEODORUS_AMOUNT: how many triangles should be used during the animation showing how the Spiral of Theodorus overlaps with the Reverse Wurzelschnecke. Set the parameter ANIMATE_SPIRAL_OF_THEODORUS to True to play the animation. • SHOW_CIRCLE: set to True to plot the circle that the inside vertices of the triangle go around, as well as the circle's center. • ANIMATE_PLOT: if set to True, each triangle of the reverse Wurzelschnecke will be plotted with a delay in between each one. The delay can be set with ANIMATION_INTERVAL. • ANIMATION_INTERVAL: the amount of milliseconds to pause between each triangle while plotting. This requires ANIMATE_PLOT to be True. • CONNECT_POINTS: set to True to connect each triangle's vertex to the next with a straight line while plotting. SHOW_TRIANGLES should be False and a vertex should be set with the variable PLOT_TRIANGLE_POINT. • PLOT_TITLE: a string used as the title of the plot. • COLOUR_PERCENT_DONE: set to True to colour the triangles or triangle vertices based on how many have been plotted before them. For example, if the first 10 triangles are plotted, triangles closer to 1 will be closer to one side of the colour gradient, while triangles closer to 10 will be closer to the other side of the colour gradient. Settings for calculating triangles: • EXACT_VALUES: set to True to use exact values instead of decimal approximations during calculations. Exact values are calculated using sympy. Note, using exact values becomes slower as the number of triangles increases. It is recommended to set this to False. • OUTSIDE_LEG_LENGTH: the length of the triangles' outside legs. If wanted, this can be changed to a value other than 1. • CUSTOM_HYPOTENUSE_FUNCTION: set to True to use a custom function for calculating the hypotenuse of each triangle. The function is called calculate_hypotenuse and can be found in the file utils.py. For example, instead of taking the square root of the triangle's number plus 1, a cube root could be done instead. 
The function takes the triangle's number as input (triangle number 1, triangle number 2, etc.), and must return the hypotenuse of the triangle. The triangle's inside leg is then calculated using the hypotenuse. Note, the outside leg length can be set using OUTSIDE_LEG_LENGTH. Settings for the triangle data file: • DATA_FILE: the name of the data file for reading and writing calculated triangles. • HEADERS: the headers of the csv file which stores the calculated triangles. Visualizing the Two Spirals Together In the file settings.py, setting the constant ANIMATE_SPIRAL_OF_THEODORUS to True will play an animation showing how the Spiral of Theodorus (the red lines) can be transformed to overlap with the Reverse Wurzelschnecke (the black triangles). The constant SPIRAL_OF_THEODORUS_AMOUNT sets how many triangles should be used for the Spiral of Theodorus. A copy of the animation can also be found on Ewan Brinkman's GitHub here. During the animation, the reverse Wurzelschnecke does not move. Instead, the Spiral of Theodorus begins by being reflected about the line = 1. Next, it is rotated arctan 1 radians around the point (1, 1), before being translated 2 units left. Large Calculations When billions of triangles are calculated with this program, interesting behaviour, likely numerical in origin, appears. Figure 12 shows It appears that the inside vertices have begun to wobble slightly. After more triangles are calculated, this became more apparent. Furthermore, the path the inside vertices trace appear to be shifting away from the centre of the circle that the inside vertices were originally going around. Figures 13 to 12(b) show some examples. In Figure 12(b), the inside vertices have even moved away from the circle. [Note: one hypothesis is floating point error. We reran the program multiple times to see if the results were reproducible. This just showed that the results were deterministic; but floating-point error is not random, so that did not rule out rounding errors. For triangle number 10 8 , the side length is 10 4 ; even using single precision, we would expect a side length accurate to 10 −3 which would look exact to the human eye. If floating-point error is significant here, it must be because of accumulated errors in the summation of the angles. We have not yet investigated this in detail. ] In Figure 15 we plot the relative differences |S ,Python − S ,Maple |/|S ,Maple | for = 10, 100, . . ., 10 11 (one hundred billion, in the North American sense which we have used consistently throughout this paper). We see a growth in these differences which we have visually fit with a curve · 1.5 , with a constant. We view this as evidence for cumulative rounding errors in the Python program, which does more computation than the Maple computation (which indeed uses sequence acceleration to avoid computation). Concluding Remarks This exploration began with an observation from physical triangles put together in a novel way; then Ewan Brinkman wrote the Python program to compute a large number of triangles, using real arithmetic and trigonometry to put them together. The first puzzle this computation generated was "what is the true shape of the curve being traced out in the centre?" The Python program gave us important clues. Following up on those clues, we were able to give a proof that the ultimate shape is indeed a circle (contradicting the large-scale experiments with the Python program, and thus demonstrating that the interesting behaviour shown must be numerical in origin). 
The points Sₙ approach the circle monotonically in radius, with the radius of the nth point |Sₙ| = 1 + 1/(8n) + O(1/n²). We also showed that the points are equidistributed on that circle, again contradicting some of the very large computations carried out with the Python program. [Fig. 16: Comparing the two spirals, using the same orientation for simplicity. The fact that all Sₙ lie outside the unit circle is not quite evident in Figure 16(b), but it is true.] This suggests that the detailed behaviour there might be of interest to numerical analysts; in Figure 12(b) we see something like "epicycles," for instance; we conclude that these might be accumulated rounding errors. We suspect that, even in spite of the visible regularity of the deviations, this is the case. Preliminary experiments comparing with other techniques such as Euler-Maclaurin summation have not shown any contradiction so far, however. After performing the initial Python experiments, we connected with the material of Philip J. Davis' lovely book [8], some of which we summarized in section 2. We then confirmed some of those results by use of Maple in Section 3. See also [11] and [6]. Still, there seems to be lots left to explore. A good place to start would be [8, Supplement B], written by Arieh Iserles, a professor of numerical analysis at Cambridge University with extensive interests including dynamical systems; his supplement to Davis' book generalizes the Theodorus spiral in several ways, including to matrix iterations.
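The growth curve c·n^1.5 that was fitted visually in the Large Calculations section can also be estimated by least squares on log–log axes. The sketch below does this with synthetic stand-in data, since the actual Python/Maple differences are not tabulated here.

# Power-law fit |difference| ~ c * n**p by linear least squares in log-log space.
# The data are synthetic placeholders generated with an exponent of 1.5.
import numpy as np

n = np.array([10.0**k for k in range(1, 12)])            # n = 10, 100, ..., 1e11
rng = np.random.default_rng(0)
diffs = 1e-16 * n**1.5 * (1 + 0.1 * rng.standard_normal(n.size))

p, log_c = np.polyfit(np.log(n), np.log(np.abs(diffs)), 1)
print(f"fitted exponent p ~ {p:.2f}, prefactor c ~ {np.exp(log_c):.2e}")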
2021-12-01T16:25:10.050Z
2021-11-29T00:00:00.000
{ "year": 2021, "sha1": "1f703d5acc6033812852c44d4301490993f167da", "oa_license": "CCBYNCSA", "oa_url": "https://mapletransactions.org/index.php/maple/article/download/14500/11516", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "3624c57ba1db32b630615fcd44cfda08c80fbc1f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science" ] }
267139880
pes2o/s2orc
v3-fos-license
White shark comparison reveals a slender body for the extinct megatooth shark, Otodus megalodon (Lamniformes: Otodontidae) The megatooth shark, †Otodus megalodon, which likely reached at least 15 m in total length, is an iconic extinct shark represented primarily by its gigantic teeth in the Neogene fossil record. As one of the largest marine carnivores to ever exist, understanding the biology, evolution, and extinction of †O. megalodon is important because it had a significant impact on the ecology and evolution of marine ecosystems that shaped the present-day oceans. Some attempts inferring the body form of †O. megalodon have been carried out, but they are all speculative due to the lack of any complete skeleton. Here we highlight the fact that the previous total body length estimated from vertebral diameters of the extant white shark (Carcharodon carcharias) for an †O. megalodon individual represented by an incomplete vertebral column is much shorter than the sum of anteroposterior lengths of those fossil vertebrae. This factual evidence indicates that †O. megalodon had an elongated body relative to the body of the modern white shark. Although its exact body form remains unknown, this proposition represents the most parsimonious empirical evidence, which is a significant step towards deciphering the body form of †O. megalodon. Introduction The extinct megatooth shark, †Otodus megalodon (Lamniformes: †Otodontidae), is an iconic prehistoric shark that has captured the attention of both scientists and the public due to its large teeth.Yet, one major challenge palaeontologists have faced is exactly what †O.megalodon looked like because no complete skeleton of the fossil species is known to date.Traditionally, the extant white shark (Carcharodon carcharias) has been used as a model species to reconstruct the body form of †O.megalodon (e.g., Gottfried et al., 1996). The most recent attempts have been the 2D reconstruction work by Cooper et al. (2020), followed by Cooper et al.'s (2022) 3D model of the body of †O.megalodon.Cooper et al. (2020Cooper et al. ( , 2022) ) used the extant white shark as a model representation of †O.megalodon Europe PMC Funders Author Manuscripts Europe PMC Funders Author Manuscripts because the fossil shark has been inferred to be regionally endothermic like the extant lamnid sharks that include the white shark (Ferrón, 2017).In particular, Cooper et al. (2022) used an extant juvenile white shark specimen to generate a 3D model of †O.megalodon first, and then conducted a 'model adjustment' using all the extant lamnids because of the uncertainty in the phylogenetic position of †O.megalodon within Lamniformes.Based on their body form reconstruction, they concluded that †O.megalodon was a fast-cruising shark much like the extant lamnids.However, using the extant white shark or other lamnids as a template to reconstruct the body form of †O.megalodon lacks empirical fossil support (Sternes et al., 2023).Furthermore, it is also tenuous on the phylogenetic basis because †O.megalodon, as an otodontid, lies outside of the Lamnidae and may not be closely related to the family at all (Sternes et al., 2023;Figure 1A; but see also Appendix 1). 
One key question is: "Did †O.megalodon look like a large extant white shark?"It is true that the extant white shark has generally been used to estimate the body size of †O.megalodon (Shimada, 2019;Perez et al., 2021), but unlike preserved teeth that are at least tangibly comparable, the lack of any complete skeleton, or even a complete cranial skeleton or vertebral column, makes any skeletal or body reconstruction speculative.However, there are three critical pieces of information relevant to addressing the question that have become available since Cooper et al.'s (2022) study.First, on the basis of geochemical evidence, the endothermic physiology in †O.megalodon (specifically, likely regional endothermy) is empirically confirmed (Griffiths et al., 2023).Second, the newly described placoid scales of †O.megalodon, particularly the scales' interkeel distances that vary independent of body sizes in sharks, indicate that the general cruising speed of †O.megalodon was likely slower than the cruising speeds of extant lamnids, including the white shark (Shimada et al., 2023).Third, and more significantly, two other lamniform species, the extant planktivorous basking shark (Cetorhinus maximus), which has traditionally been regarded as a sluggish shark, as well as the deep-water, benthopelagic smalltooth sand tiger (Odontaspis ferox) have both been reinterpreted to be endothermic (also likely regional endothermy: Dolton et al., 2023aDolton et al., , 2023b; despite at least O. ferox is suggested to be ectothermic based on isotopic analyses by Griffiths et al., 2023).Hence, while †O.megalodon was indeed 'endothermic ' (Griffiths et al., 2023), the new palaeontological (Shimada et al., 2023) and neontological (at least Dolton et al., 2023a, at present) evidence do not corroborate the previous assumption and its rationale that †O.megalodon must have physically resembled the extant white shark or lamnids in general (Cooper et al., 2020(Cooper et al., , 2022)).Therefore, the purpose of this paper is twofold: 1) to re-evaluate the validity of the most recently proposed body form reconstruction of †O.megalodon; and 2) to provide a new hypothesis on the body form of †O.megalodon based on available evidence. Materials and Methods The main specimen used for the re-evaluation of the recently proposed body form of †O.megalodon and further discussion in this study is IRSNB P 9893, which is housed in the Royal Belgian Institute of Natural Sciences (IRSNB) in Brussels.This fossil specimen, formerly referred to as 'IRSNB 3121' (Gottfried et al., 1996), consists of 141 associated, but disarticulated, vertebral centra from an individual collected from the Miocene of Belgium (Shimada et al., 2021b;Cooper et al., 2022) (Figure 1B).Although it was not associated Europe PMC Funders Author Manuscripts Europe PMC Funders Author Manuscripts with any teeth, the specimen is broadly accepted to have come from †O. megalodon due to the large size and structure of the centra, which are consistent with non-cetorhinid lamniform vertebrae (Gottfried et al., 1996;Shimada et al., 2021b;Cooper et al., 2022). Based on the maximum width of the largest centrum in the specimen ('vertebra #4' measuring 155 mm in width), the †O.megalodon individual was estimated to be 9.2 m TL in life based on a linear regression function describing the quantitative relationship between the maximum vertebral width and TL measurements from 16 extant white sharks (Gottfried et al., 1996).Cooper et al. 
(2022, data S1) also took measurements of each vertebra of IRSNB P 9893 and presented the sum of anteroposterior lengths of all centra to be approximately 11.1 m (Figure 1B).Our study compared that measurement (11.1 m) with an estimated total length (9.2 m) for that specific †O.megalodon individual based on the extant white shark (Gottfried et al., 1996). For comparisons, some preserved extant specimens housed in the following repository institutions were examined radiographically: Field Museum of Natural History (FMNH), Chicago, Illinois, USA; Natural History Museum of Los Angeles County (LACM), California, USA; and Florida Museum of Natural History, University of Florida (UF), Gainesville, USA.We used a Siemens Medical Systems' SOMATOM Sensation 64-slice computed tomography (CT) scanner at the Children's Memorial Hospital, Chicago, Illinois, USA, with the following settings: 120 kVp, effective mAs 200 with automatic exposure control activated, rotation time 0.33 sec, 0.75 pitch, 32 detectors using z-flying focal spot technique, 0.625-mm slice thickness and 0.4 mm overlapping slice reconstruction.Multiple CT images showing the skeletal elements of the specimens were generated using Siemens' InS-pace software. We acknowledge that different types of intra-specific variation may occur in sharks, including sexual dimorphism where, in many lamniform taxa, females tend to reach sexual maturity at larger body sizes or attain larger maximum body sizes (Compagno, 2002).However, for the purpose of re-evaluating the validity of Cooper et al.'s (2022) reconstructed vertebral column of †O.megalodon, we examined in detail the CT scans of a juvenile Carcharodon carcharias specimen (LACM 43805-1), which are available on the MorphoSource data-base: (https://www.morphosource.org/concern/media/000545335).Vertebral diameters were measured from this specimen by using the open-source web program postDICOM (Herten, The Netherlands; www.postdicom.com,last accessed July 25, 2023).Each measurement was taken three times to minimize possible measurement errors and to calculate a mean value that was subsequently used.A total of 163 vertebral centra were measured across the entire body of the specimen (see Appendix 2). Results and Discussion Re-evaluation of the Validity of the Recently Reconstructed Body Form of †O.megalodon Cooper et al. (2022) proposed the most recent 3D model of †O.megalodon and used it to make various inferences on the ecology of the extinct shark.We re-evaluated their assumptions and propositions by considering available evidence and other recent discoveries.Our re-evaluation result is that there are at least four major concerns with their body reconstruction that are worthy of discussion. Europe PMC Funders Author Manuscripts Europe PMC Funders Author Manuscripts The first issue is the questionable accuracy of their reconstructed vertebral column of †O.megalodon.Cooper et al. (2022) used 141 associated vertebrae from an †O.megalodon individual (IRSNB P 9893) collected from a Miocene deposit in Belgium.Despite being the best-preserved vertebral column of †O.megalodon, there are several major concerns that must be taken into consideration about using this fossil specimen.As Cooper et al. (2022, p. 8) also pointed out, this set of vertebrae is most certainly incomplete.For instance, Cooper et al. 
( 2022) followed the sequence of curatorially assigned vertebral numbers that do not represent the vertebral sequence in life and noted that "centra 30, 35 to 37, 45, 105, 131, 136, 141, 146, 147, 149 are missing from the column".Although Cooper et al. (2022) accounted for those vertebrae with artificially and likely arbitrarily (Gottfried et al., 1996) assigned numbers that are interpreted to be missing, exactly how many more vertebrae were present in the vertebral column in life remains uncertain.In fact, vertebral counts are known to vary widely even among lamniform sharks (Springer and Garrick, 1964).It is therefore impossible to even decisively determine the total number of vertebrae, yet alone the total number of precaudal and caudal vertebrae, originally present in †O.megalodon.However, not only did Cooper et al. (2022) choose to assume that all preserved centra in the specimen represent precaudal vertebrae in their 3D model of †O.megalodon, they put the largest vertebrae near the neurocranium of their model (Figure 2).We point out that, in previous studies of both extinct (Conte et al., 2019) and extant (Natanson et al., 2018) lamniform sharks, the largest vertebrae are found in the girthiest portion of their trunk (mid-body), and this condition is also true for the extant white shark (vertebrae 54-64: Appendix 2; Figure 2).When plotting Cooper et al.'s (2022) reconstructed vertebral column, a gradual decline in vertebral diameter starting from the first vertebra is observed whereas the extant white shark shows a gradual increase in vertebral diameter and then a decline, which is the same pattern observed in other extant lamniform sharks (Natanson et al., 2018) (Figure 2).Furthermore, our reexamination of IRSNB P 9893 based on measurements provided by Cooper et al. (2022) suggests that not all centra in the specimen are precaudal vertebrae based on comparisons with a complete vertebral column in the extant white shark (Appendix 2).For example, in reconstruction, the extant white shark may not necessarily be an appropriate body form analog for the extinct species (i.e., †O.megalodon could have had a different body form), or both.In addition, Cooper et al. (2022) noted that their reconstruction of the †O.megalodon head is slightly 'undersized' (p.9), but we would argue that, while the overall length of the cranial region relative to its TL may be on par with that of the extant white shark (see above), at least their jaw reconstruction may actually be oversized relative to its body if the overall skeletal organization of the extant white shark (Figure 3 1B).To reconstruct the body, they scaled the full-body scan of an extant white shark so that their reconstructed vertebral column "ended at the base of the caudal fin" (Cooper et al., 2022, p. 9).Effectively, their †O.megalodon skeletal reconstruction based on the two fossil specimens served practically no purpose in inferring the body shape of †O.megalodon because the entire head and body were based on the extant white shark.Therefore, by taking this methodological assessment along with the other three aforementioned concerns into account, the validity of their 3D model of †O.megalodon is highly questionable. A New Interpretation of †O. 
megalodon Body Form
So, what did †O. megalodon actually look like? Despite their questionable reconstructions, we point out that Cooper et al.'s (2022) study is significant because it left an important clue about the body form of †O. megalodon. Their reconstructed vertebral column, based on an associated vertebral set from the Miocene of Belgium, was 11.1 m in length (Figure 1B), with the total length of their complete model measuring 15.9 m. The specimen is most certainly incomplete (Gottfried et al., 1996), missing an unknown number of vertebrae (see above). Yet, this specific †O. megalodon specimen was previously estimated to have come from an individual that measured 9.2 m TL (i.e., including the head and caudal fin) based on the quantitative relationship between the maximum vertebral width and TL measured from 16 extant white sharks that ranged 1.9-3.7 m TL (Gottfried et al., 1996; Shimada et al., 2021b). The vertebral centra of †O. megalodon are short, well mineralized and equipped with densely spaced radial lamellae (Leriche, 1926). This vertebral morphotype, which functionally adds architectural strength, is common within Lamniformes and characterizes both the extant white shark (Newbrey et al., 2015) and many other extinct apex predatory lamniform species (Shimada, 1997; Siverson, 1999; Amalfitano et al., 2022). Yet, the fact that the vertebral column length measured by Cooper et al. (2022) (11.1 m) is much longer than the estimate based on the vertebral diameter sizes of the extant white shark (9.2 m TL) indicates that †O. megalodon had a more elongated body relative to the extant white shark (Figure 4).

Cooper et al. (2022) did also recognize that their reconstructed 3D model based on the Belgian fossil is "markedly longer than previously estimated for this specimen" (p. 4 of main text) and that their "initial [computer-generated] model [of †O. megalodon] appeared rather thin" (p. 16 of their Supplementary Methods). However, constrained by the underlying premise of their study using the extant white shark or Lamnidae as the modern analog for †O. megalodon, they did not consider the possibility that †O. megalodon could have had an elongated body form compared to the extant white shark. Instead, Cooper et al. (2022) attributed the discrepancy to 1) the distant phylogenetic relationship between †O. megalodon and the white shark, 2) the unknown total vertebral count and column structure in †O. megalodon, and 3) the uncertainty in whether the Miocene specimen from Belgium preserves the largest vertebral centrum from the individual. However, not only do these additional explanations make their proposition less parsimonious, their phylogenetic justification to explain the discrepancy is contradictory to their very premise of using the extant white shark as a model for †O. megalodon in the first place. Furthermore, whereas the likelihood of significantly larger vertebrae missing from the Belgian fossil specimen is rather low because diameter differences across the largest preserved centra are subtle and in a tight range (e.g., nearly 42% of the 141 preserved vertebrae measure 130-155 mm: Figure 2), the possibility that more vertebrae could be missing from the specimen would mean that their 11.1 m measurement must be regarded as the minimum possible length of the vertebral column. Alternatively, our proposition is based on evidence that is most parsimonious and empirical: i.e., 11.1 m [= minimum possible actual measured vertebral column length] > 9.2 m [total length of the same fossil individual estimated from the extant white shark].
Exactly how elongated †O. megalodon's body was relative to the extant white shark is uncertain at the present time (Figure 4) because the extent of missing vertebrae in the associated vertebral set (Figure 1B) is unknown (Cooper et al., 2022; this study). However, besides the aforementioned new palaeontological (Shimada et al., 2023) and neontological (at least Dolton et al., 2023a, at present) evidence, our interpretation is further supported by additional anatomical evidence. In modern lamnids, centrum growth correlates with girth rather than body length (Natanson et al., 2018). White sharks have a thicker vertebral column than shortfin mako (Isurus oxyrinchus) and porbeagle (Lamna nasus) sharks at a comparable body length (Gottfried et al., 1996; Natanson et al., 2002; Doño et al., 2015) but with a similar mass (Kohler et al., 1995). More compression-resistant vertebrae may compensate for the structural issues associated with the thinner columns in shortfin makos and porbeagles (Ingle et al., 2018). The maximum diameter of the †O. megalodon vertebrae from Belgium, along with the original vertebral column length of 11.1+ m, indicates a vertebral column not only much thinner in relative terms than that of a white shark but also more gracile than those of smaller-bodied lamnids with known vertebral size data (Gottfried et al., 1996; Natanson et al., 2002; Doño et al., 2015). If anything, the data from living lamnids indicate a robust vertebral column in a hypothetical lamnid-like shark the size of an †O. megalodon. Therefore, the remarkably slender vertebral column of the Belgian †O. megalodon specimen raises concerns about the accuracy of girthy, lamnid-like reconstructions of this species suggested by Cooper et al. (2020, 2022). We also note that the body cross-sectional geometry in Cooper et al.'s (2022) 3D body reconstruction of †O. megalodon is rather rectangular and distorted, whereas it is generally elliptical in extant sharks (Tomita et al., 2021), suggesting that it is more parsimonious to consider †O. megalodon to also have had an elliptical body cross-section.
The exact body form of †O. megalodon (or any other otodontid: see Appendix 1) cannot be elucidated decisively based on the present fossil record (Sternes et al., 2023). Nevertheless, our new interpretation, that †O. megalodon had an elongated body relative to the extant white shark, has significant implications for the biology of the fossil shark, most notably because it would mean that its pleuroperitoneal cavity was likely elongated as well. †Otodus megalodon and its predecessors such as †O. chubutensis apparently occupied a trophic position similar to (McCormack et al., 2022), or possibly higher than (Kast et al., 2022), that of the extant white shark based on geochemical evidence, and its diet included marine mammals based on bite marks on fossil pinniped and cetacean bones (Aguilera et al., 2008; Collareta et al., 2017; Godfrey et al., 2018). The morphology of placoid scales suggests that the cruising speed of †O. megalodon was probably slower than that of the extant lamnids, including the white shark, and its endothermic metabolism is thought to have been used largely to facilitate digesting large, ingested food items and enhancing nutrient absorption and processing (Shimada et al., 2023). While digestion of food and absorption of nutrients are essential for every vertebrate (Tomita et al., 2023), endothermic fishes possess visceral countercurrent heat exchangers and retain an elevated metabolic rate from food processing (Dickson and Graham, 2004). Sharks have a spiral intestine with complex intestinal muscular activity (Tomita et al., 2023) that is thought to have evolved to increase the absorptive surface area and to reduce the unidirectional flow speed of digesta for prolonging absorptive time (Holmgren and Nilsson, 1999; Leigh et al., 2021). In fact, the spiral intestine is the warmest visceral organ in extant lamnids, along with their warm, large, lipid-rich liver associated with the suprahepatic rete (Carey et al., 1985; Bernal et al., 2001). The elongated body of †O. megalodon would imply that its liver as well as its alimentary canal, including the spiral intestine, within the body cavity may have also been long, which would have concomitantly provided more absorptive area and time, with heat-induced nutrient-processing efficiency. Furthermore, at least some endothermic fishes can exploit cool waters because a warm viscera further elevates the body core temperature (Dickson and Graham, 2004). It is conceivable that the worldwide occurrences of †O. megalodon fossils (Razak and Kocsis, 2018), including in cool areas, may, at least in part, be attributed to this physiological condition.
Conclusions
Cooper et al.'s (2022) 3D reconstruction work is novel, but because the fundamental assumptions and accuracy of their 3D skeletal and body reconstructions are questionable in the first place, their entire set of conclusions about the lifestyle of †O. megalodon based on that 3D reconstruction must also be considered questionable. In fact, their conclusion that †O. megalodon was a fast or long-distance swimmer like the extant white shark is logically circular, because their body reconstruction of the fossil shark was based on the fast-swimming, regionally endothermic lamnids, including the white shark with its known long-distance travel records (Weng et al., 2007; Jorgensen et al., 2010; Watanabe et al., 2015; Harding et al., 2021). The reality is that there is currently no scientific support for Cooper et al.'s (2022) or any of the previously published body forms of †O. megalodon (Gottfried et al., 1996; Cooper et al., 2020). Furthermore, our results indicate that the previously published possible maximum body size estimates of 15-20 m TL for †O. megalodon (Shimada, 2019; Perez et al., 2021), as well as its ontogenetic growth model (Shimada et al., 2021b) based on the extant white shark, are likely underestimated. We must acknowledge that, without direct fossil evidence such as a complete skeleton, extrapolation over 100 million years of otodontid or lamniform evolution and the uniquely 'off-the-scale' gigantism of †O. megalodon among macrophagous lamniform sharks (Shimada et al., 2021a) make the direct comparison of body forms even within Lamniformes extremely challenging.

Supplementary Material
Refer to Web version on PubMed Central for supplementary material.

), which Cooper et al. (2022) did not account for, is used as a model at face value. The third concern is the lack of ontogenetic consideration. The specific extant white shark specimen scanned for Cooper et al.'s (2022) †O. megalodon body reconstruction may not be ideal. Setting aside a slight upward bend of the head, which is a rather unconventional posture compared to the otherwise fusiform body that typically characterizes the white shark and sharks in general (Sternes and Shimada, 2020; Paig-Tran et al., 2022; Sternes et al., 2023), the white shark specimen they used represents a 2.56-m-TL juvenile individual. Importantly, allometric changes in girth and in caudal fin morphology at various developmental stages are known for the white shark and other lamnids (Casey and Pratt, 1985; Lingham-Soliar, 2005; Tomita et al., 2018; Sternes et al., 2023).
However, Cooper et al. (2022) did not address the possible effects of ontogenetic morphological differences in reconstructing the body form of †O. megalodon. Therefore, we question whether the use of a 2.6-m-TL juvenile white shark is appropriate for the extinct shark that likely reached at least 15 m TL (Shimada, 2019; Perez et al., 2021). The fourth and perhaps the most critical issue is their method of body form reconstruction. Cooper et al. (2022) used a computed tomographic (CT) scan of an extant white shark cranial skeleton as a hypothetical substitute for that of †O. megalodon, where they superimposed their artificially reconstructed dentition, based on an incomplete associated tooth set of an †O. megalodon individual from the Pliocene of North Carolina, USA, estimated to be 17.3 m in total length (TL) (Perez et al., 2021), onto the digital image of the white shark jaws. Even though the exact size of the cranial skeleton relative to the vertebral column remains uncertain based on the present fossil record, Cooper et al. (2022) then attached their cranial reconstruction to their reconstructed vertebral column based on an incomplete associated set of vertebrae of another †O. megalodon individual from the Miocene of Belgium (Figure 1B).

Figure 1. Simplified family-level phylogenetic hypothesis of Lamniformes showing all extant clades and †Otodontidae (A: dagger [†] indicates extinct), and silhouette depiction of the fossil vertebral column of †Otodus megalodon (B). A, Current understanding of lamniform phylogeny demonstrating that a large portion of the phylogenetic tree remains unresolved due to conflicting results based on various molecular and morphological studies (Sternes et al., 2023 and references therein); although the placement of †Otodontidae is tentative and other extinct families are not depicted in this tree, the main point of this illustration is to demonstrate that †Otodontidae lies outside of Lamnidae (both clades highlighted in bold letters), where clades containing one or more species with regional endothermy (indicated by an asterisk [*]) do not share an immediate common ancestry (Sternes et al., 2023). B, Reconstructed vertebral column and its total measured length by Cooper et al. (2022) based on an incomplete associated vertebral set from the Miocene of Belgium; this specific specimen (IRSNB P 9893) was previously estimated to have come from an individual that measured 9.2 m in total length, including the head and caudal fin (Gottfried et al., 1996), based on the modern white shark, not accounted for by Cooper et al. (2022).

Figure 2. The distribution of vertebral diameters throughout each vertebral column, where vertebral number '1' represents the anterior-most centrum in each specimen. A, Graph based on Cooper et al.'s (2022) Data S1 for the vertebral column of †Otodus megalodon from the Miocene of Belgium (IRSNB P 9893), where the vertebral column is most certainly incomplete and the vertebral numbers do not necessarily reflect the original anatomical sequence (grey plots represent significantly damaged vertebrae). B, Graph based on CT-scanned data of an extant white shark (Carcharodon carcharias) specimen (LACM 43805-1), where the vertebral column is complete and the vertebral numbers reflect the anatomical sequence.

Figure 3. Photographic (*) and CT images (**) of preserved specimens of extant white shark (Carcharodon carcharias) and salmon shark (Lamna ditropis).
A, Complete specimen of a 126-cm-TL male C. carcharias caught off central California, USA (LACM 43805-1): from top to bottom, external body * and skeleton ** in left lateral view, and external body ** and skeleton ** in ventral view. B, Complete specimen of a 151-cm-TL male L. ditropis caught off central California (FMNH 117475): from top to bottom, external body * and skeleton ** in left lateral view, and external body * and skeleton ** in dorsal view. C, Head specimen of an estimated 271-cm-TL male C. carcharias caught off southern Florida, USA (FMNH 38335): from top to bottom, external head * and cranial skeleton ** in left lateral view, and external head * and cranial skeleton ** in dorsal view. All scale bars equal 10 cm.

Figure 4. Previous and new schematic interpretations of †Otodus megalodon body form. A dark grey silhouette depicting the previously reconstructed †O. megalodon body form by Cooper et al. (2022) based on the extant white shark, superimposing a light grey outline showing the newly interpreted body form of †O. megalodon, which is more elongated than the extant white shark. Note: it must be emphasized that this illustration should be strictly regarded as schematic, as the exact extent of body elongation, the shape of the head, and the morphology and positions of the fins remain unknown based on the present fossil record.

This fact strongly indicates that the reconstructed precaudal portion of the vertebral column of Cooper et al. (2022) indeed includes caudal vertebrae. Taking all the information into account, the model of the vertebral column created by Cooper et al. (2022) is most certainly incomplete and inaccurate.

In Cooper et al.'s (2022) computer model, the largest vertebra in IRSNB P 9893 (centrum 4) was 155 mm in diameter whereas the smallest vertebra (centrum 150) was 57 mm in diameter. When comparing the largest vertebra to the smallest in Cooper et al.'s (2022) model, this generates a ratio of 2.7. This same ratio (2.7) is present when comparing the largest vertebra found in the mid-body of the extant white shark to a vertebra found in its caudal fin, specifically vertebrae #61 and #132, measuring 19.75 mm and 7.27 mm in diameter, respectively (Appendix 2).

in the 3D skeletal model is oversized relative to its vertebrae if the extant white shark is used. Such a discrepancy may indicate that there is a flaw in Cooper et al.'s (2022) skeletal
2024-01-24T16:46:57.043Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "be1c24ca926711360d759ffaca381b075aea9beb", "oa_license": "CCBYNCSA", "oa_url": "https://eprints.bbk.ac.uk/id/eprint/52781/1/SternesEtAl-PE(Galley)_Alberto.pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "5f1a01f2d0f6cc640d4d9e371095eb50b89824cb", "s2fieldsofstudy": [ "Biology", "Geology" ], "extfieldsofstudy": [] }
4972463
pes2o/s2orc
v3-fos-license
Characterization of Immortalized Human Corneal Endothelial Cell Line using HPV 16 E6/E7 on Lyophilized Human Amniotic Membrane Purpose To establish the immortalized human corneal endothelial cell line (IHCEn) by transducing human papilloma virus (HPV) 16 E6/E7 oncogenes, and to identify their characteristics when cultivated on a lyophilized human amniotic membrane (LAM). Methods Primary human corneal endothelial cells (PHCEn) were infected using a retroviral vector with HPV 16 E6/E7, and transformed cells were clonally selected by G418. Growth properties and characteristics of IHCEn were compared with PHCEn by cell counting and RT-PCR of VDAC3, SLC4A4, CLCN3, FGF-1, Col IV, and Na+/K+ ATPase. IHCEn were cultured on LAM. Messenger RNA expressions of VDAC3, CLCN3, and Na+/K+ ATPase, and protein expressions of Na+/K+ ATPase and Col IV in IHCEn cultivated on LAM were investigated by RT-PCR, immunofluorescence, and immunohistochemical staining, respectively. Results Successful immortalization was confirmed by stable expression of HPV 16 E6/E7 mRNA by RT-PCR, and IHCEn exhibited typical corneal endothelial morphology. Doubling time of IHCEn was 30.15±10.96 hrs. Both IHCEn and PHCEn expressed VDAC3, CLCN3, SLC4A4, FGF-1, Col IV, and Na+/K+ ATPase. IHCEn cultivated on LAM showed stronger expression of VDAC3, CLCN4, and Na+/K+ ATPase mRNA than on plastic culture dish. Immunohistochemical staining and immunofluorescence revealed the positive expression of Na+/K+ ATPase and Col IV. Conclusions IHCEn were successfully established, and LAM is a good substrate for the culture of human corneal endothelial cells. The corneal endothelium, a monolayer of differentiated cells located in the posterior portion of the cornea, is essential for maintaining corneal transparency by their dehydrating pumping action on the corneal stroma, and at least 1,500 corneal endotheial cells/mm 2 are required for normal corneal function. The human corneal endothelium comprises a postnatal density of approx. 3,000 cells/mm. 1,2 During life we experience a physiological reduction of cell density of about 0.5% per year, 3 and endothelial cell loss can increase after intraocular surgery or in certain inherited diseases. 4,5 Although once the density of endothelial cells, which have a limited regenerative capcity, reaches a critically low number, the integrity of the monolayer can be compromised, resulting in stromal edema, corneal opacity, and loss of visual acuity. 6 Since a minimum cell density of 500 cells/mm 2 is necessary to ensure a proper pump function, the only effective therapy to restore the corneal endothelium and thereby vision is corneal transplantation. Nevertheless, successful corneal transplantation also requires an optimum endothelial cell density, and donated corneas are very limited in Korea. To overcome these limitations, culturing human corneal endothelial cells on suitable biomaterials and transplanting the construct containing the intact endothelial monolayer to the posterior of the cornea was attempted. [7][8][9][10][11][12][13][14][15][16] However, these experiments were limited by several problems: a lack of cell adherence after transplantation with insufficient substances, the use of non-human corneas, and insurance of a sufficient number of corneal endothelial cells. Recently, the culture of corneal endothelial cells on biodegradable membranes 17 and human corneal endothelium used for transplantation were genetically manipulated by transfection with the SV40 large T-antigen. 
18 One group also pursued an alternative strategy to increase the endothelial cell density by transplanting cultured corneal endothelial cells onto donor corneas, which were unsuitable for transplantation. 19 Although these experiments reported a 80-90% graft success rate, the use of donated human cornea has a low efficiency and did not overcome the limitation of corneal transplantation. Therefore, two questions must be resolved for corneal endothelial cell transplantation. First, the isolation of sufficient human corneal endothelial cells is essential for corneal endothelial cell transplantation. The most widely used method for immortalization of cells is the viral oncogene, SV40 large T-antigen, and immortalizations of corneal endothelial cells have been reported using human and murine cells. 20,21 However, such cell lines had been shown to exhibit abnormal phenotypes, including a reduction of Na + /K + pumping action and alterations of collagen expression. Recently, the establishment of more stable and physiologically relevant cell lines has been reported by transducing human papilloma virus (HPV) type 16 E6/E7 genes into various primary cells. 2,3,5 E6 proteins are approximately 150 amino acids in length, and have the important function of binding the tumor suppressor p53, which results in its ubiquitin-mediated degradation. 22 E7 proteins are approximately 100 amino acids in length, and associate with a member of the retinoblastoma tumor suppressor family to facilitate progression into S phase. 23 Moreover, the E7 protein provides the synergistical transforming activity together with the E6 protein. 24,25 Therefore, a more stable immortalization was expected by transducing with E6 and E7 in human corneal endothelial cells. The second necessity for corneal endothelial cell transplantation is the development of a proper substrate sufficient not only for the corneal endothelial cells to adhere and grow, but also to transplant. As mentioned above, previously attempted substrates were abnormal corneal or biochemical materials. These substrates were not appropriate for the culture and transplantation of corneal endothelial cells, in addition to being very expensive and inefficient. Human amniotic membrane, the innermost layer of human placenta, is harmless to humans and stimulates no immune reactions. The amniotic membrane also promotes wound healing and has a cell-protecting effect, which results in its clinical use in dermatology and ophthalmology for dressing wounds. Furthermore, the amniotic membrane has similar aspects to Descement's membrane. The major components of basement membrane are type IV, V, and VII collagen, and abundant extracellular matrix proteins including laminin and integrin. The use of amniotic membrane as a substrate and scaffold for the culture of corneal endothelial cells was expected to be successful. Therefore, this experiment was conducted to establish immortalized human corneal endothelial cells (IHCEn) by stable introduction of HPV 16 E6/E7, and to investigate the biological characteristics of an established corneal endothelial cell line cultivated on a lyophilized human amniotic membrane. Materials All materials for cell culture including fetal bovine serum (FBS) and Opti-MEM were obtained from Gibco BRL (Grand Island, NY, USA). Other media supplements, such as epidermal growth factor (EGF), ascorbic acid, RPMI 1640 vitamin mixture, insect lipid, and chondroitin sulfate, were purchased from Sigma Chemical (St. Louis, MO, USA). 
Nerve growth factor (NGF) was obtained from R&D Systems (Minneapolis, MN, USA). Isolation and primary culture of corneal endothelial cells Human eyes were obtained in accordance with the tenets of the Declaration of Helsinki and proper informed consent. Human corneal endothelial cells were isolated by trypsin digestion of the posterior portions of excised cornea. After enzymatic digestion at 37℃ for 10 to 15 min, the endothelium was removed by gentle scraping, and seeded onto a tissue culture dish in Opti-MEM supplemented with 5 ng/ml of EGF, 20 ng/ml of NGF, 20 µg/ml of ascorbic acid, 0.005% insect lipid, 200 mg/l of CaCl 2 , 0.02% chondroitin sulfate, 1% RPMI 1640 vitamin mixture, and 8% FBS. Retroviral infection of primary corneal endothelial cells PA317 amphotropic packaging cell lines stably transfected with pLXSN 16 E6-E7 (HPV 16 E6/E7) were purchased from American Type Culture Collection (ATCC, Manassas, VA, USA). Cells were grown to 70-80% confluence, and supernatants were collected for 24 hr and stored in aliquots at -80℃. The primary endothelial cells were infected with 1 ml of virus stock in 3 ml of medium containing Polybrene (Sigma, MO, USA) at 4 µl/ml for 24 hr. The virus was then removed and the medium replaced by Opti-MEM supplemented with 8% FBS and other supplements as described above. Selection media with 200 µg/ml G418 (DUCHEFA, Amsterdam, Netherlands) was added after 72 hr. Cultures were maintained in selective media for two to three weeks. The G418-selected transformed cells were then grown and expanded further. One of the established cell lines was seeded onto 100 mm-diameter culture dishes at a density of 5×10 4 cells/dish. Every two days for 14 days, cells were detached by 0.05% trypsin/0.5 mM ethylenediaminetetraacetic acid (EDTA), and counted in a hemocytometer in triplicate. For doubling time determination, the formula Tc=0.3T / log (A/A0) was used where Tc=doubling time, T=initial time, A=the number of cells at time T of proliferation, and A0=the number of cells at an initial time point. Culture of IHCEn on lyophilized human amniotic membrane Human amniotic membrane from donated human placenta tested by hepatitis or HIV infection was pretreated with 0.025% trypsin-EDTA to remove amnion cells, followed by lyophilization and sterilization using EO gas. Lyophilized amniotic membranes (LAM) were installed on a Teflon ring, and 1×10 5 cells/support IHCEn were seeded onto the amnion side of a denuded LAM. Cells were maintained using opti-MEM for six days by changing the growth media every two days. RT-PCR Corneal endothelial origin of the primary and transformed cells was confirmed by reverse transcription-polymerase chain reaction (RT-PCR). Primary cells and established cell lines were cultured as above, and total RNA was extracted with Trizol reagent (Gibco BRL, NY, USA). The first strand of cDNA was synthesized from two µg of the total RNA in a 20 µl reaction mixture containing Superscript II RNase, H reverse transcriptase, and oligo (dT)18 primer. Target genes Table 2) were then amplified, PCR products were run on a preparative 2% agarose gel, and the bands were photographed. Immunohistochemistry and immunofluorescence An avidin biotin complex technique was selected for staining sectioned specimens (2-4 µm). Paraffin sections were deparaffinized in xylene, rehydrated through decreasing ethanol concentrations and quenched for endogenous peroxidase. Cryostat sections were placed on gelatinized slides, fixed in cold acetone and then rinsed in tris-buffered saline. 
Nonspecific background was eliminated by incubating tissue sections with non-immuno serum (Histostatin-plus Kits, Reagent A; Zymed Laboratories, CA, USA). Sections were then incubated with monoclonal mouse anti-human collagen IV (Santa Cruze Biotechnology Inc. CA, USA) and antihuman Na + /K + ATPase antibody (Santa Cruze Biotechnology Inc. CA, USA) overnight at 4℃, followed by extensive washing in 0.05 M tris-buffered saline (pH 7.6) before the addition of a biotinylated secondary antibody (reagent B). Sections were washed again and incubated for one hour with peroxidase-conjugated streptavidin (reagent C). The presence of peroxidase was revealed by adding a substrate-chromogen (3-amino-9-ethycarbazole) solution (reagent D) for immunohistochemistry, and an FITC conjugated rabbit anti-mouse IgG for immunofluorescence. Sections were counterstained with haematoxylin. All sections were photographed and the entire tissue area was examined. Characterization of IHCEn The morphological characteristics of isolated primary human corneal endothelial cells (PHCEn) and IHCEn were observed using an inverted microspcope. IHCEn were polygonal to slightly elongated in shape, similar to PHCEn, but were siginificantly different from dendritic corneal fibroblasts and small globular-shaped epithelial cells (Fig. 1). Stable expression of E6/E7 mRNA was observed in IHCEn, but not in PHCEn by RT-PCR analysis. Messenger RNA of the corneal epithelial cell marker, keratin 12, was not expressed in PHCEn and IHCEn (Fig. 2). Proliferative characteristics of IHCEn were elucidated by cell counting every two days for 14 days. The cell growth curve of IHCEn showed typical S-curves: minimal growth for the initial seven days, geometrical growth until day 14, and decreased cell numbers after 14 days. The estimated cell doubling time of IHCEn was 30.15±10.96 hrs (Fig. 3). IHCEn showed a more extended life span (over passage 30), and more rapid proliferation than PHCEn (data not shown). Transformed traits, except for the transducing of HPV 16 E6/E7 oncogenes, were examined by RT-PCR of several channel proteins including voltage-dependent annion channel 3 (CDAC3), sodium bicarbonate cotransporter member 4 (SLC4A4), chloride channel protein 3 (CLCN3), fibroblast growth factor (FGF)-1, type IV collagen (Col IV), and Na + /K + ATPase in PHCEn and IHCEn. mRNAs for all of the mentioned proteins were expressed in IHCEn, with similar expression patterns observed in PHCEn. Characterization of IHCEn cultivated on LAM In order to elucidate the efficiency of LAM for the human corneal endothelial niche, IHCEn were cultured on LAM and harvested for characterization. Messenger RNA expressions of several channel proteins and Na + /K + ATPase were observed in established IHCEn cultivated on LAM. Moreover, their expressions were stronger than in cells grown on a plastic culture dish (Fig. 5). Immunohitochemical staining of Col IV and immunofluorescence of Na + /K + ATPase in IHCEn cultivated on LAM also showed positive expressions results similar to RT-PCR (Fig. 6). Discussion Although the condition of the corneal endothelium is essential for corneal transparency, it is difficult to treat endothelial-damging corneal diseases, since the corneal cells have a very restricted regenerative capacity. 
In order to overcome these limitations, corneal endothelial cell transplantation had been attempted by several groups, but insurance of a sufficient number of endothelial cells and the development of appropriate endothelial substrates are still not guaranteed. Therefore, this experiment was conducted to establish immortalized human corneal endothelial cells (IHCEn) by stable introduction of HPV 16 E6/E7, and to investigate the biological characteristics of an established corneal endothelial cell line cultivated on lyophilized human amniotic membrane, an ideal coreneal endothelial substrate. IHCEn stably transfected by PA317 amphotropic packaging cell lines with pLXSN 16 E6/E7 showed stable expression of E6/E7 mRNA. There was no corneal epithelial cell contamination observed by RT-PCR analysis of keratin 12, a corneal epithelial cell marker (Fig. 2). IHCEn showed morphological similarity with PHCEn, a longer life span (over passage 30), and more rapid proliferation than PHCEn (data not shown). Moreover, IHCEn showed a typical S-shaped growth curve by cell counting. The estimated cell doubling time of IHCEn, 30.15±10.96 hrs (Fig. 3), was faster than that of a previously established rabbit corneal endothelial cell line in our laboratory (51.30±7.3 hrs). 26 Several immortalization techniques on human corneal endothelial cells by transfection of various oncogenes including SV40 large T-antigen or HPV 16 E6/E7 had been previously reported, 27,28 but there were no reports elucidating the typical cell growth properties such as growth curve or doubling time. Introduction of E6/E7 oncogenes facilitates the degradation of p53 and progression into the S phase synergistically. [29][30][31][32] Moreover, their stability was also confirmed by observing which gene was non-carcinogenic when injected into nude mice. 33 Therefore, we were able to successfully obtain abundant populations of highly proliferative and stably immortalized human corneal endothelial cells. Furthermore, it was also identified that established IHCEn maintained normal corneal endothelial functions similar to PHCEn by confirming VDAC3, CLCN3, CLC4A4, and Na + /K + ATPase mRNA expressions (Fig. 4). Corneal transparency is maintained by uniform structure, avascularity, and hydration of the stroma. The mechanism that maintains corneal hydration has an active component, HCO3-, which is transported from the stroma to the aqueous humor. A number of different hypotheses have been proposed to describe the molecular mechanisms that drive the net HCO3 flux. Common to these models is the recognition that the net HCO3 flux is coupled to and energized by a basolateral Na + /K + ATPase in the corneal endothelium. 34 The tendency to swell is due to the presence in the stromal matrix of non-diffusible, negatively charged molecules such as glycosaminoglycans. Ion transport across the endothelium is thought to involve the active transport of anions from the stroma towards the aqueous humor, followed by the mainly passive diffusion of cations. The osmotic effect of this transendothelial ion transport counters the excess osmotic potential of the stroma. 34 At hydration levels outside the normal range, the cornea loses its transparency. Thus the positive expressions of several channel proteins including VDAC3 and Na + /K + ATPase mRNA in IHCEn means that stably established IHCEn did not lose their normal endothelial functions, and can maintain them during ex vivo expansion. 
Essential factors for normal endothelial functions such as channel proteins or Na + /K + ATPase were positively expressed in cultivation not only on plastic culture dishes, but also on LAM. Furthermore, the expression patterns on LAM were stronger than that on plastic culture dishes. These results suggest that human amniotic membrane can act as the ideal endothelial niche for IHCEn and PHCEn. The amniotic membrane has a specific advantage that makes it a good substrate for endothelial cell growth. It is a biomaterial different from biochemical substrates such as gelatin membrane or coated hydrolens, it increases cell adherence by abundant extracellular matrix proteins including integrin and lamin, and is easily transplanted since its physical properties are similar to Descement's membrane. Furthermore, our results demonstrate that amniotic membrane acts as a good substrate for the maintenance of corneal endothelial functions. This study indicates that a sufficient immortalized cell line having characteristics similar to PHCEn is useful for in vitro studies of human corneal endothelial functions. Furthermore, it is expected that the culture of corneal endothelial cells on an amniotic membrane, an ideal corneal endothelial niche, is useful for the basic investigation of not only drug response or toxicological evaluation, but also for corneal endothelial cell transplantation or the development of reconstructed bioartificial cornea.
2018-04-03T02:42:55.142Z
2006-03-01T00:00:00.000
{ "year": 2006, "sha1": "dfa2354b543b0437fc38fbac4b2ea6673912f8ba", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc2908816?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "dfa2354b543b0437fc38fbac4b2ea6673912f8ba", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
6554845
pes2o/s2orc
v3-fos-license
Keratin 23, a novel DPC4/Smad4 target gene which binds 14-3-3ε Background Inactivating mutations of SMAD4 are frequent in metastatic colorectal carcinomas. In previous analyses, we were able to show that restoration of Smad4 expression in Smad4-deficient SW480 human colon carcinoma cells was adequate to suppress tumorigenicity and invasive potential, whereas in vitro cell growth was not affected. Using this cellular model system, we searched for new Smad4 targets comparing nuclear subproteomes derived from Smad4 re-expressing and Smad4 negative SW480 cells. Methods High resolution two-dimensional (2D) gel electrophoresis was applied to identify novel Smad4 targets in the nuclear subproteome of Smad4 re-expressing SW480 cells. The identified candidate protein Keratin 23 was further characterized by tandem affinity purification. Immunoprecipitation, subfractionation and immunolocalization studies in combination with RNAi were used to validate the Keratin 23-14-3-3ε interaction. Results We identified keratins 8 and 18, heat shock proteins 60 and 70, plectin 1, as well as 14-3-3ε and γ as novel proteins present in the KRT23-interacting complex. Co-immunoprecipitation and subfractionation analyses as well as immunolocalization studies in our Smad4-SW480 model cells provided further evidence that KRT23 associates with 14-3-3ε and that Smad4 dependent KRT23 up-regulation induces a shift of the 14-3-3ε protein from a nuclear to a cytoplasmic localization. Conclusion Based on our findings we propose a new regulatory circuitry involving Smad4 dependent up-regulation of KRT23 (directly or indirectly) which in turn modulates the interaction between KRT23 and 14-3-3ε leading to a cytoplasmic sequestration of 14-3-3ε. This cytoplasmic KRT23-14-3-3 interaction may alter the functional status of the well described 14-3-3 scaffold protein, known to regulate key cellular processes, such as signal transduction, cell cycle control, and apoptosis and may thus be a previously unappreciated facet of the Smad4 tumor suppressive circuitry. Smad4 and its homologs mediate signals from cytokines of the transforming growth factor-β (TGF-β) family from cell surface receptors to the nucleus where they regulate a diverse array of target genes involved in numerous biological functions including embryonic development, cell growth and differentiation, modulation of immune responses, and bone formation. Ligand induced TGF-β receptor stimulation leads to the formation of a hetero-tetrameric receptor complex of two identical heterodimers, which is comprised of a type I and a type II receptor family member, each. Upon receptor activation the receptor-regulated Smads (R-Smads) can transiently interact with the type I receptor. R-Smads are thereby C-terminally phosphorylated by the receptor kinase and, once phosphorylated, they form a hetero-oligomeric complex with the "common-mediator" Smad4. This complex translocates into the nucleus, where it regulates the transcription levels of target genes by interacting with other transcription factors and by recruiting transcriptional co-activators or co-repressors [3]. In addition to R-Smads there are also inhibitory Smads (I-Smads) and other signaling molecules that feed into the TGF-β-Smad signalling cascade such as ERK, JNK and PKC [4]. This rather complex mode of target gene regulation involving Smad4 explains why currently more than 1000 genes were described to be either directly or indirectly regulated by Smad4 [5]. 
Furthermore, it is obvious that the cellular context will play a crucial role in defining the subset of Smad4 target genes relevant in a particular cellular differentiation state. In the current work, we focused on colon carcinoma (SW480) cells to define potential Smad4 target genes involved in the neoplastic transformation process of this particular cell type. For the detailed investigation of Smad4's tumor suppressor functions, we stably reexpressed Smad4 via gene transfer in human Smad4deficient SW480 tumor cells [6]. We were able to show that re-expression of Smad4 in these colon carcinoma cells was not sufficient to restore TGF-β responsiveness. These cells have accumulated a number of other oncogenic alterations in addition to and presumably prior to Smad4 inactivation [6,7], likely explaining the TGF-β resistance of Smad4 re-expressing derivatives. However, the re-expression of Smad4 in SW480 cells was sufficient to suppress tumor growth in vivo [6] confirming that these cells provide an adequate model to investigate Smad4 tumor suppressor function. Here we focused on the study of the nuclear subproteome of Smad4 re-expressing SW480 cells in comparison to its Smad4 negative cells by establishing a subfractionation strategy coupled with the difference gel electrophoresis (DIGE) system and subsequent MALDI-MS-based peptide mass fingerprinting (PMF) to identify differentially expressed proteins. Of the proteins which were identified as highly upregulated in Smad4 re-expressing SW480 cells, we chose to follow-up on the KRT23 protein. Keratins are major structural proteins in epithelial cells. The keratin multigene family contains 50 individual members, which can be divided in two groups: (i) acidic forms and (ii) basic forms. The KRT9-23 belongs to the acidic group, whereas KRT1-8 are basic keratins [8]. Generally, one basic and one acidic keratin heterodimerize in order to form a functionally active intermediate filament. It has been postulated that the mechanical properties of these dimers are regulated by their specific keratin composition [9]. An association of differential expression patterns of keratins with tumor progression and the utility of measuring the keratin expression status for a differentiation between normal und tumor cells has been established [10][11][12][13]. Furthermore, it has been shown that in normal cellular senescence primary KRT8/18, i.e. keratins that are expressed first during tissue development, become partly substituted by secondary or later keratins (e.g. KRT20, KRT7) in a tissue dependent manner. This effect, however, is disrupted in transformed cells and thus expression profiling of keratins can be used for cancer diagnosis, i.e. the type of keratin detected allows the distinction between normal and cancerous tissue and for the definition of the type of carcinoma, even when the tumor is present as metastasis of unknown origin [14][15][16]. More recently, the classical structural role of keratins was extended to their involvement in cell signalling, stress response and apoptosis, mostly through their interaction with other proteins and/or their phosphorylation, glycosylation, transglutamination, caspase cleavage and ubiquitination state affecting keratin organization, distribution, turnover and function [17]. In colorectal microsatellite instable tumors the over-expression of KRT23 led to cell death. Due to this cellular response KRT23 was associated with a potential role as a tumor suppressor in this subset of colorectal cancers [18]. 
In this work we present data showing the interaction between 14-3-3 proteins and KRT23. The 14-3-3 protein family consists of 7 isoforms in mammalian cells (β, γ, ε, η, σ, τ, ξ) which form homo-and hetrerodimers with each other [19]. These dimers bind preferentially to the phosphorylated motifs RSXpSXP and RXXXpSXP present on most known 14-3-3 binding proteins [20]. The universal nature of this protein family has been shown using proteomic approaches [21][22][23][24]. The identified broad spectrum of 14-3-3 interacting partners illustrates the vast array of processes in which this protein family is involved. Based on the discovery of cytoplasmic sequestration of BAD a general sequestration model was postulated [25]. The core of this sequestration model is a nuclear export signal (NES)-like domain within the 14-3-3 molecules that bind to target proteins. Upon binding, NES sequences of the target proteins become exposed, which initiates translocation of the target protein from the nucleus into the cytoplasm thereby inhibiting the activity of the target molecule [26,27]. As numerous 14-3-3 interaction partners have been implicated in apoptosis and cell cycle regulation, it is not surprising that 14-3-3 proteins play a crucial role in carcinogenesis. Examples are the sequestration of CDC25s and p21 through 14-3-3ε [28][29][30][31]. Here, we report that re-expression of Smad4 in the SW480 cells strongly induces KRT23 expression, both at the transcript and protein level. A tandem affinity purification (TAP) assay with KRT23 as bait was performed to identify KRT23 interacting proteins. This assay revealed that 14-3-3ε is part of the KRT23-binding complex and that Smad4 re-expression and KRT23 up-regulation correlates with cytoplasmic translocalization of 14-3-3ε. In summary, our data suggest that Smad4dependent KRT23 expression is probably important for the cellular localization of 14-3-3ε. This, in turn, will likely influence cellular signaling through 14-3-3 binding partners involved -amongst others -in cell cycle, intracellular trafficking/targeting and signal transduction; all these processes are known to be altered in carcinogenesis. Methods Reagents and Cell culture -Monoclonal anti-tubulin (TUB2.1) and anti-FLAG (M2) antibodies were purchased from Sigma. Anti-lamin B (C-20) and anti-VSV-G (P5D4) antibodies were purchased from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA). Human embryonic kidney (HEK) 293T cells as well as Smad4 negative and re-expressing derivatives of the colon carcinoma cell line SW480 [32] were cultured in DMEM media supplemented with 10% FBS and antibiotics. DNA Constructs and Transient Transfection -cDNAs encoding the wild type FLAG-tagged KRT23 was amplified by PCR and ligated into the SalI-EcoRI site of the pMT2SM expression vector. The expression vector for VSV-G-tagged 14-3-3ε (pcDNA3-VSV-G-14-3-3ε) was kindly donated by H. Hermeking (LMU München, Germany). Cells were seeded onto 10 cm dishes 18 to 24 h prior to transfection and then transiently transfected at 60 to 70% confluence using FuGENE-6 transfection reagent (Roche Diagnostics). 48 h after transfection, cells were harvested for further assays. Immunoprecipitation, Cell Fractionation and Immunoblotting -Smad4 re-expressing and Smad4 negative SW480 cells were lysed in ice cold RIPA buffer to prepare whole cell lysates. Lysates were cleared by centrifugation. HEK 293T cells were transfected with Flag-KRT23 and/or VSV-G-14-3-3ε expression vectors. 
Transfected cells were lysed in RIPA buffer to prepare whole cell lysates. Immunoprecipitation was performed using ANTI-FLAG M2-affinity agarose, with constant agitation overnight at 4°C. After extensive washes, proteins bound to the beads were eluted with denaturing Laemmli buffer. To isolate nuclear and cytoplasmic fractions, cells were washed twice with cold PBS and resuspended in 500 μl of hypotonic buffer (20 mM Tris-HCl pH 7.4, 5 mM MgCl 2 , 1.5 mM KCl, 0.1% NP-40, 50 mM NaF, 2 mM sodium orthovanadate, and protease inhibitors (Complete, Roche)). Cells were allowed to swell on ice for 10 min and then passed several times through a 26 1/2 gauge syringe needle, followed by centrifugation at 800 × g. The supernatants were further centrifuged at 15,000 × g to remove insoluble pellets, and the resulting supernatants were collected as the cytoplasmic fractions. The pellets were resuspended in 100 μl of TKM buffer (20 mM Tris-acetate pH 7.4, 50 mM KCl, 5 mM MgCl 2 , containing protease and phosphatase inhibitors). After centrifugation at 15,000 × g for 10 min, the supernatants were collected like the nuclear fractions. Whole cell lysates from SW480 cells, immunoprecipitated proteins and proteins derived from cell fractionation were subjected to SDS-PAGE and transferred onto polyvinylidinedifluoride membranes (Millipore), respectively. The membranes were incubated with the indicated antibodies followed by the corresponding secondary antibodies. The membranes were then developed with the ECL Western Blotting Detection System (Pierce). 2D Gel Electrophoresis -Collected nuclear fractions were dissolved in DIGE buffer (30 mM Tris, pH 8.5, 2 M thiourea, 7 M urea, 4% CHAPS). For a minimal labeling of proteins with CyDyes (GE Healthcare) 50 μg protein was incubated with 400 pmol CyDye dissolved in anhydrous DMF p.a. After 30 min of incubation in the dark on ice 10 pmol lysine was added to stop the labeling reaction. Cy3 and Cy5 labels were used for Smad4 negative and Smad4 positive samples, respectively. As internal standard pooled lysates of nuclear fractions containing equal amounts of protein, each, from Smad4 re-expressing and negative cells were Cy2 labeled. After combining all three samples the mixture was applied on the IEF tube gel (20 cm × 0.7 mm). The IEF was performed with carrier ampholyte tube gels with an ampholyte mixture ranging from pH 2-11 for 8.05 kVh. After IEF tube gels were ejected, incubated in equilibration buffer (125 mM Tris pH 6.8, 40% (w/v) Glycerol, 3% (w/v) SDS, 65 mM DTT) for 10 min and subsequently applied on the second dimension gel (20 cm × 30 cm × 0.7 mm). The second dimension (SDS-PAGE) consisted of 15.2%T, 1.3%C polyacrylamide gels which were run in a Tris-Glycine buffer system. For protein identification the preparative gel format was: 20 cm × 1.5 mm for tube gels and 20 cm × 30 cm × 1.5 mm for SDS-PAGE. Image Analysis -CyDye labeled proteins were visualized by Typhoon 9400 laser scanner (GE Healthcare) according to the user manual. Image analysis was performed with the DIA software tool (GE Healthcare) for single gel comparison and the Biological Variance Analysis (BVA) software tool (GE Healthcare) in case of matching gel sets. Parameters for significant changes in the spot pattern were set as follows: changes in the spot volume had to be two-fold, p-value of Student's T-test had to be ≤ 0.05 and the spot had to be detected in at least five of six gel sets. 
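The spot-selection rule just described (at least a two-fold change in spot volume, Student's t-test p ≤ 0.05, and detection in at least five of six gel sets) is, in effect, a simple filter over the matched-spot table exported from the image-analysis software. A minimal sketch of that filter is given below; the record layout and the example values are hypothetical and only serve to make the three criteria explicit.

```python
# Hypothetical re-statement of the DIGE spot-selection criteria:
# >= 2-fold average volume change, t-test p <= 0.05, and detection
# in at least 5 of the 6 matched gel sets.
from dataclasses import dataclass

@dataclass
class Spot:
    spot_id: int
    avg_ratio: float      # Smad4-positive / Smad4-negative volume ratio
    p_value: float        # Student's t-test p-value across gel sets
    gels_detected: int    # number of gel sets (out of 6) containing the spot

def is_differential(spot: Spot,
                    min_fold: float = 2.0,
                    max_p: float = 0.05,
                    min_gels: int = 5) -> bool:
    """Return True if the spot passes all three significance criteria."""
    # fold the ratio so up- and down-regulated spots are treated symmetrically
    fold = spot.avg_ratio if spot.avg_ratio >= 1 else 1.0 / spot.avg_ratio
    return fold >= min_fold and spot.p_value <= max_p and spot.gels_detected >= min_gels

# Example with made-up spot records
spots = [Spot(10, 3.1, 0.004, 6), Spot(11, 2.8, 0.010, 5), Spot(99, 1.4, 0.300, 6)]
print([s.spot_id for s in spots if is_differential(s)])   # -> [10, 11]
```

Folding the ratio so that down-regulated spots (ratio below 1) are treated symmetrically reflects the fact that both up- and down-regulated spots were retained in the subsequent analysis.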
Tryptic in-gel Digestion and MALDI-MS Protein Identification -For protein identification, silver stained protein spots or bands of interest were cut from a preparative gel, in-gel digested with trypsin (Promega, Madison, WI) and extracted as described previously [33]. For MALDI-MS target preparation peptides were concentrated via ZipTips™, eluted on the MALDI-target with 1.2 μL matrix solution (α-cyano-4-hydroxy cinnamic acid in ACN and 0,1% (v/v) TFA (1:2)) and analyzed using the UltraflexTM (Bruker Daltoniks). PMF spectra were acquired in positive mode with 20 kV target voltage and pulsed ion extraction of 17.25 kV. The reflector and detector voltage was set to 21 kV and 1.7 kV, respectively. Peak detection was carried out using FlexAnalysis 1.2 (Bruker Daltonics) with a S/N threshold of 2.5. The monoisotopic peptide mass values were transferred to ProteinScape™ (Bruker Daltonics) for subsequent protein identification via automated database analysis against the human IPI (V3.27; 67528 entries) by Profound (2002.03.01). To confirm the results obtained a randomized database combined with a normal IPI human database was performed. Searches were carried out with a mass tolerance of 100 ppm. Furthermore, propionamide (C, +71 Da) was set as a fixed and oxidation (M, +16 Da) as a variable modification and for the cleavage enzyme (trypsin) one missed cleavage site was allowed. Internal re-calibration of the obtained data was performed using a calibrant list and contained mass values were subsequently excluded prior database search. A Z score of 1.65 was used as threshold for the protein identification. Protein Identification of TAP Purified Proteins -The Protein identification of TAP purified proteins were done by liquid chromatography (LC)-tandem MS (MS/ MS). LC-MS/MS measurements were done with tryptic peptide extracts in 5% (v/v) FA by using a LC Packings Ultimate capillary LC system coupled to 4000 Q Trap™ (Applied Biosystems). Ionspray voltage was set to 3.0 -3.2 kV in positive mode. EMS-scan was performed over a mass range from 400 to 1400 m/z. For MS/MS-scans the three highest signals were isolated and fragmentation was done with collision energy between 15-60 V according to m and z. MGF data were extracted from the raw data using Analyst 1.4.1 (Applied Biosystems) and Mascot (2.2.0) was used for database searches against the human IPI (V3.27; 67528 entries). To confirm the results, a randomized database combined with a normal IPI human database was performed. The mass accuracy was set to 1.5 Da for precursor ions and 0.5 Da for fragment ions. Further modification of cysteine by propionamide (C, +71 Da) was set as a fixed modification and oxidation of methionine (M, +16 Da) was set as a variable modification. One missed cleavage site for the tryptic digestion was allowed. Proteins were identified with three different peptides with an ion score of 32 and higher. RNA Isolation and Northern Blotting -Total RNA was prepared from Smad4 re-expressing and negative cells as described previously [6]. Briefly, 5 μg total RNA was applied on 1% formaldehyde-agarose gel. Subsequently the gel was blotted onto Hybond N membranes (Amersham) with SSC buffer (0.15 M Sodium citrate pH 7.0, 1.5 M NaCl) and filters were hybridized at 50°C overnight with radiolabeled probes using the following oligonucleotides 5'-cgcgtcgaccaccatggactacaaggacgacgat gacaagaactccggacacagcttcag-3' and 5'-acagaattcaacaggcggaaactttcattg-3. 
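Two acceptance rules govern the identifications described above: PMF peaks had to match theoretical peptide masses within 100 ppm, and LC-MS/MS identifications required at least three peptides with an ion score of 32 or higher. A minimal sketch of both checks is shown below; the masses and scores in the example calls are invented placeholders, not values from the actual searches.

```python
# Sketch of the two acceptance rules described above.

def ppm_error(observed_mz: float, theoretical_mz: float) -> float:
    """Relative mass error in parts per million, as used for the PMF search."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

def pmf_peak_matches(observed_mz: float, theoretical_mz: float,
                     tolerance_ppm: float = 100.0) -> bool:
    """PMF rule: peptide mass must match within the 100 ppm search tolerance."""
    return abs(ppm_error(observed_mz, theoretical_mz)) <= tolerance_ppm

def msms_protein_accepted(peptide_ion_scores: list,
                          min_peptides: int = 3,
                          min_score: float = 32.0) -> bool:
    """LC-MS/MS rule: at least three peptides with an ion score of 32 or higher."""
    return sum(score >= min_score for score in peptide_ion_scores) >= min_peptides

print(pmf_peak_matches(1479.85, 1479.79))          # ~40 ppm error -> True
print(msms_protein_accepted([45.1, 38.7, 33.0]))   # three peptides >= 32 -> True
print(msms_protein_accepted([45.1, 28.0]))         # -> False
```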
Production of Transgenic TAP-KRT23 SW480 Cells -The coding region of KRT23 was PCR-amplified from an image clone using the oligonucleotides 5'-caattgaactc cggacacagcttcag-3' and 5'-gtcgactcatgcgtgcttttggattt-3'. The resulting PCR-products were inserted into the MfeI, SalI site of retroviral vector pBabe-puro. The TAP affinity tag was amplified from pRAV-FLAG (kindly provided by X. Liu, University of Colorado, Boulder, CO, USA) with the oligonucleotides 5'-cgggatccatggcgcaacacgatgaagc-3' and 5'-gccaattgcttgtcatcgtcgtccttg-3'. Resulting PCR-fragments were digested with BamHI and MfeI and subsequently ligated into the pBabepuro vector containing the KRT23 sequence. The correct insert sequences of the newly generated pBabeTAP-KRT23 vector were confirmed by cycle sequencing analysis. Generation of the TAP-KRT23 expression cassette containing retrovirus was done in HEK 293T cells. Transfection of HEK 293T cells was done with FuGene (Roche) according to the manufacturer's instructions. For retroviral virus particle generation the packaging pHIT60 and the envelope pHITG plasmids were cotransfected together with the pBabeTAP-KRT23 retroviral vector. Viral infection of SW480 cells was done with filtered (0.45 μm pore size) retroviral particles released from 293T cells. Transduced SW480 cells were selected with 2 μg/mL puromycin (Sigma). ShRNA Down-Regulation of Keratin 23 in SW480 cells -Keratin 23 expression cells was silenced in SW480 cells by shRNAknock-down. The following oligonucleotide pairs KRT23-1506s: 5'-CCGGGCAC-GAAATCTGCTTTGGAAAGCTCGAGGTTTCCAA AGCAGATTT CGTGCTTTTTG-3', KRT23-1506as: 5'-AATTCAAAAAGCACGAAATCTGCTTTGGAA ACC TCGAGCTTTCCAAAGCAGATTTCGTGC-3' and KR T23-1010s: 5'-CCGGGCTCAG ATTATTCTTCTCAT TGCTCGAGGAATGAGAAGAATAATCTGAGCTT TTTG-3'; KRT23-1010as: 5'-AATTCAAAAAGCTCAG ATTATTCTTCTCATTCCTCGAGCAATGA GAAG AATAATCTGAGC-3' were annealed and subcloned into the EcoRI/AgeI site of the pLKO.1 puro vector (kindly provided by Sheila Stewart). Lentiviruses were made by transfecting packaging cells (HEK 293T) with a 3-plasmid system. DNA for transfections was prepared by mixing 12 μg pCMVΔRR8.2, 1 μg pHIT G and 12 μg pLKO.1 plasmid DNA with 62 μl of 2 M CaCl 2 in a final volume of 500 μl. Subsequently 500 μl of 2x HBS phosphate buffer was dropwise added to the mixture and incubated for 10 min at RT. The 1 ml transfection mixture was added to 50% confluent HEK 293T cell seeded the day before into a 10 cm well plate. Cells were incubated for 16 h (37°C and 10% CO 2 ), before the media was changed to remove remaining transfection reagent. Lentiviral supernatants were collected at 36 h post-transfection and for each infection 3 ml supernatant containing 4 μg/ml polybrene was immediately used to infect target cells seeded the day before in 6 well plates to reach 70% confluency on the day of infection. Cells were incubated for 24 h, and the media was changed to remove virus particles. To control infection rate a parallel infection under the identical conditions targeting the same cell line was prepared using a lentiviral GFP expression control vector (pRRLU6-CPPT-pSK-GFP, kindly provided by Sheila Stewart). 6 days after infection 2 μg/ml puromycin was added to the cell culture media. The knock-down efficiency was monitored by qRT-PCR using the SYBR Green Master Mix reagent (Applied Biosystems) and a 7700 Sequence Detector System (Applied Biosystems). Relative Expression changes are calculated relative to B2M (B2Ms: TGCTGTCTCCATGTTTGATGT ATCT and B2Mas: TCTCTGCTCCCCACCTCTAAGT). 
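Relative expression changes against the B2M reference gene of this kind are conventionally obtained with the comparative Ct (2^-ΔΔCt) method; the sketch below illustrates that arithmetic for estimating the KRT23 knock-down efficiency. All Ct values in the example are hypothetical and only show how the normalization works.

```python
# Comparative Ct (2^-ddCt) calculation of KRT23 expression relative to B2M.
# All Ct values below are hypothetical and only illustrate the arithmetic.

def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Fold expression of the target gene versus the control sample,
    normalized to the reference gene (here B2M)."""
    delta_ct_sample = ct_target - ct_ref              # normalize sample to B2M
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize control to B2M
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Control (e.g. GFP vector) vs. shKRT23 knock-down, made-up Ct values:
ctrl = relative_expression(24.0, 18.0, 24.0, 18.0)   # = 1.0 by definition
kd = relative_expression(27.5, 18.2, 24.0, 18.0)     # ~0.10 -> ~90% knock-down
print(f"control: {ctrl:.2f}, knock-down: {kd:.2f}, efficiency: {(1 - kd) * 100:.0f}%")
```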
The KRT23 expression was measured using the following oligonucleotide pairs: KRT23s: GAACTGGAGCGGCAGA ACA and KRT23as: TTGATTCTTCCCGTGTCCCTT. Results Differential protein profiling of nuclear proteins from Smad4 deficient and Smad4 re-expressing SW480 cells Smad4 re-expressing SW480 cells served as a model system to investigate the effects of the tumor suppressor Smad4 reconstitution on the nuclear protein composition of human colon carcinoma cells. Using 2D-DIGE analysis, we compared nuclear protein fractions derived from six independent replicates of Smad4 re-expressing and Smad4 negative SW480 cells, respectively ( Figure 1A-C). We generated 3 sample sets each, derived from Cy3 labeled Smad4 re-expressing cell lysates and Cy5 labeled Smad4 negative cell lysates, whereas the second sample set (again three lysates, each for Smad4 reexpressing and negative cells) was labeled vice versa to minimize the identification of false positive protein spots. On average about 2000 spots were detected per gel. In the subsequent image analysis, using the DeCyder software (GE Healthcare), a total of 17 spots were identified that show a reproducible differential expression pattern. We considered intensity differences of factor two and above as significant. These 17 differentially expressed protein spots covered the entire pI and molecular weight range of the 2D-gels ( Figure 1A). Of these 17 spots, 14 showed a higher and three a lower abundance in the nuclear fraction upon Smad4 re-expression. The subsequent protein identification by MALDI-MS revealed eight unique proteins ( Table 1). The sequence coverage of these proteins ranged from 12.6 -60.2% with Profound scores between 1.0 -2.5 (1.3 being significant). The following proteins were found to be induced upon Smad4 re-expression: tumor rejection antigen (gp96), heterogeneous nuclear ribonucleoprotein R, eukaryotic translation elongation factor 1 alpha 1, KRT23, KRT18 and Cyclophilin A. In contrast, the amount of KRT8 and RbAp46 was found to be reduced in the nuclear fraction. Keratin 23 up-regulation occurs at the transcription level in a Smad4 dependent manner As it is well known that the expression profiles of keratins change significantly during tumor progression and the consequences of these changes have dramatic effects on the morphology of the cells, we chose to study KRT23 further. About this particular keratin very little is known and our data clearly showed different expression levels in a Smad4-dependent context ( Figure 1A, Spot 10 and 11), i.e. a threefold up-regulation in Smad4 re-expressing cells ( Figure 1B, C). Subsequent Northern blot analysis of Smad4 re-expressing and negative SW480 cells confirmed the Smad4-dependent up-regulation already at the transcription level ( Figure 1D). Having identified and confirmed the proteome data of the novel Smad4 target protein KRT23, we sought to identify protein interaction partners of this uncommon and poorly characterized keratin as an initial step to gather functional information for KRT23. Generation of stably expressing TAP-Keratin 23 cell lines To analyze the interaction partners of KRT23, we chose to perform a tandem affinity purification assay [34]. The N-terminal tag used for this experiment consists of a Flag-tag followed by two TEV cleavage sites and two Protein Z domains. 
Due to the generally poor transfection efficiency of SW480 cells with standard plasmid transfection strategies, we opted for retroviral gene transfer to generate Smad4 negative and Smad4 reexpressing SW480 cells which stably express the transgenic TAP-KRT23 protein. This approach offered a rapid and reliable means to purify native protein complexes and to identify the proteins by mass spectrometry [34]. The successful expression of the TAP-KRT23 protein was monitored by Western blotting (data not shown). In addition we analyzed the distribution of the over-expressed TAP-KRT23 by confocal microscopy. The immunofluorescence data revealed that the overexpressed protein is exclusively localized in the cytoplasm and surrounding the nucleus. Furthermore we observed a characteristic filamentous structure as expected for a member of the keratin family (Figure 2A, B). Tandem Affinity Purification of Keratin 23 KRT23 interaction partners were isolated following the TAP-strategy. Therefore, KRT23 protein complexes from Smad4 re-expressing and Smad4 negative cells were first affinity purified using IgG-Sepharose prior to TEV cleavage, followed by a second affinity purification step using immobilized ANTI-FLAG M2-Agarose. The FLAG eluates from the second purification step were separated by SDS-PAGE and the proteins visualized by silver staining. Five bands were present in both samples ( Figure 2C, Bands 1 -4 and 6), whereas two protein bands were unique to the Smad4 re-expressing cell line ( Figure 2C, Bands 7 and 8). All bands were excised and in-gel digested with trypsin prior to analysis by nanoLC-MS/ MS (Table 2). We were able to verify that the common bands contain the same set of proteins including, Plectin isoform 11, HSP70, HSP60, KRT8, KRT18. These proteins correspond to typical keratin-associated proteins. Two protein bands of approximately 30 kD, which were only detected in the Smad4 re-expressing cells, were identified as 14-3-3ε and 14-3-3γ, respectively. Validation of Keratin 23-14-3-3ε-Interaction To further investigate the Smad4-dependent interaction between 14-3-3 and KRT23, we analyzed the endogenous expression of 14-3-3ε and 14-3-3γ in Smad4 re-expressing and Smad4 negative SW480 cells. The Western blot analysis showed that the expression of 14-3-3 proteins was reduced in Smad4 negative SW480 cells, compared to Smad4 re-expressing cells ( Figure 3A). We chose to focus on the 14-3-3ε isoform for further analysis, because 14-3-3γ is known to form heterodimers with any other 14-3-3 family member whereas 14-3-3ε is only associated with the 14-3-3γ isoform. Using tagged proteins (Flag-KRT23 and VSV-G-14-3-3ε) in an immunoprecipitation experiment we were able to confirm our finding from the TAPassay showing that KRT23 is an interaction partner of 14-3-3ε ( Figure 3B). Having confirmed that 14-3-3ε is a KRT23 interaction partner, and taking into account previous findings by others showing that keratins are important for nuclear redistribution of 14-3-3 proteins [35,36], we went on to analyze whether KRT23 expression is able to modulate the cellular distribution of 14-3-3ε or not. HEK 293T cells overexpressing VSV-G-14-3-3ε were fractionated into cytoplasmic and nuclear fraction and the protein signal for VSV-G-14-3-3 could be recovered in both fractions. 
The co-expression of Flag-KRT23 and VSV-G-14-3-3ε in turn led to a decline of the VSV-G-14-3-3ε signal in the nuclear fraction, thus experimentally supporting a model where KRT23 expression is indeed influencing the cellular distribution of 14-3-3ε (Figure 3C).

Figure 3 | Interaction of Keratin 23 with 14-3-3ε. A) Endogenous 14-3-3ε and 14-3-3γ expression levels of Smad4 re-expressing and Smad4 negative SW480 cells. 20 μg whole cell lysates derived from Smad4 re-expressing and Smad4 negative cells were subjected to SDS-PAGE. Flag-tagged KRT23 and VSV-G-tagged 14-3-3ε were transfected into HEK 293T cells as indicated. B) Confirmation of KRT23-14-3-3ε interaction: cell lysates were immunoprecipitated with anti-Flag antibody and blotted as indicated. C) KRT23 expression leads to cytoplasmic sequestration of 14-3-3ε: following fractionation into cytoplasmic and nuclear fractions, proteins were subjected to Western blot analysis with the indicated antibodies. Anti-lamin B and anti-β-tubulin were used as marker proteins for the purity of the fractions. C, cytoplasmic fraction; N, nuclear fraction. A representative blot from three independent experiments is shown.

Cellular distribution of 14-3-3ε in SW480 cells depends on Keratin 23 Having shown a link between KRT23 expression and 14-3-3ε localization, we further hypothesized that Smad4 re-expression in our SW480 cell model system (also shown to induce KRT23 expression) could have a similar effect on the endogenous cellular localization of 14-3-3ε. Indeed, confocal imaging revealed that in Smad4 re-expressing SW480 cells 14-3-3ε is more prominently localized in the cytoplasm, whereas in Smad4 negative cells (with a lower KRT23 expression level) 14-3-3ε showed a pronounced nuclear localization (Figure 4A). Next we wanted to test whether the Smad4-dependent nuclear exclusion of 14-3-3ε could be rescued by KRT23 knock-down experiments. KRT23 expression was monitored by quantitative RT-PCR, because to the best of our knowledge direct monitoring of expression at the protein level is currently not possible due to the lack of a specific antibody. KRT23 expression was successfully reduced by both shRNA vector constructs, albeit to different levels depending on the KRT23 sequence targeted by the two constructs (Figure 4B). In line with the reduced KRT23 expression following KRT23 knock-down, we observed a partial rescue of the nuclear 14-3-3ε localization, which became visible with the shRNA construct shKRT1010, the construct that also showed the best KRT23 knock-down efficiency (Figure 4A). Discussion Identification of potential Smad4 targets In this study we aimed to identify and characterize potential Smad4 target genes involved in colon carcinogenesis. In order to achieve this, we used our previously described model system of Smad4-deficient SW480 cells that re-express Smad4, thereby suppressing in vivo tumor formation and invasive potential of SW480 cells [6]. Using high resolution 2D gel electrophoresis, we analyzed nuclear protein fractions from Smad4 re-expressing and negative SW480 cells. The subsequent MALDI-MS analysis of differentially expressed protein spots revealed eight different proteins that can be grouped according to their function into i) gene regulatory proteins (hnRNPR; RbAp46; eEF1α), ii) stress proteins (gp96; cyclophilin A) and iii) keratins (KRT8, KRT18 and KRT23).
All of these proteins apart from gp96 and the keratins are considered nuclear proteins, whereas gp96 is an abundant protein located in the endoplasmic reticulum (ER). Gp96 is the ER-paralog of the cytosolic HSP90 with a role in housekeeping, i.e. maintenance of protein homeostasis in the secretory pathway [37]. The identification of an abundant ER protein is not surprising in this context because of the architecture of the nuclear envelope (NE): The NE consists of two membrane systems, the outer and inner nuclear membrane; the former being contiguous with the rough ER. Thus, we expected in our nuclear fractions protein contaminations derived from ER proteins as well as ribosomal proteins. The observed presence of keratins in the nuclear fractions can be explained as a cytoplasmic contamination, by direct interactions between the nuclear lamina and cytosolic keratins [38] or by association of keratins with the outer nuclear membrane protein nesprin-3 via the cytoskeletal linker protein plectin [39]. The fact that our TAP-experiment also identified plectin, suggests the latter. Smad4 modulates Keratin 23 expression Following the 2D-DIGE analyses which identified the altered expression of KRT23, we analyzed whether this increased expression is transcriptionally or post-transcriptionally regulated. Based on the Northern blot analysis showing a good correlation of KRT23 transcript and protein expression levels in SW480 cells ( Figure 1D) it appears likely that KRT23 is mainly transcriptionally regulated. A subsequent test of eight human pancreatic carcinoma cell lines in which Smad4 expression was reconstituted by a retroviral expression vector revealed in three of the eight cell lines a similar Smad4dependent KRT23 up-regulation (U. Herbrand, unpublished observation), supporting the notion that KRT23 expression levels can be modulated in two major gastrointestinal tumor types directly or indirectly through Smad4. A connection between keratin expression and TGF-β signaling was previously demonstrated in dominant-negative TGF-β type II receptor mice having elevated K8/K18 protein levels [40]. In line with this mouse model data we found that reconstitution of Smad4, a key downstream component of the TGF-β signaling pathway, lead to the down-regulation of KRT8. However, in our model we found that Smad4 reconstitution increased in KRT18 expression. In contrast to the increased KRT8/18 expression in mice in the absence of functioning TGFβ-signaling Zhang et al. showed in a human pancreatic carcinoma cell lines that sodium butyrate and trichostatin A treatment induces KRT23 expression at the mRNA level [41]. This effect could be inhibited by RNAi-mediated knock-down of p21 expression. Interestingly, Smad4 reconstitution in our SW480 model also led to a strong up-regulation of p21 (I. Schwarte-Waldhoff, unpublished observation). As p21 is a well described Smad4 target involved in the TGFβ-signaling pathway, these data would support a model, where Smad4 and p21 are upstream signaling components involved in the Smad4 dependent up-regulation of KRT23 described herein. Clearly, more work will be needed in order to elucidate the key proteins involved in our observed Smad4-dependend up-regulation of KRT23. Bühler et al. recently reported that expression of KRT18 caused an induction of adhesion proteins and a regression of the malignant phenotype in KRT18 overexpressing breast carcinoma cells [42]. 
Smad4 loss is a late event during tumor progression and correlates with the development of a metastatic tumor in colon carcinogenesis [2,43], fitting well into a model where Smad4 induced expression of specific keratin types in colon cells may help to maintain the cell to cell junctions through desmosomes and hemidesmosomes and thus supporting an epithelial phenotype. Our data hint towards a model were Smad4 dependent KRT18 and KRT23 up-regulation and KRT8 down-regulation mediates a tumor suppressor effect presumably by playing a role in supporting the induction of the epithelial phenotype observed upon Smad4 reconstitution, which was also accompanied by an up-regulation of the invasion suppressor E-cadherin [32]. Keratin 23-14-3-3ε interaction is Smad4 dependent Interestingly, Hesse et al. noted in a phylogenetic tree analysis that KRT23 is an outstanding member of the type I keratins localized on chromosome 17 [44]. These data together with the increasing evidence that intermediate filaments are not only important as structural proteins but are also involved in modulating and controlling cellular signaling processes and apoptosis mostly through interaction with keratin associated proteins (KAPs) prompted us to initiate a more detailed study of KRT23 using the TAP methodology. Five of the seven identified KRT23 associated proteins (PLEC1, HSP70, HSP60, KRT8 and KRT18) were found both in Smad4 re-expressing and negative cells. Both, the 14-3-3ε and γ proteins were only identified in the TAP eluate of Smad4 re-expressing cells. However, we also found that 14-3-3ε and γ protein expression levels are reduced in Smad4 negative cells. Therefore, we hypothesis that this reduced protein level was sufficient to prevent the detection of 14-3-3 proteins by silver staining in our TAP assay. Nevertheless we would also like to point out that our data neither exclude nor clearly support the alternative possibility that Smad4 is directly modulating by any yet unknown mechanism KRT23-14-3-3 interaction. All of these proteins or their homologs have previously been shown to interact with other keratins [17], indicating that our experimental conditions were appropriate to identify KAPs. Furthermore, they provide evidence that although KRT23 is more distant to other KRT family members it is likely to share a number of interaction properties described for other keratins. Liao et al. showed that KRT18 is able to bind 14-3-3η, ξ, and ε as well as HSP70. In their analyses KRT18-14-3-3 interaction was independent of HSP70 [45]. Ku et al. reported that keratin-14-3-3ζ interaction is able to modulate the cellular distribution of 14-3-3 proteins [36]. Similarly, it has been shown that keratin 17, is rapidly induced in wounded stratified epithelia and thus regulating cell growth through binding to the adaptor protein 14-3-3σ. Furthermore, phosphorylation of KRT17 was important for the redistribution of 14-3-3σ from the nucleus to the cytoplasm with concomitant stimulation of mTOR activity and cell growth [46]. These data prompted us to study the influence of Smad4 reconstitution and thus induction of KRT23 expression on the cellular distribution of 14-3-3ε in our model system. Keratin 23 modulates the distribution of 14-3-3 By Western blot analysis we showed that 14-3-3ε requires the presence of KRT23 for its cytoplasmic localization. In addition, laser confocal microscopy showed that the reexpression of the tumor suppressor Smad4 led both to an induction of KRT23 expression and cytoplasmic sequestration of 14-3-3ε. 
This cytoplasmic sequestration was partially released by KRT23 knock-down in Smad4 re-expressing cells. A close correlation between altered keratin expression and changes in 14-3-3 distribution was previously shown for hepatocytes [35]. In agreement with the hypothesis of Tzivion et al. that intermediate filament proteins alter signaling pathways through 14-3-3 sequestration, it seems plausible that Smad4-dependent up-regulation of KRT23 and/or KRT18 may have a similar effect on 14-3-3 localization [47]. Margolis et al. provided evidence for a pivotal role of keratins as potential drivers of mitotic entry of cells [48]. Due to the preferred binding of 14-3-3 to keratin in the cytoplasm (designated by Margolis et al. as "the 14-3-3 sink"), the cytoplasmic availability of keratins and their binding to 14-3-3 ensure that the 14-3-3 cargo proteins are released, enabling them to control downstream cellular processes. Furthermore, it has been shown that 14-3-3 is able to modulate the cytoplasmic localization of target proteins by directing them toward the CRM1-mediated nuclear export pathway [26,49,50]. Together with our results, these data are suggestive of a model in which the KRT23-14-3-3 interaction can mediate the relocalization of nuclear ligands by several mechanisms that ensure cytoplasmic sequestration of the bound 14-3-3 complex. Conclusions In summary, we provide evidence that Smad4 is able to induce, either directly or indirectly, KRT23 expression. Furthermore, we were able to identify several novel KRT23 interacting proteins, among them 14-3-3ε and γ. Finally, we found that KRT23 expression in Smad4 re-expressing cells is able to mediate the cytoplasmic sequestration of 14-3-3ε. These findings, together with the known signal transduction modulator function of 14-3-3 family members, suggest that the regulatory circuit we observed, in which Smad4-dependent KRT23 up-regulation modulates the cytoplasmic sequestration of 14-3-3, is a previously unknown facet of the tumor suppressive response elicited upon Smad4 re-expression in our colon cancer model. Therefore, it will be interesting to determine in future experiments which proteins have their cellular localization and activity modulated by the Smad4-dependent, KRT23-mediated sequestration of the 14-3-3 complex to the cytoplasm in our SW480 model system, and what their cellular functions in colon tumorigenesis are.
Recurrence of juvenile dermatomyositis 8 years after remission CDASI: Cutaneous Dermatomyositis Area and Severity Index JDM: juvenile dermatomyositis MTX: methotrexate mPSL: methylprednisolone PSL: prednisolone INTRODUCTION Juvenile dermatomyositis (JDM) is a chronic inflammatory disease characterized by typical skin lesions and muscle weakness, which occurs in children and adolescents younger than 16 years. JDM is classified into 3 clinical types according to the posttreatment course: (1) monocyclic, in which there is one episode with permanent remission within 2 years after diagnosis; (2) polycyclic, with multiple relapses within 2 years; and (3) continuous, with pathologic states persisting for more than 2 years. Early treatment with prednisolone is suggested to limit the disorder to the monocyclic course. Only 2 case reports in which monocyclic JDM recurred more than 3 years after remission have been described in the English-language literature. Of these 2 reported cases, 1 patient had no initial treatment and the other had oral prednisolone (PSL) alone. Recently a well-designed randomized, controlled trial found that aggressive therapeutic approaches, such as PSL plus methotrexate (MTX) after methylprednisolone (mPSL) pulse therapy, outperform PSL monotherapy after mPSL pulse therapy with respect to clinical remission, treatment failure, and discontinuation of PSL. Here we present a case of monocyclic JDM that recurred 8 years after remission despite initial treatment with PSL plus MTX after mPSL pulse therapy. INTRODUCTION Juvenile dermatomyositis (JDM) is a chronic inflammatory disease characterized by typical skin lesions and muscle weakness, which occurs in children and adolescents younger than 16 years. 1 JDM is classified into 3 clinical types according to the posttreatment course: (1) monocyclic, in which there is one episode with permanent remission within 2 years after diagnosis; (2) polycyclic, with multiple relapses within 2 years; and (3) continuous, with pathologic states persisting for more than 2 years. 2 Early treatment with prednisolone is suggested to limit the disorder to the monocyclic course. 3 Only 2 case reports in which monocyclic JDM recurred more than 3 years after remission have been described in the English-language literature. 4,5 Of these 2 reported cases, 1 patient had no initial treatment and the other had oral prednisolone (PSL) alone. 4,5 Recently a well-designed randomized, controlled trial found that aggressive therapeutic approaches, such as PSL plus methotrexate (MTX) after methylprednisolone (mPSL) pulse therapy, outperform PSL monotherapy after mPSL pulse therapy with respect to clinical remission, treatment failure, and discontinuation of PSL. 6 Here we present a case of monocyclic JDM that recurred 8 years after remission despite initial treatment with PSL plus MTX after mPSL pulse therapy. CASE REPORT A 4-year-old Japanese boy presented with eruptions on the face, ears, elbows, and knees and with muscular weakness. Physical examination found erythema on the cheeks and ears, keratotic papules and purplish erythema on the dorsa of the hands, and scaly erythema on the knees (Fig 1, A and B). This patient had no symptoms of dysphonia. Cutaneous Dermatomyositis Area and Severity Index (CDASI) was 8. The histopathology of the left knee showed vacuolar changes in the epidermis, deposition of mucin, pigment incontinence, and infiltration of lymphocytes in the papillary dermis (Fig 2, A). 
Biochemical examination found elevated levels of creatine kinase 425 IU/L (normal range, 12e170 IU/L) and aldolase 19.0 IU/L (2.7e7.5 IU/L). Antinuclear antibody and anti-Jo-1 antibody were negative. Magnetic resonance imaging (T2) found diffuse high-intensity areas in the proximal muscles of the extremities, which suggests edema caused by inflammation (Fig 2, B). Based on the clinical, histopathologic, and radiologic findings, the diagnosis of JDM was made. According to the recommended regimen at that time, 7 the patient was treated with 2 courses of mPSL pulse therapy (30 mg/kg/d for 3 consecutive days per course) followed by combination therapy with PSL (1 mg/kg/d) and MTX (0.4 mg/kg/wk), both of which were tapered out in 6 months. Both clinical and biochemical remission was achieved and persisted for 8 years, suggesting a monocyclic course. At 12 years of age, the patient presented to us with similar symptoms affecting the skin and proximal muscles but without preceding infectious episodes within the previous 3 months (Fig 1, C and D). The IgM class of antiparvovirus B19 antibodies was not detected. Computed tomography scans showed neither interstitial pneumonia nor visceral malignancy. The clinical, histopathologic, and radiologic findings were virtually identical to those observed 8 years before (Fig 2, C and D). These findings confirmed the diagnosis of JDM relapse. Both the skin condition and muscle strength improved with 2 courses of mPSL pulse therapy (1 g/d for 3 consecutive days per course) followed by PSL (0.78 mg/kg/ d) and MTX (0.20 mg/kg/wk). Serum levels of muscle-derived enzymes also returned to normal ranges. However, when the PSL dose was decreased to 0.29 mg/kg/d, elevation of muscle-derived enzymes and muscle weakness recurred, accompanied by pseudohypertrophy of the gastrocnemius muscles. Erythema on the cheeks and keratotic papules on the dorsal hands also reappeared. Although his muscle strength and serum levels of muscle-derived enzymes returned to normal levels after the addition of cyclosporine (0.20 mg/kg/d) and an increase of PSL dose (to 0.78 mg/kg/d), the pseudohypertrophy and the eruptions persisted. The change of cyclosporine to tacrolimus (0.04 mg/kg/d) and decrease of MTX (to 0.08 mg/kg/wk) maintained the normal levels of muscle-derived enzymes and muscle strength. There were no sequelae such as calcinosis, muscular contracture, or cutaneous or gastric ulcers during his course. This patient will continue monthly follow-up, with a gradual PSL dose reduction planned for a minimum of 2 years unless a relapse of JDM occurs. DISCUSSION There are no established methods for predicting the clinical course of JDM. JDM is usually treated with corticosteroid therapy alone or in combination with immunosuppressive agents such as MTX. 8 It is suggested that early and intensive corticosteroidbased therapy leads to a monocyclic course. 3 Although clinical remission was achieved by early intensive treatment with mPSL pulse therapy followed by oral PSL and weekly MTX in the initial episode of JDM in our case, the maintenance therapy was discontinued at 6 months to prevent adverse events associated with long-term corticosteroid use. Because the treatment for JDM is usually continued for at least 2 years, 6,8 the duration of the initial treatment seems short. However, premature cessation of treatment usually leads to early relapse of JDM. Thus, the short duration of treatment may not have been associated with the relapse 8 years after the initial onset in our patient. 
Although infections often trigger the onset or relapse of JDM, 9,10 there were no infectious episodes in our patient within 3 months before the relapse of JDM. Recently, 2 possible factors, dysphonia and a high CDASI 11 score (CDASI ≥ 20), have been associated with relapse in a population of dermatomyositis and JDM. 12 However, this patient did not have dysphonia, and CDASI was less than 20. The prognosis of late recurrent JDM is not fully understood. Of the 2 previously reported cases, one had been successfully treated with PSL monotherapy until the relapse, whereas the other showed spontaneous remission. 4,5 Although the initial episode of JDM was completely cured by short-term corticosteroid-based treatment, additional intensive immunosuppressive therapy with tacrolimus was required to control the prolonged skin lesions in the relapse. Thus, the late recurrence of monocyclic JDM could be intractable and require attention.

Fig 2. Vacuolar changes at the dermoepidermal junction of the epidermis, and deposition of mucin, pigment incontinence, and infiltration of lymphocytes in the papillary dermis are observed in the biopsy specimen of the left cheek at initial onset (4 years old) (A) and of the right knee at relapse (12 years old) (C). At the initial onset (T2) (B) (orange arrows) and at relapse (STIR) (D) (yellow arrows), magnetic resonance imaging shows high-intensity areas in the proximal muscles of the thighs, which suggests edema caused by inflammation. (C, Hematoxylin-eosin stain; original magnification: ×200.)
Sequential Co-immobilization of Enzymes in Metal-Organic Frameworks for Efficient Biocatalytic Conversion of Adsorbed CO2 to Formate The main challenges in multienzymatic cascade reactions for CO2 reduction are the low CO2 solubility in water, the adjustment of substrate channeling, and the regeneration of co-factor. In this study, metal-organic frameworks (MOFs) were prepared as adsorbents for the storage of CO2 and at the same time as solid supports for the sequential co-immobilization of multienzymes via a layer-by-layer self-assembly approach. Amine-functionalized MIL-101(Cr) was synthesized for the adsorption of CO2. Using amine-MIL-101(Cr) as the core, two HKUST-1 layers were then fabricated for the immobilization of three enzymes chosen for the reduction of CO2 to formate. Carbonic anhydrase was encapsulated in the inner HKUST-1 layer and hydrated the released CO2 to HCO3-. Bicarbonate ions then migrated directly to the outer HKUST-1 shell containing formate dehydrogenase and were converted to formate. Glutamate dehydrogenase on the outer MOF layer achieved the regeneration of co-factor. Compared with free enzymes in solution using the bubbled CO2 as substrate, the immobilized enzymes using stored CO2 as substrate exhibited 13.1-times higher of formate production due to the enhanced substrate concentration. The sequential immobilization of enzymes also facilitated the channeling of substrate and eventually enabled higher catalytic efficiency with a co-factor-based formate yield of 179.8%. The immobilized enzymes showed good operational stability and reusability with a cofactor cumulative formate yield of 1077.7% after 10 cycles of reusing. The reduction of CO 2 to methanol by enzymatic cascade reactions mainly involves three enzymes, formate dehydrogenase (FateDH), formaldehyde dehydrogenase (FaldDH), and alcohol dehydrogenase (ADH) (Obert and Dave, 1999;Wang X. et al., 2014;Ji et al., 2015;Kuk et al., 2017;Nabavi Zadeh et al., 2018;Zhang Z. et al., 2018). FateDH converts CO 2 to formic acid, which is subsequently reduced to formaldehyde catalysed by FaldDH. And formaldehyde is further converted to methanol by ADH at the final step. Although this enzymatic cascade reaction features high specificity, it has a relatively low yield with a methanol conversion of merely 43.8% reported by Dave et al. (Obert and Dave, 1999). The possible rate-limiting step is the first reaction in the sequence catalysed by FateDH since the reaction rate of formic acid oxidation is 30 times faster than CO 2 reduction (Rusching et al., 1976). One of the conceivable reasons is the low substrate concentration due to the limited solubility of CO 2 in water. As a result, the increase of CO 2 substrate concentration in the solution may accelerate the forward conversion of CO 2 to formic acid. This assumption was well-demonstrated by Zhang et al. who adopted ionic liquids with high CO 2 solubility to assist the multi-enzymatic conversion of CO 2 to methanol (Zhang Z. et al., 2018). The yield was increased to approximate 3.5-fold compared to the parallel control experiments. Metal-organic frameworks (MOFs) belong to the category of organic-inorganic hybrid porous materials built from the coordination between organic linkers and metal ions as nodes (James, 2003;Long and Yaghi, 2009;Tranchemontagne et al., 2009). Compared with conventional porous materials, MOFs possess the advantages of ultrahigh surface area and porosity, uniform and controllable pore sizes, structural diversity, as well as diverse chemistry. 
The superior properties of MOFs facilitate their wide applications in various research areas. In particular, MOFs are porous materials desired for the adsorption and storage of gases, such as CH 4 , H 2 , and CO 2 (Li et al., 2009Murray et al., 2009;Farha et al., 2010;Liu et al., 2012;Yang S. et al., 2012;Chaemchuen et al., 2013;He et al., 2014;Tian et al., 2017). In this respect, we envisioned that the transformation of CO 2 to formic acid catalysed by FateDH may also be speeded up if CO 2 is adsorbed in MOFs and used as substrate. On the other hand, MOFs are also ideal solid supports for the immobilization of enzymes as they can maintain the biological activity of enzymes even under denaturing conditions (Lykourinou et al., 2011;Chen et al., 2012Chen et al., , 2018Lyu et al., 2014;Gkaniatsou et al., 2017;Lian et al., 2017;Du et al., 2018;Liang et al., 2019). As a result, we intend to develop a MOF platform aiming at achieving the simultaneous storage of CO 2 and coimmobilization of multienzymes for enhanced cascade reduction of adsorbed CO 2 to formic acid. Amine-functionalized MOFs are considered as a promising candidate to enhance CO 2 capture capacity as the electronegative N atom has a strong affinity to the positive C atom of CO 2 . Tethering amine functionalities in MOFs can be realized by introducing the amine groups on unsaturated metal sites. Chromium(III) terephthalate MIL-101 has a three-dimensional framework consisting of two types of zeotypic mesopores connected by two microporous windows (Férey et al., 2005;Jhung et al., 2007). Except for its distinct merits such as large pore volume, high BET surface area, and excellent stability in water, MIL-101 also contains numerous potential open chromium sites (up to 3.0 mmol/g) (Hwang et al., 2008) with an unoccupied orbital that are expected to anchor amine functionalization via a strong binding interaction with the positive nitrogen atoms. It is also demonstrated that amine-functionalized MIL-101 has high CO 2 capture capacities Yan et al., 2013;Hu et al., 2014;Lin J.-L. et al., 2014;Cabello et al., 2015;Darunte et al., 2016;Huang et al., 2016;Emerson et al., 2018;Zhong et al., 2018;Liu et al., 2019). Thus, in our work, MIL-101(Cr) was fabricated and modified with a series of amines to achieve the efficient storage of CO 2 substrate. Three enzymes were chosen for the transformation of CO 2 to formic acid, carbonic anhydrase (CA), formate dehydrogenase (FateDH), and glutamate dehydrogenase (GDH). The introduction of CA is to accelerate the hydration of CO 2 . Moreover, CO 2 is a thermodynamically stable molecule with low reactivity, so the conversion of CO 2 to methanol requires energy which is supplied by co-factor nicotinamide adenine dinucleotide (NADH). GDH was involved into the biocatalysis integrated system to achieve the continuous regeneration of NADH co-factor. To obtain multienzyme systems with enhanced activity, three principles are considered, substrate channeling, kinetics matching, and spatial distribution (Garcia-Galan et al., 2011;Zhang et al., 2015;Walsh and Moore, 2019). The current challenge for the design of multienzyme conjugates remains in the development of efficient strategies realizing the accurate control of enzyme positioning and spatial organization (Fu et al., 2012;Schoffelen and van Hest, 2012;Lin J.-L. et al., 2014). 
To overcome this limitation, in this work we adopted a layer-by-layer self-assembly approach to achieve the sequential co-immobilization of multi-enzymes, using MOFs in a layered structure as the solid scaffold. As illustrated in Scheme 1, amine-functionalized MIL-101(Cr) was first prepared for the adsorption of CO2 as substrate. The amine functionalities in MIL-101(Cr) then chelated Cu2+ via the formation of a complex, followed by further coordination with 1,3,5-benzenetricarboxylic acid (H3BTC). These reactions provided a high density of Cu2+ and H3BTC on the MOF surface, which then functioned as nucleation sites for the direct formation of HKUST-1 (Hong Kong University of Science and Technology) layers. On the surface of H3BTC@Cu2+@MIL-101(Cr), the first HKUST-1 layer encapsulated with CA was fabricated using a co-precipitation method via the self-assembly of metal ions, organic linkers, and enzymes. Based on the first HKUST-1 layer, the second HKUST-1 shell immobilizing FateDH and GDH was constructed using the identical approach. In this respect, when CO2 was gradually released from MIL-101(Cr), it gained access to carbonic anhydrase and was hydrated to bicarbonate ion. The second HKUST-1 layer containing FateDH and GDH directly converted the bicarbonate ion to formic acid. The presence of GDH in the second MOF layer achieved the continuous regeneration of the NADH co-factor. We found that this sequential co-immobilization route significantly accelerated the cascade biocatalysis reaction rate. The increase of CO2 substrate concentration achieved by storage in MIL-101(Cr) also remarkably boosted the conversion yield.

SCHEME 1 | Schematic illustration of the preparation of HKUST-1@amine-MIL-101(Cr)-based multienzymes for the reduction of adsorbed CO2.

Instrumentation The transmission electron microscopy (TEM) images of MOFs were acquired using a JEOL 2100F transmission electron microscope (Hitachi, Ltd., Japan). Scanning electron microscopy (SEM) images of MOFs were taken with a JEOL JSM-6700F field emission scanning electron microscope (Hitachi High-Technologies, Tokyo, Japan). Elemental analysis of HKUST-1@amine-MIL-101(Cr) was carried out using an energy dispersive X-ray spectrometer Quantax 200 XF 5010 (Bruker, Germany). Powder X-ray diffraction patterns of MOFs were obtained on a D/max-UltimaIII (Rigaku Corporation, Japan). X-ray photoelectron spectroscopy (XPS) measurements of MIL-101(Cr) and amine-MIL-101(Cr) were performed with an EscaLab 250Xi (Thermo Fisher Scientific, America). Nitrogen adsorption/desorption isotherms and pore size distributions of MOF scaffolds were collected at 77 K using a V-Sorb2800P surface area and porosimetry analyzer (Gold APP Instruments Corporation, Beijing, China). High-pressure CO2 sorption measurements were carried out using an H-Sorb2600 high pressure and temperature gas sorption analyser (Gold APP Instruments Corporation, Beijing, China). The 13C NMR spectrum of the formic acid product was recorded on a 600 MHz Bruker AVANCE III (Bruker Corporation, Germany). A high-performance liquid chromatography (HPLC) 2030 system (Shimadzu, Kyoto, Japan) was applied to determine the concentrations of formate derivatives using a 5020-39001 WondaSil C18 column (15 × 4.6 cm i.d., 5 µm, GL Sciences) with UV detection at 280 nm. Synthesis of Amine-MIL-101(Cr) MIL-101(Cr) was first synthesized by well dispersing 3.2 g of Cr(NO3)3·9H2O, 1.3 g of trimesic acid, and 687 µL of HCl in 40 mL water. The mixture was reacted at 220 °C for 8 h.
After the reaction was completed, the unreacted crystalline acid was removed and the product was collected by centrifugation at 12000 rpm for 6 min. The MIL-101(Cr) product was washed with ethanol for three times, and then activated by keeping in 40 mL of 95% ethanol at 80 • C for 8 h. The final MIL-101(Cr) product was dried under vacuum at 160 • C for 8 h before further use. MIL-101(Cr) was then modified with different amines including hexamethylenediamine (HMD), cystamine, and branched polyethyleneimine. Dried MIL-101(Cr) with an amount of 0.2 g was first well-dispersed in anhydrous methanol, and the amine with a weight ratio of 1:1 or 1:2 was added to the solution. The mixture was reacted for 10 min. The collected amine-MIL-101(Cr) was washed with methanol for 3 times and then dried at 120 • C for 6 h before further use. H 3 BTC@Cu 2+ @MIL-101 nanoparticles were first prepared following the above synthetic approach. The first HKUST-1 layer encapsulated with CA was then synthesized by reacting 1 mL aqueous solution containing 50.1 mmol/L copper(II) acetate, 99.9 mmol/L H 3 BTC, and 5 mg CA at 25 • C for 2 h. After washing with water, the bioconjugates were mixed with 1 mL aqueous solution comprising 50.1 mmol/L copper(II) acetate, 99.9 mmol/L H 3 BTC, 3 mg FateDH, and 3 mg GDH. The mixture was reacted for another 2 h to generate the second HKUST-1 layer containing FateDH and GDH. The final product was washed thoroughly with water and collected by centrifugation. CO 2 Storage High pressure CO 2 adsorption experiments were performed at 298.15 K and at pressure of 0-30 bar for 24 h using the H-Sorb2006 high pressure and temperature gas adsorption analyser. Before the adsorption of CO 2 , the MOFs (500 mg) were dried in a sample tube under vacuum at a temperature of 120 • C overnight. Enzymatic Catalysis of CO 2 to Formic Acid For the enzymatic catalysis of stored CO 2 to formic acid using immobilized enzymes, the HKUST-1@amine-MIL-101(Cr) nanocomposites were dried using freeze-drying and used for the adsorption of CO 2 at 298.15 K and 5 bar for 24 h. A mixture solution containing 10 mM L-glutamate and 2 mg/mL NADH in 6 mL of 50 mM phosphate buffer saline solution was purged with nitrogen for 0.5 h to remove the dissolved air. And 30 mg of HKUST-1@amine-MIL-101(Cr)-based multienzymes with stored CO 2 was quickly added to the above solution. The cascade reaction was performed in a sealed flask at 25 • C for different times. For the enzymatic catalysis of bubbled CO 2 using immobilized enzymes, a mixture solution containing 10 mM L-glutamate and 2 mg/mL NADH in 6 mL of 50 mM phosphate buffer saline solution was purged with nitrogen for 0.5 h to remove the dissolved air, and then was bubbled with CO 2 for 1 h. And 30 mg of HKUST-1@amine-MIL-101(Cr)-based multienzymes were quickly added to the above solution. The reaction was performed in a sealed flask at 25 • C for 6 h. For the enzymatic catalysis of bubbled CO 2 using free enzymes, a mixture solution containing 10 mM L-glutamate and 2 mg/mL NADH in 6 mL of 50 mM phosphate buffer saline solution was purged with nitrogen for 0.5 h to remove the dissolved air, and then was bubbled with CO 2 for 1 h. And 5 mg CA, 3 mg FateDH, and 3 mg GDH were quickly added to the above solution. The reaction was performed in a sealed flask at 25 • C for 6 h. After reaction, the supernatant was collected by centrifugation. 
The formic acid product was derivatized by mixing 200 µL of the sample, 100 µL of 100 mM Na 2 HPO 4 , and 400 µL of 20 mg/mL pentafluorobenzyl bromide in acetone, and reacted at 60 • C for 1 h. The derivatized product was detected by HPLC. RESULTS AND DISCUSSION Amine-Functionalized MIL-101(Cr) for the Storage of CO 2 The transmission electron microscopy (TEM) and scanning electron microscopy (SEM) images in Figures 1a,b exhibited the octahedral morphology of MIL-101(Cr) nanocrystals with apparent corners and edges, which were in good agreement with literatures (Férey et al., 2005;Hwang et al., 2008). Aminefunctionalized MIL-101(Cr) was obtained by the modification of MIL-101(Cr) with a series of amines including HMD, cystamine, and branched PEI with different loadings (50% and 100%) [here denoted as amine-MIL-101(Cr)]. We found that the original morphology of MIL-101(Cr) was preserved after loading of amines confirming that the amine functionalization Frontiers in Bioengineering and Biotechnology | www.frontiersin.org step had little damage to the generic MOF (Figures 1c-f). Then we tested the gas adsorption performance of amine-MIL-101(Cr) for CO 2 . The CO 2 adsorption isotherm at 298 K was illustrated in Figure 4C, and the results of CO 2 sorption data at 5 bar and 298 K for four amine-MIL-101(Cr) were shown in Table 2. Apparently, amine-MIL-101(Cr) showed much higher adsorption capacity for CO 2 compared with parent MIL-101(Cr). At 5 bar and 273 K, the CO 2 adsorption capacity of PEI(100)-MIL-101(Cr) reached 8.25 mmol/g, which was 4.4fold higher than that observed in MIL-101(Cr). Similarly, the CO 2 adsorption capacities of HMD-MIL-101(Cr), cystamine-MIL-101(Cr), and PEI(50)-MIL-101(Cr) was 2.57, 3.11, and 4.48 mmol/g, respectively, which was 1.4∼2.4 fold higher than that of unmodified MIL-101(Cr). The enhancement of CO 2 storage capacity may be ascribed to the introduction of amine functionalities in the MOF pore environment, which donates electrons and improves the affinity of MOF materials toward CO 2 molecules via dipole-quadrupole interactions (Zheng et al., 2011). Clearly, high loading of branched PEI provided more amine functionalities in MIL-101(Cr) according to the XPS Figure 3, which facilitated the enhancement of CO 2 capture capacity. As a result, PEI(100)-MIL-101(Cr) exhibited the highest adsorption capacity for CO 2 . For comparison, we also tested the adsorption capacity of HKUST-1 for CO 2 , which was only 2.17 mmol/g at 5 bar and 298 K. Construction of Multienzymatic Cascade System Three enzymes including CA, FateDH, and GDH were immobilized in HKUST-1 using a layer-by-layer self-assembly approach. HKUST-1 was selected as the solid support for the immobilization of enzymes because of its good solvent tolerance and mild preparative conditions. To fully utilize the stored CO 2 as substrate, the multienzyme system was constructed on the surface of amine-MIL-101(Cr). The enzymes were coimmobilized in HKUST-1 with layered structure to achieve the channeling of substrate. As illustrated in Scheme 1, using amine-MIL-101(Cr) as the core, the first HKUST-1 layer encapsulated with CA was fabricated followed by the second HKUST-1 layer containing FateDH and GDH. In this case, the CO 2 substrate released from amine-MIL-101(Cr) first got access to CA and were hydrated to bicarbonate ions. The HCO − 3 intermediate then migrated directly to the FateDH enzyme and was converted to formic acid. 
GDH in the outer MOF shell was used to achieve the in situ regeneration of NADH co-factor for the continuous production of formic acid. The enzyme immobilization capacity was 267.4 mg/g for CA, and 669.6 mg/g for FateDH and GDH. It is worthy of note that the size of micropores of amine-MIL-101(Cr) does not match the large dimensions of enzymes. As a result, the immobilization of enzymes will not affect the CO 2 adsorption capacities of amine-MIL-101(Cr). Supplementary Figure 1, the formation of HKUST-1 on the surface of amine-MIL-101(Cr) turned the MOF aqueous solution from green to blue-green. Energydispersive X-ray spectroscopy (EDS) analysis revealed the appearance of 9.98, 2.53, 10.15, and 18.62 at% Cu in HKUST-1@HMD-MIL-101(Cr), HKUST-1@cystamine-MIL-101(Cr), HKUST-1@PEI(50)-MIL-101(Cr), and HKUST-1@PEI(100)-MIL-101(Cr), respectively, implying the formation of HKUST-1 layer. The XRD patterns of HKUST-1@amine-MIL-101(Cr) illustrated in Figure 2B revealed new peaks typical of HKUST-1 nanocrystals. Further characterizations with TEM (Figures 5a-d) and SEM (Figures 5e-h) also confirmed the successful generation of HKUST-1@amine-MIL-101(Cr) nanocomposites. As shown in We next evaluated the gas adsorption capacity of HKUST-1@amine-MIL-101(Cr) for CO 2 . As shown in Figure 4D, the HKUST-1@amine-MIL-101(Cr) had much lower storage capacity for CO 2 presumably as a result of the partial filling of the micropores. But this storage capacity for CO 2 is still superior than using bubbled CO 2 as its solubility in water is only of 33 mM (Zhang Z. et al., 2018). Formic acid production at different reaction times. (C) Production amount of HCOOH catalyzed by HKUST-1@PEI(100)-MIL-101(Cr) immobilized enzymes using adsorbed CO 2 as substrate, HKUST-1@PEI(100)-MIL-101(Cr) immobilized enzymes using bubbled CO 2 as substrate, and free enzymes using bubbled CO 2 as substrate. (D) 13 C NMR spectrum of formic acid produced from HKUST-1@PEI(100)-MIL-101(Cr) immobilized enzymes using adsorbed CO 2 as substrate. (E) Reusability of HKUST-1@PEI(100)-MIL-101(Cr) immobilized enzymes with respect to the number of reaction cycles in which the adsorbed CO 2 was used as substrate. Conversion of CO 2 to Formic Acid The newly constructed HKUST-1@amine-MIL-101(Cr)-based multienzymes containing CA, FateDH, and GDH were employed to reduce CO 2 to formic acid using the stored CO 2 as the starting substrate accompanied by NADH regeneration. The HCOOH synthesis reaction was carried out in batch mode containing 30 mg of MOF-based multienzymes and 2.2 mmol/L NADH in 6 mL reaction system. The preliminary reaction time was 2 h. The formic acid produced from the four multienzyme systems were calculated and compared in Figure 6A and Supplementary Table 1. Clearly, larger CO 2 adsorption capacity of MOFs corresponded to higher HCOOH production yield. To further increase the conversion yield of CO 2 , we also optimized the reaction time. The production amount of HCOOH from the adsorbed CO 2 catalysed in HKUST-1@PEI(100)-MIL-101(Cr) multienzyme system was depicted as a function of reaction time. As shown in Figure 6B and Supplementary Table 2, the highest HCOOH amount of 5.0 ± 0.22 mmol/L was obtained at a reaction time of 6 h which represented a conversion yield of 88.9%. Obviously, the stored CO 2 was not completely transformed to formic acid. One of the possible reason is the partial release of CO 2 because the whole enzymatic catalysis process is performed at 1 bar. 
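To make the conversion figure above concrete, the short back-calculation below ties the measured HCOOH titer to the CO2 inventory implied by the stated 88.9% yield. The per-gram value at the end is our inference for illustration only; the measured uptake of the composite itself is reported in Figure 4D.

```python
# Hedged back-calculation: how the 88.9% conversion relates the HCOOH titer
# to the CO2 stored in 30 mg of the MOF composite. Input numbers are from the
# text; the per-gram figure at the end is our inference, not a reported value.
c_hcooh_mM   = 5.0      # HCOOH produced at 6 h (mmol/L)
v_reaction_L = 0.006    # reaction volume (L)
conversion   = 0.889    # stated conversion yield of the stored CO2
m_mof_g      = 0.030    # mass of HKUST-1@PEI(100)-MIL-101(Cr) composite (g)

n_hcooh = c_hcooh_mM * v_reaction_L   # mmol of formic acid produced
n_co2   = n_hcooh / conversion        # mmol of CO2 implied to be available
print(f"HCOOH produced    : {n_hcooh:.3f} mmol")
print(f"Implied stored CO2: {n_co2:.3f} mmol "
      f"(~{n_co2 / m_mof_g:.2f} mmol per g of composite)")
```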
We also observed that the produced HCOOH amount decreased with the elongation of reaction time. This can be partly explained by the fact that the reaction rate of CO 2 to HCOOH catalysed by FateDH is much slower than its reverse reaction (HCOOH to CO 2 ) (Rusching et al., 1976;Zhang Z. et al., 2018). As we know, the production of 1 mol formic acid consumes 1 mol NADH. When the regeneration of NADH catalysed by GDH is not as effective as its consumption, the deficiency of NADH may cause the transformation of formic acid to CO 2 . For comparison, we also performed the enzymatic reactions catalyzed by immobilized enzymes and free enzymes using bubbled CO 2 as the substrate. As shown in Figure 6C, the production amount of HCOOH catalysed by free enzymes using bubbled CO 2 as substrate was only 0.38 ± 0.03 mmol/L. By using the immobilized enzymes to catalyse the bubbled CO 2 , the produced HCOOH increased to 3.52 ± 0.13 mmol/L. The conversion using bubbled CO 2 as substrate was also calculated based on the CO 2 solubility of 33 mM in water (Zhang Z. et al., 2018), which was only 10.67% for immobilized enzymes and 1.15% for free enzymes, far <100%. Clearly, the produced HCOOH catalysed by the immobilized multienzyme system using stored CO 2 as substrate was more than 13.1-times higher than that of the corresponding free enzyme systems. These results clearly demonstrated the superiority of our new strategy. The immobilization of enzymes in HKUST-1 layered structure is kinetically advantageous over free enzymes. The adsorbed CO 2 was gradually released from amine-MIL-101(Cr) and was directly converted to bicarbonate ions by CA which was encapsulated in the inner layer. The intermediate bicarbonate ions were then in situ consumed by FateDH immobilized on the outer MOF layer without diffusion through long distance. The porous structure of MOF allowed efficient diffusions of substrate and products. This synthetic route facilitated the channeling of substrate and eventually enabled higher rate of the cascade reaction. Moreover, the use of adsorbed CO 2 as substrate provided CA and FateDH with a high CO 2 concentration stored in a slow-releasing MOF system as required by CA and FateDH, which allowed much more production of formic acid. To further demonstrate that the formic acid was produced from catalysing the CO 2 adsorbed in MOFs instead of free CO 2 in the air. 13 CO 2 was stored in MOFs and used as substrate catalysed by HKUST-1@PEI(100)-MIL-101(Cr)-based multienzymes. The final product was analyzed by 13 C NMR. Figure 6D displayed the prominent peak of 13 C at 174.6 ppm which belonged to H 13 COOH. The surface morphology of HKUST-1@PEI(100)-MIL-101(Cr) immobilized enzymes was also characterized by SEM. As shown in Figure 5i, the immobilization of enzymes did not change the shape and morphology of MOF scaffolds. NADH Regeneration With Glutamate Dehydrogenase (GDH) NADH is the co-factor functioning as a terminal electron donor and hydrogen donor in the cascade enzymatic reaction. The production of 1 mol formic acid from CO 2 consumes 1 mol costly NADH generating NAD + . As the presence of NAD + suppresses the reduction of CO 2 to formic acid and accelerates its reverse oxidation reaction, the efficient regeneration of NADH is highly desirable. Enzymes such as glucose dehydrogenase (Obón et al., 1998;Marpani et al., 2017;Zhang Z. et al., 2018), xylose dehydrogenase (Marpani et al., 2017) and GDH (Ji et al., 2015) have been successfully used for the regeneration of NADH. 
In our work, GDH was adopted to attain the continuous conversion of NAD+ to NADH. We investigated the effects of NADH concentration on the overall reaction efficiency by varying the added NADH amount in the reaction solution at a final concentration between 0.5 and 2.8 mM while keeping the amount of immobilized enzymes constant. The NADH-based HCOOH yield (Y_HCOOH) was calculated according to the following equation:

Y_HCOOH (%) = C_HCOOH / C_NADH,initial × 100

where C_HCOOH is the HCOOH concentration (mM) at a reaction time of 6 h, and C_NADH,initial is the initial NADH concentration (mM). As shown in Table 3, the production of HCOOH rose to 5.04 mM when the NADH concentration increased from 0.5 to 2.8 mM, while Y_HCOOH decreased from 353.88% to 179.82%. This trend was similar to the work reported by Zhang et al., in which the sequential co-immobilization of five enzymes in a hollow nanofiber was achieved and used for the synthesis of methanol from CO2 (Ji et al., 2015). As reported by Pinelo et al. (Zhang Z. et al., 2018), the reaction rate for reducing NAD+ to NADH is much higher than the reverse oxidation reaction catalysed by FateDH. The same finding was also observed in our work: NADH was efficiently regenerated by GDH encapsulated in the outer MOF shell. The catalytic performance of our newly designed HKUST-1@PEI(100)-MIL-101(Cr) immobilized system compares well with values reported for other immobilized enzymes, as shown in Table 4.

Operational Stability and Reusability
The operational stability and reusability of the enzymes immobilized in HKUST-1@PEI(100)-MIL-101(Cr) were evaluated by testing the HCOOH production after repeated catalysis of adsorbed CO2 for 10 cycles. After each 6-h batch reaction, the enzyme-containing MOF scaffold was freeze-dried and used for the adsorption of CO2 at 5 bar and 298 K before the next batch of catalysis. As shown in Figure 6E, the NADH-based HCOOH yield (Y_HCOOH) was still 86% even after 10 cycles of reuse. A cumulative HCOOH yield of 1077.7% was obtained over the 10 reuse cycles, indicating the good operational stability and reusability of the immobilized enzymes. We also tested the chemical tolerance of the MOF scaffold. The HKUST-1@PEI(100)-MIL-101(Cr) nanocomposite recovered after 10 cycles of reuse was subjected to SEM measurement. As shown in Figure 5j, there was no change in the morphology of the MOF support, indicating its high chemical stability. The gas storage capacity of the HKUST-1@PEI(100)-MIL-101(Cr) nanocomposite after 10 cycles of reuse was also evaluated. As illustrated in Figure 4D, the repeated reaction did not lead to any decrease in the CO2 uptake capacity, confirming the reusability of the MOF as an adsorbent for the storage of CO2.

CONCLUSIONS
We have developed a new MOF scaffold that functions as an adsorbent for the storage of CO2 as well as a solid support for the sequential co-immobilization of multienzymes via a layer-by-layer self-assembly approach. This new strategy used the adsorbed CO2 as substrate, facilitated the channeling of substrate, and eventually enabled high catalytic efficiency with continuous regeneration of the NADH co-factor. Improved operational stability and reusability were also observed for the immobilized enzymes, implying the great potential of our new strategy for the biotransformation of CO2 in industrial applications.

DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/Supplementary Material.
In Silico Screening and In Vivo Evaluation of Potential CACNA2D1 Antagonists as Intraocular Pressure-Reducing Agents in Glaucoma Therapy Glaucoma is a leading cause of permanent vision loss and current drugs do not halt disease progression. Thus, new therapies targeting different drug targets with novel mechanisms of action are urgently needed. Previously, we identified CACNA2D1 as a novel modulator of intraocular pressure (IOP) and demonstrated that a topically applied CACNA2D1 antagonist—pregabalin (PRG)—lowered IOP in a dose-dependent manner. To further validate this novel IOP modulator as a drug target for IOP-lowering pharmaceutics, a homology model of CACNA2D1 was built and docked against the NCI library, which is one of the world’s largest and most diverse compound libraries of natural products. Acivicin and zoledronic acid were identified using this method and together with PRG were tested for their plausible IOP-lowering effect on Dutch belted rabbits. Although they have inferior potency to PRG, both of the other compounds lower IOP, which in turn validates CACNA2D1 as a valuable drug target in treating glaucoma. Introduction Glaucoma is a group of eye diseases that can slowly and asymptomatically steal human sight. Most glaucomatous patients do not recognize that they may have glaucoma until they suffer significant vision loss. Vision field loss associated with glaucoma occurs due to optic nerve damage that results mainly from the persistent pressure exerted by elevated IOP, especially in case of primary open angle glaucoma (POAG). POAG accounts for more than 90% of glaucoma cases all over the world and is considered to be one of the leading causes of irreversible blindness, especially in elderly people [1]. Because of this, decreasing IOP is considered as the first-line therapeutic solution in the management of glaucoma. Several commercial IOP-lowering products are available in the drug market. Unfortunately, none of these therapies cure POAG and most of them are associated with local and systemic side effects [2][3][4]. Furthermore, some of them suffer from diminishment of the pharmacological effect upon repeated applications (i.e., tachyphylaxis), which is considered as a fatal defect, especially in case of medication used in treatment of chronic disease such as POAG [5]. In addition, most of these IOP-lowering medications have a short duration of action that require several daily doses, which could exaggerate the severity of their side effects. Even the newly FDA-approved medications that are intended to be used once daily are associated with severe local and systemic side effects that result in poor patient compliance and acceptance. Examples of the side effects of recently approved products are Vyzulta (latanoprostene bunod, 0.024%), which is associated with a permanent pigmentation of the eyelids, lashes and iris [6]; and Rhopressa (netasudil, 0.02%), which causes conjunctival hyperemia and hemorrhage, eye pain upon instillation and cornea verticillate [7]. In addition to the previously mentioned side effects, the chronic use of topically applied anti-glaucoma medication may result in a disturbance of the tear film that impacts the health of the patient's ocular surface and results in a glaucoma-related ocular surface disease and dry eye condition [8]. Therefore, identification of safe, effective and long-acting drug molecules that can selectively interact with a specific target site inside the eye and have the ability to control the IOP is still an urgent medical need. 
In our recent publications, we demonstrated the localization of a subunit of the L-type calcium channel-CACNA2D1-in the ciliary body and trabecular meshwork, which are the tissues responsible for production and drainage of the aqueous humor, respectively, of human, rabbit and mouse eyes. We reported that pregabalin (PRG) can target this protein and reduce IOP by decreasing the production and/or increasing the drainage of aqueous humor [9,10]. In drug discovery, molecular docking is a commonly used method to predict the binding mode and affinity of a ligand to a protein with a known structure. Typically, the ligand will be tried to dock against a specific region of the protein. The best binding pose will then be determined, and a docking score will be calculated typically based on the energy reduction of the two entities before and after the docking. In this situation, a lower docking score indicates a higher energy reduction and thus a tighter binding between the ligand and the protein [11]. When the structure of the protein is unknown, which is the case in our study, to perform molecular docking, the structure of the protein will need to first be predicted. Homology modeling is considered the most accurate among the computational structure prediction methods for this purpose [12]. In this method, a template of that protein, which is the sequence of another protein with a known structure, must first be identified and selected. The higher identity between the sequence of those two proteins usually means that a more reliable homology model will be constructed. The two sequences will then be aligned and corrected, and a homology model will then be built based on the structure of the template protein and experimental data. Post-modification of the model is sometimes needed for better results. In the current study, to further validate our target protein-CACNA2D1-as a potential target for the treatment of glaucoma, we used the docking method to identify several compounds that are structurally similar to PRG with a good affinity to the target protein. These compounds were also tested in vivo to validate their therapeutic effect. Homology Model of CACNA2D1 and In Silico Screening To screen for the hits of CACNA2D1, a homology model was built and five possible binding sites were identified ( Figure 1). PRG was then used to dock against the five sites to understand its binding mode. The binding affinities of PRG differs dramatically among the five binding sites. The docking scores were −3.585 kcal/mol, −2.909 kcal/mol, −10.016 kcal/mol, −5.616 kcal/mol and −4.245 kcal/mol for Sites 1 to 5, respectively. Although compounds from the NCI library were docked against all five sites, because the docking score of PRG to Site 3 is superior to other sites, the binding affinity to Site 3 became our primary consideration. Acivicin, identified from the compound library, had a docking score of −12.222 kcal/mol, which was the highest of the compounds we evaluated. Zoledronic acid, although it had a much lower docking score of −6.867 kcal/mol, was considered as another hit, because its binding pose aligns well with that of PRG ( Figure 2). The binding poses of all the above-mentioned compounds are illustrated in Figure 3. As previously reported, PRG is a promising IOP-lowering medication that targets CACNA2D1 [9,10]. We predicted that acivicin and zoledronic acid, both of which have hypothetically similar binding modes as PRG, would possess a similar IOP-lowering activity. 
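Before moving to the in vivo validation, the site-selection and hit-ranking logic described above can be condensed into a short sketch. The docking scores are those quoted in the text; the triage logic itself is our illustrative reconstruction, not the actual Maestro/Glide workflow used in the study.

```python
# Illustrative reconstruction of the hit-triage logic described above.
# Docking scores (kcal/mol) are quoted from the text; more negative = tighter binding.
prg_site_scores = {1: -3.585, 2: -2.909, 3: -10.016, 4: -5.616, 5: -4.245}

# Pick the binding site where pregabalin (PRG) docks most favorably.
best_site = min(prg_site_scores, key=prg_site_scores.get)
print(f"Primary site for screening: Site {best_site} "
      f"({prg_site_scores[best_site]:.3f} kcal/mol)")

# Rank candidate hits at that site by docking score. Pose similarity to PRG,
# which promoted zoledronic acid despite its weaker score, is judged separately.
hits_site3 = {"pregabalin (reference)": -10.016,
              "acivicin": -12.222,
              "zoledronic acid": -6.867}
for name, score in sorted(hits_site3.items(), key=lambda kv: kv[1]):
    print(f"{name:25s} {score:8.3f} kcal/mol")
```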
Thus, to validate our hypothesis and evaluate our model, acivicin and zoledronic acid were both tested in vivo for their efficacy and safety compared to PRG. Acivicin is a glutamine analogue antibiotic produced as a fermentation product of Streptomyces sviceus, which has antitumor activity [13]. Zoledronic acid is an FDA-approved medication for the treatment of osteoporosis or as an adjunct medication in cancer chemotherapy, given as an intravenous infusion (Reclast®, Aclasta® and Zometa®) [14].

Preparation of Viscous Eye Drops Containing 0.6% w/v of Different Drug Molecules
Because the drug molecules we identified are soluble in an aqueous vehicle, they can rapidly drain from the eye surface upon topical application. For this reason, 0.2% w/v Carbopol 981 viscous eye drops were selected as the vehicle for all the tested molecules, including PRG. Carbopol 981 is characterized by its reasonable viscosity, biocompatibility and bioadhesiveness [15,16]. Carbopol 981 is considered one of the safest cross-linked polyacrylic acid derivatives due to the absence of benzene solvent residues [17]. Bioadhesion is a very important parameter for a topically applied ophthalmic formulation, allowing it to adhere to the eye surface (cornea and conjunctiva) and preventing its rapid drainage, either outside the eye or through the nasolacrimal duct. Moreover, the viscosity of Carbopol 981 prevents the immediate washout of the formulation from the eye surface, thus allowing enough time for the bioadhesion reaction to occur [10,18].

pH Measurement of Different Eye Drops
The measured pH of the tested viscous eye drops (Table 1) ranged between 4.9 and 5.5, which is within the pH range that can be easily tolerated by the natural eye buffering system without causing any discomfort [19].

In Vivo IOP-Lowering Efficacy Evaluation of Different Eye Drops after a Single Dose Application
The ability of these molecules to lower the IOP was tested on Dutch belted rabbits and compared to the IOP-lowering effect of PRG in the same animal model. The IOP-lowering results of all the tested molecules are plotted in Figure 4, and the pharmacodynamic parameters are listed in Table 1. The IOP-lowering efficacy of the different molecules can be ranked as follows: PRG > zoledronic acid > acivicin (Table 1 and Figure 4). Although acivicin and zoledronic acid demonstrate inferior potency compared to PRG, their similar IOP-lowering activity indicates a shared mode of action with PRG (Figure 4).
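The pharmacodynamic parameters summarized in Table 1 (maximum percent IOP reduction, Tmax, Tend and AUC, as defined in the Methods) can be derived from an IOP-versus-time series; the minimal sketch below illustrates the calculations. The hourly IOP values are hypothetical placeholders, not measured data, and the study itself used GraphPad Prism for these analyses.

```python
import numpy as np

# Sketch of the Table 1 pharmacodynamic metrics from an IOP-time series.
t_h = np.array([0, 1, 2, 3, 4, 5, 6])                          # hours after dosing
iop = np.array([20.0, 17.5, 15.8, 16.4, 18.0, 19.2, 20.0])     # mmHg (hypothetical)

baseline = iop[0]
pct_reduction = (baseline - iop) / baseline * 100   # % IOP reduction at each time

max_reduction = pct_reduction.max()                 # maximum % IOP reduction
t_max = t_h[pct_reduction.argmax()]                 # time of maximum effect
# Last time point still showing any reduction, +1 h ~ return to baseline (Tend)
t_end = t_h[np.nonzero(pct_reduction > 0)[0][-1]] + 1
# Trapezoidal rule for the area under the % IOP reduction-versus-time curve
auc = float(((pct_reduction[1:] + pct_reduction[:-1]) / 2 * np.diff(t_h)).sum())

print(f"Max %IOP reduction: {max_reduction:.1f}% at Tmax = {t_max} h")
print(f"Approximate Tend  : {t_end} h, AUC = {auc:.1f} %*h")
```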
In Vivo Safety Evaluation of Different Eye Drops after a Single Dose Application
Regarding the safety of these new molecules as potential IOP-lowering therapies, neither PRG nor acivicin showed any signs of irritation, toxicity or allergic reactions (Figure 5). In contrast, zoledronic acid caused severe ocular toxicity reactions that started 2 days after topical application of the zoledronic acid eye drops. None of the rabbits receiving zoledronic acid eye drops showed any problems during the first 24 h after application. During the second day after application, all rabbits started to show toxic side effects in the treated eyes, such as redness, tearing and thick ocular discharge. As suggested by the veterinary doctor at our university, once the toxicity signs appeared, treatment was started with isotonic boric acid eyewash and erythromycin antibiotic eye ointment twice a day. Over time, the severity of the side effects worsened and new toxicity signs appeared. The peak of the side effects occurred two weeks after the application, with all rabbits' eyes developing corneal swelling and the appearance of a white membrane that covered the entire eye surface (cornea and conjunctiva). Subsequently, the inflammation began to decrease. Unfortunately, a new permanent inflammatory condition (corneal vascularization) appeared in all eyes dosed with zoledronic acid. After two months of treatment with isotonic boric acid eyewash and erythromycin eye ointment, all toxicity signs disappeared except for corneal vascularization (Figure 5). Although Nourinia et al. have reported the safety of zoledronic acid after intravitreal injection into the eyes of pigmented rats [20], our in vivo safety study confirmed the ocular toxicity of the zoledronic acid molecules. There are two case reports in agreement with our finding, which demonstrated the occurrence of severe eye inflammation following intravenous infusion of a zoledronic acid solution [21,22].

Animals
Dutch belted rabbits, mixed males and females, aged 5-7 months, weighing 1.5−2.5 kg, purchased from Covance Inc. (Princeton, NJ, USA), were used to test the IOP-lowering effects of the tested molecules. All animals were examined before the study and appeared free of any clinically observable abnormalities. All rabbit eyes were healthy with no injury or history of injury. The IOP difference between the two eyes of the same rabbit did not exceed 1 mmHg. Throughout the study, rabbits had free and continuous access to food and water.
All procedures including rabbits were previously approved by the Animal Care and Use review board of the University of Tennessee Health Science Center (UTHSC), Memphis, TN. The protocol also followed the Association of Research in Vision and Ophthalmology (ARVO) Statement for the Use of Animals in Ophthalmic and Vision Research and the guidelines for laboratory animal experiments (Institute of Laboratory Animal Resources, Public Health Service Policy on Humane Care and Use of Laboratory Animals). Homology Model of CACNA2D1 Because no CACNA2D1 crystal structure has been published to date, we built a homology model and used it for the docking studies based on the procedure described as follows: The protein sequence of CACNA2D1 was obtained from the UniProt database and was subsequently used for the online BLAST search. The voltage-gated calcium channel Ca(v)1.1 (PDB: 5GJV [23]) with 91% sequence identity was identified and used as the template to build the homology model using Prime. The non-template loops with less than 7 residues were subsequently refined using the VSGB solvation model and OPLS3 force field. The model was further prepared by using the protein preparation wizard in Maestro during which the ionization state at pH 7.0 ± 2.0 was generated using the Epik module, which was then used without further modification. Virtual Screening The binding sites of the abovementioned homology model of CACNA2D1 were identified using SiteMap and the top five sites were used for docking studies. The structure of PRG was first prepared by Ligprep with the OPLS3e force field and was subsequently used to dock against the five sites. Similarly, ligands from National Cancer Institute (NCI) database were also prepared and docked against the same 5 sites. Preparation of Viscous Eye Drops Containing 0.6% w/v of Different Drug Molecules Because all the used drug molecules are soluble in aqueous media, they could be easily prepared in the form of aqueous solutions. Unfortunately, these aqueous solutions could be rapidly drained from the eye surface before exerting any pharmacological response. For this reason, we incorporated these molecules in bioadhesive viscous Carbopol 981 eye drops. Carbopol 981 in a concentration of 0.4% w/v was soaked in Milli-Q water overnight until full swelling. PRG and acivicin were dissolved separately in Milli-Q water, while zoledronic acid was dissolved in 1% w/v triethanolamine solution, at a concentration of 1.2% w/v. Equal volumes of 0.4% w/v Carbopol 981 gel and drug aqueous solutions were mixed together using a vortex mixer to produce the final bioadhesive viscous eye drops, which were kept protected from light in closed air-tight containers at 5 • C until the time of use. After mixing, the final products were composed of 0.6% w/v of each drug in 0.2% w/v Carbopol 981 gel. To ensure sterility of the final products, all the used tools and water were sterile and all procedures were performed under aseptic conditions. pH Measurement of Different Eye Drops The pH of the prepared eye drops was measured according to our previously published protocol [10]. A gram of each formulation was diluted in 20 mL Milli-Q water and mixed well using a magnetic stirrer and subjected to pH measurement by a pH meter (Corning pH meter 440; Corning Inc., Corning, NY, USA). The measurement was done in triplicate and the results presented as the mean ± SEM. 1. 
Efficacy evaluation The IOP-lowering effect of the different eye drops containing different drug molecules was evaluated using Dutch belted rabbits (n = 3) according to our previously published protocols [10]. Each rabbit was given 100 µL of each eye drops into the inferior conjunctival sac of one eye, while the other eye was given 100 µL of the blank formulation (i.e., 0.2% w/v Carbopol gel vehicle) and used as a control. A Tono-pen AVIA (Reichert Technologies, Depew, NY, USA) was used to measure the IOP. The baseline IOP was measured immediately before application of the eye drops and then was taken hourly until it returned back to its baseline value. At each time point, three readings were taken and averaged for each eye. The IOP measurement was repeated until we measured a fixed value at each reading. The relative efficacy of each drug molecule was evaluated by comparing the calculated pharmacodynamic parameters of the three eye drops containing the different drug molecule. The calculated pharmacodynamic parameters include maximum percent IOP reduction; the time required to reach the maximum decrease in percent IOP (T max ); the time required for IOP to return back to its baseline (i.e., end of drug effect; T end ); and the total area under the percent IOP reduction-versus-time curve (AUC). All pharmacodynamic calculations were carried out using GraphPad Prism-9 software (GraphPad Software Inc., San Diego, CA, USA). All results were expressed as the mean ± SEM. 2. Safety evaluation The biosafety of the tested drug molecules after acute ocular exposure was tested on the eyes of Dutch belted rabbits (n = 3) according to our previously published protocols [18]. Rabbit eyes were visually examined for any problems or abnormalities before the application of the eye drops. Each rabbit was placed in rabbit restrainer (Plas Labs Inc., Lansing, MI, USA) to prevent them from touching their eyes at the beginning of the study (for 4 h). Each rabbit received 100 µL of each eye drops into the inferior conjunctival sac of one eye, while the other eye received 100 µL of the blank formulation (i.e., 0.2% w/v Carbopol gel vehicle) and used as a control. All eyes were visually evaluated for the appearance of irritation, toxicity or allergic reactions, such as tearing, inflammation, corneal swelling, conjunctival redness, hyperemia or swelling, etc., after 1, 2, 3, 4, 6, 8, 24, 48 and 72 h after the application. In addition to the visual evaluation, slit-lamp examinations were performed after 1, 2, 3 and 7 days. More examinations were performed to evaluate any irritation, toxicity or allergic reactions. Conclusions Glaucoma is one of the leading causes of irreversible blindness. Current drugs suffer from diminishment of activity and/or severe side effects upon prolonged usage. To address these problems, a new target, CACNA2D1, was identified, and PRG, which targets the protein, was found and showed a promising IOP-lowering effect in our previous studies. To further evaluate CACNA2D1 as a potential drug target, several other molecules binding to this protein were identified and tested for their in vivo activity. In our current study, to identify additional compounds that bind to CACNA2D1, a homology model was built and used to dock against the NCI library to find possible hits. Two compounds, namely, acivicin and zoledronic acid, stood out among the others and became the best candidates. This model was validated after testing these compounds in vivo for their IOP-lowering effect. 
Our results demonstrate that, although less potent, these compounds share the same target protein and binding sites as PRG. Together, these data suggest that CACNA2D1 is a validated drug target for treating glaucoma and is worthy of further investigation to identify more potent compounds.
Mutant HTT does not affect glial development but impairs myelination in the early disease stage Introduction Huntington’s disease (HD) is caused by expanded CAG repeats in the huntingtin gene (HTT) and is characterized by late-onset neurodegeneration that primarily affects the striatum. Several studies have shown that mutant HTT can also affect neuronal development, contributing to the late-onset neurodegeneration. However, it is currently unclear whether mutant HTT impairs the development of glial cells, which is important for understanding whether mutant HTT affects glial cells during early brain development. Methods Using HD knock-in mice that express full-length mutant HTT with a 140 glutamine repeat at the endogenous level, we analyzed the numbers of astrocytes and oligodendrocytes from postnatal day 1 to 3 months of age via Western blotting and immunocytochemistry. We also performed electron microscopy, RNAseq analysis, and quantitative RT-PCR. Results The numbers of astrocytes and oligodendrocytes were not significantly altered in postnatal HD KI mice compared to wild type (WT) mice. Consistently, glial protein expression levels were not significantly different between HD KI and WT mice. However, at 3 months of age, myelin protein expression was reduced in HD KI mice, as evidenced by Western blotting and immunocytochemical results. Electron microscopy revealed a slight but significant reduction in myelin thickness of axons in the HD KI mouse brain at 3 months of age. RNAseq analysis did not show significant reductions in myelin-related genes in postnatal HD KI mice. Conclusion These data suggest that cytoplasmic mutant HTT, rather than nuclear mutant HTT, mediates myelination defects in the early stages of the disease without impacting the differentiation and maturation of glial cells. Introduction Huntington's disease (HD) is caused by an expansion of CAG repeats in the huntingtin gene (HTT), resulting in mutant HTT carrying an expanded polyglutamine repeat (more than 36Q) in its N-terminal region (Ross and Tabrizi, 2011;Ross et al., 2014;Saudou and Humbert, 2016). While mutant HTT causes late-onset neurodegeneration in HD, it has been found to affect Frontiers in Neuroscience 02 frontiersin.org neuronal cells during early brain development (Reiner et al., 2003;Godin et al., 2010;McKinstry et al., 2014;Molina-Calavita et al., 2014;Molero et al., 2016;Barnat et al., 2020;Mangin et al., 2020;Capizzi et al., 2022). This raises an important hypothesis that the effect of mutant HTT on the early development of the brain may contribute to late-onset neurodegeneration (Cepeda et al., 2019;van der Plas et al., 2020;Humbert and Barnat, 2022). There is also mounting evidence that mutant HTT can affect glial cells, impairing neuronal function (Shin et al., 2005;Bradford et al., 2009Bradford et al., , 2010Khakh and Sofroniew, 2014;Huang et al., 2015;Diaz-Castro et al., 2019;Lee et al., 2022). However, it remains unknown whether mutant HTT affects the development of glial cells during early brain development. Addressing this issue is important for understanding the effect of mutant HTT during early brain development, as neuronal maturation and function are critically dependent on glial function in the early development stage (Haydon, 2001;Allen and Lyons, 2018;Liu et al., 2023). 
For example, axonal formation and function rely on myelination produced by oligodendrocytes (Franklin and Ffrench-Constant, 2017;Kramer-Albers and Werner, 2023), while synaptic function is regulated by astrocytes that can uptake neurotransmitters and regulate their release (Blanco-Suarez et al., 2017;Endo et al., 2022). Furthermore, neurotrophic factors provided by glial cells are essential for maintaining neuronal survival and regulating their differentiation and maturation during the early brain development stage (Park and Poo, 2013;Caldwell et al., 2022;Wang et al., 2022;Xie et al., 2022). Although previous work has shown that mutant HTT can affect neuronal development (Wiatr et al., 2018;Barnat et al., 2020;Fyfe, 2021;Hickman et al., 2021;Capizzi et al., 2022), it is unclear whether glial dysfunction can contribute to this early defect. In the current study, we used the HD KI mouse model to address this issue. HD KI mice express full-length mutant HTT at the endogenous level, allowing us to investigate the toxic effect of mutant HTT under physiological conditions and during early brain development. We focused on astrocytes and oligodendrocytes, as both come from the same progenitor cells. We assessed the numbers of astrocytes and oligodendrocytes in postnatal HD KI mice and observed no differences in their numbers between HD KI and WT mice. However, we found that mutant HTT can reduce myelin protein expression and myelination at 3 months of age. These findings suggest that mutant HTT does not affect the development of glial cells but can impair myelination at the early disease stage, providing additional insight into the neuropathology of HD. Animals All animal protocols were approved by the Institutional Animal Care and Use Committee (IACUC) of Jinan University in China (IACUC Approval No. IACUC-20200728-01). Wild-type C57BL/6 mice were purchased from the Guangdong Medical Laboratory Animal Center (license No. SCXK (Yue) 2018-0002) and used as controls. HD KI-140Q mice expressing full-length mutant HTT were obtained from Jackson Lab (stock number: 029928). All mice were housed in the animal facility of the Institute of Central Nerve Regeneration at Jinan University, which had a 12-h light period and a 12-h dark period, with a controlled temperature of 22 ± 2°C and humidity of 50 ± 10%. The animals had ad libitum access to mouse diet and sterile water. Immunofluorescence staining Mice were anesthetized with 5% chloral hydrate and then perfused with 0.9% NaCl, followed by 4% paraformaldehyde (PFA). The brains were subsequently removed and post-fixed in 4% PFA overnight at 4°C. The brains were then transferred to 30% sucrose for 48 h and cut into 20-or 40-μm sections using a cryostat (Leica CM1850) at 20°C. The sections were blocked in 4% donkey serum with 0.2% Triton X-100 and 3% BSA in PBS for 1 h. For immunofluorescent staining, 20-μm sections were incubated with primary antibodies in the same buffer at 4°C overnight. After washing with 1× PBS, the sections were incubated with fluorescent secondary antibodies. Fluorescent images were acquired using a Zeiss microscope (Carl Zeiss Imaging, Axiovert 200 MOT) and either a 40× or 63× lens (LD-Achroplan 40×/0.6 or 63×/0.75) with a digital camera (Hamamatsu, Orca-100) and Openlab software (Improvision). RNAseq analysis Total RNA was extracted from the prefrontal cortex (PFC) and striatum of 140Q KI mice and age-matched control animals. Only samples with an RNA integrity number (RIN) over 6.8 were used for cDNA library construction. 
Sequencing was performed on a single lane of an Illumina HiSeq 4,000 to produce 150 bp paired-end reads. We performed three independent replicates from adjacent areas for each animal. The brain tissues were transported on dry ice to BGI Genomics (Shenzhen, China) for high-throughput sequencing. The RNA-seq sequencing workflow followed the company's protocol. The sequencing libraries were enriched and constructed with magnet beads containing Oligo (dT) and randomly fragmented using fragmentation buffer. The RNA fragments were then amplified using random hexamers, end-repaired, adenylated, and sequenced using the Illumina platform. The RNA-seq data were quantified using Salmon software (ver 1.9.0) based on the GRCm38 genome on the high-performance computing platform of Jinan University (Patro et al., 2017). The quantified files were then matrixed with tximport (ver 1.28.0) and estimated through edgeR to explore differentially expressed genes Frontiers in Neuroscience 03 frontiersin.org (DEGs) (Robinson et al., 2010). The heatmaps were clustered and plotted using the ComplexHeatmap R package (ver 2.13.1) (Gu et al., 2016). The significant up-regulated and down-regulated genes (p < 0.01, |Fold Change| > 1) in the cortex and striatum between WT and HD140Q mice were shown on the volcano plot through the EnhanceVolcano R package (ver 1.14.0). To estimate different pathway activation scores in samples, the neuron or glial cell-associated pathway sets scores were calculated through Gene Set Variation Analysis (GSVA) with the GSVA R package (ver 1.48.1) (Hänzelmann et al., 2013). All of the above analyses were performed using R (ver 4.6.0) and R studio (ver 2022.07.01, Build 554). Quantitative PCR For qRT-PCR, total RNA was extracted from the prefrontal cortex and striatum of postnatal mice at 3 months of age. Samples were collected from 140Q KI mice and age-matched control animals. Reverse transcription reactions were performed using the Superscript III First-Strand Synthesis System (Invitrogen, 18,080-051) with 1.5 μg of total RNA. One microliter of cDNA was combined with 10 μL SYBR Select Master Mix (Applied Biosystems, 4,472,908) and 1 μL of each primer in a 20 μL reaction. The reaction was performed in a real-time thermal cycler (Eppendorf, Realplex Mastercycler). The PCR products were analyzed on a 2% agarose gel to ensure that the PCR amplification produced only a single specific band. The sequences of the primers for Ptgds, Acvr2a, Ccn3, Rxrg, Nrp2, Vip1r, Igf2, Lars2, Agt, Lgfbp2, Sema7a, Sparc, Bdnf, Hap1, Sox11 and Actin are listed in Supplementary Figure S1. Actin was used as the internal control. Relative expression levels were calculated using 2^-ΔΔCT, with WT set at 1. Electron microscopy The mice were anesthetized with 5% chloral hydrate and perfused with 0.9% NaCl, followed by 4% PFA containing 2.5% glutaraldehyde. After post-fixation, the brain was cut into 50-μm sections using a vibratome (Leica, VT1000). All sections used for electron microscopy were dehydrated in ascending concentrations of ethanol and propylene oxide/Eponate 12 (1:1) and embedded in Eponate 12 (Ted Pella, Redding, CA). Ultrathin sections (60 nm) were cut using a Leica Ultracut S ultramicrotome. Thin sections were counterstained with 5% aqueous uranyl acetate for 5 min, followed by Reynolds lead citrate for 5 min, and examined using a Hitachi (Tokyo, Japan) H-7500 electron microscope equipped with a Gatan Bio-Scan CCD camera. Axon and myelin fiber diameters were measured using ImageJ (NIH). 
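As a minimal sketch of the g-ratio quantification described above, the snippet below computes g-ratios from per-axon diameter measurements of the kind exported from ImageJ; the diameter values are hypothetical placeholders, not data from this study.

```python
import numpy as np

# For each axon, g-ratio = inner axonal diameter / total outer (axon + myelin)
# diameter, so a larger g-ratio means thinner myelin. Diameters are in micrometers
# and stand in for per-axon measurements exported from ImageJ (hypothetical values).
inner_diam = np.array([0.62, 0.80, 0.55, 0.90, 0.71])   # axon diameter
outer_diam = np.array([0.78, 0.99, 0.70, 1.10, 0.90])   # axon + myelin sheath

g_ratio = inner_diam / outer_diam
myelin_thickness = (outer_diam - inner_diam) / 2         # per-side sheath thickness

print(f"Mean g-ratio         : {g_ratio.mean():.3f} +/- {g_ratio.std(ddof=1):.3f}")
print(f"Mean myelin thickness: {myelin_thickness.mean() * 1000:.0f} nm")
```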
More than 60 axon sections were examined for each genotype. Statistics The results are presented as mean ± standard error (SE). Statistical analysis was performed using Prism 8 software (GraphPad Software). When comparing only two experimental groups, Student's t-test was used to calculate statistical significance. For all other experiments, statistical significance was calculated using one-way ANOVA or two-way ANOVA, followed by Tukey's multiple-comparisons test. A value of p of less than 0.05 was considered statistically significant. Selective reduction of myelin proteins in HD KI mice We performed western blotting to examine the expression levels of glial proteins. This assay can validate the expression of mutant HTT and also quantitatively measure the relative levels of glial proteins by comparing with the loading control protein on the same blots. In the cortex, heterozygous HD KI mice expressed wild type and mutant HTT that showed less immunoreactivity to the anti-HTT than WT HTT ( Figure 1A, arrows). The expression of both WT and mutant HTT appeared to be quite stable from P1 to 3 months. Astrocytic proteins GFAP and S100beta were markedly increased from P1 to P7 and then stably expressed from P7 to 3 months, whereas ALDH1L1, which labels glial cells at different states, was more consistently expressed at different postnatal days ( Figure 1A). However, oligodendrocytic proteins (olig2 and PLP) were more abundant at P1-P7 and declined from P7, suggesting that the expression of oligodendrocytic proteins is dynamic at the postnatal stage. However, mutant HTT does not seem to affect the expression of these glial proteins in the cortex when compared with that of WT mice. In contrast, myelin proteins (MBP, MAG, MOG, and CNP) were reduced in their expression when compared with WT mice. This conclusion is also supported by quantification of the ratios of glial proteins to the loading control vinculin on western blots, and a significant reduction of these myelin proteins was found at 3 months of age (Figurse 1A,C). Similar changes were also seen in the striatum of HD KI mice, where glial proteins were unchanged between WT and KI mice, but myelin proteins (MBP, MAG, MOG, and CNP) showed reduced levels, especially at 3 months of age ( Figures 1B,D). Thus, examining postnatal mice at multiple time points revealed that myelin proteins were selectively reduced in HD KI mice. Decreased myelin protein staining in HD KI mice To confirm the Western blotting results, we performed immunocytochemistry using antibodies specific to the astrocytic protein GFAP and oligodendrocytic protein Oligo2. We used heterozygous HD KI mice and wild type littermates of the same age and isolated their brains at different days after birth, ranging from P1 to 3 months. Immunostaining of different brain regions, including the cortex, hippocampus, and striatum, of WT and HD KI mice at 1 and 2 months after birth did not reveal any significant differences in GFAP labeling (Figure 2A). Counting the numbers of GFAP-positive cells (n = 3 mice per group) also showed no significant difference between WT and HD KI mice ( Figure 2B). These results suggest that mutant HTT does not affect the development of astrocytes, as there were no significant differences in GFAP-positive cells between HD KI and WT mice. To assess the number of oligodendrocytes, we used an antibody to oligo2, a protein specifically expressed in oligodendrocytes, to immunolabel WT and HD KI mice at P1, P7, P14, and P21. 
The examination did not reveal any obvious changes in oligo2 labeling in the brain regions containing the cortex, corpus callosum, and striatum when compared with WT mice (Figure 3A). Quantification of the oligo2-positive cells (n = 3 animals per group) showed similar numbers of these cells (Figure 3B). Thus, mutant HTT does not alter the numbers of oligodendrocytes during early brain development. By staining the cortex with antibodies to myelin proteins (MBP, MAG, MOG, and CNP), we observed decreases in these proteins in HD mice at 3 months of age (Figure 4A). Quantification of immunofluorescent staining intensity also verified the decreases in the cortex of HD KI mice compared to WT mice (Figure 4B). Similarly, immunostaining of MBP, MAG, MOG, and CNP in the striatum was also reduced in HD KI mice at 3 months of age (Figures 5A,B). These results are consistent with the western blotting results in Figure 1 and demonstrate that mutant HTT can reduce myelin protein expression when mice reach 3 months of age.

FIGURE 1 | Selective reduction in myelin proteins in HD KI mouse brain. Western blotting analysis of the prefrontal cortex (A) and striatum (B) of WT and HD KI mice at the ages of P1, P7, P21, 1 M, 2 M and 3 M. The blots were probed with antibodies to the astrocytic proteins ALDH1L1, GFAP and S100β and the oligodendrocytic proteins olig2, PLP, MBP, MOG, MAG and CNP; vinculin served as an internal control. Quantification of western blots of the prefrontal cortex (C) and striatum (D), normalized to vinculin. P, postnatal; M, month; WT, wild type; KI, knock-in. The data are presented as mean ± SE (n = 3 independent experiments from 3 mice per genotype). *p < 0.05, **p < 0.01.

Decreased myelination in HD KI mice
Next, we aimed to investigate whether mutant HTT causes any myelination defects in HD KI mice at 3 months of age. Electron microscopy revealed myelinated axons in the striatum and white matter (WM) of both WT and HD KI mice. However, the myelin of HD KI mice appeared thinner than that of WT mice (Figure 6A). To confirm this, we performed a quantitative analysis of g-ratios (the ratio of the inner axonal diameter to the total outer diameter) (Figure 6B). The ratio was increased in the striatum and white matter of HD KI mice compared to WT mice, reflecting thinner myelin in HD KI mice (Figure 6B). We did not observe any obvious evidence of axon degeneration in HD KI mice, suggesting that reduced myelination is an early pathological change prior to obvious degeneration in HD.

RNAseq analysis of postnatal HD mice
Transcriptional dysregulation is a significant molecular change in HD (Malla et al., 2021), which is observed in HD KI mice and correlates with the age-dependent nuclear accumulation of mutant HTT (Langfelder et al., 2016). HD KI mice do not exhibit significant alterations in gene expression until 6 months of age (Langfelder et al., 2016), consistent with the apparent nuclear accumulation of N-terminal mutant HTT at this age. We conducted RNAseq analysis to investigate whether postnatal HD KI mice (n = 3) display any altered gene expression. Analysis of HD KI mice at 1 and 3 months did not reveal any significant changes in global gene expression in the cortex and striatum compared to WT mice (n = 3) (Figures 7A,B).
The volcano plots of differentially expressed genes (DEGs) also showed minimal numbers of altered genes (19 down- and 33 up-regulated at 1 month and 171 down- and 59 up-regulated at 3 months in the cortex; 17 down- and 44 up-regulated at 1 month and 108 down- and 48 up-regulated at 3 months in the striatum), although there was a trend toward increased numbers with aging (Figure 7C). We also used quantitative RT-PCR to compare the expression of several genes involved in the regulation of neuronal and glial differentiation and maturation, but did not identify any that displayed obvious alterations in HD KI mice at 3 months of age (Supplementary Figure S1). Characterization of genes for neuronal and glial cell differentiation and development supports the idea that HD KI mice do not exhibit a deficiency in the numbers of glial cells at the postnatal stage (Figure 7D). Analyzing the expression of genes for myelination did not reveal significant changes in HD KI mice either (Figure 7E; Supplementary Figure S1A). Although the average values revealed alterations in some subsets of myelin sheath genes, the expression of MBP, MAG, MOG, and CNP appeared to be slightly up-regulated in HD KI mice when compared with WT mice (Figure 7F), which is consistent with the quantitative PCR results (Supplementary Figure S1). The lack of reduction of these myelin genes in postnatal HD KI mice is in line with the absence of obvious nuclear accumulation of mutant HTT.

FIGURE 3 | No significant alteration in oligodendrocyte numbers in early postnatal HD KI mice. (A) Immunofluorescence staining of the mouse brain for oligodendrocytes with an antibody to olig2. WT and HD KI mice at the ages of P1, P7, P14 and P21 were examined. The brain region contains the cortex (Ctx), white matter (WM) in the corpus callosum, and striatum (Str). (B) Quantification of the olig2-positive cell rate (% of total cells). The data are presented as mean ± SE (n = 3 independent experiments from 3 mice per genotype). P, postnatal; WT, wild type; KI, knock-in.

Discussion
The discovery of the effects of mutant HTT on early neuronal development raises an important idea that late-onset neurodegeneration in HD may be initiated by early defects in neuronal development (Cepeda et al., 2019; van der Plas et al., 2020; Humbert and Barnat, 2022). Since mutant HTT is also expressed in glial cells and glial dysfunction contributes to HD neuropathology (Shin et al., 2005; Bradford et al., 2009, 2010; Khakh and Sofroniew, 2014; Huang et al., 2015; Diaz-Castro et al., 2019; Lee et al., 2022), it is interesting to investigate whether mutant HTT affects glial development, thereby contributing to early neuronal development defects. In the current study, we found that mutant HTT does not affect glial development, as there are no obvious alterations in the numbers of astrocytes and oligodendrocytes in the brains of postnatal HD mice. These findings suggest that neuronal toxicity of mutant HTT occurs much earlier than glial toxicity, whereas non-autonomous effects resulting from mutant HTT in glial cells may be important for facilitating HTT toxicity and disease progression. Although previous studies have shown that mutant HTT can affect both astrocytes and oligodendrocytes (Shin et al., 2005; Bradford et al., 2009, 2010; Khakh and Sofroniew, 2014; Huang et al., 2015; Diaz-Castro et al., 2019; Lee et al., 2022; Sun et al., 2022), it has never been investigated which types of glial cells are preferentially affected at the early disease stage.
Addressing this issue is important for understanding the development and progression of HD, as astrocytes and oligodendrocytes play distinct roles in maintaining neuronal function. Astrocytes can regulate synaptic neurotransmitters and release growth factors to maintain neuronal function, whereas oligodendrocytes are critical for myelination, which is essential for axonal conductivity and function (Haydon, 2001; Allen and Lyons, 2018; Duncan et al., 2021; Liu et al., 2023). We found that mutant HTT reduced the expression level of myelin proteins but not oligodendrocyte numbers, suggesting that mutant HTT affects the oligodendrocytic function that produces myelin proteins. The findings that mutant HTT reduces myelin protein expression and myelination are consistent with several previous reports that mutant HTT can affect axonal integrity and myelination (Shirendeb et al., 2012; Marangoni et al., 2014; Huang et al., 2015; Teo et al., 2016; Rosas et al., 2018; Bourbon-Teles et al., 2019; Ferrari Bardile et al., 2019). These previous studies investigated the effects of mutant HTT in transgenic mouse models. However, it remains unknown how full-length mutant HTT at the endogenous level affects glial cells, particularly at the early stages of the disease. Our earlier study demonstrated that selective expression of transgenic N-terminal mutant HTT in oligodendrocytes can induce much more severe demyelination and axonal degeneration in PLP-150Q mice (Huang et al., 2015; Yin et al., 2020), whereas other HD transgenic mice displayed different extents of demyelination phenotypes (Teo et al., 2016; Ferrari Bardile et al., 2019, 2021). Thus, the level of toxic HTT products is critical for dysfunction of oligodendrocytes. In the current study, we examined HD KI mice in which full-length mutant HTT is expressed at the endogenous level. Although the extent to which mutant HTT affects myelination is milder in HD KI mice than in transgenic mouse models of HD, the current finding points to the fact that demyelination is an early pathological event in HD when full-length mutant HTT is expressed under physiological conditions.

[Figure 4 legend: Reduced myelin protein staining in the cortex of the HD KI mouse brain. (A) Immunofluorescence staining of the prefrontal cortex for the myelin proteins MBP, MOG, MAG and CNP. (B) Quantification of myelin protein fluorescence density. Data are presented as mean ± SE (n = 3 independent experiments from 3 mice per genotype). P, postnatal; M, month. *p < 0.05. WT, wild type; KI, knock-in.]

It is known that there is an age-dependent accumulation of N-terminal mutant HTT in the nucleus, which affects gene expression, whereas full-length mutant HTT is predominantly distributed in the cytoplasm (Zhao et al., 2016). In line with this, HD KI mice expressing 111–175Q do not show obvious alterations in gene expression from 2 months of age until 6 months of age (Langfelder et al., 2016). Our RNAseq data indicate that mutant HTT does not significantly affect gene expression in the early stages of the disease. For instance, at 1 month, only 17 genes were found to be down-regulated and 44 genes were up-regulated in the striatum of HD KI mice. At 3 months, the number of altered genes increased to 108 down-regulated and 48 up-regulated genes. When compared to the altered gene expression in the striatum of adult HD KI mice, in which more than 2,000 genes can be up- or down-regulated (Langfelder et al., 2016), the changes found in our HD KI mice at the early disease stage are minor.
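As a rough illustration of how DEG counts like those quoted above (e.g., 17 down- and 44 up-regulated genes at 1 month) are typically obtained from a differential-expression table, the snippet below applies conventional fold-change and adjusted p-value cutoffs. The column names follow DESeq2-style output and the thresholds are assumptions, not necessarily the criteria used in this study.

```python
import pandas as pd

def count_degs(de_table: pd.DataFrame,
               lfc_cutoff: float = 1.0,
               padj_cutoff: float = 0.05) -> tuple[int, int]:
    """Count down- and up-regulated genes from a DESeq2-style results table.

    Assumes columns 'log2FoldChange' (KI vs WT) and 'padj' (BH-adjusted p-value).
    """
    sig = de_table.dropna(subset=["log2FoldChange", "padj"])
    sig = sig[sig["padj"] < padj_cutoff]
    n_down = int((sig["log2FoldChange"] <= -lfc_cutoff).sum())
    n_up = int((sig["log2FoldChange"] >= lfc_cutoff).sum())
    return n_down, n_up

# Illustrative use on a hypothetical results file (the path is a placeholder):
# striatum_1m = pd.read_csv("striatum_1m_KI_vs_WT.csv")
# down, up = count_degs(striatum_1m)
# print(f"{down} down-regulated, {up} up-regulated")
```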
These changes reflect the age-dependent nuclear effects of mutant HTT, which is consistent with the fact that nuclear accumulation of mutant HTT depends on the accumulation of N-terminal HTT fragments in the nucleus. However, the minor nuclear effects of mutant HTT seen in our RNAseq analysis do not account for the decreased myelin proteins at 3 months, as both RNA-seq and RT-PCR did not reveal decreased levels of myelin protein mRNAs. Thus, the reduced myelin proteins are likely due to the cytoplasmic effects of mutant HTT rather than nuclear mutant HTT that can affect gene expression. The production of myelin proteins seems to be regulated by complicated mechanisms. For example, MBP mRNA needs to be transported from the nucleus to the plasma membrane to be translated locally at the axon-glial contact site (Müller et al., 2013). This transport relies on kinesin-mediated intracellular trafficking (Lyons et al., 2009) and intracellular signaling regulation (Laursen et al., 2011). In addition, axon-glial signaling is also critical for myelination, and axonal diameter or electrical activity influences myelination (Krämer-Albers and White, 2011; Müller et al., 2013).

[Figure 5 legend: Reduced myelin protein staining in the striatum of the HD KI mouse brain. (A) Immunofluorescence staining of the striatum for the myelin proteins MBP, MOG, MAG and CNP. (B) Quantification of myelin protein fluorescence density. Data are presented as mean ± SE (n = 3 independent experiments from 3 mice per genotype). P, postnatal; M, month. *p < 0.05. WT, wild type; KI, knock-in.]

Since cytoplasmic mutant HTT can affect a variety of cellular functions, including intracellular trafficking, mRNA translation or stability, and various signaling pathways (Bates et al., 2015; Saudou and Humbert, 2016; Fu et al., 2019; Eshraghi et al., 2021), the defective myelination in postnatal HD KI mice highlights the effect of cytoplasmic mutant HTT on myelination in the early stages of HD. These findings offer additional insight into the pathogenesis of HD and have therapeutic implications for halting or preventing HD neuropathology.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: https://www.ncbi.nlm.nih.gov/bioproject/PRJNA982440/.

Ethics statement

The animal study was reviewed and approved by the Institutional Animal Care and Use Committees at Jinan University.

Author contributions

XG and X-JL conceived the study. SY and JM conducted major experiments. YL, MP, Haz, HoZ, and JL participated in some experiments. LC analyzed RNAseq data. SL and DH provided advice.

[Figure 6 legend: Reduced myelination in the HD KI mouse brain at 3 months of age. (A) Electron micrographs of the striatum and subcortical white matter. Scale bar: 0.25 μm. (B) G-ratios (r/R) of axons in WT and HD KI mice. WT, wild type; KI, knock-in. Data are presented as mean ± SE, obtained from counting 60 axons per group. *p < 0.05.]

Funding

This study was supported by the National Natural Science Foundation of China (81830032, 82071421, and 82271902) and the Natural Science Foundation of Guangdong Province (2021A1515012526).
Estimates of ikaite export from sea ice to the underlying seawater in a sea ice – seawater mesocosm The precipitation of ikaite and its fate within sea ice is still poorly understood. We quantify temporal inorganic carbon dynamics in sea ice from initial formation to its melt in a sea ice–seawater mesocosm pool from 11 to 29 January 2013. Based on measurements of total alkalinity (TA) and total dissolved inorganic carbon (TCO2), the main processes affecting inorganic carbon dynamics within sea ice were ikaite precipitation and CO2 exchange with the atmosphere. In the underlying seawater, the dissolution of ikaite was the main process affecting inorganic carbon dynamics. Sea ice acted as an active layer, releasing CO2 to the atmosphere during the growth phase, taking up CO2 as it melted and exporting both ikaite and TCO2 into the underlying seawater during the whole experiment. Ikaite precipitation of up to 167 μmolkg−1 within sea ice was estimated, while its export and dissolution into the underlying seawater was responsible for a TA increase of 64–66 μmolkg−1 in the water column. The export of TCO2 from sea ice to the water column increased the underlying seawater TCO2 by 43.5 μmolkg−1, suggesting that almost all of the TCO2 that left the sea ice was exported to the underlying seawater. The export of ikaite from the ice to the underlying seawater was associated with brine rejection during sea ice growth, increased vertical connectivity in sea ice due to the upward percolation of seawater and meltwater flushing during sea ice melt. Based on the change in TA in the water column around the onset of sea ice melt, more than half of the total ikaite precipitated in the ice during sea ice growth was still contained in the ice when the sea ice began to melt. Ikaite crystal dissolution in the water column kept the seawater pCO2 undersaturated with respect to the atmosphere in spite of increased salinity, TA and TCO2 associated with sea ice growth. Results indicate that ikaite export from sea ice and its dissolution in the underlying seawater can potentially hamper the effect of oceanic acidification on the aragonite saturation state (aragonite) in fall and in winter in ice-covered areas, at the time when aragonite is smallest. Introduction Currently, each year, 7 Pg anthropogenic carbon is released to the atmosphere, 29 % of which is estimated to be taken up by the oceans through physical, chemical and biological processes (Sabine et al., 2004).The Arctic Ocean takes up −66 to −199 Tg C year −1 , (where a negative value indicates an uptake of atmospheric CO 2 ) contributing 5-14 % to the global ocean CO 2 uptake (Bates and Mathis, 2009), primarily through primary production and surface cooling (MacGilchrist et al., 2014).However, polar ocean CO 2 uptake estimates consider sea ice as an impermeable barrier, ignoring the potential role of ice-covered areas on gas exchange between the ocean and atmosphere.Recent studies have shown that sea-ice-covered areas participate in the variable sequestration of atmospheric CO 2 into the mixed layer below the ice (e.g., Papakyriakou and Miller, 2011;Geil-fus et al., 2012Geil-fus et al., , 2014Geil-fus et al., , 2015;;Nomura et al., 2013;Delille et al., 2014).Studies are required to elucidate the processes responsible for this as well as their magnitudes, both temporally and spatially. 
The carbonate chemistry in sea ice and brine is spatially and temporally variable, which leads to complex CO2 dynamics with the potential to affect the air-sea CO2 flux (Parmentier et al., 2013). Release of CO2 from sea ice to the atmosphere has been reported during sea ice formation from open water (Geilfus et al., 2013a) and in winter (Miller et al., 2011; Fransson et al., 2013), while uptake of CO2 by sea ice from the atmosphere has been reported after sea ice melt onset (e.g., Semiletov et al., 2004; Nomura et al., 2010, 2013; Geilfus et al., 2012, 2014, 2015; Fransson et al., 2013). In combination, these works suggest that the temporal cycle of sea ice formation and melt affects atmospheric CO2 uptake by the ocean in variable ways. Sea ice may also act as an important control on the partial pressure of CO2 (pCO2) in the sea surface through a sea ice pump (Rysgaard et al., 2007). During the earliest stages of sea ice formation, a small fraction of CO2-supersaturated brine is expelled upward onto the ice surface, promoting a release of CO2 to the atmosphere (Geilfus et al., 2013a). As sea ice forms and grows thicker, salts are partly rejected from the sea ice to the underlying seawater and partly trapped within the sea ice structure, concentrated in brine pockets, tubes and channels. As a result, the concentration of dissolved salts, including inorganic carbon, increases within the brine and promotes the precipitation of calcium carbonate crystals such as ikaite (CaCO3·6H2O) (Marion, 2001). These crystals have been reported in both natural (Dieckmann et al., 2008; Nomura et al., 2013; Søgaard et al., 2013) and experimental sea ice (Geilfus et al., 2013b; Rysgaard et al., 2014) and have been suggested to be a key component of the carbonate system (Rysgaard et al., 2007; Fransson et al., 2013; Delille et al., 2014).

During ikaite precipitation within sea ice, total alkalinity (TA) in brine is reduced by 2 moles, owing to the consumption of bicarbonate (HCO3−), while total dissolved inorganic carbon (TCO2) in brine is reduced by only 1 mol (Reactions R1-R3), the overall reaction being

Ca2+ + 2HCO3− + 5H2O ⇌ CaCO3·6H2O + CO2.   (R3)

The specific conditions leading to ikaite precipitation, as well as the fate of these precipitates in sea ice, are still not fully understood. Ikaite crystals may remain within the ice structure, while the CO2 formed during their precipitation is likely rejected with dense brine to the underlying seawater and sequestered below the mixed layer. During sea ice melt, the dissolution of these crystals, triggered by increased ice temperatures and decreased bulk ice salinity, will consume CO2 and drive a CO2 uptake from the atmosphere to the ice. Such a mechanism could be an effective sea ice pump of atmospheric CO2 (Delille et al., 2014). In addition, ikaite stored in the ice matrix could become a source of TA to the near-surface ocean upon its subsequent dissolution during sea ice melt (Rysgaard et al., 2007, 2009).
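A minimal numerical sketch of this bookkeeping, assuming only the 2:1 (TA:TCO2) effect of ikaite precipitation/dissolution and the 0:1 effect of CO2 gas exchange described above; the starting brine values and the amounts moved are placeholders rather than measurements.

```python
def apply_ikaite(ta_umol_kg, tco2_umol_kg, precipitated_umol_kg):
    """Ikaite precipitation removes 2 mol TA and 1 mol TCO2 per mole of CaCO3·6H2O.
    Use a negative 'precipitated' value for dissolution."""
    return (ta_umol_kg - 2.0 * precipitated_umol_kg,
            tco2_umol_kg - 1.0 * precipitated_umol_kg)

def apply_co2_exchange(ta_umol_kg, tco2_umol_kg, co2_released_umol_kg):
    """CO2 outgassing lowers TCO2 only; TA is unchanged.
    Use a negative 'released' value for CO2 uptake."""
    return ta_umol_kg, tco2_umol_kg - co2_released_umol_kg

# Placeholder brine values (umol/kg), not measurements from this study.
ta, tco2 = 2449.0, 2347.0
ta, tco2 = apply_ikaite(ta, tco2, precipitated_umol_kg=100.0)
ta, tco2 = apply_co2_exchange(ta, tco2, co2_released_umol_kg=20.0)
print(f"TA = {ta:.0f}, TCO2 = {tco2:.0f} umol/kg")  # TA = 2249, TCO2 = 2227
```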
The main air-sea fluxes of CO 2 and TCO 2 are driven by brine rejection to the underlying seawater and its contribution to intermediate and deep-water formation (Semiletov et al., 2004;Rysgaard et al., 2007Rysgaard et al., , 2009;;Fransson et al., 2013) or below sea ice in ice tank studies (e.g., Killawee et al., 1998;Papadimitriou et al., 2004).As sea ice thickens, reduced near-surface ice temperatures result in reduced brine volume content, increased brine salinity and increased solute concentration in the brine.In the spring-summer, as the ice temperature increases, sea ice brine volume increases and sea ice becomes vertically permeable to liquid (Golden et al., 2007), enhancing the potential CO 2 exchange between the atmosphere, sea ice and ocean.Eventually internal ice melt promotes brine dilution, which decreases brine salinity, TA and TCO 2 , and leads to lower pCO 2 in the brine.In addition, the dissolution of ikaite decreases brine pCO 2 (Reaction R1) (Geilfus et al., 2012(Geilfus et al., , 2015)).These conditions all favor sea ice as a sink for atmospheric CO 2 (Nomura et al., 2010(Nomura et al., , 2013;;Geilfus et al., 2012Geilfus et al., , 2015)).Melting sea ice stratifies surface seawater, leading to decreased TA, TCO 2 and pCO 2 , in the sea surface, enhancing air-sea CO 2 fluxes (Rysgaard et al., 2007(Rysgaard et al., , 2009)). Although we now have a basic understanding of the key mechanisms of carbon cycling in sea ice, significant unknowns remain.One of the major unknowns is the fate of ikaite, TCO 2 and CO 2 released from sea ice during winter.It is unclear what proportion of precipitated ikaite crystals in sea ice remains in the matrix to be released upon melt or what proportion is expelled with brine drainage during ice formation and growth.Examining the chemical signatures of the water column beneath sea ice may provide an indication of the importance of the different processes.However, the signal of carbon components released from 1 to 2 m of sea ice growth is difficult to detect in a water column several hundred meters deep. In this study, we followed the evolution of the inorganic carbon dynamics within experimental sea ice from sea ice formation to melt in a sea ice-seawater mesocosm pool (∼ 435 m 3 ) at the University of Manitoba, Winnipeg, Canada.The benefits of this type of environment are multiple.An artificial pool equipped with a movable bridge makes it possible to collect undisturbed samples from thin growing sea ice.We gain the ability to carefully track carbonate parameters in the ice, in the atmosphere and in the underlying seawater, while growing sea ice in a large volume of seawater, so that conditions closely mimic the natural system.During this experiment, we examined physical and chemical processes, in the absence of biology, responsible for changes in the inorganic carbon system of sea ice and the underlying seawater, and quantified fluxes of inorganic carbon between the atmosphere, sea ice and the water column.We also discuss that dissolution of ikaite crystals exported from sea ice in the underlying seawater can potentially hamper the effect of oceanic acidification on aragonite . Site description, sampling and analysis The Sea-ice Environmental Research Facility (SERF) is an inground outdoor concrete pool, 18.3 m × 9.1 m in surface area and 2.6 m deep, exposed to ambient temperatures, winds and solar radiation (by retracting its roof, Fig. 
1).The weather conditions in the region are conducive to sea ice growth for several months every winter.Prior to the experiment, the pool is filled with artificial seawater (ASW) made by dissolving large quantities of various rock salts into local groundwater to mimic the major composition of natural seawater (see Rysgaard et al., 2014 for exact composition of the ASW).Sea ice is melted in the pool by circulating heated ethylene glycol through a closed-loop hose located at the bottom of the pool, allowing successive ice growth/melt experiments to be carried out during one winter.The experimental sea ice and brine exhibit similar physical and chemical properties to those observed in natural Arctic sea ice (Geilfus et al., 2013b;Hare et al., 2013).The experiment described herein was initiated from open water conditions on 11 January 2013 when the heater was turned off.Sea ice grew until 26 January when the heat was turned back on.The experiment ended on 30 January when the pool was 20 % ice-free. Four 375 W pumps were installed on the bottom of the pool near each of the corners to induce a consistent current.The pumps were configured to draw water from their base and then propel it outward parallel to the bottom of the pool.The pumps were oriented successively at right angles to one another, which created a counterclockwise circulation of 2-3 cm s −1 (Else et al., 2015). Bulk ice and seawater temperatures were recorded by an automated type-T thermocouple array fixed vertically in the pool.Seawater salinity was measured continuously using Aanderaa CT sensors (model 4319) located at 30, 100, 175 and 245 cm depth.The in situ seawater pCO 2 was measured every 5 s using a Contros HydroC (resolution < 1 µatm, accuracy ±1 % of the upper range value) located at 1.3 m depth. Air temperature and relative humidity were measured using a Vaisala HMP45C probe at a meteorological station located 2 m above the sea ice surface.Solar irradiance was continuously recorded by an Eppley Precision Spectral Pyranometer (range of 0.285-2.8µm) mounted 10 m above the sea ice surface.In addition, estimated photosynthetically active radiation (PAR) values at the ice bottom were recorded with Alec mkV/L PAR sensors throughout the study and ranged from 0 to 892 µmol photons m −2 s −1 . Sea ice and seawater samples were obtained from a confined area located on the north side of the pool to minimize effects on other experiments (e.g., Else et al., 2015).Ice samples were collected using ceramic knives or a Kovacs Mark II coring system depending on the ice thickness.Sampling was performed from a movable bridge to avoid walking on the ice surface and to ensure only undisturbed sites were sampled.Ice cores were collected from one end of the pool (half meter away from the edge of the pool) and at least 20 cm away from previous cored sites.Ice cores were packed in clean plastic bags and kept frozen during the 20 min transport to a cold laboratory and processed within a few hours.Seawater was sampled for total alkalinity (TA) and total dissolved inorganic carbon (TCO 2 ) with a peristaltic pump (Cole Parmer Masterflex ® Environment Sampler, equipped with PTFE tubing) through an ice core hole of the ice-water interface, at 1.25 and 2.5 m depth.Samples were stored in 12 mL gastight vials (Exetainer, Labco High Wycombe, United Kingdom) and poisoned with 12 µL of saturated HgCl 2 solution and stored in the dark at 4 • C until analyzed. 
Air-ice CO 2 fluxes were measured using a Li-Cor 8100-103 chamber associated with a LI-8100A soil CO 2 flux systems.The chamber was connected in a closed loop to the IRGA with an air pump rate of 3 L min −1 .The measurement of pCO 2 in the chamber was recorded every second over a 15 min period.The flux was computed from the slope of the linear regression of pCO 2 against time (r 2 > 0.99) according to Frankignoulle (1988), taking into account the volume of ice or snow enclosed within the chamber.The uncertainty of the flux computation due to the standard error on the regression slope was on average ±3 %. In the cold laboratory, sea ice cores were cut into 2 cm sections using a pre-cleaned stainless steel band saw.Each section was placed in a gas-tight laminated (Nylon, ethylene vinyl alcohol and polyethylene) plastic bag (Hansen et al., 2000) fitted with a gas-tight Tygon tube and a valve for sampling.The plastic bag was sealed immediately, and excess air was gently removed through the valve using a vac-uum pump.The bagged sea ice samples were then melted in the dark at 4 • C to minimize the dissolution of calcium carbonate precipitates (meltwater temperature never rose significantly above 0 • C).Once melted, the meltwater mixture and bubbles were transferred to gas-tight vials (12 mL Exetainer, Labco High Wycombe, United Kingdom), poisoned with 12 µL solution of saturated HgCl 2 and stored in the dark at 4 • C until analyzed. Bulk ice and seawater salinities were measured using a Thermo Orion 3-star with an Orion 013610MD conductivity cell, and values were converted to bulk salinity (Grasshoff et al., 1983).TA was determined by potentiometric titration (Haraldsson et al., 1997), while TCO 2 was measured on a coulometer (Johnson et al., 1987).Routine analysis of Certified Reference Materials provided by A. G. Dickson, Scripps Institution of Oceanography, verified that TA and TCO 2 were analyzed within ±3 and ±2 µmol kg −1 , respectively.Brine volume was estimated from measurements of bulk salinity, temperature and density according to Cox and Weeks (1983) for temperatures below −2 • C and according to Leppäranta and Manninen (1988) for ice temperatures within the range −2 to 0 • C. Bulk ice samples for biological measurements were collected between 14 and 21 January.Filtered (0.2 µm) SERF seawater (FSW) was added at a ratio of three parts FSW to one part ice and the samples were left to melt in the dark.Chl a was determined on three occasions by filtering two aliquots of the melted ice sample onto GF/F filters (Whatman ® brand) and extracting pigments in 10 mL of 90 % acetone for 24 h.Fluorescence was measured before and after the addition of 5 % HCl (with a Turner Designs fluorometer), and chl a concentration was calculated following Parsons et al. 
(1984).Measurements of bacterial production were done four times during the biological sampling period by incubating 6-10 mL subsamples of the ice-FSW solution with 3 H-leucine (final concentration of 10 nM) for 3 h at 0 • C in darkness (Kirchmann, 2001).Half of the samples were spiked with trichloroacetic acid (TCA, final concentration 5 %) as controls prior to the incubation, while the remaining active subsamples were fixed with TCA (final concentration 5 %) after incubation.Following the incubation, vials were placed in 80 • C water for 15 min (Garneau et al., 2006) before filtration through 0.2 µm cellulose acetate membranes (Whatman ® brand) and rinsing with 5 % TCA and 95 % ethanol.Filters were dried and dissolved in scintillation vials by adding 1 mL ethyl acetate, and radioactivity was measured on a liquid scintillation counter after an extraction period of 24 h.Bacterial production was calculated using the equations of Kirchman (1993) and a conversion factor of 1.5 kg C mol −1 (Ducklow et al., 2003). Sea ice and seawater physical conditions Sea ice was grown in the pool from open water on 13 January 2013 and reached a maximum thickness of 24 cm on 26 January at which point the heat at the base of the pool was turned on.On 30 January the experiment ended with the pool 20 % ice-free.Three main snowfall events occurred during the experiment.The first, from 14 to 15 January, covered the sea ice surface with 1 cm of snow.The second, from 18 to 23 January, deposited 6-9 cm of snow over the entire pool.On the morning of 23 January, the snow was manually cleared off the ice surface to investigate the insulating effect of the snow on the ice temperature and ikaite precipitation (Rysgaard et al., 2014).Finally, from noon on 24 to 27 January, 8 cm of snow covered the entire pool until the end of the experiment on 30 January. The air temperature at the beginning of the experiment ranged from −2 to −26 • C, which initiated rapid sea ice growth to 15 cm until 18 January (Fig. 2).During this initial sea ice growth, the sea ice was attached to the side of the pool, resulting in the development of a hydrostatic pressure head that caused percolation of seawater at the freezing point upwards through the sea ice volume as the sea ice grew downwards.This resulted in repeated events of increased sea ice temperature from the bottom to the surface observed between 15 and 18 January (Fig. 2).Subsequently, the ice was cut using an ice saw around the perimeter, allowing the ice to float, and a pressure release valve was installed to prevent such events (Rysgaard et al., 2014).During this period, the ice temperature oscillated between relatively warm (∼ −3 • C) and cold (∼ −7 • C) phases.Brine volume content (0.047) was low in the middle part of the ice cover, close to the permeability threshold of 0.05 as suggested by Golden et al. (2007).The bulk ice salinity profiles were typically C-shaped, with values ranging from 6 to 23 (Fig. 
2). The underlying seawater salinity increased rapidly due to sea ice growth. From 18 to 23 January, the 9 cm snow cover insulated the ice cover from the cold atmosphere (Rysgaard et al., 2014), resulting in a fairly constant ice thickness, nearly no change in ice temperature and salinity, a brine volume content above the permeability threshold and a small increase in the underlying seawater salinity. Once the ice surface was cleared of snow on the morning of 23 January, the ice temperature decreased throughout the entire ice thickness and the ice surface salinity increased. The sea ice volume cooled from the top downwards, the brine volume content decreased below the permeability threshold on 23 January, and rapid sea ice growth rapidly increased the seawater salinity. Shortly after the snow clearing, the last snowfall event covered the ice surface with 8 cm of snow, reducing the effect of the cold atmosphere on the ice cover. On 26 January, the heater was activated to initiate sea ice melt. Sea ice temperatures increased and became isothermal around −2 °C, while the bulk ice salinity decreased and the brine volume content increased up to 0.13. The sea ice melt decreased the seawater salinity. The pool was well mixed during the whole growth phase, with similar salinity and temperature observed at the four depths. However, once the heat was turned on, the pool became stratified with respect to salinity, as the salinity at 30 cm depth started to diverge from the deeper depths (Fig. 2).

Carbonate system

TA and TCO2 in seawater, noted as TA(sw) and TCO2(sw), were sampled at the sea ice-seawater interface and at 1.25 and 2.5 m depth. An analysis of variance (ANOVA) test over the three depths revealed that the means are not statistically different (p < 0.01), so we consider the average concentration of the three depths in the following analysis. During sea ice growth, TA(sw) increased from 2449 to 2644 µmol kg−1 (black line, Fig. 3a), while TCO2(sw) increased from 2347 to 2516 µmol kg−1 (black line, Fig. 3b). Once the ice started to melt, TA(sw) decreased to 2607 µmol kg−1, and TCO2(sw) decreased to 2461 µmol kg−1. As the experiment stopped before the ice was completely melted in the tank, both the seawater salinity and TA(sw) do not reach their initial values by the end of the experiment (Table 1, Figs. 2, 3).

[Table 1. Seawater conditions on 11 January, before any sea ice formation (t = 0), on 25 January, just before the heat was turned back on, and on 29 January, at the end of the experiment. Note that seawater salinity and TA(sw) do not reach the initial seawater values as sea ice was still present at the end of the experiment. Columns: Date, Temperature, Salinity, TA(sw), nTA(sw), TCO2(sw), nTCO2(sw).]

To discard the effect of salinity changes, we normalized TA(sw) and TCO2(sw) to a salinity of 33 (noted as nTA(sw) and nTCO2(sw)) according to Eqs. (R4) and (R5):

nTA(sw)(t) = TA(sw)(t) × 33 / S(t)   (R4)
nTCO2(sw)(t) = TCO2(sw)(t) × 33 / S(t)   (R5)

where t is the time of the sampling and S the salinity of the sample (seawater or sea ice). During ice growth, nTA(sw) and nTCO2(sw) increased slightly to 2446 and 2328 µmol kg−1, respectively (Fig. 3c). However, once the ice started to melt, nTA(sw) increased to 2546 µmol kg−1, and nTCO2(sw) increased to 2404 µmol kg−1. The in situ pCO2 of the underlying seawater (pCO2(sw)) decreased from 377 to 360 µatm as the seawater temperature in the pool decreased to the freezing point. The pCO2(sw) then oscillated between 360 and 365 µatm during sea ice growth. One day after the heater was turned on, the pCO2(sw) increased to a similar concentration as at the beginning of the experiment, before decreasing to 373 µatm by the end of the experiment (Fig. 3d).

Within bulk sea ice, TA(ice) ranged from 300 to 1907 µmol kg−1, while TCO2(ice) ranged from 237 to 1685 µmol kg−1. Both TA(ice) and TCO2(ice) exhibited C-shaped profiles with higher concentrations at the surface and bottom layers of the ice cover (Fig. 4). The concentration of TA(ice) (average of 476 µmol kg−1) and TCO2(ice) (average of 408 µmol kg−1) did not show significant variability during our survey, except at the surface of the ice. A first maximum was observed on 17 January, with a concentration of 1907 µmol kg−1 for TA(ice) and 1685 µmol kg−1 for TCO2(ice). A second maximum was observed on 23 January, with a concentration of 1433 µmol kg−1 for TA(ice) and 861 µmol kg−1 for TCO2(ice). These maxima matched the high bulk ice salinity (Fig. 2), so we also normalized TA(ice) and TCO2(ice) (noted as nTA(ice) and nTCO2(ice), Fig. 4) to a salinity of 33 (according to Eqs. R4 and R5) to discard the effect of salinity changes and facilitate comparison with the underlying seawater. During initial sea ice formation (up to 17 January), the concentrations of both nTA(ice) (from 1083 to 2741, average 1939 µmol kg−1) and nTCO2(ice) (from 853 to 2440, average 1596 µmol kg−1) were at their minima in the experimental time series. From 17 to 21 January, both nTA(ice) and nTCO2(ice) increased throughout the ice column (average nTA(ice) 2375 µmol kg−1 and nTCO2(ice) 2117 µmol kg−1). However, from 21 January until the initial sea ice melt, nTA(ice) and nTCO2(ice) decreased in the top 5 cm of the ice cover (average nTA(ice) 2125 µmol kg−1 and nTCO2(ice) 1635 µmol kg−1).
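As a small illustration, the salinity normalization in Eqs. (R4)-(R5) can be written as a one-line helper; the example input values below are placeholders of the right order of magnitude rather than data from this experiment.

```python
REFERENCE_SALINITY = 33.0

def normalize_to_salinity(concentration_umol_kg, sample_salinity,
                          reference_salinity=REFERENCE_SALINITY):
    """Scale TA or TCO2 to a common reference salinity (Eqs. R4-R5):
    nX(t) = X(t) * S_ref / S(t)."""
    if sample_salinity <= 0:
        raise ValueError("salinity must be positive")
    return concentration_umol_kg * reference_salinity / sample_salinity

# Placeholder example: a bulk-ice TA of 476 umol/kg at a bulk salinity of 8
# maps to a salinity-normalized value of ~1964 umol/kg.
print(round(normalize_to_salinity(476.0, 8.0)))
```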
5).However, as soon as the ice started to warm up and then melt, the sea ice switched from source to sink for atmospheric CO 2 , with downward fluxes from −1.3 to −2.8 mmol m −2 d −1 .These ranges of air-ice CO 2 exchanges are of the same order of magnitude as fluxes reported on natural sea ice using the same chamber technique in the Arctic during the initial sea ice growth (from 4.2 to 9.9 mmol m −2 d −1 in Geilfus et al., 2013a) and during the spring-summer transition (from −1.4 to −5.4 mmol m −2 d −1 in Geilfus et al., 2015).In Antarctica, air-ice CO 2 fluxes were reported during the spring-summer transition from 1.9 to −5.2 mmol m −2 d −1 by Delille et al. ( 2014), from 0.3 to −2.9 mmol m −2 d −1 (Geilfus et al., 2014) and from 0.5 to −4 mmol m −2 d −1 (Nomura et al., 2013). Key processes affecting the carbonate system The dynamics of inorganic carbon in the ocean and sea ice are mainly affected by temperature and salinity changes, precipitation and dissolution of calcium carbonate and biological activities (Zeebe and Wolf-Gladrow, 2001).During this experiment, neither organic matter nor biota were purposely introduced into the pool; the observed range of bulk ice microbial activity (5.7 × 10 −9 on 14 January to 7.5 × 10 −7 g C L −1 h −1 on 21 January) and algal chl a (0.008 on 14 January to 0.002 µg L −1 on 21 January) were too low to support any biological activity (Rysgaard et al., 2014).Therefore biological activity is unlikely to have played a role.During the same 2013 time series at SERF, Rysgaard et al. (2014) frost flowers, and ikaite precipitation up to 350 µmol kg −1 within bulk sea ice.Within sea ice, ikaite precipitation is associated with low ice temperatures, high bulk salinity and high TA (ice) and TCO 2(ice) concentrations (Figs. 2, 3).The main processes affecting the carbonate system can be described by changes in TA and TCO 2 (Zeebe and Wolf-Gladrow, 2001).An exchange of CO 2(gas) affects TCO 2 , while TA remains constant and the precipitation-dissolution of calcium carbonate affects both TA and TCO 2 in a ratio of 2 : 1 (see Reactions R1-R3, Fig. 6).To calculate the theoretical changes in TA and TCO 2 during the course of the experiment, we used seawater samples from 11 January prior to sea ice formation (t = 0, Table 1) as the origin point (blue circle on Fig. 6).Sea ice data are located between the theoretical calcium carbonate precipitation line and the CO 2 release line (Fig. 6a), while seawater data mainly fall on the calcium carbonate dissolution line (Fig. 6b), suggesting that the carbonate system within sea ice is affected by both the precipitation of ikaite and a release of CO 2(gas) , while the underlying seawater is mainly affected by the dissolution of calcium carbonate. Estimation of the precipitation-dissolution of ikaite During the experiment, Rysgaard et al. (2014) observed ikaite within sea ice using direct microscopic observations.The precipitation-dissolution of ikaite and gas exchange are the only two processes taking place during the experiment.As illustrated in Fig. 6, an exchange of CO 2 does not affect TA, while the precipitation-dissolution of ikaite affects TA and TCO 2 in a ratio 2 : 1.Therefore, we use TA to estimate how much ikaite is precipitated or dissolved within the ice cover and the underlying seawater. 
Assuming no biological effects, ikaite precipitation/dissolution or gas exchange, TA and TCO2 are considered conservative with salinity. Therefore, we can calculate the expected TA and TCO2 (noted as TA*(ice) and TCO2*(ice) in the ice cover and TA*(sw) and TCO2*(sw) for the water column) based on the initial seawater conditions (TA(sw), TCO2(sw) and S(sw) at t = 0, Table 1) and the sample salinity (bulk sea ice or seawater) measured during the experiment:

TA*(t) = TA(sw)(t = 0) × S(t) / S(sw)(t = 0)
TCO2*(t) = TCO2(sw)(t = 0) × S(t) / S(sw)(t = 0)

where t is the time of the sampling. Within the ice cover, TA(ice), TCO2(ice) and the bulk ice salinity are averaged throughout the ice column on each sampling day (Fig. 7a, b, black line), while for the underlying seawater, we used the averaged TA(sw), TCO2(sw) and salinity for all the measured depths (Fig. 2a, b, black line). The difference between TA*(sample) and the observed TA is only due to the precipitation or dissolution of ikaite crystals. In the case of ikaite precipitation (i.e., TA*(sample) > TA(sample)), half of this positive difference corresponds to the amount of ikaite precipitated within the ice. This ikaite may either remain in or be exported out of the ice. A negative difference (i.e., TA*(sample) < TA(sample)) indicates ikaite dissolution.

[Figure 7 legend (fragment): ... and TA(ice) (µmol kg−1) (black diamonds) compared to the average amount of ikaite precipitated throughout the ice thickness for each sampling day from Rysgaard et al. (2014) (white dots). The vertical black dotted line on 26 January marks when the heat was turned back on.]

Sea ice

Greater TA*(ice) and TCO2*(ice) compared to the averaged observed TA(ice) and TCO2(ice) (Fig. 7a, b) are expected as ikaite is precipitated and CO2 is released from the ice to the atmosphere (Figs. 5, 6). Half the difference between TA*(ice) and TA(ice) is a result of ikaite precipitation (Fig. 7c, black diamonds). Highly variable ikaite precipitation was observed (Fig. 7c). Ikaite precipitation was as high as 167 µmol kg−1 (e.g., during the first days of the experiment) and as low as 1 µmol kg−1 (e.g., 19 January). A negative difference between TA*(ice) and TA(ice) (i.e., ikaite dissolution) occurred on three occasions: 14, 20 and after 26 January (beginning of the sea ice melt). On these occasions, the ice cover was relatively warm due to warmer atmospheric temperatures (14 January), a thicker snow cover insulating the ice cover from the cold atmosphere (20 January) or the heat being turned back on (after 26 January, Fig. 2). Relatively high sea ice temperatures likely promote ikaite dissolution, in agreement with Rysgaard et al. (2014), who linked ikaite precipitation/dissolution to ice temperature. The upward percolation of seawater observed from 15 to 18 January might complicate the effect of sea ice temperature on ikaite formation because it was in part responsible for increased ice temperatures (Fig. 2b) and therefore increased sea ice brine volumes (Fig. 2c). Increased vertical connectivity (permeability) of the network of liquid inclusions throughout the sea ice (Golden et al., 2007; Galley et al., 2015) would have allowed the export of ikaite crystals from the ice cover to the underlying seawater. However, while we calculated a negative difference between TA*(ice) and TA(ice), ikaite crystals were observed by Rysgaard et al. (2014). We compared the direct microscopy observations by averaging the amount of ikaite precipitated throughout the ice thickness for each sampling day from Rysgaard et al. (2014) (Fig.
7c, white dots) with our estimation of the amount of ikaite based on the difference between TA * (ice) and TA (ice) (Fig. 7c, black diamonds).Both ikaite measurements are of the same order of magnitude; however, the average (22 µmol kg −1 ) and maximum (100 µmol kg −1 ) of direct observations presented by Rysgaard et al. (2014) were lower than our estimated average (40 µmol kg −1 ) and maximum of up to 167 µmol kg −1 over this whole experiment.Deviations are likely due to methodological differences.Here, sea ice samples were melted to subsample for TA and TCO 2 .Ikaite crystals may have dissolved during melting, leading to an underestimation of the total amount of ikaite precipitated in the ice.However, the difference between TA * (ice) and TA (ice) provides an estimation of how much ikaite is precipitated in the ice cover, including those crystals potentially already exported to the underlying seawater.The method used by Rysgaard et al. ( 2014) avoids the bias of ikaite dissolution during sea ice melt with the caveat that crystals need to be large enough to be optically detected.If no crystals were observed, Rysgaard et al. (2014) assumed that no crystals were precipitated in the ice, though ikaite crystals could have been formed and then exported into the underlying seawater prior to microscopic observation of the sample, which may explain the difference observed between both methods during initial sea ice formation (15-18 January) when the ice was still very thin.In addition, the succession of upward percolation events could have facilitated the ikaite export from the ice cover to the underlying seawater.Estimations from both methods show similar concentrations when the ice (i) warmed due to snowfall (18-23 January) and (ii) cooled once the snow was removed (on 23 January).Once the ice started to melt (26 January), Rysgaard et al. (2014) the ikaite precipitation, while in this study we reported a negative difference between TA * (ice) and TA (ice) , possibly indicating that ikaite dissolved in the ice. Water column The main process affecting the carbonate system in the underlying seawater in this study is the export of ikaite from the ice and its dissolution in the water column (Fig. 6).While a few studies of ikaite precipitation within sea ice carried out over open ocean hypothesized that ikaite remained trapped within the sea ice matrix (Rysgaard et al., 2007(Rysgaard et al., , 2013;;Delille et al., 2014), the observed increase of nTA (sw) (Fig. 3) suggests that ikaite precipitated within the ice cover was exported to the underlying seawater where the crystals were dissolved, as suggested by Fransson et al. (2013).Lower TA * (sw) and TCO * 2(sw) compared to TA (sw) and TCO 2(sw) (Fig. 3) confirm the dissolution of ikaite in the underlying seawater, as the dissolution of ikaite crystals will decrease both TA and TCO 2 (Reactions R1-R3).Therefore, half the difference between TA * (sw) and TA (sw) corresponds to the concentration of ikaite exported from the ice and dissolved in the underlying seawater (Fig. 8a).This concentration increased over time to a maximum of 66 µmol kg −1 . During this experiment, nTA (sw) increased by 128 µmol kg −1 , while nTCO 2(sw) increased by 82 µmol kg −1 (Fig. 
3c). This suggests that 64 µmol kg−1 of ikaite is dissolved, compared to the 66 µmol kg−1 estimated from the difference between TA*(sw) and TA(sw). As a result of the 2:1 TA:TCO2 effect of ikaite dissolution, the dissolution of ikaite accounts for the entire increase of nTA(sw) but only accounts for 64-66 µmol kg−1 of the 82 µmol kg−1 increase in nTCO2(sw). Therefore, 16-18 µmol kg−1 (about 25 %) of the increase of nTCO2(sw) cannot be explained by the dissolution of ikaite. The increase of both nTA(sw) and nTCO2(sw) is more significant once the ice starts to melt (26 January). During sea ice melt, increased vertical permeability resulting in increased liquid communication through the sea ice volume from below likely, in part, dissolved ikaite crystals still residing in the ice at that time, and also will have created a downward crystal export mechanism. As the ice melt advanced, patches of open water occurred at the surface of the pool. Therefore, uptake of atmospheric CO2 by the undersaturated seawater likely occurred, increasing the TCO2(sw).

The dissolution of ikaite crystals could also have a strong impact on the pCO2(sw). The water column was undersaturated compared to the atmosphere during the whole experiment (Fig. 3d). A release of CO2 from the ice to the atmosphere was measured during sea ice growth (Fig. 5) in spite of the undersaturated pCO2(sw). This suggests that air-ice CO2 fluxes are only due to the concentration gradient between the ice and the atmosphere (Geilfus et al., 2012; Nomura et al., 2013) but that sea ice exchanges CO2 with the atmosphere independently of the seawater concentration (Geilfus et al., 2014). The pCO2(sw) is highly correlated with the seawater temperature (Fig. 2), with a rapid decrease of pCO2(sw) during the first days of the experiment (13-15 January) and a relatively constant pCO2(sw) until 27 January. However, on 26 January, the heat was turned back on, affecting the seawater temperature on the same day (Fig. 2), while the impact of increasing temperature on the pCO2(sw) appeared 1 day later (Fig. 3d). We normalized the pCO2(sw) to a temperature of −1 °C (after Copin-Montegut, 1988; noted as npCO2(sw), blue line on Fig. 3d). The npCO2(sw) does not show major variations during sea ice growth, with values around 380 µatm. However, once the heat was turned on and the seawater temperature increased (on 26 January), npCO2(sw) decreased from 383 to 365 µatm, while pCO2(sw) did not change in response to increased seawater temperatures until 27 January, suggesting that a process other than temperature change affected the pCO2(sw). According to Reaction (R1), the dissolution of calcium carbonate has the potential to reduce pCO2(sw). Therefore, during sea ice growth and the associated release of salt, TA, TCO2 and ikaite crystals to the underlying seawater, ikaite dissolution within the seawater could be responsible for maintaining stable pCO2(sw) values while seawater salinity, TA(sw) and TCO2(sw) are increasing. Once the seawater temperature increased (26 January), sea ice melt likely released ikaite crystals to the underlying seawater (Figs. 2, 8a) along with brine and meltwater, a process that would continuously export ikaite from the sea ice as the volume interacting with the seawater via percolation or convection increased. The dissolution of these crystals likely contributed to keeping the pCO2(sw) low and counterbalancing the effect of increased temperature. We argue that once all the ikaite crystals were dissolved, the increased seawater temperature increased the pCO2(sw) simultaneously with the npCO2(sw) (27 January, Fig. 3).

[Table 2 legend (fragment): ... (TCO2(sw)) and in the ice cover (TCO2(ice)), masses of ikaite within the ice cover estimated from this study and from Rysgaard et al. (2014), masses of ikaite dissolved in the water column (Ikaite(sw)) and masses of CO2 exchanged between the ice and the atmosphere over the whole pool (estimation based on the air-ice CO2 fluxes). All units are in mol.]

Ikaite export from the ice cover to the water column

We estimated the amount of ikaite precipitated and dissolved within sea ice and seawater based on the sea ice (and seawater) volume (in m3), the sea ice and seawater density, the concentration of ikaite precipitated and dissolved within the ice cover (Fig. 7c) and the concentration of ikaite dissolved in the water column (Fig. 8a). Within the ice cover, the amount of ikaite precipitated/dissolved ranged from −0.7 to 1.97 mol (Fig. 8b, Table 2), with a maximum just after the snow was cleared on 23 January. In the underlying seawater, the amount of ikaite dissolved in the pool increased from 0.47 mol on the first day of the experiment to 11.5 mol on 25 January, when sea ice growth ceased. Once the ice started to melt, the amount of dissolved ikaite increased up to 20.9 mol (28 January) and 26.7 mol (29 January, Table 2). The estimate of ikaite dissolution in the pool is significantly higher than the estimated amount of ikaite precipitated (and potentially exported) within the ice cover, especially during sea ice melt. Within the ice cover, the ikaite values presented here represent a snapshot of the ikaite content in the ice at the time of sampling. In the underlying seawater, ikaite dissolution increased TA(sw) cumulatively over time.

The difference between TA*(ice) and TA(ice) provides an estimate of ikaite precipitated within the ice, including potential ikaite export to the underlying seawater, so it cannot be used to determine how much ikaite remained in the ice vs. how much dissolved in the water column. However, Rysgaard et al. (2014) indicate that ikaite precipitated within the ice based on direct observations. Using the ikaite concentration reported in Rysgaard et al. (2014) (and shown in Fig. 7c), the sea ice volume (in m3) and density, we calculate that 0-3.05 mol of ikaite precipitated within the ice cover during sea ice growth (Fig. 8b, Table 2). This amount decreased to 0.46 and 0.55 mol during the sea ice melt (28 and 29 January, respectively). Increased ikaite dissolution in the water column when the ice began to melt (from 11.5 to 20.9 mol) indicates that 9.4 mol of ikaite was stored in the ice and rejected upon the sea ice melt. This amount is about 3 times the amount of ikaite precipitated in the ice estimated by Rysgaard et al. (2014) at the end of the growth phase (3.05 mol, Table 2), suggesting more work is needed for the best estimate of ikaite precipitation within sea ice.
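The conversion from per-kilogram ikaite concentrations to whole-pool masses described above amounts to n = C × ρ × V; a small sketch follows, in which the layer thicknesses, densities and concentrations are placeholders of roughly the right magnitude rather than the tabulated values.

```python
POOL_AREA_M2 = 18.3 * 9.1  # pool surface area (m2)

def ikaite_moles(concentration_umol_kg, layer_thickness_m, density_kg_m3,
                 area_m2=POOL_AREA_M2):
    """Convert an ikaite concentration (umol per kg of ice or seawater)
    into moles over the whole pool: n = C * rho * V."""
    volume_m3 = area_m2 * layer_thickness_m
    return concentration_umol_kg * 1e-6 * density_kg_m3 * volume_m3

# Placeholder numbers: 40 umol/kg in a 0.2 m ice cover (rho ~ 900 kg/m3)
# and 15 umol/kg dissolved in a 2.4 m water column (rho ~ 1025 kg/m3).
ice_mol = ikaite_moles(40.0, 0.20, 900.0)
sw_mol = ikaite_moles(15.0, 2.40, 1025.0)
print(f"ice: {ice_mol:.2f} mol, seawater: {sw_mol:.2f} mol")
```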
Once the ice started to melt, the increased ikaite dissolution from 11.5 to 20.9 mol (28 January) and to 26.7 mol (29 January) suggests that about the same amount of ikaite was dissolved during sea ice growth as during the first 2 days of sea ice melt. The amount of ikaite dissolved in the water column after melt commenced continued to increase cumulatively, suggesting that ikaite is continuously exported to the underlying seawater as increased sea ice temperatures permit more of the ice volume to communicate with the underlying seawater. Therefore, we can assume that more than half of the amount of ikaite precipitated within the ice remained in the ice cover before ice melt began.

Air-ice-seawater exchange of inorganic carbon

SERF is a semi-closed system where the only way for the surface (water or sea ice) to gain or lose CO2 is through exchange with the atmosphere, making it reasonable to track the exchange of TCO2 in the atmosphere-sea ice-seawater system. Throughout the experiment, the ice cover always had a lower TCO2(ice) than would be expected if the TCO2 simply followed brine rejection in a conservative process (i.e., TCO2*(ice) > TCO2(ice); Fig. 7b). This could be due to (i) CO2 released to the atmosphere from sea ice, (ii) decreased TCO2(ice) due to the precipitation of ikaite within sea ice and/or (iii) sea ice exchanging TCO2 with the underlying seawater.

[Figure 9 legend: Total amount of TCO2 lost from the ice cover (black dots), amount of CO2 exchanged between the atmosphere and the ice cover (CO2(air−ice), white triangles) and sea ice-seawater TCO2 exchanges (blue triangles), measured in moles for each day and integrated over the whole tank. The dotted line on 26 January marks when the heat was turned back on.]

The number of moles of TCO2 exchanged during this experiment was calculated using the sea ice (and seawater) volume (in m3) and density (in kg m−3). The total amount of TCO2(ice) lost from the ice cover (the difference between TCO2*(ice) and TCO2(ice)) ranged from 0.11 to 6.02 mol (average 2.38 mol; Fig. 9, black dots). The greatest sea ice TCO2 losses occurred on 15-16 January, during initial sea ice growth, and from 23 to 25 January, during ice cooling due to snow removal. The exchange of CO2 between the ice and the atmosphere is known (Fig. 5). The number of moles of CO2 exchanged between the ice and the atmosphere (noted as CO2(air−ice) in Table 2) was calculated using the time step between each flux measurement, the ice thickness and the density. During sea ice growth, 0.01-0.42 mol of CO2 was released from the ice-covered pool to the atmosphere. During sea ice melt, uptake of atmospheric CO2 by the ice-covered pool ranged from −0.15 to −0.93 mol (Fig. 9, white triangles). On average, over the duration of the experiment, the ice cover released 0.08 mol of CO2 to the atmosphere. Assuming we know how much ikaite is contained within the ice cover (Fig. 8b), we can estimate how much TCO2 is exported from the ice to the underlying seawater (Fig. 9, blue triangles) by subtracting the air-ice CO2 exchange and the ikaite precipitation from the total reduction of TCO2(ice) observed within the ice cover (Fig.
9, black dots).The sea ice-seawater TCO 2 export ranged from 0.2 to 3.98 mol (average = 1.7 mol), confirming that sea ice primarily exports TCO 2 to the underlying seawater.TCO 2 export from the ice to the water column ranged from 23 % of the total sea ice TCO 2 early in the ice growth (14 January) to 100 % after the onset of melt.These estimations are comparable to the study of Sejr et al. (2011), who suggested that sea ice exports 99 % of its total TCO 2 to the seawater below it.On average over the whole experiment, sea ice exported 1.7 mol of TCO 2 to the underlying seawater (Fig. 9), which corresponds to a TCO 2(sw) increase of 43.5 µmol kg −1 considering the average sea ice thickness and density during the experiment and the volume of the pool.However, TCO 2(sw) increased by 115 µmol kg −1 over the whole experiment (Fig. 3b), leaving an increase of 71.5 µmol kg −1 in the TCO 2(sw) that cannot be explained by the sea ice-seawater exchange of TCO 2 .We postulate that as the ice melt advanced, patches of open water that opened at the surface of the pool which were undersaturated compared to the atmosphere (Fig. 3d) imported the additional TCO 2 directly from the atmosphere in the form of CO 2(g) .Considering the pool volume, the 71.5 µmol kg −1 increase of TCO 2(sw) could be explained by an air-sea water CO 2 uptake of 8.5 mmol m −2 d −1 over 3 days of sea ice melt in a 20 % ice-free pool.High air-sea gas exchange rates have been observed over partially ice-covered seas (Else et al., 2011(Else et al., , 2013)).This mechanism is also corroborated by models that account for additional sources of turbulence generated by the presence of sea ice (Loose et al., 2014). The design of the experiment allowed for constrained measurements of inorganic carbon fluxes between sea ice and the water column not possible in a natural environment where large-volume mixing processes alter the underlying seawater, making it more complicated to identify changes.We build a CO 2 budget based only on the sea ice growth phase because only 2 days of data for the melt phase are available, and the experiment stopped while the pool was 20 % ice-free (Rysgaard et al., 2014;Else et al., 2015).The initial seawater (origin point, t = 0) contained 1041 mol of TCO 2(sw) on 11 January, while on the last day of sea ice growth (25 January) the seawater contained 1017 mol of TCO 2(sw) (Table 2), with the difference (24 mol of TCO 2 ) in all likelihood transferred from the water column to the ice cover or the atmosphere.However, the TCO 2 content within the ice cover at the end of the growing phase was 15.6 mol and the ice cover released 3.1 mol of CO 2 to the atmosphere (Table 2).Therefore, 4.9 of the 24 mol of TCO 2 exchanged from the water column is unaccounted for, but may be explained by air-ice CO 2 fluxes.The chamber measurement technique for air-ice CO 2 flux may underestimate the exchange of CO 2 , and the air-seawater CO 2 fluxes are unknown until the ice started to grow (13 January).These missing moles of TCO 2 may also be explained by our assumption of uniform sea ice thickness in the SERF.Using the seawater conditions at the end of the experiment, 1 cm of seawater in the pool contains 4.21 mol of TCO 2 , making it difficult to close our budget. 
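As a sanity check on the sensitivity quoted above, the snippet below reproduces the order of magnitude of the "1 cm of seawater in the pool holds about 4.2 mol of TCO2" figure; the seawater density is an assumed round value.

```python
POOL_AREA_M2 = 18.3 * 9.1          # pool surface area (m2)
SEAWATER_DENSITY = 1025.0          # kg/m3, assumed round value
TCO2_END_UMOL_KG = 2461.0          # TCO2(sw) near the end of the experiment

def tco2_in_layer(thickness_m, tco2_umol_kg=TCO2_END_UMOL_KG,
                  area_m2=POOL_AREA_M2, density=SEAWATER_DENSITY):
    """Moles of TCO2 held in a seawater layer of the given thickness."""
    mass_kg = area_m2 * thickness_m * density
    return tco2_umol_kg * 1e-6 * mass_kg

print(f"{tco2_in_layer(0.01):.2f} mol in a 1 cm layer")  # ~4.2 mol
```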
Potential impact of sea ice growth and ikaite export on the aragonite saturation state of the underlying seawater

The Arctic Ocean is a region where calcifying organisms are particularly vulnerable to ocean acidification, since low temperatures and low salinity lower the carbonate saturation state. As a result, several areas of the Arctic Ocean are already undersaturated with respect to aragonite (Chierici and Fransson, 2009; Yamamoto-Kawai et al., 2009; Bates et al., 2011). This undersaturation is enhanced in winter as the temperature decreases and pCO2 increases as a result of respiration. Calcifying organisms might therefore be most susceptible to the effects of acidification in the winter, corresponding to the annual minimum in the aragonite saturation state (Ωaragonite). Sea ice retreat is thought to enhance the impact of ocean acidification by freshening and ventilating the surface water (Yamamoto-Kawai et al., 2008; Yamamoto et al., 2012; Popova et al., 2014). However, any understanding of the effect of ikaite precipitation in sea ice on ocean acidification is still in its infancy (e.g., Fransson et al., 2013).

Since the discovery of ikaite precipitation in sea ice (Dieckmann et al., 2008), research on its impact on the carbonate system of the underlying seawater has been ongoing. Depending on the timing and location of this precipitation within sea ice, the impact on the atmosphere and the water column in terms of CO2 transport can be significantly different (Delille et al., 2014). Dissolution of ikaite within melting sea ice in the spring, and export of the related high-TA:TCO2 meltwater from the ice to the water column, will decrease the pCO2 and increase the pH and Ωaragonite of the surface layer seawater. Accordingly, during sea ice melt, an increase of Ωaragonite in the surface water in the Arctic was observed (Chierici et al., 2011; Fransson et al., 2013; Bates et al., 2014). However, it was difficult to ascribe this increase to the legacy of excess TA in sea ice, ikaite dissolution or primary production.

The impact of ikaite precipitation on the surface seawater during sea ice growth is less clear. Fransson et al. (2013) suggested that, in winter in the Amundsen Gulf, the release of brine decreased Ωaragonite by 0.8 at the sea ice-seawater interface as a result of ikaite precipitation within sea ice and the related CO2 enrichment of brine. Conversely, during ice melt, Ωaragonite increased by 1.4 between March and May, likely due to both calcium carbonate dissolution and primary production. This contrasts with the present experiment. Figure 10 shows the evolution of Ωaragonite and pH in the water column derived from TA(sw) and TCO2(sw) and the evolution of Ωaragonite and pH predicted solely from salinity changes (i.e., using TA*(sw) and TCO2*(sw), noted as Ω*aragonite and pH*). We used the CO2sys_v2.1.xls spreadsheet (Pierrot et al., 2006) with the dissociation constants from Goyet and Dickson (1989) and all other constants from DOE (1994). This shows the complexity of ikaite and its impact on the carbonate system of the underlying water.
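The saturation-state calculation described above (TA and TCO2 in, Ωaragonite and pH out) can be reproduced with any CO2-system solver. The sketch below uses the third-party PyCO2SYS package as a stand-in for the CO2sys_v2.1.xls spreadsheet, with that package's default constants rather than the Goyet and Dickson (1989) set used here, and with placeholder input values.

```python
# pip install PyCO2SYS  (third-party solver, used here in place of the Excel CO2sys)
import PyCO2SYS as pyco2

# Placeholder seawater values of the right order for this kind of experiment.
result = pyco2.sys(
    par1=2449.0, par1_type=1,   # total alkalinity (umol/kg)
    par2=2347.0, par2_type=2,   # total dissolved inorganic carbon (umol/kg)
    salinity=33.0,
    temperature=-1.5,           # deg C
    pressure=0.0,               # dbar (surface)
)

# Output keys below follow PyCO2SYS naming; if in doubt, inspect result.keys().
print("Omega_aragonite:", round(float(result["saturation_aragonite"]), 2))
print("pH (total scale):", round(float(result["pH_total"]), 3))
```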
During ice growth, sea ice brine rejection appears to increase both pH (from 8.00 to 8.06) and aragonite (from 1.28 to 1.65) of the underlying seawater, offsetting the effect of decreased temperature.A slight increase of aragonite was predicted due to increased salinity and a proportional increase of TA and TCO 2 as depicted in * aragonite .However, the effect of ikaite rejection and subsequent changes in TA strongly enhance the increase of aragonite .Therefore, ikaite rejection from sea ice has a much stronger potential to increase aragonite than brine rejection during fall and winter sea ice growth, suggesting ikaite exported to seawater from sea ice may hamper the effect of oceanic acidification on aragonite in fall and in winter at the time when aragonite is at its minimum (Chierici and Fransson, 2009;Yamamoto-Kawai et al., 2009;Chierici et al., 2011).Ice formation may therefore delay harmful effects of ocean acidification on calcifying organisms by increasing aragonite in the critical winter period when aragonite reaches its minimal values.As a corollary, ice removal acts to alleviate the effect of ikaite rejection and may therefore lower aragonite .This calls for an accounting of under-ice ikaite rejection in modeling predictions on the consequences of Arctic Ocean acidification in the context of northern hemispheric annual multiyear sea ice loss, as increased summer open water will lead to more firstyear sea ice formation in fall and winter in the future. Conclusion We quantified the evolution of inorganic carbon dynamics from initial sea ice formation to its melt in a sea ice-seawater mesocosm pool from 11 to 29 January 2013.Based on our analysis of TA and TCO 2 in sea ice and seawater, the main processes affecting inorganic carbon within sea ice are ikaite precipitation and CO 2 exchange with the atmosphere, while in the underlying seawater, dissolution of ikaite was the main process affecting the inorganic carbon system. During this experiment, sea ice exchanged inorganic carbon components (e.g., CO 2 , ikaite, TCO 2 ) with both the atmosphere and the underlying seawater.During sea ice growth, CO 2 was released to the atmosphere, while during ice melt, an uptake of atmospheric CO 2 was observed.We re-port ikaite precipitation of up to 167 µmol kg −1 sea ice, similar to previous estimates from Rysgaard et al. (2014) based on microscopically observed values.In the underlying seawater, a net increase of nTA (sw) over the whole experiment was observed (up to 128 µmol kg −1 ), suggesting that a portion of the ikaite crystals precipitated within sea ice was exported to the underlying seawater and then dissolved as the ice cover evolved in time.Ikaite export from ice to the underlying seawater was associated with brine rejection during sea ice growth, increased sea ice vertical connectivity due to the upward percolation of seawater and meltwater flushing during sea ice melt.Rysgaard et al. 
(2007) suggested that ikaite precipitation within sea ice could act as a significant sink for atmospheric CO 2 ; however to act as a sink for atmospheric CO 2 , ikaite crystals must remain in the ice structure while the CO 2 produced by their precipitation is expelled with dense brine rejection and entrained in deep seawater (Delille et al., 2014).TA changes observed in the water column once the sea ice started to melt indicate that more than half of the total amount of ikaite precipitated in the ice during the sea ice growth remained in the ice until the sea ice began to melt.Derivation of air-sea CO 2 fluxes related to the sea ice carbon pump should take into account ikaite export to the underlying ocean during sea ice growth, which might reduce the efficiency of oceanic CO 2 uptake upon sea ice melt.As sea ice melts, ikaite is flushed downward out of the ice along with the meltwater. Ikaite export from sea ice and its dissolution had a strong impact on the underlying seawater.In this semi-closed system, sea ice growth increased the seawater salinity, TA (sw) and TCO 2(sw) .In spite of these increases, the pCO 2 of the underlying seawater remained undersaturated compared to the atmosphere.We conclude that ikaite dissolution within the water column is responsible for the seawater's continual pCO 2 undersaturation.In addition, we discuss that dissolution of ikaite crystals exported from sea ice in the underlying seawater can potentially hamper the effect of oceanic acidification on aragonite in fall and in winter in ice-covered areas at the time when aragonite is smallest. Data availability Data are available upon request from the authors.and the ARC cake club.The authors are grateful to the anonymous reviewers and to the editor whose comments greatly improved the quality of the manuscript. Edited by: L. Kaleschke Reviewed by: four anonymous referees Figure 1 . Figure 1.The Sea-ice Environmental Research Facility (University of Manitoba, Winnipeg, Canada) with thin sea ice covering the pond during the 2013 experiment.Photo: J. Sievers. Figure 2 . Figure 2. Evolution of (a) air temperature ( • C) at 2 m height, (b) snow thickness (black shaded areas) and sea ice/seawater temperature ( • C), (c) bulk ice salinity, (d) brine volume content within sea ice and (e) seawater temperature (blue) and salinity (green).Measurements were performed at 30, 100, 175 and 245 cm water depths.The darker the color is, the closer to the surface.In panels (b-d) sea ice thickness is illustrated by black dots.Stars on panel (b) represent the depth at which the temperature profiles are derived from.Open squares in the lower part of (d) mark the sampling times.The dashed line on panel (e) indicates when the heat at the bottom of the pool was turned back on. Figure 3 . Figure 3. Evolution of (a) TA (sw) and TA *(sw) (µmol kg −1 ), (b) TCO 2(sw) and TCO * 2(sw) (µmol kg −1 ), (c) nTA (sw) (black) and nTCO 2(sw) (green) (µmol kg −1 ) and (d) the seawater pCO 2 (µatm) measured in situ (black) and corrected to a constant temperature of −1 • C (blue).In panels (a) and (b) the black line is the average over the three depths, while the dotted red line is the expected concentrations according to the variation of salinity observed and calculated from the mean values of the three depths (TA * (sw) and TCO * 2(sw) , respectively).The vertical black dotted line on 26 January marks when the heat was turned back on. Figure 5 . Figure 5. 
Air–ice CO2 fluxes (mmol m−2 d−1). Positive air–ice CO2 flux means outgassing from the ice, and negative CO2 flux means uptake of atmospheric CO2. The vertical black dotted line on 26 January marks when the heat was turned back on.

Figure 6. (a) Relationship between nTCO2 and nTA (µmol kg−1) in bulk sea ice (white hexagons) and seawater (black dots). (b) Zoomed-in image of the seawater data. The different dotted lines represent the theoretical evolution of the nTA and nTCO2 ratio following the precipitation/dissolution of calcium carbonate and release/uptake of CO2(g). A linear regression is shown in green for the ice samples (a) and blue for the seawater samples (b).

Figure 7. Evolution of (a) TA(ice) averaged throughout the ice thickness on each sampling day (black dots) and TA*(ice) (dashed red line) (µmol kg−1) and (b) TCO2(ice) averaged throughout the ice thickness on each sampling day (black diamonds) and TCO2*(ice) (dashed red line) (µmol kg−1). (c) Estimation of the ikaite precipitation/dissolution from half of the difference between TA*(ice).

Figure 8. Evolution of (a) ikaite dissolution within the water column (in µmol kg−1), (b) mass of ikaite dissolved in the underlying seawater (blue) and mass of ikaite precipitated in sea ice (black) estimated from this study and estimated from Rysgaard et al. (2014) (white). The vertical black dotted line on 26 January marks when the heat was turned back on.

Figure 9. Total amount of TCO2 lost from the ice cover (black dots), amount of CO2 exchange between the atmosphere and the ice cover (CO2 air–ice, white triangle) and sea ice–seawater TCO2 exchanges (blue triangle) measured in moles for each day, integrated over the whole tank. The dotted line on 26 January marks when the heat was turned back on.

Figure 10. Evolution of (a) Ω_aragonite in the water column, calculated based on TA(sw) and TCO2(sw) (black dots) and calculated based on TA*(sw) and TCO2*(sw) (dashed red line), and (b) pH in the water column calculated based on TA(sw) and TCO2(sw) (black dots) and calculated based on TA*(sw) and TCO2*(sw) (dashed red line).

Table 2. Masses of TCO2 in the water column
2018-12-29T20:28:07.765Z
2016-09-21T00:00:00.000
{ "year": 2016, "sha1": "2e046505ff4cbc2fa15b334fea80211d3425fb22", "oa_license": "CCBY", "oa_url": "https://www.the-cryosphere.net/10/2173/2016/tc-10-2173-2016.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a9c0ad621650442eb68309279433fa060f151635", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geology" ] }
72955440
pes2o/s2orc
v3-fos-license
GENITAL HAILEY-HAILEY DISEASE : A CASE REPORT Deeptara Hailey Hailey disease is a rare autosomal dominant acantholytic disorder, previously not reported from Nepal. We report a case of 30 years old female who presented with pruritic hyperkeratotic papules and plaques on vulva, perianal area and inner left thigh for a period of one year. Biopsy from the lesion showed suprabasal acantholysis with loss of intercellular bridges resulting in a dilapidated brick-wall appearance; characteristic of Hailey Hailey disease. Treatment of this disease till date is far from satisfactory. Introduction Hailey-Hailey disease is a rare autosomal dominant acantholytic disorder.It is characterized clinically by a recurrent eruption of vesicles and bullae at the sites of friction and intertriginous areas.Histopathology is diagnostic of Hailey-Hailey disease.We present a case with an atypical presentation involving vulva, previously not reported from Nepal. Case Report A 30 year old female presented in the Department of Dermatology, Nepal Medical College and Teaching Hospital, with pruritic hyperkeratotic plaques on vulva, perianal area and inner left thigh for a period of one year.She initially had itching on the vulvar area.A month later, she noticed hyperkeratotic small raised lesion in vulva, which over three months, coalesced to form bigger lesions and spread to perianal area and inner left thigh.She experienced increase itching during friction and sweating but it did not aggravate during menstruation or stress.She denied history of similar disease in her family.She neither took medical advice nor medication prior to coming to our department.Local examination revealed violaceous to brownish irregular hyperkeratotic papules and plaques of different sizes, with well defined margin on lower one third of bilateral labia majora which extended to involve perianal area.There was also a plaque near medial aspect of inner left thigh near groin area (Fig. 1).Systemic examination showed no other abnormality.Biopsy revealed characteristic features of Hailey-Hailey disease (Fig. 
2) showing large separation of detached stratum malpighii cells with loss of their intercellular bridges (acantholysis) in the suprabasal portions. The detached epidermis showed a dilapidated brick-wall appearance, which was consistent with Hailey-Hailey disease. An immunofluorescence test could not be done in our setting due to unavailability. The patient was treated with oral doxycycline, topical clobetasol propionate 0.05% cream and tacrolimus 0.1% ointment. A follow-up examination one month later revealed marked clinical improvement, and the patient was then continued on topical tacrolimus.

Discussion

Hailey-Hailey disease, also known as familial benign chronic pemphigus, was first described in 1939. It is an autosomal dominant acantholytic disorder which clinically presents as recurrent, painful or pruritic, fragile vesicles and erosions in intertriginous areas involving the axillary folds, groin, submammary region and neck folds [1]. Patients mostly present with symptoms during the second or third decade of life and suffer from chronic, relapsing outbreaks [2]. Our patient presented atypically, with vulvar involvement and hyperkeratotic plaques rather than the characteristic vesicles or erosions. The literature reports lesions in atypical sites, such as a symmetrical distribution limited to the upper chest and anterior aspects of the upper arms and neck [3], erythroderma [4], the conjunctivae [5] or the mucosae [6,7]. Triggering factors such as friction, heat, sweating, constrictive clothing, physical trauma, stress and menstruation have been implicated. Our patient also had exaggeration of symptoms during sweating and friction. Characteristic histopathological examination shows widespread suprabasal acantholysis with loss of intercellular bridges, which results in a dilapidated brick-wall appearance, and a similar picture was also seen in our patient. Recent studies have shown that Hailey-Hailey disease results from mutations in the ATP2C1 gene, which encodes the Ca2+/Mn2+-ATPase protein 1 (hSPCA1) localized to the Golgi apparatus [1]. Keratinocytes carrying ATP2C1 mutations have deficient Ca2+ signaling, with dysregulated sorting and glycosylation of desmosomal proteins, giving rise to epidermal defects in skin lesions. In patients with Hailey-Hailey disease, a total of 98 ATP2C1 mutations have been reported worldwide. Linkage analysis has localized the gene locus to chromosome 3q21-q24 [8]. Colonization and secondary infection with bacterial, fungal or viral microorganisms are known to be associated with Hailey-Hailey disease. Squamous cell carcinoma, although rare, can also occur [9]. The frequency of exacerbations may be decreased by wearing lightweight clothing and avoiding activities that result in sweating or skin friction. Treatment options include topical antimicrobials, topical steroids and intralesional steroids. Systemic therapy includes oral antimicrobials, and a few case reports describe the use of cyclosporin, acitretin and methotrexate. Surgical methods such as dermabrasion and CO2 or erbium-YAG laser vaporization, as well as 5-aminolevulinic acid photodynamic therapy and botulinum toxin, have been used with success. Surgical management with wide local excision of affected skin folds has a high complication rate. Refractory Hailey-Hailey disease may benefit from local electron-beam therapy [3,11-14].
Conclusion

This is a very rare disease, and no case has been reported from Nepal to date. Histopathology is an important tool for diagnosing Hailey-Hailey disease. Given the relapsing and remitting course of the disease, effective treatment options are needed in the future to improve the quality of life of patients with Hailey-Hailey disease.

Figure 1. Multiple hyperkeratotic papules and plaques on the vulva and near the groin area.

Figure 2. Detached epidermis showing the characteristic dilapidated brick-wall appearance.
2018-12-11T06:13:54.519Z
2013-01-02T00:00:00.000
{ "year": 2013, "sha1": "b9f6e0214d2ff2a755739bef7bc70707db2929c7", "oa_license": "CCBY", "oa_url": "http://www.odermatol.com/wp-content/uploads/file/2013%201/19_Genital%20Haile-Hailey-Thapa%20DP.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b9f6e0214d2ff2a755739bef7bc70707db2929c7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
103199793
pes2o/s2orc
v3-fos-license
Urinary excretion as a function of uranium concentration in bladder cancer patients using kinetic phosphorimetry analyzer Measurement of urinary excretion is most commonly used as a method of assessing internal contamination due to insoluble nuclides. The pulsed-laser Kinetic Phosphorimetry Analyzer (KPA-11 with analysis range 0.01μg/L up to 50mg/L for Uranium) has been used to determine the Uranium concentration (Uc) in urine samples of three groups of persons; Bladder cancer patients, Healthy and Infants (with age less than two month) and their mothers. The range of Uc excreted in all subjects have been found to be 0.735–3.876μgl-1 with an overall average of 2.00038 μgl-1, 0.856–1.042μgl-1 with an overall average of 0.9464 μgl-1 and 0.505–0.979μgl-1 with an overall average of 0.7742 μgl-1, respectively. The obtained results illustrated that there is statistically significant correlation between Uc and residential area. The obtained Mean values of Uc of the different groups were found to be approximately proportional to age up to 50 years. A noticeable drop is observed for subjects greater than 50 years old. A synthetic urine analysis was chosen in this study to proclaim any concern over biohazards, such as acidity of urine, increasing Amorphous urate, Uric acid, and decreasing Mucous. Introduction Diagnosis of bladder cancer may occur by urine tests, cancer cells, and other signs of disease [1]. A risk factor of bladder cancer is anything that increasing the chance of getting a disease such as smoking, which can be changed by leaving this habit. Others, such as a person's age or family history, can't be changed [2].The risk of bladder cancer increases with age, and about 9 out of 10 people with bladder cancer are older than 55. It is much more common in men than in women. Urinary infections, kidney and bladder stones, bladder catheters left in place a long time, and other causes of chronic bladder irritation have been linked with bladder cancer (especially squamous cell carcinoma of the bladder), but it's not clear if they actually cause bladder cancer [3]. There are many studies indicated that radiation exposure is one of that carcinogenic reasons of bladder cancer or any type of cancer [4][5][6].The radiation effect may be appear in the exposed person; directly or, take up along time [7]. Evidence of injury from low or moderate doses of radiation may not show up for months or even years. For leukemia, the minimum time period between the radiation exposure and the appearance of disease (latency period) is 2 years. For solid tumors, the latency period is more than 5 years [8]. Till now, the misuse of forbidden weapons in Gulf war I and II caused environmental pollution with uranium in wide areas of Iraq, which lead to different diseases, children malformations, leukemia, 2 1234567890 ''"" The Sixth Scientific Conference "Renewable Energy and its Applications" IOP Publishing IOP Conf. Series: Journal of Physics: Conf. Series 1032 (2018) 012026 doi : 10.1088/1742-6596/1032/1/012026 and cancers [9]. Besides, Mc Diarmid et al. [10] found evidence of neurocognitive impairment in Gulf War veterans who had retained fragments of Depleted Uranium (DU) shrapnel, although these individuals showed little evidence of impaired kidney function. Hindin et al. [11] made a review about the teratogenic effects on animal and human. They concluded that the weight of evidence is consistent with an increased risk of birth defects in the progeny that exposed to DU. 
Accidental or chronic exposure to DU occur in military use of DU munitions in combat at the 1991 and 2003 Gulf Wars [12]. At the last decade in Iraq, the steady increasing in morbidity cases such as cancers, children with birth defects and abortion of pregnant women have been noted [13][14][15][16].Uranium, principally, represents an internal radiation hazard. The risks associated with Uranium in the body are both chemical and radiological [6,17]. The damage caused by ionizing radiation from radionuclide transformations is a result of the energy absorbed by body tissues [18]. Heavy metals compounds, like carbonate/ bicarbonate compounds, e.g. [UO 2 (CO 3 ) 2 ]2, may cause a number of cytotoxic effects. These compounds are stable at a neutral pH value (pH of blood) and in this form are not very reactive, but the highly reactive uranyl ion UO 2 +2 is released at low pH values (as urine) [19]. The effects of the radionuclides vary with many parameters, specifically its pathway, before reaching the target. Uranium can travel through the air, water (both groundwater and surface water), soil and through the food chain. The radionuclides may enter the human body by ingestion, inhalation and through the skin, and then played the internal exposure [20]. Moreover, the effect of radionuclides is depending on biokinetic of them inside the body. The behavior of a substance in the body, such as intake, uptake, distribution, excretion, and retention is called biokinetics [21]. These biokinetics are depending on the properties of Uranium compound such as bio-solubility. However, the Gastrointestinal (GI) tract absorption of Uranium at environmental levels is about (1%) [22].The other property of Uranium is biological half-life (T b ), which is the time was taken for the amounts of a particular element in the body to decrease to half its initial value due to elimination by biological processes alone, and hence some of radionuclides can be excrete by urine and/ or feces. This means that a radioactive atom can be expelled before it has had the chance to decay. For a number of radioisotopes of particular medical interest, the rate of excretion has been specified in the form of an effective biological half-life. The overall elimination half-life of Uranium under conditions of normal daily intake has been estimated to be between 180 and 360 days in humans [23]. A synthetic urine analysis has been chosen in this study to proclaim any concern over biohazards. Monitoring for occupational incorporation of Uranium and Thorium are usually carried out by urinary excretion analysis. Uranium is poorly absorbed by the digestive tract and most of the absorbed Uranium is eliminated in the urine. Thus, urine is considered to be the best sample for the detection of excessive intake of Uranium [24].Measurement of urinary excretion is most commonly used as a method of assessing internal contamination due to insoluble nuclides. Analysis of human urine has revealed the presence of various nuclides such as Plutonium, Polonium, Uranium, Carbon-14, Strontium, Barium, Tritium, etc. [25]. The purpose of this research is to get answers of the questions: is the increasing of Uranium concentration (Uc) in urine represents a bio marker indication to get cancer or is a main cause of the cancer reasons?, is there any correlation relation between Uc in urine sample and its bio marker analyses?, and finally, is there any variance between patient and control groups due to their age and home location?. 
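The relation between the physical, biological and effective half-lives mentioned above is worth making explicit; the sketch below applies the standard combination rule 1/T_eff = 1/T_phys + 1/T_bio. The biological half-life used in the example is simply the midpoint of the 180–360-day range quoted above, not a value measured in this study.

```python
def effective_half_life(t_physical: float, t_biological: float) -> float:
    """Effective half-life from physical and biological half-lives (same units)."""
    return (t_physical * t_biological) / (t_physical + t_biological)

# U-238 decays so slowly (~4.47e9 years) that the effective half-life in the
# body is governed almost entirely by biological elimination.
T_PHYS_DAYS = 4.468e9 * 365.25   # physical half-life of U-238 in days
T_BIO_DAYS = 270.0               # midpoint of the 180-360 day range cited above

print(f"Effective half-life: {effective_half_life(T_PHYS_DAYS, T_BIO_DAYS):.1f} days")
```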
Sample Collection Urine samples were collected from different group of persons as illustrated in the table 1. The patient group participators were mostly from all districts of Baghdad governorate, from all age groups, and with both genders. The age of control group is asymptotic to that for patient, while the age of the infant is just a few days. 2.1.Samples for biological test The samples those taken for a biological test and from the entire groups that shown in table 1were kept cool until the test was carried out. The sample was taken from the adult person with urine cup contains 60 ml, then was put in cooling box until reach the bio-laboratory, but in the case of the infant, the sample gathering by urine bag and also transfers in cooling box. 2.2.Samples for radiological test The urine samples for a radiological test from the same participators in table 1don't need stay cooling, and they were accumulated to be 500-1000 ml for almost 24 hour. Complete 24hour urine samples give better precision than spot samples in estimating Uc at these low levels, but presented more logistic difficulties in the collection [26]. In other wards these samples were taken in another day, because of all patient are visitors not residents. Total 24 hour urine samples were collected in polyethylene bottles, and immediately acidized by the addition of 1ml concentrated HCl for each sample to avoid precipitate formation. 3.1.Biological analyses It could be done for all groups of participators in table 1 to study the relation between the cancer incidence probability and bio marker analysis of the urine. The biological tests are fixing in tables 2. Radiological analyses It could be done for all groups of participators in table 1 to study the relation between the cancer incidence probability and Uc measurements. In general, execration of Uc in urine of healthy person was limited by several international agencies, for example World Health Organization (WHO), to about100 ng/L [27]. Therefore, Uc measurements in urine need a detecting system with low minimum detectable activity. Elements in environmental sample are presented in very small amount and their measurements required reliable and sensitive analytical technique. Environmental levels of Uranium in human excreta are highly variable, depending on Uc in air, food and water, and on the health of the individual [28]. Analytical techniques used as part of a radiation protection program should provide limits of detection comparable to or below these levels, in order to differentiate between environmental and occupational Uranium exposures. Concentrations of various elements in different samples were determined using many analytical techniques, whereas, Uranium content as been measured in various foods and drink samples using Gamma-Ray Spectrometry, but this technique needs to use of special food and liquid calibration standards [29]. Neutron Activation Analysis and radiochemical separation can use to estimate Uc in some biological samples [30]. Photometric techniques such as fluorometry and phosphorometry can use to measure Uc with very good resolution and sensitivity. KPA-11 spectrometer, which is one of the updated phosphorometry techniques, has been used to evaluate Uc in urine samples of the present work. KPA-11 provides highly precise and accurate measurements that eliminate the need for internal standards. The advantages of this technique are the extremely low detection limit (10 ng/L), and only required pretreatment of samples. 
KPA-11 is a bench-top instrument that rapidly performs singlemeasurement, manufactured by Chemchek Instruments (Richland, USA). This model is equipped with a pulsed nitrogen/dye laser to supply monochromatic ultraviolet light to excite Uranium atoms in the sample solution. It is a fully integrated computerized system for data collection and analysis. Chemchek's KPA Win software controls the KPA-11 along with storing and interpreting the analytical data returned from the KPA [31]. The Data Processing Questionnaire data, related to all participators included in table 1, had been done as fixed in table 3. They were taken from participators and checked for validity and consistency with hospital computer files of them. Identities were stored separately from questionnaire data, which were indexed by an anonymous unique participant number. In order to stand about all the effected parameters on the bladder cancer incidence probability, Uc and biological measurements have been gathered with this questionnaire and studied against each other. The Statistical Analysis System-SAS (2012) program was used to study the effect of factors differences in studied parameters [32]. Chi-square test was used to significant compare between percentage and T-test instead of Least Significant Difference -LSD test (because of the comparison is between two groups) was used to significant compare between Mean of the studied parameters. Table3. The questionnaire of all participators of study. 5.1.Results of biological analyses T-test was used to significant compare between the Mean of each bio marker of urine, related to the patient, healthy (control) and infants with their mothers, who participated in this study. The comparison showed that there are high significant differences between patients and controls in pH, Amorphous urate, Mucous, Uric acid, Epithelial and Pus cell. Also a significant difference appeared in other bacteria, as shown in table 4. The comparison between control participants and infants with their mothers showed no significant differences as illustrated in table 5. However, the bio markers of urine are really considered as indicators on healthiness states, and the infants with their mothers considered as a reference to healthiness states. A comparison among all groups of participants was showed in figure 1. It's clear that the uric salts (Amorphous urate, Calcium oxalate and Uric acid) increase in urine as much as the pH decreasing for all groups, and the infants with their mothers had minimum amount of these salts and highest pH. Furthermore, other bacteria had the same behavior, while Epithelial and Pus cell in the second and third groups had disconcerted behavior due to sample related to women in all most cases. Results of Radiological Analyses The Mean of the measured Uc in urine samples of patient and control participants had been compared significantly using T-test. The comparison showed there is high significant difference between participants as in table 6. Additionally, the comparison between control and infants with their mothers showed a high significant difference as in table 7. Figure 2 illustrates a comprehensive comparison between the studied groups and the international organizations; World Health Organization (WHO) and International Commission on Radiological Protection (ICRP). Publication 23 of ICRP listed the values 0.04-0.4 µg/L for urinary Uc execration [28]. 
WHO reviewed data from the early 1990s suggesting that urinary Uc execration in the general population ranged from about 0.04 to 0.57 µg/L [27]. However, comparison shows that Mean of Uc of patient has the highest value against all other groups, while Mean Uc of infants with their mothers is the lowest value, but it is higher than the permissible recommended values of WHO and ICRP. The correlation coefficients between measured Uc and the biological parameters were shown in table 8. The results showed a reverse correlation coefficient with high significant between Uc and pH, Mucous and Epithelial, while a positively proportional correlation coefficient with high significant with Amorphous urate, and a non-significant correlation coefficient with respect to the other parameters. Results of Questionnaire Data Chi-square test was used to significant comparison between percentages of distribution of sample which related to patient character according to Questionnaire that fixed in Bladder cancer patients' distribution with their Mean of Uc according to home location is shown in table 10. The home location with symbols 2, 8 and 9 registered highest numbers of cases with highest Mean of Uc, with taking into consideration that almost the patients living in home location since 10-15 years or from the born. In spite of that City center (symbolized with 1) registered 7cases, but their Mean of Uc is lower although the high population density of this location (Mansour, Ghazaliya, Al-Shurtah, Juafir, Muasalat, America and Hafa Street) against others. However, the archives of the patients in the Al Jwad center for tumor treatment demonstrated that the registered cases, for all types of cancer, got from home location with symbol 2 represent 30-40% of all other home locations. Also Al-Amal National Hospital for Cancer Management registered cancer cases with 30-40% from home location with symbols 8 and 9. To emphasis the results that obtained from table 10, a comprehensive comparison had been made among all participants of studied groups, where control group (included healthy and infant and their mothers), with respect to home location as shown in figure 3-a. The Uc of participants of control group in home location (1) is lower than in other home locations (2, 8 and 9). Moreover, bladder cancer patients' distribution with their Mean of Uc according to age groups is shown in table 11. Age group larger than 50 years registered highest number of cases but with lower Mean ± SE of Uc, while the other age groups registered lower number of cases but with highest Mean ± SE of Uc. This result can be explained as fallow: the archives of the patients in the hospital that registered cases with bladder cancer refer to largest number of cases in age 25-50, but they couldn't survive firstly; because of highest Uc contaminated in their body that attributed for many reasons like type of working, food consumed, etc. . Secondly, from table 9, where the period of illness was (1)(2)(3)(4)(5) years and as the period increase the probability of death increase, where cases with period of 1 year represent highest percentage in distribution of studied samples. To emphasis the results that deduced from table 11 a comprehensive comparison according to age had been made among all participants of study, where control group (included healthy and infant and their mothers), as shown in figure 3-b. 
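The group comparisons described above (t-tests on mean Uc and biomarker values, chi-square tests on the questionnaire percentages) can be reproduced with standard statistical libraries. The sketch below uses scipy; the arrays and the contingency table are dummy stand-ins, not the study data, which are summarized only as means and ranges in the tables.

```python
import numpy as np
from scipy import stats

# Dummy stand-ins for urinary uranium concentrations (ug/L).
patients = np.array([2.1, 1.8, 3.2, 0.9, 2.5, 3.8, 1.4])
controls = np.array([0.9, 1.0, 0.86, 0.95, 1.02, 0.88])

# Welch's t-test on the group means (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(patients, controls, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Chi-square test on a 2x2 contingency table, e.g. home location vs. group.
contingency = np.array([[7, 15],    # hypothetical counts: patients in location A vs. B
                        [20, 10]])  # hypothetical counts: controls in location A vs. B
chi2, p_chi, dof, expected = stats.chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, p = {p_chi:.4f}, dof = {dof}")
```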
Conclusions

The results of this work strongly suggest that elevated Uc in bladder cancer patients is one of the main factors behind the increasing probability of bladder cancer incidence in the Iraqi population. The second important conclusion is that Uc in the urine samples of the control group and of the infants and their mothers is elevated relative to the limits recommended by international agencies such as the WHO and ICRP, which indicates elevated Uc in the Iraqi environment and, therefore, increasing health hazards in the population. Other noteworthy conclusions can also be drawn from this work. Decreasing urine pH is accompanied by increasing uric salts, especially amorphous urate, which is considered a strong indicator of renal failure in particular and of urinary-system disorders in general. Additionally, increasing urine acidity, which accompanies elevated Uc, leads to an increased incidence rate of bladder cancer. Finally, home locations 2, 8 and 9 suffer from the highest Uc pollution.
2019-04-09T13:11:01.129Z
2018-05-01T00:00:00.000
{ "year": 2018, "sha1": "06cffe1459b90b267147f82fb9bf3adf3ea8e0d4", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1032/1/012026", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "d2506eb0b20c45787580065fc91903130ee03e82", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Physics", "Chemistry" ] }
261415367
pes2o/s2orc
v3-fos-license
Modeling Side Chains in the Three-Dimensional Structure of Proteins for Post-Translational Modifications Amino acid substitutions and post-translational modifications (PTMs) play a crucial role in many cellular processes by directly affecting the structural and dynamic features of protein interaction. Despite their importance, the understanding of protein PTMs at the structural level is still largely incomplete. The Protein Data Bank contains a relatively small number of 3D structures having post-translational modifications. Although recent years have witnessed significant progress in three-dimensional modeling (3D) of proteins using neural networks, the problem related to predicting accurate PTMs in proteins has been largely ignored. Predicting accurate 3D PTM models in proteins is closely related to another fundamental problem: predicting the correct side-chain conformations of amino acid residues in proteins. An analysis of publications as well as the paid and free software packages for modeling three-dimensional structures showed that most of them focus on working with unmodified proteins and canonical amino acid residues; the number of articles and software packages placing emphasis on modeling three-dimensional PTM structures is an order of magnitude smaller. This paper focuses on modeling the side-chain conformations of proteins containing PTMs (nonstandard amino acid residues). We collected our own libraries comprising the most frequently observed PTMs from the PDB and implemented a number of algorithms for predicting the side-chain conformation at modification points and in the immediate environment of the protein. A comprehensive analysis of both the algorithms per se and compared to the common Rosetta and FoldX structure modeling packages was also carried out. The proposed algorithmic solutions are comparable in their characteristics to the well-known Rosetta and FoldX packages for the modeling of three-dimensional structures and have great potential for further development and optimization. The source code of algorithmic solutions has been deposited to and is available at the GitHub source. Introduction Amino acid substitutions and post-translational modifications (PTMs) are critical to the function of many proteins in living systems, and understanding their effects at the molecular level is important for both basic and applied research in biology and medicine [1,2].Post-translational modifications of proteins, such as phosphorylation, acetylation, methylation, carboxylation, and hydroxylation, play a key role in cell ontogeny [3,4].For example, PTMs play an important role in regulation of enzyme activity, protein transport, and changing of protein stability [5,6].Non-enzymatic PTMs, such as carbonylation and oxidation, often occur as a consequence of oxidative stress and are considered a ubiquitous mechanism for non-specific protein damage associated with age-related disorders, including neurodegenerative diseases, cancer, and diabetes mellitus [7,8].It is important to note that amino acids often undergo significant changes in their physicochemical properties upon modification, which sometimes dramatically alters the structure of the affected protein and its dynamics and ability to interact with the environment and other proteins [3,9]. 
One of the key challenges in modeling 3D protein structures for amino acid substitutions and post-translational modifications is predicting the correct conformations of amino acid side chains in proteins, also called "packing" [10].Most of the currently available side-chain packing methods can be roughly divided into two large groups. The first group is the protein physics-based approaches that involve searching within a given sample space, often defined by a library of predefined rotamers.A rotamer (short for "rotational isomer") is a single side-chain conformation represented as a set of values, one for each degree of freedom of the dihedral angle.The side chains of proteins usually exist in a limited number of low-energy conformations, and these conformations are contained in rotamer libraries.Rotamer libraries typically contain information about the conformation, the frequency of a particular conformation, and the variance of dihedral mean values that can be used in searches or sampling.One of the most famous and frequently used libraries today is the Dunbrack library [11].This group of methods looks at the problem from a physicochemical point of view and tries to optimize the interactions between side chains, avoiding steric collisions and minimizing the overall energy of the system. The second group uses machine learning methods to reconstruct amino acid side chains.These methods use deep neural networks or neural network ensembles to model the position of side chains [12][13][14][15].Some solutions use a combination of machine learning and rotamer library space search to determine the optimal side-chain conformation.A number of solutions use neural networks to find optimal side-chain scoring functions and use these functions to search for side-chain conformations in the rotamer library [16]. All methods for predicting side-chain conformations show good results for canonical amino acid residues, but for non-canonical amino acid residues (PTMs), there exists a practical problem hindering progress in this area.The problem is that the Protein Data Bank (PDB, https://www.rcsb.org/,accessed on 5 June 2023) contains significantly less data on PTM residues than on canonical amino acid residues.For comparison, while the number of residues of canonical amino acids is measured in millions, the number of residues modified by a particular type of PTM is in the best-case scenario measured in thousands of units and on average hundreds or even tens.This amount is not enough for training neural network models or building rotameter libraries with full-fledged statistical potential.This explains the relatively small number of solutions for the incorporation and packaging of post-translational modifications into the 3D protein structure.Rosetta and FoldX are the most famous and widespread packages currently providing PTM modeling and repacking. In this study, we consider a number of algorithms for choosing the optimal position of side chains from an ensemble of rotamers for protein structures with PTMs.The algorithms are evaluated for a large test set of proteins, and their performance is compared with that of the well-known Rosetta and FoldX protein structure modeling packages.We also discuss the advantages and drawbacks of the algorithms and point out possible improvements and extensions to our methods. 
Results We carried out a comprehensive analysis aimed to evaluate the performance of algorithms purposed for the modeling and reconstructing of PTMs and canonical amino acid residues in three-dimensional protein structures: • Monte Carlo Markov Chain (MCMC) sampling (rotamer) using rotamer libraries.Dunbrack rotamer libraries were used for canonical amino acid residues, and proprietary libraries were assembled for five common post-translational modifications. • Monte Carlo Markov Chain (MCMC) sampling (off-rotamer): This algorithm allows side-chain torsion angles to go beyond the values of the rotamer library.The rotamer library is used only to control the degree of changes in angles. • Generative algorithm (GA-rotamer) is an evolutionary search algorithm with initialization of the initial population from the rotamer library. • Generative algorithm (GA-random) is an algorithm with initialization of the initial population from a uniform distribution.The rotamer library is not used in this algorithm. A detailed description of these algorithms is available in Section 4. We also compared outcomes obtained by these algorithms and the well-known modeling services Rosetta and FoldX.Since our work is more focused on the prediction of side-chain conformations caused specifically by PTMs and their neighborhoods, to achieve satisfactory quality, we took a set of high-resolution (≤1.5 Å) PDB structures (total 100 structures) carrying each type of considered PTM function (complete list of advised set of structures is available in Supplementary Table S3). The evaluation algorithm was built as follows: a.All side chains were removed from the PDB structure.b.All side chains were restored, and side chains were repackaged within a radius of 10 Å from the mutation point using the algorithms described before.c.For the restored structure, the quality indicators provided by the MolProbity service [1] (Table 1), RMSD indicators, and torsion angle were calculated for the comparison with the original structure.A similar algorithm was used to assess performance with the Rosetta and FoldX packages: the side chains were recovered and metrics were calculated for the recovered structures.Since the FoldX software package does not support the PTM part, the corresponding positions in the tables and plots are not filled. The MolProbity service was chosen to control integrity characteristics of the restored structures and provides metrics for the assessing of the quality of structures (Table 1).Hydrogen atoms were added to and possible inversions of the side chains of asparagine, glutamine, and histidine were recognized and accepted. Result comparisons between the in-house algorithms and Rosetta or FoldX were consequently handled using the MolProbity service to elucidate the quality of calculated structures (Figure 1).We also determined typical deviations in the structures of amino acid residues for each algorithm and established those residues where deflection incidents were the most frequent. We defined such residuals with deviations as "marginal" if such residuals matched one of the following provisions: • Abnormally closely located atoms; • Going beyond the allowable values of the Ramachandra map; • Abnormal angles or out of angles of the rotamers. 
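Step (c) of the evaluation above relies on comparing side-chain torsion (χ) angles and RMSD between the rebuilt and the original structures. A minimal, library-free sketch of the χ-angle part is given below; the atom coordinates are assumed to be already extracted (e.g., with Biopython), and the angular difference is wrapped so that, for example, 179° and −179° count as 2° apart.

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion angle (degrees) defined by four 3D points (e.g., a chi angle)."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    b0, b1, b2 = p0 - p1, p2 - p1, p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1     # components perpendicular to the central bond
    w = b2 - np.dot(b2, b1) * b1
    x, y = np.dot(v, w), np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))

def chi_mae(predicted_deg, reference_deg):
    """Mean absolute error between torsion-angle sets, wrapped to [-180, 180)."""
    diff = (np.asarray(predicted_deg) - np.asarray(reference_deg) + 180.0) % 360.0 - 180.0
    return float(np.mean(np.abs(diff)))

# Example with made-up angles (degrees):
print(chi_mae([-65.0, 178.0, 62.0], [-70.0, -178.0, 55.0]))
```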
The marginal amino acid residues identified across the many structures in the test data set were extracted, and the deviations, classified by PTM type and canonical amino acid for each algorithm, were estimated and ranked (Figure 2) together with an average calculated RMSD (Table S1 in the Supplementary Materials). It is also interesting to look at the comparison of mean absolute errors (MAEs) of the torsion angles χ across the different methods (Table S2 in the Supplementary Materials).

Gathering the obtained data, we found that the tested in-house algorithms demonstrated results comparable to those of the well-known Rosetta and FoldX packages, and for some PTMs they were better. Some outliers in the MolProbity indicators could be observed for PHE, ASN and ARG residues. However, these outliers are typical of all the considered algorithms and software packages, which may indicate that the reference model in MolProbity imposes excessive quality requirements.

If we compare the speeds of these algorithms, it should be noted immediately that the FoldX software package takes more computing time than all the other algorithms. A comparison of the operation speed is presented in Figure 3. This plot reflects MCMC (rotamer) for the MCMC algorithms and GA (random) for the GA algorithms, since the speeds within each group are approximately the same.

The following conclusions can be drawn from the presented comparative data.

1. The best results in our study, in terms of both accuracy and processing speed, were demonstrated by the Rosetta software package. This was expected, since Rosetta is one of the leading molecular modeling packages and is widely used by researchers around the world. According to the published documentation [17], Rosetta also uses the MCMC algorithm inside its software implementation, and the difference in performance apparently depends only on the selected scoring function.
2. The FoldX software package also generally shows good results, but its speed is much slower than that of all the algorithms considered. In addition, FoldX only supports two PTMs (SEP and TPO), so we could not fully evaluate its results.
3. The MCMC algorithm with sampling from the rotamer library shows good results, close to those of Rosetta, and even better for some PTMs.
4. The results of the MCMC off-rotamer algorithm are slightly worse but still acceptable. If we thoroughly analyze the results provided by this algorithm, we can observe that in some cases its performance is better than that of the other algorithms, but no regular pattern could be identified.
5. The results of the genetic algorithms, despite the fact that their performance in general turned out to be worse than that of all the others, surprised us. The interesting point here is that the GA initialized with random numbers from a uniform distribution works better than the GA initialized from the rotamer library. This makes it possible not to use rotamer libraries at all for identifying the optimal position of side chains and still obtain results with quite acceptable accuracy, which is especially important for rare non-canonical amino acid residues. If we analyze the results of the GA algorithms in detail, we can observe a picture similar to that for the MCMC off-rotamer: some structures are determined better compared to other algorithms, while some are worse.
In general, the results of the GA runs are unstable, but it seems to us that these algorithms show great promise for solving this problem.

We assume that genetic algorithms have great potential for the modeling of three-dimensional protein structures, although they are still rarely applied in this realm. We also noticed that as the resolution of protein structures increases, the accuracy of all algorithms, including Rosetta Packer and FoldX, drops dramatically (Figure S1 in the Supplementary Materials), while the accuracy of the GA increases markedly. This can be caused by the fact that the electron density in structures with a low resolution and poor quality is closer to the posterior distribution of the rotamer libraries used for sampling. The genetic algorithm initialized from a uniform distribution does not use rotamer libraries, and for structures with good resolution its predictions are closer to the experimental data.

Currently, we are working on the following main challenges:

1. Improving the overall accuracy of the genetic algorithm. According to our preliminary studies, a noticeable improvement in accuracy can be achieved using the particle swarm optimization (PSO) [3] approach, where the elements of the search space (in our case, atoms of amino acid residues) interact without centralized coordination.
2. Reproducibility of results. Since genetic algorithms are inherently heuristic, the stability of their results is not guaranteed. To ensure stable reproducibility, we are working toward the integration of GAs and neural networks, where neural networks implement the functions of the genetic operators and evaluation functions.
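To make the population-based search concrete, a minimal genetic-algorithm sketch is given below. It follows the GA (random) variant described above only loosely: individuals are vectors of side-chain χ angles initialized from a uniform distribution, fitness is the negative of a user-supplied clash/energy score, and the selection, crossover and mutation operators are the simplest textbook versions. Function and parameter names are ours, not the authors'.

```python
import random

def run_ga(n_angles, energy_fn, pop_size=64, generations=200,
           mutation_sigma=15.0, seed=0):
    """Minimal GA over chi-angle vectors (degrees); lower energy_fn is better."""
    rng = random.Random(seed)
    wrap = lambda a: (a + 180.0) % 360.0 - 180.0

    def random_individual():
        return [rng.uniform(-180.0, 180.0) for _ in range(n_angles)]

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=energy_fn)          # best (lowest energy) first
        parents = scored[: pop_size // 2]                   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_angles) if n_angles > 1 else 0
            child = a[:cut] + b[cut:]                       # one-point crossover
            child = [wrap(x + rng.gauss(0.0, mutation_sigma)) for x in child]
            children.append(child)
        population = parents + children
    return min(population, key=energy_fn)

# Toy energy: prefer angles near -60 degrees (stand-in for a real clash score).
toy_energy = lambda chis: sum((c + 60.0) ** 2 for c in chis)
print(run_ga(n_angles=2, energy_fn=toy_energy))
```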
Discussion

We developed a solution for building a library of rotamers for PTMs and any non-canonical amino acid residues present in the Protein Data Bank. We also implemented and conducted a comparative analysis of the following algorithms for side-chain reconstruction and "repacking":

1. Monte Carlo Markov Chain (MCMC) sampling (rotamer) using rotamer libraries. Dunbrack rotamer libraries were used for canonical amino acid residues, and proprietary libraries were assembled for five common post-translational modifications.
2. Monte Carlo Markov Chain (MCMC) sampling (off-rotamer), which allows side-chain torsion angles to go beyond the values of the rotamer library; the library is used only to control the degree of change in the angles.
3. Genetic algorithm (GA-rotamer), an evolutionary search algorithm with initialization of the initial population from the rotamer library.
4. Genetic algorithm (GA-random), initialized from a uniform distribution; the rotamer library is not used in this algorithm.

As one can see in the diagram shown in Figure 4, the accuracy of the GA increases with population size. As the population size rises, the speed of the algorithm decreases simultaneously, mainly because of the multiplied computational cost of assessing the fitness of each individual. This problem is solved well by parallelizing the evaluation task over several CPUs. The ability to parallelize computations is one of the important advantages of genetic algorithms.

Among the shortcomings of genetic algorithms is the relative instability of the search for solutions: the results may differ each time the algorithms are run. This problem is inherent in all heuristic search and optimization algorithms and can potentially be solved by integrating genetic algorithms and neural networks (neuro-genetic networks). We will continue our research in this direction and will present new results in this area in future papers.

Rotamer Library

Our solution implements the functionality of building a rotamer library for any amino acid residue present in the PDB (https://www.rcsb.org/). To build a library, one needs to indicate the code of the amino acid residue and the possible torsion angles that groups of atoms can form. The code of the residue is searched across the PDB, and a library of rotamers is formed, consisting of a set of low-energy conformers and their associated internal energies generated using CREST [18]. For canonical amino acid residues, the Dunbrack library [19] was used, and for the most common PTMs, our own rotamer libraries were collected from the PDB (available at https://figshare.com/s/253d32313e1294fbf1e2, accessed on 5 June 2023). Multiple entries in the same PDB file were treated as different entries. Our solution allows one to build a library for any PTM of non-canonical amino acid residues. For the PTMs presented in Table 2, testing and debugging of the algorithms for finding the optimal conformation of the side chains of both the modifications themselves and their neighbors (repacking) were carried out.
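In its simplest form, a rotamer library of the kind described above amounts to collecting χ angles for a given residue code across many structures and binning them into discrete rotameric states with their frequencies. The sketch below assumes the χ angles have already been measured (for example, with the dihedral routine shown earlier) and ignores backbone dependence, internal energies and all the bookkeeping a production library needs.

```python
import math

def chi1_bin(angle_deg: float) -> str:
    """Assign a chi1 angle to one of three coarse wells (near -60, +60, 180 degrees)."""
    a = (angle_deg + 180.0) % 360.0 - 180.0
    if -120.0 <= a < 0.0:
        return "near -60"
    if 0.0 <= a < 120.0:
        return "near +60"
    return "near 180"

def build_library(chi1_angles_deg):
    """Return {rotamer: (frequency, circular_mean_deg)} from observed chi1 angles."""
    groups = {}
    for a in chi1_angles_deg:
        groups.setdefault(chi1_bin(a), []).append(a)
    total = len(chi1_angles_deg)
    library = {}
    for name, angles in groups.items():
        # circular mean avoids averaging 179 and -179 to 0
        s = sum(math.sin(math.radians(a)) for a in angles)
        c = sum(math.cos(math.radians(a)) for a in angles)
        library[name] = (len(angles) / total, math.degrees(math.atan2(s, c)))
    return library

# Toy input standing in for chi1 angles harvested from the PDB for one residue code:
print(build_library([-62.0, -58.0, 175.0, -179.0, 63.0, -60.0]))
```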
Side-Chain Modeling and Repacking

In single-site mutants and closely related proteins, the backbone usually changes little, and a prediction of the target structure can be made by accurately predicting the positions of the side chains. When modeling mutations, it is important to model not only the changes at the mutation point itself but also the changes in the conformations of neighboring side chains (i.e., to perform local "repacking" of neighboring side chains). In this article, we describe and compare three modeling and repacking algorithms.

Markov Chain Monte Carlo (MCMC) Sampling from the Rotamer Library

This classic method uses Markov Chain Monte Carlo (MCMC) sampling to repack all amino acid residues within a user-specified radius using a rotamer library. The algorithm is the most common variant for solving problems of this kind; it has been described quite well in [20] and is used in many libraries and software products, such as Rosetta. Markov Chain Monte Carlo sampling can be described as follows:

1. The user defines the number of selection steps and the neighborhood radius from the mutation point (by default, R = 10.0 Å).
2. At each sampling step, a site is randomly selected from within the user-defined radius. For the given site, the dihedral angles of the side chain and the average deviation of each angle are randomly selected from the rotamer library.
3. The step is accepted or rejected using the Metropolis-Hastings criterion [21] based on the energy function. A clash evaluation function based on a flat-top Lennard-Jones potential energy is used as the evaluation function in our algorithms. The interaction energy in this function consists of repulsive and attractive van der Waals terms and is defined as:
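The expression itself does not appear in this text. As an illustration only, and not necessarily the exact functional form used by the authors, a 12-6 Lennard-Jones term whose repulsive wall is capped ("flattened") below a short-range cutoff can be written as:

```latex
E_{ij}(r) =
\begin{cases}
\varepsilon_{ij}\!\left[\left(\dfrac{\sigma_{ij}}{r_c}\right)^{12} - 2\left(\dfrac{\sigma_{ij}}{r_c}\right)^{6}\right], & r < r_c,\\[2ex]
\varepsilon_{ij}\!\left[\left(\dfrac{\sigma_{ij}}{r}\right)^{12} - 2\left(\dfrac{\sigma_{ij}}{r}\right)^{6}\right], & r \ge r_c,
\end{cases}
```

where σ_ij is the interatomic distance at the energy minimum, ε_ij is the well depth, and r_c is the cutoff below which the energy is held constant; capping the repulsive wall in this way keeps clashing rotamers from producing numerically unbounded energies during sampling.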
Markov Chain Monte Carlo Sampling outside the Rotamer Library

This method implements an algorithm for selecting side-chain conformations with deviations from the canonical dihedral angles of fixed rotamer libraries. The sampling algorithm is as follows:
1. The user defines the number of sampling steps and the neighborhood radius (by default, R = 10.0 Å).
2. At each sampling step, a site is randomly selected within the user-defined radius. For the selected site, the dihedral angles of its side chain and their average deviations are randomly drawn from the rotamer library. The new dihedral angle values of the side chain are then obtained by sampling from the von Mises distribution [24], with the center equal to the dihedral angle in the rotamer library and a concentration inversely proportional to the squared deviation. This can be formally described as

f(x | µ, k) = exp(k cos(x − µ)) / (2π I_0(k)),

where µ is the mode, k is the concentration (k = 1/σ², with σ the standard deviation from the rotamer library), and I_0 is the modified Bessel function of order 0. The von Mises distribution (also known as the circular normal distribution) is a continuous probability distribution on a circle. By applying additional sampling from the von Mises distribution, we can expand the search space of rotamers, which is especially valuable for rotamers with low statistical potential, such as PTMs. As in the first algorithm, the step is accepted or rejected using the Metropolis-Hastings criterion [21] based on the energy function.

Modeling Using a Genetic Algorithm

Genetic algorithms are a family of search algorithms whose ideas are based on the principles of natural evolution. Genetic algorithms implement a simplified version of Darwinian evolution:
• Variability — the characteristics of individuals that are part of the population may change;
• Heredity — some traits are consistently transmitted from an individual to its descendants;
• Natural selection — better-adapted individuals are more successful in the struggle for survival and leave more offspring in the next generation.

In our work, we considered a variant of solving the problem of finding the optimal side-chain conformation ("repacking") for a PTM or amino acid substitution and its neighboring regions within a user-specified radius using a genetic algorithm. We decided to analyze this approach for two reasons:
1. Genetic algorithms are rarely used to solve this problem. According to our hypothesis, they can show good results, especially for amino acid residues with a small statistical potential in rotamer libraries, owing to the greater variability of solutions generated by mutation and crossing.
2. Genetic algorithms have a number of advantages over traditional search and optimization algorithms:
• ability to perform global optimization;
• applicability to problems with complex mathematical representation;
• resistance to noise;
• support for parallelization and distributed processing.

The proposed genetic algorithm for solving the problem of finding the optimal conformation is described in the following sections.

Creating the Initial Population

The initial population is a set of individuals, each represented by a set of chromosomes (a sequence of dihedral angles). The dihedral angles used to create the population are either randomly selected from a library of rotamers or drawn from a uniform distribution over the range (−π, π). The method of specifying the initial population is determined by the user. When evaluating the algorithm's efficiency, we consider both options for forming the initial population.
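Both the off-rotamer sampling step above and the population initialization can be illustrated with a short sketch based on NumPy's von Mises sampler. The function names and the toy library values are assumptions for illustration, not the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_chi(mu, sigma):
    """Draw a new dihedral (radians) from a von Mises distribution centred on the
    library rotamer mu, with concentration kappa = 1 / sigma**2."""
    kappa = 1.0 / (sigma ** 2)
    return rng.vonmises(mu, kappa)

def init_individual(n_angles, library=None):
    """One individual = a vector of dihedrals, taken either from a rotamer library
    entry (with von Mises jitter) or uniformly from (-pi, pi)."""
    if library is None:
        return rng.uniform(-np.pi, np.pi, size=n_angles)
    mus, sigmas = library  # arrays of library mean angles and standard deviations
    return np.array([perturb_chi(m, s) for m, s in zip(mus, sigmas)])

# Toy usage: a two-angle "residue" with made-up library values.
lib = (np.array([np.radians(-65.0), np.radians(180.0)]), np.array([0.15, 0.2]))
print(init_individual(2, lib))   # rotamer-based initialization
print(init_individual(2))        # random initialization
```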
Selection

Individuals are selected from the current population in such a way that preference is given to the best ones. Selection is performed at the beginning of each cycle of the algorithm, when the individuals that will become parents of the next generation are chosen from the population. Selection is probabilistic in nature, and the probability of choosing an individual depends on its fitness. In our solution, a "tournament" selection method is used:
1. k randomly selected individuals from the population participate in each round of selection.
2. The individual with the higher fitness wins and is selected to form the next generation.
3. The process is repeated until the number of "parents" equals the population size.

The number of individuals participating in each round of the tournament (the parameter k) is called the tournament size. The larger the tournament size, the higher the chances that the best representatives of the generation will participate in the rounds, and the less likely it is that individuals with low fitness will win a tournament and qualify for the next generation. In our solution, we set the tournament size to 1/20 of the population size.

Furthermore, we use an elitism strategy when selecting and forming the population. The elitism strategy transfers a certain percentage of the best individuals directly to the next generation. Thus, it guarantees to a certain extent that the best individuals will not disappear from the solution due to mutation and crossing. In our solution, we carry over the top 15% of individuals to the next generation.

Fitness Function

The clash evaluation function based on the flat-top Lennard-Jones potential energy (Equation (1)) was also used as the fitness function of an individual in the population.
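The tournament selection and elitism strategy described above might be combined roughly as in the following sketch, where lower fitness is taken as better to match an energy-like score. The helper names and the toy fitness are illustrative assumptions.

```python
import random

def tournament_select(population, fitness, k):
    """Pick the best of k randomly chosen individuals (lower fitness = better)."""
    contenders = random.sample(range(len(population)), k)
    best = min(contenders, key=lambda i: fitness[i])
    return population[best]

def next_generation(population, fitness, elite_frac=0.15, tournament_frac=0.05):
    """Carry over the best elite_frac individuals unchanged, then fill the rest of
    the generation with tournament winners (before crossing and mutation)."""
    n = len(population)
    k = max(2, int(round(tournament_frac * n)))          # tournament size ~ population / 20
    order = sorted(range(n), key=lambda i: fitness[i])
    elites = [population[i] for i in order[: max(1, int(elite_frac * n))]]
    parents = [tournament_select(population, fitness, k)
               for _ in range(n - len(elites))]
    return elites + parents

# Toy usage: individuals are lists of dihedral angles, fitness = sum of |angle|.
pop = [[random.uniform(-3.14, 3.14) for _ in range(3)] for _ in range(20)]
fit = [sum(abs(a) for a in ind) for ind in pop]
print(len(next_generation(pop, fit)))
```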
Crossing and Mutation

In classic genetic algorithms, chromosomes are usually described by binary or integer representations, and the crossing and mutation operators are defined over sets of integers or binary numbers. In our algorithm, chromosomes represent dihedral angles and are described by real numbers. Therefore, we use special crossing and mutation methods adapted to work with real numbers. It is also important to note that, since the chromosomes are torsion angles in our case, we must ensure that the angle values lie within the interval (−π, π).

The mutation operator introduces random variation into the new generation. It applies to the offspring produced as a result of selection and crossing. The mutation operation is probabilistic and is typically applied quite rarely, since it can degrade the quality of an individual and lead to degeneration of the genetic algorithm into a random search. In our algorithm, the default mutation rate is set to 0.15 and is user-configurable. As a mutation operator in genetic algorithms with real coding, a sample is drawn from a distribution that keeps the offspring in relative proximity to the parents. Our solution implements two types of mutations, applied with equal probability:
1. Mutation by a sample from the von Mises distribution, with a center equal to the value of the angle in the chromosome and a concentration inversely proportional to the squared deviation. The deviation σ is either taken from the rotamer library, if the initial population was formed from the rotamer library, or randomly selected from the uniform distribution (0, k); in the latter case, crossing and mutation operations are also performed on the value of σ.
2. Mutation using an operator in which the distribution density is given by a polynomial function [25]. The range of values of the polynomial density function is confined to the interval (−π, π).

The generalized scheme of the described algorithms is shown in Figure 5.

Conclusions

Amino acid substitutions and post-translational modifications (PTMs) are essential to the function of many proteins in organisms. One of the challenges in modeling 3D protein structures for amino acid substitutions and PTMs is predicting the correct conformations of amino acid side chains. To help research in this area, we developed a modular modeling library that allows one to build one's own libraries of rotamers for standard and non-standard amino acid residues, as well as to model side-chain conformations using various methods.

Figure 2. Comparison of algorithm results for MCMC (rotamer), MCMC (off-rotamer), GA (rotamer initialization), GA (random initialization), Rosetta Packer, and FoldX. (A) Types of the most common marginal amino acid residues with examples from the test set (PDB ID: 6s1b A-180 VAL, 5n3q A-141 PRO, 5m2f X-23 SER). (B) Distribution of marginal amino acid residues by test set. (C) Distribution of mean RMSD values between original and reconstructed structures.

Figure 3. Average running time of algorithms, depending on the number of residues in the repacking area (GA population size = 300, number of generations = 40). CPU: AMD Ryzen 5 4000. For the MCMC algorithms, this plot reflects MCMC (rotamer), and GA (random) for the GA, since the speeds within each group are approximately the same.
(…) MolProbity score) for PTM O-phosphotyrosine. The experiment involved 20 PDB structures containing O-phosphotyrosine; the population was increased by 100 individuals at each step, and 10 technical repetitions were performed at each step.

Table 1. Structure quality indicators obtained using the MolProbity service.
2023-09-01T15:21:45.002Z
2023-08-30T00:00:00.000
{ "year": 2023, "sha1": "16541caa4a3fb1a985b996c48119cbb890a1da9b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/24/17/13431/pdf?version=1693377457", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2bcbad35a2747693774a912499991f67f11f7bfd", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
97252088
pes2o/s2orc
v3-fos-license
PREPARATION OF BIOACTIVE ISORHAPONTIGENIN DIMERS THROUGH CHEMICAL TRANSFORMATION OF BISISORHAPONTIGENIN A –Two new stilbene dimers, bisisorhapontigenin E ( 2 ) and bisisorhapontigenin F ( 3 ), and two novel cyclooligostilbenes, bisisorhapontigenin G ( 4 ) and 13b-methoxyl bisisorhapontigenin G ( 5 ) were prepared through isomerization reaction of bisisorhapontigenin A with sulfuric acid as a catalyst. Their structures and relative stereochemistry were elucidated on the basis of spectral analysis and their possible formation mechanisms were proposed. The pharmacological activities on anti-inflammation and anti-oxidant of 2 - 4 have been tested. All of them exhibited potent anti-oxidant activities. INTRODUCTION In the past few years, our research group has been engaged in some mimic biosynthesis of oligostilbenes by oxidative coupling reaction using FeCl 3 •6H 2 O, Ag 2 O etc as oxidants. 1,2,3,4Among them, most of the products have a benzofuran-type skeleton.Recently, Takaya et al. reported that acidic isomerization of (+)-ε-viniferin could produce various types of stilbene dimers. 5In order to obtain various stilbene oligomers with special skeletons for pharmacological screening, we picked out bisisorhapontigenin A (1), which was synthesized from isorhapontigenins, 3 to achieve a chemical transformation catalyzed by sulfuric acid.Two new cyclooligostilbenes, bisisorhapontigenin E (2), bisisorhapontigenin F (3), and two novel stilbene dimers, bisisorhapontigenin G (4) and 13b-methoxyl bisisorhapontigenin G (5) (Figure 1) have been obtained.In this paper, we describe the preparation, structural elucidation, plausible formation mechanisms and activities of these new cyclooligostilbenes. Compound (2) was obtained as a brown amorphous powder.Its high resolution FAB-MS m/z 514.1640 (514.1628calcd for C 30 H 26 O 8 ) agreed with a molecular formula of C 30 H 26 O 8 , indicating that 2 was an isorhapontigenin dimer.The UV absorption bands at λ max (log ε) 226 (4.72), 283 (4.17) nm suggested the absence of trans olefinic bond in 2. Its IR spectrum exhibited absorptions of hydroxyl (3400 cm -1 ) and aromatic group (1614, 1514, 1464 cm -1 ).The 1 H NMR spectrum showed the presence of two methoxyls at δ 3.78 (3H, s), 3.63 (3H, s), two sets of ABX system for ring A 1 and ring B 1 at δ 6.96 (1H, d, J=1.5 Hz); 6.75 (1H, d, J=8.7 Hz), 6.67 (1H, dd, J=8.7, 1.5 Hz) and 6.61 (1H, d, J=1.5 Hz), 6.57 (1H, d, J=8.7 Hz), 6.41 (1H, dd, J=8.7, 1.5 Hz), and two sets of meta-coupled protons for ring A 2 and ring B 2 at δ 6.02 and 50.2 ppm in its 13 C NMR spectrum suggested that 2 possessed a similar bicyclo[3.2.1]octane skeleton as the natural compound (+)-ampelopsin F (6). 
5 The difference between them was that the former was an isorhapontigenin dimer, while the latter was a resveratrol dimer.In the HMBC spectrum (Figure 2, a), correlations between H-7a/C-2a, C-6a; H-7b/C-6b, C-9b, C-10a, C-11a; and H-8b/C-1b, C-10b, C-14b, C-10a confirmed the connection pattern of the two isorhapontigenins.Thus, the planar structure of 2 was elucidated as depicted in structure (2) (Figure 1).was similar to that of gneafricanin F (7), 7 having a bicyclo[3.3.0]octaneskeleton.The difference between 3 and 7 was that in the former the two isorhapontigenins were connected by head to head, while in the latter the two units were connected by head to tail.In the HMBC spectrum of 3 (Figure 3, a), long range Thus, the planar structure of 3 was concluded as shown in Figure 1.Therefore, the relative configuration of 4 could tentatively be assigned as structure (4) (Figure 1).According to the isomerization reaction mechanisms of ε-viniferin, 5 the possible formation mechanisms of compounds ( 2), ( 3), ( 4) and ( 5) may be rationalized as follows (Figure 5).The difference of the products is apparently due to the difference of the position of the protonation at the initial stage of the reaction.In the case of path a, the reaction started with protonation of the double bond, followed by cyclization to form the intermediate (A).Subsequently, an acidic protonation of the oxygen atom on the dihydrofuran ring, followed by nucleophilic attack of ring C yielded compound (4); afterwards, a hydroxyl of 4 was connected to a methoxyl by acidic methanol forming compound (5).In the case of path b, an acidic protonation of the oxygen atom on the dihydrofuran ring, followed by nucleophilic attack of the double bond, formed a intermediate (B or C).In the case of B or C, second nucleophilic attack of ring B or ring A and subsequent deprotonation gives product (2) or (3).The pharmacological activities of anti-oxidant and anti-inflammatory of compounds (2-4) have been evaluated.As shown in Table 3, none of them were found to be active on TNF-α.Nevertheless, the inhibitory rates of malondialdehyde (MDA) for compounds (2), ( 3) and ( 4 were 87.42%, 87.63% and 58.78% (as a positive control, the inhibitory rate of MDA for vitamin E at concentration of 1×10 -6 M was 50.60%) respectively.The results suggested that 2, 3 and 4 have potent anti-oxydant activity. General IR spectra were run on a Perkin Elmer 683 infrared spectrometer in KBr pellets.UV spectrum were taken on a Shimadzu UV-300 spectrophotometer.NMR spectra were carried out on AM 500 using TMS as internal standard.FAB-MS were obtained by using an Autospec-Ulma-Tof mass spectrometer and HPLC on Waters 411.Bisisorhapontigenin A (500 mg) was synthesized in previous paper. 3 Biomimetic Synthesis of 2-5 Bisisorhapontigenin A (1, 500 mg) from isorhapontigenin was dissolved in 50 ml anhydrous methanol containing 2.5 ml sulfuric acid.The solvent was refluxed 72 h under stirring, then much water was added. Compound ( 5 ) was obtained as a light brown amorphous powder.Its UV, IR,1 H NMR and 13 C NMR spectra resembled closely to those of 4, suggesting that they have the similar skeletons.The HR-FAB-MS m/z 528.1780 (528.1784calcd for C 31 H 28 O 8 ) suggested the molecular formula of C 31 H 28 O 8 , which indicated that the structure of 5 have one methyl more than that of 4. 
A singlet at δ 3.60 ppm for a methoxyl in the 1 H NMR spectrum of 5, together with the corresponding carbon signal at δ 56.0 ppm further confirmed the above assumption.In the NOE difference experiments, irradiation of the methoxyl signals at δ 3.60 ppm showed enhancement of the H-12b, H-14b signals, suggesting that the methoxyl should be located at C-13b.The relative configuration of 5 (Figure1) was identical to that of 4.Accordingly, the relative configuration of 5 was determined as shown in Figure1.Both4 and 5 belong to a new type of oligostilbenes with a bicyclo[3.3.2]decaneskeleton. Table 1 . Compound (4) was obtained as a brown amorphous powder.It was found to have the molecular formula of C 30 H 26 O 8 by HR-FAB-MS m/z 514.1632 (514.1628calcd for C 30 H 26 O 8 ), suggesting that 4 should be an isorhapontigenin dimer with 18 degrees of unsaturation.The 1 H NMR spectrum of 4 showed the presence of one set of ABX system due to ring A 1 at δ 6.08 (1H, d, J=2.1 Hz), 6.57 (1H, d, J=8.1 Hz), 6.46 (1H, dd, J=8.1, 2.1 Hz), two pairs of meta-coupled aromatic protons due to ring A 2 and ring B 2 at δ 6.30 (1H, d, Table 2 . 1H-and 13 C-NMR Data for 4 and 5 (δ in ppm and J in Hz) *
2019-04-06T00:43:16.568Z
2006-05-01T00:00:00.000
{ "year": 2006, "sha1": "e7394d83e17c0f3a9d35e4e46ff2f1d63840fca8", "oa_license": null, "oa_url": "https://doi.org/10.3987/com-06-10705", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "95bf29410f9e4a7f94cca925d5f3a9c6396b5cac", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Chemistry" ] }
119236192
pes2o/s2orc
v3-fos-license
Exploring the Luminosity Evolution and Stellar Mass Assembly of 2SLAQ Luminous Red Galaxies Between Redshift 0.4 and 0.8 We present an analysis of the evolution of 8625 Luminous Red Galaxies between z = 0.4 and z = 0.8 in the 2dF and SDSS LRG and QSO (2SLAQ) survey. The LRGs are split into redshift bins and the evolution of both the luminosity and stellar mass function with redshift is considered and compared to the assumptions of a passive evolution scenario. We draw attention to several sources of systematic error that could bias the evolutionary predictions made in this paper. While the inferred evolution is found to be relatively unaffected by the exact choice of spectral evolution model used to compute K+e corrections, we conclude that photometric errors could be a source of significant bias in colour-selected samples such as this, in particular when using parametric maximum likelihood based estimators. We find that the evolution of the most massive LRGs is consistent with the assumptions of passive evolution and that the stellar mass assembly of the LRGs is largely complete by z ~ 0.8. Our findings suggest that massive galaxies with stellar masses above 10^11 solar masses must have undergone merging and star formation processes at a very early stage (z>1). This supports the emerging picture of downsizing in both the star formation as well as the mass assembly of early type galaxies. Given that our spectroscopic sample covers an unprecedentedly large volume and probes the most massive end of the galaxy mass function, we find that these observational results present a significant challenge for many current models of galaxy formation INTRODUCTION Luminous Red Galaxies (LRGs) are arguably some of the brightest galaxies in our Universe allowing us to study their evolution out to much higher redshifts than is possible with samples of more typical galaxies. LRGs are known to form a spectroscopically homogenous population that can be reliably identified photometrically by means of simple colour selections (Eisenstein et al. 2001;Cannon et al. 2006). The presence of a strong 4000Åbreak in these galaxies also means that accurate photometric redshifts can ⋆ E-mail: mbanerji@ast.cam.ac.uk be derived for large samples where spectroscopy is difficult to obtain (Padmanabhan et al. 2005;Collister et al. 2007;Abdalla et al. 2008). Consequently, samples of luminous red galaxies have emerged as an important data set both for studies of cosmology (Blake et al. , 2008Cabré & Gaztañaga 2009) and galaxy formation and evolution (Wake et al. 2006;Cool et al. 2008). The study of massive galaxies in our Universe is particularly interesting since these galaxies present a long-standing problem for models of galaxy formation. While in the standard ΛCDM cosmology, massive galaxies are thought to be built up through successive mergers of smaller systems (White & Rees 1978), many observations now suggest that these galaxies were already well assembled at high redshifts (Glazebrook et al. 2004;Cimatti et al. 2006;Scarlata et al. 2007;Ferreras et al. 2009;Pozzetti et al. 2009). At the same time, star formation and merging activity has also been seen in such systems between z ∼ 1 and the present day (Lin et al. 2004;Stanford et al. 2004) and the studies of Bell et al. (2004) and Faber et al. (2007) have found evidence that the luminosity density of massive red galaxies has remained roughly constant since z∼1 implying a build up in the number density of these objects through mergers. 
Most observational results now support downsizing in star formation -i.e most massive galaxies having lower specific star formation rates than their less massive counterparts -and many galaxy formation models are able to reproduce these observations e.g. by invoking quenching mechanisms such as AGN feedback De Lucia et al. 2006). However, the more recently observed trend of downsizing in the mass assembly (Cimatti et al. 2006;Pozzetti et al. 2009) presents more of a challenge for galaxy formation models which typically predict that the most massive galaxies were assembled later than their less massive counterparts. Many of the contradictions in observational studies of massive galaxy formation arise principally for two reasons. Firstly, the way in which early-type galaxies are selected in different surveys can be considerably different. Morphological selection compared to colour selection of such objects will almost certainly result in different samples being chosen, particularly at high redshifts. The colour selection is also usually different for different samples of massive galaxies as we will point out later in this paper. Secondly, many of the studies of massive galaxy formation mentioned so far are deep and narrow spectroscopic samples that will suffer from biases due to the effects of cosmic variance. In addition, one has to be careful in interpreting observational results as in current galaxy formation models, star formation and mass assembly in galaxies are not necessarily concomitant. So while the stars in early-type galaxies may have formed very early in the Universe's history, they may have formed in relatively small units and only merged at lower redshifts to create the massive ETGs we see today. Studies of the evolution of luminous red galaxies such as those analysed in this paper have already supported the evidence that massive galaxies have been passively fading and show very little recent star formation (Wake et al. 2006;Cool et al. 2008). In this paper, we extend this work by also considering the mass assembly of these systems. The 2dF and SDSS LRG and QSO (2SLAQ) survey presents a comprehensive improvement in volume for massive galaxy samples. The survey covers an area of 186 deg 2 and reaches out to a redshift of 0.8 making it a promising data set to study the evolution of massive red galaxies. Furthermore, much of the 2SLAQ area is now covered by the UKIDSS Large Area Survey (LAS) that provides complementary data in the near infra-red bands for the optically selected LRGs. The advantage of near infra-red data is that the mass-to-light ratios and k-corrections are largely insensitive to the galaxy or stellar type and therefore the total infra-red flux in for example the K-band provides a good estimate of the total stellar mass of the galaxies. This stellar mass estimate allows us to study not only the star formation history but also the mass assembly history of these systems. As is the case for optically selected spectroscopic samples of massive galaxies, the K-band selected surveys have so far been restricted to relatively small and deep patches of the sky (Mignoli et al. 2005;Conselice et al. 2007). Clearly a large spectroscopic survey of massive galaxies with optical and near infra-red photometry, will allow for better constraints on the evolution of these systems. In this paper, we utilise a spectroscopic sample of colour-selected massive red galaxies from the 2SLAQ survey between redshifts 0.4 and 0.8. 
Optical photometry is obtained from the Sloan Digital Sky Survey supplemented with near infra-red data from the UKIDSS Large Area Survey (LAS). We consider the evolution of these galaxies in terms of their observed colours, luminosities and comoving number densities focussing particularly on the effects of colour selection on the inferred evolution. Wake et al. (2006) have presented a comprehensive analysis of the luminosity function of Luminous Red Galaxies using data from both the Sloan Digital Sky Survey and the 2SLAQ Survey. These authors use a subset of data from the 2SLAQ survey in the redshift range 0.5 to 0.6 and by comparing this galaxy population to a lower redshift population at 0.17 < z < 0.24 from SDSS, they are able to establish that the LRG LF does not evolve beyond that expected from a simple passive evolution model at these redshifts. Meanwhile, Cool et al. (2008) have compared the low-redshift SDSS LRG population to their own high-redshift sample at redshifts of ∼ 0.9 and once again find little evidence for evolution beyond the passive fading of the stellar populations. We extend the work of these authors by considering now most of the galaxies available in the 2SLAQ data set. This enables the redshift range of 2SLAQ galaxies used to be broadened to 0.4 z < 0.8. The 2SLAQ luminosity function is presented for 8625 LRGs as compared to the 1725 used by Wake et al. (2006). These results are useful in filling the gap between redshift 0.4 and 0.5 and redshift 0.6 and 0.8 in the Wake et al. (2006) and Cool et al. (2008) data sets. The sample is split into four redshift bins and the evolution of the luminosity and colour of LRGs between these redshift bins is considered. In addition, we also consider the stellar mass function and its evolution with redshift. Furthermore, we use various different stellar population synthesis models that are commonly used in the literature, to model the LRGs, including the new models of Maraston et al. (2009) and quantify the sensitivity of the luminosity function estimate to changes in these models. The optical colours of LRGs have long been difficult to model using standard spectral evolution models (Eisenstein et al. 2001) and this problem has only recently been solved (Maraston et al. 2009) thereby allowing us to utilise the most accurate spectral evolution models of LRGs to date in order to infer their evolution. The paper is structured as follows. In § 2 we describe the spectroscopic 2SLAQ data set as well as SDSS and UKIDSS photometry for these galaxies. § 3 describes the spectral evolution models used in this paper to model the LRGs. We consider the optical luminosity function of the 2SLAQ LRGs in § 4 and its evolution with redshift as well as its sensitivity to cosmic variance, photometric errors and changes in the spectral evolution models. In § 5 we present estimates for the LRG mass function in different redshift bins and analyse its sensitivity to the choice of spectral evolution model as well as the IMF. Finally, we discuss our results in § 6 in terms of models of galaxy formation and evolution. Throughout this paper we assume a cosmological model with Ωm=0.3, ΩΛ=0.7 and h=0.7. All magnitudes are in the AB system unless otherwise stated. DATA Our dataset is a spectroscopic sample of Luminous Red Galaxies from the 2dF and SDSS LRG and QSO (2SLAQ) survey (Cannon et al. 2006). This survey was conducted on the 2-degree Field (2dF) spectrograph on the 3.9m Anglo-Australian Telescope. 
The survey recorded spectra for ∼10000 LRGs with a median redshift of 0.55 and ∼10000 faint z < 3 QSOs selected from SDSS imaging data. The survey covers an area of 186 deg 2 and extends to a redshift of ∼ 0.8 for the LRGs. In this section, we provide a description of the 2SLAQ data as well as optical and near infra-red photometry for these galaxies obtained using the SDSS and UKIDSS LAS (Smith et al. 2009) respectively. In the 2SLAQ survey, the following colour and magnitude cuts have been applied using SDSS DR4 photometry in order to isolate the main LRG population to target with spectroscopy (Cannon et al. 2006). where g, r and i denote the dereddened SDSS model magnitudes, i deV is the i-band deVaucouleurs magnitude and Ai is the extinction in the i-band. The deVaucouleurs magnitudes are obtained by fitting a pure deVaucouleurs profile to the 2D galaxy images. The definitions of both these types of magnitudes as well as other SDSS parameters can be found at http://www.sdss.org/dr4/algorithms/photometry.html as well as Stoughton et al. (2002). These cuts are for the primary sample defined as Sample 8 in Cannon et al. (2006) which has the highest completeness and this is the sample that we use throughout this paper. M stars are expected to make up ∼5% of the sample but can be excluded by their low redshifts. In addition, as will be discussed later in the paper, photometric errors could scatter objects both in and out of the sample across the colour selection boundaries and lead to some contamination. Roseboom et al. (2006) have conducted a detailed study of the star formation histories of this 2SLAQ LRG sample and find that 80% of the objects, which represents the vast majority, are passive in nature as would be expected from early type systems. The rest are mainly emission line galaxies. Note also that the colour cuts applied in order to isolate LRGs are usually different for different surveys. For example the colour cuts used to isolate LRGs in the SDSS LRG sample (Eisenstein et al. 2001) results in a distribution that is redder than that of the 2SLAQ galaxies. If these different LRG samples are to be compared therefore, further cuts must be applied to them to ensure a consistent colour selection as in Wake et al. (2006). In the case of the 2SLAQ LRGs, the effect of the d ⊥ cut is to select early-type galaxies at increasingly high redshifts whereas the c cut eliminates late-type galaxies from the sample. Most of the sample has redshifts of 0.2 < z < 0.8 with ∼5% contamination from M-type stars (Cannon et al. 2006). In order to capture the main redshift distribution of this primary sample, only galaxies with 0.4 z < 0.8 are used in this work. Note, however that most of the galaxies lie above a redshift of 0.45 and our conclusions remain unchanged if we choose the lower redshift limit of our sample to be 0.45 instead. Furthermore, we have also selected only galaxies with a redshift quality flag of greater than 2. This results in a sample of 8625 LRGs and the redshift completeness for this sample is 76.1% (David Wake:private communication). All LRGs in our sample have secure spectroscopic redshifts. In Figure 1, we plot the LRGs in the observed (g-r) versus (r-i) plane in redshift bins of width 0.1. Isolating bright galaxies only with M ideV < −22.5 and plotting them in the same colour-colour plane shows clearly how the red sequence is truncated in the lowest redshift bin due to the redshift dependant colour selection. 
The effect of this truncation on the luminosity function estimate will be described in more detail in § 4. In order to obtain accurate stellar masses for the LRGs, the 2SLAQ data is matched to near infra-red data from the UKIDSS Large Area Survey (LAS) DR5 which has a depth of KV ega=18.4. The galaxies are matched to within 1.1 arcsecs in position using the WFCAM science archive 1 and only galaxies with detections in the K-band are selected. Out of our total sample of 8625 2SLAQ LRGs, we have K-band data for 6476 of them which represents ∼75% of the sample. The UKIDSS K-band magnitudes are in the Vega system and have been corrected to the AB system using the corrections of Hewett et al. (2006). Photometric Errors Wake et al. (2006) have conducted a detailed analysis of the effect of photometric errors on the 2SLAQ LRG sample. These photometric errors could induce a systematic bias in the sample by scattering galaxies both in and out of the sample across the colour selection boundaries. For i < 19.3, the effect of the photometric errors is relatively insignificant and the colour-magnitude distributions of LRGs selected using single-epoch photometry and those selected using multi-epoch photometry are almost identical (Wake et al. 2006). Fainter than this however, the photometric errors may present a significant systematic bias to the inferred colour and luminosity evolution of LRGs in this survey. Note that the magnitude errors inferred using the single-epoch data are found to be systematically underestimated. The effect of these photometric errors on the luminosity function estimate is considered in § 4.2.1. It turns out that different estimators of the luminosity function have different sensitivity to the photometric errors and this point is addressed in more detail in Appendix A. . The (g-r) versus (r-i) colours of 2SLAQ galaxies in the observed frame in four redshift bins -0.4 z < 0.5 (top left), 0.5 z < 0.6 (top right), 0.6 z < 0.7 (bottom left) and 0.7 z < 0.8 (bottom right). The dark crosses correspond to all galaxies in that redshift bin whereas the light dots show the brightest galaxies with M ideV < −22.5. K+E CORRECTIONS In order to calculate the rest-frame properties of LRGs and infer their evolution, the observed properties need to be transformed into the rest-frame by means of a k-correction. In addition, one can also correct for any evolutionary changes expected in the galaxy spectra by means of an e-correction. Motivated by the work of Padmanabhan et al. (2005), we start by using a Pegase (Fioc & Rocca-Volmerange 1997) template to model the spectral evolution of the LRGs. In this template, the stars are formed in a single burst 11Gyr ago (z ≃ 2.5). We assume a Salpeter (1955) IMF, solar metallicity and no galactic winds or substellar objects. However, it has been noted by several authors that stellar population synthesis models such as Pegase and those of Bruzual & Charlot (2003) fail to reproduce the observed colours of LRGs (Eisenstein et al. 2001;Maraston 2005). Improvements to these models have subsequently been made that involve changes to the input stellar libraries as well as including an additional metal-poor sub component to the stellar population in order to create a composite model (Maraston et al. 2009). Such models predict significantly bluer (g-r) colours compared to simple stellar population models and are better able to reproduce the observed broadband colours of individual LRGs. 
For this reason, a Maraston model of age 12Gyr corresponding to a galaxy mass of 10 12 M⊙ is also considered in this work. Finally, we consider the models of Charlot & Bruzual 2007(Bruzual & Charlot 2003Bruzual 2007) -CB07 hereafter -with a range of different metallicities and star formation histories in order to quantify the systematics associated to the different prescriptions, isochrones and stellar libraries of the population synthesis models. All the CB07 templates assume a formation redshift of zF = 3 and a Chabrier (2003) IMF. K and k+e corrections in the r-band derived from each of these models, are illustrated in Figure 2. In addition, kcorrections are also derived from stacked LRG spectra from the SDSS LRG survey (Eisenstein et al. 2001) at redshifts 0.2 and 0.5. These are very similar to each other suggesting that there is little or no colour evolution in the SDSS LRGs between redshifts of 0.2 and 0.5. As the 2SLAQ LRG spectra are not flux calibrated, a similar derivation of the kcorrections using these was not possible. Although the SDSS LRGs are selected using different colour cuts, it is still in-teresting to compare their k-corrections to the ones derived using the various spectral evolution models and it can be seen that the empirical k-corrections best match those derived from a Maraston model. For this reason, the composite 12Gyr model from Maraston et al. (2009) has been used throughout this paper unless otherwise stated. We can also see from the right-hand panel of Figure 2 that the ecorrection is very sensitive both to the star-formation history of the galaxy as well as the metallicity as galaxies with some residual star formation as well as those at high metallicity have more negative e-corrections. In Figure 3 we also plot the rest-frame (g-r) and (r-i) colours of LRGs derived using the different spectral evolution models as a function of redshift. Once again it is seen that the Maraston model is able to reproduce the colours derived from SDSS composite spectra almost exactly unlike the Pegase model. However, we note that unlike in Maraston et al. (2009), we find that including the evolutionary correction to the Maraston models, creates a mismatch in the colours of the model and the SDSS composite spectra. The SDSS observed colours are found to be redder than the evolving Maraston model. We have already noted that the SDSS LRG colour distribution is redder than that of the 2SLAQ galaxies used in Maraston et al. (2009) for comparison to their models so this difference is perhaps to be expected. It should also be noted that the observed colours derived from the spectra are derived from the fiber magnitudes of the galaxies. As the fibers only enclose light within a 3" aperture and it is well known that early type galaxies have colour gradients (Vader et al. 1988;Franx & Illingworth 1990;Ferreras et al. 2005), this could be why the colours appear redder but it is unlikely that this effect would be significant especially at redshifts of 0.5. Finally, the redshift 0.5 LRGs in the SDSS survey are the brightest and therefore expected to be the reddest galaxies in this survey (Bernardi et al. 2005;Gallazzi et al. 2006). All these factors may help explain the discrepancy in the observed and model colours derived using the evolving Maraston model. 
Figure 4 shows the 2SLAQ LRGs in the observed (g-r) versus (r-i) colour-colour plane along with the main colour selection boundaries as well as the tracks produced in this plane by different stellar population synthesis models as well as SDSS composite spectra. It can be seen that both the Maraston as well as Pegase burst models only enter the d ⊥ selection at redshifts of ∼0.45. Star forming galaxies with τ > 2Gyr are clearly excluded at all redshifts by the c selection. THE OPTICAL LUMINOSITY FUNCTION The luminosity function for 2SLAQ LRGs is calculated in this section in order to infer any evolution in the observed number density of these objects. The non-parametric Vmax estimator is used to calculate luminosity functions and a gaussian parametric form is fit by means of a χ 2 minimisation where appropriate. The gaussian parametric form provides a better fit to the luminosity function of early-type galaxies compared to the more commonly used Schechter function and has three free parameters, φ * , M * and σg. The parametric form is given by: The commonly used parametric estimator of Sandage et al. (1979) (STY hereafter) is prone to biases for this data set and therefore has not been presented in the main body of the paper. Details of the STY estimator and the biases induced on it are given in Appendix A. The Vmax estimator relies on measuring the maximum volume occupied by a galaxy given the survey selection criteria. By scaling the observed number density by this accessible volume, we account for the fact that only the brightest galaxies are observed at high redshifts in a flux-limited sample. For the 2SLAQ survey, this maximum volume depends not only on the maximum redshift out to which the galaxy could be observed given the flux limit of i dev − Ai < 19.8 but also the minimum redshift at which the galaxy would be included in the survey given the d ⊥ and c colour selection criteria. The maximum volume for each galaxy is then given by: where S d ⊥ (z), Sc (z) and Si deV (z) are the selection functions for each galaxy due to the two primary colour cuts (Eq 1 and 2) and the flux limit (Eq 3). Once this maximum volume has been derived for every galaxy in the sample, the luminosity function which is simply the number density of galaxies per unit brightness, is given by: where c is the redshift completeness of the sample assumed to be 76.1% and the index, i runs over all galaxies in the absolute magnitude bin dM . The errors on the luminosity function are assumed to be Poissonian and are given by: The best-fit parametric form is derived using a χ 2 fit to the Vmax data points. We marginalise over the normalisation, φ * which is determined independantly from the observed number density of galaxies. The marginalised χ 2 to be minimised is: where Φm refers to the parametric luminosity function which is assumed to be a gaussian function and ΦV represents the Vmax data points with errors σV . Once the best fit parameters have been determined, the normalisation of the luminosity function, φ * is then determined by matching the parametric estimate of the luminosity function to the observed number density of galaxies using Eq. 9. where Ng is the number of galaxies, fs is the fraction of sky covered by the survey and f (M, z) represents the colour selection function obtained by summing the product of S d ⊥ (z) and Sc (z) over all galaxies in the sample. 
The errors on the gaussian parameters M* and σ are calculated using the 1σ error ellipsoid obtained by performing the minimisation using a Markov Chain Monte Carlo (MCMC) method. The main sources of error on φ* come from cosmic variance and the covariance of φ* with M*. In § 4.2.2 we will show that for a large survey such as 2SLAQ, cosmic variance is unlikely to be a dominant systematic. The errors for φ* quoted throughout the paper are therefore calculated from the 1σ errors on M*. The dependence of the luminosity function on redshift is considered first, and possible sources of systematic error in these estimates are examined later in this section.

Redshift Evolution

We examine the redshift evolution of the 2SLAQ LRG luminosity function in four redshift bins between redshift 0.4 and 0.8. The luminosity function estimates are obtained using the Vmax estimator described above. Due to the different absolute magnitude ranges in each redshift bin, no χ² fits to the data points are shown, as the inferred parameters would not be comparable. In the left-hand panel of Figure 5 we plot the luminosity function in four redshift bins after k-correcting to redshift 0 using the 12 Gyr Maraston composite model. The number density of galaxies is found to increase slightly with redshift, particularly in the brightest absolute magnitude bins. In the faintest bins there is a drop in the observed number density. As LRGs are known to be some of the brightest objects in the Universe and our survey performs an LRG selection, we would expect a drop in the number density of such systems at faint magnitudes. Downsizing in the star formation of early-type systems is an effect that is already well observed. The fainter galaxies are expected to be less massive and therefore bluer due to their higher specific star formation rates, and such objects are excluded from our sample by the colour selection criteria, which only select out the brightest and reddest objects.

Figure 4. The (g-r) versus (r-i) colours of 2SLAQ galaxies in the observed frame along with tracks produced by several different stellar population synthesis models as well as composite LRG spectra from SDSS. In the case of the models, the markers are located at redshift intervals of 0.1. In the case of the SDSS composite spectrum, the markers are located at redshift intervals of 0.01.

Figure 5. The 2SLAQ luminosity function at a rest-frame redshift 0 after k-correction (left) and k+e correction (right) assuming passive evolution.

The Vmax method used to evaluate luminosity functions already accounts for those galaxies that may be missing from our sample due to the observational limits set by the survey. It does not, however, account for photometric uncertainties in the sample, which we study in detail in § 4.2.1. It is possible that genuine LRGs may be scattered out across the colour selection boundaries due to their large photometric errors, rendering the sample incomplete at the faint end. In order to demonstrate that the downturn we see at the faint end of our luminosity function estimates is not due to any such incompleteness, we plot in Figure 6 the normalised histogram of the perpendicular distance from the d⊥ selection line of the faintest 2SLAQ LRGs with i_deV > 19. Most of these objects, which appear faint and are likely to have large photometric errors, are seen to lie very close to the selection line, i.e. at a distance of ∼0.
This distribution therefore suggests that there are also likely to be large numbers of faint galaxies on the other side of the d⊥ selection. Any photometric errors would thus scatter more galaxies into the sample than are being scattered out, resulting in an overprediction of the number density of the faint galaxies that are close to the flux limit of the survey. We therefore conclude that the faint-end downturn seen in our luminosity function estimates is genuine and not due to any incompleteness induced by the photometric errors. We have also seen in Figure 1 how the red sequence is truncated in the lowest redshift bin due to the redshift-dependent colour selection. This is what leads to a drop in the number density in the lowest redshift bin relative to the other bins, which is not due to any evolution in the underlying galaxy population. In the right-hand panel of Figure 5, we plot the luminosity functions after correcting for passive evolution assuming the same Maraston model. The luminosity functions change little on inclusion of this evolutionary correction, suggesting it is small in these redshift ranges, but in general the data points for galaxies with M_ideV < -22.5 in the different redshift bins now agree better. We are confident that incompleteness does not affect this bright galaxy sample, and these results therefore suggest that this LRG population is evolving consistently with the assumptions of a passive evolution model.

Sources of Systematic Error

In this section we examine potential sources of systematic error that could bias the luminosity function estimates presented above.

Photometric Errors

In order to assess the bias that photometric errors will introduce into the luminosity function estimate, we cut our sample at i < 19.3 and calculate the luminosity function for this reduced sample of galaxies. Note that in § 2.1 we have already drawn attention to the fact that previous studies (Wake et al. 2006) have shown that fainter than i = 19.3, galaxy samples selected through single and multi-epoch photometry differ considerably. The single-epoch magnitude errors are also found to be systematically underestimated. The luminosity functions for the entire sample as well as the i < 19.3 sample are plotted in Figure 7 along with gaussian fits to these derived using χ² minimisation.

Figure 7. The 2SLAQ luminosity function calculated for the entire sample and for galaxies with i < 19.3, for which photometric errors are thought to be insignificant, along with gaussian fits to both. Only the bright end of the luminosity function for the entire sample is shown in order to ensure that the gaussian fits are made over the same absolute magnitude range.

Only the bright end of the luminosity function for the entire sample is shown, as the sample with small photometric errors has no galaxies in the faintest absolute magnitude bins and we want to compare the two luminosity functions over the same absolute magnitude range. The i < 19.3 sample has 2131 galaxies as opposed to the 8625 galaxies in the entire sample. The space densities from the i < 19.3 sample have therefore been renormalised by the fraction of galaxies in the complete sample that contribute at each absolute magnitude bin. The χ² fit to the reduced sample with small photometric errors yields values of σg = 0.45 ± 0.02, M* = -21.89 ± 0.05 and φ* = 2.64^{+0.21}_{-0.18} × 10^{-4}, where the normalisation φ* for the i < 19.3 sample has also been corrected for the smaller space density of this sample.
In this case, however, a constant average correction factor is used rather than the absolute magnitude dependent corrections applied to the Vmax data points, in order to ensure that the shape of the gaussian is maintained. The χ² fit to the entire sample on the other hand gives σg = 0.52 ± 0.01, M* = -21.70 ± 0.04 and φ* = 5.51^{+0.53}_{-0.47} × 10^{-4}. As can be seen, the faintest galaxies in terms of absolute magnitude are removed from our sample on applying the i-band cut, as they typically have larger photometric errors. However, the two luminosity functions are remarkably similar in shape and there is evidence for a faint-end downturn even for the sample with small photometric errors. The space density at the bright end is also lower for our sample with small photometric errors, even after renormalisation, suggesting that photometric errors affect galaxies at all absolute magnitudes. The space density for the entire sample is found to be about a factor of two larger due to galaxies being scattered into the sample because of their large photometric errors. We have already shown that more galaxies are likely to be scattered in across the colour selection boundaries than are scattered out (Figure 6). These galaxies are bluer than more typical LRGs but appear red due to the photometric errors. Many of their spectra show evidence for the presence of strong emission lines, suggesting that they are star-forming, and this has already been found by Roseboom et al. (2006).

Cosmic Variance

One of the main drawbacks of the Vmax estimator is that it can suffer from biases due to cosmic variance. As massive galaxies such as the ones in this sample are thought to be strongly clustered, the evolution in their number density will be very sensitive to large-scale structure. Traditionally, this has posed a problem for many small surveys, but the 2SLAQ survey should cover a large enough volume to make this an unlikely source of error for the LRGs being studied. In order to test this hypothesis, we split the LRG sample by declination and separate galaxies with dec < -0.2 and dec ≥ -0.2. This results in two smaller samples with 4340 and 4285 galaxies respectively. A luminosity function is then calculated for each of these subsamples as well as the total luminosity function for the 2SLAQ sample. These are plotted in Figure 8. No notable differences can be seen in the three luminosity function estimates, confirming that cosmic variance is not a dominant systematic in this sample.

K+e Corrections: Spectral Evolution Model

In order to test whether the choice of spectral evolution model used to compute K+e corrections significantly affects the estimate of the luminosity function, we compute luminosity functions for all the 2SLAQ galaxies between redshift 0.4 and 0.8 using the 12 Gyr Maraston composite model as well as a Pegase burst model, a Pegase model with an exponentially declining star formation history with τ = 1 Gyr and a CB07 model with an exponentially declining star formation history with τ = 1 Gyr. Both Pegase models have an age of 11 Gyr, while the CB07 model has a formation redshift of 3, corresponding to an age of 11.35 Gyr in our chosen cosmology. The results are illustrated in Figure 9 and the best-fit gaussian parameters to these luminosity functions are summarised in Table 1. Both the Pegase burst and Maraston models have similar ages and star formation histories.
The difference between them is the introduction of a metal-poor sub-component to the Maraston model and the use of improved stellar libraries in order to better match the observed colours of LRGs. However, it can be seen from Figure 9 (left panel and right, top) that there is very little difference in the luminosity function estimate obtained from using these two models. The Pegase and CB07 models with τ = 1 Gyr produce very similar luminosity functions despite assuming different initial mass functions, as the K+e corrections, and therefore the luminosity function estimate, are not very sensitive to the number of low-mass stars. The main difference between the Salpeter IMF assumed in the Pegase models and the Chabrier IMF assumed in the CB07 models is the mass distribution for stars below 1 M⊙. Changing the star formation history to include a little residual star formation shifts the luminosity function to slightly fainter absolute magnitudes and the number density in the brightest bins decreases. This is because including recent star formation or younger stars in the model means that the luminosity of the galaxy evolves faster and the galaxy fades more quickly than in a passive evolution scenario.

K+e Corrections: Metallicity

The colour-magnitude relation of early-type galaxies can be interpreted as a metallicity sequence, and variations in metallicity dominate the slope of the CMR (Kodama & Arimoto 1997; Bower et al. 1992). The metallicity may therefore be an important parameter to vary in stellar population synthesis models when trying to model the spectral evolution of LRGs. In order to assess the importance of metallicity in the luminosity function estimate, we calculate luminosity functions after K+e correcting galaxies using a CB07 model at solar metallicity as well as one with variable metallicity. Both assume a formation redshift of 3 and an exponentially declining star formation history with a timescale τ of 1 Gyr. In the variable metallicity model, the metallicities are derived from the absolute magnitudes using the colour-magnitude relation of early-type galaxies in the Virgo Cluster from Bower et al. (1992). The effect of metallicity on the luminosity function estimate is also illustrated in Figure 9. The second plot in the right panel illustrates the difference between the luminosity function estimates obtained using a CB07 model at solar metallicity and one with variable metallicity. The difference between the two estimates is found to be roughly of the same order as the errorbars on the individual luminosity function estimates. It is therefore concluded that the metallicity is not likely to be important in the evaluation of the luminosity function for the 2SLAQ LRGs studied here. The LRGs are all found to have metallicities between 0.6 and 1.5 Z⊙, with a mean metallicity of roughly solar.

K+e Corrections: Age

In Figure 9 we also plot the Vmax estimate of the luminosity function after K+e corrections to redshift 0 assuming a Pegase single burst model at three different ages ranging from 9 to 11 Gyr. In the cosmological model assumed throughout this paper, these correspond to formation redshifts for the galaxy of between ∼1.4 and 2.6. We find, as expected, that changing the formation redshift of the galaxy to later times results in the luminosity function moving to fainter absolute magnitudes or, alternatively, a decrease in the number density at a given absolute magnitude.
As is the case for models with some residual star formation, the introduction of younger stars into the models means that they fade quicker thereby shifting the luminosity function estimate to fainter absolute magnitudes. THE INFRA-RED LUMINOSITY FUNCTION AND STELLAR MASS FUNCTION So far we have considered the evolution of the 2SLAQ LRG optical luminosity function with redshift and quantified various sources of systematic error that could arise for this estimate. This analysis shows little evidence that the LRG population evolves beyond that expected from a simple passive evolution model between a redshift of 0.4 and 0.8. However, such an analysis does not fully exploit all the available information as the evolution of elliptical galaxy populations is known to be a strong function of mass Cimatti et al. 2006;Ferreras et al. 2009;Pozzetti et al. 2009). The availability of near infrared data for a sizeable subsample of these LRGs means we can calculate accurate stellar masses for these objects from their K-band luminosities in order to see if the evolution of the number density of these galaxies is a strong function of the mass. The mass function analysis in this section is carried out for the subset of LRGs for which near infra-red photometry is available from UKIDSS DR5. This reduced sample contains 6476 galaxies. As the Maraston models do not extend to near infra-red wavelengths, the CB07 models have been used in this section to calculate the mass-to-light ratios for the LRGs. Following Ferreras et al. (2009), the stellar masses are computed by comparing the SDSS g, r, i, z fiber magnitudes with a set of 10 τ -models with solar metallicity, formation redshifts of 3 and exponential timescales between 0 (i.e. corresponding to a Simple Stellar Population) and 10Gyr. The CB07 models are used to generate the composite populations. The best-fit model, almost always corresponding to short formation timescales, is then used to determine the mass-to-light ratio in the K-band of the UKIDSS survey assuming a Chabrier or Salpeter IMF. This best-fit along with the K-band petrosian magnitude allows us to determine the stellar mass of the galaxy. Note that no total model magnitudes are available in the UKIDSS survey. However, the difference between the total model magnitudes in the SDSS i-band and the petrosian magnitudes in the same band are found to be of the order of 15%. Both magnitudes can be used as reasonable estimates for the total light from the galaxy. The mass functions are calculated as before using the Vmax method described in detail in § 4. The difference in the mass function when Vmax is determined using the best-fit CB07 models as opposed to a single Maraston model of age 12Gyr, is illustrated in Figure 10. From this figure, it can be seen that the difference is minimal when using the two different spectral evolution models. Note, however that this may not remain the case if stellar masses could be calculated directly from the Maraston models as well as determining Vmax from it. The mass function for the 2SLAQ LRGs in two redshift bins -0.4 z 0.55 and 0.55 z 0.8 -is illustrated in the left-hand panel of Figure 11. These redshift limits have been chosen so as to ensure that incompleteness does not affect the high-mass end in either of these bins. On this plot, we also show the mass functions derived in two redshift bins for red sequence galaxies in the COMBO-17 survey (from Figure 9 of Borch et al. (2006)). 
For illustrative purposes, we plot the K-band luminosity function in the two redshift bins in the right-hand panel of Figure 11. The K-band absolute magnitudes are also determined using the best-fit CB07 model for each LRG and after correcting to the AB system using the correction of Hewett et al. (2006). Figure 11 clearly shows that the 2SLAQ data set extends to much higher stellar masses than the COMBO-17 survey, occupying a very different stellar mass range to the COMBO-17 red sequence and thereby allowing us to effectively probe the high-mass end of the mass function. These most massive galaxies are the ones expected to place the most stringent constraints on current models of galaxy formation.

Figure 10. The 2SLAQ mass function calculated using the Vmax method when Vmax is evaluated using a single Maraston model of age 12 Gyr for all the LRGs and when Vmax is evaluated using the best-fit CB07 τ model for each LRG.

Our sample also probes a volume that is ∼200 times bigger than that of COMBO-17 over the redshift range 0.4 ≤ z < 0.8. As is the case with the COMBO-17 data, there is little evidence for any evolution in the number density at the high-mass end between a redshift of 0.4 and 0.8. This is consistent with the findings of Ferreras et al. (2009), who find that the number density does not evolve in the highest stellar mass bins. Below ∼10^11 M⊙, however, our sample is likely to be incomplete due to the redshift-dependent selection criteria involved in isolating LRGs. The discrepancy between the 2SLAQ mass function presented in this paper and the COMBO-17 mass function of Borch et al. (2006) for stellar masses greater than ∼10^11 M⊙ may arise due to the differing choices of IMF used in the two papers. While this study uses the Chabrier (2003) IMF, Borch et al. (2006) use a truncated version of the Salpeter (1955) IMF to calculate stellar masses. In order to estimate how this would affect the mass function estimate, in Figure 12 we plot the 2SLAQ mass function over the entire redshift range of 0.4 ≤ z < 0.8 calculated using both the Salpeter and Chabrier IMF. The Salpeter IMF pushes the mass function towards higher masses and this results in a bigger discrepancy between the COMBO-17 and 2SLAQ data points. It is well known that the simple power-law form of the Salpeter IMF overestimates the number of low-mass stars in galaxies and therefore the stellar mass (Cappellari et al. 2006; Ferreras et al. 2008), and more recent analytical forms of the IMF such as those of Chabrier (2003) have fewer low-mass stars and therefore predict lower stellar masses. The increased number density of 2SLAQ galaxies seen at 3 × 10^11 M⊙ to ∼10^12 M⊙, where it overlaps with COMBO-17, could be due to various reasons. Firstly, the COMBO-17 area is much smaller than that of 2SLAQ and so it is possible that there could be some incompleteness at the high-mass end of the COMBO-17 red sequence mass function due to the effects of cosmic variance. More likely, however, the discrepancy probably arises due to the complicated selection of LRGs, which makes it difficult to obtain a proper estimate of the volume probed. As the LRGs are selected through colour cuts in addition to the flux limit, and the colour selection cuts the volume at low redshifts, the 2SLAQ galaxies will on average have a smaller Vmax than a flux-limited sample such as COMBO-17. This will result in a larger inferred number density at any given mass.
DISCUSSION

In this paper we have presented luminosity and stellar mass function estimates for a sample of 8625 Luminous Red Galaxies with spectroscopic redshifts between 0.4 and 0.8 in the 2SLAQ survey. The evolution of the optical luminosity function between redshift 0.4 and 0.8 is found to be consistent with the assumptions of a passively evolving model with very little evidence for recent star formation. The number density of LRGs shows a downturn at faint magnitudes due to the effects of downsizing in the star formation, whereby the less massive galaxies, which are also less luminous, are bluer and therefore excluded from an LRG sample by the imposed colour selection criteria which select the reddest objects. There is also evidence that some bluer galaxies are scattered into the LRG sample due to the non-negligible photometric errors on them. The sensitivity of the optical luminosity function to changes in the spectral evolution models has also been studied. The results of this analysis are summarised in Figure 13, where we show the best-fit gaussian parameters that are fit to the Vmax data points for LRGs after assuming different star formation histories in the spectral evolution models. M* moves towards brighter absolute magnitudes if we assume a Pegase burst model instead of the Maraston model, and towards fainter magnitudes if we assume a model with some residual star formation signified by a star formation timescale of τ = 1 Gyr. The gaussians all have very similar widths, although the Maraston model results in a gaussian that is slightly wider than that found with the rest of the spectral evolution models. The K-band luminosity function is calculated for about three quarters of the LRGs for which infra-red data is available from the UKIDSS LAS DR5. The K-band luminosity function also shows little evidence for evolution in this redshift range. The K-band luminosities are used to derive mass-to-light ratios for the LRGs by fitting CB07 spectral evolution models to the multi-band photometry. These are then used to calculate mass functions in redshift bins. Using the best-fit CB07 model instead of the Maraston models in the estimate of the mass function makes very little difference to the mass function. We find that the most massive galaxies with M > 3 × 10^11 M⊙ are already well assembled at redshifts of 0.8 and their number density does not change much in the redshift range considered in this work. The 2SLAQ sample probes the most massive end of the mass function and can therefore be used to place stringent constraints on models of massive galaxy formation and evolution. In Figure 14 we plot the comoving number density of 2SLAQ galaxies as a function of redshift, obtained by integrating the mass functions presented in Figure 11.

Figure 13. Summary of best-fit gaussian parameters used to fit luminosity functions after assuming different spectral evolution models for k+e corrections.

Note that the horizontal errorbars simply represent the size of the redshift bin in our plots. We choose only galaxies with M > 3 × 10^11 M⊙, for which we are confident that the redshift-dependent selection criteria do not impose any incompleteness on the sample. We compare our estimates of the comoving number density with those from other surveys of massive galaxies as well as predictions from models of galaxy formation. All data points presented in this figure use stellar masses representative of a Chabrier IMF.
Where appropriate, the quoted stellar masses in the literature have been corrected to this choice of IMF before plotting in Figure 14. In Table 2 we summarise the samples plotted in Figure 14. We can see that the selection criteria for all these samples are varied. In the Ferreras et al. (2009) sample, for example, there is no segregation by colour but rather a visual classification of early-type galaxies. In this sample, no blue galaxies are found above a stellar mass of 10^11 M⊙. Pozzetti et al. (2009) use different galaxy classification schemes to derive the galaxy stellar mass function and its evolution by galaxy type and find that the massive end (M > 10^10.5 M⊙) is dominated by red spheroidal galaxies up to z ∼ 1. We can therefore infer that there should be a negligible fraction of blue massive galaxies at the masses being considered in this work. We can also immediately see from Table 2 the unique position of 2SLAQ among these surveys due to its massive volume. This allows us to significantly reduce the size of the vertical errorbars in Figure 14. From Figure 14 we can see that our work agrees well with that of Borch et al. (2006), Conselice et al. (2007) and Ferreras et al. (2009) and shows that the most massive galaxies were already well assembled at redshift 0.8 and there is little evidence for their number density having changed since then. Fontana et al. (2006) on the other hand find some evolution in the number density of massive galaxies with redshift, albeit mild, up to z ≃ 1.5. The differing comoving number densities for the different samples can be attributed to the fact that these samples have all been selected in very different ways, as summarised in Table 2. However, it is encouraging that the comoving number density for our colour-selected sample of LRGs agrees well with that inferred from the visually classified sample of Ferreras et al. (2009).

Figure 14. The comoving number density of massive galaxies as a function of redshift for the 2SLAQ survey (found by integrating the mass functions presented in Figure 12 between 3 × 10^11 M⊙ and 10^12 M⊙) along with the comoving number density from other surveys of massive galaxies as well as models of galaxy formation. KS06 refers to the Khochfar & Silk (2006) model and D06 refers to the De Lucia et al. (2006) model.

Our observational results seem to match well the predictions of the semi-analytical models of Khochfar & Silk (2006). These models follow the merging history of dark matter halos generated by the extended Press-Schechter formalism. The baryonic physics is modelled according to the prescriptions of Khochfar & Burkert (2005) and references therein. This model predicts that the number density of massive galaxies is almost constant up to redshifts of ∼1, as the number density of galaxies entering a certain mass bin from lower mass bins is counteracted by the number density of galaxies leaving that mass bin for a higher mass bin. If this were the case, we would expect the number density of galaxies in the highest mass bin to decrease as a function of redshift even if it is constant in the intermediate mass bins. We compute the comoving number density in the higher mass bin of 12 < log10(M*/M⊙) < 12.5 and in our two redshift bins, 0.4 ≤ z < 0.55 and 0.55 ≤ z < 0.8. The number densities per unit dex in stellar mass thus obtained are log10(n / h³ Mpc⁻³ dex⁻¹) = -5.39^{+0.10}_{-0.13} and log10(n / h³ Mpc⁻³ dex⁻¹) = -5.55^{+0.08}_{-0.09} respectively.
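The number densities quoted above follow from a straightforward integration of the binned mass function between two mass limits. The short sketch below illustrates the bookkeeping; the binned values it uses are made up for illustration and are not the 2SLAQ measurements.

```python
import numpy as np

def comoving_number_density(log_mass_bins, phi, m_low=3e11, m_high=1e12):
    """Integrate a binned mass function phi [Mpc^-3 dex^-1] between two masses.

    log_mass_bins : bin centres in log10(M/Msun); phi : number density per dex.
    """
    logm = np.asarray(log_mass_bins)
    phi = np.asarray(phi)
    sel = (logm >= np.log10(m_low)) & (logm <= np.log10(m_high))
    dlogm = np.gradient(logm)              # bin widths in dex
    return np.sum(phi[sel] * dlogm[sel])   # simple rectangle-rule integration

# Illustrative (made-up) binned mass function for one redshift slice:
logm = np.arange(11.0, 12.6, 0.1)
phi = 1e-4 * np.exp(-(logm - 11.0))        # placeholder, not the 2SLAQ measurement
n = comoving_number_density(logm, phi)
print(f"log10(n / Mpc^-3) = {np.log10(n):.2f}")
```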
Therefore there is no evidence that the comoving number density of the most massive galaxies is decreasing with increasing redshift. The agreement between our data points and the Khochfar & Silk (2006) models shown in Figure 14 should therefore be treated with caution, as the constant number density of massive galaxies in the models is only maintained due to a fine-tuning of objects between mass bins and we find no evidence that this fine-tuning actually occurs. Our results also do not match at all those generated from models based on the Millennium simulations (De Lucia et al. 2006). In these models, AGN feedback is invoked to shut off star formation after a characteristic mass scale in order to reproduce the observed colour bimodality and luminosity function at redshift 0. As a consequence, the downsizing in star formation is reproduced, but the growth of massive galaxies at high redshifts of greater than 0.5 is prohibited and dry mergers become the most dominant mechanism for mass growth in such galaxies. This is clearly inconsistent with our observations, which suggest not only that significant star formation in massive galaxies has ceased at a redshift of 0.8 but also that these galaxies were already well assembled at these redshifts. The main difference between the De Lucia et al. (2006) and Khochfar & Silk (2006) models is the feedback with respect to AGN activity, rather than the fact that the former follows a numerical simulation for the evolution of the dark matter halos whereas the latter relies on an analytical approach. If cooling is suppressed, the efficiency of massive galaxy formation at early epochs is much lower, accounting for the lower number densities in the De Lucia et al. (2006) models. We also note that the De Lucia et al. (2006) models are computed from the Millennium simulations, which use a cosmology with σ8 = 0.9, whereas the Khochfar & Silk (2006) models have been updated for this work with the later WMAP5 cosmology with σ8 = 0.8 (Dunkley et al. 2009). As the AGN activity affects the star formation history of the galaxy, we conclude that the star formation and quenching mechanisms invoked in the different galaxy formation models need to be revisited in light of our new constraints on the very high mass end of the stellar mass function. Granato et al. (2004) on the other hand have proposed a way in which to bring about the anti-hierarchical formation of the baryonic component of galaxies in models while still working within the framework of the ΛCDM cosmology. This is essentially done by invoking feedback from supernovae as well as the nuclear activity in massive galaxies. These processes slow down star formation in the least massive halos and drive gas outflows, thereby increasing the stellar to dark matter ratio in the more massive halos and ensuring that the physical processes acting on baryons are able to effectively reverse the order of formation of galaxies compared to dark matter halos. These models have already been shown to match the local K-band luminosity function of massive galaxies (Granato et al. 2004) as well as the luminosity function at redshifts of ∼1.5 (Granato et al. 2004; Silva et al. 2005). Reproducing our observational results for the most massive galaxies at intermediate redshifts would therefore be an important test for these models. Recently Dekel et al. (2009) have also proposed an alternative galaxy formation model whereby bursts of star formation can occur in massive galaxies without the need for violent merging events.
These authors suggest that the massive galaxies, which reside in the centers of filaments, are fed by cold gas streams that penetrate the shock heated media of massive dark matter halos thereby inducing star formation. Given the small errorbars in our estimates of the comoving number density compared to previous studies and the fact that our sample has secure spectroscopic redshifts for all objects and a massive area that reduces errors due to cosmic variance, this new observational result presents a significant challenge for many current models of galaxy formation and requires a reinvestigation of the feedback mechanisms involved. CONCLUSION This paper has examined the evolution of the optical and near infra-red luminosity function as well as the stellar mass function of 8625 LRGs from the 2SLAQ survey between redshift 0.4 and 0.8. We have demonstrated the unique position of 2SLAQ LRGs among other massive galaxy samples due to the large volume probed by the sample as well the availability of spectroscopic redshifts for all galaxies and near infrared data for ∼75% of them. This has allowed us to probe the very high-mass end of the stellar mass function which places the most stringent constraints on models of galaxy formation. We have also used the spectral evolution models of Maraston et al. (2009) which are the most accurate models of LRGs to date to study the evolution of these objects although our conclusions change little if other models are used instead. Specifically we draw the following conclusions: • The evolution of the optical luminosity function of LRGs between redshifts 0.4 and 0.8 is consistent with the assumptions of a passive evolution model where the stars were formed at high redshift with little or no evidence for recent episodes of star formation. • A downturn is seen at the faint end of the LRG luminosity function due to the effects of downsizing in the star formation. The faintest galaxies are expected to be less massive and therefore bluer than the bright objects and such objects are excluded from the LRG sample due to the imposed colour selection of these objects which selects only the reddest galaxies. • We find that photometric errors may induce a significant bias into our sample of LRGs and scatter galaxies both in and out of our sample across the colour selection boundaries. We show that more galaxies are likely to be scattered into the sample than are scattered out leading to an overprediction of the observed space density but the shape of the luminosity function does not change if we remove the galaxies with large photometric errors. • The LRG luminosity function is found to be unaffected by cosmic variance due to the large volume occupied by the sample. • The luminosity function is also relatively insensitive to the choice of spectral evolution model and the metallicity although models with some residual star formation shift the luminosity function towards fainter absolute magnitudes. • The stellar mass function for these LRGs for M > 3 × 10 11 M⊙ also shows little evidence for evolution between redshifts 0.4 and 0.8 suggesting that these most massive systems were already well assembled at redshifts of 0.8. This is consistent with the emerging picture of downsizing in the mass assembly of massive galaxies. • The stellar mass function estimate is also relatively insensitive to the choice of spectral evolution model assumed in the calculation of Vmax. 
However, different choices of the stellar initial mass function will shift the mass function estimate as found in previous studies. • The comoving number density of LRGs with M > 3 × 10 11 M⊙ has changed little between redshifts 0.8 and 0.4. The same is true for LRGs with M > 10 12 M⊙. This is consistent with other observational results for massive galaxy samples and does not agree with the predictions of most current galaxy formation models. We find that the models of Granato et al. (2004) may be promising in matching our observations although they are yet to be compared with massive galaxy samples at intermediate redshifts. Overall, our results support the emerging picture of downsizing in both the star formation and mass assembly of early type galaxies and present a significant challenge for current models of galaxy formation. We have shown our findings to be robust to changes in spectral evolution models, cosmic variance and photometric errors and conclude that these new observations of the most massive and most luminous galaxies in our Universe will need to be reconciled with the models if progress is to be made in the field of massive galaxy formation and evolution. tions assuming wi = 1 for the entire sample and this is shown by the dashed line in Figure A1. This clearly illustrates that the Vmax and STY estimators differ considerably from each other at the faint-end. The STY fit parameters are M * = −21.32, σ = 0.65, φ * = 9.97 × 10 −4 for the entire sample and these values are very different to those derived from the χ 2 -fits in § 4. It has already been noted in this paper that fainter than i ∼ 19.3, photometric errors may become significant in scattering LRGs both in and out of the colour selection boundaries. This analysis therefore suggests that the STY estimator is particularly sensitive to photometric errors for colour selected samples and the weights wi start to differ significantly from 1 for galaxies with large photometric errors. In order to calculate these weights, one would have to do a Monte Carlo simulation of galaxies assuming a certain spectral evolution model, add noise to this simulation and then look at the completeness function S(mi) for each galaxy by considering which galaxies are selected on applying the 2SLAQ colour selection criteria e.g. (Fried et al. 2001;Wolf et al. 2003). This is the subject of future work. The STY estimator in its traditional form is clearly biased for a colour-selected sample with non-negligible photometric errors and for this reason, it has not been presented in the main body of this paper. We note that when we move to high redshift bins where the galaxies are generally brighter and the photometric errors smaller, the STY estimator and Vmax estimator agree very well as it is clearly reasonable in these regimes to assume wi = 1.
Bayesian Attack Model for Dynamic Risk Assessment Because of the threat of advanced multi-step attacks, it is often difficult for security operators to completely cover all vulnerabilities when deploying remediations. Deploying sensors to monitor attacks exploiting residual vulnerabilities is not sufficient and new tools are needed to assess the risk associated to the security events produced by these sensors. Although attack graphs were proposed to represent known multi-step attacks occurring in an information system, they are not directly suited for dynamic risk assessment. In this paper, we present the Bayesian Attack Model (BAM), a Bayesian network-based extension to topological attack graphs, capable of handling topological cycles, making it fit for any information system. Evaluation is performed on realistic topologies to study the sensitivity of its probabilistic parameters. Introduction Managing the security of Information Systems (IS) is increasingly complex, due to the numerous security mechanisms that are implemented, and the significant amount of dynamic data produced by security enforcement points.In critical environments, security operators generally know most of the vulnerabilities of their IS thanks to regular vulnerability scans.Unfortunately, many vulnerabilities are not patched, either because patching may disrupt critical services, or because they are not a priority for system administrators.As a second line of defence, security operators deploy sensors (e.g., Host or Network Intrusion Detection Systems) generating alerts when an attacker attempts to exploit such vulnerabilities.As these security events are produced, operators need to evaluate the risk brought by ongoing attacks in their system, to respond appropriately: this process is called dynamic risk assessment (DRA) [14]. The most impacting attacks are composed of several successive exploitation steps.Several models have been proposed to formalize such multi-steps attacks. An attack graph is a model regrouping all the paths an attacker may follow in an information system.It has been first introduced by Phillips and Swiler in [18].A study of the state of the art about attack graphs compiled from early literature on the subject has been carried out by Lippmann and Ingols [12], while a more recent one was made available by Kordy et al. [10].Topological attack graphs are based on directed graphs.Their nodes are topological assets (hosts, IP addresses, etc.) and their edges represent possible attack steps between such nodes [8].Attack graphs are generated with attack graph engines.There are three main attack graph engines: (1) MulVAL, the Multi-host, Multi-stage Vulnerability Analysis Language tool created by Ou et al. [15], (2) the Topological Vulnerability Analysis tool (TVA) presented by Jajodia et al. in [8,9] (commercialized under the name Cauldron) and (3) Artz's NetSPA [2].Attack graphs are attractive because they leverage readily available information (vulnerability scans and network topology).However, they are not adapted for ongoing attacks, because they can not represent the progression of an attacker nor be triggered by alerts.Thus, they must be enriched to provide the functionalities needed to perform Dynamic Risk Assessment, for example using Bayesian networks. 
A Bayesian network is a probabilistic graphical model introduced by Judea Pearl [16].It is based on a Directed Acyclic Graph, where nodes represent random variables, and edges represent probabilistic dependencies between variables [3].For discrete random variables, these dependencies can be specified using a Conditional Probability Table associated with each child node.Bayesian networks are particularly interesting for computing inference, i.e. calculating the probability of each state of all nodes of the network, given some evidences, i.e. nodes that have been set to a specific state.Inference can be done efficiently using the algorithm of Lauritzen and Spiegelhalter [11].A Bayesian attack graph, introduced by Liu and Man in [13] is an extension of an attack graph based on a Bayesian network, constituted of nodes representing a host in a specific system state (a true state means that the host is compromised) and edges representing possible exploits that can be instantiated from a source host to a target host.The major concern of building such a Bayesian network from an attack graph is due to the structure of a Bayesian network that must be acyclic, while attack graphs almost always contain cycles.To avoid cycles, Liu and Man assume that an attacker will never backtrack once reaching a compromised state, but do not detail how such assumption is used to build the model.In [6], Frigault and Wang use Bayesian inference in Bayesian Attack Graphs to calculate security metrics in an information system.Xie et al. present in [21] a Bayesian network used to model the uncertainty of occurring attacks.The Bayesian attack graph is enhanced with three new properties: separation of the types of uncertainty, automatic computation of its parameters and insensitivity to perturbation in the parameters choice.This model also adds nodes dedicated to dynamic security modelling: an attack action node models whether or not an attacker action has been performed, a local observation node models the inaccuracy of observations.In this paper, we propose a new model combining attack graphs and Bayesian networks for DRA.It is built from the knowledge security operators have about their IS: network topology, known vulnerabilities and detection sensors.Then, we change the states of the sensor nodes according to the security events received.This model is capable of representing the attacks that may occur (vulnerabilities) and the ones ongoing (alerts).It outputs probabilities that attacks have succeeded and that assets of the IS may have been compromised.With respect to the current state of the art, our contributions are twofold.First, we provide an explicit model and process for handling cycles.This process is supported by a clear definition of a set of model parameters.The sensitivity of the model toward these parameters is studied in the validation.Second, we provide a significant performance improvement in terms of number of nodes and vulnerabilities over the existing state of the art.While classic Bayesian attack graph models are usually demonstrated over a few nodes and vulnerabilities, we show that our model can be realistically computed at the scale of an enterprise IS. 
This paper is organised as follows: in Section 2, we formally define the structure and the conditional probability tables of our Bayesian Attack Model built from a topological attack graph.Section 3 validates the results of the Bayesian Attack Model on a realistic use case and analyses its sensitivity toward the probabilistic parameters.Section 4 compares our work with the related work, before concluding and presenting future work, in Section 5. The Bayesian Attack Model Given the advantages brought by Bayesian Attack Graphs, they provide a strong foundation for dynamic security modelling.Our proposal extends Bayesian Networks to be used for DRA with real-scale IS. The Bayesian Attack Model (BAM) described all along this section is built from a Topological Attack Graph, which is described in section 2.1, and a set of detection alerts.The BAM is composed of submodels called Bayesian Attack Trees (BAT).BAT and BAM are described in section 2.3.Each BAT is composed of a sequence of attack steps, typed nodes linked together.They are described in section 2.2.The probabilistic relations between nodes of a BAT are described in conditional probability tables whose content is detailed in section 2.4. Topological Attack Graph The BAM is built from a topological attack graph.Definition 1.A topological attack graph is a directed graph TAG(TN,AS): .N }} is a set of N topological nodes: the assets of an information system, -AS is a set of attack steps, the edges that represent the fact that an attack allows the attacker to move from the parent topological node to the child topological node. • Each attack step has a type of attack, describing how the attacker can move between nodes (exploitation of a vulnerability, credential theft, etc.). • Depending on the type of attack, each attack step is associated with a set of conditions [c].• Some attack steps are associated with a sensor that may raise an alert indicating that this attack has been detected. A TAG can be generated with an attack graph engine such as MulVAL [15] or TVA [9].Topological nodes represent, for example, an IP address or a computer cluster.Attack steps are, for example, the exploitation of a vulnerability.Definition 2. A condition c is a fact that needs to be verified, for an attack step to be possible.It is associated with a probability of success P(c). The condition fact is, for example, "a vulnerability is exploited on the destination host".For such conditions, in our experiments, we use an approximation of the probability of successful exploitation using information coming from the Exploitability Metrics of the Common Vulnerability Scoring System (CVSS) [5].It is deduced from (1) the Attack Complexity (AC), (2) Privileges Required (PR), (3) and User Interaction (UI) values, as well as the Attack Vector (AV), which is taken into account when constructing the topological attack graph.Definition 3. A sensor s of an attack step is an oracle issuing an alert when the attack step has been detected.It is associated with a false negative and a false positive rates. A sensor represents, for example, an Intrusion Detection System, a System Event Management, or a human report. 
Grouping attack steps

In topological attack graphs, there may exist many attack steps between two topological nodes. Attack steps can be of different types, depending on the attack (cf. Definition 1). Generally, there are very few possible types of attack steps (e.g., the remote exploitation of a vulnerability on a server). In order to reduce the size of the model, while preserving information, we group all attack steps of the same type between two topological nodes into a single vertex with (1) a new condition: a multivariable boolean function (usually, an OR) of all conditions applying to the grouped attack steps; (2) an attached sensor node activated only when the boolean function of grouped sensors is true. When several conditions c_i of an attack step as are grouped in one condition c, we define the probability of successful exploitation associated with this new condition. For example, when grouping several conditions c_i "a vulnerability is exploited on the destination host" into one new condition c "at least one vulnerability of the list is exploited on the destination host", we assume that the exploitation of each vulnerability is independent, to compute its probability of exploitation P(c). This is an acceptable approximation since we consider all the existing vulnerabilities between two topological nodes. Thus, the probability of exploitation P(c) becomes:

P(c) = 1 - Π_i (1 - P(c_i)).

Breaking cycles in topological attack graphs

A TAG is a model defined globally for a system, containing all potential attacks that can happen. It thus almost always contains cycles, especially inside local networks in which any host can attack any other one. For example, a host tn_1 may be able to attack another host tn_2 that can also attack tn_1 (directly or in several steps). A common assumption to break cycles in attack graphs is that an attacker will not backtrack, i.e., come back to a node he has already successfully exploited. This is reasonable because backtracking does not bring new information about attack paths. It has been properly justified by Ammann et al. in [1] and by Liu and Man in [13]. However, the solutions of the state of the art for Bayesian modelling of an attack graph, such as the ones of Liu and Man [13] and Poolsappasit et al. [19], use this assumption to arbitrarily delete possible attack steps. In reality, it is impossible to know a priori which path the attacker will choose. Deleting paths in the Bayesian model thus suppresses actually possible attacker actions. The only way to break cycles, while keeping all possible paths, is to enumerate all paths, starting from every possible attack source, keeping in the nodes a memory of the path of the attacker. So, using this memory, we build an acyclic TAG by ensuring that the paths do not backtrack on already exploited nodes. For example, a node tn_1 tn_2 tn_3 means that the attacker controls the node tn_3, having first compromised tn_1, then tn_2, finally tn_3. Unfortunately, this cycle breaking process causes a combinatorial explosion in the number of nodes of the model. We discuss in Section 2.6 how we mitigate this limitation.
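As a minimal sketch of these two mechanisms, the following Python fragment combines the probabilities of grouped independent conditions and enumerates acyclic attack paths by keeping the visited nodes as the path memory. The function names and the toy topology are ours; the depth bound anticipates the nbSteps parameter discussed in Section 2.6.

```python
def combine_conditions(probs):
    """P(at least one of several independent conditions succeeds)."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def enumerate_attack_paths(adjacency, source, nb_steps):
    """Enumerate all acyclic attack paths from `source`.

    The visited topological nodes act as the path memory (no backtracking on
    already compromised nodes); the depth is bounded by nb_steps."""
    paths = []
    def walk(path):
        paths.append(tuple(path))
        if len(path) - 1 >= nb_steps:        # depth limit (nbSteps parameter)
            return
        for nxt in adjacency.get(path[-1], ()):
            if nxt not in path:              # never revisit a compromised node
                walk(path + [nxt])
    walk([source])
    return paths

# Toy topology with a cycle tn1 <-> tn2; both traversal orders are kept as paths.
adjacency = {"tn1": ["tn2", "tn3"], "tn2": ["tn1", "tn3"], "tn3": []}
print(enumerate_attack_paths(adjacency, "tn1", nb_steps=2))
print(combine_conditions([0.4, 0.6]))        # e.g. two vulnerabilities on one edge
```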
A Bayesian topological node btn (tn 1 , • • • , tn n ), with ∀i, tn i ∈ TN (cf.Def.1), is a node of the BAM representing the random variable describing the state of compromise of tn n using the path of the topological attack graph tn 1 → • • • → tn n (i.e., Compromised or NotCompromised).Definition 5. A Bayesian attack step node basn(as), with as ∈ AS (cf.Def.1), is a node of the BAM representing the random variable describing the attack success of as (i.e., Succeeded or Failed).Definition 6.A Bayesian condition node bcn(c), with c a condition (cf.Def. 2), is a node of the BAM representing the random variable describing that the condition c is fulfilled (i.e., Succeeded or Failed).Definition 7. A Bayesian sensor node bsen(s), with s a sensor (cf.Def.3), is a node of the BAM representing the random variable describing the state of the sensor s (i.e., Alert or NoAlert). These nodes are linked with edges, indicating that the child node has a conditional dependency to the state of its parents.For example, a Bayesian attack step node has a dependency toward its condition(s) and the topological node from which it may be accomplished.Thus it is the child of the nodes representing the conditions and the topological node.In the same way, a Bayesian sensor node is the child of a Bayesian attack step, and a Bayesian topological node is the child of a Bayesian attack step.Definition 8.A Bayesian edge e, is a link from a parent node to a child node that represents a conditional dependency of the child toward its parent. Appendix A Figure 3 shows the details of the representation of an attack step from tn n (source) to tn n+1 (target). Complete Bayesian Attack Model Bayesian Attack Tree and Global Model The complete BAM is composed of a family of Bayesian Attack Trees (BAT), as defined below, each one issued from one attack source.As all nodes are discrete random variables, the local probability distributions can be specified within a Conditional Probability Table .To build the whole structure of one BAT of the BAM, we start from a potential attack source of the acyclic TAG.It is the Attack Source and the root of the BAT.Then, we recursively add the attack steps contained in the acyclic TAG with the nodes described in Subsection 2.2.To avoid cycles, each attack step is added, as soon as its target has not been already compromised during the currently followed path.This can be achieved thanks to the memory of past topological nodes in Bayesian topological nodes.This building process also ensures that the graph structure of each BAT is a polytree: a Directed Acyclic Graph for which there are no undirected cycles either.This allows to use very efficient exact inference algorithms in the Bayesian network such as Pearl's algorithm [17]. The complete BAM is constituted of the set of all BATs.As we consider that each topological node may be a source of attack, the BAM contains exactly N BAT (i.e., the number of topological nodes in the TAG). 
Definition 10. The Bayesian Attack Model BAM({BAT_i}) is a family of N Bayesian networks where, for all i in {1..N}, BAT_i is a BAT whose attack source is node i in the topological attack graph. In the complete BAM, we thus have many Bayesian topological nodes representing the same asset of the IS. However, what most interests a security operator is the attacks that are the most likely to compromise his assets. Thus, as output of the consolidation of probabilities, we assign to a physical asset a probability of compromise that is the maximum of the probabilities of the Bayesian topological nodes targeting the same asset.

Conditional Probability Tables

We now specify the local probability distribution associated with each node, describing the probability dependencies of a node towards its parents. As the nodes are discrete random variables, we can describe the probability dependencies using conditional probability tables (CPT). A Bayesian topological node has one parent for each type of attack that can be used to compromise it. Its probability table represents a noisy-OR. At least one succeeded attack step is needed to compromise this node. Even if no known attack step has succeeded, there is still a small chance that this topological node has been compromised by an unknown attack (e.g., a 0-day); we denote this probability by pua. Such a CPT is described in Appendix B Table 1. An attack step node has two types of parents: (1) one Bayesian topological node, the source of the attack, required to perform the attack; (2) one or more Bayesian condition nodes. Depending on the type of attack modelled, the condition nodes may not exist for the attack node. The probabilityNewAttackStep parameter accounts for the fact that the attacker may already have reached his objective: even if he has compromised the topological node and the conditions are verified, it is not certain that he will attempt to propagate through the execution of a new exploit. We describe in Appendix B Table 2 the CPT of a Bayesian attack node, for the exploitation of a vulnerability. A sensor node has only one parent, the attack node related to the sensor. Its CPT thus contains only two values and their complements, representing the falsePositive and falseNegative rates attached to the sensor. The CPT of a Bayesian sensor node is described in Appendix B Table 3. The attack source of a BAT is a Bayesian topological node without parents. As such, it does not have a complete CPT, but only a prior probability value and its complement. This attackSourceProbability parameter represents the a priori probability of having an attack issued from this node. It thus has to be set by the operators, knowing the risk that an attack starts from a topological node. It can be deduced from a risk evaluation methodology (e.g., ISO 27005 [7]). In a typical system, a high probability can be set for the Internet (e.g., 0.7), a medium one for servers in a demilitarised zone (an internal subnetwork protected by a firewall exposing external-facing services to the Internet) (e.g., 0.4), and a small one for production database servers (e.g., 0.1).
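The exact tables are given in Appendix B; the sketch below only reconstructs the noisy-OR behaviour described above for a Bayesian topological node, with the pua leak probability (here p_unknown_attack) used when no known attack step has succeeded. The function name and the example values are illustrative.

```python
from itertools import product

def noisy_or_cpt(n_parents, p_unknown_attack=0.01):
    """Build a noisy-OR-style CPT for a Bayesian topological node.

    Each parent is a Bayesian attack step node (True = Succeeded).  The node is
    Compromised if at least one parent attack succeeded; p_unknown_attack is
    the leak probability of compromise through an unknown attack (e.g. a 0-day).
    """
    cpt = {}
    for states in product([False, True], repeat=n_parents):
        p_compromised = 1.0 if any(states) else p_unknown_attack
        cpt[states] = {"Compromised": p_compromised,
                       "NotCompromised": 1.0 - p_compromised}
    return cpt

# Example: a topological node reachable through two different attack types.
for parents, dist in noisy_or_cpt(2).items():
    print(parents, dist)
```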
The Attack conditions also do not have any parents.Their probability is the probability of successful exploitation P (c) associated with the condition.It highly depends on the type of condition modelled by this node.For example, for a condition describing the successful exploitation of at least one vulnerability of a list on a host.The estimation of this probability of successful exploitation follows the process detailed in Section 2.1, with values for each vulnerability, coming from the Exploitability Metrics of the CVSS, as explained in Section 2.1. Bayesian Attack Model usage We build our Bayesian Attack Model from the knowledge that the security operators have about the information system: network topology, known vulnerabilities and deployed detection sensors.Then, we change the state of the Bayesian sensor or topological nodes according to the security events received from the sensors: Sensor Nodes: If the sensor of an attack step exists and is deployed in the network, as long as it has not issued any alert, all related sensor nodes of the BAM (that may appear in several BATs) are set to the no alert state.When the sensor raises an alert corresponding to this attack step, the Bayesian sensor nodes are set to the alert state.If the sensor also gives an alert confidence probability, it is possible to set the state alert to this probability.Topological Nodes: As soon as a compromise information is known for a topological node, all related Bayesian topological nodes are set to the corresponding state.For example, if a Host Intrusion Detection System (HIDS) says that a host is healthy, the related Bayesian topological nodes in all BATs are set to the not compromised state.Conversely, if the HIDS says that a host is compromised, the related Bayesian topological nodes are set to the compromised state.If the HIDS also gives a compromise probability, the compromised state is set to this probability. The Bayesian nodes for which there is no compromise information (no deployed sensor, Bayesian attack step nodes and Bayesian condition nodes) are not set in any state and their probability are updated by the Bayesian inference. Each time the BAM changes state (when we fix nodes in a different state), we use a Bayesian network belief propagation algorithm (Lauritzen or Pearl's inference algorithm) to update the probabilities of each state at all the nodes.Then, for each topological node of the topological attack graph, the maximum probability of the state compromised of all related Bayesian topological nodes, provides security operators with the probability of the asset being compromised, as described in Subsection 2.3. Model size limitation Use of a nbSteps parameter to prevent performance issues: The main limitation when implementing this model is the combinatorial explosion of the number of nodes, due to the redundancy introduced by the cycle breaking process.In order to improve the performance and prevent this combinatorial explosion, we limit the number of successive attack steps added to each BAT , according to a nbSteps parameter.Thus, we can contain the number of nodes to process in the BAM, as detailed in Section 3.1. 
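A possible reading of this depth-limited, cycle-free construction of one BAT is sketched below. It is a simplified illustration: condition and sensor nodes are omitted, the topological attack graph is assumed to be given as a plain adjacency structure, and the function and field names are ours.

```python
def build_bat(tag, attack_source, nb_steps):
    """Enumerate the attack-step nodes of one BAT rooted at `attack_source`.

    `tag` is the topological attack graph given as an adjacency structure:
    {topological_node: [(successor_node, attack_step_id), ...]}.
    Cycles are broken thanks to the path memory carried by every Bayesian
    topological node: a step is only added if its target has not already been
    compromised on the currently followed path. The depth is capped by nb_steps.
    """
    bat_nodes = []

    def expand(path):
        if len(path) - 1 >= nb_steps:        # at most nb_steps successive attack steps
            return
        for successor, step_id in tag.get(path[-1], []):
            if successor in path:            # already on this path: skip, breaking the cycle
                continue
            new_path = path + (successor,)
            bat_nodes.append({"attack_step": step_id,
                              "source_path": path,        # Bayesian topological node (parent)
                              "target_path": new_path})    # Bayesian topological node (child)
            expand(new_path)

    expand((attack_source,))
    return bat_nodes

# The complete BAM is the family of BATs, one per potential attack source:
# bam = {source: build_bat(tag, source, nb_steps=3) for source in tag}
```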
Impact of the nbSteps parameter on the outputs of the BAM: Thanks to the redundancy of the model, and as each topological node is an attack source of a BAT , if some attack steps are discarded in a BAT , they will be in another BAT , closer to the BAT attack source.The probabilities of Bayesian topological nodes in a BAT represent the probability of the attacker exploiting this node starting from the attack source.As long as no attack has been detected on a path, the probability of a node compromise decreases rapidly as a function of the length of the path between the attack source and the node.During initial probability computation, the probabilities of nodes far from the attack sources are very low.These probabilities are below the maximum used during the probability consolidation detailed in Section 2.3 and do not have any effect on final compromise probabilities.In that case, the nbSteps parameter has no impact on final results. The key limitation this parameter introduces is when attacks start being detected and introduced in a path.More precisely, the limitation arises when more than two detections are injected in the model.For example, to compute the combined impact of two detections relative to each other, they need to appear in the same BAT.The maximum compromise probability of the topological node related to the first detection will be in the BAT in which it is the attack source.If the second detection is attached to a node that is more than nbSteps away (i.e., separated with more than nbSteps − 1 missed detections), it will not be in the same BAT and these two attacks will be taken into account separately.This will prevent the increase of probabilities of the nodes between the two detections.Detections may be separated by other nodes without detections for two reasons: if there are not enough sensors or if there are false negatives, both undesired cases.As a summary, the only case when the impact of the limitation of the BAT depth to nbSteps is significant is when there are more missed detections than nbSteps − 1 between two successive detections for the same attack.These assumptions are validated by the experimental validation of Section 3.2. 
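Before turning to the complexity analysis, the dynamic usage described earlier (sensor evidence, inference, consolidation) can be illustrated end to end on a single attack step, using the default parameter values of the paper. The sketch below uses the open-source pgmpy library as a stand-in for the authors' Java/SMILE implementation, so the API calls are an assumption about the reader's tooling, not the paper's code; exact variable elimination is used in place of the Pearl/Lauritzen propagation mentioned in the text, which on a polytree gives the same exact posteriors.

```python
# pip install pgmpy   (assumed tooling; the paper's implementation uses Java/SMILE)
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# One attack step of one BAT: src --(atk, guarded by cond)--> target, watched by sensor.
# State index 0 = NotCompromised / Failed / NoAlert, state index 1 = Compromised / Succeeded / Alert.
model = BayesianNetwork([("src", "atk"), ("cond", "atk"),
                         ("atk", "sensor"), ("atk", "target")])

p_src, p_cond, p_new, fp, fn, pua = 0.7, 0.8, 0.3, 0.05, 0.01, 0.001

model.add_cpds(
    TabularCPD("src", 2, [[1 - p_src], [p_src]]),        # attackSourceProbability (Internet)
    TabularCPD("cond", 2, [[1 - p_cond], [p_cond]]),     # P(c), e.g. from CVSS exploitability
    # Columns: (src=0,cond=0) (src=0,cond=1) (src=1,cond=0) (src=1,cond=1)
    TabularCPD("atk", 2, [[1.0, 1.0, 1.0, 1 - p_new],    # Failed
                          [0.0, 0.0, 0.0, p_new]],       # Succeeded: AND of parents * p_new
               evidence=["src", "cond"], evidence_card=[2, 2]),
    TabularCPD("sensor", 2, [[1 - fp, fn],               # NoAlert
                             [fp, 1 - fn]],              # Alert (false positive / negative rates)
               evidence=["atk"], evidence_card=[2]),
    TabularCPD("target", 2, [[1 - pua, 0.0],             # NotCompromised
                             [pua, 1.0]],                # Compromised (leaky OR with pua)
               evidence=["atk"], evidence_card=[2]),
)
assert model.check_model()

# Dynamic usage: the sensor of this attack step has raised an alert.
posterior = VariableElimination(model).query(["target"], evidence={"sensor": 1})
print(posterior)   # probability that the target topological node is compromised

# Consolidation (Subsection 2.3): the final probability of a physical asset is the
# maximum, over all BATs, of the "Compromised" probabilities of the Bayesian
# topological nodes that represent it.
```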
Complexity evaluation

The main computation done on each Bayesian Attack Tree of the BAM is the execution of the belief propagation algorithm (probability inference), which computes the probability of all nodes according to the evidence, i.e., the nodes set to a specific state. The complexity of inference in a Bayesian network is directly linked to the number of nodes and to the structure of the network. We estimate the number of nodes M of a BAT, depending on N, the number of topological nodes in the attack graph, and k, the maximum number of consolidated attack steps between two topological nodes in the topological attack graph (i.e., the maximum number of different types of attacks). M also depends strongly on the existence of attack steps between the topological nodes. An attack step needs the existence of at least one vulnerability and of an authorised network access, which depends entirely on the monitored information system. Thus, for this complexity evaluation, we consider the worst case: there are k attack steps between each pair of topological nodes. For each attack step, we add ≈ 4 nodes to the BAM (sometimes a few more, according to the number of conditions). Thus, in the worst case, for each BAT, starting from an attack source, the number of nodes to add is approximately M ≈ 4(kN) + 4(kN)^2 + ... + 4(kN)^nbSteps, i.e., M = O((kN)^nbSteps). The degree of the polynomial curves of the number of nodes in the BAM increases with the parameter nbSteps. However, even if the number of nodes in each BAT is high, the Bayesian inference can be done efficiently. Indeed, as the structure is a polytree, efficient exact inference algorithms can be used; for example, Pearl's belief propagation algorithm is linear in the number of nodes [17]. Thus, for each BAT, in the worst case, the complexity of the construction and probability inference is C(BAT) = O(N^nbSteps). Finally, for the whole BAM, as there are at most N attack sources, in the worst case, the complexity of the inference in the whole model is C(BAM) = O(N^(nbSteps+1)). The calculations on each BAT are independent, so they may easily be done in parallel, which gives in practice C(BAM) = O(N^nbSteps) with N processors.

Experimental use-case-based validation

We will first present a use-case and the scenarios that have been chosen for the experimental validation of the BAM, then discuss the results obtained.

Validation scenarios

In order to validate the accuracy of the results, while keeping the scenarios simple to explain, we implemented a real infrastructure of 11 virtual machines, for a total of about a hundred vulnerabilities. A host (called host A thereafter) can be attacked from the Internet, and can attack the other hosts G to J of its subnetwork. The latter hosts can attack hosts A, C and D. This network topology is representative of a real information system, where an ingress firewall (host K) protects the LAN (E to J), and where publicly accessible servers are put in a demilitarised zone (A to D). The topological attack graph used to populate the BAM has been generated from a report of the vulnerability scanner Nessus run on this infrastructure.
We apply 6 attack scenarios on this network topology, as summarised in Appendix C Table 4.The attack is carried out through three attack steps.In the first scenario, no step is detected; it represents the basic risk of the IT system.In scenarios 2 to 4, steps are detected and alerts are generated.Scenarios 5 and 6 represent detection anomalies.These scenarios represent the dynamic evolution of a system with different possible situations: -Scenarios 1, 2, 3, then 4: Normal evolution of an attack during the time. -Scenarios 1, 2, then 5: Evolution of an attack in which an attack step cannot be detected (no sensor for this step).-Scenarios 1, 2, then 6: Evolution of an attack in which an attack step has not been detected while there was a sensor for this step. We assume in these scenarios that the alerts given by the sensors are binary (alert, noalert), i.e., we do not have alert confidence. Parameters default values This use-case represents a typical critical IS.It is managed by a security operator who often uses a vulnerability scanner.Most vulnerabilities are known, but there is still a chance (e.g., 0.1%) that a very motivated attacker knows a non-public vulnerability.As the system contains known unpatched vulnerabilities, sensors are deployed to raise an alert when one of the vulnerabilities is exploited.These sensors have a medium chance (e.g., 5%) to raise false positives, when an attack do not succeed while being detected. However, for the vulnerabilities for which a detection sensor is deployed, the probability of having a false negative is lower (e.g., 1%).The operator knows that his system is quite well protected, so it is very unlikely that an attack occurs with more than 2 undetected steps (e.g., nbSteps can be set to 3).Most attacks may come from the Internet (e.g., probability of the Internet being a source of attack of 70%), even if internal hosts may also be a new source of attacks (undetected phishing, malicious employee, etc.) with a lower probability (e.g., 10%)).Finally, as valuable machines are not deeply protected (they can be reached in 3 steps from the Internet), the probability that the attacker propagates through a new attack step is medium (e.g., 30%).After an attack, he may have already found what he was looking for.Default values of the parameters used for this use case are summarised in Appendix D Table 5. Results and analysis The Bayesian Attack Model was implemented in Java, using the SMILE Bayesian Network library [4].The results of the compromise probabilities of each topological node calculated by the BAM, for each scenario, are shown in Appendix E Figure 4.The first scenario is the basic risk.The only host that has a medium risk is the Internet.The other hosts have a notsignificant risk.In the scenarios 2, 3 and 4, the sensors corresponding to the 3 steps attack are set progressively.Each new sensor set as detected confirms the attack that is currently happening and increases the compromise probability of the previous and future states.For example, in scenario 4, the Internet, and the 3 victim hosts are in the high-risk zone.In scenario 5, and scenario 6, when there is a missing detection or a false negative / false positive, the probabilities of an ongoing attack are lower, but higher than the basic risk, and the probabilities of scenario 2, that should precede this state.So, a security operator may investigate the appropriate machines to confirm or disprove the attack. 
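For reference, the default parameter values and the six detection scenarios just discussed can be written down as a small configuration structure. The dictionary layout is ours; the entries for scenarios 5 and 6 reflect our reading of Appendix C, Table 4 (first and third alerts present, anomaly on the second step).

```python
# Default parameters of the use-case (Appendix D, Table 5) and sensor evidence of the
# six scenarios (Appendix C, Table 4). Each scenario maps an attack step to the state
# of its sensor (True = alert, False = noalert, None = no sensor / no value set).
DEFAULTS = {
    "probabilityUnknownAttack": 0.001,   # chance that an unknown (0-day) attack exists
    "falsePositive": 0.05,
    "falseNegative": 0.01,
    "nbSteps": 3,
    "probabilityInternet": 0.7,          # attack-source prior of the Internet node
    "probabilityOtherHosts": 0.1,        # attack-source prior of internal hosts
    "probabilityNewAttackStep": 0.3,
}

ATTACK_STEPS = ["I->A", "A->G", "G->D"]  # the three steps of the simulated attack

SCENARIOS = {
    1: {},                                            # basic risk, nothing detected
    2: {"I->A": True},                                # first alert
    3: {"I->A": True, "A->G": True},                  # second alert
    4: {"I->A": True, "A->G": True, "G->D": True},    # third alert
    5: {"I->A": True, "A->G": None,  "G->D": True},   # no sensor for the second step
    6: {"I->A": True, "A->G": False, "G->D": True},   # sensor present, no alert (missed)
}
```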
Parameter sensitivity analysis Several parameters can be customised in the BAM (cf.Section 3.2).We summarise in Appendix F, Table 6 the results of the sensitivity analysis of these parameters with the range of variation that we find appropriate for the given parameters (range of values that may occur in real-life).The false positives and negatives rates vary from 0 to 30%, because beyond, their values are meaning-less (e.g., a vulnerability signature with more 30% false positive is useless).The number of successive steps varies from 1 (its minimum) to 4 (the maximum possible number of successive attack steps for this use-case).Sources probabilities vary from 0 to 1, as, according to the context, all values may be possible.The probability of having an exploitation of an unknown vulnerability is low (15% is a far upper bound).The probability of the attacker making a new attack step is difficult to estimate.We thus need to study the impact of this parameter on its whole possible variation interval (0 to 100%). The most interesting result of this analysis is the ranking influence describing the impact of the variation of a parameter on the rank of topological nodes probabilities (on the whole parameter variation range, for the 6 scenarios).This rank will determine the priorities of security operators in their IS.The probability influence describes the effect of the variation of the parameters on the absolute value of the topological nodes probability.The only parameter that has an impact on the rank of the topological nodes probabilities is probabilityOtherHosts.However, this parameter can be estimated quite accurately with a risk analysis methodology, which gives the security risk of each topological node, according to its position in the information system.All other parameters do not have any effect on the ranking on their whole variation range, which is a comforting result.Four parameters have a medium impact on the absolute value of the compromise probabilities of topological nodes.With a medium uncertainty on such parameters (e.g., 0.2), the variation of the absolute value of the probabilities is medium (e.g., up to 0.2).Other parameters have a low impact on absolute values of probabilities.With a medium uncertainty on such parameters (e.g., 0.2), the variation of the absolute value of the probabilities is low (e.g., up to 0.02).So, absolute value of probabilities may be a little impacted by uncertainty on parameters, but rank is mostly not impacted by the variation of the parameters. Performance evaluation In order to dynamically assess the risk of a system, the BAM has to be evaluated each time a correlated alert, or a set of correlated alerts is received: the sensors and topological nodes are set in their new states, then the probabilities are updated.The duration of such a process needs to be quite fast (around 1 minute is good), for the operator to properly understand the risk in operational time.We simulate random network topologies with different parameters (number of hosts, subnets, vulnerabilities and network services and connectivity between subnets) to evaluate the performance of the BAM.We generate the TAGs related to the topologies.Then, we generate random attack scenarios with seven successive attack steps.Finally, we evaluate the BAM on the different scenarios. Fig. 2. 
Network topology for simulations We generate random topologies, as depicted in Figure 2, containing from 1 to 70 hosts, in 7 subnets.These topologies are representative of a real network in which defense in depth is implemented: all the hosts of a subnet have access to all the hosts of a deeper subnet.In each subnet, all accesses between hosts are authorized.Each host has 30 random vulnerabilities for a maximum total of around 2000 vulnerabilities.The results of the duration in seconds of the BAM generation and the inference after the evaluation of one scenario of 7 successive attack steps, on these topologies, is displayed in Appendix G Figure 5.The parameters of the BAM are in the default values detailed in Section 5.This simulation shows that for medium-sized topologies (up to 70 hosts) the duration of the Bayesian Attack Model generation and of the inference remains acceptable (< 1 minute 30 seconds) on a laptop-class computer. Even if the number of topological nodes of these simulations is limited (70 hosts), it could be extended to much bigger IS, by clustering together identical templates of servers or of client machines in one topological node, as they possess the same vulnerabilities and authorised accesses and thus behave in a similar way in the BAM.Even with 60 assets in the topological attack graph with, for example, 10 templates of client machines, 30 of network servers, and 20 of business application servers, it is possible to model a usual big-sized IS. Accuracy evaluation To evaluate the accuracy of the results (i.e., how close the probabilities are to the truth), we simulate attack scenarios on the random topologies presented in Section 3.3 and compare the theoretical results with the outputs of the BAM.The results are shown in Appendix G Figure 6.We compare the theoretical results known in the scenarios with the results of the BAM.In each scenario, we know the nodes that are compromised and healthy, i.e., nodes with a theoretical probability of respectively 1 and 0.Then, we assess if the BAM probabilities of compromised nodes are close to 1, and if the BAM probabilities of healthy nodes are close to 0. The plot shows the maximum errors (in terms of distance to the theoretical values 1 and 0) of compromised and healthy nodes.This figure shows a large free space between the errors on compromised hosts and the errors on healthy hosts.This means that if there are no false-positives nor false-negatives in the detection inputs of the BAM, it allows to distinguish exactly healthy and compromised hosts, for example with a boundary at the probability of 0.5.So, there are no false negatives nor false positives introduced by the BAM.The graphical difference of the results between the values for a low number of hosts and high number of hosts is probably due to the random attack scenarios that may be shorter when there are not enough hosts. Related Work Many people proposed enhancements to improve attack graphs with Bayesian networks, to use them for dynamic risk assessment [20,13,21].However, they do not describe how they manage cycles that are inherent to attack graphs.In [21], Xie et al. 
present an extension of MulVAL attack graphs using Bayesian networks, but they do not mention how to manage the cycle problem, while MulVAL attack graphs frequently contain cycles.In the same way, in [6], Frigault and Wang do not mention how they deal with the cycle problem constructing Bayesian attack graphs.In [13], Liu and Man assert that to delete cycles, they assume that an attacker will never backtrack.The same assumption is used by Poolsappasit et al. in [19].However, they both do not present how they deal with this assumption to keep all possible paths in the graph, while deleting cycles.We propose here a novel model that explodes cycles in the building process, keeping all possible paths while deleting the cycles, to compute the Bayesian inference. The Bayesian model presented by Xie et al. in [21] is based on logical attack graphs.It is thus very verbose and can be huge for real information systems.In [13], Liu and Man's model is a topological graph, in which are added violation states.It is thus quite compact, but does not detail the attacks, their conditions and, mainly, the sensors that can change state.Thus, the only observations that can be set on this model are observations on topological nodes.The model we present is a topological model.So, it is much more compact than those based on logical attack graphs.However, it contains the logical conditions necessary to carry out the attacks, in order to keep all information important to model attacks, and add sensor nodes that can be activated with detections.Moreover, we also add several improvements (attack nodes gathering, polytree structure of BAT, etc.) that either reduce the size of the graph structure or improve the performance of the inference.We thus constrain the size of the graph in which we do Bayesian inference, while conserving all paths by linearising cycles. The experimental validation we did on the Bayesian Attack Model is on a real topology of a complexity similar or superior to what was done in the literature and on simulated topologies that are far bigger than the state of the art.For example, Xie et al. assess their model on a topology of 3 hosts and 3 vulnerabilities [21], Liu and Man on a topology of 4 hosts and 8 vulnerabilities [13].The real world examples used by Frigault and Wang in [6] contain at most 8 vulnerabilities on 4 hosts.The test network used by Poolsappasit et al. in [19] contains 8 hosts in 2 subnets, but with only 13 vulnerabilities.Thanks to our polytree model, we successfully run our Bayesian Attack Model efficiently on simulated topologies with up to 70 hosts for a total of more than 2000 vulnerabilities. 
Conclusion and Future Work We present in this paper a new Bayesian Attack Model (BAM), representing all the possible attacks in an information system.This model enables dynamic risk assessment.It is built from a topological attack graph, using already available information.Sensor nodes can be activated by dynamic security events to update the compromise probabilities of topological assets, which rank the risk level of ongoing attacks.This model handles the cycles that are inherent to attack graphs and thus is applicable to any information system, with multiple potential attack sources.The cycle breaking process significantly increases the number of nodes in the model, but thanks to the polytree structure of the Bayesian networks we build, the inference remains efficient, for medium information systems.In order to be able to use the Bayesian Attack Model for bigger information systems, future work will investigate how the usage of a hierarchical topological attack graph can be appropriate to build the Bayesian Attack Model. A Appendix: Detail of a Bayesian attack step Figure 3 shows the details of the representation of an attack step from tn n (source) to tn n+1 (target).It is composed of a Bayesian attack step node that binds a Bayesian topological node to another one.This Bayesian attack step has two conditions (bcn 1 and bcn 2 ) and a sensor (bsen). B Appendix: Conditional Probability Tables In this appendix, we detail the conditional probability tables (CPTs) associated with the nodes of the Bayesian Attack Model.Each node with at least one parent is associated with a CPT which depends on its type of node.In these tables, the first lines represent all possible states of the parents.The last lines contain the probabilities of each state of the child node according to the states of its parents. Table 1 shows the CPT of a Bayesian topological node, according to the states of its parents: Bayesian attack step nodes.It represents a noisy OR: an OR with a small residual probability (the probabilityU nknownAttack parameter).Finally, Table 3 shows the CPT of a Bayesian sensor node, according to the state of its parent: a Bayesian attack step node.It represents the potential false positive and false negative rates of the sensor. C Appendix: Simulation scenarios Table 4 shows the detection scenarios applied on the use-case.In the first scenario, no step is detected; it represents the basic risk of the IT system.In scenarios 2 to 4, steps I → A, A → G and G → D are progressively detected and alerts are generated.Scenarios 5 and 6 represent detection anomalies on A → G (no sensor information or false negative). E Appendix: Validation results Figure 4 shows the results of the BAM for the six scenarios of the use case presented in Subsection 3.2.Markers represent hosts of the topology, and the ordinate is their compromise probability in the abscissa scenario.The horizontal lines give some idea of the threshold that could be taken to define the compromise risk level of the hosts.For example, the hosts under the lowest line (probability ≤ 0.25) have a not-significant risk of being compromised, above the lowest line (0.25 < probability ≤ 0.50) have a low risk, above the second line (0.50 < probability ≤ 0.75) have a medium risk, and above the upper line (0.75 < probability) have a high risk of being compromised. 
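A trivial helper that mirrors these risk bands might look as follows; the function name is ours.

```python
def risk_level(p_compromised: float) -> str:
    """Map a consolidated compromise probability to the risk bands used to read Figure 4."""
    if p_compromised <= 0.25:
        return "not significant"
    if p_compromised <= 0.50:
        return "low"
    if p_compromised <= 0.75:
        return "medium"
    return "high"
```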
For readability of the results, the hosts having the same value (more or less 10^-10) for all the scenarios have been grouped on one point, and the points are spread around the scenario number.

F Appendix: Sensitivity analysis of the parameters

Table 6 summarises the sensitivity analysis of the parameters of the Bayesian Attack Model. We give in this table the range of variation that we find appropriate for each parameter. Then, we study the influence of each parameter, over its whole variation range, on the ranking of the compromise probabilities of the hosts and on the value of the probabilities. Among the per-parameter observations reported in the table: with the increase of the probability of an unknown attack, the probability of attackable hosts increases slowly, as it is more probable that they are attacked using attacks that are not known and cannot be detected; for the probability of a new attack step, when this parameter is small (increase from 0 to 0.3), it represents that even if an attack is possible, it may not happen, and beyond (from 0.3 to 1) it represents that even if an attacker has compromised a host, he may not do another attack. For low parameter values (between 0 and 0.05) the effect is amplified by the number of detections set, and the probabilities of the hosts (except the Internet) increase with the increase of such parameters, with a stronger increase for hosts attackable from many hosts.

Table 6. Sensitivity analysis of the parameters of the BAM

G Appendix: Performance and evaluation results

Figure 5 presents the results of the duration in seconds of the Bayesian Attack Model generation and of the inference after the evaluation of one scenario of 7 successive attack steps, on random simulated topologies from 1 to 70 hosts.

Fig. 5. Duration in seconds, according to the number of hosts

Figure 6 presents the results of an accuracy evaluation of the BAM on random simulated topologies. The curve with triangles represents the mean and standard deviation, over 10 simulations, of the minimum probability of the hosts known as compromised. The curve with circles represents the mean and standard deviation, over 10 simulations, of the maximum probability of the hosts known as healthy. In other words, this graph shows the maximum errors (in terms of distance to the theoretical values 1 and 0) of compromised and healthy nodes.

Fig. 6. Accuracy of the results of the BAM according to the number of hosts

Definition 9. A Bayesian Attack Tree is a Bayesian network represented by BAT(AS, DAG, P) where: AS is a special Bayesian topological node, the attack source of this BAT; DAG(BN, E) is a polytree structure, constituted of BN, the Bayesian nodes (btn, basn, bcn, bsen, cf. Defs. 4-7), and E, the set of edges E = {e} representing a conditional dependency between the nodes (cf. Def. 8); P is a set of local probability distributions, associated with each node of the DAG.

Fig. 1. Figure 1 summarises the global architecture of the BAM. In this example, it is built from a TAG containing 3 nodes and thus is composed of 3 BATs.

Fig. 4. Results for each scenario

Table 4 legend: I → A: attack from the Internet to host A; A → G: attack from host A to host G; G → D: attack from host G to host D; O: no value set (= no sensor); alert mark: sensor node set to alert; ×: sensor node set to noalert. In scenario 5, no information is available for the second step (no sensor); in scenario 6, there is no detection for the second step.

Table 1. CPT of a Bayesian topological node

Table 2 shows the CPT of a Bayesian attack step node, for the exploitation of a vulnerability, according to the states of its parents: a Bayesian topological node and a Bayesian condition node. It represents an AND on the parents with the probabilityNewAttackStep parameter, when all conditions are fulfilled.

Table 2. CPT of a Bayesian attack step node "exploitation of a vulnerability"

Table 3. CPT of a Bayesian sensor node

Table 4. Simulation scenarios

D Appendix: Default values of the parameters

Table 5 presents all the parameters of the Bayesian Attack Model: probabilityUnknownAttack, falsePositive, falseNegative, nbSteps, probabilityInternet, probabilityOtherHosts, and probabilityNewAttackStep. Each parameter is associated with its description and the default value that was chosen for the use-cases.

Table 5. Default values of the parameters used in the BAM
2016-06-29T11:01:21.000Z
2016-06-29T00:00:00.000
{ "year": 2016, "sha1": "828d4ca428840a4c167a1fecc1cb28fed2666811", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0a30b37433eed8bad4a9d6e7beeedf4e8fa53a96", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
169102261
pes2o/s2orc
v3-fos-license
Method of Clustering in Fashion Industry Sector with the Aim of Raising the Quality of Business for the Decision Makers

Abstract: The paper presents the influence of smart technologies on decision making in companies around the world. It focuses on providing support to the fashion industry and to marketing decisions based on access to data through Data Mining (DM). Today's customers have such diverse tastes and desires that it is not possible to group them into large, homogeneous populations in order to develop marketing strategies. In fact, every customer wants to be served according to his individual and unique needs. Therefore, the shift from mass marketing to one-to-one relationship marketing requires decision makers to provide a specific strategy for each individual customer based on his profile.

Introduction

Modern society, whose main characteristic is data, has been in the Internet revolution for several decades now. Companies such as Google, Facebook, Amazon and Microsoft, which realized this first, captured the earliest development and profits and became the state-of-the-art companies that everyone looks to. The methodology of data warehousing and manipulation is extremely important, as is the methodology of data interpretation. This analytics is not just a presentation in spreadsheets; it is a complete communication technology built on supercomputers, in combination with sensors, the cloud, robotics, etc. Data research dates back to the beginning of the twentieth century with the first developments of computing; driven by the wars, research then turned to areas such as cryptography and logic. The next point in history that is significant for the development of business data is the 1970s and 1980s. Nevertheless, it is worth mentioning recent sources such as McKinsey's work from the 2012 World Economic Forum in Davos, Switzerland, where a survey titled "Big Data, Big Impact" was presented. After the aforementioned work, Barack Obama's Cabinet proclaimed the Big Data Initiative, for which $200M was allocated and to which institutions such as academia, government, NGOs and private corporations were invited to work on exploring the key issues of big data. Today, the situation is such that data takes an essential role in people's lives. The term Big Data has passed into history, and now data of all forms is simply called data. There are three relevant terms in science and technology that are direct participants in modern marketing and decision making. As the purpose of modern business is growth in every direction, the demands placed on new technologies are becoming more and more challenging. It is almost unthinkable to create a successful business in the future without the following three components [1]:

• Data mining (DM) - the discipline of discovering patterns in large data sets; the baseline for analysis of all types of structured and unstructured data; it is based on database systems and statistics.
• Machine Learning (ML) - the discipline of using algorithms that create predictions from historical data, whereby, depending on the amount of historical data, the computer trains itself to become independent.
• Artificial Intelligence (AI) - a broad discipline that includes data mining and machine learning; it uses processes from these disciplines to reproduce the characteristics of human intelligence. At some point, in sufficiently technologically developed societies, the question of the role of traditional marketing in business is posed.
Fashion industry and decision making Fashion is a major global industry. The global apparel market is valued at between US$2.4 trillion2/ and US$3 trillion and accounts for 2 percent of the world's Gross Domestic Product (GDP). 57.8 million People are employed in clothing and textiles worldwide -24.8 million of those in apparel manufacture. [2] Digital technologies reshape markets and value chains for fashion content and information, leading to an opportunity for innovative businesses to create value added services, applications and products. ICT forms the enabling element that brings to market these services, applications and products across all sectors through production, distribution and e-commerce. The life cycle of fashion products becomes shorter in recent years due to the fierce market competition environment. Short life cycles, high volatility, low predictability, and high impulse purchasing is being appointed has characteristics of fashion industry. The data that companies collect about their customers is one of its greatest assets. However, companies increasingly tend to accumulate huge amounts of customer data in large databases and within this vast amount of data are all sorts of valuable information that could make a significant difference to the way in which any company run their business, and interact with their current and prospective customers and gaining competitive edge on their competitors. [3] Global markets demand innovation. DM is a very powerful tool that should be used for increasing customer satisfaction providing best, safe and useful products at reasonable and economical prices as well for making the business more competitive and profitable. A clustering algorithm assigns data points to different groups, some that are similar and others that are dissimilar. The use of clustering involves placing data into related groups typically without advance knowledge of group definitions. Companies can utilize DM techniques to extract the unknown and potentially useful information about customer characteristics and their purchase patterns DM tools can, then, predict future trends and behaviours, allowing businesses to make knowledge driven decisions that will affect the company, both short term and long term. The identification of such patterns in data is the first step to gaining useful marketing insights and making critical marketing decisions. [4] The concept of applied statistics in the sales sector The first question that concerns business and the data is -How to use Data Analytics to increase the shareholder value? Consumer society based business relies on various media that serve as a channel for data integration, where both structured and unstructured data are rapidly growing day by day. Companies have billions of numerical records, demographics, records from social networks, textual records, video and image records, etc. Such a conglomerate base is a challenge for successful business that balances between strategy, marketing, structures, and algorithms. The business that first sees alarms, anomalies, rules, and results in data has the lead in an everlasting competition between the competitions. Cluster analysis or clustering is the task of grouping customers in such a way that customers in the same cluster group are more similar to customers in other groups. Customers in the same clusters share features that lead to a similar product group or similar service group. 
Algorithms as mathematical concepts are aimed at theoretical grounding, however, their expansion comes only after application in the economy and industry. The science of data that has evolved over the years gets its shape only when it is applied to concrete datasets and when results that yield success on it are seen. Many algorithms are developed for the benefit of data science; it is worth mentioning only the most basic groups: Clustering, Logistic regression, Linear models, Support vector machines, decision trees, neural networks, etc. This paper presents clustering. Clustering is one of the methods of unexpected learning that allows clustering of instances into groups, where the number and size of groups is not known in advance. The essence of clustering is to create as homogeneous groups as possible as heterogeneous among themselves. Clustering goals are a better understanding of consumer behaviour, sales campaign planning, customer retention, optimization of marketing costs, quality engagement of customers through appropriate channels and impact on consumer behaviour. The clustering information can be used to "tag" customers in the overall database. Customer clustering uses purchase transaction data to track buying behaviour and then create new business initiatives based upon findings, like sales campaigns and customer retention. Cluster analysis is one of the most important segmentation methods and it has long been the dominant and preferred method for market segmentation. [5] Therefore, clustering methods are commonly used in marketing for the identification and definition of market segments that become a focus of a company´s marketing strategy. [6] Traditionally, marketers must first identify customer cluster using a mathematical mode and then implement an efficient campaign plan to target profitable customers. [7] Applications of Clustering Being part of a cluster allows companies to operate more productively in sourcing inputs; accessing information, technology, and needed institutions; coordinating with related companies; and measuring and motivating improvement. [8] Competition in today's economy is far more dynamic. Companies want to keep high-profit, high-value, and low-risk customers. This cluster typically represents the 10 to 20 percent of customers who create 50 to 80 percent of a company's profits. A company would not want to lose these customers, and the strategic initiative for the segment is obviously retention. A low-profit, high-value, and low-risk customer segment is also an attractive one, and the obvious goal here would be to increase profitability for this segment. Cross-selling (selling new products) and up-selling (selling more of what customers currently buy) to this segment are the marketing initiatives of choice. [9] K Mean algorithm Clustering is widely used method which includes huge number of algorithms. K Means algorithm was applied for the purpose of the research. K Mean algorithm is a part of the Centroid method. These are iterative clustering algorithms in which the notion of similarity is derived by the closeness of a data point to the centroid of the clusters. K-Means clustering algorithm is a popular algorithm that falls into this category. In these models, the number of clusters required at the end has to be mentioned beforehand, which makes it important to have prior knowledge of the dataset. These models run iteratively to find the local optima. 
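Before the formal definition below, a segmentation workflow of this kind can be sketched with scikit-learn. The paper does not name a specific implementation, so the library, the input file and the column names here are assumptions, chosen to mirror the basic and special characteristics described in the methodology section.

```python
# pip install scikit-learn pandas   (illustrative sketch; file and column names are hypothetical)
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

customers = pd.read_csv("loyalty_customers.csv")
features = customers[[
    "n_purchase_days",        # number of days on which the customer purchased
    "avg_receipt_amount",     # average amount on the account
    "loyalty_card_months",    # length of holding the loyalty card
    "purchases_per_year",     # purchase frequency
    "vitality",               # probability that the customer is active (churn model)
    "expected_avg_amount",    # predicted average amount for the next year
    "expected_n_purchases",   # predicted number of purchases for the next year
    "expected_revenue",       # predicted revenue for the next year
]]

scaler = StandardScaler()
X = scaler.fit_transform(features)                        # put all variables on one scale
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)

customers["cluster"] = kmeans.labels_
print(customers.groupby("cluster")["expected_revenue"].agg(["count", "mean"]))
```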
Given a set of observations (x_1, x_2, ..., x_n), where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k (≤ n) sets S = {S_1, S_2, ..., S_k} so as to minimize the within-cluster sum of squares (WCSS) [10]. Formally, the objective is to find arg min_S Σ_{i=1}^{k} Σ_{x ∈ S_i} ||x − m_i||^2, where m_i is the mean of the points in S_i [9].

Methodology of research

Based on the testing of multiple algorithms, it has been concluded that it is necessary to segment these customers into certain categories in order to make it easier to communicate with them and to create precise and successful campaigns. The characteristics, or variables, used in the cluster model are divided into two groups: basic characteristics and special characteristics. The basic characteristics are the number of purchases so far (the number of days on which a purchase was made), the average amount on the account, the length of holding the loyalty card, and the frequency of purchases (average number of purchases per year). The special characteristics are Vitality (the probability that the customer is active, calculated on the basis of a dedicated churn model), Expected average amount on the account in 2018 (calculated by a dedicated model), Expected number of purchases in 2018 (calculated by a dedicated model), and Expected revenue in 2018 (calculated by a dedicated model). After determining the variables involved in the modelling, k-means clustering was applied and 8 clusters were obtained, of which 6 are significant for the analysis. The clusters are named on the basis of their most important characteristics: Average, Seasonal, New, Churn, Shopaholic and Best customers; in addition there are the Inactive customers and the customers who have been active for less than 15 days, as shown in Figure 1.

Figure 1: Percentage of clusters

Cluster 1 - Average. This cluster is characterized by buying cheap items and mostly women's items, such as hollyhocks and women's jeans. These customers buy shoes significantly less than average. The number of loyal customers in the cluster is 127; the number of men is 42 and the number of women is 85. This is the most numerous cluster, and it accounts for 15.5% of the total revenue of loyal customers. Customers of this cluster are more vital than average, and the length of possession of a loyalty card is also higher than the average of all loyal customers.

Cluster 2 - Shopaholic. This cluster is characterized by buying often; however, the amount on the accounts is less than the average. The cluster is made up predominantly of women who have been loyal to the program for a long time. There is little risk of losing these customers. Concerning revenue share, 35% of total revenue comes from this cluster. The expected revenue for 2018 is significantly higher than the average.

Cluster 3 - Seasonal. This cluster is characterized by buying on average twice a year, in the summer and winter periods. The customers are at a low risk of becoming inactive (churn) and most often buy jackets, caps and equipment for the sea; however, the revenue predicted for 2018 is small.

Cluster 4 - New. This cluster is characterized by customers who have been in the loyalty program for up to half a year. It is made up mostly of women, who most commonly buy earrings, bags and chains. This cluster is at a low risk of churn, and significantly higher revenue than average is predicted for it.

Cluster 5 - Churn. This cluster is characterized by buying shoes much more than the other clusters. The likelihood that these customers become inactive (churn) is as high as 50%. These are generally good buyers who show a slight downward trend in purchasing.
Saving a loyal customer is 10 times cheaper than gaining a new loyalty, it was necessary to pay extra attention to this cluster. It is divided into two groups, weaker outgoing customers and better outgoing customers. The properties that were used for additional clustering are vitality and expected revenue in 2018. The goal was to anticipate which customers are leaving and what the revenue from each customer is in 2018. The calculator has shown that if the departing loyal customers retain and thereby cross into the nearest clusters, they predict the income from 7.500 €. Cluster 6 -Best This cluster is characterized by the purchase of extremely often and extremely expensive items. The account amounts for these customers are extremely high. They are naturally characteristic. They are expected to earn 3 times the average. About 34% of total income comes from the best buyers, shown on table 1 and figure 2. More often, they buy steaming items than individual goods. Table1. Share in total profit for loyalty customers Conclusion Clusters serve to make it easier to do upselling and cross selling. Similarities and differences are observed, as well as the hierarchy of clusters and activities are undertaken in the direction of migration of customers from the cluster to the cluster, in order to increase profitability and to influence the change in the behaviour of the consumer-participants. Cluster hierarchy has made the following recommendations: Figure 3: Cluster hierarchy The figure 3. shows which cluster comes from and in which direction it works best in order to show the results first. The new buyers become Shopaholic when increasing the number of purchases, and on the other hand, seasonal buyers become an increase in the number of purchases by the average buyers. The average increase in the number of purchases is becoming a buyer for everyone. Shoppers, when buying expensive items, become the best buyers. Outgoing customers in a better group need to be kept in order to make them the Best Buyers. In general, clustering as a technique of machine learning is extremely useful in high positioned decision making, and this technique is basically modelling sales -missed, linked, personalized, targeted. Intelligently programmed groups are useful for BI analysis due to accurate customer comparison; Groups are useful and necessary when designing group sales campaigns; groups introduce the company into a new level of business and build the level of personalization of customers; The groups give the possibility of alarm or easier detection when it becomes inactive (churn), who changes behaviour and how. Statistical modelling in company management processes and decisions in these companies is a sufficient and necessary condition for a sustainable and successful business.
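Continuing the segmentation sketch above (it reuses the fitted kmeans, scaler and features from it), the cluster-hierarchy recommendations can be encoded as a simple lookup used when scoring a new loyalty customer. The migration map below is our encoding of the recommendations in the text, and the entry for Average in particular is our interpretation of the recommendation for average buyers.

```python
import pandas as pd

MIGRATION_TARGET = {
    "New": "Shopaholic",         # new buyers: raise the number of purchases
    "Seasonal": "Average",       # seasonal buyers: raise the number of purchases
    "Average": "Shopaholic",
    "Shopaholic": "Best",        # shopaholics: move towards more expensive items
    "Churn": "Best",             # outgoing customers: retain and develop them
}

new_customer = pd.DataFrame(
    [[12, 35.0, 6, 10, 0.9, 40.0, 14, 560.0]], columns=features.columns)
cluster_id = kmeans.predict(scaler.transform(new_customer))[0]
# cluster_id is a numeric label (0-7); it still has to be mapped to the business names
# (Average, Seasonal, ...) by inspecting the cluster centroids and profiles.
```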
2019-05-30T23:47:07.051Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "75263004c9a90df5f759dcb5e618e3b26c5054e3", "oa_license": null, "oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/2466-4693/2018/2466-46931801001J.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "465db24f537f69aa191e45d765312946d81776cf", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Computer Science" ] }
15393695
pes2o/s2orc
v3-fos-license
A Prerecognition Model for Hot Topic Discovery Based on Microblogging Data The microblogging is prevailing since its easy and anonymous information sharing at Internet, which also brings the issue of dispersing negative topics, or even rumors. Many researchers have focused on how to find and trace emerging topics for analysis. When adopting topic detection and tracking techniques to find hot topics with streamed microblogging data, it will meet obstacles like streamed microblogging data clustering, topic hotness definition, and emerging hot topic discovery. This paper schemes a novel prerecognition model for hot topic discovery. In this model, the concepts of the topic life cycle, the hot velocity, and the hot acceleration are promoted to calculate the change of topic hotness, which aims to discover those emerging hot topics before they boost and break out. Our experiments show that this new model would help to discover potential hot topics efficiently and achieve considerable performance. Introduction Microblogging (post) is a mini blog which is typically smaller in both actual and aggregate file size comparing with a traditional blog. Microblogging allows users to exchange small elements of content such as short sentences, individual images, or video links. As a convenient communication means, especially with mobile phone, microblogging has been prevailing in the Internet. Sina Weibo (a Chinese Twitter) produces 25,000,000 messages each day, and Twitter gets 50,000,000 for each day. In our opinion, there are two main reasons that bring the bloom of microblogging. The first reason is the initiative of posting concerning messages of each person ranging from the simple such as "what I'm doing right now" to the thematic such as political theme. The second reason is that the mobile phone would help users to utilize the splitting time to concern the topics on the microblogging systems. With a large amount of reading and communication from users, it is quite understanding that hot topics would show up since most of people are concerned about those emergent incidents, such as "missing flight MH370. " Of course there are a lot of rumors since Internet is anonymous. It is a good way for local government and department to publish latest news about their work to dismiss rumors. However we argue that it is more important to discover those hot topics in advance. That means we need to construct a prerecognition model for hot topic discovery. Most of current work usually focuses on the postrecognition of hot topic discovery for analysis with history dataset. They are difficult to check the real-time status of topics, which is unfavorable to control those rumors. In this paper, we emphasize our work on the prerecognition mechanism and propose a novel hot topic discovery system which integrates previous hot topic discovery mechanisms with the concept of hot velocity and hot acceleration to recognize potential hot topics before they boost and break out. This paper aims to enhance our previous work on prerecognition of hot topic discovery [1]. We firstly promote a topic life cycle model that defines the different status of a topic from its appearance to its disappearance. Then we utilize the topic hot velocity and the hot acceleration borrowing from "mechanics field" to calculate the change of topic hotness, which aims to discover those hottest topics before they are hot 2 The Scientific World Journal ones. 
The prerecognition model helps to find those potential hot topics and checks the real-time status of each topic, which can be applied for local government to guide public opinion and build a harmonious society. Also it would help e-business enterprise to deliver customized advertisement for interested users. The rest of the paper is organized as follows. In Section 2, we discuss related work. We give the related definitions at Section 3. Section 4 provides our prerecognition model for hot topics. Section 5 shows our experiment results. We present further discussion at Section 6. Finally, we conclude and discuss some future work. Related Work Hot topic prerecognition is basically to aggregate those similar microbloggings, formalize topic clusters, and then rank topic clusters with the count of included posts, the hot velocity, and the hot acceleration. Topic Detection. Much work has been done for topic discovery before microblogging's appearance. TDT (Topic Detection and Tracking) is one of the popular approaches. TDT aims to discover the topical structure in unsegmented streams of news reporting as it appears across multiple media and in different languages. Since hot topic discovery is focusing on real-time topic stream nowadays, we would like to introduce those online models. TID (Topic Initiator Detection) [2] introduced a web mining and search technique for a specificized topic query and gave resulting collection of time-stamped web documents which contain the query keywords. Petrovic et al. provided a similar work [3] to detect new events from a stream of Twitter posts. In particular, they gave comparison with other systems on the first story detection task. Pan and Mitra introduced two event detection approaches using generative models [4]. They combined the popular LDA (Latent Dirichlet Allocation) model with temporal segmentation and spatial clustering and adapted an image segmentation model, SLDA (Supervised Latent Dirichlet Allocation), for spatial-temporal event detection on text. Since finding and clustering topics with generative models like LDA and its extension, we would adopt LDA series as our topic model for clustering. Other work on online news detection and tracking was introduced in papers [5][6][7]. In our opinion, these papers focused more on topic discovery for traditional messages, such as posts from forums and blogs. The original dataset of microblogging is larger than those traditional datasets, and it is real-time stream. Therefore, how to detect topics on large scale of stream texts has been hot research topic in recent years. Topic Discovery with Combined Features. Current work on emerging topic discovery with microblogging always applied several features of posts, such as textual information, graph connection, and the time factor to find those emerging topics. As for using textual information feature, Kasiviswanathan et al. identified emerging topics through detection and clustering of novel user-generated content in the form of blogs, microbloggings, forums, and multimedia sharing sites with dictionary learning approach [8]. Goorha and Ungar described a system that monitored social and mainstream media to determine shifts in what people are thinking about, a product or company [9]. Bai et al. provided hot events detection based on burst terms, terms co-occurrence, and generative probabilistic model [10]. Jo et al. 
defined a topic as a quantized unit of evolutionary change in content and discovered topics with the time of their appearance in the corpus to capture the rich topology of topic evolution inherent [11]. These work focused on the text clustering and the topic model utilization. They considered little on the feature of the topic increasing rate. Considering the time factor, Zhu et al. proposed a method for discovering the dependency relationship between the topics of documents in adjacent time stamps based on the knowledge of content semantic similarity and social interactions of authors and repliers [12]. Iwata et al. proposed an online topic model for sequentially analyzing the time evolution of topics in document collections considering both the long-timescale dependency and the short-timescale dependency [13]. Yin et al. detected both stable and temporal topics simultaneously and provided a unified user-temporal mixture model to distinguish temporal topics from stable topics [14]. Besides the time factor, some researchers thought that the graph connection could be one of the important sources to detect emerging topics. Cataldi et al. made use of a term aging model to compute the burstiness of each term and provided a graph-based method to retrieve the minimal set of terms that can represent the corresponding topic [15]. Zhou and Chen proposed a graphical model called location-time constrained topic (LTT) to capture the content, time, and location of social messages for event detection [16]. Zhao et al. used a subspace clustering algorithm to group all the social objects into topics and then divided the members that are involved in those social objects into topical clusters, each corresponding to a distinct topic [17]. Some other work combined more features for topic detection. Chen et al. [18] crawled the relevant messages related to the designated organization by monitoring multiple aspects of microblog content, including users, the evolving keywords, and their temporal sequence. They then developed an incremental clustering framework to detect new topics and employed a range of content and temporal features to help in promptly detecting hot emerging topics. Moreover, emerging topic detection technologies are widely applied for diverse applications, such as earthquake reporting [19], locationspecific tweet detection [20], and geospatial event detection [21]. [22], He and Parker [23] proposed similar ideas of our model. Tu and Seng provided a new set of indices for emerging topic detection. They defined novelty index (NI) and the published volume index (PVI) to determine the detection point (DP) of new emerging topics, which used ACM Digital Library as experimental data. He and Parker reconstructed bursts as a dynamic Summary. Tu and Seng The Scientific World Journal 3 phenomenon using kinetics concepts from physics (mass and velocity) and derived momentum, acceleration, and force from the concepts. Also they referred to the result as topic dynamics, permitting a hierarchical, expressive model of bursts as intervals of increasing momentum. They used PubMed/MEDLINE database of biomedical publications as experimental data. Different from these models, we define the topic life cycle, the hot velocity, and the hot acceleration to recognize hot topics and use the microblogging dataset to examine our model. And our goal is to find those hot topics in advance. 
So in this paper, we combine the concept of topic model with the topic life cycle to define a prerecognition model for emerging topic detection. Definition Before introducing the prerecognition model, we would like to give some related definitions for hot topic discovery. The original message of a post always includes text, video link, audio link, images, retweet, and comment information. In this paper, we focus more on textual content in a post which inspires us to define the post as a sequence of keywords from the view of NLP (Natural Language Processing). Since a post is always limited with the word count (most of microblogging systems maximize the word count to 140), we assume that the maximum count of keywords of a post is 20. Considering the particularity of the Chinese microblogging system, we generated the Chinese keywords from several basic corpus, including Sogou Pinyin input dict (http://pinyin.sogou.com/dict/), NLPIR microblogging corpus (http://www.nlpir.org/). Definition 2 (topic). A topic to is what posts are talking about and is composed of a set of posts. A topic may include a set of subtopics; thus it can be expressed as to = {to | to 1 , to 2 , . . . , to , 1 , 2 , . . . , , ≥ 0, > 0}. Always a new topic is generated from a series of posts, whereas, with its evolution, a topic may derive subtopics which are discussing about the same theme but with partly distinct keywords. Of course a subtopic to may derive sub-subtopics to until a subtopic becomes a new topic representing totally different theme and cannot be derived at that time. As we observed, when a topic is becoming a hot topic, the following conditions should be satisfied: (1) the topic amount is high enough, which means the number of posts included in the topic exceeds a predefined threshold; (2) the speed of the topic amount is high enough, which shows that the topic amount should increase quickly in a short time; (3) the acceleration of topic increment grows fast. Figure 1 gives an example of a hot topic with its amount, velocity, and acceleration. Thus we define three concepts to identify a hot topic: the topic amount, the topic hot velocity, and the topic hot acceleration. Definition 3 (topic amount). Topic amount ∑ to describes how many posts belong to current topic and its subtopics: Definition 4 (topic hot velocity). Topic hot velocity thv is to express how fast a topic to increases, which is calculated with topic amount in a period time : thv = ∑ to/ , > 0, %Δ = 0. Δ is the minimized time period to process the original posts and get the topics. Definition 5 (topic hot acceleration). Topic hot acceleration tha shows the speed of thv, which can be presented as the first derivative of topic hot velocity tha = thv . As we have observed, when a topic is emerging, tha always gets high, which would be an important metric to determine whether a topic is hot or not. As shown in Figure 1, we can find that a topic exists significant patterns from appearance to disappearance, which inspire us to put forward the concept of the topic life cycle. In our opinion, a topic life cycle includes six periods: embryo, boost, outbreak, stabilization, recession, and extinction as shown in Figure 2. A topic shows up when people begin to discuss about it, in which stage we call embryo presented as TLC 1 . In this period, the topic amount is increasing slowly. When more people begin to concentrate on a specialized topic, the topic amount would increase in a very short time, in which stage we call boost presented as TLC 2 . 
In this period, the thv and tha increase continuously, which makes this topic a potential hot topic. When the topic amount and the thv keep increasing while the tha no longer grows quickly, we call this stage outbreak, denoted TLC_3. In this period, the thv reaches its maximum value. When the thv stays at a relatively fixed value, we call this period stabilization, denoted TLC_4. When the topic amount decreases quickly within a short time, we call this stage recession, denoted TLC_5. When a topic is hardly discussed any more, we call this period extinction, denoted TLC_6. A topic life cycle also has its periodicity, following the evolution of public attention, which means a topic may go through several life cycles consecutively. According to the above definitions, we can describe a hot topic as follows. Definition 8 (hot topic). A hot topic is always in the boost period and its topic amount exceeds a threshold, expressed as hotto = {to | to ∈ boost, tp ∈ boost ∩ outbreak, ∑to ≥ θ}, where tp is the detection time point and θ is the predefined threshold on the topic amount. With the above definitions, we offer the prerecognition model in detail in the next section. Prerecognition Model As described above, and different from other work on hot topic discovery, our contributions can be summarized as follows. (1) Our model aims to find emerging topics before they become hot ones, since the prerecognition model can catch the instant changes of a topic's amount, velocity, and acceleration. (2) We borrow the concepts of "velocity" and "acceleration" from physics, which illustrate the dynamics of hot topics well. (3) We define the concept of the topic life cycle, which captures the periodic characteristics of hot topics. Moreover, checking ∑to ≥ θ during the boost period is what makes the prerecognition model succeed. Prerecognition Steps. The prerecognition model is to find those potential hot topics with ∑to ≥ θ during the boost period of a topic life cycle; thus three processes should be followed. (1) Clustering the original posts to get topics and their amounts: we further divide this process into the following steps: filtering the original posts to omit stop words and useless words, matching the preprocessed words to get keywords, using the LDA [24] and PAM (Pachinko Allocation Model) [25] topic models to generate topics and their subtopics, and finally clustering similar topics and computing their amounts with the KNN (K-Nearest Neighbor) algorithm. (2) Calculating the velocity and acceleration of each topic: we define several transformation points and thresholds on thv and tha to identify the different periods of the topic life cycle. (3) Selecting potential hot topics during the boost period by checking their ∑to, thv, and tha. Topic Clustering. The topic clustering step aims to classify streamed posts into different topics. We first collect original posts from different microblogging systems, for example, Sina (http://weibo.com/), QQ (http://t.qq.com/), and Twitter (http://twitter.com/), using a crawler that gathers the posts' textual information through the open APIs provided by these systems. It is important to note that a post may include a hashtag, a manually labeled tag expressed as #xx# (where xx represents a word or term). In this paper we extract #xx# as a topic directly, since this token expresses the semantics explicitly.
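As a small illustration of this preprocessing (our own sketch, not the authors' code), the snippet below pulls out explicit #xx# hashtags as ready-made topics and segments the remaining Chinese text into candidate keywords, the keyword extraction discussed next. It assumes the jieba segmentation package and a toy stop-word list, neither of which is named in the paper.

```python
import re
import jieba  # assumed Chinese word segmenter; the paper does not name one

HASHTAG = re.compile(r"#([^#]+)#")            # Chinese microblog hashtags: #xx#
STOP_WORDS = {"的", "了", "是", "and", "the"}  # placeholder stop-word list
MAX_KEYWORDS = 20                              # per-post keyword cap assumed in Definition 1

def preprocess(post_text):
    """Return (hashtag_topics, keywords) for one post."""
    hashtags = HASHTAG.findall(post_text)      # explicit topics, kept verbatim
    plain = HASHTAG.sub(" ", post_text)        # remaining plain text
    tokens = [t.strip() for t in jieba.lcut(plain)]
    keywords = [t for t in tokens
                if t and t not in STOP_WORDS and not t.isspace()]
    return hashtags, keywords[:MAX_KEYWORDS]

hashtags, keywords = preprocess("#马航MH370# 最新消息：搜救仍在继续")
# hashtags -> ['马航MH370']; keywords -> segmented terms such as ['最新', '消息', '搜救', ...]
```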
For the other plain text, we need to extract keywords from the posts and then cluster those keywords to generate topics. As we observed, a post can be viewed as a series of keywords, which matches the setting of a topic model. A topic model treats each post as a mixture of topics and each topic as a multinomial distribution over the words of a vocabulary, which inspires us to introduce the topic model for post clustering. LDA is one of the increasingly popular tools for summarization and discovery, with the capability of automatically extracting the topical structure of large document collections. LDA constructs a three-level hierarchical Bayesian model based on the idea of topics. Each document exhibits multiple topics with different proportions; the topic proportions are document-specific and randomly drawn from a Dirichlet distribution, and each topic is in turn modeled as a mixture over an underlying set of word probabilities. We use LDA to sample, for each post, a multinomial distribution over topics from the Dirichlet prior, and then to repeatedly sample each word from the multinomial distribution over keywords of its topic, as expressed in (1). In the three-level Bayesian network of LDA, the parameters α and β apply at the corpus level, where α is a k-dimensional vector and β is a k × V matrix (k is the dimensionality of the topic variable z and V is the size of the keyword vocabulary). θ is a document-level variable that represents a multinomial distribution over topics, with ∑_{i=1}^{k} θ_i = 1. The variables z and w are word-level variables that give the multinomial probability of a word w appearing in a document under topic z. The LDA topic model helps to capture the correlations among words and improves the recall of topic discovery. However, it does not explicitly model correlations among topics; topics are not just plain textual documents but carry strong structural information among themselves. These ignored correlations limit LDA's ability to mine the underlying context of a topic [26]. In this paper, we model this hierarchical structural information to reveal the correlations among topics with the PAM approach. PAM [26] uses a directed acyclic graph (DAG) structure to represent and learn arbitrary-arity, nested, and possibly sparse topic correlations. In PAM, the concept of a topic is extended to a distribution not only over words but also over other topics, that is, subtopics. A topic t^(i) is a child of t^(i−1) and is sampled according to the multinomial distribution θ^(i−1) of its parent; a word is then sampled from θ^(i) of the leaf topic, giving the joint probability of a post in (2). With θ and θ^(i), PAM calculates the marginal probability of a post as in (3). Finally, the probability of generating the whole collection of posts is the product of the probabilities of the individual posts, as in (4). With (1)-(4), we can calculate the relations between different topics through the generative model, which helps us classify similar topics and compute the topic amount of the included posts with the KNN (K-Nearest Neighbor) algorithm (a toy stand-in for this step is sketched below). Calculating Topic Parameters. We need to calculate three parameters for a topic: the topic amount, the topic hot velocity, and the topic hot acceleration. Considering the time needed for topic clustering, we set the time interval Δt to 1 hour; that is, we cluster posts and calculate the topic parameters every hour. According to the definitions of hot velocity and hot acceleration, we compute both the instantaneous values and the averages of thv and tha. The instantaneous thv of a topic to at time t is measured with the topic amount increment between time t and time t − Δt, that is, the post increment of the topic after a time interval Δt.
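The paper trains LDA (with PAM on top) on the keyword sequences; as a rough, self-contained stand-in, the sketch below uses scikit-learn's LatentDirichletAllocation to assign each post to its most probable topic and accumulate per-topic amounts. The corpus is a toy, the Dirichlet priors mirror the 0.05 and 0.1 values reported later in the experiments (which value is α and which is β is our guess), the 60% assignment threshold follows the clustering discussion, and PAM plus the KNN merging step are omitted.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "flight mh370 missing search ocean",
    "mh370 search planes satellite debris",
    "crimea referendum independence vote",
    "ukraine crisis protest kiev",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, doc_topic_prior=0.05,
                                topic_word_prior=0.1, random_state=0)
doc_topic = lda.fit_transform(X)        # one row per post, rows sum to 1: P(topic | post)

THRESHOLD = 0.6                         # only count confidently assigned posts
topic_amount = np.zeros(lda.n_components, dtype=int)
for probs in doc_topic:
    k = probs.argmax()
    if probs[k] >= THRESHOLD:
        topic_amount[k] += 1            # per-interval contribution to Definition 3

print(topic_amount)                     # posts counted per discovered topic
```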
Consider Similar to the calculating steps of the topic hot velocity, we gettha to and (tha to ) as follows: tha to is measured with the thv increment after a time interval Δ . Consider (tha to ) represents the thv increment in the interval from to . Hot Topic Recognition. As described above, prerecognition model aims to find those topics before they become hot topics, so we should find which period a topic belongs to. In Figure 3, we give the characteristics of each period expressed with the topic hot velocity. As shown in Figure 3, we can determine each period of topic life cycle through calculating the topic parameters. If a topic is in its embryo stage, we can get the following equation: The transformation point tp 1 between embryo and boost can be calculated as follows: The boost period and tp 2 can be calculated as follows: We record thetha of tp 2 as 2 . The outbreak period and tp 3 can be calculated as follows: The stabilization period and tp 4 can be calculated as follows: We record thetha of tp 4 as 3 . The recession period and tp 5 can be calculated as follows: We record thetha of tp 5 as 4 . The extinction period can be calculated as follows: According to the calculating steps of each period of topic life cycle and corresponding transformation point, we can easily find those potential hot topics; that is, we can choose those potential hot topics at tp 2 . We then rank these potential hot topics as our results. Recognition Algorithm. The following codes give the recognition algorithm for hot topic discovery: see Algorithm 1. Complexity Analysis. For simplicity, we just omit the complexity of the clawers and only present the complexity of prerecognition model. The clustering steps include topic generation and cluster generation. As for topic generation, the total running time is (( ) ( + ) 3 ), where is the number of words, is the number of latent topics, and is the number of topics appearing in a post. According to our observation, the number of topics included in a post would be less than 3, which inspires us to set = 3 for few computational costs. As a result, the computational complexity of LDA is ((( )×( + )) 3 ). The complexity of PAM is similar to the LDA except its depth of children (topic level); that is, the computational complexity of LDA is ( × (( ) × ( + )) 3 ), where is the depth of children. In this paper, we set the maximum value of = 8 for reducing the computational costs. As for cluster generation, the complexity is ( ), where is the total number of topics. Experiments and Evaluation We set a server cluster to evaluate the efficiency of our model. The cluster includes 10 PC servers, each server having 2 CPU, 32 GB memory, and 4 TB disk storage. We distribute 4 servers to collect the real dataset since the microblogging systems always limit the number of posts being crawled. The remaining servers are distributed for hot topic recognition. And all experiments are evaluated with 100 Mb bandwidth. We crawled approximately 2,000,000 original posts from Sina, QQ microblogging systems with their APIs. The dataset contained 675,439 valid posts after preprocessing to filter those meaningless ones (those posts with less retweet count than 500) from 2014/01/01 to 2014/04/30. Of course this dataset cannot include all posts because of the limit of API. However, we investigated that it is enough to validate our model since the crawled posts would cover almost all concerned topics. We choose our training dataset from 2014/01/01 to 2014/01/31 and other posts as test dataset. 
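Because the display equations for the life-cycle periods and transformation points above did not survive extraction, the following Python sketch is only an illustration under our own assumptions: it computes the discrete thv and tha of Definitions 4 and 5 from hourly topic amounts and flags a candidate tp2 when the velocity is still rising, the acceleration has stopped growing, and the amount exceeds a threshold (the experiments later mention a threshold of 5,000 for the topic amount).

```python
def hot_velocity(amounts, dt=1.0):
    """Definition 4 (discrete form): per-interval increment of the topic amount."""
    return [(amounts[i] - amounts[i - 1]) / dt for i in range(1, len(amounts))]

def hot_acceleration(thv, dt=1.0):
    """Definition 5 (discrete form): per-interval increment of thv."""
    return [(thv[i] - thv[i - 1]) / dt for i in range(1, len(thv))]

def flag_at_tp2(amounts, theta=5000):
    """Return the earliest index into `amounts` that looks like tp2, else None.

    Assumed reading of the life cycle: during boost both thv and tha rise;
    tp2 is taken as the first interval where tha stops rising while thv is
    still positive and the topic amount already exceeds theta.
    """
    thv = hot_velocity(amounts)
    tha = hot_acceleration(thv)
    for i in range(1, len(tha)):
        velocity_positive = thv[i + 1] > 0
        acceleration_peaked = tha[i] <= tha[i - 1]
        if velocity_positive and acceleration_peaked and amounts[i + 2] >= theta:
            return i + 2      # corresponding index into `amounts`
    return None

# Hourly cumulative amounts of one toy topic
amounts = [200, 800, 2500, 6000, 11000, 15000, 16500, 16800]
print(flag_at_tp2(amounts))   # -> 4, i.e. flagged while the topic is still climbing
```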
In the training dataset, there are 208,563 posts and 903,772 words which are identified by 80,000 terms. In our datasets, the topics and keywords are almost Chinese terms. Considering the particularity of Chinese microblogging system, we generate these Chinese terms from several basic corpora, including Sogou Pinyin input dict, NLPIR microblogging corpus. For those English keywords, we just use the standard corpus. Topic Clustering. We first cluster topics from the training dataset. We use C implementation of variational EM for LDA provided by Princeton University (http://www.cs.princeton .edu/∼blei/lda-c/). When training LDA parameters, we figured out 500 latent topics manually from the training posts and get the parameters = 0.05 and = 0.1. Table 1 gives five sample topics and top ten keywords' distributions over them. Though most of posts are generated with Chinese keywords, we prefer to present English keywords just for convenience. We found that these ten words indicate the topics well, which shows what people are talking about and gets a latent topic from these words apparently. We have observed that column 2 and column 3 present the similar topic which should be classified into one topic "missing Flight MH370" with PAM model. Also we investigated that column 1 and column 5 are talking about two different topics; however, they are related topics since the "Ukraine 8 The Scientific World Journal crisis" is one of the reasons of "Crimea independence. " In the second scenario, we would classify them as two topics for simplicity. We then presented top 10 hot topics with their names (summarized manually), amount of posts, amount of subtopics, the maximum level of subtopics, and average recall/precision after clustering process. As shown in Table 2, the ranked top ten topics discovered with our model are also hot topics discussed most by people at the Internet, which proved that our model can separate hot topics from all discussing topics correctly. We observed that a topic always embedded average 4-7 levels of subtopics. The subtopics at the same level with the same parent topic have similar keyword distribution since one topic is always an evolution version of another one. The difference is that these subtopics are more concerning about one profile of the parent topic. Also we observed that the recall/precision of most topics is not very high, which means some posts are ambiguous to be classified into one topic. In this paper, we aim to discover those potential hot topics quickly; we would like to improve recall of the topic, which inspires us to classify a post into a topic when its possibility is over a threshold = 60%. Hot Topic Recognition Time. We have summarized those hot topics with our clustering model; another problem is to find those potential hot topics in their transformation point tp 2 . We made the simulated evaluation with the testing dataset and got the predict time and the corresponding topic hotness shown in Table 3. Also we presented the predict time comparing with Google Trend (http://www.google.com/ trends/) and Baidu Index (http://index.baidu.com/) (measured with query amount and normalized with time base Δ ). We should emphasize that topic amount and thv are far less than the query amount of Google and Baidu. However, we emphasized our focus on the predicting time for emerging hot topics. 
We observed that our result of finding a hot topic is always quicker than the query from search engine; this is because posts and topics are always published on the microblogging systems nowadays, then noticed by Internet users and traditional medias, and finally searched by interested people with search engines. In our experiment, we set the thresholds = 5, 000, Discussion As we observed, different topics have their special trend models of becoming hot topics in their topic life cycles. As shown in Figure 4, we classified four types of topic hotness modes. As shown in Figure 4(a), a hot topic increases slowly for a long time, then breaks out in a short time, and finally does not change the topic amount any more. In this mode, as we have observed, topics are from competition, ads, such as "China Open, " and football final match. These topics are always attractive for a long time before they show up and become the focus when they happen and disappear quickly when they finish. We can monitor those important events in advance since they are easy to recognize with almost fixed topic increasing model. The second mode of topic hotness is shown in Figure 4(b). In this mode, a hot topic may be neglected by people for a long time, then break out in a short time, and finally disappear quickly and it does not change the topic amount any more. In this mode, topics are from breaking news, new movies, such as "MH370 missing" and "Captain America: The Winter Soldier". These topics do not exist or are not discussed much before they show up and become the focus when they happen and disappear quickly when they finish. The third mode of topic hotness is shown in Figure 4(c). In this mode, a hot topic increases slowly for a long time, then breaks out in a short time, and repeatedly will be few discussed in a long time, and finally breaks out again with more people involved in. In this mode, topics are from accidents with traditional medias involved in, such as "MH370 hijacked. " These topics do not exist or are not discussed much before they show up and become the focus when they happen. With more people participating in the topic and traditional medias pushing more concentration, the topic would be hotter and hotter. The fourth mode of topic hotness is shown in Figure 4(d). In this mode, a hot topic breaks out suddenly and disappears quickly. In this mode, topics are from Internet news, such as "Internet publicity stunt. " These topics would exist for a very short time but present strong hotness. According to the analysis of four modes of hot topics, we can capture important keywords to monitor hot topics in advance and control public opinion correspondingly. We have noticed that a hot topic may be hotter and hotter in the third mode. However, there exist two situations for periodic hot topics as shown in Figure 5. A hot topic may present almost the same hot velocity at each life cycle for some special topics, such as "China Spring Festival. " These topics are concentrated at a fixed time and disappear when the events finish, and the concerned people and medias are always fixed. So the topic amount and topic hot velocity present the same speed at each life cycle correspondingly, which helps to capture these hot topics before the event shows up. Another scenario is that a hot topic may present higher hot velocity at each life cycle, such as "MH370 missing accident. " These topics would be focused on by more people with traditional medias involved in. 
When more and more people participate in the discussion, more and more posts are generated, which certainly results in a higher topic velocity. In this scenario, any new evidence or opinion may attract further attention. The regular pattern of periodic hot topics helps us to monitor them when their periodic time comes around or when new breaking news shows up. Conclusion How to discover potential hot topics before they boost and break out on microblogging systems is a research focus; it also helps governments to manage rumor topics and e-business enterprises to deliver customized advertisements. In this paper, we propose a novel prerecognition model for hot topics that combines the generative topic model with the concepts of the topic life cycle, the topic hot velocity, and the topic hot acceleration. We crawled a test dataset from popular microblogging systems to verify our model. The experiments show that this prerecognition model can identify emerging hot topics quickly. Still, several issues remain to be solved. First, a comparison between the model proposed in this paper and other similar models should be given to improve its persuasiveness. Second, a parallel clustering algorithm should be provided to process large-scale collections of posts. Another problem is how to portray the evolution of topics, which may change the theme and keyword set of a topic to a large extent. Also, in this paper we omitted the factor of involved users, which may be an important metric for calculating the hotness of a topic. In future work, we will analyze these issues further and carry out the corresponding experiments for our model.
2018-04-03T01:17:52.061Z
2014-08-26T00:00:00.000
{ "year": 2014, "sha1": "5d8f3e44bf5c601fab841ba0ac57d5bcc4a370e1", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/tswj/2014/360934.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "04e19f9bd92474c5e9ecb264468f0d959ec55486", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
104653036
pes2o/s2orc
v3-fos-license
Preparation of Mn-Ce/TiO2 Catalysts and its Selective Catalytic Reduction of NO at Low-temperature The catalytic performance of NO removal was studied over Mn-Ce/TiO2 catalysts prepared by rheological phase method under different preparation parameters, such as preparation method, manganese precursor, Mn/(Mn + Ce) molar ratio and calcination conditions. It was found that manganese nitrate and manganese acetate were more conducive to high denitration efficiency than manganese chloride. The highest NO conversion of 92% is achieved at the Ce/(Mn + Ce) molar ratio of 0.15, which is much higher than that of the pure manganese constituent. The increase of calcining temperature favored the crystallization of active components and leads to the decline of catalytic activity. Introduction Nitrogen oxides (NOx), which could cause varieties of pollutions to the atmosphere, such as acid rain, photochemical smog and eutrophication of surface water, etc., are among the worst pollutions in the world nowadays. Moreover, they can also be involved in the formation of particulates in air and as a result, have significant adverse effect to ecological environment and human health. Therefore, it is urgent to control the emission of NOx to avoid further pollution. Selective Catalytic Reduction (SCR), in which reducing agents were used to transform NOx into N 2 and H 2 O with the help of oxygen at the right temperature and the presence of a catalyst, has been proved as an effective technology to reach that goal. Vanadium-based catalysts doping with WO 3 or MoO 3 are the most prevalent commercial catalysts used in manufacture factories. This kind of catalyst has to be installed upstream the particle matter collector to meet their active temperature of 300~400°C. As a result, the high concentration of dust, fly ash and other harmful impurities in the flue gas would cause such problems as deactivation, blockage or abrasion, leading to the increase of cost. Obviously, low temperature SCR technology can avoid these problems. According to the literature [1][2][3] , transition metal oxides, especially with manganese oxide, have better low-temperature activities than other catalyst. This is mainly due to the advantage of their noncrystal type and valence state exchange of MnOx over redox reaction. Many studies [4][5][6] found that manganese-cerium oxides had a higher SCR activity than pure manganese oxides. However, most of them were focus on powder catalyst which was difficult to apply in industry. In this paper, the strip-shaped Mn-Ce/TiO 2 catalysts were prepared by extrusion molding. The effect of preparation parameters, such as preparation method, manganese precursor, Mn/(Mn+Ce) molar ratio, and calcination conditions were investigated. To the best of our knowledge, such research has not been reported before. Catalysts Preparation The TiO 2 powder was mixed with deionized water and shaped into strips via a mini-extruder, then dried at 60 °C, and finally calcined at 800°C for 6h in a muffle furnace. Ce(NO 3 ) 3 •6H 2 O and manganese precursor, including Mn(NO 3 ) 2 •4H 2 O (50w%) solution, Mn(CH 3 COO) 2 •4H 2 O) and MnCl 2 . were dissolved in deionized water. Then the mixed solution was placed on the magnetic stirrer and stirred evenly. The TiO 2 strips were dipped into the beaker containing the mixture. Finally the TiO2 strips adsorbed completely were dried at 60°C and calcined at 500 °C for 6h in the muffle furnace. 
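As a worked illustration of the impregnation recipe above (not taken from the paper), the short script below estimates the grams of cerium nitrate hexahydrate and of 50 wt% manganese nitrate solution needed for a chosen Ce/(Mn + Ce) molar ratio; the total metal loading, and the assumption that the 50 wt% figure refers to anhydrous Mn(NO3)2, are ours.

```python
# Molar masses (g/mol)
M_CE_NITRATE_6H2O = 434.22   # Ce(NO3)3·6H2O
M_MN_NITRATE      = 178.95   # anhydrous Mn(NO3)2; the 50 wt% solution is assumed
                             # to be quoted on this anhydrous basis

def precursor_masses(total_metal_mmol, ce_fraction):
    """Grams of each precursor for a given Ce/(Mn+Ce) molar ratio."""
    n_ce = total_metal_mmol * ce_fraction / 1000.0          # mol Ce
    n_mn = total_metal_mmol * (1.0 - ce_fraction) / 1000.0  # mol Mn
    g_ce_salt = n_ce * M_CE_NITRATE_6H2O
    g_mn_solution = n_mn * M_MN_NITRATE / 0.50               # 50 wt% solution
    return g_ce_salt, g_mn_solution

# Example: 10 mmol total metal at the optimal Ce/(Mn+Ce) = 0.15 reported in the paper
g_ce, g_mn_sol = precursor_masses(10.0, 0.15)
print(f"Ce(NO3)3·6H2O: {g_ce:.2f} g, Mn(NO3)2 solution: {g_mn_sol:.2f} g")
# -> roughly 0.65 g cerium nitrate and 3.04 g manganese nitrate solution
```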
Catalyst Characterization Bulk crystalline structures of catalysts were performed on a German D8 advance X-ray diffraction using a Cu Kα(λ=0. 15406nm) X-ray source, within the scan range 10-80°. BET surface area, pore volume, and the pore size distribution were measured by nitrogen adsorption using an Autosorb-iQ physical adsorption system (Quantachrome Instruments, USA). Catalyst activity measurement Standard cylinders to simulate flue gas used in the experiment, the gas composition of NO (600 × 10 -6 ), NH 3 (600 × 10 -6 ), O 2 (6%) and N 2 as carrier gas. In all the runs, the total gas flow rate of 833ml/min and GHSV was about 1000 h -1 . The loading volume of catalyst sample is 5ml. The SCR activity measurement was carried on in a quartz tube fixed bed reactor which specifications is Φ 8 mm × 1000 mm. Using an external electric heating mode, the temperature of fixed bed was controlled by tubular resistance furnace. The NO concentration at the inlet and outlet of the reactor was analyzed by a flue gas analyzer (Testo350, Germany). During the measurements, the NO concentrations at four temperature points, 100°C, 150°C, 200°C and 250°C were measured respectively. And each test temperature point was in stable reaction for at least 10min. Effect of manganese precursor on the catalytic activity Figure1 shows the denitration efficiency of the samples, which gradually increased with the temperature. The denitration efficiency curve of Mn-Ce/TiO 2 (MN) is similar to Mn-Ce/TiO 2 (MA) catalysts, which is significantly higher than Mn-Ce/TiO 2 (MC) catalysts in the temperature range of 80~200°C. At 200°C, the denitration efficiency of Mn-Ce/TiO 2 (MN) and Mn-Ce/TiO 2 (MA) catalysts reaches up to 90% above. By contrast, it is only 70% at 200°Cfor Mn-Ce/TiO 2 (MC) samples. Due to the difference in valence state and structure of MnOx, the denitration activity for these catalysts is also different. Kapteijn et al. [1] investigated the activity and selectivity of pure manganese oxides for SCR of NO by ammonia and found that the activity and selectivity for N 2 of the unsupported manganese oxide were determined by the oxidation state and the degree of crystallinity, and that Mn 2 O 3 exhibited the highest selectivity for nitrogen while the MnO 2 exhibited the highest activity. The sequence of MnOx catalytic activity for unit surface area is increased as following: The XRD patterns of the Mn-Ce/TiO 2 catalysts with different manganese precursors are shown in Figure 2. The XRD patterns for Mn-Ce/TiO 2 (MN) catalysts did not show intense or sharp peaks for manganese oxides or cerium oxides and only anatase phase can be observed. However, for Mn-Ce/TiO 2 (MA) samples, typical diffraction peaks of MnO 2 (JCPDS:44-0141) were identified. And we can see the diffraction peaks of Mn 8 O 10 C l3 (JCPDS:30-0821). Kang [7] prepared MnOx by precipitation and the effects of two factors, temperature in precipitation and calcination temperature, were investigated. The resulting showed that some decomposition and amorphous also help to improve the catalytic activity of catalysts. It is consistent with our study. Effect of ceria content on the catalytic activity Results on NO conversion as a function of temperature is given in Figure3 for Mn-Ce/TiO 2 catalysts with different molar ratios of Mn/(Mn+Ce). As we can see that the NO conversion activity is greatly improved with the increase of Ce loading from 0 to 0.15, but when the Ce/(Mn+Ce) ratio increased from 0.2 to 0.4, it begin to decrease. 
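The conversions quoted above follow from the inlet and outlet NO concentrations measured by the flue-gas analyzer. The paper does not write the formula out, so the snippet below is our own sketch of the usual definition, with invented outlet readings for illustration.

```python
def no_conversion(no_in_ppm, no_out_ppm):
    """Denitration efficiency in %, from inlet and outlet NO concentrations."""
    return 100.0 * (no_in_ppm - no_out_ppm) / no_in_ppm

# Hypothetical outlet readings at the four test temperatures (inlet fixed at 600 ppm NO)
readings = {100: 420, 150: 160, 200: 55, 250: 60}   # outlet NO in ppm, invented values
for temp_c, no_out in readings.items():
    print(f"{temp_c} °C: {no_conversion(600, no_out):.1f} % NO conversion")
```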
It seems that there is an optimal ceria content beyond which the overloaded Ce would cover the active sites and thus it would be small or even negative to improve the SCR reaction. The highest NO conversion of 92% is achieved at the Ce/(Mn+Ce) molar ratio of 0.15, which is much higher than that of the pure manganese constituent. The catalytic activity for NO conversion decreases in the following order: As a good oxygen reservoir, ceria (CeO 2 ) has aroused great interest of researchers because of its oxygen storage and reducing properties. After adding Ce, the peaks of Mn oxide are disappeared, due to the synergistic effect of Mn and Ce. As described in previous research [8] , the doped ceria can interact with MnOx and titania species and achieve the enrichment of amorphous manganese oxide active phase, and fortify the available mobile oxygen on the surface of catalyst. These aspects could be attributed to Mn-Ce/TiO 2 samples' high denitration activity. Effect of calcination temperature on the catalytic activity The deNOx performance of the Mn-Ce/TiO 2 catalysts calcined at 500°C, 600°C, 700°C and 800°C for 6h is tested within a reaction temperature range from 100°C to 250°C under a simulate flue gas stream as shown in Figure5. It was observed that, Mn-Ce/TiO 2 catalysts calcined at 500°Cand 600°C show superior catalytic activity in the whole temperature range with NOx conversion above 90% at 150°C, and the NO conversion activity of Mn-Ce/TiO 2 -500°C catalysts show some slight decrease at high test temperatures (200~250°C). With the increase of calcination temperature, the activity of catalysts is obviously reduced at low temperature especially below 150°C, indicating the possible severe structural change of Mn-Ce/TiO 2 catalysts after calcination at high temperatures. The denitration rate of Mn-Ce/TiO 2 -700°C catalysts drops to 20% at 100°C, while there is almost no active in low temperature of catalysts calcined at 800°C. Typically, the calcination temperature mainly affects the oxidized state and crystallinity of MnOx. Figure 6 shows XRD patterns of Mn-Ce/TiO 2 catalysts at different temperatures, all the peaks of Mn-Ce/TiO 2 catalysts calcined at 500°C and 600°C are anatase. It indicated thatTiO2 consists of anatase as the unique phase and MnOx and CeOx were well dispersed over the support calcined at 500°C or 600°C. When the calcination temperature of catalysts were raised to 800°C, the diffraction peaks of Mn 2 O 3 (JCPDS:65-7467) and CeO 2 (JCPDS :65-5923) are detected. And there were two crystal forms, anatase and rutile, coexisting in TiO 2 , indicating that the crystal form transformation of TiO 2 happened. The BET surface areas, pore volumes, and pore sizes of the various catalysts are summarized in Table 2. From Table 2 we can see that the Mn-Ce/TiO 2 (500°C) has the largest surface area (10.27 m 2 /g) and the surface area of catalysts calcined at 800°C is only 1.13 m 2 /g. We can see that with the increasing calcination temperature, there is a continuous decrease in the BET surface area and the decrease is larger after calcination at 800°C. It can be due to various factors that the subsequent decline in the surface area upon thermal treatment at higher temperatures, such as growth of crystallite size, formation of various mixed oxide phases, and sintering. Conclusions In summary, Mn-Ce/TiO 2 catalysts for low-temperature selective catalytic reduction of NO were prepared with rheological phase method under different preparation parameters. 
Compared with manganese chloride, manganese nitrate and manganese acetate are more suitable as precursors. With the increase of Ce loading from 0 to 0.15, the NO conversion activity is greatly improved, but when the Ce/(Mn+Ce) ratio increases from 0.2 to 0.4 it begins to decrease. With the increase of calcination temperature, the crystallization of the active components is favored and the activity of the catalysts is obviously reduced at low reaction temperatures, especially below 150°C.
2019-04-10T13:12:26.954Z
2018-11-07T00:00:00.000
{ "year": 2018, "sha1": "cd08222f66f96a19f7361eae4f31bb81f2263271", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/423/1/012179", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "aaceb94ecabfa11d0a0d80df2005eb9b52c87425", "s2fieldsofstudy": [ "Chemistry", "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Engineering" ] }
257757356
pes2o/s2orc
v3-fos-license
Early phases of the Galaxy from the chemical imprint on the iron-poor stars J0815+4729 and J0023+0307 We have been exploring large spectroscopic databases such as SDSS to search for unique stars with extremely low iron content with the goal of extracting detailed information from the early phases of the Galaxy. We recently identified two extremely iron-poor dwarf stars J0815+4729 (Aguado et al. 2018a) and J0023+0307 (Aguado et al. 2018b) from SDSS/BOSS database and confirmed from high-quality spectra taken with ISIS and OSIRIS spectrographs at the 4.2m WHT and 10.4m GTC telescopes, respectively, located in La Palma (Canary Islands, Spain). We have also acquired high-resolution spectroscopy with UVES at 8.2m VLT telescope (Paranal, ESO, Chile) and HIRES at the 10m KeckI telescope (Mauna Kea, Hawaii, USA), uncovering the unique abundance pattern of these stars, that reveal e.g. the extreme CNO abundances in J0815+4729 with ratios [X/Fe]~$>4$ (Gonz\'alez Hern\'andez et al. 2020). In addition, we are able to detect Li at the level of the lithium plateau in J0023+0307 (Aguado et al. 2019a), whereas we are only able to give a Li upper-limit 0.7 dex below the lithium plateau in J0815+4729, thus adding more complexity to the cosmological lithium problem. New upcoming surveys such as WEAVE, 4MOST and DESI will likely allow us to discover new interesting extremely iron-poor stars, that will certainly contribute to our understanding of the Early Galaxy, and the properties of the first stars and the first supernovae. Introduction Extremely metal-poor stars must have formed from a mixture of material from the primordial nucleosynthesis and matter ejected from the first supernovae.Those stars are relics of the early epochs of the Milky Way, so their chemical composition, especially those still on the main sequence, holds crucial information such as the properties of the first stars and the early chemical enrichment of the Universe. During the last decades, there has been an enormous observational effort to search for extremely metal poor stars in large spectroscopic surveys, such as Hamburg/ESO (HE; Christlieb et al. 2001), or Sloan Digital Sky Survey (SDSS; York et al. 2000), or narrow-filter photometric surveys such as Skymapper (Keller et al. 2007) or Pristine (Starkenburg et al. 2017).However, in the Galaxy with a few hundred thousand million stars, 1 over about 800 stars have [Fe/H] < −3 in the solar neighbourhood, and we only know 14 stars at metallicity [Fe/H] < −4.5 and only seven at [Fe/H] < −5.Almost all stars at [Fe/H] < −4.5 are carbonenhanced metal-poor (CEMP) stars with carbon abundances A(C) > 5 dex (see Fig. 4), with the clear exception of the dwarf star J1029+1729 at [Fe/H] = −4.7 (Caffau et al. 2011).At metallicities [Fe/H] < −5, these seven stars appear to be concentrated in the low carbon band where all are expected to be CEMP with no enrichment in n-capture elements (CEMP-no) with [Ba/Fe] < 1 (Bonifacio et al. 2018).Thus, they belong to the CEMPno class where their stellar abundances should reflect the pristine material polluted with the ejecta of core-collapse supernovae of a few zero metallicity massive stars. Observations and analysis We have extensively explored the SDSS/BOSS (Eisenstein et al. 
2011) spectroscopic database and found several tens of extremely metal poor stellar candidates.Among these we discovered two extremely iron poor stars, J0815+4729 and J0023+0307, that we observed using with ISIS and OSIRIS spectrographs at the WHT and GTC telescopes in the Observatorio del Roque de los Muchachos (La Palma, Canary Islands, Spain).In Fig. 1 we display these very high quality medium-resolution WHT/ISIS and GTC/OSIRIS spectra of these two chemically primitive stars that allowed us to confirm J0815+4729 as an extreme carbon enchanced star (Aguado et al. 2018a) and J0023+0307 as an hyper metal poor with apparently no carbon enhancement from the WHT/ISIS spectrum (Aguado et al. 2018b).The GTC/OSIRIS spectrum of J0815+4729 shows a forest of CH features together with the series of Balmer lines and a tiny Ca II K line.The WHT/ISIS of J0023+0307 shows also a tiny Ca II K feature but does not show any signature of carbon.We were able to reproduce fairly well the observed spectra with synthetic spectral fits using the FERRE code(see e.g.Aguado et al. 2017).The global analysis uses FERRE with a grid of synthetic spectra code ASS T (Koesterke et al. 2008) and model atmospheres from Kurucz ATLAS 9 (Mészáros et al. 2012). We observed these two very faint targets using HIRES and UVES at the Keck and VLT telescopes, to get high resolution spectra (R ∼ 37, 500 for HIRES and R ∼ 31, 000 for UVES) with the goal of extracting the detailed chemical patterns of these two unique stars.A dedicated analysis of individual spectral features is performed using ATLAS9 model atmospheres and the 1D local thermodynamic equilibrium (LTE) using the SYNPLE code for spectral synthesis.We also use an automated fitting tool based on the IDL MPFIT routine, with continuum location, global shift, abundance, and global FHWM as free parameters (González Hernández et al. 2020). The individual 1D spectra were corrected for barycentric and radial velocity, normalized, merged and binned into a single 1D spectrum of each star.In Fig. 2 we compare these high-quality spectra with those UVES spectra of other extremely iron-poor unevolved stars.Here we see clearly the huge amount of carbon in the spectrum of J0815+4729 as compared to J0023+0307 and other CEMP stars.We also clearly see the Ca II HK features in all these stars.We were able to Ca II K Fig. 1.WHT/ISIS spectrum of J0023+0307 and GTC/OSIRIS spectrum J0815+4729 (black lines) and the best fits obtained with FERRE (red and blue lines), normalized using a running-mean filter.The inner small panels show details of the Ca II K region for both stars. measure a Ca abundance from Ca II lines of A(Ca) = 0.66 in J0023+0307 significantly lower than the Ca abundance of A(Ca) = 1.60 in J0815+4729, as compared to A(Ca) = 1.35 of other two extremely iron poor unevolved Cenhanced stars J1035+0641 (Bonifacio et al. 2015) and HE 1327−2326 (Frebel et al. 2008). Given the similarity of stellar parameters of the stars shown in Fig. 2, the direct comparison of spectra and 1D-LTE element abundances seems quite reasonable.These highquality spectra allowed us to measure an iron abundance of [Fe/H] = −5.5 in J0815+4729 but only an upper-limit of [Fe/H] < −6.1 (assuming [Ca/Fe] > 0.4) in J0023+0307. This UVES spectrum unveiled that the star J0023+0307 is indeed also a CEMP with an abundance from the weak CH G band at the λλ4295 − 4315 Å of A(C) = 6.2 (Aguado et al. 
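The high-resolution spectra described above were corrected for barycentric and radial velocity and normalized before merging; the following is a rough Python/NumPy sketch of those two steps, with a simple running-mean pseudo-continuum standing in for the normalization actually applied (the filter width, the velocity value, and the toy line profile are our assumptions, not the authors').

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def shift_to_rest_frame(wavelength, rv_kms):
    """Remove a (barycentric + radial) velocity shift from the wavelength scale."""
    return wavelength / (1.0 + rv_kms / C_KMS)

def running_mean_normalize(flux, window=301):
    """Divide the flux by a running-mean pseudo-continuum (assumed window width)."""
    kernel = np.ones(window) / window
    continuum = np.convolve(flux, kernel, mode="same")
    return flux / continuum

# Toy spectrum: flat continuum plus one absorption line, observed at +20 km/s
wl = np.linspace(6700.0, 6715.0, 3000)
flux = (1.0 - 0.3 * np.exp(-0.5 * ((wl - 6708.25) / 0.05) ** 2)
        + 0.01 * np.random.randn(wl.size))
rest_wl = shift_to_rest_frame(wl, rv_kms=20.0)   # the line lands near 6707.8 Å at rest
norm_flux = running_mean_normalize(flux)
```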
2019a), thus providing a high carbon abundance ratio of [C/Fe] > 3.9. On the other hand, the HIRES spectrum of J0815+4729 is populated with many carbon features (see Fig. 2), most of them CH lines, implying a significant amount of carbon given the relatively hot effective temperature of the star (only about 100 K cooler than that of J0023+0307). This led to the detection of several carbon molecular features, including CH, CN and C2, providing different carbon abundances: we measured inconsistent 1D abundances of A(C) = 7.4 dex and 8.0 dex from these features, including the CH G-band. The spectrum of J0023+0307 revealed high α-element abundance ratios of [Mg/Fe] > 3.1 and [Si/Fe] > 2.6, and also high odd-Z light element abundance ratios of [Na/Fe] > 1.9 and [Al/Fe] > 2.0. The spectrum of J0815+4729 shows relatively lower ratios of [Mg/Fe] = 1.7, [Si/Fe] < 1.3, and [Ca/Fe] = 0.75, but a higher ratio of [Na/Fe] = 2.9 and a lower [Al/Fe] < 0.5. These differences show how unique the abundance patterns of these stars are; they are expected to be the result of a mixture of primordial matter with the ejecta of a few supernovae from the first massive stars formed in the first 300 Myr of the Universe (Frebel & Norris 2015). Discussion and conclusions The detailed abundance patterns from C to Ni permitted a comparison with zero-metallicity SN models, suggesting low-energy SN models with very little mixing from 21-27 M⊙ Population III progenitors (Heger & Woosley 2010). The ratios of [Sr/Fe] < 1.0 and [Ba/Fe] < 1.9 in J0815+4729 do not allow us to confirm this star as CEMP-no, but its carbon abundance is compatible with the upper part of the low-carbon band (see Fig. 4). The upper limits on Sr, Ba and Fe do not allow us to extract any conclusion for J0023+0307, which also appears to be located in the lower part of the low-carbon band. The few stars known at [Fe/H] < −5 suggest that all of them belong to this low-carbon band and may be CEMP-no stars, which are expected to form in the early phases of the Galaxy and whose atmospheric abundances resemble the mixture of primordial matter with the ejecta of a few metal-free weak SNe. There is so far no evidence for RV variations, nor any chemical signature of mass transfer from companion AGB stars, in J0023+0307 and J0815+4729. Recently, Aguado et al. (2022) performed a systematic survey of these extremely iron-poor stars using the ultra-stable high-resolution ESPRESSO spectrograph (Pepe et al. 2021). ESPRESSO observations demonstrated the binarity of the cool iron-poor giant star HE 0107−5240 at [Fe/H] = −5.4 but found a very high 12C/13C ratio, thus supporting that this star remains an unmixed CEMP-no star with A(C) = 6.8 located in the low-C band. Finally, these iron-poor stars allow us to look back to the time of the Big Bang through their lithium abundances, in particular in unevolved stars, where Li can still survive in the atmosphere over the whole age of the Universe (see Fig. 2). The star J0023+0307 is particularly interesting because it shows a significant Li feature at 6707.8 Å at a metallicity of [Fe/H] < −6.1, making it the only star with a clear Li detection in this metallicity regime (see Fig. 3). The Li abundance of A(Li) = 2.0 in J0023+0307 is at the level of the Li plateau (see Fig. 4), thus extending the upper envelope of the Li abundances in metal-poor stars.
Fig. 2. High-resolution Keck/HIRES spectra of the star J0815+4729 together with the VLT/UVES spectra of other extremely iron-poor unevolved stars at [Fe/H] < −4.5. The spectra are sorted and colored by stellar effective temperature from top to bottom.
Fig. 3. Lithium features (green dots with error bars) in the high-resolution spectra of the star J0815+4729 (upper, Keck/HIRES) and the star J0023+0307 (lower, VLT/UVES), compared to SYNPLE synthetic spectra (the best-fit abundance is shown as a red dash-dotted line).
2023-03-27T01:15:49.627Z
2023-03-24T00:00:00.000
{ "year": 2023, "sha1": "639bad029dce52629c6ceced938e69349e192971", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "639bad029dce52629c6ceced938e69349e192971", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
251778032
pes2o/s2orc
v3-fos-license
EFFECTS OF EXOGENOUS CaCl 2 ON THE PHOTOSYNTHETIC FUNCTION AND ACTIVE OXYGEN METABOLISM OF SALIX VIMINALIS LEAVES UNDER PB STRESS . In order to provide some basic data for revealing the mechanism of exogenous CaCl 2 on improving drought resistance of Salix viminalis seedlings. The experimental results showed that: Spraying exogenous of different CaCl 2 concentrations significantly alleviated the damage degree of S. viminalis seedlings leaves caused by heavy metal stress, and the effect of 30 μmol·L -1 CaCl 2 was the most significant. Spraying exogenous CaCl 2 could regulate stomatal limitation of S. viminalis seedlings leaves under heavy metal stress, which was beneficial to water holding capacity of S. viminalis seedlings leaves under heavy metal stress, and enhanced photosynthetic carbon assimilation capacity of leaves under heavy metal stress; Spraying exogenous CaCl 2 can reduce the energy pressure of PSII reaction center by increasing the non-photochemical quenching (NPQ) of leaves of S. viminalis seedlings, alleviate the photoinhibition of PSII, and promote the electron transfer process, especially on the receptor side of PSII; Spraying exogenous CaCl 2 effectively reduced the production of reactive oxygen species in leaves of S. viminalis seedlings, as well as the degree of membrane peroxidation, which was also one of the important reasons for alleviating the inhibition of photosynthetic capacity. Introduction Heavy metal stress is an important limiting factor in agricultural production, and the yield of crops caused by heavy metal stress alone exceeds the sum of all pathogens. Lead (Pb) is one of the most harmful heavy metals in the environment. In recent years, with the rapid development of economy, mining, agriculture and recycling of waste metals have significantly increased the concentration of Pb in the soil, and the damage dealt by Pb pollution to the environment has gradually intensified, and the impact on human health has become a hot topic (Soffianian et al., 2014;Sauliutė and Svecevičius, 2015). The majority of Pb enters the human body through the soil-plant-human pathway and causes harm. To a certain extent, Pb in the soil will have a series of adverse effects on plant physiological metabolism, such as the increase of cell membrane permeability, the accumulation of reactive oxygen species, and the aggravation of membrane lipid peroxidation, and even lead to plant death in severe cases (Shu et al., 2012;Ogbomida et al., 2018). Plant photosynthesis is one of the most sensitive processes to heavy metals. Persistent heavy metal stress will cause irreversible damage to plant photosynthetic apparatus (Zhang et al., 2018a), such as inhibition of photosynthetic pigment degradation (Albert et al., 2011;Guadagno et al., 2017;Chen et al., 2018;Zhang et al., 2018), photosynthetic phosphorylation and electron transport. Oxidative stress caused by oxidative stress in plant cell membrane caused by oxidative stress (Gill et al., 2010;Wang et al., 2017). Plant photosystem II (PSII) is one of the sensitive parts to heavy metal stress. A series of photosynthetic physiological processes such as light energy absorption, water photolysis and electron transfer are closely related to PSII (Allahverdiyeva et al., 2013;Chen et al., 2017). To ensure the stability of photosynthetic function, especially PSII function, of plant leaves under heavy metal stress plays an important role in maintaining the normal growth of plants and improving the stress resistance of plants. 
Calcium is not only an essential mineral nutrient element for plants, but also a second messenger for intracellular physiological and biochemical reactions. Therefore, the stress resistance of plants can be improved by stabilizing cell wall and cell membrane structures and inducing the expression of specific genes (Ryan and Kochian, 1993;Larkindale and Huang, 2004). At present, there are a lot of reports about CaCl2 can protect the normal physiology of PSII and increase the content of antioxidant enzymes in plant leaves under stress. S. viminalis is a Salix plant of Salix family. S. viminalis has the advantages of rapid growth, high biomass and strong ability to accumulate heavy metals is widely used in phytoremediation and biomass energy development in heavy metal contaminated soil areas, and has certain ecological and economic value (Zhai et al., 2016). Therefore, improving the resistance of S. viminalis seedlings to heavy metal stress is the key to ensure the survival and growth of seedlings after transplanting. However, the regulation of exogenous CaCl2 on physiological function of S. viminalis seedlings under heavy metal stress, especially the regulation mechanism of PSII function, was less studied. In this experiment, the effects of spraying different concentrations of exogenous CaCl2 on PSII function of S. viminalis leaves under heavy metal stress were studied, in order to provide some basic data for improving the drought resistance ability of S. viminalis. Test materials and treatment The experiment was conducted in the laboratory of soil science of Jilin Agricultural University (Changchun, Jilin Province, China) from March to June 2019. The annual seedlings were raised by cutting. The culture substrate was fully mixed with peat soil and quartz sand; the ratio was 2:1 (V / V). It was cultured in an artificial climate box with temperature of 25/23 ℃ (light / dark), light intensity of 400 μmol·m -2 ·s -1 , photoperiod of 12/12 h (light/dark), and relative humidity of about 75%. After the seedlings had long functional leaves, 1/2 of Hoagland nutrient solution was irrigated once a week. When the seedlings grow to six leaves and one heart, they are transplanted. Before transplanting, the seedlings are irrigated once to make the relative water content of the soil basically reach saturation. The seedlings are transplanted into a cultivation bowl with a diameter of 12 cm and a height of 15 cm, and one plant is planted in each pot. After transplanting, the seedlings with the same growth were selected as the experimental seedlings. The treatment group was sprayed with CaCl2 solution with concentration of 15 and 30 μmol·L -1 (PH=7), and the control group was sprayed with water (CK). The foliar spray was carried out at 4:00 p.m., and the front and back sides were evenly sprayed until the solution on the leaves formed fine mist like uniform droplets ready to drop, after the water on the leaf surface evaporates naturally and the CaCl2 solution is completely absorbed, repeat spraying 1 time with the same amount each time. 10 plants in each treatment were repeated. After spraying CaCl2, the water on the surface of leaves is naturally evaporated. In order to fully absorb the spraying CaCl2 on the leaf surface, after the CaCl2 solution on the leaf surface evaporates, spray once more, and the usage and dosage are the same as the first time. 
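For the spray solutions above, a quick calculation (ours, not the paper's) gives the amount of CaCl2 to weigh for the 15 and 30 μmol·L-1 treatments, assuming anhydrous CaCl2 (about 110.98 g/mol).

```python
M_CACL2 = 110.98  # g/mol, anhydrous CaCl2 (assumed; a hydrate would need its own molar mass)

def cacl2_mass_mg(concentration_umol_per_l, volume_l):
    """Milligrams of CaCl2 needed for a given micromolar concentration and volume."""
    moles = concentration_umol_per_l * 1e-6 * volume_l
    return moles * M_CACL2 * 1000.0

for conc in (15, 30):  # the two treatment concentrations used in the experiment
    print(f"{conc} umol/L in 1 L of spray solution: {cacl2_mass_mg(conc, 1.0):.2f} mg CaCl2")
# -> about 1.66 mg and 3.33 mg per litre, i.e. very dilute solutions
```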
The content of Pb in the test Ning et Determination parameters and methods The growth parameters were determined: Vernier caliper was used to measure plant height and leaf area was measured by leaf area meter. Photosynthetic gas exchange parameters were measured: A CIRAS-2 portable photosynthesis system (PPsystem Company, UK) was used to determine the net photosynthetic rate (Pn), stomatal conductance (Gs), transpiration rate (Tr), and intercellular CO2 concentration (Ci) of the second fully expanded functional leaf of the penultimate leaf of S. viminalis seedlings was selected. The CO2 concentration was fixed at 400 μl·L -1 in a CO2 cylinder. The light intensity PFD was set to 1000 μmol·m -2 ·s -1 with the light source built in the instrument. The net photosynthetic rate (Pn), stomatal conductance (Gs), transpiration rate (Tr) and intercellular CO2 concentration (Ci) of S. viminalis leaves under different treatments were measured. Repeat 5 times. Chlorophyll fluorescence parameters were measured: The electron transfer rate (ETR) and non-photochemical quenching (NPQ), the maximum photochemical efficiency (Fv/Fm) and the actual photochemical efficiency (ФPSⅡ) of PSII reaction center under light acclimation were measured by FMS-2 (Hansatch company, UK) for 5 times. The chlorophyll fluorescence kinetic curve and its parameters were determined: After 30 min dark adaptation, the OJIP curve of leaves after dark adaptation was measured by handy pea (Hansatech compan, UK). According to Strasser et al.'s method (1995), the OJIP curves were standardized by VO-P=(Ft-Fo)/(Fm-Fo) and VO-J=(Ft-Fo)/(FJ-Fo) respectively. The relative variable fluorescence (VJ and VK) of J point at 2 ms and K point at 0.3 ms were obtained, respectively. The differences between the standardized VO-P and VO-J curves of different treatments and the control were calculated, expressed as VO-P and VO-J, respectively. The measured OJIP curve was analyzed by JIP test. The maximum photochemical efficiency (Fv/Fm) of PSII and the photosynthetic performance index (PIABS) based on absorbed light energy were measured. Determination of ROS metabolism and other physiological indices: The rate of production of O2 • was measured using the method of Zhang et al. (2007), The malondialdehyde (MDA) content was determined using the thiobarbituric acid method. The conductivity was measured by DDS-11C. Relative conductivity was used to express the electrolyte leakage rate. The content of superoxide dismutase activity (SOD) was determined using the NBT method. The activity unit (U) was 50% of the enzyme that inhibited the photochemical reduction of NBT in 1 ml of reaction solution in 1 h. The activity of ascorbic acid peroxidase (APX) was determined as described by Shen et al. (1996). An activity unit (U) is defined as the amount of enzyme that catalyzes 1 μmol ascorbic acid oxidation in one minute. Each index was measured five times. Data and analysis Excel and SPSS software (Version. 22) were used to conduct statistical analyses on the measured data. The data in the figure was denoted as mean ± standard deviation (SE). One-way ANOVA and least significant difference (LSD) were used to compare the differences among different data groups. Effects of exogenous CaCl2 on leaf growth of S. viminalis seedlings under Pb Stress Under Pb stress, the leaf growth of S. viminalis seedlings changed significantly, and the increase of leaf area of S. viminalis seedlings without spraying exogenous CaCl2 was significantly delayed (Fig. 1). 
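Following the JIP-test formulas quoted in the methods above, the sketch below (ours, with invented fluorescence values) computes Fv/Fm, the standardized VO-P and VO-J curves, and the relative variable fluorescence at the J (2 ms) and K (0.3 ms) steps from a fast OJIP transient.

```python
import numpy as np

def jip_parameters(t_ms, F):
    """Basic JIP-test quantities from an OJIP transient.

    t_ms: time points in milliseconds; F: fluorescence signal at those times.
    Assumes Fo is taken at the earliest recorded point of the transient.
    """
    Fo = F[0]
    Fm = F.max()
    FJ = np.interp(2.0, t_ms, F)     # fluorescence at the J step (2 ms)
    FK = np.interp(0.3, t_ms, F)     # fluorescence at the K step (0.3 ms)
    fv_fm = (Fm - Fo) / Fm
    V_OP = (F - Fo) / (Fm - Fo)      # standardized O-P curve
    V_OJ = (F - Fo) / (FJ - Fo)      # standardized O-J curve
    VJ = (FJ - Fo) / (Fm - Fo)       # relative variable fluorescence at J
    VK = (FK - Fo) / (FJ - Fo)       # relative variable fluorescence at K
    return fv_fm, V_OP, V_OJ, VJ, VK

# Invented transient for a control-like leaf (times roughly at the O, K, J, I, P steps)
t = np.array([0.02, 0.1, 0.3, 2.0, 30.0, 300.0])       # ms
F = np.array([500., 700., 900., 1600., 2200., 2600.])  # arbitrary fluorescence units
fv_fm, V_OP, V_OJ, VJ, VK = jip_parameters(t, F)
print(f"Fv/Fm = {fv_fm:.2f}, VJ = {VJ:.2f}, VK = {VK:.2f}")
# -> Fv/Fm ≈ 0.81, VJ ≈ 0.52, VK ≈ 0.36; a larger VJ under Pb stress would point
#    to blocked QA -> QB electron transfer on the PSII acceptor side
```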
Compared with CK, the plant height of S. viminalis seedlings was decreased by 20.36% (P<0.01), reaching a very significant difference level. However, spraying different concentrations of exogenous CaCl2 significantly alleviated the reduction of leaf area of S. viminalis seedlings under heavy metal stress, of which 15 and 30 μmol·L -1 were sprayed under heavy metal stress. The leaf area of S. viminalis seedlings treated with 30 μmol·L -1 exogenous CaCl2 was 26.51% (P˂0.05) and 37.21% (P˂0.05), respectively. Effects of exogenous CaCl2 on Photosynthetic gas exchange parameters of S. viminalis seedlings under Pb Stress Compared with CK, Pn, Gs and Tr of S. viminalis seedlings leaves under Pb stress were significantly decreased, but spraying different concentrations of exogenous CaCl2 significantly alleviated the reduction range of Pn, Gs and Tr. Except that there was no significant difference between spraying different concentrations of exogenous CaCl2 and spraying CaCl2 under heavy metal stress, spraying 15 and 30 μmol·L -1 in leaves of S. viminalis seedlings leaves Under CaCl2 Treatment, Pn and Gs in leaves of S. viminalis seedlings were significantly increased compared with those without CaCl2 treatment. It can be seen from Fig. 2D that spraying different concentrations of exogenous CaCl2 also significantly alleviated the Ci reduction of S. viminalis seedlings under heavy metal stress, but there was no significant difference between the Ci of S. viminalis seedling leaves treated with 30 μmol·L -1 exogenous CaCl2 and that without spraying exogenous CaCl2. Effects of exogenous CaCl2 on Fv/Fm,ФPSⅡ, ETR and NPQ of S. viminalis seedlings under Pb stress Under Pb stress, Fv/Fm, ФPSⅡ,and ETR of S. viminalis seedlings decreased significantly, while NPQ increased significantly (Fig. 3) Effects of exogenous CaCl2 on OJIP curve of S. viminalis seedlings under Pb stress Compared with CK, the relative fluorescence intensity of O point and J point had no significant change under Pb stress, while the relative fluorescence intensity of I and P decreased significantly, especially that of P point (Fig. 4). However, spraying different concentrations of exogenous CaCl2 could significantly reduce the change range of OJIP curve of S. viminalis seedlings under heavy metal stress. Effects of exogenous CaCl2 on standardized O-P curve, VJ and VI of S. viminalis seedlings under heavy metal stress Compared with CK, the relative variable fluorescence (VJ) of each point on the VO-P curve of S. viminalis seedlings leaves under Pb stress increased significantly with the relative variable fluorescence VJ of J point at 2 ms (Fig. 5). the increase range of VJ in leaves of S. viminalis seedlings treated with different concentrations of exogenous CaCl2 was significantly less than that of no CaCl2 treatment. At 0.3 ms, the change range of K point relative to variable fluorescence VK was small, and it was not significantly affected by spraying CaCl2. Effects of exogenous CaCl2 on reactive oxygen species and membrane peroxidation of S. viminalis seedlings under Pb Stress Compared with CK, the results showed that the treatment of 30 μmol·L -1 exogenous CaCl2 had the most obvious effect, and the O2 •production rate and H2O2 content decreased by 25.83% (P˂0.05) and 32.22% (P˂0.05), respectively, which resulted in the decrease of MDA content and electrolyte leakage rate of membrane lipid peroxidation by 21.43% (P˂0.05) and 25.14% (P˂0.05), respectively (Fig. 6). Under Pb stress, the photosynthetic carbon assimilation capacity of S. 
viminalis seedlings leaves was limited, which was mainly manifested in the decrease of Pn, along with the decrease of Gs, Tr and Ci. According to Farquhar's photosynthetic stomatal factor analysis theory (Farquhar et al., 2003), the reason for the decrease of photosynthetic carbon assimilation capacity of S. viminalis seedlings under heavy metal stress was directly related to the decrease of stomatal conductance Although the decrease of water loss can effectively prevent the loss of water, it also directly reduces the supply of carbon assimilation material (CO2), which limits its carbon assimilation capacity. In this experiment, spraying different concentrations of exogenous CaCl2 increased Pn in different degrees, which was similar to the change of Gs, and accompanied by the increase of Ci, indicating that spraying exogenous CaCl2 could improve the photosynthetic capacity of S. viminalis seedlings by improving stomatal limitation. Under Pb stress, the dark response of plant leaves was inhibited, and the accumulation of assimilative capacity (ATP and NADPH) would feedback inhibit the light response process, resulting in the excess of electrons in the photosynthetic electron transport chain (Li et al., 2000;Liu et al., 2006). In addition, the decrease of PSII activity also inhibited the process of light energy absorption and electron transfer. In this experiment, although Fv/Fm, ETR and, ФPSⅡ of S. viminalis seedlings were significantly decreased under Pb stress. Fv/Fm and, ФPSⅡ were important indexes reflecting the photochemical activity of PSII, and the sensitivity of ФPSⅡ was significantly higher than that of Fv/Fm (Kalaji et al., 2016). Therefore, the results showed that the photochemical activity of PSII in leaves of S. viminalis seedlings was inhibited under heavy metal stress, and the process of PSII electron transport was hindered. However, the decrease of PSII photochemical activity was alleviated by spraying different concentrations of exogenous CaCl2, especially 30 μmol·L -1 . The results showed that exogenous CaCl2 could alleviate the photoinhibition of S. viminalis seedlings under heavy metal stress and promote electron transfer. NPQ was positively correlated with heat dissipation dependent on xanthophyll cycle. Under Pb stress, NPQ in leaves of S. viminalis seedlings increased significantly, which means that the excess excitation energy in PSII could be dissipated by increasing NPQ under Pb stress, so as to reduce the pressure of PSII reaction center. The NPQ of S. viminalis seedlings leaves treated with different concentrations of exogenous CaCl2 increased to varying degrees. Ivanov et al. (1995) found that CaCl2 treatment enhanced the xanthophyll cycle in barley seedlings, which enhanced the opening degree of photosystem II reaction center and promoted the utilization rate of light energy. Therefore, the reason why spraying exogenous CaCl2 to alleviate the decrease of PSII in leaves of S. viminalis seedlings under heavy metal stress may be related to the mechanism of energy dissipation dependent on xanthophyll cycle induced by CaCl2. Under stress conditions, the blocking sites of photosynthetic electron transfer often occur on the electron donor side and receptor side of PSII reaction center, especially the transfer of QA to QB is the main inhibition site (Zhang et al., 2018a). 
The increase of the relative variable fluorescence VJ of the J point at 2 ms on the normalized O–P curve indicates that electron transfer from QA to QB in the photosynthetic electron transport chain is blocked (Zhang, 2018b). The increase of the relative variable fluorescence VK at 0.3 ms on the normalized O–J curve is considered a specific marker of damage to the activity of the oxygen-evolving complex (OEC) on the PSII electron donor side (Zhang et al., 2016). In this experiment, the VJ of S. viminalis seedling leaves increased significantly under Pb stress, but VK did not change significantly, which indicates that the decrease of the PSII photosynthetic electron transfer rate under Pb stress occurred mainly on the PSII acceptor side. However, the change of VK is affected not only by donor-side injury of PSII but also by acceptor-side injury; when the degree of injury on the acceptor side is greater than that on the donor side, VK does not increase significantly (Zhang et al., 2018c). Therefore, in this study, the apparent lack of an effect of heavy metal stress on the donor side of PSII may be due to the insensitivity of the OEC to Pb stress, or it may be masked by greater damage on the acceptor side. Spraying different concentrations of exogenous CaCl2 significantly alleviated the increase of VJ, whereas the effect on VK was not significant, which indicates that exogenous CaCl2 can promote electron transfer from QA to QB on the PSII acceptor side of S. viminalis seedlings under Pb stress.

Pb usually leads to over-reduction of the photosynthetic electron transport chain in plants and produces large amounts of ROS in chloroplasts and mitochondria (Ramachandra et al., 2004; Asada, 2009; Ahmed et al., 2009). Excessive ROS break the redox balance in plants, cause membrane peroxidation, damage the membrane system, and cause oxidative damage to cell components and structures (Jiang and Zhang, 2004; Gill et al., 2010). Under normal conditions, the production and elimination of intracellular free radicals are in dynamic equilibrium; when the concentration of exogenous Pb2+ reaches a certain level, this balance is broken, resulting in membrane lipid peroxidation and osmotic stress (Ahmed et al., 2009; Li and Li, 2011). Consistent with this, under Pb stress the O2•− production rate and H2O2 content of S. viminalis increased significantly, and the MDA content and electrolyte leakage rate also increased significantly, indicating that excessive ROS caused peroxidation of the membrane system and increased membrane permeability. However, spraying different concentrations of exogenous CaCl2 significantly alleviated ROS production and membrane lipid peroxidation of S. viminalis seedlings, with the 30 μmol·L⁻¹ CaCl2 treatment being the most effective, which was consistent with the changes in the photosynthetic parameters. Spraying exogenous CaCl2 may alleviate ROS production by enhancing photosynthetic capacity and reducing the degree of PSII damage, and the decrease of ROS production can in turn alleviate the photoinhibition of S. viminalis seedling leaves under heavy metal stress.
Conclusion
The photosynthetic capacity of S. viminalis seedlings was decreased under Pb stress, mainly due to stomatal factors, and was also related to the decrease of PSII photochemical activity.
Enhanced Convolutional Neural Network Model for Cassava Leaf Disease Identification and Classification : Cassava is a crucial food and nutrition security crop cultivated by small-scale farmers because it can survive in a brutal environment. It is a significant source of carbohydrates in African countries. Sometimes, Cassava crops can be infected by leaf diseases, affecting the overall production and reducing farmers’ income. The existing Cassava disease research encounters several challenges, such as poor detection rate, higher processing time, and poor accuracy. This research provides a comprehensive learning strategy for real-time Cassava leaf disease identification based on enhanced CNN models (ECNN). The existing Standard CNN model utilizes extensive data processing features, increasing the computational overhead. A depth-wise separable convolution layer is utilized to resolve CNN issues in the proposed ECNN model. This feature minimizes the feature count and computational overhead. The proposed ECNN model utilizes a distinct block processing feature to process the imbalanced images. To resolve the color segregation issue, the proposed ECNN model uses a Gamma correction feature. To decrease the variable selection process and increase the computational efficiency, the proposed ECNN model uses global average election polling with batch normalization. An experimental analysis is performed over an online Cassava image dataset containing 6256 images of Cassava leaves with five disease classes. The dataset classes are as follows: class 0: “Cassava Bacterial Blight (CBB)”; class 1: “Cassava Brown Streak Disease (CBSD)”; class 2: “Cassava Green Mottle (CGM)”; class 3: “Cassava Mosaic Disease (CMD)”; and class 4: “Healthy”. Various performance measuring parameters, i.e., precision, recall, measure, and accuracy, are calculated for existing Standard CNN and the proposed ECNN model. The proposed ECNN classifier significantly outperforms and achieves 99.3% accuracy for the balanced dataset. The test findings prove that applying a balanced database of images improves classification performance. Introduction Cassava is the main crop in Africa and many other nations. Africa is the largest producer of Cassava crops. Cassava can be cultivated successfully in any climate, including drought and unproductive soil. Cassava crops encounter several challenges during production, i.e., leaf diseases and poor quality. Cassava leaf diseases are the principal cause of production reduction, and they can directly affect farmers' revenue [1]. Cassava leaf disease identification must be treated on a priority basis to improve production capacity. The automatic detection of crop diseases focused on crop leaves is critical in crop production. Furthermore, effective and accurate detection of leaf diseases significantly affects crop productivity improvement. Cassava leaf diseases are similar to Maize leaf diseases [2]. Early recognition of leaf disease facilitates the rescue of cultivars well before the plant can be infected permanently [3]. A few researchers focused on building fusion plants resistant to pathogenic organisms and created a system to recognize and anticipate crop disease formation from leaf images [4]. Farm owners can significantly raise farm yields by using smart farming. Farmers spend a lot of time, money, and effort in the manual identification of plant diseases, and the results are still inaccurate. 
Research [5] has developed an intelligent system based on image classification and deep-learning methods. A deep-learning and machine-learning-based model is discussed in research [6] for leaf disease detection. An automated machine-learning model for detecting and treating Cassava crop diseases enables farmers and experts to increase system throughput and accuracy. Deep-learning-based CNN classifiers can enhance leaf disease detection in all the possible situations where image-based diagnostics with advanced training are involved. Various portable devices are also used in leaf disease detection. In all the instances where an intelligent classifier is installed on portable devices and encounters a novel disease, new datasets can enhance detection accuracy. Portable devices, i.e., smartphones, drones, and laptops, can be easily tested in realistic scenarios [7]. Researchers have considered various novel techniques to resolve leaf disease detection issues, i.e., image classification, AI, machine learning, and deep learning [8]. Data preprocessing is an essential phase in image analysis, which includes various processes, i.e., image optimization, color adjustment, reshaping, and feature extraction. An image classification method must be combined with an image enhancement technique for better outcomes [9]. A hybrid deep-learning and image-classification-based model for leaf disease detection is discussed in [10]. However, these existing research works have several challenges, which need immediate attention. This motivates researchers to work on Cassava leaf disease detection [11]. These factors also encourage researchers to develop a more robust and reliable Cassava leaf disease detection system. This research aims to fill the gaps by presenting a better overview of leaf disease detection and analysis in Cassava plants. This research provides a comprehensive learning strategy for real-time Cassava leaf disease identification based on enhanced CNN models (ECNN). The main contributions are as follows:
• The proposed ECNN model utilizes a depth-wise separable convolution layer, which minimizes the feature count and computational overhead of the Standard CNN model.
• The proposed ECNN also utilizes a distinct block processing feature to process imbalanced images.
• Furthermore, the proposed ECNN model utilizes de-correlation stretching with Gamma correction. It enhances the image color segregation feature and provides a higher band-to-band correlation.
• The proposed model utilizes a global average election polling layer to replace the fully connected layer to decrease the number of variables. After that, ECNN utilizes a batch normalization layer that enhances the overall computational efficiency [13].
• The proposed ECNN method is validated by calculating the standard performance measuring parameters, and the results are compared with the existing Standard CNN method.
The research article is organized as follows. Section 1 covers introductory work related to the research; Section 2 covers related work in Cassava leaf disease identification and classification. Next, Section 3 covers materials and methods related to the research. Section 4 covers the proposed ECNN model's implementation, results, and discussion. Section 5 covers the conclusion and future work.

Related Work
Cassava is the most popular commercial and industrial crop in Africa and Thailand. Due to the apparent pleasant environment and soil, it is primarily produced in these countries. The Cassava crop encounters several issues, i.e., leaf disease and fungal infection, thus reducing production and increasing cost. Early and accurate detection of Cassava leaf disease is a promising research area for researchers.
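Because the depth-wise separable convolution is the key efficiency feature in the list above, here is a minimal tf.keras sketch contrasting it with a standard convolution. The 3 × 3 kernel size, 64 filters, and 224 × 224 input are illustrative assumptions rather than the paper's exact configuration.

```python
import tensorflow as tf

def standard_block(x, filters):
    # Standard convolution: one dense 3x3 kernel per (input-channel, output-channel) pair.
    return tf.keras.layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)

def separable_block(x, filters):
    # Depth-wise separable convolution: a per-channel 3x3 spatial filter followed by
    # a 1x1 pointwise convolution, which cuts parameters and computational overhead.
    return tf.keras.layers.SeparableConv2D(filters, (3, 3), padding="same", activation="relu")(x)

inputs = tf.keras.Input(shape=(224, 224, 3))
standard = tf.keras.Model(inputs, standard_block(inputs, 64))
separable = tf.keras.Model(inputs, separable_block(inputs, 64))
# The separable block needs far fewer parameters for the same output shape.
print(standard.count_params(), separable.count_params())
```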
Various research articles suggest different methods and models to improve Cassava leaf disease detection. Existing research has also tried to determine effective methods for improving Cassava crop production. This section covers the existing research on Cassava leaf disease detection. Machine Learning Based ResNet-50-and SVM-classifier-based Cassava leaf disease model is presented in [14]. The proposed model first extracts all the relevant features and then classifies the image dataset using an SVM classifier in the next phase. The outcomes show better accuracy and performance by incorporating ResNet-50 and SVM classifiers. A digital image processing model uses a hybrid transfer learning method [15]. It is crucial to perform correct data preparation in leaf disease research. This improves plant disease pattern recognition, forecasting, and model performance. A hybrid model based on SVM and RF for Cassava leaf disease detection is presented in [16]. The proposed model utilizes multiple feature selection processes, including selecting image type, association in parameters, quality, and uniformity. The proposed classification model achieved more than 90% accuracy compared to the existing model. The SVM and Naive Bayes machine-learning-based model is presented in [17] to detect plant diseases. The researcher suggested that a massive data history and machinelearning methods play an essential role in plant disease analysis. The machine-learning method [18] provides a valuable contribution to evaluate a considerable volume of leaf image data. Another research [19] presented a deep-learning-based model with ImageNet for Cassava leaf disease detection. Leaf Shape, Colour, and Texture Based Leaf disease detection based on leaf properties is discussed in research [20]. The proposed model utilized complex geometries and segmentation-based methods for feature extraction. After feature extraction, the SVM classification method was applied to classify leaf diseases. A shape-and texture-based classification for Cassava leaf disease identification is discussed in research [21]. The proposed model achieved more than 84% accuracy and 88% detection rate. A region-based detection method is discussed in research [22]. This work mainly focused on retrieving Cassava leaf properties using a cluster center method. A bacterial and viral infection detection algorithm is introduced in research [23]. The proposed method first detects leaf image texture and shape features, enhancing disease classification outcomes and improving overall precision and accuracy. An innovative procedure for categorizing plant leaf disease is discussed in research [24]. Often, these plants have distinctive leaves that vary by features, such as margin, color, shape, and texture. A shape-, color-, and texture-based leaf disease classification are discussed in research [25]. This research classified diseases using combinations of two and more characteristics, such as shape, size, color, and texture. In the proposed method, a shape-based technique first extracts the curve receipt using leaf stem and afterward determines the inconsistencies using a Jeffrey divergence estimate method. Leaf disease detection based on computer vision and leaf feature analysis method was discussed in research [26]. Neural Network Based A deep-learning-based model to analyze Cassava leaf diseases is presented in research [27]. This proposed model firstly performs a subdivision method and later applies a classification approach to diagnose Cassava leaf disease. 
GoogleNet-and AlexNet-based convolutional neural network structures were discussed in research [28] to analyze and identify distinct CNN leaf diseases. A neural-network-based Cassava leaf disease prediction model is described in research [29]. This research utilizes various neural network models on different crops to analyze diseases and infections. Experimental results show the strength of the proposed model through higher recognition rates. A deep-learning-based model is described in research [30] to predict leaf disease. This research utilizes a feature selection method to recognize thirteen particular crop diseases. Researchers have trained CNN architecture by utilizing the Caffe deep-learning approach. An improved deep-learning-based model is described in research [31] to predict leaf disease classification. This research work also covers the limitation of existing works. A nine-layer-based convolutional neural network model is presented in [32] to characterize Cassava diseases in plants. A NASNet-based fully convolutional architecture is described in research [33]. This model applied a feature selection model to recognize fungal leaf infection. The proposed model achieved an accuracy rate of 94.1% compared to an existing model. A superficial CNN model is presented in research [34] to identify and characterize plant leaf diseases. In the initial phase, researchers retrieved the leaf features using the feature extraction method and then categorized them using a feature selection method with random forest classification methods. Table 1 represents the comparative analysis of various existing methods used in plant leaf disease detection and analysis. Materials and Methods This section covers the proposed model architecture and working steps. Proposed ECNN Architecture This research provides a comprehensive learning method for real-time Cassava leaf disease detection based on an enhanced CNN model (ECNN). The existing Standard CNN model is based on extensive features and a massive computational process that increases the computational overhead. We present an enhanced CNN model (ECNN) for Cassava leaf disease detection and an analysis for overcoming these issues. The existing Standard CNN model is improved by adding new features and properties. In the proposed ECNN model, a depth-wise layer separation feature is introduced, minimizing the feature count and computational overhead. Additionally, a global average election polling layer replaces the fully connected layer and decreases the variable count. Then, a batch normalization layer is applied to adjust computational efficiency. The proposed ECNN model utilizes a distinct block processing feature to deal with data imbalance. The next phase utilizes de-correlation stretching with Gamma correction feature, which improves color segregation with high band-to-band correlation features on the image dataset. The architecture of the proposed ECNN model involves three convolutional layers and four fully integrated layers in the head. The first layer contains 32 (5 × 5) convolutions, in order to know and understand more significant characteristics of workflow normalization. This layer also contains batch sizes of (3 × 3) for the max-pooling feature. The subsequent two and three layers consist of two main pairs of convolution layers. They mainly contain 64 features, with size (3 × 3) batch normalization features. They also contain 128 features of size (3 × 3) for max pooling, respectively. 
The layers are arranged in a particular manner to facilitate the entire learning system in learning broader and deeper characteristics by stacking two pairs of convolution layers. Figure 1 shows the architectural features of the proposed ECNN framework.

Global Average Election Polling Layer (GAEPL)
The objective of the GAEPL is to standardize the entire network structure and reduce the dimensionality from three-dimensional to one-dimensional, which minimizes overfitting issues. The proposed ECNN model uses the pattern (feature) maps of the last CNN layer and aggregates all the outputs into a one-dimensional sequence. After applying a GAEPL, the number of variables is considerably reduced because flattening the pattern maps into large matrices is not required, as described in Figure 2. The advantage of a GAEPL over fully connected layers is that it effectively maintains the multilayer architecture by strengthening the correspondence between pattern maps and classes. It also provides more convincing features and better-understood pattern map classifications [44]. A pooling function slides a two-dimensional filter across each channel of the feature space and aggregates the values within the filter's receptive field. For a convolution layer feature space described by the parameters Nw (width of the feature space), Nh (height of the feature space), Nc (total number of channels in the feature space), f (filter size), and s (stride length), the dimensions of the output obtained after a pooling layer can be written as

((Nh − f)/s + 1) × ((Nw − f)/s + 1) × Nc (1)

In global pooling, each channel of the feature space is combined into a single value. As a result, the (Nh × Nw × Nc) feature space is reduced to (1 × 1 × Nc). This is equivalent to using a single filter with dimensions (Nh × Nw), i.e., the size of the feature map itself.

Batch Normalization Layer (BNL)
BNL is a training method for complex CNN architectures. It standardizes the inputs to each layer over small batches, improves training, and significantly reduces the number of training epochs needed to build deep convolutional networks. Figure 3 shows the working of BNL [45]. In a CNN, the number of neurons in each layer is often large. If the distribution of the data passed from one layer to the next keeps shifting, training becomes unstable and the modeling risk grows. A batch normalization process mainly aims to relieve these issues: it splits the data into small batches and normalizes the variables within each batch [46]. The records inside one batch jointly determine the direction of the gradient, which reduces the unpredictability of the updates. A batch requires far fewer items than the complete dataset, which dramatically reduces the computation count. An activation function is used together with the batch normalization process: before the activation function is applied, the batch normalization layer normalizes the input data at every level and overcomes the problem of input offset. A batch normalization process transforms an input n as per the formula given in equation (2):

n̂ = γ (n − μβ) / σβ (2)

where n ∈ β represents an input element passed to batch normalization (BN) and belonging to a small batch β, γ represents the scale variable, σβ represents the standard deviation of the batch, and μβ represents the sample mean value of the batch.
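As a concrete illustration of the architecture described in this section, the following is a minimal tf.keras sketch assuming the layer sizes given in the text (32 kernels of 5 × 5, then 64 and 128 kernels of 3 × 3, each with batch normalization and 3 × 3 max pooling, and five output classes). The GAEPL is realised here as standard global average pooling, depth-wise separable convolutions stand in for the paper's depth-wise layer separation, and the dense-head width, dropout rate, and optimizer are illustrative choices rather than the authors' exact settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_ecnn_like(input_shape=(224, 224, 3), num_classes=5):
    model = models.Sequential([
        # Block 1: 32 filters of size 5x5, batch normalization, 3x3 max pooling
        layers.SeparableConv2D(32, (5, 5), padding="same", activation="relu",
                               input_shape=input_shape),
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=(3, 3)),
        # Block 2: 64 filters of size 3x3
        layers.SeparableConv2D(64, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=(3, 3)),
        # Block 3: 128 filters of size 3x3
        layers.SeparableConv2D(128, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=(3, 3)),
        # Global average pooling collapses each (H x W) map to one value per channel
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_ecnn_like()
model.summary()
```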
Distinct Block Processing (DBP)
This research utilizes an imbalanced Cassava leaf disease dataset: the class distribution is strongly skewed across the CBB, CBSD, CGM and CMD disease classes, and the images also vary in size. The imbalanced dataset needs immediate attention and should be converted into a balanced dataset for better outcomes. A distinct block technique is used to fix this problem. The block processing method is used when the resolution of the source image is significantly greater than what the neural network can handle [47]; at the same time, it preserves visual information and has been used effectively in numerous computer-vision studies. During distinct block processing, the input image is divided into rectangular blocks, starting from the top-left corner, and each block is processed independently to produce the corresponding block of the output image. Zero padding is applied when the blocks do not align exactly with the image, and the generated blocks are used to boost the number of images in the under-represented classes, so that all five Cassava leaf disease classes contain a similar number of images. In this way, distinct block processing increases the effective sample count for each class.

Working of Proposed ECNN
The Cassava leaf disease detection and analysis using the proposed ECNN model includes various phases, each with its own distinct features. The goal of the max-pooling layers is to reduce the spatial dimensions of the feature maps. After parameter selection and tuning with a grid search process, the network head comprises four fully connected layers; in this configuration the first, second, and third layers contain 1024 neurons each, and the fourth layer contains 256 neurons. There is one output neuron for each of the five Cassava leaf disease classes. Dropout is used in the fully connected layers to reduce inaccuracy and overfitting. In particular, the fully connected layers extract the essential information from the learned features and use it to identify and classify the healthy and unhealthy classes from the leaf images. The output of a convolution layer can be written as equation (3):

F_n = W_n * X_(n−1) + b_n (3)

where F_n represents the feature map output of layer n, W_n represents the channel (kernel) weights, n represents the layer number, b_n represents the offset value related to the channel, X_(n−1) represents the subset of input data from the previous layer, and * denotes the convolution operation.

Phase 1
The first phase performs image transformation, including mask segmentation, deskewing, grayscale conversion, thresholding, noise removal, Canny edge detection, and sharpening. Then, to remove image imbalance, a data pre-processing step based on the Contrast Limited Adaptive Histogram Equalization (CLAHE) method is applied [48]. Figure 4 shows the image transformations; transformations one to ten are performed by the various methods. Figure 5 shows the image pre-processing using the CLAHE method. The CLAHE method improves the performance of image processing methods in low-resolution and low-contrast environments. The initial color image is converted from RGB to the Y.I.Q. and H.S.I. color spaces. Next, the CLAHE method is applied in the Y.I.Q. and H.S.I. color spaces to produce two improved image sets. The Y.I.Q. and H.S.I. improved images are subsequently converted back to the RGB color space.
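The CLAHE step of Phase 1 can be sketched with OpenCV as follows. Applying CLAHE to the lightness channel of a LAB conversion is used here as a convenient stand-in for the Y.I.Q./H.S.I. pipeline in the text, so the color-space choice, the clip-limit and tile-grid settings, and the synthetic test image are assumptions.

```python
import cv2
import numpy as np

def clahe_enhance(bgr_image, clip_limit=2.0, tile_grid=(8, 8)):
    # Convert to LAB and equalize only the lightness channel so leaf colors are preserved.
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    merged = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(merged, cv2.COLOR_LAB2BGR)

# Synthetic low-contrast image standing in for a Cassava leaf photograph.
dummy = np.full((224, 224, 3), 110, dtype=np.uint8)
dummy[60:160, 60:160] = 130
enhanced = clahe_enhance(dummy)
print(dummy.std(), enhanced.std())  # CLAHE spreads the intensity distribution
```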
Phase 2
In this phase, we applied the SMOTE method for resampling purposes [49]. As described above, the first phase mainly removes skew and other artefacts from the images, but the Cassava leaf disease dataset [50] used for this research is, as discussed, highly imbalanced. The second phase therefore uses a combination of existing methods, namely SMOTE (Synthetic Minority Oversampling Technique), class weighting, and focal loss, to enhance the volume of the training dataset, which leads to higher precision. SMOTE is an oversampling method that generates synthetic samples only for the minority class labels; this mainly overcomes the overfitting caused by arbitrary duplication of data. The SMOTE method creates new Cassava leaf disease samples based on actual samples in order to remove the class skew: it selects samples that are close together in the feature space, draws a line between them, and generates a new synthetic sample at a point along each line.

Phase 3
Phase three is mainly applied to enlarge the Cassava leaf image dataset. To address the issue of a limited dataset, this phase utilizes dataset enhancement techniques such as random shearing, image flipping, center zooming, random scaling, height/width shifting, and random cropping. Image flipping in particular increases the dataset volume, which helps the training and testing process and provides better precision, accuracy, and performance.

Results and Discussion
This section covers the implementation, dataset description, result comparison, and discussion. The existing Standard CNN [2] and proposed ECNN methods are implemented in the Python programming language. The proposed ECNN model is compared with the existing Standard CNN architecture-based model, and both models are implemented with the same types of features. Various performance measuring parameters are calculated, i.e., precision, recall, f-measure, and accuracy.

Dataset
The Cassava leaf dataset is collected from the online Kaggle dataset [50]. The original data contain 6256 Cassava leaf images, with an imbalanced distribution that includes only 316 healthy Cassava leaf images; the dataset also contains four types of unhealthy, infected Cassava leaf classes. Figure 6 shows the various Cassava leaf disease classes (0: CBB, 1: CBSD, 2: CGM, 3: CMD, and 4: Healthy). Different hyperparameters (dropout, batch size, and number of epochs) are varied, and precision, recall, f-measure, and accuracy are calculated to examine the performance of the proposed ECNN model.

Data Pre-Processing
In the pre-processing phase, the raw Cassava images are normalized and the imbalance is removed from the images. The image set falls into two main categories, standard (healthy) and abnormal (unhealthy), and the natural-color images are divided into five classes, labelled 0 to 4; the unhealthy Cassava images are further classified into distinct disease classes. The complete normalization process applied to a data sample is described in Equations (4)-(6):

μ = (1/n) Σ N_i (4)
σ² = (1/n) Σ (N_i − μ)² (5)
N̂_i = (N_i − μ) / √(σ² + ε) (6)

In Equations (4) and (5), N_i denotes the value of the pixel stored at position i, n denotes the number of pixel samples, μ denotes the mean value, and σ² denotes the variance. Based on Equations (4) and (5), the normalization is defined by Equation (6), in which N̂_i represents the normalized value of the ith pixel and ε is a small positive constant (ε > 0) added for numerical stability. In the Cassava leaf image pre-processing, the R, G, and B components of each image have their mean values subtracted as part of this normalization (de-averaging) step.
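A small NumPy sketch of the per-channel normalization in Equations (4)-(6) may make the step concrete; the epsilon value and the random stand-in image are illustrative only.

```python
import numpy as np

def normalize_channels(image, eps=1e-6):
    # image: H x W x 3 array of raw R, G, B values.
    image = image.astype(np.float32)
    mu = image.mean(axis=(0, 1), keepdims=True)   # Equation (4): per-channel mean
    var = image.var(axis=(0, 1), keepdims=True)   # Equation (5): per-channel variance
    return (image - mu) / np.sqrt(var + eps)      # Equation (6): standardized pixels

example = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)  # stand-in leaf image
normalized = normalize_channels(example)
print(normalized.mean(axis=(0, 1)), normalized.std(axis=(0, 1)))  # ~0 mean, ~1 std per channel
```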
Moreover, there are a variety of issues with the Cassava leaf dataset. The first is the small dataset size, and the next is the poor contrast and resolution of the images. Another challenge is the skewness of the class labels: the largest class accounts for 39.4% of the dataset, while the smallest class accounts for only 2.89% [51]. We focused on enhancing Cassava image contrast using the CLAHE method, which can significantly improve the performance of image processing methods in low-resolution and low-contrast environments. To increase the size of the database, various image enhancement methods are used, i.e., random shearing, image flipping, central zooming, random cropping, random scaling, and shifting of image height and width. Image flipping in particular enlarges the database, which helps in training, validation, and testing. In the next phase, all the Cassava leaf images are resized to (224 × 224) by adjusting the width and height of the images, and the images of each leaf category are further augmented with vertically and horizontally flipped copies. The Cassava image dataset includes CMD: 2808, CGM: 923, CBB: 166, and CBSD: 1593 images. As shown in Figure 7, these images are completely unbalanced, with a heavy bias toward the CBSD and CMD Cassava disease classes.

Visualization of Proposed ECNN Model
The proposed ECNN model generates 239 NN layers. Figure 8 represents the visualization of the first five layers (layer 1 to layer 5) of the proposed ECNN model. Layer 1 represents the input image; layer 2 represents the rescaling process; layer 3 represents normalization; layer 4 represents the stem_conv_pad; and layer 5 represents the stem_conv. The proposed ECNN model's structure consists of three convolution blocks and a head of four fully connected layers. Block 1 contains 32 kernels of size (5 × 5) for learning higher-level characteristics, with batch normalization and (3 × 3) max pooling. Blocks 2 and 3 contain convolution layers with 64 (3 × 3) and 128 (3 × 3) kernels, batch normalization, and max pooling. Batch normalization creates the batches for the two different sets of convolution layers, and all the layers are structured, before the max-pooling step, to enhance the learning of the entire model. In the ECNN layer architecture, layer 4, "stem_conv_pad", corresponds to the Keras ZeroPadding2D operation, and layer 5, "stem_conv", corresponds to the Keras Conv2D operation [52].

Experimental Outcomes
The existing Standard CNN model and proposed ECNN methods are implemented using Python and the Anaconda distribution in this research. The online Kaggle Cassava leaf dataset is used for the analysis, and the dataset is divided into training and testing sets. The following performance measuring parameters are calculated to measure the performance of the proposed ECNN method [53][54][55][56]:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F-measure = 2 × (Precision × Recall) / (Precision + Recall)

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively. In this experiment, we used two scenarios for the Cassava leaf disease analysis. In Scenario 1, the experimental analysis is performed on the imbalanced dataset, and in Scenario 2, the experimental analysis is performed on a balanced dataset. Accuracy, precision, recall, and F-measure are calculated to evaluate the training and test performance of the CNN and proposed ECNN models.
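A minimal scikit-learn sketch of how these four metrics can be computed from per-class predictions is given below; the label arrays are placeholder values, and macro-averaging across the five classes is an illustrative choice rather than necessarily the averaging the authors used.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder labels for the five classes (0: CBB, 1: CBSD, 2: CGM, 3: CMD, 4: Healthy).
y_true = [0, 1, 2, 3, 4, 3, 3, 1, 2, 0]
y_pred = [0, 1, 2, 3, 4, 3, 1, 1, 2, 4]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f_measure, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f-measure={f_measure:.3f}")
```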
Scenario 1
The first scenario performs experimental analysis on the imbalanced Cassava leaf disease dataset. The dataset is divided into 60% for training and 40% for testing. K-fold cross-validation with k = 3 is applied for training and testing to achieve higher precision. Figure 9 represents the experimental outcomes of the proposed ECNN and CNN models in terms of training and validation accuracy and training and validation loss on the imbalanced dataset. The experimental results demonstrate that the proposed ECNN model achieved a training and validation accuracy of 94.689% and a loss of 24.547%, which is better than the existing Standard CNN model, with a training and validation accuracy of 89.754% and a loss of 36.414%. Tables 2 and 3 show the experimental outcomes for the various Cassava leaf disease classes (0 to 4) for the proposed ECNN and CNN on the imbalanced dataset. These results show that the proposed ECNN model performs better in accuracy, precision, recall, and f-measure than the existing Standard CNN model.

Scenario 2
In the second scenario, the balanced Cassava leaf dataset is used. This dataset is likewise divided into 60% for training and 40% for testing. Table 4 shows that the proposed ECNN procedure outperformed the existing Standard CNN model in terms of accuracy for all the classes. The ECNN model shows 99.47% accuracy for the CBB class, which is the highest of all. Comparing the experimental results of Scenarios 1 and 2 shows that the proposed ECNN method gives better results on the balanced dataset than on the imbalanced dataset.

Conclusions and Future Work
Cassava leaf disease detection is an active area of research. This research developed an ECNN model for a highly imbalanced Cassava leaf dataset to predict the disease class. The existing Standard CNN models utilize an extensive set of features and a massive computational process that increases the computational overhead. We upgraded the traditional convolutional network model by adding enhanced features to overcome this issue. The proposed ECNN model utilizes depth-wise layer separation, minimizing the feature count and computational overhead. Additionally, to overcome the dataset imbalance, this research applied improved data pre-processing methods, which reduce the error rate and improve image quality. The proposed ECNN model is compared with the existing Standard CNN architecture-based model, implemented with the same types of features. An experimental analysis was performed on an online Cassava leaf dataset containing five classes: 0: CBB, 1: CBSD, 2: CGM, 3: CMD, and 4: Healthy. The analysis clearly shows the strength of the proposed ECNN model, with better accuracy, precision, recall, and f-measure than the existing Standard CNN model. In future work, we will try to improve the current research in various respects: (a) the dataset can be improved in terms of size and number of disease classes; (b) the ECNN model can be improved by combining it with further CNN models in hybrid form; (c) the experimental analysis can be performed in a real-time environment with more performance measuring parameters.
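For completeness, here is a small scikit-learn sketch of the evaluation protocol used in Scenario 1, read as a stratified 60/40 train/test split followed by 3-fold cross-validation on the training portion; the label array is a placeholder, and the exact way the authors combined the split with k-fold is an assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold

# Placeholder labels standing in for the 6256-image dataset (5 classes, 0-4).
labels = np.random.randint(0, 5, size=6256)
indices = np.arange(len(labels))

# 60% training / 40% testing split, stratified by class.
train_idx, test_idx = train_test_split(
    indices, test_size=0.4, stratify=labels, random_state=42
)

# 3-fold cross-validation within the training portion.
skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
for fold, (fit_idx, val_idx) in enumerate(skf.split(train_idx, labels[train_idx]), start=1):
    print(f"fold {fold}: {len(fit_idx)} training images, {len(val_idx)} validation images")
```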
The prospects of emotional dogmatism
The idea that emotional experience is capable of lending immediate and defeasible justification to evaluative belief has been amassing significant support in recent years. The proposal that it is my anger, say, that justifies my belief that I've been wronged putatively provides us with an intuitive and naturalised explanation as to how we receive epistemic justification for a rich catalogue of our evaluative beliefs. However, despite the fact that this justificatory thesis of emotion is fundamentally an epistemological proposal, comparatively little has been done to explicitly isolate what it is about emotions that bestows them with justificatory ability. The purpose of this paper is to provide a novel and thorough analysis into the prospects of phenomenology-based—or dogmatist—views of emotional justification. By surveying and rejecting various instantiations of the emotional dogmatist view, I endeavour to provide an inductive case for the conclusion that emotional phenomenology cannot be the seat of the emotions' power to immediately justify evaluative belief.

Introduction
The idea that emotional experience is capable of lending immediate and defeasible justification to evaluative belief has been gaining significant traction in recent years. The proposal that it is my anger, say, that justifies me in believing that I've been offended putatively provides us with an intuitive and naturalised explanation as to how we receive epistemic justification for a rich catalogue of our evaluative beliefs. With many notable advocates, this justificatory thesis of emotion is fast becoming a central facet in how we conceive of the emotions' epistemic role. 1 Interestingly, however, comparatively little of the philosophical literature has been dedicated to explicitly isolating what it is about emotional experience that bestows it with the ability to immediately and defeasibly justify belief. The aim of this paper is to present and evaluate an internalist view of emotional justification, namely, one which identifies emotional phenomenology as the source of the emotions' ability to justify evaluative belief. Support for a phenomenology-based view can be found in various suggestive comments made by notable authors in the philosophy of emotion. Goldie (2004), for example, argues on behalf of an account of emotion "where the feelings involved are at center stage, playing a centrally important epistemic role in revealing things about the world" (p. 92). On a similar note, Tappolet (2016) argues that emotional experiences uniquely "allow us to be aware of certain features of the world" (p. 18), while Johnston (2001) claims that the epistemic import of affective experiences is rooted in their providing us with "affective disclosure" (p. 213) of evaluative properties. The focus of these claims on 'feelings', 'awareness', and 'affective disclosure' certainly seems at least suggestive of the fact that these authors take the phenomenal properties instantiated by emotional experience (the what-it-is-like for a subject to undergo emotional experience) to bear epistemic significance. So, how might we construct a phenomenology-based view of emotional justification? One plausible way is to build it as relevantly analogous to phenomenal dogmatism.
For phenomenal dogmatists, a perceptual experience that makes it seem to you that p immediately and defeasibly justifies you in believing that p. Given that phenomenal dogmatism is an attractive internalist view of justification that places epistemic importance on experiential phenomenology, we can draw up an emotional analogue accordingly, such that an emotional experience that makes it seem to you that e (where e signifies a proposition attributing an evaluative property to an object) immediately and defeasibly justifies you in believing that e. Call this emotional dogmatism. Here, by surveying and rejecting several instantiations of the emotional dogmatist view, I endeavour to build an inductive case for the conclusion that the phenomenal character of emotional experience cannot be what makes it capable of immediately and defeasibly justifying evaluative belief. The structure of this paper is as follows. §2 begins by further elucidating the phenomenal dogmatist view and presenting the analogous emotional dogmatist thesis. In §2.1, I argue that basic dogmatism, which requires only that the experience bears unqualified seeming phenomenal character, falls foul to a worrisome overgeneralisation problem. In §3, I suggest that a restrictive account of phenomenal dogmatism based on Chudnoff's presentationalism is better placed for an investigation into the prospects of an analogous emotional dogmatist view. §3.1 then presents a novel objection against this view, namely, that there is no plausible way of spelling out what seeming awareness of truth-makers for evaluative propositions consists in. §4 then considers and rejects alternative restricted views based on McGrath's and Markie's respective accounts of restricted phenomenal dogmatism. Finally, I conclude that, while emotional seeming states might be capable of transmitting justification to evaluative belief mediated by other mental states and beliefs, we have good reason to believe that they cannot bear immediate justificatory power. Basic emotional dogmatism Let us understand phenomenal dogmatism as follows: Phenomenal Dogmatism (PD): if it perceptually seems to S that p, then, in the absence of defeaters, S thereby has [immediate] justification for believing that p. (Tucker 2013, p. 2). Some clarifications are in order. First, PD is an internalist view of justification insofar as it identifies factors internal to the agent (i.e. an agent's seeming states) as sole epistemic justifiers. Second, and importantly, PD is a thesis about immediate justification, i.e. justification which exists independently of any inferential connections to other justified beliefs. Third, identifying the source of an experience's justificatory power in its bearing the character of 'seeming to S that p' is to identify it in the experience's phenomenal character, i.e. the something-thatit-is-like for the subject to undergo the perceptual experience. Fourth, 'seemings' are typically taken to be non-doxastic propositional attitudes. Finally, while the nature of seeming phenomenal character can be difficult to elucidate in writing, it will be sufficient for our purposes to conceive of it along similar lines to the way in which Tucker (2010) describes it, i.e. seemings instantiate the phenomenal property of asserting or insisting to you that the content of the experience obtains. 
Insofar as we're interested in building an account of emotional epistemology on the basis of PD, we can conceive of emotional dogmatism as follows:

Emotional Dogmatism (ED): if it emotionally seems to S that e (where e signifies a proposition which attributes an evaluative property to an object) then, in the absence of defeaters, S thereby has immediate justification for believing that e.

On this view, just as my visual seeming experience of the blue mug can immediately and defeasibly justify my belief that there is a blue mug, my emotional seeming experience of awe towards a painting can immediately and defeasibly justify me in believing that the painting is beautiful. This view is attractive for a number of reasons. First, PD is praised in virtue of its ability to provide a simple and intuitive explanation as to how we receive epistemic justification for our beliefs about the world; we're justified in believing what we do because of the way the world appears to us in our perceptual experience. Analogously, for ED, we're justified in our evaluative beliefs about the world because of the way it appears to us in our emotional experience. Secondly, given the focus on immediate justification, PD provides an antidote to pernicious sceptical worries pertaining to the justificatory status of our everyday beliefs about the sensible world. Epistemic justification comes at a low price for PD because all that's required is that our perceptual experiences bear the right sort of 'seeming' character; the justification need not be mediated via relations to other justified beliefs. Insofar as ED is built on the foundations of PD, it can provide an analogous remedy for sceptical worries pertaining to the justificatory status of our everyday evaluative beliefs. Finally, given the importance of justification for the acquisition of further epistemic goods, dogmatist views can provide a substantive epistemic yield which extends beyond justified belief and plausibly into the domain of both perceptual and evaluative knowledge and understanding.

Objection: an over-generalisation problem
However, a worry with identifying an experience's justificatory power in its bearing unqualified seeming phenomenal character is that the theory lacks the ability to exclude epistemically problematic cases. A popular way of presenting this challenge is in terms of the following example from Markie (2005):

Suppose that we are prospecting for gold. You have learned to identify a gold nugget on sight but I have no such knowledge. As the water washes out of my pan, we both look at a pebble, which is in fact a gold nugget. My desire to discover gold makes it seem to me as if the pebble is gold; your learned identification skills make it seem that way to you. According to [PD], the belief that it is gold has prima facie justification for both of us. (p. 356-357).

This problem constitutes a serious threat for PD. The possibility of states like desires manipulating the content of seemings, and thereby having an influence over which of our beliefs enjoy immediate justification, is worrisome for any theory which attributes such epistemic significance to these seemings. Indeed, consider an emotional case. To borrow an example from Brady (2013, p. 87), suppose that I'm on the hiring committee for a job, and upon interviewing a particular candidate, I find myself experiencing a negative emotion that makes it seem to me that this candidate is duplicitous or untrustworthy.
It would be implausible to claim that this emotion alone is capable of immediately justifying my belief that the candidate is duplicitous on the basis of its bearing seeming phenomenal character. However, insofar as ED only identifies unqualified emotional seemings as justificationconferring states, it lacks the theoretical resources to exclude cases like this. It cannot be true that it's only in virtue of an experience bearing this 'seeming' character that it is capable of justifying the relevant beliefs, or else we would have to concede that the gold prospector's wishfully-produced perceptual belief that the pebble is gold is afforded the same justifying role as the skill-produced belief of the mineral expert, or that the suspicious interviewer's belief is justified on the basis of their rogue emotional experience. The staunch dogmatist might resist this objection, however. In response to overgeneralisation cases, proponents of these views may bite the bullet and allow that, in virtue of their bearing the right kind of seeming character, experiences like these are capable of immediately and defeasibly justifying belief. That is, the dogmatist might be perfectly happy to concede that their theory generalises to experiences like those of Markie's gold-prospector or the suspicious interviewer, but deny that this is particularly problematic. It may be counterintuitive to those who aren't naturally inclined to internalist views, but this isn't a decisive objection insofar as these views can plausibly diagnose the intuitive oddness of these cases in other ways, e.g. by pointing to the fact that it is only defeasible and not ultima facie justification conferred by these experiences, and that our intuitions aren't sufficiently finegrained to track the difference between the two, and to the fact that this justification is easily and often defeated, and so forth. Thus, dogmatism appears to have a relatively straightforward escape clause such that it can disarm worries concerning the apparent profligacy of the account. This form of bullet-biting strikes me as implausible. To illustrate why, consider a weak-willed agent who finds themselves living within a community of racists, all of whom harbour xenophobic beliefs towards those from a different ethnicity to themselves. Out of a strong desire to fit in with this group, the agent actively engages with these xenophobic beliefs. She listens to racist propaganda, attends community events celebrating the exploits of racist historical figures, and so forth. Over time, she comes to adopt these beliefs herself, such that she forms a network of biases towards particular ethnic groups. As such, upon encountering any person that belongs to such a group, she habitually has the seeming that this person is acting suspiciously. Plausibly, these xenophobic seemings are attributable to the agent herself and, specifically, to her desire to integrate into her community. She created and is responsible for the formation of those seemings. Dogmatists, in virtue of their commitment to the claim that it is defeasible and not ultima facie justification conferred by experience, can explain why the agent's xenophobic seemings do not justify her in believing that the person from a particular ethnic group is acting suspiciously only if she has an awareness of her experience's etiology. That is, for dogmatists, the justification conferred by the xenophobic seemings is defeated by her awareness of the fact that the seemings are ultimately attributable to her desires. 
2 However, it also seems plausible that, as time passes and she successfully integrates into the community, she comes to forget that her desire to fit in was the source of these xenophobic seemings. Her racist beliefs become such an entrenched part of her cognitive architecture that she no longer questions them nor their origin. 3 Dogmatism then generates the strange result that the agent is not experientially justified in her belief that the person is behaving suspiciously at a time t 1 where she is aware that her desire is the origin of the xenophobic seemings, but she is justified on the basis of those seemings at a time t 2 where she has forgotten that this is the case. This strikes me as counterintuitive. It's odd to suggest that forgetting something can enhance the positive epistemic status of a belief, especially when that belief is causally traceable and attributable to an agent's epistemically dubious desire. 4 Dogmatists seem to be getting the wrong result here. Now, there are two ways in which the defender of ED might respond. First, the dogmatist may argue that, while there is something intuitively problematic about this case, it's not obvious that the problem pertains to the presence of epistemic justification. That is, one might contend that what our intuitions in this case are actually tracking is the agent's moral blameworthiness, or zetetic failings pertaining to her process of poor epistemic inquiry. 5 If these failings are the source of our intuition that there is something amiss with this case, then the emotional dogmatist is let off the hook insofar as there's not actually anything problematic about bestowing her emotional seemings with justificatory power at t 2 . I take it that the best strategy for establishing that there is an epistemic failing here (and, specifically, one pertinent to the presence of justification) is to consider an analogous case in which there are no obvious moral or zetetic failings which plausibly hijack the intuition that there's something amiss with bestowing justificatory power to the emotional seemings. If we neutralise these nonjustificatory failings and there's still something problematic about the epistemic result, then we have good reason to believe that this case does constitute an overgeneralisation worry for ED. On that note, consider the following. Suppose that, through a powerful desire to be liked by everybody, I come to believe that a person has strong affection for me whenever they remember my name. Consequently, I habitually experience the emotional seeming of joy whenever anybody refers to me by name; it emotionally seems to me that this referral is a very good thing for me. At a time t 1 , when I am aware of these seemings' causal origin in my wishful thinking, they don't justify my evaluative belief that this event is good for me. At a later time t 2 , when I have forgotten the etiology of these seemings, they do justify my evaluative belief. Now, this case shares the same general structure as the original overgeneralisation case for ED. Plausibly, however, there's no obvious moral failing in this case. Moreover, it strikes me as unlikely that the issue at play is a zetetic worry pertaining to my poor process of epistemic inquiry given that I'm plausibly not conducting an inquiry when I have the emotional seeming of joy after somebody refers to me by name. 
According to Friedman (2019), a necessary condition for a subject to count as an inquirer, and to thereby have their process of inquiry subject to zetetic norms of assessment, is that they possess an ''interrogative attitude'' (p. 299) towards the question at hand, i.e. they're curious or contemplative as to what the answer is. In this case, it's not obvious that I have the goal-directed activity of pursuing an answer to the question as to what any given individual's attitude is towards me; I just have the psychologically immediate experience of joy whenever a person refers to me by name, given my beliefs about what that referral means and my powerful desire to be liked. So, if a subject isn't morally or zetetically blameworthy in a case like this, but there still seems to be something counterintuive about allowing their evaluative belief to be justified by their emotional seemings, then this seems best explained in terms of the subject's specific epistemic failing, such that bestowing their emotional seemings with justificatory power constitutes an over-generalisation problem for ED. A second argument that the dogmatist might make in response to the overgeneralisation case specifically concerns the worry that, for ED, forgetting key defeating evidence can improve the epistemic status of one's evaluative belief. To dispel this counterintuitive result, the dogmatist might appropriate argumentative resources from discussions of forgotten evidence and defeat in the epistemology of memory literature. One particularly relevant discussion concerns Huemer's (1999) proposal of the following diachronic view of phenomenal conservatism: A belief is justified full stop if and only if one had an adequate justification for adopting it at some point, and thenceforward one was justified in retaining it. (p. 351). This view is proposed partially in response to cases of forgotten defeat that are typically levelled against synchronic views of internalist justification. In these cases, a subject forms a belief that p via epistemically irrational means, such as wishful thinking. At a time t 1 , when the subject is aware of this, her belief that p is unjustified. However, as time passes, the subject forgets the means through which she arrived at p, and retains p in memory at t 2 . The worry is that many synchronic views will deliver the result that p is justified at t 2 given that, at this time, the subject's defeater for p is lost to memory. Huemer's diachronic phenomenal conservatism attempts to avoid this result by claiming that a belief is overall justified if and only if the subject was once justified in adopting that belief, i.e., the subject's past mental states matter for the present justificatory status of one's belief. Given that, in the forgotten defeat case, the subject was never justified in adopting p because of its formation via irrational means, Huemer's view avoids the counterintuitive result. Returning to the case at hand, then, perhaps the emotional dogmatist can argue something similar. That is, assuming a view like Huemer's, perhaps one can argue that the xenophobic subject is not justified in her evaluative belief that the person is acting suspiciously at t 2 because the evaluative belief was not justified at t 1 , given her then-awareness of her emotional seemings' etiology. Here's the problem with this response. 
Even if diachronic views of this sort turn out to be plausible, 6 reasoning drawn from these discussions in the epistemology of memory cannot get a foothold on this over-generalisation case for ED given that, here, nothing is being retained in memory. Recall that, in the forgotten defeat cases pertinent to diachronic views like Huemer's, the subject forgets the defeating evidence but retains the belief that p via memory. The problem is that, in ED's overgeneralisation case, the subject does not memorially retain the same belief that the person is acting suspiciously from t 1 to t 2 . Rather, at t 2 , the subject has another emotional seeming experience which causes the belief which, crucially, is distinct from the belief formed at t 1 . Because memory is playing no role here, plugging in a view like Huemer's will not be sufficient to dispel the counterintuive result delivered by ED, nor can it absolve the dogmatist of the over-generalisation charge. So, in summary, if identifying unqualified seemings as justifiers results in an overly permissive account of justification, and if endorsing such an account results in counterintuitive implications, then PD and ED are not plausible accounts of immediate experiential justification. Restricted emotional dogmatism A natural response for the dogmatist to make here is to tighten and finetune their account so as to exclude the over-generalisation cases presented above. One notable example of such a view is proposed by Chudnoff. On Chudnoff's view, it's not sufficient for a perceptual experience to make it seem to you that p in order for it to justify your belief that p. Rather, the experience must instantiate the property of having presentational phenomenology with respect to p. Chudnoff (2013) sets out the notion of presentational phenomenology as follows: What it is for an experience of yours to have presentational phenomenology with respect to p is for it to both make it seem to you that p and make it seem to you as if this experience makes you aware of a truth-maker for p (p. 37). Crucially, what distinguishes Chudnoff's view from basic PD is the addition of the truth-maker condition. On this account, if my visual experience of the mug on the desk immediately and defeasibly justifies my belief that there is a mug on the desk, it does so in virtue of having presentational phenomenology with respect to that proposition, i.e. it both makes it seem to me that there is a mug on the desk and makes it seem as if I'm visually aware of an item in my perceptible surroundings that makes that proposition true. Thus, we get the following restricted phenomenal dogmatist view: Presentationalism: S's perceptual experience is capable of immediately and defeasibly justifying her belief that p if and only if the experience both makes it seem to S that p and makes it seem as if S is perceptually aware of a truthmaker for p. There are good reasons to endorse presentationalism. One of the central motivations for the view is that the notion of presentational phenomenology chimes well with various characterisations of the epistemically significant phenomenal character of visual experience offered by phenomenal dogmatists in the literature, while providing a more robust diagnosis of this character. 7 Moreover, the presence of the truth-maker condition makes Chudnoff's account better able to deflect overgeneralisation cases, e.g. 
if presentationalism is true, then Markie's wishful prospector cannot be justified in their seeming-based belief that the pebble is gold because what makes that proposition true, i.e. the chemical composition of the pebble, is not something that can figure into visual seeming awareness. Therefore, because the visual experience does not make it seem as if the prospector is perceptually aware of a truth-maker for the relevant proposition, it cannot lend justification to the relevant belief. Now, in light of this development, let's return to the emotions. We can transpose the theoretical machinery of presentational phenomenology over to the case of emotional experience in order to construct the following restricted account of emotional dogmatism: Restricted Emotional Dogmatism (RED): S's emotional experience is capable of immediately and defeasibly justifying her evaluative belief e if and only if the experience both makes it seem to her that e, and makes it seem as if she is emotionally aware of a truth-maker for e. One interesting thing to note here is that RED, insofar as it places epistemic significance on the emotional experience making it seem to you as if you're aware of a truth-maker for an evaluative proposition, fits nicely with the comments provided by Goldie, Tappolet, and Johnston in §1. Recall that in their respective descriptions of the epistemic power of emotions, Goldie described emotional feelings as capable of ''revealing things about the world'', while Tappolet suggested that emotional experiences ''allow us to be aware of certain features of the world''. The suggestion here that emotional experiences provide us with some sort of unique awareness about things out there in the world seems to closely match RED's requirement of emotional experiences making it seem as if we're aware of truthmakers for evaluative propositions, i.e. things out there that make evaluative propositions true. Indeed, Johnston explicitly uses the language of truth-makers insofar as he claims that ''affect discloses evaluative truth-makers' ' (2001, p. 206), and that this (at least partially) explains what he terms the ''epistemic authority'' (p. 205) of affective experiences. By including the truth-maker condition, then, RED coheres with views about the epistemic import of emotional phenomenology in the surrounding literature, inherits the general advantages of the basic ED account and receives support from a more theoretically robust epistemological framework which avoids the pitfalls of basic dogmatism. However, RED also faces significant challenges. Before presenting my own critique, let us first address a challenge levelled against RED by Brogaard and Chudnoff (2016). In their analysis, RED is rejected on the grounds that it builds phenomenologically unrealistic contents into the scope of emotional seeming awareness. For Brogaard and Chudnoff, emotional experience cannot bring seeming awareness of truth-makers for evaluative propositions because evaluative properties are not suitable objects of emotional awareness. Crucially, this is because evaluative properties bear a normative dimension; they merit certain emotional responses. For an emotional experience to make it seem as if I'm aware of an evaluative property instantiated by an object, that emotional experience would have to reflexively present itself as being epistemically merited by the object. This, for Brogaard and Chudnoff, cannot be true. 
Whether an object merits that particular emotional response is not something I can be aware of via my own emotional phenomenology. I will not pursue this criticism against RED. Instead, I will propose a different challenge which focuses not on RED's putative commitment to controversial phenomenological assertions, but on its commitment to controversial epistemological results. My reason for this is twofold. First, note that whether one finds Brogaard and Chudnoff's challenge compelling relies on their having the intuition that emotional experience cannot bear a very specific kind of self-reflexive phenomenology. This doesn't strike me as a commonly held intuition. There are those in the literature who, at the very least, are amenable to the suggestion that emotions can be experienced as being epistemically merited with respect to their objects, and some even propose accounts of emotional phenomenology in which this is explicitly the case. 8 Second, and relatedly, it seems at least prima facie plausible that our intuitions have significantly more reliability and argumentative traction within the domain of epistemological theorising, given the frequency with which counterexamples are cited as compelling objections to epistemological views. Our intuitions when it comes to specific introspective phenomenological claims, on the other hand, are plausibly less widely-shared, less reliable, and less dialectically compelling. For these reasons, §3.1 will solely pursue the forthcoming epistemological challenge against RED. Objection: the dilemma of evaluative truth-makers Here, I argue that RED's inclusion of the truth-maker condition spells serious trouble for the view. Specifically, RED faces a dilemma in what seeming awareness of truth-makers for evaluative propositions consists in. Take an experience of fear towards an approaching snake. In order for that experience of fear to justify the evaluative belief that the snake is fearsome, the experience must both make it seem to you that the snake is fearsome and make it seem as if you're emotionally aware of a truth-maker for that evaluative proposition. But what is the truth-maker for this proposition? RED, as expressed thus far, is silent as to whether the truth-maker consists in the evaluative property of fearsomeness itself, or whether it consists in the non-evaluative properties instantiated by the snake that give rise to the evaluative property of fearsomeness, i.e. the sharp fangs, the aggressive movements, and so forth. Call these 'the evaluative property reading' and 'the non-evaluative property reading' of the truth-maker condition respectively. The problem is that neither of these options looks promising for RED. Let's begin with the evaluative property reading, which can be spelled out as follows: RED EP : S's emotional experience is capable of immediately and defeasibly justifying her evaluative belief e if and only if the experience both makes it seem to her that e and makes it seem as if she's emotionally aware of the evaluative property putatively instantiated by the object. Immediately, a problem arises here. Namely, while the inclusion of the truthmaker condition seems to suitably restrict dogmatism in the perceptual case, it's not at all clear that this reading of the truth-maker condition restricts RED at all. Reconsider Brady's suspicious interviewer. 
The worry is that RED EP can't exclude the interviewer's emotional experience of suspicion because their experience satisfies both the seeming condition and the truth-maker condition. That is, insofar as the emotional experience already makes it seem to the interviewer that the candidate is duplicitous (and they're not aware of any reason to distrust this seeming), then plausibly their experience of suspicion also makes it seem to them that the candidate instantiates the property of 'duplicitousness'. The evaluative property reading of the truth-maker condition doesn't seem to be adding any further requirement to emotional dogmatism, given that any emotional experience which satisfies the seeming condition will also satisfy the truth-maker condition. What else could it mean for an emotional experience to make it seem to you that the candidate is duplicitous, other than making it seem as if you're aware of the evaluative property of 'duplicitousness' putatively instantiated by the candidate? Naturally, then, RED EP will continue to over-generalise to problematic cases precisely because, in practice, it's no different to ED.

At this point, the defender of RED EP may argue that the case is under-described. In response to this over-generalisation worry, they might attempt to re-describe the case in order to motivate the plausibility of conceding justification to the interviewer. They may suggest, for instance, that the interviewer's emotional experience of suspicion makes it seem as if they're emotionally aware of the duplicitousness instantiated by the candidate because the interviewer is picking up on subtle duplicitous-making features of the candidate, i.e. that their emotional seeming awareness of duplicitousness is caused by their perception of certain mannerisms and micro-behaviours indicative of duplicitousness, such as avoiding the gaze of the interview panel, excessive talking, smirking, etc. Thus, the defender of RED EP might argue that the emotional experience makes it seem as if they're emotionally aware of the property 'duplicitousness' instantiated by the candidate because they're aware of the relevant pattern of non-evaluative properties. If this is the case, then conceding justification on the basis of these emotional seemings doesn't seem problematic.

The problem with this response is that RED EP lacks the ability to distinguish between a case like this, i.e. a case in which the emotional seeming awareness of duplicitousness is caused by a seeming awareness of a pattern of duplicitous-making features of the candidate, and a case in which the emotional seeming awareness of 'duplicitousness' is caused by epistemically dubious cognitive biases (e.g. suppose that the candidate is a woman and the interviewer is unknowingly biased against women). The worry is that, insofar as the epistemically relevant emotional seemings (i.e. the seeming that the candidate is duplicitous and the seeming awareness of the evaluative property 'duplicitousness' instantiated by the candidate) can be grounded in either of these causal explanations, RED EP doesn't have the tools to differentiate the good and bad cases; both types of emotional seemings (i.e. those produced by epistemically legitimate means and those produced by epistemically illegitimate means) have the same justificatory power. This is a bad result.
So, if the source of RED's continued vulnerability to the over-generalisation problem is conceiving of truth-makers for evaluative propositions as evaluative properties themselves, why not abandon this claim and insist instead that the truthmaker for an evaluative proposition is the relevant set of non-evaluative properties instantiated by the object which would make the proposition true? This is the nonevaluative property reading, and can be spelled out as follows: RED NEP : S's emotional experience is capable of immediately and defeasibly justifying her evaluative belief e if and only if the experience both makes it seem to her that e and makes it seem as if she's emotionally aware of the set of non-evaluative properties that, if instantiated, would give rise to the relevant evaluative property, and so make e true. The attraction of this reading is that, unlike RED EP , it avoids obvious overgeneralisation cases like the biased interviewer. Recall that, in this case, the interviewer's emotional seeming awareness of the candidate's duplicitousness is caused by their bias against women. This case would not meet the requirements of RED NEP precisely because the interviewer's emotional experience is not making it seem as if they're aware of the set of non-evaluative properties that would make the proposition 'the candidate is duplicitous' true. Rather, their experience is being triggered by the combination of their sexist bias and their perception of the candidate's gender. Clearly, mere seeming awareness of the candidate's gender does not amount to seeming awareness of the candidate instantiating particular nonevaluative properties which would make the proposition 'the candidate is duplicitous' true. Thus, RED NEP avoids the charge of over-generalisation because it can epistemically differentiate between the good case (i.e. the case in which the interviewer's emotional seemings of duplicitousness are caused by their perception of duplicitous-making non-evaluative features of the candidate), and the bad case (i.e. the case in which the interviewer's emotional seemings of duplicitousness are caused by their perception of the candidate's gender and their bias against women). The problem, however, is that RED NEP is now too restrictive. If we identify these conjunctions of non-evaluative properties as truth-makers, then very few of our emotional experiences would be capable of bearing justificatory power. It seems that only very basic emotional experiences, like fear of a snake or disgust towards spoiled milk, for example, are reliably capable of bringing the required wideranging emotional seeming awareness of the relevant non-evaluative properties that would make the relevant proposition (e.g. 'the snake is fearsome', or 'the spoiled milk is disgusting') true. Emotional experiences which do not figure into this very basic category often don't bring awareness of the relevant non-evaluative properties. 9 Take an emotional experience of awe towards a piece of artwork which does not bring full seeming awareness of the non-evaluative properties which would make the proposition 'that artwork is beautiful' true, or an experience of amusement towards a particular state of affairs which does not bring seeming awareness of the particular amusement-making non-evaluative properties. 
Despite the absence of such fine-grained seeming awareness, it seems entirely possible that emotional experiences of this sort are capable of providing a positive epistemic contribution to the status of the corresponding evaluative beliefs. Thus, robbing these emotions of immediate justificatory power on the basis of their not fulfilling the strict phenomenological requirements for RED NEP strikes me as bad news for the view.

Here, there are two possible responses available to the defender of RED NEP. The first is to concede that, understood this way, the view ends up being restrictive but deny that this is problematic. Indeed, the defender of RED NEP might stress that the lesson to be learned from the over-generalisation problem is that we should be casting a narrow net around the emotional experiences capable of bearing justificatory power. We want to rule out cases in which emotional seemings look like they're not grounded in epistemically legitimate observations of the relevant non-evaluative properties, and the best way of doing this is to impose strict constraints on what counts as emotional seeming awareness of truth-makers. If a consequence of this is that relatively complex emotional experiences which do not bring seeming awareness of the relevant non-evaluative properties end up getting ruled out of the account (insofar as they do not make it seem as if one is emotionally aware of a truth-maker for the relevant evaluative proposition), then so be it. The worry with conceding epistemic austerity here, however, is that one desideratum for a plausible version of a justificatory thesis of emotion is that it can account for how a broad catalogue of our evaluative beliefs can be justified by emotional experiences. If endorsing RED NEP means that we can only consider very basic emotional experiences as capable of bearing justificatory ability, then our dogmatist approach to emotional justification is failing to provide a satisfactory picture of the immediate justificatory capacity of emotional experience.

Secondly, the objector might argue that in these scenarios (take the amusement case, for example) my emotional experience is, in fact, making it seem as if I'm aware of the relevant collection of non-evaluative properties which would make the event amusing, I just can't articulate exactly what those properties are. One suggestion in support of this might be something like the following. When prompted, i.e. when asked 'what's so funny?', I can gesture vaguely towards the features of the situation that make it amusing, such as the particular comment made, the context in which it was made, and so forth, even if I can't express the amusing-making minutiae. In other words, I'm not at a complete loss as to what it is about the situation that makes it amusing, and this is all that's needed for evidence of emotional seeming awareness of the relevant conjunction of non-evaluative properties. Therefore, we can tell some story about having emotional seeming awareness of the relevant truth-maker in these cases, and RED NEP doesn't end up being objectionably restrictive with respect to the kinds of emotional experiences it bestows with justificatory power.

The problem with this response is that further ambiguity in what emotional seeming awareness of truth-makers consists in raises difficult questions for RED NEP.
If all that matters for emotional seeming awareness of truth-makers is that the experience makes the subject capable of gesturing towards the non-evaluative features of the object which would make the relevant evaluative proposition true, then it becomes less clear that RED NEP is able to rule out problematic cases. Take the suspicious interviewer whose emotional seemings that the candidate is duplicitous are caused by sexist bias. Plausibly, their emotional experience of suspicion will make them capable of saying something about what seems to make the candidate duplicitous (e.g. ''there's just something about them''), but this still seems insufficient for the interviewer to be justified in their belief that the candidate is duplicitous. Substantively relaxing the notion of awareness in order to let in cases where the emotional experience doesn't make it seem as if one is aware of (i.e. able to identify) all of the relevant non-evaluative properties runs the risk of letting the epistemically illegitimate cases like biased suspicious interviewer in through the back door. In summary, RED is confronted with a troubling dilemma. Either we identify evaluative properties themselves as the truth-makers for evaluative propositions (RED EP ), in which case the view continues to over-generalise, or we identify the relevant aggregate of non-evaluative properties as truth-makers for evaluative propositions (RED NEP ), in which case the view rules out emotional experiences which, plausibly, are capable of immediately justifying the relevant evaluative beliefs. If endorsing RED means that we must commit to either an objectionably profligate account of emotional justification or instead one which is objectionably austere, then RED does not provide a suitable framework for thinking about the immediate justificatory power of emotional experiences. Alternative restricted emotional dogmatism One question the reader might have at this point is whether there exists an alternative instantiation of a restricted emotional dogmatist view. That is, if the addition of the Chudnoff-inspired truth-maker condition fails to make ED plausible, then perhaps we can look elsewhere for an additional condition to crystallise the view. Here, I will consider two alternative versions of restricted emotional dogmatism inspired by restricted phenomenal dogmatist accounts provided by McGrath and Markie, and argue that neither of these views can provide a plausible framework for an emotional dogmatist view. Receptive seemings emotional dogmatism Recall the gold prospector example which began our discussion of the overgeneralisation problem back in §2.1. In this case, the expert prospector's perceptual seeming that the pebble is gold arises from their learned identification skills, while the wishful prospector's perceptual seeming that the pebble is gold arises from their desire to discover gold. The problem for basic PD was that it was unable to account for the intuitive verdict that, while the expert may be justified on the basis of their perceptual seemings, it is implausible that the wishful prospector's seeming has the same justificatory ability. In light of counterexamples like this, McGrath (2013) aims to construct a restricted version of phenomenal dogmatism which manages to exclude problematic cases while also striving to retain the initial attractions of basic views. 
On this note, McGrath suggests that what's going wrong in cases like the wishful prospector is that the perceptual seeming has what he refers to as a ''quasi-inferential'' (p. 228) basis, i.e. the wishful prospector's perceptual seeming that the pebble is gold does not arise directly from perception but instead arises via an inference-like transition or 'jump' from the base perceptual seeming that there is a yellowish pebble. The relationship between the seemings here is 'quasi-inferential' insofar as exchanging the seemings with corresponding beliefs containing the same propositional contents would render the transition as an instance of inference between beliefs. For McGrath, it is only seemings which do not have such a quasi-inferential basis, i.e. receptive seemings, which are capable of providing immediate and defeasible justification to the relevant belief. At best, seemings with a quasi-inferential basis might be capable of conferring mediate justification to the relevant belief, but only if it's an epistemically good quasi-inference, i.e. only if the content of the base seeming adequately supports the content of the quasi-inferred seeming.

Applying this to the example at hand, the wishful prospector has a receptive perceptual seeming that there is a yellowish pebble. On McGrath's account, the prospector would be immediately justified in believing that there is a yellowish pebble on the basis of this seeming. However, the prospector's desire to discover gold intervenes and produces a quasi-inferred perceptual seeming that the pebble is gold. Because this perceptual seeming is quasi-inferred from the base perceptual seeming that there is a yellowish pebble, it is not capable of immediately and defeasibly justifying the prospector's belief that the pebble is gold. Moreover, we can see that this quasi-inference taking place is not an epistemically legitimate one. The seeming with the content 'there is a yellowish pebble' does not sufficiently support the content of the quasi-inferred seeming, i.e. 'the pebble is a gold nugget'. Hence, the wishful prospector is in no way justified in their belief that the pebble is gold on the basis of their perceptual seemings.

So, if this looks like a plausible view with respect to perceptual seemings, we can construct an analogous emotional dogmatist view as follows:

Receptive Seemings Emotional Dogmatism (RSED): S's emotional experience is capable of immediately and defeasibly justifying her evaluative belief that an object O instantiates an evaluative property E if and only if (i) the experience makes it seem to her that O is E, and (ii) this emotional seeming is receptive, i.e. it does not arise via a quasi-inferential transition from other seemings.

Now, to some degree, the question of whether RSED constitutes an improvement on RED hinges on whether RSED gives us the right result in emotional over-generalisation cases; whether it correctly diagnoses what's going wrong with the suspicious interviewer's emotional seeming, for example, and has the philosophical tools to exclude it from being capable of conferring justification. Here's the problem. While the notion of receptivity may be plausible with respect to perceptual seemings and perceptual over-generalisation cases, it's not obvious that it translates well to the emotional case. There's a question of whether any emotional seemings are receptive, and not quasi-inferred from other seemings, given that emotions have what Deonna and Teroni (2012) refer to as ''cognitive bases'' (p. 5). That is, unlike perceptions, emotions rely on base mental states such as perceptions, beliefs, and so forth.
I can't experience fear in response to the approaching snake without in some way perceiving the snake and its fearsome-making features. The same is not true of visually perceiving the snake; my visual experience of the snake does not presuppose a further mental state in the same way that my emotional experience does. In light of this fact, then, we might wonder how any emotional experience can involve a seeming that an object instantiates a particular evaluative property without that seeming being quasi-inferred from non-emotional seemings pertaining to the non-evaluative features of the object. 10 This is a problem because, if it is the case that all or most emotional seemings are quasi-inferred from the seemings of their cognitive bases (i.e. perceptual seemings, introspective seemings, etc.), it looks like RSED can't explain the intuitive epistemic difference between legitimately and illegitimately produced emotional seemings. Reconsider two versions of the suspicious interviewer case. In one scenario, the interviewer's emotional seeming that the candidate is duplicitous is caused by legitimate observations of duplicitous-making features of the candidate, whereas the other scenario involves the emotional seeming being caused by illegitimate background biases. For RSED, what has to be going wrong in the bad case is that the interviewer's emotional seeming that the candidate is duplicitous is quasi-inferred from another seeming, and is thereby incapable of lending immediate justification to the evaluative belief that the candidate is duplicitous. But, as we've seen above, it looks like both the good and the bad case involve quasi-inferred emotional seemings. If merely being non-receptive makes a seeming incapable of conferring immediate justification, then RSED generates the same result for both the good and bad cases of suspicious interviewer. In response, the defender of RSED might argue that the view can still explain the intuitive difference in epistemic capacity between the emotional seemings involved in both cases. That is, they may point to the difference in epistemic quality in each quasi-inference as what explains the intuition that the emotional seeming produced by legitimate observations is better epistemically placed than the seeming produced by illegitimate bias. Recall that a quasi-inferential basis need not rob the seeming of all of its justificatory power. If it is a good quasi-inference, i.e. if the content of the base seeming adequately supports the content of the quasi-inferred seeming, then the quasi-inferred seeming can transmit mediate justification to the relevant belief. The defender of RSED might argue that in the good case, i.e. the case in which the emotional seeming that the candidate is duplicitous is quasi-inferred from the perceptual seeming which has as its content the relevant conjunction of duplicitousmaking non-evaluative features of the candidate (i.e. their behaviours and mannerisms), the quasi-inference is legitimate insofar as the content of the base perceptual seeming adequately supports the content of the emotional seeming. On the other hand, consider the bias case. Presumably, the emotional seeming that the candidate is duplicitous will be quasi-inferred from perceptual seemings with different contents, e.g. if it's a sexist bias, then the emotional seeming that the candidate is duplicitous will be quasi-inferred from the base perceptual seeming that the candidate is a woman. Clearly, this is not a legitimate quasi-inference. 
In other words, there's an illegitimate 'jump' in the bias quasi-inference that isn't present in the good case, and this is what explains the difference in epistemic status between the two cases. Even if this is a plausible way of explaining the intuitive difference between the two suspicious interviewer cases, it still doesn't get us where we want to go given that we've been interested in how emotional phenomenology can immediately justify our evaluative beliefs. If it is the case that emotional seemings can only ever transmit mediate justification generated by perceptual (or introspective, etc.) seemings, then RSED cannot account for emotional experience as a source of unmediated epistemic justification. Recall that one of the main selling points of emotional dogmatism concerns its ability to provide low-cost justification to a rich catalogue of our evaluative beliefs. This capacity, which is essential for providing dogmatists with the resources to answer sceptical worries pertaining to the acquisition of various epistemic goods, crucially depends on the immediacy of the justification. If the justification provided by seemings must instead meet further epistemic requirements, such as being suitably related to the content of the subject's non-emotional seemings and existing justified beliefs, then we give argumentative sway back to the sceptic, and thereby lose the distinctive virtue of dogmatism. Therefore, because eliminating immediacy from dogmatism eliminates a substantive percentage of the theory's philosophical attractions, and because RSED requires eschewing immediacy, McGrath's receptivity-based view cannot be a suitable theoretical framework for emotional dogmatism. Knowledge-how emotional dogmatism Finally, let's consider Markie's view. Returning to the gold prospector case, a natural suggestion as to why the expert prospector's perceptual seeming enjoys justificatory power is that the expert knows what gold looks like; the novice doesn't have anything close to this knowledge. One way of spelling out the problem with basic PD is that it can't account for the fact that this ought to make a difference between the epistemic status of the expert's and novice's belief. In light of this, Markie (2013) proposes a qualified view of phenomenal dogmatism which restricts the type of seemings capable of possessing justificatory power to seeming experiences brought about by the agent's exercise of the relevant knowledge-how capacity. To summarise Markie's view, merely having a perceptual seeming is insufficient for immediate and defeasible justification. A further condition must be met, namely that the subject must have the relevant knowledge-how capacity to recognise the relevant property and the seeming must be appropriately related to that capacity, i.e. the knowledge-how plays a substantive causal role in bringing about the seeming. On Markie's view, what possessing a knowledge-how capacity amounts to is the subject possessing a disposition to experience the relevant seemings upon perceptually apprehending certain features of the object in question, e.g. the expert prospector has the knowledge-how capacity to perceptually identify gold nuggets insofar as they are disposed to have the perceptual seeming that a pebble is gold when apprehending certain gold-making features of the object. Moreover, that subject's disposition is, as Markie puts it, ''determined by'' (p. 264) their having the right sort of background information, e.g. 
that an object which has certain features and looks a certain way is gold. Finally, on Markie's account, having this background information is a matter of having evidence that justifies the subject in believing, in this case, that an object which looks a certain way is gold. So, if this view looks like it's generating the right result in the perceptual case, we can transpose it into an emotional dogmatist view as follows: Knowledge-How Emotional Dogmatism (KHED): S's emotional experience is capable of immediately and defeasibly justifying her evaluative belief that an object O instantiates an evaluative property E if and only if (i) the experience makes it seem to her that O is E, and (ii) the experience makes it seem to her that O is E in virtue of her knowledge of how to emotionally identify something as being E. For KHED, my emotional experience of fear towards the snake is capable of immediately and defeasibly justifying my evaluative belief that the snake is fearsome if and only if my experience makes it seem to me that the snake is fearsome and I have this emotional seeming as the result of my knowledge of how to emotionally identify something as fearsome. Analogously to the perceptual case above, a subject's knowledge-how capacity to emotionally identify something as fearsome involves the possession of a disposition to experience emotional seemings of fearsomeness upon attending to certain features of the object or situation. Moreover, I possess this disposition at least partly by virtue of my having the relevant background information, i.e. what makes fearsome things fearsome. The good news for KHED is that the addition of the knowledge-how condition on seemings places the view in a much better position than RED to be able to handle over-generalisation cases. Take the case in which the interviewer's suspicious emotional seemings towards the candidate are produced by an illegitimate bias as opposed to legitimate observations of duplicitous-making features of the candidate. KHED is able to provide a straightforward explanation as to why the interviewer's emotional seemings do not justify them in believing that the candidate is duplicitous, i.e. the suspicious seeming is experienced by virtue of the interviewer harbouring illicit biases, not by virtue of their knowledge of how to emotionally identify duplicitousness. The interviewer whose emotional seemings do arise as a result of legitimate observations of duplicitous-making features, however, plausibly does enjoy justification for their belief that the candidate is duplicitous insofar as their experiencing the seemings as a result of those legitimate observations is an exercise of her knowledge-how capacity to identify duplicitousness. However, KHED faces two problems. The first of which is that, again, it's not clear that this account paints a plausible picture of immediate justification. Since justification-conferring emotional seemings must be the result of an exercise of a knowledge-how capacity, and since this capacity is determined by the possession of background information that would justify the relevant evaluative proposition, it's not obvious that KHED is capturing the phenomenon that we set out to explain. The second worry is that attributing so much weight to the possession of the relevant background information that determines one's disposition to have the relevant emotional seemings (and thereby the relevant knowledge-how capacity) threatens to render emotional phenomenology epistemically superfluous. 
That is, there's a serious question of what justificatory work the emotional seemings are doing if the brunt of the epistemic labour has already been done by the subject insofar as she has the background information required to justify her belief that a given object instantiates the relevant evaluative property. So, while KHED looks promising insofar as it seems better placed to handle overgeneralisation cases, we see on closer inspection that it is unable to explain the immediate justificatory power of emotional experience, and threatens to make emotional phenomenology epistemically superfluous. Markie's knowledge-how account, then, is not a suitable framework for an analogous emotional dogmatist view. Summing up Here, I have considered and rejected four possible emotional dogmatist views. Basic emotional dogmatism fails in virtue of being too liberal with respect to the types of emotional experiences it bestows with justificatory power, and the attempt to restrict emotional dogmatism with the Chudnoff-inspired truth-maker condition fails in virtue of falling foul to a troubling dilemma in what seeming awareness of truthmakers consists in. I've also aimed to show that alternative options for restricting emotional dogmatism, i.e. views analogous to those advanced by McGrath and Markie, cannot provide a plausible account as to how emotional experience can immediately justify evaluative belief. To be clear, the purpose of this discussion has not been to provide a conceptual argument for the failure of every instantiation of the emotional dogmatist view, nor have I endeavoured to show that emotional experience is altogether incapable of immediately and defeasibly justifying belief. Rather, by levelling these arguments against ED, RED, RSED, and KHED, I have aimed to build an inductive case against the possibility of a plausible emotional dogmatist view. In lieu of an undiscovered dogmatist instantiation which does not fall foul to the above objections, we have good reason to reject the idea that emotional phenomenology is what makes emotional experience capable of immediately justifying evaluative belief. What remains an open question is whether emotional seeming states might be capable of lending justification to evaluative belief mediated by other mental states and beliefs. As we've seen from §4.1 and §4.2, it may be possible to provide an explanation as to how emotional seemings can perform some epistemic role in transmitting justification initially generated by either receptive perceptual seemings in McGrath's case or by the background information which constitutes the relevant knowledge-how capacity for Markie. Determining whether emotional experience has this epistemic capacity, however, is a task for another paper. For now, I conclude that the prospects of emotional dogmatism, as a straightforward analogue of phenomenal dogmatism, are bleak.
First Simultaneous Optical and EUV Observations of the Quasi-Coherent Oscillations of SS Cygni Using EUV photometry obtained with the Extreme Ultraviolet Explorer (EUVE) satellite and UBVR optical photometry obtained with the 2.7-m telescope at McDonald Observatory, we have detected quasi-coherent oscillations (so-called ``dwarf nova oscillations'') in the EUV and optical flux of the dwarf nova SS Cygni during its 1996 October outburst. There are two new results from these observations. First, we have for the first time observed ``frequency doubling:'' during the rising branch of the outburst, the period of the EUV oscillation was observed to jump from 6.59 s to 2.91 s. Second, we have for the first time observed quasi-coherent oscillations simultaneously in the optical and EUV. We find that the period and phase of the oscillations are the same in the two wavebands, finally confirming the long-held assumption that the periods of the optical and EUV/soft X-ray oscillations of dwarf novae are equal. The UBV oscillations can be simply the Rayleigh-Jeans tail of the EUV oscillations if the boundary layer temperature kT_bb<~ 15 eV and hence the luminosity L_bb>~ 1.2e34 (d/75 pc)^2 erg/s (comparable to that of the accretion disk). Otherwise, the lack of a phase delay between the EUV and optical oscillations requires that the optical reprocessing site lies within the inner third of the accretion disk. This is strikingly different from other cataclysmic variables, where much or all of the disk contributes to the optical oscillations. Introduction Rapid periodic oscillations are observed in the optical flux of high accretion rate ("high-Ṁ ") cataclysmic variables (nova-like variables and dwarf novae in outburst) (Patterson 1981;Warner 1995a,b). These oscillations have high coherence (Q ≈ 10 4 -10 6 ), short periods (P ≈ 7-40 s), low amplitudes (A < ∼ 0.5%), and are sinusoidal to within the limits of measurement. They are referred to as "dwarf nova oscillations" (DNOs) to distinguish them from the apparently distinct longer period, low coherence (Q ≈ 1-10) quasi-periodic oscillations (QPOs) of high-Ṁ cataclysmic variables, and the longer period, high coherence (Q ≈ 10 10 -10 12 ) oscillations of DQ Her stars. DNOs appear on the rising branch of the dwarf nova outburst, typically persist through maximum, and disappear on the declining branch of the outburst. The period of the oscillation decreases on the rising branch and increases on the declining branch, but because the period reaches minimum about one day after maximum optical flux, dwarf novae describe a loop in a plot of oscillation period versus optical flux. The dwarf nova SS Cygni routinely exhibits DNOs during outburst. Optical oscillations have been detected at various times with periods ranging from 7.3 s to 10.9 s (Patterson, Robinson, & Kiplinger 1978;Horne & Gomer 1980;Hildebrand, Spillar, & Stiening 1981;Patterson 1981). In the soft X-ray and EUV wavebands, quasi-coherent oscillations have been detected in HEAO 1 LED 1 data at periods of 9 s and 11 s (Córdova et al. 1980(Córdova et al. , 1984, EXOSAT LE data at periods between 7.4 s and 10.4 s (Jones & Watson 1992), EUVE DS data at periods between 2.9 s and 9.3 s (Mauche 1996(Mauche , 1998, and ROSAT HRI data at periods between 2.8 s and 2.9 s (van Teeseling 1997). 
Mauche (1996) showed that the period of the EUV oscillations of SS Cyg is a single-valued function of the EUV flux (hence, by inference, the mass-accretion rate onto the white dwarf), and explained the loops observed in plots of oscillation period versus optical flux as the result of the well-known delay between the rise of the optical and EUV flux at the beginning of dwarf nova outbursts. While the quasi-coherent oscillations of SS Cyg are usually sinusoidal to high degree, Mauche (1997) pointed out the pronounced distortion of the EUV waveform at the peak of the 1994 June/July outburst. We present here new observations in the optical and EUV obtained during the 1996 October outburst of SS Cyg.

EUVE

As discussed by Wheatley, Mauche, & Mattei (2000), AAVSO optical, EUVE, and RXTE observations of SS Cyg were obtained during a multiwavelength campaign in 1996 October designed to study the wavelength dependence of the outbursts of dwarf novae. The EUVE (Bowyer & Malina 1991; Bowyer et al. 1994) observations began on JD − 2450000 = 366.402 and ended on JD − 2450000 = 379.446. Data are acquired only during satellite night, which comes around every 95 min and lasted for 23 min (at the beginning of the observation) to 32 min (at the end of the observation). Valid data are collected during intervals when various satellite housekeeping monitors [including detector background and primbsch/deadtime corrections] are within certain bounds. After discarding numerous short (∆t ≤ 10 min) data intervals comprising less than 10% of the total exposure, we were left with a net exposure of 208 ks. EUV photometry is provided both by the deep survey (DS) photometer and short wavelength (SW) spectrometer, but the count rate is too low and the effective background is too high to detect oscillations in the SW data. Unfortunately, the DS photometer was switched off between October 11.37 UT and October 14.70 UT because the count rate was rising so rapidly on October 11 that the EUVE Science Planner feared that the DS instrument would be damaged while the satellite was left unattended over the October 12-13 weekend. We constructed an EUV light curve of the outburst from the background-subtracted count rates registered by the two instruments, using a 72-130 Å wavelength cut for the SW spectroscopic data, and applying an empirically-derived scale factor of 14.93 to the SW count rates to match the DS count rates. The resulting EUV light curve is shown by the filled symbols in the upper panel of Figure 1, superposed on the AAVSO optical light curve shown by the small dots (individual measurements) and histogram (half-day average). As shown by Mauche, Mattei, & Bateson (2001), the EUV light curve lags the optical light curve by ≈ 1.5 days during the rising branch of the outburst, then leads the optical light curve during the declining branch of the outburst. The secondary maximum of the EUV light curve at the very end of the optical outburst appears to be real, and coincides with the recovery of the hard X-ray flux measured by RXTE (Wheatley, Mauche, & Mattei 2000). To determine the period of the oscillations of the EUV flux of SS Cyg, for each valid data interval we calculated the power spectra of the background-subtracted count rate light curves using 1.024 s bins (the bin width of the primbsch/deadtime correction table).
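To make the period-finding step concrete, the following minimal Python sketch computes the power spectrum of a single evenly binned light-curve segment and reports the period of its strongest peak. It is not the actual EUVE reduction code: the search band of 0.1-0.4 Hz (bracketing the 2.9-9.3 s EUV periods mentioned in the Introduction) and the simulated light curve are illustrative assumptions.

```python
import numpy as np

def oscillation_period(rate, dt=1.024, fmin=0.1, fmax=0.4):
    """Period (s) of the strongest power-spectrum peak in the fmin-fmax band.

    rate : background-subtracted count rates in evenly spaced bins of width dt (s).
    """
    rate = np.asarray(rate, dtype=float)
    rate = rate - rate.mean()                  # remove the DC component
    power = np.abs(np.fft.rfft(rate)) ** 2     # simple periodogram
    freq = np.fft.rfftfreq(rate.size, d=dt)    # Hz
    band = (freq >= fmin) & (freq <= fmax)     # assumed DNO search band
    return 1.0 / freq[band][np.argmax(power[band])]

# Hypothetical example: a noisy 6.94 s oscillation in a ~20 min data interval
rng = np.random.default_rng(0)
t = np.arange(0.0, 1200.0, 1.024)
fake_rate = 5.0 * (1 + 0.34 * np.sin(2 * np.pi * t / 6.94)) + rng.normal(0, 0.5, t.size)
print(oscillation_period(fake_rate))           # ~6.9 s
```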
Individual spectra typically consist of a spike superposed on a weak background due to Poisson noise, so in each case we took as the period of the oscillation the location of the peak of the power spectrum in the interval ν = 0.1-0.4 Hz (P = 2.5-10 s). The resulting variation of the period of the EUV oscillation is shown in the lower panel of Figure 1. The oscillation was first convincingly detected on the rising branch of the outburst at a period of 7.81 s, fell to 6.59 s over an interval of 4.92 hr (Q = 1.5 × 10^4), jumped to 2.91 s, and then fell to 2.85 s over an interval of 4.92 hr (Q = 3.0 × 10^5) before observations with the DS were terminated. When DS observations resumed 3.4 days later during the declining branch of the outburst, the period of the EUV oscillation was observed to rise from 6.73 s to 8.23 s over an interval of 2.10 days (Q = 1.2 × 10^5). It is clear from the lower panel of Figure 1 that the period of the EUV oscillation of SS Cyg anticorrelates with the DS count rate, being long when the count rate is low and short when the count rate is high. To quantify this trend, we plot in Figure 2 the log of the period of the oscillation as a function of the log of the DS count rate. As in the previous figure, the data fall into two groups: one during the early rise (distinguished with crosses) and decline of the outburst, the other during the interval after the frequency of the oscillation had doubled. The trend during the early rise and decline of the outburst is clearly the same; fitting a function of the form P = P_0 I^−α, where I is the DS count rate, an unweighted fit to the data gives P_0 = 7.26 s and α = 0.097. A similar fit to the data acquired after the oscillation frequency had doubled gives P_0 = 2.99 s and α = 0.021. The first trend is consistent with that observed during outbursts of SS Cyg in 1993 August and 1994 June/July (Mauche 1996), but the trend after the frequency had doubled is clearly distinct: not only did the oscillation frequency double, its dependence on the DS count rate became "stiffer" by a factor of ≈ 5 in the exponent. SS Cyg seems to have been doing what it could to avoid oscillating faster than about 2.8 s. If this is the Keplerian period of material at the inner edge of the accretion disk, then P_Kep ≥ 2π(R_wd^3/GM_wd)^(1/2) ≈ 2.8 s, requiring M_wd ≥ 1.27 M_⊙ (assuming the Nauenberg 1972 white dwarf mass-radius relationship). If instead P_Kep ≈ 5.6 s (i.e., the observed 2.8 s period is the first harmonic of a 5.6 s Keplerian period), then M_wd ≳ 1.08 M_⊙. The data of Hessman et al. (1984), Friend et al. (1990), and Martínez-Pais et al. (1994) are consistent with a binary inclination i ≈ 40° and white dwarf mass M_wd = 0.9-1.1 M_⊙, hence favor the second option, but it requires only a ≈ 10% reduction in the inclination angle to accommodate the first option.

Optical

In an effort to obtain the first simultaneous optical and EUV/soft X-ray measurements of dwarf nova oscillations, optical photometry of SS Cyg was obtained with the 2.7-m telescope at McDonald Observatory and the Stiening high-speed photometer on the nights of 1996 October 13, 14, and 15 UT. The Stiening photometer simultaneously measures the flux in four bandpasses similar to the Johnson UBVR bandpasses (see Robinson et al. 1995 for the effective wavelengths and widths of the bandpasses). Fluxes were calibrated using the standard star BD+28°4211, and the time standard was UTC as given by a GPS receiver located at the dome of the telescope.
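Returning briefly to the EUV period-flux relation quantified above, the P = P_0 I^−α fits amount to straight-line fits in log-log space. A hedged Python sketch follows; the (count rate, period) pairs are fabricated from the quoted trend purely to show the mechanics, since the individual Figure 2 points are not tabulated in the text.

```python
import numpy as np

# Hypothetical (DS count rate, period) pairs drawn from the quoted trend;
# the real data are the points plotted in Figure 2.
count_rate = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
period = 7.26 * count_rate ** -0.097

# Unweighted fit of P = P0 * I**(-alpha) as a straight line in log-log space
slope, intercept = np.polyfit(np.log(count_rate), np.log(period), 1)
alpha, P0 = -slope, np.exp(intercept)
print(f"P0 = {P0:.2f} s, alpha = {alpha:.3f}")   # recovers 7.26 s and 0.097 by construction
```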
The start times of the observations were JD − 2450000 = 369.583, 370.644, and 371.587; the run lengths were 3.14, 1.37, and 3.97 hr respectively; and sample intervals were 0.5 s throughout. SS Cyg was observed to fade by ∼ 0.25 mag between the first and second nights and by another ∼ 0.30 mag between the second and third nights, but the mean flux ratios remained nearly constant from night to night: the F ν flux ratios are F U /F V ≈ 1.30, F B /F V ≈ 1.13, and F R /F V = 0.87, with V = 9.04, 9.27, and 9.56 for October 13, 14, and 15 UT, respectively. A search was made for optical oscillations by calculating the power spectra of the light curves in the various bandpasses. We found no detectable periodicities in the light curves from the first night with an upper limit on the relative amplitude ∆F/F < 3.0 × 10 −4 for any periodicity between 2.5 s and 10 s. Oscillations were detected on the second and third nights with periods of 6.58 s and 6.94 s, respectively. The mean properties of these oscillations are listed in Table 1. The fluxes in that table should be accurate to a few percent, and the oscillation amplitudes from the third night also should be accurate to a few percent, but on the second night the accuracy of the amplitude measurements are no better than ∼ 20% because the light curves are weak and contaminated by noise so the oscillation amplitudes are poorly determined and biased upwards by noise. The band fluxes F , oscillation amplitudes ∆F , and relative amplitudes ∆F/F from the third night are plotted in Figure 3, where it is apparent that the continuum flux of SS Cyg rises monotonically from R through U , while the absolute and hence the relative oscillation amplitudes are smallest in V . As shown by the dotted line, the U BV spectral distribution of the oscillation amplitudes is reasonably consistent with Rayleigh-Jeans, whereas the spectral distribution of the continuum is much flatter: an unweighted fit to the U BV measurements of the oscillation amplitudes and continuum assuming a function of the form F ν ∝ λ −α yields α = 1.9 and 0.57, respectively. Optical and EUV To compare the periods of the oscillations detected in the EUV and optical, we added the points for the optical periods to the lower panel of Figure 1. The period of the oscillation on the second night is about what one would expect from an extrapolation of the EUVE data, but, more importantly, the period of the oscillation on the third night is consistent with the value measured contemporaneously by EUVE . To investigate this further, we focused the analysis on that (unfortunately short) interval when optical and EUVE data were obtained simultaneously: during JD − 2450000 = 371.6639-371.6724 and 371.7297-371.7417; two stretches of strictly simultaneous observations separated by an EUVE orbit. Given the low amplitude of the optical oscillations, we calculated power spectra of the various bands in the encompassing interval JD − 2450000 = 371.6639-371.7417. For the EUVE data, we calculated the power spectra separately for the two intervals. These spectra are plotted in the left panels of Figure 4 (where the EUV power spectrum is the average of the two intervals). In each case (four optical bands, two EUV intervals), the period of the oscillation is found to be 6.94 s (actually, 6.944± 0.004 s for the optical channels, 6.94± 0.02 s for the EUV channel; the higher accuracy for the optical channels is due to longer data interval). 
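The unweighted F_ν ∝ λ^−α fit to the UBV oscillation amplitudes mentioned above can be sketched in a few lines of Python. The band wavelengths and amplitude values below are placeholders chosen to lie near a Rayleigh-Jeans slope, not the Stiening band parameters or the Table 1 measurements.

```python
import numpy as np

# Placeholder effective wavelengths (Angstrom) for U, B, V and hypothetical
# oscillation amplitudes; the real inputs are the photometer band parameters
# and the measured amplitudes, which are not reproduced here.
wavelength = np.array([3600.0, 4400.0, 5500.0])
amplitude = np.array([7.0e-4, 4.7e-4, 3.0e-4])

# Unweighted fit of F_nu ∝ lambda^(-alpha) in log-log space
slope, _ = np.polyfit(np.log10(wavelength), np.log10(amplitude), 1)
alpha = -slope
print(f"alpha ~ {alpha:.2f}")   # alpha = 2 corresponds to a Rayleigh-Jeans spectrum in F_nu
```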
To determine the relative phase of these oscillations, we phase-folded the data assuming a common period of 6.94 s and a common zero point at JD − 2450000 = 371.7, midway between the two data intervals. Fitting a sine wave F + ∆F sin 2π(φ − φ_0) to each band separately, we derived the parameters listed in Table 2; the phase-folded light curves and sinusoidal fits are shown in the right panels of Figure 4. The primary result from these efforts is that the relative phase of the oscillation is the same for all the bands. In particular, the phase of the EUV and optical oscillations is the same within the errors: the difference is ∆φ_0 = 0.014 ± 0.038. The relative amplitudes derived by these means are higher than derived in the previous section, but it is to be expected that higher amplitudes will be derived over short intervals when the oscillation period is more nearly constant. As before, the oscillation amplitude is highest in U, lowest in V, and comparable at an intermediate value in B and R.

Summary and Discussion

We have described EUV and optical photometric observations of SS Cyg obtained during its 1996 October outburst. During the rise to outburst, the period of the EUV oscillation was observed to fall from 7.81 s to 6.59 s over an interval of 4.92 hr, jump to 2.91 s, and then fall to 2.85 s over an interval of 4.92 hr. During the decline from outburst, the period of the EUV oscillation was observed to rise from 6.73 s to 8.23 s over an interval of 2.10 days. Optical oscillations were detected on the second and third nights of observations during the decline from outburst with periods of 6.58 s and 6.94 s, respectively. During the times of overlap between the optical and EUV observations on the third night, the oscillations were found to have the same period and phase; they differ only in their amplitudes, which are 34% in the EUV and 0.05%-0.1% in the optical. The first striking aspect of these observations is the frequency doubling observed on the rise to outburst. SS Cyg appears to have undergone a "phase transition" at a critical period P_c ≲ 6.5 s, when the frequency of its oscillation doubled (Fig. 1) and the "stiffness" of its period-intensity relation (P ∝ I^−α) increased by a factor of ≈ 5 in the exponent (Fig. 2). Optical oscillations were detected on the second and third nights of observations at periods above P_c, but not on the first night when an extrapolation of the trend would predict an oscillation period below P_c. It is interesting to speculate that the optical oscillations of SS Cyg (and possibly other dwarf novae) disappear on the rise to outburst when (if) the source makes the transition to the higher oscillation frequency and "stiffer" period-intensity state, and then reappear on the decline from outburst when the source reverts back to its normal state. Additional simultaneous optical and EUV/soft X-ray observations are required to determine if this is the case. Such observations are also required to determine if this transition takes place at the same period on the rise to and decline from outburst, or whether there is a "hysteresis" in the transition. The observational data are consistent with SS Cyg pulsating at a fundamental period P ≳ 6.5 s, then switching to a first harmonic and stiffening its period-intensity (by inference period-Ṁ) relationship so as to avoid oscillating faster than P_min/2 ≈ 2.8 s.
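A schematic version of the phase-folding and sine-fitting procedure described above is given below. This is a hedged Python sketch using simulated light curves; the function names, noise levels, and simulated amplitudes are illustrative assumptions, not the analysis behind Table 2.

```python
import numpy as np
from scipy.optimize import curve_fit

P = 6.94        # adopted common period (s)
T0 = 0.0        # common zero point (s) relative to the start of the data

def folded_sine(phase, mean, amp, phi0):
    return mean + amp * np.sin(2 * np.pi * (phase - phi0))

def fit_band(time, flux, period=P, t0=T0):
    """Phase-fold a light curve and fit F + dF*sin(2*pi*(phi - phi0))."""
    phase = ((time - t0) / period) % 1.0
    p0 = [flux.mean(), flux.std(), 0.0]
    (mean, amp, phi0), _ = curve_fit(folded_sine, phase, flux, p0=p0)
    return mean, amp, phi0 % 1.0

# Simulated example: two bands sharing the same period and phase but very
# different relative amplitudes (optical-like ~0.1%, EUV-like ~34%)
rng = np.random.default_rng(1)
t = np.arange(0.0, 600.0, 0.5)
optical = 1.0 + 0.001 * np.sin(2 * np.pi * t / P) + rng.normal(0, 0.002, t.size)
euv = 5.0 + 1.700 * np.sin(2 * np.pi * t / P) + rng.normal(0, 0.500, t.size)

for name, flux in (("optical-like", optical), ("EUV-like", euv)):
    mean, amp, phi0 = fit_band(t, flux)
    print(f"{name}: dF/F = {amp / mean:.4f}, phi0 = {phi0:.3f}")
```

Equal fitted phases for the two simulated bands correspond to the zero relative phase delay reported for the real data.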
This minimum period P min ≈ 5.6 s is consistent with the Keplerian period at the inner edge of the accretion disk if, as seems to be the case observationally, the binary inclination i ≈ 40 • and the mass of the white dwarf M wd ≈ 1 M ⊙ . A secure white dwarf mass would confirm this interpretation. The second striking aspect of these observations is the lack of a phase delay between the EUV and optical oscillations measured simultaneously on the third night (Fig 4). The relative phase delay ∆φ 0 = 0.014 ± 0.038 for P = 6.94 s, or ∆t = 0.10 ± 0.26 s. The 3 σ upper limit ∆t ≤ 0.88 s corresponds to a distance r = c ∆t ≤ 2.6 × 10 10 cm. If the EUV oscillation originates near the white dwarf, and the optical oscillation is formed by reprocessing of EUV flux in the surface of the accretion disk, the delays ∆t = r (1 − sin i cos ϕ)/c, where the binary inclination i ≈ 40 • and 0 ≤ ϕ ≤ π is the azimuthal angle from the line of sight. Then, the distance to the reprocessing site r = c ∆t/(1 − sin i cos ϕ) ≤ 1.6 × 10 10 cm. To give a sense of scale, this is about 30 white dwarf radii or one-third the size of the disk. In contrast, eclipse observations of UX UMa (Nather 1981) indicate that in these high-Ṁ cataclysmic variables much or all of the disk contributes to the optical oscillations. The much smaller size of the optical emission region in SS Cyg is derived from an application of echo mapping, made possible for the first time by our strictly simultaneous optical and EUV observations. The other diagnostic of the optical oscillations is their spectrum, which is nearly Rayleigh-Jeans in U BV (Fig. 3). Given this result, it is interesting to ask if the U BV oscillations of SS Cyg are simply due to the Rayleigh-Jeans tail of the spectrum responsible for the EUV oscillations. Mauche, Raymond, & Mattei (1995) discuss the EUV spectrum of SS Cyg and show that it can be parameterized in the 72-130Å EUVE SW bandpass by a blackbody absorbed by a column density of neutral material. Acceptable fits to the spectrum are possible for a wide range of temperatures kT bb , hydrogen column densities N H , and luminosities L bb , but the tight correlation between these parameters significantly constrains the allow region of parameter space. From Figures 8-10 of Mauche, Raymond, & Mattei, a reasonable set of acceptable parameters is as listed in the first three columns of Table 3, where we have assumed a source distance d = 75 pc and a fiducial SW count rate of 0.5 counts s −1 . Scaling to the SW count rate of 0.083 counts s −1 observed during the interval of overlap on the third night of observations, the fractional emitting area of these blackbodies f = L bb /4πR 2 wd σT 4 bb are as listed in the fourth column of Table 3 for an assumed white dwarf radius R wd = 5.8 × 10 8 cm. With the exception of the coolest model, these fractional emitting areas are smaller than the value f = H bl /R wd ∼ 3 × 10 −3 expected for a boundary layer with a scale height H bl . The B band flux densities M B of these models are as listed in the fifth column of Table 3, and after multiplying by the EUV oscillation amplitude of 34% they become the oscillation amplitudes ∆M B listed in the sixth column of Table 3. With an observed oscillation amplitude ∆F B = 4.4 × 10 −4 Jy (Table 2), the relative model oscillation amplitudes ∆M B /∆F B are as listed in the seventh column of Table 3. 
We conclude that a single source can (within a factor of ≲ 3) produce both the EUV and UBV oscillations of SS Cyg if its boundary layer temperature kT_bb ≲ 15 eV and hence its luminosity L_bb ≳ 1.2 × 10^34 (d/75 pc)^2 erg s^−1. Unfortunately, other data cannot confirm or exclude this possibility. First, while blackbody fits to HEAO 1 LED 1 and ROSAT PSPC soft X-ray spectra favor temperatures kT_bb ≈ 20-30 eV, they are not inconsistent with temperatures as low as kT_bb ≈ 15 eV (Córdova et al. 1980; Ponman et al. 1995). Second, while the strength of the He II λ4686 emission line at the peak of the outburst implies a boundary layer luminosity L_bb ≈ 5 × 10^33 (d/75 pc)^2 erg s^−1, hence kT_bb ≈ 20 eV, the luminosity can be increased to the required value if the fraction of the boundary layer luminosity intercepted by the disk is decreased from η = 10% to η ≈ 2%; such a model has the added charm of producing the expected boundary layer luminosity L_bl ≈ L_disk ≈ G M_wd Ṁ / 2R_wd ≈ 3 × 10^34 (d/75 pc)^2 erg s^−1 (Mauche, Raymond, & Mattei). The recent Chandra LETG spectrum of SS Cyg in outburst will better constrain the boundary layer parameters, hence allow us to determine if the boundary layer can produce both the EUV and UBV oscillations. Either way, we are left to explain the enhancement of the oscillation amplitude in R over that predicted by a Rayleigh-Jeans spectrum (Fig. 3). We have no compelling explanation for this datum, but the echo mapping constraint from above still applies, so the source of the extra oscillation amplitude in R must lie within the inner third of the accretion disk. We note that Steeghs et al. (2001) recently measured the spectrum of the optical oscillations of the dwarf nova V2051 Oph in outburst and found that the oscillation amplitudes of the Balmer and He I emission lines were stronger than the continuum by factors of ≲ 5. Our R bandpass contains the Hα emission line, so it is interesting to speculate that the enhanced oscillation amplitude in R might be due to the larger oscillation amplitude of the Hα line flux compared to the continuum. Unfortunately, it seems unlikely that the needed factor-of-two enhancement can be produced this way. Also, for this explanation to work, the higher-order Balmer lines must not significantly enhance the oscillation amplitude in B. Fast optical spectroscopy of SS Cyg in outburst is required to determine if this scenario can explain the enhanced oscillation amplitude in R, and, more generally, to determine if our explanation of the origin of the UBV oscillations is correct. Clearly, there is much more observational work to be done.
Dependence matters: Statistical models to identify the drivers of tie formation in economic networks

Networks are ubiquitous in economic research on organizations, trade, and many other areas. However, while economic theory extensively considers networks, no general framework for their empirical modeling has yet emerged. We thus introduce two different statistical models for this purpose -- the Exponential Random Graph Model (ERGM) and the Additive and Multiplicative Effects network model (AME). Both model classes can account for network interdependencies between observations, but differ in how they do so. The ERGM allows one to explicitly specify and test the influence of particular network structures, making it a natural choice if one is substantively interested in estimating endogenous network effects. In contrast, AME captures these effects by introducing actor-specific latent variables affecting their propensity to form ties. This makes the latter a good choice if the researcher is interested in capturing the effect of exogenous covariates on tie formation without having a specific theory on the endogenous dependence structures at play. After introducing the two model classes, we showcase them through real-world applications to networks stemming from international arms trade and foreign exchange activity. We further provide full replication materials to facilitate the adoption of these methods in empirical economic research.

Introduction The study of networks has established itself as a central topic in economic research (Jackson, 2008). Within the broader context of the study of complex and interdependent systems (see e.g. Flaschel et al., 1997, 2007, 2018), networks can be defined as interconnected structures which can naturally be represented through graphs. In the economic literature, networks have been extensively considered from a theoretical perspective, with the primary goal of understanding how economic behavior is shaped by interaction patterns (Jackson and Rogers, 2007). Indeed, the adequate modelling of such interactions has been described as one of the main empirical challenges in economic network analysis (Jackson et al., 2017). Research in this direction on, e.g., organizations as networks, diffusion in networks, network experiments, or network games, is surveyed in Bramoullé et al. (2016), Jackson (2014), and Jackson et al. (2017). These theoretical advances find application in their proper specification and testing. We then make use of the AME model to study a historical network of global foreign exchange activity, where a directed edge is present if one country's national currency is actively traded within the other country. AME allows us to estimate how relevant country features, such as per-capita GDP and the gold standard, and pairwise covariates, such as the distance between two countries and their reciprocal trade volume, influence tie formation, while controlling for network effects to provide unbiased estimates. We further compare the two model classes, weighing pros and cons of each approach and providing guidance on which tool is appropriate for applications to different empirical settings and research questions. Finally, in addition to a step-by-step analysis and interpretation of these application cases, we provide full replication code in our GitHub repository 1, allowing for seamless reproducibility.
We, therefore, demonstrate the "off-the-shelf" applicability of these methods, and offer applied researchers a headstart in employing them to study substantive economic problems. Our contribution is related to various strands of the growing literature on economic networks (e.g. Jackson and Rogers, 2007;Jackson, 2008;Bramoullé et al., 2016). Due to its focus on economic questions, our work differs from surveys in physics (Newman, 2003), statistics (Goldenberg et al., 2010), or political science (Cranmer et al., 2017). Several articles provide overviews and surveys of existing economic network models from a theoretical perspective (Jackson, 2014;Graham, 2015;Jackson et al., 2017;De Paula, 2020). None of these articles concentrates on discussing broadly applicable statistical modeling frameworks, such as ERGM and AME, from an empirical perspective. In this sense our paper is similar in spirit to van der Pol (2019) who, however, only focuses on ERGM, without comparing alternative approaches. Indeed, one of the goals of this paper is to shed light on the emerging AME model class (and, more generally, on latent variable network models) for future applications in the economic literature. The remainder of the paper is structured as follows. Section 2 discusses existing literature and presents the mathematical and notational framework used to define and discuss networks throughout the paper. Section 3 introduces the ERGM and applies it to the international arms trade network. Section 4 is dedicated to AME and its application to the global foreign exchange network. Section 5 concludes the paper with a brief discussion on the two model classes, contrasting their different uses and highlighting pros and cons of each approach. Related literature Even though network structures naturally arise in many aspects of economics and are subject of prominent research in the field, much of the previous literature has ignored the implied interdependencies, instead opting for regression models assuming ties to be independent conditional on the covariates (e.g. Anderson and Van Wincoop, 2003, Rose, 2004, Lewer and Van den Berg, 2008. This assumption is often unreasonable in practice. It would, for example, imply that Germany imposing economic sanctions on Russia is independent of Italy imposing sanctions on Russia, and, in the directed case, even of Russia imposing them on Germany itself. While no standard framework for the modeling of empirical network data has emerged in economics so far, a number of contributions in -or adjacent to -the field do make use of statistical network models. We shortly survey these works here to show that the models we present are indeed suitable for the analysis of economic data. Possibly the most obvious kind of economic network is the international trade network (see Chaney, 2014) and many of these studies accordingly seek to model the formation of trade ties. In this vein, two early studies (Ward and Hoff, 2007;Ward et al., 2013) apply latent position models to show that trade exhibits a latent network structure beyond what a standard gravity model can capture (see also Fagiolo, 2010;Dueñas and Fagiolo, 2013). More recently, numerous contributions have used the ERGM to explicitly theorize and understand network interdependence in the general trade (Herman, 2022;Liu et al., 2022;Smith and Sarabi, 2022) as well as the trade in arms (Thurner et al., 2019;Lebacher et al., 2021), patents (He et al., 2019), and services (Feng et al., 2021). 
That being said, empirical research on economic networks is not limited to trade. Smith et al. (2019) use multilevel ERGMs to study a production network consisting of ownership ties between firms at the micro-level and trade ties between countries at the macro-level, while Mundt (2021) explores the European Union's sector-level production network via ERGMs as well as an alternative methodology, the stochastic actor-oriented model (SAOM). The latter is another prominent tool in the realm of network analysis, which is suitable for modeling longitudinal network data. As we, in the interest of brevity, focus on models for static networks (i.e. networks that are observed only at one point in time), we do not treat the SAOM, and instead refer to Snijders (1996, 2017) for an introduction to the model class. Going back to empirical research on economic networks in the literature, Fritz et al. (2023) deploy ERGMs to investigate patent collaboration networks. Studies on foreign direct investments document network influences using latent position models (Cao and Ward, 2014), or seek to model them via extensions of the ERGM (Schoeneman et al., 2022). Finally, economists also study networks of interstate alliances and armed conflict (see e.g. Jackson and Nei, 2015; König et al., 2017), both of which have been modeled via ERGMs (Cranmer et al., 2012; Campbell et al., 2018) and AME (Dorff et al., 2020; Minhas et al., 2022). This short survey indicates that both ERGM and AME can be used to answer questions which are of substantive interest to economists.

Setup Before introducing models for networks in which dependencies between ties are expected, we briefly introduce the mathematical framework for networks, as well as the necessary notation. Let y = (y_ij), i, j = 1, ..., n, be the adjacency matrix representing the observed binary network, comprising n fixed and known agents (nodes). In this context, y_ij = 1 indicates an edge from agent i to agent j, while y_ij = 0 translates to no edge between the two. Since self-loops are not admitted for most studied networks, the diagonal of y is left unspecified or set to zero. Depending on the application, the direction of an edge can carry additional information. If it does, we call the network directed. In this article, we mainly focus on this type of networks. Also note that all matrix-valued objects are written in bold font for consistency. In addition to the network connections, we often observe covariate information on the agents, which can be at the level of single agents (e.g. the GDP of a country) or at the pairwise level (e.g. the distance between two countries). We denote covariates by x_1, ..., x_p, and our goal is to specify a statistical model for Y, that is the random variable corresponding to y, conditional on x_1, ..., x_p. A natural way to do this is to specify a probability distribution over the space of all possible networks, which we define by the set Y. Two main characteristics differentiate our modeling endeavor from classical regression techniques, such as Probit or logistic regression models. First, for most applications, we only observe one realization y from Y, rendering the estimation of the parameters to characterize this distribution particularly challenging. Second, the entries of Y are generally co-dependent; thus, most conditional dependence assumptions inherent to common regression models are violated.
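To make this notation concrete, a minimal R sketch of how such a directed binary network and its covariates might be stored is given below, using the network package on which the ergm software introduced later builds. All object names and the toy data are illustrative assumptions, not part of the replication materials.

```r
# Toy representation of a directed binary network with covariates.
library(network)

set.seed(1)
n <- 10
y <- matrix(rbinom(n * n, 1, 0.2), n, n)  # adjacency matrix with entries y_ij
diag(y) <- 0                              # no self-loops

net <- network(y, directed = TRUE)        # network object used by ergm

# nodal covariate, e.g. (log) GDP of each agent
net %v% "log_gdp" <- rnorm(n)

# dyadic covariate, e.g. log distance between agents, kept as an n x n matrix
log_dist <- matrix(abs(rnorm(n * n)), n, n)
```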
Generally, we term mechanisms that induce direct dependence between edges to be endogenous, while all effects external to the modeled network, such as covariates, are called exogenous.

The Exponential Random Graph Model The ERGM is one of the most popular models for analyzing network data. First introduced by Holland and Leinhardt (1981) as a model class that builds on the platform of exponential families, it was later extended with respect to fitting algorithms and more complex dependence structures (Lusher et al., 2012). We next introduce the model step-by-step to highlight its ability to progressively generalize by building on conditional dependence assumptions.

Accounting for dependence in networks We begin with the simplest possible stochastic network model, the Erdös-Rényi-Gilbert model (Erdös and Rényi, 1959; Gilbert, 1959), where all edges are assumed to be independent and to have the same probability of being observed. In stochastic terms, each observed tie is then a realization of a binomial random variable with success probability π, which yields

P_π(Y = y) = ∏_{i≠j} π^{y_ij} (1 − π)^{1−y_ij}    (1)

for the probability to observe y. Evidently, model (1), which implies equal probability for all possible ties, is too restrictive to be applied to real world problems. In the next step, we, therefore, additionally incorporate covariates x_ij by letting π vary depending on those covariates, leading to edge-specific probabilities π_ij. Following the common practice in logistic regression, we parameterize the log-odds by log(π_ij/(1 − π_ij)) = θ'x_ij, where x_ij is a vector of exogenous statistics with the first entry set to 1 to incorporate an intercept, and get the edge-specific probabilities given in (2). From (2), the analogy to standard logistic regression being a special case of generalized linear models (Nelder and Wedderburn, 1972) becomes apparent. The joint distribution of Y can be formulated in exponential family form, yielding

P_θ(Y = y) = exp{θ's(y)} / κ(θ),    (3)

where s(y) = (s_1(y), ..., s_p(y)), s_q(y) = ∑_{i=1}^{n} ∑_{j≠i} y_ij x_ij,q for all q = 1, ..., p, with x_ij,q as the q-th entry in x_ij, and κ(θ) = ∏_{i=1}^{n} ∏_{j≠i} (1 + exp{θ'x_ij}). In the jargon of exponential families, we term s(y) sufficient statistics.

Figure 1: Illustration of directed edgewise-shared partner statistics for k agents. Circles represent agents, and black lines represent edges between them. The names follow statnet nomenclature: OTP = "Outgoing Two-Path", ISP = "Incoming Shared Partner", OSP = "Outgoing Shared Partner", and ITP = "Incoming Two-Path".

Newcomb (1979) observed that many observed networks exhibit complicated relational mechanisms, including reciprocity, which we can account for by extending the set of sufficient statistics. Under reciprocity, an edge Y_ji influences the probability of its reciprocal edge Y_ij to occur. Analyzing social networks, we would expect that the probability of agent i nominating agent j to be a friend is higher if agent j has nominated agent i as a friend. Holland and Leinhardt (1981) extended model (1) to such settings with the so-called p_1 model. To represent reciprocity, we assume dyads, each of them defined by (Y_ij, Y_ji), to be independent of one another, which again yields an exponential family distribution similar to (3) with sufficient statistics that count the number of mutual ties (s_Mut(y) = ∑_{i<j} y_ij y_ji), of edges (s_Edges(y) = ∑_{i=1}^{n} ∑_{j≠i} y_ij), and the in- and out-degree statistics for all degrees observed in the networks 2.
Agents' in- and out-degrees are their number of incoming and outgoing edges, and relate to their relative position in the network (Wasserman and Faust, 1994). Next to reciprocity, another important endogenous network mechanism is transitivity, originating in the structural balance theory of Heider (1946) and adapted to binary networks by Davis (1970). Transitivity affects the clustering in the network, implying that a two-path between agents i and j, i.e. y_ih = y_hj = 1 for some other agent h, affects the edge probability of Y_ij. Put differently, Y_ij and Y_kh are assumed to be independent if and only if i, j ≠ k and i, j ≠ h. Frank and Strauss (1986) proposed the Markov model to capture such dependencies. For this model, the sufficient statistics are star-statistics, which are counts of sub-structures in the network where one agent has (incoming and outgoing) edges to between 0 and n − 1 other agents, and counts of triangular structures. If the network is directed it is possible to define different types of triangular structures, as depicted in Figure 1.

Extension to general dependencies Starting from the Erdös-Rényi-Gilbert model, which is a special case of a generalized linear model, we have consecutively allowed for more complicated dependencies between edges, resulting in the Markov graphs of Frank and Strauss (1986). Over this course, we showed that each model can be stated in exponential family form, characterized by a particular set of sufficient statistics. We now make this more explicit to allow for more general dependence structures, and specify a probabilistic model for Y directly through the sufficient statistics 3. Wasserman and Pattison (1996) introduced this model as

P_θ(Y = y) = exp{θ's(y)} / κ(θ),    (4)

where θ is a p-dimensional vector of parameters to be estimated, s(y) is a function calculating the vector of p sufficient statistics for network y, and κ(θ) = ∑_{ỹ∈Y} exp{θ's(ỹ)} is a normalizing constant to ensure that (4) sums up to one over all y ∈ Y. To estimate θ, Handcock (2003) adapted the Monte Carlo Maximum Likelihood technique of Geyer and Thompson (1992), approximating the logarithmic likelihood ratio of θ and a fixed θ_0 via Monte Carlo quadrature (see Hunter et al., 2012, for an in-depth discussion). A problem often encountered when fitting model (4) to networks is degeneracy (Handcock, 2003; Schweinberger, 2011). Degenerate models are characterized by probability distributions that put most probability mass either on the empty or on the full network, i.e., where either all or no ties are observed. To detect this behavior, one can use a goodness-of-fit procedure where observed network statistics are compared to statistics of networks simulated under the estimated model. To address it, Snijders et al. (2006) and Hunter and Handcock (2006) propose weighted statistics that, in many cases, have better empirical behavior. Degeneracy commonly affects model specifications encompassing statistics for triad counts and multiple degree statistics. For in-degree statistics, we would thus incorporate the geometrically-weighted in-degree statistic given in (5), where IDEG_k(y) is the number of agents in the studied network with in-degree k and α is a fixed decay parameter. One can substitute IDEG_k(y) in (5) with the number of agents with a specific out-degree, ODEG_k(y), to capture the out-degree distribution. We term these statistics geometrically weighted since the weights in (5) are a geometric series 4.
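For orientation, the geometrically weighted in-degree statistic is commonly written in the literature (Snijders et al., 2006; Hunter and Handcock, 2006) in the form below, which is the form we take (5) to refer to; the out-degree version replaces IDEG_k(y) by ODEG_k(y), and the same weighting applied to edgewise-shared-partner counts yields the statistics discussed below in connection with (6).

```latex
\mathrm{GWIDEG}(y;\alpha)
  = e^{\alpha} \sum_{k=1}^{n-1}
    \left\{ 1 - \left(1 - e^{-\alpha}\right)^{k} \right\} \mathrm{IDEG}_k(y)
```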
A positive estimate implies that an edge from a low-degree agent is more likely than an edge from a high-degree agent, resulting in a decentralized network. If, on the other hand, the corresponding coefficient is negative, one may interpret it as an indicator for a centralized network. To capture clustering, we have to define the distribution of edgewise-shared partners (ESP). This distribution is defined as the relative frequency of edges in the network with a specific number of k shared partners, that we denote by ESP(y) for k ∈ {1, ..., n − 2}. As shown in Figure 1, various versions of edgewise-shared partner statistics can be found in directed networks, depending on the direction of the edges between the three agents involved. Geometrically weighted statistics can be stated for them in a similar manner as for degree statistics. For example, for the outgoing two-path (OTP, see Figure 1a), this is In this case, a positive coefficient indicates that sharing ties with third actors increases the probability of observing an event between two agents. Along with capturing endogenous network statistics, it is also possible to extend the ERGM framework to include the temporal dimension, that is, to model longitudinal network data. This is done quite naturally through use of a Markov assumption on the temporal dependence of subsequently observed networks, giving rise to the Temporal Exponential Random Graph Model (TERGM). As we here focus on static networks, we do not cover this in depth, and refer to Hanneke et al. (2010) for an introduction to the TERGM, and to Fritz et al. (2020) for a more general discussion on temporal extensions to the model class. In summary, the ERGM allows to account for network dependencies via explicitly specifying them in s(y). A large variety of potential network statistics, such as those given in (5) and (6), can be included in s(y), enabling to test for their influence in the formation of the observed network. By allowing for this explicit inclusion and testing of network statistics, the ERGM requires researchers to at least have an implicit theory regarding what types of network dependence should exist in the network they study. Without such theory to guide the selection of network statistics, the range of potential network dependencies, and corresponding statistics, is virtually endless 5 . As a result, the ERGM is best suited for research questions that explicitly concern interdependencies within the network. If these interdependencies are, instead, only a potential source of bias the researcher wants to control for, the AME model (introduced in Section 4) may be a better fit. Application to the international arms trade network We next make use of the ERGM to analyze the international arms transfer network. Recent studies on trade in Major Conventional Weapons (MCW), such as fighter aircraft or tanks, not only emphasize its networked nature, but also argue that this very nature is of substantive theoretical interest (Thurner et al., 2019;Fritz et al., 2021). In line with Chaney (2014), triadic trade structures are held to reveal information regarding the participants' economic and security interests. Explicitly modeling these structures allows us to test hypotheses regarding their effects on further arms transfers. Accordingly, we seek to model the network of international arms transfers in the year 2018, where countries are nodes and a directed edge indicates MCW being delivered from country i to country j. 
Our interest here mainly lies in uncovering the network's endogenous mechanisms. MCW trade data come from SIPRI (2021), and the resulting network is depicted in Figure 2, obtained using the Yifan Hu force-directed graph drawing algorithm (Hu, 2005) with the software Gephi (Bastian et al., 2009). In Figure 2, countries are labeled by their ISO 3166-1 codes, and a directed edge from node i to node j indicates major conventional weapons being delivered from country i to country j. For estimating the parameters characterizing the ERGM, we use the R package ergm. Since evaluating κ(θ) from (4) necessitates calculating the sum of |Y| = 2^{n(n−1)} terms, we rely on MCMC approximations thereof to obtain the maximum likelihood estimates (see Handcock, 2003 and Hummel et al., 2012 for additional information on this topic). As discussed above, the ERGM allows us to use both exogenous (node-specific and pair-specific) attributes as well as endogenous structures to model the network of interest. Here, we select both types of covariates based on existing studies on the arms trade (Thurner et al., 2019; Fritz et al., 2021). In addition to an edges term, which corresponds to the intercept in standard regression models, we include importers' and exporters' logged GDP, whether they share a defense pact, their absolute difference in "polity" scores (a type of democracy index), and their geographical distance 6. We lag these covariates by three years, reflecting the median time between order and delivery for MCW delivered in 2018 7. More importantly, for the purpose of demonstrating how to model network data with the ERGM, we specify five endogenous network terms. In- and out-degree (IDEG and ODEG) measure, respectively, importers' and exporters' trade activity, and thus capture whether highly active importers and exporters are particularly attractive trading partners, or if they are instead less likely to form additional trade ties. Moreover, we specify a reciprocity term to capture whether countries tend to trade MCW uni- or bidirectionally. We further include two types of triadic structures, which represent transitivity and a shared supplier between countries i and j. The transitivity term counts how often country i exports arms to j while i exports to k, which in turn exports to j, thus capturing i's tendency to directly trade with j if they engage in indirect trade (OTP, see Figure 1a). In contrast, the shared supplier term counts how often country i sends arms to j while both import weapons from a shared supplier k (ISP, see Figure 1b). Note that, given the issue of degeneracy discussed above, we use geometrically weighted versions of all endogenous statistics except reciprocity. Finally, we include a repetition term capturing whether arms transfer dyads observed in 2018 had already occurred in any of the three previous years. Results of this ERGM, as well as, for comparison's sake, a logistic regression that includes the same exogenous covariates but does not capture any of the endogenous network structures, are presented in Table 1.

6 Data for these covariates come from the peacesciencer package (Miller, 2022). 7 We use the median as the distribution of times between order and delivery is quite skewed. As shown in the Supplementary Materials, our substantive results remain unchanged when using 4- and 5-year lags instead, which reflect the average time between order and delivery. In particular, the ERGM outperforms the logistic regression model regardless of lag choice.
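For illustration, a sketch of how such a specification might be passed to the ergm package is given below. The term names (edges, mutual, nodeocov, nodeicov, absdiff, edgecov, gwidegree, gwodegree, dgwesp) are genuine ergm terms, but the object names, attribute names, and decay values are illustrative assumptions rather than the exact replication code.

```r
# Hypothetical sketch of the arms-trade ERGM specification described above.
# 'net' is a network object with vertex attributes "log_gdp" and "polity";
# 'defense_pact', 'log_dist', and 'past_ties' are n x n covariate matrices.
library(ergm)

fit_ergm <- ergm(
  net ~ edges +
    nodeocov("log_gdp") + nodeicov("log_gdp") +       # exporter / importer GDP
    edgecov(defense_pact) +                            # shared defense pact
    absdiff("polity") +                                # polity difference
    edgecov(log_dist) +                                # geographical distance
    edgecov(past_ties) +                               # repetition, 2015-17 ties
    mutual +                                           # reciprocity
    gwidegree(decay = 0.5, fixed = TRUE) +             # importer activity
    gwodegree(decay = 0.5, fixed = TRUE) +             # exporter activity
    dgwesp(decay = 0.5, fixed = TRUE, type = "OTP") +  # transitivity
    dgwesp(decay = 0.5, fixed = TRUE, type = "ISP"),   # shared supplier
  control = control.ergm(seed = 1)
)
summary(fit_ergm)
```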
These results can be compared directly, as, just like in a logistic regression model, coefficients in the ERGM indicate the additive change in the log odds of a tie occurring in association with a unit change in the respective variable. In this sense, the logistic regression model can be viewed as a special case of the ERGM in which the network effects are omitted. From the table, we can see how the two models differ both in their in-sample fit, as captured by AIC and BIC 8 , as well as in the substantive effects they identify for the exogenous covariates. The repetition coefficient is positive and statistically significant in both models, but differs substantially in its size. An arms transfer edge having occurred at least once in 2015-17 increases the log odds of it occurring also in 2018 by 3.96 in the Logit, but only by 3.26 in the ERGM. Similarly, both models agree that the log odds of an arms transfer occurring increase with the economic size of the sender and receiver, as captured by their respective GDPs, but the coefficients retrieved by the Logit are approximately double the size of those in the ERGM, thus attributing more explanatory power to them. Also in this vein, the effect of the geographical distance between sender and receiver is three times as large in the logistic regression as in the ERGM and, while statistically significant in the former, indistinguishable from zero in the latter. Finally, both models report small and statistically insignificant effects for countries' polity difference and alliance ties. Taken together, however, there are clear, substantively meaningful differences in the effect sizes and, in the case of geographical distance, even statistical significance of the coefficients that the ERGM and Logit recover for the exogenous covariates. Furthermore, three of the endogenous statistics included in the ERGM exhibit statistically significant effects on the probability of arms being traded. The results for inand out-degree replicate the finding by Thurner et al. (2019), showing that highly active importers and exporters are less likely to form additional trade ties. In the ERGM, coefficients can also be interpreted at the global level, in addition to the edge-level interpretation given above. The shared supplier term having a (statistically significant) positive coefficient indicates, at the edge level, that an exporter is more likely to transfer weapons to a potential receiver if both of them import arms from the same source. Globally, on the other hand, the same coefficient means that the observed network exhibits more shared supplier configurations -where country i sends weapon to j while both receive arms from k -than would be expected in a random network of the same size. On the whole, the results presented in Table 1 offer an example for the striking differences that modeling network structures (instead of assuming them away) can make. The ERGM and Logit, while identical in their non-network covariates, report substantively different effects for these covariates, and, in the ERGM, network effects are also found to drive the formation of arms transfer edges. Latent variable network models Another way to account for network dependencies is by making use of latent variables. Models within this class assume that latent variables Z i are associated with each node i. Depending on the type of model, these latent variables can either be discrete (e.g. 
indicating group memberships for each node) or continuous, and affect the connection probability in different ways (Matias and Robin, 2014). An early (but still popular) approach in this direction is the stochastic blockmodel, which assumes that each agent possesses a latent, categorical class (or group membership). Nodes within each class are assumed to be stochastically equivalent in their connectivity behavior, meaning that the probability of two nodes to connect depends solely on their group memberships (Holland et al., 1983;De Nicola et al., 2022). This family of models is attractive due to its simplicity in detecting and describing subgroups of nodes in networks. In many applications, however, discrete groupings fail to adequately represent the observed data, as agents behave more heterogeneously. Moving from discrete to continuous latent variable network models, another prominent approach is the latent distance model. The latter postulates that agents are positioned in a latent Euclidean "social space", and that the closer they are within it, the more likely they are to form ties (Hoff et al., 2002). More precisely, the classical latent distance model specifies the probability of observing an edge between nodes i and j, conditional on Z, through where Z = (z 1 , ..., z n ) denotes the latent positions of the nodes in the d-dimensional latent space, and θ is the coefficient vector for the covariates x i j . The latent positions Z are assumed to originate independently from a spherical Gaussian distribution, i.e. Z ∼ N d (0, τ 2 I I I d )), where I I I d indicates a d-dimensional identity matrix. Latent distance models are particularly attractive for social networks in which triadic closure plays a major role, and where nodes with similar characteristics tend to form connections with each other (i.e. homophilic networks, see Rivera et al., 2010). It is also possible to add nodal random effects to the model, to control for agent-specific heterogeneity in the propensity to form edges (Krivitsky et al., 2009). The model then becomes where a = (a 1 , ..., a n ) and b = (b 1 , ..., b n ) are node-specific sender and receiver effects that account for the individual agents' propensity to form ties, with a ∼ N n (0, τ 2 a I n ) and b ∼ N n (0, τ 2 b I n ). Despite its advantages and its fairly simple interpretation, a Euclidean latent space is unable to effectively approximate the behavior of networks where nodes that are similar in terms of connectivity behavior are not necessarily more likely to form ties (Hoff, 2008), such as, e.g., many networks of amorous relationships (Ghani et al., 1997;Bearman et al., 2004). More generally, the latent distance model tends to perform poorly for networks in which stochastic equivalence does not imply homophily and triadic closure, i.e., when nodes which behave similarly in terms of connectivity patterns towards the rest of the network do not necessarily have a higher probability of being connected among themselves. This is often the case in economics, where real-world networks can exhibit varying degrees and combinations of stochastic equivalence, triadic closure and homophily. Moreover, it is often a priori unclear which of these mechanisms are at play in a given observed network. In this context, agent-specific multiplicative random effects instead of the additive latent positions allow for simultaneously representing all these patterns (Hoff, 2005). 
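For reference, the latent distance model just described and its extension with additive sender and receiver effects, i.e. the specifications we take (7) and (8) to denote, are commonly written (following Hoff et al., 2002, and Krivitsky et al., 2009) as

```latex
\log \frac{P(y_{ij}=1 \mid Z, x_{ij})}{P(y_{ij}=0 \mid Z, x_{ij})}
  = \theta^{\top} x_{ij} - \lVert z_i - z_j \rVert ,
\qquad
\log \frac{P(y_{ij}=1 \mid Z, a, b, x_{ij})}{P(y_{ij}=0 \mid Z, a, b, x_{ij})}
  = \theta^{\top} x_{ij} - \lVert z_i - z_j \rVert + a_i + b_j .
```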
Further developments of this innovation have led to the modern specification of the Additive and Multiplicative Effects network model (AME, Hoff, 2011), which, from a matrix representation perspective, generalizes both the stochastic blockmodel and the latent distance model (Hoff, 2021). AME: Motivation and framework The AME approach can be motivated by considering that network data often exhibit first-, second-, and third-order dependencies. First-order effects capture agent-specific heterogeneity in sending (or receiving) ties within a network. For example, in the case of companies and legal disputes, first-order effects can be viewed as the propensity of each firm to initiate (or be hit by) legal disputes. Second-order effects, i.e., reciprocity, describe the statistical dependency of the directed relationship between two agents in the network. In the previous example, this effect can be described as the correlation between (a) company i initiating a legal dispute against company j and (b) j doing the same towards i. Of course, second-order effects can only occur in directed networks. Third-order effects are described as the dependency within triads, defined as the connections between three agents, and relate to the triangular statistics previously illustrated in Figure 1. How likely is it that "a friend of a friend is also my friend"? Or, returning to the previous example: given that i has legal disputes with j and k, how likely are disputes to occur between j and k? The AME network model is designed to simultaneously capture these three orders of dependencies. More specifically, it extends the classical (generalized) linear modeling framework by incorporating extra terms into the systematic component to account for them. In the case of binary network data, we can make use of the Probit AME model. As is well known, the classical Probit regression model can be motivated through a latent variable representation in which y i j is the binary indicator that some latent normal random variable, say L i j ∼ N (θ x i j , σ 2 ), is greater than zero (Albert and Chib, 1993). But an ordinary Probit regression model assumes that L i j , and thus the binary indicators (edges) y i j , are independent, which is generally inappropriate for network data. In contrast, the AME Probit model specifies the probability of a tie y i j from agent i to agent j, conditional on a set of latent variables W , as where Φ is the cumulative distribution function of the standard normal distribution, θ x i j accommodates the inclusion of dyadic, sender, and receiver covariates, and e i j can be viewed as a structured residual, containing the latent terms in W to account for the network dependencies described above. In the directed case, e i j is composed as In this context, a i and b j are zero-mean additive effects for sender i and receiver j accounting for first-order dependencies, jointly specified as The parameters σ a and σ b measure the variance of the additive sender and receiver effects, respectively, while σ ab relates to the covariance between sender and receiver effects for the same node. Going back to (10), ε i j is a zero-mean residual term which accounts for second order dependencies, i.e. reciprocity. More specifically, it holds that where σ 2 denotes the error variance and ρ determines the correlation between ε i j and ε ji , thus quantifying the tendency towards reciprocity. 
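For reference, the binary AME specification described in (9)-(12) can be written compactly, following Hoff (2021), as

```latex
P(y_{ij} = 1 \mid \theta, x_{ij}, W) = \Phi\left( \theta^{\top} x_{ij} + e_{ij} \right),
\qquad
e_{ij} = a_i + b_j + u_i^{\top} v_j + \varepsilon_{ij},
```

with

```latex
(a_i, b_i)^{\top} \sim N_2\!\left(0, \Sigma_{ab}\right),
\quad
\Sigma_{ab} = \begin{pmatrix} \sigma_a^2 & \sigma_{ab} \\ \sigma_{ab} & \sigma_b^2 \end{pmatrix},
\qquad
(\varepsilon_{ij}, \varepsilon_{ji})^{\top} \sim N_2\!\left(0, \; \sigma^2 \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}\right),
```

where Σ_ab collects the variances σ_a², σ_b² and the covariance σ_ab of the additive effects.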
Finally, u_i and v_j in (10) are d-dimensional multiplicative sender and receiver effect vectors that account for third-order dependencies, and for which (u_1, v_1), ..., (u_n, v_n) ∼ N_2d(0, Σ_3) holds. As noted above, AME is able to represent a wide variety of network structures, generalizing several other latent variable model classes. This generality comes at the price of a high level of complexity for the estimated latent structure. This can make the model class a sub-optimal choice if one wants to interpret the latent structure with respect to, e.g., clustering. On the other hand, its flexibility makes it an ideal fit when the underlying network dependencies are unknown, and the researchers' interest mainly lies in evaluating and interpreting the effect of dyadic and nodal covariates on tie formation while controlling for network effects. This strength has led to AME being used for several applications of this type (Koster, 2018; Minhas et al., 2019, 2022; Dorff et al., 2020). We next showcase the AME framework by applying it to the world foreign exchange activity network as of 1900, originally introduced and studied by Flandreau and Jobst (2005, 2009). This application highlights how using AME instead of classical regression can allow us to reconsider existing, influential answers to relevant questions via replication.

Application to the global foreign exchange activity network In 1900, every financial center featured a foreign exchange market where bankers bought and sold foreign currency against the domestic one. Foreign exchange market activity was monitored in local bulletins, which allowed Flandreau and Jobst (2005) to collect a global dataset with all currencies used in the world at that time. In the resulting network structure, laid out in Figure 3, countries are nodes, and a (directed) edge from country i to country j occurs if the currency of country j was actively traded in at least one financial center within country i. From the graph representation, laid out using a variant of the Yifan Hu force-directed graph drawing algorithm (Hu, 2005), we observe that the most actively traded currencies at the time belonged to large European economies, such as Great Britain, France and Germany. To determine the drivers of currency adoption, Flandreau and Jobst (2009) model this network as a function of several covariates by employing ordinary binary regression. As we show, it is possible to use AME to pursue the same goal while taking network dependencies into account. We specify the AME model as in (9), using directed edges y_ij as response variable. The nodal covariates we use, sourced from and described in detail in the replication materials of Flandreau and Jobst (2009), are (log) per-capita GDP, democracy index score, coverage of foreign currencies traded in the country, and an indicator of whether the country's currency was on the gold standard. We also include, as dyadic covariates, the distance between two countries as well as their total trade volume. As specified in (10), the structured residual term e_ij comprises additive effects a_i and b_j for each node, which capture country-specific propensities to send and receive ties, respectively. Multiplicative effects u_i and v_j are included to account for third-order dependencies. We here set the dimensionality of the multiplicative effects to two, which we assume to be sufficient given the relatively small size of the network. To estimate the AME model, we make use of the R package amen (Hoff, 2015).
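A sketch of the corresponding call to amen is shown below; the argument names follow the package's CRAN documentation (check your installed version), while the data objects and MCMC settings are illustrative assumptions rather than the exact replication code.

```r
# Hypothetical sketch of the AME probit fit for the foreign exchange network.
# Y:  n x n binary adjacency matrix (currency of j actively traded in i)
# Xn: n x p matrix of nodal covariates (gdp, democracy, coverage, gold standard)
# Xd: n x n x q array of dyadic covariates (distance, trade volume)
library(amen)

fit_ame <- ame(Y,
               Xdyad = Xd, Xrow = Xn, Xcol = Xn,
               R = 2,          # two-dimensional multiplicative effects
               model = "bin",  # probit specification for binary ties
               nscan = 10000, burn = 1000, odens = 25,
               plot = FALSE, print = FALSE)

summary(fit_ame)
```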
As the likelihood involves intractable integrals arising from the combination of the transformation and dependencies induced by the the model, closed form solutions are not available. The package thus uses reasonably standard Gibbs sampling algorithms to provide Bayesian inference on the model parameters. More details on the estimation routine can be found in Hoff (2021). The results of the analysis, as well as, for comparison's sake, a Probit regression including the same covariates but ignoring network dependencies, are displayed in Table 2. Note that the classical Probit regression model can be seen as a special case of AME Probit in which both additive and multiplicative node-specific effects are omitted. Additional model diagnostics and goodness of fit measures, together with the estimated variance and covariance parameters, are provided in the Supplementary Material. The estimated coefficients (for both models) can be interpreted as in standard Probit regression: For the nodal covariate per-capita GDP, for example, a unit increase in the log-per-capita GDP for country i corresponds to a decrease of 0.453 in the linear predictor, therefore negatively influencing the expected probability of the country to send a tie. The same unit increase in the log-per-capita-gdp for country i corresponds to an increase of 0.426 in the linear predictor, and has therefore a positive impact on the expected probability of that country to receive a tie. In the case of a dyadic covariate, such as distance, a unit increase in distance between two countries leads to a decrease of 1.019 in the linear predictor, resulting in a decrease in the expected probability of the two countries to form a tie in either direction. Overall, we find that the principal drivers of the formation of a tie between i and j are the magnitude of the foreign exchange coverage of the two countries involved, the distance between them, and their reciprocal trade volume. These results correspond to the thesis of Kindleberger (1967) and to Flandreau and Jobst (2009), who suggest that the most important determinants of international adoption for a currency are size and convenience of use. At the same time, we note that, as for the ERGM in the arms trade example, the results of the Probit and AME model differ in several regards. In particular, several effects are statistically significant in the Probit but not significant in the AME model. Indeed, unacknowledged network dependence can cause downward bias in the estimation of standard errors, leading to spurious associations (Lee and Ogburn, 2021). This finding once again highlights how accounting for network dependencies can make a difference when it comes to the substantive results. As a final note, we add that in this case we went with AME over ERGM as our interest lies in answering the research questions addressed by Flandreau andJobst (2005, 2009), that is assessing the effect of the exogenous covariates in Table 2 on tie formation. AME allows us to do that without specifying the configuration of the endogenous network mechanisms at play, which are instead accounted for through the imposed latent structure. If, on the other hand, the researcher expects some specific network effects to play a role, and wishes to test for their presence and measure their influence on network formation, the ERGM may be a better tool. 
The latter model class can, for example, directly answer questions such as "Does the fact that both countries A and B trade the currency of country C influence the probability of A and B to be connected? And if so, to what extent?". AME, on the other hand, is limited to accounting for those effects via the latent variables, without explicitly identifying them, to provide unbiased inference for the covariate effects. The choice between the two model classes is thus a matter of what assumptions can be made about the network and where the researcher's interest lies. Conclusion Complex dependencies are ubiquitous in the economic sciences (Chiarella et al., 2005;Flaschel et al., 2008), and many economic interactions can be naturally perceived as networks. This area of research has thus received considerable interest in recent years. However, this attention has not yet been accompanied by a corresponding general take-up of empirical research methods tailored towards networks. Instead, researchers either develop their own estimators to reproduce the features of their theoretical network models, or use standard regression methods that assume conditional independence of the edges in the network. Against this background, this paper seeks to provide a hands-on introduction to two statistical models which account for network dependencies, namely the Exponential Random Graph Model (ERGM) and the Additive and Multiplicative Effects network model (AME). These two classes serve different purposes: While the ERGM is most appropriate when explicitly interested in testing the effects of endogenous network structure, the AME model allows one to control for network dependencies while substantively focusing on estimating the effects of exogenous covariates of interest. We present the statistical foundations of both models, and demonstrate their applicability to economic networks through examples in the international arms trade and foreign currency exchange, showing that modeling network dependencies can alter the substantive results of the analysis. We, moreover, provide the full data and code necessary to replicate these exemplary applications. We explicitly encourage readers to use these replication materials to get started with analyzing economic networks via ERGM and AME, beginning with the examples covered here to then transfer the code and methods to their own research. We especially want to encourage the use of such methods as not accounting for interdependence between observations when it exists can lead to biased estimates and spurious findings. Our two applications demonstrate that this bias can result in very different em-pirical results, and thus affect substantive conclusions. It is therefore vital to account for network structure when studying interactions between economic agents such as individuals, firms, or countries, regardless of whether one is substantively interested in this structure. As shown by Lee and Ogburn (2021), our applications are just two examples of how unaccounted dependence in the observed data may lead to spurious findings. At the same time, this paper can only serve as an introduction to statistical network data analysis in economics. We covered two general frameworks in this realm, but, in the interest of brevity, focused only on their simplest versions that apply to networks observed at only one time point and with binary edges. However, both frameworks have been extended to cover more general settings. 
For the ERGM, there are extensions for longitudinal data (Hanneke et al., 2010), distinguishing between edge formation and continuation (Krivitsky and Handcock, 2014), as well as to settings where edges are not binary but instead count-valued or signed (Krivitsky, 2012;Fritz et al., 2022). As for AME, approaches for longitudinal networks are described by Minhas et al. (2016), while versions for undirected networks as well as for non-binary network data are presented by Hoff (2021). Both the ERGM and the AME frameworks are thus flexible enough to cover a wide array of potential economic interactions. We believe that increasingly adopting these methods will, in turn, aid our understanding of these interactions. The following contains technical details and supplementary information to the manuscript. The full data and code to reproduce the analysis and to aid the reader in applying the presented models to their own data can be found in our Github repository, available at https://github.com/gdenicola/statistical-network-analysis-in-economics. S.1 Details on the p 1 model To represent reciprocity, the p 1 model assumes dyads, defined by (Y i j , Y ji ), to be independent of one another. The resulting bivariate distribution of each dyad (Y i j , Y ji ) comprises three parameters and a constraint: The joint distribution of the network is then given by: where ρ i j = log m i j e i j /a 2 i j for i < j, θ i j = log (a i j /n i j ) for i = j, and κ(θ ) is the normalizing constant. To estimate the parameters in (1) we need to introduce some homogeneity assumption to avoid overparametrization. Following Holland and Leinhardt (1981), we assume where θ Rep is the global effect of reciprocity, θ Edges quantifies the general sparsity in the network, and θ Out,i and θ In,i indicate the general tendency for all agents i ∈ {1, ..., n} to form out-or in-going ties. Combining these homogeneity assumptions with (1) yields P θ (Y Y Y = y y y) ∝ exp θ Rep s Rep (y y y) + θ Edges s Edges (y y y)+ n i=1 θ Out,i s Out,i (y y y) + n i=1 θ In,i s In,i (y y y) , where s Rep (y y y) = i< j y i j y ji is the number of reciprocal ties, s Edges (y y y) = n i=1 j =i y i j the number of edges, and s Out,i (y y y) and s In,i (y y y) the number of out-and in-going ties of agent i. Note that all these statistics are functions of the observed network and are the sufficient statistics, i.e. they contain all necessary information for determining all coefficients in (2). This sufficiency principle translates to s Out,i (y y y) = s Out, j (y y y) ⇒ θ Out,i = θ Out, j . This, in turn, allows us to write all agent-specific terms in (2) in terms of degree statistics: where s Outdeg,i (y y y) and s Indeg,i (y y y) are statistics counting the actors with out/in-degree i in y y y. S.2.1 Model diagnostics and goodness of fit To evaluate the goodness of fit of of an estimated ERGM, the standard approach is to compare the statistics observed in the real world network with the distribution of the same statistics calculated on networks simulated from the model . The results of this comparison are depicted in Figure S.1, where we investigate the goodness of fit of the ERGM estimated on the 2018 international arms trade network. The black line in each subfigure represents the distribution of the respective network statistic observed in the real network. For instance, the top right figure indicates that approximately 10% of all nodes (countries) in the network had an in-degree of 2. 
The boxplots then show the distribution of each value of a given statistic over the simulated networks. A good model will thus generally result in boxplots that include the observed values of the network statistics under consideration. In Figure S.1, this is almost always the case, though it is also visible that the real network included less countries with an in-degree of 0 but more with an in-degree of 1 than the large majority of networks simulated from the ERGM. In Figure S.2, we further compare the performance of the fully specified ERGM against that of the logistic regression model including the same exogenous covariates as the ERGM, but of course omitting all endogenous network statistics. We already noted that the ERGM appears to do a better job at in-sample prediction than the Logit model given its lower AIC and BIC values. Figure S.2 documents both models' respective areas under the receiver-operator (ROC) and precision-recall curves (PR). Here, a higher value of each curve indicates better predictive performance, and again, the ERGM appears to outperform the logistic regression model for both metrics. Figure S.2 thus offers further evidence that in the case of the international arms trade, model performance is improved by accounting for endogenous network effects. S.2.2 Robustness checks In the analysis presented in Section 3.3, we lag all covariate information by three years. Since this choice is based on the heuristic that there are considerable delays between the order and delivery date of arms, we next show the results of the ERGM with different time lags, namely four and five years, in S.3 Further results of the application to the global foreign exchange activity network S.3.1 Variance and covariance parameters In Section 4.3 of the paper, we showcased the AME model by fitting it to the historical network of global foreign exchange activity in 1900. In illustrating the results we, for brevity, focused on the main effects of the covariates included, reported in Table 2. But the AME model also estimates several variance and covariance parameters, which also have a meaningful interpretation. The estimates for those parameters are reported in the additive sender and receiver effects, respectively. We can see that receiver effects are much more variable then sender effects. This makes intuitive sense given that there are a few currencies which are traded by a large number of countries, while many currencies aren't traded at all outside of their origin countries. The skewness in the distribution of incoming ties thus induces a relatively large variance in the receiver effects. The coefficient in the third row, σ ab , measures the (global) correlation between sender and receiver effects of the same node. In this case, we can see that there is a slight negative correlation between the two, meaning that countries that trade many foreign currencies within their financial hub do not necessarily tend to have their home currency traded in many countries. Finally, the coefficient ρ indicates the covariance between the residuals on the same node-pair, ε i j and ε ji . This parameter quantifies the tendency towards reciprocity in the network. In this case, we can see that there is a slight positive tendency for ties to be reciprocated. 
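The corresponding posterior summaries can be obtained from the fitted amen object along the following lines; the component name follows the package documentation, but this is an illustrative sketch rather than the replication code.

```r
# Posterior draws of the variance/covariance parameters are stored in fit_ame$VC;
# inspect str(fit_ame) to confirm the exact column layout in your amen version.
round(apply(fit_ame$VC, 2, mean), 3)  # posterior means
round(apply(fit_ame$VC, 2, sd), 3)    # posterior standard deviations
```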
S.3.2 Model diagnostics and goodness of fit Similarly as for ERGM, the goodness of fit for AME models is evaluated by comparing the statistics observed in the real world network with the distribution of the same statistics calculated for networks simulated from the model. Figure S.4 depicts this comparison for first order effects (top panel) and for second-and third-order dependencies (bottom panel), as done by default in the R package amen (see Hoff, 2015 for details). All in all, we can see that the model does a reasonably good job in preserving the network statistics in question. We can also compare the goodness of fit of the AME model with that of the Probit model including the same exogenous covariates, but of course omitting the latent variables. Figure S.5 depicts the same comparison of observed with simulated statistics just described, but for Probit instead of AME. From the plots therein we can see how the Probit model does a markedly worse job than the AME in reproducing first and third order dependencies, thus demonstrating an overall worse performance in capturing the mechanisms at play in the network. In Figure S.6, also produced by default by the amen package, we can further check how the coefficients and their variance vary across the MCMC iterations. In general, the fit is considered to be acceptable if no visible trends emerge in the chains. If, to the contrary, trends in the estimates are visible, the researcher must consider running the chain for more iterations and/or using alternative model formulations. In the case of Figure S.6, visual inspection gives us confidence that the AME model has reasonably converged.
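For completeness, a minimal sketch of how such goodness-of-fit and convergence checks can be produced for both model classes is given below; 'fit_ergm' and 'fit_ame' are assumed to be the fitted ergm and ame objects from the two applications.

```r
# ERGM: compare observed network statistics with those of simulated networks.
library(ergm)
gof_ergm <- gof(fit_ergm)
plot(gof_ergm)

# AME: amen computes goodness-of-fit statistics during estimation (gof = TRUE);
# plotting the fitted object shows MCMC traces together with the GOF panels.
plot(fit_ame)
```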
A novel intervention for wound bed preparation in severe extremity trauma: Highly concentrated carbon dioxide bathing Introduction In severe extremity trauma involving large tissue defects, early closure (e.g., free-flap surgery) of the defects is an essential step for good functional reconstruction; however, in some cases, early closure may be difficult. Highly concentrated carbon dioxide bathing, used to improve blood flow in ischemic limbs and skin ulcers, can also be applied in wound bed preparation for severe limb trauma. Patients and Methods The three cases in this study required an average of 13 weeks of highly concentrated carbonated bathing, which led to significantly better wound bed preparation, even in the exposed bone and tendon regions. Results We successfully achieved good functional limb reconstruction in patients with deep burns and severe open fractures by reducing wound infection and facilitating good wound bed preparation. Conclusions Highly concentrated carbon dioxide bathing was sufficient to prevent frequent wound infections, even in severe extremity trauma involving large soft-tissue defects such as deep crush burns and Gustilo Anderson classification ≥3b open fractures of the extremities. To our knowledge, such interventions have not been reported in the past and are valuable as new procedures for wound bed preparation in severe extremity trauma from both cost and wound infection control perspectives. Introduction Highly concentrated carbon dioxide bathing has been clinically applied to ischemic limbs and skin ulcers 1,2 ; however, there have been no reports on using such treatments for severe limb trauma involving large tissue defects. We here report the use of highly concentrated carbon dioxide bathing for severe limb trauma to verify the effectiveness of wound bed preparation.
Case 1 3 A man in his 40s sustained a heat press injury when his right hand was caught in a roller heated to 180 °C for crimping fabric products for approximately 30 s.The palmar skin was severely indurated owing to deep burn damage, and the active motion of the fingers was severely limited ( Figure 1 ).From the day after the injury, a 15-min hand bath in highly concentrated carbon dioxide bathing at 37 °C (AS Care®; Asahi Kasei Medical Co., Ltd., Tokyo, Japan) was performed daily to gradually eliminate necrotic tissue and continue finger rehabilitation ( Figure 2 ).Four weeks after the injury, palmar necrotic tissue was completely eliminated, and sufficient granulation tissue had grown; therefore, the patient underwent skin grafting using plantar glabrous skin grafts ( Figure 3 ).A year after the injury, the color, texture, and skin extensibility of the grafted skin were acceptable, and the intrinsic muscles and mechanisms of the fingers functioned normally without any residual damage.The treatment was completed with full restoration of hand function ( Figure 4 ). 3 Case 2 A man in his 20s sustained an open fracture of the left calcaneal region with degloving skin while riding a motorcycle in a traffic accident ( Figure 5 ).After open reduction and internal fixation, necrosis of the calcaneal skin progressed, exposing most of the calcaneus and the fixation pin.Four weeks after injury, a 15-min foot bath in highly concentrated carbon dioxide bathing (AS Care®) at 37 °C was performed daily to gradually eliminate necrotic tissue ( Figure 6 ).Twenty-two weeks after the injury, necrotic tissue removal was completed, and sufficient granulation tissue had formed around the calcaneal base; therefore, a distally based sural flap was used to reconstruct the heel area ( Figure 7 ).Ten months after the injury, the patient walked with normal shoes, although sensory disturbance in the sole of the foot persisted ( Figure 8 ). Case 3 A man in his 30s sustained an upper extremity open fracture while riding a motorcycle sandwiched between a truck and a roadside wall.The patient had bone defects of the lateral epiphysis of the humerus and neck of the radius and an extensive soft-tissue defect on the outer side of the elbow, approximately 40 cm in diameter ( Figure 9 ).First, large tissue defects were packed using a right latissimus dorsi muscle flap and right pectoral skin flap, and external fixation of the right upper extremity was performed.Two weeks after injury, wound irrigation was continued with a highly concentrated carbonated spring bathing (AS Care®) at 37 °C for 15 min once daily ( Figure 10 ).Twelve weeks after the injury, sufficient granulation tissue developed around the elbow; therefore, full-thickness skin grafts were performed on the skin defect at the elbow ( Figure 11 ).Six months after the injury, the patient still had a limited range of motion of the right elbow joint; however, hand function was fully preserved ( Figure 12 ). Results Highly concentrated carbon dioxide bathing was sufficient to prevent frequent wound infections, even in severe extremity trauma involving large soft-tissue defects such as deep crush burns and Discussion In severe extremity trauma involving large tissue loss, early wound closure of the tissue defect is essential for good functional reconstruction. 
4 , 5However, early and reliable debridement is often difficult because of severe wound contamination and insufficient blood flow to the wound.There are also cases in which various patient-oriented factors (e.g., old age, multiple traumas, peripheral vascular disease, diabetes mellitus, etc.) make it difficult to perform early skin flap surgery.There is always a risk of prolonged infection due to inadequate debridement and skin flap necrosis due to the use of damaged recipient vessels in haste to perform immediate surgery.It is also widely known that extremity reconstruction, especially after trauma, has a high-complication rate, including a high flap necrosis rate.Chronic osteomyelitis and cutaneous fistulas that complicate severe extremity trauma treatment are often caused by inadequate debridement, and the frequency of such complications is approximately 30%. 6 , 7Therefore, facilitated wound granulation therapy using various techniques (e.g., acellular dermal matrices or negative-pressure wound therapy [NPWT]) is commonly applied in combination with staged debridement for wound bed preparation. There have been a few reports on the efficacy of applying an artificial dermis to improve wound granulation 8 ; however, there have been more negative reviews recently, especially in cases with large tissue defects, because of the potential to cause wound infection. 9 , 10Some randomized controlled trials using NPWT have reported that the risk of wound infection was reduced by one-fifth and that wounds could be closed within an average of 3.7 days, 11 whereas others have reported a decreased wound area and reduced positive local infection rates. 12Conversely, there have been reports of an increase in deep infection when NPWT is used for more than 7 days, 13 which is insufficient evidence to support the efficacy of NPWT for contaminated wounds, 14 and even statements against its use in patients with severe open fractures of the lower extremity, with no significant difference in surgical site infection. 15Various expensive skin substitutes and wound dressings have also been developed; however, their effectiveness is currently limited. 16NPWT with instillation and dwelling, which has the additional function of continuous wound cleansing, is gradually becoming more available. 17This also has significant healthcare economic disadvantages.The reality is that treatments that do not consider costs and benefits are widespread, including continuing NPWT with inadequate debridement and using expensive extracellular matrix (ECM) products. 18Therefore, researchers should return to the principles of wound care; in the case of contaminated and crushed wounds involving insufficient blood flow, daily "diligent wound cleansing" is probably still the most important task. The fact that the procedure can be performed at home means that it does not require the intervention of a medical professional and significantly reduces the medical financial burden.The three patients required an average of 13 weeks of bathing, but the total cost averaged $91 ( = 13 × 7 × 1) per case, leading to inexpensive and extremely good wound bed preparation, even in sites with exposed bone and tendon.NPWT in Japan can cost approximately $20 0 0 per month in medical procedure costs alone, even for wounds < 100 cm 2 in size.Wound dressings and ECM products have also skyrocketed in price in recent years, with some materials costing as much as $300 per cm 2 . 
19 , 20he main effects of highly concentrated carbonated bathing have been reported previously, including improved skin and muscle blood flow, decreased blood pressure, and amelioration of bradycar-dia. 21The clinical applications of highly concentrated carbon dioxide bathing for ischemic extremities, skin ulcers, and osteomyelitis have already been reported. 2 , 22 , 23The biochemical mechanism involves the conversion of transdermally absorbed carbon dioxide to bicarbonate ions, which act directly on endothelial cells to increase nitric oxide (NO) production through endothelial NO synthase (eNOS) phosphorylation, a process considered to improve blood flow. 24The optimal conditions for improving skin blood flow were as follows: carbon dioxide gas concentration, 10 0 0-130 0 ppm; water temperature, 37 °C; bathing time, 15 min; and application interval, once daily. 1 This regimen is also acceptable for facilitating wound granulation during severe extremity trauma treatments.The disadvantage of this method is the prolonged treatment period; however, it has the great advantage of avoiding the inevitable risk of free-flap surgery, which is required even in the infectious stage, and downgrading the reconstructive ladder seems possible. 25 , 26ighly concentrated carbonated bathing tablets are commercially available in Japan at a low cost of approximately US $1 per day.From a medical economic point of view, this method should be added to the list of wound-healing procedures in the future. 2However, the authors do not agree with blindly applying this method to patients with severe traumatic injuries for a long period, as it is essential for trauma specialists to set treatment goals according to the type of injury.Early and definitive debridement and wound closure with a free skin flap were performed in patients with potentially functionally detrimental scarring.Although careful attention must be paid when delivering prolonged treatment to specific joints to ensure safe and reliable reconstruction, this method can be advantageous for surgical downgrading and securing wound closure in select cases. Conclusion Highly concentrated carbon dioxide bathing is useful not only for treating ischemic limbs and skin ulcers but also as a novel method of wound bed preparation in severe extremity trauma from both cost and wound infection control perspectives. Declaration of Competing Interests The authors declare no conflicts of interest in association with the present study. Limitations The study was limited by its small sample size; all results were obtained from a single plastic surgeon.More cases and further studies are needed to confirm the statistical significance. Figure 1 . Figure 1.The palmar skin was severely indurated due to deep burn damage, and the active motion of the fingers was severely limited. Figure 2 . Figure 2. A 15-min hand bath in highly concentrated carbon dioxide bathing at 37 °C (AS Care®; Asahi Kasei Medical Co., Ltd., Tokyo, Japan) was performed daily to gradually eliminate necrotic tissue and continue finger rehabilitation Figure 3 . Figure 3. Four weeks after the injury, the patient underwent skin grafting using plantar glabrous skin grafts. Figure 4 . Figure 4.One year after the injury, the intrinsic muscles and mechanisms of the fingers functioned normally without any residual damage. Figure 5 . Figure 5.The patient sustained an open fracture of the left calcaneal region with degloving skin while riding a motorcycle in a traffic accident. Figure 6 . 
Figure 6.Four weeks after injury, a 15-min foot bath in highly concentrated carbon dioxide bathing (AS Care®) at 37 °C was performed daily to gradually eliminate necrotic tissue. Figure 7 . Figure 7. Twenty-two weeks after injury, the removal of necrotic tissue was completed, and sufficient granulation tissue had formed around the calcaneal base; therefore, a distally based sural flap was used to reconstruct the heel area. Figure 8 . Figure 8.Ten months after the injury, the patient walked with normal shoes, although sensory disturbance in the sole of the foot remains. Figure 9 . Figure 9.The patient had bone defects of the lateral epiphysis of the humerus and neck of the radius and an extensive softtissue defect on the outer side of the elbow, approximately 40 cm in diameter. Figure 10 . Figure 10.Large tissue defects were packed using a right latissimus dorsi muscle flap and a right pectoral skin flap, and external fixation of the right upper extremity was performed.Two weeks after injury, wound irrigation was continued with a highly concentrated carbonated spring bathing (AS Care®) at 37 °C for 15 min once daily. Figure 11 . Figure 11.Twelve weeks after the injury, sufficient granulation tissue had developed around the elbow, and full-thickness skin grafts were performed on the skin defect at the elbow. Figure 12 . Figure 12.Six months after the injury, the patient still had a limited range of motion of the right elbow joint, but the hand function had been fully preserved.
Unsupervised layer-wise feature extraction algorithm for surface electromyography based on information theory Feature extraction is a key task in the processing of surface electromyography (SEMG) signals. Currently, most of the approaches tend to extract features with deep learning methods, and show great performance. And with the development of deep learning, in which supervised learning is limited by the excessive expense incurred due to the reliance on labels. Therefore, unsupervised methods are gaining more and more attention. In this study, to better understand the different attribute information in the signal data, we propose an information-based method to learn disentangled feature representation of SEMG signals in an unsupervised manner, named Layer-wise Feature Extraction Algorithm (LFEA). Furthermore, due to the difference in the level of attribute abstraction, we specifically designed the layer-wise network structure. In TC score and MIG metric, our method shows the best performance in disentanglement, which is 6.2 lower and 0.11 higher than the second place, respectively. And LFEA also get at least 5.8% accuracy lead than other models in classifying motions. All experiments demonstrate the effectiveness of LEFA. Introduction Feature engineering is an important component of pattern recognition and signal processing. Learning good representations from observed data can help reveal the underlying structures. In recent decades, feature extraction methods (He et al., 2016;Howard et al., 2017;Hassani and Khasahmadi, 2020;Zbontar et al., 2021) have drawn considerable attention. Due to the high cost of obtaining labels, supervised learning methods suffer from data volume limitations. Unsupervised learning methods therefore becomes critical for feature extraction. Most of these are based on probabilistic models, such as maximum likelihood estimation (Myung, 2003), maximum a posteriori probability estimation (Richard and Lippmann, 1991), and mutual information (MI) (Thomas and Joy, 2006). Methods such as principal component analysis (PCA) (Abdi and Williams, 2010), linear discriminant analysis (Izenman, 2013), isometric feature mapping (Tenenbaum et al., 2000), and Laplacian eigenmaps (Belkin and Niyogi, 2003) are widely used owing to their good performance, high efficiency, flexibility, and simplicity. Other algorithms are based on reconstruction errors or generative criteria, such as autoencoders (Bengio et al., 2013) and generative adversarial networks (GANs) (Goodfellow et al., 2014). Occasionally, the reconstruction error criterion also has a probabilistic interpretation. In recent years, deep learning has become a dominant method of representation learning, particularly in the supervised case. A neural network simulates the mechanism of hierarchical information processing in the brain and is optimized using the back propagation (BP) algorithm (LeCun et al., 1988). Because several feature engineering tasks are unsupervised, that is, no label information is available in the real situation and collecting considerable labeled data is expensive, methods to discover the feature representation in an unsupervised case have been significantly developed in recent years. MI maximization (Bell and Sejnowski, 1995) and minimization criteria (Matsuda and Yamaguchi, 2003) are powerful tools for capturing salient features of data and disentangling these features. 
In particular, variational autoencoder (VAE) (Kingma and Welling, 2013) based models and GAN have exhibited effective applications in disentangled representations. There are two benefits of learning disentangled representations. First, models with disentangled representations are more explainable (Bengio et al., 2013;Liu et al., 2021). Second, disentangled representations make it easier and more efficient to manipulate training-data synthesis. However, the backpropagation algorithm still requires a high amount of computation and data. To extract features information in SEMG signal data, we propose a Layer-wise Feature Extraction Algorithm (LFEA) based on information theory in the unsupervised case, which includes a hierarchical structure to capture disentangled features. In each layer, we split the feature into two independent blocks, and ensure the information separation between the blocks via information constraint, which we called Information Separation Module (ISM). Moreover, to ensure the expressiveness of the representation without losing crucial information, we propose the Information Representation Module (IRM) to enable the learned representation to reconstruct the original signal data. Meanwhile, redundant information would affect the quality of the representation and thus degrade the effectiveness of downstream tasks, for which Information Compression Module (ICM) is proposed to reduce the redundant and noisy information. In terms of the optimization algorithm, our back-propagation process is only performed in a single layer and not back propagated throughout the network, which can greatly reduce the amount of computation while having no effect on the effectiveness of our method. Regarding the experiments, we have made improvement and strengths in terms of motion classification and representation disentanglement over the traditional methods of surface electromyography (SEMG). Especially, on NinaPro database 2 (DB2) dataset, our approach gets a significant 4% improvement in the motion classification, and better model stability. This manuscript is organized as follows. In Section 2, we introduce the related work. The proposed method LFEA is described in Section 3. We present the numerical results in Section 4. Section 5 gives the conclusion of this manuscript. Disentangled representation The disentanglement problem has played a significant role, particularly because of its better interpretability and controllability. The VAE variants construct representations in which each dimension is independent and corresponds to a dedicated attribute. β-VAE (Higgins et al., 2016) adds a hyperparameter to control the trade-off between compression and expression. An analysis of β-VAE by Burgess et al. (2018) is provided, and the capacity term is proposed to obtain a better balance of the reconstruction error. Penalizing the total correlation term to reinforce the independence among representation dimensions was proposed in Factor VAE (Kim and Mnih, 2018) and β-TCVAE (Chen et al., 2018). FHVAE (Hsu et al., 2017) and DSVAE (Yingzhen and Mandt, 2018) constructed a new model architecture and factorized the latent variables into static and dynamic parts. Cheng et al. (2020b) described a GAN model using MI. Similar to our study, Gonzalez-Garcia et al. (2018) proposed a model to disentangle the attributes of paired data into shared and exclusive representations. Information theory Shannon's MI theory (Shannon, 2001) is a powerful tool for characterizing good representation. 
However, one major problem encountered in the practical application of information theory is computational difficulties in high-dimensional spaces. Numerous feasible computation methods have been proposed, such as Monte Carlo sampling, population coding, and the mutual information neural estimator (Belghazi et al., 2018). In addition, the information bottleneck (IB) principle (Tishby et al., 2000;Tishby and Zaslavsky, 2015;Shwartz-Ziv and Tishby, 2017;Jeon et al., 2021) learns an informative latent representation of target attributes. A variational model to make IB computation easier was introduced in variational IB (Alemi et al., 2016). A stair disentanglement net was proposed to capture attributes in respective aligned hidden spaces and extend the IB principle to learn a compact representation. Surface electromyography signal feature extraction With the development of SEMG signal acquisition technology, the analysis and identification of SEMG signals has also drawn the attention of researchers. As machine learning has demonstrated excellent feature extraction capabilities in areas such as images and speech, it can also be a good solution for recognizing SEMG signals. The basic motivation was to construct and simulate neural networks for human brain analysis and learning. Deep neural networks can extract the features of SEMG signals while effectively avoiding the absence of valid information in the signal and improving the accuracy of recognition. Xing et al. (2018) used a parallel architecture model with five convolutional neural networks to extract and classify SEMG signals. Atzori et al. (2016) used a convolutional network to classify an average of 50 hand movements from 67 intact subjects and 11 transradial amputees, achieving a better recognition accuracy than traditional machine learning methods. Zhai et al. (2017) proposed a self-calibrating classifier. This can automatically calibrate the original classifier. The calibrated classifier also obtains a higher accuracy than the uncalibrated classifier. In addition, He et al. (2018) incorporated a long short-term memory network (Hochreiter and Schmidhuber, 1997) into a multilayer perceptron and achieved better classification of SEMG signals in the NinaPro DB1 dataset. As stated, deep learning methods can help overcome the limitations of traditional methods and lead to better performance of SEMG. Furthermore, deep-learning methods can provide an extensive choice of models to satisfy different conditional requirements. Method Preliminary Information theory is commonly used to describe stochastic systems. Among the dependency measurements, mutual information (MI) was used to measure the correlation between random variables or factors. Given two random variables X and Z, the MI is defined as follows: Regarding the data processing flow as a Markov chain X → Z → Y, the information bottleneck (IB) principle desires that the useful information in the input X can pass through the 'bottleneck' while the noise and irrelevant information are filtered out. The IB principle is expressed as follow: where, β is the tradeoff parameter between the complexity of the representation and the amount of relevant essential information. Framework The diagram of our proposed Layer-wise Feature Extraction Algorithm (LFEA) is illustrated in Figure 1. Our algorithm aims to learn a representation that satisfies three main properties: "Compression, " "Expression" and "Disentanglement." 
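The two display equations referenced just above do not appear in the extracted text; in the usual notation they correspond to the standard definition of mutual information and the standard information-bottleneck objective, which can be written as follows (the paper's exact parametrization may differ slightly):

I(X;Z) = \mathbb{E}_{p(x,z)}\!\left[\log \frac{p(x,z)}{p(x)\,p(z)}\right],
\qquad
\min_{p(z\mid x)} \; I(X;Z) - \beta\, I(Z;Y),

where β trades off the compression of X into Z against the retention of information relevant to Y along the Markov chain X → Z → Y.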
To this end, three key information process modules are introduced, including the information compression module (ICM), information expression module (IEM), and information separation module (ISM) in each layer. In the ICM, input s i−1 of layer i is compressed into h i (s 0 = X). In the IEM, z i as part of h i is constrained to represent the original input X. In the ISM section, s i and z i are irrelevant. The parameters of the ICM and IEM in layer i are denoted as φ i and θ i . The data information flow can be expressed as follows: where, s 0 = X, and q φ i and p θ i are the condition distributions with φ i and θ i for h i andX. In following sections, we describe these three modules in detail. Information compression module According to (3), h i is the hidden representation of s i−1 . To ensure information 'compression, ' the optimal representation of s i−1 should forget redundant information altogether, that is, h i represents s i−1 with the lowest bits. Formally, the objective in the i-th layer to be minimized is as follows: Due to intractability of mutual information, optimizing L ICM with gradient methods directly is not feasible. We therefore derived the upper bound of L ICM with the variational inference method and get decomposition as follows: where, p h is the prior, and L upper ICM is the upper bound of L ICM defined as follows: Information expression module With the ICM guaranteeing the information compression, LFEA also need to ensure the expressiveness of the representation to the data. We therefore propose the information expression module (IEM). To ensure sufficient information to reconstruct the original data X, we maximize the MI between and Z i in i-th layer, that is, For L IEM , we can obtain a lower bound using the variational approximation method as follows: where, p θ i (x) can be viewed as the reconstruction loss. Information separation module To achieve disentanglement of representations (Independent of each block z 1 , z 2 , . . . , z n in Z), we further introduce the information separation module (ISM) in each layer. In i-th layer, the principle of ISM is to ensure that there is no intersection information between z i and s i , that is, In practice, the products of q φ i (z i ) and q φ i (s i ) are not analytical in nature. We introduce discriminatorD(.) (see Figure 2) to distinguish samples from the joint distribution and the product of the marginal distribution, that is, ]. Discriminator D(.). To compute and optimize L ISM , we need an additional discriminator as shown in Eq. (13). Sample data image. Number of layers 4 Size of z i 5 We compare our method the classic methods including VAE, β-VAE and PCA. Our HFEA method is much better than others. The bold indicates the best results. Algorithm optimization As presented above, our model contains three modules: ICM, IEM, and ISM. However, during optimization, the back-propagation algorithm is computationally intensive and potentially problematic when training deep networks, so we propose a layer-wise training step. After training one layer of the network, we fix the parameters of the trained layers and only train the next layer in the next step. Finally, we can obtain the final model after training all the layers. Such optimization design allows for training parameters at the bottom layers without bacpropagation from the top layers, avoiding the problems that often occur with deep network optimization, like vanishing and exploding gradient. 
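A minimal sketch of the layer-wise optimization just described is given below, assuming a PyTorch implementation. Only the freezing schedule is illustrated: the loss is a placeholder standing in for the combined ICM/IEM/ISM objective, and the layer sizes are illustrative (a (200, 12) sEMG window flattened to 2400 values).

# Minimal sketch, assuming PyTorch: greedy layer-wise training in which each
# layer is optimised while all previously trained layers are frozen. The loss
# below is a stand-in for the paper's objective, not a reproduction of it.
import torch
import torch.nn as nn

layers = nn.ModuleList([nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())
                        for d_in, d_out in [(2400, 256), (256, 64), (64, 20), (20, 10)]])

x = torch.randn(32, 2400)                    # a batch of flattened sEMG windows (placeholder)
s = x
for layer in layers:
    opt = torch.optim.Adam(layer.parameters(), lr=1e-3)   # only this layer's parameters
    for _ in range(100):
        h = layer(s)
        loss = h.pow(2).mean()               # placeholder for L_ICM + L_IEM + L_ISM
        opt.zero_grad(); loss.backward(); opt.step()
    s = layer(s).detach()                    # freeze this layer; its output feeds the next one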
Dataset In our experiments, we used the NinaPro * DB2 dataset and DB5 dataset. Atzori et al. (2014), Gijsberts et al. (2014) as the benchmark to perform numerical comparisons. NinaPro is a standard dataset for the gesture recognition of sparse multichannel SEMG signals. The SEMG signals in DB2 were obtained from 40 subjects and included 49 types of hand movements (see Figure 3). Detailed attribute information of the five subjects in NinaPro DB2 is shown in Table 1. The original SEMG signal was processed through sliding windows, and the size of the sample data used in the experiment was (200,12). Figure 4 shows 20 processed data points. DB1 consists of 11 subjects and the data set of each subject contains three types of gestures, which are Exercise A, Exercise B, and Exercise C. Exercise A includes 12 basic movements of fingers (see Figure 5). Exercise B includes 17 movements. Exercise C includes 23 grasping and functional movements. We preprocessed the dataset with the digital filter to cutoff frequency and sliding window to split signal, which follows He et al. (2018). Model setting In the following experiments, we used four layers model. The loss function is as follows: Frontiers in Neuroscience 06 frontiersin.org 12 basic movements signal of fingers in Exercise A. Detail parameters are listed in Table 2. Results First, we used total correlation (TC) as the quantitative metric for the quality of the disentanglement of the representation. TC is defined as follows: TC z 1 , z 2 , z 3 , z 4 = E p(z 1 ,z 2 ,z 3 ,z 4 ) log p z 1 , z 2 , z 3 , z 4 p z 1 p z 2 p z 3 p z 4 . The TC was estimated using a three-like algorithm (Cheng et al., 2020a). A low TC score indicated that the representation had less variance. MIG metric (Chen et al., 2018) is another disentanglement metric; the higher the value, the more disentangled representation is. We compared the quality of disentanglement among PCA, β-VAE, VAE, and HFEA. Table 3 shows the comparison results on TC score and MIG metric. In TC score and MIG metric, HFEA has the best performance, which is 6.2 lower and 0.11 higher than the second place, respectively. Furthermore, in Figure 6, we visualize the distribution of z 1 , z 2 , z 3 , and z 4 , respectively in a two-dimensional space based on t-distributed stochastic neighbor embedding. We can find that the variance of representation decreases with deeper layers, which indicates that the deeper networks learn more robust representations. Classification results on NinaPro DB2 dataset is described in Table 4. Our method is based on LFEA and SVM and the feature Z used in SVM is computed by LFEA. The methods used for comparison include LSTM + CNN (He et al., 2018), k-nearest neighbor (KNN), support vector machine (SVM), random forest, and convolutional neural network (CNN) (Atzori et al., 2016). In all experiments, our method was second best in all methods and only 0.2% lower than the best. What is more, our method showed more stable results (2.3% fluctuations) than others. Discrimination results for Exercise A, Exercise B, and Exercise C in DB1 and DB2 is shown in Figures 7, 8, respectively. For each exercise, we compare feature combinations from layer 1-4. Detail feature combinations is described in Table 5. Tables 6-8 list the classification accuracy with different feature combinations for DB1, respectively. Discrimination value in Tables 6-8 measures the representation capability of feature in each layer. The higher the value, the better the feature representation ability. 
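For reference, the TC score just defined can be approximated in closed form when the joint distribution of the blocks is treated as Gaussian, since the entropies then reduce to log-determinants of covariance blocks. The sketch below is such a proxy and is not the estimator used in the paper (which follows Cheng et al., 2020a); the representation matrix Z is a random placeholder.

# Minimal sketch: total correlation among blocks z_1,...,z_4 under a Gaussian
# approximation, TC = sum_i H(z_i) - H(z_1,...,z_4); 0 iff blocks are independent.
import numpy as np

def gaussian_tc(Z, blocks):
    """Z: (n_samples, d) array of concatenated block representations.
    blocks: list of index arrays, one per block z_i."""
    cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(Z.shape[1])
    _, logdet_joint = np.linalg.slogdet(cov)
    logdet_blocks = sum(np.linalg.slogdet(cov[np.ix_(b, b)])[1] for b in blocks)
    return 0.5 * (logdet_blocks - logdet_joint)   # in nats

rng = np.random.default_rng(0)
Z = rng.standard_normal((1000, 20))               # placeholder: 4 blocks of size 5
blocks = [np.arange(i * 5, (i + 1) * 5) for i in range(4)]
print(gaussian_tc(Z, blocks))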
In Exercise A, C4 obtains the highest discrimination value, which means feature z 3 plays the most import role in Exercise A. Similarly, feature z 2 makes little difference in Exercise A. Conclusion In this manuscript, we propose an Unsupervised Layerwise Feature Extraction Algorithm (LFEA) to perform the sEMG signal processing and downstream classification tasks. The model contains three core modules: Information Compression Module (ICM), Information Expression Feature discrimination results for DB1. Frontiers in Neuroscience 08 frontiersin.org Feature discrimination results for DB2. Module (IEM) and Information Separation Module (ISM), that ensure that the learning representation is compact, informative and disentangled. We further use a layer-wise optimization procedure to reduce the computation cost and avoid some optimization problem, like vanishing and exploding gradient. Experimentally, we also verify that the untangling effect and downstream classification tasks give better results. In the future, we hope to combine the advantages of supervised and unsupervised to build a semi-supervised learning framework that can be adapted to more scenarios. The bold values mean the lowest and highest discrimination values. Data availability statement The original contributions presented in this study are included in the article/supplementary material, further inquiries can be directed to the corresponding author. Author contributions ML and ZL contributed to the conception and design of the study. FZ organized the database. JG performed the statistical analysis. ML and ST wrote the first draft of the manuscript. All authors contributed to the manuscript revision, read, and approved the submitted version. Funding This study was supported by the National Key R&D Program of China (2021YFA1000401) and the National Natural Science Foundation of China (U19B2040).
Ozone saturation and decomposition kinetics in porous medium containing different hybrids of maize ABSTRACT The objective of this study was to evaluate the saturation and decomposition kinetics of ozone in porous medium containing grains of different maize hybrids. The common maize hybrid AG 1051 and the super sweet maize hybrids Tropical Plus, GSS 42072, GSS 41499 and GSS 41243 were used. Samples of 1 kg of maize, with moisture content of 13.0% (w.b.), were placed in glass containers with a capacity of 3.25 L. The grains were ozonated at a concentration of 1.28 mg L-1, at 25 °C and gas flow rate of 5.0 L min-1. The saturation time and concentration, the half-life time, and the physical properties apparent specific weight, actual specific weight, porosity, sphericity and circularity of each hybrid were determined. The experiment was carried out in a completely randomized design, with three replicates, using regression analysis of the data. Regarding the gas saturation time, the values obtained remained between 6.6 and 163.9 min, with saturation concentration varying from 0.34 to 1.12 mg L-1. As for the ozone half-life time, the highest value obtained was 10.5 min for the common maize hybrid AG 1051 and the lowest value was 0.16 min, for the super sweet maize hybrid GSS 41499. Ozone saturation and decomposition kinetics in maize were found to depend on the hybrid contained in the porous medium. Ozone decomposition is faster in porous medium containing super sweet maize hybrids. Introduction Ozonation has been proposed as an alternative to control grain pests because of the increased resistance of the insects, especially to the phosphine fumigant, and the increase of demand for products free from pesticide residues (Tiwari et al., 2010; Pandiselvam et al., 2015; Xinyi et al., 2017). There are various reports in the literature referring to the effectiveness of ozone in the control of grain pest insects, such as Tribolium castaneum, Sitophilus zeamais, S. oryzae, Oryzaephilus surinamensis, Rhyzopertha dominica and larvae of Plodia interpunctella (Kells et al., 2001; Rozado et al., 2008; Sousa et al., 2008; Bonjour et al., 2011; Silva et al., 2016). Ozone has also been used as an antimicrobial agent, with proven efficiency in the control of different species of bacteria and fungi (Kim & Yousef, 2000; Kells et al., 2001; Concha-Meyer et al., 2014; Igura et al., 2004; Hudson & Sharma, 2009; Alencar et al., 2012; Santos et al., 2016). Given the expressive applicability of ozone as a protecting agent, it is fundamental to study parameters related to the distribution of the gas during grain fumigation, evaluating its saturation and decomposition kinetics in the porous medium. Gaseous ozone has a half-life of 20 min at 20 ºC (Novak & Yuan, 2007) and rapidly reacts in medium containing organic material, decomposing into oxygen (Cullen et al., 2009). Temperature and moisture content are factors that influence ozone decomposition in porous medium containing grains, as reported by Alencar et al. (2011) and Pandiselvam et al. (2015). There are few reports on ozone saturation and decomposition in porous medium containing grains (Santos et al., 2007; 2016; Alencar et al., 2011; Pandiselvam et al., 2015; Roberto et al., 2016). For Alencar et al.
(2011), parameters related to saturation and decomposition are fundamental to evaluate technical viability and to dimension grain ozonation systems.Due to the differences in chemical composition and physical properties, it is important to study these processes in the different grains.Hence, the present study aimed to evaluate ozone saturation and decomposition kinetics in porous medium with grains of different maize hybrids. Material and Methods The study was carried out at the Laboratory of Preprocessing and Storage of Vegetal Products, of the College of Agronomy and Veterinary Medicine of the University of Brasília, Brasília-DF, Brazil (15º 45' 46.70" S; 47º 52' 10.25" W), from January to August 2016. Ozone gas was obtained using an ozone generator, developed by the company Ozone & Life, Model O&L 3.0-O2-RM.The ozone generation process used as input oxygen with purity level of approximately 90%, free from moisture, obtained with an oxygen concentrator attached to the generator. Ozone gas saturation and decomposition kinetics were evaluated using grains from four super sweet maize hybrids (Tropical Plus; GSS 42072; GSS 41499; GSS 41243) and one common maize hybrid (AG 1051), with moisture content around 13% (w.b.).Ozone concentration was quantified using the iodometric method, described by Clescerl et al. (1999). Three tests were carried out under the same conditions for each maize hybrid. Saturation times and the respective saturation concentrations (C Sat ) in the evaluation of saturation were determined according to Santos et al. (2007).To measure the ozone gas saturation time in porous medium containing different maize hybrids, the gas was injected at concentration of 1.28 mg L -1 , using 3.25 L glass pots containing 1 kg of grains (Figure 1).It should be noted that the ozone concentration of 1.28 mg L -1 is higher than that used by Rozado et al. (2008), which was efficient to control pest insects of stored maize grains.Input flow rate of 5.0 L min -1 of the gas was adopted, at temperature of 25 ºC.The residual concentration of the gas was determined after it passed through the product, at regular 5 min intervals, until it remained virtually constant. To relate ozone gas residual concentration with time, a sigmoid equation was fitted to the obtained data (Eq. 1) where: C -ozone gas concentration, mg L -1 ; t -time, min; and, a, b, c -constants of the equation. Based on the constants b and c of the fitted equations, it was possible to obtain the ozone saturation times (Eq.2) in the porous media composed by the different hybrids, as described by Venegas et al. (1998).After the saturation times were determined, the respective saturation concentrations were calculated. t b c Sat = +2 where: t Sat -saturation time, min. Ozone decomposition kinetics was evaluated after saturation of the porous medium, by quantifying the residual concentration through the iodometric method, after different time intervals during which the gas was not injected and spontaneous decomposition occurred, following the methodology adopted by Santos et al. (2007). 
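The fitting step described by Eqs. 1 and 2 can be sketched as follows, assuming the commonly used Boltzmann-type sigmoid C(t) = a / (1 + exp((b - t)/c)) for Eq. 1 (the exact functional form is not legible in the extracted text) together with the saturation-time criterion t_sat = b + 2c attributed to Venegas et al. (1998). The concentration data are placeholders.

# Minimal sketch: fitting a sigmoid to residual ozone concentration over time
# and deriving the saturation time and saturation concentration from it.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, a, b, c):
    return a / (1.0 + np.exp((b - t) / c))

t = np.arange(0, 60, 5.0)                          # sampling times, min
c_res = 1.12 / (1.0 + np.exp((10 - t) / 4.0))      # placeholder residual O3, mg L-1
c_res = c_res + np.random.default_rng(0).normal(0, 0.01, t.size)

(a, b, c), _ = curve_fit(sigmoid, t, c_res, p0=[1.0, 10.0, 5.0])
t_sat = b + 2 * c                                  # Eq. 2: saturation time
c_sat = sigmoid(t_sat, a, b, c)                    # saturation concentration
print(f"t_sat = {t_sat:.1f} min, C_sat = {c_sat:.2f} mg L-1")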
(1) (2) In the quantification of residual ozone, after the different rest periods, atmospheric air was injected at rate of 1.0 L min -1 .The first-order kinetic model (Eq.3) was fitted to the data of ozone residual concentration as a function of the different time intervals (Wright, 2004).The decomposition kinetics model, after linearization (Eq.4), was fitted through regression analysis.The decomposition rate constant (k) is given by the slope of the line after fitting the integrated and linearized models. c -lower characteristic dimension of the grain, mm; C -circularity, %; di -diameter of the largest circle inscribed, mm; and, dc -diameter of the largest circle circumscribed, mm. The experiment was conducted in a completely randomized design, in triplicate, using data regression analysis.The software SigmaPlot 10.0 was used to obtain the regression equations and plot the graphs, referring to the processes of saturation and decomposition kinetics. Results and Discussion For the saturation time, the values remained within the range from 6.5 to 163.9 min, and the highest value was obtained for the hybrid GSS 41499 (Table 1 and Figure 2).The ozone gas saturation concentration varied from 0.34 and 1.12 mg L -1 , and highest value was obtained for the common maize hybrid AG 1051, equivalent to 87.5% of the initial concentration, with saturation time of 6.5 min.On the other hand, for the super Decomposition rate constants were used to calculate the half-life time (t 1/2 ) of ozone in porous medium containing different maize hybrids, which is defined by Eq. 5 (Wright, 2004) for the first-order kinetics model: Additionally, the following physical properties of the different hybrids were determined: apparent specific weight, actual specific weight, porosity, circularity and sphericity.Apparent specific weight (ρ) was determined based on the relationship between weight and volume occupied by maize grains, whereas the actual specific weight (ρ r ) was obtained according to Moreira et al. (1985).After determining apparent specific weight and actual specific weight, porosity (P) was calculated using Eq. 6. Sphericity and circularity were determined as defined by Mohsenin (1986), using Eqs.7 and 8, by measuring the dimensions of 50 grains of each maize hybrid using a caliper.(3) sweet maize hybrid GSS 41499, the saturation concentration was equivalent to 26.6% of the initial concentration, associated with the longest saturation time, equal to 163.9 min.The observed behavior, regarding the saturation of porous medium containing grains of different maize hybrids, was consistent with Strait (1998), Kells et al. (2001) and Mendez et al. (2003), who claimed that ozone behavior during grain fumigation has two distinct phases.These authors claim that the grains have active sites on their surface that react with ozone during the initial fumigation, leading to degradation of the gas and, consequently, the elimination of these sites (phase I).Once these sites have reacted with ozone (phase 2), its degradation rate decreases. 
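A minimal sketch of the decomposition-kinetics fit described by Eqs. 3-5 is given below: the first-order model C(t) = C0 exp(-k t) is fitted in its linearized form ln C = ln C0 - k t, the decomposition rate constant k is taken from the slope, and the half-life follows as t1/2 = ln(2)/k. The data points are placeholders.

# Minimal sketch: first-order ozone decomposition kinetics and half-life.
import numpy as np

t = np.array([0, 2, 4, 6, 8, 10], dtype=float)      # rest time, min (placeholder)
c = np.array([1.0, 0.82, 0.66, 0.55, 0.44, 0.36])    # residual O3, mg L-1 (placeholder)

slope, intercept = np.polyfit(t, np.log(c), 1)       # ln C = ln C0 - k t (Eq. 4)
k = -slope                                            # decomposition rate constant, min-1
t_half = np.log(2) / k                                # Eq. 5
print(f"k = {k:.3f} min-1, t_1/2 = {t_half:.1f} min")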
Ozone decomposition (Figure 3 and Table 2) was more accelerated in the super sweet maize hybrids GSS 42072 and GSS 41499, with decomposition rate constants (k) of 0.835 and 4.471 min -1 , respectively.It should be pointed out that the k value in the porous medium containing grains of the hybrid GSS 41499 is 5.3 times higher than that for the hybrid GSS 42072.Lower decomposition rates were observed in the hybrids AG 1051, Tropical Plus and GSS 41243, with values of 0.066, 0.110 and 0.185 min -1 , respectively. For ozone half-life time (Table 2), highest values were found for the hybrids AG 1051 and Tropical Plus, equivalent to 10.5 and 6.3 min, respectively.The lowest values of half-life time were obtained for grains of the hybrids GSS 41499 and GSS 42072, which stand out for having longer saturation time, as demonstrated in Table 1.Ozone half-life times obtained in the present study were lower than those found in the absence of biological material (20 to 50 min), as well as in aqueous solutions (20 to 30 min) (Khadre et al., 2001), confirming the influence of medium composition on gas reactivity.Santos et al. (2007) studied ozone decomposition kinetics in common maize, using the concentration of 100 ppm (≈ 0.21 mg L -1 ).These authors found half-life time of 5.57 min, a behavior similar to that observed in the hybrid Tropical Plus.In porous medium containing peanut with 7.1% water content at 25 ºC, Alencar et al. (2011) obtained half-life time equivalent to 7.7 min, which is higher than those for all hybrids of super sweet maize, but lower than that of the common maize hybrid. Expressive differences were found in the saturation and decomposition of ozone gas, between the different maize hybrids.Such differences are possibly associated with the physical properties of the grains.Table 3 shows the data referring to apparent specific weight, actual specific weight, porosity, sphericity and circularity of the different maize hybrids.The highest values of saturation time associated with lower values of saturation concentration were obtained in super sweet maize hybrids.In general, super sweet maize hybrids showed the lowest values of apparent specific weight and actual specific weight, and highest values of porosity, sphericity and circularity, compared with the common maize hybrid (AG1051).The literature has data related to other types of grains and reports that the observed differences may be related to physical properties.For peanut grains, Alencar et al. (2011) observed saturation time and concentration equivalent to 175 min and 0.26 mg L -1 , respectively, for grains with water content of 7.1%, initial gas concentration of 0.45 mg L -1 , at 25 ºC.Santos et al. (2016) studied the saturation process in porous medium containing rice grains, adopting flow rate of 1.0 L min -1 and concentration of 10.13 mg L -1 , and observed saturation time and concentration of 13.97 min and 5.00 mg L -1 , respectively. Figure 2 . Figure 2. Residual ozone concentration over time in porous medium containing grains of different maize hybrids, with initial gas concentration of 1.28 mg L -1 saturation and decomposition.According to Tiwari et al. (2010), ozone diffusion depends on the chemical composition of the Figure 3 . Figure 3. First-order kinetic model fitted to the observed data of ozone residual concentration in porous medium containing different maize hybrids at temperature of 25 ºC Table 2 . 
Regression equations over time for residual concentration of ozone gas in porous medium containing grains of different maize hybrids, at temperature of 25 ºC, and the respective coefficients of determination (r²) and half-life times
Development of Scale to Measure Attitude of Farmers and Farm Women towards Front Line Extension System of ICAR The ICAR established a section of Extension Education at its headquarters in 1971 which was later on strengthened and renamed as Division of Agricultural Extension. It was intended to enforce this functional relationship of the extension system down the line in the research institutes, agricultural universities and allied institutions. Front line extension system of ICAR presently comprises of DEE of SAUs and Krishi Vigyan Kendra (KVKs). Introduction The ICAR established a section of Extension Education at its headquarters in 1971 which was later on strengthened and renamed as Division of Agricultural Extension. It was intended to enforce this functional relationship of the extension system down the line in the research institutes, agricultural universities and allied institutions. Front line extension system of ICAR presently comprises of DEE of SAUs and Krishi Vigyan Kendra (KVKs). The system has a wide network of KVKs at district level which is mainly responsible for the dissemination of agricultural technologies to farmers, farm women and other extension field functionaries at the grass-root level. As attitude influences an individual's choice of Attitude influences an individual's choice of action and responses to any services, incentives and challenges. The attitude of the farmers and farm women towards front line extension system of ICAR has a direct bearing on the performance of the system. A scale was constructed to measure the attitude of farmers and farm women towards the agriculture extension system of ICAR. For this, Thurston equidistance method of scale construction was used. The scale consisted of final 20 statements including ten positive and ten negative statements. Reliability of the scale was calculated by using reliability coefficient (Cronbach alpha) was 0.93. The validity of the scale was tested by the expert"s judgments. The reliability and validity of the scale indicate its consistency and precision of the results. This scale can be used to measure the attitude of farmers and farm women to study their attitude towards front line extension system of ICAR. action and responses to any services, incentives and challenges. The attitude of the farmers and farm women towards front line extension system of the ICAR has a direct bearing on their participation in various extension activities and utilization of extension services. Hence, an effort has been made to construct a scale to measure the attitude of the farmers and farm women towards various aspects of the transfer of technology, extension services of the system. The attitude scale thus constructed can be utilized for improving the participation and effectiveness of extension work. Materials and Methods An attitude is a predisposition or a tendency to respond positively or negatively towards a certain idea, object, person or situation. The attitude of the farmers and farm women towards the agricultural extension system of ICAR was measured by the attitude scale especially constructed to meet out the objectives. The attitude was operationalized as the degree of positive or negative feeling of farmers and farm women towards the front line extension system of ICAR. Thurston"s equal appearing interval technique was used to construct the attitude scale because the technique has an absolute system of units and also show higher reliability, as indicated by Pandey (2017). 
The methodological procedures for Thurston"s equal appearing interval technique of attitude scale construction are as follows: Defining the construct A construct is a concept with added meaning, deliberately and consciously invented or adopted for a special scientific purpose (Kerlinger, 1973). The construct is a proposed attribute of a person that often cannot be measured directly, but can be assessed using several indicators or manifest variables. In the present study construct was the attitude of farmers and farm women towards the agriculture extension system of ICAR. Identification and operationalization of dimensions under the construct Major dimensions identified under this construct were factors related to the extension activities, services rendered and facilities provided to the farmers and farm women by the extension system (mainly KVKs). Collection and development of items Items are the statements representing each dimension of the construct under study. Items related to the attitude of the farmers and farm women towards front line extension system of ICAR were collected and developed based on an extensive review of literature, consultation with the experts from State Agricultural Universities and KVKs. These statements were obtained from all possible sources e.g. literature, discussion with experts, the experience of investigator and research papers. Initially, a tentative list of 65 statements was drafted keeping in view the applicability of statements suited to the area of study viz. Rajasthan. Editing of items The statements thus collected were edited for final selection based on the criteria suggested by Edwards (1957). Maximum care was taken in the editing of statements so that it could measure what is intended. After editing and based on pilot study on 30 respondents viz. 15 farmers and 15 farm women, finally 42 statements were selected. Judges' rating of attitude statements A copy of all the 42 statements together with 5 point continuum against each statement was personally given/ mailed to 50 judges with a request letter explaining the procedure of judgment. The judges selected for the study comprised of extension specialists, educationists and officials of DEE. The judges were requested to sort out the statements on 5 point continuum i.e. most favourable, favourable, neutral, unfavourable and most unfavourable statement in judging the attitude towards the front line extension system of ICAR. They were also requested to delete redundant statements and suggest modifications in the scale they deemed necessary. The response was received from 37 judges out of 50. Calculation of scale and Q values On the basis of judges rating in the equal appearing interval, the scale values of 48 statements were obtained by computing their medians. The semi-interquartile range "Q" was computed as an index of dispersion of statements in the scale. The goal was to have a smaller number of statements evenly placed on the continuum. The "Q" value indicated the ambiguity or uncertainty of the meaning of the statements. The statements with larger "Q" value were omitted. 
Since the median of the distribution of judgment for each statement is taken as the scale value of the statement, the scale value was calculated with the help of the following formula: S =1+ where, S= the median or scale value of the statement 1= the lower limit of the interval in which the median falls pb= the sum of the proportions below the interval in which the median falls pw= the proportion within the interval in which the median falls. i= the width of the interval and is assumed to be equal to 1.0 To determine the Q value, two other point measures i.e. the 75 th and 25 th centile were calculated using the following formulae: where, & = the 25 th and 75 th centile respectively. 1= the lower limit of the interval in which the 25th and 75 th centile falls. = the sum of the p proportions below the interval in which the 25 th and 75 th centile falls. pw= the proportion within the interval in which the 25 th and 75 th centile falls. i= the width of the interval and is assumed to be equal to 1.0 The inter-quartile range or Q value was calculated as under: Q= - The scale value and Q value for each of the 42 statements was thus calculated according to the above mentioned formula. Final selection of the attitude statements When there was good agreement among the judges in judging the degree of favourableness or unfavourableness of a statement, Q value was small as compared with the value obtained when there was relatively little agreement among the judges. Based on the following criteria, 20 statements were finally selected for the attitude scale: Representation of the universe of the opinion about the extension system of the department. The scale values should have equal appearing intervals and Equal distribution of favourable and unfavourable attitude statements. Scoring procedure and final format of the scale Out of twenty statements, ten statements were the indicators of favourable attitude towards the extension system and the remaining ten were indicating unfavourable attitude. These finally selected twenty statements were randomly arranged to avoid response bias. Against each of these statements, thus arranged, there were five columns representing a 5 point continuum as strongly agree, agree, undecided, disagree and strongly disagree with the weightage of 5, 4, 3, 2 and 1, respectively for favourable statements and weightage of 1, 2, 3, 4 and 5 for unfavourable statements. The scale was then administered to the 30 farmers/farm women and attitude score of each individual was calculated. Standardization of the scale For standardization of the present scale reliability and validity was ascertained using "Cronbach"s alpha" method and content validity, respectively. The reliability of attitude scale To measure the reliability of the attitude scale, "Cronbach"s alpha" method was used. The developed attitude scale was administered to the same group of 30 respondents (farmers and farm women) other than the respondents included in the sample. The instrument was re-administered to the same group of respondents after 15 days. Cronbach's alpha coefficient of correlation between the scores obtained by the respondents at two occasions was calculated. The value of correlation so obtained was 0.93 indicating highly significant or high reliability of the attitude scale. The validity of attitude scale The validity of a test depends upon fidelity with which it measures what it is expected to measure (Kerlinger, 1967). Content and construct validity of the attitude scale was examined. 
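The scale-value and Q-value computations defined above can be expressed compactly as interpolated centiles of the judges' category proportions, as in the sketch below; the proportions used are illustrative, and the five categories are assumed to be unit-width intervals centred on 1 to 5.

# Minimal sketch: Thurstone scale value (median) and Q value (interquartile
# range) of one statement from the proportions of judges in each category.
import numpy as np

def centile(props, q):
    """Interpolated q-th centile (q in [0, 1]) with unit-width intervals."""
    cum = np.concatenate([[0.0], np.cumsum(props)])
    k = np.searchsorted(cum, q, side="right") - 1     # interval containing the centile
    lower_limit = k + 0.5                             # intervals centred on 1..5
    return lower_limit + (q - cum[k]) / props[k] * 1.0   # interval width i = 1.0

props = np.array([0.05, 0.15, 0.40, 0.30, 0.10])       # illustrative judge proportions, categories 1..5
scale_value = centile(props, 0.50)                      # S, the median
q_value = centile(props, 0.75) - centile(props, 0.25)   # Q = C75 - C25
print(f"S = {scale_value:.2f}, Q = {q_value:.2f}")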
Statements were selected to cover the whole universe of the content with the help of the literature and scientists from different departments. The selected statements were presented to a panel of judges to establish jury validity, i.e. to see whether the whole universe and sub-universe of content were covered and whether the statements framed were clear and in an understandable form. Those items which secured 70-80 per cent concurrence of the experts were included in the final scale or test.

Administration of the scale
The final scale, which measures the attitude of farmers and farm women towards the front line extension system of ICAR, consists of 20 statements. The scale can be administered on a five-point continuum, viz. strongly agree, agree, undecided, disagree and strongly disagree, with scores of 5, 4, 3, 2 and 1, respectively, for positive statements and reverse scoring for negative statements. Therefore, the overall possible attitude score of an individual respondent towards the agricultural extension system of ICAR can range from 20 to 100. A high score on the scale represents a favourable attitude of the farmers/farm women towards the agricultural extension system of ICAR. This scale was constructed keeping in mind the study area, viz. Rajasthan. Owing to the uniformity of services, activities and approaches of the ICAR extension system throughout the nation, the attitude scale thus constructed can be administered to farmers and farm women on a large scale to obtain a wider picture of their views of the system. The results obtained will be helpful not only in planning and directing future extension work but also in improving the participation of the farmers and farm women, thereby enhancing the effectiveness of the system.
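To make the two computational steps above concrete, here are two small Python sketches. The first reproduces the centile arithmetic used to obtain the scale value S and the Q value for a single statement from the judges' frequency distribution; it assumes the five categories are scored 1 (most unfavourable) to 5 (most favourable), with interval width i = 1 and interval midpoints at the integers, and the example counts are illustrative rather than the study's data.

```python
import numpy as np

def scale_and_q(category_counts):
    """Thurstone scale value (median) and Q (interquartile range) for one
    statement, from judge counts on the 5-point continuum (1 = most
    unfavourable ... 5 = most favourable), interval width i = 1."""
    counts = np.asarray(category_counts, dtype=float)
    p = counts / counts.sum()                      # proportion of judges per category
    cum = np.concatenate(([0.0], np.cumsum(p)))    # cumulative proportion below each interval

    def centile(c):
        k = int(np.searchsorted(cum, c, side="right")) - 1   # interval containing the c-th centile
        k = min(k, len(p) - 1)
        lower_limit = (k + 1) - 0.5                # lower real limit of that interval
        pb = cum[k]                                # sum of proportions below the interval
        pw = p[k] if p[k] > 0 else 1e-12           # proportion within the interval
        return lower_limit + (c - pb) / pw * 1.0   # i = 1

    s = centile(0.50)                              # scale value = median
    q = centile(0.75) - centile(0.25)              # Q = C75 - C25
    return s, q

# Example: 37 judges sorting one statement across the five categories
print(scale_and_q([2, 5, 10, 15, 5]))
```

The second sketch totals one respondent's answers on the final 20-statement scale with reverse scoring for the unfavourable statements. The set of item numbers marked unfavourable is hypothetical, since the actual key depends on the final scale.

```python
# Hypothetical item numbers for the ten unfavourable statements (illustrative only).
UNFAVOURABLE_ITEMS = {2, 4, 5, 8, 10, 12, 14, 16, 18, 20}

def attitude_score(responses):
    """responses: dict {item_number: rating}, 1 = strongly disagree ... 5 = strongly agree."""
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"item {item}: rating must be between 1 and 5")
        # Reverse-score unfavourable statements so that a higher total is more favourable.
        total += (6 - rating) if item in UNFAVOURABLE_ITEMS else rating
    return total   # ranges from 20 (least favourable) to 100 (most favourable)
```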
Quantum evolution across singularities

Attempts to consider evolution across space-time singularities often lead to quantum systems with time-dependent Hamiltonians developing an isolated singularity as a function of time. Examples include matrix theory in certain singular time-dependent backgrounds and free quantum fields on the two-dimensional compactified Milne universe. Due to the presence of the singularities in the time dependence, the conventional quantum-mechanical evolution is not well-defined for such systems. We propose a natural way, mathematically analogous to renormalization in conventional quantum field theory, to construct unitary quantum evolution across the singularity. We carry out this procedure explicitly for free fields on the compactified Milne universe and compare our results with the matching conditions considered in earlier work (which were based on the covering Minkowski space).

Introduction

Dynamical evolution across space-time singularities is one of the most tantalizing, even if speculative, questions in modern theoretical physics. Should our theories point towards a beginning of time, it is very natural to ask what came before, and, indeed, whether there could be anything before. In certain model contexts, quantum evolution across space-time singularities appears to be described by time-dependent Hamiltonians developing an isolated singularity as a function of time at the moment the system reaches a space-time singularity. It is then worthwhile to study such quantum Hamiltonians and establish some general prescriptions for using them to construct a unitary quantum evolution. Needless to say, additional specifications are needed in a Schrödinger equation involving this kind of Hamiltonian, on account of the singular time dependence. One of the simplest examples of such singular time-dependent Hamiltonians in systems with space-time singularities is given by a free scalar field on the Milne orbifold (see [1,2,3,4,5] and references therein for some recent occurrences of the Milne orbifold in models of cosmological singularities). We shall give a detailed consideration of this case in section 3. Here, it should suffice to say that the square root of the determinant of the metric of the Milne orbifold vanishes as |t| when t goes to 0. Because of that, the kinetic term in the Lagrangian for a free field φ on the Milne orbifold will have the form |t|(∂_t φ)², and the corresponding term in the Hamiltonian expressed through the canonical momentum π_φ conjugate to φ will have the form π²_φ/|t|, which manifestly displays a 1/|t| singularity. The position of this singularity in the time dependence coincides with the metric singularity of the Milne orbifold. While it is well-known that free fields on the Milne orbifold are not a good approximation to interacting systems, especially in gravitational theories [6,7], analogous singular time dependences have recently appeared in other models, which have been the main motivation for the present work. For example, 11-dimensional quantum gravity with one compact dimension in a certain singular time-dependent background with a light-like isometry is conjectured to be described by a time-dependent modification of matrix string theory [8,9]. This model can be recast in the form of a (1+1)-dimensional super-Yang-Mills theory on the Milne orbifold. It will thus contain in its Hamiltonian the 1/|t| time dependence typical of the general Milne orbifold kinematics.
The question of transition through the singularity will then amount to defining a quantum system with such a singular Hamiltonian. Likewise, for the time-dependent matrix models of [10], which are conjectured to describe quantum gravity in a non-compact eleven-dimensional time-dependent background with a light-like singularity, one obtains a quantum Hamiltonian with a singular time dependence. In view of these examples, our present paper will address the question of how one should define unitary quantum evolution in the presence of isolated singularities in the time dependence of quantum Hamiltonians. Upon giving a general prescription for treating such singularities and discussing the ambiguities it incurs, we shall proceed with analyzing the simple yet instructive case of a free scalar field on the Milne orbifold. We shall further discuss the relation between our prescription and the recipes for quantum evolution of this system previously proposed in the literature (and based on considerations in the covering Minkowski space) [11,12,13,7].

Isolated singularities in time-dependent quantum Hamiltonians

Following the general remarks in the introduction, we shall consider a quantum system described by the following time-dependent Hamiltonian:

H(t) = H_reg(t) + f(t, ε) h,   (1)

where H_reg(t) is non-singular around t = 0, whereas the numerical function f(t, ε) develops an isolated singularity at t = 0 when ε goes to 0 (ε serves as a singularity regularization parameter), and h is a time-independent operator. We shall be interested in the evolution operator from small negative to small positive time. In this region, we shall assume that we can neglect the regular part of the Hamiltonian H_reg(t) compared to the singular part.¹ The Schrödinger equation takes the form

i d|ψ(t)⟩/dt = f(t, ε) h |ψ(t)⟩.   (2)

The solution for the corresponding evolution operator is obviously given by

U(t, t′) = exp( −i h ∫_{t′}^{t} f(τ, ε) dτ ).   (3)

When the regularization parameter ε is sent to 0, f(t, ε) becomes singular and U(t, t′) is in general not well-defined. The goal is then to modify the Hamiltonian locally at t = 0 in such a way that the evolution away from t = 0 remains as it was before, but there is a unitary transition through t = 0. Of course, a large amount of ambiguity is associated with such a program, and we shall comment on it below. The most conservative approach to the Hamiltonian modification is suggested by (3). Since the problem arises due to the impossibility of integrating f(t, ε) over t at ε = 0, the natural solution is to modify f(t, ε) locally around (in the ε-neighborhood of) t = 0 in such a way that the integral can be taken (note that we are leaving the operator structure of the Hamiltonian intact). The subtractions necessary to appropriately modify f(t, ε) are familiar from the theory of distributions. Namely, for any function f(t, ε) developing a singularity not stronger than 1/t^p as ε is sent to 0, with an appropriate choice of c_n(ε), one can introduce a modified

f̃(t, ε) = f(t, ε) − Σ_n c_n(ε) δ^(n)(t)   (4)

(where δ^(n)(t) are derivatives of the δ-function) in such a way that the ε → 0 limit of f̃(t, ε) is defined in the sense of distributions. The latter assertion would imply that the ε → 0 limit of

∫ dt f̃(t, ε) F(t)   (5)

is defined for any smooth "test-function" F(t), and, in particular, that the ε → 0 limit of the evolution operator (3) with f replaced by f̃ exists (since f(t, ε) and f̃(t, ε) only differ in an infinitesimal neighborhood of t = 0, this modification will not affect the evolution at finite t). As a matter of fact, the subtraction needed for our particular case is simpler than (4).
Since the n > 0 terms in (4) can only affect the value of the evolution operator (3) at t′ = 0, if one is only interested in the values of the wave function for non-zero times, one can simply omit the n > 0 terms from (4). One can then write down the subtraction explicitly as

f̃(t, ε) = f(t, ε) − δ(t) ∫_{−t₀}^{t₀} f(τ, ε) dτ.

The appearance of a free numerical parameter (which can be chosen as t₀ in the expression above, or a function thereof) is not surprising, since, if f̃(t, ε) is an adequate modification of f(t, ε), so is f̃(t, ε) + c δ(t) with any finite c. For the particular 1/|t| time dependence of the Hamiltonian mentioned in the introduction, one can choose f(t, ε) as 1/√(t² + ε²), in which case f̃(t, ε) becomes

f̃(t, ε) = 1/√(t² + ε²) − 2 arcsinh(t₀/ε) δ(t).

It is sometimes more appealing to replace the δ-function in (7) by a resolved δ-function, in which case we find an analogous expression (8) (with µ being an arbitrary mass scale). One should note that it is very natural to think of the above subtraction procedure as renormalizing the singular time dependence of the Hamiltonian. Indeed, the mathematical structure behind generating distributions by means of δ-function subtractions is precisely the same as the one associated with subtracting local counter-terms in order to render conventional field theories finite. For concreteness, consider the one-loop contribution to the full momentum space propagator in λφ³ field theory, given by the one-loop diagram with two propagators connecting the points x and x′. If we compute it using position space Feynman rules, we find that it is proportional to the Fourier transform of the square of the scalar field Feynman propagator D(x, x′). However, while the Feynman propagator itself is a distribution, its square is not. For that reason, if one tries to evaluate the Fourier transform, one obtains infinities, since integrals of [D(x, x′)]² cannot be evaluated. The problem is resolved by subtracting local counter-terms from the field theory Lagrangian, which, for the above diagram, would translate into adding δ(x − x′) and its derivatives (with divergent cutoff-dependent coefficients) to [D(x, x′)]² in such a way as to make it a distribution. The mathematical structure of this procedure is precisely the same as what we employed for renormalizing the singular time dependences in time-dependent Hamiltonians. We should remark upon the general status of our Hamiltonian prescription viewed against the background of all possible singularity transition recipes one could devise. If the only restriction is that the evolution away from the singularity is given by the original Hamiltonian, one is left with a tremendous, infinite-fold ambiguity: any unitary transformation can be inserted at t = 0 and the predictive power is lost completely. One should look for additional principles in order to be able to define a meaningful notion of singularity transition. Our prescription can be viewed as a very conservative approach, since it preserves the operator structure of the Hamiltonian (the counter-terms added are themselves proportional to h, the singular part of the Hamiltonian). In the absence of further physical specification, this approach appears to be natural and can be viewed as a sort of "minimal subtraction". However, under some circumstances, one may be willing to pursue a broader range of possibilities for defining the singularity transition. For example, one may demand that the resolution of the singular dynamics must have a geometrical interpretation (at finite values of ε). This question will be addressed in [14].
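As a consistency check on the logic of the subtraction (with an illustrative normalization, not a reproduction of the paper's own coefficients), the short derivation below shows that the integral of 1/√(t² + ε²) diverges only logarithmically as ε → 0, so a single δ-function counter-term with a logarithmically divergent coefficient, fixed by the arbitrary reference time t₀, indeed leaves a finite ε → 0 limit.

```latex
% Consistency check (illustrative normalization): the 1/|t| singularity is only
% logarithmically divergent, so a single delta-function subtraction suffices.
\begin{aligned}
\int_{-T}^{T}\frac{dt}{\sqrt{t^{2}+\varepsilon^{2}}}
  &= 2\,\operatorname{arcsinh}\!\left(\frac{T}{\varepsilon}\right)
  \;\simeq\; 2\ln\frac{2T}{\varepsilon}
  \qquad (\varepsilon \ll T),
\\[4pt]
\int_{-T}^{T}\left[\frac{1}{\sqrt{t^{2}+\varepsilon^{2}}}
  - 2\operatorname{arcsinh}\!\left(\frac{t_{0}}{\varepsilon}\right)\delta(t)\right]dt
  &\;\xrightarrow{\ \varepsilon\to 0\ }\;
  2\ln\frac{2T}{\varepsilon} - 2\ln\frac{2t_{0}}{\varepsilon}
  \;=\; 2\ln\frac{T}{t_{0}}
  \qquad (\text{finite}).
\end{aligned}
```

Shifting t₀ changes the result only by a finite multiple of δ(t), which is precisely the one-parameter ambiguity noted in the text.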
In section 3, the focus of our attention will be a particular quantum system with a Hamiltonian quadratic in the canonical variables. For such linear systems, it is most common to analyze quantum dynamics in the Heisenberg picture, rather than in the Schrödinger picture we have employed above for the purpose of describing our general formalism. For convenience, we shall give a summary of the relevant derivations in appendix A. In short, one should construct the most general classical solution of the system in the form The solution to the Heisenberg equations of motion is simply obtained by replacing the integration constants A and A * in the above expression by creation-annihilation operators a and a † , which (with an appropriate normalization of u(t)) satisfy the standard commutation relation [a, a † ] = 1. The question of solving for the quantum dynamics is then most commonly phrased in terms of constructing the mode functions u(t) and u * (t), which are normalized solutions to the classical equations of motion. Our prescription may equally well be applied in such setting. One can analyze the classical equations of motion derived from the time-dependent Hamiltonian. It is safest to do so at finite ε, since the naïve ε → 0 limit of the classical equations of motion may not necessarily exist. However, the ε → 0 limit of the solutions for the mode functions will exist, and will, of course, define the same quantum dynamics as the general solution to the Schrödinger equation given by (2). 3 Free fields on the compactified Milne universe 3.1 The compactified two-dimensional Milne universe The two-dimensional Milne universe with 0 < t < +∞, corresponds to the "future" quadrant X ± > 0 of Minkowski space The Milne universe can be compactified by the identification which corresponds to the discrete boost identification The resulting space is a cone, which is singular at its tip t = 0. The action for a free scalar field in the (compactified) Milne universe is The corresponding equation of motion is solved by and their complex conjugates [11,12]. Here H (1) denotes a Hankel function, and the compactification (12) enforces the momentum quantization condition l ∈ . For solutions to the equation of motion (15), we define the scalar product [11] (φ 1 , and the Klein-Gordon norm (φ, φ). The solutions (16) are normalized to have Klein-Gordon norm −1. To quantize the scalar field φ, one expands where the u k (x, t) have Klein-Gordon norm 1, which ensures the canonical commutation relations [a k , a † l ] = δ k,l . We choose Essentially because ψ m,k of (16) are superpositions of negative frequency waves on the covering Minkowski space, the vacuum state defined with the creation and annihilation operators of (19) is an adiabatic vacuum of infinite order [11]. Note, however, that in a compactified Milne universe (where globally defined inertial frames are absent) this particular adiabatic vacuum is no more special than any other adiabatic vacuum of infinite order (of which there are infinitely many). Near t = 0, the l = 0 mode functions behave as (see, for instance, [7]) with ϕ l defined by e iϕ l = Γ(1 + il) sinh(πl) πl and satisfying ϕ −l = −ϕ l , while The mode functions are clearly singular at t = 0. The question we now want to address is whether quantum mechanical evolution can be consistently and naturally defined beyond t = 0. 
In the literature (see, for instance, [12,13,7]), this question has been addressed by extending the range of the t coordinate in the compactified Milne metric (10) to −∞ < t < ∞, i.e. by adding a "past cone" to the "future cone". 2 In the action (14), the factor t is replaced by |t|, The same goes for the scalar product (18) and the corresponding Klein-Gordon norm. The question then is how to define matching conditions between t < 0 mode functions and t > 0 mode functions, i.e. how to define global mode functions. Natural globally defined mode functions are obtained by allowing X ± to be either both positive or both negative in (16) (see (21)). As these are superpositions of negative frequency Minkowski modes, they describe excitations above the (adiabatic) vacuum inherited from Minkowski space. The solutions (16) have the property that they are analytic in the lower complexified t-plane. For t < 0, they can be written as which still has Klein-Gordon norm −1. For t approaching 0 from below, we have for the corresponding mode functions u l (t, x) = ψ * m,l (t, x) with l = 0, and Note that, even though the above prescription may seem natural, and it does define consistent matching conditions and a unitary evolution, it should not be given any privileged status. The (compactified) Milne universe contains a genuine singularity at the origin, and the question of how the system evolves in the neighborhood of the singularity cannot be in principle settled through an appeal to a flat Minkowski space (even though there is nothing wrong with using the covering Minkowski space for constructing particular evolutionary prescriptions). As we shall see below, more general rules for singularity crossing can be devised, with a different set of mode functions and a different vacuum state (which, being an adiabatic vacuum of infinite order, is no better and no worse than the one inherited from the covering Minkowski space). Even though the modefunctions u l = ψ * m,l constructed above solve the equations of motion derived from the action (24) at all positive and all negative t, there are no meaningful equations of motion satisfied at t = 0. Correspondingly, even though the quantum evolution defined in terms of the above prescription for the mode functions is unitary (and essentially inherited from the covering Minkowski space), this quantum evolution cannot be represented as a solution to the Schrödinger equation for the Hamiltonian derived from (24). In what follows, we shall nevertheless be able to cast this quantum evolution in a Hamiltonian form by appropriately renormalizing the time dependences in the Hamiltonian of the system. Quantum Hamiltonian evolution across the Milne singularity In section 2, we constructed a general prescription which allows to define a Hamiltonian evolution across an isolated singularity in the time dependence of the Hamiltonian. Since the case of a free scalar field on the Milne orbifold falls precisely into this category, it will be instructive to compare the above consideration in terms of the covering Minkowski space with our general prescription. We shall see that the two are in fact related, even though it is only in the parametrization of section 2 that the evolution has a manifestly Hamiltonian form at t = 0. 
The Hamiltonian corresponding to the action (14) is Following the general guidelines presented in section 2, we shall regulate the 1/|t| time dependence into f 1/|t| (t, ε) of (8): Near the origin, where the mass term is negligible, the equations of motion take the formφ −ḟ The general solution to this equation is or (34) With ε explicitly taken to 0, this becomes To construct the Heisenberg field operator (which contains all information on quantum dynamics) one should choose any such complex solution and, after normalizing appropriately, promote it to a mode function, as in (19) (see also appendix A). The question that will interest us here is how the quantum dynamics described by the Hamiltonian with our "minimal subtraction" is related to the mode function prescription (21) inherited from the covering Minkowski space. To this end, we shall define mode functions u (µ) l that solve (32) and coincide with u l of (21) for t > 0; however, they will generically differ from u l for t < 0. To see the relation between u (µ) l and u l , we construct u (µ) l by choosing A l and B l in (35) in such a way that it equals (22) for t > 0 and then compare it, for t < 0, with (27). In order to match (22) and (35) for t > 0, we impose Then, at t < 0, Comparing this expression with (27), we conclude that they are indeed the same if Note that the fact that µ depends on the Milne momentum l implies that the value of the arbitrary parameter introduced by our renormalization procedure is different for each of the oscillators comprising the field. For that reason, even though the covering Minkowski space prescription turns out to be the same as our "minimal subtraction" for each of the oscillators, for the entire field it is not. Phrased in the Hamiltonian language, the covering space prescription for the Milne singularity transition turns out to be different from the simplest consistent recipe one could devise, even though it is related to such simple recipe in a fairly straightforward way. Conclusions We have addressed the issue of how one can define a unitary quantum evolution in the presence of isolated singularities in the time dependence of a quantum Hamiltonian. If one demands that the operator structure of the Hamiltonian should be unaffected by regularization prescriptions (the "minimal subtraction" recipe), one discovers a one-parameter family of distinct quantum evolutions across the singularity. For the case of free quantum fields on the Milne orbifold, the covering Minkowski space considerations previously brought up in the literature [11,12,13,7] turn out to be closely related to, though distinct from, our "minimal subtraction" proposal. One explicit advantage of our present approach is that it makes the evolution across the singularity manifestly Hamiltonian, which was not the case in the context of the previous discussions. In this appendix, we shall review the dynamics of linear quantum systems. This material is very basic and well-known; however, it is usually presented in relation to a few specific linear systems of physical interest, whereas, for our purposes, it shall be convenient to summarize here the treatment of a general one-dimensional linear quantum system described by the Hamiltonian with f (t) and g(t) being arbitrary functions of time. The equations of motion take the forṁ Should one succeed finding a complex solution u(t) to this equation, one would be able to write down the most general real solution in the form with some complex constant A. 
In the quantum case, the solution to the Heisenberg equations of motion will have the exact same form, with A and A* replaced by Hermitean-conjugate operators a and a†:

x̂(t) = a u(t) + a† u*(t).

Our solution for the quantum dynamics shall be complete if we establish the commutation relations for a and a†. Before doing so, we recall the important notion of the Wronskian for a linear differential equation. For any two solutions x₁(t) and x₂(t) of a second order differential equation, their Wronskian is defined as

W[x₁, x₂] = x₁ ẋ₂ − x₂ ẋ₁.

It is straightforward to show that, for equation (42), the Wronskian of any two given solutions satisfies

Ẇ = (ḟ/f) W.

In other words, W/f does not depend on time. This circumstance permits one to define the "Wronskian norm" for any complex solution u(t):

i (u*(t) u̇(t) − u(t) u̇*(t)) / f(t).

As we have just demonstrated, the value of this expression does not depend on the moment of time one chooses to evaluate it. The familiar Klein-Gordon norm for free quantum fields, which we use in section 3, is a direct generalization of the Wronskian norm. The physical relevance of the Wronskian norm becomes apparent from the consideration of commutators:

[x̂(t), p̂(t)] = (u(t) u̇*(t) − u*(t) u̇(t)) [a, a†] / f(t),

where p̂ = ẋ̂/f is the canonical momentum. Therefore, to obtain the standard commutation relations for the creation-annihilation operators, [a, a†] = 1, one has to choose a complex solution u(t) with Wronskian norm 1.
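To make the appendix self-contained, the sketch below spells out these steps for the quadratic Hamiltonian H = ½ f(t) p² + ½ g(t) x²; the sign conventions here are our own assumption and may differ from the paper's by an overall choice of which solution is called u versus u*.

```latex
% Sketch of the Wronskian argument for H = (1/2) f(t) p^2 + (1/2) g(t) x^2
% (our sign conventions; an assumption, not necessarily the paper's normalization).
\begin{aligned}
&\dot x = f(t)\,p, \qquad \dot p = -g(t)\,x
\;\;\Longrightarrow\;\;
\ddot x - \frac{\dot f}{f}\,\dot x + f g\, x = 0,
\\[4pt]
&W[x_1,x_2] = x_1\dot x_2 - x_2\dot x_1
\;\;\Longrightarrow\;\;
\dot W = x_1\ddot x_2 - x_2\ddot x_1 = \frac{\dot f}{f}\,W,
\qquad \frac{d}{dt}\!\left(\frac{W}{f}\right) = 0,
\\[4pt]
&\hat x(t) = a\,u(t) + a^{\dagger}u^{*}(t),\quad
\hat p = \frac{\dot{\hat x}}{f}
\;\;\Longrightarrow\;\;
[\hat x,\hat p] = \frac{u\dot u^{*} - u^{*}\dot u}{f}\,[a,a^{\dagger}] = i,
\\[4pt]
&\text{so that } [a,a^{\dagger}] = 1
\;\Longleftrightarrow\;
\frac{i\,(u^{*}\dot u - u\,\dot u^{*})}{f} = 1 .
\end{aligned}
```

With this convention the Wronskian norm i(u* u̇ − u u̇*)/f plays the same role for the oscillator that the Klein-Gordon norm plays for the free field in section 3.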
Sestrin mediates detection of and adaptation to low-leucine diets in Drosophila Mechanistic target of rapamycin complex 1 (mTORC1) regulates cell growth and metabolism in response to multiple nutrients, including the essential amino acid leucine1. Recent work in cultured mammalian cells established the Sestrins as leucine-binding proteins that inhibit mTORC1 signalling during leucine deprivation2,3, but their role in the organismal response to dietary leucine remains elusive. Here we find that Sestrin-null flies (Sesn−/−) fail to inhibit mTORC1 or activate autophagy after acute leucine starvation and have impaired development and a shortened lifespan on a low-leucine diet. Knock-in flies expressing a leucine-binding-deficient Sestrin mutant (SesnL431E) have reduced, leucine-insensitive mTORC1 activity. Notably, we find that flies can discriminate between food with or without leucine, and preferentially feed and lay progeny on leucine-containing food. This preference depends on Sestrin and its capacity to bind leucine. Leucine regulates mTORC1 activity in glial cells, and knockdown of Sesn in these cells reduces the ability of flies to detect leucine-free food. Thus, nutrient sensing by mTORC1 is necessary for flies not only to adapt to, but also to detect, a diet deficient in an essential nutrient. Fruitflies require Sestrin to regulate mTORC1 signalling in response to dietary leucine, survive a diet low in leucine, and control leucine-sensitive physiological characteristics, which establishes Sestrin as a physiologically relevant leucine sensor. Mechanistic target of rapamycin complex 1 (mTORC1) regulates cell growth and metabolism in response to multiple nutrients, including the essential amino acid leucine 1 . Recent work in cultured mammalian cells established the Sestrins as leucine-binding proteins that inhibit mTORC1 signalling during leucine deprivation 2,3 , but their role in the organismal response to dietary leucine remains elusive. Here we find that Sestrin-null flies (Sesn −/− ) fail to inhibit mTORC1 or activate autophagy after acute leucine starvation and have impaired development and a shortened lifespan on a low-leucine diet. Knock-in flies expressing a leucine-binding-deficient Sestrin mutant (Sesn L431E ) have reduced, leucine-insensitive mTORC1 activity. Notably, we find that flies can discriminate between food with or without leucine, and preferentially feed and lay progeny on leucine-containing food. This preference depends on Sestrin and its capacity to bind leucine. Leucine regulates mTORC1 activity in glial cells, and knockdown of Sesn in these cells reduces the ability of flies to detect leucine-free food. Thus, nutrient sensing by mTORC1 is necessary for flies not only to adapt to, but also to detect, a diet deficient in an essential nutrient. The protein kinase mTORC1 regulates growth and metabolism in response to diverse signals, including growth factors and nutrients such as amino acids 1 . Amino acids activate mTORC1 by promoting its translocation to the lysosomal surface, where its essential activator Rheb resides [4][5][6] .The heterodimeric Rag GTPases, which are under the control of several multi-component protein complexes, including GATOR1 and GATOR2 (ref. 7 ), regulate the lysosomal localization of mTORC1 (refs. 4,5 ). GATOR1 is a GTPase-activating protein for RagA and RagB and is necessary for amino acid deprivation to inhibit mTORC1 signalling 8,9 . 
By contrast, GATOR2 is required for amino acids to activate mTORC1 and directly interacts with several of the amino acid sensors so far discovered, indicating that it acts as a nutrient-sensing hub despite its still unknown biochemical function 7 . Among the proteogenic amino acids, leucine is the best-established activator of mTORC1 (refs. [10][11][12][13]. Work in cultured mammalian cells has shown that leucine controls mTORC1 by regulating the interaction of GATOR2 with the Sestrin family of proteins 3,14,15 , which are repressors of mTORC1 signalling 16,17 . Human Sestrin1 and Sestrin2 bind leucine at affinities consistent with the leucine concentration needed to activate mTORC1 and are required for leucine deprivation to inhibit mTORC1 signalling 3 . Moreover, a Sestrin2 mutant that does not bind leucine fails to dissociate from GATOR2 in the presence of leucine, and in cells expressing this mutant, mTORC1 activity remains low even when the cells are cultured in leucine-replete conditions 2,3 . Despite the evidence that Sestrin is a leucine sensor for the mTORC1 pathway in cultured mammalian cells, the roles of Sestrin-mediated leucine sensing in the physiology of an intact organism remain largely unexplored. Although much of the work on leucine sensing has been in mammalian systems, Sestrin and the core nutrient-sensing machinery, including the Rag GTPases, GATOR1 and GATOR2, are conserved in most invertebrates, including the fly Drosophila melanogaster 18 . Unlike in mammals, flies express only one gene for Sestrin (Sesn) 16 , greatly facilitating the in vivo study of leucine sensing by mTORC1. Here we show that Sestrin and its leucine-binding pocket are required for leucine to regulate mTORC1 activity in fly tissues in vivo and for flies to detect and adapt to leucine-deficient diets. Fly mTORC1 senses leucine in vivo through Sestrin In an equilibrium binding assay, Drosophila Sestrin bound leucine with a dissociation constant (K d ) of about 100 µM (Fig. 1a), an affinity several fold lower than those of human Sestrin1 and Sestrin-2 (K d values of about 15-20 µM) 3 . This reduced affinity is probably the result of a difference between the leucine-binding pockets of human and fly Sestrin. Structural studies show that in human Sestrin2 a tryptophan (W444) forms the floor of the pocket, but in the fly protein, the analogous residue is a leucine (L431), a smaller residue that when introduced into human Sestrin2 (W444L) is sufficient to reduce its leucine-binding capacity by several fold 2 . The low leucine affinity of fly Sestrin is consistent with the observation that fly haemolymph has substantially higher amino acid concentrations than human plasma 18,19 , a difference probably reflected Article intracellularly. Like the analogous mutant of human Sestrin2 (W444E), fly Sestrin(L431E) does not bind leucine (Fig. 1b). To examine whether leucine regulates the interaction of fly Sestrin with GATOR2, we stably expressed in Drosophila S2R+ cells a Flag-tagged control protein (und, the Drosophila orthologue of mammalian metap2, methionyl aminopeptidase) or WDR59, one of the five core components of the GATOR2 complex. Sestrin co-immunoprecipitated with GATOR2, but not und, and removal of leucine from the cell medium strongly enhanced the interaction. The addition of leucine, but not isoleucine, valine or methionine, to the immunoprecipitates was sufficient to release Sestrin from GATOR2 (Fig. 1c). 
Thus, like the human protein, fly Sestrin binds to GATOR2 in a fashion that is specifically disrupted by leucine. To extend our work in vivo, we generated flies that ectopically express MYC-tagged WDR24, another core component of GATOR2 (lpp>myc-WDR24 flies), in the fat body, and are either wild type at the Sesn locus or have a knock-in mutation causing the L431E substitution that renders Sestrin unable to bind leucine (Sesn L431E ). For a period of 4.5 h, we fed third instar larvae a chemically defined diet (see Methods and Extended Data Tables 1-4 for details) containing all proteogenic amino acids (amino acid replete) or the same diet lacking just leucine (leucine free) or valine (valine free). Regardless of genotype, larvae eating the leucineor valine-free diets had reduced levels of leucine or valine, respectively (Extended Data Fig. 1a,b). In lysates prepared from isolated fat bodies, endogenous Sestrin co-immunoprecipitated with GATOR2, but not a control protein (GFP-MYC), and deprivation of leucine, but not valine, strongly boosted the interaction. In contrast, Sestrin(L431E) bound equally well to GATOR2 under all dietary conditions, consistent with the mutant being leucine insensitive (Fig. 1d). In cultured cells and in fat bodies, we observed that Sestrin has multiple isoforms (Fig. 1c,d), probably the result of differential splicing 16 . In wild-type larvae, feeding of the diet free in leucine, but not valine, inhibited mTORC1 in the fat body, as assessed by the phosphorylation of S6K, a canonical mTORC1 substrate. The loss of Sestrin (Sesn −/− ) did not impact mTORC1 activity in larvae eating the amino-acid-replete diet, but completely prevented the inhibition of mTORC1 normally caused by leucine deprivation (Fig. 1e). Sestrin was also required for the leucine-free diet to activate autophagy, a process suppressed by mTORC1, as monitored by the formation of mCherry-Atg8a-positive puncta (Extended Data Fig. 1c). In Sesn L431E larvae, mTORC1 activity was low relative to that in wild-type larvae and also unaffected by leucine deprivation, indicating that the leucine-binding mutant of Sestrin acts as a non-repressible inhibitor of mTORC1 (Fig. 1e). Notably, mTORC1 signalling was inhibited in Sesn −/− larvae deprived of all food to a similar extent as in wild-type larvae (Extended Data Fig. 1d), which is consistent with work in cultured mammalian cells showing that Sestrin has a specific role in transmitting leucine availability to mTORC1 (refs. 3,14 ). Last, in larvae lacking a component of GATOR1 ( the absence of dietary leucine did not impact mTORC1 activity and it remained as hyperactive or suppressed, respectively, as when the larvae were fed the amino-acid-replete diet (Fig. 1e). Consistent with mTORC1 promoting Sesn transcription as part of a feedback loop 16,20 , Nprl2 −/− and Mio −/− flies had increased and decreased Sestrin levels, respectively (Fig. 1e). Collectively, these results show that dietary leucine modulates mTORC1 in vivo and that this regulation requires Sestrin and its leucine-binding pocket as well as the GATOR1 and GATOR2 complexes. Sestrin mediates adaption to low-leucine diets We reasoned that Sestrin-mediated suppression of mTORC1 helps animals adapt to and thus survive a diet low in leucine. We first tried to test this idea by feeding larvae food lacking leucine, but all larvae, independently of genotype, died within 2-3 days of starting the diet, consistent with leucine being an essential amino acid required for larval growth. 
When given food containing one-tenth of the normal leucine content, about 40% of wild-type larvae survived over a period of 16 days (Fig. 2a,b). In contrast, only about 10% of Sesn −/− larvae did so (Fig. 2b). Moreover, the surviving larvae grew to a much smaller size than their wild-type counterparts (Fig. 2c), a defect rescued by the expression of wild-type Sestrin from the ubiquitous Tubulin-Gal4, Tubulin-Gal80 ts promoter (Fig. 2c). When fed the standard laboratory diet, Sesn −/− and wild-type larvae developed indistinguishably (Extended Data Fig. 2a). Consistent with previous work showing that adult flies can live for weeks on a diet lacking any amino acid source 21 , our observations showed that wild-type flies also survived for many weeks on a leucine-free diet (Fig. 2e,h, Extended Data Fig. 2c,f and Supplementary Data 1). As with larvae, adult flies also required Sestrin to adapt to leucine scarcity, as Sesn −/− male and female animals had greatly shortened lifespans on the leucine-free, but not amino-acid-replete, diet (Fig. 2d,e,g,h and Supplementary Data 1). On the other hand, Sesn −/− flies had slightly shorter lifespans than wild-type counterparts only when eating the valine-free food (Fig. 2f,i and Supplementary Data 1), a diet on which the activity of processes controlled by mTORC1, such as protein synthesis and autophagy, would be expected to impact survival. When the Sesn L431E flies were fed the same chemically defined diets, they survived similarly to the wild-type flies (Extended Data Fig. 2b-g and Supplementary Data 1). Consistent with the chronic suppression of mTORC1 signalling, Sesn L431E larvae reared on the standard laboratory diet developed more slowly than wild-type ones (Extended Data Fig. 2h). Wild type attP2 Sesn -/-attP2 The P values were determined using a two-proportion z-test (two-sided). The bars show the percentage of surviving larvae in each genotype and the error bars represent the 95% Wald confidence interval. c, Sestrin is required for larval growth on a low-leucine diet. Shown are age-synchronized animals of the indicated genotypes raised for 9 days on either an amino-acid-replete diet or a reduced (10%)-leucine diet. Article We monitored mTORC1 activity in whole-fly lysates of female and male adult flies that had been fasted overnight and then refed for 90 min with the chemically defined diets used above. The loss of Sestrin prevented the inhibition of mTORC1 caused by the leucine-free diet in male and female flies (Extended Data Fig. 3a,b). We further focused on oogenesis, a physiological trait that is known to be regulated by diet 22 . Moreover, diet is known to regulate ovarian function through the GATOR1-GATOR2 complexes 21,[23][24][25] , and Mio, the gene for one of the components of GATOR2, was so named because mutations in it result in a missing oocyte phenotype 26 . We found that mTORC1 activity was strongly increased in the ovaries of Sesn −/− flies eating the standard laboratory diet, and as in larval fat bodies (Fig. 1e), it was suppressed in the ovaries of Sesn L431E flies (Extended Data Fig. 3c). When fed the amino-acid-replete or valine-free diet, Sesn −/− and wild-type flies had ovaries of similar sizes, but the loss of Sestrin greatly reduced ovarian size in flies under conditions of acute leucine deprivation (Extended Data Fig. 3d,e), again pointing to a specific role for Sestrin in adapting to leucine scarcity. The ovaries of the Sesn L431E flies were equally small on all of the diets (Extended Data Fig. 
3d,e), consistent with a role for mTORC1 in the control of gonad development. Sesn L431E flies also had reduced fecundity as they laid fewer eggs than wild-type flies (Extended Data Fig. 3f). Eggs from wild-type, Sesn L431E and Sesn −/− flies had comparable hatching rates, suggesting that Sestrin does not impact fertility (Extended Data Fig. 3g). Collectively, these data reveal that in larvae and adult flies Sestrin promotes survival on a low-leucine diet and has a particularly important role in controlling ovarian size and function. Sestrin regulates feeding behaviour Having established that Sestrin is important for flies to adapt to and survive on diets low in leucine, we examined whether flies also require Sestrin to detect and thus avoid food that is poor in leucine. To do so, we developed an assay to test whether adult flies prefer eating leucine-rich over leucine-poor food. The experimental set-up consisted of 15 female and 5 male flies in a bottle containing 2 apple pieces, the first painted with a solution of one or more amino acids and the second with an appropriate control (Fig. 3a). Each also contained a trace amount of a unique DNA oligonucleotide, which served as a barcode for measuring the food consumption, an approach previously described 27 and that we validated (Extended Data Fig. 4a-c and Methods). We chose apple as the base food because it is carbohydrate rich and protein poor 28 , allowing us to set up food choices that have different amino acid compositions but the same content of sugars. Apples are reported to contain very little leucine and valine 29-31 . We found that wild-type female flies prefer to eat apples coated with leucine rather than water. This preference emerges after the flies have been eating the food for about 6 h and increases to 5-6-fold by 24 h, the time point we used in subsequent experiments (Fig. 3b). The preference for leucine is concentration dependent (Extended Data Fig. 4d) and not every amino acid elicits a preference, as flies do not distinguish between apples coated with valine or water (Extended Data Fig. 4e). Given a choice between equal amounts of leucine and valine, flies still prefer leucine, suggesting that the leucine preference is not simply the result of a nitrogen imbalance (Extended Data Fig. 4e). Moreover, the leucine preference requires differential mTORC1 activity, as when flies were fed the mTORC1 inhibitor rapamycin, they no longer showed a preference (Fig. 3c). Rapamycin treatment also lowered the total amount of food consumed by the flies (Extended Data Fig. 4f), consistent with previous reports 32, 33 . Remarkably, neither Sesn −/− nor Sesn L431E female flies-both of which have leucine-insensitive mTORC1 signalling-had a preference for leucine as they ate similar amounts of leucine-rich and leucine-poor foods (Fig. 3d,e and Extended Data Fig. 4g). However, the two Sesn mutants probably differ in the total amount of food each ate. The amount of food (leucine-rich or leucine-poor) that Sesn −/− female flies ate was similar to the amount of leucine-rich food consumed by wild-type (w 1118 ) flies (Extended Data Fig. 4h). The opposite was true for Sesn L431E female flies. These flies ate an amount of food (leucine-rich or leucine-poor) similar to the amount of leucine-poor food consumed by the wild-type (OreR) flies (Extended Data Fig. 4i). That Sesn L431E files, which have low mTORC1 signalling, eat less food than wild-type controls is consistent with rapamycin causing a reduction in food consumption (Extended Data Fig. 4f). 
Whole-body re-expression in the Sesn −/− female flies of Sestrin driven by Tub>Gal4 partially restored the leucine preference of the animals (Extended Data Fig. 4j). We also examined whether flies can distinguish between foods with a more subtle difference in amino acid composition: an apple coated with the 20 proteogenic amino acids versus just 19 of them (that is, lacking only leucine). Indeed, this was the case and this preference was also absent in the Sesn −/− and Sesn L431E flies (Fig. 3f). Valine again served as a control: when removed from the 20-amino-acid cocktail, neither wild-type nor Sesn mutant flies showed preference for the valine-containing food (Extended Data Fig. 4k). To obtain temporal control of Sestrin suppression, we generated a conditional knockdown system using a short hairpin RNA (shRNA) targeting Sesn. Ubiquitous expression of the shRNA reduced Sestrin protein levels (Fig. 3g), and as expected, the preference of the flies for the leucine-containing food (Fig. 3h). Using a temperature-sensitive shRNA driver, we suppressed Sestrin specifically during adulthood ( Fig. 3i,j). This too reduced their leucine preference (Fig. 3k), indicating that the acute loss of Sestrin in adult flies is sufficient to impact the leucine preference. Notably, the temperature shift to 29 °C increased Sestrin levels (Fig. 3j), consistent with previous work showing that multiple stresses induce its transcription 17,34 . Thus, female flies can readily detect food lacking leucine even if it contains sugars and other amino acids. This ability requires Sestrin and its capacity to bind leucine. To further analyse the physiological relevance of leucine sensing through the Sestrin-mTORC1 axis, we tested the impact of both leucine and Sestrin on the choice between low-and high-protein diets: apple coated with a low or high amount of yeast extract, which is a complex type of food and the major protein source for laboratory-raised flies. Wild-type flies had a strong preference for the apple with a higher protein content. The addition of leucine to the protein-poor food reduced the preference of wild-type female flies for the protein-rich food, but only minimally impacted the preference of the Sesn L431E mutants (Extended Data Fig. 5a). Sesn −/− mutants showed a similar trend (Extended Data Fig. 5b), but it was not statistically significant. Together, these data suggest that flies use leucine sensing through the Sestrin-mTORC1 axis as a proxy for the food protein content. Sestrin regulates egg-laying behaviour We found that female flies prefer to lay eggs on the leucine-coated apples. To explore this further, we put 15 female and 5 male flies in the assay bottle and 24 h later counted the number of eggs on each piece of apple (Extended Data Fig. 6a). In an initial test, we found that flies laid many more eggs on an apple piece painted with a yeast suspension instead of water, consistent with yeast being a food rich in nutrients and the olfactory cues that attract flies 35-38 (Extended Data Fig. 6b). Wild-type flies that had been deprived of protein overnight deposited 5-6-fold more eggs on an apple piece coated with the 20 proteogenic amino acids instead of water (Extended Data Fig. 6c,d,f). Flies had a similar, albeit smaller (threefold), preference for leucine-coated apples, and this preference was more profound when the flies had been starved for protein. Importantly, flies did not distinguish between apple pieces painted with the same substance (Extended Data Fig. 6d,f). 
We found that Sesn L431E mutant flies lacked a strong preference for laying eggs on the apple coated with leucine and had a reduced preference for the apple with the 20 amino acids (Extended Data Fig. 6e), although the total number of eggs Sesn L431E mutant flies laid was about 25% reduced compared to that for the wild-type flies (Extended Data Fig. 3f). This altered egg-laying behaviour was also observed in the Sesn −/− flies, which laid a similar number of eggs to the wild-type animals (Extended Data Fig. 6g). Furthermore, the wild-type flies mildly preferred to deposit eggs on an apple piece painted with the 20 proteogenic amino acids instead of 19 (that is, lacking leucine), a much more complex choice, and this ability was reduced in the Sesn L431E flies (Extended Data Fig. 6h). When facing the same complex choice, Sesn −/− flies did not show a statistically significant different behaviour compared to the wild-type flies (Extended Data Fig. 6h), which might reflect the subtleness and noise of this complex choice set-up. Consistent with the leucine preference we observed in the food choice assay, we found that female flies also laid fewer eggs on food lacking leucine, and this capacity requires the intact leucine-binding pocket of Sestrin. This finding might reflect an active choice for egg deposition or the amount of time that flies spend on each apple owing to their preference for eating leucine-containing food. Glial Sestrin regulates leucine preference To determine in which tissue(s) Sestrin is required for flies to distinguish between food with or without leucine, we suppressed Sestrin with the Sesn shRNA under the control of a variety of cell-type-specific The data show the fold difference in relative food intake for the leucine-coated compared to water-coated apples. n ≥ 11 per time point. c, Rapamycin prevents flies from developing a preference for the leucine-coated apple. n ≥ 5 per condition. d-f, Sesn L431E and Sesn −/− animals fail to develop a preference for the leucine-containing apple. In d,e, n ≥ 4 per condition; in f, n ≥ 6 per condition. g, Immunoblotting for Sestrin following knockdown of Sesn in adult flies. Akt serves as a loading control. h, Ubiquitous knockdown of Sesn reduces the preference of adult female flies for leucine. The data show the fold difference in food intake for the leucine-coated apple relative to the water-coated apple. n ≥ 5 per condition. i, The approach used to achieve temporal control of Sesn knockdown in j,k. j, Sesn immunoblot showing Gal80 ts -mediated depletion of Sestrin in adult, but not developing, animals. Extracts were prepared from flies raised at the indicated temperatures. S6K serves as a loading control. Note that heat shock induces Sestrin protein levels in control flies. k, Knockdown of Sesn during adulthood is sufficient to decrease the preference of female flies for leucine-containing apples. n ≥ 13 per condition. a,i, Created with BioRender. com. In b-f,h,k the values are mean ± s.d. of biological replicates from a representative experiment. Each experiment was repeated three (d-k) or two (b,c) times with similar results. Statistical analyses were carried out using one-way analysis of variance (ANOVA) followed by Dunnett's multiple comparisons test (b), two-way ANOVA followed by Šídák's multiple comparisons test (c-e), one-way ANOVA followed by Šídák's multiple comparisons test (f) and two-tailed unpaired t-test (h,k). Article Gal4 drivers. 
Notably, Sesn knockdown specifically in glial cells (repo-Gal4) was sufficient to reduce the preference of flies for the leucine-containing food to a similar extent as when it was expressed ubiquitously (da-Gal4; Fig. 4a). In contrast, Sesn knockdown in many other tissues, including the fat body and muscle, did not impact the leucine preference. It is important to note that the intrinsic capacity of each Gal4 driver line to distinguish between food with or without leucine varied considerably (Extended Data Fig. 7a), probably owing to their different genetic backgrounds. Thus, although we are confident that the preference of flies for leucine-containing food requires Sestrin in glial cells, we are cautious in ruling out contributions from other tissues, particularly those examined with driver lines with intrinsically lower leucine preferences, such as the pan (elav-Gal4) and dopaminergic and cholinergic (ddc-Gal4) neuronal lines (Extended Data Fig. 7a). Consistent with an important role for glial Sestrin in regulating the leucine preference, expression of wild-type Sestrin just in glial cells in Sestrin-null flies partially rescued the defect in detecting leucine-poor food (Extended Data Fig. 7b). In wild-type flies, expression in the glial cells of either wild-type Sestrin or Sestrin(L431E) decreased the leucine preference, consistent with the inhibition of mTORC1 caused by Sestrin overexpression (Extended Data Fig. 7b). Indeed, overexpression under the control of repo-Gal4 of TSC1 and TSC2-well-established inhibitors of mTORC1 signalling-was also sufficient to decrease the leucine preference (Extended Data Fig. 7c). Analyses of a single-cell RNA-sequencing dataset indicated that Sestrin is expressed in most glial subtypes 39 (Extended Data Fig. 7d). Expression of the Sesn shRNA under the control of Gal4 driver lines that target subtypes of glial cells revealed that none caused as strong a suppression of the leucine preference as with the pan-glial driver The images in b,c were taken with 10× and 40× objectives, respectively. Scale bars, 50 µm (b) and 10 µm (c). d, In wild-type flies, but not Sesn L431E or Sesn −/− flies, leucine starvation increases the number of GFP-positive peri-oesophageal glial cells. Each point represents the ratio of the number of GFP-to Repo-positive cells in the oesophageal area of one fly brain. n ≥ 3 per condition. e, Proposed role of the Sestrin-mTORC1 pathway in regulating the preference of flies for leucinecontaining food. In a,d, the values are mean ± s.d. of biological replicates from a representative experiment. The data are representative of three independent experiments with similar results. Statistical analysis was performed using two-tailed unpaired t-test (a), and two-way ANOVA followed by Šídák's multiple comparisons test (d). repo-Gal4 (Extended Data Fig. 7e), although Wrapper-Gal4-driven Sesn knockdown led to a partial reduction of the leucine preference. Thus, multiple glial subtypes probably participate in mediating the leucine preference. Given the importance of glial Sestrin in mediating the leucine preference, we examined mTORC1 signalling in glial cells in the brains of adult female flies. To do so, we used a line expressing a GFP-based reporter for the MITF transcription factor 40 , which is the Drosophila orthologue of mammalian TFEB (ref. 41 ). mTORC1 suppresses MITF so that after mTORC1 inhibition, MITF activity increases 41 and drives GFP expression. 
In wild-type flies, starvation for total protein activated, as indicated by elevated GFP expression, MITF in Repo-positive glial cells, particularly in those surrounding the oesophagus (Extended Data Fig. 7f). Remarkably, starvation for just leucine also increased the number of peri-oesophageal GFP-positive glial cells (Fig. 4b-d and Extended Data Fig. 8a,b). In contrast, in Sesn −/− flies, leucine starvation did not increase the number of peri-oesophageal GFP-positive glial cells, which were few in number irrespective of the diet (Fig. 4c,d and Extended Data Fig. 8a,b). In Sesn L431E flies, there were many peri-oesophageal GFP-positive glial cells, and, like in Sesn −/− flies, leucine starvation did not increase their numbers (Fig. 4c,d and Extended Data Fig. 8a,b). Notably, quantification of GFP-positive cells in the mushroom body and optic lobe areas showed that, unlike in peri-oesophageal glial cells, the mTORC1 activity in these cells did not significantly respond to acute dietary treatments (Extended Data Fig. 8b-e). Thus, dietary leucine regulates mTORC1 signalling in a subset of glial cells in a fashion that depends on Sestrin and its capacity to bind leucine, and this regulation correlates with the ability of flies to distinguish between food that is rich or poor in leucine. Discussion We show that D. melanogaster requires Sestrin to regulate mTORC1 signalling in response to dietary leucine, survive a leucine-poor diet, and control leucine-sensitive physiological measures such as food choice and ovarian size. Flies with a point mutation that eliminates the leucine-binding capacity of Sestrin(L431E) have suppressed, leucine-insensitive mTORC1 signalling. Moreover, whereas wild-type flies can live on leucine-free diets for weeks, flies lacking Sestrin die much faster. In all, our results establish Sestrin as a physiologically relevant leucine sensor in vivo. Recently, Lu et al. reported complementary findings of an amino acid-sensing role of Sestrin upstream of mTORC1 in the control of Drosophila development, fecundity and longevity 42 . We find that Sestrin and its leucine-binding pocket are required for the preference of adult female flies for consuming, as well as laying eggs on, leucine-rich instead of leucine-poor food even when it contains sugars and other amino acids. To our knowledge, the ability of flies to choose food that is rich in leucine over food that lacks leucine but still retains a complex set of other nutrients has not been previously documented, although such behaviour has been reported in mice 43 . When given a starker choice than we provided-a pure sugar, such as sucrose or glucose, versus an individual amino acid-flies prefer to eat a variety of essential amino acids in sex-and developmental stage-dependent fashions [44][45][46] . There has been a long-standing interest in understanding the mechanisms that enable animals, including flies and rodents 43,47 , to prefer diets rich in protein. A variety of mechanisms in flies have been implicated, including amino acid transporters 44 , taste receptors 45,48,49 , GCN2 (ref. 50 ), serotonin 51 and dopamine signalling 50,52 , sex peptide receptor 53 , microbiome 54 , and mTOR and S6K (refs. 51,53 ). How these mechanisms coordinate together to impact organismal protein detection in the diet remains unclear. Our work raises several questions for future study. 
One such question concerns whether there is crosstalk between the food preference behaviour controlled by glial cells and acute changes in ovarian size caused by nutritional stress. Another question is whether female flies actively choose to lay more eggs on the leucine-containing food because it has the nutrients needed for larval growth, or whether the apparent preference simply reflects the amount of time they spend on it owing to their dietary preference. As it takes flies many hours to distinguish between leucine-containing and leucine-free food (Fig. 3b), it seems unlikely that the alterations in Sestrin eliminate the preference for leucine by substantially interfering with the capacity of flies to taste leucine. Rather, we favour the idea that leucine, through Sestrin-mTORC1, turns on a neuronal reward circuit that drives food consumption (see potential model in Fig. 4e). Previous work has identified a set of dopaminergic neurons that controls protein hunger 52 , and it will be interesting to examine whether Sestrin-mediated leucine-sensitive mTORC1 signalling can impact these cells. In this regard, it is intriguing that the preference for leucine requires the expression of Sestrin in glia as there is increasing evidence that glial cells can be key intermediates between an environmental signal and its modulation of a neuronal circuit [55][56][57] . Last, it will be interesting to investigate why mTORC1 activity in a set of peri-oesophageal glial cells is particularly sensitive to Sestrin-dependent regulation by dietary leucine. Online content Any methods, additional references, Nature Research reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-022-04960-2. Table 2 of a recent authentication 58 ). Suspension FreeStyle 293F cells were obtained from Thermo Fisher and cultured in FreeStyle 293 expression medium (Thermo Fisher (12338018)), supplemented with 100 IU ml −1 penicillin and 100 µg ml −1 streptomycin, at a shaking speed of 125 r.p.m. at 37 °C and 8% CO 2 , 80% humidity. No mycoplasma contamination was detected using PCR. Dissected tissues and whole flies were crushed physically using a bead beater in 1% Triton lysis buffer (same as above). The resulting lysates were cleared by centrifugation in a microcentrifuge (15,000 r.p.m. for 10 min at 4 °C) and analysed as above. For anti-Flag immunoprecipitations, the anti-Flag M2 affinity gel (Sigma number A2220) was washed with lysis buffer three times and then resuspended to a ratio of 50:50 affinity gel to lysis buffer. A 25 µl volume of a well-mixed slurry was added to cleared lysates and incubated at 4 °C in a shaker for 90-120 min. For anti-MYC immunoprecipitations, magnetic anti-MYC beads (Pierce) were washed three times with lysis buffer. A 30 µl volume of resuspended beads in lysis buffer was added to cleared lysates and incubated at 4 °C in a shaker for 90-120 min. Immunoprecipitates were washed three times; once with lysis buffer and twice with lysis buffer with 500 mM NaCl. Immunoprecipitated proteins were denatured by addition of 50 µl of SDS-containing sample buffer (0.121 M Tris, 5% SDS, 12.5% glycerol, 0.25 M dithiothreitol and bromophenol blue) and heated in boiling water for 5 min. Denatured samples were resolved by 8-12% SDS-PAGE, and analysed by immunoblotting. Leucine-binding assay and K d calculation. 
Leucine-binding assay and Kd calculation. For radiolabelled leucine-binding assays using Flag-tagged Drosophila Sestrin, suspension HEK293F cells were seeded at 2.5 million cells ml−1 and transfected with the pRK5-Flag-Sestrin cDNA using polyethylenimine. At 72 h after transfection, cells were rinsed once in cold PBS and lysed in 1% Triton lysis buffer (1% Triton, 40 mM HEPES pH 7.4, 2.5 mM MgCl2 and 1 tablet of EDTA-free protease inhibitor (Roche) per 25 ml buffer). Following an anti-Flag immunoprecipitation, the beads were washed four times with lysis buffer containing 500 mM NaCl and then incubated for 1 h on ice in cytosolic buffer (0.1% Triton, 40 mM HEPES pH 7.4, 10 mM NaCl, 150 mM KCl, 2.5 mM MgCl2) with the indicated amounts of [3H]leucine and unlabelled leucine. After 1 h, the beads were aspirated dry and rapidly washed four times with binding wash buffer (0.1% Triton, 40 mM HEPES pH 7.4, 300 mM NaCl, 2.5 mM MgCl2). The beads were aspirated dry again and resuspended in 80 µl of cytosolic buffer. Each sample was mixed well, and then 15 µl aliquots were separately quantified using a TriCarb scintillation counter (Perkin Elmer). This process was repeated in pairs for each sample, to ensure similar incubation and wash times for all samples analysed across different experiments. The affinity of Drosophila Flag-Sestrin for leucine was determined by first normalizing the bound [3H]leucine counts.
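A Kd such as the one referred to above is typically obtained by fitting normalized bound counts to a one-site saturation binding model. The following minimal sketch illustrates such a fit; it is not the authors' analysis script, and the concentrations, counts and function names are hypothetical.

```python
# Illustrative sketch of a one-site saturation fit for a Kd estimate.
# All numbers are hypothetical; this is not the published analysis script.
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc, bmax, kd):
    """Specific binding B = Bmax * [L] / (Kd + [L])."""
    return bmax * conc / (kd + conc)

# Hypothetical leucine concentrations (µM) and normalized bound counts
conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
bound = np.array([0.05, 0.14, 0.33, 0.55, 0.78, 0.92, 0.97])

(bmax, kd), cov = curve_fit(one_site, conc, bound, p0=(1.0, 50.0))
kd_se = np.sqrt(np.diag(cov))[1]
print(f"Kd ≈ {kd:.0f} µM (s.e. ≈ {kd_se:.0f}), Bmax ≈ {bmax:.2f}")
```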
Liquid chromatography-mass spectrometry-based metabolomics and quantification of metabolite abundances. Liquid chromatography-mass spectrometry (LC-MS)-based metabolomics was performed and data were analysed as previously described 59,60 using 500 nM isotope-labelled internal standards. Briefly, an 80% methanol extraction buffer with 500 nM isotope-labelled internal standards was used for whole-fly metabolite extraction. Samples were dried by vacuum centrifugation and stored at −80 °C until analysed. On the day of analysis, samples were resuspended in 100 µl of LC-MS-grade water, and insoluble material was cleared by centrifugation at 15,000 r.p.m. The supernatant was then analysed by LC-MS as previously described (refs. 59,60).

Synthetic fly food formulation and preparation. Drosophila diet formulations were derived from previous recipes 66,67 with the following modifications: the type of agar (Micropropagation Agar-Type II; Caisson Laboratories number A037); the final percentage of agar (1%); the amount of sucrose (25 g per litre of food); and the amino acids that were added to the stock solutions before or after autoclaving 68 , in the order described below. The amino acid composition of the diet, including the concentrations of leucine, isoleucine and valine, was based on the exome-matched Drosophila diet formulation (that is, the concentration used for a given amino acid corresponds with the prevalence of exons for that amino acid in the Drosophila genome) developed in a previous study 67 , which was found to be optimal for growth and fecundity without compromising lifespan. The rationale for which amino acids were part of the autoclaving process was based on solubility considerations 68 .

The complete procedure, formula and stock solutions for food production are as follows: prepare mixture 1 (Extended Data Tables 1, 3 and 4); stir using a stir bar; autoclave mixture 1 for 15 min; prepare mixture 2 (Extended Data Tables 2-4) and set aside; remove mixture 1 from the autoclave, combine it with mixture 2 and stir, making sure to mix well; quickly pipette the food into Drosophila vials (5-10 ml food per vial); allow the food to solidify/cool for roughly an hour, and then cover the vials (either with cotton plugs or with plastic wrap) and store the food at 4 °C. The food is good for about 3 weeks at 4 °C (it will shrink and pull away from the sides of the vials owing to evaporation). (Note: after autoclaving, mixture 1, which contains agar, can start solidifying, both before and after the two mixtures are combined, although combining the two mixtures will cause the food to cool down and solidify faster. Quickly combine and pour the food while the autoclaved mixture is still hot to avoid this. Adding water to the autoclave tray and keeping mixture 1 in this hot water until ready to combine and pour helps prevent premature solidification.) The catalogue numbers for the reagents not listed in Extended Data Tables 1-4 are as follows: sucrose (Sigma, S7903), agar (Caisson, A037), propionic acid (Sigma, P5561). Stocks can be stored at 4 °C for several months unless otherwise specified.

Generation of clones expressing the Sesn shRNA. Clones were generated by crossing yw,hs-flp; mCherry-Atg8a; Act>CD2>GAL4, UAS-nlsGFP/TM6B with the indicated UAS lines. Progeny of the relevant genotype were reared at 25 °C, and spontaneous clones were generated in the fat body owing to the leakiness of the heat-shock flipase (hs-flp).

Food preference assay. For the assay, the surface of fresh Gala apples was sprayed and cleaned using 70% ethanol. Fresh Gala apple pieces (about 1 g) containing both a piece of peel and pulp were cut on a clean field using a knife (both the knife and the field were precleaned with 70% ethanol). Two apple pieces with similar shape and weight were placed in the opposite corners of a 6 oz (177 ml; 57 length × 57 width × 103 height (in mm)) clean Drosophila bottle. Solutions of 100 µl in volume that contained one DNA oligomer (final concentration 3.5 ng µl−1) and substances (that is, sterile water, amino acid solutions and so on) were placed evenly on top of the apple pieces and allowed to soak in for 1.5-2 h. Age-synchronized adult flies (15 female and 5 male animals) were flipped into these assay bottles and allowed to feed ad libitum on the apples for the indicated times in the time course experiments (Fig. 3b and Extended Data Fig. 4g) and for 24 h in the other food preference experiments. CO2-anaesthetized flies were collected using a tweezer. From each bottle, two tubes of female flies were collected with five flies per tube. Five flies were homogenized for each qPCR sample. Homogenization was performed using a bead beater in the cold after adding 250 µl of squishing buffer (10 mM Tris-HCl pH 8.2, 1 mM EDTA, 1 mM NaCl) and 0.5 µl of 20 mg ml−1 proteinase K (Thermo Fisher number AM2546). The whole-fly lysates were digested at 37 °C for 30-40 min after homogenization, followed by proteinase K inactivation at 95 °C for 5 min. The samples were centrifuged for 10 min at 15,000 r.p.m. at room temperature, and 2 µl of the supernatant was loaded in each qPCR reaction in a 96-well qPCR plate.
We used the SYBR green qPCR master mix from Bio-Rad and a CFX96 Touch Real-Time PCR Detection System with a melting temperature of 60 °C and 40 cycles per run. Genomic Cyp1 qPCR Ct values were used to control for extraction efficiency. For every batch of samples, an average of Cyp1 qPCR Ct values was taken and all samples beyond ±0.5 Ct away from the average were discarded. Standard curves for DNA oligomers 1 and 2 were generated, and the amount of DNA oligomer from each tube of flies was calculated by fitting their Ct values to the standard curves. The preference index was generated by dividing the calculated amount of DNA oligomer 1 by that of DNA oligomer 2. To remove external oligomer that may stick to the outside of the flies, we used a four-step protocol described previously 27 : a 10-min wash with 10% Contrex AP Powdered labware detergent (catalogue number 5204, Decon Laboratories); a 5-min wash in double-distilled H 2 O; a 2-min wash in 30% bleach; and a 5-min wash in double-distilled H 2 O. All washes were performed in a 1,500 µl microfuge tube with continuous rocking at room temperature. For Fig. 3c and Extended Data Fig. 4f, we fed the flies with food containing either 25 µM rapamycin or 25 µM ethanol for 2 days before either protein starvation overnight or not (including 25 µM Rapamycin or 25 µM ethanol). Then for the final choice assay, 25 µM of rapamycin or 25 µM ethanol was added to both apple pieces in the container. Immunofluorescence assays. Fat bodies from aged larvae (96 h after egg laying) were dissected in PBS at room temperature, fixed for 25-30 min in 4% formaldehyde, washed twice for 10 min in PBS 0.3% Triton (PBST), blocked for 30 min (PBST, 5% BSA, 2% FBS, 0.02% NaN 3 ), incubated with primary antibodies in the blocking buffer overnight, and washed four times for 15 min. Secondary antibodies diluted 1:500 in PBST were added for 1 h and tissues were washed four times before mounting in Vectashield (Vector Laboratories) containing 4′,6-diamidino-2-phenylindole (DAPI). Brains from 5-10-day-old adult female flies were dissected and processed as in a previous study 69 . Images for Fig. 2c and Extended Data Fig. 3d were acquired on a Zeiss Axio Zoom v16. Images for Fig. 4b,c and Extended Data Figs. 1c, 7f and 8b,c were acquired on a Zeiss AxioVert200M microscope with a 63× or 40× oil-immersion objective or a 10× objective and a Yokogawa CSU-22 spinning-disc confocal head with a Borealis modification (Spectral Applied Research/Andor) and a Hamamatsu ORCA-ER CCD camera. The MetaMorph software package (Molecular Devices) was used to control the hardware and image acquisition. The excitation lasers used to capture the images were 405 nm, 488 nm and 561 nm. Images for Extended Data Fig. 6b,c were acquired on an iPhone XR camera through a binocular microscope. Egg-laying preference assay. The set-up for the egg-laying preference assay was identical to that for the food preference assay. Instead of collecting female flies for qPCR analyses, the two apple pieces were removed from the bottle and examined under a binocular microscope. The number of eggs on each apple piece was determined. Ovary size quantification. Ovaries were dissected in PBS and bright-field images were acquired using a Zeiss Axio Zoom v16 scope. The size of the ovaries was quantified using the average area of individual ovaries on ImageJ. Developmental timing. Three-day-old crosses were used for 3-4-h periods of egg collection on standard laboratory food. 
Newly hatched L1 larvae were collected 24 h later for synchronized growth using the indicated diets at a density of 30 animals per vial. The time to develop was monitored by counting the number of animals that underwent pupariation, every 2 h in fed conditions or once/twice a day in starved conditions. The time at which half the animals had undergone pupariation is reported. For larval developmental timing experiments, a 10%-leucine chemically defined diet was used because complete leucine starvation quickly caused lethality before any size comparison across genotypes could be efficiently and meaningfully performed.

Lifespan experiments. To generate age-synchronized adult flies, larvae were raised on laboratory food at low density, transferred to fresh food after emerging as adults and allowed to mate for 48 h. Animals were anaesthetized with low levels of CO2 and sorted at a density of 25 flies per vial. Each condition examined used 8-10 vials of flies. Flies were transferred to fresh vials three times per week, at which point deaths were also scored. For adult flies, a leucine-free diet or a valine-free diet was used.

Statistical analyses. For non-survival experiments, two-tailed unpaired t-tests, multiple t-tests, and one-way or two-way ANOVA analyses followed by post hoc tests were used for comparisons between groups in GraphPad Prism (GraphPad Software v9). All comparisons were two-sided unless specified otherwise. All analysed P values are indicated for each comparison made within all figure panels. P values of less than 0.05 were considered to indicate statistical significance. For survival comparisons in Fig. 2a,b, two-proportion z-tests were performed. Pupariation percentage (Extended Data Fig. 2a,h) data were compared using permutation tests, in which the test statistic was the difference in mean pupariation times of the two genotypes. The distribution of the test statistic under the null hypothesis was estimated by simulating 100 million rearrangements of the data. Permutation tests were performed in R (script available in Supplementary Data 2). Results for all statistical analyses are summarized in source data files corresponding to each figure.

Analysis of survival data. All data were complete and uncensored. Kaplan-Meier estimates of the survival function were plotted and used to compute median survival times. Log-rank tests were used to compare survival distributions, and univariate Cox proportional hazard analysis (with ties handled by the Efron approximation) was used to compute hazard ratios between Sestrin-mutant and wild-type flies within individual dietary conditions. To examine the interaction between genotype and diet (specifically, using the alternative hypothesis that the lifespan defect of Sestrin-mutant versus wild-type flies is exacerbated on a leucine-free compared to a valine-free diet), one-tailed Wald tests were conducted on the interaction coefficients generated by two-factor Cox proportional hazard models with interaction terms (with ties handled by the Efron approximation). All statistical analyses on survival data were performed in R (script available in Supplementary Data 3).
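As a hedged illustration of the permutation test described above (the published analysis was run in R; the script is in Supplementary Data 2), the same idea can be sketched in Python. The pupariation times below are invented, and only 10,000 rearrangements are simulated rather than the 100 million used for the paper.

```python
# Minimal sketch of a two-sample permutation test on pupariation times.
# Test statistic: difference in mean pupariation time between genotypes.
# Data and permutation count are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def permutation_p(times_a, times_b, n_perm=10_000):
    observed = times_a.mean() - times_b.mean()
    pooled = np.concatenate([times_a, times_b])
    n_a = len(times_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # random relabelling of animals
        diff = pooled[:n_a].mean() - pooled[n_a:].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return (hits + 1) / (n_perm + 1)             # add-one to avoid p = 0

wild_type = np.array([118.0, 120.0, 122.0, 119.0, 121.0])   # hours, hypothetical
mutant = np.array([124.0, 126.0, 123.0, 127.0, 125.0])
print(permutation_p(wild_type, mutant))
```

The qPCR-based food-preference index described in the food preference assay above can be sketched in the same spirit: fit a standard curve for each DNA oligomer, apply the genomic Cyp1 Ct quality filter, convert sample Ct values to oligomer amounts, and take their ratio. Every number below is a placeholder, not data from this study.

```python
# Illustrative sketch of the qPCR-based preference-index calculation.
import numpy as np

def fit_standard_curve(log10_amounts, ct_values):
    """Linear standard curve: Ct = slope * log10(amount) + intercept."""
    slope, intercept = np.polyfit(log10_amounts, ct_values, 1)
    return slope, intercept

def ct_to_amount(ct, slope, intercept):
    return 10 ** ((ct - intercept) / slope)

def passes_cyp1_filter(cyp1_ct, batch_mean, tolerance=0.5):
    """Discard samples whose Cyp1 Ct deviates more than ±0.5 from the batch average."""
    return abs(cyp1_ct - batch_mean) <= tolerance

log_amounts = np.log10([1e-3, 1e-2, 1e-1, 1.0])              # ng per reaction, hypothetical
curve1 = fit_standard_curve(log_amounts, [30.1, 26.8, 23.4, 20.0])
curve2 = fit_standard_curve(log_amounts, [30.4, 27.0, 23.7, 20.3])

if passes_cyp1_filter(cyp1_ct=24.3, batch_mean=24.1):
    amount1 = ct_to_amount(24.9, *curve1)                     # DNA oligomer 1
    amount2 = ct_to_amount(26.2, *curve2)                     # DNA oligomer 2
    print(f"preference index = {amount1 / amount2:.2f}")
```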
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this paper.

Data availability. The data that support the findings of this study are available from the corresponding authors and the Whitehead Institute (sabadmin@wi.mit.edu) upon reasonable request. Source data are provided with this paper.

Values are mean ± SD of biological replicates from a representative experiment. n = 4 independent biological samples. Two samples, from wild-type (OreR) leucine-free and valine-free conditions, respectively, failed to yield decent peaks for leucine levels and were thus discarded. Multiple unpaired t tests, Holm-Šídák multiple comparison method. c, Sesn knockdown prevents autophagy induction upon leucine deprivation. Fat body cells in mid-third instar larvae expressing mCherry-Atg8a were fed the indicated diets for 4.5 h. The Sesn RNAi was expressed in clones of cells (GFP, outlined) with a FLP-out system 70 . Wild-type (OreR) animals were given the indicated food choices and the preference fold-difference is shown. n (leucine vs water) = 8, n (valine vs water) = 10, n (leucine vs valine) = 7. f, Rapamycin treatment reduces fly food consumption. Vehicle- or Rapamycin-pre-treated animals were given a choice between leucine- or water-coated apples. For the Rapamycin group during the choice assay, animals were fed on apples painted with Rapamycin in addition to either leucine or water. Data show the normalized values of food consumption. n = 5 for both conditions. g, Sesn L431E animals do not have a preference for valine- over water-painted apples. Animals were given a choice between valine- or water-coated apples and food preference was measured at the indicated time points. Data show the fold-difference in relative food intake for the valine-coated apple compared to the water-coated apple. n = 10 (2 h), 12 (4 h), 12 (6 h), 9 (9 h), and 9 (24 h). h,i, Sesn L431E animals have decreased food intake regardless of the leucine content of the food (h), and Sesn −/− animals have increased food intake regardless of the leucine content of the food (i). n = 4 for all conditions. j, Whole-body re-expression of wild-type Sestrin driven by Tub>Gal4 is sufficient to partially restore the preference for leucine-containing food of Sesn −/− adult female flies. Animals with the indicated genotypes were given the choice between leucine- or water-coated apples. Data show the preference fold-difference. n (attP2) = 10, n (Sestrin WT) = 6. k, Adult female flies do not develop a preference for valine-containing apple regardless of their genotype. Animals with the indicated genotypes were given the choice between leucine- or water-coated apples.

Extended Data Fig. 5 | Leucine-sensing via the Sestrin-mTORC1 axis contributes to the detection of the protein content of food. a, Wild-type (OreR) flies prefer food containing a high amount of yeast extract, and this preference is reduced by the addition of leucine to the food containing a low amount of yeast extract. Sesn L431E flies have a reduced preference for the food containing a high amount of yeast extract, and the addition of leucine has minimal impact on the preference. How the food preference index was calculated is described in the Methods. n (wild type OreR, no leucine) = 5, n (wild type OreR, with leucine) = 7, n (Sesn L431E , no leucine) = 6, n (Sesn L431E , with leucine) = 9. b, As in (a), a choice experiment for wild-type w 1118 and Sesn −/− flies. n (wild type w 1118 , no leucine) = 9, n (wild type w 1118 , with leucine) = 8, n (Sesn −/− , no leucine) = 9, n (Sesn −/− , with leucine) = 12. Values are mean ± SD of biological replicates from a representative experiment. Data are representative of three independent experiments with similar results. Statistical analysis was performed using two-tailed unpaired t tests, Holm-Šídák method.
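The legends above refer to multiple unpaired t tests corrected with the Holm-Šídák method. A minimal, purely illustrative sketch of such a correction (placeholder group values, not the study's data) is shown below.

```python
# Illustrative: several unpaired t-tests with Holm-Šídák correction.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

comparisons = {
    "leucine-free": (np.array([1.00, 1.20, 0.90, 1.10]), np.array([0.60, 0.70, 0.50, 0.60])),
    "valine-free":  (np.array([1.00, 1.10, 1.00, 0.90]), np.array([0.90, 1.00, 1.10, 0.80])),
}

pvals = [stats.ttest_ind(a, b).pvalue for a, b in comparisons.values()]
reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="holm-sidak")
for name, p_raw, p_adj, sig in zip(comparisons, pvals, p_adjusted, reject):
    print(f"{name}: raw p = {p_raw:.4f}, adjusted p = {p_adj:.4f}, significant = {sig}")
```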
Extended Data Fig. 6 | Flies prefer to lay eggs on leucine-containing food in a fashion that requires the leucine-binding capacity of Sestrin. a, Schematic of the set-up used in the egg-laying preference assay. Two identical apple pieces were painted with solutions containing different substances and placed on opposite sides of a container. Animals were allowed to feed ad libitum over the course of the assay, and the number of eggs deposited on each apple was counted after 24 h. b, c, Wild-type flies prefer to lay eggs on yeast- or amino acid-painted apples over water-painted apples. Scale bars, 1 mm. d-h, Sesn L431E and Sesn −/− animals do not prefer to lay eggs on the leucine-containing apple. (a) created with BioRender.com. Values are mean ± SD of three biological replicates from a representative experiment. Data are representative of two independent experiments with similar results. Statistical analysis was performed using one-way ANOVA followed by Tukey's multiple comparisons test (d-g) and Šídák's multiple comparisons test (h).

Extended Data Fig. 10d: the single cell RNAseq dataset analyzed is Aerts_Fly_AdultBrain_Filtered_57k, which is available here: scope.aerslab.org. All codes required to run the CPH and permutation statistical analyses are provided as source data.
2022-07-22T06:19:31.853Z
2022-07-20T00:00:00.000
{ "year": 2022, "sha1": "b0db8e347c14c933c881ac41dd9d3126467420ff", "oa_license": "CCBY", "oa_url": "https://doi.org/10.21203/rs.3.rs-103234/v1", "oa_status": "GREEN", "pdf_src": "SpringerNature", "pdf_hash": "11c0fe7a89343ab487b3c615620f26c7cd24978f", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257655287
pes2o/s2orc
v3-fos-license
Clinical Characteristics of Children With Acute Tubulointerstitial Nephritis: A Single-Center Experience

Objective: Acute tubulointerstitial nephritis (ATIN) is an infiltration of the kidney interstitium with inflammatory cells. Medications are most frequently blamed for the etiology. Patients may present with non-specific signs and symptoms. Therefore, the diagnosis of ATIN is often delayed. In this study, clinical characteristics, treatment protocols, and outcomes of children diagnosed with ATIN are presented. Methods: This is a retrospective study based on the data of 18 patients diagnosed with ATIN between 2017 and 2022 at Gazi University. Patients were divided into two groups: steroid-treated (n=5) and non-steroid-treated (n=13). Clinical features and laboratory evaluations were compared between the groups. Results: The mean age of the patients was 14.4±2.6 years, and the great majority were girls (88.9%, n=16). ATIN was mostly medication-related (n=17, 94.4%). Steroids were started in one-third of patients using non-steroidal anti-inflammatory drugs. Steroids were started in 45.4% of the patients with eosinophilia, 75% of those with pyuria, 66.6% of those with hematuria, and half of the patients with increased kidney echogenicity. Kidney function returned to the normal range in all patients. In steroid-treated patients, recovery times for serum creatinine were longer (71.2±100.7 vs. 7.2±2.5 days in the non-steroid-treated group), although the blood eosinophil count reached normal values more rapidly (3.1±1.0 vs. 5.4±2.3 days). Conclusion: ATIN can be associated with diverse clinical presentations. The first and most important step of treatment is to discontinue the medication responsible for the etiology. Steroid treatment improves eosinophilia more rapidly. However, randomized controlled studies are needed to determine further treatment steps and establish a more definitive treatment protocol.

Introduction

Acute tubulointerstitial nephritis (ATIN) is an infiltration of the renal interstitium with inflammatory cells, including neutrophils, monocytes, lymphocytes, and eosinophils [1]. It is observed at a rate of 3-7% in kidney biopsies in children [2]. Among the causes of ATIN, medications (non-steroidal anti-inflammatory drugs (NSAIDs), beta-lactam antibiotics, and proton pump inhibitors (PPIs)) are most frequently blamed. This is followed by infections, immune-mediated diseases, tubulointerstitial nephritis and uveitis (TINU) syndrome, granulomatous diseases, and genetic causes. In a significant group of patients, no causative agent could be detected [1]. In ATIN of any cause, patients may present with non-specific signs and symptoms of acute kidney dysfunction. These include the acute or subacute onset of nausea, vomiting, and malaise [3]. In drug-induced ATIN, extrarenal manifestations of hypersensitivity, such as fever, skin rash, and eosinophilia, are relatively common. However, many patients are asymptomatic [4]. Therefore, since patients usually present with non-specific symptoms and findings, the diagnosis of ATIN is often delayed. While the majority of patients recover spontaneously, a severe clinical picture that may rarely progress to kidney failure may be observed [5]. In this study, clinical characteristics, treatment protocols, and outcomes of pediatric patients diagnosed with ATIN who were followed up in a single center are presented.
Study design This is a retrospective study based on data collected from children and adolescents diagnosed with ATIN between 2017 and 2022 at the Department of Pediatric Nephrology, Gazi University. All data were retrospectively obtained from the electronic medical record system. Demographic characteristics, complaints at presentation, physical examination findings, and possible etiological factors (including the presence of medication use, previous infections, or chronic diseases) were evaluated. Laboratory values, including complete blood count (white blood cell, neutrophil, lymphocyte, and eosinophil counts); serum biochemistry, including serum creatinine and albumin levels; dipstick examination; and urine microscopy findings (urine density, proteinuria, hematuria or pyuria) were noted at admission and at the last follow-up. Urinary system ultrasonography and kidney biopsy findings (if available) were recorded. Especially in drug-related ATIN, spontaneous recovery may be achieved with early discontinuation of the medication [1]. However, persistence of kidney dysfunction (persistence of elevated serum creatinine, persistent proteinuria, and/or hematuria) after discontinuation of the related medication is a major indication for commencement on steroids [6]. In our study, patients who were started on steroids because of persistently elevated creatinine, proteinuria, and/or hematuria were retrospectively evaluated, and these patients were grouped in a separate group. Steroid dose and duration were noted in the group receiving steroid therapy. Demographic and laboratory values and differences in terms of rates and durations of recovery were compared in these two groups. Kidney clinical improvement was defined as a decrease in serum creatinine to its basal value and improvement of proteinuria, hematuria, and pyuria. This study was approved by Gazi University Clinical Research Ethics Committee with the approval number 2022-1467. Statistical analysis In the presentation of descriptive statistics, the data obtained by measurement were expressed as mean ± standard deviation (SD) and categorical data as number (percentage). Cross-table analyses and Fisher's exact chi-square tests were used to compare the qualitative characteristics of the groups. The Shapiro-Wilk test was used to determine the normal distribution of numerical measurements in groups. Two groups were compared with the t-test in independent groups and Mann-Whitney U test for those who did not show normal distribution. IBM SPSS Statistics for Windows, Version 22.0 (Released 2013; IBM Corp., Armonk, New York, United States) was used for all statistical analyses. A significance level of p<0.05 was taken. Results Eighteen patients were included in the study. The mean age of the study group was 14.4±2.6 years, and the great majority were girls (88.9%, n=16). The mean weight and height z-scores of the patients were within normal intervals for age (0.67±1.32 and 0.60±0.55, respectively). The most common complaints at initial admission were nausea and vomiting (77.8%, n=14). These were followed by flank pain, fever, malaise, and weight loss. On physical examination, costovertebral angle (CVA) tenderness was present in 61.1% of the patients (n=11). None of the patients had uveitis. The etiologic evaluation revealed that half of the patients (50%, n=9) used NSAIDs before the onset of signs and symptoms. This was followed by beta-lactam antibiotics and PPIs. 
One (5.6%) patient was taking a medication containing the active substance mirtazapine, and only one (5.6%) patient revealed no prior medication exposure. There were no patients taking more than one medication at the same time or using an herbal-based product. The patient on mirtazapine had been taking the medication for about three months, and one of the patients on NSAIDs had been taking the medication for about 20 days. Except for these two patients, there was no history of chronic medication use. None of the patients had a history of infection or other chronic/systemic diseases. The mean serum creatinine level and urinary protein excretion were 2.08±1.06 mg/dL and 9.0±4.6 mg/m²/h, respectively, whereas urine density was low (1005.8±4.4). No patient required kidney replacement therapy. Hypoalbuminemia was observed in 16.7% (n=3) of the patients, and eosinophilia was observed in 61.1% (n=11). In urine dipstick evaluation, 27.8% (n=5) of the patients had 2+ proteinuria. Moreover, leukocyturia was detected in 22.2% (n=4) of the cases and hematuria in 11.1% (n=2). Kidney parenchymal echogenicity was increased in 10 (55.6%) of the patients on ultrasonography. Five (27.7%) patients underwent kidney biopsy. Kidney biopsy showed interstitial infiltrates and interstitial edema consisting of mononuclear cells, mainly lymphocytes and eosinophils, which were diffuse in one patient (the patient with chronic mirtazapine use) and focal in the other four patients. The clinical and laboratory characteristics of the patients are shown in Table 1. All patients were hospitalized, and steroid therapy was started at 1 mg/kg/day (maximum 60 mg/day) in five (27.7%) patients with no decrease or a further rise in serum creatinine levels during the follow-up. The mean duration of steroid use was 3.37±3.29 (0.5-9) months. Although the mean age of the patients who required steroid therapy in addition to supportive measures seemed to be higher than that of those who received supportive treatment only, the difference was not statistically significant (p>0.05). The frequency of male sex was also higher in the steroid group (p>0.05). Height, weight, and body mass index (BMI) z-scores were numerically similar in both groups (p>0.05). Although not statistically significant, nausea and vomiting were more frequent in the steroid-free group (p=0.261). All other complaints, such as flank pain, malaise, fever, or weight loss, were more frequent in the steroid group (p>0.05, for all). A prior history of NSAID intake as the offending agent was found to be the most common cause in both groups, but the rate was slightly higher in patients who needed to start steroid therapy (p=0.599). Steroids were not initiated in any of the patients who used beta-lactam antibiotics or PPIs (p=0.103 and p=0.352, respectively). In the steroid group, eosinophilia was present in all patients, while hypoalbuminemia was not detected in anyone (p=0.036 and p=0.239, respectively). The majority of the patients with pyuria (75%) or hematuria (66.6%) necessitated steroid treatment, which was significantly higher than those without these findings (p=0.017 and p=0.016, respectively). Hyposthenuria was more frequent in the steroid-free group (p=0.099), whereas proteinuria (dipstick) was less frequent in that group (p=0.132). In addition, kidney echogenicity was increased in all patients in the steroid group (p=0.019), and steroids were used in all patients who underwent kidney biopsy (n=5, p<0.001).
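The group comparisons reported above follow the workflow given in the statistical analysis section: Fisher's exact test for categorical variables and, for continuous variables, the Shapiro-Wilk test to choose between an independent-samples t-test and the Mann-Whitney U test. The study itself used SPSS v22, so the sketch below only illustrates that logic, and every number in it is a placeholder.

```python
# Illustrative sketch of the test-selection logic described in the Methods.
# The study's analyses were done in SPSS; all values here are placeholders.
import numpy as np
from scipy import stats

def compare_continuous(group1, group2, alpha=0.05):
    """Shapiro-Wilk on both groups, then t-test (normal) or Mann-Whitney U."""
    normal = (stats.shapiro(group1).pvalue > alpha and
              stats.shapiro(group2).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(group1, group2).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(group1, group2).pvalue

steroid_creatinine = np.array([2.5, 3.1, 2.8, 2.2, 3.4])          # mg/dL, hypothetical
supportive_creatinine = np.array([1.9, 2.4, 2.0, 1.8, 2.1, 2.3])  # mg/dL, hypothetical
print(compare_continuous(steroid_creatinine, supportive_creatinine))

# 2x2 table (hypothetical counts): eosinophilia present/absent by treatment group
table = [[5, 0], [6, 7]]
odds_ratio, p_value = stats.fisher_exact(table)
print("Fisher's exact p =", round(p_value, 3))
```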
Clinical and laboratory values between the groups followed up with supportive treatment only and the groups in which steroids were added to the treatment are compared in Table 2. At the end of the follow-up period of mean 31.2±19.1 (5-60) months, kidney functions were normalized in all patients. Aside from serum creatinine, eosinophil count, urinary protein excretion, pyuria, and hematuria rates decreased, whereas urine density increased in all patients. On the other hand, the time for serum creatinine to reach basal value and for 24-h urinary protein excretion level to regress to normal range were longer in the steroid-treated patients compared to patients on supportive treatment (p=0.028 and p=0.040, respectively). In the steroid-treated group, two patients had a history of chronic medication use, and one of these two patients had diffuse rather than focal inflammation on kidney biopsy, unlike the other patients (p<0.001). They had the longest time for mean serum creatinine to return to baseline 165.0±106.0 (90-240) days compared to the rest of the patients (p<0.001). The blood eosinophil count reached normal values more rapidly in the steroid users (p=0.049). Although not statistically significant, urine density also normalized more quickly in patients treated with steroids (p>0.05). The time required for the normalization of the laboratory parameters is shown in Table 3. Two of the patients were re-exposed to the causative agents (PPI and beta-lactam antibiotics) a few months after the diagnosis of ATIN, but no clinical or laboratory abnormalities were detected in these patients. Discussion In this study, all ATIN cases were drug-induced, except for a patient with an undetermined etiology. Corticosteroid therapy was used in half of the patients with flank pain and fever. Although steroids were not initiated in any of the patients who were notified to have beta-lactam antibiotics or PPI use in the etiology, one-third of patients with a history of NSAID use required the treatment. We started steroids in about half of the patients (45.4%) with eosinophilia and in more than half of the patients with pyuria (75%) or hematuria (66.6%). Besides, it was used in half of the patients with increased kidney parenchymal echogenicity and in all patients who underwent kidney biopsy (all of whom had interstitial inflammation). At the end of the follow-up period, kidney functions returned to normal ranges in all patients irrespective of steroid use. However, recovery times for serum creatinine and proteinuria were significantly longer in steroid-treated patients. The blood eosinophil count reached normal values faster in the steroid users. The results regarding sex distribution in ATIN are quite variable. In one single-center study, 57.9% of pediatric patients diagnosed with ATIN were girls, and in another single-center study, the proportion of girls was 90% [4,7]. In our study, 88.9% of the study group consisted of girls. There is no study in the literature for ATIN and sex predominance; however, a study has shown that drug allergy is more common in girls [8]. Since medication exposure was frequently found in the etiology of ATIN in our study and an allergic component is thought to be present in ATIN, a higher frequency of female sex may be an expected finding [1]. ATIN has a wide clinical spectrum ranging from acute kidney injury, which may improve spontaneously by removal of the etiological factor, to very severe kidney involvement that requires dialysis [5]. 
In 72.2% of our patients, elimination of the possible etiological factors was sufficient to normalize acute kidney injury findings spontaneously over time. In the remaining patients, steroids were started in addition to supportive therapy. Symptoms and signs of ATIN may be nonspecific, but uremic symptoms may occur if kidney failure develops [9]. The absence of the classic triad of fever, eosinophilia, and allergic rash does not exclude ATIN. Minimal proteinuria is frequently found in patients, but nephrotic level proteinuria may develop in rare cases. A routine urine analysis may reveal the presence of white or red blood cells [6]. Eosinophiluria may also be demonstrated by Hansel's stain [10]. However, assessment for eosinophiluria was lacking in our study due to technical issues. In our patients, non-specific symptoms such as weakness, fever, weight loss, nausea, and vomiting were the most common complaints. Since none of our patients had an allergic rash, no one fulfilled the classic triad of ATIN. Although some of our patients had some degrees of proteinuria, none of them showed nephrotic level proteinuria. Some of our patients also had pyuria and hematuria. The diagnosis of ATIN can be confirmed by kidney biopsy due to the presence of focal or diffuse interstitial infiltrates consisting predominantly of mononuclear cells, including lymphocytes and eosinophils, and interstitial edema. A biopsy is undertaken when the diagnosis is unclear or when the patient does not improve clinically following discontinuation of the medication suspected as the cause of AIN and kidney failure [11]. In our patients, interstitial infiltrates accompanied by eosinophils and interstitial edema were present in all patients who underwent biopsy. However, a mild degree of tubulointerstitial fibrosis was observed in the patient with chronic use of the offending medication, mirtazapine. Treatment is based on the clinician's previous experience. Supportive therapies such as close monitoring of intravascular volume and maintenance of electrolyte balance are essential. Kidney function may improve with treatment of the underlying cause. Especially in drug-related ATIN, spontaneous recovery may be achieved with early discontinuation of the medication [1]. However, persistence of kidney dysfunction after discontinuation of the related medication is a major indication for the initiation of steroids [10]. In our study, supportive measures were applied in the majority of our patients (72.2%), and kidney functions normalized in a short time with the elimination of the possible etiological agent. However, steroid therapy was added to the supportive treatment for patients whose kidney dysfunction persisted after discontinuation of the medication implicated in the etiology. There is no consensus in terms of corticosteroid dose and duration of use [11]. In a study by Gonzalez et al., patients diagnosed with drug-related ATIN were classified according to the presence of steroid treatment or not, and final serum creatinine was found to be significantly lower in the group receiving steroids compared to those without. Moreover, almost half of the non-steroid group had to be admitted for chronic dialysis sessions. In that study, it was also shown that when there was a delay in the initiation of steroid treatment, kidney functions did not fully recover [12]. 
In another study, patients who received prednisolone 2 mg/kg/day (maximum 60 mg/day) for one month followed by a gradual tapering of the dose were compared with patients who were on supportive treatment only, and despite a rapid decrease in serum creatinine in the steroid users, no significant difference was found in serum creatinine levels at the end of treatment [13]. In our patients, the starting dose was 1 mg/kg/day in the steroid-treated group. Although urine density and serum eosinophil levels returned to normal ranges more rapidly in the steroid users, proteinuria and high serum creatinine persisted for a much longer time. We attribute the delayed recovery of markers associated with clinical improvement in the steroid-treated group to the presence of chronic medication use in two patients in this group. In summary, we believe that early discontinuation of the medication implicated in the etiology and/or early steroid treatment may lead to a decrease in inflammation and thereby the risk of fibrosis, and it may induce rapid recovery. Like our patients, most patients achieve complete renal recovery in the long term. Chronic kidney disease rarely develops. Progression into a chronic process is usually dependent on the underlying cause. Particularly in drug-associated ATIN, exposure to the triggering medication for more than one month and delay in removal of the triggering agent (as in our patient with mirtazapine use), as well as a systemic inflammatory state, genetic predisposition, prolonged acute kidney injury, intense neutrophil infiltration on biopsy, diffuse inflammation or severe fibrosis, and the presence of interstitial granulomas, are indicators of poor prognosis. The small number of subjects and the inclusion of only patients with medication exposure in the etiology can be considered the main limitations of this study. Nevertheless, although previous studies report that initial clinical symptoms and laboratory tests (serum creatinine, urine analysis, and amount of proteinuria) do not have distinguishing features in terms of renal prognosis [1], our study showed that echogenic kidneys, eosinophilia, and an active urine sediment at the onset of the disease were associated with a significantly higher percentage of steroid use in the course of the disease. Therefore, we believe that careful clinical assessment and early initiation of steroids in the presence of indicators suggestive of more severe disease will have a beneficial effect on the long-term prognosis of patients with ATIN.

Conclusions

ATIN is a major cause of acute kidney injury in children. It can be associated with diverse clinical presentations that range from an asymptomatic, spontaneously recovering clinical state to a very severe clinical course that progresses into kidney failure. The first and most important step of treatment is elimination of the etiological condition. Another important step in treatment is the early initiation of steroid therapy, which can trigger rapid healing by reducing inflammation and thus the risk of fibrosis. However, randomized controlled studies are needed to determine further treatment steps and establish a more definitive treatment protocol.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Gazi University Clinical Research Ethics Committee issued approval 2022-1467. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue.
Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2023-03-22T15:07:03.353Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "9311f1320fee5e9062df4e3a5206ea8c112b4953", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/original_article/pdf/134993/20230320-23887-2dd1fm.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ac36730d61b75b691da0f4243074ee813f18d7a8", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
219561451
pes2o/s2orc
v3-fos-license
Cocaine-Induced Acute Interstitial Nephritis: A Comparative Review of 7 Cases Acute interstitial nephritis is a well-known cause of acute kidney injury, but its association with cocaine use is extremely rare. In this article, we chronicle the case of a patient who developed acute interstitial nephritis secondary to cocaine insufflation. Furthermore, we conducted a systematic literature search of MEDLINE, Cochrane, Embase, and Scopus databases regarding cocaine-induced acute interstitial nephritis. A comprehensive review of the search results yielded a total of 7 case reports only. The data on patient characteristics, clinical features, biochemical profiles, treatment, and outcomes were collected and analyzed. This paper illustrates that acute interstitial nephritis may be added to the list of differentials in patients with acute kidney injury and a history of cocaine use. The therapeutic approach for cocaine-related kidney disease may be different than other etiologies responsible for acute renal insult. Prompt recognition of this entity is crucial because such patients may ultimately develop severe deterioration in renal function. Introduction Acute interstitial nephritis is an underrecognized cause of acute kidney injury. It leads to decreased creatinine clearance and is characterized by an inflammatory infiltrate in the kidney interstitium, sparing the glomeruli. 1 The occurrence of this entity has been described in association with a multitude of diseases ranging from intrinsic kidney pathologies to systemic diseases involving immune alterations like systemic lupus erythematosus, sarcoidosis, several infections, or following the use of certain medications. 2,3 Notably, physicians also encounter difficult-to-diagnose cases of acute interstitial nephritis where a precise etiology cannot be deciphered. In such cases, the underlying pathogenesis has mostly been attributed to aberrant autoimmune mechanisms. 4 In this article, we describe an interesting case of a young patient who was eventually diagnosed with acute interstitial nephritis secondary to cocaine use. He showed clinical improvement, and his biochemical profile normalized with conservative management and cocaine cessation. This article highlights acute interstitial nephritis as a possible cause for acute kidney injury in patients having a history of cocaine use. Clinicians should maintain a high index of suspicion for cocaine-associated acute interstitial nephritis, particularly due to its nonspecific clinical presentation and potential to cause severe renal dysfunction. This paper also serves the purpose of community awareness regarding this unusual association between acute interstitial nephritis and cocaine use. Population-based studies are warranted to assess the magnitude of this pathologic relation. It will not only broaden the scope of our knowledge on this issue but will also help frame guidelines to standardize the care of such patients. Case Presentation This case study involves a 27-year-old Caucasian male who developed a dull aching type of abdominal pain, fever, cough, and chest congestion with flu-like illness over the past 5 days. He used ibuprofen 200 mg 2 times a day for the past 2 days, with brief improvement in his symptoms. Subsequently, he experienced a focal to bilateral tonicclonic seizure while working at a hardware store 1 day ago. He was initially brought to a nearby satellite facility. His biochemical profile was unremarkable, except for a deranged renal function. 
He was initiated on 500 mg levetiracetam twice daily, and magnetic resonance imaging of brain with gadolinium and electroencephalography were planned. He had been having occasional seizure episodes for the past 7 years, but he refused to start anticonvulsant therapy. He remained seizure-free for 24 hours after initiation of levetiracetam at the facility. The patient was then transferred to our hospital for further evaluation and management of his worsening renal function. On detailed inquiry, he admitted having large amounts of daily intranasal cocaine 1 week ago, immediately preceding his clinical symptoms. He had been smoking marijuana and snorting cocaine 3 to 4 times per week for past several months but denied intravenous drug use. He chewed tobacco for 4 years but suspended its use 1 year ago. He also reported binge alcohol consumption. He denied stabbing chest or flank pain, nausea, vomiting, or change in bowel habits. There was no history of sore throat, joint swelling, skin rash, dysuria, or hematuria. Family history was negative for autoimmune diseases and tuberculosis. Abdominal examination was remarkable for diffuse tenderness with normal bowel sounds. Investigations Laboratory evaluation revealed elevated serum creatinine levels, 2.8 mg/dL (baseline: 1.1 mg/dL), normal creatine phosphokinase, 226 U/L (39-308 U/L), and insignificant peripheral eosinophil count, 2%, consistent with acute kidney injury. The details of the laboratory studies are provided in Table 1. Urinalysis revealed pH 5.5, specific gravity 1.015, and proteinuria 30 mg/dL. A trace amount of blood was present, but ketones, nitrates, and leukocyte esterase were absent. Urine microscopy showed 4 to 5 white cells per high-power field and a few scattered red cells. It was negative for leukocytes, including eosinophils by special stain, pigmented granular casts, and bacteria. Urine culture also came out negative. Urine toxicology screen was positive for cocaine. Computed tomography scan of the abdomen and pelvis without contrast showed nonspecific bilateral perinephric stranding, with a thickening along the anterior portion of the Gerota's fascia. Renal Doppler ultrasonography ruled out renal artery stenosis and aortic dissection. Transthoracic echocardiogram showed normal wall motion and ejection fraction. Electrocardiogram was also normal. Chest radiograph was negative for hilar nodules. Subsequently, an uneventful renal biopsy was performed. The histopathologic examination of the biopsy specimen revealed normocellular glomeruli ( Figure 1). An interstitial inflammatory infiltrate composed of mononuclear cells was present, with no pathologic alterations in the arteries ( Figure 2). Patchy interstitial edema along with inflammation was also identified ( Figure 3). The biopsy findings ruled out the presence of mitotic figures and tubular necrosis, with no evidence of tubulitis or granulomas ( Figure 4). The presence of eosinophils was confirmed, which was suggestive of acute interstitial inflammation ( Figure 5). Differential Diagnoses In terms of possible causes of his acute kidney injury, certain etiologies related to cocaine use were high on the list that can cause rhabdomyolysis, vasculitis, renal infarction, and thrombotic microangiopathy. A variety of autoimmune disorders and infectious etiologies were also considered plausible. 
Based on the clinical history, extensive diagnostic workup, biopsy findings, and exclusion of the probable etiologies, the patient was diagnosed with acute interstitial nephritis secondary to cocaine use. Treatment With regard to the treatment, he was initiated on conservative management with intravenous hydration and maintenance of hemodynamics. Given his relatively mild initial presentation of acute interstitial nephritis and subsequent signs of early recovery, steroid therapy or hemodialysis were not required. He was educated about his disease and was directed to seek professional help for substance use disorder. He was also counseled regarding the importance of continuation of levetiracetam as well as future avoidance of nephrotoxic medications. Outcome and Follow-Ups On day 7 of admission, his recovery was good with gradual improvement in renal function. He was discharged from the hospital in a stable condition under ongoing anticonvulsant therapy with levetiracetam 500 mg twice daily. At the 1-week follow-up visit, his renal function showed significant improvement. His serum creatinine trended down to 1.7 mg/dL, and he reported no neurological issues. On subsequent follow-ups, his renal function returned to baseline. The patient has been receiving regular cognitive behavioral therapy sessions for substance use disorder. He has had urine toxicological screens performed, confirming that he has remained abstinent from cocaine. He continues to do well on levetiracetam without any renal or neurological complications to date. Discussion Cocaine-associated acute interstitial nephritis is an extremely rare clinicopathologic entity. We conducted a comprehensive search of MEDLINE, Cochrane, Embase, and Scopus databases from inception to date. Search terminologies such as "acute interstitial nephritis," "acute kidney injury," "cocaine," "substance use disorder," "renal dysfunction," and the abbreviations (ie, AIN, AKI), were combined using the Boolean operators "AND" and "OR" with the terms "diagnosis," "management," and "recovery." A total of 32 articles consisting of but not limited to original articles, case series, and case reports were initially obtained using the above-mentioned search strategy. The titles and abstracts of all these articles were carefully reviewed for their relevance to our study. A total of 13 articles were first enlisted for rereview, whereas 19 studies were excluded as they were not related to our topic, were in a language other than English, and/or full-text versions were not available. After removing duplicate and redundant articles, 7 case reports only were identified and included in the present article for the final review and analysis. [5][6][7][8][9][10][11] The data of individual cases of cocaine-related acute interstitial nephritis regarding patients demographics, clinical presentation, laboratory parameters, biopsy status, management, and outcomes are summarized in Table 2. The data analysis demonstrated that all patients were males with the mean age of 41 years (range: 28-49 years). Of the total 7 patients, 5 were African Americans. The presentation patterns of acute interstitial nephritis were mostly related to nonspecific symptoms like abdominal pain, fatigue, malaise, anorexia, nausea, and vomiting. Urinalysis frequently showed the findings of hematuria and proteinuria. In a majority of patients, serum creatinine levels were considerably elevated, indicating the onset of cocaine-related kidney injury several days prior to hospital admission. 
The features of an allergic-type reaction, including rash, hives, itching, fever, and eosinophilia, were absent in these patients. The initial presentation of this patient was dominated by abdominal pain and fever, but other classic clinical features of drug-induced acute interstitial nephritis such as rash and eosinophilia were absent. However, it is notable that the published medical literature now denotes acute interstitial nephritis as a heterogeneous disorder with the classic triad of rash, fever, and eosinophilia present in only 10% of cases. 12 Based on his renal function tests, his renal insult was found to be relatively less severe than most of the previously reported similar cases. In this patient, rhabdomyolysis was considered unlikely due to normal serum creatine phosphokinase levels. Vasculitis was ruled out based on his acute presentation and negative ANCAs. Renal Doppler ultrasound excluded a vascular abnormality or infarction. Furthermore, a transthoracic echocardiogram was inconclusive for cardiac pathologies and embolic phenomena. Thrombotic microangiopathy was considered unlikely based on the findings of his serial testing of serum creatinine, urinalysis, electrocardiography, troponin levels, and liver enzymes. The workup for relevant infectious etiologies was also negative. In terms of prerenal causes of acute kidney injury, he had no evidence of significant volume depletion, hypotension, or renal hypoperfusion. Urine microscopy showed bland sediment with no muddy brown granular casts. Computed tomography scan of the abdomen was unremarkable for urinary obstruction, ruling out postrenal disease. The absence of a compatible clinical picture, negative serological testing, normal chest radiography, and normal serum calcium levels excluded the possibility of sarcoidosis. Additionally, systemic lupus erythematosus, Sjogren's syndrome, and infectious etiologies like poststreptococcal glomerulonephritis were also excluded on the standard set of investigations. In light of the clinical and workup findings, intrinsic kidney pathology was considered probable. Thereafter, the pathologic examination of the renal biopsy specimen confirmed acute interstitial nephritis.

Nonsteroidal anti-inflammatory drugs (NSAIDs) also show a propensity to cause acute interstitial nephritis. However, the occurrence of this adverse event is delayed, requiring a prolonged exposure ranging from several weeks to months. On biopsy, the absence of eosinophils in interstitial infiltrates is the salient pathologic feature in such patients. 11,13 In a retrospective study, Schwarz et al 14 demonstrated that NSAIDs-induced disease predominantly causes nephrotic-range proteinuria compared with other causes of interstitial nephritis (38% vs 14%, respectively). Finally, NSAIDs-associated interstitial nephritis typically involves patients older than 60 years, with a female gender predominance having a male-to-female ratio of 1:2. 15 Conversely, the overall presentation of this patient did not fulfill the typical features of NSAIDs-related renal insult. He is a young male who used ibuprofen only for 2 days, and his biopsy findings confirmed the presence of interstitial eosinophilic infiltration. Although his initial urinalysis showed 30 mg/dL proteinuria, it resolved in the subsequent testing.
Notably, the timing of the onset of his symptoms was a vital clue to exclude NSAIDsassociated renal pathology in this patient. He developed clinical symptoms after intranasal cocaine binge but before starting the use of ibuprofen. Thus, ibuprofen as the cause for acute interstitial nephritis was unconvincing. This patient was also initiated on levetiracetam for his focal to bilateral tonic-clonic seizure. However, the medical literature regarding the association between this drug and interstitial nephritis remains limited to anecdotal reports. 16,17 In a population-based study, Yau et al 18 showed that the use of levetiracetam was not associated with a higher risk of interstitial nephritis within 30 days (0.33% in levetiracetam users and 0.26% events in nonusers [odds ratio = 1.24; 95% confidence interval = 0.62-2.47]). This patient used levetiracetam only for 1 day before this admission. However, despite continuation of anticonvulsant therapy, his acute kidney injury improved, which was compelling evidence that levetiracetam was not related to his acute interstitial nephritis. At the follow-up visits, he has been tolerating levetiracetam well without any subsequent seizure episodes and his renal function has remained normal thus far. Eventually, after exclusion of all the probable etiologies, the only credible cause was cocaine abuse in this patient. However, it is unclear whether cocaine itself or any of its impurities caused the acute renal injury in this patient. The exact pathogenesis of drug-induced acute interstitial nephritis remains to be determined. However, an immunologic disturbance, possibly a delayed hypersensitivity T-cell response, appears to be plausible. 19 The main pathogenetic mechanisms may involve molecular mimicry or direct binding of the drug to the tubular basement membrane. 19 The dose-independent nature of the presentation patterns, the extrarenal manifestations of hypersensitivity, and the recurrence of symptoms on reexposure favor this theory. 20 Renal biopsy remains the gold standard for definitive diagnosis in such patients. Pathologic finding of interstitial inflammatory infiltrates of lymphocytes, plasma cells, and eosinophils with normal glomeruli is the hallmark of this disease. 20 Prompt and accurate recognition of acute interstitial nephritis as a cause of acute kidney injury in cocaine users should be considered as imperative. A significant history of cocaine abuse or positive screening test for cocaine, clinical symptoms of abdominal pain or hematuria, and biochemical profile showing elevated levels of serum creatinine are key diagnostic clues for cocaine-associated acute kidney injury. With regard to the treatment of drug-induced acute interstitial nephritis, withdrawal of offending agent alone may result in rapid recovery of renal function. Corticosteroids are used if drug cessation alone fails to improve renal function in 3 to 7 days. 20 Although the outcomes in patients with severe disease treated with steroids are not extensively investigated, a delay in corticosteroid therapy may result in the worse recovery of kidney function. 21,22 The outcomes of both oral and intravenous corticosteroids are comparable with prompt administration. 23 It is notable that the patients with severe acute interstitial nephritis or NSAIDs-related renal syndrome may show suboptimal response to steroids. 24,25 In the present data regarding cocaine-induced acute interstitial nephritis, 6 out of 7 patients received steroid therapy. 
Although they received urgent dialysis, their kidney function improved over the next several weeks. The present patient was unique in this regard as he received neither dialysis nor steroid therapy. We speculate that his diagnosis was established early in the course of the disease, and his nephritis presentation was not severe compared with previously reported patients. Therefore, his kidney function recovered with conservative management and cocaine cessation.
Learning Points
• This study represents the eighth reported case describing the association between acute interstitial nephritis and cocaine use.
• Updated knowledge of this possible causal link is imperative for early diagnosis and necessary holistic clinical management.
• A detailed drug history is prudent in patients with suspected acute interstitial nephritis.
• In patients with cocaine-induced disease, timely administration of corticosteroid therapy may hasten the recovery of kidney function.
• The identification of more cases of this potential association will help clarify the pathogenesis, which may provide the basis for development of appropriate treatment.
Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received no financial support for the research, authorship, and/or publication of this article. Ethics Approval Our institution does not require ethical approval for reporting individual cases or case series. Informed Consent Verbal informed consent was obtained from the patient(s) for their anonymized information to be published in this article.
2020-06-11T09:09:01.208Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "194f8624e04782b65ed7a86b306dc3086e9964a1", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2324709620932450", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0ca37557e8702f9cd156b4067b3b36c7eb039089", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
118066922
pes2o/s2orc
v3-fos-license
Structural unification of space and time correlations in quantum theory We suggest a natural mapping between bipartite states and quantum evolutions of local states, which is a Jamiolkowski map. It is shown that spatial correlations of weak measurements in bipartite systems precisely coincide with temporal correlations of local systems. This mapping has several practical and conceptual implications for the correspondence between Bell and Leggett-Garg inequalities, the statistical properties of evolutions in large systems, temporal decoherence, and computational gain in the evaluation of spatial correlations of large systems. Space and time are distinguished in the formalism of quantum theory. A system that is separated into two parts of space is described by a positive semi-definite operator that lies in a tensor product of two Hilbert spaces. The time evolution of a system is most generally described by a trace preserving map from one Hilbert space to another Hilbert space. Mathematically, one can define a map between the space of bipartite states and the space of time evolutions, which is defined by the Hilbert-Schmidt scalar product. However, one would not expect that this mapping would be physical, that is, that correlations of spatially separated observations would equal the corresponding temporal correlations of measurements before and after the evolution. In particular, while in the spatial case the measured operators commute, in the temporal case two sequentially measured operators do not generally commute and affect each other due to the uncertainty principle.
Figure 1 (caption). (a) S is weakly measured at t1 < 0 and t2 > 0 by O1 and O2, respectively, where the system undergoes an instantaneous evolution given by Kraus operators {Mz} at t = 0. The system is post-selected to state ρ^fi_S. (b) A and B share a bipartite state ρAB and weakly measure it by OA and OB, respectively. Then the parties post-select their states to ρ^fi_A and ρ^fi_B. ρAB, OA(xA), OB(xB), ρ^fi_A and ρ^fi_B are mapped to {Mz}, O1(t1), O2(t2), ρ^in_S and ρ^fi_S, respectively. Inset: realization of post-selection to a mixed state by interaction with an ancilla and post-selecting both the system and the ancilla to pure states.
However, as is well known, there is a trade-off between the accuracy of the measurement and the disturbance caused to the system [1]. The limit in which individual measurements provide vanishing information gain was first analyzed by Aharonov et al. [2] and was termed weak measurements. Since weak measurements only slightly disturb the systems, they provide a non-destructive and operational method for comparing spatial and temporal correlations in quantum mechanics. In this letter we construct a Jamiolkowski map [3] between the space of bipartite systems ρAB ∈ HA ⊗ HB and the set of time evolutions transforming systems from HA to HB. In this mapping spatial correlations of weak measurements in bipartite systems precisely coincide with temporal correlations of weak measurements, before and after the evolution, in local systems (Theorem 1). The entanglement between A and B is mapped to a correlation between the past and the future, which characterizes the evolution of systems and their quantum mechanical nature. We show that maximally entangled states are mapped to unitary evolutions. Non-maximally entangled states correspond to evolutions under the influence of selective measurement. In particular, non-entangled pure product states correspond to selective projector measurements.
Finally, mixed bipartite systems are mapped to mixtures of the corresponding evolutions. We shall also discuss briefly several practical and conceptual applications of the suggested mapping. To set the ground for the mapping let us first discuss generalized time evolutions. The evolution of a system ρ S , subject to interaction with a larger system, is most generally described as a completely positive map given by Beyond the trivial unitary operations Kraus operators describe evolutions due to the interaction with an environment. In order to describe the effect of selective measurements, we remove the constraint z M † z M z = I and normalize the Kraus operators to preserve the trace: where p z ≥ 0 is the probability for post-selecting the state dictated by M z . A single Kraus operator corresponds to selecting the state M ′ † ρ S M ′ (p = 1). As an illustration, consider the two-dimensional case in which the system ρ S is measured in the computational ba-sis non-selectively: M 0 = |0 0| and M 1 = |1 1|. ρ S then evolves to the diagonal form ρ → 0|ρ S |0 M 0 + 1|ρ S |1 M 1 . Alternatively, if ρ S is subject to selective measurements M 0 with probability p 0 and M 1 with probability p 1 , by normalizing M 0 and M 1 according to Eq. (1) one obtains a more general evolution , which coincides with the non-selective case in case all p z are equal. In the following we present the main results. Temporal weak correlations. An initially prepared system ρ in S with dimension d A is subject to an evolution described by Kraus operators (as normalized in Eq. 1), which for the sake of simplicity we take as instantaneous at time t = 0. We assume that the system is measured weakly (and instantaneously) before t = 0 and after t = 0 by operators O 1 and O 2 with two pointer readings q 1 (t 1 ) and q 2 (t 1 ) respectively, as illustrated in fig. 1(a). Lemma 1. The correlation of the instruments' pointers q 1 (t 1 ) and q 2 (t 1 ) is given by: which includes both selective and non-selective measurements. See related results in the context of unitary evolutions for correlations of two-level system with continuous weak measurements [5], in the context of post-selection [6][7][8] and of two sequential measurements [9,10]. where Post-selection to a mixed state using ancillas is described in the inset of fig. 1. Note that no post-selection is equivalent to post-selecting the maximally distributed mixed state I/d B . Spatial weak correlations. Next, let us assume an initially prepared bipartite system ρ AB is measured weakly by parties A and B with operators O A and O B respectively, as illustrated in fig. 1(b). In addition, A may post-select her state to ρ fi A and B to ρ fi B . An immediate consequence of Eqs. (2,3) is Corollary 1. The correlation of spacelike related pointers q A (x A ) and q B (x B ) equals: (5) Note again that for each party no post-selection is equivalent to post-selecting the maximally distributed mixed state. Finally, let us present the mapping between time evolutions and bipartite states. A pure bipartite state is mapped to a single Kraus operator by having The map extends to mixed states/evolutions by convex combinations: Theorem 1. Given the mapping defined in Eqs. 6, 7 and the following correspondence of operators and boundary states: This mapping is illustrated in figure 1. It is symmetric to the exchange of A and B, given that we exchange the dimensions of the boundary conditions of ρ S and take M t z instead of M z . 
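The selective and non-selective evolutions introduced above are easy to check numerically. The following Python sketch is an illustration added here rather than material from the letter: it applies the computational-basis Kraus operators M0 = |0><0| and M1 = |1><1| to a qubit state, once non-selectively and once selectively with each branch renormalized by its outcome probability, and verifies that mixing the selective branches with those probabilities reproduces the non-selective evolution. The particular state and the choice of Born-rule weights are assumptions made for the example, and no attempt is made to follow the exact normalization of Eq. (1), which is not reproduced in this text.

import numpy as np

# Computational-basis Kraus operators for a qubit, as in the example above.
M0 = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
M1 = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1|

# An arbitrary pure qubit state rho_S = |psi><psi| (chosen for illustration).
psi = np.array([[np.cos(0.3)], [np.sin(0.3) * np.exp(0.7j)]])
rho = psi @ psi.conj().T

# Non-selective evolution: rho -> M0 rho M0^dag + M1 rho M1^dag (diagonal form).
rho_nonselective = M0 @ rho @ M0.conj().T + M1 @ rho @ M1.conj().T

# Selective evolution: each outcome z occurs with probability
# p_z = Tr[M_z rho M_z^dag]; the corresponding branch is renormalized.
branches, probs = [], []
for M in (M0, M1):
    unnormalized = M @ rho @ M.conj().T
    p = np.real(np.trace(unnormalized))
    probs.append(p)
    branches.append(unnormalized / p)

# Mixing the selective branches with the Born-rule weights p_z recovers
# the non-selective evolution.
rho_mixture = sum(p * b for p, b in zip(probs, branches))
assert np.allclose(rho_mixture, rho_nonselective)
print("outcome probabilities:", np.round(probs, 4))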
Note that the usual spatial setting in which no post-selection is assumed, corresponds to having a maximally distributed mixed state in the temporal setting ρ in S = I/d A with no post-selection. Corollary 2. The expectation values of the single measurements equal as well: It is illuminating to analyze points in which our mapping does not work. The notion of multipartite entanglement is well established by now. One may then expect that tripartite correlations would also be mapped to temporal ones. This is not the case, however, even in the simplest case where the state ρ in S evolves trivially (M = I) and not post-selected. It can be shown that the correlation of three sequential weak measurements O 1 , O 2 and O 3 performed at times t 1 < t 2 < t 3 is given by In contrast to measurements at two times, Eq. (9) implies that the correlation of three depends on their order, in sharp contradiction with the multipartite spatial scenario. This reflects that one dimensional time can only be bisected once to unordered parts, whereas multidimensional space may be sectioned into many parts with no internal order. Another feature of multipartite states which is not satisfied in the temporal setting is the monogamy of entanglement [11]. If two qubits A and B are maximally entangled, they cannot be correlated at all with a third qubit C. In the temporal case, however, we choose again M = I. Then any pair of instances among t 1 , t 2 , t 3 etc. is maximally correlated. We would like to remark that a notion of entanglement in time was introduced in a different context by Brukner et. al. [12], who analyze correlations of successive ±1 strong measurements. These temporal correlations violate Leggett-Garg inequalities [13], the Bell inequalities [14] in time. Brukner et. al. also show that there are no genuine multi-time correlations and that the monogamy of spatial correlations is violated in the temporal setting. However, there are crucial differences between temporal correlations of strong and weak measurements as correlations of successive strong measurements do not depend on the state and are a particular feature of ±1 observables. The suggested mapping does not apply for strong measurements. We note that different physical interpretations of Jamio lkowski isomorphism were given: a purification protocol of local unitary operations [15], a test of non-zero channel capacity [16], and a manifestation of "superposition of unitary operations" [17]. We proceed by proving our results. Proof of Lemma 1. Observables O 1 , O 2 are measured sequentially on system ρ in S at times t 1 , t 2 where t 1 < 0 < t 2 . In addition, ρ in S evolves at t = 0 with Kraus operators {M z }. The von-Neumann interaction measurement cor- . We assume identical initial Gaussian wavepackets φ(q 1 ) and φ(q 2 ) for the pointers: (10) The initial state of the system and the apparatuses ρ in To compute E(q 1 q 2 ) we first notice that since qφ 2 (q)dq = 0 and φ(q)φ ′ (q)dq = 0, all terms in Eq. (11) except the last one do not contribute. In addition, by tracing out the system ρ in S and using qφ(q)φ ′ (q)dq = −1/2, we conclude that , which coincides with Eq. (2). Proof of Lemma 2. Preparation of a mixed state is realized by projecting a system to a pure state |ψ in S which then interacts with an ancilla in a known state. Correspondingly, post selection to a mixed state ρ fi S is realized in the same way but with the reversed time axis: ρ fi S = U † int |0 anc 0 anc | ⊗ |ψ fi S ψ fi S |U int (as illustrated in the inset of figure 1). 
The proof of Lemma 2 follows the same steps as that of Lemma 1 where instead of tracing out the system, one projects the system to the final state and renormlizes the remaining state. In case M = I the normalization yields a factor of 1/Tr[ 0 anc | ψ fi S |U int ρ in S ⊗ I anc U † int |ψ fi S |0 anc ] = 1/Tr[ρ in S ρ fi S ]. The generalization to arbitrary evolution is straightforward. Note that Wizeman [18] analyzed a similar case for a single weak measurement. Proof of Theorem 1. Let us first show the correspondence for a pure bipartite state |ψ , which is mapped to a single Kraus operator M ′ (with p = 1). We show the equality of the temporal and spatial denominators D T , D S and nominators N T and N S of Eq. (3) and (5) respectively. From Eq. (4,6) up to a factor of 4: where we use the notation A ki = k|A|i for matrix elements. In correspondence with the mapping defined in Theorem 1, D T = D S and N T = N S . Note that by proving D S = D T we have explicitly confirmed that the mapping corresponds to Jamio lkowski isomorphism [3]. To extend to a set of Kraus operators M ′ z , note that D T , D S , N T , N S become now a convex combinations of p z , which respects their equality. This concludes the proof of Theorem 1. Implications. The suggested mapping has several important implications which we briefly discuss: 1. The correspondence between Bell and Leggett-Garg inequalities. Interestingly, Leggett and Garg [13] have suggested temporal inequalities with the same bounds as the corresponding spatial Bell inequalities [14]. For example, CHSH inequality [19] and the corresponding temporal inequality (Eq. 2b in [13]) are bounded by 2 √ 2 [20]. In a previous paper [10] we have shown that Bell's inequalities can be maximally violated using weak measurements even if all observables are measured for each member of the ensemble. A similar result for Leggett-Garg inequalities was given in [21,22]. By the mapping above the correspondence between the two type of inequalities becomes clear. Leggett-Garg inequalities are distinguished from the Bell inequalities as their maximal violation depends only on the measured observables and not on the state of the system. By the mapping above we see that this is a consequence of unitary evolutions which correspond to maximally entangled bipartite states. By having non unitary evolutions Leggett-Garg inequalities are less violated. Since our mapping is exact all the results concerning bipartite Bell inequalities are valid in the corresponding temporal inequalities. For example, the non-separable Werner states which do not violate CHSH inequality [23], correspond to the same mixtures of unitary evolutions which do not violate Leggett-Garg inequality. Another example is the anomaly of nonlocality in bipartite systems with dimension greater than two [24], with a Bell inequality [25,26] that is not maximally violated by the maximally entangled state. One can explicitly show that the same anomaly appears in the temporal setting, where maximal violation is obtained with the corresponding non-unitary evolution. 2. Statistical characteristics of large systems. In the work by Hayden et. al. [27] correlation properties of random high-dimensional bipartite pure systems were examined. They showed that there exist large subspaces in which almost all pure states are close to maximally entangled. Their result is based on the uniqueness of the Haar measure in pure states. 
Any pure state can be generated by applying a unitary matrix on a fiducial state, where the space of unitary matrices is comprised of the rotationally invariant Haar measure. Through our mapping the space of bipartite unitary evolutions maps also to "pure evolutions" on local systems where a pure evolution corresponds to a single Kraus operator. Therefore, there exist large subspaces in which all pure evolutions are close to unitary ones. This implies that as the system becomes sufficiently large, its evolution is most likely to be unitary. 3. Models of decoherence. The usual framework of decoherence [28] deals with the transition of a state to a diagonal form. By the suggested mapping one can distinguish decoherence of states from decohering dynamics. Decohering dynamics can be observed by detecting the temporal decay of correlations in case of non exact unitary evolution, even on the maximally distributed mixed state. 4. Computational gain. In numerical computations of two point correlation function of bipartite states, needed for instance in evaluating Bell inequality bounds, one can utilize the corresponding Leggett-Garg inequalities. For example, given d A = d B = N , instead of manipulating N 2 × N 2 matrices, one can use only N × N matrices. Conclusions. Our mapping provides a new perspective on time evolution in quantum-mechanics. By observing correlations of weak measurements the entanglement of bipartite states finds exact correspondence with temporal quantum mechanical correlations. Surprisingly, by having an exact mapping between spatial and temporal correlations, nonrelativistic quantum-mechanics manifests a structural unification of time and space.
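To close, the statement used above, that maximally entangled states are mapped to unitary evolutions and pure product states to selective projector measurements, can be illustrated with the standard Choi-Jamiolkowski construction. The sketch below is an illustration only, under the usual convention that a channel with Kraus operators {Mz} corresponds to the bipartite state obtained by acting with the channel on half of a maximally entangled pair; the paper's own normalization in Eqs. (6, 7) is not reproduced in this text, so this convention is an assumption.

import numpy as np

def choi_state(kraus_ops, d):
    # Choi state of the channel with the given Kraus operators
    # (standard convention; the paper's Eqs. (6)-(7) may normalize differently).
    phi = np.eye(d).reshape(d * d, 1) / np.sqrt(d)   # |Phi> = sum_i |i>|i> / sqrt(d)
    rho_phi = phi @ phi.conj().T
    out = np.zeros((d * d, d * d), dtype=complex)
    for M in kraus_ops:
        K = np.kron(np.eye(d), M)                    # channel acts on the second half
        out += K @ rho_phi @ K.conj().T
    return out / np.trace(out)

def entanglement_entropy(rho_ab, d):
    # Von Neumann entropy of the reduced state of the first subsystem.
    rho_a = np.trace(rho_ab.reshape(d, d, d, d), axis1=1, axis2=3)
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

d = 2
# A unitary evolution (here a Hadamard) gives a maximally entangled Choi state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(entanglement_entropy(choi_state([H], d), d))    # ~1.0 = log2(d)

# A selective projector measurement (single rank-one Kraus operator)
# gives a non-entangled pure product state.
P0 = np.array([[1, 0], [0, 0]], dtype=complex)        # |0><0|
print(entanglement_entropy(choi_state([P0], d), d))   # ~0.0

For the unitary channel the printed entanglement entropy is log2(d) = 1, while the single projector gives 0, matching the correspondence described above.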
2011-03-13T21:30:56.000Z
2011-03-13T00:00:00.000
{ "year": 2011, "sha1": "8f51134106d92f022e2bdd70055d948a3eb7bb64", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f400d776eb16a6a8ac01d3d746ebd1e54707cd3f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
222141035
pes2o/s2orc
v3-fos-license
Efficient computation of contrastive explanations With the increasing deployment of machine learning systems in practice, transparency and explainability have become serious issues. Contrastive explanations are considered to be useful and intuitive, in particular when it comes to explaining decisions to lay people, since they mimic the way in which humans explain. Yet, so far, comparably little research has addressed computationally feasible technologies, which allow guarantees on uniqueness and optimality of the explanation and which enable an easy incorporation of additional constraints. Here, we will focus on specific types of models rather than black-box technologies. We study the relation of contrastive and counterfactual explanations and propose mathematical formalizations as well as a 2-phase algorithm for efficiently computing pertinent positives of many standard machine learning models. Introduction The increasing deployment of machine learning (ML) systems in practice led to an increased interest in explainability and transparency. In particular, "prominent failures" of ML systems like predictive policing [30], loan approval [34] and face recognition [1], highlighted the importance of transparency and explainability of ML systems. In addition, the need for explainability was also recognized by policy makers which resulted in a "right to an explanation" in the EU's "General Data Protection Right" (GDPR) [23]. The crucial problem with regard to these demands is the definition and the type of explanations -there exist many different kinds of explanations [14,16,20,26,31] but it is still not clear how to properly formalize an explanation [11,20]. One family of explanations are example-based explanations [2] which are considered to be particularly well suited for lay people, since they allow the inspection of explanations by looking at example data, including the possibility of domain-specific representations of data [20]. Counterfactual explanations [33] and contrastive explanations constitute instantiations of example-based explanations [9,12,20]; these will be in the focus in this work. Following the common definition or intuition of a contrastive explanation [12,20] (in the context of [9]), a contrastive explanation consists of two parts: -A pertinent positive specifies a minimal and interpretable amount of features that must be present for obtaining the same prediction as the complete sample does. Meaning that we are looking for a subset of features such that the resulting sample has the same prediction as the original sample. -A pertinent negative specifies a set of features, which must not be present to provide the prediction, i.e. it is contrastive, since it relates to elements representative of a different class which are absent; expressed in different words, it refers to a small and interpretable perturbation of the original sample that would lead to a different prediction than the original sample. Together, a pertinent negative and pertinent positive form a contrastive explanation. For an example, consider the application of a loan approval system. Imagine that the system rejects a loan application and we now have to explain its decision. A possible contrastive explanation (consisting of a pertinent negative and a pertinent positive) might be: The loan application was rejected because the pay back of the last loan was delayed, the applicant has a second credit card and because the monthly income is not above a minimum specific threshold, required for acceptance of the loan. 
The first two arguments/reasons can be considered as a pertinent positive and the last reason as a pertinent negative. Note that, if more than two classes are present, pertinent negatives always contrast the present class to one specified alternative class. Related work There does exist extensive work and experimental evidence, which highlights that explanations provided by people are often contrastive in nature [19]: rather than explaining reasons for an observed event p, people often focus on reasons for observing p rather than another specific event q. The question of how to compute contrastive explanations for technical systems, constitutes an issue, though. In causal models, contrastive arguments of factors, which explain an appearance of p rather than q, can be based on according triangulations within the logical relations [17]. For black box models including deep networks, there exists some work how to compute contrastive explanations in practice [9,10,32]. More specifically, the authors of [9] propose an algorithm called "contrastive explanation method (CEM)" that computes a contrastive explanation of a differentiable model such as a Deep Neural Network. The method computes a pertinent positive and a pertinent negative by solving strongly regularized cost functions by using a projected fast iterative shrinkage-thresholding (FISTA) algorithm. A part of the regularizations consists of an autoencoder ensuring that the solution is plausible. While this approach might be well suited for Deep Neural Networks, it might be less suited for standard ML models, where the regularization is not clear, and an autoencoder is not easily available, e.g. because the training set is too small. Furthermore, there do not exist theoretical guarantees of the result, in particular the sensitivity of the provided explanations with respect to the chosen regularization can be high. In subsequent work [10], the authors extend CEM towards the model agnostic contrastive explanation method (MACEM) for computing contrastive explanations of an arbitrary (not necessarily differentiable) model. The modelling approach is somewhat similar to the one in [9]. MACEM uses FISTA and estimates the gradient in case of a fully black-box (not-differentiable) model. Furthermore, the authors also propose how to model categorical features. The authors of [32] address model agnostic contrastive explanations, which are obtained based on locally trained decision trees which serve as a local surrogate of the observed model. Since this method needs to sample training points around a given data point, it is sensitive to the curse of dimensionality. Most of the methods for computing contrastive explanations are somewhat model agnostic or are suitable for a "broader" class of models. As a consequence, it is not easily possible to provide guarantees on important properties such as uniqueness of the explanation, since no assumptions on the type of model are made. Further, the involved optimization technologies might be computationally demanding, and they often rely on iterative numeric methods such as general gradient-based optimization technologies. Here, we are interested in the question, how to efficiently compute contrastive explanations for specific models, which are popular in machine learning. For specific models, a general method might not be the most efficient one and specific formulations might provide particularly efficient alternatives, for which additional guarantees such as convexity and uniqueness hold. 
In this work we study how to exploit model specific structures for efficiently computing contrastive explanations of several standard ML models. To the best of our knowledge, this is the first work to address the question how to efficiently compute such model-specific contrastive explanations. Our contributions We make several contributions in this work: 1. In section 2 we address a conceptual issue, and we study how pertinent negatives are related to counterfactual explanations as discussed e.g. in [24]. We reduce the problem of computing a pertinent negative to the problem of computing a counterfactual explanation. 2. In section 3 we conceptualize computing pertinent positives and we propose a 2-phase algorithm for computing "high-quality" pertinent positives. In section 3.3 we develop mathematical programs (often even convex programs) for efficiently computing pertinent positives of many different standard ML models like linear and quadratic classifiers and learning vector quantization models. 3. We empirically evaluate our proposed methods in section 3.4. For most settings, we obtain unique explanations. Due to space constraints and for the purpose of better readability, we include all proofs and derivations to the appendix (section A). Pertinent negatives as counterfactuals A pertinent negative, as described in [9], specifies a "small and interpretable" perturbation δ of the original sample x orig that leads to a different prediction y ′ = y orig , i.e. it contrasts the current output y orig to another class y ′ . If we consider a small 1-norm as "small and interpretable", we can phrase the computation of a pertinent negative as the following optimization problem: where h : R d → Y denotes the classifier whose prediction we want to explain. Here, the 1-norm accounts not only for a small change, but also sparsity as regards the number of features, which are changed. The constrained optimization problem for computing a counterfactual explanation [33] as proposed by [3] is given as: where θ(·) denotes a regularization (e.g. 1-norm), x ′ denotes the counterfactual and y ′ = y orig the requested target label. We can turn Eq. (1) into Eq. (2) by setting x ′ = x orig + δ and choosing The appealing consequence of this is that we can reduce the problem of computing a pertinent negative to computing a counterfactual explanation for which several efficient methods already exists [3,18,29,33]. The work [3], in particular, proposes convex formulations of the problem for a number of important ML models. The work [4] enriches this framework with efficient approximations of how to compute plausible counterfactuals with a guaranteed likelihood value, in order to distinguish those from adversarial examples, which correspond to artificial signals in particular for high dimensional data [15]. Note that the computation of pertinent negatives as counterfactual explanations perfectly fits the intuition of contrasting the given prediction y orig against some other (predefined) prediction y ′ as discussed in the introduction of this work. Modelling In order to model the intuition of a pertinent positive, as described in [9], we have to consider several aspects: -We want to "turn off" as many features as possible. -For "turned on" features, the difference to the original feature values should be as small as possible. -The pertinent positive must be still classified as y orig . 
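Before turning to the modelling of pertinent positives, here is a small illustration of the reduction of pertinent negatives to counterfactual explanations described in Section 2 above. The sketch solves the 1-norm-minimal counterfactual problem for a binary linear classifier with an off-the-shelf convex solver: for such a model the constraint h(x_orig + δ) = y′ can be written as the linear inequality y′(w⊤(x_orig + δ) + c) ≥ ε, so the problem is a linear program. The classifier weights, the data point, and the margin constant below are invented for the example and are not taken from the paper.

import numpy as np
import cvxpy as cp

def pertinent_negative_linear(w, c, x_orig, y_target, eps=1e-3):
    # 1-norm-minimal perturbation delta with sign(w^T (x + delta) + c) = y_target.
    # This specializes the counterfactual formulation to a binary linear classifier;
    # all concrete numbers used below are illustrative only.
    delta = cp.Variable(x_orig.shape[0])
    constraints = [y_target * (w @ (x_orig + delta) + c) >= eps]
    cp.Problem(cp.Minimize(cp.norm1(delta)), constraints).solve()
    return delta.value

# Toy example: a hypothetical 3-feature linear classifier.
w = np.array([1.0, -2.0, 0.5])
c = -0.25
x_orig = np.array([0.2, 0.4, 0.1])            # classified as -1 (w @ x + c < 0)
delta = pertinent_negative_linear(w, c, x_orig, y_target=+1)
x_counterfactual = x_orig + delta
print("perturbation:", np.round(delta, 4))
print("new score:", w @ x_counterfactual + c)  # >= eps, i.e. label flipped to +1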
We denote the final pertinent positive 1 x ′ as: In order to improve readability of the subsequent formulas, we will sometimes substitute Eq. (3) and optimize over x ′ instead of δ -we mean by this an optimization over δ which implies x ′ . Like the authors of [10] did, we can always subtract a constant b from the original sample x orig to allow non-zero default values -i.e. b would denote the feature wise base/default values at which we consider a particular feature to be "turned off", in the sense that a feature does not deviate "much" from the default value(e.g. the expected value or a statistically robust alternative). In the following, we assume b = 0 for simplicity. Considering all these aspects yields the following multi-objective optimization problem: where [·] I denotes the selection operator on the set I, whereby I denotes the set of all "turned on" features. 2 I is defined as follows: where ǫ ∈ R + denotes a tolerance threshold at which we consider a feature "to be turned on" -e.g. a strict choice would be ǫ = 0. Because the optimization problem Eq. (4) is "notoriously difficult" and highly non-convex -in particular, Eq. (4a) and Eq. (4b) are in parts "contradictory" -, we propose a relaxation in the subsequent section. This relaxation allows us to efficiently compute pertinent positives of many standard ML models (we will turn this relaxation into a convex relaxation for many standard ML models) -we empirically evaluate our proposed relaxation in the experiments (see section 3.4). Relaxation by a 2-phase algorithm For computing a pertinent positive Eq. (4), we have to ensure sparsity and closeness to the original sample. We propose to approximately solve Eq. (4) by a 2-phase algorithm where we separate the computation of our two goals sparsity and closeness in two phases. Sparsity In order to achieve sparsity of the pertinent positive, we propose the following optimization for ensuring a sparse pertinent positive: Although the optimization problem Eq. (6) looks similar to the one proposed in [9,10], a crucial difference is that Eq. (6) is a constrained optimization problem with a convex objective -this is what allows us (see section 3.3) to derive convex programs for computing pertinent positives of many standard ML models. Sparsity is here enforced by the 1-norm, instead of the 0-norm. Furthermore, our formulation Eq. (6) allows to easily add additional constraints like box constraints or "freezing" some features, for meeting domain specific requirements (e.g. plausibility). Another consequence of our modelling is that we do not need any hyperparameters -note that the formulation in [9] uses several hyperparameters that have to be chosen. Since our formulation comes without any hyperparameters, the computation is easier. More importantly, by making use of convex optimization we can provide theoretical guarantees such as uniqueness or an exact statement of existence or non-existence of a solution. Closeness By solving the optimization problem Eq. (6) we obtain a sparse pertinent positive. As already discussed, while sparsity is in alignment with the intuition of a pertinent positive, it can happen that many features will be shrunken towards zero and thus be far away from the original features values -we will empirically observe this behavior in the experiments (section 3.4) -, which contradicts the intuition of a pertinent positive. Therefore, we proposed a second optimization step, enforcing closeness for the values, which are kept. 
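To make the two phases concrete, the sketch below instantiates them for a binary linear classifier. Since the displayed equations (6) and (7) are not reproduced in this text, the objective forms are assumptions: phase one is taken as minimizing the 1-norm of x′ subject to the classification constraint (sparsity toward the base value b = 0), and phase two as minimizing the 1-norm distance of the kept features to x_orig while the features turned off in phase one are pinned to 0. The model and data are invented for illustration.

import numpy as np
import cvxpy as cp

def pertinent_positive_linear(w, c, x_orig, y_orig, eps=1e-3, tol=1e-4):
    # Two-phase pertinent positive for h(x) = sign(w^T x + c).
    # Assumed objective forms (the paper's Eqs. (6)/(7) are not shown here):
    # phase 1 minimizes ||x'||_1; phase 2 minimizes the l1 distance of the
    # kept features to x_orig while features turned off in phase 1 stay at 0.
    d = x_orig.shape[0]

    # Phase 1: sparsest x' that keeps the original prediction.
    x1 = cp.Variable(d)
    cp.Problem(cp.Minimize(cp.norm1(x1)),
               [y_orig * (w @ x1 + c) >= eps]).solve()
    off_idx = np.where(np.abs(x1.value) <= tol)[0]   # features "turned off"
    on_idx = np.where(np.abs(x1.value) > tol)[0]     # features kept "on"

    # Phase 2: move the kept features back toward their original values.
    x2 = cp.Variable(d)
    constraints = [y_orig * (w @ x2 + c) >= eps]
    if off_idx.size:
        constraints.append(x2[off_idx] == 0)
    cp.Problem(cp.Minimize(cp.norm1(x2[on_idx] - x_orig[on_idx])),
               constraints).solve()
    return x2.value

# Hypothetical 4-feature example.
w = np.array([2.0, -1.0, 0.1, 0.0])
c = -0.5
x_orig = np.array([1.0, 0.3, 2.0, 5.0])   # predicted +1 (w @ x + c = 1.4)
print(np.round(pertinent_positive_linear(w, c, x_orig, y_orig=+1), 3))

In this toy run phase one shrinks three of the four features to 0 and phase two moves the single kept feature back to its original value, which is the qualitative behaviour the two-phase algorithm is designed to produce.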
Also note that it can happen that the optimal solution of Eq. (6) is the zero vector 0; this holds if the zero vector is classified as the same class as the original sample -i.e. h( 0) = y orig . In this case, all features would be "turned off'.' We argue that in this case a pertinent positive might not make much sense because such an explanation would not be very informative for the user, and it is unclear how to break symmetries about which features are relevant in this case. We propose to reduce an explanation to the pertinent negative part, in this case, or to add additional semantic information, which indicates which features are relevant. As an example, one could avoid this issue by fixing some features to their original values or introduction box constraints that prevent a certain number of features of being "turned off". Such kind of constraints easily fit into the proposed optimization problems Eq. (6) and Eq. (7) and do not change the computational complexity of the problems. Provided the first phase of the algorithm yields a reasonable and non-trivial solution {j : | |( x ′ ) i | ≤ ǫ} for features which can be turned off, where x ′ is the solution from Eq. (6), we propose a second phase, where we minimize the Algorithm 1 Computation of a pertinent positive Input: A labeled sample ( x orig , y orig ) Output: A pertinent positive x ′ 1: Compute a pertinent positive x ′ by solving Eq. (6) 2: Try to improve x ′ by solving Eq. (7) distance of the remaining features to the original values, as follows: The final 2-phase algorithm is described as pseudo code in Algorithm 1 and is empirically evaluated in section 3.4 (Experiments). Interestingly, this two-step algorithm can be instantiated as efficient convex problems for many popular machine learning models, as we will show in the following. Model specific programs In the subsequent sections we study how the optimization problem Eq. (6) evolves for different standard ML models -in particular we reduce Eq. (6) to convex or "nearly convex" programs. Because the objectives Eq. (7a) and Eq. (6a) are both convex and independent of the model h, it is sufficient to work on Eq. (6) only -if we can turn Eq. (6) into a convex program (meaning we have to turn the constraint Eq. (7b) into a convex one), then the same holds for Eq. (7). Linear models A linear classifier h : R d → Y can be written as follows: where we restrict our-self to a binary classifier -however, the idea (and everything that follows) can be generalized to multi-class problems. Popular instances of linear models are logistic regression, linear discriminant analysis (LDA) and linear support vector machine (linear-SVM). Assuming Y = {−1, 1}, we can rewrite the constraint Eq. (6b) as follows: where ǫ denotes a small positive constant that ensures that the set of feasible solutions is closed (strict vs. non-strict inequality) and Note that Eq. (9) is linear in δ and because the objectives Eq. (6a) and Eq. (7a) are linear, the optimization problems become linear programs which can be solved efficiently [7]. The derivation of Eq. (9) can be found in the appendix A.1. Quadratic models A quadratic classifier h : R d → Y can be written as follows: where Q ∈ S d and again we restrict our-self to a binary classifier -again, the idea (and everything that follows) can be generalized to multi-class problems. Popular instances of quadratic models are quadratic discriminant analysis (QDA) and Gaussian Naive Bayes. Again, if we assume Y = {−1, 1}, we can rewrite the constraint Eq. 
(6b) as the following quadratic constraint: Since all we know aboutQ is that it is symmetric, Eq. (12) is in general nonconvex. Solving non-convex quadratic programs is known to be NP-hard [7,22]. However, we can rewrite Eq. (12) as a difference of two convex functions 3 and thus turn the whole program into a special instance of a difference of convex programming (DC) for which efficient approximation solvers exist -more details can be found in the appendix A.2. Learning vector quantization models where d(·) denotes a distance function. In vanilla LVQ, this is chosen globally as the squared Euclidean distance with Ω ∈ S d + , referred to as matrix-LVQ (GM-LVQ) [27], or a prototype specific quadratic form d( x, with Ω j ∈ S d + , referred to as local-matrix LVQ (LGMLVQ) [28]. Similar to the algorithm for computing counterfactual explanations of LVQ models [5], the idea is to use a Divide-Conquer approach for computing a pertinent positive of a LVQ model Eq. (14). Because the LVQ model outputs the if x ′ * 1 < z then ⊲ Keep this pertinent positive if it is sparser than the currently "best" pertinent positive 6: z = x ′ * 1 7: x ′ = x ′ * 8: end if 9: end for label of the closest prototype, we know that in order to get a specific prediction y = y orig , the closest prototype must be one the prototypes labeled as y orig . Therefore, we simply try all possible prototypes (labeled as y orig ) and select the one that leads to the smallest objective Eq. (6a). For every suitable prototype p i , we can rewrite the constraint Eq. (6b) as follows: where In case of GMLVQ, the constraints Eq. (15) become linear while in the case of LGMLVQ the constraints Eq. (15) become quadratic (but potentially nonconvex). Because the objective Eq. (6a) is linear, Eq. (6) becomes a linear program in case of GMLVQ and a (non-convex) quadratic program in case of LGM-LVQ. Again, while linear programs can be solved very efficiently [7], (non-convex) quadratic programs can not (unless they turn out to be convex quadratic programs) [7,22]. Like in the case of quadratic classifiers, we can easily rewrite the constraint Eq. (15) as a difference of convex functions and then turn the whole program into a special instance of a DC for which good approximation solvers exist [22] -more details can be found in the appendix A.3. The resulting algorithm is summarized in Algorithm 2. Note that the for loop in Algorithm 2 can be easily parallelized because it does not matter when we compute the minimum. Experiments We want to empirically verify that our proposed modelling yields pertinent positives that fit the intuition of a pertinent positive as discussed in the introduction. We therefore evaluate our proposed modelling and the derived mathematical programs on a set of different standard benchmark data sets. We compare the results of Eq. (6) with those of the 2-phase algorithm Algorithm 1. Since the convex programs are guaranteed to output valid pertinent positive, we would have to validate the outputs (check if it is a valid pertinent positive) of the non-convex programs only (e.g. DCs for quadratic and LGMLVQ models) -however, we can neglect this in our specific situation because we choose a specific solver that is guaranteed to output a feasible solution. For the quantitative evaluation of the computed pertinent positives, we choose two scoring functions for assessing sparsity and closeness to the original sample. We evaluate sparsity of a pertinent positive x ′ with Eq. (17) 4 and closeness to the original sample x orig with Eq. 
(18). We run the experiments on four standard benchmark sets using logistic regression, a quadratic discriminant analysis (QDA) and GLVQ. We use the "Iris Plants Data Set" [13], the "Wine data set" [25], the "Ames Housing dataset" [8] 5 and the "Breast Cancer Wisconsin (Diagnostic) Data Set" [35]. We compute a three-fold cross validation and compute a pertinent positive by only solving Eq. (6) and another one by using our proposed 2-phase algorithm (Algorithm 1). We standardize all data sets, use a regularization strength of 1.0 when estimating the covariance matrices in QDA, set the basis values to b = 0 and set the threshold for "turned on" features Eq. (5) to ǫ = 0 for all data sets. We report the mean sparsity Eq. (17) and the mean closeness Eq. (18) for each combination of model, method and data set (we also report the variance) -because the sparsity does not change when using the 2-phase algorithm instead of Eq. (6) only, we only report sparsity once. For the purpose of better observing the properties of non-trivial pertinent positives, we always exclude the class of the zero vector -as discussed in section 3.2, all samples from the class h( 0) would yield the sparsest and trivial pertinent positive 0 which makes them less suited for evaluating our proposed algorithms. In addition, we compare the feature overlap between pertinent negatives and pertinent positives. For the purpose of informative and useful explanations it is beneficial that the pertinent positive and the pertinent negative "share" as few features as possible -meaning that the overlap of "turned on" features in the pertinent positive and the perturbed features in a pertinent negative should be rather small. We argue that if the pertinent positive and the pertinent negative "share" many features they might not be that useful and informative 6 -if the overlap of features happens to be too large, one could add additional constraints to the optimization problems for manually including or excluding some features that finally result in a smaller overlap of features. We compute the pertinent negatives by using a Python toolbox [6] for efficiently computing counterfactual explanations -we use the l1 norm as a regularization for enforcing a sparse pertinent negative. We also keep track of the F1-score to ensure that the classifiers learned a "somewhat reasonable" decision boundarybecause all classifiers perform quit well, we do not report the F1-scores in here and refer the interested reader to the published source code and protocols. We approximately solve the non-convex QPs using the conex-concave penalty (CCP) method [22]. Because the CCP method is guaranteed to output a feasible solution, we do not have to check if the pertinent positive is valid. Further details (including the raw protocols of the experiments) and the implementations itself is available on GitHub 7 . The results are shown in Table 1. We observe that our proposed method is able to consistently compute sparse pertinent positives. Furthermore, we observe that our proposed 2-phase algorithm significantly increases the closeness of the pertinent positives to the original samples. Only in the case of GLVQ and logistic regression in combination with the breast cancer data set, the 2-phase algorithm is not able to improve on average upon Eq. (6) -we think that this might be an issue of unfavorable chosen hyperparameters 8 (we expect that changing the model would most likely show a difference). 
In addition, the large variances in the results of QDA for the breast cancer data set can be explained by some outliers. Also note that the mean sparsity is often just a little bit below the total number of features. This means that our proposed method was able to "turn off" many features which perfectly fits the intuition of a pertinent positive as discussed in the introduction. Finally, we observe that the overlap of "turned on" features in the pertinent positives and the perturbed features in the pertinent negatives is relatively small. This means that the pertinent positives and the pertinent negatives "share" only very few features in their explanations which makes them useful and informative in practice -as discussed previously, if both explanations would use (more or less) the same features, they would not be that informative to a user. However, please note that these findings are empirically only and might not necessarily generalize to other models and/or data sets. Discussion and Conclusion In this work we extensively studied the computation of contrastive explanations that consists of a pertinent negative and a pertinent positive. We argued that computing a pertinent negative is equivalent to computing a counterfactual explanation -this reduction enables us to use methods from the counterfactual explanations literature for efficiently computing pertinent negatives. We also proposed to model pertinent positives as a constrained optimization problem and proposed upon that a 2-phase algorithm for computing qualitatively better pertinent positives. Building upon these, we derived mathematical programs for efficiently computing pertinent positives of many standard ML models. We empirically evaluated our proposed methods on several standard benchmark data sets. One aspect we "ignored" so far is plausibility of the contrastive explanations. As discussed in this work, one could manually formulate plausibility constraints and add them to the proposed optimization problems. However, because it can be very time consuming and tedious (or even impossible) to manually come up with a set of constraints that guarantee plausibility, one would like to have a method that works without handcrafting plausibility rules. Because computing a pertinent negative is basically the same as computing a counterfactual explanation, we can use methods from plausible counterfactual explanations [4,18,24,29] for computing plausible pertinent negatives. One might also be able to use some of the ideas of plausible counterfactual explanations for plausible pertinent positives such that the whole contrastive explanation is guaranteed to be plausible -we leave this as future research. Finally, we relax the strict inequality by adding a small positive number ǫ to the left side -by doing this we avoid that the resulting data points lies on the decision boundary (in this case the sign would be undefined): Note that Eq. (20) is linear in δ -thus the final optimization problems become linear programs (LPs) which can be solved very efficiently [7]. In case of a multi-class problem, we would get multiple constraints of the form Eq. (20) -however, since they are all linear, the final problems are still LPs. A.2 Pertinent positives of quadratic models We assume Y = {−1, 1} and h( x) = sign x ⊤ Q x + q ⊤ x + c with Q ∈ S d . We then can rewrite the constraint Eq. 
(6b) as follows: where we definedQ = −y orig Q Again, we relax the strict inequality by adding a small positive number ǫ to the left side: δ ⊤Q δ + δ ⊤ z + c ′ + ǫ ≤ 0 (23) A basic fact from linear algebra states that we can rewrite every real symmetric matrix as the difference of two s.psd. matrices. Furthermore, in case of QDA or Gaussian Naive Bayes such a decomposition appears naturally because in both cases the matrix Q is defined as the difference of two (s.psd.) covariance matrices. Assuming that we decomposeQ as we can rewrite Eq. (23) as follows: Clearly, Eq. (25) is now a difference of convex quadratic functions which turns the resulting optimization problem into a DC for which good approximatation solvers like the Suggest-and-Improve framework exist [22]. In case of a multi-class problem, we would get multiple constraints of the form Eq. (25) -however, since they are all of the same form, the final optimization problems are still DCs.
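The splitting used in Eq. (24) can be made concrete: any symmetric matrix decomposes into a difference of two positive semidefinite matrices via its eigendecomposition, which is the form consumed by the DC (convex-concave) solvers referred to above. With such a split, the quadratic constraint reads as one convex quadratic bounded by another, i.e. a difference of convex functions. The snippet below sketches only this splitting, not the full Suggest-and-Improve loop, and uses an arbitrary symmetric matrix chosen for the example.

import numpy as np

def psd_split(Q):
    # Split a symmetric matrix Q into Q_plus - Q_minus with both parts PSD,
    # using the eigendecomposition; this is the decomposition assumed in Eq. (24).
    evals, evecs = np.linalg.eigh(Q)
    Q_plus = evecs @ np.diag(np.maximum(evals, 0)) @ evecs.T
    Q_minus = evecs @ np.diag(np.maximum(-evals, 0)) @ evecs.T
    return Q_plus, Q_minus

# Example with an arbitrary (indefinite) symmetric matrix.
Q = np.array([[1.0, 2.0], [2.0, -3.0]])
Q_plus, Q_minus = psd_split(Q)
assert np.allclose(Q, Q_plus - Q_minus)
assert np.all(np.linalg.eigvalsh(Q_plus) >= -1e-9)
assert np.all(np.linalg.eigvalsh(Q_minus) >= -1e-9)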
2020-10-07T01:00:46.156Z
2020-10-06T00:00:00.000
{ "year": 2020, "sha1": "668ec4b1d9b4658be735458d945950906bf5ed6a", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2010.02647", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "668ec4b1d9b4658be735458d945950906bf5ed6a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
221150353
pes2o/s2orc
v3-fos-license
On Rings whose Maximal Essential Ideals are Pure Raida D. Mahmood Awreng B. Mahmood raida.1961@uomosul.edu.iq awring2002@yahoo.com College of Computer sciences and Mathematics University of Mosul, Iraq Received on: 06/04/2006 Accepted on: 25/06/2006 ABSTRACT This paper introduces the notion of a right MEP-ring (a ring in which every maximal essential right ideal is left pure) together with some of its basic properties; we also give necessary and sufficient conditions for MEP-rings to be strongly regular rings and weakly regular rings.
1-Introduction An ideal I of a ring R is said to be right (left) pure if for every a ∈ I there exists b ∈ I such that a = ab (a = ba), [1], [2]. Throughout this paper, R is an associative ring with unity. Recall that: 1) R is called reduced if R has no non-zero nilpotent elements. 2) For any element a in R we define the right annihilator of a by r(a) = {x ∈ R : ax = 0}, and likewise the left annihilator l(a). 3) R is strongly regular [4] if for every a ∈ R there exists b ∈ R such that a = a²b. 4) Z, Y, J(R) are respectively the left singular ideal, the right singular ideal and the Jacobson radical of R. 5) A ring R is said to be semi-commutative if xy = 0 implies that xRy = 0, for all x, y ∈ R. It is easy to see that R is semi-commutative if and only if every right (left) annihilator in R is a two-sided ideal [8].
2-MEP-Rings: In this section we introduce the notion of a right MEP-ring with some of its basic properties. Definition 2.1: A ring R is said to be a right MEP-ring if every maximal essential right ideal of R is left pure. Next we give the following theorem, which plays the key role in several of our proofs. Theorem 2.2: Let R be a semi-commutative, right MEP-ring. Then R is a reduced ring. Proof: Let a be a non-zero element of R such that a² = 0, and let M be a maximal right ideal containing r(a). We shall prove that M is an essential ideal. Suppose that M is not essential; then M is a direct summand, and hence there exists 0 ≠ e = e² ∈ R such that M = r(e) (Lemma 2-3 of [8]). Since R is semi-commutative and a ∈ M, then ea = 0, and this implies that e ∈ r(a) ⊆ M = r(e). Therefore e = 0, which is a contradiction. Thus M is an essential right ideal. Since R is a right MEP-ring, M is left pure; hence for a ∈ M there exists b ∈ M such that a = ba, which implies that (1-b) ∈ l(a) = r(a) ⊆ M, so 1 ∈ M and hence M = R, which is a contradiction. Therefore a = 0 and hence R is a reduced ring. Theorem 2.3: If R is a semi-commutative, right MEP-ring, then every essential right ideal of R is an idempotent. Proof: Let I = bR be an essential right ideal of R. For any element b ∈ I, RbR + r(b) is essential in R (Proposition 3 of [5]). If RbR + r(b) ≠ R, let M be a maximal right ideal containing RbR + r(b). Since a ∈ J, there exists an invertible element v in R such that (1-ar)v = 1, so (a-a²r)v = a, which yields a = 0. This proves that J(R) = (0). Recall that a ring R is said to be a MERT-ring [7] if every maximal essential right ideal of R is a two-sided ideal. Theorem 2.5: If R is a MERT, MEP-ring, then Y(R) = (0). Proof: If Y(R) ≠ (0), then by Lemma (7) of [6] there exists 0 ≠ y ∈ Y(R) with y² = 0. Let L be a maximal right ideal of R containing r(y). We claim that L is an essential right ideal of R. Suppose this is not true; then there exists a non-zero ideal T of R such that L ∩ T = (0). Then yRT ⊆ LT ⊆ L ∩ T = 0, which implies T ⊆ r(y) ⊆ L, so L ∩ T = T ≠ 0. This contradiction proves that L is an essential right ideal. Since R is an MEP-ring, L is left pure. Thus for every y ∈ L there exists c ∈ L such that y = cy (L is left pure). Since R is MERT, L is a two-sided ideal; it follows that 1 ∈ L, which is a contradiction. Therefore Y(R) = (0).
3-The connection between MEP-Rings and other rings In this section, we study the connection between MEP-rings and strongly regular rings and weakly regular rings. Following [3], a ring R is right (left) weakly regular if I² = I for each right (left) ideal I of R; equivalently, a ∈ aRaR (a ∈ RaRa) for every a ∈ R. R is weakly regular if it is both right and left weakly regular. The following result is given in [3]: Lemma 3.1: A reduced ring R is right weakly regular if and only if it is left weakly regular. Next we give the following lemma: Lemma 3.2: If R is a semi-commutative ring, then RaR + r(a) is an essential right ideal of R for any a in R. Proof: Given 0 ≠ a ∈ R, assume that RaR + r(a) is not essential. Theorem 3.3: If R is a semi-commutative, right MEP-ring, then R is a reduced weakly regular ring. Proof: By Theorem (2.2), R is a reduced ring. We show that RaR + r(a) = R for any a ∈ R. Suppose that RaR + r(a) ≠ R; then there exists a maximal right ideal M containing RaR + r(a). By a similar method of proof to that used in Theorem (2.2), M is an essential ideal. Now R is an MEP-ring, so a = ba for some b ∈ M, hence (1-b) ∈ l(a) = r(a) ⊆ M and so 1 ∈ M, which is a contradiction. Therefore M = R and hence RaR + r(a) = R for any a ∈ R. In particular 1 = cab + d for some c, b ∈ R, d ∈ r(a). Hence a = acab and R is right weakly regular. Since R is reduced, then by Lemma (3.1) R is a weakly regular ring. Before closing this section, we give the following result. Theorem 3.4: A ring R is strongly regular if and only if R is a semi-commutative, MEP, MERT-ring. Proof: Assume that R is a MEP, MERT-ring, and let 0 ≠ a ∈ R; we shall prove that aR + r(a) = R. If aR + r(a) ≠ R, then there exists a maximal right ideal M containing aR + r(a). Since M is essential, M is left pure. Hence a = ba for some b ∈ M, so 1 ∈ M, a contradiction. Therefore M = R and hence aR + r(a) = R. In particular ar + d = 1 for some r ∈ R, d ∈ r(a). So a = a²r. Therefore R is strongly regular. Conversely: assume that R is strongly regular; then by [3], R is regular and reduced. Also R is MEP and semi-commutative.
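Several of the proofs above pass from a = ba to (1-b) ∈ l(a) = r(a) without comment. In a reduced ring the left and right annihilators of any element coincide; in the semi-commutative arguments the paper instead leans on annihilators being two-sided (item 5 of the introduction). The following short verification of the reduced-ring fact is added here for the reader and is not part of the original paper.

% In a reduced ring R, left and right annihilators of any element coincide.
% Claim: for a, x \in R, \quad ax = 0 \iff xa = 0, hence r(a) = l(a).
\begin{align*}
ax = 0 \;&\Longrightarrow\; (xa)^2 = x(ax)a = 0
        \;\Longrightarrow\; xa = 0 \quad\text{(R reduced: no nonzero nilpotents)},\\
xa = 0 \;&\Longrightarrow\; (ax)^2 = a(xa)x = 0
        \;\Longrightarrow\; ax = 0.
\end{align*}
% Consequently, if a = ba then (1-b)a = 0, i.e. (1-b) \in l(a) = r(a).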
2020-02-13T09:17:32.325Z
2007-07-01T00:00:00.000
{ "year": 2007, "sha1": "2e4a98c28c373f44015281ea5ca75241eb84dc18", "oa_license": "CCBY", "oa_url": "https://csmj.mosuljournals.com/article_163995_6d822e2ca46cf5ad8ba3a72a5f167060.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5cffe8474100d621bf87e172cf434e084b60f1bb", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
248297005
pes2o/s2orc
v3-fos-license
Long‐term open‐label perampanel: Generalized tonic–clonic seizures in idiopathic generalized epilepsy Abstract Objective Assess the longer‐term efficacy and safety of adjunctive perampanel (up to 12 mg/day) in patients aged ≥12 years with generalized tonic–clonic (GTC) seizures from the Open‐label Extension (OLEx) Phase of Study 332 to determine whether responses obtained during the Core Study are maintained during long‐term treatment. Methods Patients with GTC seizures previously enrolled in a randomized placebo‐controlled trial of perampanel could enter an OLEx Phase comprising 6‐week blinded conversion (during which patients previously randomized to placebo‐switched to perampanel) and up to 136‐week maintenance periods (maximum perampanel dose of 12 mg/day). A 4‐week follow‐up period was completed by all patients after the last on‐treatment visit during the OLEx. We assessed seizure frequency outcomes from preperampanel baseline and the Core Study Pre‐randomization Phase, retention rates, doses selected, and treatment‐emergent adverse events (TEAEs). Results Overall, 138 patients entered the OLEx. Median percent reductions in GTC seizures per 28 days from preperampanel were 77% (Weeks 1‐13) and 90% (Weeks 40‐52). Retention rates were 88% (6 months) and 75% (12 months). Seizure‐freedom rates were maintained for at least 2 years regardless of prior treatment received during the Core Study. Most common modal daily dose was >4‐8 mg/day (n = 93). Across the Core and OLEx Phases, 120 (87%) patients experienced TEAEs; the most common was dizziness. Significance Perampanel was generally well‐tolerated, and the TEAEs reported here are consistent with the known safety profile of perampanel. Perampanel offers a long‐term treatment option for patients (aged ≥12 years) with GTC seizures. | INTRODUCTION Limited treatment options are available for patients with treatment-resistant generalized tonic-clonic (GTC) seizures; thus, treatment with a narrow range of anti-seizure medications (ASMs) is often the only option for these patients. 1-3 Therefore, it is essential to continue to investigate ASMs with novel mechanisms of action to improve treatment outcomes for patients with GTC seizures. Perampanel is a once-daily oral ASM approved for use in focal-onset seizures (previously partial-onset seizures), with or without progression to bilateral tonic-clonic seizures (previously secondarily generalized seizures), and GTC seizures (previously primary generalized tonicclonic seizures). 4 The approval of perampanel for the adjunctive treatment of GTC seizures was based on the randomized, double-blind, placebo-controlled, Phase 3 Study 332 in patients (aged ≥12 years) with idiopathic generalized epilepsy (IGE) and GTC seizures. 1 Patients who completed the Double-blind Phase of Study 332 could enter an Open-label Extension (OLEx) Phase. Here, we investigated GTC seizure outcomes during longer-term treatment with perampanel (up to 12 mg/ day) in patients who participated in the OLEx Phase of Study 332 to determine whether responses obtained during the Double-blind Phase are maintained during the OLEx. We report on the doses that were most likely to be selected for long-term use, and address longer-term tolerability and safety outcomes. We also evaluated retention rates, which is an outcome that addresses both efficacy and tolerability. Study Pre-randomization Phase, retention rates, doses selected, and treatmentemergent adverse events (TEAEs). 
KEYWORDS: epilepsy, generalized tonic-clonic seizures, Open-label Extension, perampanel.
Key points:
• The long-term safety/tolerability of adjunctive perampanel for GTC seizures was consistent with that observed in the double-blind phase.
• Seizure control was maintained for ≥2 years with adjunctive perampanel ≤12 mg/day in patients with uncontrolled GTC seizures in IGE.
• The most common modal perampanel daily dose was >4-8 mg/day.
• Long-term adjunctive perampanel may have a favorable risk-benefit ratio in patients aged ≥12 years with uncontrolled GTC seizures.
Regulations Part 21. Trial protocol, amendments, and informed consent were reviewed by national regulatory authorities in each country and independent ethics committees or institutional review boards for each site. All patients gave written informed consent before participation. 1 | Study design Patients (aged ≥12 years) with GTC seizures in IGE who completed the Pre-randomization Phase (screening/baseline) and the double-blind, placebo-controlled Randomization Phase (4-week Titration; 13-week Maintenance) of Study 332 (i.e., the Core Study), and who were otherwise eligible, had the option to enter the OLEx Phase (Figure 1A). | OLEx Phase The OLEx Phase was terminated upon commercial availability of perampanel in the country where the patient resided. During the Conversion Period, all patients and investigators remained blinded to treatment received in the preceding Core Study (perampanel 2-8 mg/day or placebo). Patients who had been assigned to placebo in the Core Study were started on blinded treatment with perampanel 2 mg/day and up-titrated weekly in 2-mg increments to the optimal dose per the investigator's discretion. Patients assigned to the perampanel arm in the Core Study continued to receive perampanel once daily on a blinded basis at the dose received during the Maintenance Period of the Core Study. Per the investigator's judgment, the dose of perampanel was decreased in the event of intolerance, and the dose of perampanel was increased up to 12 mg/day if needed for better seizure control until the optimal dose was found. Patients whose dose had been decreased could have their dose increased again once tolerability improved. At the onset of the OLEx Maintenance Period, patients were unblinded to study treatment and remained on the optimal perampanel dose established during the blinded Conversion Period. Dose adjustment during the Maintenance Period was allowed if medically necessary per the investigator's discretion. All perampanel dose adjustments (upwards or downwards) were done in 2-mg increments, and patients who did not tolerate a minimum dose of 2 mg/day during the OLEx Phase were discontinued from the study. The maximum dose of perampanel allowed during the OLEx Phase was 12 mg/day. Patients entered the OLEx Phase on the same concomitant ASMs as they were receiving at the end of the Core Study.
During the OLEx Maintenance Period, changes to concomitant ASMs (addition, deletion, or dose adjustment) were allowed, with care taken when switching between an inducer and noninducer ASM. Duration of participation in Part B of the OLEx Phase was dependent upon the patient's total number of weeks of exposure to perampanel, and the timing of Part A completion relative to the Core Study data review. Patients who elected to participate in Part B were treated until they had at least 52 weeks of total exposure to perampanel. If a positive risk-benefit assessment for the treatment of GTC seizures was demonstrated, patients in a country where an extended access program (EAP) had been activated ended treatment under this protocol and were given the option to enroll in the EAP. If an EAP had not been activated in their country, patients ended treatment under this protocol and continued to the Follow-up Period of the OLEx Phase. Patients who elected not to participate in Part B ended treatment and continued to the Follow-up Period of the OLEx Phase. | Efficacy assessments Efficacy analyses were based on the Full Analysis Set (FAS), which comprised all patients who were eligible to participate in the OLEx Phase, received ≥1 dose of perampanel in the OLEx, and had baseline seizure frequency data and ≥1 observation of valid seizure diary data during the perampanel treatment duration. Seizure diary data were recorded daily until the end of study Part A up to 55 weeks (diary collection was stopped at the start of study Part B). Any days with missing diary entries were classed as seizure-free days. Efficacy assessments included median percent change in seizure frequency per 28 days, 50% responder rates (defined as the proportion of patients with a ≥50% reduction in seizure frequency per 28 days), and seizure-freedom rates, all relative to preperampanel baseline and the Core Study Pre-randomization Phase. In addition, analyses were performed for patients who achieved freedom from GTC or all seizures for a period of at least 6 or 12 months, stratified by treatment received in the Core Study (prior placebo or prior perampanel). Due to the potential bias resulting from those patients who had a better response tending to remain in the study for a longer duration, a post hoc analysis was performed in which populations who had remained in the study for specific durations were assessed to see if, for these populations, efficacy was maintained over time. For this analysis, the OLEx population was subdivided into those that remained in the study for at least 26 weeks (n = 125), 39 weeks (n = 120), 1 year (n = 109), or 2 years (n = 44). | Safety assessments Safety assessments were based on the Safety Analysis Set (SAS), which included patients who received ≥1 dose of perampanel in the OLEx Phase and had any on-treatment safety data during this phase. Retention rates on perampanel at 6 months, 1 year, and 2 years were assessed in the SAS, where retention rate was defined as the number of patients on treatment for at least x months/the number of patients who could have been on treatment for at least x months. Treatment-emergent adverse events (TEAEs), serious TEAEs, and treatment discontinuation were all monitored throughout the study. TEAEs of special interest were also assessed using Medical Dictionary for Regulatory Activities Version 16.1. (MedDRA) Standardized MedDRA Queries (SMQs). 
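The efficacy and retention endpoints defined in this section reduce to simple arithmetic. The following is an illustrative sketch of those definitions, not taken from the study's analysis code; all function and variable names are hypothetical.

```python
from typing import List

def seizures_per_28_days(seizure_counts: List[int], n_days: int) -> float:
    """Normalize a raw diary seizure count over an observation window to a 28-day rate."""
    return sum(seizure_counts) / n_days * 28

def percent_change(baseline_rate: float, treatment_rate: float) -> float:
    """Percent change in 28-day seizure frequency relative to baseline (negative = reduction)."""
    return (treatment_rate - baseline_rate) / baseline_rate * 100

def responder_rate_50(baseline_rates: List[float], treatment_rates: List[float]) -> float:
    """Proportion of patients with a >=50% reduction in seizure frequency per 28 days."""
    responders = sum(
        1 for b, t in zip(baseline_rates, treatment_rates) if b > 0 and (b - t) / b >= 0.5
    )
    return responders / len(baseline_rates)

def retention_rate(n_on_treatment: int, n_could_have_been: int) -> float:
    """Retention = patients on treatment for at least x months / patients who could have been."""
    return n_on_treatment / n_could_have_been

# Example: the 6-month retention figure reported later in the paper (122 of 138 patients).
print(round(retention_rate(122, 138) * 100, 1))  # 88.4
```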
TEAEs included adverse events (AEs) that occurred from the first day of perampanel administration (in the Core Study or OLEx Phase) to 30 days after the last dose of perampanel, or that were present before the first day of perampanel administration but worsened in severity during the study. TEAEs were considered serious if they were life-threatening (e.g., suicide attempt), or involved hospitalization or prolonged hospitalization. Suicidality (suicidal ideation and behavior) was measured using the Columbia Suicide Severity Rating Scale (C-SSRS) at each study visit. C-SSRS responses were reviewed by the investigator to determine whether any positive results constituted a TEAE of suicidality; only the events that were deemed a TEAE of suicidality are reported and discussed here. Prior and concomitant medication usage, clinical laboratory tests (chemistry, hematology, and urinalysis), vital signs, and changes in physical and neurological examinations were also assessed. In addition, a withdrawal questionnaire was administered to assess potential withdrawal signs and symptoms that might be associated with the discontinuation of perampanel. | Statistical analyses All data are presented descriptively, with summary statistics presented for continuous endpoints and frequency counts presented for categorical endpoints. | Data accessibility statement The data that support the findings of this study are available from the corresponding author upon reasonable request. | Patients In total, 140 patients completed the Core Study and were eligible to enter the OLEx Phase. Of these, 138 patients entered the OLEx Phase (70 placebo, 68 perampanel), representing 98.6% of the patients who completed the Core Study ( Figure 1B). All 138 patients in the OLEx Phase received ≥1 dose of perampanel and were included in the SAS. Table 1 shows baseline demographics and clinical characteristics for patients in the FAS/SAS. There was an observed female predominance in the study population (57.2% female vs 42.8% male) similar to previous IGE studies. 5,6 Overall, 34.8% of patients in the SAS were taking one concomitant ASM, 44.2% were taking two, and 20.3% were taking three at the Core Study baseline. The most common ASMs were lamotrigine (41.3%), valproic acid (32.6%), levetiracetam (29.0%), topiramate (16.7%), zonisamide (10.9%), and extended-release valproate (10.1%); all other background ASMs were taken by less than 10% of patients (Table 1). Of note, 13 (9.4%) patients were taking carbamazepine, 7 (5.1%) were taking phenytoin, and 5 (3.6%) were taking oxcarbazepine during the OLEx Phase. | Efficacy outcomes 3.2.1 | Efficacy relative to preperampanel baseline Across the entire perampanel treatment duration (Core Study and OLEx Phase), median percent reductions in seizure frequency per 28 days achieved during the first 3-6 months of adjunctive perampanel treatment relative to preperampanel baseline were maintained for at least 2 years for GTC seizures ( | Efficacy relative to Core Study Prerandomization Phase In patients who received perampanel during the Core Study, median percent reductions in seizure frequency per 28 days achieved during the Core Study Maintenance Phase relative to the Core Study Pre-randomization Phase were maintained during long-term treatment in the OLEx for GTC seizures (Figure 3Ai) and all seizures (Figure 3Aii). 
In patients who received placebo during the Core Study, median percent reductions in seizure frequency per 28 days were greater during the OLEx Phase as compared with the Core Study (Figures 3Ai,Aii). Fifty-percent responder rates were also maintained from the Core Study to the OLEx Phase, with over half of patients receiving placebo or perampanel in the Core Study achieving a ≥ 50% reduction in GTC seizure frequency at each treatment interval ( Figure 3Bi) and at least 40% of patients achieving a ≥ 50% reduction in the frequency of all seizures at each treatment interval ( Figure 3Bii). In addition, seizure-freedom rates were also maintained from the Core Study to the OLEx Phase (Figures 3Ci,Cii). Overall, 57.4% and 33.8% of patients who received perampanel during the Core Study and OLEx Phase achieved seizure freedom from GTC seizures for a period of at least 6 or 12 months, respectively ( Figure 3Di). In patients who received placebo during the Core Study before converting to perampanel during the OLEx Phase, 48.6% and 25.7% of patients were GTC seizure-free for at least 6 and 12 months, respectively ( Figure 3Di). For all seizure types, 39.7% and 22.1% of patients who received perampanel during the Core Study and 35.7% and 17.1% of patients who received placebo during the Core Study were free from all seizures for at least 6 or 12 months, respectively (Figure 3Dii). Among patients who received prior treatment with placebo during the Core Study, the median percent reduction in GTC seizure frequency and both the 50% responder and seizure-freedom rates increased to a level similar to that for patients who received treatment with perampanel during the Core Study by the end of the OLEx blinded Conversion Period. | Perampanel exposure The cumulative extent of exposure to perampanel across the Core Study and OLEx Phase is summarized by modal daily dose for the SAS in Figure S1. The mean (standard deviation [SD]) duration of perampanel exposure was 83.9 (38.4) weeks (range: 2.4-161.7 weeks), and 79.0% of patients in the SAS received more than 52 weeks of perampanel treatment. The total exposure to perampanel was 11 578.9 patient-weeks. | Perampanel dose Of 138 patients treated with perampanel during the OLEx Phase, two patients received a modal daily dose of <4 mg/ day, nine received a modal daily dose of 4 mg/day, 93 received a modal daily dose of >4-8 mg/day, and 34 received a modal daily dose of >8-12 mg/day. The mean (SD) dose of perampanel across the OLEx Phase was 8.0 (2.0) mg/ day (range: 2-12 mg/day) for the SAS. The mean (SD) dose during the OLEx Conversion Period was 6.8 (1.4) mg/ day (range: 3-11 mg/day) and for the OLEx Maintenance Period was 8.2 (2.1) mg/day (range: 2-12 mg/day). It should be noted that the mean dose during conversion was lower Period | Retention rates The retention rate at 6 months was 88.4% (n = 122/138), at 1 year was 74.6% (n = 103/138) and at 2 years was 49.2% (n = 31/63). Note that study closure and other administrative reasons affected the number of patients included in the calculation of retention rate at 2 years. At 6 months and 1 year, retention rates were slightly higher in the prior perampanel group (92.6% [n = 63/68] and 76.5% Serious TEAEs were reported in 18 (13.0%) patients, with the highest incidence occurring in patients receiving a modal dose of >8-12 mg/day (n = 8/34 [23.5%]). The most common serious TEAEs were convulsion (n = 3 [2.2%]) and suicide attempt (n = 2 [1.4%]). 
All but two of the nonfatal serious TEAEs had been resolved by the end of the study. In addition to two deaths that occurred during the Core Study, there was one death during perampanel exposure in the OLEx Phase that occurred in a patient who received 6 mg/day during the Core Study and 10 mg/day during the OLEx Phase. This death occurred 64 days after the last dose on study day 380 (day 261 of the OLEx). The cause of death was due to treatment-emergent acute pancreatitis and was assessed by the investigator as not related to study treatment. Treatment-emergent adverse events resulting in discontinuation of perampanel treatment occurred in 13 (9.4%) patients. The two events that resulted in the discontinuation of ≥2 patients were dizziness and suicide attempt ( Table 2). Two of the three patients who discontinued due to TEAEs of dizziness were receiving 8 mg/ day, and the third patient was receiving 12 mg/day. The two patients who discontinued due to TEAEs of suicide attempt were receiving 8 and 12 mg/day at the time of the suicide attempt. Regarding TEAEs of special interest, eight patients (5.8%) who experienced TEAEs of suicidality as determined by the investigator: five patients experienced suicidal ideation, two patients attempted suicide, and one patient engaged in self-injurious behavior. One of the five patients who experienced a TEAE of suicidal ideation had a history of anxiety, bipolar disorder, and depression prior to the study. As noted above, the two suicide attempts were serious and resulted in treatment discontinuation; all events were resolved. One patient who attempted suicide had reported a serious TEAE of depression prior to the suicidality event and received citalopram for the treatment of depression. Treatment-emergent adverse events related to alertness and cognition were reported in 35 (25.4%) patients; the most common of these events (≥2%) were somnolence (n = 18 [ Treatment-emergent adverse events related to hostility/aggression were reported in 8 (5.8%) patients using narrow SMQ terms, and 30 (21.7%) patients using narrow and broad SMQ terms. The most common TEAEs related to hostility/aggression using the narrow SMQ criteria were aggression (n = 4 [2.9%]) and anger (n = 3 [2.2%]); the most common using the narrow and broad SMQ criteria was irritability (n = 19 [13.8%]). One event of aggression was a serious TEAE. TEAEs related to psychosis and psychotic disorders were reported in 3 (2.2%) patients using narrow SMQ terms and 8 (5.8%) patients using narrow and broad SMQ terms. The most common of these using the narrow SMQ criteria was visual hallucination (n = 2 [1.4%]), and using the narrow and broad SMQ criteria was abnormal behavior (n = 2 [1.4%]). There were no events that were considered serious and none that led to discontinuation. Treatment-emergent adverse events related to status epilepticus or convulsions occurred in 10 (7.2%) patients. Four of these TEAEs were serious (three patients receiving 8 mg/day and one receiving 10 mg/day). None of these TEAEs resulted in treatment discontinuation. TEAEs related to drug-related hepatic disorder abnormalities were reported in four (2.9%) patients: two patients experienced events of increased aspartate aminotransferase (one patient receiving 12 mg/day and one receiving 2 mg/day); one patient experienced hepatopathy (2 mg/day); and one patient experienced hyperammonemia (10 mg/day). None of these events were serious or resulted in treatment discontinuation, and all patients recovered. 
| Laboratory results and vital signs There were no clinically important mean changes in hematology or clinical chemistry laboratory values during exposure to perampanel in the Core Study and/or OLEx Phase. Mean changes from baseline to the end of treatment in blood pressure and heart rate across all perampanel doses were less than or equal to ±3.8 mmHg or 6.4 beats per minute, respectively. Across the entire perampanel treatment duration, 39.1% of patients had a clinically notable increase in body weight and 13.0% had a clinically notable decrease in body weight. At the end of treatment, the mean change from baseline in body weight across all doses was 2.5 kg (range: −9-20.6). | DISCUSSION Tonic-clonic seizures are among the most serious and harmful seizures and are associated with injury and sudden unexpected death in epilepsy. 1,2,7-11 They are also one of the only seizure types in which occurrence has been associated with cognitive decline. 2,12 Perampanel was previously shown to be efficacious and well-tolerated in the randomized, Double-blind Phase of Study 332. 1 However, the aim of the OLEx study was to assess whether seizure reductions are enduring, particularly seizure freedom since these data are important to determine the long-term efficacy of an ASM. When assessing long-term outcomes in open-label studies, it is important to account for study drop-outs, as populations who stay longer tend to be enriched for patients with a better response. We addressed this by looking at patients with 26 weeks, 39 weeks, 1 year, or 2 years of perampanel exposure and assessing whether seizure frequency increased, decreased, or remained the same over time. Our results show that patients in each cohort experienced reductions in seizure frequencies for both GTC seizures and all seizures compared with pretreatment and that this effect was maintained over time. We also determined that efficacy established in the Core Study for the perampanel arm was maintained during the OLEx Phase, while efficacy was improved for the placebo arm when these patients were transitioned to perampanel during the OLEx Phase. Furthermore, by the end of the blinded Conversion Period of the OLEx Phase, patients who had received prior treatment with placebo Most common (≥10% of total patients) TEAEs, c n (%) during the Core Study had similar efficacy as patients who received perampanel, suggesting that a delay in the initiation of adjunctive perampanel treatment does not negatively affect long-term seizure control. The high seizure requirement for this study necessitated by the study design (three observable seizures during 8 weeks of baseline) could impact the generalizability of patients with less frequent seizures. A multicenter, retrospective, observational study showed that perampanel was associated with improved seizure outcomes, irrespective of seizure type, in the clinical care of patients with IGE, and 4 mg was the most common dose. 13 However, interpretations of the use of perampanel ≤4 mg/day in Study 332 may be limited due to the small number of patients (n = 11). Given the use of perampanel ≤4 mg/day may be of interest to patients with less severe IGE, further evaluation of perampanel in the clinic will be helpful in this regard. Taken together, our data show that perampanel is efficacious for the long-term treatment of GTC seizures in patients with IGE. 
Since some ASMs have previously been shown to aggravate certain seizure types in IGE, such as myoclonic and absence seizures, [14][15][16] it is also important to assess the effects of ASMs on these other seizure types. A recent post hoc analysis based on Study 332 showed that the median percent reduction in the frequency of myoclonic seizures per 28 days from the Core Study Prerandomization Phase was 52.5% (placebo) vs 24.5% (perampanel); for absence seizures, this was 7.6% (placebo) vs 41.2% (perampanel). 17 Seizure-freedom rates of myoclonic seizures were 13.0% (placebo) vs 16.7% (perampanel); for absence seizures, these were 12.1% (placebo) vs 22.2% (perampanel). Responses during the Core Study were maintained during long-term (>104 weeks) adjunctive perampanel treatment, suggesting that perampanel does not worsen myoclonic or absence seizures in patients with GTC seizures in IGE. 17 During the OLEx Phase of Study 332, no new AE signals were uncovered compared with the Core Study and the known safety profile of perampanel. 1,4 Furthermore, serious AE profiles were similar to those observed during long-term treatment in the focal epilepsy population. 18 With regard to the eight patients who experienced TEAEs of suicidality (as determined by the investigator), one patient had a medical history of depression prior to the study, and one patient experienced a serious TEAE of depression prior to the event of suicidal attempt. Even though the incidence of suicidality following perampanel treatment is low, patients receiving perampanel should be monitored for signs of psychiatric TEAEs as recommended in the class label of ASMs; and perampanel dose adjustments may be considered to manage symptoms of psychiatric TEAEs. A limitation of this study was the open-label design, meaning that no control arm was included. In addition, the study presented some confounders, including changes in background ASMs from baseline to the end of treatment (summarized in Table S1) and the potential association between treatment duration and tolerability, which could have influenced the results. Another limitation of this study was that participants were predominantly Caucasian or Asian, which may limit the generalizability of the findings to other groups. | CONCLUSIONS Seizure control established in the Core Study was maintained for at least 2 years during treatment with adjunctive perampanel up to 12 mg/day in patients (aged ≥12 years) with inadequately controlled GTC seizures in IGE. Relative to data from the Core Study, perampanel administration was similarly safe and well-tolerated, and the safety profile was consistent with that reported for double-blind, placebo-controlled studies in patients with focal-onset seizures. These data suggest that long-term adjunctive perampanel has a favorable riskbenefit ratio in patients with inadequately controlled GTC seizures.
2022-04-22T06:23:04.392Z
2022-04-20T00:00:00.000
{ "year": 2022, "sha1": "0bce352fe831ccdd571e4bc78763e438633ca05b", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "ScienceParsePlus", "pdf_hash": "5954eedcea7ee9f00b8fe2dc702c3015d2771bad", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
220830987
pes2o/s2orc
v3-fos-license
A Unified Framework of Surrogate Loss by Refactoring and Interpolation We introduce UniLoss, a unified framework to generate surrogate losses for training deep networks with gradient descent, reducing the amount of manual design of task-specific surrogate losses. Our key observation is that in many cases, evaluating a model with a performance metric on a batch of examples can be refactored into four steps: from input to real-valued scores, from scores to comparisons of pairs of scores, from comparisons to binary variables, and from binary variables to the final performance metric. Using this refactoring we generate differentiable approximations for each non-differentiable step through interpolation. Using UniLoss, we can optimize for different tasks and metrics using one unified framework, achieving comparable performance compared with task-specific losses. We validate the effectiveness of UniLoss on three tasks and four datasets. Code is available at https://github.com/princeton-vl/uniloss. Introduction Many supervised learning tasks involve designing and optimizing a loss function that is often different from the actual performance metric for evaluating models. For example, cross-entropy is a popular loss function for training a multi-class classifier, but when it comes to comparing models on a test set, classification error is used instead. Why not optimize the performance metric directly? Because many metrics or output decoders are non-differentiable and cannot be optimized with gradientbased methods such as stochastic gradient descent. Non-differentiability occurs when the output of the task is discrete (e.g. class labels), or when the output is continuous but the performance metric has discrete operations (e.g. percentage of real-valued predictions within a certain range of the ground truth). To address this issue, designing a differentiable loss that serves as a surrogate to the original metric is standard practice. For standard tasks with simple output and metrics, there exist well-studied surrogate losses. For example, cross-entropy or hinge loss for classification, both of which have proven to work well in practice. training for conventional losses. To avoid the non-differentiability, conventional methods optimize a manually-designed differentiable loss function instead during training. Bottom: (a) refactored testing in UniLoss. We refactor the testing so that the nondifferentiability exists only in Sign(·) and the multi-variant function. (b) training in UniLoss with the differentiable approximation of refactored testing. σ(·) is the sigmoid function. We approximate the non-differentiable components in the refactored testing pipeline with interpolation methods. However, designing surrogate losses can sometimes incur substantial manual effort, including a large amount of trial and error and hyper-parameter tuning. For example, a standard evaluation of single-person human pose estimationpredicting the 2D locations of a set of body joints for a single person in an image-involves computing the percentage of predicted body joints that are within a given radius of the ground truth. This performance metric is nondifferentiable. Existing work instead trains a deep network to predict a heatmap for each type of body joints, minimizing the difference between the predicted heatmap and a "ground truth" heatmap consisting of a Gaussian bump at the ground truth location [28,17]. 
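To make the manually designed target concrete, the sketch below builds the kind of Gaussian-bump "ground truth" heatmap used with the MSE loss; σ = 1 and a bump size of 7 follow the commonly chosen values mentioned later in the paper, while the function name, image size, and joint location are hypothetical.

```python
import numpy as np

def gaussian_target_heatmap(height, width, joint_xy, sigma=1.0, bump_size=7):
    """Target heatmap with a 2D Gaussian bump centered on the ground-truth joint.

    Pixels outside a (bump_size x bump_size) window around the joint stay zero,
    matching the truncated bump typically used in pose-estimation pipelines.
    """
    heatmap = np.zeros((height, width), dtype=np.float32)
    cx, cy = joint_xy
    r = bump_size // 2
    for y in range(max(0, cy - r), min(height, cy + r + 1)):
        for x in range(max(0, cx - r), min(width, cx + r + 1)):
            heatmap[y, x] = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    return heatmap

# Hypothetical usage: a 64x64 heatmap with the joint at pixel (20, 31).
target = gaussian_target_heatmap(64, 64, joint_xy=(20, 31))
print(target.max(), target.shape)  # 1.0 (64, 64)
```

The point of the sketch is that the target itself encodes design choices (σ, bump size) that must be tuned by hand.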
The decision for what error function to use for comparing heatmaps and how to design the "ground truth" heatmaps are manually tuned to optimize performance. This manual effort in conventional losses is tedious but necessary, because a poorly designed loss can be misaligned with the final performance metric and lead to ineffective training. As we show in the experiment section, without carefullytuned loss hyper-parameters, conventional manual losses can work poorly. In this paper, we seek to reduce the efforts of manual design of surrogate losses by introducing a unified surrogate loss framework applicable to a wide range of tasks. We provide a unified framework to mechanically generate a surrogate loss given a performance metric in the context of deep learning. This means that we only need to specify the performance metric (e.g. classification error) and the inference algorithm-the network architecture, a "decoder" that converts the network output (e.g. continuous scores) to the final output (e.g. discrete class labels), and an "evaluator" that converts the labels to final metric-and the rest is taken care of as part of the training algorithm. We introduce UniLoss (Fig. 1), a unified framework to generate surrogate losses for training deep networks with gradient descent. We maintain the basic algorithmic structure of mini-batch gradient descent: for each mini-batch, we perform inference on all examples, compute a loss using the results and the ground truths, and generate gradients using the loss to update the network parameters. Our novelty is that we generate all the surrogate losses in a unified framework for various tasks instead of manually designing it for each task. The key insight behind UniLoss is that for many tasks and performance metrics, evaluating a deep network on a set of training examples-pass the examples through the network, the output decoder, and the evaluator to the performance metric-can be refactored into a sequence of four transformations: the training examples are first transformed to a set of real scores, then to some real numbers representing comparisons (through subtractions) of certain pairs of the real-valued scores, then to a set of binary values representing the signs of the comparisons, and finally to a single real number. Note that the four transforms do not necessarily correspond to running the network inference, the decoder, and the evaluator. Take multi-class classification as an example, the training examples are first transformed to a set of scores (one per class per example), and then to pairwise comparisons (subtractions) between the scores for each example (i.e. the argmax operation), and then to a set of binary values, and finally to a classification accuracy. The final performance metric is non-differentiable with respect to network weights because the decoder and the evaluator are non-differentiable. But this refactoring allows us to generate a differentiable approximation of each nondifferentiable transformation through interpolation. Specifically, the transformation from comparisons to binary variables is nondifferentiable, we can approximate it by using the sigmoid function to interpolate the sign function. And the transformation from binary variables to final metric may be nondifferentiable, we can approximate it by multivariate interpolation. The proposed UniLoss framework is general and can be applied to various tasks and performance metrics. 
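To make the four transformations concrete for multi-class classification, the minimal PyTorch sketch below (an illustration, not the authors' released implementation) computes the score comparisons f, applies the non-differentiable sign-and-AND steps to recover exact accuracy, and shows the sigmoid relaxation of the binary variables that feeds the interpolated metric.

```python
import torch

def comparisons(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """f: scores (n, p) -> differences s[i, y_i] - s[i, j] for every j != y_i, shape (n, p-1)."""
    n, p = scores.shape
    gt = scores.gather(1, labels.view(-1, 1))                    # score of the ground-truth class
    diffs = gt - scores                                          # (n, p), zero at the ground-truth column
    mask = torch.arange(p, device=scores.device).unsqueeze(0) != labels.unsqueeze(1)
    return diffs[mask].view(n, p - 1)

def exact_accuracy(scores, labels):
    """g(h(f(.))): an image counts as correct iff every comparison is positive."""
    b = (comparisons(scores, labels) > 0).all(dim=1).float()     # h (sign), then per-image AND
    return b.mean()                                              # g: average over the mini-batch

def soft_binaries(scores, labels):
    """Differentiable relaxation h~: sigmoid of the comparisons instead of their sign."""
    return torch.sigmoid(comparisons(scores, labels))

# The soft binaries feed the interpolated metric g~ (anchor-based interpolation, sketched later);
# exact_accuracy is only usable for evaluation because the sign/AND steps block gradients.
scores = torch.randn(4, 10, requires_grad=True)
labels = torch.tensor([3, 1, 0, 7])
print(exact_accuracy(scores, labels).item(), soft_binaries(scores, labels).shape)
```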
Given any performance metric involving discrete operations, to the best of our knowledge, the discrete operations can always be refactored to step functions that first make some differentiable real-number comparisons non-differentiable, and any following operations, which fit in our framework. Example tasks include classification scenarios such as accuracy in image classification, precision and recall in object detection; ranking scenarios such as average precision in binary classification, area under curve in image retrieval; pixel-wise prediction scenarios such as mean IOU in segmentation, PCKh in pose estimation. To validate its effectiveness, we perform experiments on three representative tasks from three different scenarios. We show that UniLoss performs well on a classic classification setting, multi-class classification, compared with the well-established conventional losses. We also demonstrate UniLoss's ability in a ranking scenario that evolves ranking multiple images in an evaluation set: average precision (area under the precision-recall curve) in unbalanced binary classification. In addition, we experiment with pose estimation where the output is structured as pixel-wise predictions. Our main contributions in this work are: -We present a new perspective of finding surrogate losses: evaluation can be refactored as a sequence of four transformations, where each nondifferentiable transformation can be tackled individually. -We propose a new method: a unified framework to generate losses for various tasks reducing task-specific manual design. -We validate the new perspective and the new method on three tasks and four datasets, achieving comparable performance with conventional losses. Direct Loss Minimization The line of direct loss minimization works is related to UniLoss because we share a similar idea of finding a good approximation of the performance metric. There have been many efforts to directly minimize specific classes of tasks and metrics. For example, [26] optimized ranking metrics such as Normalized Discounted Cumulative Gain by smoothing them with an assumed probabilistic distribution of documents. [11] directly optimized mean average precision in object detection by computing "pseudo partial derivatives" for various continuous variables. [18] explored to optimize the 0-1 loss in binary classification by search-based methods including branch and bound search, combinatorial search, and also coordinate descent on the relaxations of 0-1 losses. [16] proposed to improve the conventional cross-entropy loss by multiplying a preset constant with the angle in the inner product of the softmax function to encourage large margins between classes. [7] proposed an end-to-end optimization approach for speech enhancement by directly optimizing short-time objective intelligibility (STOI) which is a differentiable performance metric. In addition to the large algorithmic differences, these works also differ from ours in that they are tightly coupled with specific tasks and applications. [9] and [24] proved that under mild conditions, optimizing a max-margin structured-output loss is asymptotically equivalent to directly optimizing the performance metrics. Specifically, assume a model in the form of a differentiable scoring function S(x, y; w) : X × Y → R that evaluates the compatibility of output y with input x. 
During inference, they predict the y w with the best score: During training, in addition to this regular inference, they also perform the loss-augmented inference [29,9]: where ξ is the final performance metric (in terms of error), and is a small time-varying weight. [24] generalized this result from linear scoring functions to arbitrary scoring functions, and developed an efficient loss-augmented inference algorithm to directly optimize average precision in ranking tasks. While above max-margin losses can ideally work with many different performance metrics ξ, its main limitation in practical use is that it can be highly nontrivial to design an efficient algorithm for the loss-augmented inference, as it often requires some clever factorization of the performance metric ξ over the components of the structured output y. In fact, for many metrics the loss-augmented inference is NP-hard and one must resort to designing approximate algorithms, which further increases the difficulty of practical use. In contrast, our method does not demand the same level of human ingenuity. The main human effort involves refactoring the inference code and evaluation code to a particular format, which may be further eliminated by automatic code analysis. There is no need to design a new inference algorithm over discrete outputs and analyze its efficiency. The difficulty of designing loss-augmented inference algorithms for each individual task makes it impractical to compare fairly with max-margin methods on diverse tasks, because it is unclear how to design the inference algorithms. Recently, some prior works propose to directly optimize the performance metric by learning a parametric surrogate loss [13,30,22,8,6]. During training, the model is updated to minimize the current surrogate loss while the parametric surrogate loss is also updated to align with the performance metric. Compared to these methods, UniLoss does not involve any learnable parameters in the loss. As a result, UniLoss can be applied universally across different settings without any training, and the parametric surrogate loss has to be trained separately for different tasks and datasets. Reinforcement Learning inspired algorithms have been used to optimize performance metrics for structured output problems, especially those that can be formulated as taking a sequence of actions [21,15,4,31,33]. For example, [15] use policy gradients [25] to optimize metrics for image captioning. We differ from these approaches in two key aspects. First, we do not need to formulate a task as a sequential decision problem, which is natural for certain tasks such as text generation, but unnatural for others such as human pose estimation. Second, these methods treat performance metrics as black boxes, whereas we assume access to the code of the performance metrics, which is a valid assumption in most cases. This access allows us to reason about the code and generate better gradients. Surrogate Losses There has been a large body of literature studying surrogate losses, for tasks including multi-class classification [3,32,27,5,1,19,20], binary classification [3,32,19,20] and pose estimation [28]. Compared to them, UniLoss reduces the manual effort to design task-specific losses. UniLoss, as a general loss framework, can be applied to all these tasks and achieve comparable performance. Overview UniLoss provides a unified way to generate a surrogate loss for training deep networks with mini-batch gradient descent without task-specific design. 
In our general framework, we first re-formulate the evaluation process and then approximate the non-differentiable functions using interpolation. Original Formulation Formally, let x = (x 1 , x 2 , . . . , x n ) ∈ X n be a set of n training examples and y = (y 1 , y 2 , . . . , y n ) ∈ Y n be the ground truth. Let φ(·; w) : X → R d be a deep network parameterized by weights w that outputs a d-dimensional vector; let δ : R d → Y be a decoder that decodes the network output to a possibly discrete final output; let ξ : Y n × Y n → R be an evaluator. φ and δ are applied in a mini-batch fashion on x = (x 1 , x 2 , . . . , x n ); the performance e of the deep network is then e = ξ(δ(φ(x; w)), y). (3) Refactored Formulation Our approach seeks to find a surrogate loss to minimize e, with the novel observation that in many cases e can be refactored as where φ(·; w) is the same as in Eqn. 3, representing a deep neural network, f : R n×d × Y n → R l is differentiable and maps outputted real numbers and the ground truth to l comparisons each representing the difference between certain pair of real numbers, h : R l → {0, 1} l maps the l score differences to l binary variables, and g : {0, 1} l → R computes the performance metric from binary variables. Note that h has a restricted form that always maps continuous values to binary values through sign function, whereas g can be arbitrary computation that maps binary values to a real number. We give intermediate outputs some notations: This new refactoring of a performance metric allows us to decompose the metric e with g, h, f and φ, where φ and f are differentiable functions but h and g are often non-differentiable. The non-differentiability of h and g causes e to be non-differentiable with respect to network weights w. Differentiable Approximation Our UniLoss generates differentiable approximations of the non-differentiable h and g through interpolation, thus making the metric e optimizable with gradient descent. Formally, UniLoss gives a differentiable approximationẽẽ where f and φ are the same as in Eqn. 4, andh andg are the differentiable approximation of h and g. We explain a concrete example of multi-class classification and introduce the refactoring and interpolation in detail based on this example in the following sections. Example: Multi-class Classification We take multi-class classification as an example to show how refactoring works. First, we give formal definitions of multi-class classification and the performance metric: prediction accuracy. Input is a mini-batch of images x = (x 1 , x 2 , . . . , x n ) and their corresponding ground truth labels are y = (y 1 , y 2 , . . . , y n ) where n is the batch size. y i ∈ {1, 2, . . . , p} and p is the number of classes, which happens to be the same value as d in Sec. 3.1. A network φ(·; w) outputs a score matrix s = [s i,j ] n×p and s i,j represents the score for the i-th image belongs to the class j. The evaluator ξ(ỹ, y) evaluates the accuracy e fromỹ and y by where [·] is the Iverson bracket. Considering above together, the predicted label for an image is correct if and only if the score of its ground truth class is higher than the score of every other class: where ∧ is logical and. 
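The two displayed equations referenced in this passage did not survive extraction. Restated from the surrounding definitions (so the typography may differ from the original), they are:

```latex
% Accuracy over a mini-batch, with [\cdot] the Iverson bracket:
e = \frac{1}{n}\sum_{i=1}^{n} [\,\tilde{y}_i = y_i\,]
% An image is counted correct iff its ground-truth class outscores every other class:
[\,\tilde{y}_i = y_i\,] \;=\; \bigwedge_{j \neq y_i} [\, s_{i,y_i} - s_{i,j} > 0 \,]
```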
We thus refactor the decoding and evaluation process as a sequence of f (·) that transforms s to comparisons-s i,yi −s i,j for all 1 ≤ i ≤ n, 1 ≤ j ≤ p, and j = y i (n × (p − 1) comparisons in total), h(·) that transforms comparisons to binary values using [· > 0], and g(·) that transforms binary values to e using logical and. Next, we introduce how to refactor the above procedure into our formulation and approximate g and h. Refactoring Given a performance metric, we refactor it in the form of Eqn. 4. We first transform the training images into scores s = (s 1 , s 2 , . . . , s nd ). We then get the score comparisons (differences of pairs of scores) c = (c 1 , c 2 , . . . , c l ) using c = f (s, y). , where g can be arbitrary computation that converts binary values to a real number. In practice, g can be complex and vary significantly across tasks and metrics. Each comparison is Given any performance metrics involving discrete operations in function ξ and δ in Eqn. 3 (otherwise the metric e is differentiable and trivial to be handled), the computation of function ξ(δ(·)) can be refactored as a sequence of continuous operations (which is optional), discrete operations that make some differentiable real numbers non-differentiable, and any following operations. The discrete operations always occur when there are step functions, which can be expressed as comparing two numbers, to the best of our knowledge. This refactoring is usually straightforward to obtain from the specification of the decoding and evaluating procedures. The only manual effort is in identifying the discrete comparisons (binary variables). Then we simply write the discrete comparisons as function f and h, and represent its following operations as function g. In later sections we will show how to identify the binary variables for three commonly-used metrics in three scenarios, which can be easily extended to other performance metrics. On the other hand, this process is largely a mechanical exercise, as it is equivalent to rewriting some existing code in an alternative rigid format. Interpolation The two usually non-differentiable functions h and g are approximated by interpolation methods individually. where 1 ≤ i ≤ l. We now haveh as the differentiable approximation of h. Binaries to Performance: g. We approximate g(·) in e = g(b) by multivariate interpolation over the input b ∈ {0, 1} l . More specifically, we first sample a set of configurations as "anchors" a = (a 1 , a 2 , . . . , a t ), where a i is a configuration of b, and compute the output values g(a 1 ), g(a 2 ), . . . , g(a t ), where g(a i ) is the actual performance metric value and t is the number of anchors sampled. We then get an interpolated function over the anchors asg(·; a). We finally getẽ =g(b; a), whereb is computed fromh, f and φ. By choosing a differentiable interpolation method, theg function becomes trainable using gradient-based methods. We use a common yet effective interpolation method: inverse distance weighting (IDW) [23]: where u represents the input tog and d(u, a i ) is the Euclidean distance between u and a i . We select a subset of anchors based on the current training examples. We use a mix of three types of anchors-good anchors with high performance values globally, bad anchors with low performance values globally, and nearby anchors that are close to the current configuration, which is computed from the current training examples and network weights. 
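The IDW formula itself was also lost in extraction; the sketch below shows the standard inverse-distance-weighted form the text describes, interpolating the metric value at the current relaxed configuration from sampled anchor configurations. The distance exponent, anchor construction, and anchor metric values here are illustrative assumptions, not taken from the paper's code.

```python
import torch

def idw_metric(soft_b: torch.Tensor, anchors: torch.Tensor, anchor_values: torch.Tensor,
               power: float = 2.0, eps: float = 1e-8) -> torch.Tensor:
    """g~: inverse-distance-weighted interpolation of the metric over anchor configurations.

    soft_b        : (l,) relaxed binary variables from the sigmoid step
    anchors       : (t, l) sampled binary configurations a_1..a_t
    anchor_values : (t,) true metric value g(a_i) evaluated at each anchor
    """
    d = torch.norm(soft_b.unsqueeze(0) - anchors, dim=1)   # Euclidean distance to each anchor
    w = 1.0 / (d.pow(power) + eps)                          # inverse-distance weights
    return (w * anchor_values).sum() / w.sum()

# Illustrative anchor set for l binary variables: the all-ones "best" configuration,
# a few random (typically low-scoring) configurations, and one near the current output.
l = 6
logits = torch.randn(l, requires_grad=True)
current = torch.sigmoid(logits)                             # relaxed binaries b~
best = torch.ones(l)
bad = (torch.rand(3, l) > 0.5).float()
nearby = (current.detach() > 0.5).float()
anchors = torch.vstack([best, bad, nearby])
values = torch.tensor([1.0, 0.3, 0.4, 0.2, 0.8])            # hypothetical g(a_i) per anchor
loss = -idw_metric(current, anchors, values)                # maximize the interpolated metric
loss.backward()
print(loss.item(), logits.grad.shape)                       # gradients flow through g~ and h~
```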
By using both the global information from the good and bad anchors and the local information from the nearby anchors, we are able to get an informative interpolation surface. On the contrast, a naive random anchor sampling strategy does not give informative interpolation surface and cannot train the network at all in our experiments. More specifically, we adopt a straightforward anchor sampling strategy for all tasks and metrics: we obtain good anchors by flipping some bits from the best anchor, which is the ground truth. The bad anchors are generated by randomly sampling binary values. The nearby anchors are obtained by flipping some bits from the current configuration. Experimental Results To use our general framework UniLoss on each task, we refactor the evaluation process of the task into the format in Eqn. 4, and then approximate the nondifferentiable functions h and g using the interpolation method in Sec. 3. We validate the effectiveness of the UniLoss framework in three representative tasks in different scenarios: a ranking-related task using a set-based metricaverage precision, a pixel-wise prediction task, and a common classification task. For each task, we demonstrate how to formulate the evaluation process to our refactoring and compare our UniLoss with interpolation to the conventional taskspecific loss. More implementation details and analysis can be found in the appendix. Tasks and Metrics Average Precision for Unbalanced Binary Classification Binary classification is to classify an example from two classes-positives and negatives. Potential applications include face classification and image retrieval. It has unbalanced number of positives and negatives in most cases, which results in that a typical classification metric such as accuracy as in regular classification cannot demonstrate how good is a model properly. For example, when the positives to negatives is 1:9, predicting all examples as negatives gets 90% accuracy. On this unbalanced binary classification, other metrics such as precision, recall and average precision (AP), i.e. area under the precision-recall curve, are more descriptive metrics. We use AP as our target metric in this task. It is notable that AP is fundamentally different from accuracy because it is a set-based metric. It can only be evaluated on a set of images, and involves not only the correctness of each image but also the ranking of multiple images. This task and metric is chosen to demonstrate that UniLoss can effectively optimize for a set-based performance metric that requires ranking of the images. PCKh for Single-Person Pose Estimation Single-person pose estimation predicts the localization of human joints. More specifically, given an image, it predicts the location of the joints. It is usually formulated as a pixel-wise prediction problem, where the neural network outputs a score for each pixel indicating how likely is the location can be the joint. Following prior work, we use PCKh (Percentage of Correct Keypoints wrt to head size) as the performance metric. It computes the percentage of the predicted joints that are within a given radius r of the ground truth. The radius is half of the head segment length. This task and metric is chosen to validate the effectiveness of UniLoss in optimizing for a pixel-wise prediction problem. Accuracy for Multi-class Classification Multi-class classification is a common task that has a well-established conventional loss -cross-entropy loss. 
We use accuracy (the percentage of correctly classified images) as our metric following the common practice. This task and metric is chosen to demonstrate that for a most common classification setting, UniLoss still performs similarly effectively as the well-established conventional loss. Average Precision for Unbalanced Binary Classification Dataset and Baseline We augment the handwritten digit dataset MNIST to be a binary classification task, predicting zeros or non-zeros. Given an image containing a single number from 0 to 9, we classify it into the zero (positive) class or the non-zero (negative) class. The positive-negative ratio of 1:9. We create a validation split by reserving 6k images from the original training set. We use a 3-layer fully-connected neural network with 500 and 300 neurons in each hidden layer respectively. Our baseline model is trained with a 2-class cross-entropy loss. We train both baseline and UniLoss with a fixed learning rate of 0.01 for 30 epochs. We sample 16 anchors for each anchor type in our anchor interpolation for all of our experiments except in ablation studies. Formulation and Refactoring The evaluation process is essentially ranking images using pair-wise comparisons and compute the area under curve based on the ranking. It is determined by whether positive images are ranked higher than negative images. Given that the output of a mini-batch of n images is s = (s 1 , s 2 Results UniLoss achieves an AP of 0.9988, similarly as the baseline crossentropy loss (0.9989). This demonstrates that UniLoss can effectively optimize for a performance metric (AP) that is complicated to compute and involves a batch of images. PCKh for Single-Person Pose Estimation Dataset and Baseline We use MPII [2] which has around 22K images for training and 3K images for testing. For simplicity, we perform experiments on the joints of head only, but our method could be applied to an arbitrary number of human joints without any modification. We adopt the Stacked Hourglass [17] as our model. The baseline loss is the Mean Squared Error (MSE) between the predicted heatmaps and the manuallydesigned "ground truth" heatmaps. We train a single-stack hourglass network for both UniLoss and MSE using RMSProp [12] with an initial learning rate 2.5e-4 for 30 epochs and then drop it by 4 for every 10 epochs until 50 epochs. Formulation and Refactoring Assume the network generates a mini-batch of heatmaps s = (s 1 , s 2 , . . . , s n ) ∈ R n×m , where n is the batch size, m is the number of pixels in each image. The pixel with the highest score in each heatmap is predicted as a key point during evaluation. We note the pixels within the radius r around the ground truth as positive pixels, and other pixels as negative and each heatmap s k can be flatted as (s k pos,1 , s k pos,2 , . . . , s k pos,m k , s k neg,1 , . . . , s k neg,m−m k ), where m k is the number of positive pixels in the k-th heatmap and s k pos,j (s k neg,j ) is the score of the j-th positive (negative) pixel in the k-th heatmap. PCKh requires to find out if a positive pixel has the highest score among others. Therefore, we need to compare each pair of positive and negative pixels and this leads to the binary variables pos,i − s k neg,j > 0], i.e. the comparison between the i-th positive pixel and the j-th negative pixel in the k-th heatmap. Results It is notable that the manual design of the target heatmaps is a part of the MSE loss function for pose estimation. 
It heavily relies on the careful design of the ground truth heatmaps. If we intuitively set the pixels at the exact joints to be 1 and the rest of pixels as 0 in the heatmaps, the training diverges. Luckily, [28] proposed to design target heatmaps as a 2D Gaussian bump centered on the ground truth joints, whose shape is controlled by its variance σ and the bump size. The success of the MSE loss function relies on the choices of σ and the bump size. UniLoss, on the other hand, requires no such design. As shown in Table 1, our UniLoss achieves a 95.77 PCKh which is comparable as the 95.74 PCKh for MSE with the best σ. This validates the effectiveness of UniLoss in optimizing for a pixel-wise prediction problem. We further observe that the baseline is sensitive to the shape of 2D Gaussian, as in Table 1. Smaller σ makes target heatmaps concentrated on ground truth joints and makes the optimization to be unstable. Larger σ generates vague training targets and decreases the performance. This demonstrates that conventional losses require dedicated manual design while UniLoss can be applied directly. Accuracy for Multi-class Classification Dataset and Baseline We use CIFAR-10 and CIFAR-100 [14], with 32 × 32 images and 10/100 classes. They each have 50k training images and 10k test images. Following prior work [10], we split the training set into a 45k-5k trainvalidation split. We use the ResNet-20 architecture [10]. Our baselines are trained with crossentropy (CE) loss. All experiments are trained following the same augmentation and pre-processing techniques as in prior work [10]. We use an initial learning rate of 0.1, divided by 10 and 100 at the 140th epoch and the 160th epoch, with a total of 200 epochs trained for both baseline and UniLoss on CIFAR-10. On CIFAR-100, we train baseline with the same training schedule and UniLoss with 5x training schedule because we only train 20% binary variables at each step. For a fair comparison, we also train baseline with the 5x training schedule but observe no improvement. Results Our implementation of the baseline method obtains a slightly better accuracy (91.49%) than that was reported in [10]-91.25% on CIFAR-10 and obtains 65.9% on CIFAR-100. UniLoss performs similarly (91.64% and 65.92%) as baselines on both datasets (Table 2), which shows that even when the conventional loss is well-established for the particular task and metric, UniLoss still matches the conventional loss. Discussion of Hyper-parameters Mini-batch Sizes We also use a mini-batch of images for updates with UniLoss. Intuitively, as long as the batch size is not extremely small or large, it should be able to approximate the distribution of the whole dataset. We explore different batch sizes on the CIFAR-10 multi-class classification task, as shown in Table 3. The results match with our hypothesis-as long as the batch size is not extreme, the performance is similar. A batch size of 128 gives the best performance. Number of Anchors We explore different number of anchors in the three tasks. We experiment with 5, 10, 16 as the number of anchors for each type of the good, bad and nearby anchors. That is, we have 15, 30, 48 anchors in total respectively. Table 4 shows that binary classification and classification are less sensitive to the number of anchors, while in pose estimation, fewer anchors lead to slightly worse performance. 
It is related to the number of binary variables in each task: pose estimation has scores for each pixel, thus has much more comparisons than binary classification and classification. With more binary variables, more anchors tend to be more beneficial. Conclusion and Limitations We have introduced UniLoss, a framework for generating surrogate losses in a unified way, reducing the amount of manual design of task-specific surrogate losses. The proposed framework is based on the observation that there exists a common refactoring of the evaluation computation for many tasks and performance metrics. Using this refactoring we generate a unified differentiable approximation of the evaluation computation, through interpolation. We demonstrate that using UniLoss, we can optimize for various tasks and performance metrics, achieving comparable performance as task-specific losses. We now discuss some limitations of UniLoss. One limitation is that the interpolation methods are not yet fully explored. We adopt the most straightforward yet effective way in this paper, such as the sigmoid function and IDW interpolation for simplicity and an easy generalization across different tasks. But there are potentially other sophisticated choices for the interpolation methods and for the sampling strategy for anchors. The second limitation is that proposed anchor sampling strategy is biased towards the optimal configuration that corresponds to the ground truth when there are multiple configurations that can lead to the optimal performance. The third limitation is that ranking-based metrics may result in a quadratic number of binary variables if pairwise comparison is needed for every pair of scores. Fortunately in many cases such as the ones discussed in this paper, the number of binary variables is not quadratic because many comparisons does not contribute to the performance metric. The fourth limitation is that currently UniLoss still requires some amount of manual effort (although less than designing a loss from scratch) to analyze the given code of the decoder and the evaluator for the refactoring. Combining automatic code analysis with our framework can further reduce manual efforts in loss design. nearby anchors are obtained by flipping one binary bit from the best anchor and the anchor at the current training step respectively.We sample 16 anchors for each type. We train models with the baseline loss and UniLoss with SGD using the same training schedule: with a fixed learning rate of 0.01 for 30 epochs. Pose Estimation We use the Stacked Hourglass Network architecture. It takes a 224×224 image as input and passes it through one hourglass block as described in [17]. It outputs a heatmap for each joint. A heatmap essentially gives scores measuring how likely is the joint to be there for each pixel or region centered at that pixel. The baseline loss is the Mean Squared Error (MSE) between the predicted heatmaps and the manually-designed "ground truth" heatmaps. More specifically, the target "ground truth" heatmap has a 2D Gaussian bump centered on the ground truth joint. The shape of the Gaussian bump is controlled by its variance σ and the bump size. The commonly chosen σ is 1 and the bump size 7. Our loss is formulated as in Sec. 4.3. The h function is the sigmoid relaxation of the binary variables. The g function is the interpolation over anchors of the binary variables. For example, the anchor that gives the best performance is that all binary values to be 1. Following Sec. 
For good anchors, we flip a small number of bits from the best. Nearby anchors are flipped from the current configuration by randomly picking a positive/negative pixel in the current output heatmaps and flipping all bits associated with this pixel. We sample 16 anchors for each type. We train the model with both UniLoss and MSE using RMSProp [12] with an initial learning rate of 2.5e-4 for 30 epochs, which is then divided by 4 every 10 epochs until 50 epochs. Classification We use the ResNet-20 architecture [10]. The network takes a 32 × 32 image as input. The input image first goes through a 3 × 3 convolution layer followed by a batch normalization layer and a ReLU layer. It then goes through three ResNet building blocks with down-sampling in between. After an average pooling layer, the output layer is a fully-connected layer with a 10-way output. Our baseline loss is a 10-way cross-entropy (CE) loss. Our loss is formulated as in Sec. 4.4. The h function is the sigmoid relaxation of the binary variables. The g function is the interpolation over anchors of the binary variables. For example, the anchor that gives the best performance is the one in which all binary values are 1, meaning that for each image the ground truth class has a higher score than every other class. The good anchors and nearby anchors are obtained by flipping one binary bit from the best anchor and the anchor at the current training step, respectively. We sample 16 anchors for each type. We use the same data augmentation for both: random cropping with a padding of 4 and random horizontal flipping. We also pre-process the images with per-pixel normalization. We train both models with SGD using an initial learning rate of 0.1, divided by 10 and 100 at the 140th epoch and the 160th epoch, with a total of 200 epochs on CIFAR-10. On CIFAR-100, we train the baseline with the same training schedule and UniLoss with a 5x training schedule but only 20% of the binary variables at each step. B Additional Experimental Results Analysis of the Interpolation of g To evaluate how well the interpolator approximates the true performance metric, we sample binary configurations at various Hamming distances from those encountered during training. The Hamming distances range over 0, 1/512, …, 1/2 of the total number of binary variables (#binaries). We compute the L2 distances and rank correlation coefficients between the approximated and true metric value pairs, as in Table 5. We see that the approximation is quite good when the distance is small but poor when the distance is large. Training and Validation Curves We present the training accuracy and validation accuracy curves over epochs in Fig. 2 and Fig. 3. We see that while UniLoss reaches a validation accuracy (PCKh) similar to the CE (MSE) baseline later in training, its training accuracy (PCKh) is slightly lower than the baseline's throughout training. One hypothesis is that, due to the noise introduced by the randomness in anchor sampling, UniLoss naturally has a regularization effect compared to conventional losses.
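To make the two ingredients named above more concrete, the following is a minimal sketch, assuming NumPy, of a sigmoid relaxation h of the binary comparison variables and an inverse-distance-weighted (IDW) interpolation g of the metric over anchor configurations. It is an illustration of the technique described in the text, not the authors' released implementation; all function and variable names are assumptions.

```python
import numpy as np

def sigmoid(x, tau=1.0):
    # Smooth relaxation h: maps a real-valued score difference to (0, 1),
    # approximating the hard binary comparison (difference > 0).
    return 1.0 / (1.0 + np.exp(-x / tau))

def idw_metric(relaxed, anchors, anchor_values, power=2.0, eps=1e-8):
    """IDW interpolation g of the performance metric over anchors.

    relaxed       : (n,) relaxed binary variables in (0, 1)
    anchors       : (m, n) binary anchor configurations
    anchor_values : (m,) true metric value evaluated at each anchor
    """
    d = np.linalg.norm(anchors - relaxed, axis=1)  # distance to each anchor
    w = 1.0 / (d ** power + eps)                   # inverse-distance weights
    return np.sum(w * anchor_values) / np.sum(w)

# Toy usage: 6 binary variables and 3 anchors whose true metric is known.
rng = np.random.default_rng(0)
score_diffs = rng.normal(size=6)            # e.g. score(ground truth) - score(other)
relaxed = sigmoid(score_diffs)
anchors = rng.integers(0, 2, size=(3, 6)).astype(float)
anchor_values = np.array([1.0, 0.5, 0.0])   # metric evaluated at each anchor
loss = -idw_metric(relaxed, anchors, anchor_values)  # maximizing the metric = minimizing its negative
```

In a training loop, the score differences would come from the network outputs, the loss would be backpropagated through the sigmoid and the IDW weights, and the anchors would be resampled as described for the good, bad, and nearby anchor types.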
2020-07-29T01:00:46.443Z
2020-07-27T00:00:00.000
{ "year": 2020, "sha1": "15c4bf562e1caa89456dda5fe032ae4f25bc05e2", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2007.13870", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "15c4bf562e1caa89456dda5fe032ae4f25bc05e2", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
202027514
pes2o/s2orc
v3-fos-license
GPlates dataset for the tectonic reconstruction of the Northern Andes-Caribbean Margin This contribution contains a GPlates digital reconstruction of the northern Andes and southern Caribbean margin for the last 90 Ma. It is built using different strain datasets fully described in "Continental Margin Response to Multiple Arc-Continent Collisions: the Northern Andes-Caribbean Margin" [1]. Two digital reconstructions are included here: one is a rigid block reconstruction, and the other is a continuously closing polygon reconstruction digitized every one million years. We placed the South and North American plates at the root of the reconstruction tree, so that the Andean blocks move with respect to the former, and the Caribbean Plate and related intra-oceanic arcs move with respect to the latter. These reconstructions can be used as templates to place in palinspastic space any dataset that can be represented by lines or points. Value of the data This digital reconstruction can be used by any researcher to place their data points in a palinspastic space for the last 65 Ma in the northern Andes and southern Caribbean margin. Anyone interested in the tectonics, structure and stratigraphy of the northern Andes and southern Caribbean margin and in the development of mountain belts, landscape evolution and associated biotas should benefit from this dataset, as it provides a palinspastic template on which to place data. Since all the files are made available, the end user can make modifications, extensions, or additional elaborations using the provided dataset. The software to manipulate these data files is state-of-the-art and freely available, so anyone with a computer can visualize and manipulate the data. Tectonic reconstructions are usually provided as still images; this reconstruction lends itself to animations of the tectonic evolution of this region that can more clearly convey the timing and style of deformation. Data The data files consist of two tectonic reconstructions. The first reconstruction, a rigid block reconstruction (R_Recon), consists of three files (Blocks_v1.gpml, Rotations.rot, and Rigid_recon.gproj) that contain a rigid-block reconstruction of the northern Andes and southern Caribbean for the last 90 Ma. The second one, a continuously closing polygon reconstruction (CCP_Recon), consists of four .gpml files (Blocks, Auxiliary_Blocks, Auxiliary_Geometries, and Reconstruction), a project file (CCP_recon.gpml), and one rotation file (Rotations.rot), also for the last 90 Ma, but with resolved topological networks only for the last 65 Ma. All data files are in GPlates Markup Language (gpml) format, compressed as a supplementary file. Experimental design, materials, and methods The database utilized for designing and building the reconstruction was derived from fully georeferenced published geologic maps as detailed in Table SM1 of Montes et al. [1]. Intersections of major faults defined 108 tectonic blocks for the northern Andes and the southern Caribbean. Because of the flexibility allowed by the topological closing of polygons, the 108 rigid blocks are represented in the continuously closing polygon reconstruction by a smaller number of tectonic blocks (54). Both reconstructions (rigid and continuously closing) can be used to place custom files (points, lines or polygons) on top of the reconstruction to study their palinspastic location. The rigid reconstruction can be used to place points in palinspastic space strictly following the rigid blocks, so there is no distortion for the last 90 Ma. However, since the rigid block reconstruction at 0 Ma contains gaps (representing extension) and overlaps (representing shortening), not all points may get assigned a block ID (gaps), or some may get more than one (overlaps). The continuously closing polygon reconstruction, instead, has no gaps or overlaps, but the custom data will be distorted following the deforming closing polygons, and it only covers the last 65 Ma. Users of the continuously closing polygon reconstruction must also be warned that observations have to be made at the same one-million-year intervals starting at zero, as the software does not interpolate between the established one-million-year intervals. The rigid reconstruction, in contrast, interpolates between user-defined times in the rotation file, and observations can be performed at any point in time. For display purposes we suggest using the South American Plate as the reference plate (plate number 201). The polygons in the CCP reconstruction change their size, area and geometry, so that they can be deformed (stretched/shortened) in geologic time [2]. The continuously closing polygons can be used to generate topological networks, which may be used to consider the internal deformation of the blocks. A topological network is constructed by making a Delaunay triangulation between tectonic block boundaries (split into points) and inner points which describe their internal deformation, creating a triangular mesh. Topological networks can therefore be used to create a deformation field over the surface of the tectonic blocks, and to restore the position of vector layers (except polygons) to a hypothetical pre-deformed state. Reconstruction results can be exported from GPlates as an ESRI shapefile at user-defined time intervals. In the following paragraphs we provide a simple set of steps for any end-user to interact with the reconstructions by restoring the paleo latitude/longitude of localities of interest between 65 and 0 Ma. First, however, the data of interest need to be converted to ESRI shape vector files where each geometry has to have the plate ID of the block where it belongs at present time (0 Ma). You can do this by first exporting the 0 Ma snapshot of the CCP reconstruction into a shape file. This shape file contains a column named "PLATEID1" where the program stores the name of each plate. To do this, open the continuously closing polygon reconstruction folder (CCP_recon) and make sure all four gpml files and the rotation file are located within the same directory as the project file (CCP_recon.prj). Open the project file with GPlates 2.0 or newer. Once opened, choose "Export" under the Reconstruction menu, and in the pop-up window choose "Add Export", and choose "Resolved Topologies (general)". Export a single snapshot at 0 Ma in shape file format, making sure to choose the option to export to a single file and "Export resolved topological polygons". This operation will create a version of the polygons at 0 Ma in shape files with all of the plate ID names under the column called "PLATEID1". Use your GIS program of preference to add the identity (PLATEID) of each one of your geometries to your dataset. Open the continuously closing polygon reconstruction folder (CCP_recon).
GPlates should automatically open the Layers window where the files should be loaded, including the rotation file, the auxiliary blocks and geometries, and three layers of the reconstruction: the Resolved Topological Geometries (visible), the Resolved Topological Networks (hidden), and the Reconstructed geometries (hidden). At this point it is convenient to set the reconstruction parameters by going to the Reconstruction menu and choosing "Configure animation". Choose a 90 Ma or younger starting point, as the reconstruction only covers this interval of time. Also, under the same menu, you may choose the South American Plate (201) as the anchored plate. Now, you can load your shape file by going to the File menu and choosing "Open Feature Collection", where you locate your shape file(s). When loading for the first time, a pop-up window will ask for the column that contains the plate ID of your dataset. Once loaded, look for your data in the Layers window, and expand it by clicking in the black arrow located to the left of the file name, and choose "Add new connection" in the "Topology Surfaces" option. Add the two closed dynamic polygon layers of the reconstruction (pink and light brown "Reconstruction" layers). Then, in the same window, select "Reconstruct using topologies" box (just click on "Yes). A window will be automatically pop-up where you set the youngest age to 0 Ma, and the oldest Age to 65 Ma. Then, you need to set the desired time increment, being careful not to use a frequency higher than the one used to generate the dynamic polygons (1 Ma). Leave all the other options as they are. Click OK, and after a few seconds you can click on the play button of the main window to show your localities in palinspastic space. Now that you have your dataset moving along the continuously closing polygons you can export it as shapefiles for custom time intervals, just use "Export Time Sequence of Snapshots". This way, any dataset that can be represented by points, or lines may be placed on top of one of the palinspastic reconstructions to study its hypothetical palinspastic location. These may include fossil localities, stratigraphic sections, geochronological/thermochronological sample locations, boreholes, geological cross-sections, fault lines, population boundaries, etc.
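As an alternative to assigning the plate IDs manually in a desktop GIS, the spatial-join step described above can also be scripted. The snippet below is only a minimal sketch, assuming the 0 Ma snapshot has been exported from GPlates as a polygon shapefile and that the user's localities are in a point shapefile; the file names are hypothetical, and the `predicate` keyword of `geopandas.sjoin` requires a reasonably recent geopandas release.

```python
import geopandas as gpd

# Hypothetical file names -- substitute your own GPlates export and localities.
polygons = gpd.read_file("resolved_topologies_0Ma.shp")            # 0 Ma snapshot exported from GPlates
points = gpd.read_file("my_localities.shp").to_crs(polygons.crs)   # user point localities

# Spatial join: each locality inherits the PLATEID1 of the polygon it falls within.
points_with_id = gpd.sjoin(points, polygons[["PLATEID1", "geometry"]],
                           how="left", predicate="within")

# Localities outside all polygons receive no plate ID and should be checked manually.
missing = points_with_id["PLATEID1"].isna().sum()
print(f"{missing} localities received no plate ID")

points_with_id.drop(columns="index_right").to_file("my_localities_with_plateid.shp")
```

The resulting shapefile can then be loaded into GPlates with "Open Feature Collection" and reconstructed using the topologies, as described in the workflow above.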
2019-09-09T21:21:56.202Z
2019-08-01T00:00:00.000
{ "year": 2019, "sha1": "26e0a61a1c9d943198ca752919d64440246774cc", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.dib.2019.104398", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6d7489d798711ebd25d186d3027e45257e57ef14", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Medicine", "Geology" ] }
233236630
pes2o/s2orc
v3-fos-license
Depletion of TrkB Receptors From Adult Serotonergic Neurons Increases Brain Serotonin Levels, Enhances Energy Metabolism and Impairs Learning and Memory Neurotrophin brain-derived neurotrophic factor (BDNF) and neurotransmitter serotonin (5-HT) regulate each other and have been implicated in several neuronal mechanisms, including neuroplasticity. We have investigated the effects of BDNF on serotonergic neurons by deleting BDNF receptor TrkB from serotonergic neurons in the adult brain. The transgenic mice show increased 5-HT and Tph2 levels with abnormal behavioral phenotype. In spite of increased food intake, the transgenic mice are significantly leaner than their wildtype littermates, which may be due to increased metabolic activity. Consistent with increased 5-HT, the proliferation of hippocampal progenitors is significantly increased, however, long-term survival of newborn cells is unchanged. Our data indicates that BDNF-TrkB signaling regulates the functional phenotype of 5-HT neurons with long-term behavioral consequences. INTRODUCTION Neurotrophin Brain-derived neurotrophic factor (BDNF) and neurotransmitter serotonin (5-HT) regulate neuronal survival, neurogenesis, and neuronal plasticity and they also co-regulate each other (Mattson et al., 2004;Martinowich and Lu, 2008). Changes or aberrations in these two systems, together or independently, are associated with neuropsychiatric disorders, which are a major health problem worldwide (Homberg et al., 2014). Brain-derived neurotrophic factor is associated with the regulation of activity-dependent neuronal connectivity and plasticity. BDNF together with its high-affinity cognate receptor, neurotrophic tyrosine kinase receptor 2 (Ntrk2/TrkB) plays a significant role in neuronal survival, synaptic plasticity, and in mediating 5-HT metabolism (Mamounas et al., 2000). BDNF and TrkB induce serotonergic phenotype and increase the number of 5-HT expressing neurons (Galter and Unsicker, 2000a). BDNF and TrkB have been concomitantly related together with 5-HT in a myriad of neurochemical and behavioral responses (Martinowich and Lu, 2008). 5-HT is a major modulatory neurotransmitter produced by neurons that are located in the brainstem raphe nuclei and project massively throughout the brain serving different physiological and behavioral functions (Dahlström and Fuxe, 1964;Steinbusch, 1981;Gaspar et al., 2003;Okaty et al., 2015). 5-HT is released both synaptically and through volume transmission (Halasy et al., 1992;Fuxe et al., 2007), and its effects are mediated by an ensemble of different receptors located on the target neurons as well as on 5-HT neurons themselves (Jacobs and Azmitia, 1992). Although a positive identification of BDNF and TrkB expression in serotonergic neurons is missing, TrkB are expressed in the region of raphe nuclei (Madhav et al., 2001;Lunden and Kirby, 2013;Adachi et al., 2017). However, the physiological role of the activation of TrkB in 5-HT neurons remains unclear. Previous studies have demonstrated that BDNF controls the survival and maintenance of developing 5-HT neurons through an auto/paracrine loop mediated by its autoreceptors, mainly 5HT1a, followed by sequential activation of BDNF and TrkB (Galter and Unsicker, 2000b). The 5-HT neurons from the mid-brain (MB) project throughout the brain and exert trophic actions on the target cells by controlling their proliferation and differentiation (Lauder et al., 1982;Commons, 2016). 
We recently found that TrkB plays a critical role in the maintenance of 5-HT and dopamine (DA) neurons in zebrafish brain (Sahu et al., 2019). Constitutive knockouts of both TrkB and BDNF are postnatally lethal in mammals (Erickson et al., 1996), so conditional and inducible transgenic mouse (TG) models have been utilized to delete TrkB in a regionally and temporally selective fashion. A recent study reported that deletion of TrkB from neurons in the midbrain raphe region resulted in a loss of antidepressant efficacy and heightened aggression (Adachi et al., 2017). In this study we have specifically deleted TrkB in Tph2 expressing 5-HT neurons in the adulthood, after the serotonergic system has fully matured, and demonstrate a key role for TrkB in serotonergic neurons in the regulation of 5-HT function. The effects of loss of TrkB in 5-HT neurons on behavior were also assessed by a battery of tests measuring anxiety, aggression, learning, and memory. Animals The Tph2creERT2 animals were obtained from Prof. Pierre Chambon laboratory (Feil et al., 1996;Yadav et al., 2011). The animals were rederived and backcrossed to the C57BL/6J strain in our animal facility for several generations. The TrkBflox mice were obtained from the Jackson lab and have been maintained in our system with C57BL/6J background. The Tph2creERT2 mice were crossed with homozygous TrkBflox mice to generate the Tph2creERT2:TrkBflox mice in the C57BL/6J strain background. The cre is hemizygous and TrkBflox is homozygous, therefore every internal mating produced Tph2creERT2:TrkBflox and TrkBflox mice. The animals that were cre positive have been grouped as transgenic and the flox alone as control throughout this manuscript. The cohorts were made by pooling several littermates born at the same time. For cre activation, tamoxifen was dissolved in corn oil with continuous shaking at 37 • C overnight. The tamoxifen stock was made into a final concentration of 20 mg/ml. For all the experiments we have used the same injection time. Six weeks old mice were administered tamoxifen at 0.1 mg/kg intra peritoneally (ip) dose daily for 5 days to activate the cre-mediated recombination. The control and transgenic mice that had received tamoxifen are called Ctrl and TG, respectively. The transgenic animals not injected with tamoxifen are represented as TG w/o TAM. For reporter assay, tDtomato mice (JAX 007914, Jackson Lab) were crossed with Tph2creERT2 animals. The Tph2creERT2:tDtomato animals were divided into two groups, one receiving tamoxifen and others received only corn oil. All the animals were maintained according to the guidelines from the ethical committee of Southern Finland (License number-ESAVI/10300/04.10.07/2016). Behavioral Tests Adult animals aged between 6 and 9 months were used for all the behavioral tests. Mice were single housed during the test period with unlimited access to food and water in the individually ventilated cage system. The single housing was done 1 week prior to the start of the behavior tests for a maximum of 1 month. Mice were kept at light/dark rhythm of 12 h each. The number of animals tested was 60 in total, with 30 experimental and 30 control group unless otherwise mentioned. The animals for most of the behavioral battery tests were males. The animals were always transferred to test rooms to adapt 30 min before starting the test. All behavioral tests were performed with the researcher blinded to the genotypes of the animals. We used three different cohorts of animals in this study. 
Therefore, in the first cohort we performed open field, light-dark, elevated plus maze and forced swim tests as well as Comprehensive lab animal monitoring system test, in the second cohort Barnes maze and resident intruder test, and in the third cohort the pattern separation test. The interval between the tests was usually 2 days for less stressful and 1 week for stressful experiments. Open Field Test (OF) The open field test was done with a brightly illuminated (550 lux) polypropylene chamber (30 × 30 × 30 cm). One mouse was put into each of the chamber for 30 min and their movement was monitored by Activity monitor (Med Associates Inc., United States). In this test, the total time of locomotor activity was recorded. The total ambulatory distance traveled was compared between the groups. The number of animals used were N = 30/group. Light-Dark Test (LD) The light-dark test was done in polypropylene chamber (30 × 30 × 30 cm), equipped with infrared light sensors detecting horizontal and vertical activity. A dark insert was used to divide the arena into two compartments. An opening on the wall of the dark insert ensured free movement of the animal between the two compartments. Illumination in the center of the light area was ∼550 lux. The mice were placed into the dark area at the start of the experiment. Their movements in and between the two areas were recorded and the data was collected with this activity monitoring setup for 30 min. Parameters for analysis were time spent in different compartments, vertical counts on both of the sides, latency to dark and total distance moved. The number of animals used were N = 30/group. Elevated Plus Maze (EPM) The elevated plus maze consisted of two open arms (30 × 5 cm) and two enclosed arms (30 × 5 cm) connected by a 5 × 5 cm central arena. The whole platform was 40 cm above the floor. The floor of the arms was light gray. The closed arms had transparent walls 15 cm high. The illumination during the test was ∼150 lux. The animal was placed in the center of the maze and tracked for a total of 5 min. The animals were tracked using Noldus EthoVision XT10 system (Noldus Information Technology, Netherlands). The parameters analyzed were latency to open arms, number of entries to the open and closed arms. The number of animals used were N = 30/group. Comprehensive Lab Animal Monitoring System (CLAMS) For metabolic monitoring, animals were subjected to comprehensive lab animal monitoring system (CLAMS) (Columbus Instruments, United States) for 3 days. This included 1 day of acclimatization and 2 days for data collection. The number of animals used were N = 12/group. Resident-Intruder Test (RI) The resident intruder test was performed in the animal's home cage and the trial was video recorded. The experimental animals were single-housed, and the intruder animals were group housed. This experiment was performed in the experimental animals (resident) home cage. The duration of the test was 10 min and the parameters analyzed were number of attacks by the resident animal, time of social interaction measured as duration spent on sniffing, chasing, climbing and non-social exploration such as digging, rearing, grooming, and scanning the intruder as it moves away. The total number of animals used were N = 11/group. Forced Swim Test (FST) The mice were injected with fluoxetine 30 mg/kg intra peritoneally (ip) or saline ip 30 min prior to the test. A mouse was placed into a glass cylinder filled with tap water stabilized to room temperature. 
The water was filled to the height of 14 cm in the glass cylinder. The animals were tracked using the Noldus EthoVision XT10 system (Noldus Information Technology, Netherlands). The immobility time was measured during the 6-min testing period. The number of animals used were N = 12/group. Pattern Separation (PS) Test The pattern separation test was based on a published protocol (Diaz et al., 2013) that uses a contextual fear discrimination learning paradigm. Both males and females were used in this paradigm. Males were tested first followed by females to avoid any discrepancies. The animals were subjected to two contexts A and B, which were highly similar to each other and subjected for the same duration of time. Context A was the training chamber and it referred to a fearful episode where the mice received a single 2 s foot shock of 0.8 mA at 180 s after being placed in the chamber. Context B was highly similar, except for a mild vanilla smell in the bedding and two different patterned paper inserts in the chamber wall. The test was carried out for 10 consecutive days. On test days 1, 4, 7, 8, and 10, mice were first exposed to context A before context B, and on test days 2, 3,5, 6, and 9 they were first exposed to context B followed by context A. The percentage of freezing in context A vs. B was evaluated on the last day. The number of animals used were N = 20/group. Barnes Maze Test (BM) The Barnes Maze test was conducted based on the published protocol for a period of 6 days (Harrison et al., 2006). The acquisition training was done for 3 days with three trials per day and the inter-trial interval was for 1 h. On the 4th and 6th day the probe trial was performed. Reversal training was started on the 4th day, after the first probe trial. The latency to find the escape box was measured during training and time spent in the vicinity of the target hole was measured during probe trials. The number of animals used were N = 12/group. High Pressure Liquid Chromatography (HPLC) The brains were dissected from adult animals after terminal anesthesia with CO 2 . We used naïve animals, N = 4-5 per group. The different regions of the brain collected for analysis were the pre-frontal cortex (PFC), striatum, hippocampus (HC), hypothalamus, mid-brain (MB), and lower brain stem (LBS). The different brain structures were weighed, collected into tubes and frozen on dry ice. The samples were homogenized in 0.3 ml (pre-frontal cortex, striatum, hippocampus, hypothalamus) or 0.4 ml (mid-brain and lower brain stem) of homogenization solution consisting of six parts 0.2 M HClO 4 and one part antioxidant solution (1.0 mM oxalic acid, 0.1 M acetic acid, 3.0 mM L-cysteine) (Kankaanpaa et al., 2001) with Rinco ultrasonic homogenizer (Rinco Ultrasonics AG, Romanshorn, Switzerland). The homogenates centrifuged at 20,800 g for 35 min at 4 • C. The supernatant was passed through 0.5 ml Vivaspin filter concentrators (10,000MWCO PES; Sartorius, Stonehouse, United Kingdom) and centrifuged at 8,600 g at 4 • C for 35 min. The medium collected from the cell cultures were filtered and processed likewise. Filtrates containing monoamines were analyzed using high-pressure liquid chromatography with electrochemical detection. Analyses of dopamine (DA), serotonin (5-HT) and its main metabolite 5-hydroxyindoleacetic acid (5-HIAA) were performed. The analytes were separated on a Phenomenex Kinetex 2.6 µm, 4.6 × 100 mm C-18 column (Phenomenex, Torrance, CA, United States). 
The column was maintained at 45 • C with a column heater (Croco-Cil, Bordeaux, France). The mobile phase consisted of 0.1 M NaH 2 PO 4 buffer, 120 mg/l of octane sulfonic acid, methanol (5%), and 450 mg/l EDTA, the pH of mobile phase was set to three using H 3 PO 4 . The pump (ESA Model 582 Solvent Delivery Module; ESA, Chelmsford, MA, United States) was equipped with two pulse dampers (SSI LP-21, Scientific Systems, State College, PA, United States) and provided a flow rate of 1 ml/min. One hundred microliters of the filtrate was injected into chromatographic system with a Shimadzu SIL-20AC autoinjector (Shimadzu, Kyoto, Japan). Monoamines and their metabolites were detected using ESA CoulArray Electrode Array Detector with 12 channels (ESA, Chelmsford, MA, United States). The chromatograms were processed, and the concentrations of monoamines were calculated using CoulArray for Windows software (ESA, Chelmsford, MA, United States). The concentrations of analytes are expressed as ng/g of wet tissue. Western Blotting The brains from both males and females were dissected out from adult animals (N = 4/group). The different brain structures used for analysis were the hippocampus, hypothalamus, and midbrain. The brain samples were homogenized using the standard RIPA lysis buffer, containing a cocktail of protease inhibitors and orthovanadate, followed by centrifugation at 13,000 rpm for 15 min at + 4 • C. The Rn33b cells were lysed similarly and sonicated before centrifugation. The samples were then separated using gradient gels 4-15% (NuPAGE Protein gels, Invitrogen) and blotted on to a PVDF membrane. The primary antibodies were RD-TrkB (Cat# AF397, RD systems Inc., MN, United States) and GAPDH (Cat# ab75479, Abcam). The primary antibodies were used at 1:1,000 dilution. The respective HRP conjugated secondary antibodies was blocked for 1 h at room temperature. The chemiluminescent assay was performed using ECL (Pierce). All images obtained were analyzed using ImageJ software. Proliferation and Survival of Hippocampal Cells Proliferation and survival of newborn cells in the dentate gyrus (DG) was quantified using a dot-blot method that was performed as previously described, with minor modifications (Wu and Castrén, 2009;Casarotto et al., 2021). The animals used for the assessment of cell proliferation and cell survival assay were 9-11 months old (N = 4/group). Mice were injected with BrdU 75 mg/kg body weight four times at 2 h interval. For the proliferation study, the injection was done 24 h prior to sacrifice and for the survival study, the injection was done 4 weeks before the sacrifice. The animals were sacrificed 1 day or 4 weeks after BrdU administration, with terminal anesthesia using CO 2 , and brains were quickly removed. Hippocampi were dissected on ice, instantly frozen on dry ice and stored at −80 • C until further use. DNA was isolated from one the of the hippocampi using DNeasy R Blood and Tissue Kit (QIAGEN, Germany) according to the manufacturer's instruction. The DNA purity was assessed by the spectrophotometer NanoDrop 2000C (Thermo Fisher Scientific, United States). DNA was incubated with 1 volume of 4 N NaOH solution for 30 min at room temperature to render it as single stranded and immediately kept on ice to prevent reannealing. The DNA solution was neutralized by an equal volume of 1 M Tris-HCl (pH = 6.8). 
The single-stranded neutralized DNA (1 mg) was pipetted in triplicates onto a nylon transfer membrane (Schleicher and Schuell, Keene, NH, United States) with a dot-blot apparatus (Minifold, Schleicher and Schuell) under vacuum and the DNA was fixed by ultraviolet cross-linker (1,200 µJ × 100, Stratagene, La Jolla, CA, United States). The membranes were incubated with mouse anti BrdU monoclonal antibody (1:1,000, B2531, Sigma) as the primary antibody and anti-mouse horseradish peroxidase (HRP) conjugated (Bio-Rad, United States) as the secondary antibody. The Pierce ECL Plus kit (Thermo Fisher scientific, United States) was used as a chemiluminescent method to develop the membrane. The membranes were scanned by imaging using a Fuji LAS-3000 Camera (Tamro Medlabs, Finland) and the densitometry analysis was performed by ImageJ Software. Immunohistochemistry and in situ Hybridization A mixed cohort of animals were used for immunohistochemistry. They were 9-11 months old at the time of processing. Except for the reporter mice we used for checking cre specificity were 3 months old. They were terminally anesthetized with pentobarbital (Mebunat vet 60 mg/ml) and Lidocaine followed by transcardial perfusion with 4% PFA. The brains were stored in 4% PFA overnight at + 4 • C and later transferred to 30% sucrose in PBS. These brains were cryo-embedded in an embedding matrix and stored at −80 • C until further use. The brains were sectioned into 40 µm thick slices and stored in cryoprotectant solution at −20 • C. For BrdU labeling, DNA was denatured by incubating the sections for 30 min in 2 M HCl at + 37 • C and then 15 min in 0.1 M boric acid at room temperature. All the samples were processed for immunostaining as described earlier (Karpova et al., 2011). The primary antibodies used in this study were Calretinin (Rabbit 7697, Catalog number CR 7697, Swant Switzerland), Doublecortin (Catalog number 4604, Cell signaling technology), BrdU (Catalog number ab82421, Abcam, United Kingdom), NeuN (Catalog number MAB377X, Millipore), GFAP (Catalog number 12389, Cell Signaling Technology), and Tph2 (Catalog number PA1-778, Thermo Fisher Scientific). The respective secondary antibodies were Alexa conjugated antibodies (Invitrogen). All the sections were stained with Hoechst 33342 (Thermo Fisher Scientific) before mounting. Stacked images were obtained using a 25× objective on a Zeiss confocal microscope LSM 780 with 1 µm interval between the sections. To avoid cross talk between channels in double labeled samples, we used sequential scanning. The cell counting and quantitation was done with the experimenter blinded to the treatment groups. The cell counting was performed as mentioned in Karpova et al. (2011) using ImageJ software (ImageJ 1.51s version) (Karpova et al., 2011). The images were collected in stacks and cell counting was done in each stack ensuring no overlap in between the stacks, although a stereological counting was not performed. The number of cells were averaged for every stacked image. For sample processing, we used at least five sections per hemisphere per animal and N = 4/group. The cell counting results are expressed as percentage to the control group. For in situ hybridization, brains were collected on superfrost slides (N = 3/group) and processed with the riboprobe synthesized for Bdnf (Hofer et al., 1990). The sections were probed with both sense and antisense probe labeled with digoxigenin. 
After washing the probes, they were labeled with alkaline phosphatase conjugated anti-digoxigenin fab fragments (1:5,000, Roche Diagnostics, Germany) overnight. The probes were detected by a chromogenic substrate nitroblue tetrazolium/bromochloroindoyl phosphate (NBT/BCIP, Roche, Germany). The reaction was stopped, and brightfield images were obtained by a Nikon stereomicroscope. q-RT PCR In this experiment we used N = 7 animals per group from a mixed cohort. The regions of interest including mPFC, hippocampus and midbrain were dissected and processed immediately for RNA isolation using the PureLink R RNA Mini Kit (Thermo Fisher Scientific, United States). Reverse transcription of RNA was carried out using the SuperScript IV reverse transcriptase enzyme (Invitrogen/Thermo Fisher Scientific, United States). The CFX96 Touch Real-Time PCR detection system (Bio-Rad, United States) with SYBR Green fluorescent DNA probe (Thermo Fisher Scientific, United States) was used to perform the real time PCR. The data were calculated by the normalization of the expression using Ct values of a housekeeping gene (Hprt) as the reference control. The primers used were as follows: Hprt F: GGGCTTACCTCACTGCTTTCC Hprt R: CTAATCACGACGCTGGGACTG 5ht2b F: CCATTTCCCTGGACCGCTAT 5ht2b R: GGCGATGCCTATTGAAATTAACCA Tph2 F: AGAGTTGGAGACGGAGTCGT Tph2 R: AAGGGCAGTGGCTTATGACC Bdnf F:CGATGCCAGTTGCTTTGTCTTC Bdnf R:AGTTCGGCTTTGCTCAGTGG Statistics All the experiments were analyzed using GraphPad Prism software version 9.0 (GraphPad Software Inc., CA, United States). Student's t test (two-tailed) was used when two groups were compared. For more than two groups, the analyses performed were one-way or two-way analysis of variance (ANOVA) followed by Tukey's post hoc test. All the error bars represent mean ± standard error of the mean (SEM) unless specified otherwise. The exact p values are mentioned in the text. The significance value was accepted at p ≤ 0.05. Deletion of TrkB in the Tph2 Neurons Increases 5-HT Production The timeline for all the experiments is summarized in Figure 1A. The specificity and effectiveness of creERT2-mediated recombination was verified by crossing the Tph2creERT2 mice with tDtomato reporter mice. One month after tamoxifen administration, tDtomato expression was observed to co-localize with the Tph2 antibody in the raphe nuclei of the mouse brain ( Figure 1B and Supplementary Figure 1). In the control mice injected with corn oil, very few tDtomato expressing cells were visible and they were not colocalized with Tph2 antibody. This suggests that cre expression is activated with tamoxifen and is specific to the Tph2 specific serotonergic neurons. For confirmation of TrkB deletion, brain tissues from the Tph2creERT2:TrkBflox (TG) and TrkBflox (Ctrl) mice were analyzed by western blotting. In the MB samples, where the 5-HT neurons are mostly located, the levels of TrkB were found to be reduced only in the cre-positive animals (p = 0.0239) (Figure 1C). A reduction, but not a total loss of TrkB signal was expected, since although 5-HT neurons are enriched in the MB regions, they nevertheless constitute a minority of all the TrkB positive cells in this region. No changes in TrkB levels were observed in the HC and hypothalamus (Supplementary Figure 2A). Our previous study in zebrafish indicated that TrkB regulates 5-HT and DA neurons (Sahu et al., 2019). 
We therefore measured the levels of 5-HT, DA, and the 5-HT metabolite 5-hydroxyindoleacetic acid (5-HIAA) in the MB containing the raphe nuclei, as well as in the projection regions of serotonergic neurons in the PFC and HC, using HPLC. The levels of 5-HT were significantly increased in the MB and PFC, and there was a trend toward an increase in the HC (Figure 1Da). However, the levels of 5-HIAA were not significantly altered (Figure 1Db) and, consequently, the 5-HIAA/5-HT ratio, a measure of 5-HT turnover, was significantly reduced in the MB (Figure 1Dc). Furthermore, we found that the levels of DA were also increased in the MB as well as in the PFC in the TG animals (Figure 1Dd). The other brain regions without any significant changes are represented in Supplementary Figure 2B. Consistent with the increased 5-HT levels, q-RT-PCR experiments indicated that mRNA levels of the 5-HT-synthesizing enzyme Tph2 were significantly up-regulated (p = 0.0070) in the MB, which suggests that the capacity for 5-HT synthesis is increased in the transgenic animals (Figure 1E). We also investigated the mRNA levels of 5-HT receptors and found a significant decrease in the expression of the 5ht2b receptor in the PFC (p = 0.0022). No significant changes were found in the expression of the other 5-HT receptors assayed, including the 5ht1 (a,b,d), 5ht2 (a,c), 5ht3a, 5ht6, and 5ht7 receptors (data not shown). Furthermore, the q-RT PCR (Figure 1E) indicated that Bdnf mRNA levels were upregulated in the PFC of the TG animals (p = 0.0355). A representative image of the in situ hybridization suggests an increased reaction with the Bdnf probe in the TG animals (Figure 1F). Interestingly, we did not see any significant changes in the mRNA levels of Bdnf and 5ht2b in the HC. Taken together, these results indicate that TrkB signaling significantly modulates the neurotransmitter phenotype of 5-HT neurons. [Figure 1 caption, panels E-F: (E) q-RT PCR results: the transcripts for Tph2 in the MB and Bdnf in the PFC were increased; the transcript for 5ht2b was reduced in the PFC of the TG mice; no change in Bdnf and 5ht2b transcripts in the HC was observed (N = 4). (F) In situ hybridization with the Bdnf probe in the PFC of Ctrl and TG mice shows a robust expression (N = 3). Scale bar represents 100 µm, Student's t-test, *p-value < 0.01, **p-value < 0.01. HPLC, high-pressure liquid chromatography; 5-HT, serotonin; 5-HIAA, 5-hydroxyindoleacetic acid; DA, dopamine; PFC, pre-frontal cortex; HC, hippocampus; MB, mid-brain.] Mice With Reduced TrkB in 5-HT Neurons Are Lean in Spite of Increased Food Intake The TG littermates were found to be lean compared to the controls and to transgenic mice not exposed to tamoxifen (p = 0.0067, Figure 2A). We therefore assessed the metabolic activity of the TG mice by subjecting them to the Comprehensive Laboratory Animal Monitoring System (CLAMS) for 3 days. Unexpectedly, we observed a significant increase in feeding behavior (p = 0.0351) during the daytime and a trend toward an increase during the nighttime. At the same time, locomotor activity was also increased (p = 0.0221) during the daytime for the TG animals, again with a trend toward an increase in nighttime activity (Figures 2B,C). Furthermore, the respiratory exchange rate (the ratio of CO2 expelled to O2 consumed) was elevated both during the day and the night (Figure 2D) in the TG animals, indicating increased metabolic activity. These data suggest that, in spite of increased food intake, enhanced physical and metabolic activity led to a significant reduction in the body weight of the TG mice.
Behavioral Effects of TrkB Deletion in Adult Serotonergic Neurons To assess the behavioral effect of TrkB loss in serotonergic neurons, we subjected both TG and Ctrl animals to a battery of behavioral tests, including tests for anxiety such as elevated plus maze (EPM), light dark test (LD) and open field (OF). For measuring aggression, we used the resident intruder (RI) test. The effect of Fluoxetine was studied using forced swim test (FST). The learning and memory were assessed using Pattern separation (PS) and Barnes maze (BM) test. These tests were performed in different cohorts of animals. The first cohort was exposed to OF, LD, EPM, CLAMS, and FST, in this order. There were 30 animals per group for OF, EPM, and LD. In the CLAMS and FST test we used N = 12/group. The second cohort was exposed to RI and BM and the N = 12/group. The third cohort was used for PS test and N = 20/group. The most stressful tests such as RI, FST, CLAMS, and PS test were performed in different cohorts of animals to reduce stress and variability arising from repeated exposures (Voikar et al., 2004). In the EPM, no significant difference in the time spent in the open or close arm was observed between the groups (Figure 3A and Supplementary Figure 3A). In the LD test, the time spent in light compartment was increased in the TG mice ( Figure 3B). In the open field test, in spite of the increased activity seen in the metabolic cages, we observed equal amount of ambulation for both the groups in total distance moved and time spent in the center (Figure 3C and Supplementary Figure 3B). To measure if these animals showed signs of aggression, they were subjected to RI paradigm. The experiment animals were residents in their home cage. No significant change was observed between the groups in the number of attacks on the intruder mice ( Figure 3D). Interestingly, the non-social exploration behavior characterized by digging, rearing and scanning the intruder was increased in the TG animals ( Figure 3D). The social exploration which included contacts with the intruder such as sniffing, chasing and climbing was unchanged (Supplementary Figure 3C). The acute effect of antidepressant fluoxetine was then assessed by the forced swim test. The drug fluoxetine was administered intraperitoneally 30 min before the test. Both groups responded similarly with decreased immobility (Figure 3E) suggesting that both the controls and TG animals responded to acute fluoxetine (p < 0.0001). In a separate cohort of animals, we subjected the animals to a PS paradigm of fear conditioning (Figure 3F). The baseline activity of all the animals was analyzed before proceeding with the behavior protocol. We used both males and females in this experiment and found similar effects in both sexes. After 10 days of the training protocol, the control animals exhibited a significant pattern separation, while the TG animals showed no pattern separation by two-way ANOVA [F(24,64) = 2.167, p = 0.007] (Figure 3F). A detailed analysis of everyday freezing of both groups is shown in Supplementary Figure 4. This phenomenon of incomplete pattern separation is attributable to impairment in memory consolidation. In order to characterize the effect of the TrkB deletion on spatial learning and memory, we performed a BM test. During training, the animals learned to find the escape box equally, irrespective of the genotype (Figure 3G). 
When, after the initial training, the goal was moved to the diagonally opposite quadrant, the TG animals needed a significantly longer time to reach the new goal (Figure 3H), indicating impaired cognitive flexibility in the TG mice. [Figure 3 caption, continued: In the resident intruder test (RI), the number of attacks was unaffected between the genotypes, whereas significant non-social behavior was observed; the TG animals showed non-social defensive behavior compared with the controls (N = 12/group). (E) The immobility time was reduced after acute fluoxetine in both genotypes subjected to the forced swim test (FST). (F) In the pattern separation paradigm (PS), the control animals exhibit pattern completion, distinguishing context A from context B after 10 days of continuous exposure to both contexts; the TG animals' contextual pattern separation is inhibited (N = 20/group). (G) In the Barnes maze test (BM), both groups of animals learned during acquisition. (H) The latency to the target zone during reversal training was significantly increased in the TG animals. Student's t-test, two-way ANOVA, *p-value < 0.01, **p-value < 0.001, and ****p-value < 0.0001. Error bars represent means ± SEM.] Increased Newborn Cells and Altered Mature Neuron Markers in TG Mice The 5-HT innervation is known to regulate neurogenesis in the adult hippocampus (Gould, 1999). We therefore investigated the proliferation and survival of cells in the DG of TG mice using the method of BrdU incorporation into the DNA (Wu and Castrén, 2009; Casarotto et al., 2021). A schematic representation of the experimental timeline for BrdU administration and immunostaining is shown in Figure 4A. One day after BrdU administration, we observed increased precursor cell proliferation in the TG mice when compared to controls (Figure 4B). However, 4 weeks after BrdU injection, the levels of BrdU were greatly reduced and no significant difference between the genotypes was observed (Figure 4C), indicating that the excessively produced newborn cells failed to survive (Malberg et al., 2000; Santarelli et al., 2003). Consistently, the numbers of doublecortin (DCX, a marker of early post-mitotic neurons) and calretinin (a marker of late post-mitotic neurons) positive neurons were significantly increased in the DG of the TG mice (Figures 4D,E). Thus, the absence of TrkB receptors in the 5-HT neurons projecting to the HC increased the rate of cell proliferation but did not influence the long-term survival of hippocampal progenitors. DISCUSSION In this study, we have investigated the role of TrkB expression in Tph2-expressing 5-HT neurons. Our results suggest that loss of the TrkB receptor in 5-HT neurons increases 5-HT levels, thereby regulating neuronal plasticity and behavior. Reduction of TrkB in 5-HT neurons increases the proliferation, but not the long-term survival, of hippocampal cells, which is consistent with the increase in immature neuronal markers such as doublecortin and calretinin in the transgenic animals. Previous studies have revealed a significant role for BDNF signaling in the early differentiation of 5-HT neurons (Galter and Unsicker, 2000a,b). Furthermore, excess 5-HT during development impairs cortical differentiation (Gaspar et al., 2003; Deneris and Gaspar, 2018). Deletion of monoamine oxidase A (MAOA) in transgenic mice increases 5-HT levels and interferes with the formation of visual and somatosensory maps (Cases et al., 1996; Salichon et al., 2001), and this phenotype was further accentuated when TrkB levels were reduced (Vitalis et al., 2013).
To avoid potentially deleterious effects of TrkB loss on developing 5-HT neurons, we have used a conditional deletion of TrkB from these neurons in the adulthood and the early development of the 5-HT neurons is therefore intact. We did not observe any obvious loss of 5-HT neurons and the expression of Tph2 was in fact increased, suggesting that BDNF does not appear to be a critical survival factor for adult serotonin neurons. Our data indicate that BDNF through TrkB plays an important role in the regulation of 5-HT neurons and is likely a key element in the control of its function. Our data suggest that BDNF signaling through TrkB plays a major role in the proper functioning of the 5-HT neurons. Tph2, the 5-HT synthesizing enzyme in the brain (Walther et al., 2003) was increased in the TG mice which is consistent with increased 5-HT levels both in the MB as well as in the projection areas of these neurons in the PFC. Although the expression of most of the 5-HT receptors remained unchanged, the 5ht2b transcripts were significantly lower in the PFC of TG mice. 5HT2b agonists show antidepressant like effect and works in modulating serotonergic tone like selective serotonin reuptake inhibitors (SSRIs) (Diaz et al., 2016). In mice, 5HT2b receptors have been reported to regulate extracellular 5-HT levels (Callebert et al., 2006). However, the exact role of 5HT2b receptors in the increased serotonin levels observed in the TG mice requires further investigation. Excess 5-HT, especially during development, produces structural and functional abnormalities that are long-lasting (reviewed in Gaspar et al., 2003;Deneris and Gaspar, 2018). Even if 5-HT levels were increased, the main 5-HT metabolite 5-HIAA was not significantly enhanced and the ratio of 5-HIAA/5-HT, a measure of 5-HT turnover (Valzelli and Bernasconi, 1979) was decreased. Several other factors, in addition to increased synthesis, including lowered degradation rate or increased storage capacity, may have contributed to increased serotonin levels. Our data suggest increased serotonergic tone in the TG mice, although additional studies with microdialysis and in vivo voltammetry will be needed to investigate the role of TrkB deletion in 5-HT release in more detail. The serotonergic system is known to regulate appetite, and drugs that promote 5-HT release have been used for appetite reduction (Halford et al., 2005). We found that mice with loss of TrkB signaling in Tph2 expressing 5-HT neurons show significantly reduced body weight. Tph2 knockout mice lacking brain 5-HT have reduced weight at birth but their body weight normalizes during adulthood (Mosienko et al., 2015). Furthermore, we found that BDNF was increased in the TG mice. BDNF has been suggested to be one of the central factors regulating satiety in brain (Rios, 2013). Intraventricular BDNF administration reduces feeding and brain-wide reduction in the levels of BDNF or TrkB increases appetite and food intake, presumably acting at hypothalamic level, increasing body weight (Rios et al., 2001;Yeo et al., 2004;Gray et al., 2006;Rios, 2013). However, this reduced body weight is not produced by decreased appetite, as food consumption in the TG mice was actually increased, which suggest that the loss of weight was not a consequence of increased BDNF expression. Apparently, observed hyperactivity and increase in metabolic rate in mice with loss of TrkB in 5-HT neurons are sufficient to compensate the increased food intake. 
These data indicate that the effects of TrkB signaling and 5-HT on food intake and metabolic activity are complex and dependent on the brain regions being affected. Adachi et al. (2017) recently reported that a virally induced loss of TrkB in the dorsal raphe region also increased aggression. In spite of high 5-HT levels and hyperactivity mice deficient of TrkB specifically in 5-HT neurons showed normal phenotype in tests assessing anxiety and aggression. Furthermore, we observed a normal response to fluoxetine in the forced swim test in the TG mice, which is in contrast to the finding of Adachi et al. (2017) who reported a loss of responsiveness to fluoxetine in their mice. Our recent data show that mice where TrkB was deleted using an En1-cre promoter that is active in the midbrain region covering, but not confined to, the raphe nuclei, show increased aggression (Sahu and Castrén, unpublished). Since Adachi et al. (2017) used a local midbrain injection of cre-expressing viruses to delete TrkB, it is possible that deletion of TrkB in cells next to 5-HT neurons, such as GABAergic interneurons, might mediate the aggressive phenotype observed by Adachi et al. and the antidepressant mechanism. The 5-HT innervation is known to regulate the proliferation of hippocampal precursor cells (Deneris and Gaspar, 2018). Chronic antidepressant treatment increases neurogenesis (Malberg et al., 2000;Santarelli et al., 2003) and long-term survival of these newborn neurons is regulated by BDNF signaling (Sairanen et al., 2005). We found a significant increase in hippocampal cell proliferation and early differentiation of newborn DG neurons, as indicated by increased BrdU incorporation and doublecortin as well as calretinin positive neurons, although the latter finding awaits confirmation with stereological methods. However, their long-term survival was at a wildtype level, which is consistent with the notion that longterm survival requires activity-dependent incorporation into hippocampal networks (Castren and Hen, 2013). The failure of long-term survival is also consistent with the impairment in cognitive flexibility and alterations in pattern separation, processes that are thought to be dependent on the hippocampal function, We have previously observed that complete loss of TrkB in zebrafish has a major impact on the development of 5-HT and DA neurons (Sahu et al., 2019). Current findings indicate that the effects of TrkB signaling in the mammalian 5-HT neurons are predominantly at a functional level. Our data demonstrate that deleting a receptor in a circumscribed group of neurons can have widespread cell non-autonomous trans effects in many parts of the adult central nervous system. Through increased synthesis of 5-HT, lack of TrkB in these neurons significantly impacts on the maturation of hippocampal neurons and consequently the animal behavior. These findings underline the previously implicated close connectivity between neurotrophins and the 5-HT system (Mattson et al., 2004;Martinowich and Lu, 2008). DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors. ETHICS STATEMENT The animal study was reviewed and approved by the Animal Ethical Committee of Southern Finland (Finland: ESAVI/10300/04.10.07/2016 and ESAVI/38503/2019). AUTHOR CONTRIBUTIONS MS and EC planned the experiments and wrote the manuscript. 
MS, YP-B, MP, AS, OB, and KK performed the behavioral and biochemical experiments. TP performed the HPLC analysis on the animals. All authors contributed to the article and approved the submitted version. FUNDING EC was received the ERC grant #322742 -iPLASTICITY; Sigrid Jusélius Foundation, Jane and Aatos Erkko Foundation, and the Academy of Finland grants #294710, #327192, and #307416. The Helsinki University Library has funded the processing fees for publication. ACKNOWLEDGMENTS We thank Outi Nikkila, Sulo Kolehmainen, Seija Lågas, and Erja Huttu for their expert technical help. We are grateful for Dr. Pierre Chambon and Dr. Daniel Metzger for the Tph2-creERT2 mice. We also thank Dr. Vootele Voikar, in charge of the Mouse Behavioral Phenotyping Facility, which is supported by Biocenter Finland and HiLIFE.
2021-04-15T13:36:54.753Z
2021-04-15T00:00:00.000
{ "year": 2021, "sha1": "af6a26508f34c853439a1640d189d55351b522d7", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnmol.2021.616178/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "af6a26508f34c853439a1640d189d55351b522d7", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
253045163
pes2o/s2orc
v3-fos-license
Alterations in Stem Cell Populations in IGF-1 Deficient Pediatric Patients Subjected to Mecasermin (Increlex) Treatment Pathway involving insulin-like growth factor 1 (IGF-1) plays significant role in growth and development. Crucial role of IGF-1 was discovered inter alia through studies involving deficient patients with short stature, including Laron syndrome individuals. Noteworthy, despite disturbances in proper growth, elevated values for selected stem cell populations were found in IGF-1 deficient patients. Therefore, here we focused on investigating role of these cells—very small embryonic-like (VSEL) and hematopoietic stem cells (HSC), in the pathology. For the first time we performed long-term observation of these populations in response to rhIGF-1 (mecasermin) therapy. Enrolled pediatric subjects with IGF-1 deficiency syndrome were monitored for 4–5 years of rhIGF-1 treatment. Selected stem cells were analyzed in peripheral blood flow cytometrically, together with chemoattractant SDF-1 using immunoenzymatic method. Patients’ data were collected for correlation of experimental results with clinical outcome. IGF-1 deficient patients were found to demonstrate initially higher levels of VSEL and HSC compared to healthy controls, with their gradual decrease in response to therapy. These changes were significantly associated with SDF-1 plasma levels. Correlations of VSEL and HSC were also reported in reference to growth-related parameters, and IGF-1 and IGFBP3 values. Noteworthy, rhIGF-1 was shown to efficiently induce development of Laron patients achieving at least proper rate of growth (compared to healthy group) in 80% of subjects. In conclusion, here we provided novel insight into stem cells participation in IGF-1 deficiency in patients. Thus, we demonstrated basis for future studies in context of stem cells and IGF-1 role in growth disturbances. Graphical abstract Supplementary Information The online version contains supplementary material available at 10.1007/s12015-022-10457-2. Introduction Insulin-like growth factor 1 (IGF-1) is a part of essential axis responsible for development and growth of cells and tissues. Due to high homology both, IGF-1 and insulin activity, are related to control of metabolic phenomenon but also longevity [1]. To date, role of IGF-1 signaling pathway was found to be critical for numerous processes associated with growth, including inter alia osteoblasts expansion or nesting of hematopoietic stem cells (HSCs) in general. [2] Noteworthy, IGF-1 proper distribution, function and activity is closely related to its plasma binding protein -IGF-1 binding protein 2 (IGFBP2) [3]. IGF-1 deficiency demonstrates very low prevalence among pediatric patients diagnosed with short stature, constituting approximately 1.2% [4]. Pathological conditions like dwarfism, including Laron syndrome with substantially low levels of IGF-1, allowed us to discover crucial role of that protein in growth and maturation reduction [5]. Laron syndrome is associated with IGF-1 deficiency and mutations in growth hormone (GH) receptor. Due to disabled ability of the receptors to respond to GH, treatment of the IGF-1-deficient patients is based generally on recombinant protein application -mecasermin (available in drug Increlex) [6,7]. Current studies suggested that despite favorable effects of rhIGF-1 (recombinant human IGF-1), normal values for height are not commonly achieved. 
Therefore, novel therapeutic approaches are suggested including combination of rhIGH-1 and rhGH or even post-GH receptor agonists (still to be identified) [8,9]. Significance of VSELs discovery was not only based on the enormous differentiation potential of these cells and possible implementation in regeneration. Most importantly, these cells with embryonic-like characteristic were found to be present in body through whole life in quiescent state [10]. To date, that population of stem cells was found inter alia within cardiac cells [11] and lung tissue [12], apart from their confirmation in bone marrow and peripheral blood [13]. Regarding IGF-1, VSELs were shown to possess its receptor activity through involvement of the related signaling pathways and expression of IGFR1 gene [14], thus, enabling them to respond to the hormone. However, it is worth to note that these stem cells are able to modify their responsiveness to IGF-1 or insulin through corresponding receptors, and preserve quiescent state [15]. Besides reported role of VSELs in regeneration of tissues damage, elevated levels of these stem cells were also reported in condition associated with development disturbances, like Laron syndrome described above [16]. In contrast to hematopoietic stem cells name, that population was found to give rise not only to leukocytes or other cells of hematopoiesis [17]. More recent data demonstrated that adipocytes can be generated out of HSC cells [18,19]. In recent years researchers gradually focused more on practical assessment of VSELs role in growth and regeneration of various tissues. Those included inter alia bone defects where VSELs supported other populations of stem cells (present within fraction of bone marrowderived mononuclear cells-BM-MNC) in osteogenesis and effective reconstruction of the damaged areas [20]. Besides structural tissue losses, VSELs role was suggested to affect limitation of damage progression and maintenance of beta cells proper function in the course of type 1 diabetes [21]. When discussing possible applications of VSELs it is worth to remember that presence of these cells was shown to be closely associated with age, with highest numbers reported in children predominantly [11]. Numerous conditions and therapeutic protocols were demonstrated to be associated with increased levels of VSELs. These include inter alia FSH therapy in women prepared for in vitro fertilization [22]. Apart from VSEL and HSC populations, crucial role of endothelial progenitor cells has been shown in recent years in context of numerous pathological conditions. These include for example metabolic disorders [23]. To date, there are no reports on potential role of these subsets of cells in the course of Laron syndrome. However, considering endothelial progenitor cells (EPC) essential participation in vasculogenesis and angiogenesis [24,25], their involvement is highly possible in processes accompanying intensive growth of the body. Here we focused on evaluating changes in circulating stem cells, including VSELs, HSCs and EPCs, in IGF-1-deficient pediatric patients subjected to therapy with mecasermin (Increlex). Our results are the first to demonstrate long-term effects of IGF-1 deficiency syndrome patients' treatment in context of stem cells. Moreover, we have managed to reveal essential associations between studied cells and growth-related parameters, and therapy influence on these properties. 
Patients and Material Patients with reported IGF-1 deficiency were enrolled in the study, with an age range of 6–12 years. Written consent was obtained from each patient or legal guardian after full explanation of the purpose and nature of all procedures used. Complete patient characteristics are included in the supplementary materials (Supp. Figure 1) [26]. In addition, 36 subjects were selected for the control group (with a proper growth rate, and with inflammatory/endocrine/oncological disturbances excluded); age-matched subjects were used for comparisons before therapy (8.5 ± 2.5 versus 9.7 ± 1.1 years in the IGF-1-deficient group) and at the 4th–5th year of therapy (13.3 ± 2.5 versus 12.8 ± 2.5 years in the IGF-1-deficient group). Diagnosis of Primary Insulin-like Growth Factor Deficiency (PIGFD) was based on short stature or growth failure, proper growth hormone production, and insufficient production of IGF-1. Other conditions, including chronic diseases or poor nutrition, were excluded. Patients were treated with Increlex (mecasermin), recombinant IGF-1, at an initial dosage of 0.04 mg/kg injected subcutaneously twice a day. The Increlex dose was gradually increased by 0.04 mg/kg up to a maximum of 0.12 mg/kg. (Displaced figure legend: results obtained before therapy and at the 4th–5th year of observation; data presented as violin plots with median and quartiles shown; asterisks indicate significant p values: *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.) The study protocol was approved by the local Ethical Committee at the Medical University of Bialystok (APK.002.78.2021). Peripheral blood was collected by venipuncture prior to Increlex application and at control visits in the course of therapy, up to 4–5 years. Whole blood (600 μl) was subjected to immunostaining and flow cytometric analysis, and the remaining material was used to obtain plasma. Collected plasma was stored at −80 °C for later immunoenzymatic tests. The complete gating strategy implemented for delineation of VSEL, HSC, and endothelial stem cells is comprehensively described in the supplementary materials (Supp. Figure 3). Populations of VSEL and HSC were initially gated from small-sized cells of 2–6 μm (based on size beads of 1, 2, 4, and 6 μm) (Life Technologies), using strict morphological properties based on relative size (forward scatter, FSC) and shape/granularity (side scatter, SSC). Subsequently, a two-way gating strategy was implemented for mutual control and validation of the rare-event analysis results. One way, the population of mature cells in blood was excluded with a Lineage 1 cocktail with the addition of anti-CD235a for exclusion of the remaining erythrocytes; cells of interest were then distinguished on the basis of the progenitor cell marker CD133 and differential expression of CD45: Lineage−/CD45+CD133+ (HSC) and Lineage−/CD45−CD133+ (VSEL). The other gating approach first focused on detection of Lineage-negative cells with concomitant expression of CD133, and then VSEL (Lineage−CD133+/CD45−) and HSC (Lineage−CD133+/CD45+) were gated on the basis of CD45 marker presence. In reference to endothelial stem cells, cells of interest were initially gated out from the PBMC population on the basis of FSC and SSC properties. Furthermore, Boolean gating was applied to distinguish cells demonstrating concomitant expression of all selected markers, including: CD34+CD309+CD133+ (EPC, endothelial progenitor cells), CD34+CD144+ (CEC, circulating endothelial cells), and CD34+CD309+ cells (Supp. Figure 3).
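The two-way gating described above reduces to a pair of Boolean marker combinations. The sketch below is only a minimal illustration of that logic in Python, assuming cytometry events have already been exported to a table with hypothetical per-event columns (lineage_pos, cd235a_pos, cd133_pos, cd45_pos); it is not the authors' analysis pipeline, which used conventional flow-cytometry gating software.

```python
import pandas as pd

# Hypothetical per-event table exported from a cytometer; column names are illustrative only.
events = pd.DataFrame({
    "lineage_pos": [False, False, False, True],
    "cd235a_pos":  [False, False, False, False],
    "cd133_pos":   [True,  True,  False, True],
    "cd45_pos":    [False, True,  True,  True],
})

# Exclude mature (lineage-positive) cells and residual erythrocytes (CD235a+).
lin_neg = ~events["lineage_pos"] & ~events["cd235a_pos"]

# VSEL: Lineage- / CD133+ / CD45-      HSC: Lineage- / CD133+ / CD45+
vsel_mask = lin_neg & events["cd133_pos"] & ~events["cd45_pos"]
hsc_mask  = lin_neg & events["cd133_pos"] &  events["cd45_pos"]

# Report each population as a percentage of all analyzed events
# (the 'WBC' denominator used in the paper).
total = len(events)
print(f"VSEL: {vsel_mask.sum() / total * 100:.3f}% of events")
print(f"HSC:  {hsc_mask.sum() / total * 100:.3f}% of events")
```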
All tested cell populations were presented as frequency of specific cell events within all events of leukocytes analyzed, referred as 'WBC' -white blood cells. Immunoenzymatic Assessment of SDF-1 Isolated plasma samples were used for immunoenzymatic analysis of SDF-1 level in SDF-1-deficient patients and healthy control group. Materials were tested in accordance with instructions included in ELISA DuoSet kit for detection of SDF-1/CXCL12 (R&D Systems). Selected assay allowed for detection of SDF-1 within range of 31.2 -200 pg/ml. Data were acquired with the use of LEDETECT96 microplate reader (Labexim, Lengau, Austria). Statistical Analysis Biostatistical analysis of the acquired data was performed using GraphPad Prism 9.0.0 statistical software (GraphPad Prism Inc., San Diego, CA, USA). Depending on data distribution obtained results were analyzed with the use of parametric or non-parametric tests for paired or unpaired data. Statistical significance was set at 0.05, with asterisks or p values indicating power on the graphs: *-p < 0.05, **-p < 0.01, ***-p < 0.001, ****-p < 0.0001. Violin plots with median values and quartiles were used when IGF-1-deficient and healthy control group were compared. Variations in time were visualized with graphs demonstrating mean change values Peripheral Blood Stem Cells Response to Increlex Application in IGF-1-Deficient Patients In the course of 4-5 years of therapy we observed essential changes within peripheral population of VSEL and HSC predominantly. Despite periodic elevations of peripheral levels of VSEL and HSC, final effect of Increlex application was associated with decline in these populations of stem cells. In reference to VSELs, significant decrease was observed at first 5 months of therapy, later at 22 nd and 25 th month, and finally at the end of monitoring at 51 st and 58 th month (Fig. 2A). These changes were concomitantly followed by a gradual decline in HSC levels. Most significant differences were found after 3-5 months of Increlex use, then at 25 th to 40 th month, with only slight declines in HSC at 15 th and 54 th month (Fig. 2B). In contrast to observed alterations in VSELs and HSCs, endothelial stem cells did not demonstrate significant response to the treatment regimen applied. Only transient increase in EPC was found at 1 st month of Increlex use, and transiently after 29 th month in reference to CD34 + CD144 + and CD34 + CD309 + cells (Fig. 3A-C). At last year of therapy, we tested stem cell levels in reference to their age-matched healthy controls. We found that Increlex application did not only reduce VSELs and HSCs in IGF-1-deficient patients, but also allowed to obtained significantly lower values compared to control group (Fig. 1A-C). SDF-1 Concentration in IGF-1-Deficient Patients and Its Response to Increlex Therapy Changes of SDF-1 concentration were comparable to variations in VSEL/HSC levels after Increlex application. Thus, decrease in SDF-1 plasma level was initially found at 3-5 th month of therapy, and subsequently, at 15 th and after 45 th month. Lower values of the chemokine seemed to be maintained until the end of patients monitoring (Fig. 4A). Considering simultaneous decline in both, VSEL, HSC and SDF-1, we intended to evaluate mutual association between these parameters. We found strong correlation of initial plasma SDF-1 levels with frequency of VSELs (r = 0.6629). VSELs also demonstrated strong correlation with HSC values (r = 0.5547). 
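The frequency and correlation results just described were obtained in GraphPad Prism; the snippet below is only a hedged sketch of the same kind of computation in Python with SciPy, using invented example values rather than the study data, and showing Spearman's coefficient and a Wilcoxon paired test as one possible choice among the parametric/non-parametric options named in the Methods.

```python
import numpy as np
from scipy import stats

# Invented example values; the real study used patient measurements analyzed in GraphPad Prism.
sdf1_pg_ml   = np.array([2100, 1850, 2400, 1600, 2250, 1990, 1700, 2300])          # plasma SDF-1
vsel_pct_wbc = np.array([0.012, 0.009, 0.015, 0.007, 0.013, 0.011, 0.008, 0.014])  # VSEL, % of WBC

# Correlation between SDF-1 and VSEL frequency (the paper does not name the estimator;
# Spearman is shown here as a common choice for small samples).
rho, p = stats.spearmanr(sdf1_pg_ml, vsel_pct_wbc)
print(f"SDF-1 vs VSEL: rho = {rho:.3f}, p = {p:.3f}")

# Paired comparison of a parameter before therapy vs. at the end of follow-up,
# using a non-parametric paired test.
vsel_before = vsel_pct_wbc
vsel_after  = vsel_pct_wbc * 0.6   # illustrative post-treatment decline
w, p_paired = stats.wilcoxon(vsel_before, vsel_after)
print(f"Wilcoxon paired test: W = {w:.1f}, p = {p_paired:.4f}")
```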
Interestingly, however, no association was shown between SDF-1 and peripheral HSC levels (Fig. 4B). SDF-1 levels in patients with IGF-1 deficiency were increased before Increlex therapy, and were thus complementary to the pre-treatment values of VSEL and HSC. Increlex application led to significant changes after 4–5 years of therapy, diminishing these differences (Fig. 4C). Association Between Tested Cells and Clinical Characteristics of Patients with IGF-1 Deficiency VSEL and HSC were found to correlate strongly with weight, height, and BMI of the studied subjects, the parameters most affected by Increlex application. Furthermore, moderate correlations of VSELs with free thyroxine (fT4) and thyroid-stimulating hormone (TSH) levels were reported. A slight tendency was shown for VSEL correlation with IGF-1, IGF-1 binding protein 3 (IGFBP3), and bone age (based on hand and wrist radiograph analysis). Regarding HSCs, similar results were observed only in the context of IGF-1 and IGFBP3. At later stages of the therapy, links between VSELs and glucose or glycosylated hemoglobin (HbA1c) (at the 3rd–25th month of therapy), and between HSC and BMI and fT4 (positive correlation) or glucose (negative correlation), were reported (at the 29th–58th month of treatment) (Fig. 5A-B). (Fig. 3 legend: Increlex treatment effects on changes within Endothelial Progenitor Cells (EPC; CD34+CD133+CD309+) (A), Circulating Endothelial Cells (CEC; CD34+CD144+) (B), and CD34+CD309+ cells (C) in IGF-1-deficient pediatric patients; mean change in time of selected parameters plotted with standard deviation (SD); black vertical arrows indicate statistically significant changes compared to pre-treatment values (Time 0).) EPC levels demonstrated a tendency for correlation with weight and height, glucose metabolism-related parameters (glucose and HbA1c), and IGF-1 and IGFBP3 levels. Comparable associations were found in reference to CD34+CD309+ cells. Nevertheless, the correlations found were also influenced by Increlex therapy (Fig. 5C-D). Effects of Increlex Application on Growth Rate of Studied IGF-1-Deficient Patients Considering the influence of Increlex on VSEL and HSC, we monitored growth-related parameters to verify the efficacy of the therapy in improving development disturbances. Slope analysis of the regression lines was applied to compare the change rates between IGF-1-deficient and healthy patients. 60% of subjects demonstrated an at least comparable or even higher rate of growth compared to healthy children. The others were also found to improve their weight in response to Increlex application, although to a slightly lower degree (Fig. 6A, D). For most patients' height rates, these values were similar to or higher than those of age-matched healthy subjects. Noteworthy, an increase in growth, supported by Increlex, was achieved in all IGF-1-deficient subjects (Fig. 6B, D). We also found that 80% of the Increlex-treated patients achieved an at least comparable rate of BMI change compared to the control group (Fig. 6C, D). Discussion The IGF-1 axis has been shown to play a crucial role in the differentiation and growth of cells and tissues [1]. Recently, disturbances in these processes were studied intensively in Laron syndrome patients, who suffer from a growth disorder but simultaneously demonstrate relatively high longevity [6,27]. It might be tempting to presume that the significantly reduced levels of IGF-1 in IGF-1 deficiency syndrome patients lead to reduced growth associated with disturbances in stem cells.
Since the discovery of VSELs and their presence even in adult tissues, their use in regeneration has become a study aim for numerous researchers, although their limited numbers in the body required considerable effort in later studies to reveal their potential and possible implications in medicine [10]. To date, the presence of VSELs has been confirmed in different tissue compartments including heart, bones, and lungs, in numerous cases detected together with HSCs [11,12,20,28]. Thus, this embryonic-like cell population can be considered an essential element in the development and regeneration of all main tissues and organs. Given their demonstrated decline with age [11], their significance is most pronounced in the period of intensive growth and maturation. Here, IGF-1 deficiency syndrome patients exhibited a significantly higher frequency of peripheral blood VSELs and HSCs. These results are in accordance with the previously reported increased ratio of these cells in a mouse model of growth disturbances. Interestingly, VSELs of those study subjects demonstrated a significantly higher level of demethylation of Oct4, a pluripotency regulator, potentially as a response to reduced IGF-1 signaling in the setting of decreased GH receptor presence [16]. Thus, accumulation of the studied stem cells in blood might also be associated with induced expression of genes related to pluripotency. However, the question arises why high numbers of VSEL or HSC were not able to induce growth of the subjects. That phenomenon could be explained by the low IGF-1 level reported in the IGF-1-deficient patients. Previous studies demonstrated that IGF-1 signaling is important for osteoblast expansion together with engraftment of HSC [2]. Previous reports indicated a role of SDF-1 in regeneration through mobilization of progenitor cells at the site of injury [29]. Considering that, a decline in the peripheral protein level might be associated with a higher tissue concentration, thus leading to increased involvement of VSELs and HSCs in tissue development. Implementation of recombinant IGF-1 in therapy could support proper nesting of HSCs, and presumably also VSELs, in the tissues of interest. In the course of Increlex therapy, we found that the populations of HSCs and VSELs gradually decreased over time. Considering that HSC were found in the past to be one of the potential sources of adipocytes [18], we presume that an intensified growth rate can lead to increased utilization of these cells, possibly also of the VSEL population. That could additionally be associated with improved activity of the IGF-1 axis and migration of these cells into developing tissues. The decline in both VSEL and HSC populations, presumably as a result of their participation in tissue expansion, can also involve SDF-1 activity. (Fig. 4 legend: Changes in the SDF-1 plasma concentration in IGF-1-deficient patients subjected to Increlex treatment. Therapy-related alterations in SDF-1 level presented as mean percentage change (A); black vertical arrows indicate statistically significant changes compared to pre-treatment values (Time 0). Analysis of mutual correlation between SDF-1, VSEL, and HSC before Increlex application (B). Comparison of SDF-1 values in IGF-1-deficient patients versus the healthy control group, at admission and after 4–5 years of Increlex therapy (C); asterisks indicate statistically significant values: *p < 0.05.)
(Fig. 5 legend: Determination of mutual associations between studied stem cell populations and clinical and laboratory results at three main stages of Increlex therapy: before treatment (T0), the 3rd–25th (TI), and the 29th–58th (TII) month of treatment; correlation coefficient values mapped, with statistically significant correlations indicated by asterisks: *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.) Stromal cell-derived factor 1 (SDF-1), among other factors (including hormones such as FSH), has already been demonstrated to be an efficient mobilization protein, inducing the release of stem cells such as VSELs from the bone marrow [22]. HSCs have also been reported to be recruited by SDF-1 into the regeneration of damaged tissues, through modulation of progenitor cells within damaged murine hepatic tissue [28]. We speculate that the initially high concentrations of the plasma chemoattractant SDF-1 might originate from unresponsiveness of the monitored stem cells, associated with IGF-1 axis disturbances [30]. Restored levels of IGF-1 in patients were followed by a decrease in peripheral VSEL and HSC and a reduced SDF-1 level. Cumulatively, these data emphasize the high probability of our hypothesis that the monitored stem cells are involved in tissue development. Previous studies indicated a crucial role of VSELs in an animal model of bone defect regeneration. Those studies revealed that bone marrow-derived mononuclear cells were significantly less efficient in regeneration when deprived of VSELs. Noteworthy, the presence of VSELs was associated with reduced inflammation, based on CD68+ macrophage activity, and lower levels of the proinflammatory cytokines IL-1beta and MCP-1 [20]. Those findings could be partially linked to the phenomena present in children's growth and the associated increase in the size and width of the bones. We are aware of the limited number of subjects in whom correlations between bone age and the studied stem cells were tested. However, we provide a basis to consider VSELs a crucial participant in tissue growth. Furthermore, changes in VSELs were accompanied by high efficacy of Increlex treatment, as the height and BMI change rates were higher than or comparable to those of healthy controls in 80% of cases. Such beneficial effects of the therapy are in accordance with data from an Israeli team, where recombinant IGF-1 application significantly improved the height of Laron subjects [31]. Although up-to-date data question the efficacy of rhIGF-1 application [8], here we have shown that most of the patients achieved at least comparable growth versus healthy subjects. Importantly, we demonstrated not only significantly improved growth-related values [9], but also an improved change rate of these parameters. Regarding the positive correlations between HSCs and anthropometric data, those stem cells seemed to have as important a role in growth as VSELs. In reference to HSC-related phenomena before and after therapy, we found a significant correlation of that population with IGF binding protein 3 (IGFBP3), which is responsible for IGF-1 distribution and activity [3]. These data might support the previously reported role of IGFBP2 in promoting survival and circulation of HSCs. Despite the indication of a mechanism independent of IGF-1 signaling [32], here we demonstrated a strong link between IGF-1 levels and HSCs. Taking into account the described importance of IGF-1 in HSC mobilization [2], we presume that a normal level of the hormone, together with IGFBP3, is essential for the proper distribution of stem cells during growth.
Previously, IGFBP3 protein was found to be moderately associated with growth velocity, suggesting its role as predictor of response to recombinant GH therapy [33]. These findings are substantially supported by our data, additionally extended by knowledge of tested VSELs and HSCs strong association with both IGFBP3 and height. In context of CD34+ cells, only pre-treatment CD34+CD309+ cells were found to have lower levels in IGF-1-deficient patients. Despite an increase of CD34+ cells in response to GH replacement therapy was reported previously [34], here, we showed no change in EPC, CEC or CD34+CD309+ cells. Additionally, we found that CD34+ cells expressing VEGF receptor (CD309) were found to initially correlate negatively with heigh and weight of the patients. Essential role of these cells in context of vasculo-and angiogenesis cannot be excluded, however, in reference to growth those populations does not seem to play significant role. Cumulatively, our data provide clinical evidence of critical role of IGF-1 signaling pathway in growth of pediatric patients with IGF-1 deficiency syndrome. Furthermore, we found essential importance of IGF-1 in possible mobilization of VSELs and HSCs into developing tissues. Thus, implementation of rhIGF-1-Increlex, could be more precisely associated with supporting efficient nesting of developing tissues. We must always remember about possible complications associated with intensified activity of stem cells in response to growth factors, including promotion of tumor cells [18]. Therefore, despite favorable effects reported here, regular monitoring of the Increlex therapy is required [35]. Regarding study limitations, we must note extremely low incidence of IGF-1 deficiency syndrome (estimated to be around 500 worldwide for Laron syndrome) [36]. Therefore, relatively small number of our studied group, consisting of all the province patients, seems to be justified. However, we hope that further multi-center studies would be possible to support our conclusions, complemented with mechanistic explanation of risen hypothesis of VSEL/HSC role in growth. Despite limitations, current data shed a significant light on VSELs and HSCs participation in growthrelated phenomenon in the course of IGF-1 deficiency. In addition, here we provide important basis for further research on stem cells and their role in clinical aspects associated with development disturbances and related therapeutic approaches. Fig. 6 Comparative analysis slopes as visualization of weight, height and BMI differentials between IGF-1-deficient patients and healthy control group. Individual regression lines for each patient with corresponding control group were demonstrated for weight (A), height (B) and BMI (C) increase in time. Data supported by donut graphs with frequency of subjects within groups of higher, lower and unchanged values for the parameters in response to Increlex application (D) (slope analysis data presented as regressions of values in time within IGF-1-deficient and proper control group, with slope value) (statistically significant differences indicated with asterisks: *-p < 0.05, **-p < 0.01, ***-p < 0.001, ****-p < 0.0001) ◂
2022-10-22T06:16:34.450Z
2022-10-21T00:00:00.000
{ "year": 2022, "sha1": "9d7522ba54cd773e92051f7270a97382ec62a4ed", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12015-022-10457-2.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "d668eabd54bfac4676e23804642a12ea8702a5a5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
53372136
pes2o/s2orc
v3-fos-license
Ultrasonography as an Evaluation Tool in a Randomized Controlled Trial Assessing Balneotherapy Effects in Rheumatoid Arthritis Objective: To evaluate the usefulness of ultrasonography as an evaluation tool in a Randomized Controlled Trial assessing Balneotherapy effects in Rheumatoid Arthritis. Methods: A prospective controlled clinical trial, not blinded, randomly assigned of patients with rheumatoid arthritis accordingly to the American College of Rheumatology criteria. The Balneotherapy group received Balneotherapy’s throughout 21 days in S. Jorge Spa. The main outcome was hand/wrist ultrasonography measured at the same moments in the two groups, and McNemar’s tests were used to compare changes in ecographics signals, with a 5% statistical significance level. Secondary outcomes were taken at same time for HAQ-DI and DAS28. A moderated regression analysis, complemented with the Johnson-Neyman (J-N) technique was used to perform the statistical analysis. Results: In thermal group there was a statistically significant result (p < 0.05) regarding the evolution of synovitis only at left hand/wrist according to ultrasonography signals, between baseline and day 21, end of thermal treatment, and after 3 months. Curiously, the same statistical findings were found in the control group, but at right side. No difference was found in DAS28 at the end of Balneotherapy but almost reach significance at month 3. HAQ-DI at end of treatment and 3rd month follow-up was significantly improved in the Balneotherapy. Conclusions: Pain and diminished function are hallmarks of RA patients, so any complementary contribution with no or mild side effects, as Balneotherapy, is welcome to enhance quality of life. In this study ultrasonography could detect improvement in synovitis in both RA patients groups, Balneotherapy translating both the possible effect of treatment and the natural history of RA. Both joints were the more affected at enrolment and the Balneotherapy had a slightly higher DAS28. Quality of life had a sustainable improvement with Balneotherapy. Ultrasonography is an objective, inexpensive modality to measure the response of RA small joint synovitis to Balneotherapy, provided that it is realized by a medical doctor with specific formation. Introduction Rheumatoid arthritis (RA) [1,2] is a chronic systemic autoimmune disease characterized by persistent inflammation of synovial joints with pain, often leading to joint destruction and disability, and despite intensive research the cause of RA remains unknown [3]. New effective drug treatments in RA has resulted in • Page 2 of 8 • initio, were stratified by age to ensure a better balance between groups and randomly assigned: Immediate thermal treatment or deferred thermal treatment [18].All patients signed an informed consent. Interventions The hydromineral occurrence at Balneotherapy centre (S.Jorge -30 Km from Porto), is a chloride-rich sulphur water with sodium prevailing in the cation composition. 
Most patients of the thermal group departed from the hospital, at 8 am, in a special mini-bus accredited for transportation of patients and returned to Porto (around 10.30 am) from Balneotherapy centre.The trip took about 20 minutes.Some patients preferred to take their own transport.All patients maintained their usual pharmacological treatment and kept their daily life activities, namely those who had professional jobs.Every day, the same medical hydrologist was in thermal treatments throughout all the session, adjusting treatments individually, if necessary. During 21 days, the thermal group has received alternately the following sulphur bath treatments: One day a collective thermal pool at 34 °C in groups of 8 patients, per 30 minutes-oriented by the same experienced physiotherapist.The prescription of the medical hydrologist was specific for each clinical condition, namely type of exercises for different body segments (paying attention to patient's limitations but emphasizing functioning and respiratory control) followed by 10 minutes of relaxation, including different water jets, electronically controlled, focused on the most painful body areas, always maintaining a jet distributed at safety. The following day patients had a sulphur bath (20 minutes) at 37 °C plus underwater jets (10 minutes) at 38 °C focused on to the most painful joints and finally global steam (5 minutes) at 38 °C.The latter two treatments were also adjusted by two experienced aquatic technicians, formerly prepared to be aware of symptoms and signs of alarm.The prescription of the medical hydrologist (jet force and temperature; area of the body) was individualized to each patient characteristics and the evolution of the disease.There was no direct massage because of the subjectivity of each therapist. The clinical evaluation was made simultaneously for the two groups (thermal and control) at day 0 (D0) baseline, day 21 (D21) end of thermal treatment and after 3 months (M3), following a pre-established protocol: Health Assessment Questionnaire -Disability Index (HAQ-DI), Visual Analog Scale (VAS) pain, fatigue, quality of life by the patient, Disease Activity Score -28 joints (DAS28), VAS for Global Health Assessment by the same physician who had no experience on the field of Balneotherapy, joint US (the same joints, chosen by the clinician according to pain and physical examination, in the same patient by the same experienced radiologist) and laboratory tests.less focus on non-pharmacological modalities, such as therapeutic exercise and balneotherapy [4][5][6][7][8].Balneotherapy has been used for a very long time, even ante christum (AC), and is recognized as an important way to treat rheumatologic diseases, specially osteoarthritis [9,10].It is called mineral baths or Balneotherapy, and uses different types of natural mineral water compositions like sulphur, radon, carbon dioxin, etc. Sukenik stated that the sulphur mineral water has special proprieties to rheumatologic diseases, including in the course of active inflammatory phases in RA [11,12].In Portugal, respiratory and rheumatologic (mostly osteoarthritis) diseases are the most frequently ones treated by sulphur mineral waters which are more common and focused in the Northeast of the country.In some European countries and in USA, the use of radon is prohibited as well as controversial [13]. According to the recommendations of the "Haute Autorité de Santé" for RA published in 2007 [14] and Forestier, et al. 
[15], Balneotherapy appears to provide an analgesic and functional benefit to patients with stable or long-established and non-progressive RA (grade C).It is not indicated when RA is active (professional agreement) [15]. So whereas balneotherapy has a large use in non-inflammatory osteo-articular conditions its real benefits are not clearly known in RA [16].To better understand and measure its effects a trial was conducted in order to compare Balneotherapy plus usual pharmacological treatment versus only pharmacological treatment.The aim of this study was to evaluate if Balneotherapy offers any benefit, using hand synovitis changes as the primary endpoint, evaluated by ultrasonography (US). Participants After approval by the ethics committee, patients were selected from the database of Unidade de Imunologia Clínica (UIC), Hospital Santo Antonio, Centro Hospitalar do Porto.Patients living at no more than 30 km away from the hospital were included in the study, in order to be able to attend the Balneotherapy and continue their ordinary daily life.An invitation letter was sent to patients to attend to a lecture on Balneotherapy in the hospital. The inclusion criteria were: 18-years-old or more; definitive diagnosis of RA according to American College of Rheumatology (ACR) criteria; with an evolution equal or more than 1; functional status I-III (classification ACR-Steinbrocker [17]). The exclusion criteria were: functional status grade IV; cognitive abnormalities (for example psychoses or senile dementia); active infection; participation in other complementary treatments. The 44 eligible patients, after a code attribution ab feasible.In respect to the care providers, they were not involved directly with the study nor with Balneotherapy modalities, however we can't assure that patients didn't comment anything during outpatient visits. Statistics analysis Analysis was performed by intention to treat.Mc-Nemar's tests were used to compare the proportion of individuals from the thermal and the control group that change ecographics signals (in terms of Synovitis and Hypervascularization).All patients were evaluated in moment zero and were reassessed after 3 weeks (moment 1) and 3 months (moment 2), in the follow-up.Statistical significance level was set at 5%.All the analyses were stratified by thermal and control group and performed in SPSS version 22. The US results were allocated in four different categories according to the degree of US changes (0 -no synovial thickening; 1 -mild; 2 -moderate; 3 -severe synovial thickening).We decide to group in one simple category status 2 and 3 taking in account the number of the participants that reported such symptoms, so our final US are splitted in three categories. The moderated regression analysis, complemented with the Johnson-Neyman (J-N) technique was used to perform the statistical analysis on HAQ-DI and DAS-28 results. Results 44 eligible patients accepted to participate in the trial: 22 participated in thermal group and 22 in control group between August 2011 and November 2011 [20]. Adherence to Balneotherapy was continuously assessed and a very good compliance of patients was achieved.There were only 3 cases of discontinued treatment due to reasons beyond the study.Table 1 summarizes the baseline characteristics of the enrolled sample, including US findings.The groups were homogenous at baseline with regard to age, duration of disease, gender.All were Caucasian. 
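The Statistics analysis above names McNemar's test for paired changes in ultrasonographic category between assessment moments. As a hedged illustration only (the trial used SPSS 22, and the counts below are invented), a paired 2x2 table of synovitis presence at D0 versus D21 for one group could be tested in Python like this:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Invented 2x2 paired table (rows: synovitis present / absent at D0,
# columns: present / absent at D21). The discordant cells drive McNemar's test.
table = np.array([
    [9, 8],   # present at D0 -> present / absent at D21
    [1, 4],   # absent  at D0 -> present / absent at D21
])

result = mcnemar(table, exact=True)   # exact binomial version, sensible for small samples
print(f"McNemar statistic = {result.statistic}, p = {result.pvalue:.3f}")
```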
Concerning US monitoring of RA hand the following results were found (Table 2): -Synovitis: In thermal group statistically significant results were found, regarding the evolution of synovitis between D0 and D21 and D0 and M3. -Thermal patients improved their US signals regarding to left side joints.In what concerns to the right side, there was a tendency for improved results but not statistically significant.In control group, similar results were found, but curiously in the opposite sides: Statistically significant results were found, only regarding to the right side. -Hyper vascularization: A similar analysis was performed regarding this variable, but no statistically power was found for this parameter in either group. Additional information was collected, regarding to the daily pharmacological treatment, as well as complications felt during the study, filled in by the patient. Ultrasonographic evaluation All US examinations (only hands and wrists) -were performed on the same planned day of clinical examinations (D0; D21; M3) by the same radiologist (with special interest on musculoskeletal US).US examinations were performed using a Toshiba Xario equipment and a linear transducer of 5-12 Mhz.US studies of previous clinically selected joints were done using gray scale technique and color power Doppler technique.The gray scale images were obtained in the longitudinal, transversal and oblique planes.The gray scale evaluation was used to detect synovial thickening/hypertrophy and joint effusion.A simple visual semi quantitative score system was used to estimate synovial thickening (grade 0 -absence of synovial thickening, grade 1 -Mild synovial thickening, grade 2 -Moderate synovial thickening, 3 -Severe synovial thickening).Other parameters were screened and recorded with gray scale technique like synovial cysts, tenosynovitis and rheumatoid nodules.Detection and quantification of bone erosions associated with synovitis were not performed because of long time consuming and it was not contributive to the objectives of the study.Synovitis can predict structural damage in rheumatoid arthritis [19]. Color power Doppler technique studies were done in the same joints studied with gray scale technique.The color power Doppler studies were performed in the power angio mode, using standardized parameters with low velocity scale and low wall filter, adjusted to detect slow flow.Color gain was adjusted to maximize demonstration of blood flow, while avoiding noise artifacts.The transducer was gently placed on the surface of the joint to avoid compression of superficial vessels or an artifact increase in vascular resistance caused by compression.Taking into consideration the findings on previous gray scale US examination a simple visual semi quantitative score system was also used to report color power Doppler examinations (grade 0 -absence of vascularization, grade 1 -mild vascularization, grade 2 -moderate vascularization, 3 -marked vascularization). Outcomes The outcomes were ultrasonography scores, HAQ-DI and DAS28, at the same moments in the two groups. Randomization A blocked randomization stratified by age was use.For allocation of the participants to one of the two groups, a computer generated list of random numbers was used. 
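For the secondary outcomes (HAQ-DI and DAS28 change scores), the Statistics analysis describes a moderated regression complemented by the Johnson-Neyman technique. A minimal sketch of that idea, assuming invented data and hypothetical variable names (not the authors' SPSS procedure), is to regress the change score on group, baseline, and their interaction, then scan baseline values for where the conditional group effect becomes significant:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 44
df = pd.DataFrame({
    "group": np.repeat([0, 1], n // 2),        # 0 = control, 1 = balneotherapy (illustrative)
    "baseline": rng.normal(1.2, 0.5, n),       # e.g., baseline HAQ-DI
})
df["change"] = -0.1 - 0.3 * df["group"] * df["baseline"] + rng.normal(0, 0.2, n)

# Moderated regression: effect of group on the change score, moderated by baseline.
model = smf.ols("change ~ group * baseline", data=df).fit()

# Johnson-Neyman-style scan: conditional effect of group at each baseline value,
# with its standard error taken from the coefficient covariance matrix.
b, cov = model.params, model.cov_params()
grid = np.linspace(df["baseline"].min(), df["baseline"].max(), 50)
effect = b["group"] + b["group:baseline"] * grid
se = np.sqrt(cov.loc["group", "group"]
             + grid**2 * cov.loc["group:baseline", "group:baseline"]
             + 2 * grid * cov.loc["group", "group:baseline"])
t_crit = stats.t.ppf(0.975, df=model.df_resid)
significant = np.abs(effect / se) > t_crit
print("Group effect significant for baseline >=",
      grid[significant].min() if significant.any() else "no region")
```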
Blinding Given the characteristics of Sulphur water with particular smell, blinding of patients and therapists was not Discussion Our study has limitations, namely no blinding, impossible due to the smell associated to Sulphur waters, Table 3 and Figure 1 and Figure 2 show the values of the change scores of HAQ-DI and DAS-28 between each moment of evaluation and the baseline, with the respective 95% confidence interval (CI).used to detect damage at an earlier time point (especially in early RA) [24]. Whereas US allows a sensitive detection of the inflammatory soft tissue process, synovitis and tenosynovitis, it is not optimal for the detection of erosions.There is an acceptable agreement between US and Magnetic Resonance Imaging (MRI) for detection of bone erosion in patients with early RA but not conventional radiography (CR).US might be considered as a valuable tool for early detection of bone erosion especially when MRI is not available or affordable.At least in one study, US being recognized that is much more complex to achieve blinding in non-pharmacological trials [21,22].The small sample size is also a limitation, although other studies regarding Balneotherapy also included low number of patients. In 2010, Smolen, et al. highlighted the importance of synovitis detection in daily practice, and its prevention as one of the major targets of RA therapy [23].Furthermore Dougados, et al. stated the ability of synovitis to predict structural damage in RA [19]. EULAR recommendations pointed that US may be Regarding activity of disease according to DAS28 with erythrocyte sedimentation rate (ESR) or C-reactive protein (CRP) [32] we observed a non-significant improvement in thermal group, but it must be emphasized, however, that the patients of this group had a mean of disease activity at baseline worse than the control group and that at month 3 the difference between groups almost reach significance. Finally, we found significant statistical differences in quality of life, as evaluated by the HAQ-DI, in both moments of evaluation, more pronounced in month 3. We must stress that the patient lived in the real world, not in the Spa hotel facilities, so we highlight the findings at month 3, long time after the Balneotherapy. The comments of our patients raised the possibility that these quality of life evaluations by a rigid questionnaire didn't correspond entirely to their major worries.The same concerns were found in some papers focusing about standardized or individualized measures [33,34]. Patients with rheumatoid arthritis have much to say about their own experiences along the evolution of their disease.We only stratified the patients by age, but many other variables like gender, duration of disease, functional status, medication, can still interfere in the evolution of the disease conducting to different functional limitations [35] and to different coping of the disease. Patients didn't report any complications during the study, namely infectious diseases. Conclusion In patients with RA, where pain (physical and psychological) predominates, every gain is benefit, contributing to enhance quality of life.That's what Balneotherapy seems to have done to the patients in this study, translated by the well-being felt by the same patient along the time of the study, according to the self-reports of health-related behaviors. 
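Since disease activity is discussed above in terms of DAS28 with ESR or CRP, it may help to recall the commonly used DAS28(ESR) formula, which the paper itself does not restate; TJC28 and SJC28 are the 28-joint tender and swollen joint counts and GH is the patient's global health on a 0–100 mm VAS:

```latex
\[
\mathrm{DAS28\text{-}ESR} \;=\; 0.56\sqrt{\mathrm{TJC28}} \;+\; 0.28\sqrt{\mathrm{SJC28}} \;+\; 0.70\,\ln(\mathrm{ESR}) \;+\; 0.014\,\mathrm{GH}
\]
```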
More studies in RA, namely multicentre randomized controlled trials (RCT), with the same methodology, including subjective and objectives parameters of evaluation, should be carried out to validate non-drug interventions that are considered to have only marginal benefit.Moreover, US is a cheap modality to measure the response of RA small joint synovitis to Balneotherapy, provided that it is performed by the same radiologist specialist on muscle-skeletal US. seemed to be more reliable when the disease is more active [25]. Evaluation of pannus and the extent of vascularization within the joints of RA patients by high-resolution US might be helpful in the assessment of disease activity, and thus influence therapeutic strategies [26]. Carlo Orzincolo, et al. suggested in 1998 that conventional radiography remains the standard imaging technique for joint studies in the patients with suspected RA.US is recommended to diagnose soft tissue involvement (joint effusion).CT is very useful for showing abnormal processes in complex joints (sacroiliac and temporomandibular joints and craniocervical junction) which are difficult to depict completely with conventional radiography.MRI applications include the assessment of disease activity; in particular, this technique may be the only tool differentiating synovial fluid and inflammatory pannus [27]. Erosions represent a late stage in the disease process.One of the earliest detectable changes in patients with rheumatoid arthritis is proliferation of the synovium -the rheumatoid pannus.Both US and MRI are sensitive for the detection of synovitis, and both are superior to CR [28]. Owing to the better axial and lateral resolution of US, even minor bone surface abnormalities may be depicted.Thus destructive and/or reparative/hypertrophic changes on the bone surface may be seen before they are apparent on plain X-rays or even magnetic resonance imaging. US has a very powerful role in rheumatologic clinical practice and it is becoming the most frequently used imaging technique in evaluating patients with arthritis [29].Furthermore, US is the least expensive of the imaging procedures [30]. The "real time" capability of US allows dynamic assessment of joint and tendon movements, which can often aid the detection of structural abnormalities.Advantages of US include its non-invasiveness, portability, relative inexpensiveness, lack of ionizing radiation, and its ability be repeated as often as necessary, making it particularly useful for the monitoring of treatment.As US is the most operator dependent imaging modality, the experience and expertise of the examiner will determine the value of the diagnostic information obtained [31].That's why, in our study, the same radiologist physician dedicated to the muscle-skeletal field made all the evaluations (baseline and monitoring). The results we found in this study regarding US findings are puzzling.All patients of this study had right hand preference but we found different rheumatoid involvement between right and left hands in the thermal and control groups.That could be explained by different activities, asymmetrical rheumatoid lesions evolution, asymmetric severity of the disease; etc. Table 1 : Baseline characteristics of the study population. Table 2 : Proportion of individuals that change US findings from D0 to D21 and D0 to M3. a Conditional effect of Group on outcome scores at the mean value of the pre-test scores; * p < 0.05.
2018-10-21T10:39:45.677Z
2018-09-30T00:00:00.000
{ "year": 2018, "sha1": "b0ab864b26705a291b2cc1b03c729498d90327e6", "oa_license": "CCBY", "oa_url": "https://www.clinmedjournals.org/articles/jmdt/journal-of-musculoskeletal-disorders-and-treatment-jmdt-4-055.pdf?jid=jmdt", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b0ab864b26705a291b2cc1b03c729498d90327e6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
54956306
pes2o/s2orc
v3-fos-license
Inclusion Evolution Behavior of Ti-Mg Oxide Metallurgy Steel and Its Effect on a High Heat Input Welding We have studied here the evolution of inclusions in ladle furnace (LF), Ruhrstahl & Heraeus furnace (RH), and simulated welded samples during Ti-Mg oxide metallurgy treatment and the mechanical properties of the heat-affected zone (HAZ) after high heat input welding. The study indicated that inclusions in an LF furnace station are silicomanganate and MnS of size range ~0.8–1.0 μm. After Mg addition, fine Ti-Ca-Mg-O-MnS complex oxides were obtained, which were conducive to the nucleation of acicular ferrite (AF). The corresponding microstructure changed from ferrite side plate (FSP) and polygonal ferrite (PF) to AF, PF, and grain boundary ferrite (GBF). After a simulated welding thermal cycle of 200 kJ/cm, disordered arrangements of acicular ferrite plates, fine size cleavage facets, small inclusions, and dimples all promoted high impact toughness. Introduction During the steel-making process, clean steel is the main objective and inclusions are harmful to steel properties, such as toughness and strength [1,2]. However, with deep understanding, the concept of "oxide metallurgy" was proposed, where fine non-metallic inclusions are used to pin the grain boundaries and promote intragranular acicular ferrite (AF) transformation for enhancing the toughness of the heat-affected zone (HAZ). In order to realize the optimum efficiency of oxide metallurgy, the process of steel making needs to be carefully controlled [3,4]. For high-strength, low-alloy steels, many researchers found that certain types of fine oxide inclusions with a high melting point could increase the toughness of the HAZ after high heat input welding [5][6][7]. Ti-containing oxides are well-known to act as nucleation sites for acicular ferrite, which can divide large austenite grains into many finer and separate areas consisting of a fine-grained microstructure [8][9][10]. Zhu et al. [11] found that the toughness of the HAZ in Ti-bearing low-carbon steels was improved by adding 0.005 wt % Mg. Xu et al. [12] studied the effect of Mg content on the toughness and microstructure of the HAZ after high heat input welding and found that with the increase of Mg content from 0 to 0.0099 wt %, the major microstructure in the HAZ changed from ferrite side plate (FSP), upper bainite (Bu), and grain boundary ferrite (GBF) to AF, with the austenite size decreased from 437 µm to 122 µm. Miia et al. [13] reported the evolution during cooling of different types of Ti-oxide in C-Mn-Cr steel. Wang et al. [5] investigated the transformation behavior in Ti-Zr deoxidized steel. Besides the composition of inclusions, size is also an important factor: tough, large inclusions are harmful to mechanical properties. Entrapped inclusions can lead to internal cracks, blisters, and slivers in the rolled plates or during subsequent working operations [14]. Although a number of studies have investigated the properties of the HAZ, few studies have focused on the evolution behavior during the steel-making process. In this study, we present the composition, morphology, average size, and number density of inclusions in Ti-Mg-treated EH420 ship-building steel. The properties of the HAZ after thermal welding simulations are also explored.
Materials and Methods The chemical composition of plain EH420 and EH420-Mg with Ti-Mg treatment is shown in Table 1.The production steps were: Si-Mn pre-deoxidation → ladle furnace (LF) refining → vacuum treatment in Ruhrstahl & Heraeus furnace (RH) → continuous casting.For EH420-Mg steel, in the step of LF refining, Ti-Fe, Ni-Mg alloy, and Nb-Fe were sequentially added.EH420 steel was used only for toughness comparison after the thermal welding simulation.The sampling positions of EH420-Mg were the LF furnace station, Ti-Mg treatment in the LF furnace, and vacuum treatment in the RH furnace, respectively.After casting, the ingots were both reheated to 1100 • C for 2 h and rolled into 30-mm plate by thermo-mechanically controlled processing (TMCP) with a cooling rate of 32 In order to compare the ability of AF nucleation by different types of inclusions and the change in microstructure from the LF furnace to the RH furnace, samples from a steel shop were machined to 3 mm in diameter and 10 mm in length for a continuous cooling transformation with Formastor-FII full-automatic transformation equipment.The specimens were heated to 1250 • C and held for 3 min then cooled at a rate of 20 • C/s to 570 • C and 5 • C/s to room temperature.To simulate welding, specimens from the EH420 and EH420-Mg were cut from the hot-rolled steel plate and machined into 11 × 11 × 55 mm 3 for HAZ simulation using an MMS 300 machine (RAL, NEU, Shenyang, China), Rykalin 2D equipped with a welding software package.The peak temperature was 1400 • C with a heating rate of 100 • C/s and held for 2 s.The targeted heat input was estimated to be 200 kJ/cm.After the welding simulation, the specimens were machined to dimensions of 10 × 10 × 55 mm 3 for a Charpy v-notch impact test at −20 • C. After the test by a dilatometer and the thermal welding simulation, samples of EH420-Mg were etched with 4% nital and their microstructure was observed by optical microscope.The fracture surface of the impact test was examined using a scanning electron microscope (SEM), and the inclusions of each sample were analyzed via SEM equipped with an inclusion automatic analysis system and an energy dispersive spectrometer (EDS).In order to ensure the reliability of the EDS analysis, each sample was scanned and a large area (1.217 mm 2 ) for EDS analysis at high magnification (3000×) was used.On the other hand, the EDS analysis was employed to exclude the possibility that pores and blots would be misunderstood as inclusions. Evolution of Inclusions in EH420-Mg The shape and composition of inclusions (atomic percentage) of each position are shown in Figure 1.The inclusions were mainly spherical.In Figure 1a (LF furnace station), it can be seen that the content of Mn was higher than that of S, which suggested that Mn did not exist in the form of a sulfide and consisted of silicomanganate and MnS.After Ti-Mg treatment in the LF furnace, the inclusions were Ti-Mg-O complex oxides and MnS precipitates.Nucleation of AF plates induced by inclusions was found, but individual MgO inclusions were not observed; Chai et al. 
[15] reported that MgO can be observed only when the content of Mg is ~60 ppm. On the other hand, Mg-containing inclusions easily float and the gasification of Mg and TiOx reduced by Mg can also lower the content. In the sample of vacuum treatment in the RH furnace, the inclusions in the core were mainly Ti-Ca-Mg-O because a reducing slag containing CaO was formed before the RH furnace and reduction reaction. Thus, the content of Ti and Mg was decreased. Figure 1d shows inclusions consisting of Ti-Ca-Mg-O oxides in the core and MnS precipitation on the surface in the welded sample. After continuous casting, the solidification structure in Fe-10% Ni alloy deoxidized by Mg was studied [16] and the results showed that columnar dendrites grew from grain boundaries into a columnar dendrite zone and the interdendritic spacing was decreased. Holappa et al.
[17] observed that Mn and S were affluent in an interdendritic melt.Thus, Mg addition decreases interdendritic spacing and increases the segregation of Mn and S to promote precipitation of MnS.However, during the heating process of the thermal welding simulation, the solubility of Mn and S was increased and showed a favorable diffusivity [18].A inclusion with a large interfacial area can promote multiple nucleations of MnS on its surface [19].In addition, it was reported that Ti-Mg-containing oxides can partition Mn into inclusions according to the results of first principle calculation [20].Hence, individual MnS inclusions were barely observed and precipitated on the surface of Ti-Ca-Mg-O oxides. The statistical result of size distribution and number density (an intensive quantity used to describe the degree of concentration of particles in 1 mm 2 ) in samples is shown in Figure 2. The number density of inclusions was 1508, 1320, 874, and 591 /mm 2 , and the average size was 0.83, 0.6, 0.47, and 1.67 µm, respectively.In Figure 1a, the size of inclusions was mainly in the range of 0.8-1.0µm.In this range, due to the high oxygen content, some coarse inclusions of 2.0-5.0 µm were observed.After Ti-Mg treatment in the LF furnace, the number density of coarse inclusions was decreased due to collisions and a number of fine inclusions containing Mg were formed. The formation of inclusions can be divided into three stages: nucleation, growth, and Ostwald ripening [14].In the beginning, inclusions mainly depend on the degree of supersaturation (S), the interfacial energy between inclusions and molten steel (γ), and the concentration product of the deoxidation equilibrium.The concentration product between Ti/Mg and O increases with time and the inclusions start to nucleate when critical supersaturation (CS) is achieved.Subsequently, S will decrease with the reaction of deoxidation and the process of nucleation ends when CS is again achieved.Prior to the balance of S = 1, the diffusion of particles will cease and Ostwald ripening occurs.If the value of S is lower and γ is higher, it is difficult for inclusions to nucleate and the areal density decreases.However, after feeding Ni-Mg alloy wires, the content of Mg and O in the LF furnace was 0.01% and 0.006%, respectively, and S [Mg][O] was high enough at 1600 • C to form fine and dispersed inclusions.Thus, this is the most important aspect of oxide metallurgy. Vacuum treatment in an RH furnace reduces the number density of inclusions, but on the addition of Ca-containing cored wires, fine inclusions were formed, and the average size was refined to 0.47 µm.During the process of steel making, without considering the effect of collision and diffusion, inclusions grow following the theory of Lifshitz, Slyozov, and Wagner (LSW) [21,22].At time t, the radius of an inclusion is related to the oxygen concentration and time, and because there is enough time for inclusions to grow, the oxygen concentration needs to be controlled to prevent coarsening.After the thermal welding simulation, the size distribution of inclusions in the EH420-Mg experienced a significant change.At a high temperature, oxides are partially dissolved and partial ripening occurs such that the number of inclusions of size range ~1-5 µm increases and those of less than 0.4 µm disappear. 
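The growth stage invoked above follows the Lifshitz–Slyozov–Wagner (LSW) coarsening law; as a hedged reminder of its standard form (not restated in the paper itself), the mean inclusion radius at time t obeys

```latex
\[
\bar{r}^{3}(t) - \bar{r}_{0}^{3} = K\,t, \qquad K \;\propto\; \frac{D\,C_{\mathrm{O}}\,\sigma\,V_{m}^{2}}{R\,T},
\]
```

where D is the diffusivity of the rate-controlling solute (here dissolved oxygen), C_O its equilibrium concentration, σ the inclusion–melt interfacial energy, and V_m the molar volume of the oxide. This is why the text emphasizes keeping the dissolved oxygen content low: a smaller C_O lowers K and suppresses coarsening during the long holding times of steel making.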
Evolution of Microstructure in EH420-Mg

The microstructure at every sampling position from the steel shop, after a thermal cycle applied in a dilatometer, was observed and is shown in Figure 3. The microstructure at the LF furnace station was mainly composed of FSP and polygonal ferrite (PF), as shown in Figure 3a. It can be seen that the oxides produced by Si-Mn deoxidation cannot induce the nucleation of AF. Figure 3b shows the microstructure after Ti-Mg treatment in the LF furnace; it consisted of GBF, PF, and AF. FSP was also observed, but its volume fraction was lower. These results indicate that the (Ti-Mg-O) oxides introduced by oxide metallurgy in the LF furnace can provide effective nucleation sites for AF.
Figure 3c shows the microstructure after vacuum treatment in the RH furnace. Compared with the sample from the LF furnace, an Nb-Fe alloy had been added in the LF furnace, and Nb can enhance the hardenability of steel [23]. On the other hand, a higher number density of Ca-containing inclusions was formed and the formation of AF was promoted [24]. The microstructure was composed of AF, PF, and B. In Figure 3d, the microstructure is composed of a high volume fraction of AF inside the grains and small blocks of GBF and PF, and the grain size was ~170 µm. The addition of Mg effectively pinned the growth of grains during the welding thermal cycle.

There are several mechanisms that explain the nucleation of AF: (1) solute depletion in the vicinity of non-metallic inclusions [25-28]; (2) thermal strain energy due to a difference in thermal contraction [29]; (3) reduced interfacial energy between austenite and ferrite [30]; and (4) provision of an inert surface [31]. The most widely accepted explanation is that an Mn-depleted zone (MDZ) can be formed around Ti-containing oxides. An MDZ forms because of the difference between the diffusivity of Mn in austenite and the solubility of Mn in Ti2O3, which lowers the content of Mn around the inclusion compared to the matrix. Thus, AF nucleation was promoted during the γ → α phase transformation.

Besides the chemistry of inclusions, size is an important factor in oxide metallurgy. The results of Lee et al.
[31] showed that the ability for AF nucleation increased with inclusion size up to 1 µm. Thus, there is a need to control the size of inclusions. With the treatment of Ti and Mg, a large number of inclusions are generated. When the peak temperature is 1400 °C, inclusions such as TiN will coarsen and dissolve [32] in austenite; hence, the pinning effect is weakened, which leads to the growth of grains. In order to predict the size of austenite grains, Zener's model in Equation (1) [33] expresses the pinning effect of particles on the movement of grain boundaries:

R = A·r/f (1)

where R is the radius of a grain; A is a constant depending on the geometry and force balance; r is the radius of the second-phase particles; and f is the volume fraction of second-phase particles.

According to Equation (1), particles of fine size with a high volume fraction can restrict the growth of grains effectively. Because Mg is a strong deoxidizing element, even a small addition to the steel can form many oxide inclusions, and these tend to be dispersed [34]. Thus, Mg-containing oxides can inhibit the growth of grains and induce the nucleation of AF.

Effect of Inclusions on the HAZ after High Heat Input Welding

In order to compare the mechanical properties of the HAZ after Ti-Mg treatment with those of plain EH420, Charpy impact tests were carried out. Table 2 shows the impact toughness values and the area fractions of the fracture surface. The average impact toughness at −20 °C of specimens EH420 and EH420-Mg was 168 and 262 J, respectively. The EH420-Mg steel showed excellent HAZ toughness because the fibrous zone and the shear lip zone were 44.3% and 33.5%, respectively. Figure 4 shows the SEM images and an optical micrograph of the microstructure adjacent to the fracture surface after a 200 kJ/cm simulated HAZ thermal cycle for the two steels. As shown in Figure 4a,b, the cleavage plane was rough and the cleavage plane size was related to the size of the ferrite packet. A large cleavage plane provides a path for cleavage cracking, resulting in a decrease in the toughness of the HAZ. Figure 4c shows that the fracture surface was composed of brittle cleavage and a ductile fracture. Compared with plain EH420, the size of the cleavage facets was small (Figure 4d). It is known that the cleavage plane size is related to grain size, and fine cleavage facets can cause frequent crack deflection; thus, the driving force for crack propagation is decreased and the toughness is enhanced.
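To make the pinning estimate of Equation (1) above concrete, the sketch below evaluates the limiting grain radius R = A·r/f for two hypothetical particle populations. The geometric constant A = 4/3 and the particle radii and volume fractions are illustrative assumptions, not values measured in this work; the point is only that fine, numerous particles give a much smaller pinned grain size.

```python
# Sketch of the Zener pinning estimate in Equation (1): R = A * r / f.
# A, r, and f below are assumed illustrative values, not study data.
def zener_limited_grain_radius(particle_radius_um, volume_fraction, geometric_constant=4.0/3.0):
    """Limiting (pinned) grain radius, in the same length unit as the particle radius."""
    return geometric_constant * particle_radius_um / volume_fraction

# Halving the particle radius while doubling the volume fraction cuts the
# pinned grain radius by a factor of four.
for r_um, f in [(1.0, 0.002), (0.5, 0.004)]:
    R_um = zener_limited_grain_radius(r_um, f)
    print(f"r = {r_um} um, f = {f:g}  ->  pinned grain radius ~ {R_um:.0f} um")
```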
In fact, the crystallographic packet [35], defined as a group of adjacent ferrite laths with a crystallographic misorientation below the critical angle of 15°, is the real microstructural unit that controls the propagation of cleavage cracks [36,37]. When the misorientation angle of the crystallographic packet boundaries is 15° or higher, the packet size is equivalent to the dimensions of the cleavage plane, and fine-grained interlocking plates can divide the crack path or deflect it frequently, preventing the propagation of cracks.

The ductile fracture zone is shown in Figure 4e; it made a prominent contribution to the impact toughness. Spherical inclusions were embedded in the dimples, and inclusions of appropriate diameter can reduce the stress concentration [11]. During the plastic deformation process, the randomly distributed ferrite plates or grains were deformed and rotated to form sheaves with a similar orientation. It is inferred that the ductile fracture occurred as follows: the large dimples are related to inclusions and are influenced by the size of the inclusions, whereas small dimples are generated by fine inclusions but are more likely to be formed by the tearing of deformed ferrite sheaves. During the crack nucleation and growth process, inclusions were first separated from the surrounding matrix to form large dimples. Subsequently, the ferrite sheaves in the matrix around the dimples experienced axial tension, the final fracture occurred under the stress concentration, and dense fine dimples were observed in the cross section of several ferrite plates [5]. Figure 4e also shows that the disordered lath arrangement of AF acted as an obstacle to the propagation of cleavage cracks. Fine cleavage facets and a dimpled ductile fracture contributed to the high impact energy.

Conclusions

1. The composition of the inclusion cores changed from silicomanganate and MnS (LF furnace station) to Ti-Mg-O (Ti-Mg treatment in the LF furnace) and finally to Ti-Ca-Mg-O in EH420-Mg. The corresponding average size of the inclusions decreased from 0.83 µm to 0.47 µm, and the number density decreased from 1508 /mm² to 874 /mm². After the thermal welding simulation, the inclusions smaller than 0.4 µm disappeared.

2. For EH420-Mg steel, after a continuous cooling transformation in a dilatometer, the microstructure of the LF furnace station sample was composed of FSP and PF; this changed to AF, PF, and B after vacuum treatment in the RH furnace, while in the welded sample the microstructure was composed of AF, GBF, and PF. Ti-Mg-containing oxides promoted the nucleation of AF.
3. An interlocking AF microstructure acts as an obstacle to the propagation of cleavage cracks. Fine cleavage facets, small inclusions, and dense dimples contributed to the high impact toughness of the Ti-Mg oxide metallurgy steel.

Figure 1. Evolution of typical inclusions in terms of shape and composition: (a) ladle furnace (LF) station; (b) Ti-Mg treatment in the LF furnace; (c) vacuum treatment in the Ruhrstahl & Heraeus (RH) furnace; (d) after the thermal welding simulation.

Figure 2. Size distribution, average size, and number density of inclusions in EH420-Mg: (a) LF furnace station; (b) Ti-Mg treatment in the LF furnace; (c) vacuum treatment in the RH furnace; (d) after the thermal welding simulation.

Figure 4. SEM images of the fracture surface and the microstructure adjacent to the fracture surface in (a,b) plain EH420 and (c-f) EH420-Mg.

Table 1. Chemical composition of the experimental steel in wt %.

Table 2. Statistical analysis of the fracture surface and impact toughness of the HAZ at −20 °C.
2018-12-16T19:26:33.797Z
2018-07-11T00:00:00.000
{ "year": 2018, "sha1": "eab3330cffe89759374ac5ffd7d2a8f880e1e768", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4701/8/7/534/pdf?version=1531295486", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "eab3330cffe89759374ac5ffd7d2a8f880e1e768", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
62815120
pes2o/s2orc
v3-fos-license
Physiological and Physical Impact of Noise Pollution on the Environment

Environmental pollution is a major global problem that affects the health of humans, animals, and ecosystems. This paper provides a brief overview of noise as a form of environmental pollution, with emphasis on the diseases and problems it causes in humans and other living organisms. The review finds that noise pollution seriously affects not only human health but also biodiversity. There is still time for international institutions, local bodies, and governments to use the resources available to them to restore balance to the environment. With science and technology advancing at an unprecedented pace, the urban centers of the world have grown not only in size but also in terms of living conditions. This has brought new awareness of noise pollution, which is part of our day-to-day lives, supported by studies that trace the damage caused by noise from natural as well as anthropogenic sources, especially traffic. Noise affects the physical, mental, emotional, and psychological well-being of all individuals, whether human beings or animals. It is a potential threat to the requirements of sound living conditions and needs to be checked at the judicial level.

Introduction

Sound is a mechanical vibration produced in an elastic medium (such as air or water) that creates a pressure wave moving through the particles of the medium; it can be perceived by a person or detected by equipment. Sound is defined by its characteristics: it is a mechanical vibration characterized by a combination of pressure (Pascal, Pa) and frequency (Hertz, Hz). Frequency or pitch is the number of cycles per second (Hz or kHz), and intensity or loudness is the level of sound pressure, measured in Pascals (Pa) or decibels (dB); the average intensity of human speech is about 50 dB. Decibels are used to express sound conveniently on a compressed, logarithmic scale. Noise is unwanted or undesired sound, for example, the sound produced by a machine or an airplane. Noise pollution can come from many sources, from computers, traffic, televisions, human conversation, and barking dogs to heavier machinery such as large trucks, airplanes, and industrial equipment. Noise affects work efficiency both directly and indirectly (Singh and Davar, 2004). The Occupational Safety and Health Administration (OSHA) advises hearing protection in workplaces where noise exceeds 85 dB over an eight-hour shift or where there is a potential for permanent hearing loss (Griffiths and Langdon, 1968). Typical sources of noise are indicated in a chart taken from an article published in American Family Physician in 2001 (Blessing, 2008).

Objective

To […] (Goines and Hagler, 2007). About 60% of the population of Europe is affected by traffic noise (Singh and Davar, 2004). Improper use of horns by traffic and the wide use of loudspeakers in Indian religious and social ceremonies pose health risks to urban people (Ritovska et al., 2004). Vehicular traffic is a source of noise pollution around the globe, especially in large urban cities, and the situation is becoming seriously alarming with the increase in traffic density on city roads. The smoke from cars and traffic is also of great concern for the changing climate of individual countries and of the world in general (Niemann et al., 2006).
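To make the logarithmic decibel scale described above concrete, the short sketch below converts a sound pressure into a sound pressure level relative to the usual airborne reference of 20 µPa. The two pressure values are illustrative assumptions chosen to land near the roughly 50 dB conversational-speech level and the 85 dB occupational threshold quoted in the text, not measurements.

```python
# Sketch of the sound pressure level (SPL) formula: L_p = 20*log10(p / p_ref).
# The pressures below are illustrative, not measured values.
import math

def sound_pressure_level_db(pressure_pa, p_ref_pa=20e-6):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * math.log10(pressure_pa / p_ref_pa)

print(round(sound_pressure_level_db(6.3e-3)))  # ~50 dB, comparable to average speech
print(round(sound_pressure_level_db(0.36)))    # ~85 dB, the occupational action level cited above
```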
Medical science devotes a great deal of time and effort to treating the resulting health hazards, and health has become a central concern because, with the growing number of factories and the growth of transport around the globe, the risk from noise has increased. Mainly in urban areas throughout the world, this problem grows day by day as large industries, construction, heavy traffic, and recreational areas become the main sources of noise production. Many countries, developing countries in particular, have established noise control standards that are followed and implemented to protect their people (Zannin and Bunn, 2014). Industrial noise is a major problem at industrial sites in Pakistan. Noise-related frustration among workers in textile industries exposed to noise is a major occupational problem, and the relevant criteria are based on annoyance rather than on hearing damage (Regecová and Kellerová, 1995). Noise is one of the main pollutants in the urban areas of Pakistan and has adverse effects on human health and the community, whether the exposure is long term or short term. Exposure can decrease work efficiency and output, cause hearing loss, and produce feelings of irritation. It is estimated that people working in noisy environments show many problems: heart problems, workplace accidents, irritation, headaches, respiratory problems, nervous disorders, and many other physiological issues (McCarthy, 2004). Noise pollution is associated with an increase in cardiovascular disease; these effects are destructive "fight or flight" responses of the body, leading to chronic autonomic nervous and endocrine effects when noise exceeds 65 dB and to acute effects above 80 to 85 dB (Fritschi et al., 2011). Unfortunately, there is evidence that young people are also at risk. In one study performed in 1995, heart rate and blood pressure were measured in 1,542 children aged 3-7 years living in areas where traffic noise was greater than 60 dB. The study showed that these children had greater mean diastolic and systolic blood pressure and a higher heart rate than children in quiet areas, and blood pressures above the 95th percentile were recorded (Regecová and Kellerová, 1995). Noise from railway traffic has been estimated in a large Latin American city by measuring the level of noise from trains passing through industrial and residential areas. Noise maps were also produced to show the noise pollution generated by train traffic, and the annoyance of the community and residential areas affected by railway noise was evaluated through interviews. The noise levels produced by a moving train sounding its horn were clearly above the daytime equivalent sound level limit of Leq = 55 dB(A) set by municipal law No. 10.625 of the city of Curitiba; the night-time limit of Leq = 45 dB(A) was likewise exceeded while trains were moving. Residents reported feeling disturbed by the noise generated by passing trains, which causes health problems, and 88% of them described it as distressing. The study also showed that the majority of residents (69%) believe that train noise can lower the value of their property (Shahid and Bashir, 2013).
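For readers unfamiliar with how an equivalent continuous level such as the Leq = 55 dB(A) limit cited above is obtained, the sketch below energy-averages a series of short-interval sound-level samples. The sampled values are hypothetical and only illustrate why a few loud train passages dominate the hourly Leq.

```python
# Sketch of the equivalent continuous sound level Leq for equally spaced samples:
# levels are averaged on an energy basis, not arithmetically. Sample values are assumed.
import math

def leq_db(levels_db):
    """Equivalent continuous sound level (dB) of equally spaced level samples."""
    mean_energy = sum(10 ** (L / 10.0) for L in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_energy)

# A mostly quiet hour punctuated by five one-minute train passages at 85 dB:
samples = [48] * 55 + [85] * 5          # one-minute samples over an hour
print(f"Leq = {leq_db(samples):.1f} dB")  # ~74 dB, well above a 55 dB limit
```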
Noise pollution in aquatic niches has become an increasing problem for policy makers and conservationists. In 1972 the U.S. federal government enacted the Marine Mammal Protection Act, a law intended to ensure that marine mammals are not harmed by human activities. However, noise pollution is not clearly addressed under the Marine Mammal Protection Act and is nearly impossible to regulate under it, yet noise pollution in the marine environment and the oceans is a real danger to the continued existence of many marine mammals. The Act cannot deal broadly with the source levels that cause pollution or with the total amount of noise produced in a given area. Industry should commission environmentally responsible impact studies, because in the long term these may benefit projects more than hinder them. Government, industry, and institutes can help by providing funding to research institutions to develop methods of monitoring the noise caused by seismic blasts. It may be possible to reduce the destructive effects of seismic blasting and help to protect marine species while still allowing the economic development of underwater areas (Khan et al., 2010). Sound thermometry is a method of researching and monitoring global warming: sound waves travel faster in warm water than in cold water, so scientists transmit sound between two points and use it to estimate the average temperature along that path (Jasny, 1999). Scientists in San Diego began such research by placing one transmitter off the coast of Hawaii and another off the coast of California, transmitting 195 dB of sound across the Pacific Ocean; the sound energy spreads out at depth throughout the ocean and becomes much quieter at greater distances. At 1,000 km from the source, the intensity is about one-millionth of that at one meter from the source. The problem is not so much long exposure to the noise as exposure close to sensitive marine environments. Although little research has been conducted to estimate the risk of sound thermometry to marine species, and the results so far are encouraging, researchers should not abandon such studies prematurely but should strive to answer the basic questions about its long-term effects on marine species, mainly how this type of monitoring will affect habitats. Decreasing the transmission cycles and the decibel levels even further could help reduce the risks to marine mammals (Blessing, 2008). Noise pollution is a type of energy pollution in which distracting, clearly audible sounds disturb natural processes or cause human harm. Consequently, noise is unwanted sound, and what is pleasant to some ears may be extremely unpleasant to others, depending on a number of psychological factors (Rabinowitz, 2000). Noise pollution is one of the environmental hazards affecting humans as well as the climate. In most urban areas of developing countries there are many noise pollutants, including exhaust noise from cars and noise from industrial and household generating plants. In advanced countries, scientific activities such as launching rockets and satellites and detonating bombs constitute a major environmental noise source. Human beings, animals, plants, and even inert objects such as buildings and bridges have been victims of the increasing noise pollution in the world. Noise has become a very significant stress factor in the environment, to the extent that the term noise pollution is used to signify a hazard whose consequences for modern development are immeasurable (Blessing, 2008).
Household equipment such as vacuum cleaners, mixers, and other kitchen appliances are the noisemakers of the house. Although they do not cause too many problems, the effect of the noise they emit on human health cannot be neglected. Furthermore, neighbourhood noise can be generated by neighbouring apartments and by activity within one's own apartment. The Federal Environmental Protection Agency Act defines the environment broadly to include air, water, soil, all plants, all layers of the atmosphere, living human beings and animals, organic and inorganic substances, and their interactions. The environment is the totality of living and non-living things and surroundings in which we carry out our cultural, religious, political, and socio-economic activities for ourselves and to strengthen our communities, societies, and nations. Human beings live in an environment throughout their lives and depend mainly on it, and they suffer once that environment becomes polluted. Environmental hazard, on the other hand, refers to the contamination of the surroundings by chemical, biological, or physical agents that are lethal to human, animal, or plant life; the general environment may be disturbed by natural events as well as by industrial and human activities. Pollution is a 'man-made or man-aided alteration of the chemical, physical or biological quality of the environment to the extent that is detrimental to that environment or beyond acceptable limits' (Shahid and Bashir, 2013).

Conclusions

Children are a subgroup that is more sensitive to noise: children younger than five years have problems with reading and comprehension, and their studies are affected by continuous exposure to noise, so schools, colleges, and universities should be located away from busy and noisy areas. Noise above 30 dB also disturbs sleep and causes stress and hypertension, so there should be strong implementation of laws and enforcement of standards. Noise contributes to social disturbance, increases the crime rate, and has a negative impact on the environment. Noise also causes heart problems, nervous system disorders, respiratory problems, blood pressure problems, and other physical health problems.
Industrial workers should use personal protective measures while doing their work; in fact, noise pollution is becoming a major issue in developed as well as developing countries, so positive steps must be taken by individuals, communities, policy makers, and governments to avoid and control this hazard. Another study included 2,000 heart attack patients who were compared with over 2,000 control patients from 32 hospitals in Berlin between 1998 and 2001. The traffic noise level was determined for each patient based on noise maps of the city, and uniform interviews were conducted to address possible confounding factors among the noise sources. The results of the study support the hypothesis that chronic exposure to traffic noise increases the risk of heart problems through increased cardiovascular risk conditions such as stress: men exposed to daytime sound levels of around 70 dB(A) showed a higher risk of heart attack than those who lived in streets where the sound level was less than 60 dB (Öhrström et al., 1979). Noise levels have also been linked with negative responses such as increased agitation, excitement, anger, and distraction, and may trigger negative social and behavioural responses; the level of annoyance depends on the noise, its type, and the time at which it occurs (Getzner and Zak, 2012). Another study investigated the difficulties faced by workers, and the risks to them, while working in textile-based industries, and provides the necessary pointers for removing those risks in a systematic way. It was found that a large number of machines (looms) were operated by a single worker. Different noise levels were estimated using a digital sound level meter to measure the noise level per unit and to indicate health hazards such as respiratory problems, hearing loss, irritation, heart and blood pressure problems, annoyance, and headache. The minimum noise recorded was 101.6 dB and the maximum 109.8 dB relative to OSHA and WHO (World Health Organization) standards, and the results of this study show that the high intensity of noise produces mental and physical problems among the workers (Zannin and Bunn, 2014). Noise pollution in city environments is generated from many different sources, e.g., loud music, sirens, car and home alarms, neighbours, motorcycles, horns, trucks, cars, public buses, trains, and planes (Maschke et al., 2006).
Many sections of the community are affected by noise, which is particularly generated by traffic. Traffic noise from road, railway, and air traffic causes uneasiness and frustration, especially during activities that require concentration and attention. The study of noise and air pollution in Geneva was carried out by Baranzini and Ramirez (2005). Three different aspects are included in the monitoring: statistical and geographical information data, results for airport noise compared between public and private sector tenants, and day (Ld measure) and day-evening-night (Lden measure) noise levels. In the private rental sector, the property price effect is 6.6% per additional 10 dB(A), while in the public sector it is 8% lower, and around the airport an effect of 12% per additional 10 dB(A) is observed; at 1% per additional dB, the impact of airport noise is relatively higher than the 0.7% of other noise sources (Haines et al., 2001).
2018-12-21T03:55:04.601Z
2017-01-09T00:00:00.000
{ "year": 2017, "sha1": "c0fb8e0224e560d8ebb259bba70f9f23de8a6cc4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.26480/esp.01.2017.08.10", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "c0fb8e0224e560d8ebb259bba70f9f23de8a6cc4", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
15479361
pes2o/s2orc
v3-fos-license
Parental risk factors for the development of pediatric acute and chronic postsurgical pain: a longitudinal study Background The goal of this longitudinal study was to examine the associations among psychological factors and pain reports of children and their parents over the 12 month period after pediatric surgery. Materials and methods Included in the study were 83 children aged 8–18 years undergoing major surgery. In each case, the child and one of their parents completed measures of pain intensity and unpleasantness, psychological function, and functional disability at 48–72 hours, 2 weeks (child only), 6 months, and 12 months after surgery. Results The strength of the correlation coefficients between the psychological measures of the parent and their child increased significantly over time. There was a fair level of agreement between parent ratings of child acute and chronic pain (6 months after surgery) and the child’s actual ratings. Parent and child pain anxiety scores 48–72 hours after surgery interacted significantly to predict pain intensity, pain unpleasantness, and functional disability levels 2 weeks after discharge from hospital. Parent pain catastrophizing scores 48–72 hours after surgery predicted child pain intensity reports 12 months later. Conclusion These results raise the possibility that as time from surgery increases, parents exert greater and greater influence over the pain response of their children, so that by 12 months postsurgery mark, parent pain catastrophizing (measured in the days after surgery) is the main risk factor for the development of postsurgical pain chronicity. Introduction Research on the role of psychological and social factors in pain perception has increased exponentially since the publication of the Gate Control Theory 1 and biopsychosocial models of pain. 2,3 Recently, factors relevant to pain perception have become central components of several pain models including the integrative model of parent and family factors in pediatric chronic pain and associated disability 4 and the pediatric fearavoidance model of chronic pain. 5 The former model 4 proposes that three interrelated levels of factors influence pediatric pain and disability: (1) individual factors (eg, parent behaviors such as solicitousness, parent reinforcement, parenting style), (2) parentchild interactions, and (3) family-related variables (eg, familial environment). In addition, several mediators and/or moderators of the relationships between pain, disability, and the three interrelated levels of factors mentioned above are proposed (eg, sex, age, developmental stage, coping, family history, and emotional symptoms). The latter model 5 recognizes the bidirectional relationship between parent and child factors as contributing to the initiation and maintenance of the pain experience. The ways in which parents react to their children's pain (including protective and solicitous behaviors and parents' psychological responses) also influence their children's behaviors and psychological responses to pain. As highlighted in the pediatric fear-avoidance model of chronic pain 5 , children's thoughts and beliefs related to the pain experience are shaped over time initially through interaction with their parents. 4,6 Moreover, pain experiences affect one's empathic responses to the pain of others, 7,8 and this is particularly true of the parent-child relationship. 
7 Based on prior pain experiences and current beliefs and thoughts about pain, a parent might interpret his/her child's pain as threatening. This interpretation will likely lead to a higher estimation of the child's pain and an increased level of parental distress; 9-11 these factors will in turn affect the child's pain behaviors and expressions [12][13][14] and pain-related functional disability. 9,10,15 Understanding the transition from acute to chronic postsurgical pain (CPSP) in children would be enhanced by addressing these critically important parental influences on children's pain experiences. Research has shown that (1) parents are affected by their child's experience of a long-term condition or hospitalization; 16,17 (2) parents of hospitalized children report feelings of anxiety, fear, guilt, a sense of lack of control, and distress; 18 (3) high levels of parent anxiety prior to their child's surgery are associated with higher levels of child anxiety, 19 and parent postoperative anxiety correlates strongly with child postoperative anxiety; 20 (4) some parental behaviors intended to reduce a child's pain, such as reassurance, have been shown to increase the child's distress level; 21 and (5) parent distraction is associated with fewer child activity restrictions, whereas parent protective behaviors are associated with more child activity restrictions among children with juvenile idiopathic arthritis. 22 Taken together, these studies show that parents are significantly affected by their children's pain experience; at the same time, parents also influence their children's response to pain. Nevertheless, it remains unclear how parent and child pain-related psychological variables and pain reports are related and how/whether this relationship evolves over time as acute postoperative pain becomes chronic. The objectives of this study were to (1) examine the correlations among child and parent pain-related psychological factors as well as the agreement between child and parent pain reports over the 12 months after pediatric surgery, (2) identify parent pain-related psychological risk factors associated with child acute postsurgical pain 48-72 hours after surgery, and (3) identify parent pain-related psychological risk factors that predict pediatric CPSP 6 months and 12 months after surgery. Within the context of this study, CPSP is defined as the presence of pain of a moderate to severe intensity (average pain score of $4 or higher out of 10 on the numeric rating scale [NRS]) 6 months and/or 12 months after surgery. 23 Children who reported experiencing no pain or mild pain (pain intensity score of #3 on the NRS) were classified into the no/mild CPSP group. Child and parent pain-related psychological constructs examined in this study include pain anxiety, pain catastrophizing, and anxiety sensitivity. Pain anxiety refers to cognitive, physiological, behavioral, and fear dimensions of anxiety that are associated with current or anticipated pain experience. 24,25 Pain catastrophizing refers to cognitive and fear (rumination, helplessness, magnification) responses associated with actual or anticipated pain experience. 26 Anxiety sensitivity refers to the fearful interpretation of anxiety symptoms due to the belief that they might lead to potentially harmful or negative consequences. 27 Recent studies have shown that although these constructs are related, there is evidence that they contribute uniquely to the explanation of chronic pain disability after adult surgery. 
28 These constructs were selected because they have been shown to be associated with child and adolescent pain severity and pain-related disability, 29-31 and they are central components of empirically validated models of chronic pain such as the diathesis-stress model of chronic pain and disability 32,33 and the cognitive-behavioral fear-avoidance model of chronic pain. 34,35

Participants and recruitment

Children between the ages of 8-18 years who underwent either general surgical (thoracotomy, thoracoabdominal surgery, Nuss/Ravitch procedure, sternotomy, laparotomy, ostomy) or orthopedic (scoliosis, osteotomy, plate insertion tibia/femur, open hip reduction, hip capsulorrhaphy) procedures and one of their parents were eligible to participate in this study. Exclusion criteria included developmental or cognitive delay, being nonverbal, having cancer, or having a congenital insensitivity to pain. Inclusion criteria, other than age and surgery type, included both child and parent being fluent in written and spoken English.

Questionnaires

Child measures

The Child Pain Anxiety Symptoms Scale (CPASS) 36 is a 20-item child version of the adult PASS-20. 37 Children rate the extent to which they think, act, or feel in relation to each item on a scale from 0 ("never think, act, or feel that way") to 5 ("always think, act, or feel that way"). Total score ranges from 0-100, with higher scores indicating higher levels of pain anxiety. The CPASS consists of four subscales: cognitive, escape/avoidance, fear, and physiological anxiety. The CPASS showed excellent internal consistency (α = 0.90) in a community sample of children 36 as well as in the present sample (α = 0.92-0.96). 38 The construct and discriminative validity of the CPASS are adequate, as evidenced by greater correlations between the CPASS and pain catastrophizing (r = 0.63) and anxiety sensitivity (r = 0.60) than with general anxiety (r = 0.44). The CPASS was significantly associated with the frequency of pain reports in children. 36

The Childhood Anxiety Sensitivity Index (CASI) 39 is an 18-item scale that measures the extent to which participants interpret anxiety-related symptoms (eg, increased heart rate, feeling nauseated) as indicators of potentially harmful somatic, psychological, and/or social consequences. 40 Each item on the CASI is rated on a three-point Likert scale ranging from 1 ("none") to 3 ("a lot"), yielding total scores between 18 and 54, with higher scores indicating higher levels of anxiety sensitivity. The CASI has good internal consistency (α = 0.87) and test-retest reliability (r = 0.76), as well as adequate convergent and discriminant validity. 39 Internal consistency for the present study was excellent (α = 0.87-0.93).

The Pain Catastrophizing Scale - Children (PCS-C) 26 measures the extent to which children worry, amplify, and feel helpless about their current or anticipated pain experience. 26 The 13-item PCS-C is a modification of the adult PCS. 41,42 Children rate each item on a scale from 0 ("not at all") to 4 ("extremely") according to "how strongly they experience this thought" when they have pain. Total scores range from 0-52, with higher scores indicating higher levels of pain catastrophizing. The PCS-C also yields three subscale scores, namely rumination, magnification, and helplessness. The PCS-C has good internal consistency (α = 0.90) and correlates highly with pain intensity (r = 0.49) and disability (r = 0.50). 26 Internal consistency for the present study was excellent (α = 0.93).
The Functional Disability Inventory (FDI-C) 43 is a self-report measure that assesses the extent to which children experience difficulties in completing specific tasks of daily living. Typically, the FDI-C is used as a five-point Likert scale and yields total scores ranging from 0-60. Inadvertently, the FDI-C in the present study was measured using a four-point Likert scale that omitted the original "2" ("some trouble"). Children in this study rated each of the 15 items on a scale from 0-3 (0: "no trouble"; 1: "a little trouble"; 2: "a lot of trouble"; and 3: "impossible"). Total scores range from 0-45, with higher scores indicating higher levels of disability. The FDI-C has excellent internal consistency (α = 0.86-0.91) and good test-retest reliability at 2 weeks (r = 0.74) and 3 months (r = 0.48). The FDI-C has been used with many pediatric populations, including children with chronic pain 44-46 and postsurgical pain. 47 Internal consistency for the present study was excellent (α = 0.83-0.89).

The 11-point Numeric Rating Scales for Pain Intensity (NRSI) and Pain Unpleasantness (NRSU) are verbally administered 11-point scales. The NRS was used to measure pain intensity ("how much pain do you feel right now?") and pain unpleasantness ("how unpleasant/horrible/yucky is the pain right now?"). The end points represent the extremes of the pain experience. Since there are no agreed upon NRS anchors for measuring pain in children and adolescents, 48 the following anchors were used in the present study: for pain intensity, 0 = "no pain at all" to 10 = "worst possible pain"; for pain unpleasantness, 0 = "not at all unpleasant/horrible/yucky" to 10 = "most unpleasant/horrible/yucky feeling possible." The NRSI has been validated as an acute postoperative pain measure in children aged 7-17 years 38 and correlates highly with the Visual Analog Scale (r = 0.89) and the Faces Pain Scale-revised (r = 0.87). 49 To determine preoperative pain, children were asked retrospectively (48-72 hours after surgery) how much pain they had had on average before the surgery, using a four-point verbal rating scale (0 = "no pain"; 1 = "a little bit of pain"; 2 = "a moderate amount of pain"; 3 = "a lot of pain"). Only three children with preoperative pain reported pain at the 6-month or 12-month follow-up; for the remaining 18, the surgery corrected the source of their pain. 23

The CPSP Questionnaire - Child report was designed specifically for the present study to evaluate children's pain experience postoperatively. Children were asked questions about the presence/absence of pain ("Do you ever feel pain in the area of your body where the surgery was done?"), pain frequency ("How often do you feel pain?"), pain intensity and unpleasantness (11-point NRS), type of pain ("What kind of pain do you usually feel?"), pain location ("When you feel pain where exactly is the pain you are usually feeling?"), as well as pain management strategies utilized (eg, pain medication, doctor visits, physiotherapy). CPSP was defined based on the child's response to the question "On average, how much pain do you usually feel?" Children who rated their average pain as ≥4 out of 10 were classified as having moderate/severe CPSP, and children who rated their average pain as ≤3 out of 10 were classified as having no/mild CPSP.

Parent measures

The Pain Anxiety Symptoms Scale (PASS-20) 37 is a short version of the PASS, 25 consisting of 20 items assessing fear and anxiety reactions to pain.
The four, five-item subscales of the PASS-20 measure cognitive anxiety, escape and avoidance responses, fearful thinking, and physiological anxiety responses. Participants answer each item on a scale from 0 ("never") to 5 ("always"). Total scores range from 0-100, higher score indicating higher level of pain anxiety. The scale has a good internal consistency (α = 0.81), good convergent validity with the original PASS-40 (r = 0.95), and good construct validity. 37 Reliability coefficients for the subscales range from 0.23-0.93. Internal consistency for the present study was excellent (α = 0.938-0.959). The Pain Catastrophizing Scale (PCS) 41 is a 13-item self-report measure of pain catastrophizing that includes three subscales: rumination, magnification, and helplessness. Participants rate each item on a scale from 0 ("not at all") to 4 ("all the time"), for a total score of 52. Cronbach's α of 0.87 for the total scale is satisfactory. The scale also has good convergent validity with measures of anxiety (r = 0.32) and negative affect (r = 0.32). The 10-week test-retest showed good reliability (r = 0.70). Internal consistency for the present study was excellent (α = 0.934-0.961). The Anxiety Sensitivity Index (ASI) 27 is a 16-item selfreport measure assessing the extent to which participants fear the potentially negative consequences of symptoms and sensations related to anxiety. Each item is rated on a scale from 0 ("very little") to 4 ("very much"), for a total score ranging from 0-64. The ASI has a high total score internal consistency (α = 0.83) and has good convergent and discriminant validity. 50 Internal consistency for the present study was excellent (α = 0.884-0.920). The Postoperative Pain Measure for Parents (PPMP) 51 is a 15-item checklist that assesses behavior children exhibit in response to postoperative pain. For each item, parents select "yes" or "no" as to whether the child exhibits the behavior. The checklist has good internal consistency (α = 0.88) and correlates highly with child ratings of pain (r = 0.61). 51 Using a cut-off score of 6, the PPMP has been shown to be highly sensitive (,80%) and specific (.80%) in identifying children with clinically significant pain intensity 2 days after surgery. 51 Internal consistency for the present study was adequate (α = 0.756). The Functional Disability Inventory -Parent report (FDI-P) 43 is a 15-item scale that assesses the extent to which children experience difficulties in completing specific tasks (eg, "walking to the bathroom", "eating regular meals", and "being at school all day"). Parents are asked to rate the extent to which their child experiences difficulties in completing each of the 15 items. Typically, the FDI-P is used as a five-point Likert Scale and yields total scores ranging from 0-60. Inadvertently, the FDI-P in this study was measured using a four-point Likert scale. Parents rated each item on a scale from 0 ("no trouble") to 3 ("impossible"), yielding total scores ranging from 0-45. Internal consistency for the present study was adequate (α = 0.798-0.886). The CPSP Questionnaire -Parent report was designed specifically for this study to evaluate parent perception of child's pain experience postoperatively. 
Parents were asked questions about their children's pain experience, including the presence/absence of pain ("Does your child ever feel pain in the area of his/her body where the surgery was done?"), pain frequency ("How often does your child feel pain?"), pain intensity and unpleasantness using the NRS ("On average, how much pain does your child feel on a scale from 0 to 10?"), type ("What kind of pain does your child usually feel?") and location ("When your child feels pain where exactly is the pain she/he is usually feeling?") of pain, as well as pain management strategies utilized (eg, pain medication, doctor visits, physiotherapy).

Procedure

The study was reviewed and approved by the Research Ethics Boards of the Hospital for Sick Children and York University. Potential participants were initially approached approximately 48-72 hours after surgery by nurses not part of the research project. After expressing initial interest in the study to the nurse, children and one of their parents were then approached 48-72 hours after surgery by one of the research team members. After obtaining written parental consent and child consent or assent, questionnaires were verbally administered to children by one of the research team members (Table 1). Meanwhile, parents independently completed a similar set of questionnaires. The order of administration of questionnaires was randomized (http://www.randomization.com) within participants to minimize potential order and fatigue effects. Telephone follow-up calls were conducted approximately 2 weeks after discharge from hospital (children only) and 6 months and 12 months after surgery with both parents and children (Table 1). This study was part of a larger project examining the validation of pain anxiety measures and predictors of acute and chronic postoperative pain in children. 23,38,52

Data analysis

Data were screened for the presence of univariate outliers on the pain-related psychological predictor variables (pain anxiety, pain catastrophizing, and anxiety sensitivity), multivariate outliers (squared Mahalanobis distance with a χ² probability < 0.001), as well as skewness and kurtosis for both child and parent measures. Outlier analysis revealed that none of the data points was both a univariate and a multivariate outlier; as such, all participants were retained for the analyses. Skewness and kurtosis significance testing (estimate/standard error > 3) revealed nonnormality of two outcome variables, namely pain intensity and pain unpleasantness 2 weeks, 6 months, and 12 months after surgery. Nonnormality of the outcome variables was addressed through square root transformation (NRSI(2)t, NRSU(2)t, NRSI(6)t, NRSU(6)t, NRSI(12)t, and NRSU(12)t), which resulted in normally distributed variables (skewness and kurtosis significance testing estimate/standard error < 3). The superscript symbol "t" following a variable name indicates that this variable has been transformed to address nonnormality. Neither child nor parent pain-related psychological predictors were found to be skewed or kurtotic.

Correlation and concordance between child and parent pain-related psychological factors and pain reports

Correlations between child and parent pain-related psychological factors

The associations among parent (PASS, PCS, ASI) and child (CPASS, PCS-C, CASI) pain-related psychological measures were examined using Pearson correlation coefficients. Strength of correlations between measures was examined using R² and 90% confidence intervals.
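A minimal sketch of the screening and transformation steps described above is given below, using a synthetic data frame; the variable names, generated values, and cutoffs as coded are illustrative assumptions, not the study data.

```python
# Sketch: Mahalanobis-distance outlier screening (chi-square cutoff, p < 0.001),
# skewness z-ratio check, and square-root transformation of a skewed pain outcome.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "CPASS": rng.normal(40, 15, 83),
    "PCS_C": rng.normal(20, 10, 83),
    "CASI": rng.normal(30, 6, 83),
    "NRSI_2wk": rng.gamma(shape=1.5, scale=2.0, size=83),  # right-skewed outcome
})

# Squared Mahalanobis distance of each case on the psychological predictors.
X = df[["CPASS", "PCS_C", "CASI"]].to_numpy()
diff = X - X.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
multivariate_outlier = d2 > stats.chi2.ppf(1 - 0.001, df=X.shape[1])

# Skewness z-ratio (estimate / standard error > 3 flags nonnormality).
n = len(df)
se_skew = np.sqrt(6.0 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
skew_ratio = stats.skew(df["NRSI_2wk"]) / se_skew

# Square-root transformation of the skewed outcome, as in the analysis plan.
df["NRSI_2wk_t"] = np.sqrt(df["NRSI_2wk"])
print(int(multivariate_outlier.sum()), round(float(skew_ratio), 2))
```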
Agreement/relationship between child and parent ratings of a child's pain

To determine the relationship between child and parent acute postsurgical pain ratings, two-tailed, Bonferroni-adjusted (α = 0.025) t-tests were used to examine differences in NRSI(0) and NRSU(0) scores using a cut-off score of 6 on the PPMP. We compared children whose parents rated them as having a score of ≥6 on the PPMP 51 (a clinically significant level of pain) with children whose parents rated them as having a score of <6 on the PPMP (a nonclinically significant level of pain). To determine agreement between child and parent CPSP ratings, inter-rater agreement between parental perception of the child's CPSP status (no/mild CPSP versus moderate/severe CPSP) and child self-report of CPSP status was examined 6 months and 12 months after surgery using Cohen's kappa coefficient 53 (α set at 0.025). For both child report and parent report of the child's pain, moderate/severe CPSP was defined as the presence of pain of an intensity ≥4 on the NRS 6 months and/or 12 months after surgery.

Parent and child factors associated with pediatric acute postsurgical pain 2 weeks after hospital discharge

A multivariate general linear model (multivariate multiple regression analysis) was fit to the data to examine the effect of parent and child pain-related psychological measures and their interactions on child acute pain and functional disability levels. The PASS(0) and CPASS(0) and their interaction (using centered variables), as well as the PCS(0) and PCS-C(0) and their interaction (using centered variables), measured 48-72 hours after surgery, were entered as predictors of child NRSI(0) and NRSU(0) (model 1) and of NRSI(2)t, NRSU(2)t, and FDI-C(2) 2 weeks after discharge from hospital (model 2). The α level was set at 0.025 to control for multiple comparisons. Significant multivariate effects were followed up with univariate multiple regression analyses.

Parent predictors of pediatric CPSP

Stepwise linear regression analyses were conducted to examine parent predictors of child CPSP. The PASS(0), PCS(0), and ASI(0) measured 48-72 hours after surgery were entered as predictors of children's pain intensity 6 months (NRSI(6)t) and 12 months (NRSI(12)t) after surgery.

Sample size analysis

Sample size was estimated a priori (for all analyses except the multivariate multiple linear regression analyses) using G*Power version 3.1 (Franz Faul, Universität Kiel, Germany). 54

48-72 hours after surgery: Sample size analysis showed that 64 participants would be required for a two-tailed point biserial correlation with α = 0.05, a power of 80%, and a medium effect size.

2 weeks after discharge from hospital: Given that sample size analyses for multivariate multiple linear regression analyses are not readily accessible, power analyses were computed post hoc. Post hoc power analysis is important to rule out that nonsignificant findings are due to lack of power. Power analysis for three response variables and six predictors showed that, with 83 participants, noncentrality parameter λ = 16.6, α = 0.025, numerator df = 2, and denominator df = 71, power was 92.0%. Power analysis for three response variables and six predictors showed that, with 83 participants, noncentrality parameter λ = 16.6, α = 0.025, numerator df = 2, and denominator df = 57, power was 87.3%.

6 months and 12 months after surgery: Sample size analysis showed that 57 participants would be required for a linear regression analysis with three predictors, α = 0.025, effect size f² = 0.25, and a power of 80%. Thus, taking into account an attrition rate of ~30% due to participant dropout and losing patients to follow-up, we recruited 83 patients to ensure we would have sufficient power for our analyses at the various time points after surgery.
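The sketch below illustrates, on placeholder data, two of the analyses described in this section: Cohen's kappa for parent-child agreement on CPSP status, and an ordinary least squares model with a centered parent-by-child pain anxiety interaction term standing in for the multivariate models reported here. Variable names and simulated values are assumptions, not the study data.

```python
# Sketch: (1) Cohen's kappa for parent-child CPSP agreement, and
# (2) OLS with a centred CPASS-by-PASS interaction predicting a transformed pain score.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
n = 61
child_cpsp = rng.integers(0, 2, n)                        # 0 = no/mild, 1 = moderate/severe
parent_cpsp = np.where(rng.random(n) < 0.7, child_cpsp,   # parents agree ~70% of the time
                       1 - child_cpsp)
print("kappa =", round(cohen_kappa_score(child_cpsp, parent_cpsp), 3))

df = pd.DataFrame({
    "CPASS0": rng.normal(40, 15, n),
    "PASS0": rng.normal(35, 18, n),
    "NRSI_2wk_t": rng.normal(1.4, 0.6, n),
})
df["CPASS0_c"] = df["CPASS0"] - df["CPASS0"].mean()       # centre before forming the product
df["PASS0_c"] = df["PASS0"] - df["PASS0"].mean()
model = smf.ols("NRSI_2wk_t ~ CPASS0_c * PASS0_c", data=df).fit()
print(model.params)                                        # includes the CPASS0_c:PASS0_c interaction
```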
Recruitment

Children were recruited between July 2008 and September 2010. Details of the recruitment are presented in Figure 1. A total of 83 children participated in this study, of whom 69 (83%), 61 (73%), and 59 (71%) completed the telephone follow-ups 2 weeks (mean = 15.6 days, standard deviation [SD] = 2.15), 6 months, and 12 months after discharge from the hospital, respectively.

Descriptive statistics

A total of 83 children (female = 56 [67.5%]) aged 8-18 years (mean = 13.8, SD = 2.4) and one of their parents (mothers = 63 [75.9%]; mean age = 44.0, SD = 6.6) were included in the data analysis. The majority of children (n = 53; 64%) and parents (n = 56; 67.5%) in the sample self-identified as Caucasian. Eighty-nine percent of children spoke English as their first language at home, and 82% of parents identified English as the primary language spoken at home. Seventy-four percent of parents had completed at least some college/undergraduate education. The majority of children underwent surgery for scoliosis (spinal fusion) (n = 42; 50.6%) or osteotomy (n = 25; 30.1%); eight children (9.6%) underwent Nuss (n = 5) or Ravitch (n = 3) procedures, seven children (8.4%) had a laparotomy, and one child had a thoracotomy. As described elsewhere, significant differences were not found in pain intensity or pain unpleasantness scores across the different surgical procedures while in hospital 52 or after returning home. 23 A smaller proportion of boys than girls had surgery for scoliosis, and a greater proportion of boys had a Nuss or Ravitch procedure than expected by chance. 52 This was the first surgery for 44 children (53%); the 39 others had previously undergone other surgical procedures (mean = 2.0, SD = 1.6, range = 1-7). When asked to rate the level of presurgical pain they had experienced, the majority of children (80.7%) reported "no pain" or "a little bit of pain." Approximately one quarter of parents reported having experienced chronic pain (either currently or in the past) (n = 20; 24.1%), whereas almost one third of parents reported experiencing ongoing pain problems (n = 26; 31.7%). Means and SDs of parent measures and medians and interquartile ranges of child pain experiences are presented in Tables 2 and 3, respectively. Notes: When calculating a participant's total score on a questionnaire, mean imputation was used to replace missing items if the total number of missing items amounted to ≤5% of the questionnaire items; if >5% of items on a specific questionnaire were left unanswered, the total score for that participant was not calculated. *Functional disability scores were only computed for children who reported experiencing pain and do not take into account children who did not endorse pain at each time point. The Functional Disability Inventory - Parent report was measured inadvertently on a scale from 0-3.

Correlation and concordance among child and parent pain-related psychological factors

Correlations among child and parent pain-related psychological factors

Correlations among child and parent pain-related psychological measures are presented in Table 4. Significant correlation coefficients were found between child and parent pain catastrophizing 6 months and 12 months, but not 48-72 hours, after surgery.
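One common way to attach a 90% confidence interval to a parent-child correlation of the kind reported in Table 4 is the Fisher z-transformation, sketched below on simulated scores; the data and sample size are placeholders, not the study measurements.

```python
# Sketch: Pearson r with a 90% confidence interval via the Fisher z-transformation.
import numpy as np
from scipy import stats

def pearson_with_ci(x, y, confidence=0.90):
    """Pearson r, its p-value, and a Fisher-z confidence interval."""
    r, p = stats.pearsonr(x, y)
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(len(x) - 3)
    z_crit = stats.norm.ppf(0.5 + confidence / 2.0)
    lo, hi = np.tanh(z - z_crit * se), np.tanh(z + z_crit * se)
    return r, p, (lo, hi)

rng = np.random.default_rng(3)
child_pcs = rng.normal(20, 10, 59)                         # placeholder child scores
parent_pcs = 0.4 * child_pcs + rng.normal(0, 9, 59)        # moderately related parent scores
r, p, ci = pearson_with_ci(child_pcs, parent_pcs)
print(f"r = {r:.2f}, p = {p:.3f}, 90% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```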
Using R2 and 90% confidence intervals, the magnitude of the correlation coefficients between child and parent pain anxiety, pain catastrophizing, and anxiety sensitivity increased significantly from 48-72 hours to 12 months after surgery (Figure 2).
Agreement/relationship between child and parent ratings of their children's pain
Mean pain intensity and pain unpleasantness scores 48-72 hours after surgery were compared between children whose parents' ratings on the PPMP identified the children as having clinically significant (PPMP score ≥6) 51 or lower than clinically significant (PPMP score <6) levels of pain. Child self-reported pain intensity scores were significantly higher in the clinically significant parent grouping (PPMP score ≥6; mean NRSI = 4.08, SD = 2.3) than in the below clinically significant parent grouping (PPMP score <6; mean NRSI = 2.63, SD = 1.9) (t[79] = 2.37, P = 0.020). Significant differences were not found for pain unpleasantness scores (NRSU = 4.57, SD = 2.8 and NRSU = 4.31, SD = 2.8 for the clinically significant and below clinically significant parent groupings, respectively) (P = 0.743). Children and parents were asked whether or not the child experienced pain and, if so, to rate the intensity of the pain using the NRSI, at 6 months and 12 months after surgery. This information was used to classify the ratings into moderate/severe CPSP (NRS ≥ 4) and no/mild CPSP (NRS ≤ 3) at both 6 months and 12 months after surgery for both child report and parent report of the child's pain. Results indicate a fair agreement between child and parent ratings at 6 months (κ = 0.300, P = 0.023) but not 12 months (κ = 0.175, P = 0.205) after surgery (Table 5). These findings were not moderated by child or parent sex or child age.
Discussion
The goals of this study were to examine (1) the relationships among child and parent pain-related psychological factors, (2) perioperative parent pain-related psychological risk factors for pediatric acute postsurgical pain, and (3) perioperative parent pain-related psychological risk factors for the development of pediatric CPSP 6 months and 12 months after surgery.
[Table 3: Frequency (n) of child and parent self-report of pain and median pain intensity, pain unpleasantness, and functional disability scores in children measured 48-72 hours after surgery, 2 weeks after discharge, and 6 months and 12 months after surgery.]
Overall, the correlation coefficients were low between parent and child psychological and pain-related psychological measures across the first year after pediatric surgery. Of interest, however, are the changes in the strength of the correlation coefficients between parent and child pain-related psychological measures over time. While the relationship between parent and child pain catastrophizing was the only one to reach statistical significance in the months following surgery, the relationships between pain anxiety and anxiety sensitivity also increased in strength over time. It is possible that child and parent pain-related anxiety and catastrophizing are normally correlated, but that this relationship is disrupted in the days after surgery and then normalizes by 12 months after surgery. As noted above, the relationship between parent and child pain catastrophizing reached statistical significance at 6 months and 12 months after surgery. It has been suggested that pain catastrophizing can be both dispositional and situational 55-58 and is learned through pain experience.
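Referring back to the correlation results at the start of this section, the following sketch illustrates how R2 and a 90% confidence interval for a Pearson correlation can be obtained via the standard Fisher z-transformation. The r and n values are hypothetical placeholders, not the study estimates.

```python
import numpy as np
from scipy import stats

def corr_r2_ci(r: float, n: int, conf: float = 0.90):
    """Return (R^2, lower, upper) for a Pearson correlation r estimated from n pairs."""
    z = np.arctanh(r)                       # Fisher z-transformation of r
    se = 1.0 / np.sqrt(n - 3)               # standard error of z
    z_crit = stats.norm.ppf(0.5 + conf / 2.0)
    lower, upper = np.tanh(z - z_crit * se), np.tanh(z + z_crit * se)
    return r ** 2, lower, upper

# Hypothetical parent-child pain-catastrophizing correlations at two time points.
for label, r, n in [("48-72 hours", 0.10, 81), ("12 months", 0.45, 59)]:
    r2, lo, hi = corr_r2_ci(r, n)
    print(f"{label}: r = {r:.2f}, R^2 = {r2:.2f}, 90% CI for r [{lo:.2f}, {hi:.2f}]")
```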
Figure 3. Interactions between parent and child pain anxiety 48-72 hours after surgery predict child pain intensity, pain unpleasantness, and functional disability 2 weeks after hospital discharge. Notes: For pain intensity (A) and pain unpleasantness (B), parents with low pain anxiety had children whose pain intensity and pain unpleasantness scores did not differ according to level of child pain anxiety. In contrast, among parents with high pain anxiety, child pain intensity and pain unpleasantness scores were significantly higher in children with high pain anxiety than in those with low pain anxiety. For functional disability (C), children with lower levels of pain anxiety reported similar levels of functional disability regardless of their parents' pain anxiety scores. Children with higher pain anxiety scores reported higher levels of functional disability if their parents also reported higher compared to lower pain anxiety (C). The interaction between CPASS and PASS is based on continuous variables; the CPASS and PASS were dichotomized using a median split for the purpose of illustrating the interaction. Abbreviations: CPASS, Child Pain Anxiety Symptoms Scale measured 48-72 hours after surgery; FDI(2), Functional Disability Inventory measured 2 weeks after discharge from hospital; NRSI(2)t, Numeric Rating Scale for Pain Intensity, transformed (square root transformation), measured 2 weeks after discharge from hospital; NRSU(2)t, Numeric Rating Scale for Pain Unpleasantness, transformed (square root transformation), measured 2 weeks after discharge from hospital; PASS, Pain Anxiety Symptoms Scale measured 48-72 hours after surgery.
Given the correlational design of the present study, we do not know whether any of the risk factors we identified are causal. Nevertheless, the results of the present study raise the possibility that in the months following surgery, parents and children learn from each other's emotional responses to the pain experience and, as such, over time they exert a greater influence on each other's levels of pain catastrophizing. Research has generally shown a weak relationship between a child's self-reported pain score and parent perceptions of their child's pain. 59-62 In contrast, results from the present study indicated a significant relationship between child and parent acute pain ratings and a fair level of agreement between child and parent acute and chronic pain ratings up to 6 months after surgery. Children whose parents' scores indicated the child had clinically significant levels of acute postoperative pain rated their own pain intensity as higher compared to children whose parents' scores indicated the child had below clinically significant levels of pain. This finding is consistent with results showing that while parent and child ratings differed on the day of surgery and on the first postoperative day, this difference was no longer apparent by the second day after surgery. 59 The results of the present study also show fair agreement between parents and children on the presence/absence of moderate/severe CPSP 6 months after surgery. Consistent with the existing literature, 63,64 this agreement was not influenced by sex of the parent or child or by the child's age. The level of agreement between child and parent reports was no longer significant at 12 months after surgery. This indicates that parents and children are in greater agreement about the child's earlier CPSP status than about later status.
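A minimal sketch of the kind of interaction model and median-split illustration described in the Figure 3 legend above is given below. The data are simulated, the variable names (PASS, CPASS, NRSI_2wk) are used only as labels, and this is not the authors' multivariate GLM code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 80
df = pd.DataFrame({
    "PASS": rng.normal(40, 12, n),   # parent pain anxiety (hypothetical scale)
    "CPASS": rng.normal(35, 10, n),  # child pain anxiety (hypothetical scale)
})
# Simulated 2-week pain-intensity outcome with a built-in interaction, for demonstration only.
df["NRSI_2wk"] = (0.02 * df["PASS"] + 0.03 * df["CPASS"]
                  + 0.004 * (df["PASS"] - df["PASS"].mean()) * (df["CPASS"] - df["CPASS"].mean())
                  + rng.normal(0, 1, n))

# Center the predictors before forming the interaction term.
df["PASS_c"] = df["PASS"] - df["PASS"].mean()
df["CPASS_c"] = df["CPASS"] - df["CPASS"].mean()
fit = smf.ols("NRSI_2wk ~ PASS_c * CPASS_c", data=df).fit()
print(fit.params)

# Median split of both scales, used only to describe/plot the interaction.
df["parent_grp"] = np.where(df["PASS"] >= df["PASS"].median(), "high", "low")
df["child_grp"] = np.where(df["CPASS"] >= df["CPASS"].median(), "high", "low")
print(df.groupby(["parent_grp", "child_grp"])["NRSI_2wk"].mean())
```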
It is possible that other variables not examined in the present study may affect the agreement between child and parent pain reports, including parent surgical history, child surgical complications, and child behavioral pain expression. It would be interesting for future studies to explore whether, and how, these variables influence child and parent agreement.
Parent pain-related psychological factors associated with pediatric acute postsurgical pain
The present results show that the interaction between parent and child pain anxiety 48-72 hours after surgery predicted pain intensity, pain unpleasantness, and functional disability levels 2 weeks after discharge from hospital. When parent pain anxiety is low, child pain levels 2 weeks later do not differ between children with high or low pain anxiety. Low parent pain anxiety may moderate the effect of child pain anxiety on pain intensity and unpleasantness levels. In contrast, higher levels of parent pain anxiety were associated with higher pain levels (intensity and unpleasantness) among children who also endorsed high levels of pain anxiety. High levels of parent pain anxiety, however, were associated with significantly lower pain among children with low pain anxiety. It is possible that in some as yet unknown way, high parent pain anxiety is protective against pain in children with low pain anxiety. Alternatively, this might reflect a subset of children who underreport pain and pain anxiety so as to protect their parents from excessive worry and distress over the child's pain. Research has shown that parent and child postoperative anxiety are correlated, 20 but the present results are the first to show that parent and child pain-specific anxiety interact to predict pain intensity and unpleasantness reports 2 weeks after discharge. The present results suggest that high levels of both child and parent pain anxiety shortly after surgery are risk factors for higher pain intensity, pain unpleasantness, and functional disability 2 weeks after discharge; however, we do not know if they are causal risk factors. It is possible that psychological interventions designed to reduce pain anxiety in parents whose children also have high pain anxiety would reduce child pain reports in the days and weeks after surgery. Research on family and parent factors in pediatric pain has focused mainly on procedural pain or chronic pain and disability. Two primary theoretical models, the operant conditioning and family systems theories, have been used to examine parent-child pain dynamics. 4 These models mainly focus on individual variables (eg, parental behaviors such as solicitousness and reinforcement, parenting style) or family variables (eg, family environment). As proposed in the integrative model of parent and family factors in pediatric chronic pain and associated disability 4 and the pediatric fear-avoidance model of chronic pain, 5 it is also important to consider dyadic variables; namely, interactions between parent-child pain-related factors. Results from the present study contribute to this literature by suggesting that parent and child pain-specific emotional responses interact to predict acute pain.
Parent pain-related psychological predictors of pediatric CPSP
Initial levels of parent pain catastrophizing predicted child pain intensity 12 months after surgery. Pain catastrophizing has been conceptualized as a way of communicating pain distress to others. 65
As such, the parents may, through emotional and behavioral reactions, reinforce their child's pain behaviors and pain catastrophizing. 56 It is possible that initial parental catastrophizing reinforces the child's pain behaviors, thoughts, and emotions (either through modeling or by directing attention to the pain), thereby placing the child at greater risk of developing CPSP 12 months after surgery. The finding that this relationship is absent at the 6 month follow-up and does not appear until 12 months suggests that there may be important differences in the development of CPSP (ie, from surgery to 6 months after surgery) versus the maintenance of CPSP (ie, from 6 months to 12 months after surgery) that are influencing this relationship. 66,67 In this case, it may be that children are learning from their parents so that the effect of parent pain catastrophizing on child CPSP becomes apparent between 6 months and 12 months after surgery.
Limitations
There are limitations to the present study. First, for practical reasons, recruitment and initial assessment did not take place until 48-72 hours after surgery. Since we did not assess children or parents before surgery, a true baseline was not obtained. It would be important for future studies to examine the parent-child dyadic relationship at baseline in the absence of pain in order to better understand how it changes with time after surgery. Second, children in this study all underwent major surgical procedures and the results cannot be generalized to procedural pain (eg, injections) or minor surgical procedures. Third, significantly more mothers than fathers took part in this study, making it difficult to examine sex differences between parents. Lastly, the FDI was inadvertently administered omitting the original "2" ("some trouble"), yielding possible item scores ranging from 0-3 instead of 0-4. As such, levels of functional disability in this study cannot be directly compared to other studies of pediatric postsurgical pain. In conclusion, this study is the first to prospectively examine the relationship between parent and child pain-related psychological risk factors from acute pediatric pain to the development and maintenance of CPSP. Results indicate that while parent and child pain anxiety in the days after surgery interact to predict acute pain levels 2 weeks later, parent pain catastrophizing (48-72 hours after surgery) predicts the presence of CPSP 12 months after surgery. The results suggest the following hypothesis: as time from surgery progresses, parents exert an increasingly greater influence over the pain responding of their children so that by the 12 month mark, parent pain catastrophizing (measured in the days after surgery) is the main risk factor for the development of pediatric CPSP. A next step in this line of research would be to examine how different social and environmental factors, in addition to the parent-child dyad, influence pain outcomes in the short- and long-term. 4
The Cone Flare Crush Modified-T (CFCT) stenting technique for coronary artery bifurcation lesions Background The present study is a prospective observational single arm clinical investigation, with parallel bench test interrogation, aimed at investigating the technical feasibility, safety and clinical outcomes with the cone flare crush modified-T (CFCT) bifurcation stenting technique. Bifurcation percutaneous coronary intervention (PCI) remains an area of ongoing procedural evolution. More widely applicable and reproducible techniques are required. Methods From April 2018 until March 2019, 20 consecutive patients underwent bifurcation PCI using the CFCT technique with a Pt-Cr everolimus drug-eluting stent with a bioresorbable polymer. Exercise stress echocardiography was performed at 12-month follow-up. The primary outcome was a composite of cardiac related mortality, myocardial infarction, target lesion/vessel revascularization and stroke. Safety secondary endpoints included bleeding, all-cause mortality and stent thrombosis. Results All patients underwent a successful CFCT bifurcation procedure with no complications to 30-day follow-up. One patient met the primary endpoint requiring target lesion revascularization at 9 months for stable angina. There were no other primary or secondary outcome events in the cohort. There were no strokes, deaths, stent thrombosis or myocardial infarction during the follow-up period. The mean CCS score improved from 2.25 to 0.25 (p < 0.0001). Optical coherence tomography (OCT) and bench test findings indicated optimal side branch ostial coverage and minimal redundant strut material crowding the neo-carina. Conclusions The CFCT technique appears to be a safe, efficacious and feasible strategy for managing coronary artery bifurcation disease. Expanded and randomized datasets with longer term follow-up are required to further explore confirm this feasibility data. (ANZCTR ID: ACTRN12618001145291). Introduction Bifurcation percutaneous coronary intervention (PCI) remains an area of ongoing procedural evolution and active research [1]. Multiple bifurcation strategies have been studied however bifurcation PCI continues to be heterogeneously managed with approaches that vary between operators, institutions and geographies. Dedicated two-limb pre-fabricated bifurcation stents have not been established to be adequately efficacious and are not widely available commercially [2,3]. Numerous provisional and upfront two-stent approaches have been assessed in registry and clinical trial settings with variable results [4][5][6][7][8][9][10][11]. Clinical outcome data suggests that a single stent provisional side-branch technique, outside of the left main bifurcation setting, should be the preferred approach [12][13][14]. Countervailing this, an a priori two-stent technique is frequently employed because of concern around the risk of irretrievable side-branch loss or the clinical significance of the side branch disease itself [15][16][17][18]. The strategy employed varies depending on operator preference, anatomical considerations and relative vessel sizes [8,19]. The cone flare crush modified-T (CFCT) bifurcation technique is a modified double kiss double-crush (DK-Crush) strategy and has been adopted by some operators as a default strategy where an upfront two-stent approach is deemed necessary [20]. 
The technique offers the potential for greater predictability for side-branch re-access for kissing balloon inflations (with less stent material in the peri-bifurcation region) and is potentially applicable to all bifurcation angles. (5) This study is a prospective observational single arm cohort investigation to examine the safety, technical feasibility and clinical outcomes with the implementation of the CFCT technique. The clinical data is complemented by intravascular imaging and bench test findings [21]. Study population The CFCT study is a prospective investigator initiated single arm registry of twenty consecutive patients planned to undergo an upfront two-stent bifurcation stenting strategy utilizing a resorbable polymer third generation drug eluting stent (DES) with a platinum-chromium (Pt-Cr) based platform design (SYNERGY, Boston Scientific Inc, Marlborough, MA, USA). The study was approved by the relevant institutional human research and ethics committee (HREC) and all patients provided informed consent in-line with the Declaration of Helsinki and in accordance with the guidelines of the American Physiological Society [22]. The study was registered with the Australian and New Zealand Clinical Trial Registry (ANZCTR ID number, ACTRN12618001145291). Clinical, demographic and procedural data was collected prospectively and entered into a central database as per the study protocol. Data collection was assisted by interrogation of the electronic medical record (EMR) system in place at our institution (Cerner Inc, Kansas, MO, USA). For each patient the Synergy Between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery (SYNTAX) score was calculated [23]. Angiographic variables assessed included lesion location and bifurcation type (Medina classification), presence of calcification and American Heart Association (AHA) lesion grade [24,25]. Procedural time, contrast load and fluoroscopy doses were collected in addition to the number of stents, stent lengths and diameters. Consecutive patients over the age of 18 at our institution who were planned to undergo a CFCT bifurcation technique using a Pt-Cr everolimus DES with a bioresorbable polymer in a nonemergent setting were eligible for enrollment. Patients were excluded if they were unable to take dual antiplatelet therapy, were undergoing primary PCI for STEMI (or rescue PCI for failed fibrinolysis), or did not have capacity to provide informed consent. Description of the CFCT technique The original form of the CFCT technique was initially described by Rajdev et al. [20]. The modified version of this technique involves the following steps (and is displayed in Fig. 1): 1. Stenting of the side-branch (SB) is performed first (with predilatation if required) with an iso-sized semi-compliant balloon in the main vessel (MV) sized to the vessel distal to the bifurcation. 2. Side-branch (SB) ostial stent placement is positioned so that the proximal end of the stent extends back to the SB-ostium distal vertex only (as shown in Fig. 2) and is deployed at nominal pressure. 3. The stent balloon is then pulled back 50 -70 per cent of its length and inflated for the 'cone flare' inflation to rated burst pressure (RBP). Following SB balloon deflation, the MV semicompliant balloon is then inflated to between nominal and RBP, at operator discretion, and an intermediary simultaneous kissing balloon (ISKB) inflation is performed to between 6 and 8 atm. 4. The SB wire and stent balloon are then removed. 
The MV balloon is then 'jogged' backward and forward to predict ease of MV stent passage, with any resistance leading to further MV inflations before MV balloon removal. 5. The MV balloon is then removed and the main vessel stent positioned and deployed at a pressure at operator discretion (usually between nominal and RBP). 6. Murasato optimal proximal optimization technique (POT) is then performed on the MV stent, using an iso-sized (to the vessel proximal to the bifurcation) non-compliant balloon, prior to re-wiring of the SB [26]. The SB is re-accessed and a balloon iso-sized to the SB stent is positioned across the ostium of the SB. Sequential inflations are then performed in the SB and MV, followed by a penultimate kissing balloon inflation (PUKBI) to 6-8 atm. A final POT inflation is then performed in the MV. 7. Optical coherence tomography (OCT) is then performed on the MV and SB to document stent expansion and apposition [27].
Study outcomes
The primary outcome was a composite of cardiac-related mortality, non-fatal myocardial infarction (MI), target lesion or target vessel revascularization (TLR/TVR) and stroke at 12 months. Secondary endpoints included the individual components of the primary outcome in addition to safety outcomes of bleeding (BARC 2-5), all-cause mortality and Academic Research Consortium (ARC) defined stent thrombosis (ST) [28,29]. Periprocedural MI was defined as per contemporary PCI studies but also in accordance with ARC recommendations [29-31]. Symptom data was also recorded using the Canadian Cardiovascular Society (CCS) score for angina at baseline and follow-up reviews [32]. Quantitative coronary angiography (QCA) data was recorded pre- and post-PCI. Data was analyzed using descriptive statistics with findings expressed as mean ± standard error of the mean (SEM) unless otherwise specified. Data comparisons were done with Student's t-test for continuous variables and Pearson's chi-squared test for categorical variables with SPSS Version 26 (SPSS Institute Inc, Chicago, IL, USA).
Follow-up
Follow-up data was obtained during outpatient clinic visits (in person or via telehealth link) at 1, 6 and 12 months. All patients underwent an exercise stress echocardiogram (ESE) at 12-month follow-up while on guideline-mandated medical therapy including beta-blockade; invasive coronary angiography was performed for all patients who had a result suggestive of inducible ischemia (but not for submaximal or equivocal studies without definite ischaemic features). Patients who could not exercise due to mobility or other issues underwent a dobutamine stress echocardiogram (DSE).
Bench testing methods
Bench testing was performed employing consensus principles [33]. Two pre-fabricated separate bifurcation models representing narrow and wide bifurcation angles (30° and 70°, respectively) were used (Terumo Inc, Tokyo, Japan). They both consisted of clear polyurethane tubing affixed to a transparent Perspex™ plate via stainless-steel ties. The 30° model had 4.0 mm proximal MV (PMV), 3.0 mm distal MV (DMV) and 3.0 mm SB lumens. The 70° model had 3.5 mm PMV and DMV lumens with a 2.5 mm SB lumen. The models were bathed in water on our catheter laboratory table and fluoroscopic images were taken with a Philips Allura Clarity imaging system (Koninklijke Philips NV, Amsterdam, the Netherlands).
Funding source and role of sponsor
The CFCT study is an investigator-initiated prospective registry. An institutional grant to cover registry costs was provided by Boston Scientific Corporation (Marlborough, MA, USA).
Terumo Corporation (Tokyo, Japan) provided the pre-fabricated bench-testing models. The study structure, design and subsequent manuscript were prepared by the listed investigators. The study sponsor was given the opportunity to review the completed manuscript. All final decisions about manuscript content were made by the authors.
Baseline characteristics
The baseline characteristics are reported in Table 1. The mean age was 64.84 ± 2.23 years and 25% of patients were female. Cardiac risk factors included hypertension and dyslipidemia in 12 patients (60%) and diabetes in 6 (30%). Fourteen patients had a history of current or prior smoking. Remote prior MI had occurred in 5 (25%) patients, with 2 (10%) having undergone remote prior PCI. PCI was performed for acute coronary syndrome (ACS) in 45% of patients (NSTEMI n = 8, UAP n = 1). Most patients were taking statin therapy prior to PCI (90%) and all were loaded with DAPT prior to the PCI procedure.
Procedural characteristics
Procedural characteristics are shown in Table 2. The mean fluoroscopy time was 35 min. The median SYNTAX score was 27.9 ± 2.73. Medina 1,1,1 bifurcation disease made up 70% (n = 14) of the cohort. LAD/diagonal bifurcation disease accounted for 70% of cases (n = 14), while 10% were located in the distal LMCA (n = 2). Mechanical rotational atherectomy (MRA) was performed in 15% of patients (n = 3). The average number of stents was 2.35 ± 0.1. Penultimate kissing balloon inflations (PUKBI) were achieved in all patients (with a final POT inflation following this in all cases). Figs. 3 and 4 display the pre- and post-PCI angiographic appearances for all patients. OCT was performed at the conclusion of all cases to confirm adequate stent apposition and expansion (Figs. 5 and 6 display representative OCT findings). All procedures were performed with intraprocedural unfractionated heparin (target ACT 250-300) with no use of glycoprotein IIb-IIIa inhibitors or bivalirudin. QCA data is presented in Table 3.
Clinical outcomes
Complete follow-up data for all patients was available to 12 months with no loss to follow-up. One patient engaged in clinical follow-up but declined to undergo a 12-month ESE owing to living in a regional area without easy access to testing facilities; this patient was well (CCS 0/NYHA I) with no symptoms suggestive of myocardial ischemia. Clinical data is shown in Table 3. An additional patient presented more than 12 months post PCI with indeterminate symptoms and underwent coronary angiography that demonstrated moderate to severe stenosis in the proximal to ostial portion of the side-branch stent; this was managed medically given satisfactory symptomatic control on medical therapy. There were no cases of possible, probable or definite stent thrombosis.
Bench testing findings
The bench test demonstrated satisfactory ostial morphology on fluoroscopic and photographic assessment. Stent coverage was satisfactory without peri-bifurcation metallic crowding and no evidence of geographic miss around the carina (Figs. 1 and 7). End-on fluoroscopy demonstrated circular expansion of the SB ostium (Fig. 8).
General discussion
The data presented demonstrates the potential role for the CFCT technique in the treatment of coronary bifurcation disease. Bifurcation management represents an ongoing area of evolution and debate owing to sub-optimal outcomes when compared with the management of non-bifurcation disease and, as a result, has no universally accepted single technical solution [1].
The data from this cohort suggests that the CFCT strategy is a potentially reproducible method that minimizes strut material in the peri-bifurcation region and maximizes the ability for SB re-entry to facilitate a penultimate kissing balloon inflation (PUKBI). Bench testing and OCT findings indicate that the CFCT technique results in satisfactory coverage of the side-branch ostium. The technique emphasizes the role of optimal POT balloon inflation to cause carinal modification and to utilize extrusion of the peri-SB-ostial MV strut to facilitate proximal SB ostial coverage [26]. The CFCT technique appears to be a safe strategy and potentially holds promise as an additional tool in the armamentarium of contemporary bifurcation techniques. The findings also indicate the technical feasibility and suitability of this approach when using an everolimus-eluting, abluminally coated, bioresorbable-polymer Pt-Cr DES (SYNERGY, Boston Scientific, Marlborough, MA, USA). The clinical data is supported by the above-described bench testing findings with this platform. The complete resorption of the polymer (poly-DL-lactide-co-glycolide [PLGA]) with this platform within approximately 16 weeks and reduced levels of vessel inflammation may be particularly well suited to the bifurcation setting [34]. The twelve-month major adverse cardiac and cerebrovascular event (MACCE) rate in the present study was 5%, owing to a single patient with symptom-driven TLR for TLF. The DKCRUSH III study reported a 12-month MACCE of 6.2% in the DK-crush arm and 16.3% in the culotte arm, with TLR of 2.4% and 6.7% respectively [4]. The findings are also comparable to the DKCRUSH V study where TLR was 5% in the DK-crush arm [35]. The current study findings appear to be in keeping with the original work on the initial iteration of the CFCT technique presented by Rajdev et al. [20]. There were no instances of stent thrombosis, procedure-related mortality or 12-month mortality in the present study. The technique is reproducible, with potentially greater predictability for SB re-crossing and PUKBI (prior to final POT inflation) than other techniques [26]. It is postulated that SB re-access for the PUKBI is enhanced by the cone-flare inflation (CFI) prior to the intermediary KB step. The CFI step also serves to ensure apposition of the side branch stent to the ostium and acts as a high-pressure post-inflation in the event that it is not possible to deliver a non-compliant balloon through the jailed stent. In the present cohort, PUKBI was achieved in all patients, which compares favorably with reported rates of 75-90% of patients in the DK Crush literature [12,36]. Utilizing the initial iteration of the technique, Rajdev et al. also found a higher success rate re-crossing side branches, with a shorter time taken, compared with conventional crush stenting [20]. Sixty to seventy per cent of in-stent restenosis in bifurcation stent strategies is reported to occur at the neo-carina [4,37]. The CFCT technique minimizes stent struts at the carina through precise positioning of the proximal edge of the SB stent level with, but not beyond, the distal vertex of the SB ostium. The CFCT technique then relies on the Murasato POT to push main vessel stent struts into the proximal vertex of the SB ostium rather than having two or three layers of stent struts with significant strut deformation in this region (see Figs. 1-3) [26,38]. Accurate positioning of the ostial SB stent is therefore a crucial part of the CFCT technique.
Precise positioning of the stent, so that its proximal edge is level with, but does not extend beyond, the distal vertex of the SB ostium, will result in adequate final SB ostial coverage at the proximal vertex following Murasato optimal POT and PUKBI (Fig. 1). The dichotomy of either complete ostial coverage with DK-crush or culotte versus none with provisional SB management may be better addressed by the middle ground that the CFCT strategy represents.
Study limitations
The present study has several limitations. The first is the observational (albeit prospective) nature of the study with a relatively small sample size. The small sample size itself is in keeping with other technical feasibility studies in the bifurcation field [39,40]. The purpose of this was to attain data and outcomes in an organized, systematic, prospective way regarding current practice of a technique with limited published findings but growing adoption. The study is also limited by the absence of a randomized study design with a comparator arm. Furthermore, the procedures were performed by relatively high-volume individual PCI operators (150-200+ PCI cases per year) and this may limit applicability to lower-volume operators. Countervailing this, a strength of the CFCT technique is its relative predictability and safety. The study would also have been enhanced by routine angiography with intravascular imaging at 12 months in all patients (rather than clinically driven imaging), but this would not have been appropriate given the observational nature of the registry and the fact that routine angiography is not used to check stent patency.
Conclusion
The CFCT technique for bifurcation coronary artery stenosis appears to be safe and feasible, with satisfactory clinical outcomes at 12 months in this prospective cohort. Expanded datasets to confirm this feasibility data will include angiographic and OCT follow-up, and subsequent randomized controlled data.
Declaration of Competing Interest
The cost of the registry was funded by an institutional grant from Boston Scientific Incorporated. ACC is a clinical proctor for Abbott and Edwards Lifesciences and has received advisory board and/or consulting fees from Boston Scientific, Medtronic and Abbott. SGW is a clinical proctor for Abbott and Edwards Lifesciences and has received research grant support from Abbott and Biotronik. AI is a clinical proctor for Edwards Lifesciences. SVC is a clinical proctor for Abbott. The remaining authors have nothing to disclose.
UBIAD1 Mutation Alters a Mitochondrial Prenyltransferase to Cause Schnyder Corneal Dystrophy Background Mutations in a novel gene, UBIAD1, were recently found to cause the autosomal dominant eye disease Schnyder corneal dystrophy (SCD). SCD is characterized by an abnormal deposition of cholesterol and phospholipids in the cornea resulting in progressive corneal opacification and visual loss. We characterized lesions in the UBIAD1 gene in new SCD families and examined protein homology, localization, and structure. Methodology/Principal Findings We characterized five novel mutations in the UBIAD1 gene in ten SCD families, including a first SCD family of Native American ethnicity. Examination of protein homology revealed that SCD altered amino acids which were highly conserved across species. Cell lines were established from patients including keratocytes obtained after corneal transplant surgery and lymphoblastoid cell lines from Epstein-Barr virus immortalized peripheral blood mononuclear cells. These were used to determine the subcellular localization of mutant and wild type protein, and to examine cholesterol metabolite ratios. Immunohistochemistry using antibodies specific for UBIAD1 protein in keratocytes revealed that both wild type and N102S protein were localized sub-cellularly to mitochondria. Analysis of cholesterol metabolites in patient cell line extracts showed no significant alteration in the presence of mutant protein indicating a potentially novel function of the UBIAD1 protein in cholesterol biochemistry. Molecular modeling was used to develop a model of human UBIAD1 protein in a membrane and revealed potentially critical roles for amino acids mutated in SCD. Potential primary and secondary substrate binding sites were identified and docking simulations indicated likely substrates including prenyl and phenolic molecules. Conclusions/Significance Accumulating evidence from the SCD familial mutation spectrum, protein homology across species, and molecular modeling suggest that protein function is likely down-regulated by SCD mutations. Mitochondrial UBIAD1 protein appears to have a highly conserved function that, at least in humans, is involved in cholesterol metabolism in a novel manner. Introduction Schnyder corneal dystrophy [SCD, MIM 121800] [1,2] is an autosomal dominant eye disease characterized by an abnormal deposition of cholesterol and phospholipids in the cornea [3,4]. The resultant bilateral corneal opacification is progressive. Approximately 50% of SCD patients have corneal crystalline deposits [5] which represent cholesterol crystals. Of great interest, two-thirds of affected individuals are hypercholesterolemic [4]. Unaffected individuals in SCD pedigrees may also demonstrate hypercholesterolemia, thus it has been postulated that the corneal disease results from a local metabolic defect of cholesterol processing or transport in the cornea. A review of 115 affected individuals from 34 SCD families identified by one of the authors (JSW) since 1989, confirmed the finding that the corneal opacification progressed in a predictable pattern dependent on age [5,6]. All patients demonstrated corneal crystals or haze, or a combination of both findings. While patients have been diagnosed as young as 17 months of age, the diagnosis may be more challenging if crystalline deposits are absent. In acrystalline disease, onset of visible corneal changes may be delayed into the fourth decade [7]. 
Although many patients maintained surprisingly good visual acuity until middle age, complaints of glare and loss of visual acuity were prominent and increased with age. Disproportionate loss of photopic vision as compared to scotopic vision was postulated to be caused by light scattering by the corneal lipid deposits. Surgical removal of the opacified cornea was reported in 20 of 37 (54%) patients 50 years of age and 10 of 13 (77%) of patients 70 years of age [5]. Recently, several groups described the identification of mutations in human SCD patients in a gene with no prior connection to corneal dystrophies or cholesterol metabolism [8][9][10][11][12]. The gene, UBIAD1 (italics is used to indicate the gene), is predicted to encode a membrane protein containing a prenyltransferase domain similar to a bacterial (E. coli) protein, UbiA. The human gene, UbiA prenyl-transferase Domain containing 1 (UBIAD1), spans 22 kb and the locus gives rise to approximately three different transcripts with up to five unique exons [9]. To date, mutations have been described exclusively in exons 1 and 2, which encode a discrete transcript. Thirty-one apparently unrelated families have been examined and fifteen different mutations have been characterized. Genetic analysis of families revealed a putative mutation hotspot that altered an asparagine at position 102 to a serine reside [11]. Cumulatively, 12/31 (39%) of apparently unrelated families possess this single hotspot alteration. Thus, a major unresolved issue is whether all mutations have similar effect and whether protein activity is up-or down regulated by familial mutations. The current study examined newly recruited SCD families in order to investigate critical aspects of UBIAD1 protein structure and function. Protein homology across species and conservation of residues mutated in SCD were analyzed. The subcellular localization of UBIAD1 was determined and several forms of cholesterol were quantitated in immortalized peripheral blood mononuclear cell lines derived from SCD patients. Finally, protein threading was utilized to construct a three dimensional model of membrane-bound UBIAD1. The model allowed functional consequences of SCD mutations to be assessed and likely substrates identified that may offer novel therapeutic approaches. Demographics of New SCD Families Ten affected probands with SCD from ten different families were recruited. Six families resided in the United States, Families AA, GG, II, KK, LL, and MM. Four families resided out of the United States: Family CC from Japan, EE from Taiwan, N from Germany, and F1 from Finland. Clinical features of Family N were previously published [13]. No known history of SCD was discovered in five families, EE, GG, II, KK and LL. There was a known family history of SCD in the remaining five families and in three of these families (AA, F1, N) more than one affected individual participated in our study. Affected patients demonstrated findings of SCD including superficial corneal crystals ( Figure 1A, top). This study includes a first report of SCD in a family of Native American ancestry and the 69 year old proband demonstrated diffuse cornea haze with scattered superficial crystals and peripheral arcus lipoides ( Figure 1B). Probands from other families had similar corneal findings ( Figure 1C and 1D). Table 1 [7][8][9][10][11]. Figure 1A-1D (bottom) shows proband sequencing in UBIAD1 for Families GG, AA, KK, and LL. 
Five families exhibited novel mutations: A97T (Family GG), D112N (LL), V122E (AA), V122G (F1), and L188H (EE). Five newly analyzed families possessed the same N102S mutation: Families CC, II, KK, MM, and N. Mutations of the N102 residue are shown as distinct in Table 1, but it should be noted that some families may be distantly related and share an N102S mutation due to a founder effect. Over 220 chromosomes from unrelated CEPH individuals were sequenced and examined at the site of each novel mutation. No alterations were found in these healthy individuals, confirming that these mutations are likely associated with SCD and not rare polymorphisms. A phenotype-genotype discrepancy was noted in Family F1, where a patient was diagnosed as affected by corneal exam but did not possess a V122G mutation. This may be due to difficulties of making the diagnosis of SCD in some patients and/or families [9]. The family and mutation are included in this study as evidence strongly suggests this is a valid SCD alteration. This includes the fact that the same residue was mutated (V122E) in the Family AA proband but not his unaffected sister. Examination of DNA from over 110 healthy individuals failed to indicate the presence of rare polymorphism(s) at this codon.
Highly Conserved UBIAD1 Residues are Mutated in SCD
The entire protein sequence was examined across species, with a focus on residues mutated in SCD, because mutation of conserved residues could suggest interruption of an ancient function. A previous analysis examined homology across 23 amino acids of a putative prenyltransferase active site [11,14]. UBIAD1 homologs were identified in 19 species and protein sequences were aligned using ClustalX [15] (Figure 2). Homology was high among mammals based upon pairwise alignment scores. Compared to human, chimp was 99.7% similar, mouse 92%, rat 92%, cattle 91%, and dog 89%. The protein was generally conserved in non-mammalian vertebrates. Compared to human, similarity to chicken was 81.7%, zebrafish 78.9%, and fruitfly 59.6%. Locations of 17 amino acids mutated in SCD are indicated (Figure 2A, 2B). Fifteen out of 17 (88%) were universally conserved in all 19 organisms examined, from sea urchins to humans, including A97, D112, V122, and L188. Groups of SCD mutations were clustered in regions of the protein exhibiting the highest degree of conservation between species. These were separated by regions of the protein that are less conserved. Based upon the alignment, a phylogenetic tree was created (Figure 2C). The tree is consistent with the high conservation of the protein in mammalian species and lesser but substantial conservation in other vertebrates.
Linear and 2D Protein Models
A linear diagram and 2-D model of UBIAD1 in a lipid membrane (Figures 3A and 3B) demonstrate the number and location of newly reported and previously published familial SCD mutations [8-12]. Family GG possessed the most N-terminal SCD alteration yet described, A97T. N102S, the site of 17 familial mutations (41% of families), is located at the first transmembrane-spanning region. The mutated amino acids (Figure 3B) occur in regions of the protein on one side of the membrane (Loops 1-3). Loop 1 of the protein is affected by 9/10 of these newly reported mutations. Other mutations described in this study, D112N and two alterations at V122 (V122E and V122G), appear to affect aqueous portions of Loop 1. The single Loop 2 mutation is L188H, which extends this cluster of mutations towards the C-terminus.
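The pairwise similarity scores and phylogenetic tree described above were generated with ClustalX; purely as an illustration, the sketch below performs an analogous computation with Biopython. The input file name and the record identifier for the human sequence are hypothetical.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical pre-computed multiple sequence alignment of UBIAD1 homologs (FASTA format).
alignment = AlignIO.read("ubiad1_homologs_aligned.fasta", "fasta")

# Identity-based distances: distance = 1 - (fraction of identical aligned positions).
distance_matrix = DistanceCalculator("identity").get_distance(alignment)

human = "human_UBIAD1"  # assumed record id for the human sequence
for name in distance_matrix.names:
    if name != human:
        identity = 100.0 * (1.0 - distance_matrix[human, name])
        print(f"{name}: {identity:.1f}% identical to human")

# Neighbor-joining tree from the same distance matrix.
tree = DistanceTreeConstructor().nj(distance_matrix)
Phylo.draw_ascii(tree)
```

Note that identity-based distances are only a rough proxy for ClustalX pairwise alignment scores, so the percentages produced this way would not exactly match those reported above.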
Localization of UBIAD1 to Mitochondria in Keratocytes
To examine whether SCD mutations altered UBIAD1 protein trafficking, the subcellular localization of wild type and mutant human UBIAD1 was examined (Figure 4). Localization within cultured normal human keratocytes of UBIAD1 and protein disulfide isomerase, an enzyme marker for the endoplasmic reticulum, is shown (Figure 4A-4C). Co-localization of UBIAD1 and a subunit of OXPHOS complex I (NADH dehydrogenase), an enzyme in mitochondria, is shown in Figure 4D-4F. UBIAD1 did not co-localize with the endoplasmic reticulum (Figure 4C), but did co-localize with mitochondria (orange in Figure 4F). Figure 5 presents localization of UBIAD1 in SCD and normal keratocytes. Co-localization of SCD mutant UBIAD1 protein and the OXPHOS complex I mitochondrial marker in disease keratocytes (Figure 5A-5C) and normal human keratocytes (Figure 5D-5F) is shown.
Analysis of Cholesterol in SCD Patients
Mutation of UBIAD1 in SCD is thought to result in deregulation of cholesterol and lipid metabolism, resulting in an abnormal accumulation of these substances in the cornea leading to corneal opacification and visual loss [5]. No significant differences were observed in total cholesterol, cholesteryl ester, and unesterified cholesterol in SCD and healthy patient B cell lines (Table 2).
Protein Threading Model of UBIAD1 Protein
To examine UBIAD1 structure-function relationships and assess the potential impact of SCD mutations, three-dimensional (3D) modeling was performed using protein threading. Available X-ray structures of prenyl-converting enzymes were examined, including a recently developed model of the all-alpha-helical E. coli UbiA [16-19]. Modeling using the Molecular Operating Environment (MOE) indicated that UbiA, but not other proteins, possessed an arrangement of alpha-helical structural elements that could be superimposed on UBIAD1 (Figure 6A). The positional placement of geranylpyrophosphate and a single magnesium cation were extracted from the E. coli UbiA model and fitted into the model of UBIAD1. A second magnesium cation was manually added to the model due to an additional aspartate close to the putative binding site of the pyrophosphate moiety in UBIAD1. PROCHECK assessment of stereochemical quality of the model obtained from MOE and refined using YASARA indicated that 86.7% of all amino acid residues were located in the most favored area and only three residues were in disallowed (uncertain) loop regions. All parameters evaluated were better (overall G-factor) than similar values for an analogous X-ray crystal structure at 2 Å resolution. Inspection of the fold quality revealed a quality indication of 94%, with low quality scores in only five small regions. Over 30 models were generated and evaluated to obtain the model shown in Figure 6B and 6C. Transmembrane helices created an approximate circular pattern to form a substrate binding cleft on one side of the membrane. The N102 residue occupied a position where the first TM helix exited the membrane, and its sidechain pointed inwards towards the center of a putative prenyldiphosphate binding pocket (Figure 6C). A docked farnesyldiphosphate is shown with magnesium cations in the active site. The prenyl substrate appeared to approach the active site containing N102 from the central cavity. The model allowed docking of potential ligand(s)/substrate(s) to be examined, including those involved in reactions catalyzed by UbiA (Figure S1) [16,17]. Database searches revealed homology between the binding site in the model and 1,4-dihydroxy-2-naphthoate octaprenyltransferases (e.g., Q17BA9_AEDAE).
Database searches revealed homology between the binding site in the model and 1,4-dihydroxy-2- Loop, see Figure 3B. f CA, Canadian Native American. g Nucleotides re-numbered based upon updated RefSeq NM_013319.2. h Clinical description of family N originally described in [14]. doi:10.1371/journal.pone.0010760.t001 naphthoate octaprenyltransferases (e.g. Q17BA9_AEDAE). This suggested that similar to aromatic prenyltransferases a second (aromatic?) substrate may be involved in catalysis, e.g. 4-hydroxybenzoate (cf. UbiA) or a 1,4-dihydroxy-naphthaline derivative. For the latter case, bacterial menaquinone (vitamin K-2) is the product of a similar prenylation reaction, a Docking simulations were compared using models of wild type and SCD mutant (N102S) UBIAD1 protein and substrates, farnesyldiphosphate and a 1,4-dihydroxy aryl compound ( Figure 6D and 6E). The diphosphate binding site was identified in the putative active site of both models in close proximity to N102. In the wild type protein, N102 formed weak hydrogen bonds to the 1,4-dihydroxy aryl compound (dotted line) which were lost upon mutation to a serine residue. Lastly, substrate docking was examined using prenylated aromatics with a role in human metabolism. This showed that menaquinone (vitamin K) fits excellently into the substrate binding cleft of UBIAD1 models ( Figure 6F). While this class of molecules is not a likely substrate for UBIAD1, they may be ligands that are transferred to protein binding partners. Amino acids mutated in SCD were found in the vicinity of the active site, including A97, N102, D112, V122, and L188. Figure 7A, 7B, S2, and S3 show side and top views, respectively. The N102 residue is shown as a spacefill atom with a docked farnesyldiphosphate. Significantly, a previously described polymorphism, S75F [8,9], was not identified as functionally important to substrate docking or catalysis. Discussion Recruitment and analysis of new families with SCD continues to facilitate investigation of the genotypic spectrum of this disease. Of ten new families recruited for this study, five possessed novel UBIAD1 alterations. SCD mutations A97T (Family GG), V122E (Family AA), and V122G (Family F1) expand the size of the Loop 1 mutation cluster ( Figure 3B). Similarly, L188H (Family EE) expands the Loop 2 cluster. All five of the novel amino acid substitutions represent non-conservative changes that are consistent with previously described alterations (Table 1): A97T (nonpolar to polar), D112N (negative to neutral), V122E (nonpolar to polar-negative), V122G (aliphatic to non-aliphatic), and L188H (nonpolar to polar-positive). Three distinct lines of evidence indicate that SCD results from loss of function of UBIAD1 protein due to a mutation: genetics, experimental mutagenesis of UbiA, and modeling of substrate-UBIAD1 interactions. There are three cases (Table 1) where UBIAD1 amino acids were mutated to other residues in SCD families, aspartic acid 112 to an asparagine or a glycine, leucine 121 to a valine or a phenylalanine, and valine 122 to a glutamic acid or a glycine. There were significant chemical differences between resulting mutant amino acids and from wild type. For example, substitution of non-polar valine 122 with either polar, negative glutamic acid or non-polar, neutral glycine results in SCD. This suggests that loss of valine 122 may be necessary for the formation of SCD rather than a gain of function. Despite low overall homology between E. 
E. coli UbiA and human UBIAD1 proteins, comparison of individual amino acids aligned in Figure 6A suggests that SCD mutations may result in loss of function of UBIAD1. In prior work on UbiA [16-18,20], five aspartic acid residues were judged as crucial for catalytic activity based on modeling. These were individually mutated, and all five mutations inhibited product formation by >95%. Figure 6A shows that mutagenized UbiA residues (R137 and D191) aligned with amino acids in UBIAD1 that are mutated in human SCD, L181 and D236. Thus, mutagenized UbiA amino acids that resulted in loss of function aligned to UBIAD1 residues mutated in SCD. This is a second piece of evidence that an SCD mutation may lead to loss of function of UBIAD1. Modeling of UBIAD1 substrate docking indicates critical roles for several residues mutated in SCD by suggesting that mutation of these residues would block critical steps in catalysis. For example, naphthalin-1,4-diol was docked as a speculative second substrate of UBIAD1 and fitted nicely into the binding pocket (Figure S2). However, an SCD mutation, N102S, changed binding of this substrate completely, rendering its prenylation at position 3 impossible (Figure 6D and 6E). Although detailed modeling of active site residues is full of uncertainty, this approach is supported by modeling of UbiA that predicted the loss of enzyme activity and was experimentally verified [16]. Further, modeling of UbiA was able to connect the decreases in enzyme activity to specific chemical functions of mutated residues.
[Figure 4 legend: Cellular localization of wild type human UBIAD1. Co-localization within cultured normal human keratocytes of UBIAD1 protein and protein disulfide isomerase, an enzyme in endoplasmic reticulum, is shown in panels A-C. Co-localization of UBIAD1 and OXPHOS complex I, an enzyme in mitochondria, is shown in D-F. UBIAD1 labeling is red (B and E). Protein disulfide isomerase and OXPHOS I are green (A and D). UBIAD1 did not co-localize with the endoplasmic reticulum (C), but did co-localize with mitochondria (co-localizing red and green show as orange in F). Bar is 50 µm and applies to all panels.]
The result that UBIAD1 did not localize with a marker for endoplasmic reticulum (Figure 4), while wild type and N102S mutant UBIAD1 did co-localize with a mitochondrial marker, OXPHOS complex I (Figure 4F), demonstrates that mislocalization of N102S mutant protein is not a factor in SCD. Mitochondrial localization is surprising in light of a previous report demonstrating interaction between UBIAD1 (also known as TERE1) and apolipoprotein E [21,22]. To our knowledge, a mitochondrial localization for apolipoprotein E has not been reported. However, some UBIAD1 immunostaining was localized outside of mitochondria in these analyses, making interaction with apolipoprotein E outside of mitochondria possible. SCD has been associated with deregulation of cholesterol metabolism in the cornea as well as systemic hypercholesterolemia [3]. The UBIAD1 gene had been shown to be expressed in B-cells [20]; however, we found no significant differences in levels of cholesterol metabolites in extracts of B-cell lines established from SCD patients compared to an unaffected family member and healthy donors (Table 2).
This may indicate that UBIAD1 has a specialized corneal function, perhaps relying on specific protein-protein interactions (such as with apolipoprotein E) or on post-translational modification of binding partners, perhaps with a cholesterol or cholesterol-like moiety [23]. In this regard, substrate docking simulations indicated that menaquinone fits well into the interior of UBIAD1 (Figure 6F). This may be significant since a relationship between menaquinone and cholesterol metabolism has been suggested by prior publications [24]. Experiments to determine if UBIAD1 will accept oligoprenyl diphosphates as a substrate or ligand may be informative, but the 3D protein model clearly shows an optimal binding pocket for this type of compound (Figure 6B and 6C). UBIAD1 may be an aromatic prenyl transferase, as indicated by its closest known protein homologues. If so, a second substrate or ligand moiety may be involved in enzyme catalysis, e.g., 4-hydroxybenzoate or a 1,4-dihydroxy-naphthaline derivative. The high degree of conservation of the protein across species, and particularly of residues mutated in SCD, indicates that the protein may have an essential or at least ancient metabolic function. These function(s) may play a role outside the cornea, as the gene is widely expressed in human tissues [20] and the protein is present in species without eyes, such as the sea urchin (Figure 2). Modeling of UBIAD1 indicates the possibility of aromatic prenylation as an enzyme activity. This biochemistry may evolutionarily be at least as old as aerobic life, and it has been described in human metabolism. Accordingly, UBIAD1 may have a common origin directly from E. coli UbiA, but may not necessarily act as a transferase (see Figure S1). Presently, the only treatment for SCD is corneal replacement by penetrating keratoplasty (PKP) once corneal opacification causes decreased vision. PKP is performed in the majority of patients with SCD above the age of 50 years [5]. Unfortunately, there are no current therapies to prevent the progressive lipid deposition in the cornea which results in this visual loss. Prior studies have demonstrated that normalizing blood cholesterol levels does not affect the relentless deposition of corneal lipid that occurs with age [25]. Further understanding of the impact of UBIAD1 gene mutations in SCD will hopefully lead to interventional strategies to prevent the relentless accumulation of corneal lipid which results in visual loss in these patients. Our results suggest that UBIAD1 protein function is lost or decreased by SCD mutations. Thus, therapeutic analogs of substrates which were successfully docked to the UBIAD1 model (Figure S4) may further inhibit rather than restore protein function. Examination of protein binding partners may allow useful therapeutic targets to be identified.
Patients and Samples
New patients were recruited as previously described under Institutional Review Board approval of the University of Massachusetts Medical Center and Wayne State University School of Medicine [5]. IRB approval was also obtained from the NIH Office of Human Subjects Research. Creation of cell lines and analysis of patient samples described in this study are covered by the IRB-approved protocol. All adults and parents of minors who participated in the study provided written informed consent under the research tenets of the Declaration of Helsinki. Affected probands were recruited from other physicians and also were self-recruited by internet contact with JSW.
Family history, ophthalmologic examination, and blood samples were obtained for all affected patients. When possible, other family members were recruited to confirm the inherited nature of the SCD mutations. Ophthalmologic examination included assessment of visual acuity and slit lamp examination of the cornea, detailing the location and characteristics of the corneal opacity. Notation was made as to the presence of central corneal opacity, mid-peripheral haze, arcus lipoides, and corneal crystalline deposition. Slit lamp photographs were obtained whenever possible to document the diagnosis. DNA Extraction and Sequencing DNA was extracted using standard methods and either Puregene (Gentra/Qiagen, Valencia, CA) or other Qiagen reagents (All Prep DNA/RNA Kit). Genetic analysis of patient DNA was performed as previously described [8-10], except that FastStart PCR reagents (Roche, South San Francisco, CA) and ABI (Foster City, CA) thermal cyclers were used. Sanger sequencing was performed using Big Dye reagents (ABI) and subjected to chromatography using a 3730 Genetic Analyzer (ABI). Sequence chromatograms were analyzed using Sequencher, v4.8 (GeneCodes, Ann Arbor, MI). Over 100 control DNAs from healthy donors were examined by double-stranded sequencing for each mutation to ensure that mutations were novel, associated with SCD, and unlikely to be rare polymorphisms. Healthy DNA samples were obtained from the Dean Lab database (MD) and the Coriell Institute for Medical Research (Camden, NJ). Homology and Phylogeny The following UBIAD1 sequences from 19 indicated species were identified using the Ensembl database: NP_037451.1 …
[Figure 6 legend fragment: … Figure 6D upon in silico mutation of UBIAD1 from asparagine 102 to serine (arrow). The aromatic substrate is no longer recognized by N102, but by S69 and, as before, by R235 and P64. C2 of the aromatic substrate is no longer positioned correctly to allow prenylation. (F) Active site of UBIAD1 with a menaquinone-farnesyl derivative that optimally docks to the protein. Substrates with longer fatty acid tails were also successfully docked. The interaction is stabilized by hydrogen bonds (dashed lines) with N102 and R235. R235 may be influenced by neighboring residues, N232, N233, and D236, which cause SCD when altered. The quinone moiety and farnesyl chain are recognized by P64, F107, and other indicated residues via hydrophobic interactions.]
… [15]. A global alignment performed on all proteins was followed by local optimization of overlapping, sequential regions of protein in approximately fifty amino acid increments. Localization of Human UBIAD1 Normal human keratocytes were purchased from ScienCell Research Lab (Carlsbad, CA). Schnyder corneal dystrophy and normal human keratocytes were cultured at 37°C in Fibroblast Medium (catalogue no. 2301), also obtained from ScienCell Research Lab. For immunofluorescence labeling experiments, the keratocytes were rinsed three times with DPBS before fixing with 2% formaldehyde for 10 minutes at room temperature. Cells were then blocked with 10% FBS in DPBS (FBS blocking solution) for 30 minutes, and then treated 15 minutes with avidin/biotin blocker (Vector Laboratories, Burlingame, CA), with a DPBS rinse between each step of the procedure described by the manufacturer. Chicken anti-UBIAD1 was diluted to 5 µg/ml in FBS blocking solution containing 0.2% Triton X-100, and incubated with keratocytes for one hour at room temperature. After three five-minute rinses with DPBS, biotinylated goat anti-chicken IgY (catalogue no.
103-065-155 from Jackson Immunoresearch, West Grove, PA) diluted to 5 µg/ml in FBS blocking solution was incubated with keratocytes for one hour. This primary labeling of UBIAD1 protein was then visualized by incubating keratocytes with 5 µg/ml Alexa 594 (red) streptavidin diluted in DPBS (catalog no. S32356, Molecular Probes, Eugene, Oregon). To determine the subcellular localization of UBIAD1, keratocytes were further incubated one hour with either 5 µg/ml mouse IgG2b monoclonal anti-protein disulfide isomerase (catalogue no. S34253, Molecular Probes), an endoplasmic reticulum marker, or 5 µg/ml mouse IgG1 monoclonal anti-OXPHOS Complex I subunit, NADH dehydrogenase (catalogue no. A31857, Molecular Probes), a mitochondrial marker. This was followed by incubation with 5 µg/ml Alexa Fluor 488 (green) anti-mouse IgG (catalogue no. A11029, Molecular Probes) for one hour to label the subcellular markers. All antibodies were diluted in FBS blocking solution. Cholesterol Measurements Lymphocytes were isolated from patient blood samples using lymphocyte separation medium and were immortalized using Epstein-Barr virus. Standard culture conditions utilized RPMI 1640 medium (Invitrogen), 15% fetal bovine serum (Hyclone, Waltham, MA), and 26 L-glutamine (Invitrogen). Six-well plates were used to grow approximately 1 million cells per well, which were rinsed three times each with Dulbecco's phosphate-buffered saline (DPBS) plus Mg2+, Ca2+, and 0.2% bovine serum albumin (BSA), and then DPBS plus Mg2+ and Ca2+. Cells were harvested from wells by scraping into 1 ml of distilled water, and then processed as described previously [26]. Lipids were extracted from an aliquot of cell suspension using the Folch method [27]. The cholesterol content of cells was determined according to the fluorometric method of Gamble et al. [28]. Protein content was determined on another aliquot of cell suspension by the method of Lowry et al. using BSA as a standard [29]. Protein Models UBIAD1 transmembrane helices and topology were analyzed using the HMMTOP program and server [30,31]. The Brookhaven Protein Data Bank (PDB) and PHYRE (Protein Homology/analogY Recognition Engine) were searched for proteins homologous to UBIAD1 using BLASTp [31]. Homology between UBIAD1 and other prenyltransferases was examined using MOE (Molecular Operating Environment, Chemical Computing Group Inc., Montreal, Canada). Transmembrane helices were manually examined by using available X-ray structures of prenyl-converting enzymes as templates, such as prenyl synthases (cyclases), protein prenyl transferases, and the recently developed model of the all-alpha-helical E. coli UbiA prenyltransferase [16,17]. Alignment was performed as previously described [16]. The positional placement of geranyl pyrophosphate and a single magnesium cation were extracted from the E. coli UbiA model. The model obtained from MOE was refined using the molecular dynamics refinement tool YASARA, and stereochemical quality was analyzed with PROCHECK [32]. All parameters evaluated were better (overall G-factor) than required for an analogous X-ray structure of better than 2 Å resolution. Inspection of the fold quality was done with ERRAT [33]. Substrate suitability was approached by examining homologous proteins in the Uniprot Knowledgebase Release 15.2 database. Substrates examined are available upon request. Substrate binding and dynamics (4-hydroxybenzoic acid and naphthalin-1,4-diol) were evaluated using automated docking and molecular dynamics simulations (GOLD [34]).
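The global-then-local alignment strategy described under Homology and Phylogeny (a global alignment of all proteins followed by local optimization of overlapping regions of roughly fifty residues) can be illustrated with a short sketch. This is not the authors' actual pipeline, which relied on Ensembl sequences, MOE and related tools; the Biopython calls, scoring parameters, and placeholder peptide strings below are assumptions made purely for illustration.

# Minimal sketch, assuming Biopython is available; sequences are toy placeholders.
from Bio import Align
from Bio.Align import substitution_matrices

def global_then_local(seq_a, seq_b, window=50):
    aligner = Align.PairwiseAligner()
    aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
    aligner.open_gap_score = -10.0
    aligner.extend_gap_score = -0.5

    # Step 1: one global alignment over the full-length sequences.
    aligner.mode = "global"
    global_score = aligner.score(seq_a, seq_b)

    # Step 2: re-score overlapping ~window-residue pieces locally, mimicking
    # the "local optimization in ~50 amino acid increments" described above.
    aligner.mode = "local"
    local_scores = []
    for start in range(0, max(len(seq_a) - window, 0) + 1, max(window // 2, 1)):
        piece = seq_a[start:start + window]
        local_scores.append((start, aligner.score(piece, seq_b)))
    return global_score, local_scores

# Example with placeholder peptide strings (not real UBIAD1 sequences):
g, windows = global_then_local("MASTLLRNGALVAHLL", "MASLLLRNGSLVAHIL", window=8)
print(g, windows)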
Web Resources The …
Figure S1 Key enzymatic prenylation reaction catalyzed by UbiA during biosynthesis of ubiquinone [16,17]. Prenylation of 4-hydroxybenzoic acid by oligoprenyl diphosphates is shown (n>1). A two-substrate reaction is shown, similar to that proposed for human UBIAD1 (see Discussion).
[Supporting figure legend, beginning truncated: …] Figure 3B for comparison, to identify SCD mutations in each loop. Two views are shown, a side view (left side) and a top view (right side). These highlight the loop regions containing amino acids implicated in SCD. Loop 1 (containing amino acids A97 to R132) is shown in orange, loop 2 (Y174 to A184) in blue, and loop 3 (L229 to S257) in green. Mutated S102 is shown as a space-fill atom and a docked farnesyl diphosphate is shown as a stick representation (red).
2016-05-04T20:20:58.661Z
2010-05-21T00:00:00.000
{ "year": 2010, "sha1": "223692439084df68853ce74c4b1f98d0f9a7398b", "oa_license": "CC0", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0010760&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "355f3ac2e9f6cc886b229f8e25b80d8cf7de34ba", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
254074411
pes2o/s2orc
v3-fos-license
Upregulation of miR-215 attenuates propofol-induced apoptosis and oxidative stress in developing neurons by targeting LATS2 Propofol is an intravenous anesthetic agent that commonly induces significant neuroapoptosis. MicroRNAs (miRNAs) have been reported to participate in the regulation of propofol exposure-mediated neurotoxicity. MiR-215, one of these miRNAs, has been found to regulate nerve cell survival. However, the mechanism through which miRNAs regulate propofol exposure-mediated neurotoxicity is still unclear. Real-time PCR was used to detect miR-215 expression level. Cell viability was measured using MTT assay. Cell apoptosis was examined via flow cytometry analysis. ROS, MDA, LDH and SOD levels were assayed through ELISA kits. Dual luciferase reporter assay identified the interaction between miR-215 and large tumor suppressor 2 (LATS2). Protein level was detected using western blot analysis. MiR-215 expression was downregulated in propofol-treated rat hippocampal neurons. MiR-215 mimics promoted cell viability and reduced apoptosis in propofol-treated neonatal rat hippocampal neurons. MiR-215 mimics also caused inhibition of oxidative stress, as evidenced by suppression of ROS, MDA and LDH levels as well as an increase in SOD level. In addition, we found that large tumor suppressor 2 (LATS2) is a target of miR-215 and that miR-215 mimics decreased LATS2 level in propofol-treated neonatal rat hippocampal neurons. Further, LATS2 overexpression suppressed the effect of miR-215 on propofol-induced apoptosis and oxidative stress in neonatal rat hippocampal neurons. Taken together, we demonstrate that miR-215 attenuates propofol-induced apoptosis and oxidative stress in neonatal rat hippocampal neurons by targeting LATS2, suggesting that miR-215 may provide a new candidate for the treatment of propofol exposure-induced neurotoxicity. Background Propofol is an intravenous anesthetic agent commonly used for the induction and maintenance of anesthesia and sedation (Chidambaran et al. 2015). Propofol can lead to significant neuroapoptosis and affect dendrite development and cognitive function, which causes concern about its safety in pediatric anesthesia (Bosnjak et al. 2016). Reported mechanisms of propofol exposure-induced neurotoxicity include calcium dysregulation, mitochondrial fission, abnormal expression of neurotrophic proteins, and neuroinflammation (Wei 2011;Cui et al. 2011;Unoki and Nakamura 2001). Therefore, it is necessary to explore biomarkers that could help prevent and ameliorate propofol exposure-induced neurotoxicity. MicroRNAs (miRNAs), 18-25 nucleotides in length, are endogenous non-coding RNA molecules that regulate biological processes through suppression of target messenger RNA expression (Fabian et al. 2010;Bartel 2009;Shukla et al. 2011;Ji et al. 2019). Increasing evidence has shown that miRNAs are involved in the regulation of propofol exposure-mediated neurotoxicity. For example, Jiang et al. discovered that miR-141-3p contributes to propofol-mediated suppression of neural stem cell neurogenesis (Jiang et al. 2017). Zheng et al. proved that propofol attenuates the neuroinflammatory response of microglia to LPS via regulation of miR-155 (Zheng et al. 2018). In addition, Zhang et al. verified that propofol anesthesia decreases miR-132 levels and reduces the number of dendritic spines in the hippocampus (Zhang et al. 2017).
Interestingly, miR-215 is reduced in ischemic stroke, and its upregulation leads to suppression of nerve cell apoptosis, autophagy and ischemic infarction and to improved neurological deficit via down-regulation of the nuclear factor-κB activator 1/interleukin-17 receptor A pathway. These results suggest that miR-215 plays a neuroprotective role in ischemic injury (Sun et al. 2018). However, the role of miR-215 in propofol exposure-induced neurotoxicity is still unclear. The large tumor suppressor 2 (LATS2) gene maps to human chromosome 13q11-12 (Yabuta et al. 2000). Nuclear LATS2 has been found to activate p53, maintaining the proper chromosome number when the mitotic apparatus is impaired (Aylon et al. 2006). Previously, LATS2 was found to promote p53-mediated apoptosis (Aylon et al. 2010) and reduce the expression of BCL-2 and BCL-x(L) (Ke et al. 2004). Moreover, Brandt et al. found that LATS2 is involved in the production of peripheral nerve sheath tumors (Brandt et al. 2019). LATS2 kinase activation inhibits Yap protein to suppress proliferation and cell cycle exit in the process of neurogenesis (Zhang et al. 2012). Notably, LATS2 is predicted to be a target of miR-215 using bioinformatic analysis. Thus, these findings suggest that miR-215 may be involved in regulation of propofol exposure-induced neurotoxicity through LATS2. In the current study, we investigated miR-215 expression in propofol-treated rat hippocampal neurons. The effects of miR-215 on cell viability and apoptosis were then examined. Furthermore, we explored whether miR-215 modulates propofol exposure-induced neurotoxicity via LATS2. Together, these results suggest a new target for the treatment of propofol exposure-induced neurotoxicity. Neonatal rat hippocampal neuron isolation, culture and transfection The cell isolation procedures in this study were performed in accordance with the Guide for the Care and Use of Laboratory Animals and approved by the Ethics Committee of The First Affiliated Hospital of Nanchang University. Neonatal rat hippocampal neurons were isolated as previously reported (Bhargava et al. 2010). Briefly, 1-2-day-old neonatal Sprague-Dawley rats were sacrificed and the whole brains were collected. The hippocampi were isolated from the neonatal brains, and hippocampal neurons were harvested via collagenase digestion of hippocampus tissues. Subsequently, the hippocampal neurons were cultured as a monolayer at 37°C in a normoxic 95% air and 5% CO2 incubator. The hippocampal neurons (1 × 10^7 cells/well) were seeded in 24-well plates and transfected with 50 nM miR-215 mimics (miR-215) or miR-215 negative control (miR-NC) after 3 days using Lipofectamine 2000 reagent. After transfection for 48 h, the cells were treated with 20 μM propofol for 0, 2, 4, 6 or 12 h in a normoxic 95% air and 5% CO2 incubator. Finally, the cells were examined in the following experiments. Apoptosis assay Cells (10^5 cells) were collected and suspended in Annexin V incubation solution. Then, the cells were stained with 5 μl Annexin V-fluorescein isothiocyanate (FITC) and 5 μl propidium iodide (PI) solution (Beyotime, Shanghai, China) in the dark for 20 min. Apoptosis was then analyzed using flow cytometry. Oxidative stress and ROS measurement Transfected cells were treated with 20 μM propofol for 6 h in a normoxic 95% air and 5% CO2 incubator and then collected. Subsequently, the malondialdehyde (MDA), lactate dehydrogenase (LDH) and superoxide dismutase (SOD) levels were detected via ELISA kits (NJJC Bio Engineering Institute, Nanjing, China).
The reactive oxygen species (ROS) level was examined in cells using 2′,7′-dichlorofluorescin diacetate (DCFDA) for 30 min at 37°C. The cells were observed, and data were analyzed with a microplate reader. Dual luciferase reporter assay The wild-type 3′-UTR sequence of LATS2 contained the miR-215 binding site. Then, site-directed mutagenesis of the putative target site for miR-215 in the wild-type 3′-UTR sequence of LATS2 was performed to generate the mutant-type 3′-UTR sequence, and the site-directed mutagenesis is shown in Fig. 3a. Primers designed with Primer Premier 5.0 were used for amplification. The thermal cycle profile was as follows: denaturation for 20 s at 95°C, annealing for 30 s at 54°C, and extension for 40 s at 72°C. Subsequently, the sequences were inserted into the pmirGLO reporter vector (Promega, Shanghai, China) between the XhoI and SalI restriction enzyme sites using T4 DNA Ligase to generate luciferase reporter constructs, named LATS2-WT and LATS2-MUT. Nucleotide sequences of the constructs were verified through DNA sequencing. HEK293T cells were cotransfected with 50 ng of LATS2-WT or LATS2-MUT and 20 μM of miR-215 or miR-NC for 48 h. Finally, the luciferase activity was determined via the dual luciferase reporter assay system (Promega, Shanghai, China). Statistical analysis The data are presented as means ± SD and were analyzed with SPSS 18.0. Statistical analysis was conducted via two-tailed unpaired Student's t-test or one-way ANOVA with Tukey's test. A P value less than 0.05 was considered statistically significant. Effect of miR-215 on propofol-induced apoptosis in neonatal rat hippocampal neuron To explore the effect of miR-215 on propofol-induced apoptosis, we first examined miR-215 expression in neonatal rat hippocampal neurons. Real-time PCR showed that propofol treatment decreased miR-215 level in a time-dependent manner (Fig. 1a). MiR-215 level was increased in miR-215 mimics-transfected neonatal rat hippocampal neurons treated with propofol (Fig. 1b). MTT assay demonstrated that propofol treatment reduced cell viability, whereas miR-215 mimics enhanced cell viability (Fig. 1c). In addition, apoptosis was increased by propofol treatment, and miR-215 mimics suppressed propofol-induced apoptosis (Fig. 1d). These results indicate that miR-215 has a suppressive role in propofol-induced apoptosis in neonatal rat hippocampal neurons. Effect of miR-215 on propofol-induced oxidative stress in neonatal rat hippocampal neuron We then examined the role of miR-215 in propofol-induced oxidative stress. We first analyzed the ROS level, and the results showed that ROS generation was suppressed by miR-215 mimics in propofol-treated neonatal rat hippocampal neurons (Fig. 2a). MDA and LDH assays revealed that miR-215 mimics decreased MDA and LDH levels (Fig. 2b, c). On the other hand, miR-215 mimics increased the SOD level (Fig. 2d). These findings suggest that miR-215 reduces propofol-induced oxidative stress in neonatal rat hippocampal neurons. LATS2 is a target of miR-215 TargetScan analysis (http://www.targetscan.org/vert_72/) was used to predict the binding site of miR-215. The results showed that LATS2 is a target of miR-215 (Fig. 3a). To confirm the interaction between miR-215 and LATS2, dual luciferase reporter assay was performed and showed that miR-215 mimics decreased the relative luciferase activity of LATS2-WT. However, there was no effect on the relative luciferase activity in HEK293T cells co-transfected with miR-215 mimics and LATS2-MUT (Fig. 3b).
Western blot analysis verified that miR-215 mimics inhibited LATS2 protein level in neonatal rat hippocampal neurons (Fig. 3c). The data indicated that LATS2 is a target of miR-215 and that miR-215 represses LATS2 level. Downregulation of LATS2 induced by miR-215 overexpression in propofol-treated neonatal rat hippocampal neuron We then examined the effect of miR-215 overexpression on LATS2 in propofol-treated neonatal rat hippocampal neurons. Western blot analysis demonstrated that LATS2 protein level was elevated in neonatal rat hippocampal neurons with propofol treatment in a time-dependent manner (Fig. 4a). MiR-215 overexpression exhibited a suppressive role in the propofol-induced increase of LATS2 level by western blot analysis (Fig. 4b). These results indicate that miR-215 overexpression downregulated LATS2 level in propofol-treated neonatal rat hippocampal neurons.
[Fig. 1 legend: Effect of miR-215 on propofol-induced apoptosis in neonatal rat hippocampal neuron. a miR-215 level was measured using real-time PCR. n = 3. *, p < 0.05. **, p < 0.01. * vs 0 h. Statistical analysis was conducted via one-way ANOVA by Tukey's test. b miR-215 level was detected via real-time PCR. Statistical analysis was conducted via one-way ANOVA by Tukey's test. c MTT assay was used to determine cell viability. Statistical analysis was conducted via one-way ANOVA by Tukey's test. d Flow cytometry analysis was used to detect apoptosis. Statistical analysis was conducted via one-way ANOVA by Tukey's test. n = 3. **, p < 0.01. ##, p < 0.01. * vs Control+NC mimics. # vs Propofol+NC mimics]
[Fig. 2 legend: Effect of miR-215 on propofol-induced oxidative stress in neonatal rat hippocampal neuron. a ROS level was examined in neonatal rat hippocampal neuron. Statistical analysis was conducted via one-way ANOVA by Tukey's test. b MDA detection was conducted in neonatal rat hippocampal neuron. Statistical analysis was conducted via one-way ANOVA by Tukey's test. c LDH assay examined the LDH level in neonatal rat hippocampal neuron. Statistical analysis was conducted via one-way ANOVA by Tukey's test. d SOD level from neonatal rat hippocampal neuron was assayed. Statistical analysis was conducted via one-way ANOVA by Tukey's test. n = 3. **, p < 0.01. ##, p < 0.01. * vs Control+NC mimics. # vs Propofol+NC mimics]
MiR-215 attenuates propofol-induced apoptosis and oxidative stress in neonatal rat hippocampal neuron by targeting LATS2 To test the hypothesis that miR-215 affects propofol-induced apoptosis and oxidative stress by targeting LATS2, we co-transfected miR-215 mimics and LATS2 in neonatal rat hippocampal neurons. Western blot analysis revealed that miR-215 mimics inhibited LATS2 protein level, whereas LATS2 overexpression elevated the protein level (Fig. 5a). Moreover, LATS2 overexpression suppressed the increase of cell viability induced by miR-215 mimics (Fig. 5b). Flow cytometry analysis showed that miR-215 mimics inhibited apoptosis, whereas LATS2 overexpression abrogated miR-215-induced inhibition of apoptosis (Fig. 5c). Furthermore, miR-215 mimics suppressed ROS, MDA and LDH levels, and increased the SOD level. On the other hand, LATS2 overexpression increased ROS, MDA and LDH levels, and decreased the SOD level (Fig. 5d, e). These data suggest that miR-215 attenuates propofol-induced apoptosis and oxidative stress in neonatal rat hippocampal neurons by targeting LATS2. Discussion In this study, we found that miR-215 was decreased and LATS2 was increased in propofol-treated neonatal rat hippocampal neurons.
Functional analysis showed that miR-215 has a suppressive role in propofol-induced apoptosis and oxidative stress in neonatal rat hippocampal neurons. Furthermore, we demonstrated that LATS2 is a target of miR-215 and that miR-215 could reduce the propofol-induced LATS2 level. LATS2 overexpression suppressed the effect of miR-215 on propofol-induced apoptosis and oxidative stress in neonatal rat hippocampal neurons. These data imply that miR-215 participates in the regulation of propofol-induced apoptosis and oxidative stress in neonatal rat hippocampal neurons by targeting LATS2. Previous studies have shown that miRNAs are implicated in neurological diseases (Johnson et al. 2008;Asikainen et al. 2010;Hebert et al. 2008). Moreover, miRNA dysregulation plays an important role in neurotoxicity (Kaur et al. 2012). Recently, miR-665 was found to be significantly increased in primary cultured astrocytes treated with propofol, and it inhibited BCL2L1 (Bcl-xL), a suppressor of apoptosis (Sun and Pei 2016). Moreover, miR-34a was discovered to be elevated after propofol treatment, and miR-34a knockdown could inhibit propofol-induced apoptosis. Some miRNAs were reported to reverse the propofol-mediated effect. Wang et al. proved that miR-383 was downregulated by propofol treatment, and it could alter the propofol-induced upregulation of hippocampal neuron apoptosis (Wang et al. 2018a). Twaroski et al. found that miR-21 was decreased in the neurons, and miR-21 overexpression alleviated propofol-caused cell death in human embryonic stem cell-derived neurons (Twaroski et al. 2014). Consistent with the latter studies, our study revealed that miR-215 was downregulated in propofol-treated neonatal rat hippocampal neurons, and miR-215 mimics inhibited propofol-induced apoptosis. These results imply that miR-215 may act as a suppressive factor in propofol-induced apoptosis in neonatal rat hippocampal neurons.
[Fig. 3 legend: LATS2 is a target of miR-215. a TargetScan analysis was used to predict the binding site of miR-215. b Dual luciferase reporter assay confirmed the interaction between miR-215 and LATS2. Statistical analysis was conducted via one-way ANOVA by Tukey's test. c Western blot analysis detected LATS2 protein level in neonatal rat hippocampal neuron. Statistical analysis was carried out through two-tailed unpaired Student's t-test. n = 3. **, p < 0.01]
Oxidative and antioxidative balance is disrupted upon oxidative stress. Oxidative and antioxidative products produced in cells include MDA, LDH, and SOD (Rodrigo et al. 2016). It is reported that the balance between ROS production and scavenging is important in oxidative stress and has protective or damaging effects in several diseases (Aon et al. 2010). Oxidative stress and ROS have been found to be associated with a number of physiological and pathological processes (Huang et al. 2009). Increasing evidence has revealed that a large amount of ROS could directly or indirectly induce oxidative damage to cells (Wang et al. 2018b;Lee et al. 2015). Notably, in our study, we discovered that miR-215 mimics reduced ROS, MDA and LDH levels, and increased SOD generation in propofol-treated neonatal rat hippocampal neurons, suggesting that miR-215 can negatively regulate propofol-induced oxidative stress in neonatal rat hippocampal neurons. Accumulating evidence has shown that miRNAs carry out their functions by targeting mRNAs (Bartel 2009).
A previous study showed that miR-410-3p has a neuroprotective effect on sevoflurane anesthesia-induced cognitive dysfunction by targeting C-X-C motif chemokine receptor 5 (Su et al. 2019). In addition, miR-133a-5p is involved in the protective effect of propofol in hepatic ischemia/reperfusion injury through targeting MAPK6 (Hao et al. 2017). MiR-665 was reported to participate in the neurotoxicity induced by propofol via targeting Bcl-2-like protein 1 (BCL2L1) (Sun et al. 2015). Here we demonstrate that miR-215 could target LATS2. Moreover, co-transfection of miR-215 mimics and LATS2 in propofol-treated neonatal rat hippocampal neurons suppressed the miR-215-induced increase in cell viability and decrease in apoptosis and oxidative stress. The data indicate that miR-215 can attenuate propofol-induced apoptosis and oxidative stress in neonatal rat hippocampal neurons by targeting LATS2. However, to better clarify the role of miR-215 in neonatal rat hippocampal neurons by targeting LATS2, the function of miR-215 inhibition and LATS2 silencing in chemical-induced neonatal rat hippocampal neurons with high miR-215 expression will be examined in the near future.
[Fig. 4 legend: Downregulation of LATS2 induced by miR-215 in propofol-treated neonatal rat hippocampal neuron. a LATS2 protein level was measured via western blot analysis in neonatal rat hippocampal neuron. Statistical analysis was conducted via one-way ANOVA by Tukey's test. n = 3. *, p < 0.05. **, p < 0.01. * vs 0 h. b Western blot analysis detected LATS2 protein level in neonatal rat hippocampal neuron. Statistical analysis was conducted via one-way ANOVA by Tukey's test. n = 3. *, p < 0.05. #, p < 0.05. * vs Control. # vs propofol+NC mimics]
Conclusion In conclusion, in the current study we found that downregulation of miR-215 and upregulation of LATS2 were induced by propofol. Additionally, miR-215 overexpression alleviated propofol-induced apoptosis and oxidative stress in neonatal rat hippocampal neurons by targeting LATS2. Our results suggest that miR-215 may provide a new therapeutic target to treat propofol-induced neuroapoptosis in developing neurons.
2022-11-30T15:40:08.983Z
2020-05-06T00:00:00.000
{ "year": 2020, "sha1": "b579f8cedf03768cec965393a9ba6122d07c9256", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s10020-020-00170-6", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "b579f8cedf03768cec965393a9ba6122d07c9256", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [] }
218572988
pes2o/s2orc
v3-fos-license
Ex post analysis of engineered tsunami mitigation measures in the town of Dichato, Chile Due to Chile’s notorious and frequent seismic activity, earthquake- and tsunami-related studies have become a priority in the interest of developing effective countermeasures to mitigate their impacts and to improve the country’s resilience. Mitigation measures are key to accomplish these objectives. Therefore, this investigation adopts a tsunami damage assessment framework to evaluate the direct benefits of tsunami mitigation works implemented by the Chilean government in the town of Dichato in the aftermath of the 2010 tsunami. We perform an ex post analysis of the potential damage reduction produced by these works, studying what would have been the consequences on the built environment if they were in place for the tsunami that hit this area after the Maule earthquake on February 27, 2010. We use state-of-the-art tsunami simulation models at high resolution to assess the reduction in tsunami intensity measures, which serve as input to evaluate the benefit from averted damage against the costs of the mitigation measures. The obtained results show a reduction in the flooded area and a delay in the arrival times for the first smaller tsunami waves, but a negligible damage reduction when confronted with the largest waves. In conclusion, the tsunami mitigation measures would not have been effective to reduce the impact of the tsunami generated by the Maule earthquake in the town of Dichato, but could have had a benefit in retarding the inundation of low-land areas for the first smaller tsunami waves. The latter suggests that these works might be useful to mitigate storm waves or tsunamis of much smaller scales than the one that hit central-south Chile in 2010. Introduction Chile is a highly seismic country, mainly due to the subduction process of the Nazca Plate underneath the South American plate (Lomnitz 1970;Araya 2007;Fritz et al. 2011;Centro Sismológico Nacional 2013), where the trench is located very close to the coastline. Many megathrust earthquakes have generated tsunamis, with very short arrival times. Over the past 10 years, four large-magnitude events (Mw > 7.5) have occurred in the country, which generated tsunamis with varying levels of destructiveness. These include the Mw 8.8 Maule earthquake (Fritz et al. 2011) in 2010, the Mw 8.2 Pisagua earthquake (Catalán et al. 2015) in 2014, the Mw 8.3 Illapel earthquake (Aránguiz et al. 2016) in 2015, and the Mw 7.6 Melinka earthquake (Xu 2017) in 2016. The 2010 event caused large-scale economic, structural, and human impacts in central and southern Chile (Gobierno Regional del Bío-Bío 2010; Fritz et al. 2011). This event also triggered an increase in basic and applied research in the country. In addition, the Chilean government invested in several tsunami mitigation projects and reconstruction plans (Siembieda et al. 2012;Herrmann Lunecke 2015;Khew et al. 2015), including hard structures. Among the most damaged coastal towns in 2010 was Dichato, a relatively small settlement with three thousand inhabitants (according to the last census) (Koshimura et al. 2010), located north of the epicenter, that suffered the destruction of houses and infrastructure near the beach and estuary (Koshimura et al. 2010;Yamazaki et al. 2010;Martínez et al. 2016). After the event, the government proposed a tsunami mitigation plan, which included hard structural countermeasures to reduce the impact of tsunamis.
The setting of this new infrastructure provides the opportunity to compare the effect of the tsunami with and without the presence of such mitigation works. The significant impacts associated with recent tsunamigenic events worldwide have motivated the development of numerous studies devoted to hazard assessment and tsunami modeling (e.g., Arcas and Titov 2006;González et al. 2009;Montenegro-Romero and Peña-Cortés 2010;Nandasena et al. 2012;Imamura et al. 2012;Suppasri et al. 2013;Melgar and Bock 2013;Adriano et al. 2014Adriano et al. , 2016Ozer Sozdinler et al. 2015;Catalán et al. 2015;Goda et al. 2015;Park and Cox 2016;Santa María et al. 2016;Aránguiz et al. 2016;Martínez et al. 2016). For example, the Great Eastern Japan Earthquake and tsunami offered the opportunity to evaluate different types of mitigation measures. Nandasena et al. (2012) made an ex post assessment of the effectiveness of tsunami mitigation works, including vegetated dunes, coastal forest, and seawalls. These were tested using numerical modeling to determine how they influence inundation extent and other hydrodynamic properties such as momentum flux and velocities (Nandasena et al. 2012). Adriano et al. (2016) evaluated the effect of breakwaters on the coast through similar tsunami modeling approaches, concluding that the breakwater present in Onagawa at the time of the tsunami attack reduced the tsunami impact by diminishing the maximum inundation depth by 2 m, even though it was completely destroyed by the flow (Adriano et al. 2016). The performance of a breakwater located in Kamaishi is assessed by Ozer Sozdinler et al. (2015), who state that even though the presence of a breakwater may generate higher flow velocities, these hard countermeasures were beneficial in decreasing inundation depths and retarding the arrival of peak tsunami inundation. Moreover, it was found that a damaged breakwater could still provide some degree of protection. In Chile, no studies on the effectiveness of tsunami mitigation works have been published to date, but some efforts have produced data and relevant information that are worth mentioning. Santa María et al. (2016) produced an exposure model for residential structures that classifies them by characteristics such as construction materials, geo-localization and estimated replacement costs. Martínez et al. (2016) presented an assessment of the vulnerability of Dichato considering physical, socioeconomic, and educational dimensions, comparing pre- and post-event conditions. It is claimed that vulnerability remains high in this town, despite the existence of new tsunami mitigation measures. In the present work, we aim to assess the benefits that the new mitigation works in Dichato may have generated if they had been in place during the tsunami of February 27, 2010. To this end, we implement a methodology to estimate the direct damage from the tsunami inundation and compare its results with and without the new infrastructure in place. The methodology used is intended to provide an ex post evaluation, for the first time in Chile, of the tsunami impact reduction that such works may produce. The manuscript is organized as follows: Sect. 2 shows a brief overview of the important definitions and concepts being considered here, along with a characterization of the case under study. Section 3 describes the methodology, while in Sects. 4 and 5, we present the main results of the study and a discussion on their potential implications. The conclusions are summarized in Sect. 6.
Risk, hazard, exposure, and vulnerability In what follows, risk is considered to be composed of three factors: hazard, exposure, and vulnerability (Weichselgartner 2001;González et al. 2009;Venegas San Martín 2012). The hazard is quantified by intensity variables that reflect the capability of a certain geophysical phenomenon to inflict harm (González et al. 2009). While in the risk analysis framework it is necessary to assess all the possible events that may occur in the area of interest to yield a probabilistic hazard assessment (Cutter et al. 2000;Pelling et al. 2004), here we limit ourselves to a scenario-based analysis using the actual 2010 tsunami, following similar lines to Suppasri et al. (2013). The characterization of exposure requires an inventory of all the elements that could be affected by the hazard, such as communities and physical infrastructure (Pelling et al. 2004), identifying those located in the area flooded by the tsunami (Penning-Rowsell et al. 2005). Lastly, in the context of this study, vulnerability is accounted for in structural elements only (i.e., buildings), by means of fragility curves, which allow hazard intensity variables to be linked with the damage probability. Linking hazard, exposure, and vulnerability provides a means of assessing the risk level and gives the opportunity to quantify damage reductions from the benefits associated with mitigation measures (e.g., urban planning and relocation, engineered mitigation measures, educational programs, etc.). Estimation of the benefit from averted damage The damage reduction of the mitigation works in Dichato can be quantified using a cost-benefit analysis that considers the investment of such works and the benefits they produce against a tsunami. It must be noted that these benefits are exclusively evaluated over direct damage assessment, as other less tangible benefits are beyond the scope of this research. The benefit of the project (B) is estimated as (Ministerio de Desarrollo Social 2013): B = Σ_i P(i) [C_NP(i) - C_WP(i)]   (1), where P(i) is the probability of occurrence of the event i, C_NP refers to the costs with no project in event i, and C_WP are the costs with project in event i. Since a scenario-based analysis is used in this study, the consideration of the probability of its occurrence is beyond the scope of the investigation. Therefore, Eq. (1) will be adapted, as explained in Sect. 3.2.3. Estimation of damage A tsunami fragility function or fragility curve constitutes a direct relation between a hydrodynamic feature of the tsunami flow and the probability of damage of a structure (Koshimura et al. 2009a, b;Mas et al. 2012;Suppasri et al. 2012;Adriano et al. 2014;Favier et al. 2014a;Urra Espinoza 2015). They are usually built empirically from different data sources (Koshimura et al. 2009a). As a result, the level of refinement in the data and its categorization (such as construction material and number of stories of the structure) may affect the outcome of the analysis (Goda et al. 2015). Despite this potential shortcoming, fragility functions are essential in the methodology to estimate tsunami damage. The characteristics of the ones used here are described in Sect. 3.2.3. Description of Dichato and the local impact of the 2010 tsunami Dichato is located at the south end of the Coliumo Bay, almost 40 km north of Concepción (see Fig. 1). It has over 3000 inhabitants according to the last official census in 2002 (Instituto Nacional de Estadísticas) and about 2000 homes.
The most common construction materials of buildings are wood and masonry (Servicio de Impuestos Internos). The Coliumo Bay has low wave energy, owing to the protection provided by the Tumbes Peninsula against SW incoming waves. On the other hand, its horseshoe shape may induce long wave resonance (Gobierno Regional del Bío-Bío 2010). The 2.4-km-long beach of Dichato makes the area a tourist attraction, especially during summer, when the population can increase up to 5000 inhabitants (Koshimura et al. 2010;Venegas San Martín 2012). At 3:34 a.m. (local time) on February 27, 2010, a violent earthquake shook the central region of Chile. The epicenter was located off the Chilean coast, about 105 km northeast of the city of Concepción (Saito et al. 2010;Yamazaki et al. 2010), with a reported moment magnitude (Mw) of 8.8 (Pulido et al. 2010;Saito et al. 2010;Mas et al. 2012;Robertson et al. 2012). The tsunami generated free surface disturbances observed all around the Pacific Ocean basin, including the coast of Japan (Koshimura et al. 2010;Robertson et al. 2012). Five hundred and twenty-one people died as a direct result of either the earthquake or tsunami. The tsunami itself claimed 124 victims and 46 missing, mainly in the coastal regions between latitudes 34.5° and 38° south (Nahuelpan López and Varas Insunza 2010;Fritz et al. 2011). Nearly 370,000 homes were damaged, and the economic loss was estimated at USD 30 billion, which was roughly 15% of Chile's gross domestic product in 2010 (Yamazaki et al. 2010). Dichato was one of the most affected locations. Inundation depths in Dichato were estimated by post-tsunami surveys to be around 8 m, and water penetrated as far as 1.3 km inland (Martínez et al. 2016). Though most people were able to evacuate, there were 66 fatalities in Dichato, mostly tourists and elders who underestimated the intensity of the phenomenon. More than 1200 families reported damage to their properties (Koshimura et al. 2010), and approximately 80% of the exposed built structures were washed away (Gobierno Regional del Bío-Bío 2010), evidencing the high degree of destruction from the tsunami (Venegas San Martín 2012; Martínez et al. 2016). This level of damage can be explained by the existence of wooden houses, which were unable to withstand the large inundation depths and velocities. Damage was also induced by floating debris, including loose boats that collided with structures. Mitigation works and relocation Due to the tsunami impact in 2010, the Chilean government financed a mitigation project aimed at providing protection for Dichato and the community against future tsunamis (Martínez et al. 2016). This mitigation project consists of a low-height seawall approximately 4 m tall and 1.7 km long, made of reinforced concrete along the coastline, and a channeling of the Dichato Estuary at its mouth (see Fig. 1c). With the information provided by the Division of Port Infrastructure of the Ministry of Public Works (Dirección de Obras Portuarias-DOP) and reference prices from a catalogue (Portal Ondac Construcción), the estimated construction costs of the wall and channel are US$ 6.8 million (details of this estimation are given in "Appendix 1," Fig. 8 and Tables 4, 5, 6, 7, 8). On the other hand, five neighborhoods were relocated either to higher areas or to houses on piles north of the Bay. Building restrictions were established in the town, allowing only commerce-related infrastructure along the town's coastline.
In this area, there is also a mitigation park, consisting of trees planted along the coastline, but its construction only started in 2015 (Martínez et al. 2016). Methodology The tsunami hazard associated with the 2010 tsunami is estimated by means of numerical modeling. Exposure is assessed from building inventories, and the vulnerability is estimated through fragility curves. The benefit of tsunami damage reduction from engineered mitigation measures on the coastal urban area is evaluated by a cost-benefit analysis of the direct costs associated with the damage inflicted on buildings by the tsunami, and the investment costs for the mitigation measures. Hence, by taking the difference between results considering the situation with and without mitigation works, we can estimate the potential improvement that the project would have produced. We are not attempting to give a comprehensive economic evaluation of the future benefits, but to provide a quantification of the potential improvement that these works could produce if the town were subjected to a similar event, on a ceteris paribus basis. Tsunami modeling We employ the open source code GeoClaw to perform the tsunami modeling. GeoClaw solves the nonlinear shallow water equations using finite volume methods and adaptive mesh refinement. The code has been used to model several historical tsunamis using bathymetric and topographic data (MacInnes et al. 2013;Melgar and Bock 2013;Arcos and LeVeque 2015). Tsunami waves along the coast of Chile have been known to last several hours (Catalán et al. 2015), including the 2010 event (Venegas San Martín 2012); hence, the simulation time is set to 8 h, which is considered sufficient to model the largest late-arriving waves (Yamazaki and Cheung 2011). The information available did not allow us to include buildings in the topography, so a spatially constant Manning roughness coefficient of 0.2 s/m^(1/3) is considered, as it produces the best agreement with in situ measurements (see Sect. 4.1). The spatial grid for the tsunami simulation is composed of eight rectangular nested grids, with a threefold increase in resolution between each one. The smallest computational grid, enclosing the Coliumo Bay, is built from bathymetric data collected before the event (2009) and has a resolution of 2 m (see Fig. 1c), while the spatial discretization in the largest grid is 1 km. The time steps in the simulations are adjusted dynamically to enforce the Courant-Friedrichs-Lewy (CFL) stability condition. The reference level used for the bathymetry and topography was the mean sea level, corrected by the tide level at the time of the event. To incorporate the mitigation measures in the tsunami modeling, an in situ campaign was carried out to obtain the updated topography with high resolution (wall, park, channels) using differential GPS. Data were corrected by means of the Ntrip network. As a result, the topography for the tsunami simulation considered the aggregate of the data collected before and after the event. The difference between the digital elevation model (DEM) including mitigation works and the original can be observed in Fig. 1c. The most important changes are around the wall and along the estuary. The finite fault seismic source model proposed by Delouis et al. (2010) is used and validated. The initial condition of the displacement of the ocean's free surface is obtained from the Okada formulation (Okada 1985).
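To illustrate this last step, a minimal sketch of generating an Okada-type seafloor deformation with GeoClaw's dtopotools is shown below. This is not the Delouis et al. (2010) source model used in the paper; the single subfault, its parameter values, the grid limits and the output file name are illustrative placeholders only, and default GeoClaw units (meters and degrees) are assumed.

# Hedged sketch of an Okada-type initial seafloor deformation with GeoClaw.
import numpy as np
from clawpack.geoclaw import dtopotools

subfault = dtopotools.SubFault()
subfault.coordinate_specification = "centroid"
subfault.longitude, subfault.latitude = -73.2, -36.5      # placeholder location (deg)
subfault.strike, subfault.dip, subfault.rake = 17.0, 15.0, 105.0  # placeholder angles (deg)
subfault.length, subfault.width = 100e3, 50e3             # placeholder plane size (m)
subfault.depth = 20e3                                      # placeholder depth (m)
subfault.slip = 10.0                                       # placeholder slip (m)

fault = dtopotools.Fault()
fault.subfaults = [subfault]
print("Mw of this toy single-subfault source:", fault.Mw())

# Static seafloor displacement on a lon/lat grid covering the region of interest
x = np.linspace(-76.0, -71.0, 301)
y = np.linspace(-39.0, -34.0, 301)
dtopo = fault.create_dtopography(x, y, times=[1.0])
dtopo.write("dtopo_toy.tt3", dtopo_type=3)   # file that GeoClaw reads as dtopo input

A real application would replace the single subfault with the full finite-fault discretization of the Delouis et al. (2010) solution before writing the dtopo file.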
Figure 1b shows this displacement surface, where it can be seen that Dichato is located slightly north of one of the two maxima. Exposed buildings The exposed residential buildings in Dichato and the tsunami effect on them are considered in this study. For simplicity, only the physical damage induced by the tsunami waves to these buildings is considered. The social impacts or other economic losses are beyond the scope of this work because of the difficulty in quantifying their vulnerability, especially when it comes to social vulnerability (Cutter et al. 2003). The characteristics of houses, construction materials, price values, and year of construction were estimated from a database provided by the Revenue Service (Servicio de Impuestos Internos). This database consists of 1878 houses; however, during the fieldwork, only 452 of them could be properly identified and geo-localized. Hence, we use herein only the identified buildings, which are considered a sample of nearly 40% of the houses directly exposed to the tsunami waves. The characteristics and the spatial distribution of the inventory can be seen in Fig. 2. Most of the houses in Dichato are built with wood and masonry, while the rest are built with concrete, steel or other less common materials (see Fig. 2b). On the other hand, the majority of houses were valued at approximately US$7000 (Servicio de Impuestos Internos) (see Fig. 2a). Tsunami intensity measures The hazard intensity is estimated by computing inundation depths, velocity, energy, and arrival times (Macabuag et al. 2016;Park and Cox 2016). Maximum inundation depths are considered the most relevant parameter because, when used in conjunction with fragility curves, they allow the probability of damage of individual buildings to be evaluated (Cançado et al. 2008). Nevertheless, the other intensity measures are also of interest. Flow velocities, for instance, have been studied and utilized for the development of fragility curves for structural damage in buildings (Park and Cox 2016;De Risi et al. 2017), as they may contribute information about the interactions between the flow and the topography that otherwise cannot be attained (Arcos and LeVeque 2015). In addition, both maximum inundation depths and velocities have been found to be applicable for the estimation of building damage due to tsunamis (Park and Cox 2016). Thus, maximum flow velocities and maximum energy (as an extension of the latter) are also estimated from the tsunami simulation for reference. Damage estimation Fragility curves are used to estimate the probability of damage generated by the tsunami to the physical infrastructure. Direct damage is thus considered to be the result of the interaction between hydrodynamic forces and the structures (Penning-Rowsell et al. 2005). The damage depends on the construction material of buildings, and several damage levels can be considered. Suppasri et al. (2012) performed an analysis of all available data delivered by the Ministry of Land, Infrastructure and Transportation of Japan (MLIT) regarding the 2011 Great East Japan tsunami impact zone. These authors developed fragility curves with six damage levels for wood, masonry, reinforced concrete, and steel frame houses, with the damage levels shown in Table 1. These fragility curves were chosen since they involve the same four main construction materials observed in Dichato, with the possibility of including different damage levels for each material.
Considering that a common type of building in Dichato is best represented as a concrete house in the first story (2 m in height), and as a wooden house in the second story, an additional fragility curve was arranged for the purpose of this investigation, by combining the wooden and concrete fragility curves. More details are provided in "Appendix 2.1." With the probability of having the different damage levels for each building, the expected loss is quantified in monetary terms. The cost of repair (C_r) is assumed to be a percentage of the construction cost (C_t), as a more expensive construction cost of a house implies a more costly repair as well. Likewise, if a building is constructed with cheap materials, the costs of repairing are assumed to be low. The relation between the two costs is a function of the damage level, which determines how much of the total construction cost the repair cost is. Table 1 shows this relation. For example, if the building suffers minor damage, the repair cost is a fraction of the construction cost, assumed as C_r = 0.2 C_t. If the house is washed away, the repair cost is larger than the construction cost (C_r = 1.2 C_t). This estimated larger cost is attributed to the management and disposal of debris. Finally, the total costs from the loss induced by the tsunami for each scenario are estimated as the aggregate of costs from all buildings. This is simply the sum of the product between the cost ratios and their probability of occurrence for every damage level and every type of material, i.e., Σ_i P(ds_i) · c_i for each building, where ds_i stands for damage state i, c_i for the corresponding cost ratio C_r/C_t for damage state i, and P(ds_i) for the probability of damage for damage state i. Finally, the expected damage Σ_i P(ds_i) · c_i for each building k is summed up to give the total expected damage (an illustrative sketch of this computation is given below). The cost-benefit analysis considers the comparison between costs that are averted by the application of mitigation measures and those without mitigation (Penning-Rowsell et al. 2005;Ministerio de Desarrollo Social 2013;Iwata et al. 2014). Equation (1) is adapted to take into consideration only one realization, so that the benefit reduces to the difference between the costs without and with the project, B = C_NP - C_WP. 2010 Tsunami impact without engineered mitigation measures First, the tsunami modeling is carried out without mitigation works in order to estimate a reference scenario, and to validate the model. For the latter, results are compared to field data from in situ surveys summarizing observations from five campaigns, resulting in a set of 37 measurements (Imamura et al. 2010;Matsutomi et al. 2010;Fritz et al. 2011;Mikami et al. 2011). In 2002, the Japan Society of Civil Engineers (JSCE) proposed that K and κ values in the range 0.95 < K < 1.05 and κ < 1.45 indicate a good estimation (Tsunami Evaluation Subcommittee and Nuclear Civil Engineering Committee 2002). The observed and numerical sets of inundation heights are used to evaluate the K_i coefficients from the five post-tsunami surveys. These 37 measurements are then compared with simulated data to obtain K_i, and by the use of Eqs. (4) and (5), the values of the parameters K and κ are found. The Dichato simulation yields K = 0.97 and κ = 1.38, which is within the recommended limits. The tsunami simulation results are also qualitatively compared with inundation maps obtained by Mas et al. (2012) (Fig. 3a).
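As referenced in the damage-estimation description above, the per-building expected-cost aggregation can be sketched in a few lines of Python. The lognormal fragility parameters and the intermediate cost ratios below are placeholders, not the Suppasri et al. (2012) values or the actual Table 1 entries (only the 0.2 and 1.2 ratios are stated in the text); the sketch only illustrates the structure of the calculation.

# Hedged sketch: expected repair cost of one building from fragility curves.
import numpy as np
from scipy.stats import lognorm

ORDER = ["minor", "moderate", "major", "complete", "collapse", "washed_away"]
# Placeholder lognormal fragility parameters: (median inundation depth [m], log-std).
FRAGILITY = {"wood": {"minor": (0.3, 0.6), "moderate": (0.7, 0.6), "major": (1.2, 0.6),
                      "complete": (1.8, 0.6), "collapse": (2.5, 0.6), "washed_away": (3.5, 0.6)}}
# Cost ratios c_i = C_r/C_t; 0.2 (minor) and 1.2 (washed away) follow the text,
# the intermediate values are illustrative assumptions.
COST_RATIO = {"minor": 0.2, "moderate": 0.4, "major": 0.6,
              "complete": 0.8, "collapse": 1.0, "washed_away": 1.2}

def p_exceed(depth, median, beta):
    """P(damage >= given state) for a lognormal fragility curve."""
    return 0.0 if depth <= 0 else lognorm.cdf(depth, s=beta, scale=median)

def expected_repair_cost(material, depth, construction_cost):
    """C_t * sum_i P(ds_i) * c_i for a single building."""
    exceed = [p_exceed(depth, *FRAGILITY[material][ds]) for ds in ORDER] + [0.0]
    total = 0.0
    for i, ds in enumerate(ORDER):
        p_state = exceed[i] - exceed[i + 1]      # probability of exactly state ds_i
        total += p_state * COST_RATIO[ds] * construction_cost
    return total

# Example: a wooden house worth US$7,000 exposed to 2.5 m of inundation.
print(expected_repair_cost("wood", 2.5, 7000.0))

The scenario total is then simply the sum of this quantity over all inventoried buildings, for the cases with and without the mitigation works.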
Relative sea levels during the event are compared with the information registered at the Talcahuano port inside the Bay of Concepción, as seen in Fig. 3b, and with data provided by a Deep-ocean Assessment and Reporting of Tsunamis (DART) station off the coast of northern Chile and southern Peru (Fig. 3c). Peak values of inundation depths, flow velocities, energy, and arrival times estimated with the tsunami model are shown in Fig. 4. The highest inundation depth was nearly 10.8 m. The maximum run-up reached 16 m. For flow velocities, the maximum value, as shown in Fig. 4b, is nearly 11.4 m/s. Local energy values are calculated using Bernoulli's expression H = h + v^2/(2g), where h stands for inundation depth, v for the depth-averaged flow velocity, g for the gravitational acceleration, and H for the hydraulic head. The results obtained for maximum energy are shown in Fig. 4c. Figure 4d also shows that there is an inundation of low areas within 50 min, but that the larger inundation extent occurs after 100 min. The total costs for this scenario are nearly USD 2.9 million. 2010 Tsunami impact considering the engineered mitigation measures in place Next, simulations including the mitigation works are carried out considering the updated topography. Figure 5 shows the differences between this modified scenario and the baseline scenario. Negative values denote that the baseline scenario values exceed those of the modified case. Values for inundation depths and run-ups with the incorporation of mitigation works have maxima of 10.6 m and 16 m, respectively, which are very similar to the base scenario. Near the estuary, depths are larger owing to the channeling effect of the works. However, for the maximum hydraulic head, the modeled values in the modified situation are lower than in the baseline scenario, which highlights the potential for reducing the energy of the flow. The total costs estimated for the direct damage in this case are USD 2.7 million. A summary of these results is presented in Table 2. Considering that inundation depths are similar and that the damage estimation depends directly and solely on this variable, the benefit obtained with the presence of the mitigation works is estimated to be only USD 211,000, i.e., a reduction of 7%.
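To make the link with the single-scenario form of Eq. (1) explicit, the reported benefit is simply the difference between the two estimated damage totals (figures rounded as reported above):

B = C_NP - C_WP ≈ USD 2.9 million - USD 2.7 million ≈ USD 0.2 million, i.e., roughly 7% of C_NP.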
Hence, the conducted mitigation works might be efficient against more frequent and less destructive events, such as storm waves or surges or even minor tsunamis, but are not able to withstand the largest tsunami waves generated by an event similar to the one that occurred in 2010. This finding motivates us to evaluate alternatives for the wall height and to produce a sensitivity analysis on this design parameter. The rationale behind this analysis is that one would expect that a higher wall height should produce larger direct benefits than the costs of construction, and eventually, and optimum constrained to specific objectives or criteria could be found. Thus, we consider different wall heights from + 0.5 m up to + 4 m from its actual configuration, to assess potential mitigation effects and damage reductions in view of determining an optimal wall height (Favier et al. 2014b;Shimozono and Sato 2016). We thus perform additional tsunami simulations and re-assess the cost-benefit analysis considering walls 0.5, 1.0, 2.0, 3.0, and 4.0 m taller than the actual one of 5 m on average. Special structural designs or protection elements that would increase the construction costs of higher walls to ensure its correct performance in this situation are not included, since the costs are only assumed to be proportional to the height of the wall. It is also important to emphasize that a more comprehensive analysis would require the inclusion of a time window for life operation of the engineered works, the probability of exceedance of certain tsunami hazard level each year, and a cash flow for costs and benefits. Although difficult to quantify, these indirect impacts are also important and should be considered in the design and decision process. 3 The resulting inundation maps for the same 2010 event are shown in Fig. 6. As expected, as the wall height increases, the inundated area gradually decreases and the damage cost reduction increases, as summarized in Table 3. Indeed, when the height of the seawall is increased in 3 m, the flooded area is greatly reduced (43.3% of area reduction and an 88.9% reduction in damage costs). The different scenarios are summarized Fig. 7, where it can be seen that the application of the methodology brings valuable technical information for decision making; nonetheless, this dimension constitutes only one element among many others that come into play before deciding investing in mitigation works. Indeed, the final decision has more to do with political issues, as it seems to have been the case in Dichato since we could not find any technical justification for the selected wall height in the official documents we could gather. In addition to hard mitigation structures, which are not foolproof, other measures can also be considered. With this regard, sustaining education and awareness programs to ensure an effective evacuation response in case of a disaster should not be undermined (Favier et al. 2014b). Mitigation and risk communication strategies unbalanced toward hard measures may induce a false sense of security (Suppasri et al. 2013). Urban planning is another important risk reduction tool that can be informed by the type hazard and damage assessment analysis presented here. The latter could be useful to improve the preparedness of Dichato against an event similar to the one that hit Chile in 2010. 
For example, if additional studies of potential evacuation are considered, restriction areas based on this information could be defined to ensure a prompt and safe evacuation (e.g., Imamura et al. 2012; León et al. 2019).

Conclusion

The present investigation provides an ex post analysis of the effectiveness of the mitigation measures that were built in the town of Dichato in the aftermath of the 2010 tsunami. This is the first study of this kind in Chile, but it follows similar lines to research published after the 2011 Tohoku tsunami in Japan. Our results suggest that the seawall and the channeling of the river account for a direct damage cost reduction of nearly 7% if faced with a tsunami similar to the one that hit this town in 2010, with an estimated implementation cost of USD 6.8 million. However, damage reduction could be increased if taller walls were considered, as a reduction of over 85% in damage costs is estimated with a wall 3 m taller than the reference one. Indeed, the direct benefits of the current measures are minor (approximately 200 thousand US dollars) and should be weighed against indirect costs, which should also include the impact on tourism and beach erosion over the operating lifetime of the mitigation works. The most important effect of the conducted mitigation works is an increase in the time available for evacuation in the area close to the river, which should bring additional indirect benefits that are not accounted for in the present study. The latter is explained by the particularities of the tsunami hydrodynamics at this location, since the biggest wave is not much affected by the mitigation works and arrives 2 h after the nucleation of the earthquake. This may be a situation particular to this place and event, so these results should be treated as a first estimate of the benefit. Despite this potential shortcoming, our simulations suggest that the mitigation works in place could be effective against minor tsunamis, storm waves, or surges, but not against the biggest waves produced by an event similar to the one that occurred in 2010. While the present research is important in providing an ex post assessment of the effectiveness of the mitigation works, a more comprehensive analysis for the definition of optimal measures should consider additional aspects that are not easy to evaluate, such as probabilistic simulation of the tsunami hazard, risk perception, and cultural issues. Indeed, mitigation plans should be evaluated from more complex perspectives than direct damage reductions attributed to hard works, including community risk awareness, land use, and evacuation plans.
Appendix 1: Valuation of mitigation works

To perform an approximate valuation of the mitigation works, a simplified version of the wall and channel structures, composed of fewer materials, was considered. This simplification was made for practical reasons. All the information used to perform this valuation was provided by the Dirección de Obras Portuarias (DOP) and the Ondac catalogue, and Table 4 details the materials considered in both the wall and the channel. The prices follow the Chilean market and include labor and other minor materials required. The mitigation works can be better appreciated in Fig. 8.

Appendix 1.1: Wall

The wall's construction was divided into three sectors, each with a different set of profiles. The profiles have different sizes and material proportions that define their construction costs, so to obtain the total cost of the wall's construction it is necessary to calculate the amount of each material required for each profile along every set. Each profile's length, together with its material information, was obtained from a set of drawings provided by the DOP. The following tables specify the amount of each material used per profile and the profile lengths for each sector. The excavations along each sector were calculated using a mean area, which is 22.50 m² for the Litril sector, 13.05 m² for the Etapa 1 sector, and 9.88 m² for the Estero sector (Tables 5, 6, 7). It should be noted that the geometry of every profile was simplified into basic geometric elements, such as squares and triangles.

Appendix 1.2: Channel

In the case of the channel, a single average profile was used along its 286 m length. The corresponding material summary is shown in Table 8.

Appendix 2: Selection of fragility curves

The fragility curves used correspond to the work by Suppasri et al. (2012), where six levels of damage were defined as 1: minor damage, 2: moderate damage, 3: major damage, 4: complete damage, 5: collapse, and 6: washed away. All six levels were conceived for wood, masonry, reinforced concrete, and steel frames, as shown in Fig. 9.

Appendix 2.1: Fragility curve for mixed materials

A fifth typology is proposed in the present investigation, which corresponds to a mix between reinforced concrete and wood for a specific kind of two-story house. To this end, we consider the fragility curves for both reinforced concrete and wood, where the first story is made of reinforced concrete and the second of wood. The idea is to apply an inundation height to each curve according to the level of inundation of the first and second stories relative to the base of each story. Therefore, using Fig. 10 as a reference, an inundation level equal to h′ would be considered for the RC fragility curve, while for the wood curve an inundation height equal to h* is used. The final percentage of damage becomes the average of both curves. For instance, if a house built with mixed materials is exposed to an inundation height of 2.5 m, the total damage would be the average of the damage on the RC first story with an inundation of h′ = 2 m and the damage on the wooden second story with an inundation of h* = 0.5 m. The RC curve shows that with a 2 m inundation, the approximate damage probabilities are: 100% chance of minor damage, 95% chance of moderate damage, 80% chance of major damage, 40% chance of complete damage, 10% chance of collapse, and 0% chance of being washed away.
On the other hand, the wood curve shows that with a 0.5 m inundation, the approximate damage probabilities are 85%, 60%, 20%, 5%, 0%, and 0%, respectively. With these numbers, it is possible to apply the methodology explained in Sect. 3.2.3, with Eq. (2) and the information presented in Table 1.

Appendix 3: Valuation of buildings

All the information used for the valuation of the buildings' sample was obtained from the Servicio de Impuestos Internos (SII) and is presented in Table 9. The price of a building was considered to include the value of the land because of limitations in the available information.
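As a concrete illustration of the mixed-material procedure in Appendix 2.1, the sketch below averages the two story-level exceedance probabilities quoted above and converts them into an expected damage ratio. The per-state damage ratios are placeholder assumptions, not the values used in Eq. (2) or Table 1 of the main text.

```python
# Exceedance probabilities quoted above for a 2.5 m flow depth:
# RC first story at h' = 2.0 m, wooden second story at h* = 0.5 m.
DAMAGE_STATES = ["minor", "moderate", "major", "complete", "collapse", "washed away"]
p_rc   = [1.00, 0.95, 0.80, 0.40, 0.10, 0.00]
p_wood = [0.85, 0.60, 0.20, 0.05, 0.00, 0.00]

# Mixed typology: average the two curves state by state.
p_mixed = [(a + b) / 2.0 for a, b in zip(p_rc, p_wood)]

# Placeholder damage ratio per state (illustrative only).
damage_ratio = [0.05, 0.20, 0.40, 0.60, 0.80, 1.00]

# Probability of ending up exactly in state i = P(>= state i) - P(>= state i+1).
p_exact = [p_mixed[i] - (p_mixed[i + 1] if i + 1 < len(p_mixed) else 0.0)
           for i in range(len(p_mixed))]
expected_damage = sum(p * r for p, r in zip(p_exact, damage_ratio))

for state, p in zip(DAMAGE_STATES, p_mixed):
    print(f"{state:>12}: exceedance probability {p:.2f}")
print(f"expected damage ratio (illustrative): {expected_damage:.2f}")
```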
Potential RNA-dependent RNA polymerase (RdRp) inhibitors as prospective drug candidates for SARS-CoV-2 The SARS-CoV-2 pandemic is considered as one of the most disastrous pandemics for human health and the world economy. RNA-dependent RNA polymerase (RdRp) is one of the key enzymes that control viral replication. RdRp is an attractive and promising therapeutic target for the treatment of SARS-CoV-2 disease. It has attracted much interest of medicinal chemists, especially after the approval of Remdesivir. This study highlights the most promising SARS-CoV-2 RdRp repurposed drugs in addition to natural and synthetic agents. Although many in silico predicted agents have been developed, the lack of in vitro and in vivo experimental data has hindered their application in drug discovery programs. Introduction Infectious diseases are still one of the major public health challenges.Diverse microorganisms including viruses, bacteria, fungi, and parasites are the main cause of infectious diseases [1].Many epidemics and pandemics due to HIV/AIDS, avian flu, swine flu, Ebola, Zika, SARS-CoV-2, and monkeypox viruses have occurred in the last few decades [2,3].The entire global population is still suffering from emergent and recurring infectious diseases caused by many microorganisms [4].About two years ago, the WHO (World Health Organization) announced COVID-19 (coronavirus disease 2019) as a public health concern (March 11, 2020) due to the widespread global impact of an infectious SARS-CoV-2 (severe acute respiratory syndrome coronavirus-2) parasite.More than 6.6 million deaths and 639 million infected patients have been reported till the beginning of December 2022 [5,6].The first identified case of SARS-CoV-2 was reported in the local fish and wild animal market in Wuhan City, China [7].SARS-CoV-2 is a RNA zoonotic virus (family: Coronaviridae, order: Nidovirales, genus: Betacoronavirus).It is mainly found in bats.After this virus was transferred to humans in China, it then widely spread to all other countries, leading to a global pandemic [8][9][10].Four genera (α, β, δ, and ɤ) of coronavirus are known [11].Several waves of SARS-CoV-2 infections have been recognized due to viral mutations.The most important variants of the virus are Beta, Gamma, Delta, and Omicron variants.The new Omicron variant indicates that the epidemic/pandemic is far from its end due to its efficient human-to-human transmission [12].The main symptoms due to the infection from this variant are fever, dry cough, diarrhea, and shortness of breath besides blood clotting and stroke in severe conditions [10,13].Repurposed drugs and vaccination substantially helped to overcome the pandemic and retain the socioeconomic status and human life to normal [14]. SARS-CoV-2 belongs to a positive-sense single-stranded RNA viral group (ssRNA(+)).It can infect humans and largely spreads through close contact and by breathing droplets generated by coughing/sneezing.Its genetic material can directly act as a viral messenger RNA (mRNA) and translate into viral proteins in the host cell [15][16][17].Several enzymes are involved in coronaviral replication.Thus, targeting these enzymes in drug discovery efforts might lead to promising antiviral drugs [18].The SARS-CoV-2 nonstructural proteins including RNA-dependent RNA polymerase (RdRp) and main protease (M pro ) are crucial for viral genomic transcription and replication [17,19].The crystal structures of both RdRP and M pro are available in the Protein Data Bank (PDB) [20][21][22]. 
The enzyme RdRp is encoded in all RNA viruses.Viral RdRp is the main target for developing potent antiviral agents against SARS-CoV-2, not only due to its ability to accelerate the replication of RNA but also due to the lack of RdRp closely related host cell counterparts.Theoretically, the designed agent will selectively target the viral RdRp with no off-target side effects [23]. RdRp inhibitors are categorized into two classes based on their structure and location for binding to RdRp: nucleoside analog inhibitors (NIs) and non-nucleoside analog inhibitors (NNIs).Structurally, NIs are known to bind the RdRp protein at the enzyme active site, NNIs bind to the allosteric sites of RdRp.Several studies on RdRp inhibitors with various applications have been reported.Tian et al. reported a recent review for RdRp inhibitors however, most of the mentioned analogs are for pyrimidine-containing compounds (either NIs and NNIs) [24].This current review adopts wider scope summarizing the natural and synthetic compounds as well as the repurposed drugs/active agents with RdRp inhibitory properties for treating SARS-CoV-2, with various heterocyclic scaffolds.Notably, the screening techniques for coronavirus RdRp are not as simple as those for proteases [25].With increasing interest, a few studies on RdRp screening for coronavirus have been reported. Drug repurposing Development of new drug(s) usually involves many successive stages which are: design, synthesis, bio-properties investigation, formulation of prototypes, and pre-clinical and clinical trials.Due to the pandemic outbreak, drug repurposing is considered the most accessible and appealing approach for urgent identifying potential therapeutics to control the disaster and save human lives.Drug repurposing means the adoption of an existing broad-spectrum therapeutical entity of potential efficacy and minimal adverse effects for clinical application of the infected patients supported by pre-clinical establishments.Drug repurposing strategy is superior to traditional drug discovery in terms of cost and time reduction.In addition to a lower failure rate compared with the traditional approaches owing to its well-established efficacy, metabolic characteristics, dose determination, and safety or toxicity issues [26][27][28].FDA (food and drug administration) approved several drugs of other pathophysiological under the emergency use authorization of which antiviral drugs (remdesivir, penciclovir and favipiravir) and antimalarial drugs (chloroquine, hydroxychloroquine) as anti-COVID-19 agents.Meanwhile, adverse effects humbled the clinical applicability of some of them [29][30][31]. Remdesivir (RDV, formerly GS-5734) Yin et al. investigated the inhibition of the RdRp from SARS-CoV-2 by remdesivir.The complex structure of SARS-CoV-2 RdRp and remdesivir reveals that the partial double-stranded RNA template is inserted into the central channel of the RdRp, where remdesivir is covalently incorporated into the primer strand at the first replicated base pair and terminates chain elongation [32].Remdesivir (RDV) is the first FDA-authorized drug to treat COVID-19 patients with severe conditions (on Oct. 
22, 2020) [33,34]. Remdesivir is a phosphoramidate prodrug of a 1′-cyano-substituted adenosine nucleotide analog, which acts as an RNA polymerase inhibitor (regulating genomic replication); it was initially developed by Gilead Sciences in 2014 for the treatment of Ebola virus infections [35]. It has been reported to have broad-spectrum antiviral activity against various acute viral infections caused by different ssRNA viruses, including Hendra virus, Lassa fever virus, Junin virus, Nipah virus, and coronaviruses (SARS-CoV and MERS-CoV), in addition to efficacy in therapeutic combinations against HCV (hepatitis C virus) and HIV (human immunodeficiency virus) [36][37][38]. Remdesivir was the first candidate as an anti-COVID-19 drug owing to its broad antiviral spectrum, especially against coronaviruses (SARS-CoV and MERS-CoV); this encouraged Gilead Sciences to repurpose it to treat patients infected with SARS-CoV-2, and it was approved by the FDA in 2020 for the treatment of patients with COVID-19 infection [34,39]. The broad-spectrum antiviral properties of Remdesivir are attributed to its ability to be metabolized in the host cell to the nucleoside triphosphate (RDV-TP). Consequently, it can be incorporated into the nascent viral RNA, where it interferes with further chain elongation. To improve the t1/2 and in vivo metabolism of Remdesivir, Wen et al. replaced the hydrogen in the active molecular group with the isotope deuterium, as carbon-deuterium bonds (C-D) are more stable than carbon-hydrogen bonds (C-H) [24,42,43].

Favipiravir

Another nucleotide analog inhibitor of RdRp is Favipiravir (FPV, favilavi, or Avigan). Favipiravir was originally developed by Toyama Chemicals, Japan, and approved as an anti-influenza drug in Japan in 2014 [48,49]. Favipiravir is an RdRp inhibitor that has demonstrated antiviral activity against influenza virus H1N1 infection [48]. During the Ebola virus outbreak in West Africa, Favipiravir was evaluated against human Ebola virus infection. It showed promising activity against Ebola virus infection in the mouse model; however, poor activity was observed in human Ebola infections [50]. Favipiravir showed effectiveness in controlling the progression of the SARS-CoV-2 virus. Patients with mild COVID-19 had a promising clinical recovery rate [51]. While treating severe COVID-19 patients with Favipiravir, an improvement in the lymphocyte count was recorded [52]. Therefore, Favipiravir was recommended for the clinical management of mild-to-moderate COVID-19 patients [53].

The inhibitory mechanism of Molnupiravir is quite similar to that of Favipiravir. After oral administration, molnupiravir rapidly appears in plasma and is converted to its triphosphate form in cells. The active NHC triphosphate form is incorporated into viral RNA in place of cytidine or uridine, leading to the accumulation of mutations in the viral genome.

Galidesivir

Galidesivir (BCX4430) (Fig. 6) is an adenosine analog with RdRp inhibitory activity against many RNA viruses, including Ebola, Zika, Marburg, and yellow fever (in vitro and in animal models) [63,64]. In Syrian golden hamster models, it was tested as an anti-SARS-CoV-2 agent, reducing lung pathology when treatment was initiated 24 h before viral infection, compared with untreated controls [65]. Galidesivir triphosphate is the active substrate responsible for binding to the active site of the viral RdRp and terminating the replication of viral RNA [66]. Table 1 exhibits the anti-SARS-CoV-2 activity of Galidesivir in different cell cultures [65].

Ribavirin

Ribavirin (RBV, 1-β-D-ribofuranosyl-1,2,4-triazole-3-carboxamide) is a guanosine analog (Fig.
7).Ribavirin has a broad antiviral spectrum against in vitro human cell line and several animal models [67].It was repurposed to treat COVID-19, and reflected a promising activity against SARS-CoV-2 [68,69].RBV was reported to have a high survival rate in severe patients due to its ability for viral clearance (observational study) [70].Another clinical trial (Phase II) demonstrated that a combination of RBV, interferon beta-1b, and lopinavir-ritonavir, is a potential treatment for mild to moderate COVID-19 patients [71].Hemolytic anemia in addition to the reduction of calcium and magnesium in the elder besides the restriction for pregnant women are the serious adverse effects of RBV [72].The active substrate RBV triphosphate (RBV-TP) is formed via the host cell kinases, which pairs with the uridine triphosphate or cytidine triphosphate in the RNA template resulting in lethal mutagenesis and so prevents viral RNA replication [73].The EC 50 = 7.1, CC 50 = 160 μM (SI = 16) are of Ribavirin in Calu-3 assay [74]. Sofosbuvir and daclatasvir The replication process of HCV is similar to that of coronavirus, especially at the start of the disease.Anti-HCV therapies such as Sofosbuvir (HCV polymerase inhibitor) [75] and Daclatasvir (RNA replication and virion assembly inhibitor) [76] were suggested to have promising potential in the treatment of COVID-19 (Fig. 8) [74].Randomized clinical trials on moderate or severe COVID-19 patients demonstrated that sofosbuvir and daclatasvir reducing the hospital stay duration relative to the standard care alone.This is attributed to the daclatasvir/sofosbuvir antiviral efficacy on SARS-CoV-2 replication in respiratory cells [74,77].The EC 50 = 7.3, 1.1 and CC 50 = 512, 38 μM are for Sofosbuvir and Daclatasvir, respectively in Calu-3 assay [74]. Tenofovir Tenofovir is a broad-spectrum antiviral drug active against the HIV and hepatitis B virus (HBV).The active triphosphate form of the tenofovir diphosphate acts as a terminator of viral RNA subsequent polymerase synthesis (Fig. 10) [79,80].Few in vitro studies on tenofovir or its prodrug formulations have been reported.The results are not promising and are contradictory [81,82].Although these reports, in silico studies mentioned that tenofovir binds strongly to SARS-CoV-2 RdRp, with binding energies close to other successful drugs (Remdesivir, Galidesivir, Ribavirin, and Sofosbuvir) with no supporting biochemical observations (Fig. 11) [83]. AT-527 AT-527 is an orally available prodrug of a guanosine nucleotide analog that acts as a potent broad-spectrum anti-coronavirus inhibitor in a variety of cell lines by targeting the RdRp activity (Fig. 12) [84].AT-527 is converted by cellular enzymes to the active triphosphate metabolite, AT-9010.AT-527 recently entered phase III clinical trials to treat COVID-19 [85].AT-9010 can bind to the active site of RdRp exhibiting promising efficacy against COVID-19 [86]. Nucleoside analogs were used in searching for potent anti-SARS-CoV-2 inhibitors with both SARS-CoV-2-RdRp and SARS-CoV-2 exonuclease (ExoN) dual inhibitory enzyme effects as a strategy to combat COVID-19 [87].Based on the pharmacodynamic/pharmacokinetic results and predicted anti-SARS-CoV-2 activities, analogs were selected for docking studies (Fig. 
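The activity and cytotoxicity figures quoted above are commonly condensed into a selectivity index. The short sketch below assumes the usual definition SI = CC50/EC50 (the cited assays do not state this explicitly) and applies it to the Calu-3 values given for Sofosbuvir and Daclatasvir.

```python
def selectivity_index(cc50_um: float, ec50_um: float) -> float:
    """Selectivity index under the common definition SI = CC50 / EC50 (both in uM)."""
    return cc50_um / ec50_um

# Calu-3 values quoted in the text: (CC50, EC50) in uM.
calu3 = {"Sofosbuvir": (512.0, 7.3), "Daclatasvir": (38.0, 1.1)}

for drug, (cc50, ec50) in calu3.items():
    print(f"{drug}: SI ~ {selectivity_index(cc50, ec50):.0f}")
```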
13).Blind docking was performed via MOE (molecular operating environment) software using SARS-CoV-2 RdRp (PDB, ID: 7BV2) and ExoN (PDB, ID: 7MC6) proteins from PDB (protein data bank).The molecular docking results revealed that some of the compounds were with promising binding energies in both enzymes relative to Riboprine-TP and Forodesine-TP (triphosphates).The assumptions were supported by in vitro testing as anti-RdRp, anti-ExoN, and anti-SARS-CoV-2 [88]. Natural RdRp inhibitors Natural isolate compounds have a significant impact on drug development and pharmacotherapy due to their tremendous structural, and chemical variety, and relatively low toxicity.Numerous natural products were discovered to have medicinal potential, including morphine, quinine, paclitaxel, penicillin, lovastatin, and doxorubicin.Natural compounds were and still are one of the major sources for new medications, with an estimated 34% of approved new chemical entities between 2000 and 2014 coming from natural compounds isolated from plants, microorganisms, and other resources.Many analogs of potential bio-properties are mimicked based on the chemical scaffold of promising natural biologically active entities [89][90][91].A famous example is Artemisinin, one of the recent naturally isolate compounds with high potential as anti-malarial (Plasmodium falciparum) properties described by Tu Youyou (Noble Prize winner in 2015).Artemisinin is extracted from the plant Artemisia annua (sweet wormwood, a herb used in traditional medicine in Chinese) [92].Several review articles reported the antiviral properties of some natural isolate compounds [93][94][95]. Benzopyrans Flavonoids and their glycosides or their bioisosteres displayed antiviral properties inhibiting different stages of the virus infective cycle (inhibition of viral protease, RNA polymerase, and mRNA).Different reports mentioned the efficacy against some RNA viruses (SARS-CoV, MERS-CoV, and influenza A virus) [96][97][98]. Theaflavin extracted from Camellia sinensin, is a polyphenolic compound found in black tea with a considerable medicinal value useable in Chinese traditional medicine (Fig. 22).Theaflavin and theaflavin gallate derivatives exhibit broad-spectrum antiviral properties against several viruses, influenza A and B and hepatitis C virus [104,105].In silico studies utilizing theaflavin revealed promising efficacy against RdRp of SARS-CoV-2.Similar observations were also noticed for SARS-CoV, and MERS-CoV, using the UCSF Chimera and SWISS-MODEL (Fig. 23) [106]. In another in silico molecular docking study for some of the food bioactive compounds, three alkaloids phycocyanobilin (found in Spirulina), riboflavin (found in eggs, meat, fruits), cyanidin (found in grapes and berries) revealed high binding affinity towards SARS-CoV-2 M pro and RdRp enzymes relative to the antiviral drugs Remdesivir, Nelfinavir, and Lopinavir utilizing AutoDock Vina software (Figs.27 and 28) [110]. Suramin A 100 year-old-drug, Suramin (Fig. 
34) is identified as a potent inhibitor of the SARS-CoV-2 RdRp and acts by blocking the binding of RNA to the enzyme.Biochemical studies suggest Suramin and its derivatives are at least 20-fold more potent than Remdesivir.The 2.6 Å cryoelectron microscopy structure of the viral RdRp bound to Suramin uncovers two binding sites of which one directly blocks the binding of the RNA template strand and the other clashes with the RNA primer strand near the RdRp catalytic site, thus inhibiting RdRp activity.The IC 50 values obtained from the solution-based assays of RdRp inhibition for Suramin is 0.26 μM, and for Remdesivir in its triphosphate form (RDV-TP) is 6.21 μM under identical assay conditions, suggesting that Suramin is at least 20-fold more potent than RDV-TP [117]. In addition to the nucleoside and non-nucleoside based RdRp inhibitors, many lead candidates were identified through molecular docking and homolog model-based screening.Parvez et al. revealed that antibacterial drugs (fidaxomicin, ivermectin, rifabutin, and rifapentine) show potential interaction with SARS-COV-2 RdRp protein (Fig. 35).These drugs could be further investigated and considered as leads for further development of potential RdRp inhibitors for SARS-COV-2 [132]. Predicted ADME-Tox Properties Drug development is always a tedious and challenging task.Several strategies have been used in recent years to expertise the process which includes molecular hybridization, molecular docking, and consideration of ADME-Tox properties.As many drug candidates fail to reach their drug target because of their poor ADME-Tox profile.It is essential to pay attention for having balanced pharmacokinetic properties while developing new RdRp inhibitors for SARS-CoV-2.The descriptor and druglikeness properties of the compounds included in this article were calculated with the Swiss ADME server [133] and STARDROP software [134].Among the various parameters, we considered to include molecular weight (MW), logP, hydrogen bond donors (HBD), hydrogen bond acceptors (HBA), number of rotatable bonds (RB), topological polar surface area (TPSA), ability to cross the blood-brain barrier (BBB), human intestine absorption (HIA), inhibition of P-glycoprotein (P-gp), human ether-a-go-go-related gene (hERG) potassium channel inhibition and bioavailability score (BS) as these properties play critical role (see Table 2).These data could serve as additional information for the new drug development process. 
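The descriptors listed above were obtained with the SwissADME server and STARDROP. As a non-authoritative illustration of how such basic properties can be screened, the sketch below uses the open-source RDKit toolkit (assumed to be installed); the SMILES string is the commonly reported structure of favipiravir and is included only as an example input.

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski, rdMolDescriptors

def basic_descriptors(smiles: str) -> dict:
    """Compute a few of the descriptors discussed above (MW, logP, HBD, HBA, RB, TPSA)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles}")
    return {
        "MW": round(Descriptors.MolWt(mol), 1),
        "logP": round(Crippen.MolLogP(mol), 2),
        "HBD": Lipinski.NumHDonors(mol),
        "HBA": Lipinski.NumHAcceptors(mol),
        "RB": rdMolDescriptors.CalcNumRotatableBonds(mol),
        "TPSA": round(rdMolDescriptors.CalcTPSA(mol), 1),
    }

# Favipiravir (assumed SMILES), used here only as an example molecule.
print(basic_descriptors("NC(=O)c1nc(F)cnc1O"))
```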
Conclusion

The SARS-CoV-2 pandemic is considered one of the most severe pandemics facing the global population. The development of efficient anti-SARS-CoV-2 drugs over a short time is associated with many challenges, considerable obstacles, and unknown difficulties. Among the various modern approaches available for developing potential drug candidates for SARS-CoV-2, rational drug design and drug repurposing strategies are key. In the past two years, RdRp has been found to be a prime target, as this enzyme controls viral replication. Some drugs have been identified as anti-SARS-CoV-2 RdRp agents accessible for patients with mild infections. Numerous natural and synthetic molecules have also been investigated. However, drug candidates with higher potency are still in demand. In silico studies have been exploited the most in the search for potential drug candidates, but without supporting in vitro and in vivo observations their applicability to any drug discovery program will be hindered. We believe the compiled information will stimulate interest in this field among the research community and provide a deeper understanding of the requirements and importance of chemical scaffolds for designing potential RdRp inhibitors for SARS-CoV-2.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Redox activity and chemical speciation of size-fractionated PM in the communities of the Los Angeles-Long Beach harbor

In this study, two different types of assays were used to quantitatively measure the redox activity of PM and to examine its intrinsic toxicity: 1) in vitro exposure to rat alveolar macrophage (AM) cells using dichlorofluorescin diacetate (DCFH-DA) as the fluorescent probe (macrophage ROS assay), and 2) consumption of dithiothreitol (DTT) in a cell-free system (DTT assay). Coarse (PM10−2.5), accumulation (PM2.5−0.25), and quasi-ultrafine (quasi-UF, PM0.25) mode particles were collected weekly at five sampling sites in the Los Angeles-Long Beach harbor and at one site near the University of Southern California campus (urban site). All PM samples were analyzed for organic (total and water-soluble) and elemental carbon, organic species, inorganic ions, and total and water-soluble elements. Quasi-UF mode particles showed the highest redox activity at all Long Beach sites (on both a per-mass and per-air-volume basis). A significant association (R² = 0.61) was observed between the two assays, indicating that macrophage ROS and DTT levels are affected at least partially by similar PM species. Relatively small variation was observed for the DTT measurements across all size fractions and sites, whereas macrophage ROS levels showed a wider range across the three particle size modes and throughout the sites (coefficients of variation, or CVs, were 0.35, 0.24 and 0.53 for quasi-UF, accumulation, and coarse mode particles, respectively). The association between the PM constituents and the redox activity was further investigated using multiple linear regression models. The results showed that OC was the most important component influencing the DTT activity of PM samples. The variability of macrophage ROS was explained by changes in OC concentrations and water-soluble vanadium (probably originating from ship emissions, i.e., bunker oil combustion). The multiple regression models were used to predict the average diurnal DTT levels as a function of the OC concentration at one of the sampling sites.

Correspondence to: C. Sioutas (sioutas@usc.edu)

Introduction

Epidemiological and toxicological studies have shown a positive association between adverse health effects and exposure to fine and ultrafine particulate matter (PM) (Dockery et al., 1993; Pope et al., 2002, 2004). Atmospheric PM and its components have the potential to interact with airway epithelial cells and macrophages to generate reactive oxygen species (ROS), which have been linked to respiratory inflammation and other adverse health effects (Cho et al., 2005; Nel, 2005). A variety of methods in both cell-free and cell-based systems have been employed to examine the oxidative stress activity of PM. Cho et al. (2005) demonstrated that the dithiothreitol (DTT) assay can provide a good measure of the redox activity of particles by determining superoxide radical formation as the initial step in the generation of ROS. Li et al. (2003) showed that the consumption rate of DTT by PM samples is directly related to the particles' ability to induce a stress protein in cells. Other types of in vitro assays are able to assess the ability of PM (or PM extracts) to stimulate cellular generation of ROS in macrophage cells (Sioutas et al., 2005). Despite recent advancements in ROS analysis, the aerosol components driving the formation of ROS remain unclear.
PM constituents that have been considered as major driving forces for ROS formation include organic species (Nel et al., 2001; Seagrave et al., 2005), transition metals (Goldsmith et al., 1998; Prahalad et al., 1999), and polycyclic aromatic hydrocarbons (PAHs) (Kumagai et al., 2002; Li et al., 2003; Cho et al., 2005). Due to the complex chemical composition of PM, the specific role of different particle species in inducing oxidative stress, whether in non-cellular or cellular assays, is still not well understood and could be assay- and/or method-dependent.

The link between PM components and their toxicity provides a particularly useful metric for aerosol monitoring, as there is wide agreement among the air pollution community that not all PM species are equally toxic. Ntziachristos et al. (2007a) demonstrated that the DTT activity could be attributed to PAHs via the formation of quinones. Geller et al. (2006) investigated the toxicity of PM emissions from gasoline and diesel passenger cars and demonstrated that a link exists between redox activity and chemical species including organic carbon (OC), low molecular weight PAHs, and trace elements such as nickel and zinc. Water-soluble metals could also be biologically active and act as catalysts favoring the formation of ROS (Goldsmith et al., 1998; Prahalad et al., 1999; Mudway et al., 2004). However, there are limited studies examining the relationships between the water-soluble PM content and its underlying toxic response.

Efforts have also been made to associate specific sources of PM with oxidative stress (Zhang et al., 2008). However, toxicological studies on the adverse health effects of PM have focused on data collected at limited sampling sites dominated by only a few emission sources (e.g. vehicular emissions) (Li et al., 2003), or on laboratory-generated aerosols (Su et al., 2008). Few works conducted to date have examined the toxicity of PM collected in urban areas of interest, including locations impacted by nearby airports, harbors, power plants, and refineries. The present study was conducted in the Los Angeles-Long Beach port, which represents the busiest harbor in the US and the fifth most important port complex in the world in terms of commercial activity. This is an area impacted by various sources, including several types of industries and refineries, as well as vehicular traffic and marine vessels. The current work is an extension of a previous study conducted by Ntziachristos et al. (2007a), which addressed the redox activity and chemical speciation of size-fractionated PM in urban and rural areas of the Los Angeles Basin. In addition to the DTT assay employed in Ntziachristos et al. (2007a), a macrophage-based ROS assay was also used, and associations between PM components (including water-soluble elements and water-soluble OC) and redox activities were investigated.

Site locations

Size-segregated PM samples were collected at four sampling locations in the Los Angeles-Long Beach port area (Site 1-Site 4), at a background location near the harbor of the Los Angeles port (Site 5, the closest to the oceanfront; see Fig.
S1 in the supporting information document (http://www.atmos-chem-phys.net/8/6439/2008/acp-8-6439-2008-supplement.pdf) for a map of the sampling sites), and at an urban site (Site 6) at the University of Southern California (USC) campus.Samples were collected daily on weekdays (Monday to Friday) over a 7-week period, sequentially, from March to May of 2007.A detailed description of the sampling and chemical analysis methods is described elsewhere (Arhami et al., 2008); only a brief summary is reported here.The six sampling sites were selected to capture the impact of a complex source mix within the harbor community.Sites 1, 2 and 3 were located in Wilmington, West Long Beach.Site 1 was set-up at the intersection between a major street and a local residential road.Site 2 was about 3 km north of the ocean coast, at the intersection of two major streets, and in close proximity to the Alameda corridor (a 32 km freight rail "expressway").Site 3 was located inside a semi-industrial area and less than 1 km north of the CA-1 highway.Site 4 was further away from the ocean coast (∼7 km north), about 1 km east (downwind) of the I-710 freeway (where more than 25% of the vehicle fleet is represented by heavy-duty diesel vehicles), and about 1 km north of the I-405 freeway.Site 5 was a typical background site for the Long Beach harbor, while Site 6 (located at the USC main campus), was representative of urban air quality conditions in downtown Los Angeles. Sampling description At each site, size-segregated ambient aerosols were collected using two parallel Sioutas ™ impactors (SKC Inc, PA; operating flow rate = 9 lpm), one loaded with Zefluor filters (3 µm pore-size, Pall Life Sciences, Ann Arbor MI) and the other with Quartz fiber filters (Pall Life Sciences, Ann Arbor MI).Three different size fractions of PM were collected, coarse (2.5 µm<Dp<10 µm), accumulation (0.25 µm<Dp<2.5 µm), and quasi-ultrafine (Dp<0.25 µm) modes.All substrate were either baked at 550 • C (Quartz fiber filter) or cleaned with a series of solvents (Zefluor) before usage to minimize contaminations (see Arhami et al., 2008, for further details).After sampling, each Quartz fiber filter sample was wrapped in a piece of pre-baked aluminum foil, placed in a Petri dish and kept frozen (at −4 • C) until analysis. Gravimetric and chemical analyses Zefluor filters were weighed before and after sampling using a Mettler-Toledo MX5 microbalance (Mettler-Toledo, Columbus, OH; weight uncertainty ±2 µg) in a room with controlled temperature and humidity to determine the mass of the collected PM.Laboratory filter blanks were also weighed before, during, and after each weighing session to verify the accuracy and consistency of the microbalance.The electrostatic charges of the Zefluor substrates were minimized using a static neutralizer (500 µCi Po210, NRD LLC, Grand Island, NY). 
Weekly samples, collected on both Teflon (Zefluor) and quartz fiber filters, were sectioned into four equal parts and analyzed at the Wisconsin State Lab of Hygiene (University of Wisconsin-Madison) for several important inorganic and organic species.Two sets of quartz composites were analyzed by Ion Chromatography (IC) and Thermal Evolution/Optical Transmittance (TOT) to determine the concentrations of inorganic ions (Sheesley et al., 2000), and OC and elemental carbon (EC), respectively (Turpin et al., 2000;Schauer, 2003).The third set of quartz fiber filters was composited for the whole 7-week period at each site and analyzed by Gas Chromatography/Mass Spectrometry (GC/MS) for organic species/tracers including PAHs, n-Alkanes, n-Alkanoic Acids, Resin Acids, Hopanes and Steranes (Zheng et al., 2002;Chowdhury et al., 2007) The fourth set of quartz filters was archived for future analysis.Each set of the Zefluor filters was composited into a single sample representing the full 7-week sampling period at each site, and was prepared for the following analysis: (a) Total Elements (b) Water Soluble Elements, and (c) Water Soluble OC (WSOC) and macrophage ROS, and (d) DTT assay.A magnetic sector inductively coupled plasma mass spectrometer (HR-ICPMS, Finnigan Element 2) was applied for the quantification of 52 trace elements (Herner et al., 2006) in the total digests and water extracts.Water extract for total organic carbon (TOC) and ROS analysis were prepared by leaching the PM samples in 900 µL of Type 1 water for 16 h with shaking (Zhang et al., 2008).A General Electric Instrument (Sievers Total Organic Carbon, TOC; GE, Inc.) was used to determine WSOC concentrations (Zhang et al., 2008). Macrophage ROS and DTT assays The redox activity of PM was measured by two different types of assays: 1) in-vitro exposure to rat alveolar macrophage (AM) cells using dichlorofluorescin diacetate (DCFH-DA) as the fluorescent probe and 2) consumption of dithiothreitol (DTT) in a cell-free system (DTT assay).The first assay (applied to water soluble extracts of the collected PM filter samples) is directed at the biologically mediated production of ROS within the macrophage cell in response to cell stimulation from "toxic" species.ROS species produced within the cytoplasm de-acetylate the DCFH-DA, resulting in the fluorescing compound (DCFH).Extracellular and abiotic de-acetylation is considered to be small.Hereafter, this assay is referred to as Macrophage ROS.Detailed information about the macrophage ROS analysis is presented by Landreman et al. 
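Gravimetric mass concentrations follow from the pre/post filter weights and the sampled air volume (each impactor ran at 9 L/min). The sketch below shows that conversion; the blank correction and the example numbers are illustrative assumptions, not data from this study.

```python
FLOW_LPM = 9.0  # Sioutas impactor operating flow rate (L/min), as stated above

def pm_concentration_ug_m3(pre_ug: float, post_ug: float,
                           blank_ug: float, minutes_sampled: float) -> float:
    """Ambient PM mass concentration (ug/m^3) from pre/post filter weights (ug)."""
    net_mass_ug = (post_ug - pre_ug) - blank_ug
    volume_m3 = FLOW_LPM * minutes_sampled / 1000.0  # litres -> m^3
    return net_mass_ug / volume_m3

# Hypothetical weekday sample: 5 days x 24 h of continuous sampling.
minutes = 5 * 24 * 60
print(f"{pm_concentration_ug_m3(11250.0, 12010.0, 5.0, minutes):.1f} ug/m^3")
```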
(2008).Experiments were performed with the rat alveolar cell line, NR8383, maintained in Hams F12 medium containing 2 mM L-glutamine supplemented with 1.176 g/L sodium bicarbonate and 15% heat inactivated fetal bovine serum.Cells were cultured at 37 • C in a humidified 5% CO 2 incubator and maintained by transferring non-adherent cells to new flasks weekly.Cultures were set up to contain a floating cell concentration of approximately 4×10 5 cells mL −1 of media.For the exposure experiments, cells were harvested and gently concentrated by centrifugation at 750 RPM for 5 min, the culture medium removed, and replaced with a salts-glucose-medium (SGM) to generate a cell suspension of 1000 cells/µL.A 15 mM stock solution of 2 7 -dichlorodihydrofluorescein diacetate (DCFH-DA, Sigma), prepared in N,N -dimethyl formamide, was diluted 10 fold in SGM just prior to use.One hundred µL of the macrophage cell suspensions were dispensed into each well of a 96 well plate and incubated at 37 • C for two hours.Approximately 15 min before the end of the incubation period, the diluted DCFH-DA solution was added to each prepared sample extract to achieve a final concentration of 15 µM DCFH-DA.After the incubation period, during which time >98% of the cells settled and adhered to the well bottom, the SGM was pipetted off and immediately replaced with 100 µL of SGM-buffered sample extract or control sample.The fluorescence intensity in each well was determined at 450±50 excitation and 530±25 emission using a CytoFlour II automated fluorescence plate reader (PerSeptive Biosystems) at regular intervals throughout the exposure period (typically 2.5 h).For each exposure experiment several untreated and method blank controls were included.Unopsonized zymosan was included as a positive control.Each sample/dilution was run in triplicate (i.e. 3 wells each). The DTT assay (applied to suspensions of the collected particles) provides an estimate of the redox activity of a sample based on the ability of the PM to catalyze electron transfer between DTT and oxygen in simple chemical systems (Cho et al., 2005).The electron transfer is monitored by the rate at which DTT is consumed under a standardized set of conditions and the rate is proportional to the concentration of the catalytically redox-active species in the PM sample as well as their rate constants for the reaction with DTT.This chemical assay measures the consumption of DTT that is capable of quantitatively determining superoxide radical formation as the first step in the generation of ROS.The methodological procedure used for the DTT assays conducted for this work is described in great detail by Cho et al. (2005) and Li et al. (2003).In brief, the Zefluor filters were sonicated in Milli-Q water for 20 min.The filters were then removed and the aqueous particle suspension was used in the DTT assay.The PM suspension was incubated at 37 • C with DTT (100 µM) in potassium phosphate (0.1 M) buffer at pH 7.4 (1 mL total volume) for times varying from 0 to 30 min.Trichloroacetic acid (10%, 1 mL) was added to the incubation mixture to quench the reaction at preset times.An aliquot of the quenched reaction mixture was then mixed with Tris-HCl,(0.4M, 1 mL, pH 8.9) that contains EDTA (20 mM) and DTNB (10 mM, 25 µL). The remaining DTT was measured by the formation of 5mercapto-2-nitrobenzoic acid. 
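The DTT activities reported later (nmol DTT consumed per minute per µg of PM) follow from the remaining-DTT readings at the quench times described above. A minimal sketch of that rate calculation is given below; the time series and PM mass are hypothetical, and only the general linear-rate treatment of Cho et al. (2005) is assumed.

```python
import numpy as np

def dtt_rate_nmol_min_ug(times_min, dtt_remaining_nmol, pm_mass_ug):
    """DTT consumption rate: negative slope of remaining DTT vs. time, per ug PM."""
    slope, _intercept = np.polyfit(times_min, dtt_remaining_nmol, 1)
    return -slope / pm_mass_ug

times = np.array([0.0, 10.0, 20.0, 30.0])        # quench times (min)
remaining = np.array([100.0, 92.0, 84.5, 76.5])  # nmol DTT left (hypothetical readings)
print(f"{dtt_rate_nmol_min_ug(times, remaining, pm_mass_ug=30.0):.3f} nmol/min/ug")
```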
Statistical data analysis

Bivariate Pearson correlations between macrophage ROS and DTT levels and the concentrations of the chemically speciated PM were calculated for a preliminary identification of the most important predictor variables that could be included in multiple regression models for macrophage ROS and DTT. The chemical species with a significantly positive correlation (p<0.05) with the macrophage ROS and DTT concentrations were then chosen as predictors in a series of multiple linear regression analyses (i.e. stepwise, forward, and backward elimination selections) using SAS for Windows (V 9.1, SAS Inc., Cary, NC). A general multiple linear regression equation expresses the response variable (Y_i) as a linear combination of (p−1) predictor variables (X_i):

Y_i = β_0 + β_1 X_i,1 + β_2 X_i,2 + ... + β_p−1 X_i,p−1 + ε_i

where Y_i is the response in the i-th trial (i.e. macrophage ROS or DTT), β_0, β_1, ..., β_p−1 are the regression coefficients, X_i,1, X_i,2, ..., X_i,p−1 are predictor variables (i.e. inorganic and organic species and trace elements), and ε_i is the error term.

Overview of the PM chemical speciation

Table 1 shows the concentration of PM in three particle size ranges at each sampling site, and the corresponding percentage contribution of major aerosol components to PM mass. A detailed discussion of the chemical speciation results is presented elsewhere (Arhami et al., 2008). The mass distribution of the different species in different size fractions was relatively homogeneous across sampling sites. OC was the most abundant component of quasi-ultrafine (quasi-UF) particles at all sites (31.0 to 38.9% at Site 5 and Site 2, respectively). The organic material in ultrafine particles predominantly originates from various combustion sources (Seinfeld and Pandis, 1998), such as vehicular and ship emissions. OC in the accumulation mode may also originate from the photo-oxidation of reactive gaseous precursors (i.e. secondary organic aerosol, or SOA, formation) (Turpin et al., 2000; Polidori et al., 2006). EC, primarily formed from incomplete combustion processes and often considered to be a good surrogate of diesel emissions (Seinfeld and Pandis, 1998), was present mainly in the quasi-UF mode (7.9 to 13.5% at Site 4 and Site 1, respectively). Secondary aerosol components, such as sulfate, nitrate, and ammonium, were the most dominant species in accumulation mode particles, together accounting for between 41.2% (Site 5) and 60.0% (Site 6) over the six sampling sites. Sulfate was the most abundant component in the accumulation mode (21.3 to 29.2% at Site 5 and Site 1, respectively) and the second most abundant component following OC in the quasi-UF mode (13.2 to 20% at Site 2 and Site 5, respectively) at most sites. Accumulation mode sulfate is mainly present in urban air as ammonium sulfate, a secondary aerosol component formed in the atmosphere through the oxidation of sulfur dioxide (Rodhe, 1999), whereas in the quasi-UF fraction, a significant part of sulfate also originates from bunker fuel combustion from the nearby marine port vessels (Lin et al., 2005; Arhami et al., 2008). Nitrate contributed mostly to the mass of accumulation mode (12.6 to 24.8% at Site 1 and Site 6, respectively) and coarse mode particles (11.2 to 23.4% at Site 2 and Site 1, respectively). In the accumulation mode, nitrate originates through secondary processes involving nitric acid and ammonia (Seinfeld and Pandis, 1998), while in the coarse fraction it is mostly formed from reactions between nitric acid and sea salt or mineral compounds (Kerminen et al., 1998; Pio
and Lopes, 1998).Ammonium (NH + 4 ), present in the atmosphere mainly as ammonium nitrate and ammonium sulfate, is also formed through secondary processes from gaseous precursor, and typically contributed more to the mass of accumulation mode PM (5.8 to 11.7% at Site 5 and Site 6, respectively).Table S1 (supporting information at http://www.atmos-chem-phys.net/8/6439/2008/acp-8-6439-2008-supplement.pdf)shows weekly averages (± standard deviations) of the PM mass and of major chemical components at each site and for 3 different size fractions: quasi-ultrafine (quasi-UF), accumulation (ACC), and coarse particles.Overall, standard deviations are relatively small compared to the corresponding averaged levels, suggesting insignificant week-to-week variability of the concentrations of the PM mass and of the major chemical species measured at each site over the entire sampling campaign.This might be mostly because of the stable meteorological conditions and the constant influence of vehicular sources over the entire sampling period (Arhami et al., 2008). Trace elements accounted from 9.2 to 17.6% of the coarse particle mass, and between 7.5 and 19.1% of the accumulation mode mass.Their contribution to the quasi-UF fraction was relatively lower (3.9 to 6.9%) at all sites.Na and S were the most abundant elements in all three size fractions, followed by Ca, Mg, K, Fe and Al.Among all elements, Al, Fe, Ti, K, Mn, and Cs, which have a crustal origin (Ntziachristos et al., 2007b;Arhami et al., 2008) and are products of re-suspended soil dust, were found mostly in coarse PM.Sb, S, Cd, Mo, Zn, Pb and Cu, mainly generated by vehicular sources and present as constituents of lube oil (Ntziachris-tos et al., 2007b), were found in all size fractions.V and Ni, which are mostly emitted by marine vessels and oil combustion (Isakson et al., 2001;Lu et al., 2006), were more abundant in the quasi-UF mode. 3.2 Water-soluble elements and water-soluble organic carbon (WSOC) content Figure 1 shows water-soluble elements as a fraction of total element concentration in the three size ranges.Generally, trace elements in quasi-UF and accumulation mode particles are more soluble than those in coarse PM.For certain elements, in particular for Cd, Zn, Sb, Ni, Li, Co, and Cu, the solubility is highest in quasi-UF PM (>0.75) and decreases with increasing particle size.It should be noted that few elements (such as Cd, Zn and Na) showed a water-soluble fraction greater than 1 (between 1 and 1.6), which might be due to the analytical uncertainty.This class of compounds might originate from high temperature combustion processes, such as fresh vehicular emissions.Zn and Cd are almost entirely water-soluble in both quasi-UF and accumulation modes.Na showed very high and comparable water solubility among the three size ranges.The solubility of Ba, Mo, Mn, V, Mg, Cs, Pb and K, Sr peaked in the accumulation mode.The least soluble elements were Cr, Fe, Al, Ce, La and Ti (<15%), a finding consistent with their geochemical origin.These results agree with those reported in other studies conducted in an urban area of Birmingham, UK (Heal et al., 2005;Birmili et al., 2006).Birmili et al. (2006) reported that Zn and Cd in ambient PM 7.2 particles are the most soluble trace elements (∼50%), followed by Mn, Cu, Ba, Pb and Co. 
Figure 2 shows water-soluble (WSOC) and waterinsoluble OC (WIOC) concentrations in three size ranges at all sampling sites.While some WSOC originates from primary emission sources, such as biomass burning, its production is mostly attributed to SOA formation processes (Weber et al., 2007).The highest WSOC concentrations were in fine PM at all sites, with relatively equal partitioning between quasi-UF and accumulation modes (site-average WSOC concentrations were 0.25±0.08 and 0.20±0.12µg C/m 3 for quasi-UF and accumulation modes, respectively).The average percentage contributions of WSOC to measured OC across all sites were 13.3±4.0%,22.1±10.8%and 16.6±11.9%for quasi-UF, accumulation and coarse mode particles, respectively, consistent with WSOC/OC wintertime ratios measured at other locations (Miyazaki et al., 2006).The relatively low WSOC/OC values as well as absolute WSOC concentrations compared to those reported in other studies are reflective of the limited photochemical activity during our sampling period.Decesari et al. (2001) observed seasonal variations in WSOC/OC ratio from 0.38 (winter) to 0.50 (summer) for fine particles (Dp<1.5 µm) in the Po Valley.Sullivan and Weber (2006) St. Louis, MO, and Atlanta, GE.Ruellan and Cachier (2001) observed low mean WSOC/OC values (0.13) near a highly trafficked road around Paris in the summer and fall.During this study, the highest WSOC concentrations as well as WSOC/OC fractions were observed at Site 6 (downtown LA).This site is a receptor of freshly emitted particles upwind in the harbor area and transported to that site after considerable atmospheric aging (Site 6 is approximately 40 km north, thus mostly downwind, of the harbor sites).Ho et al. (2006) reported that the WSOC/OC fraction in PM 2.5 measured in Hong Kong was lower at an urban site than at urban-residential and background sites, due to the formation of SOA during transport/aging of the PM mass from urban to background sites. Measured redox activities The redox activities of size fractionated PM measured by the two assays are shown, for all sites, on a per PM mass basis in Table 1.The macrophage ROS level of quasi-UF particles measured at Site 1 was extremely high compared to those obtained in the same size-range at other sites.We do not have an obvious explanation for the higher PM activity in that site, at lest based on the detailed chemical PM composition discussed in earlier paragraphs.We thus treated this data point as an outlier in the statistical analysis described in subsequent sections.On a per mass basis, ultrafine particles exhibited significantly higher redox activity than fine and coarse mode PM.Few previous studies have demonstrated this size-dependent contrast in PM toxicity (Li et al., 2003;Cho et al., 2005;Ntziachristos et al., 2007a).We also investigated the redox potential of PM on a per unit of air volume basis (Fig. 3a), and quasi-UF particles still showed the highest activity levels at all Long Beach sites (Sites 1-5), but not at Site 6 (urban site near USC), where accumulation mode particles had higher toxicity measured by the macrophage ROS assay. The average DTT activities for PM 2.5 particles at the Long Beach sites (0.027±0.004 nmol DTT/min/µg mass; individual values for each site are reported in Table 1 on a per PM mass basis, and on Fig. 
3b on a per unit of air volume basis) are well in the range of those reported for PM 2.5 particles in a previous study conducted during different seasons at different urban areas in Southern California (Ntziachristos et al., 2007a) (0.027±0.005 nmol DTT/min/µg mass).The average DTT activity of PM 0.25 (0.039±0.010 nmol DTT/min/µg mass) in this study is somewhat lower than that of PM 0.15 (0.058±0.015 nmol DTT/min/µg mass) estimated by Ntziachristos et al. (2007a).This discrepancy is probably due to the relatively lower contribution of particles between 0.15 and 0.25 µm to the DTT activity on a per mass basis. The variability of the redox potential among size-fractions was estimated by its coefficient of variation (CV; the standard deviation to mean ratio).CVs for DTT activities were 0.25, 0.20 and 0.27 for quasi-UF, accumulation and coarse mode PM, respectively.This rather low variability could be attributed to the fairly homogenous distribution of organic species on a per mass basis among the three size ranges in that area.As it will be discussed later, these species are mostly responsible for the variability in DTT.By contrast, higher CVs were observed for macrophage ROS (0.35, 0.24 and 0.53, for quasi-UF, accumulation and coarse mode particles, respectively. DTT vs. macrophage ROS DTT activities and macrophage ROS measurements are compared on Fig. S2 (supporting information).Macrophage ROS is significantly correlated with DTT consumption (R 2 =0.61, p<0.05) for the pooled samples (17 data points; as stated previously quasi-UF ROS at Site 1 was excluded from all calculations).It should be noted that these are two independent and intrinsically different assays and, thus, should not be expected to be correlated a priori.The consumption of DTT is based on the ability of a PM sample to accept electrons from DTT and transfer them to oxygen (Cho et al., 2005); whereas macrophage ROS assays use a filtered extract, so that cells are exposed to the soluble components of PM only.DCFH is a broad spectrum ROS probe, directly responsive to most common reactive oxygen species, including the hydroxyl radical, peroxide, superoxide radical, and peroxynitrite radical and, therefore, provides a more comprehensive, less targeted, assessment of the redox activity of PM.For example, the ROS produced by many redox active metals, will be addressed by the DCFH, while the DTT assay is relatively insensitive to this mechanism.In many respects the two assays are quite complementary.The DTT method is strictly a chemical probe, especially sensitive to many organic functionalities (e.g.quinines), while the DCFH approach, fundamentally a cell-based method, probes the general oxidative stress imposed by PM on a living organism.The substantial correlation between these two assays suggests that both analyses may be driven, at least in part, by variations in the concentrations of similar chemical species.The association between the DTT and Macrophage ROS assay and PM constituents are further investigated in the following sections. Macrophage ROS/DTT vs. 
Macrophage ROS/DTT vs. chemical speciation

Table 2 shows Pearson's correlation coefficients of macrophage ROS and DTT vs. selected PM components. All data points obtained in the three size ranges at the six sampling sites were pooled to calculate the resulting mean Pearson's coefficients and p values. Nitrate and sulfate have no functional groups that result in the formation of ROS, but may play a general role in particle toxicity by affecting PM acidity. OC showed a significant correlation with both assays. EC is also significantly correlated with both macrophage ROS and DTT levels, but this strong association may be due to the high correlation between EC and OC concentrations, both being emitted mostly by motor vehicles. As shown in Table S2 (supporting information, http://www.atmos-chem-phys.net/8/6439/2008/acp-8-6439-2008-supplement.pdf), a strong correlation of water-soluble V and Ni with macrophage ROS was observed, with R values of 0.94 and 0.93, respectively. These two trace elements were highly correlated in this study (Arhami et al., 2008), suggesting that they originated from bunker fuel combustion by marine vessels (Isakson et al., 2001). With the exception of V, Ni and a few other elements, the other species are moderately, insignificantly (p>0.05) or negatively correlated with both the ROS and DTT assays (Table S3; see supporting information for details).

Figure 4 shows correlations between a selected group of PM components (expressed as a percentage of the measured PM mass) and the redox activities of PM measured by the macrophage ROS (Fig. 4a) and DTT (Fig. 4b) assays. The corresponding regression slopes, intercepts, and correlation coefficients (R²) are summarized in Table S4 (supporting information). Water-soluble V and, to a lesser degree, light molecular weight PAHs (MW≤228) and OC are well correlated with macrophage ROS levels. With the exception of one data point (quasi-UF at Site 5), WSOC was also well correlated with ROS (R²=0.69 after excluding the influence of this last measurement). We hypothesize that the relatively higher macrophage ROS level of quasi-UF particles at Site 5 is mostly driven by the abundance of water-soluble V (4.5 ng/m3) and Ni (1.2 ng/m3), rather than water-soluble OC, given the proximity of the site to the port and the lack of notable traffic sources nearby. OC had a higher correlation with DTT than any other PM species (Fig. 4b). A multiple linear regression (MLR) analysis was conducted to further investigate the contribution of the PM chemical components to the measured redox activities.

Multi-variance analysis

"Best-fitting" model for DTT

The "best-fitting" (3-parameter) regression equation for the DTT concentration was obtained using a "forward" selection method in SAS ("PROC REG"):

DTT = 0.034 + 5.585 × 10⁻² [OC] + a_Al [Al_soluble] + a_Co [Co_soluble]    (2)

where OC, Al_soluble and Co_soluble are the measured concentrations of OC and of water-soluble Al and Co, respectively. The overall model is statistically significant, with an R² of 0.95 and a Cp value equal to the predicted number of parameters, indicating that the regression equation has an appropriate number of predictors. Thus, 95% of the DTT concentration variance can be explained by the variance of this 3-parameter model. These results confirm our earlier observations that organics drive the DTT response (Ntziachristos et al., 2007a).
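The "forward" selection procedure referred to above can be approximated outside SAS; the sketch below is a minimal Python analogue (not the PROC REG implementation), with hypothetical predictor names and synthetic data used only to show the mechanics.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_select(df, response, candidates, enter_p=0.15):
    """Minimal analogue of forward selection: at each round, add the candidate
    predictor with the smallest p-value, as long as it is below enter_p."""
    selected, remaining = [], list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            pvals[var] = sm.OLS(df[response], X).fit().pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= enter_p:
            break
        selected.append(best)
        remaining.remove(best)
    final = sm.OLS(df[response], sm.add_constant(df[selected])).fit()
    return selected, final

# Hypothetical table of pooled size fractions; column names are placeholders.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "OC": rng.uniform(1, 10, 16),
    "Al_sol": rng.uniform(0, 0.2, 16),
    "Co_sol": rng.uniform(0, 0.02, 16),
    "sulfate": rng.uniform(0, 5, 16),
})
df["DTT"] = 0.034 + 0.05585 * df["OC"] + rng.normal(0, 0.05, 16)

chosen, model = forward_select(df, "DTT", ["OC", "Al_sol", "Co_sol", "sulfate"])
print("selected predictors:", chosen)
print("R^2 =", round(model.rsquared, 3))
```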
According to Cho et al. (2005), this assay is relatively insensitive to trace elements, which is consistent with our regression results. Although the redox activity of transition metals in biological reactions is well established, the DTT assay does not reflect the redox activity of trace elements such as Al and Co. We hypothesize this could be due to the correlation of the trace metals with PAHs, as indicated by Ntziachristos et al. (2007a). Al showed a moderate correlation with light-MW PAHs (R=0.46), suggesting that the Al concentrations in the regression model (Eq. 2) might serve as a surrogate for the effect of light-MW PAHs on the DTT activity levels. The Co concentration is not well correlated with PAHs (R=0.15); however, it is highly correlated with total OC (R=0.59). PAHs account for only a small fraction of total OC (less than 30 ppm); therefore, Co might be a surrogate for other organic species. It is possible that regression models using different selection criteria, including OC and PAHs as predictor variables, could explain the variability of DTT equally well.

The above best-fitting regression equation (Eq. 2) can be used to estimate the effect of an increase or decrease in the concentration of any of the predictive variables (i.e., OC, Al_soluble and Co_soluble) on the DTT levels. For example, we varied OC over its typical average diurnal range at Site 2 (at the Wilmington site, average hourly OC data were available only for May 2007), while holding the Al_soluble and Co_soluble concentrations constant at their average background levels (those measured at Site 5). This approach allowed us to describe/predict the DTT concentration at Site 2 as the sum of its "urban background" concentration and the enhancement due to an increase in OC. As shown in Fig. 5, the predicted DTT at Site 2 peaked during morning rush hour traffic because of increased motor-vehicle emissions, reached a minimum late in the afternoon, and slightly increased again at night because of a lowered mixing height and increased atmospheric stability. The DTT activity rates and OC concentrations were ∼4 times higher between 09:00 and 11:00 a.m. than at 17:00-18:00 p.m., suggesting that traffic emissions may increase the potential of airborne particles to induce oxidative stress on human cells.

It should be noted that the intercept term accounted for between 8 and 16% of the DTT levels predicted by Eq. (2) when considering the typical concentration range for OC in Wilmington. This small, but non-negligible, effect of the intercept may be due to the contribution of redox-active PM components that were not included in our chemical analysis.

"Best-fitting" model for ROS

The "best-fitting" (2-parameter) regression equation for the ROS concentration was also obtained using a "forward" selection method in SAS ("PROC REG"):

ROS = b₀ + b_OC [OC] + b_V [V_soluble]    (3)

where OC and V_soluble are the measured concentrations of OC and water-soluble V, respectively. Similarly to DTT, the model was run considering all of the quasi-ultrafine, accumulation and coarse concentrations together (a total of 16 data points; 2 outliers were found and discarded), and the correlation between predicted and measured ROS was excellent (y[predicted ROS] = 0.93 x[measured ROS] + 0.075; R² = 0.93).
As shown in Table S5b (supporting information, http://www.atmos-chem-phys.net/8/6439/2008/acp-8-6439-2008-supplement.pdf), water-soluble V is the most influential factor in the regression (partial R² = 0.86). OC was also selected as a predictor variable (partial R² = 0.07). The overall model is statistically significant (p<0.0001), with an R² of 0.93 and a Cp statistic of 3, which suggests that the regression equation has an appropriate number of predictors. Hence, 93% of the ROS concentration variance can be explained by the variance of this 2-parameter model. These results indicate that the ROS response depends on two variables, each of which is an indicator of one of the two major sources in the Long Beach area: OC (vehicular traffic) and V (ship emissions and oil combustion). The remaining PM species considered in this analysis were either uncorrelated with ROS or, if they showed a significant association with ROS, were probably emitted by the same two major sources.

The best-fitting regression equation for ROS (Eq. 3) could also be used to estimate the effect of an increase or decrease in the concentration of either of the predictive variables (i.e., OC or V_soluble) on the ROS levels. However, because of the significant influence of V_soluble on ROS and the lack of methodologies for near-continuous measurements of particulate V, we could not predict the average diurnal variation of ROS as we did for DTT.

We continue to confirm our earlier observations that organics are important and influence the redox properties of PM measured by the DTT assay. According to Cho et al. (2005), most trace elements have little effect on this assay; therefore, all of our results are internally consistent with our prior work. In contrast, the macrophage ROS assay is mainly a function of two PM species, OC and V, which are indicators of the two major sources dominating the study area, i.e., vehicular traffic and ship emissions, respectively.
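Returning to the diurnal prediction described for DTT (Fig. 5), the following sketch applies the structure of Eq. (2) to an hourly OC series while holding the water-soluble Al and Co terms at fixed background values. Only the intercept and the OC coefficient are taken from the text; the Al/Co coefficients, background levels, and the hourly OC values are placeholders.

```python
# Sketch of the diurnal prediction in Fig. 5: hourly DTT estimated from the
# structure of Eq. (2) by varying OC while holding water-soluble Al and Co at
# fixed "background" levels. Values marked as placeholders are not from the study.

INTERCEPT = 0.034            # from Eq. (2)
B_OC = 5.585e-2              # from Eq. (2)
B_AL, B_CO = 0.01, 0.01      # placeholder coefficients (not reported in the text above)
AL_BG, CO_BG = 0.05, 0.005   # placeholder background concentrations (Site 5 analogue)

# Hypothetical hourly OC concentrations (ug/m3) over a diurnal cycle.
hourly_oc = {0: 2.5, 6: 3.0, 8: 6.5, 10: 7.0, 12: 5.0, 15: 3.5, 18: 2.0, 21: 2.8}

background = INTERCEPT + B_AL * AL_BG + B_CO * CO_BG
for hour, oc in sorted(hourly_oc.items()):
    dtt = background + B_OC * oc   # "urban background" plus the OC-driven enhancement
    print(f"{hour:02d}:00  OC={oc:4.1f}  predicted DTT={dtt:.3f}")
```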
Summary and conclusions

The redox properties of size-fractionated PM samples collected in the Los Angeles-Long Beach port area were measured using (1) a "biological" assay applied to water-soluble extracts of the collected particles (macrophage ROS assay) and (2) a "chemical" assay performed on suspensions of the PM filter samples (DTT assay). Quasi-UF mode particles showed the highest redox activities at all sites, on both a per-mass and per-air-volume basis, and the substantial correlation between these two assays (R² = 0.61) suggests that both assays may be driven, at least in part, by variations in the concentrations of similar chemical species. A multiple linear regression model showed that OC (emitted from vehicle exhaust and port activities) was the single most important component influencing the DTT levels. A similar model also indicated that the variability of macrophage ROS is explained by changes in OC and water-soluble vanadium concentrations (from vehicular traffic and ship emissions/bunker oil combustion, respectively). The predicted DTT activity rates and measured OC concentrations at one of the port sites were ∼3-4 times higher between 09:00 and 11:00 a.m. than at 17:00-18:00 p.m., confirming that traffic emissions can increase the redox potential of airborne PM substantially and induce oxidative stress on human cells. The DTT and ROS are two independent and intrinsically different assays that measure different aspects/modes of PM toxicity, and, in this respect, they complement each other. A better understanding of the relationships between size-segregated PM (and PM components) and the associated DTT and ROS activities is important in terms of public health management and prevention policies.

Supplemental information

Weekly averages of the PM mass and of major chemical components at each site and for 3 different size fractions, Pearson correlation coefficients between macrophage ROS (and DTT) and selected chemical species, Pearson coefficients among selected water-soluble elements, summary statistics for Figs. 4 and 5, concentrations of water-soluble elements, a map of the sampling sites, and the results of the linear regression between macrophage ROS and DTT are included in the Supplemental Information document (http://www.atmos-chem-phys.net/8/6439/2008/acp-8-6439-2008-supplement.pdf).

Acknowledgements. The study was supported by the Southern California Particle Center (SCPC), funded by US EPA under the STAR program (Grant RD-8324-1301-0) to the University of Southern California. The research described herein does not necessarily reflect the views of the agency, and no official endorsement should be inferred. Mention of trade names or commercial products does not constitute an endorsement or recommendation for use. We thank the staff at the Wisconsin State Lab of Hygiene (WSLH) for the chemical and toxicological analysis of the PM samples, and the staff at UCLA for the DTT analysis of the PM samples. We are thankful to the Port of Long Beach, Dinesh Mohda and the staff at the Long Beach Job Corps Center, Balthazar Alvarez, and the South Coast AQMD for their help in the sample collection at the port sites.

Fig. 1. Mean fractions of water-soluble elements in each size range. Error bars are the standard deviation of measurements obtained over the sampling sites.

Fig. 2. Concentrations of water-soluble organic carbon (WSOC) and water-insoluble organic carbon (WIOC) in three size ranges at all sampling sites.
Fig. 3. Spatial distribution of size-fractionated redox activities at the Long Beach harbor; (a) macrophage ROS assay and (b) DTT assay.

Fig. 4. Scatter plot of (a) macrophage ROS and (b) DTT with total, insoluble and water-soluble OC (OC, WIOC, and WSOC, respectively) and selected water-soluble elements. Open circles represent the non-fitting points for the multiple regression models.

Fig. 5. Prediction of diurnal cycles of PM redox activity (DTT assay) based on real-time OC concentration. The minimum and maximum estimated DTT values are shown by broken horizontal lines.

Table 1. Size-resolved PM mass concentration, chemical composition and redox activities at the six sampling sites.

Table 2. Pearson correlation between macrophage ROS activity, DTT level, and selected species.
2019-02-28T23:00:59.839Z
2008-11-12T00:00:00.000
{ "year": 2008, "sha1": "c357e2a758ad7348d5a15667a10920949738404e", "oa_license": "CCBY", "oa_url": "https://acp.copernicus.org/articles/8/6439/2008/acp-8-6439-2008.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "c357e2a758ad7348d5a15667a10920949738404e", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Chemistry" ] }
235284561
pes2o/s2orc
v3-fos-license
Optimizing of Product Logistics Digital Transformation with Mathematical Modeling The purpose of the paper is to demonstrate the possibility of improving the quality of logistics in a rapidly changing situation based on mathematical modeling. The evolution of information technology worldwide goes from separate accounting of individual operations to their integration based on cloud and integrated technologies. Adaptive optimization is required, taking into account the dynamics of the external environment. The paper proposes a flexible mathematical model of management transformation in logistics. For this, a single digital logistics platform is formed as a connecting link for all participants in the value-added chain, including product manufacturers, suppliers of resources and services, product consumers, and logistics companies. At the same time, intermediaries who do not create added value could be excluded from the supply chain. Introduction Russia ranks the 95th place in the ranking of logistics efficiency in the world. Therefore, the need for the digital transformation of this type of activity is obvious. One of the significant factors of improving the country's logistics quality is improving various types of interconnections between all logistics chains' participants by forming a single information space and applying optimization methods. This requires the introduction of modern modeling methods and end-to-end digital technologies, including artificial intelligence. This paper presents the result of an analytical review of the experience of creating integration associations of participants in supply chains, and then, in this context, we propose a mathematical model for optimizing the logistics activities of a coalition of participants. Overview of integration associations of participants in supply chains As a result of the evolution of Information and Communication Technologies (ICT) in logistics, the following types of associations can be distinguished according to the degree of their integration: Subcontracting Supply Network (SSN), Information Subcontracting Network (ISN), Production and Logistics Network (PLN) [1,2]. Unlike SSN and ISN, which are a kind of "bulletin board," in the PLN a Single Information Space (SIS) is a platform for planning and managing projects on the Internet with a common database in the digital cloud, which contains data on the performance of logistics operations, classifiers, standards, common for all registered participants. In the concept of the PLN, only a small part of various autonomous enterprises with a limited set of information content is included in the SIS. But in the concept of the national platform "Digital Agriculture," the authors proposed a mathematical model for the formation of a single digital logistics platform [3]. This work is devoted to expanding this model to provide the ability to form supply chains of arbitrary configuration with the participation of the majority of economic entities in almost all sectors of the country. The logistics fields of implementation in the context of supply chain engineering include inventory control, radio frequency identification, flexible manufacturing systems, assignment and scheduling methods, warehousing technologies [4]. During constructing the model, it was taken into account that the digital and intellectual transformation now occupies a special place in the development of logistics activities in the world. 
Thus, it is shown in [5] that it is fundamentally impossible to build a "smart" city without smart supply chains. Among smart solutions, it is worth highlighting:
- intelligent transport systems;
- autonomous logistics providing unmanned movement of people and goods;
- the Physical Internet for the most efficient movement of goods;
- intelligent cargo that carries all the knowledge about its own movement;
- self-organizing logistics that can work without the effort of managers.
Logistics activities vary in scale. It is noted in [6] that global logistics operations associated with electronic commerce have become very time-dependent in recent years. International couriers face day-to-day issues related to time windows, delays, express delivery management, security, and value-added tax refunds. There are supply chain management methodologies that, based on lean manufacturing principles integrated with the concepts of Industry 4.0, already solve these problems quite well. For example, in [7] the logistics parameter is one of the main criteria for prioritizing and segmenting the country's agro-industrial exports. In logistics, it is worth distinguishing methods of direct problem solving, with tight synchronization of the stages of product delivery, from inverse problem solving, in which optimal supply chains are promptly synthesized for newly emerging user needs. An important place is occupied by institutional structures, mechanisms for managing responsibility and motivation, and modern methods of building a space of trust using blockchain technologies (distributed ledgers), the Internet of Things, and cloud and fog computing. Against this background, it seems expedient to propose a fairly universal model for the formation of optimal logistics chains.

Mathematical model for optimization of the logistics activities of a participants' coalition

This section formalizes the external logistics management system that is most in demand from the point of view of logistics activities, in order to analyze, plan, and design supply chains more efficiently. We exclude such types of logistics as production, purchasing, customs, and stock logistics. In this case, supply chain participants of the following groups (suppliers of suppliers, suppliers, consumers of consumers, and consumers) are combined into groups of suppliers and consumers, based on the criterion of product entry and exit. The same organizations can act as both suppliers and consumers of products. The participants in the supply chain are then represented by the following groups: suppliers, consumers, warehouses, transport companies, installation companies, and recycling companies. Analysis of this scheme shows that the supply chain formation block gives a significant effect and is most in demand in terms of optimizing logistics activities based on mathematical modeling; therefore, we formalized this block first. The choice of installation and disposal organizations with cost optimization was the second step of formalization. Since these two models can be separated, they belong to the class of block programming, i.e., they are related only by financial constraints and the delivery time of installation equipment. So, we consider a system consisting of many suppliers of goods, many consumers, many transport companies, and many warehouses. The task is to form optimal supply chains for the delivery of products by suppliers to consumers by transport companies using warehouses, based on minimizing the total costs of products, their transportation, and warehouse services.
At the same time, there should be a choice of suppliers of products, a choice of warehouses, and transport companies with loading vehicles. Due to this, the following tasks will be solved in a complex: tracking transport, managing orders (requests), and managing costs for transport and warehouse services. At the same time, we consider the case with enough transport companies to satisfy all the needs; the supply of goods exceeds the demand. The management process is periodically realized as assumed with a period of T, such that during this period there should be no delays in the production and delivery of goods, and all logistics operations' parameters are averaged over time. The choice of the period T is influenced by such characteristics as the number of orders, the volume of cargo transported from suppliers to the warehouse and from the warehouse to consumers, as well as directly from suppliers to consumers in order to load vehicles to the maximum. The need to increase the rate of turnover of funds, goods in warehouses, and the urgency of fulfilling orders also matters. If, for example, T is too small, then the model will give out a large underutilization of vehicles, if it is large, then queues will form, there will be not enough capacity of warehouses, vehicles, the participants in the supply chain will be mired in loans. The optimal value of T can be found out in at least three ways. The first one is based on information about all transport flows (time characteristics of loading, unloading, warehousing, transportation, cash flows; volumetric characteristics of the transported goods, types of vehicles, etc.) for a long period of time to find out some average characteristics and then calculate T (in this case, if all the information is available, you can try to build an optimization model). In the second method, the desired value T is found based on the model in the simulation mode. The third method is based of expert opinion. Note that T in the model is needed to start the supply chain modeling. It can be called T 0 . In the future, while using the model for operational management in dynamics, when any failure occurs in the supply chain, new requests will appear. The next planning period T t (t = 1, 2 ....) will be dependent on the same characteristics as T. Let us take into account that of all the characteristics of vehicles, such as carrying capacity, the volume of cargo transported, etc., the only actual carrying capacity of the vehicle will be considered with the specific volumetric carrying capacity. If necessary, the other characteristics can be considered, which will lead to the complication of the model. At the same time, under the unit of the product, the volume of supplies, storage, we mean both the unit and the volume of the specific volumetric carrying capacity of the product. 
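Before introducing the full notation, the core of the supply chain formation block can be illustrated with a deliberately simplified linear program: direct supplier-to-consumer shipments only, a single combined product-plus-transport cost per unit, and no warehouses, vehicle types, or planning period. All data below are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Simplified sketch: choose shipment volumes x[j, i] from suppliers j to consumers i
# minimizing product-plus-transport cost, subject to supply limits and demand needs.
supply = np.array([120.0, 80.0])          # capacity of suppliers j = 0, 1
demand = np.array([70.0, 60.0, 50.0])     # requirements of consumers i = 0..2
cost = np.array([                         # product price + transport cost per unit, c[j, i]
    [4.0, 6.0, 9.0],
    [5.0, 4.0, 7.0],
])

J, I = cost.shape
c = cost.ravel()                           # decision vector x[j, i] flattened row-wise

# Supply constraints: sum_i x[j, i] <= supply[j]
A_ub = np.zeros((J, J * I))
for j in range(J):
    A_ub[j, j * I:(j + 1) * I] = 1.0

# Demand constraints: sum_j x[j, i] == demand[i]
A_eq = np.zeros((I, J * I))
for i in range(I):
    A_eq[i, i::I] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print("total cost:", res.fun)
print("shipments x[j, i]:\n", res.x.reshape(J, I))
```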
 r  -the specific volumetric carrying capacity of the r-type of vehicle, calculated as the ratio of the total weight of all products to their volume intended for transportation by the r-type of vehicle;  s A -the warehouse capacity s (in specific volumetric capacity) is calculated by means of the table for converting pallet capacity into specific volumetric capacity (when developing an information system, users will work in their usual terms);  r G -the passport carrying capacity of the r-type vehicle;  r V -the body volume of the r-type vehicle; Variables The following variables were introduced:  ijk x -the volume of supplies of the k-th product from point j to point i, Equations and inequalities The following equations were solved and inequalities were used: Mathematical model of vehicle loading optimization To solve the new problem of loading each vehicle with specific products, heuristic algorithms were applied for three traffic streams. -th transport company. i=1, k=1, j=1, r=1, n=1, rn g =1. Let's also introduce the set * K = K. Step 1. If  r d then go to step 2, otherwise, go to step 5. Step 2. = r d , if at the same time , then go to step 3, otherwise, go to step 4. The vehicle rn g is loaded with good k from point j (location of the product of the j-th supplier) to point i (place of delivery of the i-th consumer). If rn g < rn N , then rn g = rn g +1. If rn g = rn N and r < n R , then r=r+1, if r= n R , then n=n+1 and go to the Step 1. Step 4. The optimization problem is solved: max Step 4.1. The vehicle rn g is loaded with good * k and part  with number 1 k . If rn g < rn R , then rn g = rn g +1. If rn g = rn N and r < n R , then r=r+1; if r= n R , then n=n+1 and go to the Step 1. Step 5. If If rn g < rn N , then rn g = rn g +1. If rn g = rn N and r < n R , then r=r+1, if r= n R , then n=n+1 and go to the Step 6. Step 6. If j < J, then j=j+1 and go to the Step 1, otherwise j=1. If i < I, then i=i+1 and go to the Step 1, otherwise go to the Step 7. Step 7. The calculations are over with receiving the direct deliveries 3 *t ijk x . Deliveries to warehouses As a result of previous calculations, suppliers have balances we will use this value for the distribution by vehicles in the future. Step 1. If t ns jg rn y 2  r d then go to the Step 2, otherwise, go to the Step 5. The vehicle rn g is loaded with good k from point j (location of the product of the j-th supplier) to the point s (warehouse). If rn g < rn N , then rn g = rn g +1. If rn g = rn N and r < n R , then r=r+1, if r= n R , Step 4.1. The vehicle rn g is loaded with good * k and part  with number 1 k . If rn g < rn R , then rn g = rn g +1. If rn g = rn N and r < n R , then r=r+1; if r= n R , then n=n+1 and go to the Step 1. Step 5. If  r d , the vehicle rn g is loaded with the product residue t ijk x * from point j (location of the product of the j-th supplier) to the point s (warehouse). If rn g < rn N , then rn g = rn g +1. If rn g = rn N and r < n R , then r=r+1, if r= n R , then n=n+1 and go to the Step 6. Step 6. If j < J, then j=j+1 and go to the Step 1, otherwise j=1. If s < S, then s=s+1 and go to the Step 1, otherwise иначе go to the Step 7. Step 7. The calculations are over with receiving the goods' k deliveries from the point j (location of the product of the j-th supplier) to the point s (warehouse) 2 *t ijk x , as well as loading warehouses 4 * p ks y . Step 1. If t nsh ig rn y 1  r d , then go to the Step 2, otherwise go to the Step 4. Delivery from the warehouse Step 2. Step 3.1. 
The vehicle rn g is loaded with good This completes the procedure for optimizing the formation of a logistics supply chain. Solving the optimization problem can be fixed by an agreement in the form of smart contracts. This model can also be used for operational management in dynamics. When a failure occurs in the supply chain, new requests will appear. To do this, it is necessary to enter the parameter t (planning period) into the model. If appropriate changes are made to the model, it will also be applicable for long-term planning with the definition of investments in infrastructure, such as the optimization of the placement of new warehouse premises through construction or lease. In improving management based on the optimization of logistics activities, the parameters of the model will also change, for example, prices. When forming a single Internet space of logistics operations, it would be possible to add the ability to connect suppliers, transport companies that are not part of the planned coalition and carry out transportation on opposite routes, to minimize idle mileage on both sides. The loading of each vehicle with specific products based on the considered heuristic algorithms for three traffic flows was carried out in ascending order of their conditional numbers. This sequence could be changed by introducing restrictions on arrival time, waiting time, time of unloading vehicles, the throughput of warehouses, dimensions and compatibility of products (cargo), etc. Mathematical model for choosing an assembly (utilization) organization The mathematical model for optimizing logistics activities in the above setting consists of a mathematical model for optimizing external logistics management and a mathematical model for choosing an assembly (utilization) organization. Since these two models, as already noted, are related only by financial constraints and the delivery time of installation equipment (installation work begins only after the delivery of the corresponding equipment), these models can be calculated separately (refer to the block programming class) with the subsequent integration of finance. Let us introduce the notation:  ijk p -the cost of the k-th type of work (installation, utilization) of the j-th contractor (supplier) of work for the i-th consumer;  j  -reputation of the j-th contractor (supplier) of works, j  = (0, 1), the value of j  is increasing with the level of supplier's reputation. Then the choice of the executor (supplier) of the k-th type of work for the i-th consumer, taking into account his reputation, is found from the solution of the following optimization problem * j :
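A compact sketch of the greedy idea behind the three-flow loading heuristic (fill the current vehicle up to its effective capacity, then move to the next one) is given below; the capacities, delivery list, and splitting rule are simplified placeholders rather than a reconstruction of the exact steps above.

```python
# Greedy loading sketch: assign planned delivery volumes to vehicles in a fixed
# order, filling each vehicle up to its effective capacity before moving on.

def load_vehicles(deliveries, capacities):
    """deliveries: list of (origin, destination, product, volume);
    capacities: effective capacities d_r, one per available vehicle."""
    plans = [[] for _ in capacities]   # goods assigned to each vehicle
    free = list(capacities)            # remaining capacity per vehicle
    v = 0
    for origin, dest, product, volume in deliveries:
        remaining = volume
        while remaining > 0 and v < len(free):
            take = min(remaining, free[v])
            if take > 0:
                plans[v].append((origin, dest, product, take))
                free[v] -= take
                remaining -= take
            if free[v] <= 0:           # vehicle full: move to the next one
                v += 1
        if remaining > 0:
            print(f"unserved volume {remaining} of {product} ({origin} -> {dest})")
    return plans

plans = load_vehicles(
    deliveries=[("j1", "i1", "k1", 7.0), ("j1", "i2", "k2", 5.0), ("j2", "i1", "k1", 4.0)],
    capacities=[10.0, 6.0],
)
for idx, plan in enumerate(plans):
    print(f"vehicle {idx}: {plan}")
```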
2021-06-03T01:01:47.100Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "4b30dec0337f05e433da1e3ba7e62fc91e43dac7", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1864/1/012100", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "4b30dec0337f05e433da1e3ba7e62fc91e43dac7", "s2fieldsofstudy": [ "Mathematics", "Engineering", "Business", "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
12991695
pes2o/s2orc
v3-fos-license
Prevalence of gastrointestinal parasitism in small ruminants in western zone of Punjab, India Aim: The aim of this study was to explore the prevalence of gastrointestinal parasitism in small ruminants in relation to various risk factors in the western zone of Punjab. Materials and Methods: During the study, 603 fecal samples (391 of sheep and 212 of goats) were examined qualitatively by floatation and sedimentation techniques, and quantitatively by McMaster technique. Results: Out of the 603 fecal (391 sheep and 212 goats) samples examined, 501 were found positive for endoparasitic infection with an overall prevalence of 83.08%, consisting of 85.16% and 79.24% in sheep and goats, respectively. Egg per gram in sheep was apparently more 1441.88±77.72 than goats 1168.57±78.31. The associated risk factors with the prevalence of gastrointestinal tract (GIT) parasites showed that females (85.97%) were significantly more susceptible than males (69.23%). Age wise the adults (>6 months) were significantly more prone to parasitic infection as compared to young ones (<6 months). Seasonal variation was recorded throughout the year and was significantly highest during monsoon (90.10%), followed by winter (83.84%) and summer (78.35%). Conclusion: The study revealed an overall prevalence of 83.08% of GIT parasitic infections in small ruminants constituting 85.16% in sheep and 79.24% in goats in the western zone of Punjab. The most relevant risk factors for the prevalence of gastrointestinal parasitism in ruminants were sex, age, and season. Introduction Small ruminants hold an important niche for sustainable agriculture in developing countries and support a variety of socioeconomic functions worldwide. India has an estimated sheep and goat population of 65 million and 135.17 million, respectively, whereas Punjab has 0.15 million sheep and 0.32 million goats as per 19 th livestock census [1]. Gastrointestinal tract (GIT) parasitism in sheep and goats is of paramount importance because small ruminants' rearing has been a major source of income especially to the marginal farmers of the country [2]. These parasites cause both acute infections with a rapid onset and high mortality levels and chronic infections, which are commonly subclinical and may lead to insidious and important economic losses [3] via reduction of live weight gain, reduced wool and milk production, and poor reproductive performance [4]. This problem is severe in tropical countries due to highly favorable environmental conditions for helminth transmission [5]. Studies dealing with the distribution and parasite control measures adopted by small landless marginal farmers in the Punjab state are very limited or absent, especially in the western zone. Present study aimed to identify the prevalence and risk factors associated with ovine and caprine GIT parasites, which is vital for future holistic prevention and control strategies in the area. Ethical approval This study was based on the fecal sample collection only, hence the ethical approval was not required. The fecal samples were directly collected from the animals without any harm or freshly voided samples with the prior consent of the owners. Study area Punjab state extends from the latitudes 29°30' N to 32°32' N and longitudes 73°55' E to 76°50' E in the northwest region of India. It covers a geographical area of 50,362 km 2 , which is 1.54% of country's total area and lies between altitudes 180 m and 300 m above mean sea level. 
Average rainfall in Punjab is 565.9 mm and ranges from about 915 mm in north to 102 mm in south. The state has been classified into five agro-climatic zones on the basis of homogeneity, rainfall pattern and distribution, soil texture, cropping patterns, etc. Western zone constitute of six districts, viz., Barnala, Bathinda, Mansa, Moga, Muktsar, and Sangrur having average annual rainfall of <400 mm, which is considered to be the hottest and drier zone of Punjab. Sample collection and fecal analysis A total of 603 (391 of sheep and 212 of goats) fecal samples were randomly collected directly from the rectum of animals or freshly voided during the period of March 2015 to May 2016 in each season uniformly from six districts of western zone. Samples were labeled accordingly and stored in ice chilled container to slow down the process of nematode eggs development during transportation. The samples were grossly examined for color, consistency, odor and for the presence of adult worms or developmental stages, if any. The fecal samples were processed and screened qualitatively using sedimentation and floatation methods for evaluating the incidence of infections. The quantitative examination or egg per gram (EPG) estimation was done as per McMaster technique [6]. A questionnaire was prepared for the prevalence in terms of various risk factors, viz., species, age, sex and season, type of management, and treatment given based on the history taken at the time of sampling. Statistical analysis Data analysis was performed using Statistical Analysis System (SAS for Windows, Version 9.4, USA). Association between the prevalence of GIT helminth infections and various factors was carried out by Chi-square test (χ 2 -test). Results In this study, out of 603 fecal samples examined, 501 were found positive with an overall prevalence of 83.08% for GIT parasitic infections (Table-1) indicating district wise significantly (p<0.05) highest prevalence in Sangrur (88.78%) and lowest in Bathinda (68.08%). The ovine (85.16%) were apparently more susceptible to the GIT parasitic infections than caprine (79.24%) ( Table-2). The district wise prevalence of GIT parasites in sheep and goat is given in Table-3. Similarly, to an overall prevalence of the GIT parasites in small ruminants, individually prevalence in both the species was highest in Sangrur district. The parasitic load in terms of mean EPG in sheep was apparently more 1441.88±77.72 than goats 1168.571±78.31. The parasite-wise distribution among the two species showed that only strongyle was significantly high in sheep (39.63%) than goats (19.04%). Sex wise an overall copro-prevalence of GIT parasites in both the species showed that females (85.97%) were significantly (p<0.01) more susceptible than males (69.23%) ( Table-4). In this study, the animals were divided into two age groups, viz., young (<6 months) and adults (>6 months). Age wise an overall prevalence between young and adult group showed that adults (>6 months) were significantly more prone to parasitic infection with the prevalence of 85.97%. In sheep, the results showed that an overall copro-prevalence of different age group was found to be significantly (p<0.01) higher in adults (89.73%) as compared to young animals (54.00%) ( Table-5). However, in goats, a nonsignificant difference was observed between young ones (71.42%) and adults (80.79%). 
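The chi-square test of association described under "Statistical analysis" can be reproduced in a few lines; the contingency table below contains hypothetical counts chosen only to illustrate the calculation, not the study's actual data.

```python
from scipy.stats import chi2_contingency

# Hypothetical infection-by-sex contingency table (placeholder counts).
table = [
    # infected, not infected
    [430, 70],   # females
    [ 71, 32],   # males
]

chi2, p, dof, expected = chi2_contingency(table)
prev_f = table[0][0] / sum(table[0]) * 100
prev_m = table[1][0] / sum(table[1]) * 100
print(f"prevalence: females {prev_f:.1f}%, males {prev_m:.1f}%")
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```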
The data collected in different months were partitioned according to season, viz., Monsoon (July to October), winter (November to February), and summer: (March to June) ( Table-6). Season wise copro-prevalence of GIT parasitic infections in both the species was significantly (p<0.01) highest in monsoon (90.10%), followed by winter (83.84%) and lowest in summer (78.35%). The quantitative parasitic load based on the mean values and standard error of EPG of helminth infection was highest in monsoon followed by winter and then summer (Table-7). The degree (severity) of helminth parasitic infection was determined from the EPG count. Out of 603 samples, 36.31% were infected lightly (EPG range 100-1000) and 3.64% were found highly positive with mean EPG range >4000. The animals with fecal egg count in the range of 1000-2000 were 23.21%, between 2000 and 3000 were 8.78% and only few proportions of animals had fecal egg count of 3000-4000 (1.65%) ( Table-8). Discussion There was slight variation in prevalence of GIT parasitic infection among five districts except Bathinda. The lowest prevalence in Bathinda district may be attributed to the fact that most of the animals examined were kept in confinement and managed on intensive system management. They were having restricted access to outer infection sources and were dewormed regularly as suggested by the veterinarian. In contrast in district Sangrur, the highest prevalence may be due to the fact that the field flocks of sheep and goats encountered during the study were mainly from the nomadic farmers that kept on changing the pastures, thus had an access to abundant of various parasitic egg/ova prevailing in these areas and rarely they preferred deworming their animals. District wise, the single parasitic infection was higher in Bathinda (57.44%), while the dual infection was high in Muktsar (49.48%) and multiple infections having more than three parasites were high in Sangrur district (14.01%). The high incidence of single infection in Bathinda district may be due to the fact that encountered animals were reared on the intensive grazing system. The results of the species-wise prevalence (Tables-2 and 4) revealed that the sheep was more susceptible to helminth infection than goats. Similar observations were reported in different states of India [5][6][7][8][9]. Higher prevalence of GIT parasitic infections in sheep as compared to goats was probably due to their grazing behavior. Sheep grazes very close to the ground so risk of ingestion of parasitic ova is comparatively more than the goats, as they are browsers [10]. In contrast to the present findings, higher rates of infection throughout the year in goats were reported [11,12]. This variation in prevalence depends on the difference in agro-climatic condition and availability of susceptible host [5]. During the present study, it was found that overall prevalence of parasitic infection was significantly higher in females than their counterpart males. Among sheep, a significantly (p<0.01) higher prevalence was recorded in females (87.38%) as compared to their male counterparts (72.41%). Similarly to sheep, the infection in goats was found to be significantly (p<0.01) higher in females (83.13%) than males (65.21%). The influence of sex on the susceptibility of animals to infections could be attributed to genetic predisposition and differential susceptibility owing to hormonal control. 
The physiological peculiarities of female animals usually constitute stress factors that reduce their immunity to infections; in addition, as lactating mothers, females tend to be weak and malnourished, which, among other reasons, makes them more susceptible to infections [13,14]. The current study revealed that adults, with a prevalence of 85.97%, were significantly more prone to parasitic infection than young animals. The higher nematode prevalence in adults might be explained by grazing over a larger area of pasture contaminated by various flocks, and by stress conditions such as climate, long daily travel, and gestation [15]. Young animals are less susceptible to parasitic infections because they depend mainly on milk feeding and are therefore less exposed to grazing. Our findings are in concordance with Yadav et al. [16] and Emiru et al. [17], who recorded a higher prevalence of infection in adults than in young animals. Of the three seasons, the highest prevalence of parasitic infection was recorded in monsoon, followed by winter and then summer. These findings are consistent with various published reports [8,18,19]. The higher prevalence in monsoon could be attributed to favorable climatic conditions, viz., humidity and temperature, which support parasitic growth and development and lead to increased availability of infective larvae in this season. It is well documented that GIT parasitism in grazing animals is directly related to the availability of larvae on pasture and seasonal pasture contamination [19]. Climatic factors also influence larval dispersion on the herbage, which increases the chance of contact between host and larvae. The higher rate of infection in monsoon may also be attributed to suitable molarities of salt in the soil, an important factor for ecdysis [20]. Such climatic conditions also favor bacterial multiplication, which provides nutrition to free-living larvae. Moreover, the high prevalence in adults and in the summer season may coincide with the fact that lambing and kidding in the study area normally occur in February (winter season) and October (monsoon season); the periparturient rise in egg counts may be responsible for the overall rise in infection rate during these seasons. In contrast to the current findings, the highest prevalence of GIT parasites during monsoon followed by summer and winter has been reported [15]. Hutchinson et al. [21] reported that a cold stimulus is responsible for arrested development of larvae. During winter, animals are also partially stall-fed, which reduces the chance of infection. The grazing period is also reduced during winter, and pre-parasitic stages undergo hypobiosis, which further contributes to the low infection rate during this period. The majority of ewes are pregnant during this period, and the hormonal impact results in low fecal egg output and contributes to the low availability of infection on pastures. Regarding the levels of EPG to be considered pathogenic, there is wide variation in the opinions of researchers, and no firm lower and upper EPG limits have been fixed. An experimental study [22] categorized resistant goats with an EPG range of 250-1800 and susceptible goats with an EPG range of 5400-14,900, while Palamapalle et al. [23] reported 6023 EPG (range 3000-105,000) in subclinical nematode infection.
This study revealed that the prevalence of nematode infection was not associated with clinical form, though an increase in the EPG count is positively correlated with worm burden [18]. Anthelminthic resistance also influences prevalence and egg counts [24].
2018-04-03T00:36:20.217Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "b3447db75bed4541f3986cb9e3f01762bf0fc3d3", "oa_license": "CCBY", "oa_url": "http://www.veterinaryworld.org/Vol.10/January-2017/10.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b3447db75bed4541f3986cb9e3f01762bf0fc3d3", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
90391245
pes2o/s2orc
v3-fos-license
Human Influence at the Coast: Upland and Shoreline Stressors Affect Coastal Macrofauna and Are Mediated by Salinity

Anthropogenic stressors can affect subtidal communities within the land-water interface. Increasing anthropogenic activities, including upland and shoreline development, threaten ecologically important species in these habitats. In this study, we examined the consequences of anthropogenic stressors on benthic macrofaunal communities in 14 subestuaries of Chesapeake Bay. We investigated how subestuary upland use (forested, agricultural, developed land) and shoreline development (riprap and bulkhead compared to marsh and beach) affected the density, biomass, and diversity of benthic infauna. Upland and shoreline development were parameters included in the most plausible models among a candidate set compared using corrected Akaike's Information Criterion. For benthic macrofauna, density tended to be lower in subestuaries with developed or mixed upland use compared to forested or agricultural upland use. Benthic biomass was significantly lower in subestuaries with developed compared to forested upland use, and biomass declined exponentially with the proportion of near-shore developed land. Benthic density did not differ significantly among natural marsh, beach, and riprap habitats, but tended to be lower adjacent to bulkhead shorelines. Including all subestuaries, there were no differences in diversity by shoreline type. In low salinities, benthic Shannon (H′) diversity tended to be higher adjacent to natural marshes compared to the other habitats, and lower adjacent to bulkheads, but the pattern was reversed in high salinities. Sediment characteristics varied by shoreline type and contributed to differences in benthic community structure. Given the changes in the infaunal community with anthropogenic stressors, subestuary upland and shoreline development should be minimized to increase benthic production and subsequent trophic transfer within the food web.

Introduction

Coastal ecosystems are threatened by anthropogenic stressors as human populations flock to coastal areas in record numbers (Halpern et al. 2007; Airoldi and Beck 2007). These coastal habitats are highly productive and serve important roles as feeding grounds, nursery areas, spawning areas, and corridors for migration of ecologically and commercially important marine species (Beck et al. 2001; Seitz et al. 2014); therefore, the loss or degradation of these habitats can have dramatic effects on ecosystem productivity. Among coastal areas, estuarine habitats and wetlands are especially productive, serving multiple ecosystem functions, with their value expected to increase in the future (Costanza et al. 1997, 2014). In response to sea-level rise and coastal erosion, estuarine shorelines are being hardened at alarming rates, with losses to wetlands and coastal habitats (Gittman et al. 2015). Multiple stressors, including the combination of upland and shoreline development, could have synergistic effects on estuarine fauna (Crain et al. 2008). The combined effects of these factors on estuarine fauna have rarely been examined (King et al. 2005; Li et al. 2007; Patrick et al. 2014), and never, to our knowledge, for the infaunal, deep-dwelling benthic community.

Upland Use

Construction of human infrastructure has resulted in increased watershed development and runoff (Jantz et al. 2005), which can add sediments, nutrients, pesticides, and contaminants to water (Jordan et al. 1997; Paul and Meyer 2001; Gregg et al. 2015).
This can negatively impact benthic invertebrates (Hale et al. 2004; King et al. 2005), fish (Sanger et al. 2004), and waterbird communities (DeLuca et al. 2008; Prosser et al. 2017). Pollutants can reduce the abundance, diversity, and trophic complexity of benthic species in favor of small, short-lived, opportunistic species, such as deposit-feeding polychaetes (Pearson and Rosenberg 1978; Warwick and Clarke 1993; Inglis and Kross 2000). Examining the combined effects of shoreline and upland development will help managers incorporate trade-offs between anthropogenic use and impacts on estuarine fauna into management and conservation decisions.

Shoreline Alteration

The extensive use of bulkheads (vertical seawalls) and riprap (rocky revetments) to armor shorelines and the construction of jetties, docks, piers, and marinas all result in the replacement of natural habitat with artificial structures, with modifications to the dynamics of these critical systems (Gittman et al. 2016b). Shoreline armoring has increased in developed countries including the Netherlands, Japan, and the USA; for example, 14% of the US coastline has been armored, with some subestuaries having over 50% of shorelines developed (Gittman et al. 2015). In other areas (e.g., France, Spain, Italy), 45% of the coastal zone has been developed (Bulleri and Chapman 2010), and erosion protection through shoreline development is a problem (Jiménez et al. 2016; Santana-Cordero et al. 2016; Harik et al. 2017). Shoreline hardening is of particular interest in Chesapeake Bay. The dearth of information on the ecological effects of shoreline structures (Weinstein and Kreeger 2000) can limit managers' understanding of habitat degradation, reducing their ability to make environmentally sound decisions. In Chesapeake Bay, it remains unknown whether there are thresholds of shoreline and upland development that, if exceeded, lead to loss of ecosystem services, but thresholds of development have been identified in other systems (Dethier et al. 2016). With shoreline development, there may be some species that are "winners" (e.g., opportunistic species) and some that are "losers" (e.g., sensitive species) (Weisberg et al. 1997); thus, overall diversity may not necessarily change with shoreline development. Therefore, our study aims to quantitatively estimate the effects of upland and shoreline development on the shallow, subtidal benthos of Chesapeake Bay.

Salinity

Estuarine salinity gradients structure flora and fauna, particularly in benthic communities, and they can mediate the effects of anthropogenic stressors. For example, differences in salinity result in differential responses of seagrass to shoreline development in Chesapeake Bay (Patrick et al. 2016). Moreover, physical or chemical stressors, such as hypoxia, have differential effects on benthos depending on salinity regime (King et al. 2005; Seitz et al. 2009). Predators may be more abundant in high-salinity areas, and coupling of the benthos with adjacent habitats may be greater in low-salinity areas, leading to differential effects on benthos in different salinity regimes.
For example, blue crabs and fish are more abundant in high-salinity subestuaries (King et al. 2005), particularly near marsh habitats (Kornis et al. 2017), and their feeding may reduce benthic biomass (Menge and Sutherland 1987). Low-salinity (≤ 15 psu), upper-Bay subestuaries are typically smaller, shallower water bodies than high-salinity subestuaries, leading to closer coupling of the benthos with adjacent habitats that would be linearly related to benthic exchange with the water column (Gerritsen et al. 1994). Therefore, responses of benthic communities to upland and shoreline development may be expected to differ by salinity regime.

Faunal Responses

Shoreline development and upland use affect benthic communities and predators. Benthic communities are effective indicators of ecological condition and harbingers of ecological stress (Widdicombe and Spicer 2008). Abundance and diversity of subtidal, infaunal benthic invertebrates are higher adjacent to natural marsh than bulkhead shorelines and intermediate at riprap shorelines, and predator density and diversity tend to be higher adjacent to natural marsh shorelines. Negative effects of hardened shorelines can be compounded when there is extensive development in the surrounding landscape (Seitz and Lawless 2008). Further, there can be wider, ecosystem-level impacts of development. A Chesapeake Bay-wide trawl survey identified benthic invertebrates as the predominant source of carbon in the diets of fishes (Buchheister and Latour 2015). Given the key role of benthic invertebrates in the food web, ecologically and economically important species of invertebrates and finfish may be stressed by both habitat loss and prey reduction at the land-water interface. Several studies report negative effects of altered shorelines on predators in adjacent waters (Hendon et al. 2000; Carroll 2003; Peterson and Lowe 2009; Kornis et al. 2017), and one review summarizes the negative effects on estuarine fish (Munsch et al. 2017), but these studies did not concurrently examine the infaunal benthos. We hypothesized that (1) upland development and its associated stressors reduce benthic density, biomass, and diversity and thereby exacerbate the effects of shoreline hardening, and (2) shoreline development reduces the local density, biomass, and diversity of benthic infauna. We compared the density, diversity, and biomass of benthic macrofauna adjacent to four shoreline types across a range of salinities and land uses in replicate subestuaries of Chesapeake Bay. We also tested the generality of previous findings on the effects of shoreline alterations through a large-scale, multi-subestuary empirical study. The experimental design included (i) two salinity regions (high salinity, generally polyhaline to meso-polyhaline, or > 15 psu, and low salinity, generally low-mesohaline, or ≤ 15 psu) to capture a range of conditions and account for salinity-driven differences among regions, (ii) subestuaries of three differing predominant land usage patterns (forested, agricultural, or developed), and (iii) four shoreline types per subestuary (natural Spartina marsh, sandy beach, riprap, and bulkhead [= seawall]), thereby assessing the impact of multiple stressors on benthic communities (Table 1).

Study Locations

The Chesapeake Bay is 300 km long, and its shoreline consists of over 100 subestuaries (Li et al. 2007; Weller and Baker 2014). Each of these subestuaries is unique, and their watersheds have differing proportions of forested, agricultural, and developed upland use (Li et al. 2007; Patrick et al. 2014).
Urban development pressure tends to be highest in the upper Bay (King et al. 2005) due to the large cities in that area. The eastern shore of the Bay is heavily developed with agriculture, including crops and chicken farms, leading to runoff of sediments, nitrogen, and phosphorus (Jordan et al. 1997; Prasad et al. 2014). We investigated 14 subestuaries throughout the Chesapeake Bay from 2010 to 2013 (Fig. 1). The subestuaries were selected based on their primary upland use (forested, agricultural, developed, or mixed), as characterized by Li et al. (2007) and Patrick et al. (2014), and on the amount of hardened shoreline and shoreline condition (VIMS-CCRM: http://ccrm.vims.edu/gisdatabases.html). Dominant watershed land cover was obtained from the 2006 National Land Cover Dataset, which was derived from Landsat 7 satellite remote sensing imagery with 30-m resolution (Fry et al. 2011). The watersheds were each classified (by Li et al. 2007 and Patrick et al. 2014) into a category of dominant land cover based on the following: "(1) forested (≥ 60% forest and forested wetland), (2) developed (≥ 50% developed land), (3) agricultural (≥ 40% cropland), (4) mixed-developed (15-50% developed land), (5) mixed-agricultural (20-40% cropland), and (6) mixed-undisturbed (watersheds which did not fit into categories 1-5)" (Patrick et al. 2014). We used the term "mixed" for category 6. The subestuaries included developed subestuaries (Stony Creek, Magothy River, Mill Creek, and Poquoson), agricultural subestuaries (Miles River, Harris Creek, Onancock Creek, Occohannock Creek), forested subestuaries (Monroe Bay, Corrotoman River, East River, Severn River, and Catlett Islands), and a mixed subestuary (Yeocomico Creek) (Fig. 1). Two to seven different subestuaries were sampled annually between late June and early August in 2010 to 2013 (Table 2). The percentage of watershed use was not available for Catlett Islands, so it was interpolated using the mean calculated from the upland-use values of two nearby creeks, Poropotank Bay and Queen's Creek. We chose subestuaries within the mesohaline to polyhaline regimes (5-30 psu) so that the benthic communities were not composed of oligohaline species.

Survey Methods

In each subestuary, four shoreline types were sampled: marsh, beach, riprap revetment, and bulkhead. Four to six replicates were sampled at each shoreline in each subestuary. As in other surveys of this type, there may have been some spatial confounding because shoreline types were clustered within subestuaries. In 2010, four randomly selected replicates were collected at each shoreline type, resulting in 16 samples in both East River and Occohannock Creek. From 2011 to 2013, six replicate samples were collected at each of the four shoreline types, resulting in 24 samples in each subestuary. However, in three subestuaries (Harris Creek, Mill Creek, and Catlett Islands), samples were collected at only 3, 1, and 0 beaches, respectively, due to the limited number of beach shorelines. Where available, we used areas that had > 30 m of the particular shoreline type, and we randomly selected from among multiple sites of a given shoreline type. Five to six replicates per treatment are sufficient to detect differences in densities of infauna among shoreline types.

Table 1 note: k is the number of parameters in each model, which includes 1 for variance; if a β is located in a column, then that variable was included in the model. Excluding the Yeocomico, models were run for all subestuaries, low-salinity subestuaries (Sal ≤ 15 psu), and high-salinity subestuaries (Sal > 15 psu).
Infaunal organisms were collected 5 m from shore using a benthic suction sampler designed to capture deep-dwelling macrofauna (Eggleston et al. 1992), and this distance from shore has been sufficient to show significant effects of shoreline type on benthic organisms in past research. We took two to four samples of each shoreline type on both rising and falling tides (mean tidal range of 1 m). We used a 0.11-m² PVC cylinder inserted to a depth of 30 cm in the sediment, evacuated the contents of the cylinder, and sieved them through a 3-mm mesh bag. This sampling targets large macrofauna, such as bivalves, that are deep-dwelling, sparsely distributed, and important prey items for blue crabs and epibenthic fish (Seitz et al. 2003). The contents of the mesh bag were frozen until sorting. In the laboratory, each sample was sorted thoroughly and double-checked with a second sorting. Infauna were identified to the lowest taxon possible (usually species, except some polychaetes, e.g., Capitellidae and Spionidae), enumerated, and stored in 70% ethanol. Organisms were dried for 48 h at 70 °C and then combusted in a muffle furnace for 4 h at 550 °C to obtain ash-free dry weights (AFDW). For each sample, bulk weights were obtained for most taxa (e.g., polychaetes, crustaceans, anemones), while bivalve species were weighed separately. Our research examined direct and indirect effects of multiple stressors and focused on benthic communities as well as ecologically important individual benthic species. Direct effects of shoreline development and upland use include changes in macrofaunal species composition. Indirect effects include changes in sediment grain size and composition, lowered dissolved oxygen, reduced submerged aquatic vegetation (SAV) abundance, and the spread of invasive saltmarsh plants in ways that secondarily influence macrofauna. The ecologically important species in our study included some long-lived, pollution-sensitive (= sensitive) taxa (sensu the Chesapeake Bay Benthic Monitoring Program's categorization: Weisberg et al. 1997), such as the clam Limecola balthica, an important sessile prey species that cannot alter its distribution in response to stressors, unlike fish and mobile invertebrates. Sensitive taxa included the polychaetes Clymenella torquata and Glycera americana and the clams Limecola balthica, Rangia cuneata, and Tagelus plebeius, which are indicative of a mature community (Weisberg et al. 1997; Llansó et al. 2002). We also examined responses of pollution-indicative (= tolerant) taxa, including the small polychaetes Leitoscoloplos spp. and Eteone heteropoda (Dauer 1993) and the clam Mulinia lateralis (Weisberg et al. 1997; Llansó et al. 2002), which are opportunistic, weedy species that are most common in degraded habitats (Pearson and Rosenberg 1978; Dimitriou et al. 2015). We compared tolerant taxa and sensitive taxa among shoreline types. Water quality was measured at each site with a calibrated YSI Pro-Plus multiparameter instrument for temperature (°C), salinity, and dissolved oxygen (mg L−1). Additionally, two sediment samples were collected with a 4-cm² syringe to a depth of 5 cm. One core was collected for total organic carbon and nitrogen (TOC/TN) content, and an Exeter CE440 elemental analyzer was used to quantify carbon, hydrogen, and nitrogen (CHN) content.
The second core was used for grain-size analysis using a standard wet-sieving and pipetting technique (Plumb 1981).

(Table 2: year sampled and mean values (± SE) of water quality and sediment for each subestuary, with the upland-use type indicated in parentheses.)

Statistical Analyses

The effects of environmental variables (shoreline type, upland use, salinity, and sediment type) on infaunal density, biomass, and Shannon diversity were examined in an information-theoretic framework using general linear models. We used simple linear regression to examine relationships between some variables to establish independence. TOC and TN were positively related to each other and to sediment type (see Lawless and Seitz 2014) and therefore could not be included as independent variables in the analyses. Density estimates were obtained by standardizing the raw infaunal abundance per replicate by the surface area of the PVC cylinder, and we calculated a mean and standard deviation from the six replicates to obtain density, biomass, or diversity per shoreline type. Shannon (H′) diversity and richness (S) were calculated using PRIMER v7. Explanatory general linear models were created based on multiple working hypotheses regarding influential variables. Akaike's Information Criterion (AIC) was used to evaluate the log likelihood of each explanatory model within a candidate set of models while accounting for the number of parameters of each model (Anderson 2008). Likelihood was estimated from general linear models, and raw data were transformed if they deviated from a normal distribution. To account for the potential bias associated with a small sample size, the corrected AIC (AICc) was used; the correction factor approaches zero as the sample size increases (Anderson 2008). Parameter estimates and standard errors were generated via multimodel inference across all models containing a particular parameter (Anderson 2008). We proposed a total of eight candidate models (Table 1) for each of the three response variables. In addition to using AICc to explore the full dataset, we also examined model selection for the data split into low-salinity (≤ 15 psu) and high-salinity (> 15 psu) subestuaries. The high-salinity subestuaries included East River, Occohannock Creek, Poquoson, Severn River, Catlett Islands, and Onancock Creek; the low-salinity subestuaries included Corrotoman River, Harris Creek, Magothy River, Miles River, Monroe Bay, Stony Creek, and Mill Creek. Our division agreed with the classification of these subestuaries as high salinity (polyhaline) or low salinity from long-term Chesapeake Bay monitoring (Li et al. 2007). The Yeocomico subestuary was excluded from AIC analyses, as it was the only subestuary with mixed upland use. While diversity and biomass followed a normal distribution, density exhibited a right-skewed distribution and was log-transformed. For AIC analyses, samples containing zero individuals (3 instances out of 284 samples) were removed. Analyses were run in the open-source statistical package R (R Development Core Team 2016). We also used general linear models in some cases to more closely examine differences among levels of categorical upland or shoreline factors separately for the three response variables (benthic density, biomass, and diversity), with α ≤ 0.05 as the cutoff for statistical significance. Fisher post-hoc multiple comparison tests were used to determine differences between levels of each factor for significant general linear models.
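As an illustration of the AICc-based model comparison just described, the sketch below uses base R with simulated stand-in data; the variable names, the reduced candidate set, and the simulated values are ours, not the study's (the actual analysis compared the eight candidate models listed in Table 1). It fits several candidate general linear models to log-transformed density, computes the small-sample corrected AIC by hand, and ranks the models by ΔAICc and Akaike weight.

# Simulated stand-in data; the real response was infaunal density standardized
# to the 0.11-m2 cylinder area and log-transformed because it was right-skewed.
set.seed(1)
dat <- data.frame(
  density   = rlnorm(120, meanlog = 4, sdlog = 0.6),
  shoreline = factor(sample(c("marsh", "beach", "riprap", "bulkhead"), 120, replace = TRUE)),
  upland    = factor(sample(c("forested", "agricultural", "developed"), 120, replace = TRUE)),
  salinity  = runif(120, 5, 30),
  sand      = runif(120, 60, 100)   # % sand + gravel
)
dat$logdens <- log(dat$density)

# A subset of candidate general linear models (hypothetical model names)
cands <- list(
  null      = lm(logdens ~ 1, data = dat),
  shoreline = lm(logdens ~ shoreline, data = dat),
  upland    = lm(logdens ~ upland, data = dat),
  sh_up     = lm(logdens ~ shoreline + upland, data = dat),
  global    = lm(logdens ~ shoreline + upland + salinity + sand, data = dat)
)

# AICc = AIC + 2k(k + 1)/(n - k - 1), where k counts all estimated parameters,
# including the residual variance (hence the "+ 1")
aicc <- function(m) {
  k <- length(coef(m)) + 1
  n <- nobs(m)
  AIC(m) + (2 * k * (k + 1)) / (n - k - 1)
}

tab <- data.frame(model = names(cands), AICc = sapply(cands, aicc))
tab$dAICc  <- tab$AICc - min(tab$AICc)                              # delta AICc
tab$weight <- exp(-0.5 * tab$dAICc) / sum(exp(-0.5 * tab$dAICc))    # Akaike weights
tab[order(tab$AICc), ]

Model-averaged parameter estimates can then be obtained by weighting each model's coefficients by these Akaike weights, in the spirit of the multimodel inference of Anderson (2008).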
We used non-linear least squares regression to examine mean community biomass versus the proportion of land developed within 250 m of the shoreline across all subestuaries (Fig. 2), since non-linear relationships were hypothesized based on previous work on macrofaunal responses to developed land use (Bilkovic et al. 2006). Distance-based permutational multivariate analysis of variance (PERMANOVA; McArdle and Anderson 2001; Anderson 2001) was used to test for differences in benthic communities by shoreline type and upland use. Bray-Curtis similarities were calculated on square-root transformed species abundance and biomass matrices (to normalize the data) and permuted 9999 times. The design for the analysis consisted of three factors: shoreline (Sh), four levels, fixed; upland use (Up), three levels, fixed; and river (Ri), nested in upland use (Up), 13 levels, fixed. We used the Type III sum of squares within PERMANOVA to determine significance because we had an unbalanced sampling design with different numbers of subestuaries in each upland type. Non-metric multidimensional scaling (nMDS) ordination plots of centroids were used to summarize patterns of benthic assemblages by the factors upland use (forested, agricultural, and developed) and shoreline (marsh, beach, riprap, and bulkhead). The centroid was determined as the center point, in multidimensional space, of all samples for a given shoreline type within an upland type. Matrices of the distances among centroids were derived from Bray-Curtis similarity matrices of square-root transformed infaunal community data, and the centroid ordination plots were derived from these distance-among-centroid matrices for both abundance and biomass data. All PERMANOVA, nMDS, and distance-among-centroid species contributions were calculated in the PRIMER v7 PERMANOVA+ add-on package (Clarke et al. 2014).

Physical Characteristics

Temperature and dissolved oxygen varied little among subestuaries, but salinity varied substantially among subestuaries (Table 2). Sediment % sand + gravel was generally high, as 13 of the 14 subestuaries had sediment > 80.0% sand + gravel. Overall by shoreline type, beaches had the highest percentage of sand + gravel (94.60%), followed by riprap (88.46%), bulkhead (86.94%), and lastly marsh (80.37%). Many forested and agricultural subestuaries (e.g., Miles River, Monroe Bay, Severn River, Corrotoman River) tended to have high TOC (Table 2), whereas developed or mixed subestuaries (e.g., Magothy River, Yeocomico River) had some of the lowest TOC. Both total organic carbon (TOC) and total nitrogen (TN) were two times higher at marsh than at bulkhead and riprap and four times higher than at beaches. TOC and TN were significantly and tightly related (R² = 0.94), and both were inversely related to % sand + gravel (R² = 0.46 and 0.30, respectively).

(Fig. 2: upland-use percentage 250 m from shore for the experimental subestuaries. The "agricultural" category includes cultivated crops, such as corn and soy, as well as grasses and hay.)

Upland Use

We collected 37 benthic taxa in the 3-mm suction samples throughout the Bay. Polychaetes were the dominant taxon by richness (21 taxa), followed by bivalves (11 species). Thirteen taxa contributed 90% of the abundance. The nereid polychaete Alitta succinea contributed most to abundance (28.92%), followed by the clam Limecola balthica (13.71%), spionid polychaetes (10.33%), and the clam Ameritella mitchelli (9.60%).
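The regression and multivariate tests described in the Statistical Analyses section can be sketched in R as follows. This is a hedged illustration with synthetic data: the exponential form and starting values are illustrative, and the vegan package's adonis2 and metaMDS are an open-source analogue of, not a substitute for, the nested, Type III PERMANOVA and centroid nMDS run in PRIMER.

library(vegan)

# Exponential decline of mean community biomass (g AFDW m-2) with % developed
# land within 250 m of shore, fit by non-linear least squares (synthetic data)
set.seed(2)
dev_pct <- runif(14, 0, 60)
biomass <- 10 * exp(-0.15 * dev_pct) + rnorm(14, 0, 0.5)
fit <- nls(biomass ~ a * exp(-b * dev_pct), start = list(a = 10, b = 0.1))
summary(fit)

# PERMANOVA-style test on Bray-Curtis dissimilarities of square-root transformed
# abundances, permuted 9999 times (simplified two-factor design for illustration)
comm <- matrix(rpois(40 * 12, 3), nrow = 40)   # 40 samples x 12 taxa (synthetic)
env  <- data.frame(
  shoreline = factor(rep(c("marsh", "beach", "riprap", "bulkhead"), each = 10)),
  upland    = factor(rep(c("forested", "agricultural", "developed", "mixed"), times = 10))
)
adonis2(sqrt(comm) ~ upland * shoreline, data = env, method = "bray", permutations = 9999)

# Unconstrained nMDS of the same matrix; the study ordinated centroids of
# shoreline-within-upland groups rather than individual samples
ord <- metaMDS(sqrt(comm), distance = "bray", trace = FALSE)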
By upland use, similar taxa were dominant across the subestuary upland-use types, but the percent contributions of each taxon to the community varied, with 4-8 taxa encompassing over 90% of total abundance (Table 3). Agricultural subestuaries were dominated (> 70%) by four taxa, A. mitchelli, Alitta succinea, L. balthica, and capitellid polychaetes, and these four cumulatively contributed the same percentage of abundance as did the polychaete A. succinea alone in forested subestuaries. Developed and mixed-developed subestuaries were also dominated by A. succinea, and secondarily by Ameritella mitchelli and Rangia cuneata in developed subestuaries and by spionids in the mixed-developed subestuary. Comparing density among differing upland-use types across all subestuaries, there were no significant differences but some notable trends. Density tended to be higher in forested and agricultural subestuaries than in developed or mixed subestuaries (Fig. 3a). In low salinity, agricultural subestuaries tended to have the highest densities, forested and developed subestuaries were intermediate, and the mixed subestuary tended to be lower (Fig. 3b). In high salinity, densities tended to be higher in forested, intermediate in agricultural, and lowest in developed subestuaries (Fig. 3c). There were significant differences in biomass among upland-use types. Biomass was significantly higher in forested and agricultural subestuaries than in developed or mixed subestuaries (Fig. 3a, d). In high salinity, biomass was not statistically different across upland-use types, with a tendency for highest biomass in forested subestuaries (Fig. 3f).

(Fig. 3: upland-use effects on mean (± SE) 3-mm infaunal density (panels a-c) and biomass (panels d-f) for all subestuaries, subestuaries ≤ 15 psu, and subestuaries > 15 psu. Upland type: For, forested; Ag, agricultural; Dev, developed; Mix, mixed. Means are from replicates among all shoreline types within a subestuary; small letters above bars denote significant differences determined by a Fisher post-hoc multiple comparison test at α = 0.05; NA indicates data not available.)

Overall, richness and diversity were relatively low, with highest mean values of 6 and 1.4, respectively. There were some significant differences in richness and diversity among subestuary upland-use types. Richness among all subestuaries was highest in agricultural compared to forested, developed, and mixed subestuaries (Fig. 4a) (general linear model and Fisher test p < 0.0001). In low-salinity regions, richness was highest in agricultural and developed subestuaries (Fig. 4b) (general linear model and Fisher test p < 0.0001), while in high-salinity regions, richness was not statistically different across upland-use types (Fig. 4c). In all subestuaries and in low- and high-salinity regions, Shannon (H′) diversity was higher in agricultural and developed subestuaries than in forested subestuaries (Fig. 4d-f) (general linear model and Fisher test p < 0.0001, p < 0.0001, and p = 0.003, respectively).

(Fig. 4: upland-use effects on mean (± SE) 3-mm infaunal richness (panels a-c) and diversity (panels d-f) for all subestuaries, subestuaries ≤ 15 psu, and subestuaries > 15 psu. Upland type: For, forested; Ag, agricultural; Dev, developed; Mix, mixed. Small letters above bars denote significant differences determined by a Fisher post-hoc multiple comparison test at α = 0.05; NA indicates data not available.)

Across subestuaries, there was an exponential decline in infaunal biomass with % development in the zone 250 m from the shore (Fig. 5). In the subestuaries with < 20% upland development, mean biomass ranged from a high of 10.5 g AFDW m−2 to a low of 1.5 g AFDW m−2. In subestuaries with ≥ 20% development, biomass remained low and never reached higher than 3.5 g AFDW m−2. Community assemblages differed by upland use. Centroid nMDS ordination plots of abundance assemblages by shoreline type in each upland type clustered tightly by upland use and did not overlap other upland-use groups (Supplementary Fig. A1a). Centroid nMDS ordination plots of biomass assemblages clustered by upland use, and shoreline types for forested and developed uplands did not overlap other upland-use groups.
Centroids of shoreline types within forested upland use clustered the tightest, while centroids of shoreline types within agricultural upland use had greater dispersion. The centroids of agricultural marsh and beach biomass were closer to the forested shoreline types than to agricultural beach and riprap shoreline types (Supplementary Fig. A1b). PERMANOVA revealed significant effects of "Upland Use," "Shoreline," and "River" (nested within "Upland Use") for abundance assemblages (Table A1). A significant interaction between "Shoreline" and "River" (nested within "Upland Use") indicated that shoreline effects varied among rivers and by upland use. For biomass assemblages, PERMANOVA revealed significant effects of "Upland Use," "Shoreline," and "River" (nested within "Upland Use") (Table A1).

Shoreline Development

Across all subestuaries, there were no significant differences in density by shoreline type, though some trends were evident (Fig. 6a-c). Among all subestuaries, density tended to be lowest at bulkhead shorelines (Fig. 6a). In low-salinity subestuaries, density tended to be higher at beaches than at all other shoreline types and lowest at bulkheads (Fig. 6b). In high-salinity subestuaries, density did not differ among shoreline types, and no consistent patterns were evident (Fig. 6c; general linear model: p = 0.378). Biomass did not differ significantly by shoreline type, and trends were mixed (Fig. 6). Among all subestuaries, biomass was equivalent among bulkhead, riprap, and beach habitats (Fig. 6d). In the low-salinity subestuaries, biomass tended to be higher at beaches than at all other shoreline types (Fig. 6e). In the high-salinity subestuaries, biomass did not differ among shoreline types, but trends were opposite to those in low-salinity subestuaries (Fig. 6f; general linear model: p = 0.184). Across all subestuaries, richness and Shannon diversity did not differ by shoreline type; they tended to show opposite patterns by shoreline type depending on whether the subestuaries were high salinity or low salinity (Fig. 7). In the low-salinity subestuaries (salinity ≤ 15 psu), richness and Shannon diversity tended to be higher at marshes and beaches than at developed shoreline types (Fig. 7b, e). In the high-salinity subestuaries (salinity > 15 psu), richness and diversity were lowest at marshes (general linear model p = 0.018 and 0.005, respectively).
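The univariate comparisons reported above (general linear models on Shannon diversity and richness, followed by Fisher post-hoc tests) can be sketched in R as follows. This is an illustration with synthetic data; the study computed H′ and S in PRIMER, and vegan's diversity and specnumber are used here only as analogues, while Fisher's LSD is approximated by unadjusted pairwise t-tests run only when the overall model is significant.

library(vegan)

set.seed(3)
comm  <- matrix(rpois(60 * 15, 2), nrow = 60)   # 60 samples x 15 taxa (synthetic)
group <- factor(rep(c("marsh", "beach", "riprap", "bulkhead"), length.out = 60))

H <- diversity(comm, index = "shannon")          # Shannon H' per sample
S <- specnumber(comm)                            # taxon richness per sample

mod <- lm(H ~ group)                             # general linear model by shoreline type
anova(mod)                                       # overall test at alpha = 0.05

# Fisher-style post-hoc comparison: unadjusted pairwise t-tests among levels
pairwise.t.test(H, group, p.adjust.method = "none")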
Taxon-Specific Responses to Shoreline Development

Individual taxon patterns by shoreline type (Table 4) differed depending on whether the taxon was long-lived and sensitive or short-lived and opportunistic (tolerant) (pollution sensitivity as defined by Weisberg et al. 1997). Limecola balthica biomass was highest at marshes and beaches; biomass at marshes was nearly two times higher than at riprap and three times higher than at bulkheads (Fig. 8a) (general linear model: p = 0.041). Ameritella mitchelli (a sensitive clam) biomass tended to be higher at natural than at hardened shorelines (Fig. 8b). Mulinia lateralis (a tolerant clam) biomass tended to be lowest at marshes and highest at beaches, followed by riprap and bulkhead (Fig. 8c). Rangia cuneata (a tolerant clam) biomass tended to be highest at hardened shorelines (riprap and bulkhead) and lowest at marshes; bulkheads had five times higher biomass than marshes (Fig. 8d). Further, some taxa were only present at natural shorelines (e.g., Glycera americana) or were absent at bulkheads (e.g., Cirratulidae and Edotea triloba). The percentage of tolerant taxa tended to be highest at bulkheads, while the percentage of sensitive taxa tended to be lowest at bulkheads (Fig. 9a, b). Two tolerant polychaete taxa, E. heteropoda and Leitoscoloplos spp., tended to have the highest densities at bulkheads, followed by riprap, beaches, and marshes (Fig. 9c, d).

Combined Development Influences

Based on the AIC model weights, the model that contained only upland use was typically the top model or one of the top models for all response variables (Table A2). For density, top models contained upland use, shoreline, and/or sediment type, and the global model was supported in the low-salinity grouping. Multimodel inference indicated that macrofaunal density and biomass varied among subestuaries with different upland use, with developed subestuaries having significantly lower density and biomass than agricultural or forested subestuaries (Table A3). For biomass, the model with upland use was consistently supported in all three subestuary groupings, and it was the best model for total and low-salinity subestuaries. For low- and high-salinity subestuaries, supported models also included shoreline, and for total subestuaries, the null model was also supported. For diversity, the global model was the best model for total subestuaries; however, for low salinities, the model with upland use was the best model, and for high salinities, the shoreline plus upland use plus sediment model was supported. Multimodel inference indicated that macrofaunal diversity varied among subestuaries with different upland use, with agricultural and developed subestuaries having significantly higher diversity than forested subestuaries (Table A3), though the number of taxa differed by only a few (Fig. 4a).

Multiple Anthropogenic Stressors

The multiple stressors of upland and shoreline development influenced benthic infaunal communities throughout Chesapeake Bay (Tables A1 and A2). Anthropogenic impacts can act in concert to negatively affect benthic communities. Our results agree with previous studies in the lower Chesapeake Bay where the effects of shoreline development on macrofauna depended on co-occurring stressors, such as shoreline hardening and upland use (Seitz and Lawless 2008; Davis et al. 2008; Peterson and Lowe 2009).
Upland Use

Across all subestuaries examined, benthic communities adjacent to upland watersheds that were forested or agricultural tended to have higher density, and had significantly higher biomass, than those adjacent to developed subestuaries or the subestuary with mixed development (Figs. 3d and 5). For density, this pattern was driven by significant differences in high-salinity subestuaries; however, for biomass, both high- and low-salinity subestuaries showed similar patterns. Reduced biomass may be an indirect effect of the increased inflow of nutrients, toxicants, and sediments due to increased runoff over hardened surfaces (Jordan et al. 1997; Gregg et al. 2015), which can negatively affect some benthic organisms, especially species with low tolerance to stressful conditions.

(Fig. 5: percentage of development within 250 m of the shoreline versus mean infaunal community biomass per subestuary (g ash-free dry weight; AFDW), with exponential decline curve.)

Our work concurs with previous studies in Chesapeake Bay that demonstrated the importance of both upland use and salinity for a few key species, namely Callinectes sapidus, Limecola (formerly Macoma) balthica, and Ameritella (formerly Macoma) mitchelli (King et al. 2005), but we extend this concept to the large, infaunal benthic community. In our study, infaunal biomass declined exponentially with upland development within 250 m of the shoreline, declining dramatically up to 20% development, suggesting that food availability decreased for higher trophic levels as upland development increased. Moreover, in other studies, landscape-level effects masked shoreline effects (Seitz and Lawless 2008; Lawless and Seitz 2014); therefore, upland effects are paramount. Densities of tolerant taxa, that is, small opportunistic species, increased with near-shore upland development. The detrimental effects of pollutants associated with urban uplands likely contributed to increased variability with increased upland development, and this increased variability has been shown previously in stressed benthic communities (Warwick and Clarke 1993). Resource control of benthic communities may underlie the overall trends of higher benthic density and biomass with reduced upland development. Benthic food availability can affect distributions of benthic species (Diaz and Schaffner 1990), and food for deposit-feeders may be of higher quality adjacent to both forested and agricultural watersheds, where natural, high-carbon, allochthonous material runs off, as compared to developed watersheds with less allochthonous carbon input (Dauer et al. 1992; Rodil et al. 2008). Sedimentary food availability (organic carbon and nitrogen) for benthic organisms, which increases with lower % sand, was higher in many of the rivers with forested and agricultural upland use that we sampled (e.g., Miles River, Monroe Bay, Severn River). This potentially contributed to higher benthic densities of deposit-feeding species (e.g., A. mitchelli, L. balthica, and Alitta succinea; Lovall et al. 2016) in those locations.

Shoreline Development

In terms of shoreline development, bulkhead habitats tended to have reduced benthic density compared to natural habitats across all subestuaries. This is likely because marshes act as efficient filters of runoff from the upland (Howes et al. 1996; Roman et al. 2000), whereas bulkheads sever that land-water buffer, allowing excess toxicants to enter the water and deter benthic organisms.
Several studies suggest that shoreline development may strongly affect macrofauna in nearby shallow subtidal zones (Tourtellotte and Dauer 1983; Weis et al.; Davis et al. 2008; Peterson and Lowe 2009). This investigation encompassing the entire Chesapeake Bay agrees with earlier work showing detrimental effects of hardened shorelines within individual subestuaries (Bilkovic and Roggero 2008; Gittman et al. 2016a; Lovall et al. 2016), but our large-scale study across a diverse set of subestuaries advances our knowledge about where, within a large estuary, effects are most prominent and which species are most severely affected. Specifically, effects of shoreline development on density and richness were most prominent in low-salinity subestuaries. The pattern of increased density at natural shorelines was driven by large, long-lived sensitive taxa, such as the clams L. balthica and A. mitchelli (Weisberg et al. 1997; Long et al. 2014), which can deposit-feed and take advantage of detrital material delivered from marshes (Kamermans 1994). The response of organisms to riprap structures was somewhat intermediate between bulkhead and natural habitats, as the unconsolidated rock structures possibly provided some improved habitat that was not provided by bulkhead habitats. Some taxa were less sensitive to the pollutants that easily run off from impervious surfaces of developed shorelines (Jordan et al. 1997; Paul and Meyer 2001; Gregg et al. 2015); tolerant taxa (Weisberg et al. 1997), such as Eteone heteropoda and Leitoscoloplos spp., had the highest densities adjacent to bulkhead and riprap shorelines, which lack the filtering capabilities of wetland buffers (Howes et al. 1996; Roman et al. 2000). These taxa are typically considered "weedy" and indicative of deteriorated habitat conditions (Weisberg et al. 1997), suggesting that bulkhead and riprap shorelines reduce the functionality of benthic habitats. The trend of biomass by shoreline was somewhat counter to our hypothesis, possibly because of sediment effects on some abundant, high-biomass, suspension-feeding bivalves (e.g., Tagelus plebeius, Rangia cuneata, and Mya arenaria; Table 4). These species were deterred by the muddier sediments of the natural marsh habitats, where a bivalve's feeding apparatus can become clogged with fine sediment (Steele-Petrović 1975), an indirect effect of shoreline development. There may also have been some effect of increased predator abundance at natural marsh habitats (Bilkovic and Roggero 2008; Kornis et al. 2017), which could have resulted in lower benthic biomass there during our mid-summer sampling, when predation peaks (Moody 2001). Upland use, shoreline type, and salinity were important in predicting benthic biomass, possibly driven by large bivalves (e.g., the introduced species Rangia cuneata) present particularly in developed, low-salinity subestuaries (Table 3), and by Tagelus plebeius and Mya arenaria in high-salinity developed subestuaries and at bulkhead shorelines (Figs. 6f and 8d). Upland effects were paramount, as benthic community structure differed with upland type (Fig. A1a), and, after accounting for upland differences, community structure differed by shoreline type within rivers. Biomass of indicators of high habitat quality, e.g., Limecola balthica (Pearson and Rosenberg 1978; Long et al. 2014), was greatest adjacent to natural marsh habitats, but these taxa were only common in forested and agricultural subestuaries.
Larger, long-lived indicator species signal good ecosystem functioning, as chronic disturbance can reduce their abundance and decrease ecosystem functioning (Pearson and Rosenberg 1978). The tendency for natural marsh habitats to have higher benthic richness and Shannon (H′) diversity than bulkhead and riprap habitats in low-salinity subestuaries, and the reverse trend in high-salinity subestuaries, may have been driven by differences in predation and in coupling with uplands. Predators such as blue crabs and fish are more abundant in high-salinity subestuaries (King et al. 2005), particularly near marsh habitats, as determined by a companion study of fish abundance in relation to shoreline development in Chesapeake Bay (Kornis et al. 2017). Excessive feeding by these predators may contribute to the loss of benthic biomass (Menge and Sutherland 1987). The prominent effects of shoreline development in lower-salinity (≤ 15 psu), upper-Bay subestuaries may occur because these systems are more closely coupled with influences from adjacent habitats, as water depths are shallower and exchange of water with the benthos is greater (Gerritsen et al. 1994). However, only a small number of taxa changed overall, and the species-specific changes were more noteworthy. This is the first study to explicitly examine both upland and shoreline development effects on benthic communities in relation to salinity, as the effects of multiple stressors on estuarine fauna have rarely been examined (King et al. 2005; Li et al. 2007; Patrick et al. 2014), and such studies have not covered the entire infaunal, deep-dwelling benthic community. Moreover, the increase in upland and shoreline development throughout the world (Gittman et al. 2015) has increased the demand for studies examining the interacting effects of these anthropogenic stressors. Ecological stress in benthic communities was detected in subestuaries with high percentages of urban development and in response to shoreline development, particularly for sensitive species (e.g., Limecola balthica, Fig. 8a). Our extensive spatial coverage throughout Chesapeake Bay and our examination of multiple stressors were instrumental in demonstrating where development had the greatest effects (in the low-salinity, upper Chesapeake Bay) and that upland use was most influential on benthic communities, though shoreline development was also important. Further management efforts need to be aimed at reducing both upland and shoreline development to maintain the production of benthic communities that are important in the estuarine food web.