Contextual generalized trust and immunization against the 2009 A(H1N1) pandemic in the American states: A multilevel approach

The aim of the study was to investigate the association between contextual generalized trust and individual-level 2009 A(H1N1) pandemic immunization acceptance. A second aim was to investigate whether knowledge about the A(H1N1) pandemic mediated the association between contextual generalized trust and A(H1N1) immunization acceptance. Data from the National 2009 H1N1 Flu Survey was used. To capture contextual generalized trust, data come from an aggregation of surveys measuring generalized trust in the American states. To investigate the association between contextual generalized trust and immunization acceptance, while taking potential individual-level confounders into account, multilevel logistic regression was used. The investigation showed contextual generalized trust to be significantly associated with immunization acceptance. However, controlling for knowledge about the A(H1N1) pandemic did not substantially affect the association between contextual generalized trust and immunization acceptance. In conclusion, contextual state-level generalized trust was associated with A(H1N1) immunization, but knowledge about A(H1N1) did not mediate this association.

Introduction

Social capital refers to features of social organization such as trust, norms, and networks facilitating collective action for mutual benefit (Putnam, 1993). Social capital can be understood both as individual-level characteristics and as certain features of a community. Examples of individual social-capital indicators are norms of trust and reciprocity and membership in organizations. Examples of contextual-level measures of social capital are the density of organizations and aggregate levels of trust and reciprocity within a community (Putnam, 2000). Moreover, social capital is often claimed to have a structural component and a cognitive one. The structural side of social capital is the degree of civic participation, membership in associations, and formal and informal networks. The cognitive side of social capital consists of norms of generalized trust and reciprocity (Harpham et al., 2002).

The prior literature has demonstrated that both individual-level generalized trust (defined as the belief that 'most people can be trusted') and contextual generalized trust (the share of a community thinking 'most people can be trusted') are associated with health and healthy behavior. In other words, being a trustful individual and residing in a community characterized by trust among people influence health and health behavior (Hyyppä & Mäki, 2001; Kawachi et al., 1999; Kim et al., 2008; Rose, 2000; Subramanian et al., 2003). When it comes to empirical social capital indicators, trust is argued to be the most relevant. The reason is that trust is more likely to precede civic engagement than vice versa, because without trust, membership groups and associations are unlikely to be established at all (Rothstein, 2005). Within the larger category of studies investigating the association between generalized trust and health, some studies focus specifically on health-related behaviors such as tobacco smoking, alcohol consumption, physical activity, and drug use (Lindström, 2008). Recently, Herian et al. (2014) investigated the link between contextual state-level trust and several aspects of health and health-related behaviors.
They found state-level trust to be linked with health outcomes such as smoking, BMI, and general health. The current paper investigates the association between contextual state-level generalized trust and individual 2009 A(H1N1) pandemic immunization in the American states.

Some prior studies have investigated the association between social capital/generalized trust and immunization. To start with, Rönnerstrand (2013) found an association between two aspects of trust (generalized trust and trust in health care) and intentions to accept vaccination against the 2009 A(H1N1) pandemic in Sweden. Jung and colleagues (2013) found that the degree of neighborhood social capital mediated the association between 2009 A(H1N1) pandemic knowledge among parents and immunization acceptance for their children. Nagaoka and colleagues (2012) investigated the association between two measures of contextual social capital (voting rate and volunteer rate) and uptake of a measles-containing vaccine. They found that the voting rate was associated with higher immunization coverage rates in large municipalities in Japan. Chuang, Huang, Tseng, Yen, and Yang (2015) found that social capital might influence the response to an influenza pandemic in Taiwan, for example the intention to receive a vaccine.

Social capital has also been linked with the substantial state-level variation in A(H1N1) immunization coverage rates in the American states. In a cross-sectional, ecologic study, an association between three measures of contextual social capital and state-level immunization uptake was found (Rönnerstrand, 2014). All three contextual state-level social capital measures (Putnam's social capital index, contextual generalized trust, and volunteer rate) were very strongly positively correlated with immunization coverage rates. In a regression model including the confounders (state-level health care spending per capita, state population, population per square mile, and median age in the American states), the association between contextual social capital and immunization coverage rates remained persistent and strong.

There are theoretical arguments for using states as the geographical unit of analysis when investigating the link between contextual social capital and immunization. Putnam (1993) argues that social capital is beneficial from a societal perspective because it stimulates collective action and the provision of public goods. In larger-scale geographical units, such as states, immunization contributes to the provision of the public good of herd immunity, or at least to a reduction of disease transmission in society. This dynamic is less relevant in smaller geographical units. Similarly to Rönnerstrand (2014), the present study investigates the link between U.S. state-level contextual social capital and immunization against the 2009 A(H1N1) pandemic. But contrary to the above-mentioned study, by making use of multilevel statistical procedures, the aim of this paper is to investigate the association between contextual generalized trust and individual immunization against the A(H1N1) pandemic. It is hypothesized that contextual state-level generalized trust is associated with individual acceptance of vaccination against the 2009 A(H1N1) pandemic, also when controlling for individual-level and state-level confounders.
Although the link between social capital and health is well studied, there is much less knowledge about the causal pathways linking social capital with health (Kim et al., 2008). Several potential causal pathways have been suggested. Scholars distinguish between vertical, policy-oriented pathways and behavior-mediated pathways. The former links social capital with health through civic engagement and participation in the political process. Behavior-mediated pathways include rapid circulation of health information, healthy norms, lower crime rates, emotional support within a network, and control over deviant health behavior in the community (Kawachi et al., 1999; Kawachi & Berkman, 2000; Kim et al., 2008). With regard to pandemic vaccination acceptance in general (Zijtregtop et al., 2009) and 2009 A(H1N1) flu immunization in particular (Maurer et al., 2010), it has been argued that knowledge of and information about disease and vaccination from family, friends, and co-workers increased the probability of pandemic immunization acceptance. Hypothetically, information about pandemic influenza and the possibility to vaccinate was more easily circulated in states characterized by high levels of trust. In turn, information about the pandemic and vaccination was presumably a factor stimulating vaccination acceptance in those high-trusting states.

A second aim of this study is to investigate whether knowledge about the 2009 A(H1N1) pandemic mediated the association between contextual generalized trust and A(H1N1) immunization acceptance. It is hypothesized that a substantial part of the association between contextual generalized trust and immunization acceptance was mediated through knowledge about the 2009 A(H1N1) pandemic. Put differently, the study tests the claim that a potential association between contextual levels of trust and immunization acceptance arises because information about the A(H1N1) pandemic and the possibility to vaccinate was transmitted more easily in states characterized by high levels of trust among their inhabitants.

Many prior studies have investigated individual-level predictors of 2009 A(H1N1) pandemic vaccination acceptance. For example, older age, higher education, belonging to an ethnic majority, belonging to a priority group, receiving recommendations from a health professional, receiving information from friends and co-workers, having correct information about pandemic influenza vaccination, being aware of the recommendations, acceptance of previous vaccinations, and having trust in institutions and health care all increased the probability of having the intention to vaccinate or having accepted vaccination against the 2009 H1N1 pandemic (Lau et al., 2010; Maurer et al., 2010; Prati et al., 2011; Rodríguez-Rieiro et al., 2010; Rubin et al., 2009; Schwarzinger et al., 2010; Torun et al., 2010; Vaux et al., 2011; Velan et al., 2011; see Bish et al., 2011, Brien et al., 2012, and Nguyen et al., 2011 for reviews). However, although the predictors investigated in these studies account for a substantial part of the individual-level variation in immunization uptake, they are unlikely to account for the state-level variation in uptake. Apart from Rönnerstrand (2014), only a few studies have investigated factors accounting for the state-level variation in immunization uptake. Davila-Payan and colleagues (2014) investigated the impact of system factors on immunization coverage in the American states.
In their study, they found that state-level A(H1N1) immunization coverage rates were associated with shorter times between allocation and ordering and shipping, as well as with the number and types of ship-to sites. Baum (2011) adopts a quite different perspective and couples state variation in immunization uptake with partisan support and news consumption. The results of this paper provide a unique contribution to the literature because no prior study, known to me, has investigated the independent effects of both individual-level factors and state-level contextual factors on 2009 A(H1N1) pandemic immunization acceptance. Using multilevel logistic regression, with individuals at the first level and state of residence at the second level, the present paper investigates whether contextual generalized trust was associated with individual immunization acceptance, taking individual-level confounders into account.

Study population

The data for the present study come from the public-use data file of the National 2009 H1N1 Flu Survey (NHFS). The NHFS is a random-digit-dialing telephone survey. The objective of the survey was to collect data on the uptake of the A(H1N1) and seasonal influenza vaccines. It was conducted by NORC at the University of Chicago on behalf of the Centers for Disease Control and Prevention. The survey operated from October 2009 through June 2010 and included separate strata for all 50 states and the District of Columbia. The survey questions concerned influenza immunization status, opinions about influenza vaccine safety, recent respiratory illness, risk factors, as well as knowledge, attitudes, and practices in relation to the 2009 A(H1N1) pandemic and seasonal influenza. In addition, questions were asked to survey a number of household and individual demographic characteristics. The target population was all persons in the United States aged 6 months and older. From each household, a randomly selected member was interviewed. If a child was selected, a parent or guardian who knew about the health of the child was asked the same questions. The Council of American Survey Research Organizations (CASRO) response rate was 34.7% for landline telephones and 27.0% for cell phones. At least eight attempts were made to contact respondents. The survey included 56,656 adults and 14,288 children, but my study includes only adults, aged 18 years and above. The analysis was restricted to NHFS interviews conducted from January 2010 onwards, because a new question was added to the NHFS survey in January 2010. The number of survey participants who answered all the NHFS survey questions required for inclusion in the analysis was 28,798.

The dependent variable

The dependent variable in this study is A(H1N1) immunization acceptance. In the survey, immunization acceptance was measured by the question, 'There were two ways to get the H1N1 flu vaccination, one is a shot in the arm and the other is a spray, mist or drop in the nose. Since September 2009, have you been vaccinated either way for the H1N1 flu?' The response alternatives were 'Yes', 'No', and 'Don't know'. Respondents answering 'Don't know' constituted 0.6% of the sample population, and this group was not included in the analysis.
Individual-level predictors

The individual-level predictors age, sex, education, marital status, race, health care insurance, and chronic medical condition were included in the multilevel logistic model, the reason being that they are likely to be associated with both generalized trust (Uslaner, 2002; Rothstein, 2005; Kawachi et al., 1999) and immunization acceptance (see Introduction). Also, several of them are commonly considered important controls in studies investigating the link between social capital and health (Harpham et al., 2002). In the final model, knowledge about the 2009 A(H1N1) pandemic was introduced in order to investigate the potential mediating effect of knowledge on the association between contextual generalized trust and immunization.

The education variable was based on the self-reported level of education. The answers were divided into four categories: (1) less than 12 years of education, (2) 12 years of education, (3) some college education, and (4) college graduate. The race variable consists of three categories: (1) white only, (2) black or African American only, and (3) all other races or multiple races. Health care insurance was measured using the following question: 'Do you have any kind of health care coverage, including health insurance, prepaid plans such as HMOs, or government plans such as Medicare?' The response alternatives were 'Yes', 'No', and 'Don't know'. Respondents answering 'Don't know' constituted 0.1% of the sample population, and this group was excluded from the analysis. To measure the variable chronic medical condition, the interviewer asked survey respondents whether they had any of the following chronic medical conditions: asthma or some other lung condition, diabetes, a heart condition, a kidney condition, sickle cell anemia or some other anemia, a neurological or neuromuscular condition, a liver condition, or a weakened immune system caused by a chronic illness or by medicines taken for a chronic illness. This variable was dichotomous, distinguishing respondents who said that they suffered from one or several of these chronic conditions from respondents who said they suffered from none of them.

The variable knowledge about the A(H1N1) pandemic was measured using the following survey question: 'How much, if anything, do you know about the 2009 H1N1 flu? Would you say that you know a lot, a little, or nothing about the 2009 H1N1 flu?' The response alternatives given were 'A lot', 'A little', 'Nothing', or 'Don't know'. Due to the composition of the public-use data file, respondents claiming to know 'A little' and 'Nothing' about the 2009 H1N1 flu had to be analyzed jointly. This means that the variable was a dichotomized measure of whether respondents maintained that they 'know a lot' or 'do not know a lot' about the 2009 H1N1 flu. To check the robustness of the knowledge variable, an additional model was built (but not reported in detail) with a dichotomized measure placing respondents who maintained that they 'know nothing' in one category and those answering 'A lot' or 'A little' in the other. Survey respondents answering 'Don't know' (0.3%) were excluded.
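As a concrete illustration of the coding decisions described above, the following sketch shows how the dependent variable and the dichotomized knowledge variable could be derived from NHFS-style responses. This is a minimal Python illustration, not the author's code; the column names are hypothetical stand-ins for the variable names in the public-use file.

```python
import pandas as pd

# Toy NHFS-style responses; 'vacc_h1n1' and 'know_h1n1' are hypothetical
# column names standing in for the public-use file's own variables.
df = pd.DataFrame({
    "vacc_h1n1": ["Yes", "No", "Don't know", "Yes"],
    "know_h1n1": ["A lot", "A little", "Nothing", "Don't know"],
})

# Dependent variable: drop "Don't know" (0.6% of the sample), code Yes=1/No=0.
df = df[df["vacc_h1n1"] != "Don't know"].copy()
df["vaccinated"] = (df["vacc_h1n1"] == "Yes").astype(int)

# Knowledge: the public file forces 'A little' and 'Nothing' together, so the
# variable is dichotomized as "knows a lot" vs. "does not know a lot";
# "Don't know" responses (0.3%) are excluded.
df = df[df["know_h1n1"] != "Don't know"].copy()
df["knows_a_lot"] = (df["know_h1n1"] == "A lot").astype(int)
```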
State-level predictor

Estimations of state-level contextual generalized trust have been widely used in the literature on social capital (Putnam, 2000; Uslaner, 2002) and in the literature on social capital and health specifically (Kawachi et al., 1997, 1999; Subramanian et al., 2001). These estimations are most often based on the proportion of the total population within each state answering affirmatively (or negatively) to the question 'Generally speaking, would you say that most people can be trusted or that you can't be too careful in dealing with people?' The current study utilizes estimations based on the aggregation of several different nationwide surveys (Neville, 2012). What speaks in favor of this measure is that it is highly correlated with Putnam's (2000) social capital index (Pearson's r = 0.810) and, furthermore, that it is more recent. In the multilevel logistic regression analysis, the variable was centered on its mean and standardized to vary between 0 and 1. This was done to ease the interpretation of the contextual generalized trust coefficients.

State-level confounders

The state-level confounders were health care spending per capita, median age, population size, and population density (Rönnerstrand, 2014).
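The 0-1 standardization matters for interpreting the trust coefficient reported later: with the variable scaled to the unit interval, exp(β) is the odds ratio comparing the highest- and lowest-trust states. The sketch below assumes a plain min-max scaling (the description above mixes mean-centering with a 0-1 range, so this is one plausible reading), and the trust values are invented for illustration.

```python
import pandas as pd

# Hypothetical state-level trust shares; real values come from the
# aggregated nationwide surveys (Neville, 2012).
trust = pd.Series({"MN": 0.62, "NY": 0.40, "MS": 0.28})

# Min-max scaling to the unit interval, so the coefficient contrasts the
# lowest- and highest-trust states directly.
trust01 = (trust - trust.min()) / (trust.max() - trust.min())

# If the fitted model yields OR = exp(beta) = 1.274 for trust01, the odds of
# vaccination acceptance are roughly 27% higher in the highest-trust state
# than in the lowest-trust state.
print(trust01)
```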
Statistics

It is well established that multilevel modeling is a useful statistical procedure for investigating the effect of place on individual health (Bingenheimer & Raudenbush, 2004; Merlo et al., 2012), particularly in relation to the effect of contextual social capital on individual health (Murayama et al., 2012). Multilevel modeling makes it possible to estimate the share of variance in the outcome variable that can be accounted for by contextual and individual-level predictors, respectively. In the present paper, a multilevel logistic regression model, with individuals at the first level and state of residence at the second level, was estimated. The study analyzes the effect of contextual state-level generalized trust on individual immunization behavior, after adjustment for the contextual factors state-level health care spending per capita, median age, population size, and population density, as well as the individual-level factors age, marital status, sex, education, race, health care insurance, chronic medical condition, and knowledge about the 2009 A(H1N1) pandemic.

In the first model (the zero multilevel model), only a constant term in the fixed and random parts was included. This model only provided information about how the likelihood of vaccination acceptance was distributed across the American states. In the second model, the individual-level predictors age, marital status, sex, education, race, health care insurance, and chronic medical condition were added. In the third model (random-intercept model), the contextual generalized trust variable was included. In Model 4 (random-intercept model), the state-level confounders were introduced. In order to investigate whether knowledge about the 2009 A(H1N1) pandemic mediated a potential association between social capital and immunization, Model 5 included the variable 'Knowledge about the H1N1 pandemic'. Comparing the odds ratios for contextual generalized trust in Model 4 (without the knowledge variable) and Model 5 (with the knowledge variable) should reveal whether knowledge is a mediating variable between generalized trust and immunization. This approach thus differs from the four-step approach suggested by Baron and Kenny (1986); instead, the analytical strategy to test mediation closely follows Poortinga (2006) and utilizes the indirect-effect approach.

Individual odds ratios and 95% confidence intervals were calculated for the fixed and random parts of the models. Also, between-state variance and fit statistics (log likelihood and Akaike's information criterion, AIC) were calculated. The deviance test was used to investigate the difference in log likelihood between Model 2 and Model 3. To enable a direct comparison between the fixed effects and the level-2 between-state variation, median odds ratios (MOR) were calculated. The MOR is computed with the following formula: MOR = exp[√(2 × V_A) × 0.6745], where V_A is the area-level variance and 0.6745 is the seventy-fifth centile of the cumulative distribution function of the standard normal distribution (Larsen & Merlo, 2005). In multilevel linear regression, the intraclass correlation coefficient (ICC) is informative with regard to the proportion of total variance accounted for by the area level. A complication in multilevel logistic regression is that the individual-level variance and the area-level variance are not directly comparable. To address this, alternative approaches for estimating the ICC in logistic multilevel modeling have been developed. The latent variable method ICC is calculated as ICC = V_A / (V_A + π²/3), where V_A denotes the area-level variance. The ICC quantifies clustering within areas and may be interpreted as an estimate of the discriminatory accuracy of the area level (Merlo, 2014). The proportional change in variance (PCV) is computed to estimate the change in area-level variance when more variables are added to the model: PCV = (V_A − V_B)/V_A × 100, where V_A is the area variance in the initial model and V_B is the area variance in the model with more terms.

The sample weights provided in the NHFS data were not used. This implies the limitation that the state-level vaccination rate loses strength to predict the likelihood of vaccination acceptance across all the states. However, the decision not to use the sample weights is the result of two considerations. Firstly, applying sample weights would distort the estimates of the standard errors (Asparouhov, 2004). Secondly, the aim of the multilevel model was not to predict the overall likelihood of vaccination acceptance in all the states, but rather to estimate the fixed and random effects on vaccination acceptance. The models were run with gllamm in Stata 13 (Rabe-Hesketh & Skrondal, 2005).
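The quantities defined above are simple functions of the estimated state-level variance, so they are easy to verify directly. The following sketch (my illustration, not the authors' gllamm code) reproduces the ICC reported for Model 1 and the p-value of the deviance test from the numbers given in the Results; the Model 2 variance used in the PCV example is back-calculated from the reported 5.7% and is therefore an assumption.

```python
from math import exp, pi, sqrt
from scipy.stats import chi2, norm

def mor(va: float) -> float:
    """Median odds ratio: MOR = exp(sqrt(2 * V_A) * 0.6745)."""
    return exp(sqrt(2.0 * va) * norm.ppf(0.75))  # norm.ppf(0.75) ~= 0.6745

def icc_latent(va: float) -> float:
    """Latent-variable-method ICC for logistic models: V_A / (V_A + pi^2/3)."""
    return va / (va + pi**2 / 3.0)

def pcv(va_initial: float, va_expanded: float) -> float:
    """Proportional change in variance, in percent."""
    return (va_initial - va_expanded) / va_initial * 100.0

print(round(icc_latent(0.122), 3))     # ~0.036, as reported for Model 1
print(round(mor(0.122), 3))            # MOR implied by the crude variance
print(round(pcv(0.122, 0.115), 1))     # ~5.7%, assuming V_B ~= 0.115 in Model 2
print(round(chi2.sf(18.22, df=1), 6))  # deviance-test p-value, well below 0.005
```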
Results

Table 1 summarizes the number of observations and prevalence (%) in the sample population. The results in the table show that 88% of the respondents answered that they had health care insurance. About one-third stated that they had at least one chronic medical condition (29%). Also, about one in three said that they knew a lot about the 2009 A(H1N1) pandemic (34%).

From the two bottom rows in Table 2, it is evident that the models (Models 1-5) improved gradually as random- and fixed-effect predictors were added. Fit statistics (log likelihood and Akaike's information criterion) show that the inclusion of the fixed-effect predictors (Model 2), the random-effect predictor contextual generalized trust (Model 3), the random-effect state-level confounders (Model 4), and the fixed-effect knowledge variable (Model 5) all improved the model fit. Most importantly, a deviance test shows that the inclusion of the random effect in Model 3 significantly improved the model, as compared to the fixed effects in Model 2. The deviance value was 18.22 (df = 1, p < 0.005).

As expected, the data analyses indicated state-level variation in A(H1N1) immunization acceptance in the American states. Table 2 shows that the crude state-level variance was 0.122 (0.017). This quantifies the between-state variation in the likelihood of vaccination acceptance. In Model 1, the latent variable method ICC turned out to be moderate in strength (0.036). This means that even though intraclass homogeneity exists, the state level was only moderately accurate in discriminating between immunization acceptance and non-acceptance. In Model 2, the fixed effects were introduced. Comparing Model 1 to Model 2, the PCV was only 5.7%. This indicates that state-level differences in the composition of the control variables (age, sex, education, marital status, race, health care insurance, and chronic medical condition) can explain only a very limited share of the state-level differences in immunization uptake. In Model 3, the random part of the model was added, that is, the state-level contextual generalized trust measure. The inclusion of this variable led to a substantial reduction in state-level variance: the PCV between Model 2 and Model 3 was 59.0%. This reduction signifies that state-level contextual generalized trust accounted for a large share of the state-level variation in immunization acceptance. In Model 4, the random-effect state-level confounders were included. The model shows that contextual generalized trust was associated with immunization also when state-level health care spending per capita, median age, population size, and population density were taken into consideration. However, when these controls are included, the association between contextual trust and immunization becomes weaker. In Model 5, the individual-level factor knowledge about the H1N1 pandemic was introduced in order to investigate whether knowledge about the pandemic actually was a mediating causal pathway linking contextual social capital with immunization. However, the inclusion of this variable resulted in only a very limited increase in PCV (from 23.8% to 27.9%).

Apart from the results obtained by comparing state-level variance across the five models, interesting results were also obtained from the odds ratios of the random and fixed predictors. The results in Model 5 show differences in vaccination acceptance among age groups. The odds ratio (OR) was significantly higher for respondents aged 65 years and above (OR = 1.240, 95% confidence interval (CI) 1.140-1.348), as compared to the reference category of respondents aged 18-34. The age group 55-64 also displayed a significantly higher OR than the reference category (OR = 1.194, CI 1.096-1.301), but the ORs for respondents aged 35-44 (OR = 0.889, CI 0.806-0.980) and 45-54 (OR = 0.843, CI 0.770-0.922) were significantly lower than for the reference category of respondents aged 18-34. Table 2 displays differences in vaccination acceptance depending upon marital status. There was a significant difference in ORs for vaccination acceptance between the categories 'Married' and 'Not married': the OR for the category 'Not married' was 0.861 (CI 0.815-0.910), as compared to the reference category 'Married'. According to the results obtained, women and men were equally likely to accept vaccination; there was no significant difference in ORs between men and women. The OR for women was 1.003 (CI 0.950-1.059), as compared to the reference category of men.
The results show some significant differences in the likelihood of vaccination acceptance depending upon level of education. The OR for the category 'College graduate' was 1.335 (CI 1.202-1.484), as compared to the reference category '<12 years of education'. There were no significant differences in ORs for the categories '12 years' (OR = 1.029, CI 0.922-1.147) and 'Some college' (OR = 1.068, CI 0.959-1.189), as compared with the reference category. The results also show that health care insurance was strongly linked with vaccination acceptance: the OR was significantly higher for respondents with health care insurance than for respondents without. The OR for vaccination acceptance was 0.529 (CI 0.477-0.588) for the 'No' insurance category, compared to the reference category of respondents in the 'Yes' category. Further, having one or several chronic medical conditions was strongly associated with immunization acceptance. Respondents stating that they had a chronic medical condition were more likely to accept vaccination against the A(H1N1) pandemic: the OR was 1.645 (CI 1.554-1.743) for respondents with at least one chronic medical condition, as compared with the reference category of respondents with no chronic medical conditions. There were differences in the ORs for vaccination acceptance between the categories 'White only' and 'Black only'. The OR was significantly lower in the 'Black only' category (OR = 0.753, CI 0.674-0.841), as compared to the 'White only' reference category. No difference in OR for vaccination acceptance was registered for members of the category 'Multiple and other races', as compared with the reference category (OR = 1.041, CI 0.923-1.174). The results show that knowledge about the A(H1N1) pandemic was strongly linked with immunization: the OR for immunization acceptance was 1.822 (CI 1.723-1.926) in the category of respondents who maintained that they knew a lot about the 2009 A(H1N1) pandemic, significantly higher than in the reference category of respondents who did not know a lot about the pandemic.

A look at the random part of the model reveals interesting results. The random predictor, state-level generalized trust, was significantly associated with immunization acceptance. In Model 4 (without the knowledge mediation variable), the OR for vaccination acceptance was 1.274 (CI 1.018-1.594). The contextual generalized trust variable was standardized to vary between 0 and 1. This implies that when going from the state with the lowest level of generalized trust to the state with the highest, the odds of vaccination acceptance increase by about 27%. When it comes to the state-level confounders introduced in Model 4, both state-level median age and population size were significantly and negatively associated with immunization. Health care spending per capita was significantly and positively associated with immunization. No significant association was found for population density. When comparing the ORs for the random predictor contextual generalized trust in Model 4 and Model 5, it is evident that the inclusion of the knowledge variable did not substantially affect the association between contextual generalized trust and immunization acceptance.
The OR for vaccination acceptance was only reduced slightly, from 1.274 (CI 1.018-1.594) in Model 4 to 1.248 (CI 1.000-1.557) in Model 5. There were no notable differences in the results when the same model was implemented using the alternative knowledge variable (see the section about individual-level predictors).

Discussion

The results obtained in this study show that the likelihood of vaccination acceptance was highest among respondents who were older, married, non-black, and college graduates, who had health care insurance and one or more chronic medical conditions, and who knew a lot about the 2009 A(H1N1) pandemic. These results are largely in line with the variables found to predict A(H1N1) immunization acceptance in prior research (Bish et al., 2011; Brien et al., 2012; Nguyen et al., 2011). Among the state-level confounders, median age and population size were significantly and negatively associated, and health care spending per capita significantly and positively associated, with immunization.

The empirical analysis specifically targeted two hypotheses regarding the relationship between contextual generalized trust and individual immunization acceptance. Using a multilevel logistic procedure, the empirical analysis supports the hypothesis that contextual generalized trust was associated with immunization acceptance, also when controlling for possible individual- and state-level confounders. This finding corresponds well with prior studies linking individual-level social capital and immunization (Jung et al., 2013; Rönnerstrand, 2013) and contextual social capital and immunization (Nagaoka et al., 2012; Rönnerstrand, 2014). To my knowledge, this study is the first to investigate the link between contextual social capital and 2009 A(H1N1) immunization acceptance utilizing multilevel statistical procedures. In doing so, the results of this study add to the literature about social capital and health, and to the emerging literature about social capital and immunization. Moreover, to my knowledge, this may be one of the first studies considering both individual- and contextual-level predictors in explaining 2009 A(H1N1) pandemic immunization acceptance. For this reason, this study provides important contributions to the field of vaccinations and health policy.

The empirical investigation found that contextual generalized trust seems to have been linked to immunization. When the contextual generalized trust variable was added, the state-level variance was reduced substantially. However, it is important to keep in mind that the discriminatory accuracy of the state level was quite moderate. It could be questioned whether state of residence is the most appropriate unit of analysis (Duncan et al., 1993; Merlo et al., 2012; Giordano et al., 2011). Other geographical units, such as neighborhoods, might provide better discriminatory accuracy with regard to immunization acceptance.

The empirical investigation also tested the hypothesis that knowledge about the A(H1N1) pandemic mediated the association between contextual generalized trust and immunization acceptance. However, the data failed to support this claim. Controlling for knowledge in the multilevel model did not substantially influence the association between the contextual generalized trust variable and immunization acceptance. In other words, the study provides no support for the claim that generalized trust was associated with immunization acceptance because information about the A(H1N1) pandemic was transmitted more easily in states with high levels of trust.
Even so, the study shows that knowledge about the 2009 A(H1N1) pandemic was, in itself, strongly linked with immunization acceptance. One potential explanation for the absence of support for knowledge as a mediating variable could be that the available data only permitted an analysis of dichotomous measures of knowledge about the A(H1N1) pandemic. It is possible that a more fine-grained measure would have provided a more sensitive test of the hypothesis that knowledge mediated the relationship between contextual generalized trust and immunization acceptance. What speaks against this claim is that, as mentioned before, knowledge about the 2009 A(H1N1) pandemic was, in itself, found to be strongly linked with immunization acceptance. Perhaps the effect of knowledge as a mediator depends on the level of aggregation: the lack of support for knowledge as a mediator could reflect that such knowledge is a more important mediator at the neighborhood level (Kawachi & Berkman, 2000). In addition, it is worth noting that the knowledge variable is relatively strongly correlated with other individual-level variables, e.g. age and education. This may have reduced the effect of including the knowledge variable in the model, because these variables may already account for some of the 'knowledge' factor.

In the absence of support for knowledge as a mediator linking generalized trust with immunization, other causal pathways must be considered. One factor possibly capable of accounting for part of the association between generalized trust and immunization is the link between generalized trust and trust in institutions, in particular health care. Prior research has found institutional trust to be associated with A(H1N1) immunization acceptance (Velan et al., 2011; Rubin et al., 2009; Prati et al., 2011). One explanation for the relationship between contextual generalized trust and immunization could be that in states where levels of generalized trust are high, people also tend to have confidence in the authorities responsible for the immunization campaign. In relation to immunization, Yaqub and colleagues (2014) argue that the credibility of institutions matters even more than the information content itself. Despite this, prior individual-level studies have shown individual generalized trust to be independently positively associated with immunization, taking trust in health care into account (Rönnerstrand, 2013).

An important feature of social capital and generalized trust is that they are claimed to facilitate the solution of collective-action dilemmas by improving levels of voluntary cooperation (Putnam, 1993). When short-term individual gains conflict with longer-term collective interests, members of societies characterized by high levels of trust often tend to prioritize the common good. Immunization against transmissible diseases is a textbook example of the kind of situation where individual and collective objectives sometimes collide. High vaccination uptake in a community may provide an incentive for individuals to benefit from the herd immunity generated by others being vaccinated in their place, without being exposed to any potential side effects of the vaccination. But the other-regarding consequences of the vaccination decision can also motivate people to accept vaccination, for altruistic reasons.
Recent studies have demonstrated the significance of altruism as a motive for immunization acceptance (Skea et al., 2008; d'Alessandro et al., 2012; Shim et al., 2012). This study was not designed to investigate altruism as a mediating variable, but it seems plausible that contextual generalized trust stimulated individuals' concern about how their vaccination decision would influence disease transmission in the wider community.

Strengths and limitations

This study has several strengths. By using multilevel statistical procedures, it adds to the existing literature about social capital and immunization by indicating that contextual generalized trust was associated with individual 2009 A(H1N1) pandemic immunization acceptance, taking individual-level confounders into account. Moreover, the present study adds to the literature about social capital and health-related behaviors by studying information diffusion as a causal pathway.

Despite its strengths, the study has several limitations. Firstly, aggregate levels of generalized trust are based on the combination of several different nationwide surveys, in which the wording of the survey questions about trust varied slightly (Neville, 2012). Secondly, the data analysis is restricted to NHFS survey responses gathered between January and June 2010, the reason being that one variable in the statistical models (health care insurance) was added to the NHFS survey in January 2010. However, concerning the main findings of this study, the results are largely similar if this variable is excluded and the analysis is carried out on cases gathered from October 2009 onwards. Thirdly, the response rate in the survey was quite low, reaching only 34.7% for landline telephones and 27.0% for cell phones. Selection bias could be a problem because immunization acceptance might be lower among survey non-participants than among participants. Moreover, it is likely that high-trusting individuals are over-represented among survey respondents compared with non-respondents. But since contextual generalized trust was measured at the aggregate level, selection bias due to the low response rate in the NHFS survey should not have influenced the association between contextual generalized trust and immunization. Fourthly, even though quite a strong association between contextual generalized trust and individual-level immunization has been established, the study design does not allow the question of causality to be addressed in the present paper. Fifthly, the empirical analysis assumes that the state of residence and the state where the vaccination was obtained coincide. This assumption is not necessarily valid in all cases. Finally, the present paper is incapable of separating the compositional and contextual effects of generalized trust. In other words, it would have been an advantage to be able to control for individual-level generalized trust in the multilevel model.
Potential osteomyelitis biomarkers identified by plasma metabolome analysis in mice

Osteomyelitis, which often arises from a surgical-site infection, is a serious problem in orthopaedic surgery. However, there are no specific biomarkers for osteomyelitis. Here, to identify specific plasma biomarkers for osteomyelitis, we conducted metabolome analyses using a mouse osteomyelitis model and bioluminescence imaging. We divided adult male pathogen-free BALB/c mice into control, sham-control, and infected groups. In the infected group, a bioluminescent Staphylococcus aureus strain was inoculated into the femur, and osteomyelitis was detected by bioluminescence imaging. We next analysed the metabolome by comprehensively measuring the small molecules in plasma. This analysis identified 279 metabolites, 12 of which were significantly higher and 45 significantly lower in the infected group than in the sham-control and control groups. Principal component analysis identified sphingosine as the highest loading factor. Several acyl carnitines and fatty acids, particularly ω-3 and ω-6 polyunsaturated fatty acids, were significantly lower in the infected group. Several metabolites in the tricarboxylic acid cycle were lower in the infected group than in the other groups. Thus, we identified two sphingolipids, sphinganine and sphingosine, as positive biomarkers for mouse osteomyelitis, and two components of the tricarboxylic acid cycle, 2-oxoglutarate and succinic acid, as negative biomarkers.

Mouse osteomyelitis model. Thirty-six pathogen-free BALB/c adult male mice (12 weeks old; body weight 20 to 25 g) purchased from Sankyo Service (Shizuoka, Japan) were used in this study. The mice were assigned to three groups (infected, sham-control, and control; n = 12 each) and were maintained in our animal facility under specific-pathogen-free conditions [10]. The number of samples was determined from previous reports using metabolome analysis [14]. In the infected group, mice were anesthetized with an intraperitoneal injection of butorphanol (5.0 mg/kg of body weight), medetomidine (0.4 mg/kg), and midazolam (2.0 mg/kg), and the skin on the left knee was shaved and sterilized with povidone iodine. A skin incision was made over the left knee, and the distal end of the femur was exposed through a lateral parapatellar arthrotomy with medial displacement of the quadriceps-patella complex. The distal end of the femur was perforated using a high-speed drill with a 0.5-mm sharp steel burr (Fine Science Tools Inc., Foster City, CA). Bioluminescent Staphylococcus aureus (1.0 × 10^8 CFU in 1 μl of Luria-Bertani medium) was inoculated into the medullary cavity of the femur using a Hamilton syringe. The burr hole was closed with bone wax, the dislocated patella was reduced, and the muscle and skin openings were closed by suture [10]. The animals were placed on a heating pad and monitored until they recovered. In the sham-control group, the mice underwent the same procedure but without bacterial inoculation. This study was performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Keio University Committee on the Ethics of Animal Experiments (permit number 09108), and all experiments were approved by the Animal Care and Use Committee of Keio University.

BLI. To monitor bacterial growth in the femur, we measured bacterial photon intensity (PI) by BLI immediately after surgery and on day 3 after the surgery.
The mice were anesthetized via inhaled aerosolized isoflurane mixed with oxygen, placed on their backs, and imaged for 5 min. To quantify bacterial luminescence, we defined and examined regions of interest (ROIs) in the inoculated area [10]. For BLI, we used a Caliper LS-IVIS Lumina cooled CCD optical macroscopic imaging system (Summit Pharmaceuticals International Co., Tokyo, Japan) [15] to detect inoculated bacteria that emit a bacterial bioluminescent signal through the tissues of a living animal. Photon emissions of the bacterial bioluminescent signal were captured, converted to false-colour photon-count images, and quantified with Living Image software version 3.0 (Caliper LS Co., Hopkinton, MA). Bacterial PI was expressed as photon flux in units of photons/s/cm²/sr.

Serology. Blood samples were collected from an abdominal artery of mice under ether anaesthesia on day 3 after the operation. Samples were collected after the mice had fasted for 12 h. All samples were collected by one physician with experience in drawing this type of sample. Plasma was obtained by centrifuging the samples at 1200 rpm at 4 °C for 10 min. In each group, the two smallest plasma samples were excluded, the remaining samples were pooled, and two samples were combined into one specimen to make five specimens per group. The specimens were stored at −80 °C until use [16-20]. This study was performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health.

Metabolome analysis. To inactivate native enzymes, 50 μl of specimen was added to 450 μl of methanol at 0 °C. This extract solution was added to 500 μl of chloroform and 200 μl of Milli-Q water and then centrifuged at 2300 rpm at 4 °C for 5 min. The upper aqueous layer (400 μl) was centrifuged through a Millipore 5-kDa cut-off filter at 9100 rpm at 4 °C for 120 min to remove proteins. The filtrate was lyophilized, suspended in 25 μl of Milli-Q water, and analysed by CE-TOFMS. In addition, 1500 μl of 1% formic acid in acetonitrile was added to 500 μl of specimen, and the sample was centrifuged at 2300 rpm at 4 °C for 5 min. After solid-phase extraction to remove phospholipids, the filtrate was recovered, lyophilized, suspended in 100 μl of 50% isopropanol, and analysed by LC-TOFMS. Both CE-TOFMS and LC-TOFMS were performed in a facility at Human Metabolome Technologies (Tsuruoka, Japan) [5].

Statistical analysis. The peaks detected by CE-TOFMS and LC-TOFMS were processed with Master Hands ver. 2.13.0.8.h (Keio University) to obtain m/z values, peak areas, and migration times (CE-TOFMS) or retention times (LC-TOFMS) [21]. The metabolic pathway map was produced using the public-domain software VANTED (Visualization and Analysis of Networks containing Experimental Data, Germany) [22,23]. Principal component analysis (PCA) and hierarchical clustering analysis (HCA) were performed using SampleStat ver. 3.14 and PeakStat ver. 3.18 (Human Metabolome Technologies). PCA is the most widely used dimension-reducing technique for analysing the large datasets involved in metabolome analysis [24]. A heatmap was generated for the HCA, with red and green indicating high and low concentrations, respectively. All values are presented as the mean ± standard deviation. A p value less than 0.05 was considered significant (Welch's t test).
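To make the analysis pipeline concrete, here is a minimal Python sketch of the two core statistical steps, PCA on the specimen-by-metabolite matrix and a per-metabolite Welch's t test. It is an illustration on simulated data, not the SampleStat/PeakStat code used in the study; the matrix dimensions simply mirror the five specimens per group and 279 detected metabolites.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Toy stand-in for the specimen-by-metabolite matrix (5 specimens per group,
# 279 metabolites); real peak areas come from the CE/LC-TOFMS processing.
infected = rng.normal(1.5, 0.3, size=(5, 279))
sham     = rng.normal(1.0, 0.3, size=(5, 279))
control  = rng.normal(1.0, 0.3, size=(5, 279))

X = np.vstack([infected, sham, control])

# PCA score-plot coordinates (PC1/PC2) and per-metabolite loading factors,
# analogous to the sphingosine loading reported for the PC1 axis.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)       # shape (15, 2): one point per specimen
loadings = pca.components_          # shape (2, 279): one loading per metabolite

# Welch's t test (unequal variances) for one metabolite, infected vs. sham.
t, p = ttest_ind(infected[:, 0], sham[:, 0], equal_var=False)
print(round(t, 2), round(p, 4))
```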
Results

Bacterial PI. On day 3 after surgery, stable luminescent signals were detected in all surviving mice in the infected group (Fig. 1a). The mean bacterial PI in the infected group was 12.3 ± 7.4 × 10^3 photons/s/cm²/sr on the front view and 12.2 ± 5.1 × 10^3 on the lateral view. In contrast, the mean bacterial PI in the sham-control group was 2.3 ± 0.4 × 10^3 photons/s/cm²/sr on the front view and 2.7 ± 0.5 × 10^3 on the lateral view. The highest PI in the sham-control group was less than 3.5 × 10^3 photons/s/cm²/sr on both views. The bacterial PI was significantly higher in the infected group (p < 0.001; Fig. 1b). In addition, the presence of Staphylococcus aureus was confirmed in femur histology from mice in the infected group on day 3 after surgery (Fig. 1c).

Metabolome analysis. Among the 1200 molecules we measured, we detected 279 metabolites that differed between the infected group and the sham-control and control groups (191 by CE-TOFMS and 88 by LC-TOFMS). An HCA heat map showed 66 metabolites that increased in the infected group (Fig. 2, Group H), 12 of which were significantly higher in the infected group than in either the sham-control or control group (p < 0.05). We found 195 metabolites that decreased in the infected group (Fig. 2, Group L), 45 of which were significantly lower in the infected group than in either the sham-control or control group (p < 0.05). Metabolites that increased or decreased significantly in the infected group were classified according to the Human Metabolome Database (HMDB; www.hmdb.ca), as shown in Table 1. Group H consisted of 1 amine, 1 carbohydrate or carbohydrate conjugate, 7 carboxylic acids or derivatives, 1 fatty acyl, and 2 sphingolipids (Table 1). In comparisons between the infected and sham-control groups, the lowest p values were for the 2 sphingolipids, sphingosine (p < 0.001) and sphinganine (p < 0.001) (Fig. 3a). The individual plasma concentrations of sphingosine and sphinganine were significantly higher in the infected group than in both the control group and the sham-control group (p < 0.01 each, Welch's t test) (Table 2). Some metabolites of the tricarboxylic acid (TCA) cycle were depleted below the limit of detection in the infected group: 2-oxoglutarate (2-OG), succinic acid, nicotinamide adenine dinucleotide (NAD+), and fumaric acid. In contrast, 2-OG was detected in all 5 sham-control and all 5 control specimens, succinic acid in all sham-control and 3 control specimens, and NAD+ in 1 sham-control and 3 control specimens. Fumaric acid was detected in 1 control specimen.

PCA. We used PCA to examine the metabolic effects of osteomyelitis. A PCA score plot showed that each of the three groups was tightly clustered along the PC1 axis (Fig. 4a). The highest loading factor on the PC1 axis among Group H metabolites was sphingosine (0.097), and the lowest loading factor on the PC1 axis among Group L metabolites was cis-8,11,14-eicosatrienoic acid (−0.1; Fig. 4b).

Discussion

We used a mouse infection model that is reproducible and suitable for evaluating the pathophysiology of osteomyelitis, since the bacterial bioluminescence can be visualized and quantified immediately prior to sacrifice [8-10].
Because the blood samples of the infected and control mice are compositionally homogeneous, the plasma metabolome analysis provides more accurate results than those from other infection models. PCA revealed that the three treatment groups were tightly clustered along the PC1 axis, indicating that the results accurately reflected the effect of osteomyelitis with high reproducibility.

Sphingolipids are bioactive lipids involved in cellular signalling and regulatory functions [25]. Sphingolipids have been implicated as potent mediators in inflammation and several diseases, including cancers, inflammatory diseases, and injury [25-30]. In addition, antimicrobial activity of sphingosine against Staphylococcus aureus has been reported [27], and a lack of sphingosine has been shown to permit pulmonary Staphylococcus aureus infection [28]. Therefore, sphingosine plays an important role in Staphylococcus aureus infection, both as a mediator and as a resistance factor. Although sphingosine is reported to exert antimicrobial activity in infected areas of the skin and lung [27-30], to the best of our knowledge, this is the first report describing a correlation between sphingosine and osteomyelitis. Moreover, the sphingomyelin cycle is an important sphingolipid pathway, and its turnover is so rapid that ceramide mass levels return to baseline within just 4 h [31]. In the present study, the sphingosine and sphinganine levels were significantly higher in the infected group than in the sham-control group 3 days after surgery (p < 0.001). Our results indicate that both infection and surgical stress can activate the sphingomyelin cycle; however, the rapid turnover of the sphingomyelin cycle can help to distinguish limited-time stress, such as operative stress, from persistent stress, such as osteomyelitis. Therefore, sphingosine and sphinganine are candidate positive biomarkers for early-phase osteomyelitis, especially osteomyelitis caused by Staphylococcus aureus.

Omega-3 polyunsaturated fatty acids (ω-3 PUFAs), such as α-linolenic acid, eicosapentaenoic acid, and docosahexaenoic acid, have well-documented anti-inflammatory properties [32-38], and potential benefits of supplementing the diet with ω-3 PUFAs have been reported [35,36,39]. In the initial inflammatory response, the mobilization of eicosapentaenoic acid and docosahexaenoic acid from the circulation to inflammation sites requires the conversion of these acids to resolvins, which control excessive neutrophil infiltration, protect organs, and promote the resolution of inflammation [34]. Therefore, plasma ω-3 PUFA levels are inversely correlated with infection [35].

Fig. 3 (caption): (a) Plasma concentrations of the 12 Group H metabolites that were significantly higher in the infected group than in the sham-control group (p < 0.05); the p values were lowest for sphingosine and sphinganine (p < 0.001). Carboxylic acids or derivatives (7 metabolites) were the most common pathway label in Group H. (b) Plasma concentrations of the 45 Group L metabolites that were significantly lower in the infected group than in the sham-control group (p < 0.05); the p values were lowest for glycine and oleoyl ethanolamine (p < 0.001). Fatty acyls (24 metabolites) were the most common pathway label in Group L. P values per row: no mark, p < 0.05; *p < 0.01; **p < 0.001.
In contrast, ω-6 PUFAs, such as linoleic and arachidonic acids, are precursors of eicosanoids (prostaglandins and leukotrienes) [40,41]. In our study, although linolenic acid (an ω-3 PUFA) and linoleic acid (an ω-6 PUFA) were in Group L, no specific changes were observed in other ω-3 or ω-6 PUFAs. Since PUFAs are nutritionally essential fatty acids, ω-3 and ω-6 PUFA levels are correlated with dietary intake, like other fatty acids [42,43]. It would therefore be inappropriate to use ω-3 and ω-6 PUFAs as specific osteomyelitis biomarkers.

In the acylcarnitine (AC) pathway, carnitine palmitoyltransferase II releases fatty acyl coenzyme A (CoA) and free carnitine in mitochondria [44,45]. Several disorders associated with immune responses, such as autoimmune diseases, chronic fatigue syndrome, and infection, reduce the pool of carnitines in the patient's tissues or serum [46]. In other words, these conditions accelerate β-oxidation, defined as the oxidation of fatty acyls to acetyl CoA. Elevated β-oxidation generates more adenosine triphosphate but decreases fatty acyl levels in the circulation [47]. Our results revealed that osteomyelitis also reduces serum AC and fatty acid (FA) levels by accelerating β-oxidation.

Table 2 (caption): Individual plasma concentrations of sphingosine and sphinganine relative to the control average. Both were significantly higher in the infected group than in the control and sham-control groups (p < 0.01 each, Welch's t test).

Interestingly, in the infected group, all metabolites in the TCA cycle were decreased or depleted (Fig. 5). These molecules included malic acid, which is a fatty acyl, and citric and isocitric acids, which are carboxylic acids or derivatives. Malic acid, citric acid, and isocitric acid, which lie upstream of 2-OG, were significantly lower in the infected group (p < 0.05). Four metabolites downstream of 2-OG (2-OG itself, succinyl CoA, succinic acid, and fumaric acid) fell below measurable limits in the infected group. To the best of our knowledge, this is the first report that metabolites in the TCA cycle are significantly decreased by osteomyelitis. Our results showed that the conversion of isocitric acid to 2-OG was inhibited in the infected group. This step is an oxidation reaction requiring NAD+ and NAD+-specific isocitrate dehydrogenase, which is inhibited by NADH [48,49]. In the present study, NAD+ fell below measurable limits only in the infected group. This depletion of NAD+ appeared to result from accelerated β-oxidation, since β-oxidation reduces NAD+ in a manner dependent on palmitoyl CoA [50]. Osteomyelitis may have accelerated β-oxidation, in turn decreasing all of the metabolites involved in the TCA cycle. Therefore, the metabolites in the TCA cycle, especially 2-OG and succinic acid, are potential negative biomarkers for osteomyelitis.

Thiamine (classified among diazines) is an essential coenzyme associated with the pyruvate decarboxylation that converts pyruvate into acetyl-CoA [51]. Like β-oxidation, pyruvate decarboxylation is accelerated by starvation [52]. In the present study, thiamine was in Group L. Therefore, osteomyelitis may also have accelerated pyruvate decarboxylation, decreasing thiamine as it did NAD+. Histamine has been reported to increase in peripheral blood concentration during parasite and virus infections [53,54]. In the present study, 1-methylhistamine was in Group H.
To the best of our knowledge, there are no reports on the relationship between osteomyelitis and histamine. Bacterial infection might therefore influence the blood concentration of 1-methylhistamine through general inflammatory processes, although the mechanism is still not fully explained. Previous reports have described other biomarkers for osteomyelitis, such as inflammatory cytokines 55,56 and antibodies against Staphylococcus aureus [57-59]. However, to the best of our knowledge, there are no biomarkers that facilitate the diagnosis of osteomyelitis in the acute phase. For inflammatory cytokines, we previously evaluated the serum levels of interleukin-6 and interleukin-1β in this mouse model 10. The mean serum levels of these biomarkers increased in the infected mice to the same extent as in the sham-control mice on day 3 after surgery, and they were significantly higher in the infected mice on day 7 after surgery 10. Thus, the inflammatory cytokines could not distinguish between surgical-site infection and surgical stress in the acute phase after surgery. For this reason, the biomarkers detected in the present study are particularly useful for the early diagnosis of osteomyelitis. (Figure 5 caption: The TCA cycle. All metabolites in the TCA cycle were decreased or depleted in the infected group. In particular, four metabolites downstream of 2-OG (2-OG, succinyl CoA, succinic acid, and fumaric acid) fell below measurable limits in the infected group. The metabolic pathway from isocitric acid to 2-OG was strongly inhibited in the infected group, consistent with the depletion of NAD+ caused by accelerated β-oxidation. The value of each metabolite is its concentration relative to the control.) There are several limitations to this study. First, we analysed only the Staphylococcus aureus infection model and not other pathogens. Second, other inflammatory diseases, such as rheumatoid arthritis, were not evaluated. Although these new biomarkers are useful for diagnosing Staphylococcus aureus infection, further study is needed to establish their specificity with respect to other infectious diseases. Taken together, of the 1200 molecules measured in a mouse osteomyelitis model, we identified 12 metabolites as candidate positive biomarkers for osteomyelitis, including the sphingolipids sphingosine and sphinganine. We also identified two candidate negative biomarkers for osteomyelitis, the TCA-cycle metabolites 2-OG and succinic acid. These new plasma biomarkers for osteomyelitis should improve the prognosis and treatment consistency for patients with postoperative osteomyelitis. Data availability The datasets analysed during the current study are available in the MetaboLights repository, https://www.ebi.ac.uk/metabolights/MTBLS1398.
2020-01-21T16:14:57.741Z
2020-01-21T00:00:00.000
{ "year": 2020, "sha1": "faf28e9b8833a31bc8c5869872333fd113bcdf21", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-57619-1.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "faf28e9b8833a31bc8c5869872333fd113bcdf21", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119058488
pes2o/s2orc
v3-fos-license
Infrared Variation of Radio Selected BL Lacertae Objects In this paper, the historical infrared (JHK) data compiled from the published literature are presented in electronic form for 40 radio selected BL Lacertae objects (RBLs) for the first time. Largest variations are found and compared with the largest optical variation. Relations between color index and magnitude and between color-color indices are discussed individually. For the color-magnitude relation, some objects (0048-097, 0735+178, 0851+202, 1215+303, 1219+285, 1749+096, and perhaps 0219+428, 0537-441, 1514-241) show that color index increases with magnitude indicating that the spectrum flattens when the source brightens while some other objects (0754+100, 1147+245, 1418+546, and 1727+502) perhaps show an opposite behaviour, while remaining objects do not show any clear tendency; For color-color relation, it is found common that (J-K) is closely correlated with (J-H) while (J-H) is not correlated with (H-K). Introduction While the nature of active galactic nuclei (AGNs) is still an open problem, the study of AGNs variability can yield valuable information about their nature. Photometric observations of AGNs are important to construct light curves and to study variability behavior over different time scales. In AGNs, some long-term optical variations have been observed and in some cases claimed to be periodic (see Fan et al. 1998a). Before γ-ray observations were available, Impey & Neugebauer (1988) found that the infrared emission (1-100 µm) dominates the bolometric luminosities of blazars. The infrared emissions are also an important component for the luminosity even when the γ-ray emissions are included ( von Montigny 1995). Study of the infrared will provide much information of the emission mechanism. The long-term optical variations have been discussed in the papers: Webb et al. (1988); Pica et al. (1988); Bozyan et al. (1990) ;Stickel et al. (1993); Takalo (1994); Heidt & Wagner (1996) and references therein. The long-term infrared variations have been presented for some selected blazars in a paper by Litchfield et al. (1994). BL Lacertae objects are an extreme subclass of AGNs showing rapid and large amplitude variability, high and variable polarization, no or weak emission line features (EW < 5Å). From surveys, BL Lacertae objects can be divided into radio selected BL Lac objects (RBLs) and X-ray selected BL Lac objects (XBLs); From the differences in their overall spectral energy distributions as represented by their location in the α ro versus α ox , they can be divided into quasar-like BL Lacertae objects (Q-BLs) and X-ray strong BL Lacertae objects (X-BLs) (Giommi et al. 1990); From the difference in the peak frequency of the spectral energy distribution (SED), they can be divided into high frequency peaked BL Lac objects (HBLs) and low frequency peaked BL Lac objects (LBLs) (Padovani & Giommi 1995;Urry & Padovani 1995). In the last catalogue of active galaxies, Veron & Veron (1998) list 357 BL Lac objects and BL Lac object candidates. There are some obvious differences between RBLs and XBLs (see Fan 1998 in preparation, for a brief review). Some authors claim that they are the same type of objects with XBLs having wider viewing angle than RBLs (Ghisellini & Maraschi, 1989;Georganoppoulos & Marscher 1996;Fan et al. 1997); Other claim that there is a continuous spectral sequence from XBLs to RBLs (Sambruna et al. 1996) while some claim that they are different classes (Lin et al. 1997). 
Infrared observations have been made of BL Lac objects for more than 20 years. Observations in the infrared were first obtained for BL Lacertae by Oke et al. (1969). Strittmatter et al. (1972) observed the continuity of the spectral-flux distribution from optical to infrared wavelengths. Afterwards, some other authors made similar observations (see Stein 1978 for a summary). Using the 1.26-m infrared telescope at the Xing Long station, Beijing observatory, we obtained some infrared observations of a few BL Lac objects (Xie et al. 1994; Fan et al. 1998b). But long-term infrared variations are not available in the literature. In this paper, we mainly present the long-term infrared (J, H, and K bands) light curves for some RBLs and discuss their variability properties. The paper has been arranged as follows: in section 2, we review the literature data and light curves; in section 3, we discuss the variations, give some remarks for each object, and present a short summary. The tabulated data are arranged with Col. 3 giving the J magnitude; Col. 4, the uncertainty in J; Col. 5, the H magnitude; Col. 6, the uncertainty in H; Col. 7, the K magnitude; and Col. 8, the uncertainty in K. Data Analysis The flux densities from the literature were converted back to magnitudes using the original formulae. In the literature, different telescopes are used, and different telescopes use photometers with slightly different filter profiles, resulting in slightly different calibration standards and zero-points, but the uncertainty arising from the different systems is no more than a few per cent. The magnitudes are dereddened using A(λ) = A_V (0.11λ^−1 + 0.65λ^−3 − 0.35λ^−4) for λ > 1 µm, with A_V = 0.165(1.192 − tan b) csc b for |b| ≤ 50° and A_V = 0.0 for |b| > 50° (Cruz-Gonzalez & Huchra 1984, hereafter CH(84); Sandage 1972; Fan et al. 1998c). It is clear that some RBLs have many observations while others have only a few data points. For the data shown in Figs. d-i of each object, we have performed a linear fit, y = a + bx, with the uncertainties in both coordinates taken into account (see Press et al. 1992 for details). In principle, a and b can be determined by minimizing the χ² merit function χ²(a, b) = Σ_i w_i (y_i − a − b x_i)², where 1/w_i = σ_{y_i}² + b² σ_{x_i}², and σ_{x_i} and σ_{y_i} are the x and y standard deviations for the ith point. Unfortunately, the occurrence of b in the denominator of the weights makes the equation ∂χ²/∂b = 0 nonlinear and the fit rather complex, although ∂χ²/∂a = 0 gives a directly: a = Σ_i w_i (y_i − b x_i) / Σ_i w_i. Minimizing the χ² merit function with respect to b, and using this expression for a at each stage to ensure that the minimum with respect to b is also a minimum with respect to a, we can obtain a and b. As Press et al. stated, the linear correlation coefficient r can then be obtained by weighting the terms of relation (14.5.1) of Press et al. (1992). Finding the uncertainties σ_a and σ_b in a and b is more complicated (see Press et al. 1992); here, we have not performed this. For the data whose uncertainties were not given in the literature ... As discussed in the paper of Massaro & Trevese (1996), there is a statistical bias in the spectral index-flux density correlation. Following their suggestion, we considered the relation between the magnitude in one band and the color index obtained from two other bands to avoid this bias. The relations are shown in Fig. d.
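The dereddening relation and the straight-line fit with uncertainties in both coordinates described above can be illustrated with a short sketch. This is not the authors' code: Python is assumed here, the simple grid search over the slope b is our own simplification of the minimization scheme of Press et al. (1992), and the data values are placeholders.

```python
# Hedged illustration, not the paper's implementation: infrared dereddening and a
# straight-line fit y = a + b*x weighting each point by errors in both coordinates,
# chi^2(a,b) = sum_i w_i (y_i - a - b*x_i)^2 with 1/w_i = sy_i^2 + b^2*sx_i^2.
import numpy as np

def A_lambda(wavelength_um, b_deg):
    """Extinction A(lambda) for lambda > 1 micron, with A_V from the galactic latitude b (valid for |b| > 0)."""
    b = np.radians(abs(b_deg))
    A_V = 0.165 * (1.192 - np.tan(b)) / np.sin(b) if abs(b_deg) <= 50 else 0.0
    lam = wavelength_um
    return A_V * (0.11 / lam + 0.65 / lam**3 - 0.35 / lam**4)

def fit_with_xy_errors(x, y, sx, sy, b_grid=np.linspace(-5.0, 5.0, 20001)):
    """Grid-search b; a follows from d(chi^2)/da = 0: a = sum(w*(y - b*x)) / sum(w)."""
    best = None
    for b in b_grid:
        w = 1.0 / (sy**2 + b**2 * sx**2)
        a = np.sum(w * (y - b * x)) / np.sum(w)
        chi2 = np.sum(w * (y - a - b * x)**2)
        if best is None or chi2 < best[0]:
            best = (chi2, a, b)
    return best  # (min chi^2, intercept a, slope b)

# Placeholder example: colour index (H-K) against the J magnitude, dereddened in J.
J = np.array([13.10, 12.80, 12.50, 12.00]) - A_lambda(1.25, 30.0)   # b = 30 deg assumed
HK = np.array([0.85, 0.80, 0.78, 0.70])
chi2_min, a, b = fit_with_xy_errors(J, HK, sx=np.full(4, 0.05), sy=np.full(4, 0.07))
print(f"a = {a:.3f}, b = {b:.3f}, chi2_min = {chi2_min:.3f}")
```

The grid search is used only for transparency; Press et al. (1992) minimize over b with a one-dimensional minimizer, which is what a production implementation would use.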
Remarks There is no redshift available for this polarized (P 3.7cm = 9.3%, Wardle 1978; P opt = 1%-27%, CH(84); Brindle et al. 1986) object. Very faint infrared fluxes have been observed by Gear (1993), making its infrared variation the largest among the objects discussed in the present paper. The infrared variations are ∆J = 6.55, ∆H = 6.23, and ∆K = 6.07, while that in the optical band is 2.7 mag (Angel & Stockman 1980). We can expect optical monitoring to reveal a similar amplitude. PKS 0118-272 An absorption feature at z = 0.559 was found by Falomo (1991). It was observed in the infrared during the period JD 2440000+ 4773-7753. For this object, there are no correlations between the color-color indices or between color index and magnitude. It is a highly polarized object (P opt ∼ 17%) and shows dP/dν > 0 (Mead et al. 1990). PKS 0138-097 It was observed by Falomo et al. (1993) at visual and 2.2 µm wavelengths; the spectral flux distribution is well described by a single power law with α = 1.0 (f ν ∼ ν −α ). It shows variable and high optical polarization (P = 6%-29.25%) (Mead et al. 1990). The variation in the infrared is almost the same as that in the optical band (∆m = 1.52 on a time scale of one week, Bozyan et al. 1990). There is a correlation between (J-K) and (J-H) but not between the other colors. 0219+428, 3C66A A redshift of 0.444 has been derived from a possible MgII λ2800 feature. A rapid decline of 1.2 mag in 6 days was observed by Takalo et al. (1992). Different properties of the color and brightness have been found in this source: Takalo et al. (1992) found that, although the spectral index was considerably flat when the source was at its observed maximum state, the (J-H) and (H-K) against K color-magnitude diagrams are essentially scattered; Massaro et al. (1995) found that there is a positive correlation between the infrared spectral index and the brightness at more than the 98% level, while De Diego et al. (1997) found a different behaviour. From the long-term data compiled in the present paper, no clear correlation has been found between color index and magnitude, except that there is a tendency for (H-K) to increase with J (p = 5.0 × 10 −3 ). It shows two lower states in the light curves (Fig. 3 a,b,c). Its largest variation of 2.0 mag (Miller & McGimsey 1978), obtained in the optical band, is similar to those found in the infrared band (1.67 mag in K, for instance). Polarizations of P opt = 6%-18% and P Rad. = 5% are reported in the papers of CH(84) and Takalo (1991). AO 0235+164 Absorption redshifts of z abs = 0.524 and 0.852 were found by Burbidge et al. (1976), and an emission line redshift of z em = 0.94 was found by Nilsson et al. (1996). It is one of the most extensively studied objects in the infrared (Impey et al. 1982; Gear et al. 1986a; Takalo et al. 1992); variations of ∆J = 1.9, ∆H = 1.8, and ∆K = 1.9 were found by Takalo et al. (1992). The spectral index was found to be correlated with the J flux at a significance level of more than 90% by Brown et al. (1989). It is the most highly polarized (P(V) = 44%, P IR = 36.4%) and the reddest (α = 4.61) object (Impey et al. 1982; Stickel et al. 1993). The infrared light curves show three clear outbursts, and variations as large as the optical ones (∆m = 5.3 mag, Stickel et al. 1993). There are positive correlations between the color indices and magnitude when it is brighter than J = 14, H = 12.5, and K ... This object has been observed in the infrared only once (Puschell & Stein 1980), with H = 13.44 and K = 12.46, which gives a color index of (H-K) = 0.98 ± 0.07.
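As a quick worked check of the quoted colour index: the individual band uncertainties are not given in the compilation, so equal errors of 0.05 mag in H and K are assumed purely for illustration; standard error propagation then reproduces the quoted ±0.07.

```latex
% Worked check (sigma_H = sigma_K = 0.05 mag are assumed values, not from the source):
\[
  H - K = 13.44 - 12.46 = 0.98, \qquad
  \sigma_{H-K} = \sqrt{\sigma_H^{2} + \sigma_K^{2}}
               = \sqrt{0.05^{2} + 0.05^{2}} \approx 0.07 .
\]
```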
Its flux distribution from 0.36 µm to 2.3 µm can be expressed by a power law with α = 1.0 (f ν ∝ ν −α ). It is 11.5% polarized at 3.7 cm (Wardle 1978) and 24 ± 5% polarized at 0.44 µm (Puschell & Stein 1980). 0306+103 This variable (∆m = 2.2 mag, Leacock et al. 1976) object has been observed in K only twice (K = 11.82 and 11.92 mag). PKS 0537-441 A large variation of ∆m = 5.4 mag has been reported (Stickel et al. 1993) ... 3.2.9. PKS 0735+178, VRO 17.07.02, OI158 It has been observed in the optical band for about 90 years and in the infrared for about 20 years (Lin & Fan 1998). The largest optical variation of ∆m = 4.6 is greater than that in the infrared (Lin & Fan 1998). Rapid variability of 0.4 mag in about 15 minutes and a positive correlation between the infrared spectral index and brightness have been found (Massaro et al. 1995; Brown et al. 1989) ... and (H-K) (see Lin & Fan 1998). It is highly polarized in the radio (P 3.7cm = 4.7%, Wardle 1978), in the infrared (P IR = 32.6%, Impey et al. 1984), and in the optical (P opt = 35%, see Lin & Fan 1998) bands. PKS 0754+100, OI 090.4 PKS 0754+100 shows rapid variability in its infrared (P IR = 4%-19%) and visual (P opt = 4%-26%) polarizations; a variation of 6% over 4 hours in the optical polarization was observed by Impey et al. (1982, 1984). The polarization is independent of wavelength between the visual and the 2.2 µm regions (Craine et al. 1978). An optical variation of 2.0 mag was found from the examination of the plate archives by Baumert (1980), and a variation of 1 mag in the infrared has been reported (CH(84)). It is the only object in the Massaro et al. (1995) sample which, despite a recorded variability of about 1 mag, does not show statistically significant changes of the spectral index. 0754+100 has shown decreasing brightness since the first infrared observation, although there are some brightness fluctuations in the light curves. It is interesting that the variation in H is greater than those in J and K, with variability ranges of J = 12.31 to 14.08, H = 11.21 to 13.27, and K = 10.55 to 12.43. Besides, although an anti-correlation between H and (J-K) appears in ... Polarizations of P 3.7cm = 4.8% (Wardle 1978) and P opt = 12% (CH(84)) are reported. It is hard to draw any conclusion about the relation between color index and magnitude (Fig. 9). 3.2.17. PKS 0851+202, VRO 20.08.01, OJ287 OJ287 is one of the very few AGNs for which a continuous light curve over more than one hundred years is available. A redshift of z = 0.306 was derived from [OIII]λ5007 by ... and confirmed by Sitko & Junkkarinen (1985). Its observed properties from the radio to the X-ray have been reviewed by Takalo (1994). OJ287 shows a 12-year period in the optical (Sillanpaa et al. 1988; Kidger et al. 1992) and in the infrared (Fan et al. 1998b). It is highly polarized in the radio (P 3.7cm = 16.6%, Wardle 1978), infrared (P IR = 14.3%, Impey et al. 1982), and optical (P opt = 37.2%, Smith et al. 1987) bands. Its optical variation of 6.0 mag (Fan et al. 1998b) is larger than the infrared variation. There is a tendency for the color index to increase with magnitude. ... A large optical variation of 4.6 mag was observed (Stein et al. 1976). The observed optical, infrared, and radio polarizations are, respectively, 0%-7.9% (Rieke et al. 1977) ... The infrared variation found here is much smaller than that in the optical band. PKS 1144-379 It is a high-redshift object with z = 1.048 (Stickel et al. 1989). An optical variation of 1.92 mag (Bozyan et al.
1990) and an optical polarization of P opt = 8.5% (Impey & Tapia 1988) are reported. The infrared variation is larger at lower frequencies and is greater than that in the optical. B2 1215+303, ON325 For this polarized (P 3.7cm = 6.3% and P opt = 14%, Wardle 1978) object, we found that ... are excluded. In Table 3, we can see that the variation in J is smaller than those in H and K. The reason is that there are only H and K data for the faint state (Glass 1981). The variations in J and H are the same, and greater than that in K, if we do not consider the low-state points. B2 1308+326, OP 313, GC An emission-line redshift of 0.996 comes from the identification of a fairly strong, broad emission feature at 5586Å as MgIIλ2800, while the absorption redshift of 0.879 is from a weak doublet at λ5252.4Å and 5264.4Å. B2 1308+326 shows high and variable polarization, with variability of ∆P = 10% on a time scale of 24 hours (Impey et al. 1984) and a decrease of 12% in two days (Moore et al. 1980). The recorded maximum polarizations of P IR = 19.6% and P opt = (28 ± 5)% at 0.44 µm were also observed by Puschell et al. (1979). An optical variation of ∆m = 5.6 mag has been observed (Angel & Stockman 1980). When the infrared emission increases, the spectral index does not change (Sitko et al. 1983), but an indication of a positive correlation between the infrared spectral index and brightness was found by Brown et al. (1989). The 1977/1978 optical outbursts show a double-peak structure separated by 1 year. Following the 1978 peak, Puschell et al. (1979) found that the infrared fluxes did not decrease with time as rapidly as the visual fluxes and proposed that the more rapid decline of visual compared with infrared light is a general characteristic of these events. From the light curve, we can see that the ... B2 1418+546, OQ530 1418+546 shows a large optical variation of 4.8 mag, ranging from 11.3 to 16.1 mag in Miller's (1978) study; its optical polarization, from 5% to 24% (Marchenko 1980), is greater than the infrared polarization of P IR = 19% (Impey et al. 1984). In the infrared, some of its properties have been presented in the papers of O'Dell et al. (1978) and Puschell & Stein (1980). No correlation was found between spectral index and brightness, but the flux variations were found to be larger at lower frequencies by Massaro et al. (1995) and ... For the spectral shape and the brightness of the source, the long-term data show ... B2 1652+398 was shown to have a redshift of 0.0337 by Ulrich et al. (1975). It has effective spectral indices similar to those of X-ray selected BL Lac objects but has been classified as an RBL. The largest optical variation of 1.3 mag (Stickel et al. 1993) is smaller than that in J. Its optical and infrared emissions are polarized at P opt = 2%-7% and P IR = 3% (CH(84)). During the flare, the colors were found to be very much bluer than in the quiescent state, reaching (H-K) = 0.13 and (J-H) = 0.48, but there is almost no correlation between color index and magnitude or between the color-color indices (see Fig. 18). It is interesting that the optical polarization (P opt = 4%-6%) is smaller than the radio polarization (P 3.7cm = 11.3%) (Kinman 1976). IZW 187 shows the highest ratio of X-ray luminosity to optical luminosity, L X-ray /L opt , among the BL Lac objects observed by the Einstein observatory (Ku et al. 1982). An optical variation of 2.1 mag was reported by Scott (1976) (see also Bozyan et al. 1990). Recently, it was found to have a massive black hole (5.4 × 10 8 M ⊙ ) at its center in our previous paper (Fan 1995).
Variations of ∆H = 0.57 mag and ∆K = 0.82 mag are found. There is an indication of the color index (J-K) decreasing with H. No real correlation is found for the color-color indices (see Fig. 19). PKS 2200+420, VRO 42.22.01, BL Lacertae The prototypical object of its class lies in a giant elliptical galaxy at a redshift of ∼ 0.07. Maximum radio polarization of P 3.7cm = 6.4% (Wardle 1978), infrared polarization of P IR = 15.1% (Impey et al. 1984), and optical polarization of P opt = 23% (Kuhr & Schmidt 1990) have been obtained. It has been observed for about 27 years in the infrared (see Fan et al. 1998c) and about 100 years in the optical (Fan et al. 1998a). A 14-year period and a maximum optical variation of ∆B = 5.31 are found from the B light curve (Fan et al. 1998a). The optical variation is greater than the infrared variation. Correlations have been found between (J-K) and both (J-H) and (H-K), but not between color index and magnitude (Fig. 21; see also Fan et al. 1998c). PKS 2240-260 There are only five nights of infrared data, showing a variation of ∆K = 1.7 mag. There is an indication of a correlation between J-H and K, but it should be investigated with more observations. P opt = 15.1% is reported for it (Impey & Tapia 1988). Summary In this paper, the infrared variations are presented for 40 RBLs; the light curves are shown in Fig. 2 ... This manuscript was prepared with the AAS LaTeX macros v4.0.
2014-10-01T00:00:00.000Z
1999-03-01T00:00:00.000
{ "year": 1999, "sha1": "171f4122c87880eeb5ba0dfe00a7c6eb5edeb9b6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/astro-ph/9908104v1.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "23e06524e74f1574a8f24b10e4a072496c461a13", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Geography" ] }
47021281
pes2o/s2orc
v3-fos-license
Nanoscale membranes that chemically isolate and electronically wire up the abiotic/biotic interface By electrochemically coupling microbial and abiotic catalysts, bioelectrochemical systems such as microbial electrolysis cells and microbial electrosynthesis systems synthesize energy-rich chemicals from energy-poor precursors with unmatched efficiency. However, to circumvent chemical incompatibilities between the microbial cells and inorganic materials that result in toxicity, corrosion, fouling, and efficiency-degrading cross-reactions between oxidation and reduction environments, bioelectrochemical systems physically separate the microbial and inorganic catalysts by macroscopic distances, thus introducing ohmic losses and rendering these systems impractical at scale. Here we electrochemically couple an inorganic catalyst, a SnO2 anode, with a microbial catalyst, Shewanella oneidensis, via a 2-nm-thick silica membrane containing -CN and -NO2 functionalized p-oligo(phenylene vinylene) molecular wires. This membrane enables electron flow at 0.51 μA cm−2 from microbial catalysts to the inorganic anode, while blocking small-molecule transport. Thus the modular architecture avoids chemical incompatibilities without ohmic losses and introduces an immense design space for scale-up of bioelectrochemical systems. After stirring for 1 h, the mixture was acidified with 2 N hydrochloric acid to generate a yellow precipitate, which was centrifuged and washed with ethyl acetate. A yellow solid was obtained after drying overnight. To completely transform the carboxylate residue to the acid, the yellow solid was dispersed in 10 mL of dimethylformamide, and 2 N hydrochloric acid was added until the dark green suspension completely changed to yellow. 50 mL of water was subsequently added for further precipitation. The precipitate was filtered, washed with water, dried, and identified as the target product. Yield: 320 mg (31%). As shown by the comparison of the FT-IR traces (1) and (2) of Figure 3a, the only modes associated with the TMSA amine group that underwent significant red shifts upon anchoring are the NH2 scissoring mode at 1643 cm−1, shifted by 15 cm−1, and the CN stretch at 1303 cm−1, shifted by 27 cm−1, which is consistent with the change of environment of this group upon surface attachment 2,3. With all other modes unchanged, this indicates that the silyl aniline remained intact upon anchoring. To confirm attachment of PV3 to the TMSA anchor, we examined both solid PV3 with aniline end groups and PV3 attached to TMSA on Pt/SnO2 by infrared spectroscopy and XPS, as presented in Figure 3a, traces (3) and (4) ... (Figure 6A) indicates that the wire molecules possess a predominantly perpendicular orientation relative to the SnO2 surface 4. Assignments of XPS spectra of embedded molecular wires In addition to the 406.2 eV N 1s band of the nitro group observed upon attachment of the PV3 wire to TMSA anchored on Pt/SnO2 (Figure 3b, trace (3)), two overlapping components at 399.69 eV (blue) and 399.44 eV (red), which arise from nitrile and amine groups 5, are seen, as well as a shoulder at 402.59 eV assigned to shake-up involving intramolecular charge transfer between the molecular π system (donor) and the functional nitro groups (acceptor), or the PV3 π-π* transition 6. Ellipsometry shows that on top of the 4.7 nm SnO2 layer is an organic layer that is 0.6 ± 0.25 nm thick, which corresponds to the TMSA height of 0.6 nm, consistent with a perpendicular orientation relative to the surface.
Taken together, these infrared and XPS analyses confirm that the two-step anchoring method results in the attachment of intact wire molecules to the inorganic oxide material.
2018-06-12T13:31:37.894Z
2018-06-11T00:00:00.000
{ "year": 2018, "sha1": "e38e77580f467925433b4175af7e4464931160f2", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-018-04707-6.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0df76bf3e6c9bbdfd3cfab11b9e61ddcc5a6684b", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
239887897
pes2o/s2orc
v3-fos-license
Set1-mediated H3K4 methylation is required for Candida albicans virulence by regulating intracellular level of reactive oxygen species ABSTRACT Candida albicans is an opportunistic human fungal pathogen that exists in normal flora but can cause infection in immunocompromised individuals. The transition to pathogenic C. albicans requires a change of various gene expressions. Because histone-modifying enzymes can regulate gene expression, they are thought to control the virulence of C. albicans. Indeed, the absence of H3 lysine 4 (H3K4) methyltransferase Set1 has been shown to reduce the virulence of C. albicans; however, Set1-regulated genes responsible for this attenuated virulence phenotype remain unknown. Here, we demonstrated that Set1 positively regulates the expression of mitochondrial protein genes by methylating H3K4. In particular, levels of cellular mitochondrial reactive oxygen species (ROS) were higher in Δset1 than in the wild-type due to the defect of those genes’ expression. Set1 deletion also increases H2O2 sensitivity and prevents proper colony formation when interacting with macrophage in vitro, consistent with its attenuated virulence in vivo. Together, these findings suggest that Set1 is required to regulate proper cellular ROS production by positively regulating the expression of mitochondrial protein genes and subsequently sustaining mitochondrial membrane integrity. Consequently, C. albicans maintains proper ROS levels via Set1-mediated transcriptional regulation, thus establishing a rapid defense against external ROS generated by the host. Introduction Candida albicans is the most common fungal pathogen in humans. Although it is a member of the normal flora of the skin and mucosal surfaces in most healthy people, C. albicans can cause opportunistic infection in response to environmental changes in the host [1]. Indeed, C. albicans overgrowth can cause local or systemic infections, from candidiasis to life-threatening candidemia; therefore, it is necessary to study its pathogenesis to more effectively prevent and treat C. albicans infections [2]. When transitioning from a non-virulent commensal to a virulent pathogen, C. albicans undergoes various phenotypic switching events, including hyphal formation [1,3], white-opaque switching [4], biofilm formation [5], and secreted aspartyl proteinase expression [2,[6][7][8]. In addition, C. albicans experiences a considerable change in the expression of genes encoding virulence factors, which are involved in recognizing the host environment [9]. Consequently, this leads to phenotypic switching and survival within the host, allowing C. albicans to proliferate rapidly by neutralizing and resisting host attacks [2,6,10]. Therefore, it is important to understand the differentially expressed genes and transcriptional regulation mechanisms that alter virulence-related genes' expression. One of the mechanisms that regulate transcription is the post-translational modification of histone proteins, which is a well-conserved phenomenon in all eukaryotes. Histone protein residues can be chemically modified in various ways, including acetylation, phosphorylation, methylation, and ubiquitination [11,12]. The methylation of histone H3 at lysine 4 (H3K4) is a well-conserved and widely studied histone modification due to its positive role in transcription [13,14]. 
In general, histone lysine residues can be mono-, di-, or trimethylated, and H3K4 trimethylation (H3K4me3) is a marker of active transcription given that it is enriched in the 5′ regions of actively transcribed genes [13,14]. A previous study revealed that the deletion of Set1, the only H3K4 methyltransferase in C. albicans, attenuated its virulence in mice [15]. H3K4 methyltransferase-deletion in other pathogenic fungi has also been shown to attenuate the virulence [16][17][18][19][20][21]. H3K4 methyltransferases positively regulate the expression of secondary metabolite genes in some fungal pathogens, thereby contributing to their virulence [17,20]; however, the mechanisms linking the effect of Set1 on virulence to transcriptional regulation by Set1mediated H3K4 methylation in C. albicans remain unclear. In this study, we investigated the effect of Set1 on the pathogenesis of C. albicans based on the finding that Set1-regulated genes are responsible for the pathogenicity of C. albicans. Whole-transcriptome sequencing (RNA-seq) revealed that the decreased expression of mitochondrial genes in Set1-deleted mutants increased cellular reactive oxygen species (ROS). Therefore, we suggest that C. albicans maintains proper ROS levels via Set1-mediated transcriptional regulation, thus establishing a rapid defense against external ROS generated by the host. Therefore, we suggest that Set1mediated gene expression enables C. albicans to respond more rapidly to ROS generated by the host and protect against it. RNA-seq analysis Total RNA was extracted using NucleoSpin® RNA (MN, MN740955) according to the protocol of the manufacturer using each duplicated sample. C. albicans were grown in YPD and harvested at exponential phase (OD 600 = 1.0). For sequencing, mRNA was captured using NEBNext® Poly(A) mRNA Magnetic Isolation Module (NEB, E7490), and a strand-specific sequencing library was synthesized using NEBNext® Ultra TM Directional RNA Library Prep Kit for Illumina (NEB, E7420) according to the instruction manual. To compare the differential expression between wild-type (WT) and Δset1, we used DEseq2 normalization. Expression data was visualized using Heatmap generated by Pheatmap R package and Integrative Genomics Viewer (IGV) genome browser track. Chromatin immunoprecipitation (ChIP) The ChIP assay was performed, as previously described [22]. Antibody used in the ChIP assay was anti-H3K4me3. ChIP DNAs were analyzed by quantitative PCR (qPCR) using the SYBR Green PCR mix (Toyobo, TOQPS-201) and the Applied Biosystems 7500 Real-Time PCR System. The sequences of primers used in this study are listed in Table S1. Cellular ROS observation Overnight cultured cells were diluted to OD 600 of 0.5 into fresh 5 ml YPD. After 1 h incubation at 30°C with shaking, cells were harvested and washed with PBS. For making spheroplast, cells were resuspended with zymolyase buffer (1 M sorbitol, 50 mM Tris-Cl, pH 7.4), added 10 μg zymolyase, and incubated at 30°C for 10 min. To observe cellular ROS, cells were incubated with 5 mM CellROX® Green (Thermo, C10444) at 37°C for 30 min, and sequentially DAPI (Sigma, D9542) was added and incubated for 10 min. After staining, cells were washed with PBS three times and observed under a fluorescence microscope. Macrophage interaction assay For the survivability against macrophage attacks, harvested C. albicans cells were resuspended at 10 7 cells ml −1 in cold PBS containing 10% FBS. 
RAW264.7 cells were seeded in each well of a 96-well culture dish at 2.5 × 10 4 cells per 150 μl per well or 5 × 10 4 cells per 150 μl per well. The prepared C. albicans cells were serially diluted, and 50 μl of cells were co-incubated with the macrophages. The plate was incubated on ice for 30 min and then cultured for 24 h at 37°C in 5% CO2. After the 24 h incubation, C. albicans colonies were observed and counted to determine the number of surviving cells. Mouse survival test Animal experiments were performed at the Kangwon National University Animal Laboratory Center with approval from the Institutional Animal Care and Use Committee (IACUC) of Kangwon University (Approval Number KW-170302-5). Five-week-old female BALB/c mice were acclimated for 1 week and injected via the tail vein with 10 6 CFU of C. albicans. At 25 days after injection, the surviving Δset1-injected mice were euthanized by CO2 inhalation. Kidney, spleen, and liver sections were stained with H&E (hematoxylin and eosin) or PAS (periodic acid-Schiff) and observed under a microscope. Set1 is required for full virulence but has a marginal effect on overall gene expression Set1 is the only H3K4 methyltransferase in C. albicans and is necessary for its full virulence in ICR mice [15]; however, the relationship between Set1-mediated H3K4 methylation and virulence remains unknown. Therefore, we investigated the mechanism by which Set1 regulates the pathogenicity of C. albicans through H3K4 methylation. First, we examined whether the virulence phenotype of Δset1 was similar in BALB/c mice, an inbred strain, compared with the outbred ICR mice. Briefly, two groups of mice (n = 5 per group) were infected with 1 × 10 6 CFU of the wild-type (WT) and Δset1 C. albicans strains via tail vein injection, and their survival was observed for 21 days. We observed that BALB/c mice infected with Δset1 survived longer than those infected with the WT (Figure 1(a)), indicating that the Δset1 mutant displays attenuated virulence in this mouse model. Because the Δset1-infected mice that survived after 21 days appeared to be healthy, we examined the tissues of the infected but surviving mice. No significant damage was observed in any C. albicans-infected tissues, including the kidney, spleen, and liver, compared to uninfected tissues. Moreover, C. albicans was not detected in any mouse tissues by staining with periodic acid-Schiff (PAS), indicating that C. albicans had been cleared (Fig. S1). For nonpathogenic C. albicans to become pathogenic in the host, large-scale alterations in gene expression are required. Previous studies have reported a positive correlation between H3K4 methylation mediated by Set1 and transcription [13,14,23]; therefore, we performed RNA-seq to determine which genes were differentially transcribed in Δset1. Surprisingly, we found that most genes' overall expression patterns were unaltered, even without H3K4 methylation (Figure 1(b-c)). Moreover, several pathogenesis-related genes exhibited increased expression in the Δset1 mutant (adjusted p-value < 0.05) (Fig. S2A), contrary to our expectations from the attenuated virulence phenotype of the Δset1 mutant. Most of the genes with increased expression in the Δset1 mutant encode proteins classified as cell wall proteins (Table S2). We observed that only 3% of all annotated genes were differentially expressed in the absence of Set1 (adjusted p-value < 0.05). In other words, the expression of 97% of genes was unchanged despite the lack of Set1-mediated H3K4 methylation.
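As a rough illustration of the filtering just described (and of the gene counts discussed next), the sketch below selects differentially expressed genes by adjusted p-value and then restricts attention to the top 50% most highly expressed genes. The published analysis used DESeq2 in R; this Python/pandas version with hypothetical column names only mirrors the logic.

```python
# Hedged sketch with hypothetical column names; the actual analysis used DESeq2 in R.
import pandas as pd

def summarize_de(table: pd.DataFrame, alpha: float = 0.05) -> dict:
    """table columns (assumed): gene, baseMean, log2FoldChange, padj."""
    sig = table[table["padj"] < alpha]

    # Keep only the top 50% most highly expressed genes, mirroring the filtering
    # step described in the text, then recount the downregulated genes.
    cutoff = table["baseMean"].median()
    expressed = table[table["baseMean"] >= cutoff]
    down_expressed = expressed[(expressed["padj"] < alpha) &
                               (expressed["log2FoldChange"] < 0)]

    return {
        "fraction_differential": len(sig) / len(table),
        "n_down_all": int((sig["log2FoldChange"] < 0).sum()),
        "n_up_all": int((sig["log2FoldChange"] > 0).sum()),
        "n_down_top50": len(down_expressed),
    }
```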
Remarkably, only 41 genes were downregulated in Δset1 (Table S3 and Figure 2(a); adjusted p-value < 0.05). To allow a meaningful statistical analysis of expression levels, we selected the top 50% most highly expressed genes for further analysis (3,097 of 6,194 genes). Of these genes, only 30 showed a decrease in expression in the Δset1 mutant (Figure 1(c), adjusted p-value < 0.05). Because a previous study reported that Set1 can regulate pathogenicity by regulating adherence to host epithelial cells [15], we selected 100 genes with the GO slim term for adhesion among all C. albicans genes using the Candida Genome Database (http://www.candidagenome.org/) and checked the expression levels of these 100 genes. As a result, we confirmed that the expression of most of them (93 of 100 genes) was not significantly different in the Δset1 mutant (p-value ≥ 0.05) (Table S4). In addition, adhesion of C. albicans is the first step in biofilm formation. When we checked the amount of biofilm formation using the XTT assay, we did not observe significant differences in the amount of biofilm between the WT and the Δset1 mutant (Fig. S2B). It was previously reported that the expression of most genes does not change in the Saccharomyces cerevisiae strain lacking Set1 [24,25]; therefore, the authors of this genome-wide study suggested that Set1-mediated H3K4 methylation does not regulate gene expression [24]. Because the virulence phenotype was clearly reduced in C. albicans Δset1, we focused on the 41 genes whose expression decreased depending on H3K4 methylation. To examine the functional role of these downregulated genes in Δset1, we analyzed the functional annotation of each orthologous gene in S. cerevisiae, which is well studied in terms of functional genomics (Table S3). Although no gene was found to have virulence-related functions, functional annotation clustering (https://david.ncifcrf.gov/tools.jsp) analysis revealed that approximately 25% of the 41 Δset1-downregulated gene-encoded proteins are localized in the mitochondria, while most of the other gene products are located in the cytoplasm (Figure 2(b) and Table S3). Approximately 13% of the 6,100 total proteins in yeast cells are generally localized in mitochondria [26]; therefore, there were considerably more mitochondrial proteins among the Set1-dependent genes than average. Set1 regulates the expression of mitochondrial genes involved in biogenesis and protection against oxidative damage The mitochondrion is the primary site of cellular ROS production, since its inner membrane houses the electron transfer system (ETS), a series of four large complexes (I to IV) that transfer electrons from donors to acceptors via redox reactions and pump protons out of the mitochondrial matrix, forming a proton gradient [27-29]. Because this proton gradient provides the proton-motive force, accumulated protons re-enter the matrix via ATP synthase, lowering the proton gradient and driving ATP synthesis. Electrons must pass through the complexes before being finally consumed in the reduction of O2 to H2O; however, when electrons leak out, they react with O2 and become unstable superoxide anions (O2−•). Generally, O2−• is converted to hydrogen peroxide (H2O2) by superoxide dismutase (SOD), thereby reducing its reactivity; however, hydroxyl radicals (•OH) can be generated from H2O2 via the Fenton reaction and cause cellular damage through lipid oxidation, protein denaturation, and DNA mutations [27-29].
We hypothesized that the virulence attenuation observed in Δset1 may be due to mitochondrial dysregulation. Thus, we focused on the 10 mitochondrial protein-coding genes, which were downregulated in the absence of Set1 in C. albicans (Figure 2(b)). The S. cerevisiae orthologs of the Set1-regulated genes, SOM1 (orf19.6359) and TOM5 (orf19.6247.1), play roles in the assembly and translocation of mitochondrial proteins, including the respiratory chain complex (Figure 2(a-b) and Table S3) [30][31][32][33]. A reduction in the expression of these genes could therefore reduce overall mitochondrial membrane integrity. Subsequently, the localization defect of the respiratory chains involved in oxidative phosphorylation could generate more mitochondrial ROS and eventually cause cellular damage. We also detected the downregulation of genes known to protect against oxidative stress damage in mitochondria, including NFU1 (orf19.2067) and AIM25 (orf19.3929; Figure 2(a-b) and Table S3) [34][35][36]. We found that the expression of mitochondrial protein genes was downregulated in the absence of SET1, which may result in the production of many mitochondria-driven ROS. In addition, we found that the expression of some oxidative stress-responsive genes was downregulated (Table S3), suggesting that the SET1-deleted strain produces too much cellular ROS to remove, making Candida cells more sensitive to external attacks. H3K4 methylation is highly enriched in set1-regulated genes Because H3K4me3 is an important histone modification for transcriptional initiation, we confirmed that Set1 mediated all three H3K4 methylation states using antibodies that recognize H3K4me1, -me2, and -me3, respectively (Figure 3(a)). We already selected 10 mitochondrial protein-coding genes whose expression decreased in Δset1 strain (Figure 2(b)). We then carried out chromatin immunoprecipitation (ChIP) followed by qPCR (ChIP-qPCR) to determine whether these 10 mitochondrial protein-coding genes were regulated by H3K4 methylation. ChIP-qPCR with the H3K4me3 antibody revealed that H3K4me3 levels were high in the 5′ ORF of the mitochondrial protein-coding genes among the Set1-regulated genes in the WT (Figure 3 (b)). Together, these results indicate that Set1 positively regulates target gene expression via H3K4me3, and that SET1 deletion reduces target gene expression. Set1 deletion causes cellular ROS accumulation and renders Δset1 cells hypersensitive to external oxidative stress We found that the expression of mitochondrial protein genes and oxidative stress response-related genes were downregulated in Δset1 cells (Figure 2(a) and Table S4); therefore, we observed the cells under a fluorescence microscope to measure the levels of ROS generated using CellROX Green staining, which detects oxidative stress by binding to DNA when oxidized. Briefly, cells were treated mildly with zymolyase to produce spheroplasts for increased permeability and then stained with CellROX Green and DAPI. The CellROX Green signal was more potent in Δset1 than in WT cells (Figure 4 (a)), suggesting that the absence of Set1 increases cellular ROS levels. We hypothesized that SET1 deletion causes the mitochondrial membrane leakage, thereby accumulating cellular ROS and making cells susceptible to external oxidative stress. To test this hypothesis, we determined the sensitivity of the Δset1 strain to oxidative stress by spotting them on H 2 O 2 -containing media. 
We observed that the Δset1 strain was more sensitive to H2O2 than the WT strain (Figure 4(b)), consistent with our hypothesis. Set1-dependent gene expression is required for C. albicans survival in the host Macrophages play a vital role in the innate immune system by degrading pathogens and sterilizing tissues [37]. When phagocytosis is induced by the host recognizing C. albicans, macrophages form inflammasomes and secrete ROS to attack the pathogen [38]. In response to the host, C. albicans expresses hypha-specific genes, resulting in morphogenesis and establishing a defense against oxidative stress [39]. Depending on the outcome of these processes, C. albicans is either cleared by macrophages or escapes from the phagosome and kills the macrophages [38]. To determine whether Δset1 C. albicans was more vulnerable to attack by ROS-releasing macrophages in the host, we performed an in vitro interaction analysis between mouse macrophages and C. albicans. (Figure caption: H3K4me3 is enriched in the 5′ ORFs of Set1-regulated genes. A, Western blot analysis of H3K4 methylation in S. cerevisiae (Sc) and C. albicans (Ca). FM391 is used as a WT control for S. cerevisiae Δset1. Set1 is the sole H3K4 methyltransferase in C. albicans. Histone H3 is used as a loading control. B, H3K4me3 ChIP followed by qPCR was performed in the WT and Δset1. The 5′ end sequences of the mitochondria-related Set1-regulated genes were used as amplicons. An intergenic region (IGR) is used as a negative control. All ChIP analyses were performed in two independent biological replicates, and qPCR was performed in triplicate. *p < 0.05 and **p < 0.01.) To determine the colony formation of C. albicans surviving macrophage attack, appropriately diluted C. albicans strains were cultured with the mouse macrophage cell line RAW264.7 (Figure 5(a)). We observed that the colonies formed by the Δset1 strain were too small to be characterized as colonies, unlike those formed by the WT strain (Figure 5(a)). Moreover, the Δset1 cells formed 50% fewer colonies than the WT cells (Figure 5(b)). These data suggest that Set1 contributes to C. albicans virulence by affecting resistance to, and survival of, macrophage attack. Because the virulence of Δset1 is clearly attenuated both in vivo and in vitro, we concluded that Set1 is required for the full virulence phenotype of C. albicans when infecting host tissues. The antioxidant enzymes that protect cells from ROS are conserved in C. albicans, including SOD, glutathione peroxidase (GPX), and catalase (CAT) [40-43] (Candida Genome Database; http://candidagenome.org/). The expression of antioxidant-encoding genes generally increases under oxidative stress conditions to neutralize ROS and prevent cell damage [44]; however, we found that the expression of these genes did not change in the Δset1 strain, which contains high cellular ROS levels (Table S5), suggesting that Set1 deletion does not affect the ability to neutralize ROS. (Figure caption: Cellular ROS is generated at higher levels in Δset1. A, Cellular oxidative status analysis in the WT and Δset1. Cells were harvested in the early exponential phase, permeabilized with zymolyase, and stained with 5 mM CellROX® Green. Cellular ROS was detected by fluorescence microscopy. C, Comparative growth of the WT and Δset1 on H2O2-containing media. Overnight-cultured cells were diluted to 10 7 CFU ml−1, and 3 μl of 5-fold serially diluted cells were spotted on YPD or YPD containing 5 mM H2O2. Each experiment was repeated under the same conditions.)
Because the Δset1 strain contains high cellular ROS levels, its antioxidants are saturated faster than those of the WT, making Δset1 more vulnerable to ROS attack. Overall, we propose that Set1 contributes to the virulence of C. albicans as shown in Figure 5(c). In WT cells, mitochondrial protein genes and oxidative stress response-related genes are normally expressed and their products are localized correctly; therefore, mitochondrial ROS levels are low, because WT C. albicans can immediately detoxify ROS as they are produced. Conversely, the mitochondrial proteins in Δset1 are not correctly localized, causing electrons to escape from the respiratory chain and increasing mitochondrial ROS levels. Although the ROS-scavenging system in Δset1 eliminates the ROS generated from the defective mitochondrial membrane, Δset1 shows high ROS levels even in the absence of external oxidative stress. When C. albicans is attacked by external ROS derived from host macrophages, or is grown in a medium supplemented with H2O2, the WT strain can remove ROS under the oxidative stress, but Δset1 cells cannot, owing to their saturated ROS-scavenging system. (Figure 5(c) legend: On the other hand, because the mitochondrial protein genes are not properly expressed in Δset1, the mitochondrial membrane becomes leaky and generates more cellular mitochondrial ROS than in the WT. Since the amount of ROS scavengers is the same as in the WT, the ROS accumulated in the cell are continuously neutralized by the antioxidant enzymes. However, when attacked by macrophages or treated with external ROS, Δset1 is more susceptible to oxidative stress because of the already generated cellular ROS.) Consequently, ROS quickly accumulate to a level that cannot be neutralized, and the survival rate of C. albicans Δset1 decreases as the cells are unable to resist oxidative stress (Figure 5(c)). Therefore, we propose that Set1-mediated gene expression is required for the survival of C. albicans against oxidative stress and for its pathogenicity.
These findings suggest that Set1 did not directly affect virulence by promoting the expression of virulencerelated genes but instead lowered the production of mitochondrial ROS by regulating the expression of mitochondrial protein genes. Therefore, antioxidant enzymes remained available for external ROS attack to reduce ROS levels to normal physiological conditions. Together, these findings indicate that Set1 allows C. albicans to survive longer in the host and eventually switch to its pathogenic form ( Figure 5(c)). Histone H3K4 methylation and the H3K4 methyltransferase, Set1, are well conserved in many eukaryotes, and several studies have reported that Set1 and Set1-mediated H3K4 methylation are important for the pathogenicity of some fungal pathogens, in which Set1 regulates virulence and the stress response by controlling the expression of specific genes. For instance, Set1 has been shown to be involved in the virulence of the plant pathogen Fusarium verticillioides by regulating the expression of fumonisin B1 toxin-encoded genes [17], whereas Set1-mediated H3K4 methylation activates the expression of TR1, which encodes the toxin deoxynivalenol (DON), in the plant fungal pathogen Fusarium graminearum [20]. In addition, Set1 is involved in Magnaporthe oryzae fungal virulence by activating virulence-related genes via H3K4 methylation [21], while an H3K4 methyltransferase is required to induce genes in the entomopathogenic fungus Metarhizium robertsii under host conditions and induce virulence to mosquito infection [19]. Importantly, C. albicans is the only example that the virulent effects of Set1 have been reported in a human pathogen [15]. H3K4me3 is a marker of active transcription that is enriched in actively transcribed genes; however, we found that the absence of SET1 did not change 97% of the total gene expression (Figure 1(a-b)). Similarly, recent genome-wide studies revealed that the absence of H3K4 methyltransferase did not dramatically change the overall gene expression in other fungal organisms, such as S. cerevisiae [46,47], M. oryzae [21], and Fusarium fujikuroi [18]. Because we observed that the expression of some genes increased in the absence of H3K4 methyltransferase, it is difficult to assume that the role of H3K4 methylation simply correlates with active transcription. Although the effect of Set1 was not significant in terms of the genome-wide expression profile, H3K4 methyltransferase has been implicated in the pathogenicity phenotype of various pathogenic fungi, including C. albicans [48]. In this study, we described the mechanism via which the absence of Set1 in C. albicans attenuates virulence, with a particular focus on the genes regulated by H3K4 methylation. Transcriptome analysis revealed that RNA levels did not change significantly in C. albicans in the presence or absence of Set1 under normal conditions; however, there may be a set of genes whose expression differs depending on the presence or absence of Set1 under oxidative stress conditions or during interactions with macrophage. Therefore, the role of H3K4 methylation in transcriptional regulation cannot simply be defined as active or negative regulation, but must be described as a function that regulates each gene differently in specific environments and should be investigated in future studies.
2021-10-27T06:18:23.890Z
2021-10-26T00:00:00.000
{ "year": 2021, "sha1": "5d815481a9d164bfafcc079d3d1a837353619d88", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21505594.2021.1980988?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a08ac98fd1ac15cd04f8f5fec168eeda5bc6f251", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248911451
pes2o/s2orc
v3-fos-license
The Usual Presentation of an Unusual Case: Spontaneous Primary Splenic Cyst Rupture Acute abdominal pain is one of the most common reasons for emergency admissions. Even though initial differentials are wide, a physician is able to narrow them down with detailed history, careful physical examination, and appropriate laboratory tests along with imaging studies. Unfortunately, some of the cases do not have an established diagnosis despite multiple blood work and imaging studies in the emergency department. In such conditions, physicians' recognition of rare diseases generally avoids extra costs for additional investigations, unnecessary consultations, and most importantly wasting valuable time in life-threatening conditions in emergency settings. Here, we report a 30-year-old woman with acute severe abdominal pain and hemodynamic instability who was found to have ascites that was actually hemoperitoneum secondary to spontaneous primary non-parasitic splenic cyst rupture. The primary splenic cyst is an extremely rare entity and is often found on imaging incidentally. A few case reports regarding primary splenic cyst and its complications were published in the literature. Since it is an exceptionally uncommon condition, there is no consensus on treatment. We aimed to increase the understanding of spontaneous primary splenic cyst rupture and its management among healthcare providers with this case report. Introduction Acute abdomen refers to sudden onset, localized, or generalized severe pain in the abdomen necessitating urgent care. It could be due to many reasons including infection, inflammation, obstruction, perforation, or vascular occlusion. A thorough history along with a physical examination is generally enough for diagnosis. Laboratory tests and imaging studies are utilized to confirm the diagnosis by physicians. However, some cases remain undiagnosed even after extensive workup. In these cases, increased awareness by healthcare providers for rare diseases that cause acute abdomen saves critical time for patient care and decreases excess cost and unnecessary interventions. The non-parasitic splenic cyst is a rare condition that is classified into two main groups: primary and secondary [1]. While primary splenic cysts could be of congenital, neoplastic, or dermoid origin, secondary splenic cysts either originated from trauma or necrosis. The primary nonparasitic splenic cyst is often detected as an incidental finding in imaging studies in individuals; however, some patients present with localized left upper quadrant symptoms due to increased size or generalized symptoms due to complications including infection and rupture. The potential complications of infection or rupture are especially in the setting of trauma, which is the most common cause of splenic rupture in patients with a normal or cysted spleen. Spontaneous primary splenic cyst rupture without trauma is occasionally identified in the literature [2]. Moreover, the treatment of splenic cyst and its complications is still controversial due to the rarity of the disease. Here, we aimed to discuss a rare case of spontaneous primary splenic cyst rupture that was successfully managed with conservative treatment. Case Presentation A 30-year-old woman presented to the emergency department with acute onset severe diffuse abdominal pain. She reported that she woke up early morning due to constant abdominal pain radiating to the back with 8/10 severity. She did not recall any trauma to the chest or abdomen. 
She was a sexually active, heterosexual woman using condoms and had regular menstrual cycles; the last one was three weeks earlier. Her heart rate was 118/min, and the rest of her vital signs were normal. Her examination was notable for moderate abdominal distension with no rebound or guarding but was otherwise normal. Laboratory data were significant for normocytic anemia with a hemoglobin of 10.4 g/dL and a neutrophilic leukocytosis of 19.8 × 10^9/L. The serum pregnancy test was negative, excluding the possibility of ectopic pregnancy. Computed tomography (CT) of the abdomen and pelvis with intravenous contrast revealed moderate ascites in the abdomen and pelvis and a splenic cyst of 2.5 cm × 2.5 cm (Figure 1, Panels a and b).

FIGURE 1: Axial CT images of the splenic cyst (a) and hemoperitoneum (b)

The patient was admitted to the medical floor overnight for further workup of the ascites and severe abdominal pain. Repeat hemoglobin the next morning was 7.2 g/dL, but her vitals remained stable apart from persistent tachycardia. Interventional radiology-guided paracentesis yielded 500 mL of frank blood consistent with hemoperitoneum (Figure 2, Panels a and b). Retrospectively, the ascites described on the abdominal CT was reevaluated by a radiologist, and its density was found to be 38 Hounsfield units (HU), similar to the density of blood in the aorta. The fluid analysis also confirmed hemoperitoneum, with an RBC count of 8663 M/µL. Surgery was consulted immediately with the diagnosis of spontaneous rupture of the presumably primary splenic cyst. Given the moderate symptoms, the surgery team opted for conservative management in the critical care unit, with close hemodynamic monitoring, serial abdominal exams, and frequent hemoglobin checks. Interventional radiology-guided angiography to rule out aneurysms was deferred because the splenic artery looked normal on the CT scan. The patient was discharged as her hemoglobin did not drop any further and her abdominal pain subsided during the stay. She did not report any active symptoms at a three-month follow-up after discharge. The surgery team decided to follow her closely for recurrence, with a plan for splenectomy if the cyst rupture recurs.

Discussion

Acute abdomen requires rapid identification of the etiology and appropriate treatment and follow-up decisions in the emergency room. Even though most cases are straightforward for clinicians, some can be challenging in terms of diagnosis and management. Splenic cysts are generally incidental findings on imaging in people without any symptoms. The primary splenic cyst can cause symptoms either when it is large or when it is complicated by infection, rupture, bleeding, or hemoperitoneum. Splenic rupture is a life-threatening emergency that most frequently occurs secondary to trauma. Non-traumatic rupture of the spleen is extremely uncommon and is usually related to underlying pathological conditions including hematological diseases, neoplasms, inflammation, and infection. Here, we present a non-traumatic splenic rupture in the setting of a primary splenic cyst that was successfully managed with conservative treatment. To date, fewer than 1000 individuals with splenic cysts and only 13 cases of ruptured splenic cysts have been reported in the literature [3-5]. The most common symptom of splenic rupture is left-upper-quadrant abdominal pain that later generalizes, with abdominal distention and rigidity.
Pallor, tachycardia, hypotension, and oliguria are also expected as signs of bleeding. Paracentesis with aspiration of fresh blood is useful to diagnose intraperitoneal hemorrhage. Point-of-care ultrasound (POCUS) and the focused assessment with sonography in trauma (FAST) have been used at the bedside in emergency settings to address specific clinical questions and speed the diagnosis and treatment of patients. Over the last three decades, the use and application of ultrasound have expanded to include multiple diagnostic studies and procedural uses, and it has become an integral part of emergency assessments. Our patient would have been a good candidate for POCUS and POCUS-guided paracentesis; had splenic rupture been suspected within the initial hours of admission, this would have prevented a diagnostic delay. Fortunately, our patient did not suffer from the delayed diagnosis.

Splenic cysts are broadly divided into two groups: parasitic cysts (secondary to Echinococcus granulosus infection) and non-parasitic cysts. Non-parasitic cysts can be subclassified into primary (congenital, neoplastic, and dermoid) and secondary cysts (trauma and necrosis). Most splenic cysts are acquired in the setting of trauma and, in contrast to congenital cysts, lack an epithelial lining (pseudocysts). Radiologically, splenic cysts are fluid-density lesions. Ultrasound usually demonstrates an anechoic to hypoechoic, well-defined intrasplenic lesion with no septations unless complicated. On CT, splenic cysts typically appear as well-defined, fluid-attenuation, unilocular masses with imperceptible walls; CT also identifies cyst wall calcifications and septations very well. Magnetic resonance (MR) imaging shows splenic cysts as well-defined, non-enhancing cystic lesions with low signal intensity on T1 and very high signal intensity on T2. MR is also useful for delineating the relationship between the cyst, the spleen, and the surrounding organs [6].

There is no consensus on the treatment of splenic cysts, owing to the limited number of cases, even though many distinct approaches have been reported, including conservative treatment, spleen-preserving procedures, and total splenectomy [7]. Total splenectomy is generally the treatment of choice for asymptomatic splenic cysts greater than 5 cm, for symptomatic splenic cysts of any size, and for complicated cysts [6]. With increasing awareness of the immunologic function of the spleen, spleen-preserving techniques, including percutaneous drainage, fenestration, marsupialization, and partial splenectomy, or conservative management with close monitoring, have gained interest among surgeons over total splenectomy, especially in hemodynamically stable patients, because they avoid the need for vaccinations, the risk of infection with encapsulated organisms, and prolonged antibiotic use after splenectomy [7]. Partial splenectomy preserves more than 25% of the parenchymal tissue, which is generally enough to conserve the immunologic function of the spleen without increasing the risk of relapse. Marsupialization, or partial cystectomy, is another option for splenic cysts that decreases operative time with a minimal risk of recurrence [6]. In our patient, the surgery team preferred conservative management with close monitoring in the surgical ICU, given the moderate abdominal symptoms and stable vitals other than tachycardia.
Conclusions

Spontaneous splenic rupture due to a primary splenic cyst is an extremely rare condition, and clinicians should maintain a high index of suspicion for the diagnosis, especially in patients presenting with unexplained abdominal fluid together with anemia or hemodynamic instability. In such cases, healthcare providers should pay particular attention to the spleen and its pathologies. Delayed diagnosis and management of a primary splenic cyst jeopardize the patient's safety and can result in serious consequences, including death.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
A Pharmacodynamic Study of Aminoglycosides against Pathogenic E. coli through Monte Carlo Simulation

This research focuses on combating the increasing problem of antimicrobial resistance, especially in Escherichia coli (E. coli), by assessing the efficacy of aminoglycosides. The study specifically addresses the challenge of developing new therapeutic approaches by integrating experimental data with mathematical modeling to better understand the action of aminoglycosides. It involves testing several antibiotics, namely streptomycin (SMN), kanamycin (KMN), gentamicin (GMN), tobramycin (TMN), and amikacin (AKN), against the O157:H7 strain of E. coli. The study employs a pharmacodynamic (PD) model to analyze how different antibiotic concentrations affect bacterial growth, utilizing the minimum inhibitory concentration (MIC) to gauge the effective bactericidal levels of the antibiotics. The study's approach involved transforming the bacterial growth rates obtained from time-kill curve data into logarithmic values. A model was then developed to correlate these log-transformed values with their respective responses. To generate additional data points, each value was systematically increased by an increment of 0.1. To simulate real-world variability and randomness in the data, a Gaussian scatter model, characterized by parameters such as κ and EC50, was employed. The mathematical modeling was pivotal in uncovering the bactericidal properties of these antibiotics, indicating different PD MIC (zMIC) values for each (SMN: 1.22; KMN: 0.89; GMN: 0.21; TMN: 0.32; AKN: 0.13), which aligned with the MIC values obtained through microdilution. This blend of experimental and mathematical approaches marks a significant advance in formulating strategies against the growing threat of antimicrobial-resistant E. coli, offering a novel pathway to understand and tackle antimicrobial resistance more effectively.

Introduction

Over the past few decades, antimicrobial resistance has emerged as one of the most pressing health concerns worldwide [1]. Central to this concern is the resistance exhibited by common pathogens, with Escherichia coli (E. coli) being a prime example [2]. This resistance is not merely of academic interest; it has real-world consequences. As these microorganisms evolve and develop resistance, they render many previously effective therapeutic agents obsolete [3]. The resulting dearth of therapeutic strategies complicates clinical treatment and prolongs patient recovery [4].

Against this backdrop, aminoglycosides have stood out as a beacon of hope [5]. As a potent class of antibiotics, they have been tailored specifically to counter Gram-negative bacterial threats, a category to which E. coli belongs [5].
Aminoglycosides enter bacterial cells passively and then actively cross the inner membrane, where they hinder protein synthesis by binding to the 30S ribosomal subunit, leading to defective proteins and bacterial cell death [6]. This disruption of protein synthesis is the primary mode of their bactericidal action [7]. However, the efficacy of antibiotics is not solely contingent on their direct bacterial action but also on a complex interplay of absorption, distribution, metabolism, and excretion in the human body, collectively referred to as pharmacokinetics (PK) [8]. Moreover, it is not just about how the body processes these drugs; it is equally about how these drugs, once administered, influence both the pathogen and the host. This sphere of influence, known as pharmacodynamics (PD), encapsulates the drug's therapeutic and adverse effects [9]. The relationship between PK and PD is integral to determining the dosage regimen of a drug [10], optimizing its therapeutic efficacy [11], and minimizing adverse effects [12].

Despite the pivotal role that aminoglycosides play, especially within the PK/PD framework, there exists a puzzling gap: comprehensive and granular data on these antibiotics, spanning from their absorption kinetics to their bacterial eradication rates [13,14], are not as abundant as one would expect. This paucity of data is especially surprising given the gravity of the antimicrobial resistance issue and the prominence of aminoglycosides in counteracting such resistance [15]. It underscores an urgent need for research to inform more robust clinical strategies. The Emax model, which holds significant importance in PK/PD for quantifying the effect of a drug in relation to its concentration, is intricate because of its reliance on in vivo studies [16]. Such in vivo studies often introduce complexities due to physiological variables, making the extrapolation of results challenging [17]. Recognizing this, a previous study pioneered an alternative approach through PD modeling [18]. While conceptually parallel to the Emax model, it leverages mathematical computation, offering a more systematic, replicable, and less labor-intensive method.

Driven by the above gaps and innovations, our research embarked on a dual-phase journey. The first phase involved in vitro time-kill assays of five distinct aminoglycosides, gauging their efficacy against E. coli. These assays, under controlled conditions, aimed to chart the bactericidal trajectory of each aminoglycoside over time. Armed with these data, the subsequent phase employed the PD model proposed by previous research [18]. This study aimed to enhance the understanding of the potential of aminoglycosides against increasing antimicrobial resistance by connecting experimental results with mathematical models. Through detailed computational analysis, we seek to offer dependable methods that assist in making well-informed decisions for therapeutic strategies.

Results

MIC and MBC

In the assessment of the five antibiotics against E. coli, the findings revealed the following MIC and MBC values (Figure 1 and Table 1; all concentrations in µg/mL, per the microdilution range described in the Methods). For SMN, the MIC was determined to be 2, with an MBC of 4 and an MBC-to-MIC ratio of 2. KMN exhibited an MIC of 1, an MBC of 2, and a ratio of 2. GMN had an MIC of 0.25, an MBC of 1, and a ratio of 4. TMN presented an MIC of 0.5, an MBC of 1, and a ratio of 2. Lastly, AKN demonstrated an MIC of 0.25, an MBC of 1, and a ratio of 4. These values provided insight into the efficacy of each antibiotic in inhibiting and killing E. coli.
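As a minimal illustration (not part of the study's own pipeline), the MBC/MIC classification can be reproduced in a few lines of Python; the values are those of Table 1, and the bactericidal threshold of 4 follows the convention discussed later in the paper.

import numpy as np  # not strictly needed here, kept for consistency with later sketches

# Classify antibiotics as bactericidal using the MBC/MIC <= 4 rule.
# Values in ug/mL, taken from Table 1 of this study.
mic_mbc = {
    "SMN": (2.0, 4.0),
    "KMN": (1.0, 2.0),
    "GMN": (0.25, 1.0),
    "TMN": (0.5, 1.0),
    "AKN": (0.25, 1.0),
}

for drug, (mic, mbc) in mic_mbc.items():
    ratio = mbc / mic
    label = "bactericidal" if ratio <= 4 else "bacteriostatic"
    print(f"{drug}: MIC={mic}, MBC={mbc}, MBC/MIC={ratio:.0f} -> {label}")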
Time-Kill Curves against E. coli

Figure 2A-E illustrates the bacterial count measurements for the antibiotics against E. coli over a 24 h period. At the onset (0 h), all antibiotics exhibited consistent bacterial counts across all concentrations (MIC, 2MIC, and 4MIC), as well as the control, with a value of 6.21 log cfu/mL. As time progressed, the control samples consistently showed an upward trajectory of bacterial growth, culminating at 9.56 log cfu/mL at the 24 h mark. In contrast, for all antibiotics, bacterial counts typically decreased as the concentration increased. By 24 h, SMN's bacterial count was highest at MIC with 8.12 log cfu/mL but dwindled to 2.32 log cfu/mL at 4MIC. Similarly, KMN displayed a count of 6.75 log cfu/mL at MIC that plummeted to 1.61 log cfu/mL at 4MIC. GMN, TMN, and AKN followed the same trend. This overarching pattern underscored the potent growth-inhibitory effects of these antibiotics on E. coli, with their efficacy generally amplifying at higher concentrations.

The area under the curve (AUC) of viable cells, consistent with the time-kill curves, offered insight into the performance of the antibiotics against E. coli over time. The heat map in Figure 2F provides a visual representation of their comparative efficacy: darker shades indicate reduced activity, while lighter shades indicate higher activity. For SMN, the AUC values were 179.3, 162.5, and 83.22 for the MIC, 2MIC, and 4MIC concentrations, respectively, while the control showed a higher AUC of 208.5. KMN demonstrated AUC values of 151.8, 98.47, and 67.5 for MIC, 2MIC, and 4MIC, respectively, again with a control AUC of 208.5. GMN had AUC measurements of 112.9, 93.77, and 61.38 for its respective concentrations. TMN posted AUC results of 123.3, 94.17, and 69.63, while AKN registered values of 107.5, 85.08, and 71.29. These findings indicated the effectiveness of the antibiotics in inhibiting bacterial growth over time, with lower AUC values representing better antibiotic efficacy.

PD Modeling through Simulation

Equation (3), which represents the PD function, was fit to the observed net growth rates of bacteria depicted in Figure 2A-E, and a model of logarithm versus response was developed from it. This approach facilitated the derivation of four key parameters, ψmax, ψmin, κ, and EC50, which are presented in Figure 3 and detailed in Table 2 (ψmax is the maximal bacterial growth rate; ψmin is the minimal bacterial growth rate; EC50 is the concentration producing 50% of the maximal antibacterial effect).
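Both quantities used above, the AUC of the viable-count curves and the net growth rates to which Equation (3) is fit, can be computed directly from time-kill data. The sketch below illustrates this under stated assumptions: the sampling times follow the Methods, the 0 h inoculum (6.21) and the 24 h endpoints for the control and SMN at 4MIC are quoted from the text, and all intermediate counts are hypothetical placeholders.

import numpy as np
from scipy.integrate import trapezoid

# Sampling times follow the Methods (0, 1, 2, 4, 8, 12, 24 h).
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])          # hours
control  = np.array([6.21, 6.9, 7.5, 8.3, 9.0, 9.3, 9.56])   # log10 cfu/mL (placeholders between endpoints)
smn_4mic = np.array([6.21, 5.8, 5.2, 4.4, 3.6, 3.0, 2.32])   # log10 cfu/mL (placeholders between endpoints)

# (1) AUC of viable cells by the trapezoidal rule: lower AUC -> stronger killing.
print("AUC control:", trapezoid(control, t))
print("AUC SMN 4xMIC:", trapezoid(smn_4mic, t))

# (2) Net growth rate psi: change of log10 density per hour, the quantity the
# PD function is fit to (here a crude whole-curve slope, for illustration).
psi_control = (control[-1] - control[0]) / (t[-1] - t[0])     # > 0: net growth
psi_4mic = (smn_4mic[-1] - smn_4mic[0]) / (t[-1] - t[0])      # < 0: net killing
print("psi control:", psi_control, "psi SMN 4xMIC:", psi_4mic)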
Based on the parameter estimates displayed in Table 2 for the various aminoglycosides, we can observe a distinct PD profile for each antibiotic. For SMN, ψmax was estimated at 0.5651, with a confidence interval (CI) ranging from 0.4419 to 0.7845. Conversely, ψmin showed a significant inhibitory effect at −0.8166 (CI: −1.028 to −0.6976). The Hill coefficient, indicative of the steepness of the drug-effect curve, was −0.7631 (CI: −1.244 to −0.4600), and the EC50, the concentration required to achieve half the maximal antibacterial effect, was 2.996 (CI: 1.781 to 5.250). This antibiotic also showed a high R² value of 0.9842, suggesting a strong fit to the observed data. KMN, on the other hand, presented a higher ψmax of 0.7290 (CI: 0.5889 to 1.022), indicating a faster growth rate of the bacteria in the absence of the drug. Its ψmin was −0.9728 (CI: −1.213 to −0.8488), and its Hill coefficient was −0.5324 (CI: −0.7153 to −0.3553), suggesting a less steep response curve than SMN's. The EC50 for KMN was lower, at 1.374 (CI: 0.8118 to 2.202), and the R² value was exceptionally high at 0.9938, denoting a very accurate model fit. The individual PD parameters of the other antibiotics, GMN, TMN, and AKN, similarly reflect their unique inhibitory profiles and potencies, with varying degrees of bacterial growth inhibition and death rates, as evidenced by their respective EC50 values and Hill coefficients.

These calculated values then allowed the introduction of the Gaussian scatter model, followed by the application of the Monte Carlo simulation. The PD function provided an excellent fit for all five antibiotics, as evidenced by the adjusted R² values shown in Figure 4. The simulated ψmax, ψmin, EC50, and Hill coefficients are shown in Figure 5.

Table 3 presents the parameter estimates for the antibiotics determined through the Monte Carlo simulations, with particular emphasis on the zMIC values. SMN has a zMIC of 1.22, indicating the concentration at which it inhibits the growth of the bacterial population. KMN exhibits a lower zMIC of 0.89 ± 0.52, suggesting a potent antibacterial effect at lower concentrations. GMN shows an even lower zMIC of 0.21 ± 0.02, highlighting its strong efficacy in inhibiting bacterial growth. TMN presents a zMIC of 0.32 ± 0.15, comparable to GMN, indicating its effectiveness in halting bacterial proliferation. AKN has the lowest zMIC of the group, 0.13 ± 0.02, suggesting that it is the most effective at inhibiting bacterial growth at minimal concentrations. These zMIC values provide critical insight into the pharmacodynamics of these antibiotics, showcasing their potential effectiveness at specific concentrations against bacterial growth.
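As a quick consistency check of the claim that the zMICs align with the microdilution MICs, the two sets of values quoted above can be compared directly. This is a sketch; "alignment" is interpreted here, as an assumption on our part, as agreement within one twofold dilution step.

# Compare Monte Carlo zMIC estimates with broth-microdilution MICs
# (values from Tables 1 and 3 of this study; concentrations in ug/mL).
mic  = {"SMN": 2.0, "KMN": 1.0, "GMN": 0.25, "TMN": 0.5, "AKN": 0.25}
zmic = {"SMN": 1.22, "KMN": 0.89, "GMN": 0.21, "TMN": 0.32, "AKN": 0.13}

for drug in mic:
    ratio = mic[drug] / zmic[drug]
    within = 0.5 <= ratio <= 2.0   # within one twofold dilution step
    print(f"{drug}: MIC={mic[drug]}, zMIC={zmic[drug]}, MIC/zMIC={ratio:.2f}, within 2-fold: {within}")

Running this shows every ratio falling between 1.1 and 1.9, consistent with the stated agreement between the two methods.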
Discussion

Addressing the challenge of antimicrobial resistance in E. coli necessitates the development of rigorous and consistent methods for the in vitro evaluation of antimicrobial interventions [2]. The research presented in this study performed an in vitro time-kill assay of aminoglycosides against E. coli. By using this assay, we sought to gain a deeper understanding of the dynamic interaction between antimicrobials and bacterial populations over time, which is essential for predicting treatment outcomes in clinical settings. The incorporation of the PD model, which elucidates the intricate relationship between antimicrobial concentration and bacterial growth rate, added another layer of sophistication to our analysis. Such models are paramount in bridging the gap between in vitro results and their clinical implications: they offer insight into the optimum concentration levels needed to curb bacterial growth, thereby facilitating a more targeted approach to dosing regimens.

The MBC-to-MIC ratio serves as a critical parameter for understanding the bactericidal nature of antibiotics [19]. It is widely accepted that an antibiotic with an MBC/MIC ratio of 4 or less is generally considered bactericidal against a particular microorganism [20]. In our study, evaluating five antibiotics against pathogenic E. coli, the findings suggested promising bactericidal activity: all the antibiotics tested exhibited an MBC/MIC ratio of 4 or less, classifying them as bactericidal agents against this strain of E. coli.
SMN, KMN, and TMN each displayed a ratio of 2, which indicates a strong bactericidal potential, since their killing concentrations are only twice their inhibitory concentrations. GMN and AKN, on the other hand, presented a ratio at the threshold of 4. While still within the bactericidal range, this means that their bactericidal concentrations are four times their inhibitory concentrations, marking a relatively larger gap between inhibition and killing capabilities compared with the other antibiotics.

The AUC derived from the time-kill assay is an indispensable metric for assessing the efficacy of antimicrobials [21]. Essentially, it offers a quantitative representation of the bacterial response over time under antibiotic treatment. In the context of antimicrobial susceptibility, a smaller AUC typically denotes a more potent antibacterial effect, as it indicates fewer viable bacterial cells over the assay's duration. In our data, the AUC values for SMN across the concentrations (MIC, 2MIC, and 4MIC) clearly indicated a concentration-dependent effect: as the concentration increased, the AUC diminished, underscoring a heightened antibiotic effect. A similar trend is discernible for KMN, with its AUC values diminishing progressively with increasing antibiotic concentration.

The PD function served as a tool to elucidate the relationship between bacterial net growth rates and varying concentrations of antibiotics belonging to different classes [18]. This function corresponds closely to the Emax models mentioned in other reports [22]. Four essential parameters are clearly defined within it: ψmax denotes the peak bacterial growth rate when no antibiotic is present; ψmin denotes the lowest net bacterial growth rate at high antibiotic concentrations; zMIC acts as an indicator of the PD MIC; and the Hill coefficient quantifies how sensitively bacterial growth or mortality rates respond to changes in antibiotic concentration [23]. The Hill coefficient is a pivotal determinant of the curve's gradient, especially around the zMIC point, and provides profound insight into how alterations in antimicrobial concentration influence bacterial elimination [24]. Intriguingly, previous research established a noteworthy correlation: antimicrobials that act in a concentration-dependent manner, epitomized by drugs like ciprofloxacin, typically show elevated Hill coefficients [18]. In contrast, time-dependent antimicrobials like tetracycline tend to have lower Hill coefficients. An in-depth analysis of Table 3, which details the parameter estimates obtained via the Monte Carlo method, further supports these results. The data show GMN having a Hill coefficient of 1.00 ± 0.06 and TMN a coefficient of 1.56 ± 0.20, hinting at a likely concentration-dependent mechanism. In contrast, AKN, with a Hill coefficient of 0.53 ± 0.14, appears to tend toward a time-dependent mode of action. While AKN is generally recognized as concentration-dependent [25], previous research has revealed time-dependent toxic effects on the renal function of male Wistar rats [26]: variations in toxic effects, such as decreased creatinine clearance and urinary excretion of furosemide, depended on the timing of AKN administration. The MICs of these antibiotics were assessed in vitro using a twofold dilution method, and the zMICs were found to align with the ranges determined by the dilution method.
The zMICs, however, offer greater precision, as they are not constrained to twofold dilution steps. The parameter estimates from Table 3, obtained through the Monte Carlo simulations, reveal distinct pharmacodynamic profiles of the antibiotics against E. coli. Antibiotics like SMN and KMN show significant differences in their ψmax (0.46 ± 0.05 and 0.90 ± 0.13) and ψmin values (−0.92 ± 0.13 and −0.99 ± 0.12), indicating varied ranges of action and potencies. SMN showed intricate interactions with bacterial cells, whereas KMN revealed a stable and strong efficacy over a range of concentrations; this was further explored in a study of streptomycin resistance in E. coli mutants [27]. GMN, known for its pronounced concentration-dependent impact, a finding corroborated by earlier studies on intracellular Yersinia pestis [28], stands in stark contrast to TMN, which displays a distinct mode of action, as evidenced by its notably high negative Hill coefficient, the most extreme among the antibiotics evaluated here. AKN stood out with a potentially time-dependent action, suggested by its Hill coefficient and the lowest zMIC value. These findings underscore the diverse mechanisms of action of these antibiotics, which is crucial for understanding their effectiveness against antimicrobial-resistant E. coli strains.

The parameters obtained from the in vitro time-kill curves, notably ψmax, ψmin, and zMIC in tandem with the Hill coefficient, define the PD profile of these antibiotics. Tailoring antibiotic therapy based on such insights could pave the way for a reproducible and affordable strategy to measure antibiotic properties.

The role of PD in tackling antibiotic resistance has gained paramount importance. Recent research has enriched our knowledge in this domain, notably for gonorrhoea caused by Neisseria gonorrhoeae, where increasing resistance to conventional treatments has been observed [29]. To address this, an in vitro time-kill curve assay was used innovatively, revealing the effectiveness of nine different antimicrobials against established reference strains; the study emphasized the crucial role of this approach, especially through a PD lens, in shaping future gonorrhoea treatments. In another study, the intrinsic qualities of antimicrobial peptides, recognized for their unique PD attributes and their low propensity to elicit bacterial resistance, were scrutinized [30]. A detailed analysis revealed their effects on Staphylococcus aureus, demonstrating an adaptive PD relationship under extended drug exposure and underlining the necessity of comprehending these adaptations to manage resistance development. Other research focused on evaluating the impact of specific antimicrobials on the growth of Neisseria gonorrhoeae [31]: detailed analyses through PD functions revealed that higher doses of ceftriaxone might be potent against particular Neisseria gonorrhoeae variants and introduced GMN as a potential contender for treatment.
In the study presented, Gaussian models were applied to examine the data distributions and to identify primary patterns and anomalies within the data set. Following this, Monte Carlo simulations were executed to predict outcomes, taking into account the stochastic nature and inherent fluctuations of the system being studied. Although Monte Carlo simulations are pivotal for assessing stochastic phenomena, integrating them with Gaussian models introduces certain complexities. One notable limitation is the assumption in the simulation process that variables act independently, an assumption that may not hold in Gaussian frameworks, where variables often exhibit interdependencies that could influence the simulation outcomes. Despite integrating Gaussian models with Monte Carlo simulations, the results of this study showed a linear correlation (as reflected by the R-squared value). Therefore, while the methodologies employed here have provided valuable insights, cautious interpretation is required. It is recommended that these methods be supplemented with additional analytical techniques to ensure more robust and reliable results. Such a multifaceted approach would help mitigate methodological limitations and provide a more holistic understanding of the data and their implications.

Although the results of this research are encouraging, it is crucial to acknowledge the intrinsic constraints associated with in vitro experiments. Clinical conditions in actual practice are considerably more intricate, shaped by numerous elements such as the host's immune response, the virulence of the bacteria, and the pharmacokinetics (PK) of the drug. However, the integration of time-kill curve assays and sophisticated PD modeling has laid a robust groundwork for future research endeavors to expand upon.

Materials and Methods

Chemicals and Reagents

The antibiotics used in the study (streptomycin (SMN), kanamycin (KMN), gentamicin (GMN), tobramycin (TMN), and amikacin (AKN)) were obtained from Sigma-Aldrich (St. Louis, MO, USA). They were prepared for use by dissolving them according to the provided guidelines and recommendations.

Bacteria Culture

Escherichia coli (E. coli) ATCC 43888 was acquired from the American Type Culture Collection (ATCC). The procured bacteria were cultured on Luria Bertani (LB) agar plates (BD Diagnostics, Sparks, MD, USA) and incubated at 37 °C for 24 h. After the incubation, emerging colonies were selected, transferred into 5 mL of Mueller Hinton broth (MHB) (BD Diagnostics, Sparks, MD, USA), and incubated overnight at 37 °C. The bacteria were then subcultured into another 5 mL of the same medium and maintained at 37 °C with agitation at 180 rpm in a shaker/incubator for 3 h, allowing the bacteria to reach the mid-logarithmic growth phase [32].

Minimum Inhibitory Concentration (MIC) and Minimum Bactericidal Concentration (MBC) of Antibiotics against E. coli
The minimum inhibitory concentration (MIC) of the antibiotics (SMN, KMN, GMN, TMN, and AKN) against the E. coli O157:H7 strain ATCC 43888 was assessed using a twofold serial dilution method, with concentrations ranging from 0.03125 to 64 µg/mL, in accordance with the guidelines of the Clinical and Laboratory Standards Institute (CLSI) [33]. After inoculation, the plates were incubated at 37 °C for 24 h. The MIC was determined as the lowest antibiotic concentration that visually inhibited bacterial growth in the medium; a microplate reader (Versamax™, Idaho Emmett, ID, USA) was used to confirm the results. To establish the minimum bactericidal concentration (MBC), samples from the three concentrations above the determined MIC at which no visible bacterial growth was observed were plated onto LB plates. These plates, after incubation at 37 °C for 24 h, were examined for a 3-log10 reduction in the initial bacterial count.

Time-Kill Curves of Antibiotics against E. coli

The in vitro time-kill curves of the antibiotics (SMN, KMN, GMN, TMN, and AKN) against the E. coli O157:H7 strain ATCC 43888 were created following the CLSI guidelines [33]. The bacterial concentration was adjusted to a final inoculum of 1.5 × 10^6 cfu/mL and then exposed to antibiotic concentrations ranging from 1× to 4× MIC. Control growth curves were established using MHB without any antibiotic. Bacterial counts were conducted at 0, 1, 2, 4, 8, 12, and 24 h of culturing, after which the plates were incubated for 24 h at 37 °C on LB agar [34].

PD Modeling

The study investigated the PD relationship between the concentration of an antibiotic and the corresponding growth and death rates of the bacteria [18]. A model was used to depict the net growth rate (ψ) of a bacterial population exposed to a particular antibiotic concentration (a). This net growth rate is a function of several factors. In our model, the maximal bacterial growth rate is denoted ψmax, while the bacterial death rate at a given antibiotic concentration is represented by µ(a), which follows a Hill function. Essential parameters of this model include Emax, the maximum death rate induced by the antibiotic, and EC50, the antibiotic concentration at which the death rate is half of Emax. The Hill coefficient, κ, is another critical parameter; it describes the steepness of the curve relating µ to a, which typically has a sigmoidal shape. To quantify these rates (ψ(a), ψmax, µ(a), and Emax), we evaluated the hourly logarithmic changes (base 10) in bacterial density. Furthermore, zMIC is defined as the PD MIC at which no bacterial growth is observed, meaning that ψ(zMIC) = 0. Figure 6 illustrates how these parameters shape the relationship between antibiotic concentration and bacterial growth rate.
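The functional form referred to above as Equation (3) did not survive extraction. Assuming the Regoes-type parametrization that this description matches (an assumption on our part, since the original equation is not shown), the PD function and the zMIC-EC50 relation are commonly written as

\[
\psi(a) \;=\; \psi_{\max} \;-\; \frac{(\psi_{\max}-\psi_{\min})\,(a/\mathrm{zMIC})^{\kappa}}{(a/\mathrm{zMIC})^{\kappa} \;-\; \psi_{\min}/\psi_{\max}},
\qquad
\mathrm{zMIC} \;=\; \mathrm{EC}_{50}\left(\frac{\psi_{\max}}{-\psi_{\min}}\right)^{1/\kappa},
\]

where the second relation holds when µ(a) is a saturating Hill function with maximum Emax = ψmax − ψmin and κ > 0. This form reproduces the limits stated in the text: ψ(0) = ψmax, ψ(zMIC) = 0, and ψ(a) → ψmin as a → ∞.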
Monte Carlo Simulation

In this approach, the bacterial growth rates derived from the time-kill curve data were transformed onto a logarithmic scale. A detailed model was then established to link these logarithmic values with their corresponding responses. Subsequent data points were generated by adding a fixed increment of 0.1 to each value. To incorporate randomness and variability into the data, a Gaussian scatter model with a standard deviation of 0.1 was applied.

Statistical Analysis

The data are presented as mean values accompanied by standard deviations. For statistical analysis, ANOVA (analysis of variance) was used, executed via the GraphPad Prism software (version 8.0.1, La Jolla, CA, USA). A p-value below 0.05 was considered statistically significant.

Conclusions

In summary, facing the growing menace of antimicrobial-resistant E. coli, the integration of novel experimental methodologies and mathematical modeling will play a crucial role in guiding future research and developing new treatment approaches. Although the journey forward is filled with obstacles, employing thorough and scientific methods such as those demonstrated in this study equips us more effectively to tackle the intricate issues surrounding antimicrobial resistance.

Figure 5. Simulated EC50 and Hill coefficient by the Monte Carlo method.
Figure 6. Pharmacodynamic model of the relationship between antibiotic concentration and bacterial growth. ψ represents the net growth rate of the bacteria; ψmax and ψmin are the maximal and minimal bacterial growth rates; κ, the Hill coefficient, sets the steepness of the curve.
Table 1. Minimum inhibitory concentration and minimum bactericidal concentration of the 5 antibiotics against E. coli.
Table 2. Parameter estimates based on the observed bacterial growth rates (n = 3).
Table 3. Parameter estimates of the antibiotics through Monte Carlo simulation.
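To close, the Monte Carlo procedure described in the Methods can be sketched in Python. This is a minimal illustration under stated assumptions: the PD function is written in the Regoes-type parametrization given earlier (with zMIC as a direct fit parameter rather than EC50), the concentrations and "observed" rates are placeholders, and only the Gaussian scatter step (standard deviation 0.1) follows the text exactly.

import numpy as np
from scipy.optimize import curve_fit

def psi(a, psi_max, psi_min, kappa, zmic):
    # Regoes-type PD function: psi(0) = psi_max, psi(zmic) = 0, psi(inf) -> psi_min
    x = (a / zmic) ** kappa
    return psi_max - (psi_max - psi_min) * x / (x - psi_min / psi_max)

conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])   # ug/mL, illustrative
rate = psi(conc, 0.6, -0.9, 1.2, 1.0)              # noise-free "observations"

rng = np.random.default_rng(0)
fits = []
for _ in range(1000):
    noisy = rate + rng.normal(0.0, 0.1, size=rate.size)   # Gaussian scatter, sd 0.1
    try:
        p, _ = curve_fit(psi, conc, noisy, p0=[0.6, -0.9, 1.2, 1.0],
                         bounds=([0.0, -5.0, 0.1, 1e-3],
                                 [5.0, -1e-3, 10.0, 100.0]))
        fits.append(p)
    except (RuntimeError, ValueError):   # skip replicates that fail to converge
        continue

fits = np.array(fits)
for name, mean, sd in zip(["psi_max", "psi_min", "kappa", "zMIC"],
                          fits.mean(axis=0), fits.std(axis=0)):
    print(f"{name}: {mean:.2f} +/- {sd:.2f}")

The distribution of refit parameters across replicates is what yields the mean ± standard deviation estimates of the kind reported in Table 3.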
Multiple accelerated particle populations in the Cygnus Loop with Fermi-LAT

The Cygnus Loop (G74.0-8.5) is a very well-known nearby supernova remnant (SNR) in our Galaxy. Thanks to its large size, brightness, and angular offset from the Galactic plane, it has been studied in detail from the radio to γ-rays. The γ-rays probe the populations of energetic particles and their acceleration mechanisms at low shock speeds. We present an analysis of the γ-ray emission detected by the Large Area Telescope on board the Fermi Gamma-ray Space Telescope over 11 years in the region of the Cygnus Loop. We performed detailed morphological and spectral studies of the γ-ray emission toward the remnant from 100 MeV to 100 GeV and compared it with X-ray, UV, optical, and radio images. The higher statistics with respect to previous studies enabled us to decompose the emission from the remnant into two morphological components and to model its nonthermal multiwavelength emission. The extended γ-ray emission is well correlated with the thermal X-ray and UV emission of the SNR. Our morphological analysis reveals that a model considering two contributions, from the X-ray and the UV emission regions, is the best description of the γ-ray data. Both components show a curved spectrum, but the X-ray component is softer and more curved than the UV component, suggesting a different physical origin. The multiwavelength modeling of the emission toward the SNR suggests that the nonthermal radio and γ-ray emission associated with the UV component is mostly due to the reacceleration of preexisting cosmic rays by radiative shocks in the adjacent clouds, while the nonthermal emission associated with the X-ray component arises from freshly accelerated cosmic rays.

Introduction

It is widely accepted that supernova remnants (SNRs) accelerate cosmic rays (CRs) through their fast shock waves that propagate into the interstellar medium (ISM). In particular, SNRs are characterized by the diffusive shock acceleration (DSA) process (Bell 1978a,b; Blandford & Ostriker 1978; Malkov & Drury 2001), which results in nonthermal emission observed from the radio to γ-rays. Strong γ-ray emission has been observed by the Fermi Large Area Telescope (LAT) and the AGILE satellite in SNRs interacting with interstellar material. These SNRs are typically evolved and extended intermediate-age (> 10 kyr) remnants interacting with molecular clouds, with a characteristic high-energy break between 1 and 20 GeV (Giuliani et al. 2011; Ackermann et al. 2013). The spectrum of these sources can be explained by π0-decay emission of CR protons accelerated in the shocks of SNRs or, alternatively, by the reacceleration of ambient Galactic CRs inside the shock-compressed clouds (Uchiyama et al. 2010). The study of intermediate-age SNRs is therefore crucial for understanding CR acceleration at modest shock speeds (at which the bulk of GeV CRs are accelerated) and the importance of the reacceleration mechanism.

A prototypical intermediate-age SNR is the Cygnus Loop. It is about 21 kyr old and lies at a distance of 735 pc, derived from Gaia parallax measurements of several stars (Fesen et al. 2018). It is slightly aspherical, with minor and major axes of 37 and 47 pc, E-W and N-S, respectively. Its large size (∼ 3°) and angular offset from the Galactic plane (b ∼ −8.5°) have ensured that this remnant has been widely studied in the radio (Uyanıker et al. 2004; Sun et al. 2006; Loru et al. 2021), infrared (Sankrit et al. 2014; Koo et al. 2016), optical (Katsuda et al. 2016; Fesen et al. 2018), UV (Blair et al. 2002; Kim et al. 2014), X-ray (Katsuda et al. 2011; Oakley et al. 2013), and γ-ray (Katagiri et al. 2011; Acero et al. 2016) bands.
The SNR has an approximate shell morphology, with a prominent limb in the northeast region, a blow-out in the south, and several filaments in the north-central region. Several studies (Levenson et al. 1998; Uchida et al. 2009) and hydrodynamical simulations (Fang et al. 2017) have shown that the properties and morphology of the Cygnus Loop are consistent with a scenario in which the supernova (SN) explosion took place in a wind-blown cavity created by the progenitor star. However, Fesen et al. (2018) have recently proposed that the Cygnus Loop evolved in a low-density region with discrete interstellar clouds in its vicinity: a dense molecular cloud to its west and northwest, and smaller clouds in the east and northeast regions.

A previous analysis of the Cygnus Loop region in the γ-ray band was performed by Katagiri et al. (2011), who modeled it with a ring with inner and outer radii of 0.7° ± 0.1° and 1.6° ± 0.1°. They described its emission with a log-normal (LP, for LogParabola) spectrum,

dN/dE = N0 (E/Eb)^−(α + β log(E/Eb)) . (1)

In this work we analyze ∼ 11 years of Fermi-LAT data. This represents an improvement by a factor of 5 with respect to the previous study by Katagiri et al. (2011), providing us with unprecedented sensitivity to study both the spatial and the spectral features of the γ-ray emission from the Cygnus Loop. In Section 2 we briefly describe the observations and data reduction. Our morphological and spectral analysis is reported in Section 3. The origin of the γ-ray emission is discussed in Section 4. Finally, the conclusions are summarized in Section 5.

Observations and data reduction

Our primary goal is to model the γ-ray emission from the Cygnus Loop. To this end, we correlated the γ-ray data with templates from other wavelengths that are characteristic of distinct physical processes.

Gamma-ray band

The LAT is the main instrument on board the Fermi satellite. It is a pair-conversion instrument sensitive to γ-rays in the energy range from 30 MeV to more than 1 TeV (Atwood et al. 2009). For this analysis, we used more than 11 years (from August 4, 2008, to October 28, 2019) of Fermi-LAT P8R3 data (Atwood et al. 2013; Bruel et al. 2018). The region of interest (ROI) is 10° × 10°, aligned with Galactic coordinates and centered on the Cygnus Loop (R.A. = 20h50m51s, Dec. = +30°34′06″, equinox J2000.0). The relatively small ROI size was chosen to avoid the strong diffuse emission from the Galactic plane itself. To analyze the γ-ray data, we used version 1.2.1 of the Fermitools and version 0.18 of the Fermipy package, both publicly available from the Fermi Science Support Center (FSSC). We selected γ-rays in the 0.1−100 GeV energy range. Because our analysis relies on morphology, we selected events with a good point-spread function (PSF), that is, with good angular resolution: below 316 MeV, we selected data with Event Types PSF2 and PSF3; between 316 MeV and 1 GeV, we added PSF1 events; above 1 GeV, we used all events, including the PSF0 Event Type. The bright γ-ray emission from the Earth's atmosphere was greatly reduced by selecting events within 90° of the local zenith below 316 MeV and within 105° of the zenith above 316 MeV. We also applied a good time interval (GTI) selection to the data using the quality flag DATA_QUAL > 1 and requiring the instrument to be in science configuration (LAT_CONFIG == 1).
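For concreteness, the event selections just described can be written down in a Fermipy-style configuration. The sketch below is illustrative only, not the authors' actual script: the keys follow Fermipy's documented YAML layout expressed as a Python dict, the event-type bitmasks (PSF0 = 4, PSF1 = 8, PSF2 = 16, PSF3 = 32) and the CLEAN event class (256, mentioned in the next paragraph) are standard Pass 8 values, while the ROI center and file names are approximations.

# Schematic Fermipy-style configuration mirroring the selections in the text.
config = {
    "selection": {"emin": 100, "emax": 100000,        # MeV
                  "ra": 312.71, "dec": 30.57,         # approximate ROI center
                  "evclass": 256},                    # P8R3 CLEAN
    "binning": {"roiwidth": 10.0, "binsz": 0.05, "binsperdec": 10},
    "model": {"galdiff": "gll_iem_v07.fits",
              "isodiff": "iso_P8R3_CLEAN_V2",         # per-event-type variants in practice
              "catalogs": ["4FGL"]},
    # Energy-dependent PSF event-type cuts and matching zenith cuts:
    # PSF2+PSF3 (48) below 316 MeV, PSF1-3 (56) to 1 GeV, all types (60) above.
    "components": [
        {"selection": {"emin": 100,  "emax": 316,    "evtype": 48, "zmax": 90}},
        {"selection": {"emin": 316,  "emax": 1000,   "evtype": 56, "zmax": 105}},
        {"selection": {"emin": 1000, "emax": 100000, "evtype": 60, "zmax": 105}},
    ],
}

The log-likelihood is then summed over the three components, which is what "summed the log-likelihood over the Event Type selections" in the following paragraph refers to.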
We used the CLEAN event class selection and version P8R3_CLEAN_V2 of the instrument response functions. To describe the γ-ray emission around the Cygnus Loop, we performed a binned likelihood analysis with a pixel size of 0.05°. We also used ten energy bins per decade and summed the log-likelihood over the Event Type selections. We included in the model all the background sources from the 4FGL catalog (Abdollahi et al. 2020a) within 13° of the ROI center. We used the gll_iem_v07.fits model to describe the Galactic diffuse emission and the tabulated iso_P8R3_CLEAN model to describe the isotropic emission, using the appropriate template for each Event Type selection. We included the effects of energy dispersion on all model components, with the exception of the isotropic emission, which was obtained in data space.

X-ray band

The X-ray emission is a good tracer of the shocked gas at densities < 1 cm−3 and temperatures of a few 10^6 K occupying most of the SNR interior. Because the Cygnus Loop is very large (hard to mosaic with the current generation of X-ray instruments) and its emission is very soft (peaking below the C edge at 284 eV), the image from the ROSAT survey (Aschenbach & Leahy 1999) remains the best reference. We obtained the full-band image (0.1−2.4 keV) from SkyView. We removed by eye disks of 0.1° radius around 10 obvious point sources in the image (only one of which is inside the SNR, at α, δ = 312.56°, +29.37°, in the southern breakout). We subtracted the large-scale background estimated from Sextractor (Bertin & Arnouts 1996) at a scale (defined by the BACK_SIZE parameter) set to 1.5°. Then we applied adaptive smoothing using the XMM SAS task asmooth so that the signal-to-noise ratio in each pixel is at least 5σ (the inner areas have only a few counts per pixel). The point sources that we removed were filled by this procedure because we entered the mask as an exposure map. None of these steps is critical to the resulting γ-ray fit. Finally, we set the signal to 0 outside a circle of 1.5° radius, with rectangular extensions covering the southern outbreak. The 68% angular resolution of the resulting image, shown in Figure 1 (top left), is approximately 0.03° (estimated from the point sources). This is much better than the γ-ray angular resolution. In the soft X-ray band, interstellar absorption along the line of sight can significantly reduce the emitted X-ray flux. This can affect the morphology of the observed emission if the absorption varies strongly across the large angular extent of this source. In order to estimate these variations across the SNR region, we used data from the atomic hydrogen survey HI4PI (HI4PI Collaboration et al. 2016). To focus on the foreground gas, we integrated over velocities from 0 to 10 km s−1 (local standard of rest) in order to match the absorption value measured in X-rays in the interior of the remnant (N_H ∼ 3 × 10^20 cm−2; Uchida et al. 2009). In this velocity-integrated N_H map, we observe a gradient of column density toward the Galactic plane, from 3 × 10^20 cm−2 to 6 × 10^20 cm−2 from the eastern to the western bright edges of the SNR. Assuming an average plasma temperature of 0.3 keV (Katsuda et al. 2008; Uchida et al. 2009) and the ROSAT/PSPC effective area, and using the count-rate simulator WebPIMMS, the count rate in the 0.1−2.4 keV band varies by about 20% for the aforementioned N_H values.
We consider this effect to be negligible for our γ-ray study and did not attempt to correct for absorption effects in the X-ray map.

Ultraviolet band

The UV emission is a good tracer of the radiative shocks developing in interstellar clouds with densities of several cm−3 (about ten times denser than the gas observed in X-rays). In order to cover the full Cygnus Loop, we started from the GALEX mosaic kindly provided in FITS form by M. Seibert. This image was built at 3″ resolution from the near-UV (NUV) images. The main difficulty with the UV mosaic is that it is dominated by point sources (and secondary reflections, so-called "smoke rings", next to the bright ones). Therefore it cannot be used directly as a template. We applied Sextractor in two passes: a first pass with a large BACK_SIZE = 128 (6′) to detect bright sources everywhere, and a second pass (meant to detect faint sources while avoiding the removal of pieces of filaments) with a smaller BACK_SIZE = 32 (1.5′), followed by a selection (based on source angular size, flags, and flux/background ratio) requiring that a detection look like a point source. We generated circular exclusion regions covering the entire areas where sources visibly increase the background (radius proportional to flux to the power of 0.3), adapted a few by eye, and added 80 regions around the secondary reflections. This resulted in about 10,000 excluded regions in total. After this, we rebinned the masked image and the mask into 30″ pixels (we do not need better angular resolution to fit the γ-rays). We smoothed the image locally around the zero values in the rebinned mask (where bright stars were) and divided the smoothed image by the smoothed mask to recover a flat exposure. The last stage (large-scale background subtraction, adaptive smoothing, and clipping) was the same as in 2.2, except that BACK_SIZE was set to 32 because there are no large-scale features in the UV image. The resulting image is shown in Figure 1 (top right).

Optical band

The optical band also traces radiative shocks. Because the lines in this wavelength range are not the same as in the UV, its sensitivity to different shock speeds and different ages can be slightly different. Again because of the angular size of the Cygnus Loop, a sky survey is better suited. We therefore used the Digital Sky Survey 2 (DSS2) images in the red band, which covers 6000 to 7000 Å, including [O I] λ6300, Hα λ6563, [N II] λ6584, and [S II] λ6717−6730 Å. We obtained the data from the STScI server, forcing plate XP463 in order to preserve a uniform background (no automatic jump to plate XP464). We extracted 3 × 4 images of 60′ × 60′ at 1″ resolution, separated by 55′, which provided coverage of the full SNR with 2.5′ overlap between images. The principle of source detection and exclusion was the same as in the UV, with the additional difficulty that bright stars saturate the plates and look broader than faint stars. We therefore used three different Sextractor settings, reaching deeper while using smaller BACK_SIZE and DETECT_MINAREA; the third run (for the faint sources) also required that the detections look like point sources. About 5,000 regions were excluded from each image on average. The radius of the exclusion circles was set to twice the source full width at half maximum (FWHM) reported by Sextractor for the bright sources, and to 1.5 FWHM for the medium and faint sources.
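The two-pass detect-and-mask scheme described above can be sketched with sep, the Python port of Sextractor, as a stand-in for the original Sextractor runs (an assumption on our part); the thresholds, background box sizes, point-source criterion, and radius scaling constant below are illustrative, not the values used in the paper.

import numpy as np
import sep  # Python port of the core Sextractor algorithms

def detect(image, back_size, thresh_sigma):
    # Background on a grid of back_size x back_size pixels, then extraction.
    bkg = sep.Background(image, bw=back_size, bh=back_size)
    sub = image - bkg.back()
    return sep.extract(sub, thresh_sigma, err=bkg.globalrms)

# Placeholder image standing in for the UV mosaic.
image = np.ascontiguousarray(np.random.rand(512, 512).astype(np.float32))

# Pass 1: large background box -> bright sources everywhere.
bright = detect(image, back_size=128, thresh_sigma=10.0)
# Pass 2: small box -> faint sources; keep only point-like, unflagged detections.
faint = detect(image, back_size=32, thresh_sigma=2.0)
pointlike = faint[(faint["a"] / faint["b"] < 1.5) & (faint["flag"] == 0)]

# Mask circles whose radius grows with flux^0.3, as in the text.
mask = np.zeros(image.shape, dtype=bool)
yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
for src in np.concatenate([bright, pointlike]):
    r = 3.0 * max(src["flux"], 1e-6) ** 0.3   # illustrative scaling constant
    mask |= (xx - src["x"]) ** 2 + (yy - src["y"]) ** 2 < r ** 2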
We rebinned the images to 3″ (aligned with the UV image as much as possible) to avoid needlessly large files, built the mosaic of the 3 × 4 original images to cover the entire Cygnus Loop, and then rebinned to the final 30″ pixels. The last stages (point-source filling, large-scale background subtraction, adaptive smoothing, and clipping) were the same as in 2.3. The resulting image is shown in Figure 1 (bottom left). From the γ-ray point of view, the optical image is very similar to the UV one. However, the brightest filaments approach the saturation level of the DSS2 plates.

Radio band

The radio band (synchrotron emission) traces a combination of low-energy electrons and magnetic field. In order to ensure that the large-scale signal was preserved, we used the single-dish images reported by Uyanıker et al. (2004) from the Effelsberg 100 m telescope at 21 and 11 cm, and by Sun et al. (2006) from the Urumqi 25 m telescope at 6 cm. The respective half-power beam widths were 9.4′, 4.3′, and 9.5′. This is better than the γ-ray resolution, but not good enough to extract point sources self-consistently as in the other wavebands. Instead, we used the NVSS catalog (Condon et al. 1998), obtained at higher resolution with the VLA, to select 40 point sources brighter than 100 mJy and with intrinsic sizes smaller than 100″ in the field of view. We excluded disks with a radius of 0.1° at 11 cm (0.15° at 6 and 21 cm), scaled by F_Jy^0.3 as in the UV (where F_Jy is the source flux in Jy), and refilled them by smoothing as in 2.3. Similarly to the X-rays, the Cygnus Loop is bright enough in the radio that none of these steps is critical for the resulting γ-ray fit. The map with the best angular resolution (at 11 cm) is shown in Figure 1 (bottom right).

Analysis

The 4FGL catalog records three sources around the position of the Cygnus Loop: the extended ring (4FGL J2051.0+3049e) introduced by Katagiri et al. (2011), a point source in the eastern part of the ring (4FGL J2056.4+3142), and a point source in the southern part of the ring (4FGL J2053.8+2922). They are all described by LP spectra (see the left and middle panels of Figure 2). While the former two sources are associated with the Cygnus Loop, the latter is associated with an AGN (RX J2053.8+2923; Brinkmann et al. 1997) in the 4FGL catalog. We performed the morphological analysis from 0.1 GeV to 100 GeV. The free parameters in the model were the normalizations of the sources located closer than 6° to the ROI center and of the Galactic and isotropic diffuse emission, as well as the spectral parameters of the Cygnus Loop and of 4FGL J2053.8+2922. The nearest bright sources are PSR J2028+3332 and PSR J2055+2539, which are stable sources farther than 5° from the ROI center.

Geometrical models

To perform the morphological analysis, we considered the model without emission from the Cygnus Loop (i.e., with the sources 4FGL J2051.0+3049e and 4FGL J2056.4+3142 removed) as our null hypothesis, with maximum likelihood L0. Figure 3 shows the excess map of a 6° × 6° region, with a pixel size of 0.05° and centered on the Cygnus Loop position, obtained using our null hypothesis as the model. We then tested alternative models by adding spatial templates and/or varying the model parameters, and we computed the corresponding maximum likelihood Lmod. The fit improvement is quantified by the test statistic TS = 2 log(Lmod/L0) (Mattox et al. 1996), which, in the absence of a real source, follows a χ² distribution with k degrees of freedom, where k is the number of additional free parameters of the model with respect to the null hypothesis.
We tested several spatial models to describe the emission from the Cygnus Loop, assuming an LP spectrum with all parameters free. First, we started with three geometrical models: a uniform disk, a 2D symmetric Gaussian, and a uniform ring. We report the best-fit positions and extensions we obtained, with the associated TS values for these models, in Table 1. The Akaike information criterion (AIC; Akaike 1974) was adopted to compare the different geometrical models, where the AIC values are computed as AIC = 2k − 2 log L (k is the number of estimated free parameters in the model). The result in Table 1 shows an obvious improvement when the disk model rather than the Gaussian model is used (ΔAIC = AIC_Gauss − AIC_Disk = 246). To explore the uniform ring template, we defined a 2D ring with a morphology defined by a FITS template. We kept the ring centered at the best-fit position of the disk model, varied the inner and outer radii, and evaluated the maximum likelihood values. We explored values of the inner radius in the range r_min = 0.2°−0.6° and of the outer radius in the range r_max = 1.5°−1.7°. In Table 1 we report the best model. The ΔAIC value of the ring with respect to the disk shape is 50: the ring is clearly favored. Figure 3 shows that the emission along the remnant is not very uniform. We therefore also searched for a possible spectral variation of the γ-ray emission along the Cygnus Loop. To this end, we divided our best-fit ring into four sections, as shown in the top right panel of Figure 3. We fit the four sections independently, leaving the normalizations and the α and β parameters free. This leads to a higher TS value than the uniform ring because the nonuniform ring can handle the differences along the remnant better. The results, shown in Table 2, indicate that there are no significant differences in the spectral indices along the Cygnus Loop. The γ-ray emission is fainter in the southern region (region 4) and brighter in the northeast (region 3).

Correlation with other wavelengths

We further investigated the Cygnus Loop morphology by evaluating the correlation with the emission at other wavelengths: X-rays, UV (see the bottom panels in Figure 3), and radio continuum. We used the images at these wavelengths as spatial templates to fit the γ-ray emission, assuming an LP spectrum. The TS values for the X-ray, optical, and UV templates show a large improvement compared to all the other models. The UV template is best, but even the X-ray template is favored compared to the nonuniform ring (ΔAIC = 38) because it has far fewer degrees of freedom. The optical template is somewhat worse than the UV, but this is probably due to the saturation of the DSS2 images (see 2.4). In contrast, the radio templates have lower TS values because of their bright emission in the southern region of the remnant, where the γ-ray emission is fainter. This difference between the radio emission and the other wavebands has been explained by the existence of a separate SNR interacting with the Cygnus Loop (Uyanıker et al. 2002), although a recent multiwavelength analysis (Fesen et al. 2018) makes this interpretation controversial. The X-ray distribution follows the rims of the remnant, and its correlation with the γ-ray emission suggests that the high-energy particles may be located in the forward-shock region.
The UV template instead traces the radiative shocks in the remnant, and its filamentary structures are correlated with the central and west regions of the remnant. The residual map after fitting the X-ray (UV) template shows significant emission correlated with the UV (X-ray) template. We therefore tested a two-component model, including both the X-ray and UV maps. The sharp increase in the TS parameter (by ∼200; see Table 1), together with residuals within 4σ (see the right panel of Figure 2), indicates that the X-ray+UV templates fit the γ-ray morphology adequately. The residuals are normally distributed around a mean value of 0.05σ with a standard deviation σ_tot = 1.106σ, implying a systematic contribution of σ_syst = √(σ_tot² − 1) σ = 0.47σ, which is lower than the statistical contribution.

[Figure 3 caption, continued: best-fit models from Tables 1 and 2. Bottom left: contours of the ROSAT X-ray template (cyan, see 2.2); the templates were smoothed with a Gaussian kernel of σ = 0.2° to make the contours more regular; contour levels at 30%, 20%, 10%, and 1% of the maximum. Bottom right: contours of the GALEX UV template (cyan, see 2.3) at 40%, 25%, 15%, and 2% of the maximum.]

Spectral analysis

Using the UV template as a morphological model for the remnant, we investigated the spectral shape of the Cygnus Loop as a whole. We compared the likelihood values of the spectral fit for a power law with those of other spectral functions over the entire considered energy range. TS values and best-fit parameters are reported in Table 3. A curved spectrum is clearly preferred over a power-law spectrum. The exponentially cutoff power law is not a good model (the spectrum does not fall off exponentially toward high energies). A simple symmetric log-normal (LogParabola) model fits the data quite well. The PLSuperExpCutoff4 model in the Fermitools has a superexponential index b < 0 (i.e., with a subexponential fall-off toward low energies and a power-law decrease toward high energies). It increases TS by 10, corresponding to an improvement of slightly more than 3σ with respect to LogParabola (which corresponds to b = 0, with d = 2β; see the parameter definitions in Table 3). The smoothly broken power-law model (with one more parameter) does not improve the fit. Considering the three possible spectral models, the integrated energy flux of the Cygnus Loop using the UV template in the 0.1-100 GeV band is (5.6 ± 0.2) × 10^-5 MeV cm^-2 s^-1. We extracted the spectral energy distribution (SED) in ten logarithmically spaced energy bands from 0.1 to 10 GeV and two broader bins above 10 GeV. In each bin, the photon index of the source was fixed to 2, and we imposed a TS threshold of 4, below which we calculated an upper limit. The upper panel of Figure 4 shows the resulting SED. We then performed a spectral analysis using our best morphological template, the X-ray+UV template. The statistics are not sufficient to constrain models with more than two shape parameters, such as PLSuperExpCutoff4, but we explored different combinations of power-law and LogParabola functions for each of the two components (the X-ray and the UV template). Using an LP spectral function for both the X-ray and UV templates, we obtained the highest TS value [table note: the normalization is computed at 837 MeV, following Katagiri et al. (2011)]. It is increased by 61 compared to fixing the shape parameters to the best PLSuperExpCutoff4 of Table 3 and fitting only the normalizations. This implies that the spectral shapes of the X-ray and UV components differ significantly. The results are summarized in Table 4.
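As a concrete reference for these shapes, the LP form used throughout can be written down directly. The parameter values in this sketch are illustrative (the 837 MeV pivot follows the table note above); they are not the best-fit values of Tables 3 and 4.

```python
# LogParabola (LP) differential flux,
#   dN/dE = N0 * (E/Eb)^-(alpha + beta * ln(E/Eb)),
# which peaks in E^2 dN/dE when beta > 0, as observed for the Cygnus Loop.
import numpy as np

def log_parabola(E, N0, alpha, beta, Eb):
    """LP differential flux; E and Eb in the same units (e.g. MeV)."""
    x = E / Eb
    return N0 * x ** (-(alpha + beta * np.log(x)))

E = np.logspace(2, 5, 200)          # 0.1-100 GeV, expressed in MeV
dnde = log_parabola(E, N0=1e-10, alpha=2.0, beta=0.3, Eb=837.0)
sed = E**2 * dnde                   # E^2 dN/dE, the quantity plotted in SEDs
```

In the b → 0 limit, the PLSuperExpCutoff4 model mentioned above reduces to this LP with d = 2β, which is why the two fits can be compared directly.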
We then extracted the SED of both components as explained previously. The lower panel of Figure 4 shows the resulting SEDs.

Radio data extraction

A major difficulty when fitting the nonthermal emission of the Cygnus Loop is that the radio maps do not look like the γ-ray, X-ray or UV/optical maps (Figure 1). The radio maps alone show strong emission toward the southwest, and indeed, the radio template is by far the worst fit to the γ-ray data (Table 1). We are interested in the part of the radio emission that follows the other wavebands. More precisely, because we have shown (Table 1) that the γ-ray data are well fit by a combination of the UV and X-ray templates, we wish to decompose the radio emission in the same way. The first step is to convolve the UV and X-ray templates to the radio PSF (different in each band). Because the X-ray angular resolution σ_X is not negligible with respect to the radio angular resolution σ_R, this was achieved by a convolution with a Gaussian of σ² = σ_R² − σ_X². We primarily worked on the 11 cm map, which has the best angular resolution and signal-to-noise ratio among the three radio maps. Fitting these convolved templates to the radio maps using a standard χ² fit results in deep negative residuals in the northeast and west (Figure 5, left) because the fit tries to push the UV and X-ray templates as high as possible to account for the radio structure. We instead searched for a decomposition that would leave only positive residuals, corresponding to the part of the radio emission that is uncorrelated with the UV and X-ray emitting regions. In order to achieve this in a simple way, we increased the errors in the χ² formula by a factor R wherever the residuals are positive. Figure 5 (right) shows that for R = 10, no negative residuals are left. The fraction of the total radio flux in the residuals is 40% for R = 1, and this increases to 78% for R = 10. The most likely reason for this is that a large fraction of the radio emission arises in even more tenuous gas than the X-rays, mostly in the southwest. These fractions are very similar at 21 cm and 6 cm. We consider that the radio emission correlated with the UV or X-rays must lie between the extremes R = 1 and R = 10 (a visually reasonable solution is obtained for R = 5), and we used them as an error interval. For each value of R, we obtained the part of the radio emission correlated with the UV and the part correlated with the X-rays. At 11 cm, the fraction of the radio emission associated with the UV is between 19% (R = 10) and 34% (R = 1), and that associated with the X-rays is between 3% and 26%. At all wavelengths the correlation with the UV is better than with the X-rays.

Modeling the multiwavelength emission from the Cygnus Loop

To explain the observed γ-ray spectrum of the Cygnus Loop, we conducted a multiwavelength modeling of the remnant spectrum. Our analysis included radio data from 22 MHz up to 30 GHz (Uyanıker et al. 2004; Loru et al. 2021, and citations therein), reduced by a constant factor reflecting the fraction of the radio emission at 11 cm associated with the UV and/or X-ray emission, as explained in Section 3.3.1, together with the LAT GeV spectrum from this work. We modeled the radiative processes using the naima package (Zabalza 2015).
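naima exposes the radiative components named in this section directly. The sketch below wires them up with the seed photon fields and ambient values quoted here; the particle-distribution amplitudes and indices are placeholders to be fit, not the paper's numbers, and the ×1.3 He factor is applied by hand as described in the next paragraph.

```python
# Hedged sketch of a naima (Zabalza 2015) set-up for the Cygnus Loop:
# pi0 decay, bremsstrahlung and IC from broken power laws with cutoffs.
import astropy.units as u
import numpy as np
from naima.models import (ExponentialCutoffBrokenPowerLaw, PionDecay,
                          Bremsstrahlung, InverseCompton, Synchrotron)

protons = ExponentialCutoffBrokenPowerLaw(
    amplitude=1e35 / u.eV, e_0=10 * u.GeV,
    e_break=62 * u.GeV, alpha_1=2.0, alpha_2=3.0, e_cutoff=15 * u.GeV)
electrons = ExponentialCutoffBrokenPowerLaw(
    amplitude=1e33 / u.eV, e_0=10 * u.GeV,
    e_break=62 * u.GeV, alpha_1=2.0, alpha_2=3.0, e_cutoff=15 * u.GeV)

pion = PionDecay(protons, nh=1.5 * u.cm**-3)        # x1.3 applied below for He
brems = Bremsstrahlung(electrons, n0=1.5 * u.cm**-3)
ic = InverseCompton(electrons, seed_photon_fields=[
    'CMB',
    ['IR1', 34 * u.K, 0.34 * u.eV / u.cm**3],
    ['IR2', 470 * u.K, 0.063 * u.eV / u.cm**3],
    ['OPT1', 3.6e3 * u.K, 0.45 * u.eV / u.cm**3],
    ['OPT2', 9.9e3 * u.K, 0.16 * u.eV / u.cm**3]])
sync = Synchrotron(electrons, B=244 * u.uG)          # cooled radiative layer

E = np.logspace(-1, 2, 50) * u.GeV
total_gamma = (1.3 * pion.flux(E, distance=735 * u.pc)
               + brems.flux(E, distance=735 * u.pc)
               + ic.flux(E, distance=735 * u.pc))
```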
In our analysis, we assumed a distance to the SNR of 735 pc (Fesen et al. 2018) and, assuming a Sedov phase, a kinetic explosion energy of E_SN = 0.7 × 10^51 erg and an age of t_age ∼ 21 kyr. We considered the contribution to the γ-ray spectrum from π⁰ decay produced by the interactions of protons with ambient hydrogen, together with the contributions from bremsstrahlung radiation and inverse Compton (IC) scattering by accelerated electrons, which also contribute to the radio band through synchrotron emission. To take into account the presence of He, which has a spectral shape similar to that of protons in the spectrum of accelerated particles, we multiplied the emissivity from π⁰ decay by a constant factor of 1.3. The ISM composition of the target gas is accounted for in naima. Following Katagiri et al. (2011), the seed photons for IC scattering of electrons include the cosmic microwave background, two infrared components (T_IR = 34 and 470 K, U_IR = 0.34 and 0.063 eV cm^-3, respectively), and two optical components (T_opt = 3.6 × 10^3 and 9.9 × 10^3 K, U_opt = 0.45 and 0.16 eV cm^-3, respectively) in the vicinity of the Cygnus Loop. Emission from secondary electrons is neglected because of the low-density environment around the remnant.

Ambient parameters

The Cygnus Loop blast wave encountered discrete clouds to the east and northeast and a large molecular cloud to its west approximately t_c ∼ 1200 yr ago (Raymond et al. 2020). The range of shock speeds can vary widely in these regions because of the interaction of the remnant with the environment. In our analysis, we considered a cloud shock velocity v_s = 244 km s^-1 (Fesen et al. 2018) and an upstream cloud density n_0,cl = 1.5 cm^-3 (Long et al. 1992) where smooth nonradiative Balmer-dominated filaments are present. We assumed a cloud shock velocity v_s = 130 km s^-1 and an upstream cloud density n_0,cl = 6 cm^-3 (Raymond et al. 2020) where the deceleration is faster and UV and optical line emission cools and compresses the gas, producing regions of radiative filaments. In between these dense clouds, the remnant expands in a low-density region (∼0.4 cm^-3, Raymond et al. 2003) with a faster shock velocity (∼350 km s^-1, Medina et al. 2014; Raymond et al. 2015; Fesen et al. 2018). To compute the required physical parameters in the cooled radiative regions of the remnant, we followed the approach described in Uchiyama et al. (2010). The upstream magnetic field strength and density in the clouds are related by B_0,cl = b √(n_0,cl/cm^-3) µG (eq. 3), where b = v_A/(1.84 km s^-1), with v_A the Alfvén velocity, and ranges between 0.3 and 3 (Hollenbach & McKee 1989). Raymond et al. (2020) found an upstream magnetic field value of 6 µG for the radiative regions, which, following equation 3, implies b = 2.5. Using the same b value for the nonradiative shock regions, we found B_0,cl = 3 µG. The magnetic field just downstream (before radiative compression, if any) is B_d,cl = r_B B_0,cl, where the magnetic compression ratio r_B = √((2 r_sh² + 1)/3) (Berezhko et al. 2002) assumes a turbulent field (r_sh = 4 is the shock compression ratio). The density of the cooled gas in the radiative shocks, n_m, was obtained by assuming that the compression is limited by magnetic pressure.

[Figure 4 caption: upper panel: SED extracted using the UV template; the PLSuperExpCutoff4 best-fit spectrum for the global γ-ray data (Table 3) is plotted as the dashed black line, and its upper and lower 1σ bounds as the solid black lines. Lower panel: red (green) points are LAT flux points obtained using the X-ray (UV) maps as spatial templates together; the lines are the best-fit LogParabola models (Table 4).]
Because the compression is strong, only the tangential field component survives in the compressed magnetic field, so that B_m = √(2/3) (n_m/n_0,cl) B_0,cl. We define v_s7 as the shock velocity in units of 100 km s^-1. Equating B_m²/8π with the shock ram pressure n_0,cl µ_H v_s², where µ_H ∼ 1.4 m_p is the mean mass per proton, we obtain n_m ≈ 94 n_0,cl v_s7 b^-1. For the regions dominated by radiative shocks, we computed n_m = 293 cm^-3 and B_m = 244 µG (consistent with the values reported in Raymond et al. 2020). A summary of the parameters we used can be found in Table 5.

Particle spectrum

We now discuss the CR spectrum we used to model the multiwavelength emission from the Cygnus Loop. Two mechanisms can contribute to the observed emission: diffusive shock acceleration (DSA) of thermally injected particles, and reacceleration of Galactic CRs (GCR). [Table 5 notes: (a) taken from Fesen et al. 2018; (b) taken from Raymond et al. 2020; (c) fit to the data.] We first discuss the model involving reacceleration of preexisting ambient CRs (hereafter RPCR). This model was adopted by Uchiyama et al. (2010) to explain the γ-ray and radio emission from W 44. At the main shock location, the number density of reaccelerated CRs is n_acc(p) = n_int(p) exp(−p/p_max), that is, the steady-state DSA spectrum n_int(p) ∝ p^-δ ∫₀^p dp′ p′^(δ−1) n_GCR(p′) (Blandford & Eichler 1987), with an exponential cutoff at p_max and a break at p_br, where δ = (r_sh + 2)/(r_sh − 1) (in this work δ = 2), n_GCR(p) is the preexisting ambient CR density, and p is the particle momentum. We tried two parameterizations of the Galactic CR spectrum: the Galactic CR proton and electron spectra from Uchiyama et al. (2010) and from Phan et al. (2018), the former derived from the data of Strong et al. (2004) and Shikaze et al. (2007), the latter derived by jointly fitting local CR data from the Voyager 1 probe (Cummings et al. 2016) and from the Alpha Magnetic Spectrometer (AMS, Aguilar et al. 2015). In order to take into account the maximum attainable energy of the particles (due to energy losses and the finite acceleration time), we introduced an exponential cutoff at p_max. Following Uchiyama et al. (2010), the age-limited maximum momentum is p_max c ≈ 50 η^-1 t_4 (B_0/1 µG) v_s7² GeV (eq. 8), where t_4 is the remnant age (or the shock-cloud interaction age t_c) in units of 10^4 yr. The gyro (Bohm) factor η depends on the remnant age; it is η ∼ 1 for efficient and young SNRs such as RX J1713.7−3946 (Uchiyama et al. 2007; Tsuji et al. 2019), but larger than 1 in older SNRs (η = 10 in Uchiyama et al. 2010). We also considered a spectral steepening above p_br for both electrons and protons. The cooling break in the electron population was calculated by equating the synchrotron loss time to the age (eq. 9; Parizot et al. 2006); the break p_br itself is set by the damping of Alfvén waves by ion-neutral collisions (eq. 10; Malkov et al. 2011), where ν_i−n ≈ 9 × 10^-9 n_0,cl T_4^0.4 s^-1 is the ion-neutral collision frequency and T_4 is the precursor temperature in units of 10^4 K. Because of the adiabatic compression in the radiative shocks, each particle gains energy as p → s^(1/3) p, where s ≡ (n_m/n_0,cl)/r_sh (s = 12.22 in this work). Therefore the number density of accelerated and compressed CRs at the point where the density becomes ∼n_m is n_ad(p) = s^(2/3) n_acc(s^(-1/3) p) (eq. 11; Uchiyama et al. 2010). The effect of reacceleration and compression on the CR proton and electron spectra based on eq. 11 is shown in Figure 6. Following Uchiyama et al. (2010), we parameterized the emission volume as V = f (4/3) π R³, where f is the filling factor of the clouds before they were crushed, relative to the entire SNR volume. The particle spectrum integrated over the SNR volume is therefore N(p) = V n_ad(p).
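These relations can be checked numerically. The sketch below plugs in the radiative-shock values quoted above (b = 2.5, n_0,cl = 6 cm^-3, v_s = 130 km s^-1, r_sh = 4) and reproduces n_m ≈ 293 cm^-3, B_m ≈ 244 µG, and s ≈ 12.2; it is a back-of-the-envelope check, not the paper's pipeline.

```python
# Numerical check of the ambient-parameter relations, in cgs units.
import numpy as np

m_p = 1.6726e-24              # proton mass, g
mu_H = 1.4 * m_p              # mean mass per proton
b, n0, r_sh = 2.5, 6.0, 4.0
v_s = 130e5                   # shock speed, cm/s
v_s7 = v_s / 1e7              # in units of 100 km/s

B0 = b * np.sqrt(n0)                        # upstream field, microGauss (eq. 3)
r_B = np.sqrt((2 * r_sh**2 + 1) / 3.0)      # turbulent-field compression ratio
B_d = r_B * B0                              # just downstream, microGauss

# Compression limited by magnetic pressure: B_m^2 / 8 pi = n0 mu_H v_s^2
B_m = np.sqrt(8 * np.pi * n0 * mu_H) * v_s  # Gauss
n_m = 94.0 * n0 * v_s7 / b                  # cooled-gas density, cm^-3
s = (n_m / n0) / r_sh                       # adiabatic compression factor

print(B0, B_m * 1e6, n_m, s, s ** (1 / 3.0))
# -> B0 ~ 6 muG, B_m ~ 244 muG, n_m ~ 293 cm^-3, s ~ 12.2,
#    and a momentum gain per particle of s^(1/3) ~ 2.3
```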
We then also considered the contribution of CRs freshly accelerated at the blast wave, according to DSA theory. The CR spectrum resulting from DSA of thermally injected particles, for both protons and electrons, is assumed to be a steady-state DSA spectrum with a break and an exponential cutoff given by eqs. 10 and 8.

Nonradiative regions: DSA scenario

The nonradiative regions of the Cygnus Loop are characterized by shocks that are fast enough to accelerate particles through the DSA mechanism. As we showed in Figures 3 and 2, we described the γ-ray emission with a two-component model (X-ray+UV templates) in which the X-ray emission arises from the fast nonradiative shocks.

[Figure 8 caption fragment: the total contribution from bremsstrahlung, IC, and π⁰ decay is shown by the solid black line; the best model is obtained with a total energy W_tot of 1.2 × 10^49 erg and an electron-to-proton ratio K_ep = 0.01 at 10 GeV; protons and electrons have an energy cutoff of 15 GeV and an energy break of 62 GeV.]

We therefore modeled the emission extracted with the X-ray component (the radio data extraction is described in Section 3.3.1; the γ-ray data are shown in the lower panel of Figure 4) using a particle distribution arising from the DSA mechanism. On the one hand, the environmental parameters are kept fixed in our model; on the other hand, the spectral parameters (the cutoff and break energies) depend on unknown parameters such as η and T_4. The environmental parameters best suited to model the X-ray-related emission should be those of the intercloud region (v_s = 350 km s^-1, n_0 = 0.4 cm^-3), where the shock is fast enough to generate X-ray emission. However, when a pre-shock magnetic field of B_0 = 2 µG (see eq. 3) is considered, eq. 8 yields p_max c > 260 GeV (t_4 = t_age, η = 10), which is incompatible with the soft γ-ray emission. We therefore decided to use intermediate values (v_s = 244 km s^-1, n_0,cl = 1.5 cm^-3, B_0,cl = 3.0 µG, t = t_c) to better fit the data points. Following equations 8 and 10, the cutoff energy is 10 < p_max c < 105 GeV for 10 > η > 1, and the energy break is 33 < p_br c < 62 GeV for 10^5 > T > 10^4 K. Here, the break in the electron population can be neglected because synchrotron cooling is not relevant (see eq. 9). In Figure 7 we present the γ-ray spectrum of the Cygnus Loop and demonstrate the expected level of the γ-ray emission for varying p_max c. To compute the γ-ray emission from π⁰ decay, we used the upstream cloud density (1.5 cm^-3) as the target density, taken as an average over the entire volume where cosmic rays are present. We kept the total energy fixed (W_tot = W_p + W_He, where W_p and W_He are the total energies of protons and He, respectively) at W_tot = 1.2 × 10^49 erg (corresponding to ∼2% of E_SN), together with the electron-to-proton differential spectrum ratio in kinetic energy, K_ep = 0.01 at 10 GeV. Figure 7 shows the effect of the cutoff energy on the modeled emission. Because a low value of p_max c is necessary to fit the data, p_br c does not affect the model. Hence, we set p_max c = 15 GeV and p_br c = 62 GeV (see Table 5), corresponding to η = 7 and T = 10^4 K, in order to reproduce the γ-ray data as shown in Figure 8.
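The role of eq. 8 in these choices is easy to see with a few numbers. In the sketch below, the ~50 GeV normalization of p_max c is inferred from the ranges quoted in the text (it is consistent with all of them, but it is an approximation, not the paper's exact coefficient).

```python
# Age-limited maximum momentum, p_max c ~ 50 * t_4 * B_0[muG] * v_s7^2 / eta GeV
# (following eq. 8 and Uchiyama et al. 2010; the prefactor is approximate).
def p_max_c_GeV(t4, B0_muG, vs7, eta):
    """t4: age in 10^4 yr; B0 in muG; vs7 in 100 km/s; eta: Bohm factor."""
    return 50.0 * t4 * B0_muG * vs7**2 / eta

# Intercloud region (v_s = 350 km/s, B_0 = 2 muG, t = t_age ~ 21 kyr):
print(p_max_c_GeV(2.1, 2.0, 3.5, 10))    # ~260 GeV, too hard for the data
# Cloud shock (v_s = 244 km/s, B_0 = 3 muG, t = t_c ~ 1.2 kyr):
print(p_max_c_GeV(0.12, 3.0, 2.44, 10),  # ~11 GeV  (eta = 10)
      p_max_c_GeV(0.12, 3.0, 2.44, 1))   # ~107 GeV (eta = 1)
# roughly bracketing the 10-105 GeV range quoted above
```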
Radiative regions: Reacceleration of preexisting ambient CRs

In contrast to the nonradiative shocks, the radiative shocks are slower and cannot efficiently accelerate particles through the DSA mechanism. We therefore considered a model involving RPCR in the regions dominated by radiative shocks. In Section 3 we showed that part of the γ-ray emission of the remnant is associated with the UV component (emitted by the radiative shocks) in the X-ray+UV model. We then used the SED data points extracted with the UV component (the radio data extraction is described in Section 3.3.1; the γ-ray data are shown in the lower panel of Figure 4) to model these regions. Again, the spectral parameters of the compressed and reaccelerated particle populations are not constrained; we therefore explored values of 14 < s^(1/3) p_max c < 140 GeV for 10 > η > 1 and 6 < s^(1/3) p_br c < 70 GeV for 10^5 > T > 10^4 K. The break due to synchrotron losses can be neglected. Another free parameter is the filling factor f of the clouds, which is obtained from the data. We explored two different preexisting ambient CR spectra: the Galactic CR proton and electron spectra from Uchiyama et al. (2010) and from Phan et al. (2018). By exploring different values of p_max and p_br for both preexisting CR spectra, we found that the differences between the two reaccelerated particle populations are minimal, also in terms of γ-ray emission. We therefore decided to use the preexisting CRs from Phan et al. (2018), obtained from the more recent Voyager 1 (Cummings et al. 2016) and AMS-02 (Aguilar et al. 2015) data. In Figure 9, as in Figure 7, we present the expected level of the γ-ray emission for varying p_max c and p_br. To describe the data, we used s^(1/3) p_max c = 20 GeV (corresponding to η = 7) and s^(1/3) p_br c = 70 GeV (corresponding to T = 10^4 K). The best filling factor is f = 0.013 (see Table 5). The resulting fit to the spectrum of the radiative regions is shown in Figure 10. Our model is too peaked in γ rays and fails to fit the data at energies > 10 GeV.
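To make the reacceleration-plus-compression machinery concrete, the sketch below evaluates n_acc(p) and n_ad(p) numerically for a toy ambient spectrum. The crude low-momentum turnover stands in for the Phan et al. (2018) parameterization, and the (δ+2) normalization is one common convention for the Blandford & Eichler solution; only the shapes, not the absolute values, should be read from it.

```python
# Toy RPCR spectrum: reacceleration of an ambient density n_GCR(p),
# exponential cutoff at p_max, then adiabatic compression (eq. 11).
import numpy as np
from scipy.integrate import trapezoid

def n_gcr(p):
    """Toy ambient CR density with a Voyager-like turnover near 1 GeV/c."""
    return p**2 / (1.0 + p**4.7)

def n_acc(p, delta=2.0, pmax_c=8.7):
    """Reaccelerated density at the shock (momenta in GeV/c).

    pmax_c is the pre-compression cutoff; s^(1/3) * 8.7 ~ 20 GeV as adopted
    in the text."""
    x = np.logspace(-3, np.log10(p), 2000)
    integral = trapezoid(x**(delta - 1.0) * n_gcr(x), x)
    return (delta + 2.0) * p**(-delta) * integral * np.exp(-p / pmax_c)

def n_ad(p, s=12.22, **kw):
    """After adiabatic compression in the cooled radiative layer (eq. 11)."""
    return s**(2.0 / 3.0) * n_acc(s**(-1.0 / 3.0) * p, **kw)

p = np.logspace(-1, 2, 60)
compressed = np.array([n_ad(pi) for pi in p])  # multiply by V = f(4/3)*pi*R^3
```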
Modeling the entire Cygnus Loop

We first attempted to model the emission from the entire Cygnus Loop (obtained using the UV template alone), considering either a DSA or an RPCR scenario. Assuming the same environmental parameters as for the northeast region (v_s = 244 km s^-1, n_0,cl = 1.5 cm^-3, B_0,cl = 3.0 µG), we tried to model the multiwavelength emission in a DSA scenario. When values of p_max c = 40 GeV and p_br c = 62 GeV for protons and electrons, together with W_tot = 2.1 × 10^49 erg and K_ep = 0.025, are adopted, the γ-ray spectrum can be well reproduced by the DSA model shown in Figure 11 (upper panel). This energy cutoff requires η = 3 (see eq. 8), which is lower than the typical η = 10 found in other intermediate-age SNRs (Uchiyama et al. 2010; Devin et al. 2018, 2020; Abdollahi et al. 2020b). In addition, it clearly emerged from our morphological analysis (see Section 3) that the γ-ray emission is mainly correlated with the UV template and, as a consequence, with the radiative regions. Therefore the DSA mechanism is not favored to explain the Cygnus Loop multiwavelength emission. We also tried to model the overall spectrum assuming an RPCR scenario. We found that the best parameterization of the overall spectrum is the same as that reported in Section 4.4 (s^(1/3) p_max c = 20 GeV, s^(1/3) p_br c = 70 GeV, and preexisting CRs from Phan et al. 2018), except for f ∼ 0.02. However, this model presents several discrepancies with the observed data, as shown in the lower panel of Figure 11. The synchrotron emission is not able to reproduce the radio points at the lowest energies, and the peak of the γ-ray emission (at ∼2 GeV) is higher than in the LAT data. Therefore the RPCR scenario alone is not able to satisfactorily explain the emission from the entire remnant either. From our analysis in Sections 4.3 and 4.4, it emerged clearly that the nonthermal emission from the radiative and nonradiative regions has a different physical origin. As a consequence, we propose to model the total SNR spectrum with two contributions: an RPCR contribution caused by the radiative shocks arising in the denser clouds, and a DSA contribution connected to the faster shock traveling in the lower-density environment. The upper panel of Figure 12 shows the corresponding emission model with the contributions from DSA (solid lines) and RPCR (dot-dashed lines). The parameters are set exactly as reported in Sections 4.3 and 4.4 for the DSA and RPCR contributions, respectively (see Table 5). The high magnetic field in the cooled regions behind the radiative shocks makes the RPCR contribution dominant over the DSA one in the radio band, while in the γ-ray band the two components contribute similarly, reflecting the γ-ray fluxes associated with the X-ray and UV templates (see Section 3.3). Overall, the modeled emission reproduces the observed data points in the radio and γ-ray bands well, unveiling the complex origin of the nonthermal emission of the remnant. The model is slightly too soft to fit the highest-energy γ-ray points. A contribution at these energies could come from particles accelerated, through DSA, by the faster shocks in the low-density intercloud medium. This scenario (hereafter DSA 2) could arise from the parameters described previously (v_s = 350 km s^-1, n_0 = 0.4 cm^-3, and B_0 = 2 µG), implying p_max c ∼ 260 GeV. We set the total energy of the protons and the electron-to-proton ratio equal to those of the DSA component (i.e., W_tot = 1.2 × 10^49 erg and K_ep = 0.01). This new component is shown in the lower panel of Figure 12. By adding it, our model is able to explain the entire spectrum. Compared to the previous model, adding the DSA 2 component gives a TS value of 12, computed from the SED points.

Conclusions

We have presented the analysis of ∼11 years of Fermi-LAT data in the region of the Cygnus Loop. Our morphological analysis between 0.1 and 100 GeV confirmed extended emission in the γ-ray band in the shape of a ring, with maximum and minimum radii of 1.50° (+0.01, −0.02) and 0.50° (+0.04, −0.07), respectively. We found a strong correlation between the γ-ray emission and the X-ray and UV thermal emission. In particular, we found that the GeV morphology of the Cygnus Loop is best described by a two-component model: one component consisting of a spatial template obtained from the X-ray thermal emission, which is brightest in the northeast region of the remnant, the other consisting of a UV spatial template that dominates the central and west regions of the remnant. The γ-ray spectra extracted from these two components peak at ∼1 GeV and can be described by LogParabola functions. Overall, the Cygnus Loop has a γ-ray spectrum that can be described by a power law with a subexponential cutoff toward low energies, and an integrated energy flux in the 0.1-100 GeV band of (9.0 ± 0.2) × 10^-11 erg cm^-2 s^-1. The peak in the γ-ray spectrum suggests a hadronic origin of the nonthermal emission, as already shown by Katagiri et al. (2011). We constrained the high-energy particle population using the radio and γ-ray emission.
The wide range of shock speeds in the different regions of the Cygnus Loop, together with the results of our morphological analysis, indicates two possible physical scenarios for the origin of these particles: the DSA mechanism in regions with shock velocities > 150 km s^-1, and RPCR otherwise. Our multiwavelength analysis confirms that neither scenario alone is capable of explaining the entire nonthermal emission from the Cygnus Loop, but a model involving both scenarios simultaneously works well. We found that two different populations of hadrons and leptons are responsible for the nonthermal emission: one arising from the DSA mechanism, the other due to RPCR. Our best-fit model requires a maximum attainable energy of ∼15 GeV for the hadrons and leptons in the DSA and RPCR populations. In this model, 2% of the kinetic energy released by the SN goes into particles accelerated through DSA (another fraction could have already escaped), with an electron-to-proton ratio of K_ep ∼ 0.01. The pre-shock filling factor for the RPCR scenario is < 0.02. Because the particles have a harder spectrum below the cutoff in the RPCR scenario, the RPCR component of the γ-ray spectrum is harder than the DSA component. By extracting the radio and γ-ray contributions from the entire Cygnus Loop using the X-ray and UV templates, we disentangled the two different contributions to the nonthermal emission and unveiled the multiple origins of the accelerated particles in the remnant. Although it has been studied for many years, the Cygnus Loop continues to be of great interest to the community. Models describing the full evolution of the remnant (Ferrand et al. 2019; Ono et al. 2020; Orlando et al. 2020; Tutone et al. 2020) and its thermal and nonthermal emission (Orlando et al. 2012; Miceli et al. 2016; Orlando et al. 2019; Ustamujic et al. 2021) would be very useful.
Impact of early salmon louse, Lepeophtheirus salmonis, infestation and differences in survival and marine growth of sea-ranched Atlantic salmon, Salmo salar L., smolts 1997-2009

The impact of salmon lice on the survival of migrating Atlantic salmon smolts was studied by comparing the adult returns of sea-ranched smolts treated for sea lice using emamectin benzoate or substance EX with untreated control groups in the River Dale in western Norway. A total of 143 500 smolts were released in 35 release groups in fresh water from 1997 to 2009 and in the fjord system from 2007 to 2009. The adult recaptures declined gradually with release year and reached minimum levels in 2007. This development corresponded with poor marine growth and increased age at maturity of ranched salmon and in three monitored salmon populations, and indicated unfavourable conditions in the Norwegian Sea. The recapture rate of treated smolts was significantly higher than that of the controls in three of the releases performed: the only release in 1997, one of three in 2002, and the only group released in sea water in 2007. The effect of treating the smolts against salmon lice was smaller than the variability in return rates between release groups, and much smaller than the variability between release years, but its overall contribution was still significant (P < 0.05) and equivalent to an odds ratio of the probability of being recaptured of 1.17 in favour of the treated smolts. Control fish also tended to be smaller as grilse (P = 0.057), possibly due to a sublethal effect of salmon lice.

Introduction

Salmon farming in sea cages has grown to become a large industry since the 1980s. Because of the increase in the numbers of available hosts, the spread of salmon lice larvae is above natural levels in the vicinity of salmon farms (Bjørn & Finstad 2002; Krkosek, Lewis & Volpe 2005; Jansen et al. 2012). Salmon lice infestations have been shown to affect the physiology and pathology of salmonids in controlled experiments (Boxaspen 2007; Wagner, Fast & Johnson 2008), and high infestation rates in wild salmonids have been observed (Birkeland & Jakobsen 1997; Finstad et al. 2000). However, the ecological consequences of salmon smolts being infested with salmon lice while migrating through regions with salmon aquaculture are poorly understood, and the impact on wild salmon populations has been debated (Costello 2009). Declines in many salmon populations in recent years (Anon 2011a) have contributed to increasing concerns about the possible influence of various anthropogenic factors, such as the reduction in long-term fitness due to enhancement practices (Araki, Cooper & Blouin 2007), introgression of escaped farmed salmon in wild populations, and salmon lice. Few methods are available for estimating or quantifying the impact of salmon lice on wild populations. One approach has been to treat hatchery-reared smolts against salmon lice and then release them to compare their returns as adults with untreated controls. So far, a total of 37 single release groups have been reported from western Ireland (Jackson et al. 2011a,b; Gargan et al. 2012) and Norway (Skilbrei & Wennevik 2006b; Hvidsten et al. 2007). Krkošek et al. (2012) conducted a meta-analysis of all the published data and estimated the overall effect size (odds ratio) to be 1.29 in favour of the treated smolts. However, the reports demonstrate that the effects of treatment vary greatly across years, release sites and release dates.
Most of the groups were moderately or not affected, while the survival of the treated smolts was significantly improved in others. Jackson et al. (2011a,b) released smolts in western Ireland from 2001 to 2008 and concluded that salmon lice were of minor importance for survival in the sea. Gargan et al. (2012), on the other hand, reported experimental releases from three other locations in western Ireland from 2004 to 2006 and found a much clearer advantage of the treatment for salmon lice. Both Skilbrei & Wennevik (2006b) and Hvidsten et al. (2007) found significant differences in survival in one of three smolt releases. The risk of the smolts being infested with salmon lice therefore appears to vary substantially. There is a need for more field experiments to improve estimates of the impact of salmon lice on wild salmon populations, preferably from long-term studies with several releases per year. Several pioneer farms started salmon production along the coast of western Norway during the 1960s, and the region now hosts a large salmon farming industry (Skilbrei & Wennevik 2006a). It was discovered during the 1990s that salmon lice posed a threat to wild sea trout and salmon smolts in the area (Birkeland & Jakobsen 1997; Heuch et al. 2005). At the same time, many local salmon populations declined. This development was very dramatic in the River Vosso, long known for its large salmon, which fell to a very low level during the late 1980s and early 1990s (Barlaup 2008). The ecological effects of introgression of escaped farmed salmon (Saegrov et al. 1997; Skaala, Wennevik & Glover 2006) and impacts of salmon lice have been proposed as possible causes of this development. Against this background, experimental releases of smolts treated against salmon lice were started in 1997 in the River Dale, which is located in the same fjord and close to the River Vosso. With the exception of 2000, hatchery-reared smolts of River Dale stock have been released every year. This study reports the results of the 35 experimental releases of hatchery-reared smolts from 1997 to 2009. We also tested releases of these smolts on various dates and at different locations in the fjord system, and collected wild and stocked smolts in the river for experimental releases in 2004 and 2005. Because of the wide variability in the growth of salmon at sea during this period, we also present data from three wild salmon populations for comparison.

Materials and methods

Salmon smolts were derived from broodstock collected from the River Dale from 1995 to 2007. The eggs were fertilized in late October or early November. Ten to fourteen family groups were produced each year. The fish were reared in 1 × 1 m indoor tanks under continuous light from first feeding in May. Between November and early January, the presmolts were moved to between four and eight 2-m-diameter circular tanks under a natural photoperiod, obtained via a translucent roof above the tanks, in which they were kept until the time of release. To ensure thorough mixing of the fish before release, each release group comprised approximately equal numbers of fish from each of the rearing tanks. Fish were anaesthetized (benzocaine, metomidate or MS222) before being tagged with Carlin tags (1997-1999; Carlin 1955), or adipose fin-clipped and group-tagged with sequentially numbered decimal coded wire tags (2001-2009). Approximately 50% of the smolts were treated for salmon lice immediately before release (Tables 1 and 2).
Three different treatment methods were used. The prophylactic substance EX (Pharmaq) was used to treat the hatchery smolts in 1997-1999 and 2005, and wild and stocked smolts in 2004 and 2005. Substance EX protects fish from sea lice infection for up to 16 weeks (B. Martinsen, Pharmaq, pers. comm.). The fish were bathed in a solution of 1 mg EX L^-1 for 30 min before release. From 2001 to 2004, and in 2006, the smolts were orally administered 50 µg kg^-1 emamectin benzoate for 7-8 days prior to release (SLICE®, Schering-Plough Animal Health, 1.5 mm particle size dry feed, manufactured by Skretting AS). From 2007 onwards, 400 µg kg^-1 emamectin benzoate was administered by intraperitoneal injection (Glover et al. 2010), and controls were given a sham injection 6-8 days before release. More than 6000 wild and stocked smolts were caught in a smolt trap close to the hatchery in 2004 and 2005 (Table 3). They were treated with substance EX, microtagged and then released in the river. The stocked smolts were siblings of the hatchery-reared smolts, which had previously been released into the river as autumn juveniles. From 1997 to 1999, the smolts were released where the River Dale drains into a 4.3-km-long narrow bay that is connected to the sea but dominated by fresh water. From 2001 to 2009, the smolts were released directly into the River Dale near the hatchery (Fig. 1). There was one release each year in 1997, 1998, 1999 and 2001, three releases at roughly two-week intervals from early May to early June each year from 2002 to 2008, and two releases in 2009 (Table 1). From 2007 on, groups were also transported to the river mouth in a transport tank supplied with oxygen and transferred to a floating transport tank (2007). Data from returning adult salmon were derived mainly from angling in the Dale River and from a bag net in the fjord 18 km from the river (69% of the recaptures from the 1997-1999 releases and 91% from the 2001-2009 releases). The recovery address was printed on the Carlin tags. Posters were distributed from 2002 onwards to advise the anglers of the microtagged and fin-clipped fish that had been released. A reward of NOK 50 (raised to NOK 100 during the project) was paid to anglers who provided scale samples and the upper jaw (containing the coded wire tag) from adipose fin-clipped salmon. A freezer for storing samples was installed in a room open to the public in the hatchery, which is close to the most productive angling sites. A wild salmon reference material was built by collecting scales from salmon captured in the Rivers Eid, Gloppen and Dale (see locations in Fig. 1). The scales were read using a microfiche reader-printer. Scale characteristics were used to estimate smolt length, smolt age and sea age, and to separate wild from escaped farmed salmon according to the method described by Lund & Hansen (1991). The examination revealed that 970, 3208 and 1397 salmon of wild origin had been sampled in the Dale, Eid and Gloppen, respectively, from 1998 to 2011. From the River Dale, an average of 119 recaptured wild individuals per smolt year class from 1998 to 2004 were analysed, and a mean of 20 wild salmon per year class from 2005 to 2009. In the Rivers Eid and Gloppen, the numbers of wild salmon from each smolt year class ranged between 94-396 and 49-225 individuals, respectively, from 1998 to 2009. A 2 × 2 G-test (Sokal & Rohlf 1981) was used to test the effect of the sea lice treatment in single release groups.
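For readers unfamiliar with the G-test, the calculation for a single release group is straightforward; the sketch below uses hypothetical counts (not values from Table 1) and scipy, whose chi2_contingency with lambda_="log-likelihood" computes the G statistic (likelihood-ratio test of independence).

```python
# 2 x 2 G-test sketch for one release group: treated vs. control by
# recaptured vs. not recaptured. Counts are illustrative placeholders.
import numpy as np
from scipy.stats import chi2_contingency

#                 recaptured   not recaptured
table = np.array([[40,         1960],    # treated smolts (hypothetical)
                  [20,         1980]])   # control smolts (hypothetical)

g_stat, p_value, dof, expected = chi2_contingency(
    table, lambda_="log-likelihood", correction=False)
print(f"G = {g_stat:.2f}, df = {dof}, P = {p_value:.4f}")
```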
For analyses of multiple release groups, the LOGISTIC procedure of the SAS Software Package version 9.1 (SAS Institute) was used to fit generalized linear models (GLM; McCullagh & Nelder 1989) with a logistic link function, to test for differences in the probability of fish being recaptured (binomial response variable) with treatment against sea lice, release year and release date. The model was of the form ln(P/(1 − P)) = I + A_treat + ..., where P is the probability of recapture, I is the intercept, and the categorical parameter A_treat is the parameter estimate for the effect of the treatment against sea lice (treated or control).

Results

In two of the 27 experimental releases of smolts into fresh water, the recapture rates of treated fish were significantly higher than those of controls (Table 1): the smolts released in 1997 (G-test, P < 0.001) and the third group released in 2002 (G-test, P < 0.001; for a detailed analysis of this year class see Skilbrei & Wennevik 2006b). The decline in the recapture rates from 1997 to 2009 (Fig. 2) was reflected in a strong effect of release year (Model I: Wald chi-square W = 270.7, df = 11, P_year < 0.001), but the treatment also contributed significantly to the variability in the marine survival rates during this period (W = 4.3, df = 1, P_treat < 0.05). An analysis of the years 2002 to 2008, when there were three releases every year at almost fixed dates (Table 1), shows that release year (Model II: W = 165.1, df = 6, P_year < 0.0001) and release date (W = 6.2, df = 1, P_date < 0.05) significantly affected the recapture rates, but sea lice treatment did not (W = 1.7, df = 1, P_treat = 0.19). The recapture rates of the smolts released in a marine environment were several times those of the smolts released in the river from 2007 to 2009 (Figs. 2 and 3). The smolts that were towed to the coast in late June 2007 benefited significantly from the sea lice treatment (Fig. 3, P < 0.005), but treatment did not affect return rates when all the fjord and coast releases from 2007 to 2009 were taken together (Model II: W = 1.8, df = 1, P_treat = 0.18; W = 13.7, df = 2, P_year < 0.05). From mid- until late May 2009, the recapture rate of the fish released at release site M1 more than doubled, while the recapture of the M2 release was halved (Fig. 3). There was thus no statistical effect of release date when these four releases were compared (df = 1, W = 0.22, P_date = 0.64). An overall analysis of all the release groups from 1997 to 2009, irrespective of release site and date, shows that the overall effect of the treatment against sea lice was significant (Model I: W = 344.5, df = 11, P_year < 0.0001; W = 5.9, df = 1, P_treat < 0.05). The recapture rates of all treated and control fish were 0.78 and 0.66%, respectively. The mean yearly recapture rates were 0.82 and 0.68% for the treated and control groups. The odds ratio estimate of the probability of recapture of treated versus control fish was 1.17 (Wald 95% confidence limits, 1.03 and 1.32), which implies that a treated fish had 1.17 times the chance of a control fish of being recaptured, but which also shows a substantial confidence interval. The main reasons for the significant difference were the three single releases with significantly higher returns of treated fish: the 1997 release, the 7 June 2002 release in the River Dale, and the 18 June 2007 group released at the coast. When these three release groups are excluded from the analysis, there was no effect of treating the smolts against salmon lice in the remaining 32 release groups (W = 0.02, df = 1, P_treat = 0.89).
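The same logistic GLM can be fit outside SAS. The sketch below uses Python's statsmodels with a hypothetical per-smolt data frame (the column names and simulated data are placeholders); the treatment odds ratio reported above, 1.17, corresponds to exp(beta_treat) in such a fit.

```python
# Logistic GLM sketch (Model I): recapture (0/1) on treatment and release
# year, with the treatment odds ratio recovered as exp(beta_treat).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df: one row per released smolt, with hypothetical columns
#   recaptured (0/1), treated (0/1), year (release year)
df = pd.DataFrame({"recaptured": np.random.binomial(1, 0.008, 5000),
                   "treated": np.random.binomial(1, 0.5, 5000),
                   "year": np.random.choice(range(1997, 2010), 5000)})

model = smf.glm("recaptured ~ C(treated) + C(year)",
                data=df, family=sm.families.Binomial()).fit()
odds_ratio = np.exp(model.params["C(treated)[T.1]"])
wald_ci = np.exp(model.conf_int().loc["C(treated)[T.1]"])  # 95% limits
```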
Most wild smolts were captured in the smolt trap during May, while stocked smolts were more common in June (Table 3). The total recapture rate of wild smolts released in 2004 was significantly higher than that of the stocked smolts, 1.0 vs. 0.27% (2 × 2 G-test, P < 0.01), but there was no effect of salmon lice treatment. The recapture rate was zero or close to zero in all groups of wild and stocked smolts in 2005 (Table 3). The total recaptures of wild smolts were somewhat higher than those of hatchery-reared smolts (0.99%); the effect of release year was significant (P_year < 0.0001), but the differences between wild and hatchery-reared smolts (W = 2.4, df = 1, P_wild = 0.12) and the treatment effect (W = 0.1, df = 1, P_treat = 0.80) were not. Generally, grilse (one-sea-winter salmon; 1 SW salmon) weights varied considerably between years (Fig. 4; GLM: df = 10, F = 14.1, P_year < 0.0001). Treatment against salmon lice contributed less to the variability in grilse weight, but on average the treated fish were 0.1 kg larger than the control fish, 1.79 (SD = 0.05) kg vs. 1.69 (0.05) kg (Fig. 4). This trend was significant for the smolts released in 2002 (details in Skilbrei & Wennevik 2006b) and was very close to significance for the period from 1997 to 2009 (df = 1, F = 3.6, P_treat = 0.057). Grilse size declined during the experimental period, from about 2 kg for the smolt year classes 1997-2002 to a mean size of less than 1 kg when the 2007 smolts returned. A parallel development was also seen in the 2 SW salmon. The 2007 smolt year class weighed around 3 kg as 2 SW salmon, which is ~60% of the average weight during the first part of the study (Fig. 5). Mean weights of 2 SW salmon then increased sharply towards 5 kg at the end of the study. The relative proportion of fish returning as grilse also declined sharply, from a dominance of grilse during the first years to multi-sea-winter salmon being far more numerous during the last years of the study (Fig. 6).

Discussion

Our study was performed in a region with a high production of farmed salmon (Skilbrei & Wennevik 2006a). It demonstrated that treating Atlantic salmon smolts against salmon lice resulted in a higher percentage of returns than in untreated control groups (odds ratio 1.17), but clear lethal effects of salmon lice were seen in only three of the 35 releases of ranched salmon smolts from 1997 to 2009, and in three of the 12 years examined. These results are similar to those of Jackson et al. (2011a,b), who released smolts in Ireland from 2001 to 2008, but show a slightly greater effect of salmon lice. The difference between treated smolts and controls, on the other hand, was less pronounced than the more distinct differences observed at three other sites in Ireland in 2004, and also lower compared with the meta-analysis of Krkošek et al. (2012). The relatively consistent trend from 1997 to 2009 that treated grilse were ~6% (0.1 kg) heavier than the controls indicates that the control smolts were normally exposed to sublethal levels of salmon lice. This implies that salmon lice were generally present in the migration route of the smolts, also at times when survival rates did not differ between treated smolts and controls. There are several limitations to the methods that may have led to an underestimate of the effect of salmon lice.
A laboratory experiment performed in 2003 concluded that the oral treatment of the Dale hatchery-reared smolts with emamectin benzoate that year resulted in highly variable concentrations of the drug, with some fish receiving only partial dosages, and that the protection lasted for <6 weeks (Skilbrei et al. 2008). Because of this uncertainty, intraperitoneal injection of emamectin benzoate was introduced in 2007, which produced a severalfold increase in the concentration of the drug in muscle and protection against salmon lice of ~9 weeks (Glover et al. 2010). Reduced sensitivity of adult salmon lice to emamectin benzoate has been demonstrated in fish farms in western Norway since autumn 2009, and also in Scottish fish farms (Lees et al. 2008). The situation is being monitored by the Norwegian authorities, and farmers are still permitted to use Slice® in spring 2012. The use of two drugs and three administration techniques, and the possible development of reduced sensitivity to emamectin benzoate, imply that the duration of the protection against sea lice varied during the experiment. We do not have sufficient background information to account for such effects in the models. If we assume that the risk of being infested with salmon lice is highest at the coast in the vicinity of fish farms and lower in the open sea, then an important question is whether the smolts moved through this zone during the first weeks post-release, while they were still protected, regardless of the anti-lice treatment method used. The duration of the protection was probably shortest when emamectin benzoate was administered orally (2001-2004 and 2006). The clear differences in survival and marine growth between the treated and control smolts released in 2002 (Skilbrei & Wennevik 2006a) at least indicate that a large portion of the smolts were protected that year. The release strategy was also improved from 2007 onwards, when groups of smolts were released at various sites along the migration route from the fjord to the coast. The rationale was to increase return rates, which had been low during the previous years, and also to increase the probability that the smolts could reach and pass the outer fjord and coastal areas while still protected against salmon lice. Because of the existence of a surface layer of fresh water in the inner fjord system during spring, the risk of being infested with salmon lice is thought to be low during the first stage of the migration route (Skilbrei 2012). Transfer of the fish to, and release from, net pens in a higher-salinity environment was beneficial for survival. Reasons for this may have been that the physiological adaptation to sea water, school formation and migratory behaviour were stimulated (Skilbrei et al. 1994), that predation in the river and estuary was avoided, and that it may also have been beneficial for the smolts to avoid or reduce their exposure to moderately acid water and aluminium (Al) in the fjord. Watersheds in this area are affected by acid rain mobilizing and transporting aluminium to the fjord with river water (Barlaup 2008), which has occasionally caused mortality of cultured salmon in the fjord (Bjerknes et al. 2003). Under brackish conditions, non-toxic forms of aluminium present in the fresh water are transformed into toxic forms of Al, which may precipitate onto fish gills (gill-Al) (Teien, Standring & Salbu 2006).
This may impair physiological status, delay migration and reduce the survival of smolts (Kroglund & Finstad 2003; Finstad et al. 2007). Recapture rates of smolts released in the River Dale declined gradually and clearly, from ~1-1.7% in 1997-2002 to ~0-0.3% in 2005-2008 (from ~10 to ~2%), in parallel for treated and control smolts. Other workers have reported that the survival rates of both wild salmon and released hatchery-reared smolts in the Atlantic Ocean have declined markedly in recent years (Friedland et al. 2009; ICES 2010; Otero et al. 2011). The reductions in the sizes of the grilse and 2 SW salmon suggest that growth conditions in the sea became poorer. This development was accompanied by the very clear drop in the ratio of grilse to multi-sea-winter salmon during the same period. A lowering of growth rate is generally correlated with increased age at maturity in salmonids (Alm 1959; Taranger et al. 2010); on the other hand, Jonsson & Jonsson (2007) present data showing that the opposite relationship between early marine growth and age at maturity has also been observed. Nevertheless, it seems more likely that the falls in marine survival and growth from 2002 to 2007 were related to changes in the marine ecosystem rather than to infestation by sea lice. Salmon lice may still have been of importance for marine survival, but their impact is difficult to evaluate during the years when the returns were very low, owing to correspondingly low statistical precision. It is also possible that the enhancement practice in the River Dale, with releases of juveniles and smolts, may have influenced its productivity negatively. Because of domestication selection during the captive phase, there is concern that enhancement may reduce the long-term fitness of salmon stocks (Araki et al. 2007). This study has demonstrated that the survival rates of released hatchery-reared smolts are highly variable, both within and between years. Environmental fluctuations, and more or less stochastic variability in the composition and distribution of predators and food organisms along the migration routes of the smolts, have presumably contributed to the differences between release groups and demonstrate the need for repeated releases to characterize the conditions the smolts may experience in the course of a season. The clear differences in survival rates between smolts released in the same fjord ~25 km apart on the same dates in 2009 illustrate the variability and stochastic nature of the conditions influencing the survival of released smolts. There is an obvious potential to increase smolt survival by developing improved release methods, which could increase statistical accuracy in treatment studies. On the other hand, standardization of release sites and release methods is advisable if the goal is to estimate differences in survival rates at sea in long-term studies. The small and insignificant differences in survival between the hatchery-reared and wild smolts released in 2004 and 2005 were encouraging with regard to the question of whether results obtained from releases of hatchery-reared smolts are indicative of the marine performance of wild smolts. The low recapture of the stocked smolts, on the other hand, appears to confirm the suggestion of Skilbrei et al. (2010) that smolts originating from stocking of juveniles during the previous autumn migrate several weeks later than wild smolts and suffer higher marine mortality.
Although other factors contributed more to the variability in the survival rates of the release groups, salmon lice appeared to impose an average additional marine mortality of ~17% (odds ratio of 1.17 for recapture of treated vs. control fish). According to the considerations of Norwegian expert groups aiming to quantify the impact of salmon lice, this level of influence would be expected to represent a moderate regulatory effect on a salmon population (Anon 2011b; Taranger et al. 2012). While effects of salmon lice were not observed in most of the release groups or in most years, the reduction in survival was dramatic in particular groups, for example the 1997 release, which would have had very serious consequences for the year class if it had been representative of all the smolts that migrated that year. Salmon lice also killed a significant proportion of the smolts in 2002, but in that particular year smolt survival was relatively high, and returns from the control groups were still acceptable in comparison with other years. The 2007 results exemplify a situation that, under certain circumstances, may represent a clear negative impact of salmon lice, which contributed significantly to increased mortality while, at the same time, oceanic survival and growth rates appeared to be unusually low. Our data also indicate that non-lethal effects of salmon lice, such as reduced growth and subsequently reduced fecundity, were a frequent consequence of salmon lice infestation throughout the period under study. It is likely that wild salmon populations in the region were also influenced negatively by salmon lice, but to what extent cannot be easily estimated. Nor is it a simple matter to estimate the population-regulatory effects of salmon lice. One reason for this is the apparently stochastic nature of the impact of salmon lice: while control fish performed as well as treated fish in most release groups and release years, the influence of lice was very clear in the three groups affected. We therefore suggest that evaluations of the risk of negative impacts of salmon lice should be linked to the status of the population, which must involve consideration of the combined impacts of salmon lice and other potential stressors. The magnitude of the effects of salmon lice observed in this study may be too small to threaten viable populations, but may give rise to concern for small and vulnerable populations. Large numbers of escaped farmed salmon are found at spawning sites in many Norwegian salmon rivers (Anon 2011a), which may threaten the genetic integrity of populations (Skaala et al. 2006). If escaped farmed salmon aggregate in a river while the survival and fecundity of the wild fish are low owing to poor oceanic conditions, then the potential impact of salmon lice may increase because of the combined impacts of several negative conditions. A recent study in Norwegian rivers demonstrated that the risk of introgression of fish that are not native to the river, such as escaped farmed salmon, is greater in rivers where the number of wild spawners has declined during recent decades; among these is the River Vosso, the neighbouring river to the Dale.
Adverse Effect of Trauma on Neurologic Recovery for Patients with Cervical Ossification of the Posterior Longitudinal Ligament

Study Design Retrospective study. Objective Minor trauma, even from a simple fall, can often cause cervical myelopathy necessitating surgery in elderly patients who may be unaware of their ossification of the posterior longitudinal ligament (OPLL). The aim of this study is to determine the influence of trauma on the neurologic course in patients who have undergone surgery for cervical OPLL. Methods Patients who underwent surgery due to OPLL were divided by trauma history and compared (34 in the trauma group; 70 in the nontrauma group). Results Ground falls were the most common type of trauma (20 patients, low-energy injuries), and 23 patients developed new symptoms after a trauma. Although the symptom duration (17.68 months) was shorter, the Japanese Orthopedic Association (JOA) score and the Nurick scale indicated worse neurologic status in the trauma group. Trauma histories led patients to earlier hospital visits. Initial JOA scores were associated with a good recovery status upon the last follow-up in both groups. The narrowest diameter of the spinal canal differed between groups: 5.78 mm in the trauma group and 6.52 mm in the nontrauma group. Conclusion Minor trauma can cause the unexpected development of new symptoms in patients unaware of cervical OPLL. Patients with a history of trauma had lower initial JOA scores and narrower spinal canals compared with the nontrauma group. Initial JOA scores were correlated with a good recovery status upon the last follow-up.

The need for prophylactic surgery in asymptomatic patients with OPLL remains a matter of debate among surgeons. 6,10-12 In fact, several longitudinal cohort studies have revealed that cervical trauma has little effect on the outcomes of asymptomatic patients with OPLL; thus, the current evidence does not allow for firm and broad recommendations to be made regarding prophylactic surgery to reduce the risk of aggravation caused by a minor cervical trauma. 4,12 Moreover, cervical spinal cord injuries (SCIs) and related disabilities are more likely to occur in patients with OPLL, and conservatively treated OPLL increases the risk of SCI (4.8 per 1000 person-years). 13 Our clinical impression of patients with cervical OPLL is that even minor trauma, such as a fall onto the ground, can induce cervical myelopathy, and patients may be unaware of having the condition before the trauma incident. In addition, the surgical outcomes for such patients are not comparable to those of patients without a history of trauma. Hence, we undertook clinical and radiologic evaluations of patients who underwent surgical treatment for cervical OPLL and identified the influence of trauma on their clinical courses and neurologic recovery characteristics. Patient Population Between 2000 and 2010, 121 patients with cervical OPLL underwent surgical treatment by two experienced spine surgeons at one institute. According to their histories of trauma obtained from their medical records, 41 (33.8%) had a history of trauma and 80 (66.2%) had experienced no trauma. Patients with a minimum of 1 year of follow-up were included, and patients with cerebrovascular disease, Parkinson disease, or cerebral palsy were excluded. After paring down the potential participant list, 104 patients were included in the study: 34 (32.7%) in the trauma group and 70 (67.3%) in the nontrauma group. Clinical and Radiologic Evaluations The medical records and radiologic images were reviewed initially and upon the last follow-up.
The initial symptoms during the first clinical visit were classified in terms of whether or not the lower extremities were involved. The time between symptom development and surgery was also investigated. In the trauma group, the etiology of trauma in terms of the injury mechanism was divided into high- and low-energy injuries. 14 Deterioration of pre-existing symptoms and the development of new symptoms after trauma were also checked. The Japanese Orthopedic Association (JOA) score and the Nurick scale were used to assess the degree of cervical myelopathy. 15,16 The recovery rate was calculated as follows: recovery rate (%) = [(postoperative JOA score − preoperative JOA score)/(17 − preoperative JOA score)] × 100. 17 The recovery rate at the last follow-up was classified as ≥50% for a good recovery status and <50% for a fair recovery status. The surgical treatments were divided into anterior, posterior, or combined approaches based on the patients' operation records. Radiographically, bony injuries with spinal instability were investigated. On the plain radiographs, the cervical angle and cervical range of motion (ROM) at C2-C7 were calculated by Cobb's method. On computed tomography (CT) scans, the type of OPLL was classified as segmental, continuous, mixed, or other. 18 The narrowest space available for the spinal cord (SAC) was measured on the midsagittal CT images. The presence and length of high signal intensity were evaluated on the patients' initial T2-weighted magnetic resonance images (MRIs). Statistical Analysis The statistical analysis was performed using the Student t test for continuous variables. A chi-square test or Fisher exact test was used for categorical variables. Logistic regression analysis was used to evaluate the factors affecting a good recovery rate. A value of p < 0.05 was considered significant. SPSS software version 19.0 (IBM, Armonk, New York, United States) was used. Patient Demographics A previous trauma history was detected in 34 patients (32.7%), who were assigned to the trauma group; the remaining 70 patients (67.3%) were categorized into the nontrauma group (Table 1). The 7 women and 27 men in the trauma group had a mean age of 56.24 ± 8.79 years (range, 44 to 77). The etiologies of their trauma were mostly low-energy injuries. These included 20 falls onto the ground, 5 falls from a low height (<1 m), and 3 cases of head trauma with neck extension. In addition, 6 were in motor vehicle accidents with high-energy injuries (Table 2). Pre-existing symptoms deteriorated after the trauma in 11 patients, but only 1 patient was aware of having cervical OPLL and had been recommended surgery previously. Twenty-three patients showed newly developed symptoms after the trauma. At the moment of the trauma, 5 patients experienced transient quadriparesis, which spontaneously resolved within a few minutes to a few days, and 7 patients underwent surgery within 48 hours after the trauma. The duration between symptom development and surgery was 17.68 ± 22.96 months (range, 0 to 60), with all patients showing symptoms related to cervical myelopathy at the time of surgery. Thirteen had symptoms without involvement of the lower extremities and 21 with involvement of the lower extremities. Surgical treatment was given to 14 via the anterior approach, with 17 undergoing the posterior approach and 3 a combined approach.
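As a quick illustration of the recovery-rate formula and the good/fair classification just described, here is a minimal Python sketch; the function names and the patient's JOA scores are hypothetical.

```python
def joa_recovery_rate(pre_op: float, post_op: float) -> float:
    """Recovery rate (%) = (post - pre) / (17 - pre) * 100, with 17 the
    maximum JOA score."""
    return (post_op - pre_op) / (17.0 - pre_op) * 100.0

def recovery_status(rate: float) -> str:
    """'good' if the recovery rate is >= 50%, otherwise 'fair'."""
    return "good" if rate >= 50.0 else "fair"

# Hypothetical patient: preoperative JOA 10, postoperative JOA 14.
rate = joa_recovery_rate(pre_op=10, post_op=14)
print(f"recovery rate: {rate:.1f}% -> {recovery_status(rate)}")  # 57.1% -> good
```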
The mean follow-up period was 56.47 ± 51.25 months (range, 13 to 254) in the trauma group. The 70 patients in the nontrauma group (17 women and 53 men) had a mean age of 53.90 ± 9.32 years (range, 38 to 79). Their symptoms gradually developed and continued for 31.00 ± 39.57 months (range, 0.5 to 240). The symptom durations were longer in the nontrauma group than in the trauma group (p = 0.033). Upon the first clinical visit, 36 patients complained of symptoms in their upper extremities, and 34 patients noted that all four extremities were involved. Surgery was performed on 33 patients via the anterior approach, 34 by means of the posterior approach, and 3 in a combined approach. After the surgery, the mean follow-up period was 47.73 ± 31.89 months (range, 12 to 145) in the nontrauma group. Clinical Outcomes The initial JOA score was lower in the trauma group (10.53 ± 4.71, range, 0 to 16) than in the nontrauma group (12.89 ± 2.49, range, 5 to 16; Fig. 1A). The final JOA score was also lower in the trauma group (13.38 ± 4.00, range, 3 to 17) than in the nontrauma group (15.49 ± 1.93, range, 8 to 17). The recovery rate was 51.06 ± 35.12% (range, 0 to 100) in the trauma group and 68.76 ± 31.78% (range, 0 to 100) in the nontrauma group. The JOA scores upon the initial and last evaluation and the recovery rate all showed favorable and statistically significant results in the nontrauma group (p = 0.001, p < 0.001, and p = 0.012, respectively). The clinical outcomes using the Nurick scale showed similar results upon the initial and the last evaluations of the two groups (p = 0.002 and p = 0.039; Fig. 1B). Radiologic Outcomes None of the patients in the trauma group showed bony injury with spinal instability preoperatively. The cervical angle and ROM at C2-C7 had greater values in the nontrauma group, but the difference was not statistically significant relative to the trauma group (Table 3). In the trauma group, the mixed type of OPLL was predominant in 19 patients (55.88%), while 8 (23.52%) showed the other type, 6 (17.64%) the segmental type, and 1 (2.94%) the continuous type. In the nontrauma group, the mixed type was also the most common, occurring in 31 (44.28%). The OPLL type differences were not statistically significant between the trauma and the nontrauma groups (p = 0.670). The narrowest SAC was 5.78 ± 1.29 mm (range, 3.59 to 8.03) in the trauma group and 6.52 ± 1.50 mm (range, 3.09 to 9.90) in the nontrauma group (p = 0.028). The presence of high signal intensity on the T2-weighted sagittal MRIs was observed in 20 patients (58.82%, 13.25 ± 10.33 mm) in the trauma group and 40 (57.14%, 10.03 ± 9.29 mm) in the nontrauma group (p = 0.408). Parameters Influencing a Good Recovery Status A good recovery status (recovery rate ≥ 50%) was confirmed in 18 patients (52.94%) in the trauma group and 55 (78.57%) in the nontrauma group (p = 0.007). Meanwhile, a fair recovery status upon the last evaluation was noted in 16 (47.05%) in the trauma group and 15 (21.42%) in the nontrauma group. Trauma history affected the fair recovery result upon the last evaluation, with statistical significance (p = 0.001, relative risk = 1.484, 95% confidence interval, 1.057 to 2.084). Among gender, age, and the clinical and radiologic parameters, the logistic regression analysis identified the initial JOA score as the factor most strongly affecting a good recovery status (p = 0.024, odds ratio = 1.206, 95% confidence interval, 1.025 to 1.418).
Table 2. Etiology of trauma by injury mechanism.

Injury mechanism | Etiology of trauma | Number
Low-energy injury | Fall onto the ground | 20
Low-energy injury | Fall from a low height (<1 m) | 5
Low-energy injury | Head trauma with neck extension | 3
High-energy injury | Motor vehicle accident | 6

Influence of Cervical Trauma on the Clinical Course We analyzed the clinical and radiologic outcomes of patients who underwent surgical treatment for cervical OPLL at one institute between 2000 and 2010. One adverse effect of trauma was the surprisingly common development of new symptoms in elderly patients who had not been aware of their cervical OPLL. As in previous reports focusing on cervical OPLL and trauma, the etiology of the trauma in the present study was mostly low-energy injuries from a fall onto the ground. 19,20 Even when the trauma was minor, 23 patients developed new symptoms, and among them, 5 experienced transient quadriparesis for a short time. Although 7 patients had to undergo surgical treatment within 48 hours posttrauma, symptoms in the 27 patients who were conservatively observed gradually became more aggravated. As a result, trauma-induced symptoms led these patients to visit the hospital within relatively short symptom durations and with unfavorable neurologic status compared with patients without trauma. Because a good recovery status was associated with no trauma history and a good initial JOA score, the trauma itself adversely affected the patients' clinical courses. In the present study, a narrow SAC of less than 8 mm was noted in both groups, but a narrower SAC (5.78 mm) was determined in the trauma group. Although the mixed and segmental types of OPLL were identified in ~80% of those in the trauma group, a similar proportion was observed in the nontrauma group. The other radiologic parameters as well (i.e., the cervical angle, ROM, and high signal intensity on MRI) were not significantly different between the two groups. Although a narrow spinal canal as confirmed by radiologic evaluation was not always associated with poor clinical parameters, the authors suggest that a narrow spinal canal is an important risk factor for trauma-induced cervical myelopathy in patients with OPLL. Surgical Timing for Trauma-Induced Cervical Myelopathy from Cervical OPLL There is controversy regarding the role of prophylactic surgery in patients who are asymptomatic or who have only mild myelopathy from cervical OPLL. 4,10-12 Therefore, many surgeons have debated the efficacy of surgery for asymptomatic individuals with narrow spinal canals from cervical OPLL. Matsunaga et al prospectively found that trauma-induced cervical SCI in patients with OPLL could be decreased by informing the patients of the risk. They also showed that prophylactic surgery was not necessary in patients with OPLL. 4 Several authors have argued that the risk of surgical complications would be higher than the risk of myelopathy after trauma. 10 However, once elderly patients sustain a cervical SCI, their quality of life can deteriorate seriously and their lives can be threatened in some cases. 6,21 One report found that conservatively treated OPLL increases the risk of SCI, and a comparative study of acute cervical SCI from OPLL reported that laminoplasty showed more satisfactory outcomes than a conservative management strategy. 13,22 In the present study, a good recovery status upon the last follow-up was not achieved in the trauma group to the extent that it was in the nontrauma group, despite the fact that surgery was performed.
Although the relationship between the surgery and prognosis was unclear because the natural course of the clinical symptoms was not fully detailed, a good initial JOA score was found to be important in determining a good recovery status in the present study. 23,24 Concerning the proper timing of surgery, early surgical decompression for OPLL has been recommended because the outcome of this procedure was better for younger patients and for those with a higher JOA score. 25 Although the need for prophylactic surgery in asymptomatic patients with cervical OPLL cannot be fully supported by this study, if patients with a history of trauma are found to have a narrow spinal canal from OPLL, surgical treatment should be recommended as soon as possible, before further deterioration of their neurologic status. Limitations of the Study There has been debate over which surgical approach is better for the treatment of cervical OPLL. 22,26,27 In the present study, an anterior, posterior, or occasionally combined approach was performed, with anterior cervical diskectomy and fusion at one to three levels. Also performed were corpectomy with or without additional anterior cervical diskectomy and fusion, laminectomy with or without fusion, and laminoplasty. Considering the advantages and disadvantages of each surgical approach, the results of each can offer useful data. Because the surgical approach did not differ significantly between the trauma and nontrauma groups, the authors left out the details of the surgical approach. The progression of OPLL after the posterior approach can affect long-term clinical outcomes, but the authors did not measure this owing to a lack of available CT scans at the last evaluations. Adjacent-segment disease after cervical spinal surgery could develop and result in poor clinical outcomes and additional surgeries. As patients age, they often complain of symptoms related to degenerative lumbar spinal diseases after surgery for cervical OPLL. In addition, a trauma event after the surgery also acted as a risk factor for neurologic deterioration, even in patients without a history of trauma. As described previously, multiple factors can affect the clinical course of patients who undergo surgical treatment for cervical OPLL. However, cervical myelopathy can be induced after minor trauma via low-energy injuries in patients who are unaware of their cervical OPLL status, and these patients typically undergo surgery to relieve their symptoms. Minor trauma can occur at any point during one's lifetime, and warnings to avoid injury alone cannot prevent trauma-induced cervical myelopathy. Prophylactic surgery for asymptomatic patients with narrow spinal canals from OPLL should be tailored based on population-based case-control studies or prospective long-term studies with electrophysiologic evidence of cervical radicular dysfunction or central conduction deficits. 28 Conclusion Surprisingly, minor trauma could lead to the development of new symptoms in patients who were not aware of their cervical OPLL status. Patients with a history of trauma showed lower initial JOA scores and a narrower spinal canal compared with the nontrauma group. Initial JOA scores were correlated with a good recovery status upon the last follow-up.
Causal and Evidential Conditionals

We put forth an account for when to believe causal and evidential conditionals. The basic idea is to embed a causal model in an agent's belief state. For the evaluation of conditionals seems to be relative to beliefs about both particular facts and causal relations. Unlike other attempts using causal models, we show that ours can account rather well not only for various causal but also evidential conditionals.

(1) If the flag had been up, the King would have been in the castle.

Kratzer says that this conditional is acceptable. To see why the conditional is evidential, let us distinguish two readings of (1). (2) If the flag had been up, this would have caused the King to be in the castle. (3) If the flag had been up, this would have been evidence that the King is in the castle. (3) but not (2) seems to be an appropriate paraphrase of (1). In general, the antecedent of an evidential conditional provides, or would provide, evidence for the consequent. Now, the above-mentioned causal model semantics capture the unacceptability of (2), but fail to account for (3) and thus (1). This is simply because the flag being up has no causal influence on whether or not the King is in the castle. 2 The flag merely constitutes evidence for the presence of the King. Here we propose an account for when an agent should believe causal and evidential conditionals. We assume that an agent has beliefs about both particular facts and causal relations. We embed a causal model in an agent's belief state to represent beliefs about causal relations. Then we illustrate how beliefs about causal relations and particular facts influence the evaluation of conditionals by walking through several examples.

An Epistemic Account

We represent an agent's belief state by ⟨M, B⟩, where M is a causal model ⟨V, E⟩ and B ⊆ W a set of value assignments to the variables in V. W denotes the set of all relevant possible worlds. The agent believes what is true in all value assignments, or equivalently worlds, w ∈ B that are most plausible. The plausibility of worlds is determined lexicographically: (i) worlds that assign to more variables the values about which the agent is certain are more plausible than worlds which assign those values to fewer variables; and whenever there are ties in (i), (ii) the worlds that satisfy more of the structural equations in E are more plausible than worlds which satisfy fewer. 3 An agent's certainty about particular facts trumps the believed causal relations.

Causal and Evidential Conditionals

A causal model is a tuple ⟨V, E⟩, where V is a set of variables and E a set of structural equations. The set E of structural equations represents the causal relations the agent believes to be true. For simplicity, we restrict ourselves to binary variables representing that a fact obtains or else does not. The variables thus come with a range of two values, which we call 1 (true) and 0 (false). For instance, let K be the variable that represents whether or not Kennedy has been shot. K taking the value 1, K = 1, says that Kennedy has been shot, while K = 0 says that he has not been shot. Whenever the binary variables behave like propositional variables, we abbreviate K = 1 by k and K = 0 by ¬k. Each structural equation specifies a relation of direct causal dependence between some of the variables in V. For each variable X in V, there is at most one structural equation of the form

X = f_X(Pa_X),  (*)

where the set Pa_X of X's parent variables is a subset of V \ {X}.
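As a concrete illustration (ours, not from the paper), the Python sketch below represents such a causal model: each structural equation X = f_X(Pa_X) becomes a Boolean function over a partial value assignment, and, anticipating a point made just below, the unique solution of an acyclic model is computed once the parentless variables are assigned. All names and the example model are hypothetical.

```python
from typing import Callable, Dict, List

World = Dict[str, int]  # value assignment: variable name -> 0 or 1

class CausalModel:
    """A causal model <V, E> over binary variables, with at most one
    structural equation X = f_X(Pa_X) per variable. Assumed acyclic."""

    def __init__(self, variables: List[str],
                 equations: Dict[str, Callable[[World], int]]):
        self.variables = variables
        self.equations = equations  # variable -> f_X over its parents

    def solve(self, exogenous: World) -> World:
        """Given values for all parentless variables, compute the unique
        solution by applying equations until every variable is determined."""
        w = dict(exogenous)
        while len(w) < len(self.variables):
            for x, f in self.equations.items():
                if x not in w:
                    try:
                        w[x] = int(f(w))
                    except KeyError:
                        pass  # some parent of x is not yet determined
        return w

# Hypothetical model: E = 1 (someone was shot) iff trigger T1 or T2 was
# pulled; T1 and T2 are parentless.
M = CausalModel(
    variables=["T1", "T2", "E"],
    equations={"E": lambda w: w["T1"] or w["T2"]},
)
print(M.solve({"T1": 0, "T2": 1}))  # {'T1': 0, 'T2': 1, 'E': 1}
```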
A structural equation says that the value of the variable on the left-hand side is causally determined by the values of the variables on the right-hand side. The value of X is determined by the values of the variables in Pa_X, in the way specified by f_X. In what follows, we will only use a subclass of the functions applicable to binary variables, the Boolean operators ¬, ∧, ∨. Sides matter for structural equations. In (*), the value of X is determined by f_X(Pa_X), but the value of f_X(Pa_X) is not determined by X. A structural equation expresses how the variable on the left-hand side depends on the variables on the right-hand side. Hence, (*) encodes a set of conditionals that have the following form: if the variables in Pa_X were to take on these or those values, X would take on this or that value. As soon as you assign values to all the "parentless" variables of a causal model M, the values of the remaining variables can be computed. This solution of M is unique if the structural equations can be ordered such that no variable occurs on the left-hand side of an equation after having occurred as a parent on the right-hand side. In what follows, we consider only such acyclic causal models. The result M_a of intervening on M by a propagates the effects of the intervention causally downstream according to the structural equations. The "parentless" variables now include A. The result can be thought of as the causal possible world where A takes the value 1 and which is otherwise most similar to the actual world. If the variable C takes the value 1 in M_a, the causal conditional a >c c is true relative to M and a value assignment to the (parent) variables in V. An agent believes a conditional a > c to be true iff a > c is true at each ⟨M, w⟩ where w ∈ B. In words, the agent believes those conditionals which are true at each most plausible world. It remains to state the definition for when a causal conditional is true at what we will call a causal world ⟨M, w⟩.

Definition 1 (Causal Conditionals) A causal conditional a >c c is true at ⟨M, w⟩ iff c is true at ⟨M_a, w⟩.

As above, M_a represents the causal model obtained from M where the structural equation f_A has been replaced by A = 1. Let us turn to evidential conditionals. We define when an evidential conditional is true at a causal world as follows:

Definition 2 (Evidential Conditionals) An evidential conditional a >e c is true at ⟨M, w⟩ iff c is true at all ⟨M, w′⟩, where w′ satisfies a and is otherwise most plausible. 5

Recall that a world w is more plausible than another w′ if w, as compared to w′, assigns to more variables the values about which the agent is certain. When there are ties on the first comparison, a world w is more plausible than another w′ if w satisfies more of the structural equations in E than w′ does. If an agent believes some particular facts for certain, the corresponding worlds are more plausible. By contrast to believing for certain, an agent merely believes causal relations. The worlds corresponding to the agent's certain beliefs are more plausible than the worlds corresponding to the agent's causal beliefs. In a slogan, the particular facts about which the agent is certain trump the believed causal relations. In what follows, we apply our account to several examples.
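Definition 1 can be prototyped directly: intervening by a replaces the structural equation for A with the constant A = 1, leaves causally upstream variables at their values in w, and recomputes everything downstream. The following Python sketch is ours; the helper names and the two-variable example are hypothetical.

```python
from typing import Callable, Dict

World = Dict[str, int]
Equations = Dict[str, Callable[[World], int]]

def solve(equations: Equations, exogenous: World) -> World:
    """Unique solution of an acyclic model given the parentless variables."""
    w = dict(exogenous)
    variables = set(exogenous) | set(equations)
    while len(w) < len(variables):
        for x, f in equations.items():
            if x not in w:
                try:
                    w[x] = int(f(w))
                except KeyError:
                    pass  # a parent of x is not yet determined
    return w

def causal_conditional(equations: Equations, w: World, a: str, c: str) -> bool:
    """a >c c is true at <M, w> iff C = 1 in M_a (Definition 1)."""
    eqs_a = dict(equations)
    eqs_a[a] = lambda _w: 1            # replace f_A by the constant A = 1
    # Variables without an equation in M_a keep their values from w.
    exogenous = {x: v for x, v in w.items() if x not in eqs_a}
    return solve(eqs_a, exogenous)[c] == 1

# Hypothetical chain X -> Y (equation Y = X): intervening on Y does not
# change X, because causation flows only downstream.
eqs = {"Y": lambda w: w["X"]}
w = {"X": 0, "Y": 0}
print(causal_conditional(eqs, w, a="Y", c="X"))  # False: X stays 0
```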
King Ludwig of Bavaria

Recall Example 1. You observe that the flag is down (¬f) and that the lights are on (l), so you believe these for sure. You also believe that the presence of the King causes the flag to be pulled up (F = K) and the lights to be on (L = K). The facts you believe for certain, ¬f and l, trump the causal relations you believe. No worlds are eliminated; some are just more plausible than others. On the left is a graphical representation of your causal model, where the set E of structural equations is {F = K, L = K}. 6 On the right is a table containing the set W of all possible worlds for the variables K, F, L. The most plausible worlds are w3 and w4.

Causal Conditional

You believe the causal conditional f >c k to be true iff f >c k is true at ⟨M, w3⟩ and ⟨M, w4⟩. But f >c k is false at ⟨M, w4⟩ because k is false at ⟨M_f, w4⟩. The intervention by f replaces the structural equation F = K by F = 1. K is unaffected by the intervention. For K is on the right-hand side of the structural equation, and so is not determined by an intervention on the left-hand side. More colloquially, causation flows only downstream, and K is causally upstream from F. Hence, k is not true at ⟨M_f, w4⟩. If the flag had been up, this would not have caused the King to be in the castle. The causal reading (2) of conditional (1) is thus inappropriate. And the extant causal model semantics cited in the Introduction have no problem accounting for this.

Evidential Conditional

You believe the evidential conditional f >e k to be true iff f >e k is true at ⟨M, wx⟩ for x ∈ {3, 4}. The evidential conditional f >e k is true at ⟨M, wx⟩ because k is true at all ⟨M, w′⟩, where w′ satisfies f and is otherwise most plausible. The antecedent f overwrites your certain belief in ¬f. The most plausible worlds that satisfy f are those that satisfy l as well, and the structural equations. The most plausible world is thus w1. Hence, we need only check whether k is true at ⟨M, w1⟩. This is the case. So you believe f >e k to be true. If the flag had been up, this would be evidence that the King is in the castle. The evidential reading (3) of conditional (1) is thus appropriate according to our account. The extant causal model semantics cannot account for this. Kratzer (1989, p. 640) agrees that (1) is acceptable but asks: "why wouldn't the lights be out and the King still be away?" Well, on an evidential reading, you first keep fixed your certain beliefs backed by the available evidence as much as the supposition of the antecedent allows; in a second step, you identify the worlds which satisfy more of the structural equations than any other worlds; and these are your most plausible worlds. Unlike w1, w6 does not keep fixed your certain belief in l; moreover, unlike w1, w6 does not satisfy the structural equations. The answer is thus that Kratzer's alternative is not compatible with your certain beliefs backed by your evidence and the causal relations you believe to be true. Or so says our account.
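The King Ludwig evaluation just described can be reproduced mechanically. The following self-contained Python sketch (ours, with hypothetical helper names) implements the lexicographic plausibility order, with the certain facts ¬f and l as the first tier and the structural equations F = K and L = K as the second, and checks the evidential conditional f >e k per Definition 2.

```python
from itertools import product

EQUATIONS = [lambda w: w["F"] == w["K"],   # F = K
             lambda w: w["L"] == w["K"]]   # L = K
CERTAIN = {"F": 0, "L": 1}  # certain facts: flag down (¬f), lights on (l)

def worlds():
    for k, f, l in product((0, 1), repeat=3):
        yield {"K": k, "F": f, "L": l}

def plausibility(w):
    """Lexicographic rank: first the number of certain facts matched, then
    the number of structural equations satisfied (higher = more plausible)."""
    certain_hits = sum(int(w[x] == v) for x, v in CERTAIN.items())
    eq_hits = sum(int(eq(w)) for eq in EQUATIONS)
    return (certain_hits, eq_hits)

def evidential(antecedent, consequent):
    """a >e c: c holds at all most plausible worlds satisfying a (Def. 2)."""
    sat = [w for w in worlds() if antecedent(w)]
    best = max(plausibility(w) for w in sat)
    return all(consequent(w) for w in sat if plausibility(w) == best)

# f >e k: "If the flag had been up, this would have been evidence that the
# King is in the castle."  The antecedent overrides the certain ¬f; the
# remaining tiers select the world where k, f, l all hold.
print(evidential(lambda w: w["F"] == 1, lambda w: w["K"] == 1))  # True
```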
Adams (1970, p. 90) puts forth an example similar to the following.

Oswald-Kennedy Example

Example 2 You believe that Oswald shot Kennedy (o) and that he acted alone: no one else shot Kennedy (¬s). However, you are not quite certain about this. By contrast, you believe for sure that Kennedy has been shot (k), either by Oswald or by someone else. So you accept the indicative conditional: (4) If Oswald didn't shoot Kennedy, someone else did. And yet you do not believe the corresponding subjunctive conditional: (5) If Oswald hadn't shot Kennedy, someone else would have.

On the left, you see a graphical representation of your causal model, where E = {K = S ∨ O}. On the right, you see the set W of all possible worlds for the variables S, O, K. You believe for sure that Kennedy has been shot (k). Moreover, you believe that it was either Oswald or someone else. w1, w2, w3 are thus the most plausible worlds.

Causal Conditionals

You believe the causal conditional ¬o >c k to be true iff ¬o >c k is true at ⟨M, wx⟩ for x ∈ {1, 2, 3}. But ¬o >c k is false at ⟨M, w2⟩ because k is false at ⟨M_¬o, w2⟩. If ¬o is set by intervention in w2, the structural equation determines ¬k. Hence, you do not believe ¬o >c k. You do not believe "If Oswald had not shot Kennedy, Kennedy would have been shot". 7 The fact you believe for certain is not kept fixed when evaluating the causal conditional. Rather, the value of the variable K is overwritten by the downstream effects of intervening by ¬o in world w2. Let us modify the example slightly. Suppose you believe for sure that someone else shot Kennedy; that is, you believe s. Hence, you believe w1 or w3 to be the actual world. Now you believe ¬o >c k to be true. Given that someone else shot Kennedy: if Oswald had not, Kennedy would still have been shot.

Evidential Conditionals

You believe the evidential conditional ¬o >e s to be true iff ¬o >e s is true at ⟨M, wx⟩ for x ∈ {1, 2, 3}. The evidential conditional ¬o >e s is true at each ⟨M, wx⟩ because s is true at all ⟨M, w′⟩, where w′ satisfies ¬o and is otherwise most plausible. The only such w′ is w3. So you believe ¬o >e s to be true. Given you believe for sure that Kennedy has been shot, if Oswald did not shoot Kennedy, (this is evidence that) someone else did. 8 To evaluate an evidential conditional, the variable assignments that are believed for sure must be true in the worlds at which the consequent is evaluated, unless this conflicts with the antecedent. In the present example, k is believed for sure and thus must be true at the evaluation worlds. By contrast, suppose our agent were not sure whether Kennedy was shot. Then she would not believe ¬o >e s to be true. For the evidential conditional is not true at w8. Adams (1970) uses his example to argue that indicative and subjunctive conditionals differ. We have already seen that the indicative/subjunctive distinction cross-cuts the causal/evidential one. The subjunctive conditional (1) is evidential, while the subjunctive (5) is causal, and the corresponding indicative (4) is evidential again. 9 Rott (1999, Sec. 3) discusses an example due to Hansson (1989).

Hamburger Example

Example 3 Suppose that one Sunday night you approach a small town of which you know that it has exactly two snackbars. Just before entering the town you meet a man eating a hamburger (h). You have good reason to accept the following indicative conditional: (6) If snackbar A is closed, snackbar B is open. Yet of the corresponding subjunctive conditional, (7) If snackbar A were closed, snackbar B would be open, Rott writes: "It seems clear to me that it is not justified to accept this conditional." According to Rott, the example illustrates that subjunctive conditionals are understood in an ontic way while indicative conditionals are understood in an epistemic way. The indicative (6) tells you how you would revise your beliefs upon learning that snackbar A is closed. By contrast, the subjunctive (7) tells you what the world would be like if snackbar A were closed. And A being closed does not cause B to be open. 10 Rott's reason for rejecting the subjunctive conditional (7) is thus that you think there is no causal relation between the antecedent and the consequent.
On the left, you see a graphical representation of your causal model, where E = {H = A ∨ B}. On the right, you see the set W of all possible worlds for the variables H, A, B. You believe for certain that snackbar A is open (a) and that the man eats a hamburger (h). Moreover, you believe that the man can only eat the hamburger if snackbar A or snackbar B is open. w1 and w3 are thus the most plausible worlds.

Causal Conditional

You believe the causal conditional ¬a >c b to be true iff ¬a >c b is true at ⟨M, w1⟩ and ⟨M, w3⟩. But ¬a >c b is false at ⟨M, w3⟩: even if ¬a is set by intervention in w3, b is false there. Hence, you do not believe ¬a >c b. You do not believe "If snackbar A were closed, this would cause snackbar B to be open"; for all we know, snackbar B might be closed as well if A were.

Evidential Conditional

Before seeing that snackbar A is in fact open, you believe h but you have no beliefs about a and b (except the structural equation). That is, you believe the worlds w1, w2, and w3 to be most plausible. Why? Because only these worlds satisfy h and the structural equation. You believe the evidential conditional ¬a >e b to be true iff ¬a >e b is true at ⟨M, wx⟩ for x ∈ {1, 2, 3}. The evidential conditional ¬a >e b is true at ⟨M, wx⟩ because b is true at all ⟨M, w′⟩, where w′ satisfies ¬a and is otherwise most plausible. The only world w′ that satisfies ¬a, h, and the structural equation is w2, and b is true there. So you believe ¬a >e b to be true. Given you believe for sure that the man eats a hamburger, if snackbar A is closed, (this is evidence that) snackbar B is open. 11 Our account delivers the desired results. And it seems that the evidential/causal distinction is similar to the epistemic/ontic distinction. In light of the King Ludwig example, however, we refrain from associating these distinctions with the mood of a conditional.

Backtracking

A backtracking conditional traces some causes from an effect: if this effect had not occurred, (it must have been that) some of its causes would have been absent. 12 To illustrate such conditionals, consider the following scenario inspired by Veltman (2005, p. 179).

Example 4 Tom and Marianne wait in the lobby to be interviewed for a great job. Tom goes in first, Marianne continues to wait outside the interview room. When Tom comes out he looks rather unhappy. Marianne thinks: (8) If Tom had left the interview smiling, the interview would have gone well.

(8) is a backtracking conditional. Tom had an interview and comes out looking rather unhappy. Marianne believes the causal hypothesis that the interview influences whether or not Tom looks happy. In particular, she believes that the interview going badly (¬i) caused him to look rather unhappy (¬s). Assuming that the effect (looking unhappy) is not the case, the cause (the interview not going well) must have been different. On the left, you see a graphical representation of your causal model, where E = {S = I}. On the right, you see the set W of all possible worlds for the variables S, I. You believe for certain that Tom looks rather unhappy (¬s). Moreover, you believe that whether or not the interview went well affects whether or not Tom is smiling. So w4 is the most plausible world.

Causal Conditional

Like all backtracking conditionals, you do not believe the backtracking counterfactual (8) under the causal reading. An intervention on a variable effects changes only causally downstream, and so does not change its causal past. You believe the causal conditional s >c i to be true iff s >c i is true at ⟨M, w4⟩. But s >c i is false at ⟨M, w4⟩ because i is false at ⟨M_s, w4⟩.
Even if s is set by intervention in w4, i is false there. Hence, you do not believe s >c i. You do not believe "If Tom had left the interview smiling, this would have caused the interview to go well". 13

Evidential Conditional

You believe the evidential conditional s >e i to be true iff s >e i is true at ⟨M, w4⟩. The evidential conditional s >e i is true at ⟨M, w4⟩ because i is true at all ⟨M, w′⟩, where w′ satisfies s and is otherwise most plausible. The only world w′ that satisfies s and the structural equation is w1, and i is true there. So you believe s >e i to be true. If Tom had left the interview smiling, this would have been evidence that the interview went well. Our evidential conditional captures backtracking.

A Generalization

So far, our account relies on a simple distinction between believing a particular fact for certain and not doing so. This simple distinction factors into the plausibility order over worlds. We have said that worlds that assign to more variables the values about which the agent is certain are more plausible than worlds which assign those values to fewer variables. But the underlying distinction is too simple, as we will show now. Then we will generalize our account in response. Consider a modification of the Oswald-Kennedy example. 14

Example 5 You believe that Oswald shot Kennedy (o) and that he acted alone: no one else shot Kennedy (¬s). You also believe that Kennedy has been shot (k), either by Oswald or by someone else. You are, however, not certain about any of your beliefs. Still, you are much more certain that Kennedy has been shot than that it was Oswald. So you accept the conditional: (4) If Oswald didn't shoot Kennedy, someone else did.

On our present account, however, you do not believe (4). The reason is that you have no certain beliefs. The most plausible worlds are thus all the worlds that satisfy the structural equation K = S ∨ O. These worlds are w1, w2, w3, and w8 of the original Oswald-Kennedy example. You believe ¬o >e s to be true iff ¬o >e s is true at ⟨M, wx⟩ for x ∈ {1, 2, 3, 8}. But ¬o >e s is false at ⟨M, w8⟩ because s is false at w8. Hence, you do not believe ¬o >e s. The result would be acceptable if you were just as sure that Kennedy was killed as that it was Oswald (cf. Sect. 4.2). As long as you are more certain that Kennedy was killed, however, the result is unacceptable. The problem seems to be that the present account is only sensitive to certain beliefs about particular facts. 15 In the modified Oswald-Kennedy example, by contrast, it seems to matter that you are much more certain that Kennedy has been shot than that it was Oswald. But the present account is blind to this relative certainty. We modify our account. The idea is that it is not certainty but relative certainty that actually matters. You believe that a particular fact is more certain than another. And this makes, in your view, some worlds more plausible than others. We therefore replace (i) in our plausibility order as follows: worlds that assign to more variables the values about which the agent is most certain are more plausible than worlds which assign those values to fewer variables. An agent is most certain about a variable value if (a) she is at least quite certain about the variable value, and (b) there is no variable value of which she is more certain. The modification accounts for Example 5. To see this, recall that you are quite confident that Kennedy has been shot and more certain of this than that it was Oswald.
Moreover, you are more certain that Oswald did it than somebody else. There are no other variable values involved. Hence, you are most certain that Kennedy has been shot, even though you are not certain of it. There is no other variable value of which you are most certain. In the presence of the structural equation, the most plausible worlds are thus w1, w2, w3. And so you believe (4). Our final account treats the modified Oswald-Kennedy example like our preliminary account treats the original one. Our final account is a proper generalization of the preliminary account. If you believe a variable value for certain, you also believe it for most certain. Believing a particular fact for certain is, so to speak, the highest degree of being most certain. It therefore comes as no surprise that our final and preliminary accounts agree on all the examples of the previous sections. What we have learned in this section is this: in general, it is relative certainty that matters for the evaluation of evidential conditionals.
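One way to prototype this generalization (a sketch under our own assumptions, with made-up certainty degrees) is to replace the set of certain facts by graded certainty weights; only the values the agent is most certain about, in the sense of clauses (a) and (b) above, populate the first tier of the plausibility order.

```python
from itertools import product

# Hypothetical degrees of certainty for the modified Oswald-Kennedy case:
# quite certain of k, less certain of o, even less certain of ¬s.
CERTAINTY = {("K", 1): 0.95, ("O", 1): 0.80, ("S", 0): 0.60}
THRESHOLD = 0.90                                       # "at least quite certain"
EQUATIONS = [lambda w: w["K"] == (w["S"] or w["O"])]   # K = S ∨ O

# Most certain values: at least quite certain, and no more-certain rival.
top = max(CERTAINTY.values())
MOST_CERTAIN = {xv: d for xv, d in CERTAINTY.items()
                if d >= THRESHOLD and d == top}

def plausibility(w):
    hits = sum(int(w[x] == v) for (x, v) in MOST_CERTAIN)
    eq_hits = sum(int(eq(w)) for eq in EQUATIONS)
    return (hits, eq_hits)

def evidential(antecedent, consequent):
    ws = [dict(zip("SOK", bits)) for bits in product((0, 1), repeat=3)]
    sat = [w for w in ws if antecedent(w)]
    best = max(plausibility(w) for w in sat)
    return all(consequent(w) for w in sat if plausibility(w) == best)

# (4) "If Oswald didn't shoot Kennedy, someone else did."  With k in the
# first tier and the equation in the second, the only surviving ¬o world
# has s, so the conditional comes out believed.
print(evidential(lambda w: w["O"] == 0, lambda w: w["S"] == 1))  # True
```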
Conclusion

We have put forth a causal model account for the evaluation of causal and evidential conditionals. An agent comes equipped with beliefs about causal relations and beliefs about particular facts. Our account, like many other causal model accounts, captures causal conditionals rather well. Unlike other causal model accounts, ours captures evidential conditionals as well. Embedding a causal model in an agent's belief state enables the agent to evaluate evidential conditionals better than without the information encoded in the causal model. Unsurprisingly, the believed causal relations help determine which facts are to be kept fixed when evaluating a conditional. Notably, however, neither the sophisticated causal model account of conditionals due to Pearl (2011) nor the one due to Halpern (2016) provides the desired results for evidential conditionals like the one in the King Ludwig example. We have thus provided the first extension of a causal model account to what we call evidential conditionals. Deng and Lee (2021) proposed a causal model semantics that has structural similarities to our account. 16 Their semantics is designed to account for conditionals in the indicative and subjunctive mood, respectively. And indeed, they capture the Oswald-Kennedy conditionals and other minimal pairs in a way that parallels our account. However, like the other causal model semantics, theirs faces trouble with the King Ludwig conditional. For Kratzer's "If the flag had been up, the King would have been in the castle" is a subjunctive conditional which comes out false under their account of subjunctive conditionals. Our account, by contrast, can explain why this evidential subjunctive is acceptable. We have observed that the distinction between evidential and causal conditionals cross-cuts the distinction between indicative and subjunctive conditionals. Our account relies on beliefs about causal relations and can therefore explain why certain evidential subjunctives are acceptable. Furthermore, our distinction between evidential and causal seems to refine the distinction between epistemic and ontic conditionals drawn by Lindström and Rabinowicz (1995). In the Hamburger example, both the evidential/epistemic and the causal/ontic conditional are epistemic in the sense that they are evaluated relative to an agent's beliefs. Of course, we do not claim that our account captures all readings of all conditionals. It has, in particular, no resources to model uncertainty about causal relations. A generalization to remedy this situation must await another occasion. Still, our account is quite widely applicable given its simplicity. So we may hope that our account is a first step towards a more sophisticated account based on which we can capture our intuitions about causal and evidential conditionals. A causal model account of the type proposed here may one day even instruct an intelligent machine to evaluate conditionals just like we do.
Liver Fibrosis: Therapeutic Targets and Advances in Drug Therapy

Liver fibrosis is an abnormal wound repair response caused by a variety of chronic liver injuries. It is characterized by over-deposition of diffuse extracellular matrix (ECM) and anomalous hyperplasia of connective tissue, and it may further develop into liver cirrhosis, liver failure or liver cancer. To date, chronic liver diseases accompanied by liver fibrosis have caused significant morbidity and mortality in the world, with an increasing tendency. Although early liver fibrosis has been reported to be reversible, the detailed mechanism of reversing liver fibrosis is still unclear and there is a lack of effective treatments for liver fibrosis. Thus, the research and development of anti-fibrosis drugs remains a top priority. In recent years, many strategies have emerged as crucial means to inhibit the occurrence and development of liver fibrosis, including anti-inflammation and liver protection, inhibition of hepatic stellate cell (HSC) activation and proliferation, reduction of ECM overproduction, and acceleration of ECM degradation. Moreover, gene therapy has proved to be a promising anti-fibrosis method. Here, we provide an overview of the relevant targets and drugs under development. We aim to classify and summarize their potential roles in the treatment of liver fibrosis, and discuss the challenges and development of anti-fibrosis drugs.

INTRODUCTION

Liver fibrosis is an abnormal repair reaction to chronic liver injury caused by various conditions, such as chronic hepatitis B (CHB), chronic hepatitis C (CHC) and alcoholic fatty liver disease (AFLD). It is characterized by diffuse excessive production and deposition of extracellular matrix (ECM) in the liver (Poynard et al., 1997; Benhamou et al., 1999; Pinzani and Macias-Barragan, 2010; Povero et al., 2010). The organism first initiates a pro-inflammatory mechanism as injury accumulates. With the pro-inflammatory reaction, the normal structure and physiological function of the liver tissue are gradually destroyed, causing the production of scar tissue that replaces the liver parenchyma. This further develops into liver cirrhosis, liver failure or liver cancer, which eventually leads to the death of the patient (Zoubek et al., 2017). In recent years, with in-depth study of the mechanisms underlying the occurrence and development of liver fibrosis and the use of clinical drugs, it has been found that clearing pathogens or removing the etiology, such as blocking or curing virus infection, has the potential to reverse liver fibrosis. Yet there are still great difficulties in the reversal of liver fibrosis. Although many anti-fibrotic candidate drugs have shown good results in experimental animal models, their anti-fibrotic effects in clinical trials remain very limited. In this review, we classify and summarize the relevant targets and drugs under research and development for the treatment of liver fibrosis worldwide, explore their potential roles and curative effects, and discuss the challenges in the research and development of anti-fibrosis drugs.

PATHOGENESIS OF LIVER FIBROSIS

Liver fibrosis is caused by chronic liver injuries, which can be induced by virus infection, autoimmune diseases, metabolic diseases, drug toxicity, alcoholic liver disease (ALD), nonalcoholic fatty liver disease (NAFLD) and so on (de Alwis and Day, 2008; Pinzani and Macias-Barragan, 2010).
With short-term liver injury, liver fibrosis will not occur, owing to the balance of pro-fibrosis and anti-fibrosis mechanisms. However, when long-term or chronic liver injury occurs, the hepatocyte membrane is destroyed, causing hepatocyte necrosis and apoptosis. Injured hepatocytes release damage-associated molecular patterns (DAMPs), which directly stimulate the transformation of quiescent hepatic stellate cells (HSCs) into activated ones. The fibrogenic phenotype of HSCs is then activated, and excessive ECM is produced, with type I and III collagen and fibronectin as the main components. This breaks the balance between matrix metalloproteinases (MMPs) and tissue inhibitors of metalloproteinases (TIMPs), which regulate the synthesis and degradation of ECM: MMPs, which promote ECM degradation, decrease, while TIMPs, which inhibit MMPs, increase. The imbalance between MMPs and TIMPs leads to excessive deposition of ECM in the space of Disse and the formation of scar (Zhu et al., 2004; Tacke and Weiskirchen, 2012). The imbalance between pro-fibrosis and anti-fibrosis mechanisms results in the destruction of liver tissue structure and normal physiological function, and eventually leads to the formation of liver fibrosis. Moreover, activated HSCs have increased contractility, highly express alpha smooth muscle actin (α-SMA), and secrete cytokines such as transforming growth factor beta 1 (TGF-β1), platelet-derived growth factor (PDGF), and connective tissue growth factor (CTGF). This autocrine signaling of activated HSCs continuously activates further HSCs. Activated HSCs also secrete chemokines, move to the injured liver site, chemotactically accumulate in the inflammatory compartment and aggravate inflammatory damage. In addition, DAMPs released by injured hepatocytes stimulate the activation of Kupffer cells and other immune cells, which further stimulate the activation of HSCs and maintain their survival by secreting pro-inflammatory and pro-fibrotic factors that induce inflammation, such as PDGF, TGF-β1, tumor necrosis factor alpha (TNF-α) and interleukin-1 beta (IL-1β), and by activating the TGF-β1/Smad signaling pathway, the mitogen-activated protein kinase (MAPK) signaling pathway and other pathways. Furthermore, Kupffer cells secrete chemokine (C-C motif) ligand 2 (CCL2) and CCL5, which recruit monocytes to the inflammatory injured site. Monocytes further cause hepatocyte injury, promote HSC activation and aggravate inflammation and fibrosis by synthesizing and secreting pro-inflammatory and pro-fibrogenic substances, including apoptosis signal-regulating kinase 1 (ASK1), pan-caspase, Galectin-3 (Gal-3) and so on (Aydin and Akcali, 2018; Roehlen et al., 2020). In addition, TGF-β1 stimulates monocytes to differentiate into macrophages. Macrophages produce inflammatory mediators, such as IL-1 and IL-6, which promote the aggravation of the inflammatory response and the continuous activation and survival of HSCs. The paracrine signaling of Kupffer cells and macrophages thus affects the activation of HSCs. The pathogenesis of liver fibrosis is shown in Figure 1.

PROGRESS IN THE TREATMENT OF LIVER FIBROSIS

Etiological Treatment

Chronic liver disease is a major health problem in the world, causing about 2 million deaths every year. Liver cirrhosis has become the 11th most common cause of death in the world (Asrani et al., 2019).
As an early stage of liver cirrhosis, liver fibrosis is a reversible and complex pathological process caused by various chronic liver diseases, just as organ fibrosis is a feature of the progression of chronic inflammatory diseases. Therefore, treating liver fibrosis is a top priority. Etiological treatment, that is, elimination of the primary pathogenic factors, is the primary countermeasure against liver fibrosis. If the etiology is effectively suppressed or removed, persistent liver injury is reduced, which is of great significance for blocking or reversing liver fibrosis. The main causes of chronic liver diseases and liver fibrosis are hepatitis virus infections, such as CHB and CHC. Thus, anti-hepatitis virus therapy plays an important role in the treatment of liver fibrosis. At present, the main drugs for the treatment of CHB are nucleotide analogs and interferon. Entecavir, a first-line antiviral drug, was used to treat 120 CHB patients with liver fibrosis, among whom 54 patients (45%) showed fibrosis regression after 78 weeks of antiviral treatment, indicating that liver stiffness continued to decrease and liver fibrosis was alleviated after effective antiviral treatment. Tenofovir effectively inhibited hepatic fibrosis in 148 patients with advanced fibrosis or liver cirrhosis complicated with human immunodeficiency virus-hepatitis B virus (HIV-HBV) coinfection (Boyd et al., 2010). Nowadays, the most effective treatment for CHC is the use of direct-acting antiviral agents (DAAs) (Premkumar and Dhiman, 2018). One of the commonly used DAAs is epclusa, "the third generation product of Gilead," which is a combination tablet of velpatasvir and sofosbuvir (Abramowicz et al., 2017). Sofosbuvir is a nucleoside HCV NS5B polymerase inhibitor; in a prospective study, liver fibrosis scores and liver stiffness decreased significantly in 32 CHC patients with liver fibrosis after 12 weeks of treatment (Bernuth et al., 2016). Epclusa, as a pan-genotypic drug, is used for all 6 genotypes of hepatitis C, and its cure rate for hepatitis C is up to 98%, higher than that of sofosbuvir. However, the use of epclusa is limited by the restrictions of medical insurance. Moreover, "the fourth generation product of Gilead," vosevi, which is based on epclusa with the addition of voxilaprevir, was approved for marketing in China at the end of December 2019. Its treatment spectrum is wider than that of epclusa. It is used to treat hepatitis C patients in whom treatment with epclusa failed, and the cure rate is close to 100% (Link et al., 2019). Therefore, liver fibrosis caused by these chronic diseases is treated or even reversed through the cure of the underlying disease, that is, the removal of the cause.

FIGURE 1 | Pathogenesis of liver fibrosis. Activation of HSCs is a crucial step in the occurrence and progression of liver fibrosis. Quiescent HSCs are activated to a fibrogenic phenotype by DAMPs released by injured hepatocytes. Activated HSCs are continuously activated and proliferate through paracrine and autocrine signaling. They secrete abundant fibrogenic cytokines and produce excessive ECM, which breaks the balance of the pro-fibrosis/anti-fibrosis mechanism. The pro-fibrosis mechanism leads to the abnormal formation of scar and eventually induces liver fibrosis (Tacke and Weiskirchen, 2012; Roehlen et al., 2020).
In addition, autoimmune hepatitis (AIH), drug-induced liver injury (DILI), non-alcoholic steatohepatitis (NASH), and alcoholic steatohepatitis (ASH) are also major causes of liver fibrosis (Mueller et al., 2010; Valera et al., 2011; Tacke and Weiskirchen, 2021; Wan et al., 2021). It has been reported that the fibrosis and histological activity indices of 54 AIH patients with liver fibrosis who received immunosuppressive therapy decreased significantly (Valera et al., 2011), showing that immunosuppressive therapy is an important method for reversing liver fibrosis in AIH. Moreover, DILI patients with liver fibrosis should reduce or stop the use of the drugs that induced the liver injury and fibrosis. Patients with liver fibrosis caused by NASH should maintain a balanced diet, exercise more and control their weight, while patients with ASH must abstain from alcohol.

Anti-inflammatory Treatment

The occurrence and development of fibrosis is always accompanied by an inflammatory response. In liver injury, Kupffer cells first initiate the inflammatory cascade, release a variety of inflammatory factors and secrete CCL2/5 to recruit inflammatory monocytes, macrophages and lymphocytes to the injury site. Macrophages release ASK1, TNF-α and pan-caspase, which further aggravate inflammatory injury. In addition, Kupffer cells produce cytokines, such as TGF-β1, PDGF, IL-6 and Gal-3, which promote the activation and proliferation of HSCs. Furthermore, peroxisome proliferator-activated receptors (PPARs), regulatory factors that counteract liver fibrosis, are inhibited by activated HSCs, resulting in excessive production and deposition of ECM and the occurrence and development of liver fibrosis (Li and Wang, 2007; Luo et al., 2017). Moreover, chronic liver diseases are generally accompanied by increased de novo lipogenesis (DNL) in the liver (Tamura and Shimomura, 2005; Ameer et al., 2014). Fat is broken down into fatty acids under the action of lipase, and the excessive accumulation of fatty acids in the liver also leads to hepatotoxicity and inflammation (Ameer et al., 2014). Therefore, inhibiting the accumulation of fat in the liver and reducing the secretion of inflammatory cytokines and the release of apoptotic proteins is an important measure for preventing liver fibrosis. The research progress of related drugs is summarized in Table 1. After liver injury, overproduction of the chemokines CCL2/CCL5 is induced in mouse and human livers, and this correlates with the severity of liver fibrosis. In two models of liver fibrosis, induced by carbon tetrachloride (CCl4) and by a methionine- and choline-deficient diet, Ccl5−/− mice showed reduced HSC activation, immune cell infiltration and liver fibrosis. Met-CCL5, an antagonist of chemokine (C-C motif) receptor 5 (CCR5), effectively inhibited the migration, proliferation and collagen secretion of HSCs, significantly ameliorated liver fibrosis in experimental model mice, and accelerated the regression of fibrosis (Berres et al., 2010). Therefore, it is feasible to reduce experimental liver fibrosis by antagonizing CCR5. Cenicriviroc (CVC) is a dual inhibitor of CCR2 and CCR5, and it reduced the accumulation of pro-inflammatory macrophages in animal models of liver fibrosis.
In a phase II clinical trial (NCT02217475) of NASH patients with liver fibrosis, fibrosis was ameliorated with good safety in patients who received 150 mg CVC for 2 years (Friedman et al., 2016). The results of a phase 2b clinical trial of 289 patients showed that, after one year of treatment, twice as many patients in the 150 mg CVC group as in the placebo group had ameliorated fibrosis without deterioration of steatohepatitis. In Friedman's study, the safety and tolerability of CVC were comparable to placebo, and the main adverse reactions were fatigue, diarrhea, and headache (Friedman et al., 2018; Tacke, 2018). However, a phase III study (NCT03028740) in patients with advanced fibrosis and cirrhosis, which aimed to assess the efficacy and safety of CVC treatment, was recently terminated due to lack of efficacy (Anstee et al., 2020). This indicates that the effectiveness of inhibiting CCR2/CCR5 for the treatment of liver fibrosis still needs to be verified in further studies. Overexpression of galectins significantly promotes the inflammatory response and aggravates liver fibrosis. Gal-3, a galactose-binding lectin with immune functions, is secreted by activated Kupffer cells and macrophages in the inflammatory state and participates in the pathophysiological process of liver fibrosis (Gudowska et al., 2015; Moon et al., 2018). Belapectin (GR-MD-02), a Gal-3 inhibitor, is a complex carbohydrate drug, and it proved to be safe and well tolerated at the maximum dose of 8 mg/kg in a phase I clinical trial of NASH patients with advanced fibrosis. However, a recent phase 2b multicenter placebo-controlled clinical trial (NCT02462967) of belapectin in patients with liver fibrosis, NASH, and cirrhosis showed that, although a dose of 2 mg/kg for 52 weeks was tolerated and safe compared with placebo, it had no significant effect on the reduction of fibrosis or NASH scores. A high proportion of patients in both the placebo group and the belapectin group had adverse reactions such as infections and gastrointestinal disorders, with severity of grade 1 (mild) or grade 2 (moderate). Nevertheless, Chalasani's study showed that belapectin at a dose of 2 mg/kg reduced the hepatic venous pressure gradient and variceal development in NASH patients without esophageal varices (Chalasani et al., 2020). In addition, aspirin, a classic antipyretic and analgesic, was found to exert a significant anti-inflammatory effect by inhibiting IL-6 and TNF-α and reducing the number of inflammatory cells, and it also inhibited the activation and proliferation of HSCs and liver fibrosis via inhibition of the toll-like receptor 4 (TLR4)/nuclear factor kappa B (NF-κB) signaling pathway. These results suggest that aspirin is a potentially effective drug for the treatment of liver fibrosis. IL-1β and the IL-1 receptor antagonist (IL-1ra) are important mediators of chronic liver disease. IL-1ra treatment had a certain anti-fibrotic effect in the bile duct ligation (BDL)-induced mouse model of hepatic fibrosis, but it had a pro-fibrotic effect in the CCl4-induced mouse model of hepatic fibrosis (Meier et al., 2019). This suggests that blocking IL-1-mediated inflammation may be only selectively beneficial in liver fibrosis. De novo lipogenesis plays a major role in fatty acid metabolism and is a necessary link in HSC activation. The first step of DNL is catalyzed by acetyl-CoA carboxylase (ACC), the rate-limiting enzyme.
It has been reported that inhibition of ACC decreased liver steatosis and serum fibrosis biomarkers in patients with NASH, depressed the profibrotic activity of HSCs, and reduced the severity of liver fibrosis in a diethylnitrosamine (DMN)-induced chemical liver injury model and a high-fat diet-induced rat model (Ross et al., 2020). An ACC small-molecule inhibitor, GS-0976, was used in a phase II randomized placebo-controlled trial (NCT02856555) of 126 NASH patients with F1-F3 fibrosis. The results showed that the fibrosis marker TIMP1 declined in a dose-dependent manner in patients who received 20 mg/d GS-0976 for 12 weeks, accompanied by a 30% decrease in liver fat and a reduction of liver injury markers, but there was no change in liver stiffness. In addition, GS-0976 was safe, although plasma triglyceride levels > 500 mg/dL were observed in 16 patients, which may promote atherosclerosis (Loomba et al., 2018a). The effects of GS-0976 on cardiovascular function therefore need to be determined in long-term studies. The novel ACC1/2 inhibitor WZ66 was reported to significantly improve NASH-related liver function by reducing steatosis, triglycerides and other lipids, and by inhibiting the activation of Kupffer cells and HSCs in a high-fat diet-induced mouse model (Gao et al., 2020). Glucagon-like peptide-1 (GLP-1) directly ameliorates liver fibrosis by increasing insulin release, reducing glucagon secretion, decreasing the concentration of liver enzymes and depressing hepatic steatosis. The GLP-1 analog liraglutide was studied in a phase II randomized placebo-controlled trial (NCT01237119) of 52 patients with NASH. Liver biopsy results showed that 39% of patients who received 1.8 mg/d liraglutide continuously for 48 weeks had definite improvement of non-alcoholic steatohepatitis without further exacerbation of liver fibrosis, compared with only 9% of patients in the placebo group. Furthermore, only 9% of patients in the liraglutide group showed progression of fibrosis, compared with 36% in the placebo group. The safety and tolerability of liraglutide were comparable to placebo, and the main adverse events in the liraglutide group were gastrointestinal disorders, nausea and diarrhea of grade 1 or grade 2 severity (Armstrong et al., 2016). Resmetirom (MGL-3196) is an orally active, selective thyroid hormone receptor β agonist that aims to improve NASH by increasing liver fat metabolism and reducing lipotoxicity. The results of a 36-week multicenter randomized double-blind placebo-controlled trial (NCT02912260) of 348 patients showed that, in patients with F1-F3 fibrosis treated with a dose of 80 mg, resmetirom decreased liver fat content by 32.9% and 37.3% after 12 and 36 weeks of treatment, respectively, compared with 10.4% and 8.5% in the placebo group. Moreover, most adverse events, such as diarrhea and nausea, were mild or moderate.

Antioxidant Stress

Oxidative stress is an important factor in liver injury and liver fibrosis. Oxidative stress produces excessive reactive oxygen species (ROS) and active free radicals in the liver, which weakens the antioxidant defenses and causes an increase of active free radicals in hepatocytes, a decrease in their scavenging, and destruction of the hepatocyte membrane. These changes impair the synthetic and degradative functions of hepatocytes and lead to hepatocyte necrosis and apoptosis.
In addition, ROS also promote the activation of HSCs and liver fibrosis by causing peroxidative damage to Kupffer cells and neutrophils, up-regulating the gene expression of collagen type I alpha 2 in the liver, and triggering inflammation (Schwabe and Brenner, 2006; Yang et al., 2017). To date, the common anti-oxidative stress and hepatocyte-protective drugs include reduced glutathione, tiopronin, silymarin, S-allylcysteine (SAC), oroxylin A, methyl ferulic acid (MFA) and others. Reduced glutathione protects the hepatocyte membrane from damage by active free radicals by accelerating free-radical scavenging. Tiopronin not only scavenges free radicals but also promotes hepatocyte regeneration. Silymarin, a classical drug for repairing liver injury, inhibits the formation of lipid peroxides and stabilizes the hepatocyte membrane; it also has liver-protective and anti-fibrotic effects. SAC inhibited the fibrotic process and improved the survival rate of rats with CCl4-induced liver fibrosis in a dose-dependent manner, and its therapeutic effect is better than that of N-acetylcysteine (Kodai et al., 2015). Therefore, SAC is expected to become an effective drug for the treatment of liver fibrosis. RAP-8 showed an anti-fibrotic effect by inhibiting oxidative stress and promoting cell cycle arrest. Oroxylin A effectively alleviated liver fibrosis by clearing ROS, suppressing phosphatidylinositol 3-kinase (PI3K)/AKT/mTOR signal transduction, and inhibiting the secretion of pro-inflammatory cytokines in activated HSCs (Shen et al., 2020). MFA, a bioactive monomer, has a protective effect against liver injury. It inhibited liver fibrosis in CCl4-treated rats by inhibiting the TGF-β1/Smad and NADPH oxidase 4 (NOX4)/ROS signaling pathways, down-regulating the levels of procollagen type III, collagen type IV and laminin, up-regulating the MMP2/TIMP1 ratio, and inhibiting the synthesis of ECM and the activation of HSCs (Cheng et al., 2019). Nicotinamide adenine dinucleotide phosphate (NADPH) oxidase (NOX) is a multicomponent transmembrane enzyme complex whose family comprises NOX1-5, DUOX1 and DUOX2. When cells are stimulated, NOX receives the signal to produce ROS, which then causes oxidative damage. The key NOX-producing cells in the liver are Kupffer cells and HSCs: Kupffer cells produce only NOX2, while HSCs produce NOX1, NOX2 and NOX4. In the process of hepatic fibrosis, NOX1, NOX2 and NOX4 play a key role in HSC activation, proliferation and ECM synthesis, and NOX4 is also involved in hepatocyte apoptosis (De Minicis and Brenner, 2007; Mortezaee, 2018). As a dual NOX1/4 inhibitor, GKT137831 reduced the production of ROS in HSCs both in vitro and in vivo. It significantly inhibited the formation of hepatic fibrosis and hepatocyte apoptosis in prevention and treatment groups in mouse models of hepatic fibrosis induced by CCl4 and BDL (Aoyama et al., 2012; Jiang et al., 2012). Currently, GKT137831 is being evaluated for anti-fibrotic effects after 24 weeks of treatment in a trial (NCT03226067) in patients with primary biliary cholangitis, which is expected to provide informative results. Angiotensin II (Ang II) promotes fibrosis by phosphorylating the non-phagocytic NOX regulatory subunit p47phox and inducing oxidative stress. Losartan, an Ang II receptor blocker, ameliorated inflammation and fibrosis in 50% of patients given 50 mg/d for 18 months in a clinical trial of 14 HCV patients with liver fibrosis (Colmenero et al., 2009).
However, this trial lacked a control group, and the reliability of the results needs further confirmation.

Inhibition of Hepatocyte Apoptosis

In the process of hepatic fibrosis, hepatocyte death and apoptosis are the main drivers of inflammation and HSC activation. Dead hepatocytes release DAMPs that activate HSCs and Kupffer cells. Hepatocyte apoptosis activates the Fas death receptor, which induces the release of apoptotic bodies and finally leads to a fibrogenic response (Mihm, 2018). In addition, phagocytosis of apoptotic cells activates HSCs (Witek et al., 2009). Therefore, inhibition of hepatocyte apoptosis is beneficial for inhibiting inflammation, preventing the activation of HSCs and reducing liver fibrosis. Whether apoptosis proceeds through the intrinsic or the extrinsic pathway, the final common step of hepatocyte apoptosis is carried out by a family of cysteine proteases termed caspases, which promote apoptosis by activating apoptotic proteases. The pan-caspase inhibitor VX-166 inhibited hepatocyte apoptosis, decreased hepatic steatosis, and delayed the progression of fibrosis in a NASH mouse model, but did not cause significant improvement in liver injury (Witek et al., 2009). Emricasan, a small-molecule pan-caspase inhibitor, significantly ameliorated liver injury and fibrosis in a NASH mouse model by inhibiting caspase activity and reducing hepatocyte apoptosis, improving the inflammatory environment and inhibiting HSC activation (Barreyro et al., 2015). Moreover, emricasan markedly ameliorated fibrosis, portal hypertension and liver function by improving hepatic sinusoidal microvascular dysfunction in rats with advanced liver cirrhosis induced by CCl4 (Gracia-Sancho et al., 2019). Emricasan also improved liver function in patients with severe liver cirrhosis, with good safety and tolerability and adverse reactions similar to the placebo group, in a 3-month multicenter phase II randomized clinical trial (NCT02230670) of 74 patients with liver cirrhosis. The new or worsening decompensation events in the placebo group were mainly ascites, while those in the emricasan group were mainly hepatic encephalopathy, which was generally caused by the patients' underlying disease (Frenette et al., 2019). However, emricasan did not improve liver inflammation or fibrosis in NASH patients with F1-F3 fibrosis who received 72 weeks of 5 mg/d or 50 mg/d treatment (NCT02686762); it may even cause more severe liver fibrosis and hepatocyte swelling, possibly owing to the activation of other cell death or necrosis mechanisms (Harrison et al., 2020a). Tumor necrosis factor alpha participates in the activation and expression of apoptotic ligands. Its inhibitor pentoxifylline prevented porcine serum-induced liver fibrosis in rats by inhibiting the production of IL-6 and the proliferation of HSCs. In addition, pentoxifylline decreased the inflammatory state, reduced oxidative stress and ameliorated the degree of liver fibrosis in patients with NASH by inhibiting the transcription of the TNF-α gene, but it had no significant effect on patients with alcoholic hepatitis (Toda et al., 2009). A study showed that β-elemene prevented hepatic fibrosis by down-regulating the expression of serum TNF-α and liver CD14 and decreasing plasma endotoxin in rats with CCl4-induced hepatic fibrosis (Liu et al., 2011). Another way to reduce hepatocyte death associated with liver injury is to suppress stress signals.
ASK1, which is activated by a variety of pro-fibrotic factors, activates the MAPK signaling pathway and participates in hepatocyte apoptosis, inflammation and fibrosis. Selonsertib (GS-4997), a selective ASK1 inhibitor, inhibited HSC proliferation and ECM production by blocking the ASK1/MAPK pathway and significantly alleviated DMN-induced liver fibrosis in rats (Yoon et al., 2020). Selonsertib was used to treat 74 NASH patients with F2-F3 fibrosis for 24 weeks at a dose of 6 mg/d or 18 mg/d in a multicenter phase II clinical trial. The results demonstrated that selonsertib decreased fibrosis-related markers and biomarkers of apoptosis, reduced inflammation, and ameliorated hepatic fibrosis. Most patients experienced only mild or moderate adverse reactions, such as headache, nausea and rhinitis, although three patients in the selonsertib group discontinued treatment due to serious adverse reactions (Loomba et al., 2018b; Harrison et al., 2020c). Because no placebo control group was included in this study, the results need to be confirmed by further studies. Indeed, selonsertib had no anti-fibrotic effect on NASH patients with F3 or F4 fibrosis after 48 weeks of treatment at 6 mg/d or 18 mg/d in phase III clinical trials (NCT03053050; NCT03053063) (Harrison et al., 2020c).

Inhibition of Activation and Proliferation of Hepatic Stellate Cells

The activation of HSCs is a key event in the occurrence and development of liver fibrosis. Quiescent HSCs are activated after stimulation by liver injury. HSCs are continuously activated by TGF-β1, PDGF, CTGF and other cytokines secreted by Kupffer cells and other cells, which promote HSC proliferation and prolong HSC survival through related signaling pathways. Furthermore, autocrine signaling by HSCs also sustains their activation (Aydin and Akcali, 2018). In addition, the activation of HSCs is promoted by certain enzymes, such as 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA) reductase and dipeptidyl peptidase-4 (DPP4). Activated HSCs, as the main source of ECM, lead to the deposition of large amounts of ECM, the formation of scar tissue, the destruction of normal liver tissue structure and function, and the occurrence of liver fibrosis. Therefore, inhibiting the activation and proliferation of HSCs is the key to alleviating or even reversing hepatic fibrosis. The related drugs are summarized in Table 2.

Inhibition of the TGF-β1/Smad Signaling Pathway

Transforming growth factor-β1 (TGF-β1) is a vital profibrotic cytokine in the development of liver fibrosis, and up-regulation of the TGF-β1/Smad signaling pathway is one of the most important factors in the process of liver fibrosis. During liver injury, TGF-β1 binds to the type II receptor on HSCs, which recruits and activates the type I receptor via phosphorylation of serine residues. The activated type I receptor in turn phosphorylates the receptor-regulated proteins Smad2/3, which dissociate from the receptor and form a complex with Smad4. The complex translocates to the nucleus and down-regulates the expression of Smad7, which normally inhibits TGF-β1 signaling by negative feedback; the complex thereby regulates the expression of fibrosis-related genes, induces the activation, proliferation and trans-differentiation of HSCs into myofibroblasts, promotes the excessive synthesis and deposition of ECM and finally aggravates fibrosis (Derynck and Zhang, 2003).
Therefore, inhibition of the TGF-β1/Smad signaling pathway plays a critical role in inhibiting the activation and proliferation of HSCs and ameliorating liver fibrosis. Pirfenidone (PFD) is a broad-spectrum anti-fibrotic drug that was approved by the FDA in 2014 for the treatment of idiopathic pulmonary fibrosis. Preclinical studies showed that PFD effectively ameliorated liver inflammation and fibrosis induced by concanavalin A, CCl4 and BDL in mice by significantly reducing serum TGF-β1 levels and collagen expression (Garcia et al., 2002; Seniutkin et al., 2018; Salah et al., 2019). At present, the study of PFD against liver fibrosis has entered phase II clinical trials. The results of treating 22 patients with HCV infection showed that PFD, given continuously for 2 years, significantly ameliorated liver fibrosis in 67% of the patients by markedly decreasing TGF-β1 levels and improving inflammation, steatosis and liver function. The adverse events, such as gastritis and nausea, were mild (Flores-Contreras et al., 2014). However, because this study had no placebo control group, further research is needed to confirm these results. In addition, a clinical study (NCT04099407) of a prolonged-release formulation of pirfenidone (PR-PFD) showed that one year of treatment significantly reduced fibrosis in 35% of 122 patients with advanced liver fibrosis due to various chronic liver diseases, by decreasing TGF-β1 levels and improving liver function, compared with only 4.1% in the non-PR-PFD group. Moreover, only 12% of patients had transient burning or nausea and 7% had photosensitivity, indicating that PR-PFD has good safety in advanced liver fibrosis (Poo et al., 2020). Therefore, PR-PFD is a potential candidate drug for anti-fibrotic therapy. Fluorofenidone, an improved ("me-better") derivative of PFD, alleviated liver fibrosis and liver injury induced by porcine serum in rats by reducing TGF-β1-induced HSC activation and inhibiting the TGF-β1/Smad and MAPK signaling pathways (Peng et al., 2019). Furthermore, praziquantel, a schistosomicide with good safety, significantly alleviated CCl4-induced liver fibrosis in mice by up-regulating the expression of Smad7 in HSCs, inhibiting the TGF-β1/Smad signaling pathway, inhibiting HSC activation and reducing collagen production. In addition, ferulic acid effectively improved hepatic fibrosis in vivo and in vitro by inhibiting the expression of α-SMA, collagen, fibronectin and other fibrosis markers in the TGF-β1-stimulated human HSC line LX2, reducing the protein levels of p-Smad2 and p-Smad3 in a CCl4-induced hepatic fibrosis model, and thereby inhibiting the TGF-β1/Smad signaling pathway (Mu et al., 2018). Although inhibiting the TGF-β1/Smad signaling pathway is an attractive strategy for liver fibrosis, it has both pros and cons. The TGF-β1/Smad signaling pathway is crucial for maintaining liver immune homeostasis through its anti-inflammatory nature and growth-regulatory functions. TGF-β1 inhibits the activation of macrophages and the expression of inflammatory factors, such as TNF-α and MMP-12, in a Smad3-dependent manner, which balances the immune microenvironment. It has been reported that TGF-β1 levels in patients with no or mild liver fibrosis are higher than in those with advanced liver fibrosis, which hints that TGF-β1 may exert an anti-fibrotic effect through its anti-inflammatory activity and immune-regulatory functions, as improvement of liver inflammation can improve liver fibrosis in patients (Rallón et al., 2011).
In addition, TGF-β1 regulates the proliferation, differentiation, survival and function of various immune cells, such as T lymphocytes, B lymphocytes, dendritic cells, and macrophages. Since these immune cells play an important role in mediating liver homeostasis, blocking TGF-β1 may lead to disorders of liver homeostasis. Therefore, it is crucial to focus on how to balance the pro-fibrotic activity of TGF-β1/Smad signaling against its anti-inflammatory activity and its role in maintaining liver homeostasis.

Inhibition of PDGF Receptor-Mediated Signaling Pathway

Platelet-derived growth factor is an important mitogen in the differentiation of HSCs. During liver injury, Kupffer cells mediate platelet recruitment in the liver, and large amounts of PDGF are produced. In addition, endothelial cells and activated HSCs also express PDGF. The binding of PDGF to its receptor (PDGFR) induces dimerization and phosphorylation of the receptor, which in turn phosphorylates tyrosine residues on different substrates in the cell. This regulates the expression of fibrogenic target genes, such as collagen type I alpha 1 (COL1a1), TIMPs, MMPs and the apoptosis regulatory factor B cell lymphoma/leukemia 2 (Bcl2), which leads to the survival and proliferation of HSCs (Borkham-Kamphorst and Weiskirchen, 2016). Stimulation of PDGFR activates several signaling pathways, including the Ras/extracellular signal-regulated kinase/MAPK pathway, the PI3K/AKT pathway, and the Janus kinase/signal transducer and activator of transcription (STAT) pathway. Moreover, deletion of PDGFR mRNA in hepatocytes inhibited the up-regulation of PDGFR mRNA expression in HSCs, decreased HSC activation and alleviated liver fibrosis (Lim et al., 2018). Therefore, inhibition of PDGFR is useful for inhibiting the proliferation of HSCs and alleviating liver fibrosis. One study found that sorafenib up-regulated the expression of Fas, Fas-L and caspase-3 by inhibiting PDGFR and VEGFR2, decreased the ratio of Bcl2 to Bcl2-associated protein x (Bax), inhibited the proliferation of HSCs, promoted HSC apoptosis, reduced collagen accumulation, and alleviated liver fibrosis (Wang et al., 2010; Sung et al., 2018). However, another study found that although low-dose (sub-micromolar) sorafenib inhibited the levels of PDGFR and p-AKT in HSCs, it still induced the activation of MAPK in HSCs and promoted the differentiation of myofibroblasts. Sorafenib and the MEK inhibitor AZD6244, delivered together via chemokine (C-X-C motif) receptor 4-targeted nanoparticles, jointly inhibited this paradoxical activation of MAPK and HSCs in vitro and alleviated liver fibrosis in a CCl4-induced liver injury model in mice (Sung et al., 2018). Nilotinib, a tyrosine kinase inhibitor, down-regulated profibrotic cytokines by reducing the expression of PDGFR and the level of TGF-β1, and significantly reduced CCl4-induced hepatic fibrosis in rats (Shiha et al., 2014). Dihydroartemisinin promoted the activation of the caspase cascade in HSCs, up-regulated Bax and down-regulated Bcl2, suppressed the PI3K/AKT pathway, inhibited HSC proliferation and induced HSC apoptosis, improved liver tissue structure, and ameliorated BDL-induced liver fibrosis in rats (Chen et al., 2016).
Asiatic acid inhibited the activation of HSCs and the synthesis of ECM by reducing oxidative stress, inflammation and hepatocyte apoptosis and by inhibiting the PI3K/AKT/mTOR signaling pathway, which effectively improved CCl4-induced liver injury and fibrosis in rats (Wei et al., 2018). Rilpivirine (RPV) is an anti-HIV drug with no reported hepatotoxicity. It reduced collagen expression in vitro and had obvious anti-inflammatory and anti-fibrotic effects in a NAFLD model and in rat models of liver fibrosis induced by CCl4 and BDL. Through selective activation of STAT1, RPV promoted STAT3-dependent hepatocyte proliferation and HSC apoptosis, with bystander effects on hepatocytes, which promoted liver regeneration and ameliorated liver fibrosis (Marti-Rodrigo et al., 2020).

Inhibition of CTGF

Connective tissue growth factor is an important pro-fibrotic factor in the process of fibrosis and is induced by TGF-β1. It promotes the production of ECM and enhances the proliferation, migration and survival of activated HSCs, which promotes the occurrence and development of liver fibrosis in various chronic liver diseases (Kovalenko et al., 2009). It has been reported that pioglitazone inhibited the development of liver fibrosis by inhibiting the expression of CTGF and type III collagen in HSCs and preventing the morphological changes of HSCs induced by TGF-β1 in a dose-dependent manner (Jia et al., 2007). Curcumin ameliorated fibrosis by inhibiting the expression of CTGF, preventing the activation of HSCs in vitro, and reducing the synthesis of ECM (Chen and Zheng, 2008). Moreover, basic studies have shown that CTGF small interfering RNA (siRNA) significantly inhibited the expression of CTGF, type I collagen, type III collagen and hyaluronic acid, reduced the synthesis and secretion of ECM, alleviated liver fibrosis and protected liver function (Li et al., 2004). Additionally, Src family kinases (SFKs), non-receptor tyrosine kinases, are activated by TGF-β1-induced CTGF and have an essential effect on the transcription of CTGF. The expression of Src kinases is up-regulated in mice with liver fibrosis and cirrhosis induced by thioacetamide (TAA), and the expression of phosphorylated Src kinases is up-regulated when HSCs are activated. Saracatinib, an inhibitor of Src kinases, attenuated the expression of type I collagen, CTGF and α-SMA in TAA-treated mice. Moreover, inhibition of Src kinases increased autophagic flux and reduced liver fibrosis (Seo et al., 2020). SU6656, a dual inhibitor of the Src family and Aurora kinases, reduced CTGF expression by inhibiting Src kinases in non-transformed epithelial cells (Cicha et al., 2014). Fyn, a member of the SFK family, is activated in the liver of patients with fibrosis, and knockdown of Fyn with siRNA or gene knockout significantly prevented the activation of HSCs and reduced fibrosis in CCl4-treated mice. Saracatinib treatment decreased the activation of Fyn, prevented the activation of HSCs, and reduced the severity of liver fibrosis in CCl4-treated mice (Du et al., 2020).

Inhibition of Fibroblast Growth Factor

The fibroblast growth factor (FGF) family comprises several isoforms, which bind to four different receptors (FGFR1-4). Among them, FGF15/19 and FGF21 can inhibit the occurrence of liver fibrosis by down-regulating HSC activation. In addition, FGFR1-mediated signal transduction is closely related to hepatic fibrosis and liver cirrhosis.
FGF21 inhibits the activation of HSCs by down-regulating the expression of TGF-β1, decreasing the phosphorylation of Smad2/3 and reducing the nuclear translocation of NF-κB. FGF21 also induces apoptosis of activated HSCs by increasing the expression of caspase 3 and decreasing the ratio of Bcl2 to Bax. Pegbelfermin (BMS-986036) is a polyethylene glycol-modified FGF21 analog. In a multicenter double-blind phase 2a clinical trial (NCT02413372), 75 NASH patients with F1-F3 fibrosis were treated with 10 mg or 20 mg pegbelfermin once a day for 16 weeks. The results showed that the absolute liver fat fraction in the pegbelfermin group was much lower than that in the placebo group, and most of the adverse events, such as ascites and varices, were mild, indicating that pegbelfermin ameliorated NASH, steatosis, liver injury and fibrosis (Sanyal et al., 2019a; Verzijl et al., 2020). FGF19 is a hormone that directly regulates the synthesis of bile acids in the liver. The engineered FGF19 analog NGM282 was injected subcutaneously at 3 mg or 6 mg in 82 patients with F1-F3 fibrosis for 12 weeks in a randomized, double-blind, placebo-controlled phase II clinical trial (NCT02443116). The results indicated that the absolute liver fat content decreased by at least 5% from baseline in 20 patients (74%) in the 3 mg group and 22 patients (79%) in the 6 mg group, but in only 2 patients (7%) in the placebo group. This study demonstrated that NGM282 played a positive role in improving the condition of patients with NASH. In another open-label study, NGM282 at 1 mg or 3 mg for 12 weeks improved the histological characteristics of NASH, significantly reduced fibrosis-related markers, decreased NASH and fibrosis scores, and effectively ameliorated fibrosis. The most common adverse reactions were mild or moderate abdominal pain, diarrhea and nausea (Harrison et al., 2020b). The FGFR1 inhibitor hydronidone improved inflammation and fibrosis in rats with hepatic fibrosis induced by CCl4, DMN and human serum albumin. Hydronidone was well tolerated with no obvious adverse reactions in a phase II clinical trial (NCT02499562), but its rate and extent of absorption were decreased by food intake (Liu et al., 2017).

Inhibition of the Wnt/β-Catenin Signaling Pathway

Studies have shown that the Wnt/β-catenin signaling pathway is related to the activation of HSCs and hepatic fibrosis. Wnt protein forms a ternary complex with the frizzled receptor and lipoprotein receptor-related protein (LRP)-5/6, which blocks the degradation of β-catenin. β-catenin is activated with the help of coactivators such as cyclic-AMP response element binding protein-binding protein (CBP), and then accumulates and readily localizes to the nucleus, where it activates the transcription of related target genes (Ge et al., 2014). During liver injury, the Wnt/β-catenin signaling pathway is abnormally activated in activated HSCs and aggravates liver fibrosis by promoting collagen deposition and epithelial-mesenchymal transition (EMT) (Nishikawa et al., 2018). ICG001 is a small-molecule inhibitor that disrupts the interaction between CBP and β-catenin. ICG001 reduced the secretion of CCL12 and prevented the infiltration of macrophages by inhibiting the Wnt/β-catenin signaling pathway in HSCs, which reduced liver inflammation and significantly decreased the activation of HSCs and the accumulation of ECM in a CCl4-induced liver fibrosis model in mice (Akcora et al., 2018).
The CBP/β-catenin inhibitor PRI-724 improved HCV-induced liver fibrosis in mice by inhibiting the activation of HSCs (Tokunaga et al., 2017). In a single-center phase I clinical trial (NCT02195440), PRI-724 was well tolerated in HCV cirrhotic patients given 10 mg/d or 40 mg/d for up to 12 weeks. However, severe liver injury may occur in HCV cirrhotic patients given 160 mg/d PRI-724 (Kimura et al., 2017). At present, a phase 1/2a clinical trial (NCT03620474) of PRI-724 in patients with hepatitis B- or hepatitis C-related liver cirrhosis is under way, which is expected to have implications for the study of PRI-724 in liver fibrosis. Octreotide is an analog of somatostatin. It significantly inhibited the expression of Wnt1 and β-catenin in vitro and in vivo, suppressed the activation and proliferation of LX2 cells and reduced CCl4-induced liver fibrosis in rats. These findings provide more options for the treatment of liver fibrosis.

Activation of Farnesoid-X Receptor

Farnesoid-X receptor (FXR) is an intrinsic inhibitor of apoptosis in hepatocytes. It interacts with caspase-8 in the cytoplasm, prevents the formation of the death-inducing signaling complex and the activation of caspase-8, mediates the inhibition of HSC activation and ameliorates liver fibrosis. Some studies found that the lack of FXR aggravated liver fibrosis and inflammation in mice, indicating that FXR plays a key role in protecting the liver from inflammation and fibrosis (Ferrell et al., 2019). FXR is therefore a very important anti-fibrotic target. The FXR agonist obeticholic acid (OCA, INT-747) is a semisynthetic chenodeoxycholic acid derivative with good anti-fibrotic activity in animal models of liver fibrosis (Goto et al., 2018; Fan et al., 2019). In 2015, a double-blind, randomized placebo-controlled phase 2b clinical trial of 283 NASH patients showed that liver histology improved significantly in 45% of patients after 72 weeks of 25 mg/d OCA treatment, compared with 21% in the placebo group. However, 23% of patients in the OCA group had adverse reactions such as itching, compared with only 6% in the placebo group, so the safety of OCA needs to be determined in further research (Neuschwander-Tetri et al., 2015). Recently, the interim results of the first 18 months of a multicenter, randomized placebo-controlled phase III clinical trial (NCT02548351) of 931 NASH patients with F2-F3 fibrosis who received long-term treatment with OCA showed that 23% of patients in the 25 mg OCA group had a significant, dose-dependent improvement in the severity of fibrosis and NASH, compared with 12% in the placebo group. The most common adverse reaction was pruritus (Younossi et al., 2019). Cilofexor (GS-9674) is a small nonsteroidal agonist of FXR. In a double-blind placebo-controlled phase II clinical trial (NCT02854605), 140 patients with NASH received 30 mg or 100 mg cilofexor for 24 weeks. The results showed that cilofexor significantly reduced hepatic steatosis and serum bile acids in NASH patients and was well tolerated. Moderate to severe itching was more common in the 100 mg cilofexor group (14%) than in the 30 mg group (4%) and the placebo group (4%) (Patel et al., 2020). In another study, the FXR agonist PX20606 effectively improved liver fibrosis in CCl4-induced cirrhotic rats by reducing the expression of collagen (Schwabl et al., 2017).

Inhibition of Cannabinoid Receptor 1

The endogenous cannabinoid system is involved in the pathogenesis of liver fibrosis.
In the normal liver, the expression of cannabinoid receptors is very low. However, in ALD, NAFLD, liver regeneration/injury, liver fibrosis/cirrhosis and liver cancer, cannabinoid receptor 1 (CB1), a G protein-coupled receptor, is up-regulated in liver myofibroblasts in response to extracellular stimulation, which promotes the development of liver fibrosis (Teixeira-Clerc et al., 2006; Bataller and Gao, 2013). Therefore, blocking the CB1 signaling pathway is expected to become a new strategy to treat a variety of liver diseases including liver fibrosis. Rimonabant, a CB1 receptor (CB1R) antagonist, significantly reduced inflammation and fibrosis in CCl4-induced cirrhotic rats by down-regulating the expression of fibrosis- and inflammation-related genes, showing that even in the late stage of the disease, pharmacological CB1 antagonism still has a positive effect on the regression of fibrosis (Giannone et al., 2012). Another CB1R antagonist, SR141716A, effectively alleviated hepatic fibrosis in three chronic liver injury models by inhibiting CB1, thereby decreasing the expression of TGF-β1 and preventing the accumulation of fibrogenic cells in the liver (Teixeira-Clerc et al., 2006). JD5037 is a peripheral CB1 antagonist. It attenuated CB1R-regulated HSC activation and liver fibrosis by inhibiting the CB1R-β-arrestin1/AKT signaling pathway, as CB1R, which is induced in liver sections of patients and mice with hepatic fibrosis, promotes the activation of HSCs by recruiting β-arrestin1 and activating AKT signaling. Therefore, JD5037 is a potential compound for anti-hepatic fibrosis therapy.

Activation of Peroxisome Proliferator-Activated Receptors

Peroxisome proliferator-activated receptors (PPARs) are a family of nuclear receptors, including PPAR-α, PPAR-β/δ and PPAR-γ. PPAR-α is a critical regulatory factor against hepatic fibrosis, and PPAR-γ negatively regulates the activity of HSCs in hepatic fibrosis and reduces the differentiation of myofibroblasts (Wei et al., 2019). Elafibranor (GFT-505) is a PPAR-α/δ agonist. In a randomized placebo-controlled phase II clinical trial (NCT01694849) involving 276 patients with NASH, continuous treatment with 120 mg elafibranor for 52 weeks ameliorated liver fibrosis and liver function while improving NASH status, and was well tolerated. However, in the intention-to-treat analysis, there was no significant difference between the elafibranor group and the placebo group. Elafibranor caused a slight and reversible increase in creatinine levels, but it did not cause adverse effects in patients with renal insufficiency (Ratziu et al., 2016). Rosiglitazone, a PPAR-γ agonist, improved BDL-induced liver fibrosis in mice by regulating the NF-κB-TNF-α pathway in a PPAR-γ-dependent manner, down-regulating the expression of TGF-β1, α-SMA and type I collagen, inhibiting NF-κB phosphorylation, and alleviating inflammation, but it did not alleviate liver injury in hepatocyte Ppar-γ knockout mice (Wei et al., 2019). 15-d-PGJ2 (a natural PPAR-γ ligand) and GW7845 (a synthetic ligand) significantly promoted the reversion of TGF-β1-activated HSCs to a quiescent phenotype by suppressing CTGF expression at both the mRNA and protein levels in HSCs in a PPAR-γ-dependent manner (Sun et al., 2009). Crocin, a naturally occurring carotenoid, was reported to ameliorate CCl4-induced hepatic fibrosis in a dose-dependent manner by up-regulating the expression of PPAR-γ and down-regulating the expression of inflammatory and hepatic fibrosis-related factors (Chhimwal et al., 2020).
Inhibition of HMG-CoA Reductase

Statins are HMG-CoA reductase inhibitors that reduce serum cholesterol levels by inhibiting the activity of HMG-CoA reductase. The effects of statins in reducing liver inflammation, oxidative stress and fibrosis have been reported in several studies in animal models of liver fibrosis (Oberti et al., 1997; Jang et al., 2018). Yet, the safety of statins in patients with chronic liver disease and cirrhosis needs to be evaluated in further research, considering that statins may increase the risk of rhabdomyolysis due to impaired hepatic CYP3A4 metabolism. Moreover, one study reported that 3% of cirrhotic patients treated with statins developed severe rhabdomyolysis (Abraldes et al., 2009). Currently, three placebo-controlled trials of statins (NCT03780673; NCT02968810; NCT04072601) are under way with an extended safety assessment.

Inhibition of Dipeptidyl Peptidase-4

Dipeptidyl peptidase-4 is a serine protease widely expressed on various cell surfaces; it influences the fibronectin-mediated interaction between hepatocytes and ECM and participates in the adhesion of cells to collagen. In addition, DPP4 is expressed on the surface of activated HSCs. It has been reported that the DPP4 inhibitor sitagliptin ameliorated NAFLD (Iwasaki et al., 2011). Alogliptin, a classical DPP4 inhibitor, inhibited the TGF-β1-induced activation of LX2 cells in vitro. Chronic treatment with alogliptin reduced hepatic steatosis in mice and protected them from liver injury in the CCl4-induced hepatic fibrosis model, thereby delaying the progression of hepatic fibrosis. Alogliptin also had a positive effect on ameliorating liver fibrosis via negative regulation of HSC activation. This indicates that alogliptin is also a potential candidate drug for the treatment of liver fibrosis.

Inhibition of ECM Production and Promotion of ECM Degradation

Diffuse, excessive production and deposition of ECM is the main manifestation of liver fibrosis. The production and degradation of ECM are in relative balance under normal circumstances. However, during liver injury, activated HSCs are the main ECM-producing cells, resulting in excessive ECM production and continuous deposition. In addition, the main composition of the ECM changes from type IV and VI collagen to type I and III collagen, which increases the density and stiffness of the ECM and makes it difficult for the ECM to be degraded by proteases (Iredale et al., 2013). Therefore, inhibition of ECM production and promotion of ECM degradation are two essential means of direct anti-fibrotic therapy. The related drugs are summarized in Table 3.

Matrix Metalloproteinases/Tissue Inhibitors of Metalloproteinase

Matrix metalloproteinases (MMPs) and TIMPs are important enzymes that regulate the deposition and degradation of ECM. MMPs are the main ECM-degrading enzymes in the liver, and their endogenous inhibitors are the TIMPs. MMP2 and MMP14 are highly expressed in activated HSCs. MMPs degrade ECM under normal circumstances, but upon liver injury, MMPs are inhibited by TIMPs, which are highly expressed. This breaks the balance between deposition and degradation of ECM, leading to excessive deposition of ECM and eventually to liver fibrosis (Iredale et al., 2013). Therefore, up-regulation of MMP activity or down-regulation of TIMP activity is an effective measure to alleviate liver fibrosis.
Halofuginone, a small-molecule quinazolinone derivative, inhibited the expression and synthesis of collagen, reduced the level of TIMPs and alleviated hepatic fibrosis in rats induced by TAA and Con A (Bruck et al., 2001; Liang et al., 2013). It has been reported that Fraxinus rhynchophylla ethanol extract (FR(EtOH)) had an anti-fibrotic effect in CCl4-induced liver fibrosis in SD rats; FR(EtOH) effectively alleviated liver lesions and fibrous connective tissue proliferation by down-regulating the expression of MMP2, MMP9 and TIMP1 (Peng et al., 2010). Because the main ECM components after liver injury are type I and III collagen, and type I collagen is the most abundant collagen in the fibrotic liver, inhibiting the production and accumulation of type I or III collagen will be beneficial for the treatment of liver fibrosis. It has been reported that liposomal COL1a1 siRNA specifically inhibited collagen production and accumulation in a mouse liver fibrosis model (Jimenez et al., 2015). Hsp47, a molecular chaperone of type I collagen, can block collagen synthesis, and Hsp47 siRNA delivered in vitamin A-coupled liposomes had a significant anti-fibrotic effect in three liver fibrosis models in vivo (Sato et al., 2008). BMS986263, an Hsp47 siRNA delivered in lipid nanoparticles, did not show any toxicity in healthy volunteers (Kavita et al., 2019), and its phase 1b/2 dose-escalation study (NCT02227459) has recently been completed.

Lysyl Oxidase

The lysyl oxidase (LOX) family promotes the deposition of ECM by increasing the cross-linking of collagen. Therefore, inhibiting the LOX family is beneficial for reducing the deposition of ECM and alleviating liver fibrosis. Simtuzumab (GS-6624), an antibody against lysyl oxidase-like protein 2 (LOXL2), decreased the stability of ECM by antagonizing LOXL2-induced collagen cross-linking and had a good therapeutic effect on liver cirrhosis and fibrosis induced by NASH. Moreover, simtuzumab was well tolerated in a 22-week phase II clinical trial (NCT01707472) of 18 patients with advanced fibrosis. The most common adverse reactions of simtuzumab treatment, such as fever, headache and glossitis, were mild; one patient experienced a serious adverse reaction and recovered after antibiotic treatment (Meissner et al., 2016). However, a phase 2b clinical trial (NCT01672853) of 234 patients showed that simtuzumab at a dose of 75 mg or 125 mg for 96 weeks had no effect on preventing the progression of liver fibrosis in patients with primary sclerosing cholangitis (Sanyal et al., 2019b). In addition, a study showed that the expression of LOXL2 decreased rapidly after liver injury, in contrast to the stable up-regulation of LOX and LOXL1, indicating that LOXL2 has little effect in liver fibrosis (Perepelyuk et al., 2013). Future research should address this problem of selectivity by specifically targeting LOX and LOXL1.

Gene Therapy

Liver transplantation is considered to be the only effective treatment for end-stage liver fibrosis, but it has shortcomings such as the difficulty of finding donor livers, immune rejection and poor prognosis. In recent years, gene therapy approaches such as antisense oligonucleotides, RNA interference and decoy oligonucleotides have emerged as possible ways to overcome these problems. RNA interference (RNAi) is a technique that uses siRNAs of 21-23 nucleotides to specifically silence target genes (Buchman, 2005).
It has been reported that direct knockdown of TGF-β1 by siRNA significantly reduced the expression of α-SMA and type I collagen in HSC-T6 cells and had an anti-fibrotic effect in mice and rats with CCl4-induced hepatic fibrosis (Cheng et al., 2009). Histone deacetylase 2 (HDAC2) is up-regulated in HSCs treated with TGF-β1 and in fibrotic liver tissue induced by CCl4, and blocking the expression of HDAC2 with siRNA decreased the expression of α-SMA and COL1a1 in TGF-β1-treated HSC-T6 cells (Li et al., 2016). β-Catenin siRNA inhibited collagen synthesis and β-catenin expression in HSCs in a time-dependent manner, down-regulated the Wnt/β-catenin signaling pathway, inhibited the proliferation and induced the apoptosis of HSC-T6 cells, thereby preventing the progression of hepatic fibrosis (Ge et al., 2014). This suggests that β-catenin siRNA provides a new strategy for the treatment of liver fibrosis. Lentivirus-mediated CB1 siRNA (CB1-RNAi-LV) significantly inhibited the expression of CB1 and the activation and proliferation of HSCs in vitro. CB1-RNAi-LV alleviated DMN-induced liver fibrosis in rats by inhibiting the TGF-β1/Smad signaling pathway, reducing the expression of α-SMA and improving EMT. The selective inhibition of CB1 by siRNA thus provides a new option for the treatment of liver fibrosis. In addition to siRNA-based treatment, microRNA (miRNA) represents another treatment approach for liver fibrosis. MiRNAs are endogenous non-coding small RNAs that regulate gene expression post-transcriptionally. Miravirsen (SPC3649), an antisense oligonucleotide composed of locked nucleic acid and DNA, effectively inhibited the function of miR-122 and reduced liver fibrosis in a phase 2a study (NCT01200420) of HCV infection (Zeng et al., 2015). MiR-101, a small non-coding RNA that regulates the MAPK response, significantly improved liver function in mice with CCl4-induced hepatic fibrosis. MiR-101 markedly reduced damage to the liver parenchyma and delayed liver fibrosis by lowering the levels of α-SMA and COL1a1, reducing the accumulation of ECM components and inhibiting the PI3K/AKT/mTOR signaling pathway (Lei et al., 2019). MiR-29b, a small non-coding RNA down-regulated in fibrotic liver tissue and in primary activated HSCs, effectively suppressed the expression of Smad3 in the HSC line LX1 and decreased the expression of α-SMA and type I collagen in mice with CCl4-induced hepatic fibrosis. Moreover, miR-29b markedly prevented the progression of liver fibrosis by suppressing the activation of HSCs and inducing HSC apoptosis through inhibition of the PI3K/AKT pathway. The related siRNAs and miRNAs are summarized in Table 4.

DISCUSSION

The mechanism of hepatic fibrosis formation is complex. The treatment of hepatic fibrosis targets many links in its pathogenesis to reduce or even reverse fibrosis, mainly in terms of anti-inflammation and liver protection, inhibition of the proliferation and activation of HSCs, and suppression of the production and deposition of ECM, as shown in Figure 2 (Campana and Iredale, 2017; Roehlen et al., 2020). Many factors can tip the balance between pro-fibrotic and anti-fibrotic mechanisms and promote the occurrence and development of liver fibrosis, such as the excessive production and secretion of pro-inflammatory cytokines, increased hepatocyte apoptosis, the proliferation of activated HSCs, and excessive production and deposition of ECM.
On the other hand, many factors can effectively inhibit the occurrence and development of liver fibrosis, delay the fibrotic process, and even reverse fibrosis and return the structure and function of the liver to normal, including the production and release of anti-inflammatory cytokines, hepatocyte proliferation, the apoptosis of activated HSCs or their reversion to a quiescent phenotype, and increased ECM degradation. Therefore, finding a drug that can restore the balance between pro-fibrotic and anti-fibrotic mechanisms is crucial for the treatment of liver fibrosis. To date, the anti-hepatic fibrosis candidate drugs at the forefront of research with good development momentum mainly include OCA, resmetirom, CVC, selonsertib, and elafibranor. Among them, OCA, a selective FXR agonist, is very promising. The results of its phase II clinical trial showed that 25 mg/d OCA significantly improved liver fibrosis, steatosis and lobular inflammation and was well tolerated (Neuschwander-Tetri et al., 2015). These encouraging results prompted OCA to enter a phase III clinical trial aiming to evaluate its long-term efficacy, clinical benefits and safety over a 7-year treatment period. The 18-month interim results of the phase III trial showed that 25 mg/d OCA had a significant anti-fibrotic effect without worsening NASH-related symptoms, with essentially the same rate of serious adverse reactions as the placebo group (Younossi et al., 2019). In addition, OCA showed good safety in the 18-month interim analysis of health-related quality of life: OCA treatment caused mild itching in the early stage, which did not worsen as treatment progressed, achieved better curative effects than placebo, and improved the quality of life of patients (Younossi et al., 2021). Therefore, based on the currently available results, OCA performs well in terms of efficacy, safety and quality of life. In addition, clinical trials of OCA include patients with F1-F3 liver fibrosis and advanced NASH (Younossi et al., 2019), which is of great significance for studying different disease states and discovering the clinical benefits of OCA, such as the identification of biomarkers at different stages of the disease. Resmetirom is a selective thyroid hormone receptor β agonist. Its phase II clinical trial showed that it improved the symptoms of NASH patients with liver fibrosis and reduced liver toxicity by reducing fat content. Its phase III clinical trial in patients with F2-F3 liver fibrosis is currently under way, and no results have been reported yet. In addition to OCA and resmetirom, CVC is also one of the fast-developing anti-liver fibrosis candidate drugs. CVC is a dual inhibitor of CCR2/5. Its phase II clinical trial results showed that 150 mg/d CVC improved liver fibrosis, prevented liver cirrhosis, and reduced mortality from liver-related diseases by improving NAS-related symptoms, with good safety and tolerability (Friedman et al., 2016).

FIGURE 2 | Therapeutic approaches of liver fibrosis. Liver fibrosis is induced when the balance of pro-inflammatory/anti-inflammatory mechanisms, of apoptosis/proliferation of hepatocytes and HSCs, or of production and deposition/degradation of ECM is destroyed (Campana and Iredale, 2017; Roehlen et al., 2020).
However, CVC treatment has mainly been studied in patients with F2-F3 liver fibrosis, that is, patients at higher risk of intermediate and advanced cirrhosis, and its role in patients with mild liver fibrosis is still unclear. In addition, the effects of CVC on liver fibrosis are related to the reduction of inflammation-related biomarkers such as IL-6 and IL-1β, which indicates that CVC is a candidate anti-fibrotic drug with great potential, because inflammation is one of the important drivers of liver fibrosis (Friedman et al., 2016). However, the results of part 1 of the phase III AURORA study showed a lack of efficacy of CVC, which led to the termination of the study (Anstee et al., 2020). Fortunately, a phase II clinical trial of CVC and tropifexor (an FXR agonist) showed that the combination effectively ameliorated liver fibrosis (Pedrosa et al., 2020). In addition, considering that the half-life of CVC is as long as 30-40 h, it has good safety for advanced liver fibrosis (Friedman et al., 2016). Therefore, combination therapy may be a better way for CVC to exert its efficacy. Selonsertib is a selective ASK1 inhibitor, and its phase II clinical trial showed that 18 mg/d selonsertib effectively reduced fibrosis in patients with F2-F3 liver fibrosis (Loomba et al., 2018b). However, its phase III clinical trial in patients with F3 liver fibrosis was terminated due to lack of efficacy (Harrison et al., 2020c). Moreover, a phase III clinical trial of another anti-liver fibrosis candidate drug, elafibranor, was also terminated because it did not reach the surrogate efficacy endpoint. It should be considered whether this is due to problems with target selection, pathological barriers, or other factors; combination therapy and improved delivery systems may be important means of increasing the efficacy of anti-liver fibrosis candidate drugs. Although many anti-fibrotic candidate drugs have shown good efficacy in experimental animal models, their anti-fibrotic effects in clinical trials are very limited. This may be because the pathological mechanism of liver fibrosis is complicated, representing a repair response to liver injury in which the whole body participates, while most currently developed drugs are directed at a single target rather than multiple targets. Moreover, the actual pathological conditions of animal models and patients differ greatly, which also contributes to the poor efficacy of drugs in clinical trials. In addition, obvious adverse reactions induced by high doses are another main cause. In this context, gene therapy, which targets specific genes precisely, offers the unique advantages of improving therapeutic effect and reducing side effects. Therefore, gene therapy is believed to be a promising direction for anti-liver fibrosis strategies in the future. To date, most anti-fibrotic drugs are still in the preclinical research stage, including drugs for liver fibrosis related to chronic liver diseases of different etiologies. There are also some drugs with clear anti-fibrotic effects and good safety and tolerability in the clinical research stage. It is believed that, with further study of the pathogenesis of liver fibrosis and continuous progress in the research and development of new drugs, the reversal of liver fibrosis will eventually become possible.

AUTHOR CONTRIBUTIONS

TY, YY, and ZT designed the structure of the article and revised the manuscript.
ZT, HS, and CG drafted the initial manuscript and prepared the figures. HS, ZT, TX, HL, and YX revised the manuscript. All authors contributed to the article and approved the submitted version.

FUNDING

This work was supported by the National Natural Science Foundation of China (81873580).
Platelet concentrate as an additive to bone allografts: a laboratory study using a uniaxial compression test

Chemical cleaning procedures for allografts destroy viable bone cells and denature the osteoconductive and osteoinductive proteins present in the graft. The aim of this study was to investigate the mechanical differences of chemically cleaned allografts after adding blood, clotted blood, platelet concentrate or platelet gel, using a uniaxial compression test. The allografts were chemically cleaned, dried and standardized according to their grain size distribution. The uniaxial compression test was carried out for the four groups before and after compacting the allografts. No statistically significant difference was found between native allografts and allografts mixed with blood, clotted blood, platelet concentrate or platelet concentrate gel regarding their yield limit after compaction. The authors recommend chemically cleaning allografts for large defects, optimizing their grain size distribution and adding platelet concentrate or platelet-rich plasma to enhance both primary stability and bone ingrowth.

Introduction

Bone grafts are used to fill bone defects in different applications of orthopaedic and trauma surgery with good long-term results. Autografts are the gold standard in reconstructive surgery; however, they are available only in limited quantity. They can be obtained from the femoral head during total hip arthroplasty or from the iliac crest (Khan et al. 2005; Myeroff and Archdeacon 2011; Nogler et al. 2012). Autografts have optimal osteoconductive and osteoinductive properties and allow osteogenesis, as they contain surviving cells and osteoinductive proteins such as the bone morphogenetic proteins BMP-2 and BMP-7, fibroblast growth factor (FGF), insulin-like growth factor (IGF) and platelet-derived growth factor (PDGF) (Bauer and Muschler 2000; Dimitriou et al. 2011; Brydone et al. 2010; Parikh 2002). To compensate for the reduced availability of autografts, allografts or synthetic materials are widely used. Allografts have variable osteoinductive and osteoconductive properties but lack viable cells, which results in lower osteogenic potential than autografts (Zimmermann and Moghaddam 2011). Sterilization processing of allografts includes the use of hypotonic solutions, acetone, ethylene oxide, or gamma irradiation, which can eliminate cellular and viral particles and therefore reduce the risk of infectious and transmissible diseases (Muller et al. 2013). Chemical cleaning processes are also used to remove the fat content of the allografts, enhancing their mechanical properties (Putzer et al. 2014a; van der Donk et al. 2003; Fosse et al. 2006; Voor et al. 2004). However, despite modern sterilization and storage methods, processing of allografts is not completely safe (Zimmermann and Moghaddam 2011; Malinin and Temple 2007). Chemical cleaning as well as gamma irradiation of allografts may destroy the bone cells and denature proteins present in the graft, altering osteoconductive and osteoinductive characteristics, essentially eliminating the osteogenic properties and inhibiting the bone remodeling process (Keating and McQueen 2001). During fracture healing and implant ingrowth, the recruitment and migration of osteogenic cells are essential for bone regeneration (Kark et al. 2006).
The migration of these cells is stimulated by growth factors such as transforming growth factor (TGF), PDGF, IGF, vascular endothelial growth factor (VEGF), platelet-derived angiogenic factor (PDAF) and FGF (Fiedler et al. 2002; Mayr-Wohlfart et al. 2002; Martineau et al. 2004; Wang and Avila 2007), all of which are released by platelets in response to injury (Martineau et al. 2004; Weibrich et al. 2002). In addition to growth factors (GFs), platelets release numerous other substances (e.g. fibronectin, vitronectin, sphingosine 1-phosphate) that are important in wound healing (Wang and Avila 2007). Platelets can be applied as an autologous product to sites of bone injury either concentrated in combination with blood plasma [platelet-rich plasma (PRP)] or as a platelet gel that is created by clotting the PRP (Kark et al. 2006). PRP, plasma rich in growth factors (PRGF) and platelet concentrate (PLC) are essentially an increased concentration of autologous platelets suspended in a small amount of plasma after centrifugation (Wang and Avila 2007). By centrifugation the donor's blood is separated into platelet-poor plasma (PPP), PRP and red blood cells (Marlovits et al. 2004). Prior to application, topical bovine thrombin and 10% calcium chloride are added to activate the clotting cascade, producing a gel with a platelet concentration 3-5 times higher than that of native plasma (Petrungaro 2001). The release of the PDGF, IGF, VEGF, PDAF and TGF-b present in the PRP is triggered by the activation of platelets by means of a variety of substances or stimuli such as thrombin, calcium chloride or collagen (Wang and Avila 2007). PRP is widely used in plastic surgery and jaw reconstruction surgery for enhancing bone and connective tissue growth (Wang and Avila 2007; Marx et al. 1998; Board et al. 2008; Anitua et al. 2004; Thorwarth et al. 2006). Enriched platelet preparations have shown rapid bone healing and regeneration when combined with autologous bone and bone substitute materials (Anitua et al. 2004; Kim et al. 2001). With PRP no cross-reactivity, immune reaction or disease transmission has been observed (Weibrich et al. 2001). A reduced healing time (50%) was shown in a study by Kassolis and Reynolds (2005). In animal studies no statistically significant difference was found between cancellous bone graft material with and without PRP (Butterfield et al. 2005; Jakse et al. 2003). There is substantial evidence that adding blood, autologous PRP or PLC enhances bone ingrowth, although antigenicity is reduced in some cases (Khan et al. 2005; Anitua et al. 2004; Hannink et al. 2009; Baylink et al. 1993; Canalis et al. 1989; Canalis 1985; Lozada et al. 2001; Cenni et al. 2010; Blair and Flaumenhaft 2009). The aim of this study was to investigate the mechanical differences of chemically cleaned allografts with known grain size distribution mixed with blood (BL), clotted blood (CB), platelet concentrate (PC) or platelet concentrate gel (PG), using a uniaxial compression test. Methods Bone tissue was donated by 5 patients to the local bone bank, from whom informed consent was obtained. In the preparation of the allografts the local bone bank requirements for producing fresh-frozen allografts were followed. Cartilage and cortical tissue were removed, and allografts with a grain size of 5-10 mm were produced using a bone mill (Spierings Medische Techniek BV, Nijmegen, The Netherlands) (McNamara 2010). The allografts were frozen to -80°C, carefully mixed to reduce patient-specific properties and stored at -11°C. 
A chemical cleaning procedure was used to remove the fat content of the allografts and reduce the contamination risk (Coraca-Huber et al. 2013). For the cleaning procedure a sonicator (40 kHz, 200 W effective power, BactoSonic, Bandelin electronic GmbH & Co. KG, Berlin, Germany) was used. As washing solutions, 700 ml of 1% Triton X-100 (Sigma-Aldrich, Schnelldorf, Germany), 500 ml of 3% hydrogen peroxide solution (Sigma-Aldrich, Schnelldorf, Germany) and a 70% ethanol solution were used. The allografts were dried in an incubator (Memmert GmbH & Co. KG, Schwabach, Germany) at 37°C. Allograft samples were assembled according to their grain size in the proportions specified in Table 1, after being separated using sieves ranging from 0.063 to 16 mm in correspondence with the ASTM C 125 standard (application time 1 h, amplitude 10 mm, Haver und Böcker, Oelde, Germany) (Putzer et al. 2014b). Samples with a mean weight of 8 ± 0.01 g were obtained and divided into four groups, each containing 20 samples. In one group (BL) 4 ml of blood from one donor, who gave informed consent prior to donation, were added. In different studies native allografts showed a liquid component of 50% by weight (Putzer et al. 2014a); adding 4 ml of blood approximately compensates for this liquid component, which was previously removed. The blood was stored at -4°C and was obtained from the local tissue bank. In the second group (CB) clotting was induced by adding 480 µl of 1 mol/l calcium chloride (CaCl2) to the 4 ml of blood and mixing thoroughly for 1 min, as described by Oakley and Kuiper (2006) and Camenzind et al. (2000). The cleaned allografts were mixed with the clotted blood and after 5 min (6 min activation time) the samples were used for mechanical testing. In group PC 4 ml of concentrated platelets from one donor, who gave informed consent prior to donation, were added. The PC was stored at -4°C. In the platelet concentrate gel group (PG), 666 µl of 1 mol/l calcium chloride (CaCl2) were added in addition to the 4 ml of concentrated platelets (Marx et al. 1998; Oakley and Kuiper 2006; Camenzind et al. 2000). After a 6 min activation time the samples were used for mechanical testing. All samples were filled into a compaction chamber with an internal diameter of 40 mm. A uniaxial compression test was carried out before and after a standardized compaction procedure, resulting in 20 measurements before and 20 measurements after compaction for each of the four groups. In the standardized compaction procedure a fall hammer (1.45 kg) was dropped 10 times from a height of 18 cm. The uniaxial compression test was carried out with a 15 mm punch, which was lowered at a speed of 1 mm/min into the allografts. Force and displacement were measured by the testing machine after reaching a preload of 5 N (Zwicki-Line Z 2.5, maximal load 2.5 kN, 320 kHz sample rate, Zwick GmbH & Co. KG, Ulm, Germany) with a precision of ±0.04 N and ±2 µm. A peak analysis was performed on the resulting force-displacement curves using OriginPro 8.5 (OriginLab Corporation, Northampton, Massachusetts, USA) (Fig. 1a). In all curves a fitted baseline (50 anchor points, 1st and 2nd derivative method, polynomial smoothing of order 2) was subtracted to remove the logarithmic trend (Fig. 1b). The signal was analyzed for positive local maxima over 100 data points with a smoothing window size of 10 data points. The yield limit (YL) was determined as the first local maximum on the force curve (Fig. 1c). The corresponding displacement value was used to obtain the density at the yield limit d_YL. 
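The peak-analysis steps just described (baseline fit and subtraction, smoothing, first positive local maximum) are easy to prototype outside OriginPro. The following is a minimal sketch on synthetic data, not the authors' routine; all numerical values are placeholders:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic force-displacement curve (placeholder for a measured one):
# a logarithmic trend plus a small local peak marking the yield event.
rng = np.random.default_rng(0)
disp = np.linspace(0.0, 10.0, 2000)                 # punch displacement [mm]
force = 40.0 * np.log1p(disp)                       # logarithmic trend [N]
force += 8.0 * np.exp(-((disp - 4.0) / 0.15) ** 2)  # local yield event
force += rng.normal(0.0, 0.2, disp.size)            # measurement noise

# Remove the trend with a low-order polynomial baseline, loosely mirroring
# the fitted-baseline subtraction described above.
baseline = np.polyval(np.polyfit(disp, force, 2), disp)
signal = force - baseline

# Light smoothing (10-point window) before searching for local maxima.
smoothed = np.convolve(signal, np.ones(10) / 10.0, mode="same")

# The yield limit (YL) is taken as the first prominent local maximum.
peaks, _ = find_peaks(smoothed, prominence=2.0, distance=100)
yl = peaks[0]
print(f"yield limit ~ {force[yl]:.1f} N at {disp[yl]:.2f} mm displacement")
```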
The punch displacement value at 5 N was used to calculate the initial density of the samples d_i. Previously published data by the authors were used as control groups: native allografts (NA) (Putzer et al. 2014a) and allografts with optimized grain size distribution (OG) (Putzer et al. 2014b) were evaluated. Measurements of each variable and group were tested for normal distribution using the Kolmogorov-Smirnov test. Comparisons before and after compaction were analyzed using the two-tailed t test for dependent samples. All groups were tested for statistically significant differences between each other. Results NA, OG, BL, PC and PG had a normal distribution for all investigated measurement parameters. CB showed a normal distribution except for the yield limit before compaction. In BL one outlier was eliminated for the yield limit before and one outlier after compaction. In CB one outlier was eliminated for the initial density, two outliers were eliminated for the density at the yield limit and two for the yield limit when uncompacted. After compaction, in CB one outlier was eliminated for the yield limit. PC showed two outliers for the initial density when uncompacted. PG showed two outliers for the yield limit before compaction. No statistically significant change of the initial density was observed after compaction for BL and PC (Table 2). In NA a statistically significant increase of 22% and in OG an increase of 34% was observed, while CB showed a statistically significant increase of the initial density of 10% and PG increased its initial density after compaction by 13%. Considering the density at the yield limit before and after compaction, BL showed a statistically significant increase of 13% and PG of 14%, while NA showed an increase of 10% and OG of 22% (Table 3). In CB and PC no statistically significant increase of the density at the yield limit could be observed. All groups showed a statistically significant difference when comparing the yield limit before and after compaction (Table 4). BL and PC showed an approximately 35% higher yield limit after compaction, while in the groups with the activation liquid, CB and PG, the yield limit increased by 15% for CB and 20% for PG. NA showed an increase of the yield limit of 80% and OG of 90%. The uncompacted initial density and the uncompacted yield limit showed homogeneity of variances. All other variables did not show homogeneity of variances. Fig. 1: (a) Force-displacement curves from the uniaxial compression test using a testing machine. (b) In all curves a fitted baseline with 50 anchor points (squares) was subtracted to remove the logarithmic trend. (c) The signal was analyzed for positive local maxima; the yield limit (YL) was determined as the first local maximum on the force curve and is indicated by a line in all three graphs. The OG group showed a statistically significantly lower initial density and a lower density at the yield limit than all other groups before and after compaction (NA, BL, PG, CB and PC) (p ≤ 0.006). NA showed a statistically lower initial density than BL, PG, CB and PC (p ≤ 0.006) before compaction. All other groups (BL, PG, CB and PC) showed no statistically significant difference between each other (p > 0.376). After compaction no statistical significance was found for any pairwise comparison between NA, BL, CB, PC and PG (p > 0.105), except for a higher initial density, which was found for PG in comparison to BL (p = 0.030). (In the tables, the difference was calculated as a percentage, and the p value of the t test comparing before and after compaction is reported.) 
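The statistical workflow described above (Kolmogorov-Smirnov normality check, then a two-tailed t test for dependent samples) can be sketched as follows; the numbers are simulated placeholders, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholder yield-limit values [N] for one group of n = 20 samples,
# measured before and after the standardized compaction procedure.
yl_before = rng.normal(100.0, 10.0, 20)
yl_after = 1.35 * yl_before + rng.normal(0.0, 5.0, 20)  # ~35% increase, as in BL/PC

# Kolmogorov-Smirnov test against a normal distribution fitted to each sample.
for label, sample in (("before", yl_before), ("after", yl_after)):
    ks = stats.kstest(sample, "norm", args=(sample.mean(), sample.std(ddof=1)))
    print(f"KS normality ({label}): p = {ks.pvalue:.3f}")

# Two-tailed t test for dependent (paired) samples: before vs. after compaction.
t = stats.ttest_rel(yl_before, yl_after)
change = 100.0 * (yl_after.mean() / yl_before.mean() - 1.0)
print(f"paired t test: p = {t.pvalue:.2e}, mean change = {change:+.0f}%")
```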
All pairwise comparisons between NA, BL, CB, PC and PG of the density at the yield limit before and after compaction did not reach the statistical significance level (p > 0.080). When considering the yield limit, a statistically significantly lower value was found for CB in comparison to PC (p = 0.027), to NA (p = 0.008), to BL (p = 0.016) and to OG (p = 0.003) before compaction. After compaction OG showed a statistically significantly higher yield limit in comparison to the other groups (p ≤ 0.001). NA showed a statistically higher yield limit in comparison to PG (p = 0.038) after compaction. All other pairwise comparisons between groups did not reach the statistical significance level (p > 0.077), both before and after compaction. Discussion Adding blood, PRP or PLC to allografts has been shown in different studies to enhance bone ingrowth (Khan et al. 2005; Anitua et al. 2004; Hannink et al. 2009; Baylink et al. 1993; Canalis et al. 1989; Canalis 1985; Lozada et al. 2001; Cenni et al. 2010; Blair and Flaumenhaft 2009). It is therefore a promising additive to chemically cleaned allografts, where growth factors may be washed out by the cleaning procedure itself. Our measurements showed that the yield limit of the four differently prepared allografts did not significantly differ from each other after compaction and did not differ in comparison to native allografts. However, a statistically significant difference was found in comparison to dried allografts with optimized grain size distribution. These findings are in accordance with several other studies (Putzer et al. 2014a, b; Fosse et al. 2006; Voor et al. 2004). OG showed a statistically significantly lower initial density and a lower density at the yield limit before and after compaction in all cases, as in the preparation process no liquids were added, resulting in a sample weight of 8 g. All samples from the other groups had a weight of 16 g. In CB and PG the initial density was increased by more than 10%, while the other two groups, where clotting was not activated, did not show a statistically significant difference. The activation of the clotting may have induced a better interlocking between particles in the uncompacted allografts. After compaction the initial density was similar between all groups; PG had the highest initial density after compaction, which was statistically different from BL. As PG can be considered a gel, it can be deduced that the gel may better absorb kinetic energy during the standardized compaction procedure and therefore not reduce its volume as much as BL. When considering the density at the yield limit, a statistically significant difference between before and after compaction was found for BL and PG, which was again higher than 10%. Between the five groups under investigation no statistically significant difference was found. When considering the means of each group after compaction, they are surprisingly similar for BL, CB, PC, PG and NA, which could be an indication of the effect of mixing with blood in BL, CB and NA and of mixing with platelets in PC and PG. However, no statistical evidence was found for this observation, and the samples did not differ from native allografts. The yield limit improved significantly in all cases after compaction. In the case of the clotted groups CB and PG the difference before and after compaction was less pronounced (< 21%) than for the non-activated groups BL and PC, where an increase of > 35% was observed. 
In the uncompacted state the yield limit seemed to be higher for BL and PC than for CB and PG, although no statistically significant difference between groups could be found. After compaction the yield limit of all four groups (BL, PC, CB and PG) reached similar values and no difference could be found. It can be deduced that the liquid phase is absorbed in the spongious allograft material and plays a minor role in the mechanical properties. OG showed the highest yield limit after compaction, which shows the benefit of reducing the liquid and fatty content for improving the mechanical interlocking of the particles. NA showed a statistically higher yield limit in comparison to PG (p = 0.038) after compaction; however, all other pairwise comparisons between groups did not reach the statistical significance level (p > 0.077), both before and after compaction. This indicates that all four mixtures are similar to native bone. Their usage can be recommended, especially the platelet concentrate gel, as it should contain the highest amount of GFs while having mechanical properties similar to native allografts. Several studies show that the fat and liquid content of allografts reduces their primary stability (Putzer et al. 2014a; Fosse et al. 2006; Voor et al. 2004). However, the authors believe that optimizing the grain size distribution (Putzer et al. 2014b), defatting the graft material with an appropriate cleaning procedure (Coraca-Huber et al. 2013; Wurm et al. 2016) and adding GFs using platelet concentrate gel or PRP will enhance primary stability and speed up bone ingrowth. To reduce patient-specific properties all samples were carefully remixed before usage to reduce any biasing effect. A part of the liquids may be lost during the compaction process, altering the sample composition during the measurements. In our experiments a large quantity of liquids (50% by weight) was added to compensate for any liquid loss during the measurements. Bone quality was not assessed radiologically by the authors; however, all samples were previously screened for osteoporosis according to the quality guidelines of the local bone bank. Conclusion In conclusion, the study shows that there was no statistically significant difference in the yield limit between allografts mixed with blood, clotted blood, platelet concentrate or platelet concentrate gel in comparison to native allografts. All of them are therefore suitable, from a mechanical point of view, for use in bone impaction grafting to enhance bone remodeling by adding growth factors. From the literature it seems that platelet concentrate gel or PRP has the highest chance of speeding up bone ingrowth. Since adding liquids could decrease primary stability in comparison to dry allografts, an optimum level of liquid content still needs to be defined. The authors recommend chemically cleaning allografts for large defects, optimizing their grain size distribution and adding GFs to enhance bone ingrowth. All of these findings have to be evaluated and tested in an in vivo study for further applicability. Author contribution All authors have seen and concur with the contents of the manuscript. All authors have made substantial contributions and were involved in the study as well as the preparation of the manuscript. The authors declare that the material within the submitted paper has not been and will not be submitted for publication elsewhere, including electronically in the same form, in English or in any other language, without the written consent of the copyright-holder. 
Compliance with ethical standards Conflict of interest This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Ethical approval Ethical approval was not required for this study. Informed consent Informed consent was obtained from all human tissue donors. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
2018-06-21T12:41:03.789Z
2018-05-31T00:00:00.000
{ "year": 2018, "sha1": "80984ff4f7cfafe059bddf7fcd953b3ae80b7e4e", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10561-018-9704-3.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "62078d5c8255735b4ac3404169451ebf2158cdd8", "s2fieldsofstudy": [ "Medicine", "Materials Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
16329538
pes2o/s2orc
v3-fos-license
Screw Photon-Like (3+1)-Solitons in Extended Electrodynamics This paper aims to present explicit photon-like (3+1) spatially finite soliton solutions of screw type to the vacuum field equations of Extended Electrodynamics (EED) in relativistic formulation. We begin by emphasizing the need for spatially finite soliton modelling of microobjects. Then we briefly comment on the properties of solitons and photons and recall some facts from EED. Making use of the localizing functions from differential topology (used in the partition of unity) we explicitly construct spatially finite screw solutions. Further, a new description of the spin momentum inside EED, based on the notion of energy-momentum exchange between $F$ and $*F$, is introduced and used to compute the integral spin momentum of a screw soliton. The consistency between the spatial and time periodicity naturally leads to a particular relation between the longitudinal and transverse sizes of the screw solution, namely, it is equal to $\pi$. The Planck formula $E=h\nu$ in the form $ET=h$ arises as a measure of the integral spin momentum. Introduction The very notion of really existing objects, i.e. physical objects carrying energy-momentum, necessarily implies that all such objects must have definite stability properties, as well as properties that do not change with time; otherwise everything would constantly change and we could not talk about objects and structure at all; moreover, no memory and no knowledge would be possible. Through definite procedures of measurement we determine, where and when this is possible, quantitative characteristics of the physical objects. The characteristics obtained differ in their nature and qualities; in their significance for understanding the structure of the objects they characterize; in their ability to characterize interaction among objects; and in their universality. Natural objects may be classified according to various principles. The classical point-like objects (usually called particles) are allowed to interact continuously with each other just through exchanging (through some mediator, usually called a field) universal conserved quantities: energy, momentum, angular momentum, so that the set of objects "before" interaction is the same as the set of objects "after" interaction; no objects have disappeared and no new objects have appeared, only the conserved quantities have been redistributed. This is in accordance with the assumption of point-likeness, i.e. particles are assumed to have no internal structure, so they are indestructible. Hence, classical particles may be subject only to elastic interaction. Turning to the study of the set of microobjects, usually called elementary particles (photons, electrons, etc.), physicists have found out, in contrast to the case of classical particles, that a given set of microobjects may transform into another set of microobjects under definite conditions, for example in the well known annihilation process (e⁺, e⁻) → 2γ. These transformations also obey energy-momentum and angular+spin momentum conservation, but some features may disappear (e.g. the electric charge) and new features (e.g. motion with the highest velocity) may appear. Hence, microobjects may be destroyed, so they have structure and, consequently, they do NOT admit the approximation of "point-like objects". In view of this we may conclude that any theory aiming to describe their behaviour must take their structure into account. 
In particular, assuming that the Planck formula E = hν is valid for a free microobject, the only reasonable way to understand where the characteristic frequency comes from is to assume that this microobject has a periodic dynamical structure. The idea of conservation as a dynamics-generating rule was realized and implemented in physics first by Newton a few centuries ago in the frame of real (classical) bodies: he invented the quantity momentum, p, for an isolated body (treated as a point, or structureless, object) as its integral characteristic, postulated its time-constancy (i.e. conservation), and wrote down his famous equation $\dot{\mathbf{p}} = \mathbf{F}$. This equation says that the integral characteristic momentum of a given point-like object may (smoothly) increase only if some other object loses (smoothly) the same quantity of momentum. The concept of force, F, measures the momentum transferred per unit time. The conserved quantities energy and angular momentum, consistent with momentum conservation, were also appropriately incorporated. The most universal among these turned out to be the energy, since, as far as we know, every natural object carries energy. This property of universality makes energy quite distinguished among the other conserved quantities, because its change may serve as a reliable measure of any kind of interaction and transformation of real objects. We especially note that all these conserved quantities are carried by some object(s); no energy-momentum can exist without corresponding carrier objects. In this sense, the usual words "energy quanta" are senseless if the corresponding carriers are not pointed out. So, theoretical physics started by idealizing the natural objects as particles, i.e. objects without structure, and the real world was theoretically viewed as a collection of interacting, i.e. energy-momentum exchanging, particles. As far as the behaviour of the real objects as a whole is concerned, and the interactions considered do NOT lead to destruction of the bodies-particles, this theoretical model of the real world worked well. Nineteenth-century physics, due mainly to Faraday and Maxwell, created the theoretical concept of the electromagnetic field as the interaction-carrying object responsible for the observed mutual influence between distant electrically charged (point-like) objects. This concept presents the electromagnetic field as an extended (in fact, infinite) continuous object having dynamical structure, and although it influences the behaviour of the charged particles, it does NOT destroy them. The theory of the electromagnetic field was also based on balance relations among quantities of a new kind. Actually, the new concepts of flux of a vector field through a 2-dimensional surface and circulation of a vector field along a closed curve were coined and used extensively. The Faraday-Maxwell equations in their integral form establish, in fact, where the time-changes of the fluxes of the electric and magnetic fields go to, or come from, in both the case of a closed 2-surface and that of a non-closed 2-surface with a boundary, and in this sense they introduce a kind of balance relations. We note that these fluxes are new quantities, specific to the continuous character of the physical object under consideration; the field equations of Faraday-Maxwell do NOT directly express energy-momentum balance relations as the above-mentioned Newton law $\dot{\mathbf{p}} = \mathbf{F}$ does. Nevertheless, they are consistent with energy-momentum conservation, as is well known. 
The corresponding local energy-momentum quantities turn out to be quadratic functions of the electric and magnetic vectors. Although very useful for considerations in finite regions with boundary conditions, the pure field Maxwell equations have time-dependent solutions in the whole space that could hardly be considered as mathematical models of really existing fields. As a rule, if these solutions are time-stable and not static, they occupy the whole 3-space or an infinite sub-region of it (e.g. plane waves) and, hence, they carry infinite energy and momentum (infinite objects). On the other hand, according to the Cauchy theorem for the d'Alembert wave equation [1], which is necessarily satisfied by any component of the vacuum field in Maxwell theory, every finite (and smooth enough) initial field configuration is strongly time-unstable: the initial condition blows up radially and goes to infinity, and its forefront and backfront propagate with the velocity of light. Hence, the Faraday-Maxwell equations cannot describe finite time-stable localized field objects. The inconsistencies between theory and experiment appeared on a full scale at the end of the 19th century, and it soon became clear that they were not avoidable in the frame of classical physics. After Planck and Einstein created the notion of the elementary field quantum, later named photon by Lewis [2], physicists faced the above-mentioned problem: the light quanta appeared to be real objects of a new kind; namely, they did NOT admit the point-like approximation as Newton's particles did. In fact, every photon propagates (translationally) as a whole with a constant velocity and keeps unchanged the energy-momentum it carries, which should mean that it is a free object. On the other hand, it satisfies the Planck relation E = hν, which means that the very existence of photons is intrinsically connected with a periodical process of frequency ν, and periodical processes in classical physics are generated by external force fields, which means that the object should not be free. The efforts to overcome this undesirable situation resulted in the appearance of quantum theory; quantum electrodynamics was built, but the assumption that "the point-like approximation works" was kept as a building stone of the new theory, and this brought the theory to some of the well known singularity problems. Modern theory tries to pay more respect to the view that the point-like approximation does NOT work in principle in the set of microobjects satisfying the above Planck formula. In other words, the right theoretical notion for these objects should be that of extended continuous finite objects, or finite field objects. During the last 30 years physicists have been trying seriously to implement in theory the "extended point of view" on microobjects, mainly through the string/brane theories, but the difficulties these theories meet, generated partly by the great purposes they set before themselves, still do not allow satisfactory results to be obtained. Of course, attempts to create an extended point of view on elementary particles different from the string/brane theory approach have been made and are being made these days [3]. Anyway, we have to admit now, one century after the discovery of the Planck formula, that we still do not have a complete and satisfactory self-consistent theory of single photons. 
So, the creation of a self-consistent extended point of view and the working out of a corresponding theory is still a challenge, and this paper aims to consider such an extended point of view on photons as screw solitons in the frame of the newly developed EED [4]. First we summarize the main features/properties of solitons and photons. Solitons and Photons The concept of soliton appears in physics as a nonlinear elaboration, physical and mathematical, of the general notion of an excitation in a medium. It includes the following features: 1. The medium is homogeneous, isotropic and has definite properties of elasticity. 2. The excitation does not destroy the medium. 3. The excitation is finite: at every moment it occupies a comparatively small volume of the medium; it carries finite quantities of energy-momentum and angular momentum, and of any other physical quantity too; it may have a translational-rotational (possibly time-periodic) dynamical structure, i.e. besides its straight-line propagation as a whole it may have internal rotational degrees of freedom. 4. The excitation is time-stable, i.e. in the absence of external perturbations its dynamical evolution does not lead to self-ruin. In particular, the spatial shape of the excitation does not (significantly) change during its propagation. The above 4 features outline the physical notion of a solitary wave. A solitary wave becomes a soliton if, in addition, it has the following property of stability: 5. The excitation survives when it collides with another excitation of the same nature. We make some comments on features 1-5. Feature 1 requires homogeneity and some elastic properties of the medium, which means that it is capable of bearing the excitation, and every region of it subject to the excitation, i.e. dragged out of its natural (equilibrium) state, is capable of recovering entirely after the excitation leaves that region. Feature 2 puts limitations on the excitations considered in view of the medium's properties: they should not destroy the medium. Feature 3 is very important, since it requires the finite nature of the excitations; it enables them to represent some initial-level self-organized physical objects with dynamical structure, so that these objects "feel good" in this medium. This finite-nature assumption admits only such excitations as may be created and destroyed; no point-like and/or infinite excitations are admitted. The excitation interacts permanently with the medium, and the available time periodicity may be interpreted as a measure of this interaction. Feature 4 guarantees the very existence of the excitation in this medium, and the shape keeping during propagation allows its recognition and identification when observed from outside. This feature 4 carries, in some sense, Newton's first principle from the mechanics of particles over to the dynamics of continuous finite objects; it implies conservation of energy-momentum and of the other characteristic quantities of the excitation. The last feature 5 is frequently not taken into view, especially when one considers single excitations. But in the presence of many excitations in a given region it allows only such kind of interaction between/among the excitations as does not destroy them, so that the excitations come out of the interaction (almost) the same. This feature is a continuous version of the elastic collisions of particles. II. MATHEMATICAL 1. The excitation-defining functions $\Phi_a$ are components of one mathematical object (usually a section of a vector/tensor bundle) and depend on n spatial and 1 time coordinates. 2. 
The components $\Phi_a$ satisfy some system of nonlinear partial differential equations (except in the case of the (1+1) linear wave equation) and admit some "running wave" dynamics as a whole, together with the available internal dynamics. 3. There are (infinitely) many conservation laws. 4. The components $\Phi_a$ are localized (or finite) functions with respect to the spatial coordinates, and the conserved quantities are finite. 5. The multisoliton solutions, describing elastic interaction (collision), tend to many single-soliton solutions as t → ∞. Comments: 1. Feature 1 introduces some notion of integrity: one excitation, one mathematical object, although with many algebraically independent but differentially interrelated (through the equations) components $\Phi_a$. 2. Usually the system of PDEs is of evolution kind, so that the excitation is modelled as a dynamical system: the initial configuration fully determines the evolution. The "running wave" dynamics as a whole introduces Galileo/Lorentz invariance and corresponds to physical feature 4. The nonlinearity of the equations is meant to guarantee the spatially localized (finite) nature of the solutions. 3. The infinitely many conservation laws frequently lead to complete integrability of the equations. 4. The spatially localized nature of $\Phi_a$ represents the finite nature of the excitation. 5. The many-single-soliton asymptotics as t → ∞ of a multisoliton solution mathematically represents the elastic character of the interactions admitted, and so takes care of the stability of the physical objects being modelled. The above physical/mathematical features are not always strictly accounted for in the literature. For example, the word soliton is frequently used for a solitary-wave excitation. Another example is the usage of the word soliton just when the energy density, being usually a quadratic function of the corresponding $\Phi_a$, has the above soliton properties [5]. Also, one usually meets this soliton terminology for spatially localized $\Phi_a$, i.e. going to zero at spatial infinity, but not spatially finite $\Phi_a$, i.e. such that the spatial support of $\Phi_a$ is a compact set. In fact, all soliton solutions of the well known KdV, SG and NLS equations are localized and not finite. It is curious that the linear (1+1) wave equation has spatially finite soliton solutions of arbitrary shape. Further in this paper we shall present 1-soliton screw solutions of the vacuum EED equations, so we may, and shall, use the more attractive word soliton for a solitary wave. We hope this will not bring any trouble to the reader. The screw soliton solutions we are going to present are of photon-like character, i.e. the velocity of their translational component of propagation is equal to the velocity of light c, and besides the energy-momentum, they also carry internal angular (spin, helicity) momentum, accounting for the available rotational component of propagation. Therefore, this seems to be the proper place to recall some of the well known properties of photons. First of all, as explained in the Introduction, photons should not be considered as point-like objects, since they respect the Planck relation E = hν and carry internal (spin) momentum, so they have to be considered as extended finite objects with a periodic rotational-translational dynamical structure. 
Therefore, we assume that the concept of soliton, as described above, may serve as a good mathematical tool in trying to represent the real photons mathematically, so that their well known integral properties appear as determined by their dynamical structure. We now give some of the properties of photons that are more important for our purposes. 3. The existence of photons is generically connected with some time-periodic process of period T and frequency ν = T⁻¹, so that the Planck relation E = hν, or ET = h, where h is the Planck constant, holds. 4. Every single photon carries momentum p with |p| = hν/c and spin momentum equal to the Planck constant h. 5. Photons are polarized objects. The polarization of every single photon is determined through the relation between the translational and rotational directions of propagation; hence, the 3-dimensionality of the real space allows just two polarizations. We call the polarization right if, when looking along its translational component of propagation, i.e. from behind, we find its rotational component of propagation to be clockwise, and the polarization is left when, under the same conditions, we find an anticlockwise rotational component of propagation. 6. Photons do not interact with each other, i.e. they pass through each other without changes. These well known properties of photons would hardly need any comments. However, in our opinion, these properties strongly suggest making use of the soliton concept for working out a mathematical model of their structure and propagation. And this was one of the main reasons to develop the extension of Faraday-Maxwell theory to what we now call Extended Electrodynamics (EED). We proceed now to recall the basics of EED, in the frame of which the screw soliton model of photons will be worked out. In terms of δ the vacuum Maxwell equations are given by $$\delta F = 0, \qquad \delta *F = 0. \qquad (1)$$ In EED the above equations (1) are extended to a nonlinear system (2). In components, equations (2) read, respectively, $$F_{\mu\nu}(\delta F)^\nu = 0, \qquad (*F)_{\mu\nu}(\delta *F)^\nu = 0, \qquad F_{\mu\nu}(\delta *F)^\nu + (*F)_{\mu\nu}(\delta F)^\nu = 0. \qquad (3)$$ The Maxwell energy-momentum tensor is assumed as the energy-momentum tensor in EED, because its divergence is obviously zero on the solutions of equations (3), and no problems of Maxwell theory at this point are known. The physical sense of equations (3) is, obviously, local energy-momentum redistribution during the time evolution: the first two equations say that F and *F keep their energy-momentum locally, and the third equation says (in correspondence with the first two) that the energy-momentum transferred from F to *F is always locally equal to the energy-momentum transferred from *F to F; hence, any of the two expressions $F_{\mu\nu}(\delta *F)^\nu$ and $(*F)_{\mu\nu}(\delta F)^\nu$ may be considered as a measure of the rotational component of the energy-momentum redistribution between F and *F during propagation (recall that the spatial part of δF is rot B and the spatial part of δ*F is rot E). Obviously, all solutions of (1) are solutions of (3), but equations (3) have more solutions. In particular, those solutions of (3) which satisfy the relations $\delta F \neq 0$, $\delta *F \neq 0$ are called nonlinear. Further on we are going to consider only the nonlinear solutions of (3). 
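The claim that the Maxwell energy-momentum tensor is divergence-free on solutions of (3) follows from a standard identity; the following sketch is written in our own notation, and the signs and normalization may differ from the paper's conventions:

```latex
% Maxwell tensor and its divergence for an arbitrary antisymmetric F
% (signs and normalization are convention-dependent):
\[
  Q_\mu^{\ \nu}
  = -\tfrac{1}{2}\left[ F_{\mu\sigma}F^{\nu\sigma}
      + (*F)_{\mu\sigma}(*F)^{\nu\sigma} \right],
  \qquad
  \nabla_\nu Q_\mu^{\ \nu}
  = F_{\mu\nu}(\delta F)^{\nu} + (*F)_{\mu\nu}(\delta *F)^{\nu}.
\]
% The first two equations of (3) annihilate each term on the right-hand
% side separately, so \(\nabla_\nu Q_\mu^{\ \nu} = 0\) on every solution,
% linear or nonlinear.
```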
Some of the basic results of our previous studies of the nonlinear solutions of equations (3) can be summarized in the following way: for every nonlinear solution (F, *F) of (3) there exists a canonical system of coordinates (x, y, z, ξ) in which the solution is fully represented by two functions, Φ(x, y, ξ + εz), ε = ±1, and ϕ(x, y, z, ξ), |ϕ| ≤ 1. We call Φ the amplitude function and ϕ the phase function of the solution. The condition |ϕ| ≤ 1 allows us to set ϕ = cos ψ; further on we are going to work with ψ, and ψ will be called the phase. As we showed [4], the two functions Φ and ϕ may be introduced in a coordinate-free manner, so they have a well defined invariant sense. Every nonlinear solution satisfies a number of important relations; we recall here the scale factor L, defined by the relation L = |Φ|/|δF|. A simple calculation shows that in these coordinates it depends only on the derivatives of ψ and is given by $L = |\psi_\xi - \varepsilon\psi_z|^{-1}$. Screw Soliton Solutions in Extended Electrodynamics Note that EED considers the field as having two components: F and *F. As we mentioned earlier, the third equation of (3) describes how much energy-momentum is redistributed locally with time between the two components F and *F of the field: $F_{\mu\nu}(\delta *F)^\nu dx^\mu$ gives the transfer from F to *F, and $(*F)_{\mu\nu}(\delta F)^\nu dx^\mu$ gives the transfer from *F to F; thus, if there is such an energy-momentum exchange, equations (3) require permanent and equal mutual energy-momentum transfers between F and *F. Since F and *F are always orthogonal to each other [$F_{\mu\nu}(*F)^{\mu\nu} = 0$], and these two mutual transfers depend on the derivatives of the field functions through δF and δ*F (i.e. through rot B and rot E, which are not equal to zero in general), we may interpret this property of the solution as a description of an internal rotation-like component of the general dynamics of the field. Hence, any of the two expressions $F_{\mu\nu}(\delta *F)^\nu dx^\mu$ or $(*F)_{\mu\nu}(\delta F)^\nu dx^\mu$, having the sense of a local energy-momentum change, may serve as a natural measure of this rotational component of the energy-momentum redistribution during the propagation. Therefore, after an appropriate normalization, we may interpret any of the two 3-forms $(*F)\wedge(\delta *F)$ and $F\wedge\delta F$ as the local spin-momentum of the solution. Making use of the above expressions for F and *F we compute $F\wedge\delta F = (*F)\wedge(\delta *F)$; since Φ ≠ 0, the resulting expression shows that we shall have nonzero local spin-momentum only if ψ is not a running wave along z. The above idea of considering the 3-form F∧δF as a measure of the spin momentum of the solution also suggests some additional equation for ψ, because the spin momentum is a conserved quantity and its integral over the 3-space should not depend on the time variable ξ. This requires having some nontrivial closed 3-form on R⁴ such that, when restricted to the 3-space and (spatially) integrated, it gives the integral spin momentum of the solution. Therefore we assume the additional equation $$d(F\wedge\delta F) = 0. \qquad (10)$$ In our system of coordinates this equation reduces to an equation for the phase ψ alone, whose relevant solutions fall into three families, denoted below by 1°, 2° and 3°. The running wave solutions ψ₁, defined by 1°, lead to F∧δF = 0 and to |δF| = 0, and for this reason they have to be ignored. The solutions ψ₂ and ψ₃, defined respectively by 2° and 3°, give the same scale factor L = 1/|g|. 
Since at all spatial points where the field is different from zero we have ξ + εz = const, we may choose |g(x, y, ξ + εz)| = 1/l(x, y) > 0, so we obtain the following non-running-wave solutions of (10): $$\psi_2 = \pm\frac{\xi}{l(x,y)} + b(x,y), \qquad \psi_3 = \kappa\,\frac{z}{l(x,y)} + b(x,y), \qquad (11)$$ where κ = ±1 accounts for the two different polarizations. Clearly, the physical dimension of l(x, y) is length, b(x, y) is dimensionless, and the scale factor is L = l(x, y). The corresponding electric E and magnetic B vectors for the case 2°, in view of (11), are $$\mathbf{E} = \Big(\Phi\cos\big(\pm\tfrac{\xi}{l} + b\big),\ \Phi\sin\big(\pm\tfrac{\xi}{l} + b\big),\ 0\Big), \qquad (12)$$ $$\mathbf{B} = \Big(\varepsilon\Phi\sin\big(\pm\tfrac{\xi}{l} + b\big),\ -\varepsilon\Phi\cos\big(\pm\tfrac{\xi}{l} + b\big),\ 0\Big), \qquad (13)$$ where Φ = Φ(x, y, ξ + εz), l = l(x, y) and b = b(x, y). A characteristic feature of the solutions defined by (12)-(13) is that the direction of the electric and magnetic vectors at some initial moment t_o, as seen from (12)-(13), is entirely determined by (ψ₂) at t_o (under L = 1/|g| = l(x, y)), and so does not depend on the coordinate z. Therefore, at all points of the 3-region Ω_o occupied by the solution at t_o, under the additional conditions L = l(x, y) = const and b(x, y) = const, the directions of the representatives of E at all spatial points will be the same, independently of the spatial shape of the 3-region Ω_o. At every subsequent moment this common direction will be rotationally displaced, but it will stay the same for the representatives of E at the different spatial points. The same feature will hold for the representatives of B too. Thus, the representatives of the couple (E, B) will rotate in time, clockwise (κ = −1) or anticlockwise (κ = 1), coherently at all spatial points, so these solutions show no twist-like, or screw, propagation component even if the region Ω_o has a screw shape. Hence, these solutions may be soliton-like, but not of screw kind. The electric E and magnetic B vectors for the case 3°, in view of (11), are $$\mathbf{E} = \Big(\Phi\cos\big(\kappa\tfrac{z}{l} + b\big),\ \Phi\sin\big(\kappa\tfrac{z}{l} + b\big),\ 0\Big), \qquad (14)$$ $$\mathbf{B} = \Big(\varepsilon\Phi\sin\big(\kappa\tfrac{z}{l} + b\big),\ -\varepsilon\Phi\cos\big(\kappa\tfrac{z}{l} + b\big),\ 0\Big). \qquad (15)$$ We are going to use this particular solution (14)-(15) to construct a theoretical example of a screw photon-like soliton solution. We have to give an explicit form of the amplitude function Φ. In accordance with the finite nature of the solution, Φ must have compact spatial support, and this guarantees that the solution is finite, because the phase function ϕ = cos ψ is bounded, so the products Φ·cos ψ and Φ·sin ψ are also finite. First we outline the idea (Fig. 1). We mean to choose Φ = Φ_o at ξ = 0 in such a way that Φ_o(x, y, z) is localized inside a screw cylinder of radius r_o and height |z₂ − z₁| = 2πl_o; we shall denote this region by Ω_o, and the z-prolongation of Ω_o will be an infinite screw cylinder denoted by Ω. Let this screw cylinder have the coordinate axis z as an outside axis, i.e. Ω winds around the z-axis and never crosses it. The initial configuration Ω_o shall be made to propagate along Ω through both a z-translation and a consistent rotation around itself and around the z-axis. This kind of propagation will be achieved only if every spatial point inside Ω_o keeps moving along its own screw line and never crosses its neighbours' screw lines. We consider the plane (x, y) at z = 0 and choose a point P = (a, a), a > 0, in the first quadrant of this plane, so that the points (x, y, z = 0) where Φ_o(x, y, z = 0) ≠ 0 lie inside the circle (x − a)² + (y − a)² ≤ r_o² centered at P = (a, a), and the distance a√2 between P and the point (0, 0, 0) is much greater than r_o. Now we consider the function $$\frac{1}{\cosh\left[(x-a)^2 + (y-a)^2\right]}.$$ It is concentrated mainly inside Ω and goes to zero outside Ω, although it becomes zero only at infinity. 
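The component expressions (12)-(15) above are our reconstruction of a garbled span, so a quick symbolic sanity check is worth recording: for a phase of the form (11) the fields should satisfy E·B = 0 and |E| = |B|, consistent with the orthogonality relation $F_{\mu\nu}(*F)^{\mu\nu} = 0$ quoted earlier. A minimal sympy sketch (all names and the check itself are ours, not the paper's):

```python
import sympy as sp

x, y, z, xi, eps, kappa = sp.symbols("x y z xi varepsilon kappa")
Phi = sp.Function("Phi")(x, y, xi + eps * z)  # amplitude in canonical coordinates
l = sp.Function("l")(x, y)                    # scale factor
b = sp.Function("b")(x, y)                    # transverse phase offset

psi = kappa * z / l + b                       # case-3 phase, cf. (11)
E = sp.Matrix([Phi * sp.cos(psi), Phi * sp.sin(psi), 0])
B = sp.Matrix([eps * Phi * sp.sin(psi), -eps * Phi * sp.cos(psi), 0])

print(sp.simplify(E.dot(B)))                            # 0: E is orthogonal to B
print(sp.simplify((E.dot(E) - B.dot(B)).subs(eps, 1)))  # 0 for eps = ±1: |E| = |B|
```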
Restricted to the plane z = 0, this function 1/cosh[(x−a)² + (y−a)²] is concentrated inside the circle A_{r_o} of radius r_o around the point P, and it becomes zero at infinity (x = ∞, y = ∞). We want this concentrated but still infinite object to become finite, i.e. to become zero outside Ω. In order to do this we shall make use of the so-called localizing functions, which are smooth everywhere, are equal to 1 on some compact set A and quickly go to zero outside A. (These functions are very important in differential topology for making partitions of unity and for gluing up various structures [6].) We shall denote these functions by θ(x, y, ...). Let θ(x, y; r_o) be a localizing function around the point P in the plane (x, y), such that it is equal to 1 inside the circle A_r centered at P of radius r, where r is a bit shorter than, but nearly equal to, r_o, and θ(x, y; r_o) = 0 outside the circle A_{r_o}. We now modify the above function to $$\theta(x, y; r_o)\,\frac{1}{\cosh\left[(x-a)^2 + (y-a)^2\right]}.$$ Let now θ(z; l_o) be a localizing function with respect to the interval (z₁, z₂), z₂ > z₁, with |z₂ − z₁| = 2πl_o, so that the resulting product is different from zero only inside Ω_o. We now choose the scale factor l(x, y) to be a constant equal to l_o, and, making use of the phase ψ₃ with κ = 1, we define the function $$E_1 = C\,\theta(x, y; r_o)\,\theta(z; l_o)\,\frac{1}{\cosh\left[(x-a)^2 + (y-a)^2\right]}\,\cos\Big(\frac{z}{l_o} + b(x, y)\Big), \qquad (16)$$ where C is a constant with the appropriate physical dimension. This function (16) represents the first component E₁ of the electric vector at t = 0. Similarly, for E₂ at t = 0 we write $$E_2 = C\,\theta(x, y; r_o)\,\theta(z; l_o)\,\frac{1}{\cosh\left[(x-a)^2 + (y-a)^2\right]}\,\sin\Big(\frac{z}{l_o} + b(x, y)\Big). \qquad (17)$$ The right hand sides of (16) and (17) define the initial state of the solution, and this initial condition occupies the screw cylinder Ω_o; its internal axis is a screw line away from the z-axis at a distance of d = a√2. Hence, the solution in terms of (E, B) in this system of coordinates will look like (14)-(15) with this amplitude (we assume C > 0 and ε = −1, b(a, a) = 3π/4, and we write just b for b(x, y)). We consider now the vectors E and B on the screw line passing through the point (a, a, 0), where b(a, a) = 3π/4. We choose the coordinates x, y, z as usual: x grows rightwards, y grows upwards, and then z is determined by the requirement to have a right orientation. If we count the phase anticlockwise from the x-axis, then at ξ = ct = 0, at the point P = (a, a, z = 0), the vector B is directed towards the point (0, 0, 0) and E is orthogonal to B in such a way that E × B is directed along +z. When z grows from 0 to 2πl_o, the magnetic vector B on this screw line (which is the central screw line of the screw cylinder) turns around the z-axis and always stays directed towards it, while the electric vector E rotates around the z-axis and is always tangent to this rotation; at the same time the Poynting vector keeps its +z orientation. This situation corresponds to the clockwise polarization (looking from behind). If κ = −1, then the electric vector E always stays directed towards the z-axis, and since we have chosen ε = −1, i.e. the solution propagates along +z, the magnetic vector B rotates around the z-axis anticlockwise (looking from behind) and is always tangent to this rotation. So this corresponds to the anticlockwise polarization. We note once again that any actual assumption b(x, y) = const would make the representatives of the magnetic vector B (when κ = 1) parallel to each other at every point of the plane z = z_o, z₁ < z_o < z₂, at t = 0. This may cause some instabilities with time, so, in order to have all representatives of B (or of E, in the other polarization) at every point of that plane directed towards the z-axis and not parallel to each other, we may use b(x, y) for the necessary corrections. 
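The "localizing functions" invoked above can be realized with the standard smooth step used in partitions of unity. The sketch below is ours alone: it uses a straight cylinder rather than the screw cylinder Ω_o, a constant b, and placeholder values for a, r_o, l_o and C, just to show how a compactly supported initial amplitude of the type (16)-(17) can be assembled and checked numerically:

```python
import numpy as np

def smoothstep(t):
    """C-infinity step: 0 for t <= 0, 1 for t >= 1 (standard
    partition-of-unity building block; the paper fixes no formula)."""
    t = np.clip(t, 0.0, 1.0)
    f = np.where(t > 0.0, np.exp(-1.0 / np.where(t > 0.0, t, 1.0)), 0.0)
    g = np.where(t < 1.0, np.exp(-1.0 / np.where(t < 1.0, 1.0 - t, 1.0)), 0.0)
    return f / (f + g)

def theta(r, r_inner, r_outer):
    """Localizing function: 1 for r <= r_inner, 0 for r >= r_outer."""
    return 1.0 - smoothstep((r - r_inner) / (r_outer - r_inner))

# Hypothetical geometry and constants (placeholders, not values from the paper).
a, r_o, l_o, C = 5.0, 0.5, 1.0, 1.0
x, y, z = np.meshgrid(np.linspace(3.0, 7.0, 81), np.linspace(3.0, 7.0, 81),
                      np.linspace(0.0, 2.0 * np.pi * l_o, 81), indexing="ij")

rho2 = (x - a) ** 2 + (y - a) ** 2          # squared distance from P = (a, a)
b = 3.0 * np.pi / 4.0                       # constant b, for simplicity only
amp = C * theta(np.sqrt(rho2), 0.9 * r_o, r_o) / np.cosh(rho2)
amp *= theta(np.abs(z - np.pi * l_o), 0.9 * np.pi * l_o, np.pi * l_o)

E1 = amp * np.cos(z / l_o + b)              # cf. (16)
E2 = amp * np.sin(z / l_o + b)              # cf. (17)

# Compact support: the initial field vanishes identically outside the cylinder.
outside = np.sqrt(rho2) > r_o
print("max |E1| outside the support region:", np.abs(E1[outside]).max())
```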
The remark above shows the importance of the relation b(x, y) ≠ const: b(x, y) must be equal to the angle between the two lines passing through the points [(0, 0, z_o); (a, a, z_o)] and [(0, 0, z_o); (x, y, z_o)], where (x − a)² + (y − a)² ≤ r_o². Now, every point (x, y, z) inside the solution region follows its own screw line, so that the distance ρ = √(x² + y²) between this screw line and the z-axis is kept the same when z grows. Since the spatial periodicity along z is equal to 2πl_o (note that l_o is in fact the maximum value of l(x, y)), we obtain an interpretation of the scale factor L = l_o as the distance between the z-axis and the screw line inside Ω along which L = l_o. This, in particular, means that for r_o << a√2, i.e. for a very thin screw cylinder, the scale factor L = l(x, y) is approximated by L = l_o θ(x, y; r_o); it also follows that the relation between the longitudinal and transverse sizes of the solution is approximately equal to π. The Spin-momentum We turn now to the computation of the integral spin-momentum (helicity). According to our assumption, its density is given by either of the two correspondingly normalized 3-forms F∧δF and (*F)∧(δ*F). In order to have the appropriate physical dimension we consider the 3-form β proportional to F∧δF; its physical dimension is "energy density × time". Since L = L(x, y) at most, and in view of (8), we see that β is closed: dβ = 0. The restriction of β to R³ (which will also be denoted by β) is also closed, and we may use the Stokes theorem. We shall make use of the solutions 3° of equation (10). We have ψ_ξ = 0, ψ_z = κ/l(x, y), L = |ψ_ξ − εψ_z|⁻¹ = l, so, in view of our approximating assumption L = l_o, we integrate $$\beta = \frac{2\pi l_o}{c}\,\kappa\,\Phi^2\, dx\wedge dy\wedge dz$$ over the 3-space and obtain $$\int_{\mathbb{R}^3}\beta = \kappa\,E\,T,$$ where E is the integral energy of the solution, T = 2πl_o/c is the intrinsically defined time period, and κ = ±1 accounts for the two polarizations. According to our interpretation, this is the integral intrinsic angular momentum, or spin-momentum, of the solution for one period T. This intrinsically defined action ET of the solution is to be identified with the Planck constant h, h = ET, or E = hν, if we are going to interpret the solution as an extended model of a single photon. Remark. For the connection of F∧δF with our earlier definition of the local intrinsic spin-momentum through the Nijenhuis torsion tensor of $F_\mu^{\ \nu}$, one may look at our paper cited as the last one in [5]. Conclusion According to our view, based on the conservation properties of the energy-momentum and spin-momentum, photons are real objects and NOT a theoretical imagination. This view of these microobjects as real extended objects, obeying the famous Planck relation E = hν, almost necessarily brings us to favor the soliton concept as the most appropriate and self-consistent working theoretical tool for now, because no point-like conception is consistent with the availability of frequency and spin-momentum of photons, or with the possibility for photons to be destroyed. So, the dynamical structure of photons is a real thing, clearly manifesting itself through a consistent rotational-translational propagation in space, and their finite nature reveals itself through the finite 3-volumes of definite shape they occupy at every moment of their existence, and through the finite values of the universal conserved quantities they carry. The dynamical point of view on these objects is reflected theoretically in the possibility to make use of more or less arbitrary initial configurations, i.e. 
to consider them as dynamical systems. This important moment allows the localizing functions θ(x, ...) from differential topology to be used for making the spatial dimensions of the solution FINITE, and NOT smoothly vanishing only at infinity, as is the case for the usual soliton solutions. Our theoretical screw example, presenting an exact solution of the nonlinear vacuum EED equations, was meant to give a more or less visual image of the well known properties of photons, of their translational-rotational dynamical structure, and, especially, of the nature of their spin-momentum. Of course, we do not insist on the function 1/cosh(...) chosen; any other function localizable inside the circle A_{r_o} would do the same job in this theoretical example. So far we have no experimental data concerning the shape of single photons, and we do not exclude various shapes, so our choices for the amplitude Φ and for the scale factor L(x, y) are admissible approximations rather than correct mathematical images. The more important moment was to recognize the dynamical sense of the quantity F∧δF, and to find that the spin-momentum conservation equation d(F∧δF) = 0 gives solutions for the phase function ϕ = cos ψ, which helps very much to visualize the dynamical properties of photons in our approach. So we are able to obtain a general expression for the integral spin-momentum, which, in fact, is the Planck formula h = ET. Moreover, the whole solution (F, *F) naturally describes the polarization properties of photons; it clearly distinguishes the clockwise and anticlockwise polarizations by pointing out the different roles of E and B in the two cases. An important moment in our approach is that F and *F are considered as two components of the same solution, so the couples (E, B) and (−B, E) together give the 3-dimensional picture of one solution. This, of course, would require the full energy-momentum tensor $Q_\mu^{\ \nu}$ to be two times the usual $Q_\mu^{\ \nu}$ given by (4): $Q_\mu^{\ \nu}(F, *F) = 2Q_\mu^{\ \nu}(F) = 2Q_\mu^{\ \nu}(*F)$. Finally, we would like to mention that, making use of the localizing functions θ(x, y, z; ...), we may choose an amplitude function Φ of a "many-lump" kind, i.e. at every moment Φ is different from zero inside many non-overlapping 3-regions, probably of the same shape, so we are able to describe a flow of consistently propagating photon-like excitations of the same polarization and the same phase. Some such "many-lump" solutions (i.e. flows of many 1-soliton solutions) may give the macro-impression of, or look like, (parts of) plane waves.
2014-10-01T00:00:00.000Z
2001-04-10T00:00:00.000
{ "year": 2001, "sha1": "3b6c47e8623b62011d422f2e651214251f3bd7fb", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/0104088", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3b6c47e8623b62011d422f2e651214251f3bd7fb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
20631271
pes2o/s2orc
v3-fos-license
Presentation pattern and management of effusive–constrictive pericarditis in Ibadan Background Effusive–constrictive pericarditis is a syndrome in which constriction by the visceral pericardium occurs in the presence of a dense effusion in a free pericardial space. Treatment of this disease is problematic because pericardiocentesis does not relieve the impaired filling of the heart and surgical removal of the visceral pericardium is challenging. We sought to provide further information by addressing the evolution and clinico-pathological pattern, and optimal surgical management of this disease. Methods We conducted a prospective review of a consecutive series of five patients managed in the cardiothoracic surgery unit of University College Hospital, Ibadan, in the previous year, along with a general overview of other cases managed over a seven-year period. This was followed by an extensive literature review with a special focus on Africa. Results The diagnosis of effusive–constrictive pericarditis was established on the basis of clinical findings of features of pericardial disease with evidence of pericardial effusion, and echocardiographic finding of constrictive physiology with or without radiological evidence of pericardial calcification. A review of our surgical records over the previous seven years revealed a prevalence of 13% among patients with pericardial disease of any type (11/86), 22% of patients presenting with effusive pericardial disease (11/50) and 35% who had had pericardiectomy for constrictive pericarditis (11/31). All five cases in this series were confirmed by a clinical scenario of non-resolving cardiac impairment despite adequate open pericardial drainage. They all improved following pericardiectomy. Conclusion Effusive–constrictive pericarditis as a subset of pericardial disease deserves closer study and individualisation of treatment. Evaluating patients suspected of having the disease affords clinicians the opportunity to integrate clinical features and non-invasive investigations with or without findings at pericardiostomy, to derive a management plan tailored to each patient. The limited number of patients in this series called for caution in generalisation. Hence our aim was to increase the sensitivity of others to issues raised and help spur on further collaborative studies to lay down guidelines with an African perspective. Effusive-constrictive pericarditis is a clinical syndrome characterised by concurrent pericardial effusion and pericardial constriction where constrictive haemodynamics are persistent after the pericardial effusion is removed. The treatment of effusive-constrictive pericarditis is problematic because pericardiocentesis does not relieve the impaired filling of the heart, and surgical removal of the fibrinous exudate coating the visceral pericardium may not be possible. 1 Pericardiectomy following development of a pericardial skin that is amenable to surgical stripping is usually the most successful treatment option. The objectives of this case series were to document the evolution and clinico-pathological pattern of this disease in Nigerians. Methods We conducted a prospective review of a consecutive series of five patients managed in the cardiothoracic surgery unit of University College Hospital, Ibadan in the previous year, along with a general overview of other cases managed over a seven-year period. This was followed by an extensive literature review with a special focus on Africa. 
The diagnosis of effusive–constrictive pericarditis was established on the basis of clinical features of pericardial disease with evidence of pericardial effusion, and the echocardiographic finding of constrictive physiology with or without radiological evidence of pericardial calcification.

Results

A review of our surgical records over the previous seven years revealed a prevalence of 13% among patients with pericardial disease of any type (11/86), 22% of patients presenting with effusive pericardial disease (11/50) and 35% of those who had a pericardiectomy for constrictive pericarditis (11/31). The present subset was chosen for prospective follow up because of the unusual consecutive presentation and a dearth of studies specifically on this subset of patients from Africa. All five cases in this series were confirmed by a clinical scenario of non-resolving cardiac impairment despite adequate open pericardial drainage. All five patients were prospectively followed up. One patient, whom we treated for effusive–constrictive pericarditis, is described in detail and the four other cases are summarised in tabular form (Table 1).

Case report

The index patient (SB) presented to the cardiothoracic unit of the University College Hospital, Ibadan with a three-year history of easy fatigability, exertional dyspnoea and weight loss. There was a history of cough productive of whitish sputum. There was an associated history of orthopnoea, chest discomfort and a bulging chest, but no history of leg swelling. The patient was wasted and afebrile with a respiratory rate of 32 breaths/min. Her blood pressure and pulse were, respectively, 105/80 mmHg and 102 beats per minute. Her neck veins were distended and she had a bulging anterior chest and hepatomegaly. The patient's packed cell volume was 40%. Her blood chemistry findings were normal. The chest radiograph showed a globular heart shadow (Fig. 1). The ECG revealed low-voltage waves. An echocardiogram revealed a large pericardial effusion with echo speckles within it and a thickened pericardium. There was septal bounce and a dilated inferior vena cava with blunted respiratory fluctuations in diameter. A diagnostic pericardiocentesis yielded serosanguinous fluid. The patient underwent a subxiphoid tube pericardiostomy with pericardial biopsy. A postoperative chest radiograph showed evidence of pericardial calcification (Fig. 2). She was scheduled for an elective pericardiectomy, which was declined. The pericardiostomy tube was removed one week after the operation. A subsequent radiograph revealed evidence of re-accumulation of pericardial fluid. The patient and her relatives still declined surgery and asked for a discharge. She re-presented about 48 hours later with evidence of massive pericardial effusion and cardiac tamponade. She then had an emergency pericardiocentesis under echocardiographic guidance, during which 1 940 ml of haemorrhagic effusion was aspirated, and another 2 250 ml four days later. She improved following this and then had a pericardiectomy. Findings at surgery included a thickened parietal and visceral pericardium, about 1.5 l of serosanguinous fluid in the pericardial space, and an area of calcification, particularly over the right atrium (Fig. 3A, B). Both the parietal and visceral pericardium were stripped. The patient had an uneventful postoperative recovery and was discharged home 10 days after surgery. She has been seen twice since discharge, the last visit eight months after the operation, with remarkable recovery and NYHA class I status.
SM had a pre-operative (pericardial window) echocardiogram, which showed effusion with constrictive physiology. He had modest postoperative improvement and was discharged, but he re-presented three months later with worsening of his pre-operative symptoms. He then had a pericardiectomy, following which he improved progressively. Following tube pericardiostomy, DS had only very transient improvement in his symptoms. A repeat lateral chest X-ray showed evidence of pericardial calcification, while echocardiography showed a moderate pericardial effusion and diastolic dysfunction (Fig. 4). He made a rapid recovery following pericardiectomy. MN had minimal improvement following tube pericardiostomy, remaining dyspnoeic at rest. Postoperative chest radiography and echocardiography showed pericardial calcification. In addition, there was a markedly enlarged right atrium, grade III–IV tricuspid regurgitation and a small right ventricle with endocardial thickening, suggestive of endomyocardial fibrosis. We elected to go ahead with a pericardiectomy on account of the pericardial thickening with calcification. She improved following pericardiectomy, with NYHA class I status. OS had a pericardiostomy with slight improvement and was discharged home on anti-tuberculous therapy. He had a pericardiectomy three months later, during which he had an intra-operative complication of right ventricular wall injury, which was promptly repaired. He had an uneventful postoperative recovery until the 12th and 19th days postoperatively, when he developed Fournier's gangrene and upper gastrointestinal bleeding, respectively. These were successfully managed and he was discharged home on the 36th day postoperatively.

Discussion

Effusive–constrictive pericarditis is said to be an uncommon pericardial syndrome,2,4 with a reported frequency quite similar to the prevalence of 13% among patients with pericardial disease of any type in our seven-year review (11/86). We are not aware of any specific series from Africa. Patients with effusive–constrictive pericarditis present with symptoms due to limitation of diastolic filling. These findings are secondary not only to the pericardial effusion but also to the pericardial constriction. Symptoms and physical findings vary, while a moderate-to-large pericardial effusion may occur. Management of effusive–constrictive pericarditis is therefore fraught with challenges. The diagnosis is usually made by echocardiography, which should demonstrate diastolic dysfunction. The diagnosis can easily be missed by an unwary clinician because of the usually superimposed features of the accompanying pericardial effusion or tamponade. This may have accounted for the premature discharge and re-admission of one of our patients (SB). Pericardial effusion is seen as an echo-free space around the heart on echocardiography (Fig. 4). The presence of a large pericardial effusion with frond-like projections and a thick 'porridge-like' exudate is suggestive of an exudate but not specific for a tuberculous aetiology.1 Patients with acute haemorrhagic effusions may have pericardial thrombus appearing as an echo-dense mass.5 Small pericardial effusions are seen only posteriorly, while those large enough to produce cardiac tamponade are usually circumferential. In large pericardial effusions, the heart may move freely within the pericardial cavity ('swinging heart'). In the parasternal long-axis view, pericardial fluid reflects at the posterior atrio-ventricular groove, while pleural fluid continues under the left atrium, posterior to the descending aorta.
Rarely, tumour masses are found within or adjacent to the pericardium and may masquerade as tamponade. 6 Diagnostic criteria for cardiac tamponade include diastolic collapse of the right atrial and ventricular anterior free wall, and left atrial and very rarely left ventricular collapse. Right atrial collapse is more sensitive for tamponade, but right ventricular collapse lasting more than one-third of diastole is a more specific finding for cardiac tamponade. Doppler findings include distension of the inferior vena cava that does not diminish with inspiration, which is a manifestation of the elevated venous pressure in tamponade. 6 In addition, there can be marked reciprocal respiratory variation in mitral and tricuspid flow velocities. Tricuspid flow increases and mitral flow decreases during inspiration (the reverse in expiration). A challenging differential diagnosis is endomyocardial fibrosis, a common form of restrictive cardiomyopathy (RCM) in Africa. 7 Because constrictive pericarditis can be corrected surgically, it is important to distinguish chronic constrictive pericarditis from restrictive cardiomyopathy, which has a similar physiological abnormality, i.e. restriction of ventricular filling. Helpful in the differentiation of these two conditions are right ventricular trans-venous endomyocardial biopsy (by revealing myocardial infiltration or fibrosis in RCM) and echocardiography, CT scan or cardiac magnetic resonance imaging (by demonstrating a thickened pericardium in constrictive pericarditis but not in RCM). 8 Our fourth patient (MN) actually presented this challenge but a convincing thickening of the pericardium at echocardiography was enough to help us clarify the diagnosis. Another important problem is the lack of placebo-controlled trials from which appropriate therapy may be selected, and of guidelines that assist in important clinical decisions. As a result, the practitioner must rely heavily on clinical judgment. 9 The absence of guidelines specific to this subset of pericardial disease may be due to its relative rarity in the Western world. The recent European Society of Cardiology guidelines on management of pericardial diseases was also silent on the subset of patients with effusive constrictive pericarditis, presumably due to a paucity of data on the subject. 6 Other reasons could be difficulty in reaching a diagnosis and varied aetiopathogenesis, necessitating different evolution patterns. While there is an abundance of diagnostic armamentarium in the West, practitioners in sub-Saharan Africa largely have to cope with severe limitations in diagnostic facilities. An exception to this may be South Africa, where a recent report highlighted the value of contrast-enhanced magnetic resonance imaging (MRI) in delineating epicardial and pericardial inflammation in effusive-constrictive pericarditis. 10 Cost is still an issue even if MRI becomes widely available. Clinical acumen and reasoning therefore still form the bedrock of clinical practice in most centres. The cases managed in this series illustrate this point. In only two of the five cases was there a hint of constrictive physiology at the initial echocardiography, even though it is known there is a phase of transient sub-acute constriction, which may improve after pericardial drainage and medical treatment, especially with anti-tuberculous therapy in those arising secondary to tuberculosis. 
The only strong evidence of a high likelihood of need for pericardiectomy was the duration of the history in the first three patients. They all had a history longer than two years, suggestive of a chronic process. Reaching an aetiological diagnosis is a real challenge globally but more problematic in our local practices. The results of pericardial fluid culture are frequently falsely negative and pericardial biopsy has a higher yield of diagnostic specimens.11-13 One therefore has to rely on pericardial tissue biopsy microbiology and histology. None of our patients had positive evidence from pericardial fluid microbiology or cytology. The histology of their pericardia is shown in Table 1. Three of the patients were therefore treated empirically with anti-tuberculous therapy. The difficulty in establishing a bacteriological or histological diagnosis is foremost among unresolved issues in patients with pericarditis.14 A definite or proven diagnosis is based on demonstration of tubercle bacilli in the pericardial fluid or on histological section of the pericardium. A probable or presumed diagnosis is based on proof of tuberculosis elsewhere in a patient with otherwise unexplained pericarditis, a lymphocytic pericardial exudate with an elevated adenosine deaminase level, and/or an appropriate response to a trial of antituberculous therapy. The diagnostic difficulty is best demonstrated by a recent series of patients with tuberculous pericarditis where most patients were treated on clinical grounds, with microbiological evidence of tuberculosis obtained in only 13 (7.0%) patients.4 Hence, the focus currently is on indirect tests for tuberculous infection, including ADA levels and, more importantly, lysozyme or IFN-γ assay, which appears to hold promise for reaching a diagnosis in cases arising secondary to tuberculosis.14-18 Technical and financial constraints may, however, limit the diagnostic utility of IFN-γ in many developing countries.1 These tests are currently not available in our centre. The importance of recognising the haemodynamic syndrome of tamponade and constriction characteristic of effusive–constrictive pericarditis lies in an acknowledgment of the contribution of the visceral layer of the pericardium to the pathogenesis of constriction and of the need to remove it surgically. However, not only is it sometimes surgically challenging to do an epicardectomy in some patients due to a flimsy, fibrinous visceral pericardium with attendant risk of haemorrhage; some patients may recover with medical treatment alone, so-called transient effusive–constrictive pericarditis.3,19 Three of the patients in this series actually had intra-operative haemorrhage from atrial or ventricular injury during the epicardectomy part of the procedure. Visceral pericardiectomy is therefore a much more difficult and hazardous procedure than parietal pericardiectomy, but it is necessary for a good clinical result in cases of effusive–constrictive pericarditis. The clinical decision as to which patients need to be observed on medical treatment depends on presumed or confirmed aetiology, timing of presentation, and response to medical therapy.

Decision based on aetiology

Causes of effusive–constrictive pericarditis are varied and usually practice-dependent. Tuberculosis is said to be responsible for approximately 70% of cases of large pericardial effusion and most cases of constrictive pericarditis in developing countries. However, in industrialised countries, tuberculosis accounts for only 4% of cases of pericardial effusion and an even smaller proportion of instances of constrictive pericarditis.14 Series from Europe and North America report a predominance of idiopathic cases, followed by cases that occur after radiotherapy or cardiac surgery, or as a result of neoplasia or tuberculosis.3,11,20 The aetiological spectrum indeed reflects the general aetiological spectrum of pericardial diseases in each area and can be influenced by the changing aetiological spectrum of pericarditis in general and constrictive pericarditis in particular.3,21,22 The varying aetiological spectrum impacts on the need for and timing of pericardiectomy.17 In the Sagrista-Sauleda series, pericardiectomy was not performed in eight of 15 patients: in five of them owing to a poor general prognosis (four patients with neoplastic pericarditis) or a high surgical risk (one patient with radiation pericarditis), and in three patients (all with idiopathic pericarditis) because of progressive improvement and eventual resolution of the illness after pericardiocentesis. Wide anterior pericardiectomy was performed in seven patients between 13 days and four months after pericardiocentesis owing to the persistence of severe right heart failure. The diagnoses in these seven patients were idiopathic pericarditis in four, radiation pericarditis in one, tuberculous pericarditis in one, and postsurgical pericarditis in one. The patients in our limited series, as in other cases due to tuberculosis, usually had attendant pericardial calcification with no room for improvement without pericardiectomy. This partly explains the need for pericardiectomy in these patients.

Decision based on timing of presentation and response to medication

Related to aetiology is the timing of presentation. Transient sub-acute effusive–constrictive pericarditis is known to resolve after pericardiocentesis without the need for pericardiectomy.3,23,24 In fact, in two of three patients with idiopathic pericarditis who had resolution of their symptoms following pericardiocentesis in the Sagrista-Sauleda series, the onset of their illness was stated to be very recent. The monitoring of intra-cardiac and intrapericardial pressures as part of a pericardiocentesis procedure has been suggested in patients who present with a sub-acute course of pericardial tamponade, particularly those in whom the condition is idiopathic or is related to infection, neoplasm or rheumatological disease.2 The duration of pericardial disease in three of our patients was more than two years, suggesting chronicity and need for pericardiectomy. Although the duration in the fourth and fifth patients was relatively short, non-resolution of their symptoms and the presence of pericardial calcification in the fourth patient appeared to be a predictor of need for pericardial stripping.

Management

One can propose a management algorithm from the above discussion (Fig. 5). We would suggest pericardiocentesis followed by pericardiostomy and pericardial biopsy for bacteriology and histology as a first step in patients with tamponade or imminent tamponade. Duration of illness should be the next guide in those without tamponade, with those patients with a duration of more than one year offered pericardiostomy and biopsy. Other patients could be tried on medical treatment for six to eight weeks and operated on when there is persistent evidence of constriction. Presence of pericardial thickening with calcification following pericardiocentesis is an absolute indicator of need for a pericardiectomy. This can be further confirmed on a cardiac CT scan.
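To make the branch points of this proposed pathway explicit, it can be written down in code. The sketch below is our own rendering of the algorithm described above (Fig. 5); the function name, argument names and the way findings are encoded are illustrative assumptions rather than part of the original report.

```python
def manage_effusive_constrictive_pericarditis(
    tamponade: bool,
    duration_years: float,
    calcification_after_drainage: bool,
    persistent_constriction_after_medical_rx: bool,
) -> list[str]:
    """Sketch of the management algorithm proposed in the text (Fig. 5)."""
    plan = []

    if tamponade:
        # Tamponade or imminent tamponade: drain first, then sample the pericardium.
        plan += ["pericardiocentesis",
                 "tube pericardiostomy + pericardial biopsy (bacteriology, histology)"]
    elif duration_years > 1:
        # A history longer than one year suggests chronicity.
        plan += ["tube pericardiostomy + pericardial biopsy"]
    else:
        # Short history without tamponade: trial of medical treatment first.
        plan += ["medical treatment for 6-8 weeks"]
        if persistent_constriction_after_medical_rx:
            plan += ["pericardiectomy"]

    # Pericardial thickening with calcification after drainage is treated in the text
    # as an absolute indication for pericardiectomy (confirm on cardiac CT if needed).
    if calcification_after_drainage and "pericardiectomy" not in plan:
        plan += ["confirm calcification on cardiac CT", "pericardiectomy"]

    return plan


# Example: a two-year history, no tamponade, calcification evident after drainage.
print(manage_effusive_constrictive_pericarditis(
    tamponade=False, duration_years=2,
    calcification_after_drainage=True,
    persistent_constriction_after_medical_rx=False))
```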
We believe this management algorithm is preliminary at best and is subject to improvement with more collaborative research. The current ongoing multicentre study on the role of steroids in the prevention of constrictive pericarditis, involving centres in South Africa, Nigeria and other African countries, is one such study.4 Other studies could focus on the influence of aetiology and duration of pericardial disease on the need for pericardiectomy in other areas.

Conclusion

Effusive–constrictive pericarditis as a subset of pericardial disease deserves closer study and individualisation of treatment. Evaluating patients suspected of having the disease affords clinicians the opportunity to integrate clinical features and non-invasive investigations, with or without findings at pericardiostomy, to expeditiously arrive at a patient-specific management plan. The limited number of patients in this series is a limitation, which calls for caution in generalisation. Hence our aim was to increase the sensitivity of others to the issues raised and help spur on further collaborative studies to lay down guidelines with an African perspective.
Neuronal growth regulator 1 (NEGR1) promotes synaptic targeting of glutamic acid decarboxylase 65 (GAD65)

Neuronal growth regulator 1 (NEGR1) is a glycosylphosphatidylinositol-anchored cell adhesion molecule encoded by an obesity susceptibility gene. We demonstrate that NEGR1 accumulates in GABAergic inhibitory synapses in hypothalamic neurons, that the GABA-synthesizing enzyme GAD65 attaches to the plasma membrane, and that NEGR1 promotes clustering of GAD65 at the synaptic plasma membrane. GAD65 is removed from the plasma membrane with newly formed vesicles, and the association of GAD65 with vesicles results in increased GABA synthesis. In NEGR1-deficient mice, the synaptic targeting of GAD65 is decreased, GABAergic synapse densities are reduced, and the reinforcing effects of food rewards are blunted. In mice fed a high fat diet, levels of NEGR1 are increased and GAD65 abnormally accumulates at the synaptic plasma membrane. Our results indicate that NEGR1 regulates a previously unknown step required for the synaptic targeting and functioning of GAD65, which can be affected by bidirectional changes in NEGR1 levels, causing disruptions in the GABAergic signaling controlling feeding behavior.

Introduction

Increased susceptibility to pentylenetetrazol-induced seizures in NEGR1-deficient animals (Singh et al., 2018) suggests that NEGR1 is involved in the control of neuronal activity via currently unknown mechanisms. Inhibitory γ-amino butyric acid (GABA)ergic signaling controls the excitability of neuronal networks (Roth and Draguhn, 2012). Deficits in GABAergic neurotransmission are associated with neurodevelopmental disorders, including attention deficit / hyperactivity disorder (Edden et al., 2012), developmental co-ordination disorder (Umesawa et al., 2020), and depressive disorders (Luscher et al., 2011). GABAergic neurotransmission also plays a key role in regulating food intake (Meng et al., 2016; Xu et al., 2012). GABA is synthesized from glutamate by glutamic acid decarboxylase (GAD), an enzyme encoded by the GAD1 and GAD2 genes coding for the GAD67 and GAD65 isoforms. GAD67 produces the majority of GABA and is required for the development of inhibitory neuronal circuits and the maintenance of basal inhibitory firing. GAD65 synthesizes GABA to fine-tune and maintain GABAergic synapse function during neuronal activity (Baekkeskov and Kanaani, 2009; Patel et al., 2006). GAD65 is a soluble cytosolic protein, which attaches to membranes via hydrophobic modifications, with subsequent palmitoylation required for its targeting to synaptic vesicles (Baekkeskov and Kanaani, 2009). The glutamate-decarboxylating activity of purified GAD65 is increased on purified synaptic vesicles (Hsu et al., 2000) and is coupled to the transport of GAD65-synthesized GABA into synaptic vesicles by the vesicular GABA transporter (VGAT) (Jin et al., 2003). Synaptic vesicles fuse with the plasma membrane (PM) to release GABA and are then reformed via the retrieval of synaptic vesicle membranes from the PM. Many peripheral proteins involved in this vesicle recycling process are recruited to synapses and then to synaptic vesicle membranes in a highly regulated manner and at different stages of the recycling process. GAD65 is co-purified with vesicles in the presence of pyridoxal 5'-phosphate, a co-factor necessary for GAD65-mediated synthesis of GABA (Jin et al., 2003), but not in its absence (Reetz et al., 1991; Takamori et al., 2006; Takamori et al., 2000), suggesting that the attachment of GAD65 to vesicles is regulated.
Regulation of the synaptic targeting of GAD65 is, however, poorly understood. In this work we show that GAD65 attaches to the PM and is removed from the PM with newly formed vesicles. NEGR1 targets GAD65 to synapses by promoting clustering of GAD65 at the presynaptic PM. NEGR1 deficiency causes GABAergic synapse loss, indicating that clustering of GAD65 at the presynaptic PM is required for the maintenance of inhibitory synapses.

NEGR1 promotes synaptic targeting of GAD65

Since NEGR1 is involved in regulating feeding behavior, we studied its distribution in cultures of hypothalamic neurons, which play a key role in regulating food intake. Confocal microscopy showed that NEGR1 was present along dendrites and axons (Fig. 1A), identified as thick tapering MAP2-positive protrusions and thin tau-enriched protrusions of a uniform diameter, respectively (Fig. 1B). The majority of synapses in these neurons (76.6 ± 2.03%, n = 20 neurons) were positive for GAD65. NEGR1 clusters co-localized with accumulations of VGAT and GAD65, indicating that NEGR1 accumulates in inhibitory synapses (Fig. 1A) and suggesting that it plays a role in regulating their formation or function. Western blot analysis demonstrated that the levels of the inhibitory synapse marker protein VGAT were similar in brain homogenates of 3-month-old NEGR1+/+ and NEGR1-/- mice (Fig. 1C), while the levels of GAD65 tended to be higher in NEGR1-/- brain homogenates (Fig. 1E) (mean ± SEM fold change in NEGR1-/- vs NEGR1+/+ was 1.17 ± 0.08 for VGAT and 1.67 ± 0.31 for GAD65; n = 5, p = 0.06, Wilcoxon signed rank test). NEGR1 was highly enriched in synaptosomes vs brain homogenates of NEGR1+/+ mice (Fig. 1C). Hence, we asked whether NEGR1 deficiency affects targeting of GAD65 to synapses. To take into account changes in total protein levels, synaptic targeting (ST) was estimated by calculating the ratio of protein levels in synaptosomes and brain homogenates from the same animal. ST of GAD65 was reduced in NEGR1-/- vs NEGR1+/+ mice (Fig. 1E, F), whereas ST of VGAT was not changed (Fig. 1C, D). To determine whether NEGR1 promotes synaptic targeting of GAD65 by recruiting it to membranes from the soluble protein pool, the enrichment of GAD65 in soluble protein fractions relative to its total levels in brain homogenates was analyzed. Surprisingly, GAD65 enrichment in the soluble protein fraction was reduced in NEGR1-/- vs NEGR1+/+ mice (Fig. 1E, F), indicating that NEGR1 deficiency causes accumulation of GAD65 in non-synaptic membranes. GAD65 distribution was then analyzed in cultured NEGR1+/+ and NEGR1-/- hypothalamic neurons by confocal microscopy. In NEGR1+/+ neurons, GAD65 clusters predominantly co-localized with synaptophysin accumulations (Fig. 1G); however, GAD65 clusters which did not co-localize with synaptophysin were also observed (Fig. 1G). The percentage of these non-synaptic GAD65 clusters was increased in NEGR1-/- neurons (Fig. 1G). To exclude the possibility that this effect was solely due to increased levels of GAD65 in NEGR1-/- vs NEGR1+/+ neurons, the synaptic targeting of GAD65-GFP was compared in NEGR1-/- hypothalamic neurons co-transfected with NEGR1 or the empty pcDNA3 vector. In NEGR1-co-transfected neurons, clusters of GAD65-GFP co-localized with NEGR1 accumulations, and the percentage of synaptophysin-negative GAD65-GFP clusters was reduced (Fig. 1H). Our combined observations thus indicate that NEGR1 promotes synaptic targeting of GAD65.
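To make the quantification described above explicit, the sketch below shows how the ST ratio and the reported paired statistics could be computed. The numerical values are invented placeholders, not data from the study, and the use of densitometric band intensities normalized within each animal is our assumption.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical densitometry values (arbitrary units) for n = 5 animals.
# Synaptic targeting (ST) = level in synaptosomes / level in brain homogenate of the same animal.
synaptosome = np.array([1.10, 0.95, 1.20, 1.05, 1.15])
homogenate  = np.array([1.30, 1.10, 1.25, 1.20, 1.40])
st = synaptosome / homogenate
print("ST per animal:", np.round(st, 2))

# Fold changes of GAD65 homogenate levels in NEGR1-/- relative to NEGR1+/+ mice,
# summarized as mean +/- SEM as in the text (values invented for illustration).
fold_change = np.array([1.4, 2.1, 1.2, 1.9, 1.7])
sem = fold_change.std(ddof=1) / np.sqrt(fold_change.size)
print(f"fold change: {fold_change.mean():.2f} +/- {sem:.2f} (SEM)")

# Wilcoxon signed rank test of whether the fold changes differ from 1.
# With n = 5 changes all in one direction, the exact two-sided p is 0.0625, i.e. ~0.06.
stat, p = wilcoxon(fold_change - 1.0)
print(f"Wilcoxon signed rank: statistic = {stat}, p = {p:.3f}")
```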
Trans-interactions of NEGR1 promote synaptic targeting of GAD65 To determine whether GAD65 is targeted to synapses by trans-interactions of NEGR1, NEGR1+/+ cultured hypothalamic neurons were treated with recombinant soluble NEGR1 (sNEGR1), which binds to cell surface NEGR1 (Kim et al., 2014). Remarkably, sNEGR1 induced a strong increase in synaptic GAD65 levels ( Fig. 2A). The effect was observed at 15 min after application of sNEGR1 and lasted for 24 h (the last time point analyzed) ( Fig. 2A). A similar effect was found in slices of NEGR1+/+ hypothalamic tissue treated for 24 h with sNEGR1. Western blot analysis of synaptosomes from these slices showed an increase in synaptic levels of GAD65 when compared to GAD65 levels in synaptosomes from control tissues treated with bovine serum albumin (Fig. 2B). Interestingly, the overall levels of GAD65 were also increased in sNEGR1-treated slices (Fig. 2B). To analyze whether the NEGR1-dependent increase in synaptic GAD65 levels results in enhanced GABA synthesis, GABA levels in the synaptosomes were determined by mass spectrometry. Surprisingly, this analysis 7 showed that GABA levels were reduced in synaptosomes isolated from sNEGR1-treated tissues despite the increase in GAD65 levels, indicating that NEGR1-mediated trans-interactions restrain GAD activity (Fig. 2C). NEGR1 restrains the retrieval of synaptic vesicle membranes from the PM GAD65 protein and mRNA levels are reduced in response to prolonged stimulation of synaptic vesicle recycling (Buddhala et al., 2012), suggesting that the regulation of GAD65 levels is coupled to synaptic vesicle recycling. Since sNEGR1 induced an overall increase in GAD65 levels, we asked whether NEGR1 regulates synaptic vesicle recycling. Activity-dependent synaptic vesicle recycling was visualized in cultured NEGR1+/+ and NEGR1-/-hypothalamic neurons by loading synaptic vesicles for 2 min with the lipophilic FM4-64 dye applied in buffer containing 47 mM K + , which causes PM depolarization and Ca 2+ influx initiating synaptic vesicle fusion with the synaptic PM followed by vesicle reformation. The amount of FM4-64 taken up into synaptic boutons was increased in NEGR1-/-neurons (Fig. 3A). FM4-64 release in response to electric field stimulation was then analyzed as a measure of synaptic vesicle fusion with the PM and was found to be faster in NEGR1-/neurons ( Fig. 3A). NEGR1 was reported to be present post-synaptically (Hashimoto et al., 2008) but was also detected in pre-synaptic compartments (Takamori et al., 2006) and was proposed to mediate transsynaptic adhesion (Ranaivoson et al., 2019;Venkannagari et al., 2020). Hence, we investigated whether pre-or post-synaptic NEGR1 regulates synaptic vesicle recycling by determining the effects of NEGR1 overexpression in axons and dendrites of cultured NEGR1+/+ hypothalamic neurons. Control neurons were transfected with the empty pcDNA3 vector. Neurons were co-transfected with GFP, and their proximal dendrites were identified as thick tapering protrusions, while axons were identified as thin protrusions of a uniform diameter. Synapses formed by GFP-positive axons of 8 transfected neurons on GFP-negative dendrites of non-transfected neurons were analyzed to determine the effect of presynaptic overexpression. Synapses formed by GFP-negative axons of non-transfected neurons on GFP-positive dendrites of transfected neurons were analyzed to determine the effect of postsynaptic NEGR1 overexpression. 
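For orientation, the following sketch illustrates how FM4-64 uptake and stimulation-evoked release might be quantified from a single bouton's fluorescence trace. The text above does not specify the fitting procedure, so the single-exponential destaining model, the background estimate and all numbers below are illustrative assumptions rather than the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated fluorescence trace of one FM4-64-loaded bouton during field stimulation.
t = np.arange(0, 60, 2.0)                      # time, s
f = 800.0 * np.exp(-t / 15.0) + 100.0          # destaining plus a constant background
f += np.random.default_rng(0).normal(0, 5, t.size)

background = 100.0                             # assumed residual fluorescence after full destaining
uptake = f[0] - background                     # initial loading, proportional to dye taken up

def destain(t, f0, tau, c):
    # Single-exponential destaining model (an assumption, not the stated method).
    return f0 * np.exp(-t / tau) + c

(f0, tau, c), _ = curve_fit(destain, t, f, p0=(uptake, 10.0, background))
print(f"uptake (a.u.): {uptake:.0f}; release time constant tau: {tau:.1f} s")

# A shorter tau corresponds to faster FM4-64 release, i.e. faster vesicle fusion,
# which is the read-out used to compare genotypes and transfection conditions here.
```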
The FM4-64 uptake was reduced in synaptic boutons formed by axons of NEGR1 overexpressing vs control neurons (Fig. 3B) and was also reduced in synapses formed on dendrites of NEGR1 overexpressing vs control neurons (Fig. 3C). FM4-64 release in response to the electric field stimulation was slower in axons of NEGR1 overexpressing vs control neurons ( To exclude the possibility that these changes in synaptic vesicle recycling reflect solely developmental effects of prolonged NEGR1 loss or overexpression, NEGR1+/+ cultured hypothalamic neurons were acutely treated with antibodies against NEGR1 used as a specific NEGR1 ligand, a method used to cluster and mimic trans-interactions of cell adhesion molecules at the cell surface (Sheng et al., 2015;Sheng et al., 2019). Control neurons were incubated with non-specific immunoglobulins (Ig). The FM4-64 uptake was strongly reduced in NEGR1 antibody-vs Ig-treated neurons (Fig. 3D). Unexpectedly, FM4-64 release in response to the electrical field stimulation was slightly faster in NEGR1 antibody-treated neurons (Fig. 3D). Altogether, our data indicate that both pre-and post-synaptic NEGR1 restrain the retrieval of synaptic vesicle membranes from the synaptic PM, while only presynaptic NEGR1 also plays a role in regulating fusion of synaptic vesicles with the synaptic PM. Inhibition of synaptic vesicle membrane retrieval promotes GAD65 clustering at the synaptic PM 9 To determine whether a restraint of synaptic vesicle membrane retrieval influences the synaptic localization of GAD65, the association of GAD65 clusters with synaptophysin accumulations was analyzed in NEGR1+/+ cultured hypothalamic neurons treated for 15 min with dynasore. This inhibitor of dynamin selectively suppresses synaptic vesicle membrane retrieval after action potential-induced neurotransmitter release but does not affect spontaneous synaptic vesicle trafficking (Chung et al., 2010). Control neurons were treated with vehicle (DMSO). The numbers of synaptophysin-negative GAD65 clusters were reduced in dynasore-treated vs control neurons (Fig. 4A). sNEGR1 induced a similar effect in a non-additive manner with dynasore (Fig. 4A). Interestingly, tetanus toxin, which blocks fusion of synaptic vesicles with the PM, also reduced the percentage of synaptophysin-negative GAD65 clusters in a non-additive manner with sNEGR1 ( Fig. 4B). Synaptic vesicle membrane retrieval is coupled to vesicle fusion with the PM (Haucke et al., 2011). To determine whether sNEGR1 inhibits fusion of synaptic vesicles with the PM or retrieval of their membranes from the PM in GABAergic synapses, live cultured hypothalamic neurons pretreated with sNEGR1 or culture medium for control were incubated for 30 min with antibodies against the lumenal domain of VGAT. The antibodies bind to the lumenal domain of VGAT when it is exposed at the cell surface after fusion of synaptic vesicles with the PM. The antibodies then either remain at the cell surface or are taken up into vesicles reformed via membrane retrieval from the PM. The pool of antibodies remaining at the cell surface (VGATsurface_remaining), which is inversely proportional to the retrieval rate, was visualized with secondary antibodies applied to neurons which were not permeabilized with detergent. 
The total pool of antibodies at the cell surface and in newly formed vesicles (VGATsurface_delivered), which is proportional to the rate of synaptic vesicle fusion with the PM, was visualized with secondary antibodies applied to fixed and detergent-permeabilized neurons (Fig. 4C, D). The total synaptic pool of VGAT (VGATsynaptic) was visualized by co-labeling detergentpermeabilized neurons with antibodies against the cytoplasmic domain of VGAT. VGATsurface_delivered and VGATsynaptic were similar in sNEGR1-treated and control neurons (Fig. 4C), whereas 10 VGATsurface_remaining was increased in sNEGR1-treated neurons (Fig. 4D), indicating that sNEGR1 did not affect the fusion of vesicles with the PM but inhibited the retrieval of synaptic vesicle membranes from the PM. Inhibition of synaptic vesicle membrane retrieval causes clustering of synaptic vesicle proteins in the presynaptic PM (Dason et al., 2014). To test whether sNEGR1 induces accumulation of GAD65 in the presynaptic PM, synaptosomes isolated from slices of hypothalamic tissue were used to obtain synaptic PMs isolated after synaptosome lysis with osmotic shock, which releases synaptosomal contents including synaptic vesicles. Western blot analysis demonstrated that GAD65 was co-isolated with the synaptic PMs (Fig. 4E). In slices treated with sNEGR1 for 24 h, the levels of GAD65 coisolated with the synaptic PMs were ~2-fold higher than in membranes from slices treated with bovine serum albumin for control (Fig. 4E). Dynasore induced a similar increase in GAD65 levels in the synaptic PMs in a non-additive manner with sNEGR1 ( Fig. 4E). An increase in GAD65 levels coisolated with the synaptic PMs was also found in slices treated with tetanus toxin (Fig. 4E), indicating that fusion of vesicles with the PM is not required for clustering of GAD65 at the synaptic PM and that GAD65 is not delivered to the synaptic PM by synaptic vesicles. GAD65 attaches to the PM and NEGR1 promotes its recruitment to the PM-derived vesicles in CHO cells Our finding that GAD65 is co-isolated with the synaptic PM contrasts with extensive microscopy data in different cell types showing that GAD65 does not accumulate at the PM. It is, however, inherently difficult to visualize transient interactions of peripheral proteins such as GAD65 with the PM in intact cells. We attempted to achieve this by using bimolecular fluorescence complementation (BiFC) to simultaneously capture and visualize GAD65 in close proximity to the PM via irreversible reconstitution of the Venus fluorescent protein. As a membrane localized sensor, we engineered an LCK-VN protein consisting of the N-terminal fragment of Venus fused to the first 26 amino acids of human LCK containing sites for myristoylation and palmitoylation which target it to the PM (Fig. 5A). The sensor was used to detect the PM attachment of GAD65 fused to the complementary C-terminal fragment of Venus (GAD65-VC). Reconstitution of Venus from the VC and VN fragments brought in close proximity to each other by the attachment of GAD65 to the PM captures GAD65 at the site of PM attachment and simultaneously visualizes it by producing fluorescence (Fig. 5A). The crowded environment of synaptic boutons does not allow unambiguous determination of whether proteins are bound to the PM rather than synaptic vesicles. Therefore, we used cultured CHO cells as a model system to visualize GAD65 interactions with the PM in vesicle free PM areas. 
Labelling of LCK-VN transfected CHO cells with antibodies against GFP, which detect the VN fragment, showed that this sensor was localized at the PM and intracellular accumulations partially overlapping with the Golgi marker GM130 (Fig. 5B). Co-transfection of cells with LCK-VN and non-mutated GAD65WT-VC produced BiFC fluorescence, which accumulated in the perinuclear region ( Fig. 5C) where GAD65 tightly attaches to the Golgi (Kanaani et al., 2008). In addition, clusters of BiFC fluorescence were found at the PM and in small accumulations in the cytosol (Fig. 5C). A closer analysis of cross-sections of the 3D reconstructed transfected cells showed that BiFC signals at the PM were found in small clusters or vesicle-like structures (Fig. 5D). A similar level of BiFC fluorescence was observed in CHO cells transfected with GAD65(C30,45A)-VC mutant (Fig. 5C), in which cysteines responsible for GAD65 palmitoylation (Kanaani et al., 2008) were mutated to alanine. The BiFC signal produced by the GAD65(C30,45A)-VC mutant was also found at the PM (Fig. 5C), indicating that palmitoylation is not required for the attachment of GAD65 to the PM. The BiFC signal was dramatically reduced in cells co-transfected with GAD65(24-31A)-VC (Fig. 5C), in which amino acids 24-31 responsible for binding to membranes (Shi et al., 1994) were substituted for alanine. In cells co-transfected with NEGR1, cell surface clusters of NEGR1 were found at sites of BiFC fluorescence accumulations (Fig. 5E). A subpopulation of PM-localized BiFC fluorescence clusters 12 co-localized with clathrin accumulations suggesting that the attachment of GAD65 to the PM triggers its recruitment to the clathrin-coated pits (Fig. 5F). To determine whether NEGR1 plays a role in recruiting GAD65 to the vesicles derived from the PM, CHO cells were incubated live with FM4-64 to load the dye in and visualize newly formed vesicles. Since stabilization of the PM attachment via BiFC may influence endocytosis, GAD65-GFP was used in these experiments to visualize GAD65 on the vesicles. The levels of FM4-64 uptake were reduced, however, the percentages of FM4-64-loaded vesicles co-localizing with GAD65 accumulations were increased in NEGR1-transfected vs control pcDNA3-transfected cells (Fig. 5G). Altogether, these observations indicate that GAD65 attaches to the PM and NEGR1 promotes its targeting to the PM-derived vesicles. Our data also further suggests that this sorting requires a restraint of the retrieval of membranes from the cell surface. NEGR1 promotes the assembly of lipid rafts at the PM and in synapses Disruption of lipid rafts blocks synaptic clustering of GAD65 resulting in formation of nonsynaptic GAD65 clusters (Kanaani et al., 2002) resembling those found in NEGR1-/-neurons ( Fig. 1G). NEGR1 associates with lipid rafts via its GPI-anchor and is involved in cholesterol transport (Kim et al., 2017). We therefore asked whether NEGR1 regulates synaptic clustering of lipid microdomains. The targeting of cholesterol and ganglioside GM4 to the PM was increased in NEGR1transfected vs control pcDNA3 vector-transfected CHO cells (Fig. 6A, B) indicating that NEGR1 promotes the assembly of lipid rafts at the PM. A dot blot analysis showed that the enrichment of lipid raft-localized gangliosides and phosphatidylinositol 4,5-bisphosphate in synaptosomes vs homogenates was strongly reduced in NEGR1-/-vs NEGR1+/+ mice (Fig. 
6C), indicating that NEGR1 can target GAD65 to synapses by inducing synaptic clustering of lipid rafts, which can also restrain the retrieval of synaptic vesicle membranes since synaptic vesicle proteins synaptophysin and VAMP2 bind to the PM localized cholesterol (Thiele et al., 2000). 13 The association of GAD65 with vesicles promotes GABA synthesis The trafficking of GAD65 to post-Golgi vesicular membranes is controlled by its palmitoylation cycle (Kanaani et al., 2008). We assessed whether a palmitoylation-deficient mutant of GAD65, which is capable of firmly anchoring to ER/cis-Golgi but incapable of anterograde trafficking from cis-Golgi to TGN and post-Golgi vesicles (Phelps et al., 2016), would exhibit altered enzyme activity. Pancreatic beta cells synthesize and secrete large quantities of GABA, making them excellent candidates for studying GAD enzyme activity (Menegaz et al., 2019). INS-1 and MIN6 beta cell lines have lost the expression of endogenous GAD65 and GAD67 (Cianciaruso et al., 2017). We thus used INS-1 beta cells to study the GAD enzyme activity of cells transfected with GAD65-GFP relative to the palmitoylation deficient mutant GAD65(C30,45A)-GFP in the absence of a background of endogenous GABA biosynthesis. As previously observed (Kanaani et al., 2008;Phelps et al., 2016), GAD65-GFP distributed between cytosol, ER, Golgi, and post-Golgi vesicles, while GAD65(C30,45A)-GFP was distributed only to the cytosol, ER, and cis-Golgi (Fig. 7A). HPLC analysis of amino acids from INS-1E cell lysate showed a 50% decrease in the cellular GABA content for GAD65(C30,45A)-GFP and a significant increase in glutamate, the precursor to GABA synthesis . These data demonstrate that palmitoylation-dependent vesicular trafficking of GAD65 is correlated with higher levels of GABA biosynthesis. The density of GABAergic synapses is reduced in 7-8-month-old NEGR1-/-mice In cultures of disassociated hypothalamic neurons, NEGR1 was particularly strongly expressed in axons and synaptic boutons of neuropeptide Y (NPY) / GAD65 positive neurons when compared to other neurons present in these cultures (Fig. 8A). NPY positive neurons and their projections are present at high density in the ARC (Suyama and Yada, 2018). Analysis of NEGR1+/+ and NEGR1-/-14 brain sections by confocal microscopy showed that the density of VGAT/GAD65 positive inhibitory synapses was strongly reduced in the ARC of NEGR1-/-vs NEGR1+/+ mice (Fig. 8B). In NEGR1-/mice, the density of VGAT/GAD65 positive synapses was also reduced in the CA3 region of the hippocampus (Fig. 8C), which also contains NPY-positive interneurons (Kuruba et al., 2011). Altogether, our data indicate that NEGR1 deficiency causes a loss of inhibitory synapses. NEGR1-/-mice show reduced motivation for a highly palatable food reward GABAergic neurotransmission is involved in regulation of the motivational aspects of food reinforced behaviors and is sensitized by intake of palatable food (Newman et al., 2013). To determine whether NEGR1 deficiency affects motivation for food, 7-8-month-old NEGR1+/+, NEGR1-/-and NEGR1+/-mice were tested using an instrumental conditioning task. The mice were initially allowed to eat sucrose pellets released into a magazine at a rate of 1 pellet / min over 30 min. Analysis of these initial magazine training sessions performed over 6 days showed that the mice of all genotypes initially consumed similar numbers of pellets per session (Fig. 9A). 
Over time, the numbers of pellets consumed per session increased for NEGR1-/-but not NEGR1+/+ mice with an intermediate effect observed for NEGR1+/-mice (Fig. 9A). The mice were then trained to press a lever to obtain a sucrose pellet. In this fixed ratio task, mice of all genotypes pressed the lever with approximately the same frequency and consumed similar numbers of pellets per session (Fig. 9B). Finally, the mice were tested using a progressive ratio schedule of reinforcement, where the number of lever presses required to obtain the pellet progressively increased across the session. When the number of lever presses required to obtain a single sucrose pellet exceeded motivation for the pellet, the mice stopped responding and this was considered the break point. This analysis showed that breakpoint values were similar in all genotypes on the first day of tests (Fig. 9C), and progressively increased in NEGR1+/+ mice over three days of tests (Fig. 9C, D). Accordingly, the number of pellets delivered per session and the number of 15 active lever presses per session also increased in NEGR1+/+ group (Fig. 9E). This sensitization was inhibited in NEGR1+/-and NEGR1-/-mice ( Fig. 9C-E). Hence, NEGR1 deficiency does not negatively impact on the palatability of the food pellet or learning around the instrumental task, but rather specifically affects motivation for highly palatable food rewards. A high fat diet causes an increase in NEGR1 levels affecting subsynaptic localization of GAD65 and synapse maintenance A high fat diet (HFD) causes a reduction in GABA levels at least in some brain regions (Hassan et al., 2018;Sandoval-Salazar et al., 2016). Hence, we asked whether levels of NEGR1 are influenced by HFD. Western blot analysis demonstrated that levels of NEGR1 were increased in brain homogenates and synaptosomes of HFD-vs chow-fed mice (Fig. 10A, B). NEGR1 can be shed by ADAM10, which releases its ~50 kDa cleavage product (Pischedda and Piccoli, 2015). The levels of this product in the soluble protein fraction were, however, below the detection limit indicating low levels of the ADAM10-mediated cleavage of NEGR1 in the mature brain ( (Bocarsly et al., 2015). Selective loss of inhibitory but not 16 excitatory synapse markers is found in dynamin 1 and 3 double knock-out neurons (Raimondi et al., 2011), indicating that defects in the retrieval of synaptic vesicle membranes affect the maintenance of inhibitory synapses. To determine whether the NEGR1 overexpression-induced decrease in synaptic vesicle membrane retrieval (Fig. 3B, C) causes changes in synapse numbers, synaptophysin levels were measured along dendrites and axons of NEGR1-and control pcDNA3-transfected cultured hypothalamic neurons. Synaptophysin levels were strongly reduced along axons and dendrites of NEGR1-overexpressing neurons (Fig. 10H). Altogether, our data indicate that NEGR1 is overexpressed in brains of HFD-fed mice, and an increase in levels of this protein correlates with reduced GABA synthesis and can cause a loss of inhibitory synapses. Discussion We report that NEGR1 promotes the synaptic targeting of GAD65, a GABA synthesizing enzyme. We demonstrate that GAD65 attaches to the synaptic PM and show that trans-interactions of the synaptic PM localized NEGR1 result in an increase in this pool (Fig. 11). This increase correlates with a reduction in numbers of non-synaptic GAD65 clusters, indicating that NEGR1 promotes the recruitment of non-synaptic GAD65 to the synaptic PM. 
Fusion of synaptic vesicles with the PM may also result in the PM targeting of GAD65 attached to the synaptic vesicle membranes. We show, however, that the pool of GAD65 at the synaptic PM is not reduced and is even increased in tissues treated with tetanus toxin, which blocks fusion of synaptic vesicles with the PM. These experiments, electron microscopy data from previous studies showing GAD65 on some but not all synaptic vesicles, and biochemical studies showing that GAD65 does not co-purify with synaptic vesicles (Reetz et al., 1991;Takamori et al., 2006;Takamori et al., 2000) collectively indicate that GAD65 disassociates from vesicles before they fuse with the PM. Interestingly, synaptic vesicles contain high levels of palmitoyl protein thioesterase-1, an enzyme which de-palmitoylates GAD65 (Kim et al., 2008) and can therefore reduce the palmitoylation-dependent association of GAD65 with the vesicles (Kanaani et al., 2008). Deficiency in palmitoyl protein thioesterase-1 leads to a reduction in GAD65 levels in the brain (Kim et al., 2008) indicating that GAD65 de-palmitoylation is required for the maintenance of inhibitory synapses and GAD65 pool. Cell adhesion molecules are involved in organizing the presynaptic nanoarchitecture required for highly regulated neurotransmitter release (Tang et al., 2016). NEGR1 is attached to the PM via a GPI anchor and accumulates in lipid microdomains, where palmitoylated proteins, including GAD65 (Kanaani et al., 2002) are also targeted. Trans-synaptic adhesive bonds formed by NEGR1 can promote the synaptic recruitment of GAD65 by limiting diffusion of lipid microdomains that GAD65 binds to. This scenario is supported by our observations showing that the synaptic clustering of lipid rafts is reduced in NEGR1-/-neurons. NEGR1-dependent synaptic clustering of lipid rafts can also constrain the retrieval of synaptic vesicle membranes, since synaptic vesicle proteins interact with PM-localized cholesterol (Thiele et al., 2000). Clustering of lipid rafts by GPI-anchored proteins triggers the assembly of the submembrane spectrin cytoskeleton (Leshchyns'ka et al., 2003) which also inhibits the retrieval of vesicle membranes (Puchkov et al., 2011). Our experiments showing that inhibition of synaptic endocytosis triggers clustering of GAD65 at the synaptic PM indicate that GAD65 is removed from the PM with membranes of reformed synaptic vesicles. An NEGR1-dependent restraint of vesicle reformation from the PM may be necessary for "loading" of GAD65 on the membranes of reforming synaptic vesicles, which may be facilitated within the cholesterol-enriched environment required for synaptic vesicle reformation (Martin, 2000), created by interactions between cholesterol enriched NEGR1-containing lipid microdomains and cholesterol-enriched synaptic vesicle membranes (Fig. 11). Palmitoylation plays a critical role in synaptic sorting of GAD65 (Kanaani et al., 2004;Kanaani et al., 2008). The GAD65-palmitoylating enzyme huntingtin-interacting protein HIP14 (Huang et al., 2004) is found at the neuronal PM and in Golgi and sorting/recycling and late endosomal structures in neurons (Huang et al., 2004), but is not detectable in synaptic vesicle preparations (Takamori et al., 2006). Slower reformation of vesicles may therefore facilitate the HIP14-mediated palmitoylationdependent attachment of GAD65 to the membranes of vesicles during their retrieval from the PM. 
We demonstrate that NEGR1 is highly expressed in NPY positive hypothalamic neurons and accumulates in the inhibitory GABAergic synapses formed by these neurons in culture. NPY/AgRPexpressing neurons play a prominent role in promoting food intake by inhibiting anorexigenic proopiomelanocortin (POMC) cells (Dietrich and Horvath, 2013;Suyama and Yada, 2018). We show that NEGR1 deficiency causes a reduction in numbers of inhibitory synapses in the ARC, where NPY/AgRP positive neurons form synapses on the POMC neurons (Dietrich and Horvath, 2013;Suyama and Yada, 2018). This synapse loss can be caused by impaired synaptic targeting of GAD65 in NEGR1-/-mice, because GAD65 deficiency causes a reduction in the size of the synaptic vesicle pool (Tian et al., 1999). We cannot exclude, however, that NEGR1-mediated adhesive bonds are also 19 required for formation or maintenance of the GABAergic synapses. It is noteworthy that NEGR1 deficiency causes a reduction in the density of spines in cortical neurons (Pischedda et al., 2014;Szczurkowska et al., 2018). In cultured hippocampal neurons, NEGR1 expression peaks at 14 days in vitro, and NEGR1 overexpression in twenty-one-day-old hippocampal neurons, i.e., when the levels of endogenous NEGR1 decline, results in an increase in synapse density (Hashimoto et al., 2008), also suggesting that NEGR1 promotes synapse stabilization. A reduction in the efficiency of the inhibitory synaptic input into the anorexigenic POMC cells could cause a reduction in food intake observed in mice with inactivated NEGR1 and could also cause a reduction in body mass , observed also in our colony: at 2-3 months of age, mean ± SEM: 26.38 ± 0.54 g NEGR1+/+ males vs 23.63 ± 0.71 NEGR1-/-males, *p = 0.006 Mann-Whitney test; 20.43 ± 0.27 g NEGR1+/+ females vs 19.02 ± 0.39 NEGR1-/-females, *p = 0.004 Mann-Whitney test). A similar mechanism may also contribute to a reduction in the body mass index in humans with mutations affecting NEGR1 expression (Antunez-Ortiz et al., 2017). NPY/AgRP neurons also play a key role in coordinating the activity of hypothalamic hunger circuits with the activity of midbrain reward circuits (Alhadeff et al., 2019). Our results, demonstrating that NEGR1 deficiency results in blunting of the reinforcing effects of food, indicate that NEGR1 also functions in the brain reward pathways. Dysregulated brain reward pathways may be contributing to increased intake of palatable foods and ultimately obesity (Berthoud et al., 2011). Mutations in noncoding regulatory elements upstream of NEGR1 containing binding sites for transcriptional repressors are found in people with severe early-onset obesity (Wheeler et al., 2013;Willer et al., 2009), suggesting that NEGR1 overexpression causes changes in feeding behavior, which ultimately lead to obesity. We report that the levels of NEGR1 and its synaptic enrichment are increased in brains of mice with high fat diet induced obesity confirming a previous study, which found increased NEGR1 mRNA levels in the hypothalamus of 15-week-old female mice fed a high fat diet (Lee et al., 2010). High fat diet induces the loss of synapses in multiple brain regions including the hypothalamus 20 (Dietrich and Horvath, 2013;Horvath et al., 2010;Lizarbe et al., 2018), hippocampus (Lizarbe et al., 2018;Stranahan et al., 2008;Valladolid-Acebes et al., 2012), and cortex (Bocarsly et al., 2015;Lizarbe et al., 2018). Overexpression of NEGR1 in fourteen-day old hippocampal neurons, i.e. 
at the peak of endogenous NEGR1 expression in these neurons, leads to a reduction in synapse density (Hashimoto et al., 2008). We demonstrate that NEGR1 overexpression causes a reduction in the density of synapses formed on dendrites and axons of NEGR1-overexpressing hypothalamic neurons, indicating that NEGR1 overexpression can directly contribute to the high fat diet induced synapse loss, including the loss of inhibitory synapses reported previously (Valladolid-Acebes et al., 2012). NEGR1 overexpression causes a strong inhibition of synaptic vesicle membrane retrieval and can therefore affect synapse formation or maintenance by inhibiting synaptic vesicle biogenesis, which is required for synapse formation during neuronal development (Hannah et al., 1999) and maintenance of neurotransmitter release in mature synapses (Milosevic, 2018). Similarly, the loss of inhibitory synapses was found in dynamin 1, 3 double knock-out mice (Raimondi et al., 2011). Furthermore, we demonstrate that the high fat diet and trans-interactions of NEGR1 cause similar reductions in synaptic GABA concentrations and induce similar increases of GAD65 levels at the PM, suggesting that the high fat diet caused reduction in the concentration of GABA, observed at least in some brain regions including the frontal cortex and hippocampus of rats (Sandoval-Salazar et al., 2016) and the prefrontal cortex of mice (Hassan et al., 2018), can also be associated with NEGR1 overexpression. NEGR1 downregulation in mice also results in impaired core behaviors related to autism spectrum disorders (Szczurkowska et al., 2018). In humans, deletions of the NEGR1 gene have been found in patients with developmental co-ordination disorder, attention deficit / hyperactivity disorder, learning disability, delayed speech and language development, and dyslexia (Genovese et al., 2015;Tassano et al., 2015;Veerappa et al., 2013). The results of our work suggest that NEGR1-dependent dysregulation of the GABAergic system can also be a factor contributing to the development of these disorders. 22 DNA coding for full length HA-tagged human NEGR1 (NM_173808.2) was synthesized using the GeneArt® Gene Synthesis service (Life Technologies) and subcloned into the pcDNA3 vector. Mice C57Bl/6NJ mice for high fat diet experiments were obtained from the Australian Phenomics Network (APN) based at the Monash University, Melbourne, Australia, and housed at 22 ± 1 °C with a controlled 12-h light/dark cycle and had ad libitum access to water. Mice (9-11-week-old) were fed with either chow (8% calories from fat) or high-fat diets (HFD) (45% calories from fat) (Turner et al., 2009) Analysis of the GABA levels in synaptosomes Serially diluted GABA standards (0 to 5 µM) were used to construct a standard calibration curve. All standards and samples contained a fixed amount of deuterium-labeled GABA internal standard (d6_GABA) to correct for variations during extraction and instrumental analysis. Synaptosomes were pelleted down by centrifuging them at 100000 g for 45 min at +4ºC. The pellets were immediately frozen and stored at -80ºC. Synaptosomes were extracted by adding 50 µl of water to tubes with synaptosomes followed by brief sonication and addition of 100 µl acetonitrile with vigorous vortexing. Internal standard (100 µl in acetonitrile) was added and the tubes were vortexed and incubated at -20°C for 10 min. Tubes were centrifuged at 13000 rpm for 10 min and pellets containing proteins, buffer salts and sugars were discarded. 
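The calibration strategy described at the start of this subsection (external GABA standards, each spiked with a fixed amount of the d6_GABA internal standard) reduces to a simple response-ratio calculation. The sketch below, with invented peak areas, illustrates the idea; it is not the authors' processing pipeline, and the area values are placeholders only.

```python
import numpy as np

# Hypothetical LC-MS/MS peak areas for the GABA calibration standards (0-5 uM),
# each containing the same amount of d6_GABA internal standard.
conc_uM      = np.array([0.0, 0.5, 1.0, 2.0, 5.0])
area_gaba    = np.array([40, 2150, 4300, 8500, 21500], dtype=float)
area_d6gaba  = np.array([10000, 9800, 10200, 9900, 10100], dtype=float)

# The response ratio corrects for losses during extraction and for instrument drift.
ratio = area_gaba / area_d6gaba

# Linear calibration: ratio = slope * concentration + intercept.
slope, intercept = np.polyfit(conc_uM, ratio, 1)

# Back-calculate the GABA concentration in a synaptosome extract from its measured areas.
sample_ratio = 5600.0 / 9950.0
sample_conc = (sample_ratio - intercept) / slope
print(f"estimated GABA in extract: {sample_conc:.2f} uM")
```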
Supernatants were dried, reconstituted in 100 µl acetonitrile:water (4:1) and filtered through 4 mm x 0.2 µm disks (Phenomenex, Australia) into LC vials. For highly specific and sensitive measurement of GABA, analysis was carried out on a LC-MS/MS system consisting of an Accela Open AS autosampler and an Accela UPLC pump coupled to a Quantum Access triple quadrupole mass spectrometer equipped with a heated electrospray (HESI) probe (Thermo Scientific, San Jose, CA, USA). The capillary and spray temperatures were both set to 300°C and the electrospray capillary voltage to 4 kV. The dwell time for each transition was set to 250 ms. Argon was used as the collision gas at a pressure of 1 torr. Standards and synaptosome extracts (20 µl each) were injected onto a Luna NH₂ column (3 µm, 2.0 mm × 150 mm) (Phenomenex, Australia) maintained at room temperature. The flow rate was set at 300 µl/min. Mobile phase A was a mixture of 10% aqueous 5 mM ammonium acetate at pH 9 and 90% neat acetonitrile. Mobile phase B was 100% aqueous 5 mM ammonium acetate at pH 9. A gradient program was used for the optimum retention of GABA. Glutamate was separated (on a FAO3DS column) prior to electrochemical detection (ECD) (Zandy et al., 2017). Serial dilutions of GABA and glutamate were analyzed to generate a linear calibration curve using area under the curve analysis from chromatograms. Isolation of synaptic PM Synaptic PM was isolated from synaptosomes as described (Leshchyns'ka et al., 2006). Unless otherwise stated, all steps were performed at 4°C. Briefly, synaptosomes were lysed by diluting them in 9 volumes of ice-cold H2O and immediately adjusted with 1 M HEPES, pH 7.4, to a final concentration of 7.5 mM HEPES. After incubation on ice for 30 min, the mixture was centrifuged at 100,000 g for 20 min and the pellet containing synaptic PM was collected. SDS-PAGE electrophoresis and Western blot analysis Proteins were separated on a 4-12% Bis-Tris Bolt mini gel (Life Technologies) and electroblotted. Culture, transfection and treatment of primary hypothalamic neurons Primary hypothalamic neurons were prepared by dissecting hypothalami from brains of 1-3-day-old mice and dissociating neurons as described (Sheng et al., 2015; Sheng et al., 2019). Neurons were maintained in Neurobasal A medium supplemented with 2% B-27, GlutaMAX and FGF-2 (2 ng/ml) (all reagents from Thermo Fisher Scientific) in 24-well plates for 2 weeks on coverslips coated with poly-D-lysine (100 µg/ml). Neurons were transfected before plating by electroporation using a Neon transfection system (Thermo Fisher Scientific). Alternatively, neurons were transfected using the calcium phosphate method essentially as described (Jiang and Chen, 2006). When indicated, dynasore (80 µM, prepared using a stock solution in DMSO) was applied to neurons in the culture medium. Control neurons were treated with the culture medium containing the same concentration of the vehicle used to prepare the stock solutions. Labelling of live hypothalamic neurons with antibodies against the lumenal domain of VGAT Neurons were pre-incubated for 10 min with sNEGR1 (10 µg/ml) in the culture medium or mock treated with the culture medium at 37°C and 5% CO2. Rabbit polyclonal antibodies against the lumenal domain of VGAT were then added to live neurons and incubated for 30 min at 37°C and 5% CO2. Immunofluorescence labelling and analysis of fixed neurons The labelling was performed essentially as described previously (Bliim et al., 2019; Sytnyk et al., 2002).
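As a minimal illustration of the internal-standard quantification described in the GABA analysis above, the sketch below fits a linear calibration to standard response ratios (GABA peak area over d6-GABA internal-standard area) and reads a sample concentration off the fitted line. All peak areas and the sample values are invented for illustration; this is not the actual instrument output.

```python
# Toy internal-standard calibration for GABA (illustrative numbers only).
import numpy as np

std_conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0])           # GABA standards, µM
std_area = np.array([0.0, 0.9e4, 1.8e4, 3.7e4, 9.1e4])   # hypothetical GABA peak areas
istd_area = np.full(5, 5.0e4)                            # fixed d6-GABA spike areas

ratio = std_area / istd_area              # response ratio corrects for extraction losses
slope, intercept = np.polyfit(std_conc, ratio, 1)        # linear calibration curve

sample_ratio = 2.5e4 / 5.1e4              # hypothetical sample GABA / d6-GABA areas
sample_conc = (sample_ratio - intercept) / slope
print(f"estimated GABA concentration: {sample_conc:.2f} µM")
```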
To measure the intensities of GAD65 in synaptophysin accumulations, the accumulations were outlined using a threshold function of ImageJ, and GAD65 labelling intensities were measured within the outlines. To calculate percentages of synaptic and non-synaptic GAD65 clusters, all GAD65 accumulations were outlined using a threshold function in ImageJ, and the presence of synaptophysin labelling within the outlines was then manually analyzed. Analysis of the activity-dependent FM4-64 dye uptake and release in neurons The analysis was performed essentially as described previously (Andreyeva et al., 2010; Leshchyns'ka et al., 2006; Shetty et al., 2013). When indicated, neurons were incubated for 10 min at room temperature with rabbit polyclonal antibodies against NEGR1 or non-immune rabbit immunoglobulins diluted in 4 mM K+ buffer. CHO cells were maintained in DMEM/F12 (Sigma) supplemented with 5% foetal bovine serum (Sigma) in an incubator at 37°C and 5% CO2. Cells were transfected using the calcium phosphate method. Thirty min prior to transfection, growth medium was replaced with DMEM/F12. For each coverslip, 1 µg of plasmid DNA and 3.1 µl of 2 M CaCl2 were mixed with water to a final volume of 25 µl. DNA-Ca2+-phosphate precipitate was prepared by adding the DNA/CaCl2 solution to 25 µl of 2x HEPES-buffered saline (HBS, 280 mM NaCl, 1.5 mM Na2HPO4, 50 mM HEPES, pH 7.06) (1/8th at a time, mixing briefly between each addition) and incubating for 10 min at 37°C. The resulting suspension was applied dropwise to the coverslips and incubated at 37°C and 5% CO2 for 3 h. Cells were then treated with 15% glycerol in DMEM/F12 for 1-2 min in the incubator, washed with PBS, placed in growth medium, maintained for 24 h at 37°C and 5% CO2, and then used for analysis. When indicated, CHO cells were loaded with FM4-64FX dye (15 μM, Thermo Fisher Scientific) applied for 15 min at 37°C and 5% CO2 in 4 mM K+ buffer, washed 3 times with PBS, fixed and embedded in the ProLong Gold Antifade mounting medium (Thermo Fisher Scientific). Operant Conditioning Task Prior to the start of the experiment, mice were placed on diet restriction for 7-15 days to reduce their weight to 80-85% of the original weight. They were kept on diet restriction throughout the experiment to maintain their weight at 80-85% of the original weight. During the experiments, the mice were placed into an operant chamber containing two levers. The left "active" lever was connected to a sucrose pellet dispenser that released sucrose pellets into a food-magazine. The right lever was inactive. An operant conditioning test was conducted in three stages consisting of magazine training, fixed ratio training and a progressive ratio test. The magazine training lasted for 30 min, during which palatable sucrose pellets were automatically delivered from the food-dispenser to the food-magazine at a rate of 1 pellet/min. The numbers of times mice entered the magazine and the numbers of pellets eaten by mice were counted. The magazine training was conducted for 6 to 10 days until the mouse left 13 or fewer pellets in the magazine by the end of the training session. The mice were then trained to press the lever in the fixed ratio training. During this training, an active lever press, but not an inactive lever press, resulted in the delivery of a single pellet into the food magazine. Each fixed ratio training session lasted for ≤30 min or until 30 pellets were delivered. Fixed ratio training was performed for ten days until the mouse left <15 pellets in the food-magazine by the end of the session.
The mice were then subjected to the progressive ratio test essentially as described (June and Gilpin, 2010), during which the number of active lever presses required to release a pellet was progressively increased. The number of presses required to release each consecutive pellet was calculated as described by Richardson and Roberts (1996) (Sharma et al., 2012) using the formula [5e^(0.2R)] − 5, where R denotes the number of pellets already rewarded plus 1 and the brackets denote rounding to the nearest integer (a short worked example of this schedule is given after the figure legends below). Each PR session lasted for 1 h or until the break point, at which mice refused to press the lever the number of times required to obtain the next pellet. The total number of times the active lever was pressed and the number of magazine entries were also counted as described (June and Gilpin, 2010). The progressive ratio sessions were performed on three consecutive days. Immunohistochemical analysis Mice were deeply anaesthetized with intraperitoneally injected pentobarbitone sodium (100 mg/kg) and transcardially perfused with ice-cold PBS for 2 min followed by ice-cold 4% paraformaldehyde. (A) Mean ± SEM percentages of pellets eaten over 30 min during magazine training sessions. One pellet was released into the magazine each minute. (B) Mean ± SEM numbers of active lever presses and pellets eaten during the fixed ratio task. Each active lever press led to the release of a pellet. (C) Individual progressive ratio break point values measured over three consecutive days in the progressive ratio task. Lines connect data for individual mice. Graphs showing data normalized to the values on day 1 set to 100% are included to illustrate an increase in the progressive ratio break point in NEGR1+/+ mice. (D) Mean ± SEM of the normalized values shown in C. (E) Mean ± SEM numbers of pellets delivered and active lever presses in the progressive ratio task. The numbers of active lever presses were normalized to the number on day 1 set to 100%. In A-E, n = 9 NEGR1+/+, 8 NEGR1+/- and 12 NEGR1-/- mice were analyzed. *p, repeated measures two-way ANOVA and Tukey's multiple comparisons test. Graphs show levels of GAD65 in brain homogenates and synaptosomes of HFD fed mice relative to the levels in chow fed mice set to 100%. n = 3 mice per group. (H) Axons and dendrites of NEGR1+/+ cultured hypothalamic neurons co-transfected with cherry and control pcDNA3 vector or NEGR1. Neurons were immunolabelled for synaptophysin and NEGR1. Contains uncropped Western blot source data for Figure 10. (A) Synaptic vesicle-localized GAD65 uses glutamate to synthesize GABA, which is transported to the vesicle lumen by VGAT (a). GAD65 is de-palmitoylated and removed to the cytosol from refilled vesicles (b), which then fuse with the PM and release GABA. GAD65 released to the cytosol can travel to the Golgi in the neuronal cell body (red arrow, (Kanaani et al., 2008)) or attach to the PM in the synapse (c). Palmitoylation of GAD65 at the PM targets it to the NEGR1-containing cholesterol-enriched microdomains, which are anchored in synapses by NEGR1-containing adhesive bonds (d). NEGR1-dependent synaptic clustering of lipid rafts promotes 'loading' of palmitoylated GAD65 on the newly formed synaptic vesicles (e). This loading is facilitated by interactions between NEGR1-containing lipid rafts and components of synaptic vesicles constraining the retrieval of synaptic vesicle membranes. Synaptic recycling of vesicles and GAD65 is shown with black and pink arrows, respectively.
(C) In NEGR1-/- neurons, synaptic clustering of lipid rafts is reduced, synaptic targeting of GAD65 is inhibited, and non-synaptic GAD65 clusters are formed.
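As a worked example of the Richardson and Roberts (1996) progressive-ratio schedule referenced in the operant-conditioning methods above, the short sketch below computes the press requirement for each consecutive pellet; the first ten requirements reproduce the familiar 1, 2, 4, 6, 9, ... sequence.

```python
# Progressive-ratio schedule: presses required for pellet R is
# round(5 * e^(0.2 * R)) - 5 (Richardson and Roberts, 1996).
import math

def presses_required(r: int) -> int:
    """Lever presses required to earn pellet number r (1-indexed)."""
    return round(5 * math.exp(0.2 * r)) - 5

print([presses_required(r) for r in range(1, 11)])
# -> [1, 2, 4, 6, 9, 12, 15, 20, 25, 32]
```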
2022-02-12T14:20:30.139Z
2022-02-08T00:00:00.000
{ "year": 2022, "sha1": "b20ea6587ecaec78d046b9933f405528a19ec0ab", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2022/02/08/2022.02.08.479601.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "b20ea6587ecaec78d046b9933f405528a19ec0ab", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
248632818
pes2o/s2orc
v3-fos-license
Construction of English Translation Model Based on Neural Network Fuzzy Semantic Optimal Control This work addresses four aspects of the English translation model: consistency, model structure, semantic understanding, and knowledge fusion. To solve the problem of the lack of personality consistency in the responses generated by neural networks in English translation models, an English translation model with neural network fuzzy semantic optimal control is proposed in this study. The model uses a fuzzy semantic optimal control retrieval mechanism to obtain appropriate information from an externally set English information table; to further improve the effectiveness of the model in retrieving correct information, this work adopts a two-stage training method, using ordinary English translation data for model pretraining and then fine-tuning the model using English translation data with optimal control containing fuzzy semantic information. The model consists of two parts: a sequence generation network that outputs the probability distribution of words and an evaluation network that predicts future whole-sentence returns. In particular, the evaluation network can evaluate the impact of currently generated words on whole sentences using deep inheritance features, so that the model considers not only the optimal solution for the current words, as in other generative models, but also the optimal solution for the whole sentences to be generated. The experimental results show that the English translation model with neural network fuzzy semantic optimal control proposed in this study can obtain better semantic feature representation by using a novel bidirectional neural network and a masked language model to train sentence vectors; the combination of semantic features and fuzzy semantic similarity features yields higher scoring accuracy and better model generalization. In English translation applications, there are large improvements in scoring accuracy and generality. Introduction With the development of artificial intelligence and the wide application of deep learning technology in natural language processing (especially neural machine translation), the performance of machine translation is becoming more and more powerful. However, for the time being, most research in domestic academia and industrial practice treats English as the source or target language for machine translation, and research on translation between English and other languages is still far from sufficient [1,2]. Under economic globalization and integration, economic, trade, and cultural exchanges are rapidly increasing, and research on Chinese-French neural machine translation can help to further strengthen exchanges between the two countries. However, since English has incomplete equivalence in lexical and syntactic aspects, mismatches, and variation in verb tense and person, such incomplete equivalence undoubtedly brings great difficulties and challenges to neural machine translation [3]. English translation tasks can be classified into different levels according to different categorization methods. According to the granularity of the processed text, they can be classified as word level, sentence level, and chapter level. According to the research methods, they can be classified as supervised learning, semisupervised learning, and unsupervised learning.
Traditional classification algorithms mainly use manually constructed feature selection methods to perform feature extraction and train classifiers to complete the classification task. The performance of these classification methods relies heavily on the extraction of text features [4]. The description and modeling of nonlinear systems has become a hot research issue. Neural network fuzzy semantic optimization control technology has quickly become a key research object of experts and scholars due to its strong fitting ability and its excellent universal-approximation characteristics. Many translation algorithms based on neural network fuzzy semantic optimization control technology have been proposed. In translation systems, the use of mathematical tools such as neural network fuzzy semantic optimization control technology to more accurately describe the motion state of a maneuvering target still has very important research value [5]. Since English translation models have very promising applications, they have been rapidly developed in both academic and industrial circles. Before the emergence of deep learning technology, traditional research ideas were mainly divided into two types: the linguistic rule/template-based approach and the retrieval-based approach. However, researchers have found that with the increasing complexity of application scenarios and increasing user requirements for the interaction experience, rule-based matching approaches cannot meet these needs [6,7]. While retrieval-based chatbots can ensure the grammatical rationality and fluency of the replies, they are limited by the richness of the training data, and the English translation model cannot give satisfactory responses if the replies the user requires are not in the conversation database. Deep learning techniques can solve problems that are difficult for traditional methods in English translation models and enable robots to learn dialogue patterns from a large amount of dialogue data so that they can effectively respond to any input from users. However, existing deep learning methods still have defects such as poor response quality, inconsistent responses, and errors in semantic understanding, which remain to be explored in depth; this is also the significance of this research [8]. Deep learning methods can transform text into vector representations for deep semantic extraction. Bidirectional long short-term memory networks can memorize preceding input information and combine it to influence the output for later text, representing the contextual information of the text to extract the features of utterance sequences and semantics. Combining deep semantic and shallow linguistic features, the constructed model can solve the automatic scoring problem of English translation by combining fuzzy semantics and text fuzzy semantic similarity to calculate a reasonable score [9]. This study presents a neural network fuzzy semantic optimal control model for English translation. The model uses inherited features to unify the data and objectives of the training and testing process and gives the model the ability to focus on whole-sentence generation. The proposed model takes into account the rubric of the task and the long-term payoff of model decoding, and is trained and tested on a large-scale English translation corpus, demonstrating significant enhancement effects [10].
In the first chapter, the research background, the significance of this topic, and the main research content of this study are introduced. Chapter 2, related work, briefly analyzes the scoring problem of the English translation model. Existing text fuzzy semantic similarity algorithms, word vector and sentence vector representation methods, and English translation models based on neural network fuzzy semantic optimal control are reviewed. In the third chapter, the English translation score points and scoring basis are analyzed; the candidate features are then summarized and extracted, including shallow linguistic features (lexical features such as the lexical number ratio, named entities, and keywords, and sentence features) and deep semantic features, combined with fuzzy semantic similarity; finally, the English translation model is constructed. The combination of semantic features and fuzzy semantic similarity features makes the scoring more accurate and gives better generalizability. In chapter 4, the results are analyzed to compare different algorithmic models using two datasets; after experiments on English translation-related datasets, the accuracy and efficiency of the proposed model on knowledge decision and response generation tasks are demonstrated. Chapter 5 is the summary and outlook: the research work covered in this study is summarized and concluded; the shortcomings of this study and the problems that still need to be solved are presented; and, finally, future research trends are outlined. Related Research. Deep learning can automatically extract features from text by building neural networks, without the need to manually design features. The principle is that deep semantic features of the text are learned and then classified by classifiers [11]. With the development of deep learning theory and word vector technology, neural network models have gradually been applied to English translation tasks and are favored by many scholars. Madani et al. used a CNN to model sentences and completed English translation tasks on this basis, with better results than traditional methods and good results on several datasets [12]. Reyes-Magaña et al. proposed the Tree-LSTM model, which is used to predict the semantic relevance of text for English translation and classification [13]. In recent years, the Attention model in deep learning was first used for machine translation by Yuan et al.; later, various variants of the Attention model were also applied to English translation work [14]. Chen et al. proposed a multiattentional convolutional neural network and used it for sentiment classification of specific targets, which can obtain deeper information on sentiment features and effectively identify the sentiment polarity of different targets [15]. Neural network English translation has entered the field of vision of more and more people, providing convenient and fast translation services for more and more users, and has become an indispensable part of the public's daily life and communication. However, due to the explosive growth of multilingual information, people's demand for translation between different languages has increased, and there are higher requirements for the accuracy, fluency, and speed of neural network English translation [16]. One of the key reasons for the gap between machine translation and human translation is the lack of rich translation knowledge of various granularities.
For research on neural network English translation, research results for other languages can be used for reference. With the rapid development of technology and the continuous acceleration of economic processes, machine translation has become an important link in international cooperation and exchange [17]. With the development of artificial intelligence, the addition of deep learning technologies such as RNN and LSTM has changed the pattern of traditional machine translation in China, and machine translation technology has developed further in the new era; the biggest change is the transformation from PBMT to NMT, which has undoubtedly promoted the rapid improvement of machine translation quality. Chen et al. used multitask learning and semisupervised learning to improve the translation performance of NMT on resource-scarce languages [18]. Alom et al. incorporated external prior knowledge, such as bilingual dictionaries and linguistic knowledge, into neural machine translation in a study of a fuzzy semantic-based approach to modeling NMT decoders. They also applied fuzzy semantic knowledge to NMT, including a fuzzy semantic-level decoder and a word-level decoder, similar to AI Lab's approach [19]. The main feature of the fuzzy model is to use multiple linear systems to decompose the input through methods from fuzzy mathematics and then to defuzzify through fuzzy reasoning, thereby generating multiple sets of linear input-output functions with which the complex nonlinear system is fitted [20]. In this study, we analyze the English translation scoring criteria; the linguistic expression and wording accuracy of students' answers largely affect the final score, so this study focuses on the automatic determination of text semantics and language score points with fuzzy semantic similarity methods [21,22]. The deep learning method is used to abstractly represent the deep semantic and linguistic features of the text. This study proposes a method to analyze and extract deep semantic features and shallow features of text, then designs an automatic scoring model for English translation, fuses the algorithm models, introduces an algorithm based on fuzzy semantic similarity for text retrieval, selects multiple fuzzy semantic similarities, and calculates the final score by combining semantic features through regression or classification algorithms, forming an algorithm model for English translation scoring; the optimal algorithm model is obtained after model training, and experimental validation is then conducted. The word vectors trained on the monolingual corpus are uniformly mapped into a generic vector space to obtain a generic word vector embedding space. The model is trained using meta-learning: a preliminary translation model, i.e., the initial model parameters, is obtained by training with the English bilingual training and validation sets; the English bilingual test set is then trained from these parameters for fast adaptation, and a suitable translation model is finally obtained through training and fine-tuning [23]. At the same time, to address the shortcomings of the nonautoregressive translation model, the knowledge distillation method is applied to it, which can significantly improve the English translation effect while enhancing the generation rate.
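The meta-learning recipe sketched above (train an initialization on one language pair, then fast-adapt with a few gradient steps on another) can be illustrated with a toy first-order variant. The sketch below is a Reptile-style simplification on synthetic one-parameter regression "tasks", not the paper's actual MAML-style setup or data; every name and number in it is invented.

```python
# Toy first-order meta-learning (Reptile-style): learn an initialization that
# adapts to a new synthetic "task" in a few gradient steps.
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # Each task is a 1-D linear regression with its own slope,
    # a stand-in for a low-resource language pair.
    w = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, 20)
    return x, w * x, w

def adapt(theta, x, y, lr=0.5, steps=10):
    # Fast adaptation: a few SGD steps on the task's MSE loss.
    for _ in range(steps):
        theta -= lr * 2.0 * np.mean((theta * x - y) * x)
    return theta

theta = 0.0                                        # meta-initialization
for _ in range(1000):                              # meta-training over sampled tasks
    x, y, _ = make_task()
    theta += 0.05 * (adapt(theta, x, y) - theta)   # Reptile meta-update

x_new, y_new, w_true = make_task()                 # unseen task: fast-adapt and check
print(f"true slope {w_true:.2f}, adapted slope {adapt(theta, x_new, y_new):.2f}")
```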
A Study on English Translation Model Based on Neural Network Fuzzy Semantic Optimal Control Neural Network Fuzzy Semantic Optimal Control of Translation Features. The words are sorted and filtered by the sentiment dictionary, and the top n words with meanings similar to the sentiment words are selected as auxiliary quantities for updating the sentiment word vectors [24]. Finally, the word vectors are updated by the exact word vector generation algorithm, the final exact representation of the words is calculated, and the exact representation of the whole text is then obtained. The above additional text information is combined with traditional word vector training to complete the pretraining of the precise word vectors. That is, when using the traditional model for training, the contextual text information is used to guide the model to learn emotional word features according to the sentiment polarity of the sentences in the corpus and the polarity of the self-labeled seed words. The precise word vector pretraining model and the word vector update algorithm are the two core points of the whole model. The accuracy of the former directly determines the auxiliary quantities used for updating the word vectors, which indirectly affects the resulting word vectors; the latter plays a decisive role in the final generated results. The neural network fuzzy semantic optimal control translation model mainly consists of three parts: pretrained exact word vectors, word sorting and screening, and exact word vector generation. The overall structure is shown in Figure 1. Redundant punctuation and incorrect grammatical formatting often appear in raw text. To make it easier and more accurate for the model to learn text features, this redundant information needs to be filtered out, for example, by breaking long sentences into short sentences, removing redundant symbols, and correcting misspellings. Formatted data can indirectly improve the training efficiency of the model. The feature learning model consists of three components: context words and the information of the words that make up these words, the sentiment polarity information of the sentences, and the sentiment polarity information of the seed words. The word vector is trained by combining the context words and the information of the words that make up these words. The extended skip-gram model can better integrate the other two components. The skip-gram model aims to predict the context words of a given word and is optimized by maximizing the average log probability of equation (1), where H denotes the corpus, h denotes the words in H, and E(h) denotes the context words for h within the specified window. Sentiment information about the sentences is added during the word vector training. This is performed by incorporating the sentiment information of the sentence into the training model by predicting the sentiment polarity of the sentence. The implementation of this process is similar to the first part, except that the prediction focus is no longer on the context of the word but on the sentiment polarity of the whole sentence [25]. Each sentence is represented as a vector s at the time of implementation, and the vector s is the average of the word vectors of the words that make up the sentence. The sentences in the corpus used in this study are annotated with sentiment polarity so that the information on the sentiment polarity of the sentences can be used to improve the training quality of the model.
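Equation (1) itself is not reproduced in the extracted text. In the notation just defined (corpus H, words h, context window E(h)), the standard skip-gram average log probability that the passage describes would read as follows; the symbol L1 for the objective is my label, not the paper's:

```latex
\[
  \mathcal{L}_1 \;=\; \frac{1}{|H|} \sum_{h \in H} \sum_{e \in E(h)} \log p(e \mid h)
\]
```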
The sentence-polarity prediction process aims to maximize the objective function over the sentences. As shown in equation (2), K represents the corpus, k represents the sentences in the corpus, and S(k) denotes the sentiment polarity of the sentence. The same negative sampling optimization method is used as in the first part [26]. During the training process, the sentiment polarity of words is fully considered in conjunction with the acquired seed sentiment dictionary, and more sentiment information is incorporated into the word vector learning process by predicting the sentiment polarity of words. The sentiment polarity of words is divided into three categories: positive, negative, and other. This process also maximizes the log probability. As shown in equation (3), A denotes the seed lexicon, S(k) denotes the sentiment polarity, and ν_ab denotes the auxiliary vector. The other parameters have meanings similar to those in the two equations above. From the trained corpus lexicon, the top k words are selected in descending order of frequency, and these k words are manually labeled with sentiment polarity [27]. The number of seed words is small, and the current word set needs to be extended to generate a more complete lexicon. Considering the hit rate and accuracy in seed word matching, the seed sentiment lexicon should also contain the most common sentiment words in the corpus to improve the training efficiency of the model and the final classification performance. Therefore, initially selecting k high-frequency words from the original corpus and, based on these words, using a synonym word forest to find words semantically similar to the high-frequency words can satisfy this extension requirement. Figure 1: Neural network fuzzy semantic optimal control translation model. Fuzzy rules correspond to translation models established for certain translated quantities [28]. Similar to the traditional interacting multiple model, these rules can transform into and interact with each other; different fuzzy rules in ITS-UKF transform and interact by themselves according to the degree of intersection of the fuzzy membership functions between rules. The weight of each model in ITS-UKF and the antecedent parameters form a knowledge system, and a neighborhood rough set is used to reduce and eliminate redundant models. At the same time, excessive rough set reduction consumes a lot of computing power and loses effective information. Based on the residual of each model at each time, an adaptive reduction judgment algorithm is proposed, which monitors the tracking situation through the residuals, adaptively performs feature re-reduction, and then simplifies the features. Let X_i^j denote the fuzzy set into which the translated English is divided under the jth feature at time i. The closer the fuzzy sets of different rules and their antecedent parameters are to each other, the higher the transfer probability between rules. At time i, the transfer probability of transforming fuzzy set E into fuzzy set S is shown in equation (4), where (E, k) represents the degree of intersection between fuzzy set E and fuzzy set k, n_J represents all fuzzy sets under the jth feature, and E, k ∈ n_J. The optimal control rule of neural network fuzzy semantics is a hyperplane-shaped state value, so the general FCM algorithm is not applicable. Here, the FCRM clustering algorithm is applied to realize the clustering of the hyperplane-shaped data.
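Equation (4) is likewise missing from the extracted text. Given the description above, a natural normalized form consistent with it would be the following, where the transfer probability from fuzzy set E to fuzzy set S at time i is the intersection degree (E, S) normalized over all fuzzy sets under the jth feature; this reconstruction is an assumption, not the paper's verbatim equation:

```latex
\[
  p_i(E \to S) \;=\; \frac{(E, S)}{\sum_{k \in n_J} (E, k)}, \qquad E, S, k \in n_J
\]
```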
Construction of the English Translation Model. The predicted model probability is calculated as shown in equation (5), in which ω^k_{h+1} denotes the normalized probability of the kth fuzzy rule at moment h + 1, and ω^k_{h|h+1} denotes the total mixing probability of the other fuzzy rules transforming into the kth fuzzy rule. The mixing probability is normalized as shown in equation (6), where ω^{k|i}_{h|h+1} is the normalized mixing probability when the ith fuzzy rule transforms into the kth fuzzy rule at moment h + 1. When the difference Δ between the estimated result of a certain model and the observed result is large, it can be assumed that the current translation state can no longer be described by that English translation model and may have begun to change. In other words, a threshold can be set, and when the difference Δ between the predicted result of a certain model and the observed result is larger than this threshold, it means that the English may not be translated by this model. This is applied to feature selection. First, the difference between the predicted observation S(h_i^k) and the observation G(h) is calculated for the set of all models at the current moment. F(ϕ) is trained to predict the embedding vectors. We choose cosine fuzzy semantic similarity as the neighboring metric for the semantic similarity between word vectors. The objective function for training is given in equation (8). After the training of the model, we obtain E(h), with the help of which we can obtain the word vectors of unregistered (out-of-vocabulary) words on the test set. In equation (8), S(k) is the input, and it is obtained using fuzzy semantic optimal control. The flowchart of fuzzy semantic optimal control is shown in Figure 2. Evaluation of Neural Network Controlled English Translation Model Training. Usually, the word vectors of semantically similar word pairs also have higher similarity, so the calculated cosine similarity value will be higher, and vice versa; the dataset itself carries manually labeled fuzzy semantic similarity values. Therefore, the semantic relevance of the word vectors can be measured by calculating Spearman's correlation coefficient between the actual cosine values and the manually labeled values, where i indexes the word pairs and h_i denotes the difference between the ranks of the same word pair in the two orderings obtained by sorting the word pairs according to the calculated cosine values and the labeled values. When constructing the model, the shallow text features of word co-occurrence, text length, number of words in each lexical category, and word list size, together with deep semantic features, were selected as the features of the English text. Only representative shallow text features were selected: on the one hand, because the constructed scoring model focuses on using a deep neural network for feature extraction, which already captures features such as word meaning, word order, and sentence order, these are not extracted repeatedly; on the other hand, this keeps the model applicable to English text. The shallow linguistic features mainly cover two aspects: words and syntax. As shown in Table 1, the model mainly judges the quality of answers based on similarity features and semantic features. For the semantic understanding of English-translated texts, especially in the case of diverse language expressions with the same semantics, it can score more accurately.
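A minimal sketch of the word-similarity evaluation just described: cosine similarities between word-vector pairs are rank-correlated (Spearman) against human-annotated similarity labels. The vectors and labels below are random placeholders, not the paper's data; the commented formula is the classical rank-difference form using the h_i defined above.

```python
# Spearman evaluation of word vectors: rho = 1 - 6 * sum(h_i^2) / (n * (n^2 - 1)),
# where h_i is the rank difference of pair i under the two orderings.
import numpy as np
from scipy.stats import spearmanr

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(1)
pairs = [(rng.normal(size=50), rng.normal(size=50)) for _ in range(5)]  # dummy vectors
gold = [0.9, 0.7, 0.5, 0.3, 0.1]                                        # dummy labels

cosines = [cosine(u, v) for u, v in pairs]
rho, _ = spearmanr(cosines, gold)
print(f"Spearman rho = {rho:.3f}")
```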
At the same time, its understanding of polysemy and contextual relevance is also adequate. In addition, the requirements on the size of the training set are not harsh. By calculating weights through the attention mechanism, the influence of similarity features and semantic features on the final score can be adjusted. The fuzzy semantic similarity of the shallow linguistic features and the fuzzy semantic similarity of the deep semantics are calculated separately, and the fuzzy semantic similarity is selected and combined with the deep semantic features to calculate the assigned weights and obtain the final score. The selection of the fuzzy semantic similarity method is explained in chapter 4. Existing English translation scoring models are mainly divided into two types of methods: the first focuses on extracting shallow text features with a large number of dimensions and calculating the fuzzy semantic similarity with standard English translations; the second treats scoring as a simple classification problem by representing the text with vectors or combining shallow text features, and does not consider the influence of fuzzy semantic similarity with standard English translations. The limitations of these two types of methods are as follows: the first type lacks semantic extraction, which may lead to low scores when a student's English translation is semantically identical to the standard English translation but has an unusual textual form; the second type requires a large amount of training data, which may lead to low scoring accuracy if the model is not sufficiently trained. The neural network-controlled English translation model discriminates the quality of an English translation mainly on the basis of fuzzy semantic similarity features and semantic features, and it can score more accurately for the semantic understanding of English-translated texts, especially when the language expressions are diverse but semantically identical. It also has an adequate understanding of polysemy and contextual association. It is not demanding on the size of the training set (Table 1 lists the shallow features used, such as vocabulary size and the number of words in each part of speech), and the influence of fuzzy semantic similarity features and semantic features on the final score can be adjusted by calculating weights through the attention mechanism. Response generation uses neural networks to encode and decode the input text and knowledge. The network is a deep architecture that contains only attention mechanisms. It has the advantage of being able to process all words or symbols in a sequence in parallel while using self-attention to combine context with more distant words, thus remedying the slow training of RNN networks. By processing all words in parallel, each word is allowed to attend to the other words in the sentence over multiple processing steps. The network can also be made very deep to fully exploit the properties of deep neural network models and improve accuracy. The response generation part and the knowledge decision part are trained jointly, in a process similar to multitask learning: the reinforcement learning model learns how to make knowledge decisions, and the encoder-decoder model learns how to generate responses based on the input utterance, context, and selected knowledge.
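As a rough sketch of the attention-based weighting described above, the snippet below mixes a fuzzy-semantic-similarity score with a deep-semantic score through softmax attention weights to form a final score. The architecture, the learned parameters, and all the numbers are invented stand-ins for illustration, not the paper's actual model.

```python
# Toy attention fusion of two scoring features (illustrative values only).
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - np.max(x))
    return e / e.sum()

similarity_score = 0.72          # fuzzy semantic similarity to the reference answer
semantic_score = 0.65            # score derived from deep semantic features
features = np.array([similarity_score, semantic_score])

attn_params = np.array([1.3, 0.8])         # hypothetical learned attention parameters
weights = softmax(attn_params * features)  # attention weights over the two features

final_score = float(weights @ features)
print(f"weights = {np.round(weights, 3)}, final score = {final_score:.3f}")
```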
Through each round of interactive dialogue, the optimal path is found in the subgraph starting from the initial topic entity and finally stopping at the target topic entity. The reinforcement learning agent is trained to find the optimal topic transfer path by using sequences of topics with known initial and target topics together with conversation pair data. Analysis of the Optimal Control Translation Algorithm. In the training process of the algorithm, a complete epoch includes the updating of training parameters and the testing and validation of the algorithm. In the actual training process, the data need to be iterated over several times for the algorithm to converge. In this experiment, different numbers of iterations are selected, and the results are shown in Figure 3. From the above experimental results, we can see that the algorithm achieves the highest accuracy on the Data Hall dataset when the epoch is 7, and the highest accuracy on the NLP&CC2014 and ChnSentiCorp datasets when the epoch is 8. Therefore, the above experimental results show that the algorithm can obtain the best classification results in a short time by choosing the optimal number of iterations during training. To verify the accuracy and effectiveness of the SA-RNN-CNN algorithm, we compare it on the three datasets used in this study with the LSTM, BiLSTM, CNN, Self-Attention, and CNN-RNN algorithms. CNN-RNN is a hybrid algorithm that first obtains the phrase representation of the text using a CNN, then obtains the sentence representation of the text as a whole using a bidirectional RNN, and finally averages the sentence representations and performs the classification with a softmax classifier. The experimental results are shown in Figure 4. From the experimental results, it can be seen that the SA-RNN-CNN algorithm achieves the highest classification accuracy on all three datasets, while the BiLSTM algorithm achieves the lowest loss rate. The classification accuracies of the SA-RNN-CNN and CNN-RNN algorithms are close to each other; the SA-RNN-CNN algorithm has 0.93%, 0.95%, and 1.03% higher accuracy and 0.4279, 0.4245, and 0.4085 lower loss rates than the CNN-RNN algorithm, respectively. The BiLSTM algorithm achieves the lowest loss rate, but the SA-RNN-CNN algorithm is 5.18%, 5.95%, and 6.51% more accurate than the BiLSTM algorithm, with loss rates that are close, being only 0.0213, 0.0378, and 0.0335 higher, respectively. English Translation Model Performance Analysis. A comparison experiment was conducted to verify the effectiveness of NTM in English translation by evaluating the BLEU values and the time consumption distribution. After the experimental training, the trend of the BLEU values of the three experimental models is shown in Figure 5. When the training reaches 100,000 steps, the models start to enter the convergence stage and the growth of the BLEU value starts to slow down; at about 550,000 steps, the BLEU value of the model is stable and the best BLEU value of training is obtained. The best BLEU value of the three models is 65.48. Figure 6 shows the trend of time consumption of the three experimental models.
In Figure 6, it can be seen that the time consumed by the three groups of translation models gradually increases with the number of training steps, but remains at a lower level compared with the transformer model, because the three groups of experiments adopt the English translation model based on the NAT architecture. According to the trend of the lines, English machine translation incorporating the NTM architecture does not change greatly in terms of time consumption; its time consumption is slightly higher than that of the NAT + KD + MAMML translation model but still remains low. Based on the NAT architecture, fusing NTM can effectively improve the quality of English neural machine translation. The NAT model that integrates the meta-learning strategy can further improve the performance of Mongolian-Chinese machine translation. Compared with the NAT model that only performs knowledge distillation, it improves the BLEU value by 2.3 points, and it improves BLEU by 5.1 points compared with the transformer model. Its time consumption is much lower than that of the transformer model and slightly higher than that of the NAT + KD model. The model is supplemented with contextual semantic information through the excellent storage capacity of the external memory of neural Turing machines to improve the translation effect. Then, we build an English translation model based on NTM and a word-level attention mechanism, use the external memory to write and read contextual semantic information, and set the relevant experimental parameters; next, we conduct group experiments on the built English translation model to verify the effectiveness of NTM in English translation. Finally, the experimental results are compared to verify the effectiveness of NTM for English machine translation based on the NAT architecture, which can further improve the quality of English neural machine translation while maintaining the low time consumption of the model. Simulation Analysis of English Translation Model Training. To test the effect of inheritance features on the model's ability to predict long-term returns, experiments are conducted on sentences of different lengths. Figure 7 shows the performance of the tested models at different sentence lengths. It can be seen that the proposed model outperforms the benchmark model in almost all length intervals. In particular, when predicting long sentences, the proposed model obtains better BLEU scores. Intuitively, the experimental results demonstrate that the DSF module can help the model achieve better long-term returns than local optima, and this difference is more obvious for longer sentences. It can also be seen that the DSF-E2D model generally performs better than the E2D-RL model on short sentences. We believe this is because the generation of short sentences depends more on the current per-word probability than on the long-term payoff. However, the E2D-RL model fine-tunes the model again after pretraining by maximizing the per-word likelihood and the expected payoff, which makes the model more favorable to the prediction of long sentences to the detriment of short sentences.
Unlike them, the proposed model calculates a Q-value for a group of similar words, and this value only slightly corrects the probability distribution of all words without destroying the pretrained model. Therefore, the proposed model performs better on short sentences relative to the E2D-RL model. We briefly test the transfer learning capability of the DSF-E2D model. The test is performed by replacing the evaluation method of the model when the training is close to convergence, i.e., replacing the BLEU value with the ROUGE value. The experimental results are recorded in Figure 8. An interesting phenomenon can be found: DSF-E2D can quickly adapt to the change in the evaluation method. It only needs to relearn the parameters w of the reward network to achieve a better evaluation score. In this study, we use DSF to unify the training and testing process and give the model the ability to focus on generating whole sentences. The proposed model shows significant improvement when trained and tested on a large-scale bilingual corpus. We intend to apply this model to more text generation tasks in the future, especially dialogue systems, and use its inherited features to solve the problem of transfer learning for dialogue models trained on corpora from different domains. Conclusions To address the lack of consistency in English translation models, this study considers English sentence features as the most important element reflecting consistency. Therefore, this study sets up English translation-related information bases, retrieves information from the information bases through the attention mechanism, and then fuses the attention retrieval mechanism into the end-to-end generation model. When generating words, the generation model compares the probability of generating words from the lexicon with the probability of generating information from the information base, so that it can generate responses containing the relevant information when information-related questions need to be answered. The traceless (unscented) Kalman maneuvering tracking algorithm is based on the neural network fuzzy semantic multimodel. To address the problem that the set of models in traditional multimodel algorithms cannot cover all English translation models, the ITS-UKF algorithm uses the powerful fitting ability of neural network fuzzy models: it selects some translation features as the system antecedents of the neural network fuzzy models and fuzzily partitions them to form several fuzzy rules. At the same time, each rule uses a different traceless Kalman filtering algorithm to calculate the posterior of the neural network fuzzy model. Finally, the FCRM algorithm is used iteratively to compute the optimized, updated antecedent parameters. The simulation experiments demonstrate that ITS-UKF has better error performance than the traditional IMM algorithm. The experimental results verify the effectiveness and feasibility of this method. However, as the number of neural network layers increases, the computational complexity also increases, which affects training time and speed. Meanwhile, since research on optimal control-based neural machine translation is still in its infancy, most of the external tools for optimal control slicing are not mature enough. Therefore, the next steps are to reduce the computational complexity of the model, improve the translation speed, and optimize the external optimal control slicing tools.
Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2022-05-10T16:36:11.598Z
2022-05-02T00:00:00.000
{ "year": 2022, "sha1": "7f69eb094362bf39bfe668044f4656cfacc518f6", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/cin/2022/9308236.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3834ca4c731f4c3a55f2911c0e04c5695f591fca", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
17102188
pes2o/s2orc
v3-fos-license
The Faint and Extremely Red K-band Selected Galaxy Population in the DEEP2/Palomar Fields We present in this paper an analysis of the faint and red near-infrared selected galaxy population found in near-infrared imaging from the Palomar Observatory Wide-Field Infrared Survey. This survey covers 1.53 deg^2 to 5-sigma detection limits of K_vega = 20.5-21 and J_vega = 22.5, and overlaps with the DEEP2 spectroscopic redshift survey. We discuss the details of this NIR survey, including our J and K band counts. We show that the K-band galaxy population has a redshift distribution that varies with K-magnitude, with most K<17 galaxies at z<1.5 and a significant fraction (38.3+/-0.3%) of K>19 systems at z>1.5. We further investigate the stellar masses and morphological properties of K-selected galaxies, particularly extremely red objects, as defined by (R-K)>5.3 and (I-K)>4. One of our conclusions is that the ERO selection is a good method for picking out galaxies at z>1.2, and within our magnitude limits, the most massive galaxies at these redshifts. The ERO limit finds 75% of all M_*>10^{11} M_⊙ galaxies at z ~ 1.5 down to K_vega = 19.7. We further find that the morphological break-down of K<19.7 EROs is dominated by early-types (57+/-3%) and peculiars (34+/-3%). However, about a fourth of the early-types are distorted ellipticals, and within CAS parameter space these bridge the early-type and peculiar population, suggesting a morphological evolutionary sequence. We also investigate the use of a (I-K)>4 selection to locate EROs, finding that it selects galaxies at slightly higher average redshifts (<z> = 1.43+/-0.32) than the (R-K)>5.3 limit, which has <z> = 1.28+/-0.23. Finally, by using the redshift distribution of K<20 selected galaxies, and the properties of our EROs, we are able to rule out all monolithic collapse models for the formation of massive galaxies. INTRODUCTION Deep imaging and spectroscopic surveys in the optical have become the standard method for determining the evolution of the galaxy population (e.g., Kron 1980; Steidel & Hamilton 1993; Williams et al. 1996; Giavalisco et al. 2004). These surveys have revolutionised galaxy formation studies, and have allowed us to characterise basic properties of galaxies, such as their luminosity functions, stellar masses, and morphologies, and how these properties have evolved (e.g., Lilly et al. 1995; Ellis 1997; Wolf et al. 2003; Conselice et al. 2005a). However, due to technological limitations with near-infrared arrays, most deep surveys have been conducted in optical light, typically between λ = 4000-8000Å. This puts limits on the usefulness of optical surveys, as they select galaxies in the rest-frame ultraviolet at higher redshifts, where many of the galaxies contributing to the faint end of optical counts are located (e.g., Ellis et al. 1996). Galaxies selected in the rest-frame ultraviolet limit our ability to trace the evolution of the galaxy population in terms of stellar masses. As the properties of galaxies can be quite different between the rest-frame optical and UV (e.g., Windhorst et al. 2002; Papovich et al. 2003, 2005; Taylor-Manger et al. 2006; Ravindranath et al. 2006), it is desirable to trace galaxy evolution at wavelengths where most of the stellar mass in galaxies is visible. Making advances in our understanding of galaxy evolution and formation at high redshifts (z > 1) therefore requires us to search for, and investigate, galaxy properties in the near-infrared (NIR).
Studying galaxies in the NIR has many advantages, including minimised K-corrections, which are often substantial in the optical, as well as giving us a more direct probe of galaxy stellar mass up to z ~ 3 (e.g., Cowie et al. 1994). This has been recognised for many years, but most NIR surveys have been either deep pencil-beam surveys (e.g., Gardner 1995; Djorgovski et al. 1995; Moustakas et al. 1997; Bershady, Lowenthal & Koo 1998; Dickinson et al. 2000; Saracco et al. 2001; Franx et al. 2003) or large-area, but shallow, surveys (e.g., Mobasher et al. 1986; Saracco et al. 1999; McCracken et al. 2000; Huang et al. 2001; Martini 2001; Drory et al. 2001; Elston et al. 2006). This is potentially a problem for understanding massive and evolved galaxies at high redshifts, as red objects are highly clustered (e.g., Daddi et al. 2000; Foucaud et al. 2007), as are the most massive galaxies (e.g., Coil et al. 2004a). Previous deep NIR surveys can detect these unique galaxy populations, but they typically do not have a large enough area to probe the range of galaxies selected in the near-infrared. Likewise, large-area but shallow surveys may not be deep enough to detect these unique populations at a high enough signal to noise. In this paper we overcome this problem by presenting a 1.5 deg^2 survey down to 5σ depths of Kvega = 20.2-21.5 and Jvega = 22.5. This brings together the properties of both deep and wide surveys. In this paper we explore NIR galaxy counts, and study the properties of faint NIR galaxies, which are often red in near-infrared/optical colours. Unique galaxy populations have long been known to exist in near-infrared selected surveys. These include the extremely red objects (Elston, Rieke & Rieke 1988) and the distant red galaxies (Saracco et al. 2001; Franx et al. 2003; Conselice et al. 2007a; Foucaud et al. 2007), both of which are difficult to study at optical wavelengths. The existence of these galaxies reveals a large possible differential in the galaxy population between optical and near-infrared surveys. While these objects can be located in deep optical surveys, they are often very faint, with R > 26 (e.g., van Dokkum et al. 2006), making it difficult to understand these objects in any detail without NIR imaging or spectroscopy. In this paper we analyse the properties of galaxies selected in moderately deep K-band imaging. We also investigate the redshift distributions, structures and properties of the near-infrared selected galaxy population down to Kvega ~ 20. One of our main conclusions is that the faint K-band population spans a range of redshifts and properties. We find that galaxies with magnitudes Kvega < 17 are nearly all at z < 1.4. Galaxies with magnitudes Kvega = 17-21 are found at high redshift, up to at least z ~ 4. The colours of these galaxies span a wide range, with redder galaxies in particular seen at higher redshifts. Finally, we investigate the properties of the extremely red objects in our sample, finding that they include most, but not all, of the highest mass galaxies at z ~ 1.5. The morphologies of these EROs, and the redshift distribution and dust properties of Kvega < 20 sources, show that hierarchical galaxy formation is the dominant mode by which massive galaxies form. This paper is organised as follows: in §2 we discuss the data sources used in this paper, including our Palomar imaging, DEEP2 redshifts, and HST ACS imaging. This section also gives basic details of the Palomar survey.
§3 includes our analysis, which contains information on the K and J-band counts, and the redshift and colour distributions of K-selected galaxies. §4 includes an analysis of the extremely red galaxy population and its properties, including stellar masses, dust content, and redshift distributions. §5 includes a detailed discussion of our results in terms of galaxy models, while §6 is a summary of our findings. This paper uses Vega magnitudes unless otherwise specified, and assumes a cosmology with H_0 = 70 km s^-1 Mpc^-1, Ω_m = 0.3 and Ω_Λ = 0.7.

Data Sources

The objects we study in this paper consist of those found in the fields covered by the Palomar Observatory Wide-Field Infrared Survey (POWIR, Table 1). The POWIR survey was designed to obtain deep K-band and J-band data over a significant (~1.5 deg^2) area. Observations were carried out between September 2002 and October 2005 over a total of ~70 nights. This survey covers the GOODS field North (Giavalisco et al. 2004; Bundy et al. 2005), the Extended Groth Strip (Davis et al. 2007), and three other fields that the DEEP2 team has observed with the DEIMOS spectrograph (Davis et al. 2003). We do not, however, analyse the GOODS-North data in this paper, given its much smaller area and deeper depth than the K-band imaging covering the DEEP2 fields. The total area we cover in the K-band in the DEEP2 fields is 5524 arcmin^2 = 1.53 deg^2, with half of this area imaged in the J-band. Our goal depth was K_vega = 21; not all fields are covered this deep, but all have 5σ depths between K = 20.2-21.5. Table 1 lists the DEEP2 fields, and the area we have covered in each. For our purposes we abbreviate the fields covered as: EGS (Extended Groth Strip), Field 2, Field 3, and Field 4.

The K-band data were acquired utilising the WIRC camera on the Palomar 5-meter telescope. WIRC has an effective field of view of 8.1′ × 8.1′, with a pixel scale of 0.25″ pixel^-1. In total, our K-band survey consists of 75 WIRC pointings. During observations of the K data we used 30 second integrations with four exposures per pointing. Longer exposures were utilised for the J-band data, with an exposure time of 120 seconds per pointing. Total exposure times in both K and J were between one and eight hours. The seeing FWHM in the K-band data ranges from 0.8″ to 1.2″, and is on average 1.0″. Photometric calibration was carried out by referencing Persson standard stars during photometric conditions. The final K-band and J-band images were made by combining individual mosaics obtained over several nights. The K-band mosaics comprise co-additions of 4 × 30 second exposures dithered over a non-repeating 7.0″ pattern. The J-band mosaics were analysed in a similar way using single 120 second exposures per pointing. The images were processed using a double-pass reduction pipeline we developed specifically for WIRC.

For galaxy detection and photometry we utilised the SExtractor package (Bertin & Arnouts 1996). Photometric errors, and the K-band detection limit for each image, were estimated by randomly inserting fake objects of known magnitude into each image, and then measuring photometry with the same detection parameters used for real objects. The inserted objects were given Gaussian profiles with a FWHM of 1.3″ to approximate the shape of slightly extended, distant galaxies. We also investigated the completeness and retrievability of magnitudes for exponential and de Vaucouleurs profiles of various sizes and magnitudes.
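To make the injection-recovery procedure concrete, the following is a minimal sketch of a completeness test of this kind. It is not the actual WIRC pipeline: the detection step (`detect_func`, standing in for a SExtractor run) and all names are illustrative, and a Gaussian profile with a 1.3″ FWHM (5.2 WIRC pixels) is assumed as in the text.

```python
import numpy as np
from astropy.convolution import Gaussian2DKernel

def completeness_at_mag(image, mag, zeropoint, detect_func,
                        fwhm_pix=5.2, n_fake=100, seed=0):
    """Fraction of fake Gaussian sources of magnitude `mag` recovered
    when inserted at random positions into `image`.

    `detect_func(img)` stands in for a SExtractor run and should return
    an (N, 2) array of detected (y, x) centroids.
    """
    rng = np.random.default_rng(seed)
    kernel = Gaussian2DKernel(fwhm_pix / 2.355).array   # Gaussian profile
    ky, kx = kernel.shape
    flux = 10.0 ** (-0.4 * (mag - zeropoint))           # total counts
    recovered = 0
    for _ in range(n_fake):
        sim = image.copy()
        y = rng.integers(0, image.shape[0] - ky)
        x = rng.integers(0, image.shape[1] - kx)
        sim[y:y + ky, x:x + kx] += flux * kernel / kernel.sum()
        detections = np.atleast_2d(detect_func(sim))
        if detections.size == 0:
            continue
        centre = np.array([y + ky / 2.0, x + kx / 2.0])
        # recovered if any detection lies within one FWHM of the input
        if np.min(np.hypot(*(detections - centre).T)) < fwhm_pix:
            recovered += 1
    return recovered / n_fake
```

Repeating such a test over a grid of magnitudes and profile shapes yields the completeness fractions quoted below.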
A more detailed discussion of this is included in §2.4 and Conselice et al. (2007b). Other data used in this paper consist of: optical imaging from the CFHT over all of the fields, imaging from the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope, and spectroscopy from the DEIMOS spectrograph on the Keck II telescope (Davis et al. 2003). A summary of these ancillary data sets, which are mostly within the Extended Groth Strip, is presented in Davis et al. (2007).

The optical data from the CFHT 3.6-m include imaging in the B, R and I bands taken with the CFH12K camera, which is a 12,288 × 8,192 pixel CCD mosaic with a pixel scale of 0.21″. The integration times for these observations are 1 hour in B and R and 2 hours in I per pointing, with 5σ depths of ~25 in each band. For details of how these imaging data were acquired and reduced, see Coil et al. (2004b). From this imaging a R_AB = 24.1 magnitude limit was used for determining targets for the DEEP2 spectroscopy.

The Keck spectra were obtained with the DEIMOS spectrograph (Faber et al. 2003) as part of the DEEP2 redshift survey. The EGS spectroscopic sample was selected based on an R-band magnitude limit only (with R_AB < 24.1), with no strong colour cuts applied to the selection. Objects in Fields 2-4 were selected for spectroscopy based on their position in (B-R) vs. (R-I) colour space, to locate galaxies at redshifts z > 0.7. The total DEEP2 survey includes over 30,000 galaxies with a secure redshift, with about a third of these in the EGS field, and in total ~11,000 with a K-band detection (§3.1.1). In all fields the sampling rate for galaxies that meet the selection criteria is 60%. The DEIMOS spectroscopy was obtained using the 1200 line/mm grating, with a resolution R ~ 5000, covering the wavelength range 6500-9100 Å. Redshifts were measured through an automatic method comparing templates to data, and we only utilise those redshifts measured when two or more lines were identified, providing very secure redshift measurements. Roughly 70% of all targeted objects resulted in reliably measured redshifts. Many of the redshift failures are galaxies at higher redshift, z > 1.5 (Steidel et al. 2004), where the [OII] λ3727 doublet leaves the optical window.

The ACS imaging over the EGS field covers a 10.1′ × 70.5′ strip, for a coverage area of 0.2 deg^2. The ACS imaging is discussed in Lotz et al. (2006), and is briefly described here and in Conselice et al. (2007a,b). The imaging consists of 63 tiles imaged in both the F606W (V) and F814W (I) bands. The 5σ depths reached in these images are V = 26.23 (AB) and I = 27.52 (AB) for a point source, and about two magnitudes brighter for extended objects.

Our matching procedures for these catalogs progressed in the manner described in Bundy et al. (2006). The K-band catalog served as our reference catalog. We then matched the optical catalogs and spectroscopic catalogs to this, after correcting for any astrometry issues by referencing all systems to 2MASS stars. All magnitudes quoted in this paper are total magnitudes, while colours are measured through aperture magnitudes.
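The catalogue matching just described can be illustrated with a short positional cross-match; this is a minimal sketch using astropy, assuming the astrometric tie to 2MASS stars has already been applied, and with a 1″ matching radius chosen purely for illustration:

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def match_to_kband(k_ra, k_dec, cat_ra, cat_dec, max_sep=1.0 * u.arcsec):
    """For each K-band reference source, return the index of the nearest
    source in the other catalogue, or -1 if none lies within max_sep."""
    k = SkyCoord(ra=k_ra * u.deg, dec=k_dec * u.deg)
    cat = SkyCoord(ra=cat_ra * u.deg, dec=cat_dec * u.deg)
    idx, sep2d, _ = k.match_to_catalog_sky(cat)
    idx = np.array(idx, copy=True)
    idx[sep2d > max_sep] = -1     # no counterpart within the radius
    return idx
```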
Photometric Redshifts

We calculate photometric redshifts for our K-selected galaxies which do not have DEEP2 spectroscopy, in a number of ways. This sample is hereafter referred to as the 'photz' sample. Table 2 lists the number of spectroscopic and photometric redshifts within each of our K-band magnitude limits. These photometric redshifts are based on the optical+near-infrared imaging, in the BRIJK (or BRIK for half the data) bands, and are fit in two ways, depending on the brightness of a galaxy in the optical. For galaxies that meet the spectroscopic criteria, R_AB < 24.1, we utilise a neural network photometric redshift technique to take advantage of the vast number of secure redshifts with similar photometric data. Most of the R_AB < 24.1 sources not targeted for spectroscopy should be within our redshift range of interest at z < 1.4. The neural network fitting is done through the use of the ANNz (Collister & Lahav 2004) method and code. To train the code, we use the ~5000 redshifts in the EGS, which span our entire redshift range. The agreement between our ANNz photometric redshifts and our spectroscopic redshifts is very good using this technique, with δz/z = 0.07 out to z ~ 1.4. The photometry we use for our photometric redshift measurements is done within a 2″ diameter aperture.

For galaxies which are fainter than R_AB = 24.1 we measure photometric redshifts using two methods, depending on whether the galaxy is detected in all optical bands or not. For systems which are detected at all wavelengths we use the Bayesian approach of Benitez (2000). For an object to have a photometric redshift using this method requires it to be detected at the 3σ level in all optical and near-infrared (BRIJK) bands, which in the R-band reaches ~25.1. We refer to these objects as having 'full' photometric redshifts. As described in Bundy et al. (2006), we optimised our results and corrected for systematics through the comparison with spectroscopic redshifts, resulting in a redshift accuracy of δz/z = 0.17 for R_AB > 24.1 systems. Further details about our photometric redshifts are presented in Conselice et al. (2007b), including a lengthy discussion of biases that are potentially present in the measured values.

Table 2 lists the number of galaxies with the various redshift types. As can be seen, the vast majority of our galaxies have either spectroscopic redshifts, or have measured photometric redshifts using the full optical SED. Only a small fraction (< 1%) of our K-band sources down to K = 21 are not detected in one optical band. For completeness in the analysis of the N(z) distribution of K-magnitudes discussed in §5, we calculate, using a χ² minimisation through hyper-z (Bolzonella, Miralles & Pello 2000), the best-fitting photometric redshifts for these faint systems. These galaxies, however, make up only a small fraction of the total K-band population, and their detailed redshift distribution, while likely not as accurate as our other photometric redshifts, does not significantly influence the results.
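The empirical half of this scheme can be sketched as follows. This is not ANNz itself but an illustrative stand-in using a generic neural-network regressor; the training set (BRIJK magnitudes with matching DEEP2 spectroscopic redshifts) and the δz/z statistic follow the text, while the function names and network size are ours.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_photoz(train_mags, train_zspec):
    """Fit an empirical photo-z estimator on an (N, 5) array of BRIJK
    magnitudes with matching spectroscopic redshifts (a stand-in for
    the ANNz training step)."""
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                     random_state=0))
    model.fit(train_mags, train_zspec)
    return model

def photoz_scatter(model, test_mags, test_zspec):
    """The delta-z/z scatter quoted in the text (0.07 for the bright
    R_AB < 24.1 sample out to z ~ 1.4)."""
    dz = model.predict(test_mags) - test_zspec
    return np.std(dz / test_zspec)
```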
Stellar Masses

From our K-band/optical catalogs we compute stellar masses based on the methods outlined in Bundy, Ellis & Conselice (2005) and Bundy et al. (2006). The basic method consists of fitting a grid of model SEDs constructed from Bruzual & Charlot (2003) stellar population synthesis models, with different star formation histories. We use an exponentially declining model to characterise the star formation history, with various ages, metallicities and dust contents included. These models are parameterised by an age, and an e-folding time characterising the star formation history. We also investigated how these stellar masses would change using the newest stellar population synthesis models with the latest prescriptions for AGB stars from Bruzual & Charlot (2007, in prep). We found stellar masses that were only slightly lower, by 0.07 dex, compared to the earlier models (see Conselice et al. 2007b for a detailed discussion of this and other stellar mass issues). Typical random errors for our stellar masses are 0.2 dex, from the width of the probability distributions. There are also uncertainties from the choice of the IMF. Our stellar masses utilise the Chabrier (2003) IMF, which can be converted to Salpeter IMF stellar masses by adding 0.25 dex. There are additional random uncertainties due to photometric errors. The resulting stellar masses thus have a total random error of 0.2-0.3 dex, roughly a factor of two. Using our method we find that stellar masses are roughly 10% of galaxy total masses at z ~ 1, demonstrating their reliability (Conselice et al. 2005b). Details on the stellar masses we utilise, and how they are computed, are presented in Bundy et al. (2006) and Conselice et al. (2007b).

K-band Completeness Limit

Before we determine the properties of our K-selected galaxies, it is first important to characterise how our detection methods and reduction procedures influence the production of the final K-band catalog. While the major question we address is the nature of the faint and red galaxy population, it is important to understand what fraction of the faint population we are missing due to incompleteness. To understand this we investigate the K-band completeness of our sample in a number of ways. The first is through simulated detections of objects in our near-infrared imaging. As described in Bundy et al. (2006), Conselice et al. (2007b), and Trujillo et al. (2007), we placed artificial objects into our K-band images to determine how well we can retrieve and measure photometry for galaxies at a given magnitude. Our first simulations were performed from K = 18 to K = 22 using Gaussian profiles. We find that the completeness within our fields remains high at nearly all magnitudes, with a completeness of nearly 100% up to K = 19.5 for all 75 fields combined. The average completeness of these fields at K = 20 is 94%, which drops to 70% at K = 20.5 and 35% at K = 21.0. If we take the 23 deepest fields, we find a completeness at K = 21.0 of 70%.

However, galaxies are unlikely to have Gaussian light profiles, and as such we investigate in Conselice et al. (2007b) how the completeness would change if our simulations were carried out with exponential and r^1/4 light profiles. We find similar results as when using the Gaussian profiles up to K = 20, but we are less likely to detect faint galaxies with r^1/4 profiles, and to retrieve their total light output. As discussed in §3.1.1, these incompleteness corrections are critical for obtaining accurate galaxy counts, but the intrinsic profiles of the galaxies of interest must be known to carry this out properly. As such, we utilise the Gaussian corrections as a fiducial estimate. In Figure 1 we plot our K-band counts with these corrections applied. We also plot the J-band counts up to their completeness limit, and do not apply any corrections for incompleteness. The 100% completeness limits for the optical data are B = 25.25, R = 24.75, and I = 24.25 (Coil et al. 2004b). These limits are discussed in §4.1, where we consider our ability to retrieve a well defined population of extremely red objects (EROs).
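As a concrete illustration of how such corrections enter the counts, here is a minimal sketch that divides the raw counts in each magnitude bin by the simulated completeness fractions quoted above (e.g. 0.94 at K = 20 and 0.70 at K = 20.5 for all fields combined); the function name and interface are ours.

```python
import numpy as np

def corrected_counts(k_mags, area_deg2, bin_edges, completeness):
    """Completeness-corrected differential counts, per deg^2 per mag.

    `completeness` holds one recovered fraction per magnitude bin,
    taken from the fake-source simulations described in the text.
    """
    raw, _ = np.histogram(k_mags, bins=bin_edges)
    dmag = np.diff(bin_edges)
    return raw / np.asarray(completeness) / (area_deg2 * dmag)
```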
ANALYSIS

3.1 Nature of the Faint K-band Population

3.1.1 K-band, J-band Counts and Incompleteness

Within our total K-band survey area of 1.53 deg^2 we detect 61,489 sources at all magnitudes, after removing artifacts. Most of these objects (92%) are at K < 21, while 68% are at K < 20 and 37% are at K < 19. In total there are 38,613 objects fainter than K = 19 in our sample. Out of our total K-band population, 10,693 objects (mostly galaxies) have secure spectroscopic DEIMOS redshifts from the DEEP2 redshift survey (Davis et al. 2003). We supplement these with 37,644 photometric redshifts within the range 0 < z < 2 (Table 2). We remove stars from our catalogs, identified through their structures and colours, as described in Coil et al. (2005) and Conselice et al. (2007b).

We plot the differential number counts (Table 3) for galaxies in both the K and J-band for our K-selected sample in Figure 1, to test how our counts compare with those found in previous deep and wide near-infrared surveys. We carry this out to determine the reliability of our star and galaxy separation methods, as well as to determine how our incompleteness corrections in the K-band compare to others. As Figure 1 shows, we find little difference in our counts compared to previous surveys, and we are ~100% complete up to magnitude K ~ 20 in all fields. As others have noted, we find a change in the slope of the galaxy counts at K = 17.5. We calculate that the slope at K < 17.5 is dN/dK = 0.54 ± 0.07, while at K > 17.5 it is dN/dK = 0.26 ± 0.01. Our counts, after correction, are slightly lower than those in the UKIDSS UDS survey (Simpson et al. 2006), and from the studies by Cristobal-Hornillos et al. (2003) and Saracco et al. (1999). However, our counts are higher than those found in Iovino et al. (2005) and Kong et al. (2006). At brighter magnitudes these surveys all agree, with the exception of Maihara et al. (2001), whose counts fall below all of the other surveys (not plotted). The differences at K > 20 are likely the result of the different incompleteness correction methods used. As detailed in Bershady et al. (1998), using various intrinsic galaxy profiles when computing completeness can lead to over- or underestimation of the correction factor. The only accurate way to determine the incompleteness is to know the detailed distribution of galaxy surface brightness profiles at the magnitude limits probed (Bershady et al. 1998). As this is difficult, and sometimes impossible, to know, all corrected counts must be seen as best estimates.

The J-band counts (Figure 1b) show a similar pattern to the K-band counts. These counts are not corrected for incompleteness, and we are incomplete for very blue galaxies with (J-K) < 0 at the faintest J-band limit, due to our using the K-band detections as the basis for measuring J-band magnitudes. As can be seen, there is a larger variation in the J-band number counts when comparing the different surveys at a given magnitude than for the K-band counts. Furthermore, there is no obvious slope change in the J-band counts, as seen in the K-band. We are complete overall in the J-band to J_vega = 22.0 over the entire survey. Our J-band depth, however, varies between the three fields in which we have J-band coverage, and in fact varies between individual WIRC pointings.

As we later discuss the properties of EROs in this paper, defined with an (R-K) colour cut, it is important to understand the corresponding depths of the R-band imaging.
The depth and number counts for the R-band imaging are discussed in detail in Coil et al. (2004b). Based on aperture magnitudes, the 5σ depth of the CFHT R-band imaging is roughly R_AB = 25. Our R-band photometry uses the same imaging as Coil et al. (2004b); however, we measured our own magnitudes based on the K-band selected objects in our survey. Our R-band depth is nevertheless similar to Coil et al. (2004b), and we calculate 50% completeness at R = 25.1.

Redshift Distributions of K-Selected Galaxies

In this section we investigate the nature of galaxies selected in the K-band. The basic question we address is what are the properties of galaxies at various K-limits. This issue has been discussed earlier by Cimatti et al. (2002a), Somerville et al. (2004) and others. However, we are able to utilise the DEEP2 spectroscopic survey of these fields to determine the contribution of lower redshift galaxies to the K-band counts, and thus put limits on the contribution of high redshift (z > 1.5) galaxies to K-band counts at K < 20.

Figure 2. The spectroscopic redshift completeness for both the entire K-band selected catalog, and for the (R-K) > 5.3 ERO selected sample. Note that none of our EROs at K > 19.5 have a measured spectroscopic redshift, typical for spectroscopic surveys which are optically selected.

The first, and most basic, method for understanding galaxies found in a K-band selection is to determine what fraction of the K-selected galaxies have a successfully measured spectroscopic redshift. The DEEP2 spectroscopic redshift success rate for our K-selected sample varies with K-band magnitude, from 10% to 30%, up to K = 21. The highest selection fraction is 30% at K = 17.5. At the faintest limit, K = 21, the redshift selection is 10%, and the fraction is 15% at K = 15.5. Figure 2 shows our spectroscopic redshift completeness as a function of K-band magnitude for both the entire K-band selected sample and the EROs (§4). The shape of this distribution is partially due to the fact that the DEEP2 selection deweights galaxies at z < 0.7. The EGS and the other fields also have slightly different methods for choosing redshift targets (Davis et al. 2003), creating an inhomogeneous selection over the entire survey.

When we include photometric redshifts to supplement our spectroscopic redshifts, we obtain the total redshift distributions shown in Figure 3. Note that our photometric redshifts are only included in Figure 3 at z > 0 if the object was significantly detected in all bands of the BRIJK photometry. In each K-band limit shown in Figure 3, and discussed below, there is a fraction of sources which do not meet this optical criterion, and these objects are counted at the z = -1 position on the redshift histograms. In Table 2 we list the number of K-band detections with and without spectroscopic redshifts, in each of the redshift ranges.

It appears that nearly all bright K-band sources, with 15 < K < 17, are located at z < 1.4 (Figure 3). At fainter magnitudes, as shown by the plotted 17 < K < 19 and 19 < K < 21 ranges (Figure 3), we find a different distribution, skewed toward high redshifts. While there are low redshift galaxies at these fainter K-limits, we also find a significant contribution from sources at higher redshifts, including those at z > 2. The K-bright sources at these redshifts are potentially the highest mass galaxies in the early universe.
Galaxies at the faintest magnitudes, at 19 < K < 21, show a similar redshift distribution to the galaxies within the 17 < K < 19 magnitude range, but there is a larger number of higher redshift galaxies. This demonstrates that faint K-band sources are just as likely to be at low redshift as at high redshift. This is shown in another way in Figure 4, where we plot the distribution of K-magnitude vs. redshift (z). As can be seen, at K > 19 the entire redshift range is sampled, while a K < 17 selection only finds galaxies at z < 1.4.

Figure 3. The solid black line shows the redshift distribution for all galaxies with 19 < K < 21, the blue dashed line shows systems with 17 < K < 19, and the red hashed histogram is for galaxies with 15 < K < 17. The levels at z = -1 show the number of galaxies whose photo-zs were not fit due to lacking significant optical detections.

Figure 4. Contours of the redshift distribution for galaxies at various K-magnitudes. The red dashed contours are for EROs defined as (R-K) > 5.3 (mostly at z > 1), and the blue dotted contours are the distant red galaxies (DRGs) defined by (J-K) > 2.3, which generally span 0.4 < z < 2.

Colours of K-Selected Galaxies

After examining the redshift distribution of our sample, the next step is determining the physical features of these galaxies. The easiest, and most traditional, way to do this is through the examination of colour-magnitude diagrams. Generally, galaxy colour is a mixture of at least three effects - redshift, stellar populations and dust. Galaxies generally become redder with redshift due to band-shifting effects, and become redder with age and increased dust content.

We can get an idea of the characteristics of our K-selected sample by examining the colour-magnitude diagram for the entire K < 21 sample (Figure 5). Figure 5 plots the (R-K) and (J-K) colours of our sample versus K-band magnitude. As can be seen, at fainter limits there are more red galaxies in each band. Since fainter/redder galaxies are more likely than brighter galaxies to be at higher redshifts, it is likely that these redder galaxies seen in Figure 5 are distant galaxies. The relation between (R-K) and redshift (Figure 6) shows this to be the case. As can be seen, at higher redshifts galaxies are redder in (R-K), although even at these redshifts there are K-selected galaxies which are very blue. At the highest redshifts, where optical magnitudes are at R > 24, we find that most galaxies hover around (R-K) = 5.3. As can be seen, a significant fraction of the K < 19.7 galaxies, which are massive systems at z > 1, would be identified as EROs.

The detailed distribution of magnitudes, colours and stellar masses is shown in Figure 7 and Figure 8, divided into different redshift bins. Plotted with different symbols are the photometric redshift and the spectroscopic redshift samples. As can be seen, there are strong relations between stellar mass and K-band magnitude over the entire redshift range (Figure 7), with fainter K-selected galaxies having lower stellar masses. Also note that the galaxies with spectroscopic redshifts are generally brighter and bluer than the photometric redshift sample at a given stellar mass. This is particularly true in the highest redshift bins, and demonstrates that the spectroscopy is successfully measuring the brighter galaxies in the distant universe, while being less efficient in measuring redshifts for galaxies of the same mass but at a fainter K-magnitude.
Figure 8 furthermore shows how, as we go to higher redshifts, we obtain an overall redder distribution of colours. Within our lowest redshift bin, 0.5 < z < 0.75, there are few galaxies which would be classified as EROs with (R-K) > 5.3. It is also in this lowest redshift range where the overlap between the spectroscopic and photometric samples is highest. When we go to higher redshifts, such as 0.75 < z < 1.0, we find that the slope of the locus in the relation between stellar mass and (R-K) colour steepens, such that galaxies at the same stellar mass become redder. This effect is dominated by the change in rest-frame wavelength sampled by the R and K filters. The fact that the higher mass galaxies become redder, while the lower mass galaxies tend to remain blue, is a sign that the spectral energy distributions of the lower mass galaxies are bluer than those of higher mass galaxies. This pattern evolves, however, and at z > 1 galaxies in every mass bin become redder with time. The upper envelope in the colour-stellar mass relation (Figure 8) is due to incompleteness, and is not a real limit.

In Figure 7 and Figure 8 we plot hydrodynamic simulation results from Nagamine et al. (2005) using different dust extinctions. We over-plot the E(B-V) = 0, 0.4 and 1 models on these figures as contours.

Figure 5. Colour-magnitude diagrams for our sample. The left panel shows (R-K) colour vs. K-band magnitude, while the right panel shows the (J-K) vs. K diagram. Objects with spectroscopic redshifts are coloured blue in both diagrams, while objects considered 'red' through either the extremely red object (ERO) or distant red galaxy (DRG) selection are labelled red in each diagram. The solid line in the (R-K) vs. K panel shows the spectroscopic limit of R = 24.1, while the dashed line shows the 5σ limit for the R-band photometry of 25.1. Furthermore, we only plot points that are brighter than R = 26.5 and J = 23.5 in the two panels, respectively. The red triangles at the top of each figure are galaxies which are undetected in R or J, but have a measured K-band magnitude.

Figure 6. The redshift distribution for galaxies within our sample at K < 19.7. The left panel shows systems which are at R < 24. The right panel shows the distribution of (R-K) colours as a function of redshift for galaxies which are fainter than R = 24. As can be seen, galaxies generally get redder at higher redshifts, but there still exists a scatter in colour at any redshift.

First, the fact that there is not a larger scatter in the log M* vs. K relation (Figure 7) is an indication that these galaxies are not dominated by dust extinction. Very few galaxies overlap with the E(B-V) = 1 model, and most galaxies are better matched by the E(B-V) = 0 model. This can furthermore be seen in Figure 8, where only the lowest extinction model, with E(B-V) = 0, generally matches the location of the galaxies in the (R-K) vs. M* diagram which are not EROs. The E(B-V) = 0.4 model does a good job of tracing the EROs, but these systems could also be composed of old stellar populations. We revisit the issue of dust extinction in these galaxies in §5.1.

Finally, there are clearly two unique, and overlapping, samples identified in this near-infrared selected sample. The first are those galaxies in Figure 7 and 8 which are very massive, with masses M* > 10^11 M⊙. We discuss these objects, and their evolution, in Conselice et al. (2007b).
We investigate in the next section the properties of the extremely red objects (EROs), those with observed colours (R-K) > 5.3.

Figure 7. The stellar mass vs. K-band magnitude relation for our sample of galaxies out to z ~ 2. These figures are divided up into different redshifts, increasing from left to right and top to bottom. Plotted on these figures are both systems which have spectroscopically measured redshifts (open black boxes) and those which have photometric redshifts (red dots). As can be seen, there is generally a strong relationship between stellar mass and apparent K-band magnitude, with a small scatter. Note that generally the galaxies with spectroscopically measured redshifts are those which are brighter for a given stellar mass. These same systems are furthermore on the blue edge of the stellar mass-colour relationship (Figure 8). This shows that the DEEP2 spectroscopy is selecting primarily the bluer and brighter galaxies at a given stellar mass. We also plot in the 1.0 < z < 1.25 bin models for how these quantities relate, from the SPH simulations of Nagamine et al. (2005). The blue, cyan and green contours (going from low mass to high mass at a given K) show the location of model galaxies with E(B-V) = 0, 0.4 and 1, respectively.

Sample Selection

The ERO sample we construct is defined through a traditional colour cut used to locate the reddest galaxies selected in near-infrared/optical surveys, with (R-K) > 5.3. Galaxies selected in this way are often considered the progenitors of today's most massive galaxies, as seen at roughly z ~ 1-2. Objects with these extremely red colours have remained a population of interest since their initial discovery (Elston, Rieke & Rieke 1988). Initially thought to be ultra-high redshift galaxies at z > 6, it is now largely believed that EROs are a mix of galaxy types at z > 1 (e.g., Daddi et al. 2000), an idea we can test further with our data. Furthermore, because EROs are an easily defined and observationally based population, there has been considerable observational and theoretical work done towards understanding these objects. Naturally, it is more desirable to work with a mass-selected sample (see Conselice et al. 2007b for this approach), but such samples rely on accurate redshifts and stellar mass measurements, while the EROs are simply observationally defined through a colour. The idea that EROs are red due to either an evolved galaxy population or a dusty starburst is perhaps no longer the dominant way to think about these systems (e.g., Moustakas et al. 2004). However, there are properties of EROs that are still not understood, nor even constrained. For example, it is not clear why some EROs can have apparently normal galaxy morphologies, while others appear to be merging or peculiar galaxies (e.g., Yan & Thompson 2003; Moustakas et al. 2004). The questions we address include: what are the stellar mass, morphological and redshift distributions of these objects?

Figure 8. The colour-stellar mass relation for our sample out to z ~ 2. Similar to Figure 7, these galaxies have been divided up into different redshift bins. The horizontal and vertical lines show the limits for selecting the unique galaxy populations probed in the K-band. Galaxies to the right of the vertical line are the massive galaxies studied in Conselice et al. (2007b), while galaxies above the horizontal line, defined by (R-K) > 5.3, are the EROs studied in this paper. Note that the final redshift panel, with 1.5 < z < 2.0, consists solely of photometric redshifts. Also, similar to Figure 7, we plot in the 1.0 < z < 1.25 panel model results from Nagamine et al. (2005), although we utilise the model results within the same limits as our data, K < 20. As in Figure 7, the blue, cyan and green contours show the location of model galaxies with E(B-V) = 0, 0.4 and 1 (going from bluer to redder colours at a given stellar mass), respectively.
In our analysis we study the properties of these traditionally colour-selected EROs to determine these basic properties. Our sample of EROs, however, is not defined simply by a (R-K) > 5.3 cut on our entire R-band and K-band catalogs. Due to the depths of the two filters, we have to limit how deep we search for EROs in order to avoid false positives. As discussed in §2.4, we are 100% complete in our entire survey down to K = 20. The R-band depth is, however, not well matched to the K-depth for finding EROs, having a 5σ detection limit of R = 25.1 (where the completeness is 50%). We therefore only select EROs for which we are certain, at the > 5σ level, that the colour is (R-K) > 5.3; a schematic implementation of this cut is sketched below. This limits our analysis of EROs to K = 19.7.

We divide our ERO sample into three types, depending on the origin of the redshift for each. The first type are those EROs with a high quality DEEP2 redshift, of which there are a total of 62 within our survey. The second type are those with R < 24.1 which have an ANNz photometric redshift (§2.2). There are 343 of these EROs. The third type are the EROs with magnitudes between 24.1 < R < 25.1, which all have 'full' photometric redshifts (§2.2). There are 1122 of these objects. The entire (R-K) > 5.3 sample therefore consists of 1527 EROs with some type of redshift. We examine the properties of (I-K) > 4 selected EROs in §4.6.

ERO Number Counts

As with the number counts of the K-selected objects, we are interested in comparing the number counts of our ERO sample with measurements from previous work. In Figure 9 we plot the number counts for our ERO sample as a function of K-magnitude. As can be seen, we are slightly under-dense at nearly all magnitudes compared to the UKIDSS UDS survey, but find similar results to Daddi et al. (2000).

Figure 9. The number counts for EROs in our fields, plotted along with the counts from previous surveys by Simpson et al. (2006) and Daddi et al. (2000). We also plot the number counts for the extremely red galaxies (ERGs) - the ERO counts with stars removed.

The differences between the counts in our survey and the UDS are likely due to several issues, including the slightly different filter sets used, and the correction for galactic extinction. The UDS survey uses Subaru Deep Field imaging utilising the Cousins RC filter, while our R-band imaging is from the CFHT and utilises a Mould R filter, which has different characteristics. Another issue is that these previous surveys have not corrected for Galactic extinction, while we have. This can result in slight differences in the number counts. Another feature seen in previous surveys, which we also see, is a turn-over in the slope of the counts at about K = 18.5, towards a shallower slope at fainter magnitudes.
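The schematic ERO cut referred to above can be written as follows. This is a sketch of one reading of the > 5σ colour requirement - that the K limit is set so that the 5σ R-band depth guarantees the colour even for R non-detections - and all names are illustrative rather than the paper's actual code.

```python
import numpy as np

def select_eros(r_mag, k_mag, r_detected,
                colour_cut=5.3, r_depth_5sigma=25.1):
    """Boolean mask of secure (R-K) > colour_cut EROs.

    Restricting to K < r_depth_5sigma - colour_cut (25.1 - 5.3 = 19.8,
    i.e. ~19.7) means any galaxy this bright in K is either detected in
    R at >= 5 sigma, or, if undetected, securely redder than the cut.
    """
    k_limit = r_depth_5sigma - colour_cut
    colour = np.where(r_detected, r_mag - k_mag,    # measured colour, or
                      r_depth_5sigma - k_mag)       # a lower limit on it
    return (k_mag < k_limit) & (colour > colour_cut)
```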
Redshift Distributions and Number Densities of EROs

One of the principal quantities needed to understand the properties of EROs is their redshift distribution and number density. With the three different types of redshifts (§2.2) measured for our EROs, we can construct the redshift distribution and number density evolution for EROs down to a magnitude limit of K = 19.7. Figure 10a shows the redshift distribution for our (R-K) > 5.3 selected EROs. We have plotted this distribution with three different histograms: the spectroscopic redshift sample (red diagonal hatching), the photometric redshift sample for galaxies at R_AB < 24.1 (black), and the photometric redshift sample at R_AB > 24.1 (blue horizontal dashed hatching). It is clear that galaxies selected with the ERO criteria are at higher redshifts (z > 1); very few galaxies meet these criteria at z < 0.8, and all of those that do have photometrically derived redshifts. A similar pattern can be seen in Figure 6, which plots the colour-redshift distribution for our sample selected by R_AB < 24 and R_AB > 24. The average redshift for our K < 19.7 ERO sample is <z> = 1.28 ± 0.23. This is a higher redshift, with a smaller dispersion, than the average redshift for all galaxies with K < 19.7, <z> = 0.84 ± 0.31.

The above arguments show that the traditional (R-K) > 5.3 ERO cut reliably locates galaxies at z > 1, on average. There is also a fairly long tail of sources up to z ~ 2. For the most part it appears that an ERO selection at K < 19.7 finds high redshift galaxies at z ~ 1.3. However, we are missing a few galaxies from our ERO sample at K < 19.7 which do not have a redshift due to non-detections in the optical bands. These galaxies could be at very high redshift, and will be discussed in a future paper.

Figure 11 plots the number density evolution for our EROs at K < 19.7 as a function of redshift, with tabulated values given in Table 4. As can be seen, and in agreement with Figure 10, there is a drop in the number density of EROs at z < 1. The number density peak for EROs is also clearly found between z = 1-1.5. We can compare this figure to previous measurements and models (e.g., Nagamine et al. 2005). Previously, Moustakas et al. (2004) and Cimatti et al. (2002b) measured ERO number densities within various K-limits, but with (R-K) > 5, as opposed to our (R-K) > 5.3. Nevertheless, when we compare our results to these papers, we find very similar results. Down to K_vega < 20.12, Moustakas et al. find a number density of EROs at z = 1 of log(φ/Mpc^-3) = -3.19, whereas we find -3.39, in rough agreement. Similarly, Cimatti et al. (2002b) find, down to K_vega < 19.2, a density of log(φ/Mpc^-3) = -3.67 at z = 1, while we find -3.60, again in good agreement. When we compare our results to the simulation results of Nagamine et al. (2005) we again find rough agreement, although the Lagrangian SPH results give a slightly higher number density. At z = 1 these SPH simulations find log(φ/Mpc^-3) = -2.96, while the total variation diminishing (TVD) simulations give a slightly higher result. This density is a factor of 2.6 higher than the number density we observe. These densities are, however, the result of assuming a dust extinction of E(B-V) = 0.4, which might be too high in light of the results shown in Figures 7 and 8. Lower values of E(B-V) will make galaxies less red, and will produce a lower number density of EROs.
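Number densities of this kind can be reproduced schematically from the counts in each redshift slice; the following is a minimal sketch using the cosmology adopted in this paper (H0 = 70 km s^-1 Mpc^-1, Ωm = 0.3), with illustrative names.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # cosmology adopted in this paper

def log_comoving_density(n_obj, area_deg2, z_lo, z_hi):
    """log10 of the co-moving number density (Mpc^-3) of n_obj galaxies
    found over area_deg2 between redshifts z_lo and z_hi."""
    sky_fraction = area_deg2 / 41253.0   # the full sky is ~41,253 deg^2
    volume = (cosmo.comoving_volume(z_hi)
              - cosmo.comoving_volume(z_lo)).to_value(u.Mpc ** 3)
    return np.log10(n_obj / (volume * sky_fraction))
```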
Stellar Masses of EROs

As shown in Figure 10b, our EROs generally have high stellar masses, and thus a fraction of the most massive galaxies at z > 0.8 must be EROs. This is an important result, as it has been surmised from other criteria, such as clustering, that the EROs are contained within massive halos (e.g., Daddi et al. 2000; Roche, Dunlop & Almaini 2003). We have constructed a complete sample of M* > 10^11 M⊙ galaxies up to z ~ 1.4 in our fields (Conselice et al. 2007b), from which we can directly test the idea that EROs are massive galaxies. Although Figure 10b demonstrates that our EROs at K < 19.7 are massive galaxies, this is likely due to the fact that our EROs are relatively bright, and we cannot constrain the masses or redshifts of fainter EROs, which must be either lower mass galaxies at the same redshifts as these, or higher redshift massive systems. There is also little difference between the distributions of stellar masses for our ERO sample at different redshifts. The peak mass is around 2 × 10^11 M⊙ at all redshift selections. We find that most of our sample of K < 19.7 EROs have masses M* > 10^11 M⊙, which in the nearby universe are nearly all early-types (Conselice 2006a). This is a strong indication that K < 19.7 EROs, regardless of their morphology or stellar population characteristics, are nearly certain to evolve into passive massive early-types in the nearby universe.

Are Massive Galaxies at z > 1 EROs?

While EROs are massive galaxies, the opposite is not necessarily true, as there are massive galaxies with M* > 10^11 M⊙ that are not EROs, some with very blue colours (Conselice et al. 2007b). Figure 12 plots the fraction of galaxies within the mass ranges M* > 10^11.5 M⊙ and 10^11 M⊙ < M* < 10^11.5 M⊙ which are EROs between z ~ 0-2. Figure 11 plots the number density evolution for EROs selected in two ways, and compares this to galaxies selected with stellar masses M* > 10^11 M⊙.

There are several interesting features in these figures. The first is that while massive galaxies exist throughout this redshift range, EROs only populate the massive galaxies at z > 1. Another interesting feature is that an ERO selection at K < 19.7 will include a large fraction of the most massive galaxies, with M* > 10^11.5 M⊙, at 1 < z < 2. On average, between 1.0 < z < 1.4, our ERO selection will find 36% of all M* > 10^11.5 M⊙ galaxies. This increases to 75% within the redshift range 1.2 < z < 1.8. However, the ERO colour limit does not do as good a job of selecting massive galaxies with 10^11 M⊙ < M* < 10^11.5 M⊙. In this mass range at 1.0 < z < 1.4 the ERO selection finds only 35% of these systems. Similar to the M* > 10^11.5 M⊙ mass range, we find a higher fraction of 10^11 M⊙ < M* < 10^11.5 M⊙ galaxies which are EROs, at 44%, within the higher redshift range. With a K < 19.7 limit we are nonetheless obtaining a similar number density of EROs per co-moving volume as there are massive galaxies (Figure 11). The number density of EROs, however, declines rapidly at z < 1.

Figure 11. The co-moving number densities of EROs selected by the (R-K)_vega > 5.3 and the (I-K)_vega > 4 criteria as a function of redshift. Also plotted for reference is the number density evolution for galaxies with stellar masses M* > 10^11 M⊙ (see Conselice et al. 2007b).

While it appears that the traditional (R-K) > 5.3 limit will find the most massive galaxies at z > 1, this colour cut does not give a purely ultra-high mass sample, nor does it include all of the massive galaxies at these redshifts. At 1.2 < z < 1.8 about 25% of galaxies with M* > 10^11.5 M⊙ and 66% of systems with 10^11 M⊙ < M* < 10^11.5 M⊙ are not EROs. This is consistent with our finding in Conselice et al.
(2007b) that ~40% of massive galaxies at z > 1 are undergoing star formation and have blue (U-B)_0 colours.

Structural Features

We utilise visual estimates of Hubble types and the non-parametric CAS system to characterise the morphologies and structures of an ERO sample selected with (R-K) > 5. While there are slightly more systems at (R-K) > 5.3 than at 5 < (R-K) < 5.3, we lower our limit to acquire more systems for analysis and to better compare with previous work. A similar analysis is the focus of Conselice et al. (2007a,b), in which we examined the morphological properties of the most massive galaxies at z > 1.5, as well as the distant red galaxies (DRGs), with (J-K) > 2.3, which are also proposed to be the progenitors of today's massive galaxies.

The CAS (concentration, asymmetry, clumpiness) parameters are a non-parametric method for measuring the structures of galaxies on CCD images (e.g., Conselice 2007; Conselice et al. 2000a,b; Bershady et al. 2000; Conselice et al. 2002; Conselice 2003). The basic idea is that nearby, low redshift galaxies have light distributions that reveal their past and present formation modes (Conselice 2003).

Figure 10. The redshift and stellar mass histograms for our sample of EROs. Shown are three histograms created after dividing the sample in three different ways, depending on the origin of the redshift. The dotted diagonal hatched red histogram shows the redshift and stellar mass distributions for the spectroscopically confirmed EROs, while the non-hatched black histogram shows the distributions for EROs with (R-K) > 5.3 and R_AB < 24.1, whose redshifts are photometric. The horizontal blue dashed hatched histogram shows the distributions for EROs with R_AB > 24.1 with measured photometric redshifts. As can be seen, at a K < 19.7 limit the ERO selection generally locates massive galaxies at z > 1.

Furthermore, well-known galaxy types in the nearby universe fall in well defined regions of the CAS parameter space (Conselice 2003). We apply the CAS system to our EROs to determine their structural features. There are two caveats to using the ACS imaging on these galaxies. The first is that there are redshift effects which will change the measured parameters, such that the asymmetry and clumpiness indices will decrease (Conselice et al. 2000a; Conselice 2003), and the concentration index will be less reliable (Conselice 2003). There is also the issue that for systems at z > 1.3 we are viewing these galaxies in their rest-frame ultraviolet light, which means that there are complications when comparing their measured structures with the rest-frame optical calibration indices for nearby galaxies. Our main purpose in using the CAS system is to identify relaxed massive ellipticals, as well as any galaxies that are still involved in a recent major merger and are presumably dusty. The following structural and morphological analysis is based on the ACS imaging of the EGS field described in §2.1. The imaging we use covers 0.2 deg^2 in the F814W (I) band, giving us coverage for ~15% of our ERO sample.

Eye-Ball/Classical Morphologies

We study the structures and morphologies of our sample using two different methods. The first is through simple visual estimates of morphology based on the appearance of our ERO sample in CCD imaging. The outline of this process is given in Conselice et al. (2005a) and is also described in the companion paper (Conselice et al. 2007b). Our total sample of objects gives 436 unique EROs for which there is ACS imaging.
Each of these galaxies was placed into one of the following categories: compact, elliptical, lenticular (S0), early-type disk, late-type disk, edge-on disk, merger/peculiar, and unknown/too-faint. These classifications are very simple, and are based only on appearance. No other information, such as colour or redshift, was used to determine these types. An outline of these types is provided below, with the number in each class listed at the end of each description.

Figure 12. Diagram showing the fraction of galaxies with extreme masses M* > 10^11.5 M⊙ (solid circles) and 10^11 M⊙ < M* < 10^11.5 M⊙ (open boxes) which are also (R-K) > 5.3 selected EROs. As can be seen, the ERO selection successfully finds galaxies at z > 1, yet does not locate all of these systems. A sample of EROs at K < 19.7 will contain nearly all of the log M* > 11.5 systems at z ~ 1.5, but is less successful at finding the lower mass, log M* > 11 systems.

3. Compacts: for a galaxy to be classified as compact, we required that it contain no obvious features, such as an extended light distribution or envelope. (24 systems)
4. Early-type disks: if a galaxy contained a central concentration with some evidence for lower surface brightness outer light, it was classified as an early-type disk. (3 systems)
5. Late-type disks: late-type disks are galaxies that appear to have more outer low surface brightness light than inner concentrated light. (1 system)
6. Edge-on disks: disk systems seen edge-on, whose face-on morphology cannot be determined but is presumably S0 or disk. (17 systems)
8. Peculiars/irregulars: peculiars are systems that appear to be disturbed or peculiar looking, including elongated/tailed sources. These galaxies are possibly in some phase of a merger (Conselice et al. 2003a,b) and are common at high redshifts. (148 systems)
9. Unknown/too-faint: if a galaxy was too faint for any reliable classification it was placed in this category. Often these galaxies appear as smudges without any structure. These could be disks or ellipticals, but their extreme faintness precludes a reliable classification. (20 systems)

Morphological Distributions

The morphological distribution of the EROs can help us address the question of the origin of these extremely red galaxies. In the past, this technique has been used to determine the fraction of EROs which are early-type, disk or peculiar. Previous studies on this topic include Yan & Thompson (2003), who study 115 EROs, and Moustakas et al. (2004), who studied 275 EROs in the GOODS fields. Our total population of EROs for which we have morphologies is 436. These earlier studies have found a mix of types, with generally half of the EROs early-types, and the other half appearing as star forming systems in the form of disks or peculiars/irregulars.

In summary, we find that 57±3% of our EROs are early-type systems. This includes 58 systems, or 13% of the total, which are distorted ellipticals. In classifications carried out in previous work some of these systems would be classified as peculiars. The bulk of the rest of the population consists of bona fide peculiars, which make up 34±3% of the ERO population. Only four EROs were found to be face-on disk galaxies, while 17 systems (4%) of the ERO sample are made up of edge-on disk galaxies. Presumably these galaxies are red for different reasons - either evolved galaxy populations, dusty star formation, or dust absorption produced through orientation in the case of the edge-on disks.
Previous work has been somewhat inconsistent on the morphological break-down between peculiars and early-type galaxies (e.g., Yan & Thompson 2003; Moustakas et al. 2004). From our study, it is clear that much of this difference can be accounted for by the distorted ellipticals. These systems appear in their large-scale morphology to be early-type, but have unusual features, such as offset nuclei, that make them appear peculiar. The differences between previous findings can largely be accounted for by whether these distorted ellipticals were included in the early-type or the peculiar class.

We find that the relative number of peculiar and early-type EROs changes with redshift (Figure 13), such that at the lowest redshifts (z ~ 0.7) the EROs are dominated by the E/S0/compacts, with a type fraction of ~65%, while at z > 1.4 the peculiar systems are more prominent. The fraction of EROs which are peculiars evolves from ~45% at z ~ 1.7 to ~20% at z = 0.9. This mix between early-types and peculiars evolves with redshift, although the exact reason for this evolution is not immediately clear. It is possible that some of the peculiars at z > 1.2 only appear so because we are sampling their morphologies below the Balmer break, which would produce more irregular/peculiar-looking galaxies at these redshifts (Windhorst et al. 2002; Taylor-Mager et al. 2006; Conselice et al. 2007b). However, we are nearly always probing the rest-frame optical, where the effects of the morphological k-correction are minimised, both qualitatively and quantitatively (e.g., Conselice et al. 2005a; Conselice et al. 2007c, in prep). We also find a higher fraction of peculiar systems within the ERO sample than is found for massive galaxies with M* > 10^11 M⊙ at similar redshifts (Conselice et al. 2007b). Interestingly, we find that 10-15% of EROs at z ~ 0.9 are spirals/disks. For the most part, however, it appears that most of the EROs are ellipticals, with peculiars making up roughly a third of the systems, their relative contribution coming from galaxies at the highest redshifts. We discuss the morphological break-down of these systems in §5.2 in terms of models of galaxy formation and evolution.

CAS Structural Parameters

Another way to understand the structures of these systems is through their quantitative structural parameters, as measured through the CAS system. The CAS parameters have a well-defined range of values and are computed using simple techniques. The concentration index is five times the logarithm of the ratio of the radius containing 80% of the light in a galaxy to the radius containing 20% of the light (Bershady et al. 2000). C values range from 2 to 5, with higher values for more concentrated galaxies, such as massive early-types. The asymmetry is measured by rotating a galaxy's image by 180° and subtracting this rotated image from the original image. The residuals of this subtraction are compared with the original galaxy's flux to obtain a ratio of asymmetric light. The radii and centring involved in this computation are well defined and explained in Conselice et al. (2000a). The asymmetry ranges from 0 to ~1, with merging galaxies typically found at A > 0.35. The clumpiness is defined in a similar way to the asymmetry, except that the amount of light in high-frequency 'clumps' is compared to the galaxy's total light (Conselice 2003). Values of S range from 0 to > 2, with most star-forming galaxies having values S > 0.3.
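In the standard notation of the cited papers, these three indices take the following forms (I^180 is the image rotated by 180°, I^σ a smoothed copy of the image, and A_bkg and S_bkg denote the background/noise corrections described in Conselice et al. 2000a and Conselice 2003):

```latex
C = 5 \log_{10}\left(\frac{r_{80}}{r_{20}}\right), \qquad
A = \min\left(\frac{\sum_{x,y} \left|I_{x,y} - I^{180}_{x,y}\right|}
                   {\sum_{x,y} \left|I_{x,y}\right|}\right) - A_{\rm bkg}, \qquad
S = 10 \left(\frac{\sum_{x,y} \left(I_{x,y} - I^{\sigma}_{x,y}\right)}
                  {\sum_{x,y} I_{x,y}}\right) - S_{\rm bkg} .
```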
We show in Figure 14 the C-A and A-S projections of CAS space for EROs defined by (R-K) > 5. As can be seen, the EROs, which are mostly early-types and peculiars as defined by eye, fall along a large portion of the range of possible values in CAS space. As expected, the irregulars/peculiars have higher asymmetries, lower concentrations, and higher clumpiness values than the early-types. This is similar to, but not exactly the same as, the structural parameter distribution for the most massive galaxies with M* > 10^11 M⊙ at z < 1.4. There is a larger fraction of peculiars within the ERO sample, and the redshift evolution with type is not identical between EROs and massive galaxies (cf. Conselice et al. 2007b). As with the massive galaxies found at high redshifts, the visually determined ERO types are slightly higher in asymmetry than their z ~ 0 counterparts (Conselice 2003). Figure 14 shows how the EROs classified as early-types have slightly larger asymmetries than their visual morphology would suggest. The distorted early-types have even higher asymmetries on average. Most systems also deviate from the asymmetry-clumpiness relation (Figure 14), showing that the production of these asymmetries is more likely due to dynamical effects rather than star formation (Conselice 2003). Interestingly, we find that many of the peculiar EROs do not have a high clumpiness index, which is opposite to what we found for high-asymmetry systems within the massive galaxy sample (Conselice et al. 2007b). The reason for this is that these red galaxies are likely dusty, and bright star clusters are therefore not seen within the ongoing star formation. This furthermore shows that the EROs are likely more dominated by galaxy merging than a pure mass-selected sample.

We can understand the origin of these morphologies, and how they relate to the origin of the EROs, by comparing the CAS parameters of the EROs to their stellar masses. Figure 15 shows the comparison between ERO asymmetries and concentrations vs. their stellar masses. The concentration-stellar mass diagram shows a few interesting trends. The most obvious feature is that there appears to be a broad bimodality between EROs which are peculiar and those which are early-types. The peculiars and early-types have similar masses, typically M* ~ 10^11 M⊙, yet they have different light concentrations. The peculiars typically have lower concentrations, C = 2-3, while the early-types are generally at C > 3. The early-types also tend to show a correlation between concentration and stellar mass which is not seen for the peculiars. The distorted ellipticals fall in between these two populations, suggesting an evolutionary connection in the passive evolution of galaxy structure. What we are potentially witnessing is the gradual transition from peculiar EROs at high redshifts to passive ellipticals at lower redshifts, passing through a distorted elliptical stage along the way. This is consistent with the idea that the z = 1-2 epoch is when massive galaxies finally reach their passive morphology (Conselice et al. 2003; Conselice 2006). There also appears to be a bimodality within the stellar mass/asymmetry diagram (Figure 15). The peculiars in general have higher asymmetries than the early-types, with the distorted ellipticals having asymmetries mid-way between the peculiars and the ellipticals, suggesting again that the distorted ellipticals are a mid-way point in the evolution between the peculiar EROs and the passive EROs.
As we do not see much mass evolution in the upper edge of the ERO mass distribution, it is likely that the peculiars are within their final merger phase, and transform into early-types relatively quickly over 1 < z < 2. A similar pattern can be seen for a high-mass selected sample of galaxies at similar redshifts (Conselice et al. 2007b).

Other ERO Selection Criteria

ERO selection through the (R-K) > 5.3 criterion is only one way to find extremely red objects. Another popular method for finding EROs is a selection with (I-K) > 4 (e.g., Moustakas et al. 1997). While both of these selection methods are used to find EROs, it is not clear how the two methods compare, and whether they are finding the same galaxy population. We investigate this issue briefly in this section.

Figure 14. There appears to be a remarkable bimodal distribution in these parameter spaces, such that peculiars have low light concentrations and higher asymmetries than the elliptical-like objects, while having similar stellar masses. The distorted ellipticals appear to fall in the gap between these two populations, and are likely a transitional phase between the two. This can furthermore be seen in Figure 13, where the fraction of early-type EROs increases at the expense of the peculiars.

Figure 16 shows the relationship between (I-K) and (R-K) colours for galaxies within our sample. Those objects which have photometric redshifts z > 1.5 are plotted as red open symbols, while those at z < 1.5 are plotted as blue dots. What is obvious from this figure is that an ERO selection with (I-K) > 4 is more likely to pick out galaxies at higher redshifts than the (R-K) > 5.3 limit. This limit also appears to contain more contamination from lower redshift galaxies than the (R-K) > 5.3 limit. Overall, we find an overlap of 767 EROs through both definitions down to K = 19.7; this overlap constitutes 54% of the (R-K) > 5.3 EROs and 60% of the (I-K) > 4 EROs.

Figure 16. The points are plotted in terms of their redshifts, with galaxies at z > 1.5 shown as open red circles and galaxies at z < 1.5 as blue dots. The (I-K) > 4 limit appears to find galaxies at higher redshifts, on average, than the (R-K) > 5.3 limit.

In Figure 17 we show the redshift distribution for the EROs selected with (I-K) > 4, which can be directly compared with the redshift distribution for (R-K) > 5.3 systems in Figure 10. As can be seen, the redshift distribution for the (I-K) > 4 systems is skewed towards higher redshifts than for the (R-K) > 5.3 selection. We find that, on average, the redshift for the (I-K) > 4 EROs is <z> = 1.43 ± 0.32, compared with <z> = 1.28 ± 0.23 for EROs selected with the (R-K) > 5.3 limit. There are also fewer systems at z < 1.4 within the (I-K) > 4 selection than within the (R-K) > 5.3 limit, suggesting that the redder bands are finding galaxies at slightly higher redshifts. However, this does not appear to be the case for the (J-K) > 2.3 'distant red galaxies' (DRGs), as discussed in Conselice et al. (2007a) and Foucaud et al. (2007). It appears that these systems pick up a significant fraction of massive galaxies at z ~ 1 up to K = 20.

DISCUSSION

Here we discuss the results from this paper in the context of galaxy formation models and scenarios. We include in this discussion the redshift distribution of K-selected galaxies, as well as the stellar mass and structural/morphological properties of EROs, to address how the stellar mass assembly of galaxies is likely taking place.
By examining these galaxies we are not relying on assumptions about stellar masses to find the most massive and evolved galaxies at high redshift. In this sense a K-band selected and colour-selected population is an alternative approach to stellar mass selection (Conselice et al. 2007b) for understanding galaxy formation, due to the simplicity, and reproducibility, of its selection.

[Figure 17 caption: Similar to Figure 10, but for ERO systems which have been selected with the (I − K) > 4 limit. As can be seen by comparing this figure with Figure 10, we are selecting, on average, higher-redshift galaxies with these redder bands.]

We first examine the redshift distribution of K < 20 galaxies, and compare this to models. We use colour information of our faint K-selected galaxies to rule out all monolithic collapse formation scenarios for galaxies. We then examine the properties of the EROs themselves to further argue that these systems are, due to their stellar masses, likely an intrinsically homogeneous population, with the peculiar EROs evolving into the ellipticals at lower redshifts.

K-band Redshift Distributions

The number of K-band selected galaxies per redshift at a given K-magnitude limit is an important test of when the stellar masses of galaxies were put into place. In general, ideas for how massive galaxies form explicitly predict how much stellar mass galaxies would have in the past. In a rapid formation scenario, such as a monolithic collapse, the stellar masses of the most massive galaxies, which a K-band limited sample will always probe, are in place early, so the number of galaxies within a bright K-band limit, say K < 20, at high redshift should be much larger than the number of sources seen in a hierarchical model, which predicts that the stellar masses of galaxies grow with time. Therefore the number of bright galaxies at higher redshifts in a hierarchical model is less than that predicted in a monolithic collapse. This test was first performed by Kauffmann & Charlot (1998), who concluded, based on an early hierarchical model, that the predicted counts exceed the observations, a result which has remained despite improvements in data and models (e.g., Kitzbichler & White 2006).

In Figure 18 we show the number of K < 20 galaxies as a function of redshift for systems at z < 4.

[Figure 18 caption: Redshift distribution of our sample with a K < 20 cut. Comparisons to previous redshift distributions published in the K20 and GOODS surveys are shown. The two lines demonstrate predictions of a fiducial monolithic collapse (Kitzbichler & White 2006) and hierarchical model predictions (Kitzbichler & White 2007) for this evolution within the same K-band limit.]

We further plot on Figure 18 model predictions for how N(z) changes in a standard pure luminosity evolution monolithic collapse model from Kitzbichler & White (2006), and within a standard hierarchical formation scenario from Kitzbichler & White (2007). We have used in this figure our entire K-band distribution, including galaxies for which we had to measure photometric redshifts without an optical band (§2.2). We also plot on Figure 18 the K < 20 magnitude distribution for galaxies seen in the GOODS and K20 surveys. While we generally agree with these previous results, we find a slightly lower number of systems at higher redshifts compared to GOODS and K20. The reason for this could simply be cosmic variance, as the GOODS and K20 samples at these redshifts also differ by a large amount.
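A minimal sketch of the kind of model-versus-data comparison described above, assuming hypothetical binned counts and model curves (the numbers below are placeholders, not the survey's measurements); the significance of the discrepancy can be roughly estimated with Poisson statistics per redshift bin:

```python
import numpy as np

# Hypothetical binned redshift distribution for K < 20 galaxies (placeholder values).
z_bins = np.arange(0.25, 4.0, 0.5)           # bin centres
observed = np.array([900, 700, 380, 160, 60, 25, 10, 5])
monolithic = np.array([850, 800, 600, 420, 300, 210, 150, 100])  # assumed model counts
hierarchical = np.array([950, 720, 360, 170, 70, 30, 12, 6])

def chi2(obs, model):
    # Poisson-like chi-square with sigma ~ sqrt(model) per bin
    return np.sum((obs - model) ** 2 / model)

for name, model in [("monolithic", monolithic), ("hierarchical", hierarchical)]:
    c2 = chi2(observed, model)
    dof = len(z_bins)
    # rough Gaussian-equivalent significance of the excess chi-square
    sigma = (c2 - dof) / np.sqrt(2 * dof)
    print(f"{name}: chi2 = {c2:.1f} for {dof} bins (~{sigma:.1f} sigma)")
```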
Otherwise, this difference is likely created by errors within our photometric redshifts. However, it must be noted that the integrated number of K < 20 galaxies is similar for our survey, GOODS, and K20, but is still much smaller than that predicted by the monolithic model. As can be seen in Figure 18, there is clearly a large difference between the observed distribution and the predicted monolithic collapse distribution. We can rule out this basic monolithic collapse model, which does not include dust, at > 10σ, based on the comparisons to our K < 20 redshift distribution. It is possible to obtain a match with a monolithic collapse model if extreme dust content is included in these galaxies, or if there are 'hidden' galaxies (Kitzbichler & White 2006). However, as we argue below, there is no evidence that dusty galaxies dominate our K-band selected sample.

From Figure 18 it appears that a basic hierarchical model agrees better with the data (e.g., Somerville et al. 2004; Stanford et al. 2004; Kitzbichler & White 2007). There is perhaps a slight excess of galaxies in the hierarchical formation model, which has been noted before (e.g., Kauffmann & Charlot 1998; Somerville et al. 2004; Kitzbichler & White 2007). The origin of this difference is not clear. It perhaps simply implies that too much mass is produced early in the hierarchical model, yet this would seem to contradict the fact that the most massive galaxies with M∗ > 10^11 M⊙ are nearly all in place by z ∼ 1.5 − 2 (Conselice et al. 2007b). We address this apparent conflict in more detail in §5.2.

Although the basic hierarchical model agrees better with the observed redshift distribution, there are scenarios where the monolithic model can fit the data as well as the hierarchical scenario (see also Cimatti et al. 2002a). These scenarios require that the K-band selected galaxies have a significant amount of dust extinction in the observed K-band. The amount of extinction needed varies from 1.0 to 0.7 mag at rest-frame z and R bands over z = 1.5 − 2 (Kitzbichler & White 2006). The extinction needed is even higher at z = 1 − 1.5. By using the colours of galaxies that fit within the K < 20 criterion, we can determine what the contribution of dusty galaxies is to these counts. Stellar population synthesis models show that only old stellar populations, and galaxies with dust extinctions of AV > 1, have a colour of (R − K) > 5 at z = 1 − 4. We can use this information, and the model results using various dust extinctions from Nagamine et al. (2005) (Figures 7 and 8), to argue that these galaxies are not dominated by dust extinction, which would need to be the case to reconcile the observed K-band distribution with a monolithic collapse model.

First, the model results compared to data shown in Figures 7 and 8, and discussed in §4.3, clearly show that K-selected galaxies are not dominated by heavy dust extinction. At best, only models with E(B-V) = 0 − 0.4 are able to reproduce the location of real galaxies. The E(B-V) = 0.4 models are, however, even too red to account for most galaxies in the (R − K) vs. M∗ diagram (Figure 8). A representative value of RV = AV/E(B-V) = 3.1 gives values of AV = 0 − 1.24 for this range in E(B-V). For the Calzetti et al. (2000) extinction law, this gives lower values, with AV = 0 − 0.25. Thus, it does not appear likely that our K < 20 galaxy sample is dominated by enough dust extinction (AV = 1 needed, on average) to match the monolithic collapse galaxy count model.
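The extinction bookkeeping used above is simply A_V = R_V × E(B−V); a short, purely illustrative check makes the quoted range explicit:

```python
# A_V = R_V * E(B-V); for the Milky-Way-like value R_V = 3.1 quoted above,
# the E(B-V) = 0 - 0.4 model range maps onto the following A_V range.
R_V = 3.1
for ebv in (0.0, 0.4):
    print(f"E(B-V) = {ebv:.1f}  ->  A_V = {R_V * ebv:.2f} mag")
# -> A_V spans 0.00 - 1.24 mag; only the reddest allowed models approach
#    the ~1 mag average that a dusty monolithic collapse scenario would
#    require for *all* K < 20 galaxies.
```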
We find that only 28.3±0.6% of K < 20 selected galaxies at 1 < z < 2 have a colour (R − K) > 5, required to meet the minimum condition for dust extinction. We have further argued in §4 that a significant fraction of these EROs are old, passively evolving stellar populations, which are unlikely to have a screen of dust with AV > 1 (see e.g., White, Keel & Conselice 2000). As discussed in §4.5.2, over half of the EROs at 1 < z < 2 appear as early-types; thus at most only ∼ 14% of the K < 20 galaxies at 1 < z < 2 have a significant amount of dust extinction. The dusty pure-luminosity-evolution monolithic collapse models from Kitzbichler & White (2006) predict that all K < 20 galaxies have this amount of dust extinction, which clearly they do not. It is thus impossible to reconcile the K-band redshift distribution with any monolithic model.

In summary, our K-band redshift distribution, and those from the K20 and GOODS surveys, appear to be much closer to the basic hierarchical model of Kitzbichler & White (2007) than to the monolithic collapse model. We can also rule out pure luminosity evolution models with significant dust extinction. Therefore, down to K = 20 at z < 4, it appears that the monolithic model cannot be the dominant mechanism for forming galaxies.

[Figure 19 caption: The fraction of galaxies identifiable as a merger using the CAS system, and the fraction of early-types which appear to have a distorted structure.]

Structural Evolution

As discussed in §4.5, a large fraction of the EROs for which we have HST imaging appear to be undergoing some type of evolution based on their structural appearance. A logical conclusion is that many of the peculiar systems are in some phase of a merger (Conselice et al. 2003a). We can quantify this by examining the merger fraction as derived through the CAS definition of A > 0.35 and A > S (Conselice 2003). This is a strict definition, and will allow us to measure a lower limit on the merger fraction, even when observing in the rest-frame ultraviolet (Taylor-Mager et al. 2007). Figure 19 shows the evolution of the merger fraction derived through this CAS definition. We also include the fraction of early-types which appear to have a distorted structure. It appears from this that between 5−10% of the EROs at 1 < z < 2 would be identified as a major merger through the CAS parameters. This is a factor of ∼ 3 less than the peculiar fraction (Figure 13), which is what we would expect for a population which is undergoing major mergers (Conselice 2006; Bridge et al. 2007). The reason for this is that the CAS approach is sensitive to ∼ 0.4 Gyr of the merger process (Conselice 2006), while eye-ball estimates of merging last for ∼ 1.2 Gyr (Conselice 2006). This is another indication that the peculiar galaxies we are seeing in the ERO population are in fact undergoing merging.

Figures 15 and 20 demonstrate what evolution is likely occurring within the ERO population. As can be seen, the EROs within our K < 19.7 selection have the same upper range of masses at z ∼ 0.8 − 2 (Figure 20). Therefore, little mass growth occurs within this population at this K-band limit. We also know that a large fraction of the EROs with M∗ > 10^11 M⊙ are peculiar in some way. This varies from 64±24% at z > 1.5 to 41±6% at 1 < z < 1.25 (§4.5). However, at z ∼ 0 the fraction of M∗ > 10^11 M⊙ red galaxies which are morphologically early-types is ∼ 85% (Conselice 2006a; Conselice et al. 2007b).
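The CAS merger criterion quoted above (A > 0.35 and A > S) is straightforward to apply to a table of structural parameters; the sketch below, using assumed placeholder arrays, shows the bookkeeping for the merger fraction and its binomial error:

```python
import numpy as np

# Assumed CAS measurements for an ERO subsample (placeholder values).
A = np.array([0.42, 0.18, 0.55, 0.30, 0.61, 0.12])   # asymmetry
S = np.array([0.20, 0.25, 0.40, 0.35, 0.33, 0.05])   # clumpiness

# CAS major-merger criterion of Conselice (2003): A > 0.35 and A > S.
is_merger = (A > 0.35) & (A > S)

f_merger = is_merger.mean()
err = np.sqrt(f_merger * (1 - f_merger) / len(A))    # binomial error on the fraction
print(f"CAS merger fraction: {f_merger:.2f} +/- {err:.2f}")
```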
It is clear that a significant fraction of EROs must have undergone morphological evolution, as they cannot lose mass. What is likely occurring is that the distorted elliptical and peculiar galaxies which dominate the population at z > 1.5 (Figures 13 and 20) transform into morphologically and spectrally evolved systems at z ∼ 1. The reason this is likely the case is that there is no difference in the masses of the peculiar and the early-type EROs. This also explains why this population, despite having a mixed morphology, clusters so strongly (e.g., Daddi et al. 2000; Roche et al. 2003). Calculations based on the merger rate in Conselice et al. (2007b) for the most massive galaxies suggest that, on average, about one or two major mergers occur for the M∗ > 10^11 M⊙ population at z < 2, but fewer at lower redshifts.

Finally, these major mergers may be what makes the K-band counts in the hierarchical model redshift distribution higher than observed. The reason is that the standard hierarchical model underpredicts the observed number and mass densities of the most massive galaxies with M∗ > 10^11 M⊙ by up to two orders of magnitude (Conselice et al. 2007b). Within these models the stellar masses for these systems are largely already formed, but they reside in distinct galaxies that have not yet merged (De Lucia et al. 2006). It is thus easy to see that if a single massive galaxy at z ∼ 1.5 were in several pieces, all of which would still meet the criterion of K < 20 based on the relation between stellar mass and K-magnitude (Figure 7), then the number of galaxies with K < 20 at higher redshift in the hierarchical model would be higher than the observed number. This is consistent with the observed number densities of massive galaxies being higher than in the models, as well as with a rapid formation of massive systems through major mergers at z > 2 (Conselice 2006b).

SUMMARY

In this paper we analyse the faint K-band selected galaxy population as found in the Palomar NIR survey/DEEP2 spectroscopic survey overlap. Our primary goal is to determine the nature of the faint K > 19 galaxy population. While many of these galaxies are too faint for detailed spectroscopy, we can investigate their nature through spectroscopic and photometric redshifts, stellar masses, and photometric and structural features. Our major findings include:

1. The redshift distribution for K-selected galaxies depends strongly on apparent K-magnitude. Most systems at K < 17 are at z < 1.4, while a significant fraction of sources with 17 < K < 19 are at z > 2. These K-bright high-z galaxies are the progenitors of today's massive galaxies.

2. We find that a significant fraction (28.3±0.6%) of the K < 20 galaxy population consists of extremely red or massive galaxies at z > 1. We characterise the population of log M∗ > 11 sources in Conselice et al. (2007b), while we analyse the extremely red objects (EROs) in this paper.

3. We find that EROs at K < 19.7 are a well-defined population in terms of redshifts and masses. Nearly all EROs are at z > 1 and have stellar masses with M∗ > 10^11 M⊙. EROs are therefore certainly the progenitors of today's massive galaxies. The converse, however, is not necessarily true: there are massive galaxies at z > 1 that would not be selected with the ERO criteria. We find that the ERO selection locates 35-75% of all ultra-massive, M∗ > 10^11.5 M⊙, galaxies at z = 1 − 2, while only 25% of galaxies with 10^11 M⊙ < M∗ < 10^11.5 M⊙ are located with this colour cut.
4. We examine the morphological and structural properties of our ERO sample and find, as others previously have, a mixed population of ellipticals and peculiars. In total, we find that the ERO population is dominated by early-type galaxies, with an overall fraction of 57±3% of the total. Interestingly, we find that a significant fraction of the early-types (∼ 25%) are distorted ellipticals, which could be classified as peculiars, although at a slightly lower resolution than ACS, or using a quantitative approach, these systems would be seen as early-types. Peculiars account for the remaining 34%, and many of these are likely in some merger phase. This fraction evolves such that the peculiars are the dominant population at higher redshifts, z > 1.

5. We investigate the structural parameters for our EROs using the CAS system. We find that visual estimates of galaxy class and position in CAS space roughly agree, although the asymmetries of these systems are higher than what their visual morphologies would suggest. We find a bimodality in the stellar mass-concentration diagram, where the peculiar EROs are at low concentration and the early-types are highly concentrated. The distorted ellipticals fall in between these two populations, suggesting an evolutionary connection in the passive evolution of galaxy structure within the ERO population.

6. We compare the (R − K) > 5.3 ERO selection with the (I − K) > 4 ERO selection, and find that the (I − K) > 4 EROs are at slightly higher redshifts than the (R − K) > 5.3 selection, suggesting that it is a more useful criterion for finding evolved galaxies at z > 1.5.

7. By examining the redshift distribution of K < 20 galaxies, and comparing it to monolithic collapse and hierarchical formation models, we are able to rule out all monolithic collapse models for the formation of massive galaxies. These monolithic collapse models overpredict the number of K < 20 galaxies as a function of redshift at a significance of ∼ 10σ. While some monolithic collapse models are able to reproduce our galaxy counts, these are dominated by dust. We are, however, able to show that only ∼ 14% of the K < 20 galaxies have potentially enough dust to match this model, the others being evolved galaxies or blue star-forming systems.

ACKNOWLEDGEMENTS

The Palomar and DEEP2 surveys would not have been completed without the active help of the staff at the Palomar and Keck observatories. We particularly thank Richard Ellis and Sandy Faber for advice and their participation in these surveys. We thank Ken Nagamine and Manfred Kitzbichler for providing their models and comments on this paper. We also thank the referee for their careful reading of and comments on this paper. We acknowledge funding to support this effort from a National Science Foundation Astronomy & Astrophysics Fellowship and grants from the UK Particle Physics and Astronomy Research Council (PPARC). Support for the ACS imaging of the EGS in GO program 10134 was provided by NASA through NASA grant HST-GO-10134.13-A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. JAN is supported by NASA through Hubble Fellowship grant HF-01182.01-A/HF-011065.01-A. The authors also wish to recognise and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
2007-11-07T13:21:15.000Z
2007-11-07T00:00:00.000
{ "year": 2007, "sha1": "816f2f436fb4a9787895411d68754c4e89a4d346", "oa_license": null, "oa_url": "https://academic.oup.com/mnras/article-pdf/383/4/1366/4867561/mnras0383-1366.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "816f2f436fb4a9787895411d68754c4e89a4d346", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
42541113
pes2o/s2orc
v3-fos-license
HANGING THE BEEF CARCASS BY THE FOREQUARTER TO IMPROVE TENDERNESS OF THE LONGISSIMUS DORSI AND BICEPS FEMORIS MUSCLES

Hanging beef carcasses in different configurations in the cooler affects the tenderness of some carcass muscles. Forty Nellore steer carcasses (ten per day) were chosen at random in a federally inspected slaughter plant, and alternate left and right sides were hung either in the traditional way by the hindquarter (HQ) or by the forequarter (FQ), also called "tenderbife". Carcasses were selected from steers up to 30 months old and had an average hot carcass weight of 244.1 kg. These carcasses were chilled for 48 hours, when samples from the Longissimus dorsi (LD) at the 12th rib and the Biceps femoris (BF) at the P8 site were removed, kept under refrigeration (0-2°C) for five days and frozen for later analysis. The temperature of the LD after 24 hours, taken at the 12th rib, was not different for HQ (1.0°C) and FQ (0.9°C). Fat thickness measured at the 12th rib was lower (P < 0.05) for HQ (3.8 mm) than FQ (4.3 mm). All samples were thawed for 48 hours under refrigeration for tenderness evaluation. Warner-Bratzler shear force of the LD was lower (P < 0.001) for FQ (3.53 kg) than HQ (4.78 kg) and was not different for the BF. Total cooking losses were not different between HQ (19.7%) and FQ (18.9%). Hanging the beef carcass by the forequarter improved the tenderness of the LD without any detrimental effect on the BF (cap of rump).

INTRODUCTION

There is a segment of consumers willing to pay a premium for guaranteed tender beef; however, inconsistent meat tenderness has been identified as one of the major problems facing the meat industry nowadays (Shackelford et al., 2001). Meat quality is affected by many factors such as tenderness, juiciness and flavor, all contributing to the overall liking/disliking consumer attitude. Tenderness is the main quality attribute of beef, and the main reason for consumers buying and consuming it (Cia & Corte, 1978). Tenderness is also responsible for 40% of consumer acceptance of meat, followed by overall liking, responding for 30%, flavor for 20% and juiciness for 10% (Chapell, 2001). As meat tenderness is one of the most important organoleptic traits for the consumer, a possible solution for improving the tenderness of Bos indicus meat would be using methods such as electrical stimulation, cooler management, ageing, injection of calcium chloride or carcass suspension by the pelvic bone (Pedreira et al., 1999).

Eventually the beef processing segment of the meat industry will adopt technologies to sort carcasses for tenderness. The percentage of carcasses qualifying as "tender" needs to be as high as possible to assure the success of the system, whatever it is. Thus, it would benefit the processor to implement as many steps as feasible to improve tenderness before the classification occurs. In addition, a processor might want to implement specifications for cattle to be slaughtered that include production practices that could potentially improve meat tenderness. Hanging carcasses in different configurations in the cooler, through stretching certain muscles and relaxing others, has been suggested to increase the tenderness of stretched muscles and decrease the tenderness of relaxed muscles (Owens & Gardner, 1999).
Reviewing the literature about some aspects of feedlot management and nutrition on carcass measurements, Owens & Gardner (1999) reported that Longissimus muscle shear force tended to improve as carcass weights increased, perhaps associated with greater stretch of the Longissimus muscle at greater carcass weight. Tenderstretching is an alternative carcass hanging configuration in which the carcass is hung by the pelvic bone. This physical process increases tension over the loin and hindquarter muscles during rigor establishment, avoiding intense contraction and making them more tender (Forrest et al., 1979). In zebu steers, according to Norman & Cia (1980), applying the tenderstretch increased the tenderness of the Gluteus medius, Quadriceps femoris and Biceps femoris, but no improvement was observed for the Longissimus dorsi, and it increased the sarcomere length of all muscles studied except the Psoas major. Nevertheless, although tenderstretching has proved effective in improving tenderness, it has the disadvantage of the hind leg hanging in a 90º position, which may require additional floor space for the carcasses or sides in the chilling rooms (Sorheim & Hildrum, 2002) and which makes its adoption difficult.

In the early 1990s, a new method for stretching or restraining major muscles in carcasses, called tendercut, was introduced by scientists at Virginia Polytechnic Institute and State University. The tendercut procedure involves making cuts in the skeleton of the pre-rigor carcass shortly after slaughter while maintaining the Achilles tendon suspension (Claus, 1994). However, the tendercut requires more work than the tenderstretch, and the round/sirloin cut seems to depend on well-defined criteria for the specific cutting (Sorheim & Hildrum, 2002). Herring et al. (1967), studying the effects of various degrees of stretching or shortening on the tenderness and sarcomere length of the Semitendinosus muscle, concluded that it was more important to prevent shortening than to ensure maximal stretch.

Our hypothesis is that hanging carcasses by the forequarter could have an impact on the tenderness of some muscles, since it reduces tension in the muscles of the loin and hindquarter. Accordingly, the objective of this work was to study the effect of hanging carcasses by the forequarter on the tenderness of the Longissimus dorsi (striploin) and Biceps femoris (cap of rump) muscles.

MATERIAL AND METHODS

During four consecutive days, 40 commercial Nellore steer carcasses (ten per day) were randomly selected in a federally inspected packing plant (Promissão, SP, Brazil). The mean and standard error of hot carcass weight and fat thickness were 244 ± 3.1 kg and 4.1 ± 0.19 mm, respectively, and animals were up to 30 months old. Alternate sides of these carcasses were hung by the traditional Achilles tendon method (HQ), while the opposite side was hung by the Carpi radialis muscle (FQ), and chilled for 24 hours in a cooler at 0-2ºC, when temperature and pH measurements of the Longissimus dorsi muscle (LD) were taken with a digital pH meter.
After 48 hours of chilling, one-inch-thick samples of the LD between the 12th and 13th ribs and of the Biceps femoris muscle (BF) at the P8 site (pelvic portion of the Biceps femoris over the Gluteus medius muscle) were taken, individually labeled, vacuum packaged and aged for 5 days (0-2ºC). Immediately before samples were cut, fat thickness over the LD was measured at three-quarters of the medial end. After ageing, samples were frozen for later shear force measurement. Shear force was determined using Warner-Bratzler equipment according to the methodology described by Wheeler et al. (2001). Total cooking losses were also determined from the weight difference of steaks before and after cooking. Data were analyzed as paired measurements, and a t-test was used to detect differences between treatments with the Univariate procedure of the SAS® software (SAS Institute Inc., Cary, NC, USA); a sketch of this paired analysis is given after the discussion below.

RESULTS AND DISCUSSION

There were no differences in carcass temperature or pH measured 24 hours after slaughter (Table 1). Nevertheless, fat thickness was greater (P = 0.03) in carcasses hung by the forequarter (4.3 mm) than in those hung by the hindquarter (3.8 mm). This difference could be due to the reduction in tension on the Longissimus dorsi, which would cause a shortening of the muscle and fat, increasing its thickness. Warner-Bratzler shear force of the LD was lower (P < 0.01) for carcasses hung by the forequarter than for those hung by the hindquarter, while BF shear force and total cooking losses were not affected by treatment. Although many studies in the literature report effects of different methods of carcass hanging on the tenderness of beef muscles, none of them studied the effect of forequarter carcass suspension on tenderness. Hanging the carcass by the pelvic bone (tenderstretching) has been reported to improve tenderness in beef carcasses, since it increases tension over the loin and hindquarter muscles, avoiding intense contraction and thereby making them more tender (Sorheim et al., 2001).

Cold shortening and the subsequent toughness of meat can be reduced by slow or delayed pre-rigor chilling, by electrical stimulation to speed up glycolysis so that rigor mortis occurs faster at a higher meat temperature, or by physically stretching or restraining the muscle against contraction. These tenderizing treatments can be used singly or in combination (Sorheim & Hildrum, 2002). The effect of carcass suspension methods on the sarcomere length and shear force of some bovine muscles was examined by Hostetler et al. (1972). Carcasses were suspended vertically by the Achilles tendon, horizontally, neck-tied, hip-tied, and hip-free by the obturator foramen in the pelvic or aitch bone. Shear force was greater in the LD of carcasses suspended vertically by the Achilles tendon compared with the other methods, but carcasses suspended horizontally, neck-tied, hip-tied and hip-free did not differ.

Comparing various muscles from beef carcasses which entered rigor either in a horizontal position with the limbs perpendicular to the vertebrae or in the common vertical position with Achilles tendon suspension, Herring et al. (1965) observed that horizontally placed sides resulted in longer sarcomeres, smaller fiber diameters and increased tenderness of the Longissimus, Gluteus medius, Biceps femoris and Semimembranosus.
In a study conducted to determine how the tenderness of a single muscle varies when submitted to different degrees of shortening or stretching, Herring et al. (1967) stretched or contracted samples of the Semitendinosus muscle by 12, 24, 36 or 48% of the pre-excised length. According to the results, the difference in tenderness of muscles stretched 12-48%, although apparent, was not of the magnitude verified with the various stages of shortening. The authors concluded that, from the standpoint of ultimate tenderness, it is more important to prevent postmortem shortening than to ensure maximal stretch.

Although not evaluated in this work, an increase in diameter and a decrease in length were noticed in all major muscles of the hindquarter, which could partially explain the thicker fat layer in the forequarter-hung treatment. The studies by Bouton et al. (1973) confirmed findings of increased tenderness of the Longissimus, Semimembranosus and Gluteus medius muscles by tenderstretch, with tenderness values of non-aged meat equivalent to 21 days of ageing. Moreover, in addition to increasing the average tenderness level, tenderstretch reduced the variation in tenderness of beef Longissimus muscles (Sorheim et al., 2001). Nevertheless, the tenderizing effect on muscles like the Biceps femoris, Semimembranosus and Psoas major was slight or absent (Sorheim & Hildrum, 2002). This is probably due to their high content of sinew and connective tissue, which determines tenderness more than the possible stretching of the myofibrillar proteins. The degree of contraction at which a muscle enters the state of rigor varies among different muscles (Locker, 1960) and, according to Shanks et al. (2002), these differences among muscles may be influenced by the proximity of the individual muscle to the skeletal separation point and by muscle fiber orientation in relation to tension.

Hanging the beef carcass by the forequarter caused a significant improvement in the tenderness of the LD without any detrimental effect on the BF. This improvement in tenderness could be due to the suspension of carcasses by the forequarter: despite reducing tension on the Longissimus dorsi muscle, it seems that the remaining tension was sufficient to avoid muscle shortening. These results suggest that the physical treatment, "tenderbife", altered the environment in the myofiber, affecting the rates of certain biochemical activities. Additional studies of biochemical events, such as sarcomere lengths and the myofibrillar fragmentation index of raw myofibrils, need to be done.

Table 1 - Means, standard errors and probabilities (P) of characteristics of carcasses hung by the hindquarter or forequarter.
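The paired design used in this study (left vs. right side of the same carcass) maps directly onto a paired t-test; a minimal sketch, assuming hypothetical shear-force values in kg (the real data are summarized in Table 1):

```python
from scipy import stats
import numpy as np

# Hypothetical paired Warner-Bratzler shear force values (kg) for the LD,
# one pair per carcass: hindquarter-hung (HQ) vs. forequarter-hung (FQ) side.
hq = np.array([4.9, 5.1, 4.4, 4.7, 5.0, 4.6, 4.8, 4.9])
fq = np.array([3.6, 3.8, 3.2, 3.5, 3.7, 3.4, 3.5, 3.6])

# Paired t-test: each carcass serves as its own control,
# removing between-animal variation from the comparison.
t_stat, p_value = stats.ttest_rel(hq, fq)
print(f"mean difference = {np.mean(hq - fq):.2f} kg, t = {t_stat:.2f}, P = {p_value:.4f}")
```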
2017-10-22T13:43:27.646Z
2005-10-01T00:00:00.000
{ "year": 2005, "sha1": "4a73e2c543a6cab63d45163ebba43a40d13f8498", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/sa/a/667ZmQwq4jKTZMJsWCMJfgN/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4a73e2c543a6cab63d45163ebba43a40d13f8498", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Biology" ] }
125998502
pes2o/s2orc
v3-fos-license
Exact asymptotics of positive solutions to Dickman equation

The paper considers the Dickman equation $\dot x(t)=-\frac{1}{t}\,x(t-1)$ for $t \to \infty$. The number theory uses what is called a Dickman (or Dickman-de Bruijn) function, which is the solution to this equation defined by the initial function $x(t)=1$ if $0 \le t \le 1$. The Dickman equation has two classes of asymptotically different positive solutions. The paper investigates their asymptotic behaviors in detail. A structure formula describing the asymptotic behavior of all solutions to the Dickman equation is given, an improvement of the well-known asymptotic behavior of the Dickman function, important in number theory, is derived, and the problem of whether a given initial function defines a dominant or subdominant solution is dealt with.

1. Introduction and preliminaries. The paper investigates the properties of solutions to the Dickman equation

ẋ(t) = −(1/t) x(t − 1),    (1)

for t → ∞, where t ≥ t0 > 0. Throughout the paper, the value t0 may differ as different results are formulated and, in general, it is assumed to be sufficiently large in order to guarantee that all the computations performed are well defined. This is mentioned in each particular case. A continuous function x : [t0 − 1, ∞) → R is called a solution of (1) on [t0 − 1, ∞) if it is continuously differentiable on [t0, ∞) and satisfies (1) for every t ∈ [t0, ∞) (at t = t0, the derivative is regarded as the derivative on the right). The initial problem x = ϕ(t), t ∈ [t0 − 1, t0), where ϕ is a continuous function, defines a unique solution x = x(t0, ϕ)(t), t ≥ t0 − 1, of (1) such that x(t0, ϕ)(t) = ϕ(t) for t ∈ [t0 − 1, t0). A solution x of (1) on [t0 − 1, ∞) is called positive if x(t) > 0 for every t ∈ [t0 − 1, ∞), negative if x(t) < 0 for every t ∈ [t0 − 1, ∞), and oscillating if it has arbitrarily large zeros on [t0 − 1, ∞).

Let Ψ(y1, y2) be the number of positive integers not exceeding y1 having no prime divisors exceeding y2. Then, lim_{y→∞} Ψ(y^t, y) y^{−t} = ρ(t), t > 0, where ρ is what is called the Dickman function (or the Dickman-de Bruijn function, because the latter author studied it intensively), defined for real t ≥ 0 by the relation

ρ(t) = 1 for 0 ≤ t ≤ 1,  ρ(t) = 1 − ∫₁ᵗ ρ(s − 1)/s ds for t > 1.    (2)

As noted, e.g., in [20], the Dickman function was first studied by Dickman [12] and later by de Bruijn [6, 7]. Differentiating (2), we can see that, assuming t0 = 1, x = ρ(t) is a solution of equation (1) satisfying the unit initial condition on [0, 1]. Moreover, 0 < ρ(t) ≤ 1, |ρ′(t)| ≤ 1, ρ(t) is nonincreasing on [0, ∞), and ρ(t) ≤ 1/⌊t⌋!, where ⌊·⌋ is the floor function. It is also known ([6], see also [2]) that

ρ(t) = exp{−t (ln t + ln ln t − 1 + o(1))}    (3)

(throughout the paper, we use the well-known Landau order symbols O ("big" O) and o ("small" o) in computations with t → ∞). In [8] (see also [18]) an improved version (4) of formula (3), valid for all sufficiently large t, is given, and [24, p. 508] (the formula for j_κ^{(n+1)}(u) with n = 0 and κ = 1) includes an improvement (5) of formula (4). To the best of our knowledge, formula (5) gives the best-possible asymptotic behavior of the function ρ published in available sources. For an overview of the properties of the function ρ, see also [18].

In the paper, we perform a qualitative analysis of the asymptotic behavior of the family of all solutions of (1) in terms of the theory of dominant and subdominant solutions to (1). We give the exact asymptotic behavior of the dominant solutions and sharp asymptotic behavior of the subdominant solutions.
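As an illustration of equation (1) and the defining relation (2), the sketch below integrates the Dickman function numerically by the method of steps (a simple trapezoidal scheme, purely illustrative and not the paper's analytical method) and checks the bound ρ(t) ≤ 1/⌊t⌋!:

```python
import math
import numpy as np

# Integrate x'(t) = -x(t-1)/t with x = 1 on [0, 1], i.e. the Dickman function,
# on a uniform grid by the method of steps with a trapezoidal update.
h = 1e-3
t_max = 6.0
n = int(t_max / h) + 1
t = np.linspace(0.0, t_max, n)
rho = np.ones(n)

lag = int(round(1.0 / h))                 # index offset corresponding to the delay 1
for i in range(lag, n - 1):
    # trapezoidal step for rho'(t) = -rho(t-1)/t on [t_i, t_{i+1}];
    # rho(t-1) is already known on this interval (method of steps)
    f_i = -rho[i - lag] / t[i]
    f_ip1 = -rho[i + 1 - lag] / t[i + 1]
    rho[i + 1] = rho[i] + 0.5 * h * (f_i + f_ip1)

for tt in (2.0, 3.0, 4.0, 5.0):
    i = int(round(tt / h))
    print(f"rho({tt:.0f}) = {rho[i]:.6e}  (bound 1/floor(t)! = {1.0 / math.factorial(int(tt)):.6e})")
```

For instance, the computed value at t = 2 should agree with the exact ρ(2) = 1 − ln 2 ≈ 0.3069.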
As a special case, we significantly improve the asymptotic behavior of the function ρ (being a subdominant solution in the terminology below) given by formula (4).

The paper is organized as follows. In part 1.1, the theory of dominant and subdominant solutions is shortly described. Then, the main results of the paper are formulated in part 2, where the existence of dominant and subdominant solutions to (1) and their asymptotic behaviors are discussed. Part 3 is devoted to important consequences of the main results. Namely, the structure formula describing the behavior of the family of all solutions to (1) is derived, the asymptotic behavior of the Dickman function given by formula (5) is improved, and classes of initial functions defining either dominant or subdominant solutions are described. Proofs of the statements, with the necessary auxiliary information, are brought together in part 4.

1.1. Dominant and subdominant solutions. We will shortly overview the representation theory of solutions of equation (1) by what are called dominant and subdominant solutions. To this end, we adapt Theorems 8-10 and Definition 2 from [11], where more general equations than equation (1) are treated, and formulate the relevant Theorems 1.1-1.3 and Definition 1.4. For this type of investigation, see also [15, 16, 17, 19]. Moreover, every solution x = x(t) of (1) on [t0 − 1, ∞) can be uniquely represented by formula (7), where the constant K depends on x. The formula (7) remains valid if x1 is replaced by x̃1, the constant K by a constant K̃, and x2 by x̃2.

In what follows, we will work with what are called iterated logarithms, defined as follows. The nth iterated logarithm ln_n t (n ≥ 0) is defined as ln_n t := ln(ln_{n−1} t), n ≥ 1, ln_0 t := t, where we assume t > e_{n−1} for this definition to be correct. In parts 2.1 and 2.2 below, we use the terms "dominant" and "subdominant" solution to (1) in advance. When the existence of both types of solutions is proved, a verification of Definition 1.4 is simple and is given in part 2.3.

2.1. Dominant solutions to (1). Let us look for a formal solution x = x(t) to (1) in the form of a power series with negative powers in a neighborhood of infinity,

S(t) = Σ_{n=1}^{∞} C_n t^{−n},    (8)

with real coefficients C_n defined by the following lemma.

Lemma 2.1. The coefficients C_n of the formal solution x(t) = S(t) to (1) are defined by formula (9), where n ≥ 2, and the coefficient C_1 can be chosen arbitrarily.

The proof of Lemma 2.1 is given in part 4.1.

Remark 1. To establish the convergence or divergence of the formal series S(t) defined by (8) is an open problem, since the well-known criteria for the convergence or divergence of power series are not directly applicable. An attempt to utilize formula (9) to get estimates of the coefficients, so that convergence/divergence tests become applicable, does not lead to usable estimates.

The convergence/divergence problem explained in Remark 1 is the reason why we derive the following result on the existence of solutions to equation (1) asymptotically described by the formal series S(t) defined by (8).

Theorem 2.2. Let p ∈ N be fixed, let C1 > 0 be fixed, and let ε be a positive number such that ε > C_{p+1}. Then, there exists a dominant solution x(t) of (1) satisfying inequality (10).

The proof of Theorem 2.2 is given in part 4.2. Let us remark that the asymptotic relation (10) is often written as x(t) = Σ_{n=1}^{p} C_n t^{−n} + O(t^{−(p+1)}), t → ∞. The statement of Theorem 2.2 implies the existence of positive solutions to (1) decreasing to zero as polynomials with negative powers do.
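Although formula (9) is not reproduced above, the recurrence for the coefficients can be recovered by a direct computation; the following derivation is our own sketch, consistent with Lemma 2.1's statement that C_1 is free and that the formula applies for n ≥ 2. Substituting (8) into the equivalent form t ẋ(t) = −x(t−1) and expanding the delayed term with the binomial series gives

\[
t\,S'(t) = -\sum_{n\ge 1} n\,C_n\,t^{-n},
\qquad
S(t-1) = \sum_{n\ge 1} C_n\,t^{-n}\Big(1-\tfrac{1}{t}\Big)^{-n}
       = \sum_{m\ge 1}\Big(\sum_{n=1}^{m}\binom{m-1}{n-1}C_n\Big)t^{-m},
\]

so matching the coefficients of \(t^{-m}\) yields \(m\,C_m = \sum_{n=1}^{m}\binom{m-1}{n-1}C_n\), and, since the \(n=m\) term equals \(C_m\),

\[
(m-1)\,C_m = \sum_{n=1}^{m-1}\binom{m-1}{n-1}C_n, \qquad m \ge 2,
\]

with \(C_1\) arbitrary.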
2.2. Subdominant solutions to (1). In this part, we show that there exist positive solutions decreasing to zero even faster. Using the above terms, such solutions are called subdominant. We will describe them using a class of functions M_β with specific asymptotic properties defined below. The following lemma says that M_β ≠ ∅ by defining a class of functions satisfying all the afore-mentioned properties. In addition, a sign property, necessary in the following investigation, is emphasized.

The proof of Lemma 2.4 is given in part 4.3. The following result, Theorem 2.5, describes the asymptotic behavior of a subdominant solution: under its hypotheses, there exists a subdominant solution with the stated asymptotic representation. The proof of Theorem 2.5 is given in part 4.4.

3. Some consequences. The asymptotic behavior of both dominant and subdominant solutions to (1), together with the representation formula (7), makes it possible to formulate important properties of the Dickman equation. Below, formula (7) is written out for the case in question, the asymptotic behavior of the Dickman function is improved, and a discussion of the sets of initial functions defining either dominant or subdominant solutions follows.

3.1. Structure formula describing the family of all solutions to (1). It is easy to write, utilizing formula (7) in Theorem 1.1, a structure formula describing the asymptotic behavior of all solutions to (1). As a solution x1 we can take the solution described by formula (10) in Theorem 2.2, assuming that C1 > 0 and ε > 0 are fixed. As a solution x2 we can take the solution described in Theorem 2.5 and given by formula (20) in Remark 2.

3.2. Improved asymptotic behavior of the Dickman function. The above results make an important contribution to the investigation of the Dickman equation by making it possible to improve the asymptotic behavior of the Dickman function given by formulas (3)-(5). The following theorem provides the relevant statement and is proved in part 4.5.

3.3. On initial functions defining dominant and subdominant solutions. Let x(t) = x(t0, ϕ)(t) be the unique solution of (1) with the initial data (28), where ϕ is a continuous function on the initial interval; this problem is equivalent to the formulation (29). For t0 = 1, the limit value (30) is discussed, e.g., in [11, 14, 20, 28]. If the limit exists and is finite, then formula (31) holds. Formula (31) is derived in [28], but without proving the existence of the limit (30). The existence of the limit can be deduced from the results in [11] and [14], or from formula (26), but these results cannot be used to derive formula (31). In [20], the authors gave an alternative proof of the limit equation (31), including the existence of the limit (30) (in connection with a discussion on the asymptotic convergence of solutions, we also refer to [3, 4]). Let us also mention a recent paper [13], which describes a method for studying the asymptotic behavior of the dominant positive solutions to a similar class of scalar delay differential equations.

Analyzing the initial-value problems (1), (28) and (29) with t0 = 1, we conclude that the following theorem is valid. The exact asymptotic behavior of the dominant solution x(1, ϕ)(t) is specified in the following theorem.

Theorem 3.3. Let ϕ(t) > 0, t ∈ [0, 1], and C(1, ϕ) > 0. Then, the dominant solution x(1, ϕ)(t) of the initial-value problem (1), (28) is, for t → ∞, asymptotically described by formula (32), where the coefficients of the series in (32) are computed by explicit formulas and x2(t) is an arbitrary subdominant solution to (1).

The theorem is proved in part 4.6.

Remark 3. Compare the asymptotic behavior of x(1, ϕ)(t) given by formulas (30) and (32): by (30) we get the representation (34) for t → ∞.
Describing exactly the asymptotic behavior by a power series with negative powers, formula (32) substantially improves formula (34). The order of asymptotic accuracy is improved as well. A subdominant solution x2(t) can be described, e.g., by formula (25) and, therefore, lim_{t→∞} t^n x2(t) = 0 for arbitrary n ∈ N.

4. Proofs and additional material. This part contains proofs of the statements formulated above and the necessary auxiliary results and material. The proofs of the main results utilize the Ważewski method in a setting suitable for application to delayed differential equations [22]. This method is used in [9] to prove a theorem on the existence of solutions of delayed functional differential equations with graphs embedded in a previously defined domain. We employ the following particular case (but one sufficient to determine the asymptotic behavior of dominant and subdominant solutions to (1)) of Theorem 1 in [9], adapted for equation (1).

For further computations, we need auxiliary formulas on asymptotic decompositions given in Lemma 4.1 and Lemma 4.2 in [10]. The following lemma summarizes the necessary formulas.

Lemma 4.2. Let reals σ and τ be fixed; then the corresponding asymptotic representation holds for t → ∞. Let k ∈ N and reals σ, τ be fixed; then the corresponding asymptotic representation holds for t → ∞.

4.1. Proof of Lemma 2.1. In the computations below, for arbitrary non-negative integers n and λ, we use the binomial number. Matching the multipliers of the identical powers in (40) and (41), we obtain a relation equivalent to (9); i.e., a formal solution S(t) to (1) is of the form (8) with coefficients defined by (9).

4.2. Proof of Theorem 2.2. In the proof, we apply Theorem 4.1 with a suitable choice of the functions involved. We verify inequality (35) first. Due to the assumptions of Theorem 4.1, we have ϕ(t + θ) < δ(t + θ) for every θ ∈ [−1, 0); therefore, it is sufficient to show that (43) holds. Treating the left-hand and right-hand sides of (43) with the help of formula (9) (with some technicalities being the same as in the proof of Lemma 2.1), it is easy to see that (43) will hold if

Σ_{n=1}^{p} n C_n / t^{n+1} + ⋯

This inequality is valid for all sufficiently large t and, therefore, inequality (43) holds. Now we show that inequality (36) holds as well. Since ϕ(t − 1) > π(t − 1), inequality (36) will be valid if a corresponding displayed inequality holds. Proceeding as above, we obtain inequality (44) with the inequality symbol ">" replaced by the opposite symbol "<" and ε replaced by −ε. Consequently, we conclude that inequality (36) holds if an inequality relevant to (45), i.e., (46), is valid. Since the last inequality holds for all sufficiently large t, inequalities (46) and (36) hold as well. Now it can be seen that the required estimates hold for all t ∈ [t0, ∞), assuming t0 is sufficiently large and (48) holds. The above inequalities are equivalent to (19).
2019-04-22T13:08:11.365Z
2017-10-01T00:00:00.000
{ "year": 2017, "sha1": "fa0e4a998c18c733b7c2992f860e5e1e63181433", "oa_license": "CCBY", "oa_url": "https://www.aimsciences.org/article/exportPdf?id=4cbdac68-3599-4e8f-af9b-f61e77006c79", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2de0ffdb0f060bbfb59fb5d97385f85c8080ad89", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
54219615
pes2o/s2orc
v3-fos-license
Domain Alignment with Triplets

Deep domain adaptation methods can reduce the distribution discrepancy by learning domain-invariant embeddings. However, these methods only focus on aligning the whole data distributions, without considering the class-level relations among source and target images. Thus, target embeddings of a bird might be aligned to source embeddings of an airplane. This semantic misalignment can directly degrade the classifier performance on the target dataset. To alleviate this problem, we present a similarity constrained alignment (SCA) method for unsupervised domain adaptation. When aligning the distributions in the embedding space, SCA enforces a similarity-preserving constraint to maintain class-level relations among the source and target images, i.e., if a source image and a target image are of the same class label, their corresponding embeddings are supposed to be aligned nearby, and vice versa. In the absence of target labels, we assign pseudo labels to target images. Given labeled source images and pseudo-labeled target images, the similarity-preserving constraint can be implemented by minimizing the triplet loss. With the joint supervision of the domain alignment loss and the similarity-preserving constraint, we train a network to obtain domain-invariant embeddings with two critical characteristics, intra-class compactness and inter-class separability. Extensive experiments conducted on the two datasets well demonstrate the effectiveness of SCA.

Introduction

In many real-world applications of visual recognition, the training and testing data distributions are often different due to dataset bias [41]. This distribution discrepancy decreases the generalization capability of the learned visual representations. One example is that a model trained on synthetic images fails to generalize well on real-world images. To eliminate the effect of the dataset bias, a commonly used strategy is unsupervised domain adaptation (UDA).

[Figure 1 caption: We present the 2D t-SNE visualization of embeddings learned by (a) ResNet (trained on source images only), (b) domain alignment (based on JMMD [28]), and (c) SCA (ours). In the first row, different colors denote data of different object categories. In the second row, red represents the data of W and blue represents the data of A [35]. Under SCA, different classes are well separated, and the two domains are well aligned at the class level. Best viewed in color.]

In UDA, we are provided with a labeled source dataset and an unlabeled target dataset, and the goal is to learn a model on the source dataset which minimizes the test error on the target dataset. In the literature, recent UDA methods [10, 28, 9, 42, 43, 25] adopt deep neural networks to learn a shared embedding space where the distribution discrepancy can be reduced. These methods typically involve two objectives: 1) learn embeddings that maintain a low classification error on the source dataset; 2) make embeddings domain-invariant, such that the classifier trained on the source can be directly used on the target dataset. To learn domain-invariant embeddings, recent methods usually minimize some measure of domain variance [43, 28, 25] (such as correlation distance [40]) or adopt adversarial learning [10, 9, 42]. However, this line of methods has an intrinsic limitation: they only focus on reducing the global distribution discrepancy, without exploiting the class-level relations among the source and target images.
Thus, even with perfect distribution alignment, images with different labels from different domains might be misaligned nearby in the embedding space. As shown in Fig. 1(b), domain-level alignment (based on JMMD [28]) has the ability to reduce the distribution discrepancy. However, there exists a semantic misalignment problem in the aligned embeddings. For example, some samples from different classes are mapped nearby in the embedding space. This semantic misalignment is detrimental to the classifier performance on the target dataset.

Motivated by this problem, we present a similarity constrained alignment (SCA) method for UDA. The working mechanism of SCA is that it can align the distributions while preserving the class-level relations among source and target images. Specifically, we add a similarity-preserving constraint on the source and target images during domain alignment. The impact of the similarity-preserving constraint is two-fold: 1) class unification: images with the same labels should be pulled together in the embedding space; 2) class separation: images with different labels should be pushed apart. In practice, the similarity-preserving constraint can be implemented by minimizing the triplet loss [37].

During training, SCA learns domain-invariant embeddings by optimizing an objective that includes both the domain confusion loss and the triplet loss [37]. First, the domain confusion loss aims at mapping the source and target distributions into a shared feature space. Several existing methods can be directly used to achieve this goal; in this paper, we adopt JMMD [28] to align the data distributions. Second, the triplet loss is used to enhance the discriminative ability of the deeply learned embeddings, so that source and target embeddings possess the properties of intra-class compactness and inter-class separability.

Unfortunately, the target dataset is totally unlabeled, so the similarity-preserving constraint cannot be directly imposed on the source and target images. In the absence of target labels, we use a classifier trained on source images to assign pseudo labels to target images. To eliminate the influence of incorrectly assigned images, we only select images with high predicted scores for training. Given labeled source images and pseudo-labeled target images, we utilize the triplet loss [37] to constrain their similarity in the embedding space. Specifically, if a source image and a target image have the same class label, their corresponding embeddings are supposed to be aligned nearby, and vice versa. In this manner, the semantic misalignment problem can be alleviated. As shown in Fig. 1(c), we observe that the embeddings learned by our method preserve the two class-level relations: 1) embeddings that belong to the same class are close (class unification); 2) embeddings that belong to different classes are well separated (class separation). Based on the domain-invariant embeddings learned by SCA, the classifier can generalize well on the target dataset.

To summarize, this paper is featured in three aspects. First, to our knowledge, this is an early work that explores the class-level relations across domains under the UDA setting. Second, by consolidating the ideas of domain-level alignment and metric learning, this paper presents a novel similarity constrained alignment (SCA) method for UDA. SCA attempts to reduce the distribution discrepancy while preserving the underlying difference and commonness among source and target images.
Thus, the class-level misalignment problem can be alleviated. Third, extensive experimental results demonstrate that the proposed method improves the generalization ability of the learned classifier. Moreover, the proposed method is capable of producing accuracy competitive with state-of-the-art methods on two UDA benchmarks.

Related Work

Many methods have been proposed to solve the domain adaptation problem. This section briefly reviews works that are closely related to our paper.

Unsupervised domain adaptation. Unsupervised domain adaptation methods attempt to minimize the shift between source and target data distributions. Some methods focus on learning a mapping function between source and target distributions [20, 13, 8, 39]. In [39], Correlation Alignment is proposed to match the two distributions. In [8], the source and target domains are aligned in the subspace described by eigenvectors. Other methods seek to find a shared feature space for source and target distributions [9, 43, 25, 28]. Long et al. [25] and Tzeng et al. [43] utilize the maximum mean discrepancy (MMD) metric [14] to learn shared feature representations. Moreover, the joint maximum mean discrepancy (JMMD) [28] is proposed to align the joint distributions of multiple layers across domains. Recent methods [10, 9, 42, 46, 3, 31] adopt adversarial learning [12] to learn representations that cannot be distinguished between domains. The gradient reversal algorithm (RevGrad) [9] is proposed to learn domain-invariant features. Tzeng et al. [42] propose a generalized framework for adversarial domain adaptation. Pei et al. [31] propose a multi-domain adversarial network for fine-grained distribution alignment. SimNet [32] proposes to classify an image by computing its similarity to prototype representations of each category. Some methods [17, 2, 23, 7] use adversarial learning to learn a transformation in the pixel space from one domain to another. CYCADA [17] maps samples across domains at both the pixel level and the feature level. In this paper, we also attempt to reduce the distribution discrepancy, but we are more concerned with preserving the class-level relations among the source and target datasets.

Self-training. Our method is related to self-training, a strategy in which the predictions of a classifier on unlabeled data are used to retrain the classifier [22, 5, 21, 46, 33, 19]. The assumption of self-training is that an image with a high predicted score is more likely to be classified correctly. In unsupervised domain adaptation, some methods [46, 4, 36] use pseudo-labeled images to improve classifier accuracy on the target dataset. Zhang et al. [46] propose a progressive way to select pseudo-labeled images for training the classifier. Chen et al. [4] use two classifiers to assign labels to target images. Saito et al. [36] adopt three asymmetric classifiers to improve the quality of pseudo labels. Unlike these methods, we leverage the selected images with their pseudo labels for semantic alignment instead of retraining the classifier. This practice provides a new way to utilize unlabeled data for learning feature representations.

Deep metric learning. Deep metric learning [6, 11, 44, 38, 37, 18] aims to learn discriminative embeddings such that similar samples are nearer and dissimilar samples are further apart from each other. The most widely used loss functions for deep metric learning are the contrastive loss [6] and the triplet loss [37]. The problem settings of these works are different from ours.
We aim to reduce the distribution discrepancy and utilize the triplet loss [37] to preserve the class-level relations among images from the two domains. Since the target domain is unlabeled, we assign pseudo labels to the target images.

Overview

In UDA, we are provided with a set of labeled images from the source dataset and a set of unlabeled images from the target dataset, where the data distributions of the two datasets are different. We denote the source dataset as {(x_i^s, y_i^s)}_{i=1}^{n_s}, where x_i^s is the i-th source image, y_i^s is its label, and n_s is the total number of images in the source dataset. Similarly, we denote the target dataset as {x_i^t}_{i=1}^{n_t}, where x_i^t is the i-th target image and n_t is the total number of images in the target dataset. Our goal is to leverage labeled source images and unlabeled target images to learn a classifier that can generalize well on the target dataset.

It is worth repeating that, for UDA, we not only deal with the whole distribution discrepancy caused by the dataset bias; we also consider preserving the class-level relations among source and target images. To this end, we present a similarity constrained alignment (SCA) method for UDA. As shown in Fig. 2, the goal of SCA is two-fold: 1) it learns a domain-invariant embedding space to align the whole distributions; 2) it preserves the underlying difference and commonness among source and target images. Based on the learned embeddings, we can train a classifier that generalizes well on the target dataset. In Section 3.2.1, we briefly describe the domain-level alignment method used in this paper. In Section 3.2.2, we introduce the similarity-preserving scheme. In Section 3.3, we discuss the proposed method.

Similarity Preserving Alignment

In this paper, we utilize a deep convolutional neural network to learn the classifier. For K-way classification, this corresponds to minimizing L(f(g_θ(x^s)), y^s) over the source dataset, where L(·, ·) is the cross-entropy loss, g_θ(·) is the feature extractor, and f(·) is the classifier trained on the source dataset. In general, the classifier f(·) is a simple fully-connected network followed by a softmax over the classes. Due to the dataset bias [41], the classifier trained on the source dataset often fails to generalize well on the target dataset. To alleviate this problem, we present the similarity constrained alignment (SCA) method. SCA can eliminate the distribution discrepancy while preserving the underlying difference and commonness among source and target images. In practice, SCA learns domain-invariant embeddings by optimizing an objective that includes both the domain-level alignment loss and the similarity-preserving loss.

Domain-level alignment

Domain-level alignment focuses on reducing the whole distribution discrepancy between the source and target datasets. In the community, recent deep domain adaptation methods utilize a domain confusion loss to align the distributions. These methods usually adopt a discrepancy-based metric [14, 28] or adversarial adaptation [9] to design the domain confusion loss function. Following the practice in [28], we build the domain-level alignment loss using the JMMD metric. JMMD formally reduces the discrepancy in the joint distributions of the activations in the domain-specific layers L, i.e., P(Z^{s1}, ..., Z^{s|L|}) and Q(Z^{t1}, ..., Z^{t|L|}). The loss function of domain-level alignment is written accordingly, where n = n_s, z^{tℓ} denotes the activations of the target image in layer ℓ, and z^{sℓ} denotes the activations of the source image in layer ℓ.
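Since the JMMD loss above is a kernel two-sample statistic, a minimal single-layer MMD sketch (a simplification of JMMD, with a Gaussian kernel and assumed toy feature batches) illustrates the quantity being minimized:

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all pairs of rows
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(zs, zt, sigma=1.0):
    # biased estimate of squared MMD between source and target activations
    k_ss = gaussian_kernel(zs, zs, sigma).mean()
    k_tt = gaussian_kernel(zt, zt, sigma).mean()
    k_st = gaussian_kernel(zs, zt, sigma).mean()
    return k_ss + k_tt - 2 * k_st

# toy batches of embeddings (batch_size x feature_dim)
zs = torch.randn(32, 256)            # source activations
zt = torch.randn(32, 256) + 0.5      # shifted target activations
print(f"MMD^2 estimate: {mmd2(zs, zt).item():.4f}")
```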
k is the kernel function in a reproducing kernel Hilbert space (RKHS). We adopt ResNet-50 [15] as the backbone network. We discard its last layer and add two fully connected layers (a bottleneck layer and a classifier layer) for our task. In practice, we align the joint distributions of the activations in the two newly added layers. Similarity-constrained Scheme Domain-level alignment only aims at reducing the overall distribution discrepancy, but it can mix up the class-level relations among the source and target images. Consequently, there exists a semantic misalignment problem, i.e., source images of class A might be falsely aligned to target images of class B in the embedding space. This semantic misalignment problem directly degrades accuracy on the target dataset. To mitigate this problem, we should consider the class-level relations of images across the two datasets. In this paper, we propose to preserve the underlying difference and commonness among images during the domain alignment. Class-level relations. A general assumption behind the similarity-preserving alignment is that if a source image and a target image share the same class label, their corresponding embeddings are supposed to be aligned nearby, and vice versa. On top of the domain-level alignment, we add a similarity-preserving constraint to maintain two class-level relations among source and target images. In this paper, the two class-level relations are defined as follows. • Class separation. Images from different domains and with different labels should be mapped far apart in the embedding space. • Class unification. Images from different domains but with the same labels should be mapped nearby in the embedding space. Similarity-preserving loss function. To mitigate the semantic misalignment problem, we want images to preserve the above class-level relations during the domain-level alignment. Let $D_{i,j} = \|g_\theta(x_i) - g_\theta(x_j)\|_2^2$ measure the distance between two images in the feature space, where $g_\theta(\cdot)$ is the feature extractor. If $x_i$ and $x_j$ have the same label, we want $D_{i,j}$ to be small, corresponding to class unification. If $x_i$ and $x_j$ have different labels, we want $D_{i,j}$ to be large, corresponding to class separation. Based on the above analysis, we utilize the triplet loss [37] to achieve the similarity-preserving constraint. Given an anchor image $x_a$, a positive image $x_p$, and a negative image $x_n$, we minimize the loss

$\mathcal{L}_s = \max\big(0,\; D_{a,p} - D_{a,n} + m\big), \quad (3)$

where $x_a$ and $x_p$ form a positive pair (their labels $y_a$ and $y_p$ are the same), $x_a$ and $x_n$ form a negative pair (their labels $y_a$ and $y_n$ are different), and $m$ is the margin that is enforced between positive and negative pairs. This loss encourages the distance between $x_a$ and the positive image $x_p$ to be smaller than the distance between $x_a$ and the negative image $x_n$ by the enforced margin $m$.
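To make the similarity-preserving term concrete, the following is a minimal PyTorch-style sketch of the margin-based triplet loss in Eq. 3, applied to batches of embedded features; the function name, tensor shapes and margin value are illustrative assumptions and are not taken from the authors' implementation.

```python
import torch
import torch.nn.functional as F

def triplet_similarity_loss(anchor, positive, negative, margin=0.3):
    """Margin-based triplet loss on embedded features.

    anchor, positive, negative: tensors of shape (batch, dim), standing in
    for the outputs of the feature extractor g_theta on a sampled triplet.
    """
    # Squared Euclidean distances D_{a,p} and D_{a,n} in the embedding space.
    d_ap = (anchor - positive).pow(2).sum(dim=1)
    d_an = (anchor - negative).pow(2).sum(dim=1)
    # Hinge term: pull the positive closer than the negative by at least `margin`.
    return F.relu(d_ap - d_an + margin).mean()

# Toy usage with random embeddings.
a, p, n = (torch.randn(32, 256) for _ in range(3))
print(triplet_similarity_loss(a, p, n).item())
```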
Training data construction. The similarity-preserving loss supervises the embedding learning, so that class-level relations among source and target images can be preserved. When optimizing the similarity-preserving loss, we should pay attention to two crucial issues: 1) the target dataset is totally unlabeled; 2) the construction of training triplet samples is non-trivial. For these two issues, we propose corresponding techniques. (i) Label estimation for unlabeled target data. The target dataset is totally unlabeled, so the semantic relations cannot be directly built. In the absence of target labels, we use a classifier pre-trained on the source images to assign labels to the unlabeled target images. To ensure the accuracy of the pseudo labels, we adopt three tactics. (a) Domain-level alignment. When pre-training the classifier, we also utilize the domain-level alignment to reduce the harmful influence of dataset bias. This practice improves the performance of the classifier on the target dataset, so that more accurate pseudo labels can be gained. (b) Threshold T. Intuitively, an image with a high predicted score is more likely to be classified correctly. Thus, we only select target images with predicted scores above a high threshold T for building the semantic relations. Note that the threshold T is constant during training. (c) Progressive selection. With the help of the similarity-preserving alignment, the classifier will improve itself during training. This motivates us to re-assign the labels for the target images every several iterations (K). By doing so, the target images can be progressively selected for the class-level alignment. (ii) Sampling triplet images. Given labeled source images and pseudo-labeled target images, we now introduce the way to construct triplet samples. The possible number of triplets is large, and optimizing over all triplets is computationally infeasible. To avoid this problem, we follow the sampling strategy in [16]. For the labeled source images, we randomly select C classes and randomly select K images of each class. In this way, we select CK source images. Similarly, we select CK pseudo-labeled target images. Thus, we get a mini-batch of 2CK training images and perform triplet sampling in each mini-batch.
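As an illustration of this batch construction, the short NumPy sketch below draws C classes and K images per class from a labeled (or pseudo-labeled) pool; it is only a simplified stand-in for the sampling strategy of [16], and the toy label arrays are placeholders rather than real data.

```python
import numpy as np

def sample_ck_indices(labels, C=8, K=4, rng=None):
    """Pick C random classes and K random image indices per class."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=C, replace=False)
    picked = []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        # Sample with replacement if a class holds fewer than K images.
        picked.append(rng.choice(idx, size=K, replace=len(idx) < K))
    return np.concatenate(picked)

# Toy labels standing in for source labels and target pseudo-labels.
source_labels = np.random.randint(0, 31, size=500)
pseudo_labels = np.random.randint(0, 31, size=300)
source_idx = sample_ck_indices(source_labels)   # C*K source images
target_idx = sample_ck_indices(pseudo_labels)   # C*K pseudo-labeled target images
# Together they form one mini-batch of 2*C*K images for triplet mining.
```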
Overall objective We present the similarity constrained alignment (SCA) for UDA. During training, SCA jointly optimizes an objective that includes both a domain-level alignment loss and a similarity-preserving loss, such that more discriminative domain-invariant embeddings can be gained. On top of the learned embeddings, we can train a classifier that generalizes well on the target dataset. The final objective of SCA is written as

$\mathcal{L} = \mathcal{L}_c + \alpha\,\mathcal{L}_d + \beta\,\mathcal{L}_s, \quad (4)$

where $\mathcal{L}_c$ is the classification loss, $\mathcal{L}_d$ is the domain-level alignment loss, and $\mathcal{L}_s$ is the similarity-preserving loss. The parameters α and β control the relative importance of domain-level alignment and similarity preservation, respectively. Discussion Collaborative working mechanism. The working mechanism of SCA is that it can align the distributions while preserving the class-level relations among source and target images. On the one hand, if we only use the domain-level alignment to reduce the distribution discrepancy, the resulting embeddings would suffer from the semantic misalignment problem. On the other hand, the similarity-preserving constraint can map a source image and a target image nearby if they share the same class label. Thus, the similarity-preserving constraint can be viewed as a class-level distribution alignment. With their collaborative supervision, we can reduce the distribution discrepancy at both the domain level and the class level, i.e., learn domain-invariant embeddings that preserve the class-level relations. In our experiments, we validate this collaborative working mechanism. Moreover, we also study the impact of adopting only the similarity-preserving constraint on the transfer accuracy. Closely related to our work, Motiian et al. [30] also study class-level alignment. Our work is different from [30] in two aspects: 1) the setting of [30] is supervised domain adaptation, where labeled target images are available; 2) the authors of [30] do not consider the domain-level alignment, while our work collaboratively aligns the distributions at both the domain and the class level. Label estimation. To construct class-level relations among the source and target images, we need to estimate the labels of the unlabeled target images. In this paper, we simply adopt a classifier pre-trained on the source images to assign pseudo labels to the unlabeled target images. We only select target images with scores above a certain threshold T. Note that we do not adaptively adjust the threshold T as in [46]. In practice, we set the threshold T to a high value (0.9) to guarantee that the selected samples are more likely to be predicted correctly. During training, the classifier will gradually improve itself, so we re-assign pseudo labels every several iterations. In this way, more and more target images will be progressively selected for training. How to use pseudo-labeled target images? Existing methods [22,5,21,46,33,19] usually utilize the pseudo-labeled target images for training the classifier directly. In this paper, the pseudo-labeled images are not used for training the classifier, but for building the class-level relations. We argue that there exists a set of wrongly pseudo-labeled images, which can directly harm the classifier. To avoid this problem, we use the selected target images for optimizing the similarity-preserving loss function. Moreover, as analyzed in [24,45], the cross-entropy loss encourages the features of different classes to stay apart. Thus, using selected target images for training the classifier can be viewed as an indirect way to preserve the class separation relation. However, the cross-entropy loss does not consider the class unification relation. In contrast, we adopt pseudo-labeled target images and source images for constructing both the class unification and separation relations.
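A minimal PyTorch-style sketch of the thresholded label-estimation step is given below; the data-loader interface and variable names are assumptions made for illustration, and the default threshold mirrors the T = 0.9 value reported in the implementation details.

```python
import torch

@torch.no_grad()
def assign_pseudo_labels(model, target_loader, threshold=0.9, device="cpu"):
    """Keep only target images whose softmax confidence exceeds `threshold`.

    Assumes the loader yields (index, image) pairs; re-running this routine
    every few thousand iterations lets the selection grow as the classifier
    improves (progressive selection).
    """
    model.eval()
    kept, labels = [], []
    for idx, images in target_loader:
        probs = torch.softmax(model(images.to(device)), dim=1)
        conf, pred = probs.max(dim=1)
        mask = conf >= threshold
        kept.append(idx[mask.cpu()])
        labels.append(pred[mask].cpu())
    return torch.cat(kept), torch.cat(labels)
```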
Office-31 is a widely used benchmark for visual domain adaptation. It contains 4,652 images and 31 categories collected from three distinct domains: Amazon (A), Webcam (W) and DSLR (D). The images in DSLR are captured with a digital SLR camera and have high resolution. Amazon consists of images downloaded from online merchants (www.amazon.com). These images show products at medium resolution. The images in Webcam are collected by a web camera, and they are of low resolution. We evaluate the proposed method across six transfer tasks: A → W, D → W, W → D, A → D, D → A and W → A. We report the results following the protocol in [25]. ImageCLEF-DA is a benchmark dataset for the ImageCLEF 2014 domain adaptation challenge. It contains three subsets, including Caltech-256 (C), ImageNet ILSVRC 2012 (I), and Pascal VOC 2012 (P), and each subset is considered as a domain. There are 12 categories and each category contains 50 images. We use all domain combinations and build 6 transfer tasks: I → P, P → I, I → C, C → I, C → P, and P → C. We report the results following the protocol in [28]. Sample images of the Office-31 and ImageCLEF-DA datasets are shown in Fig. 3 and Fig. 4. Through these images, we can observe the dataset bias discussed in [41]. Implementation Details We implement our method in the PyTorch framework, and fine-tune from the ResNet-50 model [15] pre-trained on the ILSVRC 2012 dataset [34]. All the images are resized to 256 × 128. We discard its last layer and add two fully connected layers for our task. The first layer has 256 units, and the second goes down to the number of training classes. During training, we adopt random flipping and random cropping as data augmentation methods. We use stochastic gradient descent (SGD) for optimization, and adopt the same INV learning rate strategy as in RevGrad [9]. The learning rate decreases gradually after each iteration from 0.001, the momentum is set to 0.9, and the weight decay is set to 0.0004. We set α = 1 and β = 1 in Eq. 4. We adopt a two-stage training procedure: we first initialize the classifier by minimizing Eq. 1 and Eq. 2, and then train the whole network by minimizing Eq. 4. The training procedure is summarized in Algorithm 1. For the stage-one training, we train the network for 5000 iterations. For the stage-two training, we train for the remaining 30000 iterations. We set the threshold T = 0.9, the maximum number of steps S = 15, and the number of SCA updates per step K = 2000.
Algorithm 1: Training procedure of SCA.
Input: source images and labels {(x_i^s, y_i^s)}, i = 1, ..., n_s; unlabeled target images {x_j^t}, j = 1, ..., n_t; threshold T; maximum number of steps S; number of SCA updates per step K.
Stage 1 (pre-train a classifier): train a classifier by minimizing Eq. 1 and Eq. 2.
Stage 2 (class-level alignment):
for s = 1; s ≤ S; s++ do
  use the classifier to assign pseudo labels to target images with predicted score above T
  for k = 1; k ≤ K; k++ do
    train SCA by minimizing Eq. 4
  end for
end for
Experimental Results Compared Approaches. In this section, we mainly compare the proposed method with several state-of-the-art methods, including DAN [25], RTN [27], JAN [28], RevGrad [9], MADA [31], SimNet [32], iCAN [46], and CDAN [26]. These methods are all based on a deep neural network (ResNet-50 [15]) to learn domain-invariant embeddings. For a fair comparison, the results of these methods are directly reported from their original papers. Comparison on the Office-31 dataset. We compare the proposed method with the recent state-of-the-art methods in Table 1. Our method (SCA) gains 87.6% accuracy, which is the second best performance on the Office-31 dataset. Note that our method is comparable with CDAN-M [26] (87.6% vs. 87.7%). Besides, our method achieves the highest performance on three tasks (A → W, W → A, and W → D). Our method is also higher than MADA [31] (87.6% vs. 85.2%). Moreover, our method outperforms SimNet, iCAN, and JAN by 1.4%, 0.4%, and 3.2%, respectively. Comparison on the ImageCLEF-DA dataset. In Table 2, we compare the proposed method with state-of-the-art methods. SCA obtains 87.9%, which outperforms the other methods. The accuracy of our method is 0.4% higher than the second best method, CDAN-RM [26]. Moreover, the proposed method outperforms MADA [31], iCAN [46], and JAN [28] by 2.1%, 0.5%, and 2.1%, respectively. Specifically, our method achieves the highest performance on two tasks (C → I and P → C). The comparisons on the Office-31 dataset (Table 1) and the ImageCLEF-DA dataset (Table 2) show that the proposed method is competitive with the state of the art.
Table 3. Ablation experimental results of SCA on the Office-31 dataset. "B" (Basel.) denotes the baseline trained only on the source dataset, "S" represents the similarity-preserving constraint, and "D" denotes the domain-level alignment. SCA is the full system ("B + D + S").
Component analysis In this section, we present a step-by-step evaluation to analyze the effectiveness of SCA. Ablation study. We investigate the impact of different components in SCA.
We conduct the experiment on Office-31 and report the results in Table 3. The baseline is the network that we modify from ResNet-50, and it does not adopt any domain adaptation technique. In this paper, we adopt JMMD [28] for the domain-level alignment, and the result of "B + D" is consistent with the experiment in [28]. Compared with "B" (Basel.), "B + D" achieves higher performance, which indicates that it is able to reduce the distribution discrepancy. On top of the domain-level alignment, the similarity-preserving constraint further brings a +3.4% improvement in average accuracy. This demonstrates the importance of preserving the underlying difference and commonness among source and target images. As discussed in Section 3.3, the similarity-preserving constraint can be viewed as a way to align distributions at the class level. We further study its impact on the transfer accuracy, and report its results ("B + S") in Table 3. We can observe that adopting only the similarity constraint can also improve the baseline performance: it gains a +4.5% improvement over the baseline in average accuracy. This indicates that preserving class-level relations benefits the transfer accuracy. Weight of the similarity-preserving constraint. The β in Eq. 4 controls the importance of the similarity-preserving constraint. A larger β means that the constraint has a greater impact on the distribution alignment. In Fig. 5, we show the transfer accuracy of SCA by varying β ∈ {0, 0.1, 0.5, 1, 2, 5} on three tasks: A → W, W → A, and D → A. Note that when β is set to 0, the similarity-preserving constraint has no impact. As shown in Fig. 5, when β increases from 0 to 1, the performance on the three tasks grows and reaches its best at β = 1. However, when β is too large (β = 5), the accuracy drops by a large margin. Empirically, the best parameter β is between 0.5 and 2 in our method. Domain-level alignment method. As discussed in Section 3.2.1, we use the discrepancy-based metric JMMD for domain-level alignment. We note that the proposed similarity-preserving constraint can work collaboratively with other domain-level alignment methods. To validate this, we conduct the experiment on three tasks of Office-31: A → W, W → A, and D → A. We adopt an adversarial adaptation method named Reverse Gradient (RevGrad) [9] for domain-level alignment. Based on RevGrad, we construct the similarity constrained alignment network (SCA-Rev), and report the results in Fig. 6. As shown in Fig. 6, RevGrad can improve the accuracy of the baseline, which indicates that it is able to reduce the distribution discrepancy. Moreover, SCA-Rev further improves the accuracy of RevGrad. SCA-Rev gains +5.7%, +4.0% and +5.1% improvements over RevGrad on A → W, W → A, and D → A, respectively. On the one hand, the results demonstrate that preserving the two class-level relations is crucial for the domain-level alignment. On the other hand, these results indicate that the similarity-preserving constraint can work collaboratively with other domain-level alignment methods. Distribution discrepancy. The domain adaptation theory [1,29] introduces the A-distance to measure the distribution discrepancy. The A-distance is defined as $d_A = 2(1 - 2\epsilon)$, where $\epsilon$ is the generalization error of a classifier trained to discriminate source and target. We report $d_A$ on two tasks (A → W, W → D) of Office-31 with features of the baseline, domain-level alignment (basel. + G), and SCA.
As shown in Fig. 7, $d_A$ on SCA features is much smaller than $d_A$ on the baseline and domain-level alignment features. This indicates that SCA features can reduce the distribution discrepancy more effectively. Conclusion and Future Work In this paper, we present the similarity constrained alignment (SCA) method to address the semantic misalignment problem. SCA enforces a similarity-preserving constraint to maintain the underlying difference and commonness among the source and target images. In the absence of target labels, we use a classifier trained on source images to assign pseudo labels to the target images. Given labeled source images and pseudo-labeled target images, the similarity-preserving constraint can be implemented by minimizing the triplet loss. Under the collaborative supervision of the domain alignment loss and the triplet loss, SCA learns domain-invariant embeddings with two important properties, i.e., intra-class compactness and inter-class separability. Thus, the distributions can be aligned at both the domain and the class level, which alleviates the semantic misalignment problem. The experimental results on two benchmarks demonstrate that the proposed SCA is effective and competitive with the state-of-the-art methods. In the future, we will extend this idea to multiple target domains, where the class-level relations among multiple domains will be explored.
A Gemini/GMOS study of the bright elliptical galaxy NGC 3613 and its globular cluster system We present the first photometric study of the globular cluster system (GCS) of the E galaxy NGC 3613 (Mv = -21.5, d = 30.1 Mpc), as well as the surface photometry of the host galaxy, based on Gemini/GMOS images. Being considered the central galaxy of a group, NGC 3613 inhabits a low-density environment although its intrinsic brightness is similar to the expected one for galaxies in the centre of clusters. The following characteristics are obtained for this GCS. The colour distribution is bimodal, with metal-poor globular clusters (GCs) getting slightly bluer with increasing radius. The radial and azimuthal projected distributions show that metal-rich GCs are more concentrated towards the host galaxy and trace its light distribution very precisely, while metal-poor GCs present a more extended and uniform distribution. The GC luminosity function helps validate the adopted distance. The estimated total GC population of Ntot = 2075 +/- 130 leads to a specific frequency Sn = 5.2 +/- 0.7, a value within the expected range for GCSs with host galaxies of similar luminosity. The surface photometry of NGC 3613 reveals a three-component profile and a noticeable substructure. Finally, a small sample of ultra-compact dwarf (UCD) candidates are identified in the surroundings of the host galaxy. INTRODUCTION The ages of globular clusters (GCs) usually establish them among the oldest objects in the Universe (e.g. Hansen et al. 2013; Tonini 2013), so they provide important clues about the first phases of galaxy formation. From the observational point of view, GCs present several advantages like being so compact and intrinsically bright that they can be observed farther away than one hundred Mpc (Harris et al. 2014, 2016; Alamo-Martínez et al. 2013). Moreover, globular cluster systems (GCSs) of early-type massive galaxies contain thousands of GCs, probably as a consequence of a history of numerous mergers (e.g. Bassino et al. 2008; Durrell et al. 2014; Oldham & Auger 2016; Caso et al. 2017). It is often assumed that GCs formed at high redshift, in gas-rich discs and within a high-pressure environment (Kruijssen 2015). Recent numerical simulations, like the E-MOSAICS Project (Pfeffer et al. 2018; Kruijssen et al. 2019), have presented scenarios that describe the formation, evolution and disruption of the GCs, following their evolution together with that of the host galaxies. These scenarios imply a direct correlation between the formation of GCs and the field stars, in such a way that the properties of GCSs provide constraints to the simulations (e.g. Powalka et al. 2016) and, on the other side, a galaxy history can be described based on the study of its GCS. Such interconnections follow clearly from studies of large GC samples, like the ACS Fornax Cluster Survey (ACSFCS) (Jordán et al. 2007) or the Next Generation Virgo Cluster Survey (NGVS) (Ferrarese et al. 2012). One of the most common characteristics of GCSs in massive early-type galaxies is the existence of two GC subpopulations, though more complex cases have been pointed out (e.g. Caso et al. 2013; Sesto et al. 2016).
These GC subpopulations have been detected through different physical properties: • bimodality in colour, which is interpreted mainly as a difference in metallicity for the bona fide old GCs, where 'blue' and 'red' subpopulations identify those with lower and higher metal content (i.e. metal-poor and metal-rich GCs), respectively (e.g. Usher et al. 2012;Chies-Santos et al. 2012;Forte et al. 2013). • different projected spatial distribution with respect to the host galaxy, with red GCs being generally more concentrated towards the centre of the host galaxies and tracing their surface-brightness profiles, while blue ones present a more extended distribution (e.g. Bassino et al. 2006;Forbes et al. 2012;Durrell et al. 2014;Escudero et al. 2018). • different kinematics, found in the radial velocity and velocity dispersion of the subpopulations. The kinematics of the red subpopulation is usually akin to that of the host galaxy stars (e.g. Schuberth et al. 2010;Pota et al. 2013;Amorisco 2019). According to the numerical simulations by Amorisco (2019), the higher dispersion of blue GCs relative to red ones may be explained by the high contribution of blue clusters to the halo population through minor mergers. Our current target, NGC 3613, is an intrinsically bright elliptical galaxy, classified as E6 (de Vaucouleurs et al. 1991). We initially adopt a distance d∼ 30.1 Mpc (Tully et al. 2013), based on surface brightness fluctuations, but taking into account that the distances calculated to date have a significant dispersion, as can be seen in NED 1 . In particular, one of the aims of this work is to provide a new estimate for this value by means of the turn-over of the globular cluster luminosity function. Then, the absolute visual magnitude of NGC 3613 (M V = −21.5) corresponds to the range of those of bright massive galaxies located in rich clusters, although it is noticeable that it inhabits an environment of lower density . The ATLAS 3D project (Cappellari et al. 2011a), a survey that combines multi-wavelength data and models, includes NGC 3613 in its sample of 260 early-type galaxies. According to their kinematic analysis (Krajnović et al. 2011), our target is a 'regular rotator' (i.e. dominated by ordered rotation) and, based on an estimator of the angular momentum of the stars, it is also classified as a 'fast rotator' (Emsellem et al. 2011). The local density estimators presented by Cappellari et al. (2011b), place NGC 3613 in a low-density environment. They also state that fast rotators form a homogeneous category of systems flattened and oblate, with regular velocity fields. One of the last papers of the ATLAS 3D project deals with the stellar populations of the early-type galaxy sample (McDermid et al. 2015), and gives values of the age and metallicity of NGC 3613, measured within the effective radius, calculated by two methods. Using single stellar population models they obtain: age = 11±2 Gyr and [Z/H] = −0.17±0.05, and using spectral fitting to derive star formation history, they obtain mass-weighted values of age = 13 ± 0.7 Gyr and [Z/H] = −0.13 ± 0.01. Under both approaches, our target turns out to be a quite old and metal-poor galaxy. More recently, O'Sullivan et al. (2017) presented the Complete Local-Volume Groups Sample (CLoGS) that includes 53 optically-selected groups located in the nearby Universe, up to a distance of 80 Mpc. According to their selection criterion (i.e. 
considering the brightest early-type member of the group as the central galaxy), NGC 3613 is not only a member of a group but is also the central galaxy.
1 https://ned.ipac.caltech.edu/
As far as we know, the GCS of NGC 3613 has not been studied before, which is surprising given that the host is such a bright galaxy. According to the study of Madore et al. (2004), NGC 3613 belongs to a group consisting of a dozen galaxies. Located at an angular distance of 47 arcmin towards the north and with a radial velocity difference of 350 km s^-1, there is a peculiar lenticular (also classified as shell elliptical) galaxy of similar luminosity, NGC 3610, that is considered a prototype of a merger remnant of two disc galaxies. The latter galaxy has a very complex surface-brightness distribution with plumes, tails and other structures as a consequence of the tidal disturbances suffered during its evolution (Schweizer & Seitzer 1992). Madore et al. (2004) indicate that both galaxies might belong to the same group, which may then have undergone mergers and tidal-stripping processes. Moreover, the estimated projected distance between them (≈ 400 kpc assuming they are both at the same distance) lends support to the idea that they may have formed in a common environment. Thus, the current analysis of NGC 3613 and its GCS will not only allow us to characterize the system and confirm its distance, but also look for evidence of possible interactions with other group members, e.g. by detecting spatial enhancements or irregularities in the projected GC azimuthal distribution, substructure in the host surface-brightness distribution, etc. This paper is organised as follows. The observations and data reduction are described in Sections 2 and 3, while the results are presented in Section 4. In Section 5 we analyse the surface photometry of the galaxy and present our discussion in Section 6. A summary and conclusions are given in Section 7. OBSERVATIONS The data were obtained with Gemini/GMOS-N during semester 2013A (programme GN2013A-Q-42, PI: J.P. Caso), in nights with photometric quality, and consist of images of the galaxies NGC 3610 and NGC 3613 in the g', r', and i' bands. Fig. 1 shows the configuration of the observed fields. The images of NGC 3610 (one field on the galaxy plus another 'adjacent' field) have been used previously to study both the galaxy and its GCS, while those of NGC 3613 (one field on the galaxy) are the ones used in the present study to analyse the properties of the GCS of NGC 3613. In order to estimate the contamination by Galactic stars and background galaxies, we will use half of the 'adjacent field' (the one that is further from the centre of NGC 3610). This field is close in projection and has been taken as part of the same programme. Moreover, the GCS of NGC 3610 extends up to a galactocentric radius of ∼ 4 arcmin, so almost no GCs are expected in this half of the 'adjacent field'. The observing log is presented in Table 1. Four long exposures were taken for each band. We also note that the g' images were obtained on two different nights. A dithering pattern was used to cover the gaps and remove cosmic rays and bad pixels, as well as a 2×2 binning, resulting in a scale of 0.146 arcsec pixel^-1.
For the data reduction, we used tasks of the GEMINI package within Iraf. Point source selection and photometry In order to improve the detection of GC candidates located near the centre of the galaxy and to remove possible gradients in the surface-brightness distribution, we subtracted the light of the galaxy as much as possible using the task FMEDIAN, applying first a filter that calculates the median value in squares of 200 × 200 pixels, and then repeating the procedure with one of 40 × 40 pixels to eliminate fluctuations of lower period. With the photometry of the artificial stars (see Section 3.3) we corroborated that this procedure does not modify the results obtained for point sources. To obtain an initial catalogue of point sources present in the GMOS field, we used the software SExtractor (Bertin & Arnouts 1996). We ran the software on all g', r' and i' images using two filters: one (Gaussian) that is more effective at larger distances from the galaxy, and another (Mexhat) which performs a better fit in highly populated areas such as those near the centre of the galaxy, where candidates for GCs are concentrated. The program generates a catalogue for both cases, Mexhat and Gaussian filters. Then, we selected those objects listed in at least one catalogue for g', r' and i', and with a parameter CLASS STAR greater than 0.4 to eliminate extended sources. We performed PSF photometry with the corresponding tasks of the DAOPHOT package within Iraf. For each filter, a PSF model was obtained with about 20 isolated bright stars, well distributed over the field. The ALLSTAR task also gave us statistical parameters (χ and sharpness). By means of these parameters, a new improved point-source catalogue was obtained. Finally, aperture corrections were estimated using the same objects as those used to obtain the respective PSFs. Photometric calibration As part of the Gemini programme, a standard star field from the list of Smith et al. (2002) was observed, and reduced in the previous study of NGC 3610. To obtain magnitudes in the standard system, a calibration equation was applied for each filter, involving m std and m inst, the standard and instrumental magnitudes, respectively, Z P, the photometric zero-point, K MK, the mean atmospheric extinction at Mauna Kea (obtained from the Gemini Observatory Web Page), and X, the airmass. In the present work, the same calibration equations as those obtained in that previous study were applied. Finally, we applied the corrections by Galactic extinction obtained from NED, which were calculated by Schlafly & Finkbeiner (2011). Completeness estimation In order to estimate the photometric completeness for our fields, we first added to the images 250 artificial stars, uniformly distributed, covering a magnitude range 21.5 ≤ i 0 ≤ 27 and the expected colour range for GCs. We repeated this procedure 40 times, achieving a sample of 10 000 artificial stars in each image. Then, we performed the detection and photometry in the same way as in the original science images. The process was carried out for four ranges of galactocentric radii. In addition, it was repeated for the comparison field in order to estimate the contamination corrected by completeness. The resulting completeness curves are shown in Figs. 2 and 3. The fitted function (Harris 2009) has three free parameters, α, β and m 0. Hereafter, the limit i 0 < 25 is used to guarantee an acceptable completeness in both the science and comparison fields.
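Since the completeness expression itself is not reproduced above, the following SciPy sketch is only an illustration: it fits a commonly used Fleming-type interpolation formula with three free parameters (α, β and m 0) to artificial-star recovery fractions. The functional form and the toy numbers are assumptions, not necessarily the ones adopted in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def completeness(m, alpha, beta, m0):
    # Assumed Fleming-type interpolation: beta is the bright-end completeness,
    # m0 the magnitude of the steepest drop, alpha the sharpness of the drop.
    x = alpha * (m - m0)
    return 0.5 * beta * (1.0 - x / np.sqrt(1.0 + x * x))

# Toy recovered fractions from artificial-star tests, per i'0 bin.
mags = np.arange(21.75, 27.0, 0.5)
frac = np.array([0.97, 0.96, 0.95, 0.93, 0.90, 0.85, 0.72, 0.45, 0.20, 0.07, 0.02])
pars, _ = curve_fit(completeness, mags, frac, p0=(2.0, 1.0, 25.0))
print("alpha, beta, m0 =", pars)
```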
Selection of GC candidates The GC candidates will be selected among the point sources, according to certain brightness and colour ranges. On the one hand, the faint magnitude limit was determined in the previous Section according to the adopted completeness for the science images (i.e. i 0 ∼ 25). On the other hand, the bright magnitude limit will be taken as the estimated limiting magnitude that separates Ultra Compact Dwarf (UCD) and bright GC candidates. Adopting as such limit the M I magnitude derived from Mieske et al. (2006) (i.e. M I = −12), and using the transformation equations given by Faifer et al. (2011) and the adopted distance for NGC 3613, we calculate the bright magnitude limit as i 0 = 20.8. With regard to the colour range, Fig. 4 shows the colour-colour diagrams, (r − i) 0 versus (g − i) 0 and (g − r) 0 versus (g − i) 0, for the selected point sources. The use of colour-colour diagrams to select GC candidates has been thoroughly explained by Faifer et al. (2011). This method has proved to result in a clean selection of GCs, with only a small fraction of contaminants, when spectroscopic observations to confirm membership are available (e.g. Norris et al. 2008, 2012, and references therein). Accordingly, there are well-defined sequences in these diagrams that are indicated by the solid lines. We then select as GC candidates those objects lying along these sequences, within the colour ranges expected for GCs (e.g. Caso et al. 2015; Escudero et al. 2015, and references therein). Finally, Fig. 5 shows the colour-magnitude diagram i 0 versus (g − i) 0 for the science field (left panel) and for the comparison field (right panel). The locus of the bona-fide GC candidates appears clearly in the science field; even the two subpopulations can be distinguished at first sight. In the comparison field, contaminants that fulfil the same criteria as GC candidates are present only for i 0 > 23, with a total of 4.9 objects/arcmin 2. Colour distribution Fig. 6 shows the (g − i) 0 colour distribution for all GC candidates, using a bin width of 0.04 mag. A smoothed histogram (with a 0.5σ Gaussian kernel) is also shown with dashed lines. We note that for this analysis the central zone of the galaxy is excluded due to saturation. In order to analyse whether the global colour distribution can be represented as the sum of two Gaussian models, we used the Gaussian Mixture Modeling test (GMM, Muratov & Gnedin 2010). By means of the GMM test, we fitted two Gaussians to the sample, obtaining the mean value, dispersion, and fraction for each subpopulation, i.e. metal-poor ('blue') and metal-rich ('red') GC candidates. The test also gives two statistical parameters, DD and the kurtosis of the input distribution. The DD parameter is a measure of the separation between the peaks of the two Gaussians, calculated as $DD = |\mu_1 - \mu_2| / \sqrt{(\sigma_1^2 + \sigma_2^2)/2}$, where µ 1 and µ 2 are the mean values and σ 1 and σ 2 the dispersions of the fitted Gaussians. A bimodal distribution is acceptable when DD > 2, while the kurtosis is very likely negative in such a case. In order to run GMM on contamination-free samples, we proceeded as follows. The expected number of contaminants, N c, was calculated for each region, taking into account the ratio between the areas covered by the sub-sample and the comparison field. Because the regions in which the sample was divided cover a smaller area than that corresponding to the comparison field, we proceeded to randomly select N c objects from the comparison field, to then subtract from the science sample those that present the most similar colours to each of them. This random selection can introduce some statistical noise. To minimize this effect, the procedure was repeated 25 times and the results were averaged to obtain the final parameters of each fitted Gaussian. The results are listed in Table 2, where it can be seen that for the whole sample it is acceptable to consider a bimodal distribution. We also performed this analysis for three concentric regions. We separated them according to the following galactocentric radii (R g): 20 < R g < 70 arcsec, 70 < R g < 110 arcsec, and R g > 110 arcsec (see Fig. 7), using a bin width of 0.06 mag.
Figure 7. Projected spatial distribution of blue and red GC candidates, indicated with blue circles and red triangles, respectively. The dashed lines show the three radial ranges used to study the colour distribution. The galaxy centre is marked with a cross.
Fig. 8 depicts the three colour distributions and Table 2 shows the corresponding results of the GMM test. According to the DD parameters and kurtosis obtained, it is also acceptable to consider bimodal distributions for the subsamples in the three concentric regions. As the f red parameter indicates, the blue subpopulation clearly dominates in all galactocentric ranges, unlike other bright elliptical galaxies where in the innermost region the weight of both subpopulations is similar (e.g. Caso et al. 2019). As can be noticed from Table 2, the mean (g − i) 0 colours of the blue and red subpopulations remain at approximately similar values for the three subsamples and for the total population, except that the blue peak gets bluer with increasing radius (we will come back to this in the Discussion) and the red peak of the intermediate region is bluer than the rest, though the latter also has the largest error. Globally, these mean values mostly agree with those found in other studies of GCSs in the same photometric system, that is µ ≈ 0.85 and µ ≈ 1.07 for the blue and red peaks, respectively (e.g. Harris 2009; Forbes et al. 2011, and references therein). Moreover, the fraction of metal-rich clusters in the inner and intermediate regions is larger than in the outermost one, which is in agreement with the idea that this red subpopulation is more concentrated towards the host galaxy and thus closely related to its stellar component. Blue-tilt In the colour-magnitude diagram depicted in Fig. 5, it can be clearly seen that as we consider brighter blue GC candidates, they get redder. This behaviour has been generally called the 'blue tilt' and, in our case, it extends over the whole luminosity range. Also, some authors refer to it as a 'mass-metallicity relation (MMR)' (e.g. Harris et al. 2006), applied to this colour-luminosity trend followed by the metal-poor GCs in many bright galaxies, but not all of them. In order to characterize the blue tilt, Fig. 9 shows the colour-magnitude diagram, differentiating the red and blue GC candidates by taking (g − i) 0 = 0.95 (Faifer et al. 2011) as the limiting colour between both subpopulations. In addition, the large dots represent the mean colour of different adjacent subsamples in each subpopulation, each subsample with an equal number of GC candidates (50 for the red candidates and 65 for the blue ones). It can be seen that in the case of the blue GC candidates, a correlation between colour and magnitude is present, as mean colours are tilted towards the red as we consider brighter GCs.
By means of a linear least-squares fit to those mean blue colours, we obtained a slope of d(g − i) 0 /di 0 = −0.053 ± 0.015 (a chi-square test indicates that the fit represents the distribution with 90 per cent confidence). Thus, it is in agreement, within uncertainties, with that obtained in the same photometric system by Wehner et al. (2008) for NGC 3311, the central galaxy of the Hydra cluster, and slightly larger than the one obtained by Escudero et al. (2015) (d(g − i) 0 /di 0 = −0.026 ± 0.007) for a bright lenticular, NGC 6861. Projected spatial and radial distributions Fig. 10 shows the projected spatial distribution of the GC candidates surrounding the galaxy NGC 3613. It is divided into the blue and red GC subpopulations, according to the adopted colour limit, (g − i) 0 = 0.95. The corresponding projected density is superimposed as a smoothed distribution, as well as a few contours of constant numerical density. As already indicated by the decreasing fraction of red GCs with galactocentric distance, it is clear from Fig. 10 that the red GC subpopulation is more concentrated towards the centre of the galaxy, while the blue subpopulation is more extended and evenly distributed in an approximately circular distribution. The contours of the red GCs are elliptical, with the major axis oriented in a similar direction to the host galaxy starlight. The projected radial distributions for all GC candidates and for both subpopulations, corrected by contamination and by completeness, are presented in Fig. 11. All the radial profiles were fitted with power-laws to calculate the respective slopes. Due to saturation at the galaxy centre, the fits were performed for r > 0.35 arcmin. The power-law is expressed as $\log_{10}(N) = d + e\,\log_{10}(r)$, where r is the galactocentric radius and d, e are the fitted coefficients. The corresponding results are presented in Table 3.
Table 3. Coefficients of the power-law fitted to the radial profiles for all, blue, and red GC candidates.
As can be seen in Fig. 11, the power-law provides good fits for the blue and red subpopulations, excluding from the fit of the latter subpopulation the furthermost point. However, the power-law fit is not as good for the whole sample. Then, a modified Hubble distribution (Binney & Tremaine 1987) was also fitted to the whole sample profile, within the same radial range, to take into account the evident change of slope present in the profile. In previous works we have obtained good fits this way (e.g. Caso et al. 2017). By means of the Hubble profile, $N(r) = a\,\big[1 + (r/r_0)^2\big]^{b}$, where r is the galactocentric radius and a, b, r 0 are the fitted coefficients, we obtained the following values: a = 137 ± 15 GCs arcmin^-2, r 0 = 1.04 ± 0.24 arcmin and b = −1.15 ± 0.19. This fit is better than the one obtained with a power-law, particularly for the innermost points, where the destruction of GCs (Kruijssen et al. 2012; Kruijssen 2015) must have affected the profile. As a third option, we fitted a Sérsic model (Sersic 1968) to the whole GC sample, which resulted quite similar to the Hubble profile and gave an effective radius R eff = 1.97 ± 0.16 arcmin (17 kpc) for the total projected GC distribution (Fig. 11, upper panel). This value is slightly larger than those obtained by Usher et al. (2013) for NGC 4278 (12.7 kpc), and Kartha et al. (2014) for NGC 720 and NGC 2768 (13.4 kpc and 10.5 kpc, respectively), all E galaxies with luminosity similar to NGC 3613.
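As an illustration of the profile fitting described above, the sketch below fits both the power-law and the modified Hubble forms to a toy background-corrected density profile using SciPy; the density values are invented for demonstration and do not correspond to the measured profile.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(r, d, e):
    # log10(N) = d + e * log10(r), returned as a linear density.
    return 10.0 ** (d + e * np.log10(r))

def modified_hubble(r, a, r0, b):
    # N(r) = a * (1 + (r/r0)**2)**b  (modified Hubble profile).
    return a * (1.0 + (r / r0) ** 2) ** b

# Toy background-corrected densities (GCs per arcmin^2) at a few radii (arcmin).
r = np.array([0.4, 0.7, 1.0, 1.5, 2.0, 2.8, 3.6])
density = np.array([120.0, 80.0, 55.0, 30.0, 18.0, 9.0, 5.0])

pl_pars, _ = curve_fit(power_law, r, density, p0=(2.0, -1.5))
hb_pars, _ = curve_fit(modified_hubble, r, density, p0=(140.0, 1.0, -1.2))
print("power law  d, e     =", pl_pars)
print("Hubble     a, r0, b =", hb_pars)
```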
We assume that the total extension of the GCS is reached at the radius where the background-corrected density, corresponding to the Hubble profile, equals 30 per cent of the background level. Such a criterion was first used by Bassino et al. (2006) in a wide-field study of the GCS of NGC 1399, based on three MOSAIC II (CTIO) images (FOV: 36 × 36 arcmin each). The galactocentric radius corresponding to 30 per cent of the background was the largest distance from the host galaxy where GCs and the background could be separated, the density distribution being flat further out. If we consider this limit, which has also been adopted in subsequent works (e.g. Caso et al. 2013), the GCS of NGC 3613 exceeds the FOV of our images. Thus, we obtain an extension of r = 8.1 arcmin, that is r = 70 kpc. Azimuthal distribution Fig. 12 shows how the GC subpopulations are distributed with respect to the position angle (PA), which is measured from north to east with vertex at the galaxy centre. Such distributions were estimated considering an annulus defined by the largest possible outer radius such that the whole annulus was contained within the FOV, i.e. 48 < R g < 102 arcsec. It was divided into angular sections of ≈ 30° and the GC number density was calculated for each bin. It can be seen that the blue GCs do not show any particular behaviour, as theirs is basically a rather uniform distribution, except for a slight drop at PA ∼ 300°. On the other hand, the red GCs show a sinusoidal behaviour, with two clear over-densities at PA that differ by approximately 180°. As expected, the position of these over-densities agrees with what is obtained from the contours of constant density in the GC projected spatial distribution (see Fig. 10), defining the same direction as the ellipse major axis. In order to fit the red GC distribution, we used the sinusoidal function $N_{red}(PA) = A + B\,\sin(2\,PA + \phi)$, where N red is the density of red clusters, PA is the position angle, A is the offset of the symmetry axis, B is the amplitude and φ/2 is the phase shift. The parameters resulting from the fit are A = 8.50 ± 0.36, B = −2.02 ± 0.51 and φ = 52° ± 14°. According to them, the PA of the maximum, i.e. the first over-density, is ∼ 109°. We also calculated the ellipticity of the projected distribution of the red GCs by means of the expression proposed by Dirsch et al. (2003) and obtained a value of ε = 0.37. We note that this analysis of the azimuthal distribution applies to just a fraction of the GC candidates, those located within the annulus defined above, while the rest of the GC population is not included. In addition, as the photometry of objects that are close to the borders of the image is usually not very accurate, the outer radius of the annulus was reduced. Luminosity function and GC population Fig. 13 shows the background- and completeness-corrected globular cluster luminosity function (GCLF), using a bin width of 0.25 mag. Two Gaussians were fitted to the GC candidates with i' ≤ 24.9, excluding fainter ones due to declining completeness. One fit was performed leaving all the parameters free (solid line) and the other one using a fixed mean (turn-over), which was calculated with the adopted distance modulus and a universal absolute visual magnitude M V 0 = −7.4, taken from Richtler (2003). Afterwards, we converted V 0 to i 0 using photometric transformation equations. There are no notable differences between the two options. Therefore, from here on we will consider the results of the Gaussian fitted with all parameters free.
We obtained a turn-over of i 0 = 24.37 ± 0.25 with a dispersion of 1.26 ± 0.20, which corresponds to a distance modulus (m − M) = 32.37 ± 0.2. This value is in agreement, within uncertainties, with the distance modulus (m − M) = 32.39 ± 0.14 given by Tully et al. (2013), which is based on surface brightness fluctuations. In order to calculate the GC population, we integrated the Hubble law fitted to the radial distribution, assuming that a background-corrected density of 30 per cent of the background sets the limit of the system (see Section 4.4). Afterwards, we applied another correction to take into account that, according to the GCLF, this first result corresponds only to GCs brighter than i 0 = 24.9, and we want to consider the whole population. Finally, we obtained a total GC population of N tot = 2075 ± 130 members. The specific frequency S N is defined as the number of GCs per unit M V of host galaxy luminosity (Harris & van den Bergh 1981), which was considered to be closely linked to the formation efficiency of GCs (McLaughlin 1999). We obtained a value S N = 5.2 ± 0.7, after calculating the absolute V magnitude (M V = −21.5 ± 0.14) by means of the total V 0 magnitude obtained from NED and the adopted distance modulus. We can see that the specific frequency of the GCS of NGC 3613 falls within the typical range expected for early-type galaxies of similar luminosity (Brodie & Strader 2006; Peng et al. 2008; Georgiev et al. 2010; Harris et al. 2013). According to the model of GC formation presented by Kruijssen (2015), where the definition of specific frequency normalized by host-galaxy stellar mass is used, the way GCs form from the interstellar medium in discs and the subsequent disruption they suffer are the main physical processes shaping the behaviour of the specific frequency with respect to galaxy stellar mass. SURFACE PHOTOMETRY OF NGC 3613 Fig. 14 (top panel) shows the surface-brightness profile of NGC 3613 in the i'-band (reddening-corrected surface brightness µ i0 versus equivalent radius r eq), obtained with Iraf through the ELLIPSE task. We used Sérsic models to fit the galaxy profile, and the best fit was provided by the addition of three components, as all fits with fewer components led to systematic residuals. The expression for each Sérsic model is $\mu(r) = \mu_0 + 1.0857\,(r/r_0)^{1/n}$, where µ is the surface brightness (in units of mag arcsec^-2), µ 0 is the central surface brightness, r 0 is a scale parameter and n is the Sérsic shape index (n = 1 corresponds to an exponential profile and n = 4 to a de Vaucouleurs profile). The resulting residuals are shown in Fig. 14 (bottom panel). The parameters for the three fitted components are presented in Table 4. We have also included the respective effective radii, according to the relation $r_{\rm eff} = b_n^{\,n}\, r_0$, where b n is a function of the n index that may be estimated with the expression given by Ciotti (1991). The fitting parameters of our intermediate and outer components are in agreement, within uncertainties, with those of the bulge and exponential disk obtained by Krajnović et al. (2013) (ATLAS 3D project) through a two-component fit. In particular, they point out that the median Sérsic index of the bulge is n = 1.7 for galaxies classified as fast rotators, i.e. close to our value for NGC 3613 (n = 1.6). The presence of three components in massive E galaxies, like our present target, has already been pointed out by several authors. For instance, Huang et al.
(2013a) present a study of nearby Es from the Carnegie-Irvine Galaxy Survey and show that the two-dimensional surface-brightness distributions of most of them can be described by a compact core as the inner component, an intermediate component as the main body, and an outer envelope. For a sample of close to 100 galaxies, they obtain Sérsic indices n ≈ 1 − 2 for the components, in agreement with the values obtained for NGC 3613, though we perform a one-dimensional analysis. Multiple components in this type of galaxy (Huang et al. 2013a,b; Oh et al. 2017, and references therein) are understood as the consequence of a two-phase formation scenario. At high redshift (z ≥ 3), the evolution is dominated by in-situ star formation owing to highly dissipative processes, from which the inner substructure of the galaxies derives. On the other hand, the outer extended envelopes were built up during a later phase, mainly dominated by accretion through 'dry' minor mergers. Fig. 15 shows the parameters of the isophotes obtained with ELLIPSE for NGC 3613, as a function of r eq: the ellipticity ε (top panel), the position angle PA measured positive from N to E (middle panel), and the A4 Fourier coefficient, which represents disky and boxy isophotes for A4 > 0 and A4 < 0, respectively (bottom panel). The values of ε are mostly higher than 0.4, which is typical of fast rotators such as NGC 3613 (Cappellari et al. 2011b). Changes in the isophotal parameters, at r eq ∼ 20 and ∼ 55 arcsec, agree with the dominance of different components in the brightness profile. Fig. 16 shows the final combined GMOS image (i'-band) of NGC 3613, where the boxy shape of the outer isophotes is evident (A4 < 0). Five UCD candidates have been identified with squares in the surroundings. Globally, ε and PA agree with those given by Krajnović et al. (2011) in the context of the ATLAS 3D project. Fig. 17 shows the GMOS image obtained by subtracting, from the original image, a smoothed model of the surface-brightness distribution of the galaxy, performed with ELLIPSE and BMODEL. In this residual image, there is an observable substructure at a low surface-brightness level. There is a plume towards the left side of the galaxy, pointing to the south, that is detectable in the original image (Fig. 16), so that it cannot be a spurious residual of the image processing. Another plume is present on the opposite side, pointing to the north. A bright x-shaped residual located in the central region may be connected to these plumes. An inner stellar disk is aligned with the major axis of the galaxy isophotes (see also Ebneter et al. 1988). All this underlying substructure can be understood as another indication of the multiple components identified in the galaxy, related to its formation history, where the plumes may be tidal remnants of past accretions (e.g. Barnes & Hernquist 1992; Hernquist & Spergel 1992). On the other hand, we find no clear evidence of interaction with NGC 3610. Relation between GC subpopulations and the host galaxy In many early-type galaxies, a close relationship has been observed between the stellar component and the red GC subpopulation, detected in the kinematics (e.g. Pota et al. 2013), in their radial projected distributions (e.g. Ko et al.
2019), as well as in the shape (measured by the ellipticity ε) of the red clusters and the stellar light distribution (e.g. Park & Lee 2013). In the case of NGC 3613, we do not have enough radial coverage to determine colour gradients, but we can analyse the trend of the mean colours for blue and red GCs at three different radial ranges (Table 2). The blue peak of the inner radial range is redder than those of the intermediate and outer ranges, while the red peak does not present any clear variation with radius. That the blue peak gets bluer with increasing radius is in agreement with massive elliptical galaxies located at the centre of clusters (e.g. Bassino et al. 2006 in Fornax; Caso et al. 2017 in Antlia), although NGC 3613 is considered just the central galaxy of a group. Regarding the red GC population, we noticed that the PA of the two over-densities detected in their projected azimuthal distribution (i.e., PA ∼ 110° and ∼ 290°) correspond, as expected, to the orientation of the major axis of the galaxy elliptical contours. In addition, the ellipticity of the projected distribution of the red clusters resulted in ε = 0.37. We calculated a mean ε and PA for the host galaxy isophotes with semi-axes between 48 < R g < 102 arcsec (i.e. the same radial range used for the GC azimuthal distribution), resulting in <ε> = 0.47 (σ = 0.017) and <PA> = 97° (σ = 0.14). The shape parameters of the light distribution are very similar to those of the projected red GC distribution, while there is no obvious relation to the blue GCs. Both effects can be related to the formation history of the galaxy, as it is generally accepted that most massive early-type galaxies in the local Universe form in two phases (e.g. van Dokkum et al. 2015). Ultra-compact dwarf candidates Five UCD candidates have been detected in the colour-magnitude diagram of point sources, shown in Fig. 5 with empty circles. This small sample, according to our photometry, shows colours within the range corresponding to GCs, but their i' magnitudes are brighter than expected for a GC (assuming a limit at i 0 = 20.8, as explained in Section 4.1). The positions of these UCD candidates are also identified on our GMOS image (Fig. 16), where they appear surrounding NGC 3613 at galactocentric radii between 66 and 121 arcsec, i.e. well within the radial range covered by the GC candidates. Their colours are in the range 0.85 < (g − i) 0 < 1.02 and their absolute magnitudes −11.8 < M i < −11.5 according to the adopted distance. If we compare with the M i versus (g − i) 0 colour-magnitude diagram presented by Brodie et al. (2011, their fig. 5) for the sample of M 87 UCDs, our candidates fall in the same locus as the M 87 ones. We plan to obtain spectra of these UCD candidates in the near future, in order to confirm membership with radial velocities and analyse physical properties like metallicity, age, stellar populations, etc. SUMMARY AND CONCLUSIONS We present the first photometric study of the GCS of the bright elliptical galaxy NGC 3613, which is located at the centre of a galaxy group but has an intrinsic brightness typical of a brightest cluster galaxy. On the basis of g', r', i' Gemini/GMOS images, not only the properties of the GCS but also the surface photometry of the host galaxy were investigated. In addition, its distance was confirmed by means of the GC luminosity function, and five new UCD candidates were discovered. The principal results are summarised here: • The GC colour distribution is bimodal, considering the whole sample or three different radial ranges.
The mean colour of the blue GCs gets slightly bluer with increasing radius, which is understood as a hint that these metal-poor clusters may have been accreted with satellite galaxies.

• The blue GC subpopulation follows a colour-magnitude relation in the sense that brighter clusters are redder, i.e. the so-called blue tilt, for whose interpretation several scenarios have been proposed. No equivalent relation is present in the red GC subpopulation.

• The spatial, radial and azimuthal projected distributions of the red GC subpopulation show that these clusters are more concentrated towards the host galaxy and closely trace the shape of the galaxy light isophotes. These effects point to a common origin of the galaxy stellar component and the majority of the metal-rich GCs. The blue GC subpopulation presents a mostly uniform and more extended projected distribution.

• By means of the turn-over of the GC luminosity function, we obtain a distance of 29.8 ± 2.8 Mpc, in agreement within uncertainties with the initially adopted value of 30.1 Mpc (Tully et al. 2013). The total GC population is estimated at N_tot = 2075 ± 133 GCs and the specific frequency at S_N = 5.2 ± 0.7. Both values are typical of GCSs in host galaxies of luminosity similar to that of NGC 3613.

• There is noticeable substructure in the surface-brightness distribution of NGC 3613, detected in the original and residual images. It may be a sign of past tidal interactions but cannot be clearly related to any interplay with its neighbour, the merger remnant NGC 3610. We also find no evidence of such interaction in the GC projected distributions.

• We find a sample of five new UCD candidates in the outskirts of NGC 3613, brighter than the regular GCs but within the same colour range. We plan to continue studying them with spectroscopy in the near future.
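As a quick arithmetic illustration of the turn-over-based distance quoted above, the standard distance-modulus relation can be sketched in a few lines of Python. Both magnitudes below are illustrative assumptions, not the values fitted in this work; they are chosen only so that the result lands near the ~30 Mpc distance quoted here.

# Illustrative (not this paper's fitted) GCLF turn-over magnitudes:
m_TO = 24.9    # hypothetical apparent turn-over magnitude
M_TO = -7.5    # assumed universal absolute turn-over magnitude

mu = m_TO - M_TO                    # distance modulus, mu = 5 log10(d / 10 pc)
d_pc = 10.0 ** ((mu + 5.0) / 5.0)   # distance in parsecs
print(f"mu = {mu:.2f} mag -> d = {d_pc / 1e6:.1f} Mpc")  # ~30 Mpc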
2020-01-23T09:21:05.971Z
2020-03-01T00:00:00.000
{ "year": 2020, "sha1": "e6f3451281ceaab16fd241edde30b69f8a637502", "oa_license": "CCBYNCSA", "oa_url": "http://sedici.unlp.edu.ar/bitstream/handle/10915/124458/Versi%C3%B3n_preliminar.pdf?sequence=1", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1770ea55bf5793425dc730eb488d545cc998b4c6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
8421294
pes2o/s2orc
v3-fos-license
Trypanosoma cruzi Infection in Didelphis marsupialis in Santa Catarina and Arvoredo Islands, Southern Brazil

Between 1984 and 1993 the prevalence of Trypanosoma cruzi infection in opossums (Didelphis marsupialis) was studied in Santa Catarina and Arvoredo Islands, State of Santa Catarina, Brazil. The association of the triatomine bug Panstrongylus megistus with opossum nests and the infection rate of these triatomines by T. cruzi were also studied. Thirteen different locations were studied in Santa Catarina Island (SCI), in which 137 D. marsupialis were collected. Sixty-two opossums were collected at Arvoredo Island (AI), located 12 miles north of SCI. All captured animals were submitted to parasitological examinations that revealed the presence of T. cruzi in 21.9% of the opossums captured in SCI and 45.2% of the opossums captured in AI. The presence of P. megistus was detected in most of the D. marsupialis nests collected in SCI; however, in the non-inhabited AI only eight triatomines were collected during the whole study. The presence of T. cruzi-infected D. marsupialis associated with P. megistus in human dwellings in SCI, and the high infection rate of D. marsupialis by T. cruzi in the absence of a high vector density, are discussed.

Trypanosoma cruzi, the etiological agent of Chagas disease, is a protozoan parasite that infects over 200 sylvatic or domestic mammalian species and subspecies from seven different orders in Central and South America, including man. Marsupials, mainly those from the genus Didelphis, have been cited as one of the most important reservoirs of T. cruzi in several Latin American countries (Barretto et al. 1964, Travi et al. 1994). Previous studies revealed distinct infection rates of Didelphis sp. by T. cruzi in different regions of Brazil, such as 35.7% in Rio de Janeiro (Guimarães & Jansen 1943), 24% in the Amazon basin (Miles 1976), 20.6% in Mambaí, State of Goiás (Mello 1982) and 37.9% in Bambuí, State of Minas Gerais (Fernandes et al. 1991).

Due to its synantropic behaviour, frequently invading human dwellings in both rural and urban areas, the opossum is commonly bitten by sylvatic and/or peridomestic triatomine species. Once infected by T.
cruzi, these marsupials usually present high infection rates and long-term parasitemia, acting as important links between the sylvatic and domestic transmission cycles of the parasite (Barretto et al. 1964, Zeledon et al. 1970).

Although the State of Santa Catarina is non-endemic for human Chagas disease, previous studies revealed that 84.5% and 66.6% of P. megistus and Rhodnius domesticus, respectively, were infected by T. cruzi (Schlemper Jr et al. 1985). Recently, adults and nymphs of P. megistus were found in several artificial ecotopes as well as in human dwellings in Santa Catarina Island (SCI), indicating the high potential of this species to invade and colonize human dwellings, as observed in the State of São Paulo (Forattini et al. 1982, Steindel et al. 1994).

In this work we studied the prevalence of T. cruzi among D. marsupialis (= D. aurita) captured in two distinct islands in Santa Catarina, and we discuss the presence of this opossum associated with P. megistus in human dwellings as an important risk factor for the transmission of Chagas disease.

MATERIALS AND METHODS

Study site - SCI, also known as Florianópolis, has an area of 425 km² and is located in Santa Catarina, southern Brazil. Around 500,000 people currently live on this island, where only 15-20% of the original Atlantic forest remains intact. For this study, SCI was divided into north, central and south regions, and 13 localities were studied. Arvoredo Island (AI), with a 7 km² area, is located 12 miles north of SCI and is mostly covered by a well-conserved forest. On AI, a Brazilian navy base (Marinha do Brasil) houses only a few men (3-5), who usually stay for periods of 12-24 months (Figure).

Animal captures - Animals were collected on both islands from 1984 up to 1993, manually or with live traps (20 × 20 × 60 cm) set up in sylvatic, peridomestic or domestic environments, as described by Fernandes et al. (1990). Animals captured by inhabitants were also included in this study. After sex and weight determination, each animal was subjected to parasitological examinations to detect T. cruzi infection. Fresh and Giemsa-stained smears, hemoculture in LIT (liver infusion tryptose) medium as described by Luz et al. (1994), and xenodiagnosis performed with 15 nymphs of 4th/5th instar of P. megistus were used to detect T. cruzi infection.

Whenever possible, the scent glands were examined for the presence of T. cruzi. Other marsupial and rodent species were captured on both islands during this study. These animals were also examined for T. cruzi infection as described above.

Animals captured on AI were either examined in the field or brought to the laboratory. Thirty-five opossums captured on AI were tagged and released for recapture after four to six months. For nest localization, six adult D. marsupialis were followed using the method described by Miles (1976). Briefly, this method uses a backpack attached to the opossum, which contains a line reel. The animal is released at the same capturing site and the line is tracked to its end. Comparison of the percentage of infected animals on the two islands was performed by the chi-square (χ²) test.
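As a minimal sketch of this chi-square comparison, the counts reported in the Results below (30 of 137 infected opossums in SCI, 28 of 62 in AI) can be tested with scipy; the variable names are ours, not from the original analysis.

from scipy.stats import chi2_contingency

# Rows: island (SCI, AI); columns: (T. cruzi positive, negative)
table = [[30, 137 - 30],
         [28, 62 - 28]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# The small p-value reproduces the significant SCI vs AI difference
# reported in the Results.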
Triatomine search - On both islands, triatomines were systematically searched for in opossum nests in tree holes, rocks, palm trees and bromeliads, in both sylvatic and peridomiciliar environments, and also in houses and other human-made dwellings. On AI, triatomines were also captured using light traps. All captured triatomines had their intestinal contents and feces examined for the presence of flagellates by fresh and Giemsa-stained smears. T. cruzi was isolated by sub-inoculation of positive feces in Swiss albino mice or by xenoculture as previously described (Bronfen et al. 1989).

RESULTS

A total of 199 D. marsupialis were collected on the two islands, 137 (63 males, 70 females and 4 undetermined) at SCI and 62 (27 males and 35 females) at AI. The T. cruzi infection rates among these animals were 21.9% and 45.2%, respectively (Table).

At SCI, 13 out of 137 animals were positive by fresh blood examination (9.5%), 25 by xenodiagnosis (18.2%) and 21 by hemoculture (15.3%), in a total of 30 T. cruzi-positive opossums. Among all D. marsupialis captured in SCI, mostly from the Lagoa da Conceição, Trindade and Córrego Grande localities (central region), 27.7% were captured in human dwellings (houses and annexes), 17.5% in the peridomicile (storage houses) and 45.2% in the sylvatic environment (Table). No T. cruzi infection of the scent glands was detected among 71 animals examined on this island. T. cruzi infection was found in 12.4% of the opossums captured in human-related dwellings.

Two other marsupial species (Lutreolina crassicaudata and Marmosa cinerea) were captured in SCI. No T. cruzi infection was detected among the 34 captured animals (28 L. crassicaudata and six M. cinerea).

At AI, six out of 62 opossums were positive for T. cruzi by fresh blood examination (9.7%), 28 by xenodiagnosis (45.2%) and 12 by hemoculture (19.3%). Among the 28 T. cruzi-positive D. marsupialis, two had parasites in the scent glands (7.1%), as determined by fresh and Giemsa-stained smears. All positive samples were isolated by culture in LIT medium. The number of infected opossums on AI was significantly higher than on SCI (p < 0.001).

From a total of 35 opossums examined, tagged and released at the same capturing site on AI, six females were recaptured. One female, originally negative for T. cruzi infection by all methods, became infected within a six-month period, as revealed by xenodiagnosis and hemoculture, indicating active transmission on AI. Following the method used by Miles (1976), six opossums were monitored on AI. Two animals were recovered and their nests, one in a tree hole and the other under a rock formation, were negative for triatomines. Two other line ends were found in rocks and two were disrupted. Neither the 12 rodents (Oryzomys sp.) captured on this island nor the six soldiers of the Brazilian navy living there were positive for T. cruzi infection.

A total of eight P. megistus (five adult females and three nymphs) were captured on AI during the whole study. The five females were captured using light traps, while the three nymphs were found in an opossum nest in a tree hole. Six triatomines (three nymphs and three adults) revealed the presence of T. cruzi in their feces. Two others were collected by navy soldiers and preserved in alcohol.

In contrast, the search for triatomines in SCI revealed the presence of P. megistus in the sylvatic environment, where 268 nymphs and six adults were collected, as well as in human dwellings, where 305 nymphs and 24 adults were captured. The T.
cruzi infection rates among these triatomines were 86.1% and 55.3%, respectively.

During this study, 38 T. cruzi strains were isolated from D. marsupialis and 28 strains from P. megistus collected in both SCI and AI. These strains were characterized by Steindel et al. (1993, 1995) by biological, biochemical and molecular methods.

Despite the existence of T. rangeli in Santa Catarina (Grisard et al. 1999), none of the animals examined during this study were infected by this parasite.

DISCUSSION

Opossums of the genus Didelphis have a wide distribution in South America, being frequently observed in close association with triatomines and human dwellings in both sylvatic and urban areas. Rodents and marsupials are not only the most important blood source for several triatomine species, but also their major source of T. cruzi infection (Rocha e Silva et al. 1975). Studies on T. cruzi infection in Didelphis sp. in different regions of Brazil revealed infection rates varying from 20.6% to 37.9% (Guimarães & Jansen 1943, Miles 1976, Mello 1982, Fernandes et al. 1991). In the present work, T. cruzi infection rates of 21.9% and 45.2% were found in D. marsupialis captured in Santa Catarina and Arvoredo Islands, respectively.

Previous studies reported the association of T. cruzi-infected triatomines (P. megistus and R. domesticus) with rodent and marsupial nests in wild areas of SCI (Leal et al. 1961, Schlemper Jr et al. 1985). The association of P. megistus with D. marsupialis in artificial ecotopes, including human-made dwellings in SCI, has also been reported (Steindel et al. 1994). A similar report was made by Forattini et al. (1982) in the State of São Paulo.

Using the precipitin test to evaluate the food source of these triatomines, Steindel et al. (1994) found that 80.6% of the 31 adult P. megistus captured in artificial ecotopes had fed on humans. Moreover, T. cruzi infection was detected in 55.3% of these triatomines.

In the present study, 62 out of 137 D. marsupialis (45.2%) were captured in human dwellings in SCI. Natural T. cruzi infection was confirmed in 17 (27.4%) of these opossums.

The massive destruction of the original Atlantic forest in SCI, allied to the construction of houses close to the remaining forest, has made contact of humans with both opossums and P. megistus frequent. In contrast, AI is a federal reserve with a well-conserved Atlantic forest. Besides the five Brazilian navy houses, no other human-made dwelling is present on this island, where T. cruzi circulates among animals in a sylvatic environment. Moreover, opossum meat is still appreciated as an exotic food by native inhabitants of SCI.

The presence of T. cruzi-infected triatomines and opossums in human dwellings in SCI, as well as the detection of both human and opossum blood in T. cruzi-infected triatomines, indicates the risk of transmission of this parasite to humans. The same epidemiological situation was observed in the State of São Paulo, where Litvoc et al. (1990) detected the presence of P. megistus in D. azarae nests and an infection rate of 47.8% of these opossums by T. cruzi.

A related behavior was observed by Telford Jr and Tonn (1982) in the upper llanos of Venezuela. Studying the T. cruzi dynamics in D. marsupialis, these authors observed a prevalence of 55.2% and a close relation of this animal with R. prolixus and human dwellings.
In contrast, an extensive serological survey carried out among 5,831 inhabitants of SCI revealed a prevalence of infection of 0.034% (Carobrez et al. 1992). Thus, T. cruzi transmission in SCI occurs almost exclusively between P. megistus and D. marsupialis in the sylvatic environment. Moreover, SCI presents a low density of P. megistus in artificial ecotopes, such as human-made dwellings (Steindel et al. 1994). Despite the low prevalence in humans, the occurrence of naturally infected reservoirs and vectors in domestic environments at SCI does not rule out the possibility of finding human infection in this habitat.

A comparison of serological and parasitological tests to detect T. cruzi infection in 116 D. albiventris captured in Bambuí, State of Minas Gerais, revealed that 97.7% of the infected animals were positive in both tests (Fernandes et al. 1990). Having used fresh and Giemsa-stained smears, hemoculture and xenodiagnosis to detect T. cruzi infection in opossums during this study, we kept 34 negative animals in the laboratory and followed them for two months by parasitological and serological tests (indirect immunofluorescence). Since only one opossum was positive by either test, we conclude that T. cruzi infection in opossums can be readily detected using parasitological methods.

The T. cruzi infection rate among D. marsupialis captured on AI, which is geographically isolated and has a well-conserved forest, was 45.2%. In contrast with the high T. cruzi infection rate among opossums, triatomines are scarce. P. megistus was the only species captured on this island, and all six triatomines examined were positive for T. cruzi. Moreover, opossum blood was detected in all three nymphs submitted to precipitin tests.

Two out of 28 (7.1%) opossums also presented T. cruzi in their scent glands. Natural T. cruzi infection in D. albiventris and D. marsupialis scent glands has been demonstrated by Fernandes et al. (1989) and by Naiff et al. (1987), who detected one positive gland out of 20 animals examined, and one out of 90, respectively. Our results are in agreement with these previous reports, confirming the low occurrence of naturally T. cruzi-positive scent glands in D. marsupialis. Moreover, we cannot infer that this possible transmission mechanism is responsible for the high prevalence of T. cruzi in opossums of AI. Other possibilities of vertical transmission, such as milk feeding, were studied and discarded by Telford Jr and Tonn (1982) and by Deane et al. (1986).

Characterization of 68 T. cruzi strains by biological, biochemical and molecular methods showed that strains from AI produce sub-patent parasitemia in Swiss mice and a high homogeneity of isoenzyme and randomly amplified polymorphic DNA profiles. Based on the same markers, strains isolated in SCI revealed a higher heterogeneity than that observed among strains isolated on AI (Steindel et al. 1995). These results can be explained by the geographical isolation of AI, where T. cruzi strains circulate almost exclusively among opossums. On the other hand, in SCI T. cruzi has been isolated from triatomines and a wide variety of mammals, rodents and marsupials, which may explain the higher heterogeneity observed.

The presence of T. cruzi in the scent glands of AI opossums may suggest a high adaptation between some parasite strains and the opossum. Deane et al. (1984, 1986) observed that only a few T. cruzi strains were able to infect the scent glands under controlled conditions.
Due to the omnivorous habits of opossums, another route of infection considered was the ingestion of T. cruzi-infected rodents. All 12 Oryzomys sp. captured and submitted to parasitological tests were negative for T. cruzi infection. We have not discarded this possibility; however, it appears to be infrequent.

Experimental infection of newborn D. marsupialis with T. cruzi strains from AI and SCI showed long-term blood parasitemia. The presence of T. cruzi was observed in 50% of the scent glands of these animals after a two- to three-month period, but only in opossums experimentally infected with strains isolated from AI (M Steindel, unpublished data).

Infection of the opossum scent glands suggests a high degree of host-parasite adaptation of some T. cruzi strains. Trypanosomes derived from opossum scent glands proved to be infective for mice and opossums under experimental conditions (Deane et al. 1986, Steindel et al. 1988).

Another hypothesis that may explain the high T. cruzi prevalence among opossums on AI is the ingestion of T. cruzi-infected triatomines. The insectivorous habits of these animals have already been demonstrated (Zeledon 1974) and must be considered as a possible T. cruzi infection source for the opossums on AI. The low number of triatomines found on AI does not explain the high prevalence of T. cruzi among opossums; however, more studies must be performed in order to better evaluate the triatomine density on this island. The existence of an alternative or unusual transmission mechanism of T. cruzi between D. marsupialis on AI cannot be neglected.

Although Santa Catarina is not an endemic area for human Chagas disease, the presence of D. marsupialis infected with T. cruzi in human dwellings in SCI must be considered an important risk factor for Chagas disease. Moreover, serving as a blood and T. cruzi infection source for P. megistus, these opossums act as links between the domestic and sylvatic transmission cycles.

Figure: Localization of the Arvoredo and Santa Catarina Islands off the coast of the State of Santa Catarina.

TABLE: Number of Didelphis marsupialis captured in Santa Catarina and Arvoredo Islands, the percentage of Trypanosoma cruzi-infected animals, and the number of captured and T. cruzi-positive animals per ecotope.
2018-04-03T00:00:36.909Z
2000-12-31T00:00:00.000
{ "year": 2000, "sha1": "dcdf0e1325b3d92f00795d5904ba0c685196461d", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/mioc/a/5wsJLKN8RxZCczZdYvRYzrh/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "dcdf0e1325b3d92f00795d5904ba0c685196461d", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
261554887
pes2o/s2orc
v3-fos-license
Antidepressant Use and Mortality Among Patients With Hepatocellular Carcinoma

This cohort study investigates the association between antidepressant use and mortality risk in patients with hepatocellular carcinoma.

Introduction

Liver cancer is the sixth most commonly diagnosed cancer and the third leading cancer-related cause of death worldwide, and among the different forms of primary liver cancer, hepatocellular carcinoma (HCC) is the most common, accounting for 75% to 90% of cases.1 Patients with early HCC can be treated with curative therapies including resection, transplant, and ablation, leading to an expected overall survival time of more than 6 years.2 However, the majority (>70%) of patients with HCC have a diagnosis at advanced stages because the symptoms of early HCC are not easily detected. Surgical intervention is not suitable for those with advanced HCC,3 and the median survival time at this point is only 8 to 19 months.2,4 There is therefore an urgent need for research into alternative anticancer therapies, and drug repurposing based on the potential anticancer effects of existing nononcological drugs is attracting interest as an approach.

Antidepressants are commonly used drugs that have potential anticancer effects.5 Preclinical and epidemiological studies examining the anticancer effects of antidepressants, including tricyclic antidepressants (TCAs), selective serotonin reuptake inhibitors (SSRIs), serotonin-norepinephrine reuptake inhibitors (SNRIs), and other atypical antidepressants, have increased. Early animal studies reported that SSRI use was associated with increased liver cancer,6 but other studies reported conflicting findings.7 Promising results for TCAs and SSRIs in HCC have been reported from in vitro and in vivo animal studies8-12 and epidemiological human studies.13,14 Although these epidemiological studies have reported associations of antidepressants with lower risk of HCC, associations with HCC prognosis have not been evaluated. Therefore, we conducted a national cohort study to examine the association between antidepressants and HCC prognosis. We investigated overall and cancer-specific mortality as HCC prognosis indices. High concordance between claim records for medication use in the NHIRD and patient self-report has also been established.16 The study was approved by the Research Ethics Committee of the Chang Gung Medical Foundation.

Study Design and Population

Written informed consent was not needed because this study used Taiwan's NHIRD, which covers all residents, with research legitimacy affirmed by the Supreme Administrative Court in 2017. This study was conducted in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.

Cohort Selection

A cohort of patients with a new HCC diagnosis was identified based on ICD-9-CM code 155.0 and ICD-10-CM codes C22.0, C22.2, C22.7, and C22.8 recorded in the NHIRD between January 1, 1999, and December 31, 2017. It has been reported that the use of prevalent cases may result in selection bias.

Main Exposure

Antidepressant prescription records were obtained from the NHIRD based on the Anatomical Therapeutic Chemical (ATC) code N06A. Antidepressant use was indicated by the presence of 1 or more prescriptions for antidepressants. We divided antidepressants into 3 classes: (1) SSRIs (ATC code N06AB), (2) SNRIs (ATC codes N06AX16 and N06AX21), and (3) TCAs (ATC code N06AA). A minimal sketch of this ATC-based classification is given below.
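The following Python/pandas sketch illustrates the ATC-prefix classification just described; the dataframe layout, column names, and sample records are hypothetical, not the NHIRD schema.

import pandas as pd

rx = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "atc_code": ["N06AB03", "N06AA09", "N06AX16", "N06AB06"],  # hypothetical records
})

def atc_class(code: str) -> str:
    # Order matters: the SNRI codes are specific N06AX items.
    if code.startswith("N06AB"):
        return "SSRI"
    if code.startswith(("N06AX16", "N06AX21")):
        return "SNRI"
    if code.startswith("N06AA"):
        return "TCA"
    if code.startswith("N06A"):
        return "other antidepressant"
    return "non-antidepressant"

rx["ad_class"] = rx["atc_code"].map(atc_class)
print(rx)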
Individuals in each antidepressant subgroup were not mutually exclusive. In addition to these 3 major classes of antidepressants, users of the remaining atypical antidepressants, such as bupropion, mirtazapine, and trazodone, were categorized as other antidepressants in Table 1 but were not counted in the subgroup analysis due to heterogeneity. To examine whether the timing of antidepressant use influenced the association between antidepressant use and mortality, we conducted separate analyses for antidepressant use before and after HCC diagnosis. In each comparison, the groups using antidepressants before and after the diagnosis of HCC were not mutually exclusive. For comparisons of antidepressant use after diagnosis, those taking antidepressants were defined by at least 1 antidepressant prescription in the exposure assessment window after the date of first HCC diagnosis to death or the end of 2017. Antidepressant use was considered as a time-varying exposure to avoid potential immortal time bias.18 Specifically, for individuals taking antidepressants, the period between HCC diagnosis and the time of the first antidepressant prescription plus a 90-day induction period19 was classified as nonexposure time, and the period thereafter during follow-up was classified as exposure time; a schematic implementation of this split is sketched after this section. To examine the influence of the induction period on the results, we performed sensitivity analyses using different induction period lengths (0 and 180 days).

Nonusers were defined as individuals who had no prescriptions for antidepressants in the 1 year before or after HCC diagnosis to death or the end of 2017. For nonusers, the entire follow-up period was classified as nonexposure time.

Covariates

Study covariates were included based on their relevance to antidepressant use and HCC prognosis and included demographic characteristics, comorbidities, and treatments for HCC.20,21 Covariate assessment windows were the time before the HCC diagnosis for relevant comorbidities, the date of HCC diagnosis for demographic characteristics, and the time after the HCC diagnosis for HCC treatments. Among these covariates, treatments for HCC were exclusive to the analysis of antidepressant use after HCC diagnosis. HCC treatment variables are more likely to be mediators in prediagnostic analyses, so adjusting for such variables would be unnecessary overadjustment.22

Comorbidities including hepatitis B virus (HBV) infection, hepatitis C virus (HCV) infection, liver cirrhosis, alcohol use disorder, and other diseases were defined as being present if diagnostic codes corresponding to these comorbidities were assigned to the patient at at least 3 outpatient visits or at least 1 hospital admission before HCC diagnosis. We used the Charlson Comorbidity Index (CCI) to determine general health conditions and the overall burden of comorbidities.20,23

Treatments for HCC after HCC diagnosis were considered as covariates in the analysis of antidepressant use after HCC diagnosis.20 Data regarding HCC treatments, including hepatic operation (lobectomy, segmentectomy, hepatectomy, liver transplant), radiofrequency ablation, transcatheter arterial embolization, and transcatheter arterial chemoembolization, were retrieved from the NHIRD inpatient and outpatient data based on ICD-9 and ICD-10 procedure codes. Data regarding sorafenib and chemotherapy for HCC, including fluorouracil, gemcitabine, docetaxel, irinotecan, doxorubicin, mitomycin, cisplatin, carboplatin, and oxaliplatin, were obtained from the NHIRD using ATC codes.
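The time-varying exposure classification defined in the Main Exposure subsection above can be sketched as follows; the dates and function names are hypothetical, and only the 90-day induction constant comes from the text.

from datetime import date, timedelta

INDUCTION_DAYS = 90  # induction period from the Main Exposure definition

def split_follow_up(dx_date, first_rx_date, end_date):
    """Return (unexposed_days, exposed_days) for one patient.

    Person-time before first prescription + induction is unexposed;
    person-time after is exposed. Never-users contribute only unexposed time.
    """
    if first_rx_date is None:
        return (end_date - dx_date).days, 0
    switch = first_rx_date + timedelta(days=INDUCTION_DAYS)
    switch = min(max(switch, dx_date), end_date)  # clamp to the follow-up window
    return (switch - dx_date).days, (end_date - switch).days

# Example: diagnosis 2010-01-01, first antidepressant 2010-06-01, follow-up to 2012-01-01
print(split_follow_up(date(2010, 1, 1), date(2010, 6, 1), date(2012, 1, 1)))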
Study Outcomes

Overall mortality and cancer-specific mortality were the primary outcomes in this study and were based on cause of death records in the NHIRD. Cancer-specific deaths were identified in the primary cause of death certification records based on ICD-9-CM code 155.0 and ICD-10-CM codes C22.0, C22.2, C22.7, and C22.8. The follow-up window for mortality was from the date of HCC diagnosis to the study end point (December 31, 2018).

Statistical Analysis

All data analysis was performed using SAS statistical software version 9.4 (SAS Institute) and was conducted on June 5, 2023. A 2-sided hypothesis test was used with a significance level set at .05. We reported the distribution of demographics, comorbidities, and hepatic treatment, and the median and IQR of the CCI score, for the antidepressant use and nonuse groups. Cox proportional hazards regression was performed to estimate hazard ratios (HRs) with 95% CIs for associations between antidepressant use before and after HCC diagnosis and the rates of overall and cancer-specific mortality, with adjustments for demographic factors, comorbidities, and hepatic treatment. Both crude and adjusted HRs were reported to provide a comprehensive understanding of the associations. 95% CIs that did not include 1 indicate statistical significance. Dose-response analyses were conducted to examine whether the duration of antidepressant use had a differential association with overall and cancer-specific mortality rates. We categorized the duration of antidepressant use into 2 groups, short-term users (1 to 90 days) and long-term users (>90 days), because of the practice of providing refillable prescriptions for up to 3 months to patients with chronic conditions in Taiwan.25,26 To investigate the association of antidepressant therapy with the prognosis of HCC in different etiological subgroups, we conducted a subgroup analysis focusing on specific subgroups, including HBV infection, HCV infection, liver cirrhosis, and alcohol use disorder. To determine whether there is an additional association of antidepressant use in conjunction with chemotherapy or sorafenib, we conducted a moderation analysis examining the association of combined use of sorafenib or chemotherapy with antidepressants compared with mortality. To address the type I error that may result from multiple comparisons and to provide more reliable results, we adjusted the P values for false discovery rate in both the main analysis and the subgroup analysis.27

Study Cohort

A total of 308 938 patients with HCC between 1999 and 2017 were identified from the NHIRD, after excluding 9173 patients with an HCC diagnosis before January 1, 1999.

Association Between Mortality and Antidepressant Use Before HCC Diagnosis

The crude overall mortality rates were 15.68 per 100 person-years in the antidepressant use group and 12.14 per 100 person-years in the nonuse group (Table 2). In patients with HCC, use of any type of antidepressant within 1 year before diagnosis was not associated with a lower overall mortality but instead with a slightly higher risk after adjustment for covariates (adjusted HR, 1.10; 95% CI, 1.08-1.12). Additional analyses were conducted to examine the specific association of antidepressant types, including SSRIs, SNRIs, and TCAs, with overall mortality. The results revealed no association with a reduction in overall mortality, with adjusted HRs ranging from 1.03 (95% CI, 1.00-1.05) to 1.16 (95% CI, 1.07-1.25) when compared with nonuse. A similar analysis was conducted for cancer-specific mortality.
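As a schematic of the time-varying Cox model and the false discovery rate adjustment described in the Statistical Analysis above, the sketch below uses the lifelines and statsmodels packages on simulated long-format data; all values are synthetic, and this is not the study's SAS code.

import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter
from statsmodels.stats.multitest import multipletests

# Simulate long-format records: one row per (patient, exposure interval).
rng = np.random.default_rng(0)
rows = []
for pid in range(300):
    t_end = rng.uniform(100, 1000)            # days of follow-up
    event = int(rng.random() < 0.6)           # death indicator
    t_rx = rng.uniform(0, 500) if rng.random() < 0.5 else None
    if t_rx is None or t_rx + 90 >= t_end:    # never effectively exposed
        rows.append((pid, 0.0, t_end, 0, event))
    else:                                     # split at first Rx + 90-day induction
        rows.append((pid, 0.0, t_rx + 90, 0, 0))
        rows.append((pid, t_rx + 90, t_end, 1, event))
long_df = pd.DataFrame(rows, columns=["id", "start", "stop", "exposed", "event"])

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
print(ctv.summary[["coef", "exp(coef)", "p"]])  # exp(coef) is the hazard ratio

# Benjamini-Hochberg FDR adjustment over several such comparisons
# (placeholder p-values standing in for the 11 tests mentioned in Table 2):
p_values = [0.001, 0.03, 0.20, 0.04, 0.5, 0.01, 0.002, 0.07, 0.6, 0.008, 0.09]
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(np.round(p_adj, 3), reject)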
To examine the possible influence of the length of the induction period, we used different induction periods (0 days and 180 days) to examine the association of postdiagnosis antidepressant use, including the use of different antidepressant types, with overall and cancer-specific mortality. The results were similar to those for a 90-day induction period (eTable 1 in Supplement 1).

Association Between Mortality and Antidepressant Use After HCC Diagnosis With Different Comorbidities

A subgroup analysis was conducted to examine the use of antidepressants after diagnosis in different comorbidity subgroups. Analyses showed significant inverse associations between postdiagnosis antidepressant use and both overall and cancer-specific mortality in patients with HCC with HBV infection, HCV infection, liver cirrhosis, and alcohol use disorder compared with nonuse (Table 3). All the P values adjusted using the false discovery rate remained statistically significant (<.05).

Association Between Mortality and Antidepressant Use After HCC Diagnosis With Chemotherapy or Sorafenib

To investigate the potential effect of combination therapies, we examined the interaction between postdiagnosis antidepressant use and HCC treatments such as chemotherapy and sorafenib. None of the interaction tests between antidepressant and chemotherapy or antidepressant and sorafenib revealed additional beneficial associations with overall mortality or cancer-specific mortality. This finding was consistent across different antidepressant subgroups (eTable 2 in Supplement 1).

Discussion

This is the first national cohort study on the association between antidepressant use and mortality risk in patients with HCC. Our results demonstrate that the use of antidepressants after HCC diagnosis, including SSRIs, SNRIs, and TCAs, was associated with decreased overall and cancer-specific mortality in a large, representative cohort. Notably, we observed a consistent inverse association between postdiagnosis antidepressant use and mortality risk across various comorbidity subgroups, encompassing HBV infection, HCV infection, liver cirrhosis, and alcohol use disorder. In contrast, no association was observed between the use of antidepressants before HCC diagnosis and reduced cancer-specific mortality or all-cause mortality.

Our study conducted 2 comparisons, examining the associations of antidepressant use before and after HCC diagnosis with mortality. The results showed inconsistent findings. Antidepressant use before HCC diagnosis was not associated with lower mortality risk, indicating that antidepressant use prior to HCC diagnosis may not be associated with the severity of HCC and its subsequent prognosis. However, the use of antidepressants after HCC diagnosis was inversely associated with both overall and cancer-specific mortality. This suggests that antidepressant use might be involved in the prognosis of cancer, not in its induction. A similar pattern was observed for statins: statin use after HCC diagnosis was associated with lower HCC mortality, but statin use before HCC diagnosis was not.20 Further studies are warranted to examine the potential mechanisms of antidepressant use on HCC mortality.

Considering the inverse association between overall mortality and antidepressant use after HCC diagnosis, several noncancer causes of death, such as unintentional injury, self-inflicted injury, and suicide, have been reported in patients with cancer, which may be reduced by antidepressant use.
For example, risks of motor vehicle crashes28 and suicide attempts have been reported to decrease following the initiation of antidepressant treatment. Furthermore, depression has been commonly observed in patients with HCC and has been associated with a lower quality of life, lower adherence to anticancer treatment, prolonged hospitalization, and higher mortality.29 Adherence to antidepressant treatment has been shown to have a positive impact on mortality in patients with depression and cancer.30 It is possible that the prescription of antidepressants may act as a mediator, indirectly influencing the overall mortality risk in patients with HCC by mitigating the negative health effects of depression.

In addition to observing a lower risk of overall mortality, we observed that antidepressant use after HCC diagnosis was inversely associated with cancer-specific mortality. This association remained consistent when specifically analyzing different antidepressants, including SSRIs, SNRIs, and TCAs. This might be explained by previously reported apoptotic effects of serotonergic antidepressants on cancer through the regulation of growth-stimulatory 5-HT activity connected to biochemical pathways involving mitogen-activated protein kinase, mitochondrial membrane potential, extracellular signal-regulated kinases, protein kinase B, and nuclear transcription factor-κB.31 The apoptotic activity of TCAs and SSRIs has been observed in different tumor cell lines including HCC.8,10,11,32,33 In addition, some SSRIs (fluoxetine and sertraline) have been reported to be effective chemosensitizers that increase the cytotoxicity of anticancer drugs and suppress the growth of HCC cells.32,34 However, our results did not reveal additional beneficial associations when examining the interaction between postdiagnosis antidepressant use and HCC treatments, including chemotherapy and sorafenib. The lack of synergistic interaction may be attributed to the different mechanisms of action of antidepressants and of chemotherapy or sorafenib. Further research is warranted to explore the potential biological mechanisms underlying the inverse association between antidepressants and HCC mortality.

Despite biological plausibility, studies have presented mixed results for associations of antidepressants with mortality in patients with cancer. Some studies have reported that antidepressant use in patients with depression and cancer has a beneficial effect on premature mortality,30 and antidepressant use was found to be associated with reduced mortality in lung cancer specifically.35 However, other researchers have reported contradictory findings, for example, that use of antidepressants in patients with melanoma, breast, prostate, lung, colorectal, and other cancers is associated with increased mortality.21,36
Strengths and Limitations

This study has several strengths. By using a nationally registered data set with 99% coverage of the Taiwanese population, this large-scale cohort study has national representativeness and low potential for selection bias. Furthermore, we adjusted for a range of potential confounders and took into consideration the potential induction periods of antidepressants, the timing of antidepressant use (before and after HCC diagnosis), and the types of mortality (overall and cancer-specific). In our analysis of antidepressant use after HCC diagnosis, we used time-varying exposure to minimize immortal time bias. Therefore, this study yielded robust results on the association between mortality and postdiagnosis antidepressant use in patients with HCC.

Several limitations may affect the results and interpretations of this study. First, information on some potential confounders was not available in the NHIRD, including adherence to antidepressant use, smoking status, body mass index, nutritional status, other health-related factors, and laboratory test results, and we thus could not include and adjust for them in our statistical models. Second, our study may have misclassification bias because some participants may not have adhered to antidepressant medication. However, such misclassification usually results in an underestimation of the effect of interest;37 therefore, the true association may be more pronounced. Third, the low cancer-specific mortality in our study could be attributed to our definition based on the primary cause of death certificate records, where complications of HCC (such as liver cirrhosis or infectious complications) may appear as primary causes of death, leading to an underestimation of cancer-specific mortality. Fourth, this study used Taiwanese population data, and the results might not be generalizable to other countries. Finally, it is important to acknowledge that the current evidence supporting the use of antidepressants in HCC is limited, and there are no HCC management consensus guidelines recommending their use.38,39 Given that antidepressant use in our study was not specifically targeted at HCC treatment and the study design was retrospective and observational, caution is warranted when interpreting the observed associations.

Conclusions

In this large population-based HCC cohort study, antidepressant use after HCC diagnosis was associated with lower overall and cancer-specific mortality among patients with HCC. Our study provides promising empirical results indicating that antidepressants may have utility as anticancer therapeutics in patients with HCC. However, our findings should be interpreted cautiously because the associations found in this observational study may not indicate causality and may be affected by residual confounding or biases. Definitive evidence would require evaluation in randomized clinical trials.
Enrollment in Taiwan's National Health Insurance program is mandatory for all residents; it is run by the Taiwanese government and covers 99% of Taiwan's population. The National Health Insurance Research Database (NHIRD) is comprehensive and includes information on medical procedures, prescriptions, and diagnoses in outpatient, inpatient, and emergency care. Individual medical records included in the NHIRD are anonymized to protect patient privacy. Diseases in the NHIRD were diagnosed and recorded using the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) before 2015 and the International Statistical Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) after 2015. Validation studies have demonstrated the validity of NHIRD diagnosis codes. The validity of cancer diagnoses recorded in the NHIRD has been tested by comparison with the Taiwan Cancer Registry, and the positive predictive value was 93% for liver cancer diagnoses recorded in the NHIRD.

Figure 1. Flowchart of Study Participants Based on Antidepressant Use Before Diagnosis (starting from 318 111 patients with HCC diagnosed in 1998-2017).

Figure 2. Flowchart of Study Participants Based on Antidepressant Use After Diagnosis.

To exclude prevalent cases of HCC that occurred before January 1, 1999, a 1-year exclusion assessment window was applied from January 1 to December 31, 1998. By excluding prevalent cases of HCC, we identified incident cases of HCC for analysis. The cohort entry date was the date of HCC diagnosis. The end of the study was December 31, 2018, to ensure a minimum 1-year follow-up.

Table 1. Characteristics of Patients With HCC by Antidepressant Use Before or After Diagnosis. (c: comorbidity before HCC diagnosis; d: procedures and medication use recorded after HCC diagnosis.)

The crude cancer-specific mortality rate was 0.52 per 100 person-years in the antidepressant use group and 0.42 per 100 person-years in the nonuse group. We observed that antidepressant use before HCC diagnosis did not have a significant association with lower cancer-specific mortality (adjusted HR, 1.06; 95% CI, 0.96-1.17). When examining specific subgroups of antidepressants, none of them showed an association with lower cancer-specific mortality, with adjusted HRs ranging from 0.99 (95% CI, 0.67-1.47) to 1.13 (95% CI, 1.09-1.17) when compared with nonuse.

Table 2. Association Between Antidepressant Use and Mortality in Patients With HCC. The total number of multiple tests adjusted in the false discovery rate method is 11 for crude and adjusted HRs.
a Adjusted for age, sex, low income, prediagnostic comorbidities (hepatitis B virus, hepatitis C virus, liver cirrhosis, alcohol use disorder), and Charlson Comorbidity Index score. Nonuse was defined as patients with HCC without an antidepressant prescription in the 1 year before the HCC diagnosis.

b Adjusted for age, sex, low income, prediagnostic comorbidities (hepatitis B virus, hepatitis C virus, liver cirrhosis, alcohol use disorder), Charlson Comorbidity Index score, and HCC treatment (operation, radiofrequency ablation, transcatheter arterial embolization/transcatheter arterial chemoembolization, radiotherapy, chemotherapy, sorafenib). Nonuse was defined as patients with HCC without an antidepressant prescription in the 1 year before and after the HCC diagnosis.

c False discovery rate-adjusted P value.

Table 3. Association Between Antidepressant Use and Mortality in Different Etiological Subgroups of Patients With HCC
2023-09-07T06:17:11.799Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "f8cac0b7446cd2ce2ddca4186a1e22c1cab19fdf", "oa_license": "CCBY", "oa_url": "https://jamanetwork.com/journals/jamanetworkopen/articlepdf/2808957/huang_2023_oi_230944_1692894098.43709.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "66439e80a11de81f5612f7b6cb69f8dda3f74be0", "s2fieldsofstudy": [ "Medicine", "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
17229466
pes2o/s2orc
v3-fos-license
Sea Surface Wind Retrievals from SIR-C/X-SAR Data: A Revisit

The Geophysical Model Function (GMF) XMOD1 provides a linear algorithm for sea surface wind field retrievals for the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR). However, the relationship between the normalized radar cross section (NRCS) and the sea surface wind speed, wind direction and incidence angle is non-linear. Therefore, in this paper, XMOD1 is revisited using the full dataset of X-SAR acquired over the ocean. We analyze the detailed relationship between the X-SAR NRCS, the incidence angle and the sea surface wind speed. Based on the C-band GMF CMOD-IFR2, an updated empirical retrieval model of the sea surface wind field called SIRX-MOD is derived. In situ buoy measurements and scatterometer data from ERS-1/SCAT are used to validate the sea surface wind speeds retrieved from the X-SAR data with SIRX-MOD, which respectively yield biases of 0.13 m/s and 0.16 m/s and root mean square (RMS) errors of 1.83 m/s and 1.63 m/s.

Introduction

The Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flew on two flights of the Space Shuttle Endeavour in April and October, 1994. SIR-C is operated at the L- and C-bands, each with quad-polarization. The German/Italian X-SAR is operated at the X-band with a single vertical-vertical (VV) polarization. SIR-C and X-SAR were designed to synchronously collect data over common sites. During the two successful missions, 300 sites were captured globally, and a valuable dataset of 143 terabits was acquired. SIR-C/X-SAR was radiometrically calibrated to assess the optimal SAR configurations for various key issues within the disciplines of ecology, geology, hydrology, and oceanography [1]. Although two decades have passed, the multi-frequency and multi-polarization capabilities of SIR-C/X-SAR remain unsurpassed [2]. The radar provides valuable SAR datasets for earth observation, and it pioneered subsequent developments of spaceborne SAR systems, such as the X-band SARs TerraSAR-X (TS-X) and COSMO-SkyMed (CSK).

Many interesting studies have been conducted using SIR-C/X-SAR data, and a brief overview of oceanography studies is given below.
Monaldo and Beal [3] found that the C-band SAR of SIR-C was able to image azimuth-travelling waves with minimum distortion, after comparing the ocean-wave height-variance spectra retrieved in the Southern Ocean using a linear inversion with WAve Model (WAM) predictions. The consistency was attributed to the low orbit height of 215 km, the steep incidence angle between 23° and 25°, and the use of HH polarization. The multi-frequency capability is the most attractive feature of SIR-C/X-SAR, and many studies analyzed the different radar signatures of oceanic and atmospheric processes in the X-, C-, and L-bands. A case study in the North Sea [4] demonstrates that the phase change of the hydrodynamic modulation transfer function (MTF) causes a distinguishable shift of the observed wave peaks in the C-band and X-band image spectra on both sides of an atmospheric front when the radar operated at the intermediate incidence angle of 51.3°. Ufermann and Romeiser [5] compared simulated radar signatures for different settings of oceanic and atmospheric parameters with the observed multi-frequency/multi-polarization SIR-C/X-SAR signatures of the Gulf Stream front. The authors concluded that the contributions of oceanic and atmospheric phenomena to radar signatures exhibit different dependencies on radar frequency and polarization. Different radar signatures of rain cells over the ocean in the multi-frequency SIR-C/X-SAR images, and their interpretation, are reported by Jameson et al. [6], Moore et al. [7] and Melsheimer et al. [8]. It is concluded that the enhanced sea surface NRCS patches in both the C- and X-band images are most likely caused by the high spectral power density of the C- and X-band Bragg waves generated by raindrops. Gade et al. [9] physically explained the different damping ratios of biologic films and man-made mineral oils observed in SIR-C/X-SAR data, based on surface film experiments in the German Bight and the Japan Sea. The damping behavior of the same substance in the SIR-C/X-SAR data depends on the sea surface wind speed, while the damping ratio of the same substance is higher in the X- and C-band data than in the L-band data.

The polarimetric capability is another important feature of the SIR-C data, which provides additional information for marine environment monitoring. For instance, Melsheimer et al. [8] derived the rain rate using the phase differences between cross- and co-polarization data of SIR-C. Migliaccio et al. [10] presented a promising study for detecting oil spills by combining a constant false-alarm rate (CFAR) filter with the polarimetric parameters entropy, alpha and anisotropy of SIR-C data. Using the same dataset, Nunziata et al. [11] demonstrated that the Mueller matrix is capable of observing oil spills and distinguishing look-alike features.
We aim to retrieve sea surface winds from X-SAR data. Prior to the launch of TS-X, we developed a linear geophysical model function (GMF) called XMOD1 [12] to retrieve the sea surface wind from X-band SAR data. To develop XMOD1, 166 X-SAR scenes and collocated ECMWF 40-year reanalysis (ERA-40) wind data were used. XMOD1 was directly applied to the X-band spaceborne SAR data of TS-X without any adjustment. Considering that the relationship between SAR NRCS and wind speed, wind direction and incidence angle is often nonlinear, and that the radar calibration performance, radiometric stability and signal-to-noise ratio differ between X-SAR and TS-X, a dedicated X-band GMF called XMOD2 was developed for TS-X and TanDEM-X (TD-X) data [13] to replace XMOD1. Currently, several X-band spaceborne SAR datasets are available, such as TS-X, TD-X and CSK, and the valuable X-SAR dataset is completely free to access. Therefore, revisiting the full dataset of X-SAR for sea surface wind retrievals is necessary. Specifically, a dedicated wind retrieval algorithm may be useful for other applications using SIR-C/X-SAR data, such as oil spill monitoring, sea surface wave retrieval and ship detection.

In Section 2, the datasets, including the X-SAR data, the reanalysis modeled wind data, the validation dataset of ERS-1/SCAT, and the in situ buoy data, are introduced. A detailed analysis of the dependence of the X-SAR NRCS on wind speed and incidence angle is presented in Section 3, followed by the development of an updated nonlinear GMF to derive sea surface winds from X-SAR data. The retrieval is validated through a comparison with ERS-1/SCAT and buoy data. The last section presents the conclusions.

Data

The spatially and temporally matched dataset of X-SAR and European Center for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis wind fields is used for developing the wind retrieval geophysical model function. The developed model is further validated using in situ buoy measurements and data from the scatterometer onboard ERS-1 (ERS-1/SCAT).

X-SAR Data

A total of 2465 X-SAR images acquired in April and October, 1994 were accessed from the German Remote Sensing Data Center (DFD) of the German Aerospace Center (DLR). All the accessed X-SAR data belong to the Multi-Ground Range Detected (MGD) product in VV polarization. The swath width of the X-SAR data is not constant: it varies between 15 and 40 km. The incidence angles of the X-SAR data cover a rather large range, between 20° and 55°.

Quality control of the X-SAR data mainly excludes data that are significantly disrupted by rainfall, oil spills and other non-wind features [14]. Figure 1 shows an example of X-SAR data acquired over the Pacific Ocean in which only wind-related sea surface features are present. All of the selected X-SAR images are similar to this example. The entire dataset of 2465 X-SAR images is randomly divided into two groups, which are used as two independent tuning datasets to verify the stability of the determined parameters in the GMF. The locations of the X-SAR data of the two groups are shown in Figure 2, marked by red blocks and green stars, respectively.
ECMWF ERA-Interim Reanalysis Wind Field Data

The ECMWF ERA-Interim reanalysis wind data [15] that are spatially and temporally collocated with the X-SAR data are used as the tuning dataset. The 6-hour synoptic ERA-Interim reanalysis wind data have a spatial resolution of 0.75°. To obtain the wind field information corresponding to the center of each X-SAR image, the kriging method is used to interpolate the ERA-Interim data to a 0.25° × 0.25° grid. Figure 3 shows the histogram of the ERA-Interim sea surface wind speeds collocated with the X-SAR data. The distribution of the model data suggests that most of the wind speeds are in the range of 3-12 m/s, which is consistent with surface wind speed distributions over the sea [16].

ERS-1/SCAT Data

The ERS-1/SCAT sea surface wind field offline products at a spatial resolution of 25 km are processed using the CMOD-IFR2 [17] model, with an accuracy of 1.2 m/s and a 15° standard deviation compared with the NOAA buoy data [18].

Buoy Data

The in situ buoy data are accessed from the National Oceanographic Data Center (NODC) of NOAA. A total of 63 buoy records are selected to validate the sea surface wind fields retrieved from the X-SAR images of April and October, 1994. Most of the buoy anemometers are installed at a height of 5 m. Therefore, the wind speeds from the 5-m anemometers are converted to wind speeds at the standard height of 10 m, at which SAR generally measures the sea surface wind speed. The following power-law wind profile [19] is used in this study:

u = u_r (z / z_r)^p    (1)

where u is the wind speed at height z, and u_r and z_r are the known wind speed and height, respectively. The exponent p is approximately 0.10 [19] over the open sea.
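Equation (1) translates directly into code; the minimal sketch below applies the 5 m to 10 m conversion used for the buoy winds (the example speed is arbitrary).

def adjust_wind_height(u_ref: float, z_ref: float = 5.0,
                       z: float = 10.0, p: float = 0.10) -> float:
    """Wind speed at height z from speed u_ref measured at height z_ref (Eq. (1))."""
    return u_ref * (z / z_ref) ** p

# A 7.0 m/s wind measured at 5 m corresponds to about 7.5 m/s at 10 m:
print(f"{adjust_wind_height(7.0):.2f} m/s")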
Development of the SIRX-MOD Model

In this section, the development of a non-linear GMF to retrieve the sea surface wind field from X-SAR data is presented. The model is called SIRX-MOD.

Detailed Investigation of the Characteristics of the X-SAR NRCS

The resonant Bragg wavenumber k_B follows the relation k_B = 2 k_e sin θ, where k_e represents the radar wavenumber and θ the incidence angle. As the radar wavenumber of the X-band lies between those of the C- and Ku-bands, the X-band Bragg waves lie between the resonant wavenumbers at the C- and Ku-bands. Recent research [13] indicated that the overall X-band NRCS of TS-X and TD-X is similar to the simulated C- and Ku-band radar NRCS values. To date, C-band SAR-based sea surface wind field retrieval algorithms are mainly adopted from the CMOD4 [20], CMOD5 [21] and CMOD-IFR2 [17] GMFs that were originally developed for scatterometer data. The CMOD-IFR2 model function, which is applied to the ERS-1/2 scatterometer offline products, was obtained from ECMWF data, ERS/SCAT data and buoy data. When the wind speed is less than 20 m/s, the wind speeds retrieved with CMOD-IFR2 are consistent with CMOD4 and CMOD5, whereas discrepancies among these GMFs exist at high wind speeds. Because the C-band GMFs CMOD5, CMOD5.N and CMOD-IFR2 mainly differ at high wind speeds, we assume that all three GMFs yield similar simulations. However, CMOD-IFR2 has fewer coefficients than CMOD5, which makes it easier to determine the coefficients when updating the X-SAR wind retrieval algorithm, particularly when the tuning dataset does not cover all wind conditions. A recent study [22] compared the sea surface wind speeds retrieved using CMOD-IFR2, CMOD5 and CMOD5.N with measurements at the two weather platforms Horns Rev and Egmond aan Zee; it reveals that CMOD-IFR2 performs slightly better than CMOD5 or CMOD5.N for retrieving sea surface wind speeds below 20 m/s in terms of bias. For these reasons, the updated X-SAR wind retrieval algorithm (SIRX-MOD) is based on CMOD-IFR2.

CMOD-IFR2 empirically relates the C-band radar NRCS to the surface wind speed by a power law, expressed by a logarithmic equation (Equation (2)); the model is described in detail in the Appendix. The sea surface wind speed and wind direction derived from the ERA-Interim model data and the incidence angles of the X-SAR data are fed into the CMOD-IFR2 model to simulate the C-band SAR NRCS, which is compared with the NRCS of the X-SAR data, as shown in Figure 4a. All the simulated C-band SAR NRCS values, sorted in ascending order of the X-SAR NRCS, are divided into 6 groups (5 dB intervals). The red error bars are plotted as the mean value ± standard deviation of every group of simulated NRCS. Similar to the finding in [13], the X-band sea surface backscatter intensity is slightly higher than that of the C-band, by 0.25 dB. It appears that −10 dB is a turning point of the discrepancy between the C-band and X-band NRCS. For NRCS values lower than −10 dB, the C-band and X-band NRCS are very similar. However, above this threshold, the NRCS of the X-band is systematically higher than that of the C-band. A further analysis concerns the dependence of this discrepancy on the incidence angle, shown in Figure 4b, in which each star represents a difference between the X-SAR NRCS and the CMOD-IFR2 simulation. All of the differences, sorted in ascending order of incidence angle, are divided into 7 groups (5-degree intervals). The red error bars are plotted as the mean value ± standard deviation of every group of NRCS differences. For incidence angles less than approximately 40°, the NRCS of the X-band and C-band are similar. However, the difference decreases for incidence angles larger than 40°.

In Figure 5, the collocated X-SAR NRCS (asterisks) within the wind speed range is compared with the simulated C-band NRCS at various incidence angles. The simulation by the X-band GMF XMOD1 is also superimposed for comparison. The solid, dashed and dotted curves are the NRCS simulated using the different GMF models at a sea surface wind speed of 5.5 m/s for up-wind, down-wind and cross-wind directions, respectively. Notably, the GMF XMOD1 assumes that there is no difference between the down-wind and up-wind NRCS. The green and blue curves represent CMOD-IFR2 and XMOD1, respectively. The X-band NRCS simulated using the XMOD1 model agrees well with the X-SAR NRCS in the incidence angle range from 25° to 55°, whereas it significantly underestimates the X-band NRCS for incidence angles between 20° and 25°. Moreover, the difference between the up-wind and cross-wind conditions according to the linear XMOD1 is uniform, which does not depict the dependence of this difference on the incidence angle. Although the simulated C-band NRCS using CMOD-IFR2 is systematically lower than that of the X-band under such wind conditions, the trends of its dependence on the incidence angle are similar to the observations of the X-band SAR.
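To make explicit how a GMF of this family is inverted for wind speed, the sketch below performs a bracketed root search on a stand-in model. The functional form and coefficients of gmf_db are illustrative only (the actual CMOD-IFR2 and SIRX-MOD coefficients are given in the Appendix and Table 1); the inversion logic, however, is generic: at fixed wind direction and incidence angle the NRCS increases monotonically with wind speed, so the wind speed matching an observed NRCS can be solved for.

import numpy as np
from scipy.optimize import brentq

def gmf_db(u, phi_deg, theta_deg):
    """Stand-in GMF: NRCS in dB vs wind speed u (m/s), relative wind
    direction phi and incidence angle theta. Coefficients are illustrative."""
    phi = np.radians(phi_deg)
    return (-21.0 + 16.0 * np.log10(u)      # wind speed dependence
            - 0.35 * (theta_deg - 30.0)     # incidence-angle fall-off
            + 1.0 * np.cos(phi)             # up/down-wind asymmetry
            + 2.0 * np.cos(2.0 * phi))      # up/cross-wind modulation

def invert_wind(sigma0_db, phi_deg, theta_deg, lo=0.2, hi=40.0):
    """Solve gmf_db(u) = sigma0_db for u by bracketed root finding."""
    return brentq(lambda u: gmf_db(u, phi_deg, theta_deg) - sigma0_db, lo, hi)

print(f"{invert_wind(-14.0, phi_deg=45.0, theta_deg=35.0):.1f} m/s")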
In Figure 5, the collocated X-SAR NRCS (asterisks) within the selected wind speed range is compared with the simulated C-band NRCS at various incidence angles. The simulation conducted with the X-band GMF XMOD1 is also superimposed for comparison. The solid, dashed and dotted curves are the NRCS simulated using the different GMF models at a sea surface wind speed of 5.5 m/s for up-wind, down-wind and cross-wind conditions, respectively. Notably, the GMF XMOD1 assumes that there is no difference between the down-wind and up-wind NRCSs. The green and blue curves represent CMOD-IFR2 and XMOD1, respectively. The X-band NRCS simulated using the XMOD1 model agrees well with the X-SAR NRCS in the incidence angle range from 25° to 55°, whereas it significantly underestimates the X-band NRCS for incidence angles between 20° and 25°. Moreover, the difference between up-wind and cross-wind conditions according to the linear XMOD1 is uniform, so it does not depict the dependence of this difference on the incidence angle. Although the simulated C-band NRCS using CMOD-IFR2 is systematically lower than that of the X-band under such wind conditions, the trends of their dependence on the incidence angle are similar to the observations of the X-band SAR.

Determining the Coefficients of SIRX-MOD

The entirety of the quality-controlled X-SAR data acquired over the ocean is used as the tuning dataset for determining the coefficients in SIRX-MOD. However, a practical problem is whether the tuning dataset is sufficient to obtain stable coefficients in the X-SAR GMF. Therefore, we first divide the entire dataset into two random groups, which are used separately to determine the coefficients in formula (2), in order to verify the stability of the tuning process. Figure 6a,b are comparisons of the X-band NRCS simulated using SIRX-MOD with the observations for the two groups of data pairs. The two comparisons yield very similar verification results, with root mean square (RMS) errors of 1.94 dB and 2.05 dB, respectively. The coefficients are listed in Table 1. Although the two sets of coefficients are slightly different, the similar statistical parameters derived from the verification suggest that the dataset is sufficient to determine the coefficients. We therefore use the entire dataset to ultimately determine the coefficients in SIRX-MOD, which are listed in the third column of Table 1. Table 1. Tuned coefficients of the SIRX-MOD model.

Simulation of SIRX-MOD

Simulations of SIRX-MOD are run to determine the dependence of the X-band NRCS on the incidence angle and the sea surface wind field. In Figure 7a, the collocated X-SAR NRCS (asterisks) in the wind speed range from 9.5 m/s to 10.5 m/s is compared to the NRCS simulated using the SIRX-MOD model at various incidence angles for up-wind, down-wind and cross-wind conditions. The NRCS simulated using the SIRX-MOD model agrees well with the X-SAR NRCS.
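The two-group tuning check described in "Determining the Coefficients of SIRX-MOD" can be sketched as follows. The real SIRX-MOD keeps the CMOD-IFR2 functional form, which is not reproduced here; the simplified linear-in-coefficients GMF below is a hypothetical stand-in used only to show the split-fit-verify workflow:

```python
import numpy as np

def fit_simple_gmf(theta, v, phi, sigma0_db):
    """Least-squares fit of a toy GMF
    sigma0(dB) = c0 + c1*theta + c2*log10(v) + c3*cos(phi) + c4*cos(2*phi),
    returning the coefficients and the RMS misfit in dB."""
    A = np.column_stack([np.ones_like(v), theta, np.log10(v),
                         np.cos(phi), np.cos(2.0 * phi)])
    c, *_ = np.linalg.lstsq(A, sigma0_db, rcond=None)
    rms = np.sqrt(np.mean((A @ c - sigma0_db) ** 2))
    return c, rms

def twofold_stability(theta, v, phi, sigma0_db, seed=0):
    """Split the tuning dataset into two random halves and fit each,
    mirroring the stability test that produced Figure 6a,b."""
    half = np.random.default_rng(seed).random(v.size) < 0.5
    return (fit_simple_gmf(theta[half], v[half], phi[half], sigma0_db[half]),
            fit_simple_gmf(theta[~half], v[~half], phi[~half], sigma0_db[~half]))
```

Similar coefficients and RMS values from the two halves would indicate, as in the text, that the dataset is large enough for a stable final fit on all data.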
Figure 7b shows another simulation, for a sea surface wind speed of 5.5 m/s, which matches the diagram shown in Figure 5. The green solid, dashed and dotted curves are the NRCS simulated using the SIRX-MOD model for up-wind, down-wind and cross-wind conditions, respectively. The NRCS simulated using the SIRX-MOD model agrees well with the X-SAR observations, although the difference in the NRCSs between the up-wind and cross-wind conditions of the real X-SAR data is larger than the prediction of SIRX-MOD for incidence angles greater than 25°. The dependence of the NRCS on wind direction is simulated using the SIRX-MOD model for various wind speeds, as shown in Figure 8. The statistical results suggest that the collocation data pairs have the largest range of surface wind speeds (4.5 m/s to 5.5 m/s) at an incidence angle of 27°. Therefore, Figure 8 shows the periodic behavior at an incidence angle of 27°. The difference between the NRCS in up-wind and down-wind scenarios increases slightly with increasing wind speed. When the sea surface wind speed decreases to 5 m/s, there is no evident difference in the NRCS between the up-wind and down-wind scenarios. When the wind speed reaches 20 m/s, the difference is as high as 0.9 dB. The asterisks shown in Figure 8 are the X-SAR observations at an incidence angle of 27°. The data obtained by X-SAR are not regularly distributed, so a GMF cannot be derived from them independently. Thus, we select an established GMF, i.e., the CMOD-IFR2 model used in this study, as a prototype. The simulation using the developed SIRX-MOD shown in the figure suggests that it depicts the behavior of the X-SAR observations well.

Validation of the SIRX-MOD Model

Because the coefficients of SIRX-MOD are determined from the collocation data pairs of X-SAR and the ERA-Interim reanalysis wind data, SIRX-MOD is validated by comparing the retrieved sea surface wind speed with other independent observations, i.e., the measurements of ERS-1/SCAT and in situ buoys. The criteria for collocating the X-SAR data with the ERS-1/SCAT and buoy data are a spatial distance of less than 200 km and a temporal difference of less than one hour.

To retrieve the sea surface wind speed from SAR data using any GMF, a priori wind direction information is important. When the collocation criteria mentioned above are applied, few data pairs are available. Therefore, the ERS-1/SCAT and buoy wind direction information is used for the wind speed retrieval from X-SAR. A sub-scene size of 2 km × 2 km derived from the X-SAR data is used for the retrieval.
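The collocation criteria above, and the bias/RMS/correlation statistics reported in the next section, can be expressed compactly. The snippet below is a hedged sketch; the record layout and the function names are assumptions, not the authors' implementation:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def collocate(sar_recs, ref_recs, max_km=200.0, max_hours=1.0):
    """Pair SAR scenes with reference records (buoy or ERS-1/SCAT);
    records are dicts with 'lat', 'lon', numpy datetime64 'time'
    and wind speed 'wspd'."""
    pairs = []
    for s in sar_recs:
        for r in ref_recs:
            dt_h = abs((s["time"] - r["time"]) / np.timedelta64(1, "h"))
            if (dt_h <= max_hours and
                    haversine_km(s["lat"], s["lon"],
                                 r["lat"], r["lon"]) <= max_km):
                pairs.append((s["wspd"], r["wspd"]))
    return np.asarray(pairs)

def validation_stats(pairs):
    """Bias, RMS error and correlation of retrieved vs. reference speeds."""
    sar_w, ref_w = pairs[:, 0], pairs[:, 1]
    diff = sar_w - ref_w
    return diff.mean(), np.sqrt((diff ** 2).mean()), np.corrcoef(sar_w, ref_w)[0, 1]
```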
In total, 63 buoy measurements meet the collocation criteria. The comparison is shown in Figure 9a, where the red signs indicate the data pairs with a collocation distance of less than 100 km. The bias, RMS and correlation are 0.13 m/s, 1.63 m/s and 0.68, respectively. The figure indicates a reasonable agreement with the buoy measurements. However, when the sea surface wind speed is above 10 m/s, the retrieved SAR wind speeds tend to be higher than those of the buoy measurements. Based on the collocation criteria mentioned above, 51 data pairs of X-SAR and ERS-1/SCAT are obtained. Figure 9b shows the comparison results. The bias, RMS and correlation of the comparison are 0.16 m/s, 1.83 m/s and 0.93, respectively. Although the collocations of X-SAR, in situ buoys and ERS-1/SCAT are quite limited, the validation results, with a bias of less than 0.2 m/s and an RMS of less than 2.0 m/s, suggest that SIRX-MOD yields a reasonable retrieval. X-SAR data acquired over the Atlantic Ocean on October 6, 1994 at 12:42 GMT are selected to retrieve the sea surface wind field using SIRX-MOD, as shown in Figure 10a. The large-coverage ERS-1/SCAT measurements are shown in Figure 10b. The FFT method [23] is used for deriving the wind direction from the X-SAR data, as wind streaks are clearly visible. The remaining 180° ambiguity in the wind direction is resolved using the ERS-1/SCAT measurements. The retrieved X-SAR sea surface wind speed varies between 12 m/s and 16 m/s, indicating a significant spatial variation, while the ERS-1/SCAT measurements show a homogeneous sea surface wind field due to their low spatial resolution of 25 km. This example demonstrates the need for a SAR sea surface wind retrieval algorithm that can yield high-spatial-resolution sea surface winds on a kilometer scale.

Conclusion

In this study, we revisited the X-SAR sea surface wind retrieval algorithm using the entire dataset acquired by X-SAR. We compared the simulated C-band NRCS using the CMOD-IFR2 model with the X-SAR observations. Whether in the overall comparison or in single comparisons at particular wind speed ranges, the C-band NRCS shows a pattern similar to the X-band observations, which should be attributed to the similar resonant Bragg wavenumbers of the C- and X-bands. We therefore used a C-band GMF as a prototype to update the X-SAR sea surface wind retrieval algorithm.

Figure 1. An example of X-band Synthetic Aperture Radar (X-SAR) data acquired over the Pacific Ocean (source: German Aerospace Center). The black arrow parallel to the linear wind streaks indicates the wind direction without 180° ambiguity.
Figure 2. Schematic of the locations of the 2465 X-SAR images used for developing the Geophysical Model Function (GMF). The red and green marks represent the two independent datasets.
Figure 3. Histogram of the ECMWF ERA-Interim reanalysis wind speeds collocated with the X-SAR data.
Figure 4. Comparison of the simulated normalized radar cross sections (NRCSs) using CMOD-IFR2 and the X-SAR measurements. The red error bars are plotted as the mean value ± standard deviation of the differences between the simulation and observation. (a) A direct comparison of the C-band NRCS simulated by CMOD-IFR2 and the X-SAR measurements; and (b) the differences between the NRCSs simulated using CMOD-IFR2 and the X-SAR measurements at various incidence angles.
Figure 6. Comparison between the NRCS simulated by the SIRX-MOD model and the X-SAR observations for (a) dataset 1; (b) dataset 2 and (c) all datasets.
Figure 7. NRCS values simulated using the SIRX-MOD model for various incidence angles at sea surface wind speeds of (a) 10 m/s and (b) 5.5 m/s. The red error bars are plotted as the mean value ± standard deviation of the X-SAR NRCS.
Figure 8. NRCS values simulated using the SIRX-MOD model for various wind directions with an incidence angle of 27°.
Figure 9. Validation of the SIRX-MOD model through a comparison with (a) in situ buoy measurements and (b) ERS-1/SCAT measurements. The red signs indicate that the collocation distance between the X-SAR data and the buoy or ERS-1/SCAT is less than 100 km.
Figure 10. Application of the SIRX-MOD model to X-SAR data. (a) Sea surface wind field retrieved using the SIRX-MOD model with X-SAR data acquired on October 6, 1994 at 12:42 GMT over the Atlantic Ocean; (b) sea surface wind field from ERS-1/SCAT obtained on October 6, 1994 at 13:27 GMT over the Atlantic Ocean. The black rectangle indicates the area where the X-SAR scene was obtained.
Upper Bound of the Third Hankel Determinant for a Subclass of q-Starlike Functions

The main purpose of this article is to find the upper bound of the third Hankel determinant for a family of q-starlike functions which are associated with the Ruscheweyh-type q-derivative operator. The work is motivated by several special cases and consequences of our main results, which are pointed out herein.

Introduction

We denote by $A(U)$ the class of functions which are analytic in the open unit disk
$U = \{z : z \in \mathbb{C} \ \text{and} \ |z| < 1\},$
where $\mathbb{C}$ is the complex plane. Let $\mathcal{A}$ be the class of analytic functions having the following normalized form:
$f(z) = z + \sum_{n=2}^{\infty} a_n z^n \qquad (1)$
in the open unit disk $U$, centered at the origin and normalized by the conditions $f(0) = 0$ and $f'(0) = 1$. In addition, let $S \subset \mathcal{A}$ be the class of functions which are univalent in $U$. The class of starlike functions in $U$ will be denoted by $S^*$; it consists of the normalized functions $f \in \mathcal{A}$ that satisfy the following inequality:
$\Re\left(\frac{z f'(z)}{f(z)}\right) > 0 \qquad (z \in U).$

If two functions $f$ and $g$ are analytic in $U$, we say that the function $f$ is subordinate to $g$, and write $f \prec g$ or $f(z) \prec g(z)$, if there exists a Schwarz function $w$, analytic in $U$ with $w(0) = 0$ and $|w(z)| < 1$, such that $f(z) = g(w(z))$.

Moreover, for two analytic functions $f$ and $g$ given by
$f(z) = z + \sum_{n=2}^{\infty} a_n z^n \quad \text{and} \quad g(z) = z + \sum_{n=2}^{\infty} b_n z^n,$
the convolution (or the Hadamard product) of $f$ and $g$ is defined as follows:
$(f * g)(z) = z + \sum_{n=2}^{\infty} a_n b_n z^n.$

We next denote by $P$ the class of analytic functions $p$ which are normalized by
$p(z) = 1 + \sum_{n=1}^{\infty} c_n z^n,$
such that $\Re\big(p(z)\big) > 0$ $(z \in U)$.

We now recall some essential definitions and concept details of the basic or quantum (q-) calculus, which are used in this paper. We suppose throughout the paper that $0 < q < 1$.

Definition 1. Let $q \in (0, 1)$ and define the q-number $[\lambda]_q$ by
$[\lambda]_q = \frac{1 - q^{\lambda}}{1 - q}.$

Definition 2. Let $q \in (0, 1)$ and define the q-factorial $[n]_q!$ by
$[n]_q! = 1 \ (n = 0) \quad \text{and} \quad [n]_q! = \prod_{k=1}^{n} [k]_q \ (n \in \mathbb{N}).$

Definition 3. Let $q \in (0, 1)$ and define the generalized q-Pochhammer symbol $[\lambda]_{q,n}$ by
$[\lambda]_{q,n} = [\lambda]_q\,[\lambda + 1]_q \cdots [\lambda + n - 1]_q.$

Definition 4. For $\omega > 0$, let the q-gamma function $\Gamma_q(\omega)$ be defined by
$\Gamma_q(\omega + 1) = [\omega]_q\,\Gamma_q(\omega) \quad \text{and} \quad \Gamma_q(1) = 1.$

Definition 5. (see [3,4]) The q-derivative (or the q-difference) operator $D_q$ of a function $f$ in a given subset of $\mathbb{C}$ is defined by
$(D_q f)(z) = \frac{f(z) - f(qz)}{(1 - q)z} \quad (z \neq 0), \qquad (D_q f)(0) = f'(0), \qquad (4)$
provided that $f'(0)$ exists.

We note from Definition 5 that
$\lim_{q \to 1-} (D_q f)(z) = f'(z)$
for a differentiable function $f$ in a given subset of $\mathbb{C}$. It is readily deduced from (1) and (4) that
$(D_q f)(z) = 1 + \sum_{n=2}^{\infty} [n]_q\, a_n z^{n-1}. \qquad (5)$

The operator $D_q$ plays a vital role in the investigation and study of numerous subclasses of the class of analytic functions of the form (1). A q-extension of the class of starlike functions was first introduced in [5] by using the q-derivative operator (see Definition 6 below). A background of the usage of the q-calculus in the context of Geometric Function Theory was actually provided, and the basic (or q-) hypergeometric functions were first used in Geometric Function Theory, by Srivastava (see, for details, [6]). Some recent investigations associated with the q-derivative operator $D_q$ in analytic function theory can be found in [7][8][9][10][11][12][13] and the references cited therein.

Definition 6. (see [5]) A function $f \in \mathcal{A}$ belongs to the class $S^*_q$ of q-starlike functions if
$\left| \frac{z\,(D_q f)(z)}{f(z)} - \frac{1}{1 - q} \right| \le \frac{1}{1 - q} \qquad (z \in U). \qquad (6)$

The notation $S^*_q$ was first used by Sahoo et al. (see [14]).
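A quick numerical sanity check of the q-derivative in Definition 5 and its limiting behavior (a minimal sketch; the test function and the evaluation point are arbitrary choices, not from the paper):

```python
def q_derivative(f, z, q):
    """D_q f(z) = (f(z) - f(q z)) / ((1 - q) z), per Definition 5."""
    return (f(z) - f(q * z)) / ((1.0 - q) * z)

f = lambda z: z + 0.5 * z**2          # a normalized function of the form (1)
z0 = 0.3 + 0.1j
for q in (0.9, 0.99, 0.999):
    print(q, q_derivative(f, z0, q))  # tends to f'(z0) = 1 + z0 as q -> 1-
```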
It is readily observed that, as $q \to 1-$, the closed disk
$\left| w - \frac{1}{1 - q} \right| \le \frac{1}{1 - q}$
becomes the right half-plane, and the class $S^*_q$ reduces to $S^*$. Equivalently, by using the principle of subordination between analytic functions, we can rewrite the conditions in (6) and (7) as follows (see [15]):
$\frac{z\,(D_q f)(z)}{f(z)} \prec \frac{1 + z}{1 - qz}. \qquad (7)$

Definition 7. (see [16]) For a function $f \in A(U)$, the Ruscheweyh-type q-derivative operator is defined as follows:
$\mathcal{R}^{\delta}_q f(z) = f(z) * F_{q,\delta+1}(z) \qquad (\delta > -1),$
where
$F_{q,\delta+1}(z) = z + \sum_{n=2}^{\infty} \psi_{n-1}\, z^n \quad \text{and} \quad \psi_{n-1} = \frac{[\delta + 1]_{q,n-1}}{[n-1]_q!}, \qquad (8)$
so that
$\mathcal{R}^{\delta}_q f(z) = z + \sum_{n=2}^{\infty} \psi_{n-1}\, a_n z^n.$

From (8) it can be seen that $\mathcal{R}^0_q f(z) = f(z)$ and $\mathcal{R}^1_q f(z) = z\,(D_q f)(z)$, and that
$\lim_{q \to 1-} \mathcal{R}^{\delta}_q f(z) = D^{\delta} f(z).$
This shows that, in the case $q \to 1-$, the Ruscheweyh-type q-derivative operator reduces to the Ruscheweyh derivative operator $D^{\delta} f(z)$ (see [17]). From (8) a recurrence identity can also easily be derived. Now, by using the Ruscheweyh-type q-derivative operator, we define the following class of q-starlike functions.

Definition 8. For $f \in A(U)$, we say that $f$ belongs to the class $\mathcal{RS}^*_q(\delta)$ if the following inequality holds true:
$\left| \frac{z\, D_q\big(\mathcal{R}^{\delta}_q f(z)\big)}{\mathcal{R}^{\delta}_q f(z)} - \frac{1}{1 - q} \right| \le \frac{1}{1 - q} \qquad (z \in U), \qquad (12)$
or, equivalently (see [15]), by using the principle of subordination,
$\frac{z\, D_q\big(\mathcal{R}^{\delta}_q f(z)\big)}{\mathcal{R}^{\delta}_q f(z)} \prec \frac{1 + z}{1 - qz}.$

Let $n \ge 0$ and $j \ge 1$. The $j$th Hankel determinant is defined as follows:
$H_j(n) = \begin{vmatrix} a_n & a_{n+1} & \cdots & a_{n+j-1} \\ a_{n+1} & a_{n+2} & \cdots & a_{n+j} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n+j-1} & a_{n+j} & \cdots & a_{n+2j-2} \end{vmatrix} \qquad (a_1 = 1).$

The above Hankel determinant has been studied by several authors. In particular, sharp upper bounds on $H_2(2)$ were obtained by several authors (see, for example, [18][19][20][21]) for various classes of normalized analytic functions. It is well known that the Fekete-Szegö functional satisfies $a_3 - a_2^2 = H_2(1)$. This functional is further generalized as $a_3 - \mu a_2^2$ for some real or complex $\mu$. In fact, Fekete and Szegö gave sharp estimates of $|a_3 - \mu a_2^2|$ for real $\mu$ and $f \in S$, the class of normalized univalent functions in $U$. It is also known that the functional $a_2 a_4 - a_3^2$ is equivalent to $H_2(2)$. Babalola [22] studied the Hankel determinant $H_3(1)$ for some subclasses of analytic functions. In the present investigation, our focus is on the Hankel determinant $H_3(1)$ for the above-defined function class $\mathcal{RS}^*_q(\delta)$.

A Set of Lemmas

Each of the following lemmas will be needed in our present investigation.

Lemma 1. (see [23]) Let
$p(z) = 1 + c_1 z + c_2 z^2 + \cdots$
be in the class $P$ of functions with positive real part in $U$. Then, for any complex number $\upsilon$,
$\left| c_2 - \upsilon c_1^2 \right| \le 2\, \max\{1,\ |2\upsilon - 1|\}. \qquad (13)$

When $\upsilon < 0$ or $\upsilon > 1$, the equality holds true in (13) if and only if
$p(z) = \frac{1 + z}{1 - z}$
or one of its rotations. If $0 < \upsilon < 1$, then the equality holds true in (13) if and only if
$p(z) = \frac{1 + z^2}{1 - z^2}$
or one of its rotations. If $\upsilon = 0$, the equality holds true in (13) if and only if
$p(z) = \left(\frac{1 + \lambda}{2}\right)\frac{1 + z}{1 - z} + \left(\frac{1 - \lambda}{2}\right)\frac{1 - z}{1 + z} \qquad (0 \le \lambda \le 1)$
or one of its rotations. If $\upsilon = 1$, then the equality in (13) holds true if $p(z)$ is a reciprocal of one of the functions for which the equality holds true in the case when $\upsilon = 0$.

Lemma 2. (see [24,25]) Let
$p(z) = 1 + c_1 z + c_2 z^2 + \cdots$
be in the class $P$ of functions with positive real part in $U$. Then
$2c_2 = c_1^2 + x\,(4 - c_1^2)$
for some $x$, $|x| \le 1$, and
$4c_3 = c_1^3 + 2c_1\,(4 - c_1^2)\,x - c_1\,(4 - c_1^2)\,x^2 + 2\,(4 - c_1^2)\,(1 - |x|^2)\,z$
for some $z$ $(|z| \le 1)$.

Main Results

In this section, we prove our main results. Throughout our discussion, we assume that $q \in (0, 1)$ and $\delta > -1$. Our first main result, Theorem 1, provides bounds of Fekete-Szegö type for the class $\mathcal{RS}^*_q(\delta)$; its statement distinguishes the relevant parameter ranges.

Proof. If $f \in \mathcal{RS}^*_q(\delta)$, then it follows from (12) that the subordination above holds, and we define a corresponding function $p(z)$; it is clear that $p \in P$. From the above equation, together with (14) and the analogous relations, we obtain the coefficient expressions needed below, and thus the functional to be estimated. Finally, by applying Lemma 1 and Equation (13) in conjunction with (18), we obtain the result asserted by Theorem 1.

We now state and prove Theorem 2 below.

Theorem 2. Let $f \in \mathcal{RS}^*_q(\delta)$ be given by (1).
Then the coefficient functional $|a_2 a_4 - a_3^2|$ satisfies the bound asserted in Theorem 2.

Proof. From (15)-(17), we obtain the expression for $a_2 a_4 - a_3^2$. By using Lemma 2, we express $c_2$ and $c_3$ in terms of $c_1$. Now, taking the moduli and replacing $|x|$ by $\rho$ and $p_1$ by $p$, we arrive at an upper bound $F(p, \rho)$, labeled (19). Upon differentiating both sides of (19) with respect to $\rho$, it is clear that
$\frac{\partial F(p, \rho)}{\partial \rho} > 0,$
which shows that $F(p, \rho)$ is an increasing function of $\rho$ on the closed interval $[0, 1]$. This implies that the maximum value occurs at $\rho = 1$, that is,
$\max\{F(p, \rho)\} = F(p, 1) =: G(p).$
Evaluating $G(p)$ shows that its maximum value occurs at $p = 0$. Hence, we obtain the asserted bound. The proof of Theorem 2 is thus completed.

If, in Theorem 2, we let $q \to 1-$ and put $\delta = 1$, then we are led to the following known result.

Corollary 1. (see [18]) Let $f \in S^*$. Then
$\left| a_2 a_4 - a_3^2 \right| \le 1,$
and the inequality is sharp.

Theorem 3 provides the corresponding bound for $|a_2 a_3 - a_4|$, stated in terms of a constant $\kappa(q)$ given by (21).

Proof. Using the values given in (15) and (16), we obtain the expression for $a_2 a_3 - a_4$. We now use Lemma 2 and assume that $p_1 \le 2$. In addition, by Lemma 3, we let $p_1 = p$ and assume, without restriction, that $p \in [0, 2]$. Then, by taking the moduli and applying the triangle inequality to (22) with $\rho = |x|$, we obtain an upper bound $F(\rho)$, where
$\eta(q) = (q + q^2 + q^3)\,\psi_3 + (2q^3 - q^2 - 2q)\,\psi_1\psi_2$
and $\kappa(q)$ is given by (21). Differentiating $F(\rho)$ with respect to $\rho$ shows that $F(\rho)$ is an increasing function of $\rho$ on the closed interval $[0, 1]$. Hence, we have
$\max\{F(\rho)\} = F(1),$
that is, the maximum is attained at $\rho = 1$. Since $p \in [0, 2]$, $p = 2$ is a point of maximum. We thus obtain the bound corresponding to $\rho = 1$ and $p = 2$, which is the desired upper bound.

For $\delta = 1$ and $q \to 1-$, we obtain the following special case of Theorem 3.

Conclusions

By making use of the basic or quantum (q-) calculus, we have introduced a Ruscheweyh-type q-derivative operator. This Ruscheweyh-type q-derivative operator is then applied to define a certain subclass of q-starlike functions in the open unit disk $U$. We have successfully derived the upper bound of the third Hankel determinant for this family of q-starlike functions, which are associated with the Ruscheweyh-type q-derivative operator. Our main results are stated and proved as Theorems 1-4. These general results are motivated essentially by their several special cases and consequences, some of which are pointed out in this presentation.
Effects of Different Extraction Methods on Vanilla Aroma

To establish the analytic conditions for examining the aroma quality of vanilla pods, we compared different extraction methods and identified a suitable option. We utilized headspace solid-phase microextraction (HS-SPME), steam distillation (SD), simultaneous distillation-extraction (SDE) and alcoholic extraction combined with gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS) to identify volatile components of vanilla pods. A total of 84 volatile compounds were identified in this experiment, of which SDE could identify the most volatile compounds, with a total of 51 species, followed by HS-SPME, with a total of 28 species. A minimum of 10 volatile compounds was identified by extraction with 35% alcohol. HS-SPME extraction provided the highest total aroma peak areas, and the peak areas of aldehyde, furan, alcohol, monoterpene and phenol compounds were several times higher than those of the other extraction methods. The results showed that the two technologies, SDE and HS-SPME, could be used together to facilitate analysis of vanilla pod aroma.

Introduction

Natural vanilla pods have a delicate and rich aroma that cannot be easily replicated or replaced by synthetic fragrances. As a result, with increasing demand for vanilla pods, prices have risen, the market is in short supply, and there have been extensive reports concerning the adulteration and blending of natural vanilla extracts [1]. Most foods release volatile organic compounds during storage or handling, which can be used as indicators of food quality or safety [2]. Thus, quick, stable and accurate extraction techniques are extremely important. The techniques most commonly used to extract and analyze natural vanilla pods are alcoholic extraction, liquid-liquid extraction (LLE), and liquid-solid extraction (SLE) [3], as well as LLE with ultrasonic vibration, SDE and SPME, among others [4]. The ideal extraction technique must be able to extract the analyte quickly, easily, completely and inexpensively. Different extraction methods each have unique advantages but also different usage limitations and disadvantages [5]. The extraction methods used in this experiment are introduced separately below. Since vanilla pods are sold as alcoholic extracts on the international market [1], it is necessary to establish a suitable alcoholic extraction method for vanilla pods. According to the regulations of the U.S. Food and Drug Administration (FDA), the ethanol content of commercially available vanilla alcohol extracts should not be less than 35% (v/v).

SDE

In this experiment, pentane/ether (P/E) (1:1, v/v) was used for extraction. We chose a solvent with a low boiling point, which can be more easily removed to preserve the original aroma of the vanilla pods [19]. Pérez-Silva et al. [20] compared the extraction of V. planifolia with pentane/dichloromethane (2:1, v/v), ether, and pentane/ether (P/E) (1:1, v/v); using P/E (1:1, v/v), the authors could extract the widest variety of compounds, potentially due to the difference in solvent polarity. According to Table 1, it can be observed that SDE could extract more carboxylic acids, aldehydes and phenols. Pérez-Silva et al. [20] extracted V. planifolia with P/E (1:1, v/v) and identified acids, phenols, alcohols, aldehydes, esters, hydrocarbons and ketones. The contents of acids and phenolic compounds were highest, among which the main aroma components were vanillin, vanillic acid and p-hydroxybenzaldehyde.
Although the types of components were similar to those identified in this experiment, vanillic acid was not identified here, probably because the gas chromatography column used by those authors was polar (DB-WAX), whereas we used a nonpolar column (DB-1). Table 2 shows that SDE could extract palmitic acid and other larger-molecule components. Cai et al. [4] believed that SDE could be used to extract compounds with larger molecular weights and lower volatility, such as palmitic acid, compared with HS-SPME. Bajer et al. [21] considered SDE to be a more suitable extraction technique for analyzing volatile components with high retention indices (RIs). The present study showed that the volatile components with higher RIs were only identified by the SDE extraction method, which is consistent with previous studies.

HS-SPME

A total of 28 volatile compounds were identified by HS-SPME extraction of the vanilla pod samples (Table 2). The samples contained 6 aldehydes, 6 phenols, 5 alcohols, 3 esters, 2 ketones, 2 hydrocarbons, 2 sesquiterpenes, 1 furan and 1 monoterpene. The total peak area with HS-SPME was the largest, and the total peak area of aldehydes was more than 5 times greater than that obtained with the other extraction methods (Table 1). In addition, the total peak areas of furans, alcohols and phenols were also higher than those obtained with the other extraction methods. The main components of vanilla pods analyzed by HS-SPME were phenol, 1-octen-3-ol, 2-pentylfuran, 1-octanol, guaiacol and vanillin. Yeh et al. [22] used HS-SPME to analyze V. planifolia produced in Taiwan and detected a variety of monoterpenes and sesquiterpenes. Among them, limonene, α-copaene and α-muurolene were also identified in this experiment; these compounds can contribute citrus, lemon and woody aromas to vanilla. Hassan et al. [12] analyzed V. planifolia using HS-SPME and showed that shikimate derivatives accounted for the majority of the volatile components of V. planifolia, with vanillin being the most abundant. In addition, volatile compounds such as benzaldehyde, p-anisaldehyde, p-hydroxybenzaldehyde, benzyl alcohol, p-cresol, guaiacol, creosol and p-anisyl alcohol are all shikimic acid derivatives. In this experiment, such compounds accounted for approximately 92% of the components, among which vanillin was the most abundant, followed by guaiacol. Although guaiacol was abundant, it is generally considered to have a negative effect on vanilla pod aroma [23], and with increasing guaiacol content, the vanillin content tends to decrease [24]. Compared with the other extraction methods, HS-SPME extracted more monoterpenes and sesquiterpenes. Although the total peak area of HS-SPME was highest, no carboxylic acid compounds were identified, and the number of compound types was lower than that obtained with SDE. Kraujalytė et al. [25] found that HS-SPME was more suitable for compounds with low volatility due to the lower extraction temperature. Therefore, this extraction method is consistent with previous studies and is suitable for simple and rapid detection of sample components [4].

SD

A total of 25 volatile compounds were identified using SD extraction of the vanilla pod samples (Table 2). The samples contained 11 aldehydes, 5 ketones, 4 esters, 3 alcohols, 1 phenol and 1 hydrocarbon. In this experiment, SD could not extract important aroma components, such as p-hydroxybenzaldehyde and vanillin, from the vanilla pods, possibly because p-hydroxybenzaldehyde [26] and vanillin are only slightly soluble in water (1 g/100 mL) [1].
Additionally, the aqueous layer of the SD extract lacks compounds such as p-hydroxybenzaldehyde and vanillin. Despite the absence of vanillin, the total peak areas of aldehydes still accounted for 68% of the extract (as shown in Table 1), which might be related to the greater polarity of aldehydes. From Table 3, it can be observed that a large amount of furfural appeared in the extract. Cai et al. [4] speculated that this phenomenon was caused by the hydrolysis and pyrolysis of the compounds during the extraction process.

(Table footnotes: retention indices were checked against references [19,22,27-36], all on DB-1; retention indices were calculated using paraffins (C5-C25) as references; total concentrations are from GC-FID, and values are means ± SD of triplicates.)

Alcoholic Extraction

In this experiment, 35, 75 and 95% alcohol were used to extract vanilla pods, and 10, 14 and 19 volatile compounds were identified, respectively, consisting only of aldehydes, esters, carboxylic acids, alcohols, ketones and phenols. According to Table 2, the contents of guaiacol, p-hydroxybenzaldehyde and vanillin extracted from vanilla pods with 35% alcohol were lower than those in the other two ethanolic extracts. Moreover, esters and carboxylic acids were only identified in the 75% and 95% ethanolic extractions but not in the 35% ethanolic extraction. However, only the 35% ethanolic extracts contained vanillyl alcohol. Hernández-Fernández et al. [37] used GC-MS to compare the differences between 35% ethanolic extraction (1:10, v/v) and supercritical carbon dioxide extraction of V. planifolia. They found that the vanilla pod ethanolic extract contained six compounds: guaiacol, p-vinylguaiacol, vanillin, p-hydroxybenzaldehyde, vanillyl alcohol and vanillic acid. Excluding vanillic acid, the other five compounds were detected in the 35% ethanolic extract in this experiment. Sostaric et al. [9] extracted V. planifolia with 35% alcohol, and the extraction ratio was consistent with this experiment (1:5, v/v). Additionally, they used GC-MS to compare differences between the V. planifolia ethanolic extract and a synthetic flavor. The authors found that natural vanilla extracts contain high amounts of vanillin and long-carbon-chain esters that are not found in synthetic flavors, such as ethyl nonanoate and ethyl decanoate, whereas synthetic fragrances contain ethyl vanillin, which is lacking in natural vanilla extracts. Comparing the three vanilla pod extracts with different alcohol concentrations, it can be observed that the higher the alcohol concentration, the more volatile components are extracted and the greater the total peak areas. At present, commercial vanilla alcohol extracts are mostly extracted with 35% (v/v) alcohol [37], potentially because higher alcohol concentrations alter the vanilla aroma of the extract and consumer acceptance of such extracts is not high. Hernández-Fernández et al. [37] believed that alcohol extraction has some disadvantages, such as a high concentration of organic residues, a longer extraction time, and the larger dosage required for use as a spice.

Quantitative Analysis of Vanilla Pods

In this experiment, SDE was used to quantitatively analyze the vanilla pod samples, and a total of 51 volatile compounds were identified (Table 3) using the method that identified the most compounds among all evaluated extraction methods.
It contained 9 aldehydes, 10 carboxylic acids, 9 phenols, 7 esters, 6 hydrocarbons, 4 alcohols, 2 ketones, 2 sesquiterpenes, 1 furan and 1 monoterpene; the content of vanillin was highest, followed by guaiacol. Januszewska et al. [38] found that the main volatile components of vanilla pods from different origins were vanillin and guaiacol. Among them, vanillin has sweet and creamy aromas and is an important aroma component of vanilla pods [39]. Zhang and Mueller [19] quantified the volatile components of V. planifolia extracts by GC-MS and identified p-hydroxybenzaldehyde, (E)-methyl cinnamate, benzyl alcohol, phenol, p-cresol, 1-octanol, 2-phenylethanol, benzoic acid, octanoic acid, creosol, methyl salicylate, anisaldehyde, nonanoic acid, anisyl alcohol, isovanillin and other volatile compounds, and these compounds were also identified in this experiment. Among them, the content of guaiacol, a minor component, was 105.00 mg/kg, which was similar to our quantification result (101.58 mg/kg). In addition, guaiacol, creosol and phenol endow V. planifolia with strong phenolic, woody and smoky flavors [40].

Comparison of Different Extraction Methods

Figure 1 shows a principal components analysis (PCA) diagram of the different extraction methods, from which it can be observed that the methods can be divided into 3 groups. The three ethanolic extracts with different concentrations were close together in the same group on the PCA diagram, which indicated that the compositions of the ethanolic extracts with different concentrations were similar. Table 2 also shows that the volatile components extracted with the three different concentrations of alcohol were mainly composed of aldehydes, alcohols, ketones and phenols, which is consistent with the PCA results. SDE could extract a wide variety of volatile components. In addition, in contrast to the other extraction methods, the proportion of aldehydes was highest, while SDE had the highest content of acid components, and no carboxylic acid compounds were identified in SD and HS-SPME (Table 2). Therefore, SDE was the farthest from the other extraction methods on the PCA diagram, and it can be speculated that the volatile components extracted with SDE were the most different from those of the other extraction methods.
Vanillin is the main component of natural vanilla pods, so the content of vanillin is extremely important for vanilla extracts [1]. In SD extracts, vanillin cannot be detected, so this method is preliminarily considered unsuitable for the analysis of vanillin. Although most commercially available vanilla pods are sold in the form of ethanolic extracts, the number of components and the total peak areas identified by ethanolic extraction in this study were the lowest. Zheng et al. [41] compared the extraction of Syringa flowers with different solvents, and they also found that the efficiency of ethanolic extraction was poor. Based on the results of this experiment, it was found that SDE could extract more volatile components, but the total peak areas of HS-SPME were more than twice as large as those obtained with SDE. In addition, this study showed that only HS-SPME and SDE could extract monoterpenes and sesquiterpenes. Kung et al. [31] used SDE and HS-SPME to analyze the volatile compounds from Platostoma palustre and found that SDE could extract more volatile compounds and sesquiterpenes. However, HS-SPME could extract more monoterpenes than SDE. In this study, the monoterpene total peak areas of HS-SPME were higher while the sesquiterpene total peak areas were lower than those determined with SDE, which was similar to the results of a previous study. For many assays, SDE lacks the sensitivity and convenience required for experiments, and HS-SPME can make up for these shortcomings. Cai et al. [4] believed that the reproducibility of SDE was better than that of HS-SPME, so if quantitative analysis is needed, SDE is the best extraction method. In addition, SDE can extract more components. However, it is less sensitive to trace components. Reineccius [42] pointed out that no method will accurately reflect the aroma components actually present in a food or their proportions. Therefore, it is recommended to use SDE and SPME complementarily to analyze the vanilla aroma components more completely.

Plant Materials

In this experiment, top bourbon vanilla beans (V. planifolia) of similar length and weight (about 17 cm and 4 g), which had been cultivated and cured in Sava, Madagascar, were purchased from the MR. Vanilla Beans commercial source in Taiwan.

HS-SPME

The 65 µm PDMS/DVB adsorption fibers used in this experiment were purchased from Supelco, Bellefonte, PA, USA. The experimental procedure has been described by Yeh et al. [22]; 8-10 vanilla pods were cut in half, and 1 g of vanilla seeds was scraped and placed into a 4 mL cylindrical glass bottle with a Teflon rubber pad. It was then heated in a 50 °C water bath and extracted with a 65 µm PDMS/DVB adsorption fiber for 40 min. After the extraction was completed, GC and GC-MS desorption were applied for 20 min for analysis in splitless mode. The above process was repeated 3 times.

SDE

A total of 20 g of vanilla pods was cut into approximately 0.2 cm wide pieces and placed in a 5 L three-necked round-bottom flask. Then, 500 g of water and 1.00 g of internal standard (0.5 mg/g cyclohexyl acetate) were added, and a Likens-Nickerson (L-N) device was connected.
Fifty milliliters of n-pentane/diethyl ether at a ratio of 1:1 (v/v) was added to the bottom of the L-N device, placed in a pear-shaped bottle as the solvent end, and then placed in a water bath at 40-50 °C. The other end was connected to a 5 L three-neck round-bottom flask filled with 4 L of water as the heat source for steam distillation, and the sample end was heated to 100 °C. After extraction for 2 h, the solvent extract in the pear-shaped bottle was collected, dehydrated with anhydrous sodium sulfate and filtered with No. 1-125 mm qualitative filter paper. Then, a distillation column device (40 °C, 1 h, 100 cm glass column) was used to remove excess solvent and collect the concentrated volatile compound extract. GC syringes were used to collect 1 µL, and GC and GC-MS analyses were performed by direct injection. The split ratio was 1:100. The above process was repeated 3 times.

SD

Twenty grams of vanilla pods was cut into approximately 0.2 cm wide pieces and placed into a 5 L three-necked round-bottom flask. Then, 500 g of water was added, the other end was connected to a second 5 L three-necked round-bottomed flask, and 4 L of water was placed in that flask for steam distillation. The sample end was heated to 100 °C. After 2 h, the extract was collected, and 10 g was placed in a 15 mL cylindrical glass bottle with a Teflon rubber pad. Then, the samples were extracted with the 65 µm PDMS/DVB adsorption fibers of HS-SPME for 40 min at room temperature. After the extraction was completed, GC and GC-MS desorption were used for 20 min for analysis in splitless mode. The above process was repeated 3 times.

Alcoholic Extraction

Two grams of vanilla pods was cut into approximately 0.2 cm wide pieces, and 20 g of 95, 75 or 35% alcohol was added. After extraction with an ultrasonic shaker for 30 min, the mixture was shaken by hand for 1 min and filtered with No. 1-125 mm qualitative filter paper. The filtrate was collected for later use. Twenty grams of 95, 75 or 35% alcohol was added to the vanilla pod sample again and the above extraction repeated. The two extracts were mixed and filtered with anhydrous sodium sulfate, and the extract was injected into the capillary using a 3 mL disposable syringe to remove excess solvent and concentrate the sample. One microliter of the extract was collected with GC syringes and analyzed by GC and GC-MS by direct injection with a split ratio of 1:10. Each of the above alcohol concentrations was repeated 3 times.

Internal Standard (IS) Preparation

The standard compound cyclohexyl acetate was purchased from Sigma-Aldrich (St. Louis, MO, USA). Cyclohexyl acetate (0.5 g) was diluted to 10 g with 95% alcohol and then serially diluted to 0.5 mg/g.

GC

The instrumental conditions follow Yeh et al. [22]. The instrument used in this study was an Agilent Model 7890 GC (Santa Clara, CA, USA), and the separation column was a DB-1 (60 m × 0.25 mm i.d.) from Agilent, which is a nonpolar column. The carrier gas was nitrogen (N2) delivered at a flow rate of 1 mL/min. The injection port temperature was set to 250 °C. The detector was a flame ionization detector (FID), and the detector temperature was 300 °C. The oven temperature was maintained at 40 °C for 1 min, then raised to 150 °C at 5 °C/min, held for 1 min, raised to 200 °C at 10 °C/min, and then maintained at this temperature for 21 min.
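To sanity-check the GC oven program above, the total run time can be computed from the ramp rates and holds. This is a small sketch; the segment encoding is my own, not from the instrument method file:

```python
def oven_runtime_min(segments, start_temp=40.0):
    """Total oven time for (rate_C_per_min, target_C, hold_min) segments;
    a rate of None encodes the initial hold at start_temp."""
    t, temp = 0.0, start_temp
    for rate, target, hold in segments:
        if rate is not None:
            t += abs(target - temp) / rate   # ramp time
            temp = target
        t += hold                            # isothermal hold
    return t

# 40 C (1 min) -> 5 C/min to 150 C (1 min) -> 10 C/min to 200 C (21 min)
print(oven_runtime_min([(None, 40, 1), (5, 150, 1), (10, 200, 21)]))  # 50.0 min
```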
GC-MS

A Model 5977A quadrupole mass spectrometer (Mass Selective Detector, MSD) from Agilent (Santa Clara, CA, USA) was used. The ion source temperature of the MSD was 230 °C, and the quadrupole temperature was 150 °C. The GC was an Agilent Model 7890B. The operating conditions and the column were the same as those described for GC, changing only the carrier gas to helium (He). The mass spectral data measured by the instrument were compared with the Wiley 7N mass spectral library.

Quantitative Calculation of the IS Method

The IS method is a relatively accurate quantitative method in instrumental analysis, and its calculation formula is as follows:

$C_x = \frac{A_x}{A_{is}} \times \frac{C_{is}}{W_s},$

where $C_x$ = the content of the compound in the sample (mg/g), $A_x$ = the peak area of the compound in the sample, $A_{is}$ = the peak area of the IS, $C_{is}$ = the amount of IS added (mg), and $W_s$ = the sample weight (g).

Statistical Analysis

In this study, principal component analysis (PCA) was performed using XLSTAT2014 (Addinsoft, New York, NY, USA). The data were subjected to one-way analysis of variance, with Tukey's multiple range method used to identify significant differences at p < 0.05, using GraphPad Prism 5 (GraphPad Software, San Diego, CA, USA).

Conclusions

From the PCA chart, it can be observed that the different extraction methods could be divided into 3 groups. Among them, the three alcohol extracts of different concentrations fell in the same group, and their compositions were similar, consisting mainly of aldehydes, alcohols, ketones and phenols. However, alcoholic extraction at 35% yielded the fewest components. In this experiment, SD extraction could not detect vanillin, so this method is not suitable for the analysis of vanilla pods. SDE could extract a wide variety of volatile compounds, while HS-SPME did not extract the most components but yielded the largest total aroma peak areas. These results suggest that HS-SPME and SDE are both powerful analytical tools for the determination of the volatile compounds in vanilla. Therefore, HS-SPME is recommended for the preliminary identification of vanilla aroma, and SPME and SDE can complement each other for vanilla aroma analysis.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: No new data were created or analyzed in this study; data sharing is not applicable to this article.
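As a quick illustration of the internal-standard quantitation formula given in "Quantitative Calculation of the IS Method" above (a minimal sketch; the example numbers are hypothetical):

```python
def is_quantify_mg_per_g(peak_area, is_area, is_amount_mg, sample_g):
    """Internal-standard quantitation: C_x = (A_x / A_is) * C_is / W_s."""
    return (peak_area / is_area) * is_amount_mg / sample_g

# A compound peak twice the IS peak, 0.5 mg IS added, 20 g of pods:
print(is_quantify_mg_per_g(2.0e6, 1.0e6, 0.5, 20.0))  # 0.05 mg/g = 50 mg/kg
```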
Rolling out PRIDE in All Who Served: Barriers and Facilitators for Sites Implementing an LGBTQ+ Health Education Group for Military Veterans

Background/Objective The Veterans Health Administration (VHA) PRIDE in All Who Served health education group (PRIDE) was developed to improve health equity and access to care for military veterans who are lesbian, gay, bisexual, transgender, queer, and/or other sexual/gender-diverse identities (LGBTQ+). This 10-week program rapidly spread to over 30 VHA facilities in 4 years. Veterans receiving PRIDE experience improved LGBTQ+ identity-related resilience and reductions in suicide attempt likelihood. Despite PRIDE's rapid spread across facilities, information is lacking on implementation determinants. The current study's goal was to clarify determinants of PRIDE group implementation and sustainment.

Methods A purposive sample of VHA staff (N = 19) with experience delivering or implementing PRIDE completed teleconference interviews January-April 2021. The interview guide was informed by the Consolidated Framework for Implementation Research. Rapid qualitative matrix analysis was completed with methods to ensure rigor (e.g., triangulation and investigator reflexivity).

Results Key barriers and facilitators of PRIDE implementation were heavily related to the facility inner setting (what is happening inside the facility), including implementation readiness (e.g., leadership support for LGBTQ+-affirming programming, access to LGBTQ+-affirming care training) and facility culture (e.g., systemic anti-LGBTQ+ stigma). Several implementation process facilitators enhanced engagement at sites, such as a centrally facilitated PRIDE learning collaborative and a formal process of contracting/training for new PRIDE sites.

Discussion/Conclusion Although aspects of the outer setting and larger societal influences were mentioned, the majority of factors impacting implementation success were at the VHA facility level and therefore may be more readily addressable through tailored implementation support. The importance of LGBTQ+ equity at the facility level indicates that implementation facilitation should ideally address institutional equity in addition to implementation logistics. Combining effective interventions with attention to local implementation needs will be required before LGBTQ+ veterans in all areas will benefit from PRIDE and other health equity-focused interventions.

Supplementary Information The online version contains supplementary material available at 10.1007/s11606-023-08204-5.

INTRODUCTION

United States (US) military veterans who are lesbian, gay, bisexual, transgender, queer, and/or other sexual/gender-diverse identities (including but not limited to questioning, pansexual, asexual, agender, gender diverse, nonbinary, gender-neutral, and other identities; LGBTQ+) are a historically disenfranchised and currently underserved group within the Veterans Health Administration (VHA). 1,2 Despite considerable progress in detecting and understanding LGBTQ+ health inequities, [3][4][5] access to equitable and effective care is not yet consistent across VHA facilities. 1 Common barriers have included discrimination, cisheteronormativity, limited electronic health record infrastructure for documenting sexual orientation and gender identity, and lingering effects of unjust policies that excluded LGBTQ+ individuals from military service.
[6][7][8][9][10][11] In 2016, VHA policy established designated LGBTQ+ Veteran Care Coordinators (VCCs) at each facility, which has paved the way for innovation by VHA staff committed to increasing access to affirming care. 12,13 LGBTQ+-affirming care entails healthcare for LGBTQ+ patients that is culturally responsive and works to alleviate health inequities in this population through strategies such as creating a welcoming environment, addressing provider bias, providing tailored health services, and acknowledging underlying systemic inequality. 13 Despite gains in the availability of LGBTQ+-affirming services within VHA, [14][15][16] there is still a complete lack of affirming care interventions at many VHA facilities. 17 In order to address this gap in LGBTQ+-affirming care offerings at VHA facilities, the PRIDE in All Who Served group intervention (PRIDE) was developed and spread by the PRIDE National Diffusion Team. [18][19][20] This was made possible by the VHA Innovators Network, which links VHA facilities with the goal of helping frontline VHA staff develop innovative ideas to enhance VHA services. 21,22 PRIDE has been designated as a National Diffusion Practice by the VHA Diffusion of Innovation program, which supports the spread of innovative VHA practices. 23 The PRIDE intervention is a 10-week, structured health education group that focuses on social connection, health promotion, minority stress reduction, and engagement with available VHA resources. 19 Significant reductions in self-reported suicide risk and symptoms of distress (e.g., depression) have been observed after attending the group, as well as increases in protective factors (e.g., identity acceptance). 19,24 Leading up to the current study, PRIDE rapidly spread from a single clinician at one facility in 2017 to being delivered at VHA facilities across the country in 2021. The PRIDE National Diffusion Team used external implementation facilitation (i.e., collaborative problem solving/support through a designated support person) 25 as the overarching implementation strategy, with Fortney's Access model 26 informing evaluation of the Veteran experience (e.g., perceived access) and health impact. 19 As a result of this organic spread, PRIDE groups have reached more than 700 LGBTQ+ veterans. 20 Yet, with less than 25% of VHA facilities currently delivering the program and less than 1% of the estimated LGBTQ+ veteran population reached, a deeper understanding of factors that impact implementation and sustainment is needed to scale the program beyond the early adopting sites. 27,28 This article qualitatively examines the facilitators and barriers that impact implementation, sustainment, and ultimately veteran access to the PRIDE intervention, including factors associated with shifting to virtual delivery during the first year of the COVID-19 pandemic.

Study Design and Guiding Framework

Retrospective and prospective exploratory descriptive qualitative methodology and rapid qualitative analysis were used to clarify determinants of implementation across 18 sites using (1) existing facilitator field notes from previous site visits and (2) key informant interviews. Study methodology (interview guide and analytic strategy) and interpretation of findings were guided by two implementation frameworks: the Consolidated Framework for Implementation Research (CFIR) and the Health Equity Implementation Framework (HEIF). 29,30
CFIR is a well-accepted framework for implementation science research and is particularly useful in guiding formative implementation research. The HEIF complements CFIR in this context given its specific focus on using implementation science to decrease health disparities rooted in systemic inequity. Similar to CFIR, the HEIF focuses on multiple levels of implementation determinants, including societal influence (i.e., wider systemic inequality), outer/inner context, and the clinical encounter.

Participants and Recruitment

Prospective participants (N = 29) were PRIDE site implementation leads and members of the PRIDE learning collaborative from 29 sites. These prospective participants were sent a personalized secure VA email inviting participation in key informant interviews. Prospective participants were informed of the purpose of the study and that participation was voluntary, one-time, and confidential. The sample size was a priori determined to be N = 20, since this number of interviews (given the relatively narrow research question) was likely to lead to data saturation. 31,32 Participant inclusion criteria were being a VA staff member in one of the following roles: facility PRIDE implementation lead, PRIDE group leader, and/or local clinic/organizational leader familiar with PRIDE. For participants who agreed to study participation, informed consent was completed via encrypted email, with consent forms signed via PDF digital signature. Participants were offered an opportunity to ask any questions of a project staff member prior to digitally signing the consent form. Among prospective participants who expressed interest in participating, none refused participation. After participation, each consented participant was asked to identify up to three additional staff who could be approached for participation.

Procedures

Site Field Notes

As part of a VHA Innovators Network Spread quality improvement grant prior to the current study (mPIs TL and MH), TL completed external facilitation site visits and recorded semi-structured field notes for each site. The site visits (which some sites did not receive in person due to the COVID-19 pandemic) included a training in the PRIDE manual for group leads, a training in LGBTQ+-affirming care open to all staff at the facility, and a facilitated meeting with site leadership to promote the rationale and importance of LGBTQ+-affirming care and the PRIDE group. Field notes detailed items such as the visibility of LGBTQ+-affirming care symbols, hospital leadership engagement in the site visit, and other clinic- and provider-level factors. Although these field notes were not originally created for research, for the current study we obtained IRB approval to use the field notes to triangulate analysis with participant interviews (see the "Analysis" section below).

Participant Interviews

Consented participants completed a semi-structured qualitative interview, which was audio recorded and transcribed by professional transcriptionists external to the team. Interviews lasted on average 48 min. Given the small study sample size and small pool of prospective participants, participants were not asked to report demographic information in order to preserve their anonymity and to alleviate concerns about reprisal for candid responses. 33 The interview guide (see Appendix 1) was piloted internally in the research team prior to use with participants.
The interview guide focused on describing facilitators and barriers of implementing the PRIDE intervention and was broken down loosely into sections according to four CFIR/HEIF domains: Intervention Characteristics, Implementation Process, Inner Setting (what is happening inside the specific VHA facility), and Outer Setting/Societal Influence (what is happening outside the specific VHA facility, in places such as the local community or broader VHA). 29 All study procedures were approved by the Durham VA Health Care System Institutional Review Board.

Analysis

Due to the quick timeline of this 1-year pilot study and the goal of informing ongoing implementation work, a rapid qualitative analysis approach was used to analyze the data. [34][35][36] Microsoft Excel and Word were used to complete this process. Three team members (SW, ME, MH) developed and used a template to summarize the first three transcripts and develop consistency across the analysis team. A summary of each transcript was then created using the template, before further condensing the data using matrix analyses by case and CFIR/HEIF domain. Finally, tables were created of each barrier and facilitator with consensus definitions and mapping back to the CFIR/HEIF domains and subconstructs. For site visit field notes, a matrix was created of site by existing field note sections (trainings provided, implementation facilitators, implementation barriers, LGBTQ+ visibility, additional tasks, and miscellaneous comments). The matrix of site visit field notes was triangulated with the matrix of themes from the key informant interviews to generate a comprehensive understanding of implementation determinants. Reflexivity memos and an audit trail were among the methods used to ensure rigor during the design and analysis stages.

Investigator Reflexivity

The research team reflected gender diversity (cisgender, demigender, neutrois, and nonbinary) and sexual orientation diversity (bisexual, pansexual, queer, and straight). The team all had advanced degrees and was majority White, but reflected racial diversity in its leadership (Black, mixed race, and White leadership). At the time of the study, the team all had roles in either health services research or healthcare innovation. TL, MH, and SW all had prolonged engagement with prospective participants that began prior to initiation of the study. TL was the creator of the PRIDE intervention. TL and MH led spread and implementation of PRIDE. SW was a former site lead for the PRIDE intervention. The team's (TL, MH, SW) lived experience of PRIDE implementation was allowed to enhance the research process (e.g., interview guide, qualitative analysis, interpretation). The qualitative analytic team (ME, MH, and SW) all consider themselves to be LGBTQ+ advocates. The qualitative interviewer (ME) had no previous contact with prospective participants and was a master's-trained qualitative analyst with experience in data collection and content analysis.

RESULTS

A total of 20 staff participants consented to the study. One consented participant was lost to contact prior to completing an interview, leaving 19 staff participants from 18 sites who completed structured interviews. See Table 1 for site characteristics. Sites displayed a variety of PRIDE implementation stages (see Table 1). Field notes from 10 site visits to VA facilities were also analyzed. Figure 1 shows a graphic depiction of the specific CFIR/HEIF domains that corresponded to barrier- and facilitator-related themes.
The majority of themes mapped onto the Inner Setting and Process CFIR domains.

Barriers to PRIDE Implementation

Rapid qualitative analysis yielded 7 barrier-related themes (see Appendix 2 for definitions of barrier themes), which mapped onto 14 CFIR/HEIF domains (some themes mapped onto multiple domains). Themes below include quotations and anonymized ID codes.

Needing to Work with or Rely on Others as a Problem

This theme also links back to the two themes relating to clinical care and organizational support. Some site leads lacked clinic leadership support, protected time to work on setting up the group, and clerical support; this lack of support negatively impacted their ability to navigate the logistics of setting up the PRIDE group. Difficulty with finding referrals also fell under this theme, which often linked back to a lack of support from colleagues to expend effort to help identify veterans who would benefit from the group (e.g., "Advertising was not going well. I had one person who wanted to participate, so I completed it as an individual manualized therapy." Participant 8712). Participants felt that there were LGBTQ+ veterans receiving care at their facility; they just noted infrastructure barriers to identifying these veterans and referring them to the PRIDE group.

Discrimination and Systemic Oppression

This theme related to the HEIF domain Societal Influence (sociopolitical forces) and the CFIR domain Inner Setting (workplace culture). This theme also encompassed feelings of exhaustion among PRIDE site leads surrounding the effort and time taken for LGBTQ+ advocacy in environments without sufficient supports. For example, a participant (8168) stated:

The culture here is kind of awful. […] It just became more and more clear that it's just a civilized surface and a ton of bigotry and discrimination underneath. And I used to think that having a group and encouraging people to understand why it's important to be honest with your providers and all that was, you know, a step towards culture change. But now I just want them to be safe. And it's hard to be safe when your record is full of progress notes that say 'LGBT Group.' It's just not safe here.

Facilitators of PRIDE Implementation

Analysis yielded 6 facilitator-related themes (see Appendix 3 for definitions of facilitator themes), which mapped onto 20 CFIR/HEIF domains (some themes mapped onto multiple domains).

Themes Relating to the PRIDE National Diffusion Team

This topic related to three themes: Strong Base of Materials; Training, Knowledge Transfer, and Clinician Consultation; and Infrastructure for Shared Learning Across Sites. Themes in this topic area mapped onto three CFIR domains: Intervention Characteristics (design quality and packaging), Inner Setting (resources available for implementation, access to knowledge and information, learning climate), and Process (external change agents, learning collaborative, formally appointed implementation leaders). Overall, these themes highlighted the successes of the implementation strategies used in terms of helping the site leads feel supported and prepared to start and maintain the group. The design and content of the PRIDE group manual and handouts were appealing to site leads and group leaders. Moreover, the national PRIDE external implementation facilitator (Dr. Lange) was perceived as a source of knowledge and empowerment. For example, one participant (8460) noted,
She was very good about ensuring that if we came across any lack of support from leadership or things at our own facility, that she would be willing to kind of step in. And we didn't need her to do that, but I think that also just kind of empowered us to feel more confident in the choices we were making in continuing the group. Themes Relating to LGBTQ+ Collaboration and Training This topic area consisted of the following two themes: Intra-Facility LGBTQ+ Visibility, Collaboration, and Support; and Access to LGBTQ+ Training, Expertise, and Non-VA Community Organizations. These themes related to two CFIR domains: Inner Setting (culture, implementation climate, networks, and readiness for implementation) and Outer Setting (cosmopolitanism). Themes in this topic area tied into the need for facilitators to approach implementation of the PRIDE group with a foundation of knowledge and collegial/ institutional support. This topic area also tied back to a counteracting force against the barrier theme Discrimination and Systemic Oppression. High levels of LGBTQ+ institutional visibility and LGBTQ+ expertise/training could somewhat counteract systemic LGBTQ+ oppression. One participant (8911) noted, "The [Health System] Director that we have now is very supportive of anything that I do. My department heads -they support what I do. So I don't have any issues with anybody supporting my running the group, and there's always people who want to be in and want to participate in the group." Maintaining Local Gains by Working Together This theme reflected the importance of having strong relationships with colleagues, clinic leads, and facility leaders. This theme related to Inner Setting (networks and communication, readiness for implementation) and Process (executing, reflecting, and evaluating). This theme was in stark contrast to the barrier-related theme Needing to Work with or Rely on Others as a Problem. Within this theme, participants noted being able to have a team to support implementation of PRIDE. These participants found that relying on others was a support rather than a burden. One participant (8574) stated, "All of my colleagues are really on-board with posting visual safety items in their offices, which made it more comfortable, I think, for Veterans to kind of out themselves in the therapy room. And so having colleagues give me referrals and having my facility give me the space I need to create this groupthat was really helpful." Facilitators Among Sites Completing One or More Cohort Six of the participants from sites that completed one or more cohort of the PRIDE intervention noted themes relating to the positive effects of external facilitation. All but one of the participants from sustaining sites also noted support from either colleagues or leadership being important for implementation and/or sustainment. DISCUSSION This study used rapid qualitative analysis to clarify implementation determinants of an LGBTQ-affirming health education group intervention for veterans. Findings indicated the importance of the inner setting CFIR domain in both barriers and facilitators of PRIDE implementation. These facilitylevel factors are important because they reflect the structure, culture, and communication present at VA facilities. Additionally, LGBTQ+ visibility, support, discrimination, and systemic oppression arose across multiple themes. These issues point to the ongoing unique needs and experiences of LGBTQ+ veterans accessing healthcare. 
Although barriers to accessing care may be framed as either "actual" or "perceived," 26 this study's findings suggest that may be an oversimplification. The use of the actual/perceived access dichotomy may inadvertently invalidate the legitimate anxieties about discrimination that both LGBTQ+ veterans and LGBTQ+-affirming staff may have. Obstacles to accessing LGBTQ+-affirming care exist at the individual, clinician, and systemic level, and in this study, we observed a blending of actual/objective and perceived/subjective barriers to accessing care. Regarding group referrals, for example, several participants endorsed difficulty with obtaining referrals, especially when the facility environment was unwelcoming or lacked support for logistics in initiating the group. Additionally, participants often had a passionate desire to promote health equity, but campaigning for reform is a known contributor to advocacy burnout among professionals. 37,38 Site leads reported lack of protected time, resources, and internal support, which not only led to implementation/sustainment barriers but may have contributed to exhaustion around advocacy efforts. Given varying levels of internal facility support for implementation, the availability of external facilitation from the PRIDE National Diffusion Team appeared to help overcome these actual/perceived barriers to PRIDE implementation. This study also demonstrated the usefulness of the HEIF when combined with CFIR. The equity lens helped highlight how implementation of LGBTQ+-affirming interventions can be affected by anti-LGBTQ+ stigma, which can in turn burden LGBTQ+ veterans seeking healthcare. Although VHA has national policies that affirm LGBTQ+ veterans, there is variability in how these policies are implemented across facilities. 13 As demonstrated in this study, it is imperative that LGBTQ+-affirming policies and healthcare innovations have adequate facility-level support to promote consistent implementation. This study has three limitations. First, consistent with the qualitative design, generalizability was not a goal of this study. Instead, we focused on answering "how" questions in rich detail. Second, not all regions of the US were represented in this study. However, this mirrors the spread of the PRIDE intervention, which focused on spread to the South. Third, for the most part only one participant was interviewed per site, which may not be fully representative of the experience of implementing and sustaining the program. This study also has two key strengths: 1 interviews were triangulated with field notes to ensure reliability; and 2 to minimize bias, interviews were conducted by an interviewer who was not affiliated with PRIDE implementation. CONCLUSIONS The current study clarified determinants of implementation and sustainment of an LGBTQ+-affirming educational group at 18 facilities within the VHA. Since the conclusion of the study, the PRIDE intervention has now been delivered at 42 VHA facilities across the country with an additional 17 VHA facilities currently preparing to implement the group for the first time. 39 Further spread of the PRIDE intervention will ensure equitable veteran access to this innovative program. 
Based on the study findings, there are four key recommendations for sites seeking to improve implementation or sustainment of PRIDE: (1) solicit support from leadership early in the implementation process, (2) build collaborative teams of LGBTQ+-affirming staff at and outside of the facility, (3) use and refer back to PRIDE implementation materials, and (4) discuss and address institutional intersectional equity. Future work may link determinants of implementation to potential targeted strategies for PRIDE site implementation or investigate effects of unequal power differentials in implementation of equity-focused interventions.
A Comparative Study on Translation Units for Bilingual Lexicon Extraction This paper presents on-going research on automatic extraction of bilingual lexicons from English-Japanese parallel corpora. The main objective of this paper is to examine various N-gram models of generating translation units for bilingual lexicon extraction. Three N-gram models, a baseline model (Bound-length N-gram) and two new models (Chunk-bound N-gram and Dependency-linked N-gram), are compared. An experiment with 10000 English-Japanese parallel sentences shows that Chunk-bound N-gram produces the best result in terms of accuracy (83%) as well as coverage (60%), improving on the previously proposed baseline model by approximately 13% in accuracy and by 5-9% in coverage. Introduction Developments in statistical or example-based MT rely largely on the use of bilingual corpora. Although bilingual corpora are becoming more available, they are still an expensive resource compared with monolingual corpora. So if one is fortunate enough to have such bilingual corpora at hand, one must seek the maximal exploitation of linguistic knowledge from them. This paper presents on-going research on automatic extraction of bilingual lexicons from English-Japanese parallel corpora. Our approach owes greatly to recent advances in various NLP tools such as part-of-speech taggers, chunkers, and dependency parsers. All such tools are trained from corpora using statistical methods or machine learning techniques. The linguistic "clues" obtained from these tools may be prone to some error, but they carry much partially reliable information that is usable in the generation of translation units from unannotated bilingual corpora. Three N-gram models of generating translation units, namely Bound-length N-gram, Chunk-bound N-gram, and Dependency-linked N-gram, are compared. We aim to determine characteristics of translation units that achieve both high accuracy and wide coverage, and to identify the limitations of these models. In the next section, we describe the three models used to generate translation units. Section 3 explains the extraction algorithm for translation pairs. In Sections 4 and 5, we present our experimental results and analyze the characteristics of each model. Finally, Section 6 concludes the paper. Models of Translation Units The main objective of this paper is to determine suitable translation units for the automatic acquisition of translation pairs. A word-to-word correspondence is often assumed in the pioneering works, and Melamed (2000) argues that the one-to-one assumption is not as restrictive as it may appear. However, we question this claim, since the tokenization of words for non-segmented languages such as Japanese is, by nature, ambiguous, and thus the one-to-one assumption is difficult to maintain. We address this ambiguity problem by allowing 'overlaps' in the generation of translation units and obtain single- and multi-word correspondences simultaneously. Previous works that focus on multi-word […] (Figure 1. Example sentence: "Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 28.") In this paper, we compare three N-gram models of translation units, namely Bound-length N-gram, Chunk-bound N-gram, and Dependency-linked N-gram. Our approach to extracting a bilingual lexicon is two-staged. We first prepare N-grams independently for each language in the parallel corpora and then find corresponding translation pairs from both sets of translation units in a greedy manner.
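As a concrete illustration of the first stage, the sketch below generates overlapping translation-unit candidates in the Bound-length and Chunk-bound styles described next. This is a minimal sketch in Python: the function names, the example chunking, and the assumption that content-word filtering has already been applied are ours for illustration, not taken from the paper.

from typing import List, Tuple

def bound_length_ngrams(content_words: List[str], max_n: int = 5) -> List[Tuple[str, ...]]:
    # All contiguous N-grams (1 <= N <= max_n) over the content words.
    # Overlapping units are deliberately kept, so single- and multi-word
    # candidates coexist until the greedy pairing stage rules some out.
    units = []
    for n in range(1, max_n + 1):
        for i in range(len(content_words) - n + 1):
            units.append(tuple(content_words[i:i + n]))
    return units

def chunk_bound_ngrams(chunks: List[List[str]]) -> List[Tuple[str, ...]]:
    # N-grams that never cross a chunk boundary; N is bounded by the chunk size.
    units = []
    for chunk in chunks:
        units.extend(bound_length_ngrams(chunk, max_n=len(chunk)))
    return units

# Hypothetical chunking of the Figure 1 sentence after content-word filtering.
chunks = [["Pierre", "Vinken"], ["years", "old"], ["join"], ["board"],
          ["nonexecutive", "director"]]
print(chunk_bound_ngrams(chunks))

Dependency-linked units could then be produced by concatenating chunks joined by a direct dependency link before calling the same generator.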
The essence of our algorithm is that we allow some overlapping translation units to accommodate ambiguity in the first stage. Once translation pairs are detected during the process, they are decisively selected, and the translation units that overlap with the found translation pairs are gradually ruled out. In all three models, N-gram translation units are built using only content (open-class) words. This is because functional (closed-class) words such as prepositions alone will usually act as noise, and so they are filtered out in advance. A word is classified as a functional word if it matches one of the following conditions (the Penn Treebank part-of-speech tag set (Santorini, 1991) is used for English, whereas the ChaSen part-of-speech tag set (Matsumoto and Asahara, 2001) is used for Japanese): it has an English part-of-speech tag of "CC", "CD", "DT", "EX", "FW", "IN", "LS", "MD", "PDT", "PR", "PRS", "TO", "WDT", "WD", or "WP"; it has the English stemmed form "be"; or it is a symbol (punctuation or brackets). We now illustrate the three models of translation units by referring to the sentence in Figure 1. Bound-length N-gram Bound-length N-gram was first proposed by Kitamura and Matsumoto (1996). The translation units generated in this model are word sequences from uni-grams up to a given length N. The upper bound for N is fixed at 5 in our experiment. Chunk-bound N-gram Chunk-bound N-gram assumes prior knowledge of chunk boundaries. The definition of "chunk" varies from person to person. In our experiment, the definition for English chunks complies with the CoNLL-2000 text chunking task, and the definition for Japanese chunks is based on "bunsetsu" in the Kyoto University Corpus. Unlike Bound-length N-gram, Chunk-bound N-gram will not extend beyond chunk boundaries. N varies depending on the size of the chunks (the average numbers of words in English and Japanese chunks are 2.1 and 3.4, respectively, for our parallel corpus). Figure 3 lists the set of N-grams generated by Chunk-bound N-gram for the sentence in Figure 1. Dependency-linked N-gram Dependency-linked N-gram assumes prior knowledge of dependency links among chunks. In fact, Dependency-linked N-gram is an enhanced version of the Chunk-bound model in that it extends chunk boundaries via dependency links. Although dependency links could be extended recursively in a sentence, we limit their use to direct dependency links (i.e., links of immediate mother-daughter relations) only. Two dependency-linked chunks are concatenated and treated as an extended chunk, and Dependency-linked N-gram generates translation units within the extended chunks. The distinct characteristics of Dependency-linked N-gram compared with previous works are two-fold. First, Yamamoto and Matsumoto (2000) also use dependency relations in the generation of translation units. However, their approach suffers from data sparseness (and thus low coverage), since the entire chunk is treated as a translation unit, which is too coarse. Dependency-linked N-gram, on the other hand, uses more fine-grained N-grams as translation units in order to avoid sparseness. Second, Dependency-linked N-gram includes "flexible" or non-contiguous collocations if dependency links are distant in a sentence. These collocations cannot be obtained by Bound-length N-gram with any N. Translation Pair Extraction We use the same algorithm as Yamamoto and Matsumoto (2000) for acquiring translation pairs. The algorithm proceeds in a greedy manner. This means that the translation pairs found earlier (i.e., at a higher threshold) in the algorithm are regarded as decisive entries. The threshold acts as the level of confidence. Moreover, translation units that partially overlap with the already found translation pairs are filtered out during the algorithm. The correlation score between translation units t_E and t_J is calculated by the weighted Dice coefficient, defined as sim(t_E, t_J) = log_2 f(t_E, t_J) × (2 f(t_E, t_J)) / (f(t_E) + f(t_J)), where f(t_E) and f(t_J) are the corpus frequencies of the two units and f(t_E, t_J) is their co-occurrence frequency.
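The following is a minimal sketch, assuming plain sentence-level co-occurrence counts, of how the weighted Dice score and the greedy, threshold-lowering pairing could be realized. The names are illustrative, the halving schedule is an assumption (the text only says the threshold is gradually lowered), and real overlap filtering would compare word positions within units rather than, as here, simply retiring exact units once paired.

import math
from collections import Counter
from itertools import product

def weighted_dice(f_e, f_j, f_ej):
    # Weighted Dice: log2 of the co-occurrence frequency times the plain Dice.
    if f_ej == 0:
        return 0.0
    return math.log2(f_ej) * (2 * f_ej) / (f_e + f_j)

def greedy_extract(e_sents, j_sents, t_init=100.0, t_min=2.0):
    # Count unit frequencies and sentence-level co-occurrences.
    f_e, f_j, f_ej = Counter(), Counter(), Counter()
    for e_units, j_units in zip(e_sents, j_sents):
        f_e.update(set(e_units))
        f_j.update(set(j_units))
        f_ej.update(product(set(e_units), set(j_units)))
    pairs, used_e, used_j = [], set(), set()
    t = t_init
    while t >= t_min:
        for (u, v), c in f_ej.items():
            if c < 2 or u in used_e or v in used_j:
                continue  # require co-occurrence of at least 2; skip retired units
            if weighted_dice(f_e[u], f_j[v], c) >= t:
                pairs.append((u, v))
                used_e.add(u)
                used_j.add(v)
        t /= 2  # assumed lowering schedule; the paper only says "gradually lowered"
    return pairs

The experimental decrement of the score threshold from 2 to 1 described below could be appended as one extra pass with t = 1 while keeping the co-occurrence cutoff at 2.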
Experimental Setting The data for our experiment is a 10000-sentence aligned corpus of English-Japanese business expressions (Takubo and Hashimoto, 1995). 8000 sentence pairs are used for training and the remaining 2000 sentences are used for evaluation. Since the data are unannotated, we use NLP tools (part-of-speech taggers, chunkers, and dependency parsers) to estimate linguistic information such as word segmentation, chunk boundaries, and dependency links. Most tools employ a statistical model (Hidden Markov Model) or machine learning (Support Vector Machines). Translation units that appear at least twice are considered to be candidates for the translation pair extraction algorithm described in the previous section. This implies that translation pairs that co-occur only once will never be found by our algorithm. We believe this is a reasonable sacrifice to bear considering the statistical nature of our algorithm. Table 1 shows the number of translation units found in each model. Note that translation units are counted not by token but by type. We adjust the threshold T of the translation pair extraction algorithm as follows: T is initially set to 100 and is gradually lowered until it reaches the minimum threshold of 2, as described in Section 3. Furthermore, we experimentally decrement the threshold T from 2 to 1 with the remaining uncorrelated sets of translation units, all of which appear at least twice in the corpus. This means that translation pairs whose correlation score is at least 1 (but below 2) are also attempted as correspondences. (Note that T plays two roles: (1) a threshold for the co-occurrence frequency, and (2) a threshold for the correlation score. During the decrement of T from 2 to 1, the effect is solely on the latter threshold (for the correlation score); the former threshold (for the co-occurrence frequency) does not change and remains 2.) The result is evaluated in terms of accuracy and coverage. Accuracy is the number of correct translation pairs over the extracted translation pairs in the algorithm. This is calculated by type. Coverage measures the "applicability" of the correct translation pairs to unseen test data. It is the number of tokens matched by the correct translation pairs over the number of tokens in the unseen test data. Accuracy and coverage roughly correspond to Melamed's precision and percent correct, respectively (Melamed, 1995). Accuracy is calculated on the training data (8000 sentences) manually, whereas coverage is calculated on the test data (2000 sentences) automatically. Accuracy Stepwise accuracy for each model is listed in Table 2 (Bound-length N-gram), Table 3 (Chunk-bound N-gram), and Table 4 (Dependency-linked N-gram). "T" indicates the threshold, i.e., the stage in the algorithm. "e" is the number of translation pairs extracted at stage T, and "c" is the number of correct ones found at stage T. Correctness is judged by an English-Japanese bilingual speaker. "acc" lists accuracy, the fraction of correct ones over extracted ones by type.
The accumulated results for "e", "c", and "acc" are also indicated. Coverage Stepwise coverage for each model is listed in Table 5, Table 6, and Table 7. As before, "T" indicates the threshold. The brackets indicate the language: "E" for English and "J" for Japanese. "found" is the number of content tokens matched by correct translation pairs. "ideal" is the upper bound of content tokens that could be found by the algorithm; it is the total number of content tokens in the translation units whose co-occurrence frequency is at least T in the original parallel corpora. "cover" lists coverage. The prefix "i-" denotes the fraction of found tokens over ideal tokens, and the prefix "t-" denotes the fraction of found tokens over the total number of both content and functional tokens in the data. For the 2000 test parallel sentences, there are 30255 tokens in the English half and 38827 tokens in the Japanese half. The gap between the number of "ideal" tokens and the total number of tokens is due to the filtering of functional words in the generation of translation units. (Figure 5. Venn diagram of the correct translation pairs extracted by each model; region counts: (1) 1992, (2) 115, (3) 1447, (4) 48, (5) 237, (6) 471, (7) 331.) Discussion Of the three models, Chunk-bound N-gram yields the best performance both in accuracy (83%) and in coverage (60%); Dependency-linked N-gram follows a similar transition of accuracy and coverage to Chunk-bound N-gram. Compared with Bound-length N-gram, it achieves approximately a 13% improvement in accuracy and a 5-9% improvement in coverage at threshold 1.1. Although Bound-length N-gram generates more translation units than Chunk-bound N-gram, it extracts fewer correct translation pairs (and results in lower coverage). A possible explanation for this phenomenon is that Bound-length N-gram tends to generate too many unnecessary translation units, which increase the noise for the extraction algorithm. Figure 5 illustrates the Venn diagram of the number of correct translation pairs extracted in each model. As many as 3439 translation pairs from Dependency-linked N-gram and Chunk-bound N-gram are found in common. Based on these observations, we could say that dependency links do not contribute significantly. However, as dependency parsers are still prone to some errors, we will need further investigation with improved dependency parsers. Table 8 lists sample correct translation pairs that are unique to each model. Most translation pairs unique to Chunk-bound N-gram are named entities (NP compounds) and one-to-one correspondences. This matches our expectation, as translation units in Chunk-bound N-gram are limited to within chunk boundaries. The reason the other two models failed to obtain these translation pairs is probably the large number of overlapping translation units they generate. Our extraction algorithm filters out overlapping entries once the correct pairs are identified, and thus a large number of overlapping translation units sometimes become noise. Bound-length N-gram and Dependency-linked N-gram include longer pairs, some of which are idiomatic expressions. Theoretically speaking, translation pairs like "look forward" should be extracted by Dependency-linked N-gram. A close examination of the data reveals that in some sentences, "look" and "forward" are not recognized as dependency-linked. These preprocessing failures can be overcome by further improvement of the tools used. Based on the above analysis, we conclude that chunk boundaries are useful clues for building a bilingual seed dictionary, as Chunk-bound N-gram has demonstrated high precision and wide coverage.
However, for parallel corpora that include a great deal of domain-specific or idiomatic expressions, partial use of dependency links is desirable. There remains a problem with our method: how to determine translation pairs that co-occur only once. One simple approach is to use a machine-readable bilingual dictionary. However, a more fundamental solution may lie in the partial structural matching of parallel sentences (Watanabe et al., 2000). We intend to incorporate these techniques to improve the overall coverage. Conclusion This paper reports on-going research on extracting bilingual lexicons from English-Japanese parallel corpora. Three models, including the one previously proposed by Kitamura and Matsumoto (1996), are compared in this paper. Through preliminary experiments with 10000 bilingual sentences, we find that our new models (Chunk-bound N-gram and Dependency-linked N-gram) gain approximately 13% in accuracy and 5-9% in coverage over the baseline model (Bound-length N-gram). We present quantitative and qualitative analyses of the results of the three models. We conclude that chunk boundaries are useful for building an initial bilingual lexicon, and that idiomatic expressions may be partially handled by dependency links.
“Scientific Research Nurtures Teaching” Based on Coaxial Electrospraying The effective implementation of "Scientific Research Nurtures Teaching" can benefit the fostering of professional talents in universities. The constant renewal of professional knowledge determines the key role of professional education and the formation of students' practical ability during their university life, and abundant professional research topics can ensure fruitful teaching materials. Taking scientific research on coaxial electrospraying as an example, this paper explains how to effectively implement "Scientific Research Nurtures Teaching" for both undergraduate and postgraduate students. Under the guidance of the "three-all education" principles: 1) the whole process of coaxial electrospraying can provide useful teaching material, such as the raw materials and experimental conditions, the implementation of the coaxial electrospraying processes, the analyses of the resulting nano products, and the writing of patents or research articles; 2) all staff around the coaxial electrospraying work can take an active part in teaching, such as the supervisors, the instructor, the administrative staff, the academic visitor, and safety officers; 3) all-round education can be carried out to foster the students' practical ability, organization and management ability, crisis resolution ability, innovation ability, organizational skills, and spirit of unity and cooperation, in addition to professional ability. THE METABOLISM OF PROFESSIONAL KNOWLEDGE RENEWING REQUIRES "SCIENTIFIC RESEARCH NURTURES TEACHING" Knowledge never stops moving forward. Shown in Figure 1 is a diagram of the renewal mechanism of professional knowledge. The fundamental knowledge that is well known to all professional talents has been edited into teaching materials. These teaching materials are imparted to students majoring in the field all over the world, in different languages and in different organizational formats, through classroom lessons and often some separate scientific experiments. After a good grasp of the fundamental professional knowledge, the students (often PhD or Master's postgraduate students, but also some undergraduate students) will also need to read scientific papers to follow the cutting-edge knowledge of their speciality. Later, they can begin their innovative investigations of new topics or even new projects under the guidance of their supervisors. Based on the theoretical learning of professional knowledge and the experimental exploration of professional practices, the students will be able to write research articles for publication in international journals. After enough accumulation of learning, the students and their supervisors may write reviews about a certain professional topic. In a given professional field, new speciality teaching materials can be published after more and more topics are reviewed and gradually systematized. This renewal metabolism means that a complete renewal of cutting-edge professional knowledge may take several decades. Meanwhile, it also shows that the professional exploration processes themselves are precious teaching materials, which provide a strong platform for implementing "Scientific Research Nurtures Teaching" for students, including bachelor's, master's, and PhD students.
However, in many colleges and universities, the professional teaching materials remain the same as those utilized 20 or even 30 years before. Those old professional teaching materials should be renewed for a better teaching effect and for well fostering the students' capability of innovation. In this paper, under the spirit of "three-all education" and based on the coaxial electrospraying, how to effectively implement "Scientific Research Nurtures Teaching" to students is discussed. THE KEY ROLE OF PROFESSIONAL EDUCATION AND THE FORMATION OF STUDENTS PRACTICAL ABILITY REQUIRE "SCIENTIFIC RESEARCH NURTURES TEACHING" University is place where the students gradually transfer their study from the classroom and from the teachers in the rostrums, to the self-practices and own ' s perception. During their growth from primary school, to junior high school, to senior high school and to university, the students are familiar with achieving the fundamental knowledge from the text books through classroom learning (as shown in Figure 2). Figure 2. The core of university professional teaching and the formation of students' professional ability However, after entering the university, where they can grow from undergraduate students to master and doctoral students, the ratios of getting knowledge from their practices increase gradually. Particularly, their professional knowledge, which is totally different with the fundamental knowledge for all the people, need more and more experiments to deepen and to broaden. And they can update the professional cognition through the cutting-edge studies. In a word, the core role of university professional teaching should be the formation of students' professional ability, particularly the ability studying from their selfpractice and from the real world. Needless to say, implementations of "Scientific Research Nurtures Teaching" to the students can benefit the professional teaching and also the formation of the students' practice ability. THE ABUNDANT PROFESSIONAL RESEARCH TOPICS CAN ENSURE A FRUITFUL "SCIENTIFIC RESEARCH NURTURES TEACHING" Professional knowledge growth is an inevitable requirement for disciplines to improve the effect of education. Teaching knowledge is the main way to cultivate talents. After the 18th century, the sharp differentiation of disciplines is an important reason for the rapid development of education. At the same time, the division of different disciplines and majors makes the concept of specialization and professional talents established, and higher education in the modern sense has really developed. The richness, hierarchy and renewal speed of a discipline's knowledge determine the professional level of training talents in a discipline. In many scientific fields, a common phenomenon is that the professional teaching can't follow the steps of knowledge growth rate. Thus, a combination of direct scientific research and classroom teaching can greatly promoting the growth of professional talents, nursing new teaching approaches. In the specialty of "Materials Science and Engineering", the students are often taught all kinds of materials preparation and characterization methods, and often the optimization and successful implementation comprise the mainstream of materials engineering. Among all types of advanced materials production methods, electrospraying is a popular one, which is easy to implement and effective in creating polymeric particles with a size from several decades of nano meters to several microns [1][2][3][4]. 
Electrospraying and electrospinning are the two most important branches of electrohydrodynamic atomization (EHDA) methods, which take advantage of the easy interactions between working fluids and electrostatic energy [5][6][7][8][9][10]. After the popularity of electrospinning during the past three decades for creating a series of structural nanofibers, including core-shell [11], homogeneous [12][13][14], side-by-side [15,16], tri-layer core-shell [17], and other complicated multiple-chamber ones [18], electrospraying is also becoming more and more popular in laboratory experiments and potential industrial applications. These advanced EHDA methods contain abundant material for implementing "Scientific Research Nurtures Teaching". With the development of electrospraying for creating particular materials, coaxial electrospraying is gradually standing out for its powerful capability of generating core-shell structures. Shown in Figure 3 is a diagram of a typical coaxial electrospraying process. An electrospraying system includes four sections: two syringe pumps for driving the two working fluids, a power supply for applying the high voltage to the fluids, a concentric spinneret for guiding the two working fluids into the electrical field in an organized manner (i.e., one surrounding the other), and a grounded collector. These contents are fundamental teaching materials for students to learn how to carry out an electrospraying process. Certainly, the most useful materials that can be exploited to nurture professional teaching are the treatments of different kinds of working fluids for creating novel nanostructures. With coaxial electrospraying as a scientific research example, how to effectively implement "Scientific Research Nurtures Teaching" is explained as follows. EFFECTIVE IMPLEMENTATIONS OF "SCIENTIFIC RESEARCH NURTURES TEACHING" UNDER THE DIRECTIONS OF "THREE-ALL" EDUCATION SPIRITS During the implementation of "Scientific Research Nurtures Teaching", the raw materials from scientific research are, on one hand, an important issue. On the other hand, how to take advantage of those materials for teaching is another important concern, which may be even more important than the first. Fortunately, the "three-all education" principles give hints on how to conduct "Scientific Research Nurtures Teaching". In 2018, the Ministry of Education in China put forward the concept of "three-all education" for the comprehensive reform of higher education. "Three-all education" means all-staff education, all-process education, and all-round education. The comprehensive reform of "three-all education" is not only the integration of current education projects, carriers, and resources, but also the reconstruction of long-term personality education, systems, and standards. Through this pilot reform, an integrated personnel education system is expected to be built for running a socialist university with Chinese characteristics and cultivating socialist builders and successors with all-round development of morality, intelligence, physique, art, and labor. As shown in the diagram of Figure 4, "three-all education" and "scientific research nurtures teaching" can be completely combined for fostering innovative talents in higher education. With the University of Shanghai for Science and Technology as an example, a series of methods is suggested for effectively implementing "scientific research nurtures teaching" as follows.
Figure 4. "Three-all education" and "scientific research nurtures teaching" highly combined for fostering innovative talents in higher education. All-process education during scientific research for nurturing teaching A complete scientific research process includes many sections. In general, with coaxial electrospraying as an example, it comprises the preparation of raw materials and experimental conditions, the implementation of the coaxial electrospraying processes, the analyses of the resulting nano products, and, last but most important, summarizing the experimental contents and writing the patent or research articles (Figure 5). During these processes, different materials can be refined for teaching students at different levels. For example, the raw materials and experimental conditions, i.e., how to carry out an electrospraying experiment, can be used to teach undergraduate students how to prepare nanoparticles. During the same processes, however, the postgraduate students studying for Master's and PhD degrees can be taught how to design targeted core-shell products based on their past experience of scientific studies. How to optimize the experimental conditions for creating the desired structural nanoparticles is often a real engineering issue. For nurturing teaching for undergraduate students, the parameters of coaxial electrospraying can be imparted to them one by one, such as the applied voltage, the fluid flow rates, the particle collection distance, and the influences of the environmental conditions. However, for nurturing teaching for postgraduate students, the systematic investigation of experimental parameters, the interaction of different parameters, and the comparison between electrospinning and electrospraying can be useful materials for deepening and broadening the postgraduate students' knowledge and practical experience of coaxial electrospraying [19]. All-staff education during scientific research for nurturing teaching During scientific research, the students will come into contact with all types of people on campus. These people include their supervisors, other teachers, the instructor, the administrative staff, the academic visitor, and even students of different levels and from different backgrounds (Figure 5). Often the students' supervisors are the mainstay of teaching. However, all the people that the students contact during the research processes can provide useful teaching for acquiring knowledge and practical experience. For example, the laboratory safety officer is very effective in teaching the students (whether undergraduate or postgraduate) how to implement coaxial electrospraying in a safe way and keep them from all types of dangerous factors in the laboratories. All-round education during scientific research for nurturing teaching Besides professional ability, a talent graduating from the university should also have the following basic professional abilities: technical ability, written expression ability, interpersonal communication ability, lifelong learning ability, logical thinking ability, practical ability, organization and management ability, crisis resolution ability, innovation ability, organizational skills, the spirit of unity and cooperation, and so on (Figure 5). During scientific research on coaxial electrospraying, a wide variety of opportunities can be found for nurturing the above-mentioned abilities in the students.
For example, the characterizations of electrosprayed core-shell nanoparticles need many expensive instruments, which are limited to all the students, for example the transmission electron microscope and the scanning electron microscope. The students can train their capability of organization and cooperation, by which samples from some students are organized together for measurements. This not only save the time and fee for conducting these samples from different persons, but also an opportunity for them to study from each other about how to prepare the samples subjected to the SEM and TEM observations. CONCLUSION Scientific Research Nurtures Teaching" in universities is the request of the metabolism of professional knowledge renewing, the key role of professional education and the formation of students practical ability, and the abundant professional research topics as fruitful teaching materials. Under the spirits of "three-all education" by Ministry of Education in China, "Scientific Research Nurtures Teaching" can be effectively carried out based on examples of the scientific researches about coaxial electrospraying. All-process of coaxial electrospraying (such as the raw materials preparation, implementation of the coaxial electrospraying processes, analyses of the resulted products) can be useful materials. All-staff around the coaxial electrospraying including the supervisors, the instructor, the administrative staff, the academic visitor, and safety officers can take an active part in the teaching. The abilities of students such as professional and practical ability, organization and management ability, organization skills and spirit of unity and cooperation can be efficaciously trained.
Epigastric Distress Caused by Esophageal Candidiasis in 2 Patients Who Received Sorafenib Plus Radiotherapy for Hepatocellular Carcinoma: Case Report Abstract Sorafenib followed by fractionated radiotherapy (RT) has been shown to decrease the phagocytic and candidacidal activities of antifungal agents due to radiosensitization. Moreover, sorafenib has been shown to suppress the immune system, thereby increasing the risk for candida colonization and infection. In this study, we present 2 hepatocellular carcinoma (HCC) patients who received sorafenib plus RT and suffered from epigastric distress caused by esophageal candidiasis. Two patients who had received sorafenib and RT for HCC with bone metastasis presented with hiccups, gastric ulcer, epigastric distress, anorexia, heartburn, and fatigue. Empiric antiemetic agents, antacids, and painkillers were ineffective at relieving symptoms. Panendoscopy revealed diffuse white lesions in the esophagus. Candida esophagitis was suspected. Results of periodic acid-Schiff staining were diagnostic of candidiasis. Oral fluconazole (150 mg) twice daily and proton-pump inhibitors were prescribed. At 2-week follow-up, esophagitis had resolved and both patients were free of gastrointestinal symptoms. Physicians should be aware that sorafenib combined with RT may induce an immunosuppressive state in patients with HCC, thereby increasing their risk of developing esophagitis due to candida species. INTRODUCTION Candida species are part of the normal gastrointestinal (GI) flora in humans; however, patients with impaired immunity, those with chronic diseases such as cancer and diabetes mellitus (DM), patients with a history of recurrent antibiotic usage, and those receiving chemotherapy and/or radiotherapy (RT) are at increased risk of developing candida esophagitis. [1][2][3][4] Sorafenib is a kinase inhibitor commonly used as treatment for advanced renal cell carcinoma and hepatocellular carcinoma (HCC). The drug inhibits intracellular raf kinases (CRAF and BRAF) as well as cell surface kinase receptors such as Fms-like tyrosine kinase receptor 3 (Flt-3), c-kit, Ret, vascular endothelial growth factor receptor (VEGFR)-2, VEGFR-3, and platelet-derived growth factor receptor beta (PDGFR-beta). 5,6 One of the most common adverse effects of sorafenib is upper and lower GI distress, which manifests as reflux or dyspepsia and epigastric pain, causing appetite loss, weight loss, and fatigue. 7,8 Sorafenib suppresses CD4+ T-cell activation and induces T-cell cycle arrest 9 and has been demonstrated to significantly enhance the sensitivity of human HCC cell lines to irradiation. 10,11 A growing body of evidence shows that irradiation has direct DNA damage-dependent effects, sending signals to distant normal tissues via a process known as the abscopal effect. 12,13 In addition, fractionated irradiation has been shown to suppress interferon-gamma (IFN-g) 14 and decrease the percentage of dendritic cells (DCs) and macrophages in vivo. 15 Moreover, radiation therapy can modulate the pharmacokinetics of anticancer drugs. 16 These lines of evidence support the possibility that sorafenib and RT can synergistically induce an immunosuppressive state, thereby increasing the risk for infection due to candida species. The symptoms of candida esophagitis mimic those of GI upset in patients taking sorafenib, which can lead to misdiagnosis and inadequate treatment. Herein, we present 2 patients with HCC who received sorafenib concurrently with RT as well as after completion of RT.
Both patients developed candida esophagitis, the symptoms of which were initially misdiagnosed as symptoms characteristic of sorafenib-induced GI distress. Case 1 A 71-year-old man with a history of chronic hepatitis C virus infection, DM, hypertension and benign prostate hypertrophy presented with a tender mass in the right subcostal area in June 2015. Results of needle biopsy were diagnostic of metastatic HCC. Laparoscopic right hepatectomy was performed in August, 2015 and histopathologic examination of resected specimens revealed HCC. Positron emission tomography-computed tomography (PET-CT) scan showed multiple bone metastases. A total radiation dose of 45 Gy was delivered in 15 fractions to the mass located near lumbar spine (L spine) 4 to 5 and a total dose of 39 Gy was delivered in 13 fractions to the right 7th rib. The radiation course began on September 9 and was completed on October 16, 2015. Sorafenib (200 mg) 400 mg twice daily was prescribed beginning on September 7, 2015. Approximately 1 week after beginning sorafenib, the patient began to complain of hiccups, epigastric distress, anorexia, heart burn, and fatigue. Empiric antiemetic agents, antacids, and pain killers were prescribed but the symptoms persisted. Panendoscopy revealed diffuse white lesions in the esophagus ( Figure 1). A diagnosis of candida esophagitis, grade IV, was made according to Kodsi classification. 17 Periodic acid-Schiff (PAS) staining was indicative of candidiasis involving the squamous epithelium of the esophageal mucosa ( Figure 2). Fluconazole (150 mg) 300 mg per os (p.o.) quaque die (qd) in 1 week was prescribed. At 2-week follow-up, panendoscopy demonstrated regression of candida esophagitis ( Figure 3). Physical examination at the same follow-up visit revealed complete resolution of hiccups and epigastric distress as well as significant weight gain. Case 2 An 80-year-old man with goiter and benign prostatic hyperplasia underwent laparoscopic segmentectomy for segment 5 of liver in October 2014 and cholecystectomy on November 11, 2014. Alpha-fetoprotein (AFP) level decreased from 265.2 ng/ml before surgery to 3.52 ng/ml after surgery; however, at follow-up in April 2015 the AFP level was 1260 ng/ ml. PET-CT scan in May 2015 revealed multiple bone metastases, including metastasis to the right scapula, the left 7th rib, the 10th thoracic (T) spine, and the 1st lumbar (L) spine but no local recurrence. Sorafenib (200 mg) 400 mg twice a day was prescribed in addition to local radiation therapy comprising a total dose of 30 Gy in 10 fractions delivered to T12 to L2 in May 2015. Grade II hand-foot syndrome was noted during the course of sorafenib and RT. In August 2015, the patient presented with persistent bone pain and an AFP level of 31,526 ng/ml. A total dose of 30 Gy in 10 fractions was delivered to T10-L1 and a total dose of 39 Gy in 13 fractions was delivered to lesions in the right scapula and left 7th rib concurrent with sorafenib (200 mg) 400 mg per day beginning in September 2015. Epigastric pain, hiccups, anorexia, and tarry stool were noted during the periods of treatment. Empiric agents were administered but the patient still complained of retrosternal pain on swallowing and persistent hiccups. Panendoscopy revealed plaques in the upper and mid esophagus indicative of candida esophagitis as well as esophageal and gastric ulcers. An 1-week regimen of fluconazole (150 mg) 300 mg p.o. qd for candidiasis and Takepron, 30 mg p.o. qd for the esophageal and gastric ulcers was administered. 
Physical examination at 2-week follow-up revealed complete resolution of hiccups and epigastric distress as well as significant weight gain. The need for informed consent was waived by the Institutional Review Board of the Far Eastern Memorial Hospital (FEMH-IRB-104172-C) and retrospective data were collected after receiving approval from the Institutional Review Board of the Far Eastern Memorial Hospital (FEMH-IRB-104172-C). DISCUSSION The Sorafenib HCC Assessment Randomized Protocol (SHARP) and the Asian Pacific Trial demonstrated that sorafenib (Nexavar, Bayer Pharma AG, Berlin, Germany) was associated with significantly better survival of patients with HCC than placebo. 7,8 RT combined with sorafenib results in marked tumor shrinkage but has been shown to be associated with systemic skin reactions. 18,19 The results of a phase II trial showed that radiation therapy plus sorafenib results in a partial response rate of 55% in patients with unresectable HCC. 20 Grade 2 and 3 diarrhea was reported in 25% of patients who received radiation therapy concurrently with sorafenib and in 5.6% of patients who received radiation therapy after sorafenib. Moreover, grade 2/3 gastric or duodenal ulcer was reported in 8.4% of patients who received sequential use of sorafenib. 20 However, the incidence of diarrhea of grade 3/4 ranged from 6% to 8% and the grade 3/4 of anorexia and nausea was 0% to 2% in patients treated with sorefenib only. 7,8 These data suggest the percentage of GI adverse effects were higher in multiple modalities. The classic symptoms of infectious esophagitis include dysphagia, odynophagia, and retrosternal pain on swallowing. 4 It can cause candida esophagitis when patients with impaired immunity, with chronic disease or under medications, such as gastric acid suppression therapy, malignancy, human immunodeficiency virus disease, illnesses characterized by immunodeficiency, DM, corticosteroid therapy, recurrent antibiotic use, prescribed chemotherapy and/or RT, proton pump inhibitors, H2-receptor antagonists, and prior vagotomy produce hypochlorhydria, which alters the colonization of the stomach by oral cavity bacteria and yeast and is thought to increase the risk of infectious esophagitis. [1][2][3][4]21,22 The prevalence of esophageal candidiasis is 0.8% to 1.2%. 4,23 In the current report, both patients under concurrent RT and sorafenib suffered from hiccups, epigastric distress, anorexia, heart burn, or retrosternal pain on swallowing and fatigue that were similar those of GI upset caused by sorafenib. Furthermore, we reviewed the records for 44 patients under such schedule in our institute retrospectively, 3/44 (6.8%, including 2 patients reported here) had epigastric distress or anorexia with panendoscopy-proved esophageal candidiasis. Zhao et al 9 found that sorafenib suppressed CD 4 þ T-cell activation, proliferation, and cytokine production and induced T-cell cycle arrest and apoptosis in a dose-dependent manner. Hipp et al 24 observed that sorafenib inhibited DCs antigen presentation, DC migration and their capability to stimulate primary T-cell responses by reducing the secretion of cytokines and the expression of major histocompatibility complex and CD1a molecules. These inhibitory effects were found to be mediated by the inhibition of phosphatidylinositol 3-kinase (PI3K), mitogen-activated protein kinase (MAPK), and nuclear factor kappa-light-chain-enhancer of activated B cells (NF-kB) signaling. 
These findings provide evidence that sorafenib suppresses the immune system and therefore increases the risk for infections due to candida species. Opsonized candida species are ingested by both monocytes and monocyte-derived macrophages, but uptake of unopsonized candida is mediated only by monocyte-derived macrophages. 25,26 Additionally, IFN-g is one of the major factors that augment the phagocytic and candidacidal activities of human macrophages. 27 Recently, Tsai et al 16 reported that local irradiation, no matter daily dose or off-target dose, modulates the area under the concentration versus time curve of anticancer drugs in plasma. Furthermore, a growing body of evidence shows that irradiation has direct DNA damage-dependent effects, sending signals to distant normal tissues via a process known as the abscopal effect. The effect leads to overall genomic instability and radiation susceptibility in surrounding and distant normal tissues. 12,13 Interestingly, fractionated irradiation has been shown in an animal model to suppress helper T1 (Th1) cytokine profiles, namely IFN-g and the IFN-ginducible 10 kDa protein (IP-10). 14 Song et al 15 also found that the percentages of DCs and macrophages were also lower after fractionated irradiation in an animal model. After patients recovered from the episode and in the sequential maintaining course with sorafenib only, there was no recurrent esophageal candidiasis. Putting these published observations together, it is apparent that irradiation could modulate the concentration of anticancer drugs with abscopal effects that hint the effects of sorafenib may be modulated when concurrent with RT and it may cause the response of nonirradiation area similar with the irradiation area. Sorafenib was shown to significantly enhance the sensitivity of the human HCC cell line SMMC-7721 to radiation in a schedule-dependent manner. 10,11 Moreover, there is evidence that irradiation can induce the compensatory activation of multiple intracellular signaling pathway mediators, such as PI3K, MAPK, VEGF, c-jun N-terminal kinase (JNK), and NF-kB. 28 The sorafenib-mediated blockade of the Raf/MAPK and VEGFR pathways therefore may enhance the efficacy of radiation. 29 The evidence suggests that sorafenib with fractionated irradiation or sorafenib followed RT delivered sequentially provide better results but may enhance the adverse effects associated with each treatment modality, thereby decreasing the phagocytic and candidacidal activities of antifungal drugs due to radiosensitization. Esophagogastroduodenoscopy with brushings or biopsy is currently the most sensitive and specific method for diagnosing candida esophagitis. The infection is characterized by the presence of patchy, whitish plaques covering a friable, erythematous mucosa. 23,30 For immunosuppressed patients with candida esophagitis, the recommended drug is oral fluconazole with a loading dose of 400 mg followed by 200 to 400 mg once daily for 2 to 3 weeks without local antifungal therapy. 1,31 In our patients, the epigastric and chest distress was improved after prescribed oral fluconazole accordingly. CONCLUSION To the best of our knowledge this is the first report to show that treatment with sorafenib concurrent with RT or following RT can result in candida esophagitis. Physicians should be aware that sorafenib and RT can synergistically induce an immunosuppressive state in patients with HCC, thereby increasing their risk for esophagitis due to candida species.
Discharge of psychiatric patients against psychiatrist’s advice Background: Discharge from the hospital against the doctor’s advice and refusal of receiving treatment is one of the significant issues at the time of hospitalization, which is especially crucial in relation to psychiatric patients. It can exacerbate the disorder and the subsequent complications and increase further hospital admissions. The present study was designed to evaluate the causes of discharge from the hospital and the refusal of receiving treatment against medical advice in hospitalized patients in Iran Psychiatric Hospital. Methods: The present study was a descriptive cross-sectional study. One hundred patients hospitalized in Iran Psychiatric Hospital discharged with personal consent against medical advice from July to December 2018 were studied. Two methods were used for assessment; the fulfillment of a routine ministry-approved checklist by the dischargers themselves and the face-to-face interview with both the patient and discharger based on a researcher-made checklist. Cohen’s Kappa coefficient was used to assess the agreement of the answers of patients to both routine ministry-approved and researcher-made checklists by SPSS software version 16.0 with an overall accuracy of 95%. Results: Based on the results extracted from the researcher-made checklist, 43 (43%) of the discharges were generally based on patient-related factors. The personal insistence to discharge by the patient was cited as the main reason for discharge. Cohen’s Kappa coefficient showed no significant agreement between the patient’s answers to the interview and what they have previously filled in the routine ministry-approved checklist. More specifically, the measure of agreement for answers of patients to questions in the standard checklist and the questions asked by the interviewer was 0.078 (p=0.167). Conclusion: From the results of this study, it can be concluded that the face-to-face interview based on the researcher-made checklist can more effectively determine the reasons for discharge of patients due to the accuracy of the interview. Introduction Discharge against medical advice (DAMA) is a condition in which the patient intends to leave the hospital early despite medical advice indicating that the patient is dissat-isfied with the services provided or a significant problem (1). The critical tasks of the hospital are to ensure the health status of the pati ents and to improve the quality of services provided to them. Patient's satisfaction with the quality of services provided and medical and nursing care are essential indicators in evaluating the quality and effectiveness of the health system (2). Various issues may play a role in increasing DAMA, such as demographic factors, mental health status, comorbid physical illnesses, previous hospitalizations, hospital services dissatisfaction, financial problems, family problems, caregiver-patient communication, lack of significant improvement in hospital, belief in traditional medicine, discomfort with an extended stay in the hospital, feeling of recovery, and the place of living (urban or rural). Patients admitted to psychiatric hospitals and departments are more likely to leave the hospital against medical advice than patients admitted to the internal and surgical departments. 
Evaluating the factors associated with DAMA is necessary because failure to complete the course of treatment is a risk factor for relapse, readmission, and additional costs for the patient (3). Examining the causes and factors related to DAMA can also identify weaknesses in mental health care and the steps needed for improvement (4). Given the lack of sufficient studies on this issue in Iran and the need to know the factors related to DAMA in psychiatric patients, this study investigated those factors using two different methods: (1) completion of the routine ministry-approved checklist by the dischargers themselves, and (2) interviews with both the patients and the dischargers based on a researcher-made checklist.

Data collection

The present study was a cross-sectional, descriptive, and analytical study performed on 100 patients who were discharged against their psychiatrist's advice from Iran Psychiatric Hospital between July and December 2018. Two instruments were used: the routine checklist of the Ministry of Health, Treatment, and Medical Training (MOHTME) and an interview based on a researcher-made checklist. The routine DAMA checklist is a form whose validity and reliability have been evaluated and approved by the MOHTME and which must be completed by patients who want to leave the hospital against medical advice. The researcher-made checklist was compiled by a focus group of psychiatrists working as faculty members at Iran University of Medical Sciences; interviews with patients' families and the nursing staff of Iran Psychiatric Hospital were also conducted to prepare it. Although the researcher-made checklist lists a greater number of factors, its differences from the routine ministry checklist are negligible.

One hundred patients who were discharged against medical advice during the study period were interviewed. Most were interviewed face to face in the hospital on the day of discharge; the rest were interviewed by telephone within one week of discharge. The routine ministry-approved checklist and the researcher-made checklist were completed by interviewing patients and dischargers, and the results of the two assessment methods were compared. The Ethics Committee of Iran University of Medical Sciences approved this study (ethics code: IR.IUMS.FMD.REC.1398.227). Verbal consent was obtained from the participants, who were assured that all information would remain confidential.

Inter-rater reliability

In inferential statistics, a measure of agreement evaluates the relationship between two quantities (5). What distinguishes this concept from other statistical correlations is that the two quantities come from two individuals, phenomena, or sources of decision making (6). The magnitude of this agreement is measured by Cohen's kappa coefficient (7), which quantifies how strongly two raters or sources of decision making agree.
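To make the agreement statistic concrete, the following minimal Python sketch computes Cohen's kappa for two hypothetical answer vectors; the vectors below are purely illustrative, not the study's actual responses.

```python
# A minimal sketch, assuming two hypothetical vectors of categorical answers
# (one per patient): the reason recorded on the ministry checklist versus the
# reason given in the interview. The values below are illustrative only.
from sklearn.metrics import cohen_kappa_score

checklist = ["recovered", "recovered", "insistence", "recovered", "staff",
             "recovered", "insistence", "recovered", "noise", "recovered"]
interview = ["insistence", "recovered", "insistence", "noise", "insistence",
             "recovered", "noise", "insistence", "noise", "staff"]

# kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
# p_e is the agreement expected by chance from the marginal frequencies.
kappa = cohen_kappa_score(checklist, interview)
print(f"Cohen's kappa = {kappa:.3f}")  # 1 = perfect, 0 = chance, < 0 = worse than chance
```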
The raters are in complete agreement when Cohen's kappa equals one. If there is no agreement among the raters beyond what would be expected by chance, the coefficient is zero, and a negative value implies that agreement is worse than chance. In this study, Cohen's kappa was used to determine the agreement between respondents' answers to the researcher-made checklist and their answers to the Ministry of Health checklist; the test was performed with SPSS software version 16.0.

Results

Demographic information: The mean age of the 100 patients was 31 years (range, 17-63); 71 patients (71%) were under 35 years old. Among the patients discharged against medical advice, 18 (18%) had been admitted to the emergency ward, 3 (3%) to the female ward, 34 (34%) to the Mehr ward, 31 (31%) to the male-one ward, 4 (4%) to the male-two ward, and 10 (10%) to the male-three ward. In 57% of cases, the discharge was based on the patient's own decision, in 15% on the companions' decision, and in 28% on both. Twenty-one percent of the patients had a previous history of discharge against medical advice. A summary of the results is listed in Table 1.

Patients' hospital records: From July to December 2018, 1,115 individuals were discharged from Iran Psychiatric Hospital, of whom 148 (13.27%) were discharged against medical advice. The average length of stay until DAMA across departments was 9.5 days; more specifically, it was 4 days in the emergency ward, 6.5 days in the female ward, 5.3 days in the Mehr ward, 14 days in the male-one ward, 15.1 days in the male-two ward, and 12.4 days in the male-three ward. The interviews also showed that the next destination after DAMA was home for 99% of the patients. Forty-six patients (46%) had a history of drug use and 35 (35%) of alcohol use; 57 (57%) had a history of suicide attempts and 25 (25%) of physical aggression. In 27 patients (27%), a relative had a history of hospitalization in a psychiatric ward, and in 2 of these (2%) that relative had been discharged against medical advice. Sixteen patients (16%) discharged against medical advice had a medical comorbidity, and 45 (45%) suffered from personality disorders.

Reasons for discharge against medical advice: The reasons for DAMA in the researcher-made checklist, consisting of 28 items based on the standard hospital checklist, were organized into five main factors: patient-related factors, staff-related factors, hospital environment-related factors, treatment-related factors, and hospital facilities-related factors.

(I) Patient-related factors
According to our interviews, 43 discharges (43%) were based on patient-related factors, the highest frequency.
Among the patient-related factors, personal insistence was cited by 45 patients, a sense of improvement by 12, job issues by 12, feeling embarrassed by hospitalization by 8, the necessity to attend ceremonies by 6, the need for a companion in the hospital by 2, and family misconceptions about hospitalization by 2. None of the patients reported the expiration of insurance or lack of financial support as a reason for discharge. Note that each patient was free to state several reasons.

(II) Factors related to the hospital environment
The interviews showed that 41% of discharges involved reasons related to the hospital environment. Among these, worry about the negative impact of other patients was a critical reason for DAMA (30 patients). High noise and inadequate opportunity for rest (21 patients), the possibility of smoking (14 patients), lack of amenities (13 patients), and the large number of hospitalized patients (12 patients) were also common reasons, whereas dissatisfaction with nutrition (10 patients) and difficulty in reaching the hospital (2 patients) were cited less often.

(III) Factors related to health care providers
Fourteen patients (14%) requested discharge against medical advice based on factors associated with health care providers. Inadequate nursing care was the most important factor, cited by 17 patients as one reason for DAMA; 9 patients complained about medical care and 9 about improper communication. Among the other causes, only 2 patients (2%) left the hospital because of restraint. Finally, only 2 patients left the hospital for treatment-related factors, such as feeling that treatment was not helping, and none mentioned the need to be referred to other centers as a reason for DAMA.

Cohen's kappa coefficient of agreement: As mentioned above, Cohen's kappa was used to evaluate the agreement between patients' answers to the questions in the standard checklist and the questions asked by the interviewer. The measure of agreement was 0.078 (p = 0.167). Since the coefficient was very close to zero and not significant at the 5% level, there is no significant agreement between patients' answers to the standard checklist and their answers to the interviewers.

Discussion

This cross-sectional, descriptive, and analytical study investigated the causes of DAMA based on the researcher-made checklist and the standard checklist. The researcher-made checklist was completed through an open-ended interview with the patient and his or her companion, conducted at the time of discharge or by telephone during the first week after discharge. The standard checklist was completed only by the patient's companion at the time of discharge; thus, the opportunity to interview patients and examine their own reasons for DAMA was one of the strengths of this study. According to the present data, 1,115 individuals were discharged from Iran Psychiatric Hospital, of whom 148 (13.27%) were discharged against medical advice.
Other similar studies in psychiatric wards have reported DAMA rates ranging from 6% to 35%. The average length of hospitalization was 9.5 days, and 88% of patients were hospitalized for 1 to 15 days. This finding is in line with the study by Setareh et al. (8) at Zare Hospital in Sari, Iran. Patients with psychiatric disorders are more likely to be discharged at an early stage because of the stigma of hospitalization in psychiatric wards; in addition, during the early days of hospitalization, trust between patients and staff has not yet been fully established. Among those discharged against medical advice, the share of first-time hospitalizations was higher than that of patients with previous psychiatric hospitalization (60% vs. 40%), in line with the study carried out by Sheikhmoonesi et al. (9) in 2011 at Zare Hospital in Sari. It may be argued that unfamiliarity with the hospital environment prevents patients and their families from accepting hospital-based treatments.

According to the interviews, the most frequent reasons for DAMA were the patient's insistence (45%), concern about the negative impact of other patients (30%), and high noise and lack of comfort in the department (21%). None of our patients mentioned the costs of hospitalization, the expiry of insurance, or the need to be referred to other centers. In a cross-sectional study by Setareh et al. (8) at Zare Hospital in Sari, the most frequent reasons for DAMA were family insistence (44%), patient insistence (33%), dissatisfaction with treatment staff (12%), and costs of hospitalization and family problems (12%). Because those reasons were reported by the families of patients, a limitation of that study was that the patients' own attitudes were not considered. Our results showed that among the nearly half (49%) of patients who strongly insisted on DAMA, substance use disorder and borderline personality disorder were more frequent than other diagnoses, consistent with most similar studies, including that of Setareh et al. (8). It can be hypothesized that lack of insight, denial of the disorder rooted in personality traits, and the temptation of substance use underlie DAMA in patients with substance use disorder and personality disorders, and that the comorbidity of these two disorders with high levels of such traits explains the insistence on DAMA.

The reasons for DAMA mentioned by most patients were "I am tired of the closed environment" or "I was bored in the closed environment." Concerns about the negative impact of other patients, as well as high noise levels and a less relaxed environment, were most frequent among discharged patients admitted to the emergency ward. Patients admitted to the emergency ward appear to be in a more severe phase of the disorder and show more aggressive behavior on arrival; this makes the situation more uncomfortable for other patients and may increase the rate of DAMA. In addition, 37% of the families of discharged patients described the non-separated environment between male and female patients as one of the factors related to their DAMA. The next factor in DAMA was the feeling of having recovered.
To explain this, it should be noted that a lack of insight into the disorder reduces one's tolerance for hospitalization; timely awareness and early psychoeducation are therefore essential for tolerating hospital-based treatments. A large number of hospitalized patients and the lack of amenities (12% and 13%, respectively) were further factors in DAMA. Suggestions made by several patients to improve amenities included a television in each room, improved cooling and heating systems, and increased walking time in the yard outside the wards. Although the feeling of recovery accounted for 12% of the causes of DAMA in the patient interviews, the data obtained from the hospital checklist indicated that it accounted for 79%. This incongruence can be explained by the routine checklist's failure to probe the reasons for DAMA. Overall, given the type and conditions of the interview, the researcher-made checklist reflects the reasons for the patient's discharge with personal consent more accurately than the standard checklist. Accordingly, it is recommended that future studies use the reasons for DAMA identified here to design solutions to the existing problems and then investigate the effects of those solutions on patients' decisions to request personal discharge. One limitation of our study is its cross-sectional design and the absence of a comparison of demographic factors and clinical diagnoses between these patients and routinely discharged patients. Future studies might also follow up patients who have been discharged against medical advice with respect to subsequent complications, including readmission.

Conclusion

Reducing DAMA is essential in psychiatric patients because failure to complete treatment is a risk factor for relapse, readmission, and additional costs for the patient. If physicians can identify patients at risk for DAMA early, they can take the necessary measures to increase patients' acceptance of hospitalization. From the results of this study, it can be concluded that collecting data with the researcher-made checklist shows the reasons for discharge more clearly, owing to the accuracy of the interview and the patients' calmness in answering.
Study on the Method of Test and Evaluation Measure Construction Based on Operational Desired Effect of Equipment

As a basic input to weapons testing, this paper draws on advanced ideas in military equipment testing and proposes a three-level measures system for operational test and evaluation, covering the combat mission, the combat task measures of the equipment system, and the weapon system/equipment. It elaborates the concepts of the operational mission description, the expected effects, the mission characteristics, and the mission measures. The paper presents not only the measures themselves but also the mission decomposition process, provides a solution for the subsequent decomposition of tasks and of the weapon system/equipment system, and offers a useful reference for the construction of similar measures systems.

INTRODUCTION

Weapons testing, as a necessary stage in the development of new weapons and equipment, is a trend in current military equipment development and management. It is one of the direct determinants of equipment quality and operational effectiveness, and it is a key problem that must be solved in the development of our army's weapons and equipment. At present, our army is vigorously developing operational testing of weapons and equipment. Operational testing is a comprehensive, system-level test, and it is both the key point and the difficulty of weapons testing. In constructing an evaluation measures system, a clear and reasonably structured set of test measures is the foundation for the orderly conduct of test activities and the basis for planning, designing, implementing, and evaluating operational tests. Establishing an evaluation measures system is therefore of far-reaching significance for our army's ability to carry out effective operational test and evaluation. Taking a certain type of surface-to-air missile weapon system as an example, this paper applies military measures-design standards to the mission decomposition process, introduces a construction method for mission-based measures, attempts to solve the problem of the basic inputs of test and evaluation, and provides a reference for the construction of similar measures systems [1-5].

PROBLEMS

Operational test and evaluation consists of field trials of weapon systems, equipment, and ammunition (or key components), conducted under realistic combat conditions with typical users, to determine their operational effectiveness and suitability, together with the assessment of the test results. Research on the measures system for operational test and evaluation addresses the basic inputs of the evaluation. Building the measures system involves decomposing the typical combat mission of the equipment and identifying the functional attributes required of the weapon system at each level. These functional attributes are identified as the tactical and technical performance measures of the weapon system. Design and analysis of experiments based on statistical theory then establishes the mapping relationship between the combat mission and the specific performance measures of the weapon system [6-9].
The combat mission is the basic input of the equipment requirement demonstration, and mission analysis is part of the initial requirement demonstration of equipment. Analysis of the typical combat mission of the weapons and equipment must therefore come first when constructing the operational test and evaluation measures system. Through analysis of the combat mission and its layer-by-layer decomposition, the functions and detailed performance requirements that the weapons and equipment must satisfy to execute the commander's specific combat mission become clear, and the indicators of operational test and evaluation at all levels (combat mission, task, equipment measures system, and weapon system) can be identified, selected, and designed. The analysis and design of operational measures for weapons and equipment (also known as combat capability measures) is a key element of the design and implementation of weaponry tests; it runs through project demonstration, the development of the weapon equipment, and its entry into service, and it is of great significance to the construction and development of weapons and equipment [10-13].

The basic work in building the weapons test and evaluation measures system is to decompose the weapons mission. The basic flow of combat mission decomposition runs from the mission description, through analysis of the expected effects and of the combat mission characteristics, to the decomposition of the combat mission measures, which are the primary indicators of operational test and evaluation [14-17].

BASIC CONCEPTS AND THEIR CONNOTATIONS

To express the decomposition process of the weapons mission accurately and help readers grasp the contents of this article, the basic concepts used in the discussion, namely the mission description, the expected effect, the mission characteristics, and the mission measures, are detailed below.

(1) The combat mission
The basic elements of the action plan include the subject of the action, the action itself, the time of the action, and the object of and reason for the action; specific actions are generally not included. Three points deserve attention. First, the mission objectives should be directly or indirectly associated with one or more goals, so that achieving the objectives contributes to the mission. Second, the meaning should be as clear as possible and must not be ambiguous or confusingly expressed. Third, when formulating a mission, one should not intervene with too many guidelines or overly detailed provisions on the goals and methods; to enable concrete analysis of specific problems, full play should be given to field personnel's familiarity with the battlefield situation, and the initiative of the command staff should be fully motivated [18-22].

(2) The mission expected effect
The expected effect is defined by changes in physical or behavioral state. Changes in physical state are easier to evaluate than changes in behavior: a physical state change can usually be tracked and captured in real time, whereas a behavioral change may not appear in a timely manner, may not be easy to detect, and is therefore more difficult to assess [23-27].
The concept of the expected effect is intended to clarify the correspondence between the mission objectives and the tasks, helping combat commanders and staff determine the objective conditions that must hold for the goals to be reached. Four requirements are key. First, each expected effect should be directly linked to one or more mission objectives; that is, each expected effect should establish contact with at least one combat mission objective. Second, the expected effect of the mission should be quantifiable. Third, statements of effect should not specify the way and method by which the effect is achieved; it is enough to state the desired effect, without narrating the style and method of mission execution in excessive detail. Fourth, the expected effect should be distinguished from the conditions that support the combat mission, so that the expected effect is understood as the environmental basis for success rather than as another route to the mission objectives or as a new mission [28-29].

Defining the expected effects of the mission carefully is very important, because only when they are effectively defined is it easy to evaluate the contribution of the system's functions and task performance to combat mission effectiveness.

(3) The combat mission characteristics
Indicators should focus on characteristics. The measures for operational test and evaluation are constructed through in-depth, step-by-step analysis of the combat mission, the tasks and functions of the system, the characteristics of the mission, and the functional attributes of the weapon system. At the behavioral level, analysis of the combat mission should focus on recognizing the characteristics of the expected effects. According to future needs, links should be established between key capabilities and the demands of the combat mission; these links should be ranked by the degree to which future capability requirements depend on the combat mission characteristics, and the importance of each characteristic should be determined [30-32].

(4) The combat mission measures
Measures describe the characteristics of the weaponry and equipment system in terms of Key Performance Parameters (KPPs), Critical Technical Parameters (CTPs), and Key System Attributes (KSAs). The task performance and system functions of the weapons and equipment determine the completion of the combat mission [33-35]. The selection of measures should follow these principles: a measure should be as simple as possible, ideally a single metric; it should reflect an understanding of the action; it should reflect the action needed to complete the mission; it should be sensitive to the conditions that influence the action; it should distinguish between levels of performance; it should reflect the output, the performance, or the process of taking action; and it should exploit the respective strengths of absolute and relative values.
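To make the four layers and the mapping matrices of the next section concrete, the following minimal Python sketch represents objectives, expected effects, characteristics, and measures as simple data structures, with the mapping matrices A, B, and C stored as sets of linked pairs, and checks the traceability rules stated above. All mission content in the sketch is hypothetical and purely illustrative.

```python
# A minimal sketch of the decomposition layers, with mapping matrices
# A (objectives -> effects), B (effects -> characteristics), and
# C (characteristics -> measures) stored as sets of linked pairs.
objectives = ["defend assigned area", "defeat enemy attack"]
effects = ["launch-ready", "will to fight maintained"]
characteristics = ["availability", "reaction"]
measures = ["P(pass test launch)", "reaction time under changing conditions"]

A = {("defend assigned area", "launch-ready"),
     ("defeat enemy attack", "will to fight maintained")}
B = {("launch-ready", "availability"),
     ("will to fight maintained", "reaction")}
C = {("availability", "P(pass test launch)"),
     ("reaction", "reaction time under changing conditions")}

def orphans(items, links, side):
    """Return items that never appear on the given side of a mapping."""
    return [x for x in items if not any(pair[side] == x for pair in links)]

# Construction rule above: every expected effect must trace back to at least
# one objective, and every measure must trace back to a characteristic.
assert not orphans(effects, A, 1), "orphan expected effect"
assert not orphans(measures, C, 1), "orphan measure"
print("Traceability check passed: objectives -> effects -> characteristics -> measures")
```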
COMBAT MISSION MEASURES CONSTRUCTION METHOD AND ITS APPLICATION

4.1 Combat mission decomposition process

Decomposition of the weapons mission comprises four procedures: the mission description (including the mission statement and mission goals), analysis of the expected effects of the combat mission, analysis of the combat mission characteristics, and construction of the measures. Once these factors are determined and the final measures and their correlated attributes are established, the links are prioritized and the key links identified, so that attention can be shifted to them.

The basic process of combat mission decomposition starts from the description of the combat mission. First, according to the mission description, the expected effects of the combat mission are derived through decomposition with mapping matrix model A; then, according to the characteristics of the combat mission, the mission characteristics corresponding to each expected effect are recognized, and the operational mission measures are determined.

Mapping matrix model A: combat mission objectives to expected effects

The difficulty in establishing mapping matrix model A lies in obtaining the right information, which of course depends on the quality of the available data. Once the mission description (including the mission statement and mission operations) and the expected effects are determined, the mapping relationships of model A can be built. Constructing each mapping relationship requires answering the question: if a given expected effect is finally reached, will the corresponding mission objective be achieved? If the answer is yes, a mapping relationship between the mission objective and the expected effect exists. The model A matrix is then checked; if an expected effect corresponds to no mission objective, one returns to the literature search to satisfy the construction principles of model A. Finally, the hierarchical mapping between the mission system and the expected effects and characteristics is constructed.

Mapping matrix model B: expected effects to combat mission characteristics

Based on the definition of effects and the classification of the expected combat results, the following discussion presents the different classifications of the expected effects and the characteristics related to each. Determining the characteristics of the combat mission is one of the most difficult tasks in the decomposition process: because the effect is the output of the mission, the evaluated result is the final effect, and this effect must be measurable. The importance of this point becomes apparent in the following cases.

(1) Expected effect 1: manned and ready
For each desired effect, classifying the type of effect as physical or behavioral is very helpful. Expected effect 1 represents a change in the physical state of the weapon system: in this case, the missile weapon system changes from the state "moved into the firing position" to the state "launch ready". For expected effect 1, "availability" was chosen as the most appropriate characteristic, because "system maintenance" and "test launch of the missile weapon system" both serve to ensure that the system is available for launch. The definition of "availability" includes both "system maintenance" and "launch" in order to simplify the construction of the measures in mapping matrix model C.
Availability of equipment refers to the degree to which the equipment is in a normally working state at the start of a mission (influenced by equipment reliability, maintainability, testability, human factors, and the protection of resources) [11]. In this paper, "availability" refers to the degree to which the missile weapon system, after maintenance and test launch, performs its normal functions and reaches the launch-ready state.

(2) Expected effect 2: keep the will to fight
Expected effect 2 is classified as a change in the behavior of the weapon system or a change in behavior more broadly. Because there is no indication of whose will to fight is to be supported, the expected effect may need further clarification: if it concerns motivating the will to fight of the weapon operators, then the behavior of the weapon system is affected by the change; if it concerns the will to fight of all personnel involved in the mission operations, the change in behavior is wider, and this will affect the selection of the combat mission characteristics. Two points are important. First, the verb "keep" implies a behavioral change sustained over a period of time (or over the whole mission execution). Second, the expected effect is to move the war fighter's will to fight in a more positive direction.

Coordination among weapon-control personnel should be established, maintained, and improved, taking positive mutual cooperation as the starting point. A high fighting spirit enhances the operators' ability to focus, and the ability to focus lets them make rational use of knowledge and technology over the factors of the weapon system they control, which determines the combat worthiness of the weapon crew. Supporting the will to fight of weapon-control personnel thus enables them to deal effectively with hostile situations. Therefore, "reaction" was chosen as the key characteristic for expected effect 2. "Reaction" is defined as the speed with which weapon-control personnel respond to the battlefield situation; this definition helps in building the measures of mapping matrix model C.

(3) Expected effect 3: improve operational performance
Expected effect 3 is classified as a behavioral effect. To enhance combat mission performance, the usual consideration is how to enhance the interaction between operations and performance. Coordination is the key to operational performance; to take appropriate action, a high level of alertness to the battlefield situation must be maintained. "Coordination" refers to the degree of integration between the nodes of the missile weapon system as they implement continuous action in battle; by reducing redundant procedures, a coordination mechanism is created that improves operational performance. The operational node is an abstraction that merges a series of operations, tasks, and organizational functions, and it is an important device for describing the combat mission. "Early warning" refers to the capacity of the operational nodes of the missile weapon system to maintain and process relevant information and to take appropriate action. Understanding the definitions of "coordination" and "early warning" helps in constructing the measures of mapping matrix model C.
(4) Expected effect 4: get the enemy's intelligence
Expected effect 4 is classified as a physical effect; the requirement is that this situation be removed from the battle space, with the intention of preventing the enemy's military strategy and capability from influencing the state of will, or of taking action against the enemy. The characteristic "readiness", which covers planning and training, and the characteristic "coordination" represent a concerted effort. "Readiness" refers to the degree to which the combat nodes of the missile weapon system meet the requirements of its combat mission; the measure covers regularly trained and competent personnel, equipment status, assured supply, the storage system, and the number of available items of ammunition and equipment.

Mapping matrix model B defines the combat mission characteristics, and these definitions are very important in the design process of mapping matrix model C, where the measures for each characteristic are defined.

Mapping matrix model C: combat mission characteristics to mission measures

The characteristics of the combat mission point to the expected effects, so the measures must make it possible to assess whether the mission achieves the desired effects. Remember that the desired effect describes the objective or the end state at completion; the measures must therefore apply to that goal or end state. The attributes defined in the last step of the decomposition process help to complete it. Every final measure includes an interpretation of its meaning and a description of its scope of application. Some measures support multiple common benefits, and situations can arise, such as limited resources, that make certain mission-level measures difficult to assess. Mapping matrix model C is devoted to the design of mission-level measures; where quantitative measures cannot be used, qualitative treatment of the key problems should be offered instead.

(1) Availability
Availability links expected effect 1 (launch ready) with all three mission tasks. The definition of "availability" developed for mapping matrix model B shows that it is essential to evaluate how quickly the weapon returns to an immediately usable state. According to the regulations, the key availability characteristic is the test launch, so passing the test launch best demonstrates availability. Considering measures that cover a large number of combat missions, the most appropriate measure is "the probability that the weapon system passes the test launch in the normal state" (a small numerical sketch of estimating this measure is given after the conclusions). Because overhaul of the weapon equipment system is routine, a second measure worth considering is "the probability that the weapon system is in the normal state after system repair". The key problem for this characteristic is "evaluating the ability of the weapon equipment to return quickly to immediate usability".
(2) Reaction
Reaction links expected effect 2 (keep the will to fight) with mission goals two and three (defending our objectives and fulfilling the military mission). The definition of "reaction" developed for mapping matrix model B concerns weapon-control personnel making a quick response to a battlefield situation. "Fast" implies being on time, so the measure must capture how soon the response occurs. One can therefore construct the measure "the reaction time needed by weapon-control personnel to respond to changing conditions". To describe the changing conditions in further detail, a better formulation of the measure is "the reaction time of weapon-control personnel in responding to threats and changing environmental conditions". A second, more direct measure addresses the side effect of the will to fight on reaction: if the mission is successful, one should find out which factors would have made it unsuccessful, defining the measure as "the probability that the will to fight affects the survival of the weapon operator". The key problem for this characteristic is "evaluating the ability of weapon-control personnel to react rapidly to the battlefield situation". It is worth noting that this cannot be measured during acquisition or in a training program; it may only be observable incidentally during an actual military action or in after-action field analysis.

(3) Coordination
Coordination links expected effects 3 and 4 (improving operational performance and getting the enemy's intelligence) with the last two mission goals (performing the military mission and defeating the enemy's attempts). The definition of "coordination" developed for mapping matrix model B covers the connection of all combat nodes, across the vertical and horizontal dimensions, during the execution of the weapons' combat mission. This coordination must be continuous and cannot be interrupted; reducing redundancy and fostering collaboration enhance operational performance. "Redundancy" is assessed by measuring redundant actions; the corresponding measure is "the probability of redundancy according to the action plan". "Collaboration" is more difficult to measure: if coordination could simply be assumed, that might be best, but in most conditions it cannot, which makes coordination measures harder to construct. The coordination measures corresponding to the horizontal and vertical directions are "the rate of coordination across operational nodes" and "the rate of continuous vertical coordination across operational nodes". The ability to stop the enemy requires that our positions be coordinated in planning and preparation; hence the measure "the probability of smooth, planning-oriented coordination across operational nodes during the execution of the weapons mission". The key problem for this characteristic is "evaluating the ability of the operational nodes of the weapon equipment to coordinate across the combat mission, reducing redundant procedures and creating a coordination mechanism".
(4) Early warning
Early warning links expected effect 3 (improving combat performance) and mission goal 2 (fulfilling the military mission). The definition of "early warning" developed for mapping matrix model B requires the operational nodes to process relevant information and take timely, appropriate action, for which acquiring and maintaining minute-by-minute awareness of the battlefield situation is necessary. Measures can thus be written that evaluate whether the weapon equipment processes information and takes appropriate corrective action. The first measure for early warning is "the accuracy of operational node decisions according to the battlefield situation"; the second is "the rate of correct weapon control according to the battlefield situation"; the third is "the rate of correct operational node actions according to the battlefield situation". The key problem for this characteristic is "evaluating the ability of all operational nodes to provide early warning across battlefield situations".

(5) Readiness
Readiness links expected effect 4 (getting the enemy's intelligence) and mission goal 3 (defeating the enemy's attempts). The definition of "readiness" developed for mapping matrix model B is the "ready state". Most readiness measures represent the effect of the planning stage on the execution of the combat mission. One measure is "the rate at which combat mission execution is hindered by lack of training"; another is "the rate at which combat mission execution is hindered by lack of equipment, security, or funds". The key problem for this characteristic is "evaluating the ability to perform combat missions in the ready state".

It should be noted that, although the key problems are raised in this part of the discussion, mapping matrix model C, which relates the combat mission characteristics to the mission measures, is not reflected in Table 1; the key problems concern the mapping relationship between measures and characteristics.

CONCLUSIONS

This paper attempts to solve the problem of the basic inputs of weapon equipment operational test and evaluation. Drawing on advanced theory and our army's test practice, the author studies the construction method of the operational test and evaluation measures system and puts forward a three-level system of indicators: the combat mission, the combat task measures of the equipment system, and the weapon system, together with the connotations of the mission description, the expected effects, the combat mission characteristics, and the mission measures. The paper introduces a method for decomposing the combat mission from its objectives: by analyzing the operational mission objectives and the expected results, the characteristics of the combat mission are determined, and the final combat mission measures are obtained. The constructed combat mission measures are used to assess whether the weapon equipment has reached the expected effects in terms of operational capability, operational effectiveness, and operational suitability. The mission decomposition method introduced here also supports the subsequent decomposition of tasks and of the weapon equipment system, and it has guiding significance for the construction of other, similar measures systems.
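As the numerical sketch promised in the availability discussion above, the following Python fragment estimates the measure "probability that the weapon system passes the test launch in the normal state" from trial outcomes; the counts and the choice of a Wilson confidence interval are assumptions for illustration, not data from this paper.

```python
# A minimal sketch: estimate a pass probability from hypothetical test-launch
# counts and attach a 95% Wilson score interval (a common choice for a
# binomial proportion). All numbers are illustrative.
from statistics import NormalDist

passes, trials = 18, 20            # hypothetical test-launch record
p_hat = passes / trials

z = NormalDist().inv_cdf(0.975)    # two-sided 95% confidence
denom = 1 + z**2 / trials
center = (p_hat + z**2 / (2 * trials)) / denom
half = z * ((p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2)) ** 0.5) / denom
print(f"estimate = {p_hat:.2f}, 95% CI = ({center - half:.2f}, {center + half:.2f})")
```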
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Table 1. Combat mission critical ability
Handling Missing Data in Cross-Classified Multilevel Analyses: An Evaluation of Different Multiple Imputation Approaches

Multiple imputation (MI) is a popular method for handling missing data. In education research, it can be challenging to use MI because the data often have a clustered structure that needs to be accommodated during MI. Although much research has considered applications of MI in hierarchical data, little is known about its use in cross-classified data, in which observations are clustered in multiple higher-level units simultaneously (e.g., schools and neighborhoods, transitions from primary to secondary schools). In this article, we consider several approaches to MI for cross-classified data (CC-MI), including a novel fully conditional specification approach, a joint modeling approach, and other approaches that are based on single- and two-level MI. In this context, we clarify the conditions that CC-MI methods need to fulfill to provide a suitable treatment of missing data, and we compare the approaches both from a theoretical perspective and in a simulation study. Finally, we illustrate the use of CC-MI in real data and discuss the implications of our findings for research practice.

In MI, each missing value is replaced with multiple plausible values that are generated on the basis of the observed data and an imputation model (Rubin, 1987). A key requirement of MI is that the imputation model correctly takes into account the data structure and the relationships between the observed variables. This can be particularly challenging when the data have a clustered structure, in which observations are organized within higher-level units (e.g., students in schools, employees in teams, repeated measures in individuals). Although a number of studies have considered applications of MI in clustered data, much of this research has been concerned with hierarchical data, such as two- and three-level data (Enders et al., 2016; Goldstein et al., 2009; Grund et al., 2018b; Lüdtke et al., 2017; Schafer & Yucel, 2002; Wijesuriya et al., 2020). By contrast, the use of MI with nonhierarchical data structures, such as cross-classified or multiple-membership structures, is still poorly understood (for an overview, see Rasbash & Browne, 2008). The purpose of this article is to investigate the effectiveness of MI for the treatment of missing values in cross-classified data, wherein observations can belong to multiple clusters that do not form a clear hierarchy (Goldstein, 2011; Raudenbush & Bryk, 2002). Cross-classified data are common in many areas of research, for example, in cross-sectional data when individuals are organized in multiple higher-level units (e.g., students in schools and neighborhoods; employees in teams and fields of expertise; see also Claus et al., 2020; Fielding & Goldstein, 2006) or in longitudinal data when cluster membership changes over time (e.g., students in primary and secondary schools; see also Cafri et al., 2015). The treatment of missing data in cross-classified data can be extremely challenging because the variables can be observed at different levels, and the cross-classified structure implies a complex pattern of dependency between the observations that can no longer be captured by hierarchical models. In writing this article, we have three major goals. First, we aim to clarify the requirements that imputation approaches need to fulfill in order to provide a suitable treatment of missing values in the analysis of cross-classified data.
In this context, we focus on analyses with random intercepts and linear effects, and we discuss how the imputation approaches can be extended to address additional types of analyses. Second, we aim to compare the statistical properties of different MI approaches for cross-classified data (CC-MI) from both theoretical and practical perspectives and by using the results of a simulation study. To this end, we (a) introduce a novel approach to CC-MI that is based on the fully conditional specification (FCS) approach to MI and (b) outline a Bayesian joint modeling (JM) approach for the treatment of incomplete cross-classified data. Third, we illustrate the application of CC-MI in a worked example with real data from education research, provide recommendations, and outline limitations and extensions of the approaches that we considered.

This article is organized as follows. In the first section, we provide a brief introduction to the structure and analysis of cross-classified data. In doing so, we try to outline the most important structural features of cross-classified data and explain how they can be analyzed with cross-classified random-effects models (CCRMs). Next, we present the JM and FCS approaches to CC-MI and explain how these methods accommodate the structural properties of cross-classified data. In this context, we also consider ad hoc approaches that extend conventional methods for single- and two-level MI to better accommodate cross-classified data and that can be implemented in a wide variety of statistical software. Then, we present the results of a simulation study, in which we evaluated the statistical properties of these methods. Finally, we demonstrate the application of the methods in an example with computer code and real data, and we discuss the implications of our findings for the treatment of incomplete cross-classified data.

Cross-Classified Data

Cross-classified data are characterized by a clustered structure, in which observations belong to multiple clusters simultaneously. For example, consider the hypothetical scenario in Figure 1, where students are clustered within the units of two random factors: schools (A) and neighborhoods (B). In such a case, students who attend the same school or live in the same neighborhood often tend to be more similar to each other than to students in different schools or neighborhoods because they are exposed to similar contextual influences. Both factors represent higher-level units, and we refer to these levels as Levels A and B. However, in contrast to the clustering that occurs in hierarchical data, the two factors are crossed and not nested within one another. In other words, the students are cross-classified by neighborhoods and schools. This is reflected by the fact that the students at any particular school sometimes live in different neighborhoods, and the students in any particular neighborhood sometimes attend different schools. The combined membership of each student in one school and one neighborhood forms a number of school-neighborhood pairs, in which a certain number of students are nested. We refer to this intermediate level as Level AB. Conceptually, the school-neighborhood pairs may correspond to more tightly knit communities or peer groups that share additional contextual influences that are not shared by other students who attend the same school (but live in different neighborhoods) or live in the same neighborhood (but attend different schools).
The school-neighborhood assignment shown in Figure 1 is an example of partial cross-classification, in which every school is crossed with only a subset of the neighborhoods and vice versa. The partially cross-classified structure is reflected by the fact that each neighborhood sends students to only some of the schools, and each school recruits students from only some of the neighborhoods. For the example above, the assignment of schools and neighborhoods into school-neighborhood pairs is illustrated in more detail in Figure 2. By contrast, a full cross-classification would occur if every school received students from all the neighborhoods or, equivalently, if every neighborhood sent students to all the schools. Examples of cross-classified data can be found in many areas of research. For example, in education research, cross-classification can occur when schools are crossed with other organizational units, such as neighborhoods or families (e.g., Dundas et al., 2014; Dunn et al., 2015; Garner & Raudenbush, 1991). In longitudinal data, cross-classification occurs when students transition from one type of school to another (e.g., primary and secondary school; Goldstein & Sammons, 1997; Paterson, 1991) or switch classes or teachers over time (e.g., Gregory & Huang, 2013; Heck, 2009; Kyriakides & Creemers, 2008). Finally, cross-classified data can also occur in other areas of research, such as in clinical and organizational research (Barker et al., 2020; Claus et al., 2020), in experimental research (Baayen et al., 2008), or in the context of generalizability theory (Cronbach et al., 1972; Shavelson & Webb, 2000).

FIGURE 2. Example (continued) for cross-classified data with students (Level 1) clustered in schools (Level A) and neighborhoods (Level B). The two panels represent the ties between schools and neighborhoods (a) and the number of students for each school-neighborhood pair (b).

Cross-Classified Random-Effects Models

One of the most popular methods for analyzing cross-classified data is the CCRM. Suppose that a researcher is interested in the relationship between an explanatory variable x and an outcome variable y in a sample of students (Level 1) who are clustered within schools (Level A) and neighborhoods (Level B). For example, y may represent students' academic achievement, whereas x may represent their socioeconomic status (SES).

Intercept-only model. A common first step in the analysis of cross-classified data is to estimate the components of variance in y that can be attributed to differences between schools, neighborhoods, and school-neighborhood pairs, for example, by using the following intercept-only model (Raudenbush & Bryk, 2002). For student i ($i = 1, \ldots, n_{jk}$) in school j ($j = 1, \ldots, J$) and neighborhood k ($k = 1, \ldots, K$),

$$ y_{ijk} = \beta_0 + u_{A,j} + u_{B,k} + u_{AB,jk} + e_{ijk} , \quad (1) $$

where $\beta_0$ is the overall intercept, $u_{A,j}$ and $u_{B,k}$ are the random intercepts of the schools and neighborhoods, respectively, $u_{AB,jk}$ are the random intercepts of the school-neighborhood pairs, and $e_{ijk}$ are the student-specific residuals. The random effects $u_{A,j}$, $u_{B,k}$, and $u_{AB,jk}$ and the residuals $e_{ijk}$ are assumed to follow independent normal distributions with means of zero and variances denoted by $\tau^2_A$, $\tau^2_B$, $\tau^2_{AB}$, and $\sigma^2$, respectively. The main purpose of this model is to distinguish the components of variance in y that pertain to differences between schools (A), neighborhoods (B), and school-neighborhood pairs (AB).
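As an illustration (not taken from the article), the following Python sketch simulates cross-classified data along the lines of Figure 1 and fits a version of the intercept-only CCRM in Equation 1 with statsmodels, using one common idiom for crossed random effects: two variance components within a single all-encompassing group. The simulated sizes and parameter values are assumptions, and the Level-AB component is omitted for simplicity.

```python
# A minimal sketch: simulate students cross-classified by schools (A) and
# neighborhoods (B), then fit a crossed random-intercept model. Sizes and
# variances below are arbitrary assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, J, K = 900, 15, 15
school = rng.integers(J, size=n)          # school membership j
neigh = rng.integers(K, size=n)           # neighborhood membership k
u_a = rng.normal(0.0, 1.0, J)             # school effects, sd = tau_A
u_b = rng.normal(0.0, 0.8, K)             # neighborhood effects, sd = tau_B
y = 0.5 + u_a[school] + u_b[neigh] + rng.normal(0.0, 1.5, n)  # sd = sigma
df = pd.DataFrame({"y": y, "school": school, "neigh": neigh, "one": 1})
# pd.crosstab(df["school"], df["neigh"]) shows the (partial)
# cross-classification pattern, much like Figure 2(b).

# One constant group with two variance components yields crossed (not nested)
# random intercepts. A Level-AB ("interaction") component could be added
# analogously, e.g. via the formula "0 + C(school):C(neigh)".
model = smf.mixedlm("y ~ 1", data=df, groups="one",
                    vc_formula={"neigh": "0 + C(neigh)",
                                "school": "0 + C(school)"})
result = model.fit()
print(result.summary())

# ICC for schools from the estimated variance components (labels appear in
# the summary; with these names, alphabetical and insertion order agree).
tau2_neigh, tau2_school = result.vcomp
icc_school = tau2_school / (tau2_school + tau2_neigh + result.scale)
print(f"ICC (schools) = {icc_school:.3f}")
```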
These can subsequently be used to compute the intraclass correlation (ICC) at different levels, for example, $\mathrm{ICC}_A = \tau^2_A / (\tau^2_A + \tau^2_B + \tau^2_{AB} + \sigma^2)$ for schools (see Raudenbush & Bryk, 2002). Conceptually, the random effects associated with A and B ($u_{A,j}$ and $u_{B,k}$) represent the mean differences that exist between the members of different schools or neighborhoods and that are shared among the students who attend the same school (j) or live in the same neighborhood (k). The random effect associated with the school-neighborhood pair ($u_{AB,jk}$) represents the differences between the mean values of students who belong to a particular combination of school and neighborhood (jk) above and beyond the differences that they share with other students who attend the same school (but live in different neighborhoods) or live in the same neighborhood (but attend different schools). The random effects of the cluster membership are sometimes also referred to as "main" ($u_{A,j}$ and $u_{B,k}$) and "interaction" ($u_{AB,jk}$) effects (e.g., Beretvas, 2011) because they reflect mean differences between groups of students similar to the main and interaction effects in a two-way between-subjects analysis of variance (see also Maxwell et al., 2018; Raudenbush & Bryk, 2002).

Random-intercept model with explanatory variables. The model above can be extended to address more interesting research questions by including explanatory variables that can be measured at any level of the sample (Raudenbush & Bryk, 2002). In addition, the model can include the cluster means of explanatory variables at Level 1, which allows the effects of such a variable to take on different values at different levels. For example, with three explanatory variables x at Level 1, z at Level A, and w at Level B, the model can be extended as follows:

$$ y_{ijk} = \beta_0 + \beta_1 x_{ijk} + \beta_2 \bar{x}_j + \beta_3 \bar{x}_k + \beta_4 \bar{x}_{jk} + \beta_5 z_j + \beta_6 w_k + u_{A,j} + u_{B,k} + u_{AB,jk} + e_{ijk} . \quad (2) $$

In this model, $\beta_1$ is the effect of x at Level 1, $\beta_2$ is the effect of x at Level A, $\beta_3$ is the effect of x at Level B, and $\beta_4$ is the effect of x at Level AB (i.e., for the school-neighborhood pair). In addition, $\beta_5$ and $\beta_6$ are the effects of z and w at Levels A and B, respectively. Notice that the extended model now partitions both y and x into within- and between-cluster components, although it does so in different ways. Specifically, the components in y are represented by random effects and residuals, whereas the components in x are represented by the values of x at Level 1 ($x_{ijk}$) and the cluster means of x at Levels A ($\bar{x}_j$), B ($\bar{x}_k$), and AB ($\bar{x}_{jk}$). The effects of the cluster means of x in Equation 2 represent contextual effects, that is, the extent to which cluster-level differences in x are associated with cluster-level differences in y above and beyond the effect of x at Level 1 (Raudenbush & Bryk, 2002). In sum, this model partitions the lower-level variables into within- and between-cluster components and allows these components to be associated with one another at different levels.

The models above incorporate the nonhierarchical structure of the data in multiple ways. First, the models include separate random effects and variance components for the two crossed factors A and B. By contrast, if this structure were simplified or ignored, for example, by treating the factors as hierarchical, then the estimated parameters and standard errors could be biased (Lai, 2019; Luo & Kwok, 2009; Meyers & Beretvas, 2006). Second, the models include a random effect of the "interaction" of the two factors, that is, for the combined membership of individuals in a pair of units in A and B.
Omitting this "interaction" component can sometimes simplify the specification of the model but can also induce bias (Shi et al., 2010). As a general rule, including the random effect of the "interaction" requires that there are multiple observations (n_jk > 1) for at least some of the pairs of units in A and B; otherwise (if all n_jk = 1), it cannot be distinguished from the residual at Level 1 and must be dropped from the analysis (see also Beretvas, 2011). Third, the models can include effects of explanatory variables at each level as well as effects of the cluster means of explanatory variables at Level 1, which allows the cluster-level effects to differ from the effects at Level 1 (Raudenbush & Bryk, 2002; see also Kreft et al., 1995).

Further extensions. The models can also be extended further to address additional research questions. For example, random slopes can be included to allow the effects of the explanatory variables to vary across the units of A or B, and explanatory variables at Levels A or B can be used to explain some of the variance in the slope coefficients (e.g., Raudenbush & Bryk, 2002). In such a model, the effects of lower-level explanatory variables vary both at random and due to the moderating influence of higher-level variables (cross-level interactions [CLIs]). In the following sections, we focus on CCRMs that include only random intercepts and linear effects. We do not consider applications with random slopes, CLIs, or other nonlinear effects in detail, but we return to these extensions later.

MI of Cross-Classified Data

In the following section, we outline the two main strategies that are typically used to conduct MI, JM and FCS, and we explain how these strategies can be extended for CC-MI. In addition, we consider a number of ad hoc approaches that are based on imputation approaches for single-level and two-level (hierarchical) data. For simplicity, we focus on applications with continuous data; however, either approach can also be used with categorical data, and we return to this topic in the Discussion section.

Joint Modeling

The general idea underlying the JM approach is that a single (joint) imputation model is specified for all variables with missing data, thus generating imputations for all variables simultaneously. The JM approach was developed primarily in the context of single- and two-level data (Schafer & Olsen, 1998; Schafer & Yucel, 2002; for an overview, see Carpenter & Kenward, 2013), but it has also been applied to cross-classified and multiple-membership data (Yucel et al., 2008). To conduct CC-MI with the JM approach, a multivariate CCRM that includes all variables both with and without missing data must be specified. Suppose that the data comprise observations clustered in two factors A and B as before and with variables measured at different levels. Let further y^(1) denote the variables at Level 1, y^(A) the variables at Level A, y^(B) the variables at Level B, and y^(AB) the variables at Level AB. For example, if the variables of interest were those in Equation 2, then y^(1) would include y and x, y^(A) would include z, y^(B) would include w, and y^(AB) would be empty. Then, for observation i in unit j of factor A and unit k of factor B, the joint model can be written as

y^(1)_ijk = μ^(1) + u^(1)_{A,j} + u^(1)_{B,k} + u^(1)_{AB,jk} + e_ijk,
y^(A)_j = μ^(A) + u^(A)_{A,j},
y^(B)_k = μ^(B) + u^(B)_{B,k},
y^(AB)_jk = μ^(AB) + u^(AB)_{AB,jk},   (3)

where the μ denote the vectors of grand means, u^(1)_{A,j} and u^(A)_{A,j} are the random effects at Level A, u^(1)_{B,k} and u^(B)_{B,k} are the random effects at Level B, u^(1)_{AB,jk} and u^(AB)_{AB,jk} are the random effects at Level AB, and e_ijk are the residuals at Level 1.
The random effects and residuals at each level are assumed to follow independent multivariate normal distributions with mean vectors of zero and covariance matrices T_A, T_B, T_AB, and Σ, respectively. Similar to the univariate intercept-only model in Equation 1, the joint model partitions the within- and between-cluster components in the variables at each level. In addition, the model incorporates the associations that can exist between the components at each level by allowing the random effects and residuals to be correlated (i.e., through T_A, T_B, T_AB, and Σ). Consequently, the JM approach to CC-MI incorporates information from variables at different levels in the imputation of missing data. For example, when imputing missing data at Level 1, the JM approach incorporates information from variables at Levels A, B, and AB (T_A, T_B, and T_AB) as well as other variables at Level 1 (Σ).

Markov chain Monte Carlo (MCMC) algorithm. The JM approach to CC-MI can be implemented with MCMC techniques (Browne, 2009; Browne et al., 2001). In the following, we summarize a generic MCMC algorithm for the estimation of the model parameters and the imputation of missing data at each level (for the detailed sampling steps, see Rasbash & Browne, 2008). For convenience, we write the data of all units and variables as y = (y^(1), y^(A), y^(B), y^(AB)) and the random effects as u = (u_A, u_B, u_AB). At each iteration t, the algorithm draws, in turn, the fixed effects, the covariance matrices, the random effects at each level, and the residuals together with the imputations at Level 1 from their full conditional distributions. Notice that the sampling steps for the random effects and the residuals are performed by conditioning on the random effects and the observed values of other variables at the same level. In doing so, the JM approach incorporates information from other variables into the imputation of missing data, while accounting for the relationships that exist between variables at different levels. To our knowledge, there is currently no software that implements the JM approach to CC-MI (but see Yucel et al., 2008). However, the required sampling steps can be carried out in general-purpose software for Bayesian data analysis, such as WinBUGS/OpenBUGS (Lunn et al., 2000), JAGS (Plummer, 2017), or Stan (Stan Development Team, 2021). In addition, a simplified version of this model with no random effects or variables at Level AB can be fit with the Bayesian estimation procedure in Mplus (Muthén & Muthén, 2017).

Fully Conditional Specification

As an alternative to the JM approach, the joint distribution of the variables with missing data can be approximated by imputing one variable at a time while iterating over a sequence of univariate imputation models, one for each variable with missing data. This strategy is known as the FCS approach to MI (Raghunathan et al., 2001; van Buuren et al., 2006). In the context of CC-MI, each imputation model is set up in such a way that it (a) partitions the within- and between-cluster components of the respective target variable and (b) includes other variables and their cluster means as predictors to represent the relationships between the variables at each level. Because the FCS approach iterates along a sequence of imputation models, one for each target variable with missing data, different types of models are required to address variables at different levels. Suppose that y^(p) denotes the pth target variable with missing data (p = 1, ..., P). If y^(p) is measured at Level 1, then the imputation model takes the form of a CCRM:

y^(p)_ijk = x^(p)′_ijk β^(p) + u^(p)_{A,j} + u^(p)_{B,k} + u^(p)_{AB,jk} + e^(p)_ijk,   (4)

where the random effects and residuals follow independent normal distributions with variances τ²^(p)_A, τ²^(p)_B, τ²^(p)_AB, and σ²^(p).
The predictors in x^(p) typically include all variables other than y^(p). In addition, in CC-MI, x^(p) includes the cluster means of the predictors at Levels A, B, and AB. In doing so, the imputation model takes into account the within- and between-cluster components both in y^(p) (through random effects) and in x^(p) (through cluster means) as well as the relations that can exist between the two (through β). For example, if the first target variable y^(1) were the outcome variable y from Equation 2, then the predictors would be x, z, w, and the cluster means of x at Levels A, B, and AB (x̄_j, x̄_k, and x̄_jk). For target variables at Levels A, B, and AB, the imputation models take on simpler forms but follow the same strategy by conditioning on the other variables at the same levels as well as the cluster means of lower-level variables. Specifically, for variables at Levels A, B, and AB, the models become

y^(p)_j = x^(p)′_j β^(p) + u^(p)_{A,j},   y^(p)_k = x^(p)′_k β^(p) + u^(p)_{B,k},   y^(p)_jk = x^(p)′_jk β^(p) + u^(p)_{AB,jk},   (5)

where x^(p) denotes the predictors, β^(p) denotes the fixed effects, and u^(p)_{A,j}, u^(p)_{B,k}, and u^(p)_{AB,jk} denote the random effects and residuals with variance components as before. The predictor variables in x^(p) can include all variables other than y^(p) that are measured at the same level as well as the cluster means of predictors that were measured at lower levels (e.g., at Levels 1 or AB for a target variable at Level A). In addition, in unbalanced and partially cross-classified data, x^(p) can include cluster means of higher-level variables that were measured at other levels (e.g., at Level B for a target variable at Level A). For example, if the second target variable y^(2) were the explanatory variable z from Equation 2, which is measured at Level A, then the predictors would be the cluster means of x and y at Level A (x̄_j and ȳ_j) and potentially those of w (in unbalanced and partially cross-classified data). Similar to the JM approach, the FCS approach aims to accommodate the cross-classified structure of the data in each imputation model by (a) partitioning the variables into within- and between-cluster components and (b) allowing for associations among the components at different levels. However, in the FCS approach, only the components in the target variables are represented by random effects, whereas those in the predictors are represented by cluster means (see also Enders et al., 2016; Grund et al., 2018b; Lüdtke et al., 2017; Mistler & Enders, 2017). If the cluster means themselves are based on variables with missing data, then they are updated in each step of the procedure, so that they reflect the most recent imputations of the underlying variables (see also Royston, 2005; van Buuren & Groothuis-Oudshoorn, 2011). To our knowledge, the FCS approach to CC-MI is currently supported only by the packages mice (van Buuren & Groothuis-Oudshoorn, 2011) and miceadds in the statistical software R (R Core Team, 2021).

FCS algorithm. The FCS approach to CC-MI that is implemented in miceadds uses an (approximate) Gibbs sampling algorithm to generate imputations. For simplicity, consider a single target variable at Level 1 (dropping the p superscript), and write the random effects and their variances as u = (u_{A,j}, u_{B,k}, u_{AB,jk}) and τ² = (τ²_A, τ²_B, τ²_AB). At each iteration, the algorithm estimates the variance components, samples the regression coefficients and the random effects from their (approximate) conditional posterior distributions, and then draws imputations for the missing values at Level 1. Notice that the sampling steps for the random effects and for the imputations at Level 1 are implemented by conditioning on the predictor variables and their cluster means (in x^(p)_ijk).
In doing so, the FCS approach accommodates the relationships that can exist between the variables at different levels, similar to the JM approach. In addition, the random effects are sampled conditionally on one another and in an iterative manner. This is required in (partially) cross-classified data because the observed data provide information about multiple random effects. The sampling steps for the regression coefficients are standard Gibbs steps with an implicit uniform prior for β. However, the algorithm is not a true Gibbs sampler because it omits the sampling of the variance components, relying on estimated values instead.

Single- and two-level FCS with cluster means. As an alternative to CC-MI, cross-classified data can also be accommodated with simpler imputation models (e.g., for single- or two-level data) by including the effects of cluster membership through fixed effects or additional cluster means (Andridge, 2011; Drechsler, 2015; Lüdtke et al., 2017; Wijesuriya et al., 2020). These ad hoc approaches naturally do not offer a full replacement of CC-MI, but they can be useful if software for CC-MI is not available. Here, we consider one such approach that relies on additional cluster means and can be based on single- or two-level FCS. The FCS approach to CC-MI (Equation 4) accommodates the cross-classified structure of the data by including random effects for the target variable and cluster means for the predictor variables in each imputation model. Cluster means can be included as predictors even in simpler models, so the main difference is how these approaches address the random effects for the target. For example, when using single-level FCS, the random effects for a target variable at Level 1 can be approximated as follows:

y^(p)_ijk = x^(p)′_ijk β^(p) + β_A ȳ^(p)_{j(−i)} + β_B ȳ^(p)_{k(−i)} + β_AB ȳ^(p)_{jk(−i)} + e^(p)_ijk,   (6)

where x^(p) contains the predictors and their cluster means as before, and ȳ^(p)_{j(−i)}, ȳ^(p)_{k(−i)}, and ȳ^(p)_{jk(−i)} are the cluster means at Levels A, B, and AB for the target variable, which are computed from the imputed values from the previous iteration of the imputation procedure. However, in order to avoid a direct dependency between the target variable and its own imputed values (van Buuren, 2018, Ch. 6), these cluster means are computed individually for every case i, such that case i is excluded from the computation, that is,

ȳ^(p)_{j(−i)} = (1/(n_j − 1)) Σ_{i′≠i} y^(p)_{i′jk},   (7)

and analogously at Levels B and AB. Going forward, we will refer to these cluster means as adjusted cluster means. Conceptually, the adjusted cluster means can be regarded as a proxy for the information about case i that the other cases within the same cluster provide (i.e., the ICC), thus mimicking the contribution of the random effects in CC-MI. This method requires the adjusted cluster means to be updated at each iteration of the imputation procedure with "passive" imputation steps, an option that is provided by many statistical software packages (Raghunathan et al., 2018; Royston, 2005; van Buuren & Groothuis-Oudshoorn, 2011).
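For illustration, the adjusted cluster means in Equation 7 can be computed with a simple leave-one-out helper; this sketch is not taken from the article, and the grouping shown is only for Level A (the same function applies to Levels B and AB).

# Leave-one-out ("adjusted") cluster mean of y within groups g: for each case
# i, the mean of all other cases in the same cluster, as in Equation 7.
# Within FCS, y would contain the most recent imputations, so no missing
# values need to be handled here.
loo_mean <- function(y, g) {
  n <- ave(rep(1, length(y)), g, FUN = sum)  # cluster sizes
  s <- ave(y, g, FUN = sum)                  # cluster sums
  (s - y) / pmax(n - 1, 1)                   # exclude case i; guard singletons
}

dat$y.A.adj <- loo_mean(dat$y, dat$school)   # adjusted means at Level A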
Differences Between JM and FCS

The main difference between the JM and FCS approaches to CC-MI is how they represent the between-cluster components in the variables included in the imputation model. In the JM approach, the imputation model is based on a multivariate CCRM and represents the between-cluster components with random effects, which correspond to the latent within- and between-group components in the variables at each level (Asparouhov & Muthén, 2006; Lüdtke et al., 2008). By contrast, the FCS approach is based on a sequence of univariate CCRMs, in which the between-cluster components of the target variable are also represented by random effects, whereas those of the predictor variables are represented by manifest cluster means (e.g., Raudenbush & Bryk, 2002). In the context of hierarchical data, it has been shown that the JM and FCS approaches to MI are equivalent in cases with balanced data, that is, when all clusters have the same size (Carpenter & Kenward, 2013; see also Enders et al., 2016; Lüdtke et al., 2017; Resche-Rigon & White, 2018). Specifically, for balanced data, it can be shown that the two approaches represent the conditional distribution of the missing data in different but equivalent ways, provided that the cluster means are included in the FCS approach. Grund et al. (2018a) further showed that the two approaches provide nearly identical results even in unbalanced data, where the exact equivalence between them no longer holds (see also Resche-Rigon & White, 2018). In the Appendix, and in more detail in Supplement A in the Online Supplemental Materials, we extend these considerations to cross-classified data and find that the same result holds but under stronger conditions. Specifically, we find that the FCS and JM approaches to CC-MI are asymptotically equivalent in balanced, fully cross-classified data, that is, when the number of units in A and B and the cluster sizes are constant and the sample is sufficiently large. The equivalence holds only asymptotically, because the FCS approach induces a slight dependency between marginally independent observations whose strength diminishes as the numbers of units in A and B become large. For this reason, the FCS and JM approaches are not formally equivalent in cross-classified data. Nonetheless, given that the discrepancy between the FCS and JM approaches appears to be relatively minor, we would still expect their performances to be similar in practice (see also Grund et al., 2018a). In addition, from a practical perspective, the FCS approach often has advantages over the JM approach because it allows for a more flexible specification of the imputation models, with finer control over what type of imputation model is used for each variable and which predictor variables are included in them (see also van Buuren et al., 2006). Previous research on CC-MI has focused primarily on the FCS approach (Wijesuriya, 2021; however, see Yucel et al., 2008) or specific applications, such as missing item responses in educational assessments (Kadengye et al., 2014) or missing data in social network analysis (Jorgensen et al., 2018). In addition, Hill and Goldstein (1998) considered the special case of missing unit identifiers in longitudinally cross-classified data. Overall, these studies suggest that methods for CC-MI can provide an effective treatment of missing values in cross-classified data. However, to our knowledge, no study has systematically compared the JM and FCS approaches to CC-MI in more general settings.

Simulation

In the following, we present the results of a simulation study in which we evaluated the performance of different MI approaches for cross-classified data. This included the JM and FCS approaches to CC-MI as well as ad hoc approaches that were based on single- and two-level MI. The computer code needed to run the simulation study is provided in the OSF repository (https://osf.io/5em2d).
Data Generation

In the simulation study, we generated data for two standardized variables x and y from a multivariate CCRM (see Equation 3) with observations clustered within two crossed factors A (e.g., schools) and B (e.g., neighborhoods). Specifically, for an observation i (i = 1, ..., N) in unit j of factor A (j = 1, ..., J) and unit k of factor B (k = 1, ..., K), the data were generated with the following model:

(x_ijk, y_ijk)′ = μ + u_{A,j} + u_{B,k} + u_{AB,jk} + e_ijk,   (8)

where the random effects (u_{A,j}, u_{B,k}, and u_{AB,jk}) and the residuals (e_ijk) followed independent bivariate normal distributions with mean vectors of zero and covariance matrices T_A, T_B, T_AB, and Σ, respectively. The covariance matrices were chosen in such a way that (a) the two variables x and y would have certain proportions of variance at each level (τ²_A, τ²_B, τ²_AB, and σ²) and (b) the random effects and residuals of x and y would be correlated to a given extent.

Partial cross-classification. The model in Equation 8 covers both fully and partially cross-classified data. In this study, we focused on partially cross-classified data and used the following procedure to assign a subset of the units in B to a subset of the units in A (see Figure 2). The procedure consisted of two steps. First, we defined the number of units in B that needed to be assigned to each unit in A (n_B/A) and initially assigned these units in a block-like pattern, where the first n_B/A units in B were assigned to the first n_A/B = n_B/A · J/K units in A, and so on. Second, we introduced a certain number of random permutations into these initial assignments. This resulted in partially cross-classified data with a total number of n_B/A · J pairs between A and B and with an assignment pattern that was in part deterministic and in part random. For each pair of A and B, we then generated either a balanced or unbalanced number of observations at Level 1. In the balanced case, we generated clusters of constant size (all n_jk = n); in the unbalanced case, we chose one unit in B per unit in A and increased the cluster size for this pair while decreasing the cluster size for all other units in B that were assigned to the same unit in A. We did this in such a way that the average cluster size would remain unchanged in the unbalanced data. Table 1 shows an example with J = 16 units in A and K = 32 units in B, where we assigned n_B/A = 8 units in B to each unit in A with 20% random assignments and an unbalanced number of observations with an average cluster size of n = 5.

Missing data. Once the data were generated, we induced missing values in x on the basis of the values in y in accordance with a missing completely at random (MCAR) or missing at random (MAR) mechanism that we simulated with the following linear model:

r_ijk = a + λ y_ijk + √(1 − λ²) z_ijk,   z_ijk ∼ N(0, 1).

In this model, a is a quantile of the standard normal distribution that determines the probability of missing data (e.g., a = −0.674 for 25% missing data), and λ determines the missing data mechanism (i.e., MCAR if λ = 0, MAR otherwise). A value x_ijk was set to missing when the corresponding r_ijk > 0.

Simulated Conditions

Using this data-generating procedure, we varied the sample sizes at Levels A and B, where we fixed the number of units in A to J = 128 and set the number of units in B to K = 64, 128, or 256. For the cross-classification, we set the number of units in B per unit in A to n_B/A = 8, which resulted in a constant number of 1,024 pairs between A and B in all conditions.
In addition, we simulated both balanced and unbalanced samples with an average cluster size of n = 5. This resulted in cross-classified data with a total sample size of N = 5,120 and moderate to strong degrees of cross-classification as indicated by Cramér's V (i.e., in comparison with a hierarchical data structure; see Lai, 2019).¹ For the variance components, we fixed the residual variances at Level 1 (σ²) to .50 and set the variances of the random effects at Levels A, B, and AB (τ²_A, τ²_B, and τ²_AB) to values between .10 and .30. Specifically, we considered three configurations: one equal-variance condition, where all variances were set to .20, and two unequal-variance conditions, where the variances at Levels A, B, and AB were set to .30, .20, and .10, or to .20, .30, and .10, respectively. The correlations between the random effects at Levels A, B, and AB were fixed to .50, and the correlations between the residuals at Level 1 were fixed to .20. Finally, we simulated both MCAR (λ = 0) and MAR data (λ = .70) and fixed the probability of missing data to 25%. This resulted in 36 simulated conditions, each of which was replicated 2,000 times.²

Imputation and Analysis

To handle the missing data, we considered 10 different imputation approaches. This included the JM and FCS approaches to CC-MI as well as eight ad hoc approaches based on single- and two-level FCS. The ad hoc approaches included both "naive" specifications of these methods, which reflect common recommendations for applications in single- and two-level data, as well as extended specifications that aim to accommodate the cross-classified data structure by including adjusted cluster means. Specifically, the methods were as follows:

1. FCS-1L: single-level FCS.
2. FCS-1L-M: single-level FCS, extended to include cluster means for the predictor and adjusted cluster means for the target variable (at Levels A, B, and AB).
3. FCS-2L-A: two-level FCS with random effects of A and cluster means for the predictor.
4. FCS-2L-A-M: two-level FCS with random effects of A and cluster means for the predictor, extended to include adjusted cluster means for the target variable (at Levels B and AB).
5. FCS-2L-B: two-level FCS with random effects of B and cluster means for the predictor.
6. FCS-2L-B-M: two-level FCS with random effects of B and cluster means for the predictor, extended to include adjusted cluster means for the target variable (at Levels A and AB).
7. FCS-2L-AB: two-level FCS with random effects of AB and cluster means for the predictor.
8. FCS-2L-AB-M: two-level FCS with random effects of AB and cluster means for the predictor, extended to include adjusted cluster means for the target variable (at Levels A and B).
9. FCS-CC: FCS approach to CC-MI.
10. JM-CC: JM approach to CC-MI.

To implement JM-CC, we used OpenBUGS (Lunn et al., 2000); to implement FCS-CC and the methods based on single- and two-level FCS, we used the R packages mice (van Buuren & Groothuis-Oudshoorn, 2011) and miceadds. Finally, we also conducted the analyses with the complete data (CD) and after listwise deletion (LD) to provide a means of comparison. In the analysis of the imputed data, we were interested in two different models. The first model was an intercept-only CCRM for x (see Equation 1). The second model was a random-intercept CCRM with y as the outcome variable and x as the explanatory variable (see Equation 2). The parameters of interest were the estimated variance components in x at each level (τ²_A, τ²_B, τ²_AB, and σ²) and the estimated regression coefficients.
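To give a flavor of how an FCS-CC imputation could be specified in practice, a hypothetical sketch using mice and miceadds follows; the method name ("ml.lmer") and the way the cluster identifiers are declared (levels_id) are assumptions based on our reading of the package documentation and may differ across package versions, so the specification should be checked against the packages' manuals.

# Hypothetical sketch of an FCS-CC setup with mice/miceadds for an incomplete
# Level-1 variable x, with random effects for both crossed factors.
library(mice)
library(miceadds)

meth <- make.method(dat)
meth["x"] <- "ml.lmer"   # multilevel imputation via lme4 (assumed method name)

pred <- make.predictorMatrix(dat)
pred["x", c("school", "neighborhood")] <- 0  # clusters declared via levels_id

imp <- mice(dat, method = meth, predictorMatrix = pred, m = 20, maxit = 10,
            levels_id = list(x = c("school", "neighborhood")),  # assumed argument
            seed = 1234)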
For each parameter, we computed the relative bias and the coverage rates of the 95% confidence interval (CI) to evaluate the accuracy of the parameter estimates and the estimated standard errors. Due to the different representation of the between-cluster components in the data-generating model and the analysis, the true values of the regression coefficients cannot generally be expressed in closed form. For this reason, we used the average estimates in the CD as reference values in the computation of the bias and coverage.

Results

The main results are summarized in Tables 2 through 4. For simplicity, we present the detailed results only for selected conditions with J = K = 128 units in A and B, unbalanced clusters, and MAR data. The results for the other conditions were often similar, so we discuss them only when needed and provide them in full in Supplement C in the Online Supplemental Materials and the OSF repository (https://osf.io/5em2d). The results for the estimated variance components in x are presented in Table 2. The FCS and JM approaches to CC-MI (FCS-CC, JM-CC) provided approximately unbiased parameter estimates in all simulated conditions. By contrast, the single- and two-level FCS approaches (FCS-1L, FCS-2L) led to bias unless their specification was extended to include the adjusted cluster means of the incomplete variable x. The direction and size of the bias depended on the variance configuration and how the random effects at each level were accommodated by these procedures. Specifically, single-level FCS (FCS-1L) underestimated the variances of the random effects and overestimated the residual variance at Level 1. Two-level FCS (FCS-2L-A, FCS-2L-B, and FCS-2L-AB) slightly overestimated the variance of the random effect that was included in the imputation model (e.g., τ²_A for FCS-2L-A), where the bias was largest for FCS-2L-AB. In addition, these methods underestimated the variances of the other random effects and overestimated the residual variance at Level 1, except for FCS-2L-AB, which estimated the residual variance with approximately no bias. The bias tended to be largest when the omitted variance components were large. However, when we extended the single- and two-level FCS approaches to include the adjusted cluster means, there was little to no bias in the estimated variance components. Finally, LD led to essentially unbiased estimates of the variance components. These results were fairly consistent across the simulated conditions. The bias in the estimated regression coefficients in the CCRM of y regressed on x is shown in Table 3. Similar to before, FCS-CC and JM-CC provided estimates of the regression coefficients with little to no bias, whereas the estimates provided by the single- and two-level FCS approaches (FCS-1L and FCS-2L) were strongly biased unless their specifications also included the adjusted cluster means of the incomplete variable. When we extended single- and two-level FCS in this manner, the bias in the regression coefficients became smaller for two-level FCS with random effects of A or B (FCS-2L-A-M and FCS-2L-B-M) and even more so for single-level FCS (FCS-1L-M) and two-level FCS with random effects of AB (FCS-2L-AB-M). The size of the remaining bias depended on the variance configuration, such that the bias was largest when the corresponding variance component was large, especially for FCS-2L-A-M and FCS-2L-B-M. LD led to a consistent bias in all regression coefficients.
Similar to above, the results were fairly consistent across conditions, except for LD, which provided unbiased results under MCAR. Finally, the coverage rates for the 95% CIs of the estimated regression coefficients (Table 4) followed the same pattern as the bias. Specifically, for FCS-CC and JM-CC, the coverage rates were close to the nominal value of 95%. For the single- and two-level FCS approaches without adjusted cluster means (FCS-1L, FCS-2L-A, FCS-2L-B, and FCS-2L-AB), the coverage rates were well below the nominal value. By contrast, when we included the adjusted cluster means, we found coverage rates close to the nominal value for both single- and two-level FCS (FCS-1L-M, FCS-2L-A-M, FCS-2L-B-M, and FCS-2L-AB-M). For LD, we found coverage rates well below the nominal value of 95%. These results were again fairly consistent across conditions, except for LD, which showed nominal coverage under MCAR. To summarize, the results of the simulation study suggested three key findings. First, both the FCS and JM approaches to CC-MI provided accurate results in all simulated conditions. Second, the conventional single- or two-level FCS approaches performed poorly because they failed to accommodate the cross-classified data structure. Third, when the single- and two-level FCS approaches were extended to include the adjusted cluster means for the incomplete target variables, they provided much more accurate results that were very similar to CC-MI. These results are encouraging because they suggest that CC-MI as well as suitable extensions of single- and two-level MI can provide an effective treatment of missing values in cross-classified data.

Example Analysis

To illustrate the application of the different approaches to CC-MI, we use data from the Early Childhood Longitudinal Study (ECLS-K 1998). The ECLS-K is a longitudinal study that focuses on children's early school experiences, with multiple measurements beginning in kindergarten (1998), through primary school (1999-2004), and up to secondary school (8th grade, 2007). An interesting feature of the ECLS-K data is that most children in the sample change schools when they transition from primary to secondary school, which means that the students are cross-classified by primary school (factor P) and secondary school (factor S). In this example, we use a subset of the ECLS-K data with observations from Grades 5 and 8, comprising a sample of 9,067 students from 1,997 primary and 2,502 secondary schools who changed schools during that time and for whom school membership at both time points was known (Cramér's V = .858). Specifically, we are interested in the relationship between reading achievement and the amount of time children spent doing homework after their transition to secondary school, controlling for differences between types of schools (private vs. public), using a random-intercept CCRM analogous to Equation 2 with cluster means of homework time at Levels P, S, and P-S. In addition, we fit an intercept-only model for the amount of time spent on homework to quantify the amount of variance between primary schools, secondary schools, and primary-secondary school pairs. In this sample, 553 (6.1%) of the cases had missing data on at least one of the three variables. To handle the missing data, we used a subset of the methods presented above: LD, single-level FCS with (adjusted) cluster means (FCS-1L-M), and the FCS approach to CC-MI (FCS-CC). These methods were chosen because they either performed well in our simulation study (FCS-1L-M and FCS-CC) or serve as a means of comparison (LD).
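In lme4 syntax, the substantive analysis model takes roughly the following form; the data frame and variable names below are hypothetical placeholders for the ECLS-K variables, not the names used in the study.

# Sketch of the analysis model: reading achievement regressed on homework time
# at Level 1, its cluster means at the P, S, and P-S levels, and school type.
library(lme4)

ecls$hw.P  <- ave(ecls$homework, ecls$prim)            # primary-school means
ecls$hw.S  <- ave(ecls$homework, ecls$sec)             # secondary-school means
ecls$hw.PS <- ave(ecls$homework, ecls$prim, ecls$sec)  # pair means

fit <- lmer(reading ~ homework + hw.P + hw.S + hw.PS + private +
              (1 | prim) + (1 | sec) + (1 | prim:sec), data = ecls)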
To implement the two MI approaches, we used the R packages mice and miceadds, and we generated 20 imputed data sets. We also used the package EdSurvey (Bailey et al., 2021) to process the data, lme4 (Bates, 2010) to fit the analysis models, and mitml to pool the results using Rubin's (1987) rules. The computer code for this example is provided in Supplement B in the Online Supplemental Materials and the OSF repository (https://osf.io/5em2d). The results are presented in Table 5. Overall, the results showed that students who spent more time on homework had higher reading achievement (at Level 1). In addition, there was a positive contextual effect of time spent on homework at the primary school level (P), indicating that students who had attended primary schools that assigned more homework had higher reading achievement in secondary school. However, there were no contextual effects at the secondary school level (S) or at the level of the primary-secondary school interaction (P-S). There was also a positive effect of the type of school, indicating that students at private (vs. public) schools had higher reading achievement. Finally, the results for the variance components indicated substantial amounts of variance at each level (P, S, and P-S). Due to the relatively small percentage of missing data, the results were very consistent across the methods for handling missing data, and the estimated regression coefficients were usually within half a standard error of each other. The largest difference in the estimated coefficients was in the effect of time spent on homework at the secondary school level, which was essentially zero for LD and slightly positive (but nonsignificant) for FCS-1L-M and FCS-CC.
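The analysis-and-pooling step described above can be sketched with mitml as follows; the conversion and pooling functions follow the mitml interface as we understand it (argument names such as extra.pars may vary across versions), the variable names remain hypothetical, and the cluster means are recomputed within each imputed data set in the spirit of "passive" imputation.

# Hypothetical sketch: fit the analysis model to each imputed data set and
# pool the results with Rubin's (1987) rules.
library(mitml)

imp.list <- mids2mitml.list(imp)          # mice object -> list of data sets
imp.list <- within(imp.list, {            # recompute cluster means per data set
  hw.P  <- ave(homework, prim)
  hw.S  <- ave(homework, sec)
  hw.PS <- ave(homework, prim, sec)
})
fits <- with(imp.list, lmer(reading ~ homework + hw.P + hw.S + hw.PS + private +
                              (1 | prim) + (1 | sec) + (1 | prim:sec)))
testEstimates(fits, extra.pars = TRUE)    # pooled fixed effects and variances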
Limitations and Extensions

An important requirement of MI is that the imputation procedure must accommodate the relevant features of the data and the intended analyses. In our study, we focused on applications in which the intended analyses were CCRMs with random intercepts and explanatory variables with linear effects. For these applications, we outlined how the JM and FCS approaches to MI can be implemented to accommodate cross-classified data, using methods that were either based directly on univariate and multivariate CCRMs (JM-CC and FCS-CC) or that emulated them in an ad hoc manner (e.g., FCS-1L-M). The main strength of these approaches is that they provide a fairly general treatment of missing data in cross-classified data and support a broad range of CCRMs within these limits. This is particularly useful when the imputed data will be used by multiple analysts and in potentially many different analyses. Naturally, CCRMs can also be extended by including random slopes or nonlinear effects (e.g., CLIs). These types of effects complicate the treatment of missing data because they cause conventional MI approaches such as JM and FCS to become incompatible with the intended analysis (Du et al., 2022; see also Seaman et al., 2012). Recent research has shown that substantive-model-compatible (SMC) versions of these approaches can be used to ensure compatibility by including the substantive analysis model directly in the imputation procedure (Bartlett et al., 2015; Goldstein et al., 2014). Several studies have shown that SMC methods can be extremely effective at handling missing data in single- and multilevel analyses with nonlinear effects (Erler et al., 2017; Grund, Lüdtke, et al., 2021; Lüdtke et al., 2020). The main advantage of SMC methods is that they can be fine-tuned to accommodate the more complex features of a particular analysis, at the cost of making the treatment of missing data more specific to this analysis. In principle, SMC versions of the JM and FCS approaches to CC-MI could also be used in applications of CCRMs with random slopes and nonlinear effects (see also Goldstein et al., 2014). To our knowledge, there is currently no software that provides SMC versions of JM and FCS for cross-classified data. As an alternative, general-purpose software for Bayesian data analysis (e.g., WinBUGS/OpenBUGS, JAGS, or Stan) can be used to implement an SMC version of JM or the sequential modeling approach to MI (Ibrahim et al., 2002). The sequential modeling approach ensures compatibility by factorizing the joint distribution of the variables into a sequence of univariate conditional models, one of which corresponds to the intended analysis (Ibrahim et al., 2002; see also Lüdtke et al., 2020). In addition to the compatibility issues caused by nonlinear effects, it has been shown that the imputation models used in the FCS approach are sometimes incompatible with each other, even if the intended analysis includes only linear effects (Liu et al., 2014; Zhu & Raghunathan, 2015). This issue has also been raised in the context of multilevel analyses, where the conditional models employed by FCS sometimes do not correspond to a well-defined joint model (Resche-Rigon & White, 2018; see also Du et al., 2022). In the present study, we found that a similar problem applies to the FCS approach in cross-classified data (see the Appendix), although this did not have any noticeable impact on its performance (see also Grund et al., 2018a). Nonetheless, the (lack of) compatibility in the FCS approach remains an important issue, and researchers should be mindful when applying this method in practice (for a more detailed discussion, see Du et al., 2022).

Discussion

In the present article, we compared different approaches for the imputation of missing values in cross-classified data (CC-MI). To this end, we introduced an extension of the popular JM and FCS approaches to MI for incomplete cross-classified data. On the basis of theoretical considerations and the results of a simulation study, we found that both the JM and FCS approaches to CC-MI, though not formally equivalent, provided an effective treatment of incomplete cross-classified data. In addition, we found that simpler approaches based on single- or two-level FCS can accommodate cross-classified data by extending the imputation models to include (adjusted) cluster means. Finally, we illustrated the application of these methods with the mice and miceadds packages for the statistical software R in a worked example with real data from education research. Our findings have multiple implications for practice. First, the JM and FCS approaches to CC-MI appear to be similarly suited for handling incomplete cross-classified data. The FCS approach can be particularly convenient because it can easily handle different types of variables (e.g., a mixture of continuous and categorical data) and provides finer control over the selection of predictor variables in each model.
For the FCS approach to CC-MI to accommodate the cross-classified structure of the data, the imputation models should include (a) the random effects for each crossed factor and (if possible) their interaction and (b) cluster means of the predictor variables to accommodate the relationships between the variables at each level. As an alternative to CC-MI, cross-classified data can also be handled with simpler techniques that are based on single- or two-level FCS. This can be beneficial when methods for CC-MI are unavailable or suffer from convergence problems. In such cases, single- or two-level FCS approaches can be extended to include (adjusted) cluster means, which can reduce or avoid the computational burden of modeling multiple random effects in CCRMs while providing results that are often similar to CC-MI. The present study has several limitations in addition to those listed above. First, there are several variants of CCRMs for nonhierarchical data that should be considered in future research. This includes multiple-membership multiple-classification (MMMC) models, which can be used to analyze data in which observations are not only clustered in multiple crossed factors but can also belong to multiple units of the same factor (Browne et al., 2001; Grady & Beretvas, 2010; Park & Beretvas, 2020). Similarly, in longitudinal research, CCRMs can be extended to distinguish between acute and cumulative effects of cluster membership when this membership changes over time (Cafri et al., 2015). Although MMMC and similar models share many of the features of the CCRMs considered in this article, little is known about how multiple-membership structures can and should be accommodated in the treatment of missing data (however, see Yucel et al., 2008). Second, in our simulation study, we focused on partially cross-classified data with structural features that are typical in education research (e.g., Garner & Raudenbush, 1991; Goldstein & Sammons, 1997; Raudenbush, 1993). Future research should also evaluate CC-MI in settings with more challenging features, for example, with a small number of clusters at Levels A and B, or with common features from other areas of research, for example, fully cross-classified data or only a single observation at Level 1 (i.e., without random effects of the "interaction"). Third, we assumed that the identifiers that denote the cluster membership for each unit were fully observed. However, especially in longitudinal data, in which cluster membership can change over time, these identifiers can also be missing, and more research is needed to determine how to handle missing data in these cases (see also Hill & Goldstein, 1998; van Buuren, 2011). Fourth, in our simulation study, we did not consider higher-level variables with missing data (e.g., at Levels A, B, or AB). Future research should therefore evaluate the performance of the different approaches to CC-MI for the treatment of missing data in higher-level variables (see also Enders et al., 2018; Grilli et al., 2022; Grund et al., 2018b). Finally, we evaluated CC-MI only in conditions with MCAR or MAR data. By contrast, when data are missing not at random (MNAR), conducting MI typically requires strong assumptions about the missing data mechanism, which can be evaluated in sensitivity analyses (Carpenter & Kenward, 2013). Future research should therefore evaluate CC-MI in conditions with MNAR data.
To summarize, the present study compared and evaluated multiple approaches to CC-MI, which included both a novel extension of the popular JM and FCS approaches to MI and a number of alternative methods that extend existing methods for single- and two-level MI. We conclude that multiple approaches to CC-MI can provide an effective treatment of incomplete cross-classified data. We hope that the findings presented here motivate further research on statistical methods for handling missing values in cross-classified data and other types of nonhierarchical data.

Appendix

In this Appendix, we present the key results from our comparison of the FCS approach to CC-MI with the JM approach (for additional details, see Supplement A). We consider the bivariate case with two variables x^(1) and x^(2), where observations are clustered within two factors A and B. The joint model can be expressed as

(x^(1), x^(2))′ ∼ N( (μ^(1) 1, μ^(2) 1)′, T_A ⊗ J_A + T_B ⊗ J_B + T_AB ⊗ J_AB + Σ ⊗ I ),   (A1)

where 1 is a vector of ones, I is the identity, and J_A, J_B, and J_AB are block matrices that define the covariance structure in the cross-classified data. Specifically, J_A, J_B, and J_AB can be written in terms of I^(p), the identity matrix of size p, and J^(p), a square matrix of ones of size p (the exact expressions are given in Supplement A). To show that the FCS approach is equivalent with the joint model, one needs to show that (a) the conditional distribution in the FCS approach implies the same structure and (b) the parameters of the conditional distribution in the FCS approach are one-to-one functions of the parameters in the joint model. Using the steps outlined in Supplement A, the conditional mean E(x^(2) | x^(1)) and variance V(x^(2) | x^(1)) of x^(2) given the observed values and the cluster means of x^(1) can be expressed as linear combinations of these quantities (Equations A3 and A4, given in full in Supplement A), where J = J^(n·n_A/B·n_B/A), 1^(p) is a p-vector of ones, x̄^(1) is the sample mean of x^(1), x̄_A^(1) is the vector of marginal means for each A, x̄_B^(1) is the vector of marginal means for each B, and x̄_AB^(1) is the vector of means for each pair of A and B. Equation A4 already indicates that the conditional distribution in the FCS approach has a covariance structure that differs from the joint model. Specifically, by conditioning on x^(1), the FCS approach induces a dependency between marginally independent observations in x^(2) (through γ_J J). The coefficients are given in full in Supplement A. Here, we reproduce only the result for γ_J. Because γ_J is not generally zero, the FCS approach is not generally equivalent with the JM approach. However, the equivalence can be shown to hold asymptotically. To this end, we write

γ_J = −n · f(n_B/A, n_A/B) / g(n_B/A, n_A/B),

where both f and g are polynomial functions of n_B/A and n_A/B. After some manipulations, it can be shown that f has a lower degree than g. Specifically, f and g can be expressed as

f(n_B/A, n_A/B) = x_1 n_B/A n_A/B + x_2 n_B/A + x_3 n_A/B + x_4,   (A5)

and

g(n_B/A, n_A/B) = z_1 n²_B/A n_A/B + z_2 n_B/A n²_A/B + z_3 n²_B/A + z_4 n_B/A n_A/B + z_5 n²_A/B + z_6 n_B/A + z_7 n_A/B + z_8,   (A6)

where the coefficients x_1, ..., x_4 and z_1, ..., z_8 are functions of the variance components of x^(1) and x^(2) (for p, q ∈ {1, 2}; see Supplement A). As a result, γ_J approaches zero as either n_B/A or n_A/B becomes large. In this event, the structure of the conditional distribution in the FCS approach matches the structure of the joint distribution with coefficients that are one-to-one functions of the parameters in the joint model. Therefore, although not strictly equivalent, the FCS approach to CC-MI is still asymptotically equivalent with the JM approach in balanced fully crossed data. Some additional results are worth noting.
First, in unbalanced data, the two approaches are not equivalent (see also Resche-Rigon & White, 2018). In such a case, the coefficients for the conditional mean and variance (Equations A3 and A4) would depend not only on the parameters in the joint model but also on the cluster sizes (n_jk) and the number of units in A and B, which may also vary between subsets of clusters. Second, in partially crossed data, the equivalence may also not hold. For example, if the units are partially crossed in a "block-like" pattern, where each block consists of a subset of fully crossed units, then the arguments above apply to each block and the asymptotic result should still hold. However, in such cases, the cluster sizes, the numbers of units in A and B, and the cluster means will usually vary across blocks, which again complicates the structure of the conditional distribution in the FCS approach.

Authors' Note

All additional files concerning this article, including scripts, data, and the supplemental materials, are available at https://osf.io/5em2d.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iD

Simon Grund https://orcid.org/0000-0002-1290-8986

Notes

1. The Cramér's V values were computed on the basis of 100 simulated data sets from each condition and ranged from .333 for the condition with K = 64 and balanced samples to .543 for the condition with K = 256 and unbalanced samples.

2. Because we encountered convergence issues in a small number of replications, we removed replications that resulted in singular solutions or yielded extreme parameter estimates (absolute values larger than 10). In the condition that was affected the most (balanced clusters, K = 64, τ²_A = .30, MCAR), this resulted in the removal of seven (0.4%) of the 2,000 replications.
Additional energy scale in SmB6 at low-temperature

Topological insulators give rise to exquisite electronic properties because of their spin-momentum-locked Dirac-cone-like band structure. Recently, it has been suggested that the required opposite parities between valence and conduction band, along with strong spin-orbit coupling, can be realized in correlated materials. Particularly, SmB6 has been proposed as a candidate material for a topological Kondo insulator. Here we observe, by utilizing scanning tunnelling microscopy and spectroscopy down to 0.35 K, several states within the hybridization gap of about ±20 meV on well-characterized (001) surfaces of SmB6. The spectroscopic response to impurities and magnetic fields allows us to distinguish between dominating bulk and surface contributions to these states. The surface contributions develop particularly strongly below about 7 K, which can be understood in terms of a suppressed Kondo effect at the surface. Our high-resolution data provide insight into the electronic structure of SmB6, which reconciles many current discrepancies on this compound.

In the past few years, the concept of strong topological insulators, which exhibit an odd number of surface Dirac modes characterized by a Z2 topological index, has attracted great interest. In this context, it was theoretically predicted that some Kondo insulators, such as SmB6, Ce3Bi4Pt3, CeNiSn and CeRu4Sn6, are candidates for strong three-dimensional (3D) topological insulators (refs 1,2). In particular, SmB6 is intensively studied because of its simple crystal structure and clear signatures of a Kondo hybridization gap. Theoretically, a common picture of the multiplet f-states and the Kondo hybridization effect is shared among different band structure calculations for SmB6 (refs 2-7), as sketched in Fig. 1 (Supplementary Fig. 1). Because of strong spin-orbit coupling and crystal field effects, the f-states of Sm are split into several multiplets, as presented in Fig. 1a. Considering the symmetry of the multiplets, only the Γ7 and Γ8^(1) bands are allowed to hybridize with the Sm d-band via the Kondo effect (refs 4,6). As a result, two hybridization gaps (denoted Δ1 and Δ2) may open at different energies, as sketched in Fig. 1b (in principle, only Δ2 is a well-defined gap). Although topological surface states (TSS) are unambiguously predicted to reside within the hybridization gap (refs 2-7), no consensus has been reached on the structure of the TSS around the Fermi energy (E_F). Nonetheless, the prediction of TSS provides an attractive explanation for the four-decades-old conundrum of SmB6 (ref. 8), which exhibits a plateau in the resistivity typically below about 5 K (refs 9,10). Experimentally, the existence of metallic surface states below about 5 K has been best illustrated by electrical transport measurements on SmB6 (refs 10-12). However, the origin of these surface states and their topological properties remain controversial, in spite of intensive investigations. Several properties of SmB6 interfere with a straightforward interpretation. One major issue arises with respect to the size of the hybridization gap. Spectroscopic measurements observed a large hybridization gap of about 15-20 meV (refs 13-24), which is normally understood by considering a single f-band hybridizing with a conduction band via the Kondo effect (Supplementary Fig. 1). Typically, additional features within this energy scale are assumed to be in-gap states.
In some cases, the in-gap states are further ascribed to TSS (refs 15,17). On the other hand, analyses of thermal activation energies derive a small excitation energy of 2-5 meV, which shows bulk properties and is understood in terms of a small, likely indirect, bulk gap (refs 25-27) or in-gap states (refs 28-30). Obviously, different probes, as well as different ranges in the measurement temperatures, reveal only either the bigger or the smaller hybridization gap sketched in Fig. 1b. Nevertheless, these measurements provide essential constraints on the sizes of the two hybridization gaps. In terms of topology (that is, trivial or non-trivial surface states), experimental results, even obtained by using the very same method, are conflicting among many reports (refs 14-24,31-34). Considering the exotic phenomena, which appear only within ±20 meV and below 5 K, measurements with very high energy resolution and at very low temperature are highly desired. Another severe difficulty, which contributes to such a wide discrepancy among the experimental results, is caused by the surface itself. Specifically, the (001) surface of SmB6 is polar (ref. 23). This can induce different types of band bending (ref. 14), quantum well confinements (ref. 35), charge puddles and surface reconstructions (refs 36-39). Specifically, the latter may give rise to conducting surface layers on their own (ref. 23). Frequently, different types of surfaces (B- and Sm-terminated, reconstructed and non-reconstructed) coexist at different length scales on one and the same cleaved surface, which may complicate the interpretation of spectroscopic results, for example, by angle-resolved photoemission spectroscopy (ARPES). We therefore conduct scanning tunnelling microscopy/spectroscopy (STM/STS) down to the base temperature of 0.35 K with an energy resolution of about 0.5 meV. This allows us to identify the fine structure of the hybridization gaps on large and non-reconstructed surfaces at the sub-meV scale. Moreover, by measuring the impurity, magnetic-field and temperature dependence of the STS spectra, we are able to attribute bulk and/or surface contributions to these states, and unveil a new energy scale of about 7 K, which provides an important piece of the puzzle for a unified picture of SmB6.

Results

Topography and STS spectra at base temperature. SmB6 crystallizes in a cubic structure with a lattice constant a = 4.133 Å, Fig. 2a. The topography of a non-reconstructed surface, presented in Fig. 2b, exhibits clear atomic resolution. Here, the distance of about 4.1 Å and the arrangement of the corrugations are in good agreement with the cubic structure of SmB6. The very small number of defects compared with the number of unit cells within the field of view (45,200) not only indicates high sample quality but also ensures that the measured spectrum is not influenced by defects. The absence of any corrugation other than along the main crystallographic axes, as nicely seen in the inset of Fig. 2b, clearly indicates a B-terminated surface (refs 37,39). The differential tunnelling conductance g(V) ≡ dI(V)/dV, measured at T = 0.35 K and far away from any impurity, exhibits several anomalies close to E_F, marked by (i)-(v) in Fig. 2c. A change in the slope of g(V) around ±20 meV suggests a pronounced loss of local density of states within this energy range.

Figure 1 | Sketch of the multiplet f-states and the resulting band structure.
(a) Evolution of energy levels of the f-states in SmB6, which follows from the work of refs 6,7. The f-states are split into J = 7/2 and J = 5/2 states by spin-orbit coupling (SOC). The J = 5/2 state, which is slightly below E_F, is split into a Γ7 doublet and a Γ8 quartet by the crystal field (CF). Away from the Γ point, the Γ8 quartet is further split into Γ8^(1) and Γ8^(2). (b) Sketch of the resulting band structure with the two hybridization gaps Δ1 and Δ2.

Around the same energy, the opening of a gap has been widely observed by a number of spectroscopic tools as mentioned above (refs 16-24), including STS (refs 36-38). On the basis of the band structure displayed in Fig. 1b, the kinks marked by (i) can be ascribed to the Kondo hybridization between the f-band and the conduction band, which results in a decreased conduction electron density inside the hybridization gap below the Kondo temperature T_K (ref. 40). Here, T_K marks the crossover from (single-ion) local moment behaviour at high temperature to entangled behaviour between f and conduction electrons (ref. 41). More importantly, we were able to disentangle several anomalies, which were hitherto not resolved individually by STS at higher temperature (refs 36-38). Benefitting from this improvement, we can investigate the fine structure of bulk/surface bands and go beyond a simple Kondo hybridization analysis, which is based on only one f-band and one conduction band (ref. 14). Around −13.5 meV, there is a small peak marked by (ii). Excitations with similar energy have been reported before, for example, by ARPES (−15 meV) (ref. 15), X-ray photoelectron spectroscopy (−15 meV) (ref. 14) and inelastic neutron scattering (14 meV) (refs 42,43), yet with differing explanations as to its origin. As discussed below, this small peak is most likely related to indirect tunnelling into the localized Γ8^(2) states. Compared with delocalized f-states, such localized f-states may give rise to only small anomalies in spectroscopic measurements (ref. 44). Compared with peak (ii), peak (iii) (at around −6.5 meV) is very sharp and pronounced. Such a peak has been observed on different types of surfaces, including reconstructed ones (refs 36-38), which clearly indicates that there are significant bulk contributions to this state. Very likely, the weakly dispersive structure of the hybridized Γ8^(1) band around the X-point, along with the Fano effect, can induce a peak in the conductance spectra at this energy level. In a Kondo system, the Fano effect is due to a quantum mechanical interference of electrons tunnelling into the localized states and the conduction bands (refs 45,46). Either a sharp drop (like feature (i)) or a pronounced peak will show up around the gap edge, depending on the tunnelling ratio between the two channels, as well as the particle-hole asymmetry of the conduction band. However, as has been reported previously, the spectrum deviates from a simple Fano model at low temperature (refs 36,38), indicating additional components to peak (iii) (see also discussion below). This is consistent with our inference that the hybridized Γ8^(1) band contributes to this peak. Note that its energy level is also comparable with the size of the small bulk gap observed by transport measurements (refs 25-27). Therefore, peaks (i) to (iii) can directly be compared with the band structure in Fig. 1b. To verify the bulk/surface origins of these peaks at low temperature, impurity, magnetic-field and temperature dependences of STS have been conducted.
As we will show below, besides bulk components, peak (iii) also contains components from the surface layer below 7 K. Crucially, we also observe small anomalies (iv) and (v) at ±3 meV, which reside just inside the bulk gap Δ2 (cf. also results on temperature-dependent STS spectra below). The shoulder-like shape of these small anomalies indicates the existence of two weakly dispersive bands or localized states near E_F. It is noted that both features at about ±3 meV also reveal spatial inhomogeneity (Supplementary Fig. 2), which, given the electronic inhomogeneity of even atomically flat surfaces 39, hints at a surface origin of these states.

Spatial dependence of the STS spectra. For STM measurements, one possible way to distinguish bulk and surface states is to carefully investigate the tunnelling spectra at/near impurities or other defects, because the surface states are more vulnerable to such defects. Therefore, g(V) was measured across two impurity sites at 0.35 K, as shown in Fig. 3a,b. The bigger impurity at #A, with an apparent height of ≈160 pm, is probably located on top of the surface, while the smaller one at #E (apparent height ≈50 pm) is likely incorporated into the crystal. According to Fig. 3c, the g(V)-curves are all very similar for positions #B to #F. Even at position #A, that is, on top of the big impurity, the spectrum exhibits similarities; in particular, all anomalies (i)-(v) can be recognized. In addition, a new peak occurs at −10 meV, which may be assigned to an impurity bound state. In Fig. 3d, we plot the height of peaks (ii) to (iv) at different positions. A combined analysis of Fig. 3c,d reveals spatial stability of peak (ii), consistent with the expectation for bulk states as discussed above. On the other hand, peaks (iii) and (iv) are not as stable as peak (ii); their heights are suppressed by both the big and the small impurity, which implies that at this temperature both peaks contain contributions from states pertaining to the surface.

Magnetic-field dependence of the STS spectra. In Fig. 4a,b, g(V)-curves measured at sites #A and #C of Fig. 3a for different applied magnetic fields are presented. There is no distinct change detected up to the maximum field of 12 T for features (i) to (v), except an enhanced peak amplitude for the impurity state at −10 meV, see Fig. 4b. The magnetic-field independence of these states is consistent with the observation of metallic surface conductance up to 100 T by transport 26,47-49 and spectroscopic measurements 30,36. This observation can be understood by considering the very small g-factor (0.1-0.2) of the f-electrons 50.

Temperature dependence of the STS spectra. We now turn to the temperature dependence of features (i) to (v). The temperature evolution of the STS spectra was measured continuously on the same unreconstructed, B-terminated surfaces away from any defect between 0.35 and 20 K, see Fig. 4c. Above 15 K, the spectra show a typical asymmetric lineshape which arises from the Fano effect 45,46, in good agreement with previous work 37. Following the interpretation of ref. 45, the peak position in energy can be related to the gap edge, that is, to the Γ8(1) band in the case of SmB6, as discussed above. On cooling, the amplitude of peak (iii) increases sharply, accompanied by a sudden appearance of peaks (iv) and (v) below 7 K, with the latter effect being beyond thermal smearing (Supplementary Fig. 3).
The low-temperature evolution of the spectra is clearly seen after subtracting the 20 K data from the measured g(V, T)-curves, see Fig. 4d. To quantitatively investigate the evolution of the spectra with temperature, we describe the low-temperature g(V)-curves by a superposition of four Gaussian peaks on top of a co-tunnelling model (Supplementary Fig. 4). However, fits to data obtained at higher temperature (T > 10 K) turned out to be less reliable (Supplementary Figs 5 and 6). To further analyse the temperature evolution of peak (iii), we normalized the spectra by their size at V_b = ±30 mV. The resulting g(T)-values of peak (iii) are plotted in Fig. 4e. Clearly, a change in the temperature dependence is observed around 7 K. This is further supported by a comparison to data obtained by Yee et al. 36 (blue circles and blue dashed line) in a similar fashion but on a (2 × 1) reconstructed surface (which may explain the scaling factor, right axis). Also, the spectral weights of the −10 meV peak by Ruan et al. 38 (green squares) indicate a similar trend at T ≳ 5 K. Note that even the temperature evolution above about 7 K cannot be explained by a mere thermal broadening effect 36,38. By tracing the temperature evolution of the dI/dV-spectra between about 7 and 50 K (refs 36-38), a characteristic energy scale of about 50 K was derived. This can be accounted for by the Kondo effect of the bulk states, with an additional contribution from a resonance mode 38, which is likely (as discussed above) related to the Γ8(1) state. The same energy scale of about 50 K has also been observed by transport 9,12 and other spectroscopic measurements 13,20,51,52. However, below 7 K, the intensity of peak (iii) shows a sudden enhancement in Fig. 4e, indicating the emergence of an additional energy scale. Considering that this new energy scale, as well as many other exotic transport phenomena related to the formation of a metallic surface 10-12, sets in simultaneously, the increase in intensity of peak (iii) (as well as the appearance of peaks (iv) and (v)) is expected to rely on the same mechanism that is responsible for the formation of the metallic surface. Both observations appear to evolve out of the bulk phenomena associated with the primary hybridization gap at elevated temperatures. In the following section, we will argue that this new energy scale is related to the suppression of the Kondo effect at the surface.

Discussion
In this study, the topographic capabilities of the STM allow us to distinguish features (i) to (v) on non-reconstructed (001) surfaces of a single termination and without apparent defects. Therefore, we can exclude the possibility that they are driven by surface reconstructions or defects. In particular, the observation of new states on clean surfaces below about 7 K indicates that the exotic properties of SmB6 are intrinsic rather than due to impurities. The observation of well-resolved features in our tunnelling spectra (discussed above) enables a direct comparison with results of bulk band structure calculations (refs 4-7,18,53). This not only reveals the energy levels of the multiplet f-states, but can also reconcile the long-standing debate of 'small' versus 'large' bulk gap in SmB6 (ref. 14). Consequently, our data show that a dedicated hybridization model with two (instead of one) multiplet f-states is necessary to interpret the low-energy properties of SmB6.
In particular, peak (iii) has multiple components, including bulk and surface states, the ratio of which changes dramatically with temperature. It is widely accepted that the electronic properties of SmB6 can be divided into several temperature regions, based on transport measurements 18,26, as well as other probes, like ARPES (refs 18,20). Apparently, 5-7 K is a crucial regime, where the temperature-dependent properties undergo significant changes. Above this range, the electronic states in SmB6 are governed by the Kondo effect of the bulk 14,16,17. At lower temperatures, several interesting observations, in addition to that of the saturated resistance, were made. For example, the Hall voltage becomes sample-thickness independent 11; the angular-dependent magnetoresistance pattern changes from fourfold to twofold symmetry 26; and the development of a heavy fermion surface state is found by magnetothermoelectric measurements 54. These experimental facts provide convincing evidence for the formation of (heavy) surface states just around 5-7 K, in line with the appearance of a new energy scale. Recently, a surface Kondo breakdown scenario was proposed, based on the reduced screening of the local moments at the surface. As a result, the Kondo temperature of the outermost layer, T_K^s, can be strongly suppressed, resulting in a modified band structure 55. Slab calculations further show that below T_K^s surface f-electrons gradually hybridize with conduction electrons at the surface and form a weakly dispersive band close to E_F (refs 50,53). Remarkably, very narrow peaks with strongly temperature-dependent STS spectra near E_F are regarded as smoking-gun evidence for a surface Kondo breakdown scenario 53. On the basis of our experimental results, T_K^s is inferred to be around 7 K, about an order of magnitude smaller than T_K. The evolution of our tunnelling spectra below about 7 K also fits excellently to the theoretical prediction and the related calculations for STS. In such a scenario, the additional component at −6.5 meV and the shoulders at ±3 meV are related to the heavy quasiparticle surface states, the formation of which supplies an additional tunnelling channel, in particular into the f-states. This provides a likely origin for the metallic surface states and a reasonable explanation for the various experimental observations listed above. We note that, theoretically, a surface Kondo breakdown effect does not change the topological invariance of SmB6, which is determined by the topology of the bulk wave functions. Therefore, the surface-derived heavy quasiparticle states could still be topologically protected. Experimentally, for such topologically protected surface states backscattering is forbidden in quasiparticle interference (QPI) patterns as measured by STM (ref. 56). In line with this prediction, and as shown in Supplementary Fig. 7, no clear quasiparticle interference pattern has so far been detected in our results, which is similar to the observation by Ruan et al. 38.

Methods
Sample preparation and STM measurements. All samples were grown by the Al-flux method. A cryogenic (base temperature T ≈ 0.35 K) STM with magnetic-field capability of μ0H ≤ 12 T was utilized. Three SmB6 single crystals were cleaved a total of five times in situ at ≈20 K to expose a (001) surface.
Cleaved surfaces were constantly kept in ultra-high vacuum, p < 3 × 10^−9 Pa. Tunnelling was conducted using tungsten tips 57, and the differential conductance (g(V)-curve) was acquired by the standard lock-in technique with a small modulation voltage V_mod ≤ 0.3 mV. On our best cleaved sample, the size of non-reconstructed surface areas can reach up to 100 × 100 nm².

Analysis of STS spectra. In principle, the low-temperature g(V)-curves can be well described by a superposition of four Gaussian peaks on top of a Fano model (see the example of g(V, T = 0.35 K) in Supplementary Fig. 4) or more elaborate hybridization models 45,46 (Supplementary Fig. 6). A similar procedure with only one Gaussian was employed in ref. 38. However, fits are less reliable at elevated temperature. Instead, our spectra measured at different T in zero field overlap nicely for V_b < −25 mV and V_b > 10 mV, such that they can be normalized using very similar factors. Consequently, we can directly trace the temperature dependence of the peak height (at least for peak (iii)) by measuring the normalized peak intensity as shown in Fig. 4e. Note that the intensities of peak (iii) as obtained from Fig. 4d, that is, after subtracting the 20 K data, yield very similar values to those shown in Fig. 4e from normalized spectra.

Data availability. The data supporting the findings of this study are included within this article (and its Supplementary Information files), or available from the authors.
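To make the lineshape analysis described under "Analysis of STS spectra" concrete, below is a minimal numerical sketch in Python: a standard Fano background plus four Gaussian peaks with starting guesses placed near features (ii)-(v). All data arrays and parameter values are hypothetical placeholders (and the standard Fano form stands in for the paper's co-tunnelling variant), so this is an illustration of the fitting idea, not the authors' actual code. The closing comment also works through why the quoted g-factor implies field-independent spectra at 12 T.

```python
import numpy as np
from scipy.optimize import curve_fit

def fano(V, A, q, E0, Gamma):
    """Standard Fano lineshape: A * (q + eps)^2 / (1 + eps^2)."""
    eps = (V - E0) / Gamma
    return A * (q + eps) ** 2 / (1.0 + eps ** 2)

def gaussian(V, h, mu, sigma):
    return h * np.exp(-0.5 * ((V - mu) / sigma) ** 2)

def model(V, A, q, E0, Gamma, *peaks):
    """Fano background plus four Gaussians; `peaks` holds 4 x (h, mu, sigma)."""
    g = fano(V, A, q, E0, Gamma)
    for i in range(0, 12, 3):
        g += gaussian(V, *peaks[i:i + 3])
    return g

# Hypothetical starting guesses: Gaussian centers near features (ii)-(v)
# at -13.5, -6.5, -3 and +3 mV (bias in mV, conductance in arbitrary units).
p0 = [1.0, 1.0, -20.0, 10.0,     # Fano: A, q, E0, Gamma
      0.1, -13.5, 1.0,           # peak (ii)
      0.5, -6.5, 0.5,            # peak (iii)
      0.2, -3.0, 0.5,            # peak (iv)
      0.2, 3.0, 0.5]             # peak (v)
# popt, pcov = curve_fit(model, bias_mV, g_meas, p0=p0)  # bias_mV, g_meas: measured data

# Side check motivated by the magnetic-field data: with the quoted g-factor
# of 0.1-0.2, the Zeeman energy at the maximum field of 12 T is
#   E_Z = g * mu_B * B ~ 0.2 * 5.79e-2 meV/T * 12 T ~ 0.14 meV,
# i.e. below the ~0.5 meV energy resolution, consistent with the observed
# field independence of features (i)-(v).
```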
Silencing lncRNA DUXAP8 inhibits lung adenocarcinoma progression by targeting miR-26b-5p

Abstract
Lung adenocarcinoma (LUAD), a common type of lung cancer, has become a highly aggressive cancer. Long noncoding RNAs (lncRNAs) play a critical role in the pathogenesis of human cancers, but the function of double homeobox A pseudogene 8 (DUXAP8) in LUAD remains to be fully explored. Therefore, our study was conducted to elucidate DUXAP8 expression in LUAD and its mechanism of action on the biological features of LUAD cells. Loss-of-function experiments were performed to assess the function of DUXAP8 in the proliferation and apoptosis of H1975 and A549 cells. Functionally, silencing DUXAP8 inhibited proliferation and induced apoptosis of LUAD cells. Mechanistically, further correlation analysis indicated a negative association between miR-26b-5p and DUXAP8 expression. Subsequently, we verified that DUXAP8 exerted its role in the progression and development of LUAD by targeting miR-26b-5p. In summary, our results show that DUXAP8 promotes tumor progression in LUAD by targeting miR-26b-5p, which provides a novel target for the diagnosis and therapy of LUAD.

Introduction
Lung cancer is known to be a leading contributor to tumor-related deaths around the world, and the 5-year survival rate is still ~16.6% [1-5]. About 90% of lung cancers are non-small cell lung cancer (NSCLC), which includes lung adenocarcinoma (LUAD), lung squamous cell carcinoma and large cell lung cancer [6,7]. LUAD, a common type of lung cancer, has become a highly aggressive malignancy [8-10]. It is difficult to diagnose LUAD at an early stage, and most patients are diagnosed at an advanced stage [11-14]. Although therapeutic treatment for LUAD has progressed in recent years, the prognosis of LUAD patients remains very poor. Hence, there is an urgent need to identify novel targets and improve the understanding of the mechanisms behind tumor progression in LUAD. It has been found that cancer progression and tumorigenesis are related to genetic and epigenetic changes, including in LUAD [15-21]. Long non-coding RNAs (lncRNAs) are transcripts of more than 200 nucleotides in length that are widely involved in various physiological and pathological processes [22-25]. Further, a growing number of studies have revealed that lncRNAs exert regulatory roles in the occurrence and progression of cancers [26-29]. For example, Jiao et al. found that lncRNA MALAT1 promotes tumor growth and metastasis by targeting miR-124/foxq1 in bladder transitional cell carcinoma [30]. In addition, lncHOXA10 drives self-renewal and tumorigenesis of liver TICs via HOXA10 transcription activation [31]. Additionally, lncRNA TDRG1 has been reported to promote cell proliferation, migration and invasion by targeting miR-326 to regulate MAPK1 expression in cervical cancer [32]. Therefore, exploring the mechanisms of lncRNAs in human cancers is highly significant. Double homeobox A pseudogene 8 (DUXAP8) is located on chromosome 22q11 and is 2268 bp in length [33,34]. Increasing evidence has demonstrated that DUXAP8 plays a regulatory role in many cancers, including NSCLC and gastric cancer [35]. Chen et al. found that DUXAP8 promoted cell growth in renal carcinoma [36]. Moreover, DUXAP8 could regulate the proliferation and invasion of esophageal squamous cell cancer [37].
A previous study showed that knockdown of DUXAP8 expression suppressed cell proliferation in glioma [38]. Growing evidence has demonstrated that lncRNAs can interact with miRNAs to affect tumorigenesis. For example, lnc NTF3-5 promoted osteogenic differentiation of maxillary sinus membrane stem cells via sponging miR-93-3p [39]. Furthermore, lncRNA SNHG20 is involved in breast cancer cell proliferation, invasion and migration via miR-495 [40]. Pan et al. reported that lncRNA JPX regulates lung cancer tumorigenesis by activating Wnt/β-catenin signaling [41]. It is noteworthy that miR-26b-5p has been identified as closely related to tumor growth and serves as a target of lncRNAs [42-44]. However, it is largely unknown whether DUXAP8 is a functional lncRNA in LUAD, and the effect of DUXAP8 on LUAD and its underlying mechanism remain unclear. Therefore, the focus of the present study is to unravel the functional mechanism of DUXAP8 in LUAD progression. First, the present study showed that the expression level of DUXAP8 was remarkably increased in LUAD tissues compared with adjacent tissues. Functionally, our analyses indicated that lncRNA DUXAP8 facilitates cell proliferation and inhibits apoptosis by targeting miR-26b-5p in LUAD. Our study provides a potentially useful target for LUAD therapy.

Cell culture and cell transfection
Human LUAD cell lines (A549, H1299, H1975) and a normal epithelial cell line (16HBE) were purchased from the American Type Culture Collection (ATCC, Manassas, VA, U.S.A.). The cell lines were cultured in DMEM supplemented with 10% FBS at 37 °C under a humidified atmosphere of 5% CO2. Cells were collected after 48 h for further analysis. Sh-DUXAP8 and sh-NC were obtained from GenePharma (Shanghai, China). Transfection was performed using Lipofectamine 2000 (Invitrogen, Shanghai, China) according to the manufacturer's instructions.

Real-time PCR
Total RNA was isolated from tissue samples and cells with TRIzol reagent (Invitrogen), and cDNA was synthesized with the TaqMan Reverse Transcription Kit (Applied Biosystems). Real-time PCR (RT-PCR) was performed on an Applied Biosystems 7500 Real-Time PCR system (Applied Biosystems, Foster City, U.S.A.) using SYBR Premix ExTaq™ (Life Technologies). Relative gene expression levels were calculated by the 2^−ΔΔCt method, and U6 was employed as the internal control for normalization.

Western blot
Total protein was extracted from cells with RIPA buffer (Beyotime Biotechnology, Beijing, China), and the concentration of total protein was determined using the BCA Protein Assay Kit (Thermo Fisher Scientific). Proteins were separated by SDS/PAGE and transferred to PVDF membranes. After blocking with 5% skim milk in TBST at room temperature for 1 h, membranes were probed with primary antibodies and then incubated with HRP-conjugated secondary antibodies at room temperature for 2 h. An enhanced chemiluminescence detection kit (Thermo Fisher Scientific) was used to visualize the blots. GAPDH served as the internal control, and immunoreactive bands were quantified using ImageJ.

Cell proliferation assay
Cell-counting kit-8 (CCK-8) and colony formation analyses were carried out to determine the effects of DUXAP8 on the proliferation of H1975 and A549 cells. Briefly, H1975 and A549 cells were seeded into 96-well plates at a density of 1 × 10⁵ per well and incubated at 37 °C for 0, 24, 48 or 72 h. Then, each well was treated with 10 μl of CCK-8 and maintained at 37 °C for 2 h.
The optical density (OD) values were measured at a wavelength of 450 nm with a microplate reader (Bio-Tek, Winooski, U.S.A.). For the colony formation assay, H1975 and A549 cells were seeded in six-well plates and grown in DMEM containing 10% FBS. The medium was replaced every 3 days. Two weeks later, the medium was discarded, and cells were fixed in 4% paraformaldehyde and stained with 0.5% crystal violet. Colonies were counted and photographed with a light microscope (Olympus, Tokyo, Japan).

Luciferase reporter assay
H1975 and A549 cells were transfected with either DUXAP8 wild-type (WT) or mutated-type (Mut) promoter reporters in combination with miR-26b-5p mimic. After 48 h of transfection, luciferase activity was detected with a dual-luciferase reporter assay system (Promega), and luciferase intensity was normalized to Renilla luciferase activity.

RNA pull-down assay
The Pierce™ Magnetic RNA-Protein Pull-Down Kit was used for the RNA pull-down assays. Briefly, DUXAP8-WT, DUXAP8-Mut and NC were biotin-labeled to generate Biotin-DUXAP8-WT, Biotin-DUXAP8-Mut and Biotin-NC, respectively. Next, cell lysates were incubated with the biotinylated probes and M-280 streptavidin magnetic beads (Sigma-Aldrich). Finally, RT-qPCR was used to assess the expression of miR-26b-5p.

Statistical analysis
Statistical analyses were performed with GraphPad Prism 5.0, and data are presented as mean ± standard deviation (SD). Differences between groups were calculated by one-way ANOVA followed by Tukey's post hoc analysis. At least three independent experiments were performed, and P<0.05 was considered statistically significant.

DUXAP8 is up-regulated in LUAD tissues and cell lines
To explore the expression levels of DUXAP8 in LUAD, RT-PCR was performed on tissues and cell lines. We found that DUXAP8 was remarkably increased in cancer samples compared with their corresponding normal samples (Figure 1A). In addition, compared with 16HBE cells, DUXAP8 expression was significantly up-regulated in cancer cells (A549, H1299, H1975) (Figure 1B). Based on these results, we inferred that DUXAP8 might serve as an oncogene in LUAD.

Silencing of DUXAP8 inhibited LUAD cell proliferation
To identify the effect of DUXAP8 in LUAD, we transfected the indicated cells with si-NC or si-DUXAP8, and the knockdown efficiency was verified by RT-PCR. We observed that si-DUXAP8 transfection significantly decreased DUXAP8 levels in H1975 and A549 cells (Figure 2A). Next, we investigated the influence of DUXAP8 on cell proliferation using CCK-8 and colony formation assays. Silencing DUXAP8 reduced H1975 and A549 cell viability (Figure 2B). Consistently, the colony formation assay indicated that si-DUXAP8 restricted cell proliferation in H1975 and A549 cells (Figure 2C). Collectively, down-regulation of DUXAP8 expression inhibited cell proliferation in LUAD.

Silencing of DUXAP8 promoted cell apoptosis
Flow cytometric analyses showed that the percentage of apoptotic cells was increased by si-DUXAP8 compared with the si-NC group (Figure 3A). Consistently, the expression levels of pro-apoptotic proteins (Bax, cleaved caspase-3 and cleaved caspase-9) were all increased, whereas the anti-apoptotic protein Bcl-2 was markedly down-regulated in the si-DUXAP8 group (Figure 3B). Collectively, down-regulation of DUXAP8 expression facilitated cell apoptosis in LUAD.
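As context for the RT-PCR quantification and statistics described in the Methods (2^−ΔΔCt relative expression, one-way ANOVA with Tukey's post hoc test), here is a minimal sketch in Python; all Ct values and group fold changes below are invented placeholders for illustration, not data from this study.

```python
import numpy as np
from scipy.stats import f_oneway

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-(ddCt): expression of a target gene relative to the control group,
    after normalizing each sample to a reference gene (U6 in the Methods)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Example: DUXAP8 in tumour vs adjacent tissue (hypothetical Ct values).
print(fold_change(22.1, 18.0, 25.3, 18.1))   # ~8.6, i.e. up-regulated

# Hypothetical replicate fold changes for three transfection groups;
# a Tukey post hoc test (e.g., scipy.stats.tukey_hsd) could follow.
si_nc     = np.array([1.00, 1.05, 0.96])
si_duxap8 = np.array([0.42, 0.39, 0.45])
rescue    = np.array([0.81, 0.86, 0.78])
print(f_oneway(si_nc, si_duxap8, rescue))    # one-way ANOVA, as in Methods
```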
MiR-26b-5p was a downstream target of DUXAP8
MiR-26b-5p was predicted as a putative target of DUXAP8 by bioinformatics analysis (Figure 4A). We first measured the level of miR-26b-5p in tissues and cell lines. In contrast with DUXAP8 expression, the miR-26b-5p expression level was dramatically decreased in LUAD tissues and cells (Figure 4B,C). We therefore explored the association between DUXAP8 and miR-26b-5p. The luciferase reporter assay showed that miR-26b-5p mimic repressed the relative luciferase activity of the DUXAP8-WT reporter, while no obvious change was observed for the mutant form of DUXAP8 (Figure 4D). Moreover, a pull-down assay was conducted, and the results showed that miR-26b-5p was more enriched by biotinylated DUXAP8-WT than by the DUXAP8-Mut or NC groups (Figure 4E). These data revealed that DUXAP8 negatively regulates miR-26b-5p expression by directly targeting miR-26b-5p.

DUXAP8 facilitated cell progression via targeting miR-26b-5p
Based on the above findings, we performed rescue assays to determine whether DUXAP8 exerts its oncogenic function by modulating miR-26b-5p. MiR-26b-5p expression levels were measured in H1975 and A549 cells after transfection with si-NC, si-DUXAP8, miR-26b-5p inhibitor or si-DUXAP8 + miR-26b-5p inhibitor (Figure 5A). The CCK-8 assay illustrated that cell viability was reduced by down-regulation of DUXAP8 expression and subsequently restored by the miR-26b-5p inhibitor (Figure 5B). Consistent with the CCK-8 assay, the colony formation assay demonstrated that inhibition of miR-26b-5p abrogated the anti-proliferative effects of si-DUXAP8 (Figure 5C). As expected, the pro-apoptotic effect of si-DUXAP8 in H1975 and A549 cells was partially reversed by the miR-26b-5p inhibitor (Figure 5D). Together, these results provided strong evidence that DUXAP8 functions as a tumor promoter in LUAD progression via suppression of miR-26b-5p.

Discussion
Lung cancer is a malignant cancer with high morbidity and mortality worldwide. LUAD is a common subtype of lung cancer, and improvement in its prognosis has been stagnant over past decades despite treatment progress [45]. Despite some progress in related treatments in recent years, the overall 5-year survival rate of advanced lung cancer is still less than 15%. Accordingly, elucidating the underlying mechanisms of LUAD to discover effective diagnostic and prognostic biomarkers is conducive to the improvement of LUAD therapy. Over the past decades, mounting studies have reported that lncRNAs play important roles in the progression of human cancers, including LUAD. Peng et al. showed that lncRNA CRNDE promotes colorectal cancer cell proliferation and chemoresistance via miR-181a-5p-mediated regulation of Wnt/β-catenin signaling [46]. In addition, lncRNA NORAD has been reported to contribute to colorectal cancer progression through inhibition of miR-202-5p [47]. Moreover, lncRNA XIST promotes cisplatin resistance in human LUAD cells via the let-7i/BAG-1 axis [48]. Nevertheless, there are still numerous lncRNAs that need to be elucidated. In the present study, we focused on the biological function of DUXAP8 in the development of LUAD. We first examined the expression of DUXAP8 in LUAD tissues and cells. Compared with normal tissues and cells, DUXAP8 was highly expressed in LUAD tissues and cells. These data are consistent with previous findings showing that DUXAP8 acts as a tumor promoter in regulating cancer progression [49,50].
Thereafter, DUXAP8 was silenced in H1975 and A549 cells to carry out loss-of-function experiments. Our results showed that depletion of DUXAP8 suppressed cell proliferation and promoted apoptosis. However, the mechanism by which DUXAP8 is involved in the progression of LUAD remained unclear. Accumulating research suggests that lncRNAs regulate cell functions through interacting with miRNAs [51]. Furthermore, growing studies emphasize that miRNAs are core mediators in the progression and development of multiple malignant tumors, functioning as oncogenes or tumor suppressors [52,53]. Using bioinformatics tools, miR-26b-5p was found to share binding sites with DUXAP8. In addition, we carried out a luciferase reporter assay to identify the correlation between miR-26b-5p and DUXAP8 in LUAD cells, and discovered that miR-26b-5p was negatively regulated by DUXAP8. Moreover, rescue experiments revealed that inhibition of miR-26b-5p blocked the effects of DUXAP8 knockdown on the proliferation and apoptosis of LUAD cells. In summary, we found that silencing DUXAP8 expression suppresses cell proliferation and enhances apoptosis by targeting miR-26b-5p, and that DUXAP8 thus serves as a cancer facilitator in LUAD. To the best of our knowledge, this is the first investigation to shed light on the role and molecular mechanism of DUXAP8 in LUAD. Our findings represent a potential therapeutic target for the treatment of LUAD; whether more complex regulation is involved remains to be explored by us and other researchers. We will pursue deeper and more detailed studies of the regulatory mechanism of DUXAP8 in LUAD in future work.
Neuroprotective Strategy in Retinal Degeneration: Suppressing ER Stress-Induced Cell Death via Inhibition of the mTOR Signal

The retina is a specialized sensory organ, which is essential for light detection and visual formation in the human eye. Inherited retinal degenerations are a heterogeneous group of eye diseases that can eventually cause permanent vision loss. The UPR (unfolded protein response) and ER (endoplasmic reticulum) stress play an important role in the pathological mechanism of retinal degenerative diseases. mTOR (the mammalian target of rapamycin) kinase, as a signaling hub, controls many cellular processes, including protein synthesis, RNA translation, ER stress, and apoptosis. Here, the hypothesis that inhibition of mTOR signaling suppresses ER stress-induced cell death in retinal degenerative disorders is discussed. This review surveys knowledge of the influence of mTOR signaling on ER stress arising from misfolded proteins and genetic mutations in retinal degenerative diseases and highlights potential neuroprotective strategies for treatment and their therapeutic implications.

Retinal Degeneration
The retina arises from the neuroectoderm during embryogenesis and is the part of the eye that perceives light stimuli, as well as integrates and transmits electrical impulses through the optic nerve to the visual cortex in the brain. The retina comprises six major types of neurons, including retinal ganglion cells, bipolar, horizontal and amacrine interneurons, Müller glia and photoreceptors. These neurons are organized in three cellular layers separated by synaptic layers [1]. In the outer nuclear layer (ONL), the photoreceptor cells are morphologically compartmentalized cells with inner and outer segment regions connected by a narrow cilium. The outer segment of the photoreceptor consists of a stack of disk membranes surrounded by a plasma membrane [2]. Continual replenishment of disks in the photoreceptor leads to a high rate of protein turnover and ER biogenesis; the ER post-translationally modifies and controls the quality of many outer segment (OS) proteins, including opsin. Retinal pigment epithelial (RPE) cells, located at the outer layer of the retina, provide nourishment (e.g., vitamin A metabolites) to the overlying photoreceptor cells, clear OS debris via daily phagocytosis of OS tips, and participate in visual pigment regeneration [3]. The outer plexiform layer (OPL) consists of synaptic interactions between photoreceptors, horizontal cells and bipolar cells [4]. The inner nuclear layer (INL) contains horizontal, bipolar, and amacrine cell bodies, which play different roles in visual formation. The inner plexiform layer (IPL) is composed of synapses between bipolar and retinal ganglion cells (RGCs), and the ganglion cell layer (GCL) consists of RGC nuclei. The RGC axons form the optic nerve, the output of the retina to the brain, transferring visual information to the visual centers [5]. Inherited retinal degenerations (IRD) are a heterogeneous group of eye diseases that affect more than 2 million people worldwide and can eventually cause permanent vision loss [6]. The inheritance patterns, onset age, and severity of visual dysfunction in IRD differ. Retinal dystrophies are classified into syndromic and nonsyndromic forms, with autosomal, X-linked, and mitochondrial inheritance.
Phenotypic categories include retinitis pigmentosa (RP), macular degeneration, cone or cone-rod dystrophy, congenital stationary night blindness, and Leber congenital amaurosis (LCA) [7]. Dysfunctions of rod and cone photoreceptor cells can involve the outer segment structure, phototransduction, the cilium structure and transport connection, inner segment protein and vesicle trafficking, lipid metabolism, chaperone function, RNA splicing and transcription, synaptic function, and retinal development. Affected mechanisms in the RPE cover membrane trafficking, ion transport and visual cycle reactions [7]. Certain mutations in secondary retinal neurons, such as ganglion cells and Müller cells, can also lead to IRD [8]. However, the exact molecular mechanisms by which mutant genes cause dysfunctional retinal neurons to undergo apoptosis and lead to the development of IRDs are still unclear. Animal models carrying mutations in various genes that mimic human IRDs have been used to describe the modes of retinal cell death. These animals include (1) retinal degeneration (rd) mice; (2) retinal degeneration slow (rds) mice; (3) transgenic mice carrying P347S and Q344ter mutations in the rhodopsin gene; (4) knockout mice deficient for the β2-subunit of Na+/K+-ATPase expressed in retinal Müller cells; and (5) Royal College of Surgeons (RCS) rats. In these models, photoreceptors were found to die via the apoptotic pathway, as evidenced by histological morphology, TUNEL (terminal deoxynucleotidyl transferase-mediated biotin-dUTP nick end-labeling) assays, and/or retinal DNA laddering with gel electrophoresis. Results from these studies indicate that photoreceptors in human IRDs likely die similarly via the apoptotic pathway [9].

Proteostasis and ER Stress
Protein homeostasis is critical for cellular function and is tightly controlled by the synthesis and clearance of proteins [10]. The concept of proteostasis is simple: protein synthesis (including protein folding and protein transport) must match the rate of degradation [11,12]. A healthy proteome is maintained through a series of complex surveillance systems, which ensure that each protein is functionally folded or assembled [13]. Cells have evolved many mechanisms to cope with misfolded proteins, such as the ubiquitin-proteasome system (UPS) [14], ER-associated protein degradation (ERAD) [15] and the unfolded protein response (UPR) [16]. These proteostasis networks play important roles in maintaining correctly folded proteins and removing misfolded proteins. The endoplasmic reticulum (ER) is responsible for the quality control of newly synthesized proteins, including protein folding, post-translational modification and transportation [17]; thus, it is a key component of cellular proteostasis. ER cisternae have been historically classified as ribosome-bound "rough" ER and ribosome-free "smooth" ER. The smooth ER includes ribosome-free areas, where fusion and vesicle budding take place, whereas the rough ER performs the functions of proper protein folding and modification [18]. Newly synthesized, unfolded proteins are initially generated at the ribosomes and are then transported into the cisternal space of the ER, where the polypeptide chains are properly folded and oligomerized, disulfide bonds are formed, and N-linked oligosaccharides are attached for glycoproteins.
After folding and post-translational modification, mature proteins dissociate from ER chaperones and are transported to the Golgi apparatus [19]. The cellular protein folding capacity is tightly regulated in the ER via activation of intracellular signal pathways [20]. When the accumulation of misfolded or unfolded proteins causes an imbalance in ER homeostasis, it leads to ER stress, which further causes activation of the UPR (unfolded protein response) [21,22]. The UPR promotes protein folding and suppresses protein translation to reduce the load of proteins within the ER, and increases autophagy and ERAD to promote degradation of misfolded proteins [21]. There are three known stress sensors that trigger the UPR in the ER: inositol-requiring protein 1 (IRE1), protein kinase RNA-like ER kinase (PERK), and activating transcription factor 6 (ATF6) [23]. PERK phosphorylates initiation factor eIF2α, leading to cap-independent translation of ATF4. ATF4 activates C/EBP homologous protein (CHOP), which can stimulate apoptosis. IRE1 is a kinase whose activation triggers its RNase activity. This induces the splicing of XBP1 mRNA and further activates ERAD. Following immunoglobulin binding protein (BiP) dissociation, ATF6 is cleaved by the S1P and S2P proteases into an active form in the Golgi. The activated ATF6 then causes activation of ERAD to restore ER homeostasis. The initial transcriptional and translational effects of IRE1, PERK, and ATF6 signaling help cells adapt to ER stress. However, if these actions fail to restore ER homeostasis and ER stress persists, UPR signaling triggers maladaptive proapoptotic programs and cell death. ER stress functions as a critical mechanism relevant to pathogenesis in IRD [19] (Figure 1).

Figure 1. The unfolded protein response (UPR). There are three known pathways triggering the UPR in the ER. (i) PERK (protein kinase RNA-like ER kinase) phosphorylates initiation factor eIF2α, leading to cap-independent translation of ATF4 (activating transcription factor 4). ATF4 activates CHOP (C/EBP homologous protein), which can stimulate apoptosis. (ii) IRE1 (inositol-requiring protein 1) is a kinase whose activation triggers its RNase activity. This induces the splicing of XBP1 mRNA and further activates ERAD (ER-associated protein degradation). (iii) Following BiP (immunoglobulin binding protein) dissociation, ATF6 is cleaved by the S1P and S2P proteases into an active form in the Golgi. The activated ATF6 will then cause the activation of ERAD to restore ER (endoplasmic reticulum) homeostasis.

Disturbance of Proteostasis and ER Stress in Retinal Degeneration
The generation of visual information by retinal cells depends on functional proteins, such as rhodopsin (Rh) [24]. The ER is an important intracellular apparatus responsible for protein quality control. Only properly folded proteins are released from the ER, while misfolded proteins are degraded to prevent the generation of dysfunctional or potentially toxic proteins [25]. ER stress and the UPR, caused by protein misfolding, have recently been regarded as contributing factors to IRD [26,27]. Bhootada et al. reported that the expression of T17M Rh in rod photoreceptors induces the activation of ER stress-related UPR signaling, which results in severe retinal degeneration. ATF4 knockdown blocked retinal degeneration and promoted photoreceptor survival in one-month-old T17M mice [28]. Thus far, over 250 different heritable mutations have been identified that cause the production of abnormal proteins by retinal cells, leading to retinal degeneration and vision loss [29]. The imbalance of proteostasis has been implicated in IRD [30,31]. UPR induction might function as a common pathway activated in retinal degeneration and involved in the degenerative process. Rh is the predominant protein within photoreceptors, and mutations in Rh are the most common cause of inherited RP [32]. The IRE1 signaling pathway of the UPR was robustly activated in a Drosophila model of retinal degeneration caused by Rh misfolding [33]. In addition, selective activation of UPR signaling pathways was detected in P23H animals expressing different levels of P23H Rh compared with wild-type siblings [34-37]. Griciuc et al. studied Drosophila in which Rh1 (P37H) was expressed in photoreceptors and genetically increased the levels of misfolded Rh1 (P37H), which further caused the activation of the Ire1/Xbp1 ER stress pathway [38]. In animals expressing misfolded Rh, proapoptotic UPR molecules, such as cleaved ATF6, p-eIF2α, and CHOP, markedly increased before the loss of photoreceptors, raising the possibility that UPR activation caused by misfolded Rh may directly result in photoreceptor cell death [34,35]. However, overexpressing BiP with an adeno-associated virus type 5 (AAV5) vector in transgenic rat retina led to a reduction in CHOP and photoreceptor apoptosis [35]. Mutations in the ELOVL4 (elongation of very long chain fatty acids) gene result in Stargardt macular dystrophy and early childhood blindness [39]. ELOVL4 encodes a membrane protein targeted to the ER, an enzyme involved in the generation of long-chain fatty acids [40,41]. Not only do photoreceptors express high levels of ELOVL4, but other types of cells in the eye have been found to express ELOVL4 as well [42].
From a molecular point of view, ELOVL4 mutations cause premature truncation of the protein, leading to loss of an ER retention motif [43-45]. In transfected cells, misfolded ELOVL4 aggregates in the ER and induces the UPR, which is closely associated with photoreceptor cell death [35]. The rd1 mutation leads to a remarkable reduction in the PDE6-β protein, a catalytic subunit of a phosphodiesterase that regulates cGMP levels in photoreceptors [46,47]. Absence of PDE6-β causes accumulation of cGMP, which in turn results in significant increases in intracellular calcium and photoreceptor degeneration [48]. A recent study demonstrated that multiple UPR signaling pathways were activated and that the levels of the UPR proteins BiP and p-eIF2α increased in a time-dependent manner in the retinas of rd1 mice, suggesting that ER stress contributes to retinal degeneration in rd1 mice [49]. As a post-translational modification, N-linked glycosylation can influence protein folding efficiency. The fibulin-3 gene encodes an N-glycoprotein that is expressed and secreted by RPE cells. The R345W mutation leads to fibulin-3 misfolding and retention in the ER and causes Malattia Leventinese and Doyne honeycomb retinal dystrophy (ML/DHRD) [50-52]. In a cell study, overexpression of mutant R345W fibulin-3 resulted in activation of the UPR and increased vascular endothelial growth factor (VEGF) expression compared with overexpression of the wild-type protein [50,53]. These results suggest that misfolded fibulin-3 may trigger RPE dysfunction via activation of UPR pathways and may further enhance VEGF levels, leading to choroidal neovascularization. Various gene mutations target different retinal neurons and encode misfolded proteins in various types of retinal degenerative diseases. As misfolded or unfolded proteins accumulate in the ER, the UPR is triggered to restore proteostasis. However, an excessive imbalance in proteostasis results in prolonged ER stress, which initiates programmed cell death. The ER stress response has recently been proposed as an important contributing factor to retinal degenerative disease (Table 1).
Figure 2. Schematic representation of the mTOR signaling pathway. mTORC1 is activated by Rheb (Ras homolog enriched in brain); the two most well-known downstream targets of mTORC1 are S6K1 (S6 kinase 1) and 4EBP1 (eukaryotic translation initiation factor 4E-binding protein 1). As an upstream signal, TSC (tuberous sclerosis complex) can suppress Rheb to negatively regulate mTORC1. Furthermore, PTEN (phosphatase and tensin homolog deleted on chromosome 10) suppresses PI3K (phosphoinositide 3-kinase) signaling, thus inactivating the mTOR pathway. In addition, AMPK (AMP-dependent kinase) is activated by a high AMP/ATP ratio and suppresses mTOR. The main functions of mTORC1 activity are the stimulation of mRNA translation and protein synthesis, and the inhibition of autophagy. PIP2/PIP3, phosphatidylinositol 4,5-bisphosphate/phosphatidylinositol 3,4,5-trisphosphate; PDK1, 3-phosphoinositide-dependent protein kinase 1; AKT1, AKT serine/threonine kinase 1; PRAS40, 40 kDa proline-rich AKT1 substrate 1; RAPTOR, regulatory associated protein of mTOR; mLST8, mammalian lethal with SEC13 protein 8; DEPTOR, DEP domain-containing mTOR-interacting protein.

Inhibition of mTOR Suppresses ER Stress and Attenuates Retinal Degeneration
Rapamycin (a.k.a. Rapa Nui), an mTOR inhibitor, was discovered in 1970 on Easter Island. In all eukaryotes, the intracellular rapamycin receptor is a 12-kDa protein, the FK506-binding protein (FKBP12) [63]. When rapamycin conjugates to FKBP12, it forms a ternary complex with a conserved mTOR domain, shutting down downstream signals [64,65]. Rapamycin inhibits the function of mTORC1, whereas newly developed mTOR kinase inhibitors interfere with the actions of both mTORC1 and mTORC2. mTOR inhibition suppresses cellular protein synthesis by regulating the initiation and elongation phases of translation and ribosome biosynthesis [66]. Two main downstream signals of mTOR are p70S6K and eIF4E (4E-BP1) [67]. Blocking mTOR affects the activity of p70S6K and the function of 4E-BP1, leading to inhibition of protein synthesis [68]. Through inhibition of p70S6K, rapamycin also blocks the translation of 5′-TOP (5′-terminal oligopyrimidine tract) mRNAs to suppress mRNA translation and protein synthesis [69]. In addition to affecting p70S6K, rapamycin acts on 4E-BP1, a translation inhibitor that is phosphorylated and inactivated in response to growth signals [70]. Rapamycin treatment causes dephosphorylated 4E-BP1 to bind and inhibit the translation initiation factor eIF4E, which then dissociates from the cap structure at the 5′ termini of mRNAs, thereby suppressing cap-dependent translation [71]. Thus, inhibiting the mTOR signal strongly reduces intracellular protein synthesis and alleviates the protein load in the ER, which, in turn, remarkably suppresses ER stress. In addition, the mTOR signal can directly influence ER stress through a specific Ire1α-ASK1-JNK signal. Inhibiting mTORC1 blocks the Ire1α-ASK1-JNK pathway and suppresses the activation of JNK, which mitigates ER stress-induced apoptosis [72]. Inhibition of the mTOR signal also augments the autophagy process to remove damaged macromolecules and misfolded proteins. Inhibition of mTORC1 leads to increased ULK1/2 (UNC-51-like autophagy activating kinase 1/2) activity, which further phosphorylates ATG13 (autophagy related gene 13) to activate the autophagy process [73,74]. At the transcriptional level, inhibition of mTORC1 also modulates autophagy by regulating the localization of TFEB (transcription factor EB), an important autophagy gene regulator.
Many studies have demonstrated that mTOR inhibition strongly induces autophagy in various model systems, even in the presence of nutrients [75]. Inhibition of the mTOR signal by rapamycin and its analogs leads to a reduction in the synthesis of misfolded proteins and an increase in the degradation of damaged proteins, which further suppresses the ER stress caused by gene mutations in retinal degeneration. Many previous studies have demonstrated that inhibition of mTOR has neuroprotective effects, including rescue of photoreceptors and/or RPE from mutant gene-induced apoptosis, which slows the process of retinal degeneration [76]. P23H, one of the Rh mutants, misfolds and accumulates in the ER. If degradation fails, the protein can aggregate and further trigger photoreceptor death. Moreover, mutant P23H negatively competes with the function of the wild-type (WT) protein. However, the toxic and negative properties of mutant P23H can be remarkably attenuated by mTOR inhibition, since P23H aggregation, caspase activation, and apoptosis have been found to be significantly reduced by mTOR inhibition [77]. In an animal model, activation of the UPR and a consistent increase in BiP and CHOP gene expression were detected in P23H Rh retinas with autosomal dominant retinitis pigmentosa (ADRP), which might stimulate apoptotic signaling in these animals. Injections of rapamycin protected the P23H Rh rod photoreceptors from physiological decline and slowed the rate of retinal degeneration [78]. In P37H mutant retina, chronic suppression of TOR signaling using the inhibitor rapamycin was found to strongly mitigate photoreceptor degeneration and chronic P37H proteotoxic stress [79]. Genetic inhibition of the ER stress-induced JNK/TRAF1 pathway and the APAF-1/caspase-9 pathway dramatically suppressed P37H-induced photoreceptor degeneration. These findings suggest that chronic P37H proteotoxic stress disrupts cellular proteostasis, further leading to metabolic imbalance and mitochondrial failure. Inhibiting the mTOR signal can normalize metabolic function and alleviate ER stress-induced retinal degeneration. RPE dysfunction has been implicated in various retinal degenerative diseases. Impairment of RPE mitochondria in mice induces gradual epithelium dedifferentiation, loss of RPE features, and cellular hypertrophy through activation of the AKT/mTOR pathway. Treatment with rapamycin has been shown to mitigate key features of dedifferentiation and maintain photoreceptor function [77]. However, Rajala et al. reported that light damage causes activation of phosphoinositide 3-kinase and the AKT pathway in rod photoreceptor cells. AKT activation, especially AKT2 signaling, plays a neuroprotective role in light-induced retinal degeneration. Rods in AKT2 knock-out mice exhibited significantly greater sensitivity to stress-induced cell death than rods in heterozygous or wild-type mice. These findings suggest that AKT, as a regulator of the mTOR signal, can also serve as a therapeutic target when treating retinal degenerative diseases [80,81]. In a rat model of NMDA (N-methyl-D-aspartate)-induced retinal neurotoxicity, intravitreal injection of NMDA caused a marked increase in leukocytes and microglia and significant capillary degeneration. However, the NMDA-induced changes were significantly reduced by simultaneous injection of rapamycin. These findings indicate that mTOR inhibition prevents inflammation and capillary degeneration during retinal injury [82].
Inhibition of mTOR maintains cellular proteostasis and attenuates ER stress by reducing misfolded protein synthesis and augmenting autophagy to remove misfolded proteins arising from gene mutations. The mTOR pathway plays an exquisitely complex role in the regulation of retinal protein biosynthesis and degradation, as well as in ER stress-induced apoptosis (Figure 3).

Figure 3. Inhibition of the mTOR signal suppresses ER stress-induced cell death. Inhibition of mTOR maintains cellular proteostasis and attenuates ER stress by reducing misfolded protein synthesis. In addition, inhibition of mTOR can augment autophagy to remove misfolded proteins generated from mutant genes. mTOR inhibition can suppress ER stress-induced apoptosis by regulating retinal protein biosynthesis and degradation.

Conclusions
In summary, UPR and ER stress are critical factors that contribute to retinal degeneration, and inhibition of mTOR maintains cellular proteostasis and attenuates ER stress by reducing misfolded protein synthesis and augmenting autophagy to remove misfolded proteins caused by gene mutations [83]. Further studies are needed to investigate the detailed regulatory network, such as the sensitive surveillance mechanisms of mTORC1 and ER stress, and ER-related apoptosis in retinal neurons. Although the exact mechanism(s) underlying rapamycin's neuroprotective effects are unclear, it is tempting to speculate that reduction of misfolded protein synthesis and induction of autophagy help prevent the accumulation of abnormal proteins seen in retinal degenerative disorders and aid in cell survival in the setting of gene mutation injury. Considering that the field of combined mTOR/UPR research is new, significant progress is likely still ahead.

Conflicts of Interest: The authors declare no conflict of interest.
Phylogenetic Relationships of Sucrose Transporters (SUTs) in Plants and Genome-wide Characterization of SUT Genes in Orchidaceae

Background: Sucrose is the primary form of photosynthetically produced carbohydrates transported long distance in many plant species, which significantly affects plant growth, development and physiology. Sucrose transporters (SUTs or SUCs) are a group of membrane proteins that play vital roles in mediating sucrose allocation within cells and at the whole-plant level.

Results: In this study, we investigated the relationships of SUTs in 24 representative plant species and performed a comprehensive analysis of SUT genes in three sequenced Orchidaceae species, Dendrobium officinale, Phalaenopsis equestris, and Apostasia shenzhenica. All the SUTs from the 24 plants were classified into three groups and five subgroups (subgroups A, B1, B2.1, B2.2, and C) based on their evolutionary relationships. A total of 22 SUT genes were identified in the Orchidaceae species, among which D. officinale had 8 genes (DenSUT01-08), P. equestris had 8 genes (PeqSUT01-08) and A. shenzhenica had 6 genes (ApoSUT01-06). Of the 22 Orchidaceae SUTs, subgroups A and B2.2 each contain three genes and subgroup C contains four, whereas the SUT genes were significantly expanded in the monocot-specific subgroup B2.1, which contained 12 genes. To shed light on sucrose partitioning and the functions of sucrose transporters in Orchidaceae species, we analysed water-soluble sugar content and performed RNA sequencing of different tissues of D. officinale, including leaves, stems, flowers and roots. The results showed that although the total content of water-soluble polysaccharides was highest in the stems of D. officinale, the sucrose content was highest in flowers. Moreover, gene expression analysis showed that most of the DenSUTs were expressed in flowers, among which DenSUT01, DenSUT08 and DenSUT06 had significantly high expression levels.

Conclusions: These results indicated that stems are used as the main storage sinks for photosynthetically produced sugar in D. officinale, and that the DenSUTs mainly function in the cellular machinery and development of floral organs. Our findings provide valuable information on sucrose partitioning and on the evolution and functions of SUT genes in Orchidaceae and other species.

Background

Photoassimilated carbohydrates are produced by autotrophic source tissues such as leaves and moved to heterotrophic sink tissues such as roots, stems, flowers and seeds. Sucrose is the major transport form of photosynthetically produced sugar in many plant species due to its non-reducing nature and insensitivity to degradation [1]. Long-distance sucrose transport along the phloem sap requires crossing a series of membranes. Sucrose transporters (SUTs or SUCs) play vital roles in transmembrane transport during phloem loading and unloading, as well as in sucrose allocation within plants and in sucrose exchange with pathogens and beneficial symbionts [2]. Based on the genomes of grasses, the SUT genes were originally classified into five proposed groups, SUT1-SUT5 [2,19,20]. The SUT1 clade is dicot specific, with members expressed in the plasma membrane of sieve elements or companion cells [7,21,22]. SUT2 and SUT4 encompass both dicot and monocot plants, whereas SUT3 and SUT5 are both monocot specific. The SUT2 transporters are mainly expressed in the plasma membrane of SEs and found in vegetative sink organs [23,24]. Members of the SUT4 clade are identified in both the plasma membrane and the vacuole [25,26].
Recent studies have divided the SUTs into two subfamilies (Ancient Group 1 and Ancient Group 2) and three types (Type I, Type II and Type III) [18,27]. SUT family genes play essential roles in phloem loading and unloading, pollen development, fruit ripening, ethylene biosynthesis, and seed development and germination in many plants [10,12,26,28,29]. Besides, the SUT genes are also involved in various physiological processes and in sucrose exchange between plants and symbionts, pathogens and fungi [2,30,31]. For example, in Arabidopsis, AtSUC5 is predominantly expressed in seeds, whereas AtSUC1 and the mutant atsuc9 are both expressed in floral organs and facilitate anthocyanin accumulation and floral transition [28,32]. The rice OsSUT2 is expressed in seeds and is involved in the germination of embryos [33,34]. The activity and expression of sucrose transporters are regulated by genetic, molecular and physiological factors. The family Orchidaceae is one of the largest families in angiosperms, with over 25,000 species and 880 genera, representing ~10% of the flowering plants [36]. Many of them are economically important for their unmatched ornamental and medicinal value. Moreover, the orchids are model systems for elucidating floral evolution in angiosperms and symbiotic activities between plants and fungi [35,37]. To date, the genomes of three Orchidaceae species, Dendrobium officinale, Phalaenopsis equestris, and Apostasia shenzhenica, have been sequenced and published, which has greatly promoted the genetics and genomics of orchids [37-40]. However, the roles of sucrose transporters in orchids are still unknown. In the present study, we performed genome-wide identification and characterization of the SUT gene families in the three sequenced Orchidaceae species. Transcriptome sequencing and water-soluble sugar content analysis were also conducted in D. officinale. Our findings shed light on the evolution, expression, and functions of SUT genes in Orchidaceae.

Results And Discussion

Genome-wide identification of SUT genes in Orchidaceae species

The SUTs are prevalent in plants and play fundamental roles in plant growth, development and stress tolerance [19,41,42]. To understand the potential roles of SUTs in orchids, the three sequenced Orchidaceae species, D. officinale, P. equestris, and A. shenzhenica, were used for genome-wide identification and characterization of SUT genes. The HMM profile of the SUT protein was used as a query to perform an HMMER search against the genome assemblies of the three species. Bioinformatics analysis identified a total of 22 SUTs from the three species, which were designated 'DenSUT' for D. officinale, 'PeqSUT' for P. equestris, and 'ApoSUT' for A. shenzhenica, with a serial number (Table 1, Table S1). Among them, D. officinale had 8 genes (DenSUT01-08), P. equestris had 8 genes (PeqSUT01-08) and A. shenzhenica had 6 genes (ApoSUT01-06). The results agree with previous reports that plant sucrose transporters are encoded by relatively small gene families. According to the phylogenetic tree, the 22 SUT genes from the three orchids were classified into four subgroups: subgroups A, B2.1, B2.2 and C (Fig. 1, Table 1). Subgroup A included three genes, DenSUT01, PeqSUT01 and ApoSUT01. There were four genes in subgroup C (DenSUT03, DenSUT04, PeqSUT08 and ApoSUT02) and three genes in subgroup B2.2 (DenSUT02, PeqSUT03 and ApoSUT03). However, the SUT genes were significantly expanded in the monocot-specific subgroup B2.1, which comprised 12 genes.
Phylogenetically, the sucrose transporters of D. officinale were closer to those of P. equestris than to those of A. shenzhenica. The molecular weights of the SUT proteins ranged from 51.22 to 106.90 kD, with pI values ranging between 4.95 and 10.12. Most of the encoded proteins were ~500 aa or ~600 aa in length with 11-13 introns and 12-14 exons, whereas several genes had only 4-5 introns/exons. Previous studies indicate that plant sucrose transporters usually consist of 500-600 aa, with molecular weights of 55-60 kD [15,43], which is consistent with the findings of the present study. Detailed information on the SUT genes, including name, encoded protein, CDS length, molecular weight and pI value, is shown in Table 1.

Phylogenetic relationship of the SUT proteins in major plant species

To provide insight into the evolution of SUT gene families, we performed phylogenetic analysis using 24 representative plant species, including green algae, mosses, lycophytes, gymnosperms, monocots and dicots. For detailed information on the SUTs, see Methods. The SUT domain sequences and the neighbour-joining method with 1000 bootstraps were used to construct the phylogenetic tree. In this study, the SUT genes of several eukaryotic chlorophytes clustered in a special branch, which was defined as the outgroup. The SUTs from the 24 species were classified into three groups and five subgroups: subgroups A, B1, B2.1, B2.2, and C (Fig. 1). Group A contained at least one member from mosses, lycophytes and angiosperms, including both monocots and dicots. Group B was the largest group and is divided into three subgroups: subgroup B1 was made up of SUT genes from exclusively dicot species, corresponding to the SUT1 clade of Lalonde & Frommer [20]; subgroup B2.2 contained both monocot and dicot species that are also present in the SUT2 group reported by Lalonde & Frommer [20]; whereas subgroup B2.1 was a monocot-specific expansion clade containing members from SUT3 and SUT5 reported by Kühn & Grof [2]. Group C contained mosses, lycophytes and angiosperms, including both monocots and dicots, corresponding to the SUT4 clade [20]. According to previous studies, the SUT1 and SUT2 proteins mainly play roles in phloem loading and unloading, sucrose transport to sink cells, and sucrose exchange with microbes [2,30,31,44,45]. SUT4 proteins, in turn, are involved in various physiological processes such as circadian rhythms and responses to dehydration and photosynthesis [46,47]. In recent studies, the SUTs were classified into two subfamilies (Ancient Group 1 and Ancient Group 2) and three types (Type I, Type II and Type III) [18,27]. The Type I clade is dicot specific and corresponds to the SUT1 group [2], and the Type III clade contains both monocots and dicots and corresponds to the SUT4 group [20]. Type IIA is composed of monocot and dicot species that are also reported in the SUT2 group by Lalonde & Frommer [20], whereas the monocot-specific Type IIB contains members from SUT3 and SUT5 reported by Kühn & Grof [2]. Sucrose transporters have been identified in early terrestrial plants, including both lycophytes and mosses, with 6 SUTs in Selaginella lepidophylla and 7 SUTs in Physcomitrella patens; however, none was identified in the green alga Chlamydomonas reinhardtii [18]. There were 6-10 SUT genes in monocot seed crops such as rice (6 genes), maize (10 genes) and sorghum (8 genes). In contrast, in another monocot species, Ananas comosus, only 3 SUTs were identified. For most dicot species, 4-9 SUTs were identified.
These results revealed that the number of sucrose transporters remained largely stable during the evolution from lower plants to terrestrial plants. This conclusion is consistent with previous studies on SUT gene identification and evolution [18,20,27]. However, the SUTs were expanded in several species such as Triticum aestivum (18 genes) and Glycine max (14 genes), which may be the result of whole-genome polyploidization. The SUTs of some monocot species were expanded in subgroup B2.1; for example, there were 5 ZmaSUTs in subgroup B2.1, whereas 3 ZmaSUTs were identified in subgroup A, and only one was identified in each of subgroups B2.2 and C. Likewise, the SUTs from dicot species were expanded in subgroup B1, such as the GmaSUTs, AtSUTs and DcaSUTs. The characean alga Chlorokybus atmophyticus contains one SUT homolog which is basal to all the streptophyte SUTs [18]. We also identified one SUT (VcaSUT01) in the chlorophyte Volvox carteri. Thereby, the origin of the sucrose transporters predates the divergence between green algae and the ancestors of terrestrial plants.

Conserved motif analyses of the SUT genes

The diversity of motif compositions in the sucrose transporters of the Orchidaceae species was assessed using the MEME programme; a total of 10 conserved motifs were identified. The distribution of these 10 motifs in the SUT proteins is shown in Figure 2. Motif2 was the most conserved SUT domain motif and was identified in all of the SUT proteins except PeqSUT08 and DenSUT06. Besides, motif10 was observed in 17 SUT proteins, whereas it was absent from PeqSUT08, ApoSUT05, DenSUT06, DenSUT08, and PeqSUT02. All three members of group A contained the same four motifs: motif10, motif2, motif5 and motif9. Moreover, group B members shared the same motif, motif5, except for DenSUT07; likewise, motif4 was shared by all group B SUTs except for ApoSUT04 (Fig. 2). Among the 12 SUTs in subgroup B2.1, three motifs were shared by all members, i.e. motif2, motif3, and motif5. Eight sucrose transporters had all 10 motifs: five from P. equestris (PeqSUT03, PeqSUT04, PeqSUT05, PeqSUT06 and PeqSUT07) and two from A. shenzhenica (ApoSUT02 and ApoSUT06), whereas D. officinale had only one (DenSUT05). The sucrose transporters in each subgroup shared several unique motifs, indicating that the SUT proteins within the same subgroup may have certain functional similarities. In addition, the motif distribution of the SUTs suggested that those genes were largely conserved during evolution.

Water-soluble sugar content in D. officinale

Photosynthetically produced sugar is not just a resource of carbon skeletons but also an energy vector and signaling molecule, which has major impacts on plant growth, development and physiology [48,49]. After synthesis in the mesophyll cells of leaves, sucrose needs to be loaded into the phloem parenchyma cells or apoplasm of mesophyll cells, then transported in specialized networks [i.e. sieve element/companion cell complexes (SE/CCC)], and finally unloaded to distal sink organs [2,30,49]. Unlike monocot crops such as maize, rice, and wheat that use seeds as the main storage sinks, the endosperms of most orchid seeds are significantly degenerated. As a result, Orchidaceae plants are highly dependent on symbiotic fungi to complete their life cycle, especially at the stage of seed germination and seedling growth, due to nutrient deficiency [50-52].
To shed light on sucrose partitioning and the functions of sucrose transporters in Orchidaceae species, we analysed water-soluble sugar content in different tissues of D. officinale, including leaves, stems, flowers and roots, using the GC-MS/MS method. The results showed that the content of water-soluble polysaccharides varies significantly among different tissues (Fig. 3). The amount of total water-soluble polysaccharides was highest in the stems of D. officinale, with approximately 116.17 mg/g, followed by leaves with approximately 113.23 mg/g; flowers had approximately 88.08 mg/g, whereas roots had a significantly lower level of water-soluble polysaccharides, only ~26.66 mg/g (Fig. 3a). This indicated that stems were the major sink organs for sugar storage in D. officinale. Because D. officinale is an epiphytic plant that usually experiences drought stress in its natural habitat [53,54], the high amount of sugar in stems may help to maintain osmotic pressure and improve drought tolerance. The content of sucrose also varies greatly among different tissues. Nonetheless, sucrose content was highest in flowers, approximately 28.1 mg/g, followed by leaves (~18.13 mg/g), which are the major source tissues for photosynthetically assimilated sucrose. The amount of sucrose in stems was ~13.77 mg/g, and that of roots was again the lowest, containing only ~7.82 mg/g (Fig. 3b). Previous studies show that developing pollen grains are strong sink tissues, which require sucrose to provide energy for maturation, germination and growth [55,56]. These results showed that although total sugar content was highest in the stems, the photoassimilated sucrose was mainly transported to support the growth and physiology of floral organs in D. officinale.

Expression patterns of the SUT genes in different tissues of D. officinale

Sucrose transport systems play vital roles in carbon partitioning, plant development, inter-/intracellular communication and environmental adaptation. The SUT genes are not only involved in sucrose transport, but also play essential roles in pollen germination, fruit ripening, and ethylene biosynthesis in many species [10,28,29,47]. To further understand the roles of the SUT genes in orchids, we investigated the expression profiles of DenSUT genes in D. officinale. RNA sequencing (RNA-seq) was performed using different tissues of D. officinale, including leaves, stems, flowers and roots. The FPKM expression of DenSUT genes in the four tissues is provided in Table S2. The expression levels of different DenSUT genes in the four D. officinale tissues are represented in different colours in Fig. 4. The Arabidopsis AtSUC1 is expressed in germinating pollen, where it is translationally regulated and facilitates anthocyanin accumulation, while the mutant atsuc9 promotes floral transition by manipulating sucrose uptake [28,32]. AtSUC1 is also expressed in the parenchymatic cells of the style and anthers, where it modulates water availability around these regions, guiding the pollen tube towards the ovule and enabling anther opening [55]. Recent studies have also described the roles of NtSUT3 and LeSUT2 in sucrose uptake during pollen development and pollen tube growth [9,56]. In the present study, RNA-seq showed that most of the DenSUTs were expressed in flowers, among which three genes, DenSUT01, DenSUT08, and DenSUT06, had significantly high expression levels. In agreement with the expression profiles, sucrose accumulation also predominantly occurs in the flowers, with approximately 28.1 mg/g.
These results indicated that these genes function mainly in the cellular machinery and development of floral organs. Phylogenetically, DenSUT01, DenSUT08, and DenSUT06 were classified into subgroup A and the monocot-specific expanded subgroup B2.1, respectively. In leaves, sucrose is mainly synthesized in the mesophyll cytoplasm, and perhaps also in organelles such as vacuoles and plastids [57]. Once released into the leaf apoplasm, sucrose is actively loaded into the SE-CCCs via a sucrose/H+ symport mechanism in apoplasmic loading species [58]. The analysis of transgenic and mutant plants indicates that dicot members of the SUT1 clade and monocot members of the SUT3 clade are essential for apoplasmic loading of the SE-CCC [59-61]. In maize, ZmSUT1 plays an important role in efficient phloem loading [61]. The inhibition of sucrose transporters results in starch accumulation in the epidermal cells [62]. The sucrose transporter SUC2 is crucial for sucrose allocation; its null mutant in Arabidopsis led to compromised plant health [63]. After loading into the SE-CCC, energy-driven reloading is required along the whole path of long-distance sucrose transport from source to sink. In D. officinale, sucrose content was ~18.13 mg/g in the leaves, ranking second among the four tissues. However, only one gene, DenSUT02, was significantly expressed in the leaves; it was also expressed in flowers and roots. We deduce that DenSUT02 may play a potential role in phloem loading in D. officinale. Nonetheless, other sugar transporters such as SWEETs and MSTs are also likely involved in sucrose transport. In well-studied grass stems, immature internodes are considered utilization sinks, whereas the fully elongated mature internodes are storage sinks where sucrose accumulates [64-66]. The plasma membrane-localized sucrose transporters are promising candidates for sucrose uptake in stems. For example, all of the SbSUTs in sorghum are active in sucrose uptake, although the expression sites of different SUTs in internodes may vary [19,66,67]. The SbSUTs are localized to sieve elements in both developing and mature sorghum stems [68], which is consistent with the localization of the wheat TaSUT1 and rice OsSUT1 proteins in SE-CCCs in mature stems [69,70]. In the present study, three genes, DenSUT03, DenSUT05 and DenSUT07, were expressed in Dendrobium stems. DenSUT05 and DenSUT07 were both moderately expressed in stems and flowers, whereas DenSUT03 was slightly expressed in the stems and significantly expressed in the roots. In addition, DenSUT01 and DenSUT08 were also slightly expressed in the roots. The expression of the DenSUTs was also analysed in flower, stem and leaf tissues of D. officinale using qRT-PCR (Fig. 5). The results were largely consistent with those from the RNA-seq analysis. However, the functions of the SUT genes in D. officinale and the other two Orchidaceae species remain to be verified.

Conclusions

In conclusion, we performed a comprehensive study of the phylogenetic relationships of the SUTs in 24 plant species and a genome-wide characterization of the SUT genes in three Orchidaceae species. The SUTs were classified into three groups and five subgroups. We identified a total of 22 SUT genes in the three orchids: 8 DenSUTs, 8 PeqSUTs, and 6 ApoSUTs. The functions of the SUTs in Dendrobium were analysed. The results showed that most of the DenSUTs had high expression levels in flowers. Although the total content of water-soluble sugars was highest in the stems, the sucrose content was highest in flowers.
We propose that stems serve as the major sinks for sugar storage in D. officinale, and that the DenSUTs function mainly in floral organs. Our findings provide important insights into the evolution patterns of SUTs in plants and advance our knowledge of sucrose partitioning and the potential functions of SUT genes in Orchidaceae species.

Methods

ClustalW [71] was used for sequence alignment and a Hidden Markov Model (HMM) [72] was constructed for the SUT proteins. The HMMER program was used to search for SUT proteins among all D. officinale, P. equestris, and A. shenzhenica proteins with a cutoff E-value of 1e−4, using the HMM as a query. If two SUT genes were located within 10 kb of each other on the genome, they were considered homologs generated by tandem duplication; otherwise, they were considered homologs generated by whole-genome duplication. After a comprehensive check, candidate proteins that only contained fragmentary SUT domains were eliminated. The ProtParam website (http://web.expasy.org/protparam/) was used to calculate the molecular weight of each protein, and the theoretical isoelectric point (pI) of each protein was also predicted.

Gene structure and motif analyses

The Gene Structure Display Server tool (http://gsds.cbi.pku.edu.cn/, v2.0) was used to analyse the gene structure of all the SUTs identified in D. officinale, P. equestris, and A. shenzhenica. MEME software (http://meme.nbcr.net/meme/, v4.11.0) was used to search for motifs in the SUT proteins with a motif width from 10 to 100, the maximum number of motifs set at 20, and only motifs present in at least three SUT proteins retained as true motifs.

Determination of total polysaccharide content

(Hangzhou, China). Four D. officinale tissues, including roots, stems, leaves and flowers (three replicates for each tissue), were collected and dried in an oven at 105 °C until constant weight. The 12 samples were independently ground into fine powders with a mixer mill (MM 400, Retsch). The total polysaccharide was extracted using the water extraction and alcohol precipitation method, and the content of total polysaccharide was measured using the phenol-sulfuric acid method. Total polysaccharide extraction: about 0.05 g of each sample was weighed, added to 1 mL water, and fully homogenized. The sample was then extracted in a water bath at 100 °C for 2 h, centrifuged at 10,000 g for 10 min after cooling, and the supernatant was reserved. 0.2 mL of supernatant was collected and 0.8 mL anhydrous ethanol was slowly added. After mixing well, the mixture was stored overnight at 4 °C. After centrifugation at 10,000 g for 10 min, the supernatant was discarded, 1 mL water was added to the precipitate, and the pellet was fully mixed and dissolved. Calculation of total polysaccharide content: the microplate reader was preheated for more than 30 min and the wavelength adjusted to 490 nm. 200 µL of supernatant was taken, and 100 µL reagent and 0.5 mL concentrated sulfuric acid were added. After mixing well, the mixture was incubated in 90 °C water for 20 min. 200 µL of the mixture was added to the microplate and the absorbance value A was determined at 490 nm. Glucose was used as the reference. The regression equation under standard conditions was y = 7.981x − 0.0037, R² = 0.9973, where x represents glucose content (mg/mL) and y the absorbance value. Total polysaccharide (µg/g dry weight) = (A + 0.0037) ÷ 7.981 × V1 ÷ V2 × V3 ÷ W × 1000 = 626.49 × (A + 0.0037) ÷ W.
V1: the redissolved volume after alcohol precipitation, 1 mL; V2: the volume used for alcohol precipitation, 0.2 mL; V3: the volume of water added during extraction, 1 mL; W: sample weight, g; 1000: the conversion coefficient from mg to µg.

Determination of sucrose content

After drying, the 12 samples were independently ground into fine powders with a mixer mill (MM 400, Retsch). 20 mg of powder was diluted to 500 µL with methanol:isopropanol:water (3:3:2, V/V/V). The extracts were centrifuged at 14,000 rpm at 4 °C for 3 min. 50 µL of the supernatant was mixed with internal standard (ZZBIO, Shanghai ZZBIO Co., Ltd.), evaporated under a nitrogen gas stream, and then transferred to the lyophilizer for freeze-drying. The residue was used for further derivatization. The sample of small-molecule carbohydrates was mixed with 100 µL of a solution of methoxyamine hydrochloride in pyridine (15 mg/mL). The mixture was incubated at 37 °C for 2 h. Then 100 µL of BSTFA was added to the mixture and kept at 37 °C for 30 min after vortex-mixing. The mixture was diluted and analyzed by GC-MS/MS according to the descriptions by Gómez-González et al. [73] and Sun et al. [74], with modifications. An Agilent 7890B gas chromatograph coupled to a 7000D mass spectrometer with a DB-5MS column (30 m length × 0.25 mm i.d. × 0.25 µm film thickness, J&W Scientific, USA) was employed for GC-MS/MS analysis of sugars. Helium was used as the carrier gas at a flow rate of 1 mL/min. Injections were made in split mode with a ratio of 3:1 and an injection volume of 3 µL. The oven temperature was set at 170 °C for 2 min, then raised to 240 °C at 10 °C/min, to 280 °C at 5 °C/min, and to 310 °C at 25 °C/min and held for 4 min. All samples were analyzed in selective ion monitoring mode. The injector inlet and transfer line temperatures were 250 °C and 240 °C, respectively.

RNA Extraction and qRT-PCR Analysis

Total RNA was extracted from three D. officinale tissues, i.e. flowers, stems and leaves, using TRIzol reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. DNase I was used to remove potential contaminating genomic DNA. The quality of the total RNA was checked with 1% denaturing agarose gels and a NanoDrop 2000 spectrophotometer (ThermoFisher Scientific, Beijing, China). First-strand cDNA synthesis was performed with PrimeScript reverse transcriptase (TaKaRa Biotechnology, Dalian, China), using the RNA as template. Gene-specific primers were designed with the Primer Premier 5.0 program (Table S3). The DnActin (comp205612_c0) gene was used as an internal standard for normalizing the gene expression data [75]. The expression levels of the DenSUTs were analyzed in a qRT-PCR assay, performed with the SYBR Green qPCR kit (TaKaRa Biotechnology, Dalian, China) and the Stratagene Mx3000P thermocycler (Agilent, Santa Clara, CA, USA). The PCR program was as follows: 95 °C for 10 min, then 40 cycles of 95 °C for 15 s and 60 °C for 60 s. The relative SUT gene expression levels were calculated with the 2^−ΔΔCt method [76]. The analysis included three biological replicates, each with three technical replicates. The expression levels in different tissues were visualized in a histogram using the average values.

Statistical Analyses

Statistical analysis was performed to calculate the average values and standard errors for the three replicates. SPSS software (v. 16.0) was used to determine the significant differences in sugar content among different tissues using a one-way ANOVA procedure and post hoc analysis.
P < 0.05 indicates a significant difference and is represented by an asterisk (*); P < 0.01 indicates a very significant difference and is represented by two asterisks (**).

Declarations

Authors' Contributions

YZW and CBS conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. YC analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. QZW and HJW analyzed the data, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, and approved the final draft. CBS conceived the experiments, authored or reviewed drafts of the paper, and approved the final draft.

Availability of data and materials

The following information was supplied regarding data availability: the raw data of the RNA-seq experiment are deposited in the Sequence Read Archive (NCBI): SUB8609885. All data and material used in this study are available from the corresponding author upon reasonable request.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.
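The two numerical recipes given in the Methods (the polysaccharide standard-curve conversion and the 2^−ΔΔCt expression calculation) can be made concrete in a few lines. The following is a minimal Python sketch, not part of the original study; the sample inputs are invented for illustration and only the constants stated in the Methods are used.

```python
# Minimal sketch of the two calculations described in the Methods above.
# Not from the original study; the example inputs below are invented.

def total_polysaccharide_ug_per_g(A, W, V1=1.0, V2=0.2, V3=1.0):
    """Total polysaccharide (ug/g dry weight) from absorbance A at 490 nm.

    Inverts the glucose standard curve y = 7.981x - 0.0037 and applies the
    volume factors: (A + 0.0037)/7.981 * V1/V2 * V3/W * 1000.
    With the default volumes this reduces to 626.49 * (A + 0.0037) / W.
    """
    return (A + 0.0037) / 7.981 * (V1 / V2) * (V3 / W) * 1000.0

def relative_expression(ct_target, ct_actin, ct_target_cal, ct_actin_cal):
    """Relative expression by the 2^-ddCt method [76].

    dCt = Ct(target) - Ct(DnActin) within each sample; ddCt is taken against
    a calibrator sample (e.g. one tissue chosen as the baseline).
    """
    ddct = (ct_target - ct_actin) - (ct_target_cal - ct_actin_cal)
    return 2.0 ** (-ddct)

# Hypothetical example: absorbance 0.18 for a 0.05 g sample, and a gene whose
# dCt is 2 cycles smaller (i.e. higher expression) than in the calibrator.
print(total_polysaccharide_ug_per_g(A=0.18, W=0.05))  # ~2302 ug/g
print(relative_expression(22.0, 18.0, 24.0, 18.0))    # 4.0 = 2**2
```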
2020-12-10T09:06:53.370Z
2020-12-07T00:00:00.000
{ "year": 2020, "sha1": "8e1a18b3a24281abdff8b5d5931d01d3adfd326a", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-120129/v1.pdf?c=1631878051000", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "fcb7ef669ebee699e042421250017bffba0efe37", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
235727662
pes2o/s2orc
v3-fos-license
Site-specific symmetry sensitivity of angle-resolved photoemission spectroscopy in layered palladium diselenide

Two-dimensional (2D) materials with puckered layer morphology are promising candidates for next-generation optoelectronic devices owing to their anisotropic response to external perturbations and wide band gap tunability with the number of layers. Among them, PdSe2 is an emerging 2D transition-metal dichalcogenide with a band gap ranging from 1.3 eV in the monolayer to a predicted semimetallic behavior in the bulk. Here we use angle-resolved photoemission spectroscopy to explore the electronic band structure of PdSe2 with energy and momentum resolution. Our measurements reveal the semiconducting nature of the bulk. Furthermore, constant binding-energy maps of reciprocal space display a remarkable site-specific sensitivity to the atomic arrangement and its symmetry. Supported by density functional theory calculations, we ascribe this effect to the inherent orbital character of the electronic band structure. These results not only provide a deeper understanding of the electronic configuration of PdSe2, but also establish additional capabilities of photoemission spectroscopy.

Transition metal dichalcogenides (TMDs) host highly attractive properties for fundamental studies of novel physical phenomena and for applications ranging from opto-electronics to sensing at the nanoscale [1,2]. Among all TMDs, those based on noble metals (Pd, Pt) have received less attention because of their high cost, until the recent discovery of a layer-controllable metal-to-semiconductor transition [3-6], which motivated their investigation in the last few years. Similar to the extensively investigated black phosphorus [7-10], with a band gap varying from 0.3 eV in the bulk to 1.5 eV in the monolayer [11], PdSe2 is characterized by an in-plane puckered structure resulting in an anisotropic response to external stimuli, such as light [12], electric field [13] and strain [14,15]. In addition, a linear dependence of the band gap on the number of layers has been observed [3,16]: the monolayer is predicted to have an indirect gap of 1.3 eV [17], which monotonically decreases to 0 eV as the thickness exceeds 40-50 layers, suggesting semimetallic behavior. However, unlike black phosphorus and the majority of TMDs, which share hexagonal crystal structures, the low-symmetry pentagonal atomic arrangement of PdSe2 gives rise to exotic thermoelectric, mechanical and optical properties [18-20]. Here, we investigate the electronic structure of PdSe2 for the first time by angle-resolved photoemission spectroscopy (ARPES). We clarify the semiconducting nature of the bulk by direct measurements of the electronic bands in reciprocal space. Furthermore, we reveal a previously unexplored sensitivity of photoemission to the site-specific crystal symmetry. In particular, constant binding-energy cuts of the surface-projected Brillouin zone (BZ) disclose the dominant chemical/orbital character of the metal atoms in the top-most valence band, while the effect of the chalcogen species becomes relevant at binding energies exceeding 1 eV. This finding is corroborated by plane-wave density functional theory (DFT) calculations employing the Perdew-Burke-Ernzerhof (PBE) functional. The crystallographic morphology of PdSe2 is sketched in Fig.1. Its stable configuration is orthorhombic with Pbca space group (#61) and experimental lattice parameters a = 0.575 nm, b = 0.587 nm and c = 0.77 nm [21], see Fig.1a.
Layers are normal to the c-axis. The top view (panel b) shows the characteristic pentagonal atomic arrangement of the monolayer, while the side view (panel c) reveals its puckered structure. Each PdSe2 layer is formed by three atomic planes: Pd atoms in the middle are covalently bound to four Se atoms located on the top and bottom sub-layers. In contrast with other TMDs, where the metal atom has a +4 oxidation state, PdSe2 adopts the +2 oxidation state [22]. This is achieved through the fourfold coordination of each Pd atom with Se. This result is in agreement with recent optical measurements [23,24], but diverges from calculations that predict semimetallic behaviour [3,17,25,26]. While it is known that DFT generally underestimates band gaps in semiconductors [27,28], the fact that VB and CB are well separated in reciprocal space preserves their individual electronic features, regardless of the computed value of the band gap. In particular, the occupied states (experimentally observed by ARPES) are reproduced by DFT with remarkable accuracy. More insight into the electronic structure of PdSe2 is gained by inspecting the ARPES isoenergetic maps. Fig.3a shows the photoemission spectral weight measured in reciprocal space. We will unfold this concept referring to Fig.3d-f. Panel d recalls the orthorhombic unit cell of PdSe2 and the corresponding BZ. It is known that ARPES measurements of solids probe the so-called surface-projected BZ (SBZ), based on energy and momentum conservation [29]. In panel d the SBZ is represented by the orange-shaded area. A closer inspection of the unit cell reveals that Pd atoms arrange on a face-centered orthorhombic (fco) lattice, as is evident in panel f: if chalcogen atoms were absent, the corresponding first BZ would be the one sketched on the right-hand side [30] (notice the similarity with the BZ of the standard face-centered cubic lattice) [31]. Panel e compares the SBZs of the orthorhombic (panel d) and the fco (panel f) cells with identical lattice parameters. It can easily be verified that the following relations hold among the wavevectors: Γ̄X̄ = π/a, Γ̄Ȳ = π/b, Γ̄C̄ = 2π/a, Γ̄D̄ = 2π/b. Returning to the photoemission map of Fig.3c, the blue dashed lines identify the SBZ of the fco unit cell and each elliptical shape is centered at the Γ̄ point of the face-centered lattice. Since Pd atoms arrange on an fco lattice, our ARPES analysis suggests that the top-most VB originates predominantly from Pd orbitals with little contribution from Se. In order to support this hypothesis, we have computed the orbital-projected k-DOS and wave functions at selected points of the band structure. Fig.4a reports the difference between the Pd 4d- and Se 4p-projected k-DOS; at some of the selected points the wave function is dominated by Se 4p orbitals [32], while at Γ (label 4) it is a combination of Pd 4d states. These results support our hypothesis that the same single orbital of Pd (i.e. 4d_z²) shapes the top-most VB. Recalling that Pd atoms form an fco lattice, this symmetry is retained also in reciprocal space, as revealed by our ARPES data (see Fig.3c). A simple tight binding approach leads to the same conclusion and is reported in the Methods section. At larger binding energies both Pd and Se orbitals contribute to the band structure, exhibiting the standard orthorhombic symmetry of Fig.3a. In conclusion, we have measured the electronic band structure of bulk PdSe2 by angle-resolved photoemission spectroscopy. Within the experimental accuracy, our data confirm its semiconducting nature with a minimum band gap of 50 meV (i.e.
the instrumental resolution), since no evidence of a conduction band crossing the Fermi level has been observed, while all electronic dispersive features below E_F are well reproduced by our DFT calculations. Furthermore, we have demonstrated a remarkable sensitivity of the ARPES technique to site-specific symmetries of the electronic structure. This finding can be pivotal in tuning the electronic properties of PdSe2-based heterostructures [33], analogous to the observed dependence of the gap on the band character of MoS2/graphene [34,35]. Moreover, we envisage that the chemical selectivity of ARPES allows a fine-tuning of the electronic properties. For example, chemical substitution of metal atoms [36] will give rise to specific changes in the VB related both to doping and to modifications of the surface symmetry, to which ARPES will be sensitive. We believe this implementation is not limited to PdSe2, but applies to a much wider class of compounds with complex crystal structures, and can help clarify the subtle interactions related to correlated electronic phases, such as metal-insulator transitions, charge density waves and superconductivity [37,38].

Competing interests

The authors declare no competing interests.

DFT. The electronic band structure of PdSe2 was computed with the Quantum Espresso package [41]. Exchange-correlation was treated using the Perdew-Burke-Ernzerhof functional revised for solids (PBEsol). Van der Waals interaction among the PdSe2 layers was included using the semiempirical Grimme DFT-D2 correction [42]. Atoms were allowed to relax until the residual forces were below 0.0026 eV/Å. A cutoff energy of 60 Ry and an 8 × 8 × 6 k-point mesh were used. The iso-surface rendering in Fig.4b was performed with the VESTA software [43].

Absence of CB evidence in ARPES data. Each map employs a logarithmic intensity scale ranging from 5% to 100% of the respective maximum. Even well above E_F we have detected no sign of electron pockets originating from the CB, as DFT calculations would predict. Although we are not able to determine the value of the band gap with static ARPES, our data uphold the semiconducting nature of bulk PdSe2.

Orbital character of the band structure (Fig.4a). Pd 4d and Se 4p electrons determine the valence and conduction states of PdSe2. It is instructive to "visualize" the specific orbital character, and in particular the Pd-Se duality, along high-symmetry lines of the BZ. Here, we employ a simple colour-coded approach: the k-resolved density of states projected on Pd 4d and Se 4p is shown in Fig.6a and Fig.6b, respectively. Taking the difference between the data of these two graphs we obtain Fig.4a, where blue and red colours encode positive (Pd) and negative (Se) values. As an example, the Pd and Se k-DOS (the latter is represented on the negative abscissa) and their difference at the Γ point are shown in Fig.6c. Notice, in particular, that the Pd-projected k-DOS at the Fermi level (i.e. the VB top) is approximately 3 times larger than that of Se, as claimed in the main manuscript.

Tight binding approach and matrix element effect. In a regular MX6 octahedral complex (O_h symmetry), the five outer d orbitals of the transition metal M arrange into the high-energy, doubly degenerate e_g and the low-energy, triply degenerate t_2g states [44], as sketched in Fig.7a.
A closer look at the crystal structure of PdSe2, shown in Fig.7b, reveals that each Pd is surrounded by four Se atoms belonging to the same monolayer and two apical atoms (in yellow) belonging to the nearest upper and lower layers. The six chalcogen atoms form an octahedron elongated along the c-axis (D_4h symmetry). This distortion lifts the degeneracy of the e_g states, resulting in the d_z² orbital being energetically more favorable than the d_x²−y² [44-46]. As we already pointed out, in PdSe2 the oxidation state of Pd is +2 and its electronic configuration is therefore 4d⁸: six of these electrons fill the t_2g states completely, while the remaining two occupy the d_z² orbital, leaving d_x²−y² empty. d_z² is therefore the highest occupied molecular orbital (HOMO), forging the top of the VB, while d_x²−y² represents the lowest unoccupied molecular orbital (LUMO), contributing to the bottom of the CB, in agreement with our own calculations (Fig.4b) and other recent work [47]. We can now elucidate the symmetry features of the top VB observed by photoemission using a simple 2D tight binding approach that involves only the HOMO. Metal atoms of a PdSe2 monolayer arrange on a rectangular lattice as shown in Fig.8. For simplicity, we first consider the fco case, where a tight binding approach is straightforward. Let |R⟩_A be the Wannier wave function at the lattice site R (i.e. the d_z² orbital). The overlap integral is γ = ⟨R_A|U|0_A⟩ (U is the periodic lattice potential), where R runs over the 4 nearest neighbors (+a/2, +a/2), (+a/2, −a/2), (−a/2, +a/2), (−a/2, −a/2), and the eigenvalue of the Hamiltonian H is

E(k) = E_A + γ Σ_n.n. e^(ik·R) = E_A + 4γ cos(k_x a/2) cos(k_y a/2).

Fig.8c shows the resulting VB dispersion along the x-axis (k_y = 0). If the orthorhombic cell is used, two Wannier wave functions |R⟩_A and |R⟩_B form the basis of the tight binding Hamiltonian. It can easily be verified that the hopping term between sites A and B is h = 4γ cos(k_x a/2) cos(k_y a/2), as in the previous fco case. Since sites A and B are equivalent, it also follows that E_A = E_B. Thus, the two eigenvalues are

E_±(k) = E_A ± |4γ cos(k_x a/2) cos(k_y a/2)| = E_A ± |h|.

Fig.8d depicts E_±(k) along the x-axis (k_y = 0). The corresponding eigenstates are (±h/|h|, 1), and the generic wave function at the lattice site R reads [48]:

|R⟩_± = ±(h/|h|) e^(ik·R_A) |R + R_A⟩ + e^(ik·R_B) |R + R_B⟩.

In the free-electron final-state approximation (here, the final state |k_f⟩ is a plane wave with wavevector k_f), the photoemission matrix element M is expressed as the Fourier component of the tight binding orbital |0⟩_± [48], i.e. M = ⟨k_f|0⟩_±, which, using momentum conservation (k = k_f), leads to the following photoemission intensity:

I_± ∝ |⟨k_f|0⟩_±|² = |⟨k_f|0_A⟩|² |1 ± h/|h||²,

which simplifies to I_± ∝ 2 |⟨k_f|0_A⟩|² (1 ± h/|h|), since |0_A⟩ = |0_B⟩. At this point we notice that h/|h| = sgn[cos(k_x a/2) cos(k_y a/2)]. Thus, I_± ∝ 1 ± sgn[cos(k_x a/2) cos(k_y a/2)]. Referring to Fig.8d, it is easily verified that the previous equation completely suppresses the photoemission intensity of one of the eigenvalues E_±(k), depending on the values of k_x and k_y. Fig.8e-f show the results: as expected, the fco band structure of Fig.8c and the experimental data of Fig.3c are recovered, since the use of two equivalent sites is redundant and the appropriate unit cell is the fco.
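The selection rule derived above is straightforward to verify numerically. Below is a minimal Python sketch, not from the paper, with hypothetical parameters E_A = 0 and γ = −1; it evaluates both tight binding descriptions and checks that the branch with non-vanishing intensity I_± ∝ 1 ± sgn[cos(k_x a/2) cos(k_y a/2)] always reproduces the single-band fco dispersion.

```python
# Minimal numerical check (not from the paper) of the tight binding model and
# the matrix-element suppression derived above. Hypothetical units: E_A = 0,
# gamma = -1, lattice constant a = 1.
import numpy as np

a = 1.0
E_A, gamma = 0.0, -1.0

def E_fco(kx, ky):
    # Single-band fco description: E(k) = E_A + 4*gamma*cos(kx*a/2)*cos(ky*a/2)
    return E_A + 4.0 * gamma * np.cos(kx * a / 2) * np.cos(ky * a / 2)

def bands_ortho(kx, ky):
    # Orthorhombic two-site description: E_+/-(k) = E_A +/- |h|
    h = 4.0 * gamma * np.cos(kx * a / 2) * np.cos(ky * a / 2)
    return E_A + abs(h), E_A - abs(h), h

for kx in np.linspace(0.0, 4.0 * np.pi / a, 9):   # scan along x (ky = 0)
    Ep, Em, h = bands_ortho(kx, 0.0)
    s = np.sign(h)
    # Photoemission intensities I_+/- ~ 1 +/- sgn(h): one branch is fully
    # suppressed, so only the fco dispersion carries spectral weight.
    Ip, Im = 1.0 + s, 1.0 - s
    visible = Ep if Ip > 0 else Em
    assert h == 0 or np.isclose(visible, E_fco(kx, 0.0))
    print(f"kx={kx:5.2f}  E+={Ep:5.2f} (I~{Ip:.0f})  E-={Em:5.2f} (I~{Im:.0f})")
```

As the assertion confirms, the intensity-weighted band always traces E_fco(k), consistent with the conclusion that the two-site description is redundant and the appropriate unit cell is the fco.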
2021-07-05T01:15:49.192Z
2021-07-02T00:00:00.000
{ "year": 2021, "sha1": "1b9a6b16264d7fcc8c1a2c4b274e0a04234f3360", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/2053-1583/ac255a", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "1b9a6b16264d7fcc8c1a2c4b274e0a04234f3360", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
130762728
pes2o/s2orc
v3-fos-license
WHAT IS SOCIAL RESILIENCE? LESSONS LEARNED AND WAYS FORWARD

Over the last decade, a growing body of literature has emerged which is concerned with the question of what form a promising concept of social resilience might take. In this article we argue that social resilience has the potential to be crafted into a coherent analytic framework that can build on scientific knowledge from the established concept of social vulnerability, and offer a fresh perspective on today's challenges of global change. Based on a critical review of recently published literature on the issue, we propose to define social resilience as being comprised of three dimensions: 1. Coping capacities – the ability of social actors to cope with and overcome all kinds of adversities; 2. Adaptive capacities – their ability to learn from past experiences and adjust themselves to future challenges in their everyday lives; 3. Transformative capacities – their ability to craft sets of institutions that foster individual welfare and sustainable societal robustness towards future crises. Viewed in this way, the search for ways to build social resilience – especially in the livelihoods of the poor and marginalized – is revealed to be not only a technical, but also a political issue.

Introduction

The notion of resilience has become increasingly prominent in the last decade or so within several academic disciplines and research fields, from biology and engineering to sustainability studies and research into natural hazards and development issues. A controversial discussion has gained momentum regarding the question of whether or not resilience is a valid concept for the study of society. Today, a growing body of literature has emerged which is concerned with defining what form a promising conceptualization of social resilience might take. These developments have been subjected to deep criticism from both natural and social scientists. The ecologists Brand and Jax (2007) have argued for constraining the application of the notion of resilience to ecosystems for reasons of conceptual clarity. As a "boundary object" (Star and Griesemer 1989; Star 2010), resilience might facilitate the exchange of thoughts across disciplinary borders, which is necessary in order to develop a better understanding of coupled social-ecological systems. However, employing too broad a definition for the sake of a shared vocabulary might then make the term too vague and unmanageable, which in turn might even hinder scientific progress (Brand and Jax 2007). From a social science perspective, the geographers Cannon and Müller-Mahn (2010, 623) have argued that the concept of resilience is "inadequate and even false when it is being uncritically transferred to social phenomena". Due to its empirical heritage rooted in ecosystem sciences, the concept is feared to lead to the "re-naturalization of society" (Lidskog 2001) and to the re-emergence of a simplistic natural determinism (Judkins et al. 2008). By advocating a positivistic, rationalistic and mechanistic way of thinking, it would disguise the essence of the issue: power relations (Cannon and Müller-Mahn 2010). As such, the concept bears the risk of "depoliticizing" social structures and unconsciously reinforcing the status quo of society by overlooking those mechanisms that put people at risk in the first place (Pelling and Manuel-Navarrete 2011). While acknowledging these critical voices, in our view the stated arguments are not sufficient to dismiss the concept as a whole.
Instead we argue - and this is the central proposition of this paper - that social resilience retains the potential to be crafted into a coherent analytic framework that, on the one hand, is able to incorporate scientific knowledge from the tried and tested concept of vulnerability and, on the other hand, is forward-looking and opens up a fresh perspective on today's challenges of global change. Our proposition rests on a critical review of recently published literature on "social resilience" 1), which was found by means of the two search engines "Google Scholar" and "Web of Knowledge". All in all, 68 relevant articles were identified that explicitly refer to the concept of "social resilience"; 13 were purely conceptual elaborations, while 55 presented empirical findings. These results were complemented with contributions that are, according to our knowledge and assessment, central to the discussion, even though the term "social resilience" was not explicitly mentioned. Given the vast number of contributions on resilience, we cannot claim comprehensiveness regarding all conceptual refinements and empirical applications. Our aim is rather to provide a systematic overview of the main strands in the development of the concept of social resilience and to propose a framework which can guide future research in the field. In doing so, we want to provide a compass to help researchers to navigate through the increasingly complex body of literature, which will enable them to build on existing knowledge in order to make further progress in this research field. To this end, we investigate the roots of the concept of resilience and outline its genealogy, which leads to the identification of three fundamental principles (section 2). Subsequently, we discuss the varied definitions of social resilience and provide a short summary of empirical studies that have applied the concept so far (section 3). We then identify key mechanisms for building social resilience (section 4) and discuss possible ways to advance the study of social actors' navigating of contemporary spaces of risk and resilience (section 5). The article ends with concluding remarks on the development of the concept of social resilience.

1) This contribution explicitly focuses on literature referring to the concept of social resilience and is not intended to give an overview of the resilience literature in general. The literature on different types of resilience has grown rapidly (e.g. urban resilience, organizational resilience, community resilience, regional resilience) and cannot be adequately covered and discussed in a single review.

What is resilience?

The concept of resilience has evolved stepwise from its initial emphasis on the general persistence of ecological system functions in a world that is subject to ongoing change, through an orientation towards coupled social-ecological systems and questions of the adaptation of humans in nature, to its most recent readjustment, in taking up the more critical question of social transformation in the face of global change. This particular genealogy - we suggest - is indicative of the underlying principles that constitute the resilience concept, i.e. persistability, adaptability, and transformability.

Resilience as persistability

Crawford Holling's (1973) article on "Resilience and Stability of Ecological Systems" is referred to as groundbreaking work in the study of resilience (e.g. Walker and Salt 2006).
By discussing examples such as spruce budworm outbreaks and their role in the boreal forests of Canada (Holling 1973; Holling 1986; Holling 1996), Holling made the case that ecosystems exhibit nonlinear dynamics. With his paper, he radically called into question former static, equilibrium-based models of ecosystems. Instead he proposed to approach them as complex, adaptive systems that retain cyclicity and exhibit a multitude of possible stable states, or "basins of attraction". With the notion of resilience, he addressed "the persistence of [ecological] systems and their ability to absorb change and disturbance and still maintain the same relationships between populations and state variables" (Holling 1973, 14). This (ecological) resilience was measured by the magnitude of disturbance that a system could tolerate and still persist (Carpenter et al. 2001). Such an understanding was fundamentally different from the meaning implied by the more established term "stability", which described the ability of a system to return to an equilibrium state after a temporary disturbance (Holling 1996) and was also related to the time required for the system to return to this equilibrium (Pimm 1984). In shifting from the logic of stability to that of resilience, the emphasis was placed on those characteristics that enabled the system to live with disturbance and instability and which promoted its inherent flexibility and strengths that would increase its chances of persistence. Due to its clear-cut focus on ecosystems, at this stage the concept of resilience remained widely unnoticed by social scientists.

Resilience as adaptability

In subsequent years, scholars of the Resilience Alliance, an international and interdisciplinary research network, further substantiated the idea of resilience by conducting empirical case studies on coupled social-ecological systems (Walker et al. 1981; Walker 1993; Carpenter et al. 1999; Carpenter et al. 2001; Walker et al. 2002). Empirical findings and conceptual considerations merged into the elaboration of the meta-theoretical model of the "adaptive cycle" (Holling 2001; Gunderson and Holling 2002; Berkes et al. 2003). The adaptive cycle is a heuristic model that portrays an endogenously driven four-phase cyclicity of complex systems. The general phases that these systems pass through are periods of 1) accumulation and growth, 2) stagnation, rigidity and lock-in, 3) sudden collapse, and 4) re-organization and renewal. With the notion of "panarchy", cross-scale dynamics and the interplay between nested adaptive cycles are addressed, in which the analyzed system is affected by both higher-ranked, slower cycles and subordinate, faster cycles. In this second phase of the concept's lifespan, (social-ecological) resilience was defined as the "capacity of a [social-ecological] system to absorb disturbance and re-organize while undergoing change so as to still retain essentially the same function, structure, identity and feedbacks" (Folke 2006, 259). This definition served the aim of integrating the two ideas of ecological resilience and the adaptive cycle. As such, social-ecological resilience was defined as the magnitude of change the system could undergo and still remain within the same stable state (cf. ecological resilience), and the system's degree of self-organization (Holling 2001), understood as its capacity to re-organize after perturbations in an emergent and path-dependent manner (cf. adaptive cycle).
In order to make the concept applicable for sustainability studies, the system's capacity for learning and adaptation was also included as a third factor (Berkes et al. 2003). With the notion of adaptation or adaptability, proponents of resilience thinking positioned themselves within the climate change discourse by raising the question of whether "humans in nature" might be able to combine their experience and knowledge to successfully adapt to global environmental change. Resilience emerged as a "boundary object" (Star and Griesemer 1989; Star 2010) positioned between two communities of practice - i.e. the natural and social sciences - and represented a means by which to allow for interdisciplinary collaboration and exchange.

Resilience as transformability

One of the fundamental ideas of resilience thinking was "that environmental problems cannot be addressed in isolation of the social context" (O'Brien et al. 2009, 5). As such, the concept of resilience could be seen as an invitation extended by natural scientists to social scientists to engage in integrative research under the banner and normative goal of sustainability. As a response to the critique of conservatism implicit in the reading of the concept of resilience as applied to social systems, which has been raised by critical social scientists (Pelling and Manuel-Navarrete 2011), most recently resilience proponents have updated their concept by adding the notion of transformation or transformability. As stated above, a system is seen to possess multiple potential stable states, or basins of attraction, which together constitute its "stability landscape" (Gallopín 2006, 298). In being exposed to a specific shock or stress, or through changes in internal structures and feedback loops, the system might move from one basin into another, and thus exhibit changes in its functionality. The notion of transformability, then, addresses a system's capacity to transform the stability landscape and to create new system pathways when ecological, economic or social structures make the existing system untenable (Walker et al. 2004; Folke et al. 2010). The word "untenable" (Walker et al. 2004, 1) unmistakably addresses those issues that have so far been at the heart of the development discourse, i.e. equality, justice and human rights. This new focus on transformability can be said to have heralded the third phase of the concept's lifespan. Against this background, the genealogy of the concept of resilience can be summarized as having evolved stepwise from its initial focus on the persistability of ecological system functions, through an emphasis on the adaptability of coupled social-ecological systems, to its most recent reorientation towards addressing the transformability of society in the face of global change. By taking these three genealogical steps as highlighting the underlying principles that constitute the concept, resilience can be defined in its most general sense as a system's capacity to persist in its current state of functioning while facing disturbance and change, to adapt to future challenges, and to transform in ways that enhance its functioning. How can this concept be transferred to the social realm?

What is social resilience?

All definitions of social resilience concern social entities - be they individuals, organizations or communities - and their abilities or capacities to tolerate, absorb, cope with and adjust to environmental and social threats of various kinds. As Obrist et al.
(2010a, 289) pointed out, the entry point for empirical studies on social resilience is the question: "Resilience to what? What is the threat or risk we examine?" Threats are usually assumed to originate externally with regard to social units (e.g. the impact of rising prices on household expenditure), but they might also stem from internal dynamics (e.g. the impact of diseases on household income) or from interaction between the two (Gallopín 2006, 295). Turner et al. (2003, 8075) differentiate between stresses, which are characterized by continuous or slowly increasing threats (e.g. soil degradation), and perturbations, which refer to rapid-onset hazards (e.g. a hurricane) to which social units are exposed. They emphasize that social as well as ecological events and dynamics can be considered as threats, and that social units are usually exposed to multiple stressors (see also Leichenko and O'Brien 2008). The reviewed empirical case studies on social resilience address a wide range of threats. While some studies remain relatively broad and unspecific (Cinner et al. 2009), most other studies focus on specific stressors, which can be broadly grouped into three categories: 1. The first is centered on natural hazards and disasters and comprises studies on droughts. 2. A second group of papers addresses more long-term stress associated with natural resource management, resource scarcity and environmental variability. Case studies here focus on issues such as mangrove forest conversion (Adger 2000), maritime resource conservation (Marshall et al. 2009), desertification (Bradley and Grainger 2004), and declining water quality (Gooch et al. 2012). All these studies have in common the fact that they use social resilience as their guiding concept. How do different authors define social resilience? The review shows that the emergence of the concept of social resilience shares certain similarities with the conceptual development of resilience, as described in section 2. It starts with a rather unspecific understanding of social resilience as the capacity to respond, which then evolves as it incorporates notions of learning and adaptation to form a composite definition, and culminates in the acknowledgement of the importance of the roles played by power, politics and participation in the context of increasing uncertainty and surprise.

Drawing on insights of vulnerability analysis

A first definition of social resilience was provided by Adger (2000, 361), who considered it "as the ability of communities to withstand external shocks to their social infrastructure". Rather like the aforementioned understanding of resilience as the ability to persist, the focus of this definition was on the capacities of social entities to protect themselves from all kinds of hazardous events. With a similar understanding in mind, Turner et al. (2003, 8075) incorporated the notion of resilience into their vulnerability concept and defined it as "system's capacities to […] respond": these responses, they write, "whether autonomous action or planned, public or private, individual or institutional, tactical or strategic, short- or long-term, anticipatory or reactive in kind, and their outcomes collectively determine the resilience of the coupled system" (ibid., 8077). Against this background, one could argue that resilience is a combination of those elements that have been addressed in former concepts with the terms "coping strategies" and "adaptive capacity". However, the idea of resilience extends beyond these two elements.
The concept of resilience is intrinsically dynamic and relies on the Heraclitean notion that "everything changes, nothing remains still". As such, it encompasses uncertainty, change and crisis as normal, rather than exceptional, conditions. Therefore, the analysis of social resilience is geared toward understanding the mechanisms by which a system can adapt not only to the challenges that are directly at hand, but also to those that are unexpected and unknown (Kates and Clark 1996; Streets and Glantz 2000). Glavovic et al. (2003, 291) have made this explicit by defining social resilience as "the capacity to absorb […] change - the ability to deal with surprises or cope with disturbances".

Incorporating learning and adaptation into composite definitions

In a second step, the definition of social resilience was widened by including further skills and know-how that were deemed necessary for successfully dealing with uncertainty and change. Pelling (2003, 48), for instance, holds that social resilience is "a product of the degree of planned preparation undertaken in the light of a potential hazard, and of spontaneous or premeditated adjustments made in response to felt hazard, including relief and rescue". Cutter et al. (2008) define social resilience as "the ability of a social system to respond and recover from disasters" and state that it "includes those inherent conditions that allow the system to absorb impacts and cope with an event, as well as post-event, adaptive processes that facilitate the ability of the social system to re-organize, change, and learn in response to a threat." Both Pelling (2003) and Cutter et al. (2008) underline the anticipatory capacities and pre-hazard preparedness of social actors, and the capacity of a social system to learn from hazardous events how to deal with them better in the future. This positive feedback of learning from past crises in order to deal better with future uncertainties has given direction to new composite definitions. Glavovic et al. (2003, 290f.) have written that social resilience is basically "influenced by […] institutions […] and networks that enable people to access resources, learn from experiences and develop constructive ways of dealing with common problems". Based on these considerations, Obrist et al. (2010a, 289) define social resilience "as the capacity of actors to access capitals in order to - not only cope with and adjust to adverse conditions (that is, reactive capacity) - but also search for and create options (that is, proactive capacity) and thus develop increased competence (that is, positive outcomes) in dealing with a threat". From this perspective, social resilience not only addresses social actors' or entities' capacities to protect themselves from all kinds of threats: in addition to absorptive capacities in the face of perturbations and stress, the idea of social resilience also implies that catastrophes may be perceived as opportunities for doing new things, for innovation and for development (Bohle et al. 2009). Being fully resilient, then, means coping with future crises by learning, through undergoing shocks and distress, which actions are more or less appropriate in contexts of uncertainty. Therefore the key question of social resilience is, as Obrist et al. (2010a, 291) have pointed out, "what enhances capacities of individuals, groups and organizations to deal with threats more competently".
Acknowledging power, politics and participation in transformation

Even though this recent elaboration of the concept of social resilience may sound promising, many social researchers have deemed it too optimistic, since social actors' specific contexts are neglected. Lorenz (2010) has rightly pointed out that whether persons are able to cope with threats, learn from them, and adjust to future crises is not decided only by the persons themselves, or by their endowments and willingness to invest in mitigating and adaptive measures; most of all, it is a question involving all those societal factors that both facilitate and constrain people's abilities to access assets, to gain capabilities for learning, and to become part of the decision-making process. Therefore, at its heart, social resilience has to provide a suitable answer to the question of the interplay between social structures and the agency of social actors (Bohle et al. 2009). Voss (2008) has made clear that "the predominant opinion was that the pressure of an 'objective' problem was enough to initiate solution oriented processes. This was based on a fundamental trust that all problems today or in the future could be successfully dealt with through technology and science […] in a cloud of apoliticalness". However, this opinion has turned out to be wrong; firstly, because the different perspectives and expectations of diverse stakeholders must be taken into account when coming to terms with today's challenges, and secondly, because there is no Habermasian ideal speech situation (Lorenz 2010). What types of threats are perceived, how they are dealt with, and whether the poor and marginalized are heard or not - all of these are questions related to actors' capacities to participate in governance processes (Voss 2008). If alternative or critical voices remain unheard for the sake of implementing standard solutions, the "participative capacity" of the system, and the unequal distribution of power and knowledge, become key issues of social resilience (e.g. Glavovic et al. 2003; Bohle et al. 2009; Davidson 2010; Lorenz 2010; Obrist et al. 2010a). In reality, there are situations that make necessary shifts in dealing with risks which exceed established methods of coping and adapting. These shifts might include technological innovations and policy reforms, like Germany's nuclear power phase-out and the related policies to foster renewable energies. Such shifts are, however, strongly influenced by social factors (ethics, knowledge, attitudes to risk, and culture) and social thresholds, and they start with people's questioning of their everyday lives and routines, their norms, values and taken-for-granted assumptions about reality.

Three capacities of social resilience

The state of the current debate over how to define social resilience has reached a point where several authors, such as Voss (2008) and Béné et al. (2012), have suggested that three different types of capacities are necessary for understanding the notion of social resilience in its full meaning. These are labelled coping capacities, adaptive capacities and transformative capacities. In the rows of table 1 we list four criteria in order to make the distinct meanings of these three terms explicit. The first criterion refers to people's response to risks, and distinguishes between ex-ante and ex-post activities. The second criterion, the temporal scope, refers to the time horizon that is addressed.
A continuum is spanned between agency based on immediacy and short-term thinking, and project-related, calculative agency based on a more long-term rationale. The third criterion refers to the degree of change undergone by social structures, and the fourth to the outcomes that are associated with the three capacities. By means of these four criteria, we can place each of the above-mentioned capacities in a matrix of social resilience:

1) Coping capacities address "re-active" (ex-post) (Obrist et al. 2010a, 289) and "absorptive" (Béné et al. 2012, 21) measures of how people cope with and overcome immediate threats by means of those resources that are directly available. The rationale behind coping is the restoration of the present level of well-being directly after a critical event.

2) Adaptive capacities refer to the "pro-active" (ex-ante) (Obrist et al. 2010a, 289) or "preventive" measures (Béné et al. 2012, 31) that people employ to learn from past experiences, anticipate future risks and adjust their livelihoods accordingly. Adaptation is geared toward incremental change, and serves to secure the present status of people's well-being in the face of future risks. The major difference between coping and adaptation is grounded in the temporal scope of the activities involved: while coping addresses tactical agency and a short-term rationale, adaptation involves strategic agency and more long-term planning.

3) Finally, transformative capacities, or "participative capacities" in the words of Voss (2008) and Lorenz (2010), encompass people's ability to access assets and assistance from the wider socio-political arena (i.e. from governmental organizations and so-called civil society), to participate in decision-making processes, and to craft institutions that both improve their individual welfare and foster societal robustness toward future crises. The main difference between transformation and adaptation lies in the degree of change and the outcome it implies. Transformation is geared towards a radical shift in which the objective is not to secure, but to enhance, people's well-being in the face of present and/or future risks. As such it explicitly incorporates topics of progressive change and development.

So far, there has been no systematic assessment of the relation between the three dimensions of social resilience, which are often considered as a linear sequence that is traversed according to the degree of stress social actors are exposed to or the degree of agency involved (Béné et al. 2012). However, while positionality within social systems might influence the endowment of social actors with different capacities, it would be misleading to understand these three terms as static power markers of different societal sections. Empirical case studies suggest rather that all three dimensions of agency can in principle be found among all actors at all scales, albeit to very different extents depending on the context (Keck and Etzold in this volume). This is in accordance with Adger et al. (2011).

Social resilience by what means?

In contrast to the general notion of resilience, the understanding of social resilience is deeply influenced by insights from the social sciences, and addresses questions of human agency, social practices, power relations, institutions, and discourses - facets that have been widely ignored in studies of ecological resilience. Having presented the key capacities in the section above, in the next step we will address the question of the key determinants of social resilience, i.e.
social relations and network structures; institutions and power relations; and knowledge and discourses.

Social relations and network structures

As social resilience is closely related to the idea of capacity (Cannon 2008), authors such as Mayunga (2007) have proposed capital-based approaches for its assessment. These studies refer to a broad variety of assets, e.g. economic capital, physical capital, natural capital, human capital, etc. However, against the background that assets are widely acknowledged to be products of social relations (Sakdapolrak 2010, 57-60), social capital is recognized as playing a key role in building and maintaining social resilience (see e.g. Adger 2000; Adger et al. 2002; Pelling and High 2005; Wolf et al. 2010; Scheffran et al. 2012). Studies that deal with this issue can be subdivided into those that predominantly analyze the structure of social networks and those that focus on the meaning and content of social relations (Keck et al. 2012). Studies by Ernstson and colleagues (Ernstson 2008; Ernstson et al. 2010a, 2010b) are examples of the first group. Authors like Bodin et al. (2006) have tried to assess the optimal case-specific ratio of strong and weak ties necessary in order to build social resilience. And Moore and Westley (2011) have argued that network theory helps to explain the types of networks needed for social resilience. However, they have admitted that the mere presence of network structures does not guarantee that innovation will take place. What is crucially needed are "institutional entrepreneurs" that make innovative processes happen. Pelling and High (2005), Traerup (2012) and Keck et al. (2012) represent the second group of studies, which place emphasis on the content of social relations and on the critical roles of trust, reciprocity and mutual support. Pelling and High (2005) have suggested that informal social interactions are communities' best resources for maintaining their capacities to build social resilience and to change collective direction. Keck et al. (2012) have made clear that informal networks play a decisive role in urban food supply in cities of developing countries. While most authors consider social capital an enabling resource for resilience-building, Bohle (2006) has drawn attention to the dual nature of social networks: sometimes enabling, but sometimes constraining and exclusionary.

Institutions and power relations

As with assets, questions of access have also come into the focus of social resilience research. In attempting to understand people's access to resources, several authors have stressed the importance of institutions, understood as those rules and norms that both structure and are structured by social practices (Etzold et al. 2012). Adger (2000), for instance, states that "social resilience is institutionally determined, in the sense that institutions permeate all social systems and institutions fundamentally determine the economic system in terms of its structure and distribution of assets". In this regard, Hutter (2011), Garschagen (2011) and Keck (2012) have proved the usefulness of studying social resilience through the lenses of neo-institutional organization theory. From an empirical point of view, Varghese et al. (2006) have pointed out the importance of a differentiated view of the role that access to land plays in community resilience. Langridge et al.
(2006, 12) have illustrated in their case study how the capacities of Northern Californian communities to deal with water scarcity are not directly influenced by the availability of water, but by the "historically contingent mechanism to gain, control and maintain access to water". In consequence, the authors rightly plead for an analysis of the full array of structural and relational access mechanisms. The issue of access has brought questions of equity, justice and power onto the agenda. In this regard, Obrist et al. (2010b) have made clear the importance of people's cultural capital - in the form of gender, kinship or ethnic role models - in determining their access to malaria health care. Glavovic et al. (2003) have argued in their account that resilience at one level of a community does not necessarily improve resilience at another level, and Cannon (2008, 12f.) has argued that "[c]ommunities are places where normal everyday inequality, exploitation, oppression and maliciousness are woven into the fabric of relationships." Accordingly, he argues that communities must be understood as places of unequally distributed vulnerabilities and unequally distributed potentials for dealing with them. From this perspective, the process of building, conserving and reproducing social resilience appears as a highly contradictory and even conflictive process. Additionally, it is important to note that "(o)ptimizing for one form of resilience can reduce other forms of resilience" (Walker and Salt 2006, 121). In other words, resilience is costly, and it has to be achieved under conditions of finite resources and limited, though available, options for action. Hence, studies of social resilience must always address the question of who the winners and losers of ongoing processes of building social resilience are.

Knowledge and discourses

Recent studies of social resilience emphasize the roles of knowledge and culture. Marshall and Marshall (2007, 10) have suggested "that perception of risk should be included in future conceptual models of resilience". Their study shows how ranchers in Australia and the USA overestimate their capacity to cope with and adapt to climate variability, and how this misperception makes them vulnerable to more extreme climate events (see also Marshall 2010). Likewise, Furedi (2007, 485) has argued that the ways in which people "cope in an emergency or a disaster are shaped by […] a cultural narrative that creates a set of expectations and sensitises people to some problems more than others". As such, "perceptions of risk, preference, belief, knowledge, and experience are key factors that determine, at the individual and societal level, whether and how adaptation takes place" (Schwarz et al. 2011, 1138). Voss (2008) has convincingly shown that social resilience is a matter of people's power to define what is perceived as a threat or disaster and what is not, while hegemonic discourses tend to dominate and subaltern voices are seldom heard. Lorenz (2010) argues for understanding the resilience of social entities from the point of view of their symbolic order of meaning. Kuhlicke (2010) speaks of the "myth of resilience" in order to highlight its social construction and the underlying mechanisms involved. In his study on flood management in a German municipality, he has shown that resilience, as a discursive formation, can become a powerful vehicle for establishing and consolidating new power relations within the municipality.
In her study in Northern Ghana, Olwig (2012) has pointed to the multi-sited construction of local resilience as a result of the interaction of powerful global organizations with local populations. As such, resilience is a product of both local and global imaginaries and discourses. All these studies emphasize the importance of questioning by whom, for what purpose, and with what consequences various worldviews are transported through the notion of resilience. In this regard, "resilience theory […] needs to acknowledge and incorporate much more explicitly [the] role of stakeholder agency and the process through which legitimate visions of resilience are generated" (Larsen et al. 2011, 491). As Ernstson (2008, 174) has argued, what is required is a clear-cut focus on the longer-term formation and reproduction of (hegemonic) discourses - a focus that urges social scientists to think of knowledge and power as dynamic and interrelated components of the fabric of our social world.

Ways forward in social resilience research

We draw two major findings from our literature review as presented above. First, social resilience is best understood as a concept in the making. In fact, the questions of how social resilience can be properly defined, how it can be operationalized, measured and analyzed, and how it might be fostered (or hindered) are far from settled. As such, it is at present too early to make any final judgement about the validity and usefulness of the concept for social science-oriented research agendas. At the same time - and this is our second finding - three fundamental principles can be identified that give shape to this concept in the making. The concept of resilience in general terms was shown to have evolved stepwise from its initial focus on the persistence of system functions, through an emphasis on adaptation, to its most recent reorientation towards addressing the transformation of society in the face of global change. In loose correspondence with these genealogical steps, the idea of social resilience developed from its initial meaning, referring simply to actors' capacity to respond, and was enlarged to encompass actors' capacity to learn and adapt; now the concept also includes their capacity to participate in governance processes and to transform societal structures themselves. From our point of view, the concept of social resilience in its current state removes much of the concern raised by Brand and Jax (2007) with regard to conceptual clarity. Despite their loosely coupled genealogies, the actor-oriented concept of social resilience elaborated by social scientists departs significantly from the concept's original meaning. Social scientists place the spotlight on social actors, rather than on systems, and focus on capacities and practices instead of functionalities. This shift has been necessary, as it has brought important issues such as power, politics and participation back onto the agenda of the resilience debate. In our view, this mitigates much of the concern expressed by Cannon and Müller-Mahn (2010) with regard to the potentially depoliticizing effect of applying the concept of resilience to social contexts, and its tendency to reinforce the status quo. At the same time, we are aware that the current path of development of the concept of social resilience bears the risk of losing sight of the importance of context, feedbacks and connectedness that the resilience concept put on the agenda of risk studies in the first place (Nelson et al. 2007).
We therefore consider the current challenge to be that of finding ways to balance and reconcile insights from both perspectives. We argue that this balance can be achieved by systematically integrating three aspects into social resilience analysis.

First, a resilience perspective, as Nelson et al. (2007, 399) highlight, implies the relatedness and coupling of the social and ecological spheres, which cannot be understood in isolation from one another. As the review has revealed, the interlinked character of these two spheres has been deemphasized in studies on social resilience, in favor of an emphasis on the social sphere alone. Ecological systems are addressed mainly in the form of stress factors impinging on social units. In our view, the concept of "hazardscapes" developed by Mustafa (2005) and the related concept of "riskscapes" suggested by Müller-Mahn and Everts (2013) are able to address the issue of social-ecological coupling, and to minimize the risk that 'hazards' will be treated simply as 'natural events' originating outside the social sphere altogether. Inspired by the idea of landscape in geography, Mustafa (2005) points out that the concept of "hazardscapes" acknowledges both the "constructedness of nature in human contexts" and "nature in the realist sense" (Escobar 1999, 2). Landscape is understood as "the materialised result of complex human-environment relations" (FOR 1501 2010, 7). The concept thus emphasizes the hybrid character of human-environment relations and admits that the experience of hazards "is not just a function of the material geographies of vulnerability but also of how those hazardous geographies are viewed, constructed, and reproduced" (Mustafa 2005, 566). In addition, it draws attention to the pluralistic character of hazards, in temporal, spatial and social terms.

Second, we argue that a social resilience analysis that acknowledges "context, feedback and connectedness" (Nelson et al. 2007) can be crafted on the basis of a relational understanding of society as proposed by Pierre Bourdieu. Authors such as Obrist et al. (2010a) have already drawn on Bourdieu's notions of "field", "habitus" and "capital" and argue that actors' risk exposure and social resilience differ depending on their positions in the field (which are determined by their capital) and their practices (which are determined by their habitus). Siegmann (2010) and Cannon (2008) have highlighted that there might be "winners" and "losers" in resilience-building processes - even within the same community or household. With Bourdieu's notions it becomes possible to analyze fields of social resilience, to identify the probable winners and losers in these fields and - most importantly - to relate them to each other. Such a view enables us to raise another question of future interest, namely: resilience in whose interest?

Third, many studies on social resilience have emphasized the local level, which is deemed to be the crucial level of analysis. However, as Olwig (2012) has made clear, social resilience must be understood as a product of the interaction between global and local forces. Apart from rare exceptions (e.g. Lyon 2011), a systematic discussion of the relation between social resilience, scale and place is still lacking. We consider the concept of "translocality", as outlined by Greiner and Sakdapolrak (2012; see also Greiner and Sakdapolrak 2013), to be suitable for the analysis of social resilience in the context of cross-scale dynamics.
The authors distinguish between "places", referring to locations in the physical environment where face-to-face communication takes shape, and "locales", referring to settings for social interaction that stretch beyond places, and which become "translocales" by means of remote interactions (Greiner and Sakdapolrak 2012). With the concept of "translocality", social resilience can be conceived of as the outcome of the pluri-local embeddedness of social actors, which is increasingly gaining importance under the present condition of ongoing globalization.

Conclusion

This paper has focused on the concept of social resilience. This notion shares the key principles of the general resilience concept, which is rooted in and dominated by ecological systems thinking, specifically in its focus on systems' persistability, adaptability, and transformability. However, it departs from the general resilience discourse by adopting an actor-oriented perspective. Through the establishment of this approach to social resilience, the threat of a re-emergence of oversimplistic concepts of natural determinism in their application to the social realm has been counterbalanced, and politics has been brought back into the discussion. Based on our review of the literature, we have identified three important dimensions of social resilience, which together take into account social actors' capacities to cope with and overcome all kinds of immediate adversities (coping capacities), their capacities to learn from past experiences and adjust themselves to pressing new challenges in the future (adaptive capacities), and their capacities to craft institutions that foster individual welfare and sustainable societal robustness in the event of present and future crises (transformative capacities). As this review illustrates, the concept of social resilience shares several commonalities with social vulnerability and livelihoods approaches (for an overview see Manyena 2006 and Miller et al. 2010). However, there are also important differences. We argue that the concept of social resilience contributes new perspectives to the understanding of vulnerable groups under stress. Firstly, the concept recognizes uncertainty, change and crisis as normal, rather than exceptional. The world is conceived of as being in permanent flux. In consequence, social resilience is perceived as a dynamic process, rather than as a certain state or characteristic of a social entity. Secondly, the study of social resilience emphasizes the embeddedness of social actors in their particular time- and place-specific ecological, social and institutional environments. As such, it is a relational rather than an essentialist concept. Thirdly, social learning, participative decision-making, and processes of collective transformation are recognized as central aspects of social resilience. Social transformations are never deterministic, but are open to debate, despite the fact that hegemonic discourses and technical innovation may play important roles in defining potential directions for development. To sum up, then, social resilience is not only a dynamic and relational concept, but also a deeply political one. As such, the search for new approaches to resilience-building - especially with regard to the livelihoods of the poor and marginalized - is revealed to be not merely a technical question, but also a contested political one.
Unravelling social status in the first medieval military order of the Iberian Peninsula using isotope analysis

Medieval Iberia witnessed the complex negotiation of religious, social, and economic identities, including the formation of religious orders that played a major role in border disputes and conflicts. While archival records provide insights into the compositions of these orders, there have been few direct dietary or osteoarchaeological studies to date. Here, we analysed 25 individuals discovered at the Zorita de los Canes Castle church cemetery, Guadalajara, Spain, where members of one of the first religious orders, the knights of the Order of Calatrava, were buried between the 12th and 15th centuries CE. Stable carbon (δ13C) and nitrogen (δ15N) isotope analyses of bone collagen reveal dietary patterns typical of the medieval social elite, with the Bayesian R model 'simmr' suggesting a diet rich in poultry and marine fish in this inland population. Social comparisons and statistical analyses further support the idea that the order predominantly comprised the lower nobility and urban elite, in agreement with historical sources. Our study suggests that while the cemetery primarily served the order's elite, the presence of individuals with diverse dietary patterns may indicate complexities of temporal use or wider social interaction within the medieval military order.

A Christian coalition, which included the Order of Calatrava and European volunteers, achieved a key victory at Navas de Tolosa (1212 CE), near the town of Santa Elena, Jaén, Spain1,11,12. This victory also meant the recovery of the old fortress of Calatrava la Vieja (Carrión de Calatrava, Ciudad Real, Spain) and the transfer of the headquarters of the order once again, this time to the castle-monastery of Calatrava la Nueva (Aldea del Rey, Ciudad Real, Spain). The territorial progress of the Christian forces, which culminated in 1492 CE with the capture of the city of Granada, ensured the continued control of the castle by the warrior monks of Calatrava until its abandonment in the 16th century CE13,14.
The Order of Calatrava emerged in the Kingdom of Castile, in what would become the southernmost region of the kingdom on the border with al-Andalus. Its establishment was prompted by the need to replace the Order of the Temple and to address the looming threat of Almohad attacks11,12. The monarchy played a key role in the formation of the order, collaborating with Cistercian monks11. The order itself comprised both religious and lay members who shouldered military responsibilities11. Commencing in the 13th century CE, this militia underwent a process of aristocratization, leading to increasing secularization. The upper echelons of the hierarchy were dominated by prominent noble families. The nobility contributed human resources, established familial agreements, and made substantial property donations. Consequently, the Order of Calatrava became an institution endowed with significant material and monetary resources11. Its members also wielded influence in the kingdom's politics as the centuries unfolded. Despite the vow of poverty taken by its members, this commitment was frequently compromised. Members perceived their affiliation not solely as a religious journey for the salvation of their souls, but also as an avenue for economic betterment and social advancement. This is partly a consequence of the fact that the order primarily recruited its members from the lower nobility or the urban oligarchies, who constituted the majority of knights within the institution11. The military force of the order encompassed a spectrum ranging from freire knights (heavy cavalry with their own horses and entourage), freire sergeants (fighting on horseback but with simpler weapons, and directed by the highest lay and ecclesiastical hierarchies), associated fighters (temporary volunteers, primarily knights), mercenaries, and vassals.

Yet, despite historical insights into the social origins and composition of the order, relatively little is known about how these distinctions were expressed in terms of dietary and economic status9. Medieval historical records from northern Iberia reveal distinct dietary patterns shaped by social status, historical circumstances, and geography. The diets of economic elites were marked by significant meat consumption, particularly of poultry and young animals, a luxury accessible only to the wealthiest15-17. Urban areas favored beef and lamb18. Fish played a crucial role for both elites and urban residents, with river fish, especially in the Kingdom of Castile, becoming a staple due to religious restrictions on meat19. Despite these constraints, historical sources indicate a thriving trade in marine fish to Castilian cities, emphasizing its widespread availability even inland19. Crop cultivation in the Middle Ages encompassed wheat, barley, millets, rye, and oats, with rye being the most widely cultivated cereal, while wheat was considered a luxury for the social elite16,20,21. Rural peasants, constituting the majority of the population, based their diets on local cereal crops, varying by region22. Dairy products and meat were less accessible to rural populations, and individuals of lower social status often turned to Panicum miliaceum, or millet, during poor harvests or famine16,18,21,23. However, to date there have been no detailed studies of religious order members from this time period.
Since 2014, a comprehensive series of archaeological excavations has been carried out at the Castle of Zorita de los Canes. This research endeavor has been made possible through a collaborative partnership between the Davidson Day School of North Carolina and the professional archaeological firm ArchaeoSpain. These excavations uncovered a Christian cemetery located within the expansive esplanade of the Corral de los Condes (Fig. 2). The precise commencement of its use as a cemetery remains a topic of debate; however, some scholars propose that it may have originated towards the end of the 12th century and remained in use until the 15th century CE24. Osteological studies have revealed that most individuals interred in this cemetery were adult men of varying ages. Significantly, signs of trauma have been identified in several of these individuals, indicative of violent incidents, probably battle wounds25, in keeping with primary use as a cemetery for the Order of Calatrava. Nevertheless, amidst these remains, one woman and one child have been unearthed, giving rise to diverse hypotheses regarding the reasons for their burial in a location that initially appeared to be designated for members of the Order of Calatrava. These discoveries raise questions about the historical context and social composition of at least some parts of the cemetery. To illuminate the dietary practices and social stratification of those interred within the fortress, particularly individuals with affiliations to the Order of Calatrava, a comprehensive stable isotope analysis encompassing carbon (δ13C) and nitrogen (δ15N) was conducted. This examination was extended to both human (n = 25) and faunal (n = 19) remains from the precincts of the Castle of Zorita de los Canes. To enhance the interpretation of our findings, we utilized a Bayesian model implemented through the R package simmr (Stable Isotope Mixing Models in R)26 to quantitatively assess the diets of the individuals studied at Zorita de los Canes Castle. Beyond its substantial contributions to our understanding of the dietary patterns within military orders of the past, this interdisciplinary research effort offers profound insights into the historical and archaeological significance of the Castle of Zorita de los Canes.

Discussion

Despite the intensive use of a small area of land as a cemetery at the castle of Zorita de los Canes for over 300 years, disentangling the stratigraphic complexity and chronology is challenging, hindering an exact correlation of tomb typologies and their age (Fig. 2)24.
Nevertheless, the presence of knights of the Order of Calatrava in the Corral de los Condes cemetery is evident. Historical sources, the dominance of male skeletons with evidence of abundant battle injuries, and dietary evidence of high social status all indicate that the cemetery was dominated by the monk warriors, or knights, of the Order of Calatrava. Approximately one-third of the tombs were unaltered, providing a relative chronology through their typology, stratigraphic level, proximity to the church entrance, and the presence of a high number of penetrating injuries and blunt force traumas. Individuals predating the establishment of the Order of Calatrava at the Castle of Zorita de los Canes are most likely located in the deepest, as-yet-unexcavated parts of this necropolis, or in altered tombs with multiple individuals (e.g., skeletal elements not related to complete individuals discovered in tombs 1, 4, 17, 18, and 27). Additionally, early burials marked with order crosses were discovered, reminiscent of those found in Évora, Portugal, later recognized as belonging to the Order of Évora31. Most of the individuals display a significant number of penetrating stab wounds and blunt force injuries (Table 1), suggestive of violent episodes which, as in Évora31, may be associated with war events such as the battles of Alarcos (1195 CE) or Navas de Tolosa (1212 CE). Between these two conflicts, Zorita de los Canes assumed a pivotal role as the primary fortress of the Order of Calatrava13, with the burial of its main members probably taking place in this fortress between 1195 and 1212 CE, until the order relocated its headquarters to Calatrava la Nueva13,14 (Fig. 1). The presence of a woman (n = 1) and an infant (n = 1) among the monk warriors, as in Évora, may be associated with the site's repopulation. However, as in Évora, determining the exact timing of their arrival and their social status remains challenging31, and they may indicate that some individuals come from different periods of site use. Some altered tombs with remains from more than one individual may belong to earlier Christian inhabitants, such as those involved in the civil war between the Lara and Castro families (between 1158 and 1169 CE)14. To determine the age and origin of each individual precisely, a comprehensive study involving exhaustive radiocarbon dating will be essential. Overall, however, it is clear that the majority of individuals sampled here were likely members of the standing army of the Order of Calatrava. Comparing the values from adult individuals at Zorita de los Canes (δ15N mean ± SD = 11.4 ± 1.0‰; δ13C mean ± SD = −18.0 ± 0.6‰) with those derived from Pérez-Ramallo et al.'s4 research on social status differences in the medieval Iberian Peninsula suggests that the individuals under study belonged to the social elite of their time (see Table 2).
Pérez-Ramallo et al.4 reconstructed individuals' diets through stable carbon and nitrogen isotope analysis, revealing significant variations in animal protein consumption when social origins were taken into account. Their research encompassed various social groups, including royalty, bishops, residents of the emerging northern Iberian medieval cities, and rural populations. The authors observed that individuals belonging to the Christian social elite of the time (10th-12th centuries CE) exhibited higher stable nitrogen isotope values (δ15N). Several studies have observed sex-based dietary differences in the medieval Iberian Peninsula17,32,33; however, a significant number of studies have also observed no such differences4,29,34-36. Still, the fact that we could only analyse one female individual inhibits any comprehensive comparison based on sex. Alternatively, as seen in Évora31, this woman, together with other male individuals who exhibited lower δ15N values (COC(CL)02 and COC(CL)25; see Table 1), may have been among individuals attracted by the repopulation efforts in recently conquered areas. The presence of women or individuals from other social classes among the knights of the Order of Calatrava could therefore indicate their roles as servants or workers in the castle. On the other hand, the infant's values (COC(CL)T.1(4): δ15N = 14.6‰, δ13C = −17.9‰; estimated age between 0.8 and 1.5 years) clearly show the effects of breastfeeding when compared to the adult individuals in the study. This is consistent with the expected practice of breastfeeding up to the age of 3 years for boys and up to the age of 2 years for girls during this period37. When comparing the values of COC(CL)T.1(4) with those of the woman analysed in this study (COC(CL)20), it was observed that the infant's δ15N values are higher and fall outside the expected increment range (2-3‰)38,39. Consequently, the δ15N value of the mother, or of the person in charge of breastfeeding the infant, as another woman was sometimes substituted37, should range between 11.6‰ and 12.6‰. This suggests the presence of at least one more woman with a diet richer in animal and/or marine protein and, consequently, potential social status differences among local women as well. However, our interpretation remains speculative until more female individuals are discovered and analysed at Zorita de los Canes, including through other techniques that can examine kinship relationships (e.g., aDNA). Our Bayesian model attempts to provide more detailed insights into the dietary sources of individuals at Zorita de los Canes (Sup. Mat. S1 and S2).
4), aligning with historical records that highlight poultry as a favourite among the Medieval social elite in Iberia [15][16][17] .This would correspond well with the social status of most Calatrava knights.The significant consumption of freshwater, and particularly marine fish, is also notable, potentially influenced by the order's affiliation with the Cistercian Order, which promoted strict adherence to religious restrictions on meat consumption 19 .Our Bayesian model indicates a notable disparity in fish consumption, with marine fish accounting for 13.5%, compared to river fish at 6.8% (Sup.Mat.S2).Given the abundance of fish in the nearby Tagus River, an inland water source, it is notable that the majority of consumed fish comes from coastal regions.This observation may further underscore the economic prosperity and connectivity of the population, enabling extensive consumption of marine-origin fish even in inland areas.The Simmr Bayesian model further highlights a preference for C 3 plants over C 4 plants, supporting the historical sources that indicate the social elite's favouritism toward barley or wheat rather than millet 16,20,21 .The presence of C 4 plants may be linked to direct consumption or indirect exposure through the ingestion of animals fed with millet as fodder 21 .It might also be influenced by the site's proximity to al-Andalus, providing individuals in Zorita de los Canes access to plants commonly found in the Islamic region of the Iberian Peninsula, such as sorghum or sugar cane 16,40 .However, differences in time and location can cause significant variations in the isotopic values of plants and animals due to factors such as climate, farming practices, manuring, proximity to the coast, latitude, and altitude [41][42][43] .Despite our meticulous efforts to select C 3 and C 4 plants, as well as marine and freshwater fish from geographically closer and climatically similar areas for our Bayesian model (see Sup. Mat.Tab.S2.1), it is important to acknowledge that the variability inherent in their location and potential baseline differences through time significantly impacts the accuracy of our approach.This is evident from the wide range of deviation in the estimates produced (see Sup. Mat.Fig. S2.1; Tab.S2. 3 and S2.4).Furthermore, we have included a substantial number of food sources (n = 9), which further complicates the precision of our Bayesian model.Therefore, we should approach this interpretation with www.nature.com/scientificreports/caution until further analysis of local plants and freshwater fish can provide a more reliable model.In addition, the impact of an individual's geographical origin on these dietary patterns remains uncertain and necessitates further analyses, including studies using 87 Sr/ 86 Sr, δ 18 O, and δ 34 S proxies 34,35,44 . A comprehensive analysis of our findings in comparison with stable isotope analyses conducted at contemporaneous archaeological sites near Zorita de los Canes Castle provides a more nuanced understanding of socially-related dietary practices in Medieval Iberia (Table 2, Fig. 
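For readers who want to see the arithmetic behind such source-apportionment estimates, the sketch below solves a crude, deterministic mass-balance version of the mixing problem. It is only an analogue: the actual simmr model is Bayesian, propagates source and discrimination uncertainty via MCMC, and used the nine sources described above. All source signatures here are hypothetical placeholder values, not the baselines used in this study.

```python
import numpy as np
from scipy.optimize import nnls  # non-negative least squares

# Hypothetical consumer-space source signatures (δ13C, δ15N in ‰),
# i.e. source means already shifted by assumed trophic discrimination.
sources = {
    "poultry":         (-19.5, 10.0),
    "marine fish":     (-13.5, 13.5),
    "freshwater fish": (-21.5, 12.0),
    "C3 plants":       (-25.5,  6.5),
    "C4 plants":       (-11.5,  6.0),
}
consumer = np.array([-18.0, 11.4])  # mean adult values reported above

# Mixing system: p_i >= 0, sum(p) = 1, sum_i p_i * source_i ≈ consumer.
# The sum-to-one constraint is appended as a heavily weighted equation.
A = np.array(list(sources.values())).T            # 2 x n isotope matrix
A = np.vstack([A, 1e3 * np.ones(A.shape[1])])     # Σp = 1 row
b = np.concatenate([consumer, [1e3]])

proportions, _ = nnls(A, b)
for name, frac in zip(sources, proportions):
    print(f"{name:>15s}: {frac:5.1%}")
```

Because the system is underdetermined (more sources than isotope tracers), a least-squares solution like this picks only one of many feasible mixtures; the Bayesian treatment instead returns full posterior distributions over the proportions, which is why wide credible intervals are reported above.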
A comprehensive comparison of our findings with stable isotope analyses conducted at contemporaneous archaeological sites near Zorita de los Canes Castle provides a more nuanced understanding of socially related dietary practices in medieval Iberia (Table 2, Fig. 6). Our initial comparison involves knights of the Order of Évora (Portugal), fellow members of the Order of Calatrava31, the social elite of the Kingdom of Aragon (including two bishops, a princess, a count, and two monks)4, and members of the Castilian royalty buried in Seville Cathedral15. Notably, the δ15N and δ13C values obtained from the samples analyzed here closely resemble those from Évora. However, in contrast to the values observed by Jiménez-Brobeil et al.15 for the Castilian royal members buried in Seville, the individuals studied here exhibit generally lower δ15N values. This suggests that the individuals at Zorita de los Canes belonged to a social elite, yet their values are lower than those of the royalty. This aligns with historical sources indicating that the order primarily comprised the lower nobility and urban elite11. Nevertheless, it is noteworthy that two individuals (COC(CL)07: δ15N = 13.2‰, δ13C = −17.7‰; COC(CL)09: δ15N = 12.5‰, δ13C = −18.4‰) could be associated more closely with the higher nobility on the basis of their nitrogen and carbon isotope values. Intriguingly, both individuals displayed perimortem trauma from a violent episode (Table 1)25. A statistical comparison reveals a significant difference in the δ13C values of adult individuals from Zorita de los Canes compared to those of the social elite of the Kingdom of Aragon and the royal members of Castile (Kruskal-Wallis test and Mann-Whitney pairwise test for equal medians; p < 0.05). This distinction may be attributed to a stronger adherence to religious norms, leading to a shift from meat to fish consumption among the order's members. However, it also raises the possibility of a higher intake of C4 plants in contrast to the identified members of royalty. Additionally, comparing our results with data available in the wider literature on geographically and chronologically close medieval Muslim and Christian populations further supports our interpretation of individuals belonging to the social elite, especially when contrasting rural and urban populations (Fig. 6, Table 2). While fish consumption is evident at Zorita de los Canes, the prominence of animal protein in the individuals analyzed stands out compared to coastal populations (e.g., Benipeicar, Tossal de las Bases or El Raval) (Table 2). Caution is warranted when drawing parallels with the Muslim population of Valencia (11th-13th centuries CE), however, given factors like the potential presence of non-local individuals noted by the authors of that study17, as well as significant climatic and baseline variation between the regions.
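As a minimal sketch of the non-parametric comparison described above (a global Kruskal-Wallis test followed by pairwise Mann-Whitney tests), the following fragment uses SciPy; the δ13C arrays hold hypothetical values for illustration, not the per-individual data from the studies compared here.

```python
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical δ13C values (‰) for three groups:
zorita  = [-18.0, -17.7, -18.4, -17.9, -18.2, -17.5]
aragon  = [-18.9, -19.1, -18.6, -19.3, -19.0]
castile = [-18.8, -19.2, -19.4, -19.1]

h_stat, p_global = kruskal(zorita, aragon, castile)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_global:.4f}")

# Pairwise follow-up; in practice a multiple-comparison correction
# (e.g. Bonferroni) would be applied to these p-values.
for name, group in [("Aragon elite", aragon), ("Castile royalty", castile)]:
    u_stat, p_pair = mannwhitneyu(zorita, group)
    print(f"Zorita vs {name}: U = {u_stat:.1f}, p = {p_pair:.4f}")
```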
Overall, our stable isotope results suggest that the Corral de los Condes cemetery at Zorita de los Canes Castle was primarily intended for knights and sergeants of the order, positions held by the high nobility (the hierarchy), but particularly by the lower nobility and the urban elite. However, individuals with diets more typical of other social statuses imply that the cemetery might not have been exclusively reserved for the order's elite, but also included members of lower status within it. Considering the order's role as a mechanism for social advancement, these male individuals may have come from the lower nobility or from the urban elite with fewer material means. Future analyses of different bone or dental remains from the same individuals could reveal whether there were dietary differences throughout their lives, shedding light on whether membership in the Calatrava order improved living conditions. Nonetheless, as mentioned earlier, we cannot rule out the possibility that some individuals are representative of Christian communities predating the fortress's cession to the Calatrava order in 1174 CE, or are indicative of wider social repopulation of the site during times of peace. In addition, our Bayesian model is limited by the absence of local values for plants, cereals, and freshwater fish. These limitations, and the model's high deviation and error margins, must be considered when interpreting the results.

Materials and methods

We present a comprehensive, novel isotopic dietary analysis of 25 individuals (23 adult males, 1 adult female, and 1 child of indeterminate sex) whose skeletal remains were exhumed from the Corral de los Condes cemetery located within the historically significant Castle of Zorita de los Canes, Guadalajara, Spain (Fig. 1). Our approach involved the sampling of a rib fragment from each individual, reflecting the last 5-10 years of life due to the quick turnover of rib bone50. To further enhance the accuracy and contextual interpretation of the isotopic data derived from human collagen, we extended our investigations to encompass 19 sets of faunal remains (cattle, ovicaprids, pigs, and poultry) dating from between the 12th and 16th centuries CE, comprising both domestic and wild fauna, also discovered within the wider confines of the castle. All data generated or analysed during this study are included in this published article and its supplementary information files. All methods employed in this study were executed in strict adherence to the relevant guidelines and regulations of the Patrimonio Cultural de Castilla-La Mancha. The experimental protocols underwent rigorous scrutiny and received approval from the Zorita de los Canes town council and those legally responsible for the Cultural Heritage of Castilla-La Mancha (legal representatives of the Cultural Heritage of the Government of Spain), ensuring the highest standards of ethical conduct. Additionally, informed consent was obtained from the remains' legal guardians, Dionisio Urbina and Catalina Urquijo, co-authors of this manuscript and directors of the archaeological excavation at Zorita de los Canes, emphasizing the commitment to ethical practices in this research endeavour.
Stable isotope analysis

The dietary patterns of ancient humans can furnish insights into their social standing and origins4,34,35. The variability in stable carbon isotopes (δ13C) within terrestrial ecosystems is primarily shaped by the photosynthetic pathways employed by the plants forming the foundation of the food chain51. Consequently, distinct and non-overlapping δ13C values emerge, with C3 plants (such as trees, shrubs, temperate grasses, and crops like wheat) exhibiting a range of approximately −24‰ to −36‰ (with a global average of −26.5‰)52. By contrast, C4 plants (including tropical grasses and crops like maize, millet, and sugar cane) show a range of about −9‰ to −17‰ (with a global average of −12‰)52,53. CAM plants (such as succulents) display δ13C values that overlap with, and fall between, those of C3 and C4 plants. Due to the different sources of CO2 in marine ecosystems, marine plants exhibit δ13C values more akin to those of C4 plants. These δ13C distinctions are transmitted up the food chain to the tissues of consumers, with an enrichment of 0.5-2‰ in δ13C observed between trophic levels52,53. Stable nitrogen isotope (δ15N) values for both terrestrial and aquatic animals vary in accordance with trophic level, increasing by 3-6‰ with each successive trophic level54. Longer food chains and variations in nitrogen sources contribute to, on average, higher δ15N values among marine and freshwater consumers, although the δ13C values of freshwater ecosystems exhibit greater variability55,56.

We prepared bone samples (rib fragments) of approximately 1 g by breaking them into smaller fragments and removing adhering soil through abrasion using a sandblaster. These samples underwent demineralization by immersion in 0.5 M HCl for a period of 1-7 days. After complete demineralization, the samples were rinsed three times with ultra-pure H2O. The residue was then gelatinized in pH 3 HCl at 70 °C for 48 h. The resulting soluble collagen solution was Ezee-filtered to eliminate insoluble residues, following the method outlined by Brock et al.57. Subsequently, the samples were lyophilized in a freeze dryer for 48 h. Where sufficient material was available, approximately 1.0 mg of the resulting purified collagen was weighed in duplicate into tin capsules for further analysis. The δ13C and δ15N ratios of the bone collagen were analyzed using a Thermo Scientific Flash 2000 Elemental Analyser coupled to a Thermo Delta V Advantage mass spectrometer at the Isotope Laboratory, MPI-GEA (formerly MPI-SHH), Jena. Isotopic values are presented as the ratio of the heavier isotope to the lighter isotope (13C/12C or 15N/14N), expressed as δ values in parts per mil (‰) relative to international standards: VPDB for δ13C and atmospheric N2 (AIR) for δ15N. The results were calibrated against international standards (IAEA-CH-6 sucrose, IAEA-N-2 ammonium sulfate, and USGS40 L-glutamic acid). Specifically, USGS40 values were δ13Craw = −26.4 ± 0.1, δ13Ctrue = −26.4 ± 0.0, δ15Nraw = −4.4 ± 0.1, and δ15Ntrue = −4.5 ± 0.2; IAEA-N-2 values were δ15Nraw = 20.2 ± 0.1 and δ15Ntrue = 20.3 ± 0.2; IAEA-CH-6 values were δ13Craw = −10.9 ± 0.1 and δ13Ctrue = −10.8 ± 0.0. Replicate analyses of standards indicate a machine measurement error of approximately ±0.1‰ for δ13C and ±0.1‰ for δ15N.
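For reference, the δ notation used throughout follows the standard convention:

$$\delta = \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1 \right) \times 1000\ \text{‰}, \qquad R = {}^{13}\mathrm{C}/{}^{12}\mathrm{C} \ \text{or}\ {}^{15}\mathrm{N}/{}^{14}\mathrm{N},$$

so that, for example, a bone-collagen δ13C of −18.0‰ means the sample's 13C/12C ratio is 1.8% lower than that of the VPDB standard.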
We assessed the overall measurement precision by conducting repeat extractions from a fish gelatin standard (n = 20), resulting in a precision of ±0.1‰ for δ13C and ±0.1‰ for δ15N. To ascertain purity and collagen preservation, we examined the carbon-to-nitrogen (C:N) atomic ratio, aiming for a range of 2.9-3.6, the range typically found in fresh bone collagen [58]. The elemental mass percentages are approximately 34.8 ± 8.8% for carbon and between 11 and 15% for nitrogen [59]. External factors, such as humic acids or salts, can alter these percentages [59]. Collagen yield, representing the percentage of collagen extracted from the bone, serves as an indicator of bone preservation quality. Fresh bone typically contains about 20% collagen. Diagenesis can cause collagen loss in the bone, to the extent that the isotopic signature obtained from a low-yield sample may no longer reflect its original isotopic signature. While sample filtration helps eliminate residues, it can also lead to a substantial loss of yield (around 40% to 60%) [60]. Ambrose and Norr [53] set the acceptance limit at 1.2%, whereas van Klinken [59] established a minimum range of 0.5-1.0% for archaeological bones.

Figure 1. Map of the Iberian Peninsula showing the location of Zorita de los Canes and the other sites mentioned in this study. This map was generated using the open access software QGIS 3.28.1-Firenze (https://qgis.org/es/site/).

Figure 2. Photogrammetry and map of the medieval Christian cemetery known as Corral de los Condes or Corral of the Counts (11th-15th centuries CE) discovered and excavated in the castle of Zorita de los Canes.

Figure 3. δ13C and δ15N of fauna and humans analysed in the present study and C3 cereals by Knipper et al. (2020) [27], C4 cereals by Nitsch et al. [28], marine fish by Alexander et al. [17], López-Costas and Müldner [29], and Mion et al. [30]; and freshwater fish by Mion et al. [30].

Figure 4. Comparison of dietary proportions between sources from the Bayesian R model 'Simmr'.

Figure 5. These distinctions were primarily observed between Oryctolagus cuniculus and Bos taurus, Gallus gallus, and ovicaprines for both δ15N and δ13C values.

Figure 6. δ13C and δ15N of human individuals from the present study and the 'Christian' and 'Muslim' compiled literature data (see Table 2). C = Christian; M = Muslim.

Table 1. Stable isotopes of δ13C and δ15N of humans and fauna from Zorita de los Canes castle. *Presence of trauma(s).

Table 2. δ15N and δ13C measurements for medieval 'Christian' and 'Muslim' individuals available in the literature from the Iberian Peninsula, excluding those potentially influenced by breastfeeding (mean, SD, and number of samples).
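Returning to the collagen preservation criteria above, they lend themselves to a simple screening routine. The sketch below encodes them as one possible acceptance filter; the exact cut-offs chosen (e.g., whether to apply the 1.2% or the 0.5-1.0% yield limit) are an analyst's decision, and the sample records are invented.

# Minimal quality-control screen for bone-collagen isotope samples, using the
# acceptance criteria cited in the text: atomic C:N of 2.9-3.6, elemental
# percentages near 34.8 +/- 8.8 % C and 11-15 % N, and a minimum collagen yield.
def passes_qc(sample, min_yield=1.0):
    cn_ok = 2.9 <= sample["CN_ratio"] <= 3.6              # fresh-collagen C:N window
    yield_ok = sample["collagen_yield_pct"] >= min_yield  # preservation indicator
    pct_ok = 26.0 <= sample["pct_C"] <= 43.6 and 11.0 <= sample["pct_N"] <= 15.0
    return cn_ok and yield_ok and pct_ok

samples = [  # hypothetical records for illustration only
    {"id": "ZC-01", "CN_ratio": 3.2, "collagen_yield_pct": 4.5, "pct_C": 41.0, "pct_N": 14.8},
    {"id": "ZC-02", "CN_ratio": 3.8, "collagen_yield_pct": 0.6, "pct_C": 30.1, "pct_N": 10.2},
]
for s in samples:
    print(s["id"], "accepted" if passes_qc(s) else "rejected")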
2024-05-16T06:17:55.141Z
2024-05-14T00:00:00.000
{ "year": 2024, "sha1": "4544099b5611ef6802dc184d00418710c43ed3e5", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-024-61792-y.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4ca954eb4ff3489e44c81cc8850d5b7b9f72fa1d", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [ "Medicine" ] }
125919330
pes2o/s2orc
v3-fos-license
General solution for MHD-free convection flow over a vertical plate with ramped wall temperature and chemical reaction

The aim of the article is to study the unsteady magnetohydrodynamic (MHD)-free convection flow of an electrically conducting incompressible viscous fluid over an infinite vertical plate with ramped temperature and constant concentration. The motion of the plate is a rectilinear translation with an arbitrary time-dependent velocity. Closed-form solutions for the temperature, concentration and velocity fields of the fluid are obtained. The influence of a transverse magnetic field that is fixed relative either to the fluid or to the plate is studied. Furthermore, the effects of the system parameters on the fluid velocity are analyzed through numerical simulations and graphical illustrations.

Several related studies have analyzed flows past a vertical plate by considering periodic functions for the suction velocity, temperature, and concentration at the wall. The unsteady-free convection flow of a Newtonian fluid past an impulsively started infinite vertical plate in a porous medium with ramped wall temperature, ramped wall concentration, and ramped plate velocity has been recently studied by Ahmed and Dutta [1]. Ghara et al. [5] have studied the MHD-free convection flow past an impulsively moving vertical plate with ramped wall temperature. Seth et al. [14] studied Hall and Soret effects on unsteady MHD-free convection flow of a radiating and chemically reactive fluid past a moving vertical plate with ramped temperature in a rotating system. The unsteady MHD-free convection flow past a porous plate under oscillatory suction velocity was analyzed by Reddy [10]. MHD-free convection flow of a second-grade fluid was studied by Samiulhaq et al. [11]. Narahari and Debnath [8] have studied the unsteady MHD-free convection flow past an accelerated vertical plate with constant heat flux and heat source. They considered two cases, namely, the magnetic lines of force held fixed relative to the fluid or held fixed relative to the plate. For more interesting and related results, see [6,12,13,16] and the references therein.

In the present paper, the unsteady MHD-free convection flow near a vertical plate is considered. The plate has a translational motion in its plane with an arbitrary time-dependent velocity. The wall temperature varies as a ramped law and the wall concentration is constant. Heat generation or absorption and a chemical reaction are also considered. The fluid is electrically conducting, and regarding the applied magnetic field two cases are considered, namely, when the magnetic lines of force are held fixed to the fluid or to the plate [8]. The difference between the fluid velocities in the two cases is studied and some properties are highlighted. The governing linear partial differential equations are written in non-dimensional form and solved by means of the Laplace transform method. The influence of the system parameters (e.g., magnetic parameter, Grashof numbers, chemical reaction, and heat absorption parameters) on the fluid velocity is also analyzed through graphical illustrations.

Mathematical formulation of the problem

Consider the unsteady-free convection flow of an incompressible, viscous, electrically conducting fluid. The fluid is near an infinite vertical plate with ramped temperature and constant concentration. The motion of the plate is a rectilinear translation with an arbitrary time-dependent velocity. We introduce a coordinate system with the x-axis along the plate in the vertical upward direction and the y-axis normal to the plate.
A uniform transverse magnetic field of strength B0 is applied. Initially, at time t = 0, the plate and the fluid are at rest with the same temperature T∞ and species concentration C∞. After time t = 0+, the plate moves with the velocity U0 f(t) in its own plane along the x-axis. Here, U0 is a constant velocity and f(·) is a dimensionless piecewise continuous function with f(0) = 0. Heat is supplied to the plate as a time-ramped function in the presence of a heat source and a chemical reaction. The species concentration at the plate is constant, Cw. We further assume that:

1°. The magnetic Reynolds number is small, so that the induced magnetic field is negligible in comparison with the applied magnetic field.

2°. Viscous dissipation, radiative effects, and Joule heating terms are neglected in the energy equation (usually, in free convection flows, the velocity has small values). However, according to Magyari and Pantokratoras [7], radiative effects can easily be included by a simple rescaling of the Prandtl number.

3°. No external electric field is applied and the effect of polarization of the ionized fluid is negligible; therefore, the electric field is assumed to be zero.

4°. There exists a first-order chemical reaction between the fluid and the species concentration. The level of species concentration is very low, so that the heat generated by the chemical reaction can be neglected.

Since the plate is infinitely extended in the x and z directions, all physical quantities are functions of the spatial coordinate y and time t only. Under the usual Boussinesq approximation, the flow is governed by the following system of equations [8,15]:

∂u/∂t = ν ∂²u/∂y² + g βT (T − T∞) + g βC (C − C∞) − (σ B0²/ρ) u,   (1)

ρ Cp ∂T/∂t = k ∂²T/∂y² − Q (T − T∞),   (2)

∂C/∂t = Dm ∂²C/∂y² − R (C − C∞).   (3)

In the above equations, u(y, t), T(y, t), and C(y, t) are the velocity, temperature, and species concentration of the fluid, respectively, ν is the kinematic viscosity, g is the acceleration due to gravity, βT is the thermal expansion coefficient, ρ is the fluid density, βC is the volumetric coefficient of expansion with species concentration, σ is the electrical conductivity, Cp is the specific heat at constant pressure, k is the thermal conductivity, Q is the heat generation or absorption coefficient, Dm is the chemical molecular diffusivity, and R is the chemical reaction parameter. Equation (1) is valid when the magnetic lines of force are fixed relative to the fluid. If the magnetic field is fixed relative to the plate, the momentum Eq. (1) is replaced by [8,17]

∂u/∂t = ν ∂²u/∂y² + g βT (T − T∞) + g βC (C − C∞) − (σ B0²/ρ) [u − U0 f(t)].   (4)

Equations (1) and (4) can be combined as

∂u/∂t = ν ∂²u/∂y² + g βT (T − T∞) + g βC (C − C∞) − (σ B0²/ρ) [u − ε U0 f(t)],   (5)

where ε = 0 when the magnetic field is fixed relative to the fluid and ε = 1 when it is fixed relative to the plate. The appropriate initial and boundary conditions express the rest state at t = 0 and, at the plate, the prescribed velocity U0 f(t), the ramped wall temperature, and the constant concentration Cw. Introducing dimensionless variables and dropping the star notation, we obtain the non-dimensional problem (10)-(15). In these relations, Gr, Gm, M, Pr, Q, R, and Sc denote the thermal Grashof number, the mass Grashof number, the Hartmann number, the Prandtl number, the dimensionless heat generation or absorption coefficient, the dimensionless chemical reaction parameter, and the Schmidt number, respectively, whereas H(t) is the Heaviside unit step function.

Solution of the problem

In the following, the solution of the partial differential equations (10)-(12) with the initial and boundary conditions (13)-(15) is determined by the Laplace transform. The momentum equation depends upon the energy and concentration equations; therefore, we first establish the exact solutions for the temperature and concentration.

Temperature field

In a recent paper, Ghara et al. [5] studied a similar MHD-free convection flow with thermal radiation and ramped wall temperature but without a heat source.
Due to their assumption regarding the relative heat flux, the dimensionless form of their energy equation is identical to our Eq. (11) with the radiation parameter Ra instead of Q. As the corresponding initial and boundary conditions are also identical to our conditions (14)₂ and (15)₂, we directly take over the temperature field and its Laplace transform T̄(y, s) [2], to be used later for the velocity. The auxiliary function of arguments (y, t; a, b) appearing in this solution is identical to that of Ghara et al. [5, Eq. (21)] for Q = Ra.

Species concentration

Applying the Laplace transform to Eq. (12) and using the corresponding initial and boundary conditions, we obtain a two-point boundary value problem (19) for C̄(y, s), the Laplace transform of C(y, t). The solution of the differential equation (19)₁ with the adjoining boundary conditions is given by Eq. (20). By applying the inverse Laplace transform to Eq. (20), we can easily obtain the concentration distribution, which is known in the literature.

Calculation of fluid velocity

Applying the Laplace transform to Eq. (10), and using the initial and boundary conditions (13)₁, (14)₁ and (15)₁ as well as Eqs. (17) and (20), we arrive at a transformed problem (21), (22) for ū(y, s), where ū(y, s) and F(s) are the Laplace transforms of u(y, t) and f(t), respectively. Its solution is Eq. (23). Applying the inverse Laplace transform to Eq. (23) yields the velocity field, Eq. (25). Equation (25) appears not to satisfy the boundary condition (14)₁; to eliminate this drawback, we rewrite (25) in an equivalent form. Moreover, to compare the fluid velocities corresponding to the two ways of applying the magnetic field, we note that the two velocities differ through a term V(y, t). It is important to highlight the following aspects:

(1) If f(·) is a piecewise continuous and positive function, V(y, t) has positive values. Consequently, when the magnetic field is fixed relative to the plate, the fluid velocity is larger than the fluid velocity corresponding to the case when the magnetic field is fixed to the fluid. An opposite trend appears if the function f(·) is negative.

(2) The term V(y, t) is an increasing function of the spatial variable y. Therefore, the fluid velocity is significantly changed in the neighborhood of the plate if the magnetic field is fixed relative to the plate. Consequently, unlike the case when the magnetic field is fixed to the fluid, the fluid at infinity does not remain at rest if the magnetic field is fixed relative to the plate.

Special cases

The thermal and concentration components of the velocity do not depend on the plate motion. However, heat and mass transfer can influence the fluid motion, and we have to know whether their influence is significant or can be neglected in some motions with possible engineering applications.

Motion of the plate with constant velocity (f(t) = H(t))

We take f(t) = H(t) (the Heaviside unit step function) in Eq. (25). After lengthy but straightforward computation, the mechanical component takes a closed form. As was to be expected, the corresponding results are identical to those obtained by Narahari and Debnath [8, Eq. (11a) with a0 = 0] and Tokis [17, Eq. (12)].

Exponentially accelerated motion of the plate (f(t) = H(t)e^{bt})

Introducing f(t) = H(t)e^{bt} into Eq. (25), we obtain the corresponding velocity field. The results are identical to those obtained by Pattnaik et al. [9, Eq. (13)] with a = b, λ = M², 1/Kp = 0 and γ = 0 in the absence of thermal and concentration effects and for the case when the magnetic field is held fixed relative to the fluid. Indeed, by assigning suitable forms to f(·), we can determine exact solutions for any motion of this type with technical relevance.
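Since the closed-form expressions themselves are not reproduced above, the Laplace-transform machinery can be checked numerically, as sketched below. The transform C̄(y, s) = exp(−y√(Sc(s + R)))/s used here is the standard solution of the transformed species problem with C(0, t) = 1 and C → 0 as y → ∞; it is stated as an assumption, since Eq. (20) itself is not shown, and the parameter values are illustrative.

import mpmath as mp

# Numerical check of the Laplace-transform solution for the species concentration.
# C_bar(y, s) = exp(-y*sqrt(Sc*(s + R)))/s is ASSUMED to be the transformed
# solution (the paper's Eq. (20) is not reproduced in the text).
Sc, R = 0.6, 2.0                      # illustrative Schmidt and reaction parameters

def C_bar(s, y):
    return mp.exp(-y * mp.sqrt(Sc * (s + R))) / s

def C_closed(y, t):
    # Standard table inverse of exp(-a*sqrt(s + b))/s with a = y*sqrt(Sc), b = R.
    a, b = y * mp.sqrt(Sc), mp.mpf(R)
    return 0.5 * (mp.exp(-a * mp.sqrt(b)) * mp.erfc(a / (2 * mp.sqrt(t)) - mp.sqrt(b * t))
                  + mp.exp(a * mp.sqrt(b)) * mp.erfc(a / (2 * mp.sqrt(t)) + mp.sqrt(b * t)))

y, t = 0.5, 1.0
numeric = mp.invertlaplace(lambda s: C_bar(s, y), t, method='talbot')
print(float(numeric), float(C_closed(y, t)))  # the two values should agree closely

The same inversion routine can, in principle, be pointed at the transformed velocity ū(y, s) to cross-check the analytical inversion for any chosen f(t).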
Numerical results and discussion

The unsteady-free convection flow of an incompressible, viscous, electrically conducting fluid is studied. The fluid is near an infinite vertical plate with ramped temperature and constant concentration. The motion of the plate is a rectilinear translation with an arbitrary time-dependent velocity. Closed-form solutions for the dimensionless temperature, concentration, and velocity fields of the fluid are obtained. The influence of a transverse magnetic field that is fixed relative to the fluid or to the plate is studied. It is important to note that the influence of the flow parameters on the fluid flow differs significantly depending on whether the magnetic lines of force are fixed relative to the fluid or to the plate. Finally, two particular cases for the translational motion of the plate are considered, namely, translation with constant velocity and exponentially accelerated motion.

Numerical results for the velocity have been computed for several values of the magnetic parameter M, thermal Grashof number Gr, mass Grashof number Gm, chemical reaction parameter R, and heat source parameter Q. The velocity profiles versus the spatial variable y at time t = 15 and for constant plate velocity (f(t) = H(t)) are shown in Figs. 1, 2, 3, 4, and 5. The velocity profiles u(y, t) are plotted for the magnetic field fixed to the fluid (ε = 0) and to the moving plate (ε = 1).

The influence of the magnetic field M on the velocity profiles is presented in Fig. 1. It is known that, under the influence of a magnetic field on an electrically conducting fluid, a resistive force (the so-called Lorentz force) arises. This force tends to slow down the fluid motion in the boundary layer. It can be seen from Fig. 1 that the fluid velocity decreases as the magnetic parameter M increases. In addition, it is noted that if the magnetic field is fixed to the fluid (ε = 0), the values of the fluid velocity are lower than in the case when the magnetic field is fixed to the plate (ε = 1).

The effects of the thermal Grashof number Gr and the mass Grashof number Gm on the fluid velocity are shown in Figs. 2 and 3. For both parameters, the velocity has a maximum value in the vicinity of the plate and tends towards the value h(t) for large values of the spatial coordinate y. It is further observed that the values of the fluid velocity increase with increasing values of Gr and Gm. The influence of the dimensionless chemical reaction parameter R on the fluid velocity is presented in Fig. 4, where it is observed that the fluid velocity decreases with increasing values of the parameter R. In Fig. 5, we have plotted the velocity profiles versus y for two values of the heat absorption parameter Q. It can be seen from these curves that the velocity decreases with increasing values of the parameter Q.

In Fig. 6, we have plotted the velocities um(y, t), um(y, t) + uC(y, t) and um(y, t) + uC(y, t) + uT(y, t) versus y to investigate the contributions of the mechanical, thermal, and concentration components of the velocity to the fluid motion. It is observed that the contributions of the mechanical, thermal, and concentration components are significant and cannot be neglected.

Conclusions

The unsteady-free convection flow of an incompressible, viscous and electrically conducting fluid has been studied. The fluid is near an infinite vertical plate with ramped temperature and constant concentration.
Closed-form solutions for the dimensionless temperature, concentration, and velocity fields of the fluid are obtained. The influence of a transverse magnetic field that is held fixed relative to the fluid or to the plate is studied. The motion of the plate is a rectilinear translation with an arbitrary time-dependent velocity f(·); by assigning suitable forms to f(·), we can determine exact solutions for any motion of this type with technical relevance. The plate temperature changes as a time-ramped function and the concentration on the plate is constant. The resulting coupled partial differential equations are written in non-dimensional form and are solved by means of Laplace transforms. Through graphical illustrations for the case when the plate is moving with uniform velocity, the influence of the magnetic field and of the system parameters on the fluid velocity is brought to light. Some useful conclusions of the study are as follows:

• If the magnetic field is fixed relative to the moving plate, the fluid velocity differs significantly from the case when the magnetic field is fixed relative to the fluid.
• At large distances from the plate, the fluid will be at rest when the magnetic field is fixed relative to the fluid, but it attains a finite non-zero value if the magnetic field is fixed relative to the plate.
• For increasing values of the Hartmann number, the fluid velocity decreases; therefore, a stronger magnetic field leads to slower flows.
• The fluid velocity increases with increasing values of the thermal Grashof and mass Grashof numbers and decreases with increasing values of the chemical reaction and heat absorption parameters R and Q.
• The contributions of the mechanical, thermal, and concentration components of the velocity to the fluid motion are significant and cannot be neglected.
2019-04-22T13:08:07.405Z
2018-03-01T00:00:00.000
{ "year": 2018, "sha1": "2b85506ac7cdd399fc8f29b018b1b148a5094a27", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40065-017-0187-z.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "b4cd46742cbb9cc4bdbf5e4269f4431891adbb37", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
268528271
pes2o/s2orc
v3-fos-license
Effects of Additional Protein Intake on Lean Body Mass in Patients Undergoing Multimodal Treatment for Morbid Obesity

(1) Multimodal treatment is a standard treatment for patients with obesity. However, weight loss also leads to reductions in fat-free mass. The aim was to investigate whether additional protein intake contributes to better preservation of lean body mass (LBM). (2) A total of 267 obesity patients (age 45.8 years; BMI 47.3 kg/m²) were included in this analysis. For the first 12 weeks of the program, patients were given a formula-based diet of 800-1000 kcal per day. Patients were divided into a control group (CG) (n = 148) and a protein group (PG) (n = 119). The PG was characterized by an additional protein intake with the aim of consuming 1.5 g of protein per kilogram of normalized body weight, whereas the CG had a protein intake of 1 g/kg/d. Bioelectrical impedance analysis was performed at the beginning (t0) and after 12 weeks (t1) of the program. (3) There were no significant differences between the groups with respect to weight loss (p = 0.571). LBM was also significantly reduced in both groups, without significant differences between CG and PG. (4) Increased protein intake had no significant effect on the body composition of morbidly obese patients during a 12-week formula-based diet and multimodal treatment.

Introduction

Obesity is one of the most relevant diseases in society today. It has reached pandemic proportions, affecting more than one billion people worldwide [1]. Obesity is described by the World Health Organization (WHO) as an accumulation of adipose tissue that exceeds normal levels and is associated not only with increased morbidity, but also with increased mortality [2].

A BMI ≥ 40 kg/m², or ≥ 35 kg/m² with obesity-related health conditions, corresponds to morbid obesity [2]. The abnormal increase in fat tissue is associated with a large variety of comorbidities such as type 2 diabetes mellitus, cardiovascular diseases, cancer, and mental disorders like depression. In addition, high body weight often leads to joint problems, which can result in a lack of exercise and thus reduced energy expenditure [3].

The goals of obesity treatment are the improvement of comorbidities, the reduction of risk factors for comorbidities, the improvement of quality of life and the reduction of work absences [4]. A multimodal approach has been established as a nonsurgical treatment method for obesity, comprising nutritional, behavior, and exercise therapy [4].

As part of nutritional therapy, patients are prescribed a calorie-restricted diet, as this is essential for inducing weight loss [5]. However, a problem of diet-induced weight reduction is the associated loss of functional muscle mass. In addition to functional capacity and quality of life, muscle mass is also an important determinant of basal metabolic rate. Thus, a reduction in muscle mass leads to a decrease in metabolic rate [6]. In turn, a low basal metabolic rate can increase the risk of weight regain [7], which may even consist of a regain of fat rather than muscle mass [8]. A goal of nutritional therapy should therefore be to preserve as much muscle mass as possible during weight loss while maximizing the reduction in fat mass.
Changes in body composition can be detected by using bioelectrical impedance analysis (BIA). It allows quantifying fat mass and lean body mass (LBM), which includes muscle mass. Compared to other methods such as dual-energy X-ray absorptiometry (DXA), total body water estimates, computed tomography and magnetic resonance imaging (MRI), it requires little time, is simple to apply, relatively inexpensive and can be very accurate depending on the device used [9].

A diet-related factor that can affect the loss of LBM is the amount of ingested protein [10]. With respect to age-related loss of LBM, an elevated protein intake of up to 2.0 g/kg/day is recommended, which is markedly above the 0.8 g/kg/day recommended dietary allowance for adults. Adequate protein intake may also be required to prevent the LBM loss associated with calorie restriction, especially when very low-calorie diets (VLCDs) are used [11]. Accordingly, calorie-restricted formula diets have been used to provide a minimum amount of protein during periods of intense calorie restriction [12].

However, it remains unclear whether further increasing protein intake during calorie restriction can reduce the loss of LBM during weight loss. Previous studies provided inconsistent results. Some found that increased protein intake during caloric restriction helps maintain LBM compared to normal protein intake [13][14][15], especially when combined with physical activity [16,17]. However, other studies have failed to show the positive impact of additional protein intake [18,19]. Most of these studies used experimental designs and recruited rather small groups of participants with a BMI between 25 and 43 kg/m². It also remains unclear as to what degree findings may translate into actual practice of weight loss treatment.

Therefore, the aim of our retrospective cohort study was to analyze real-world data from an established obesity center that offers an intensive multimodal treatment program with an initial 12-week very low-calorie diet (VLCD). At one point (i.e., in June 2018), the treatment regime was permanently altered by introducing a daily intake of additional protein powder. We wanted to compare the corresponding cohorts (i.e., the one formed prior to with the one formed after the introduction of additional protein powder) to determine whether the additional protein intake leads to a significantly higher preservation of LBM during weight reduction.

Materials and Methods

The present paper reports data from an ongoing observational study that was designed to prospectively evaluate a publicly available, nonsurgical weight loss treatment program for patients with morbid obesity. The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethical Review Board of the Saxonian Medical Association (protocol code EK-B-07/10-1, date 29 March 2010) [20]. It comprises a 12-month multidisciplinary lifestyle intervention with a very low-calorie diet, a 5-year follow-up care with mandatory annual checkups and a prospective evaluation. Written informed consent was taken from every patient prior to inclusion.

The primary endpoint was the change in LBM after 12 weeks of therapy. As secondary analyses, subgroups were formed to break down the heterogeneous sample and to analyze the effects in individual patient groups.
Subjects

We analyzed data from 595 patients who participated between 2013 and 2020 in a one-year multimodal treatment program for morbid obesity. The requirements for participation were a BMI > 35 kg/m² with comorbidities or a BMI > 40 kg/m² and an age between 18 and 70 years. Patients with immobility, pulmonary or cardiological insufficiency, or binge eating disorder, as well as female applicants who were pregnant or breastfeeding, were excluded from the program. The treatment includes a formula-based low-calorie diet in the first 12 weeks.

The amount of protein provided in these first 12 weeks was increased in June 2018, making it possible to form two groups with respect to prescribed protein intake, as shown in Figure 1. The allocation to the groups was based on time of participation and was not fully randomized.

The control group (CG) included 148 patients undergoing the multimodal treatment program between 2013 and 2017, who thus did not receive any additional protein supplementation.

The protein group (PG) included 119 patients who participated in our program after introduction of the additional protein supplementation, i.e., between 2018 and 2020.
Dietary Intervention

The intensive multimodal treatment consisted of nutritional therapy, exercise therapy and behavior therapy. A detailed description can be found in a prior publication [20].

All patients in both groups were prescribed a strict 800 kcal per day formula-based diet using OPTIFAST® (Nestlé Health Science, Frankfurt/Main, Germany) as a complete meal replacement during the first six weeks. Between the 7th and 12th weeks, a period referred to as the "transition phase", all patients were given the choice to initiate a gradual replacement of the formula-based meals with a balanced, calorie-reduced nutritional diet under the guidance of a dietitian. In week seven, patients could first add a vegetable meal with a maximum of 50 kcal to their formula diet. From the eighth week onwards, a first formula meal could be replaced by a balanced low-calorie meal. It was up to the patient to decide which of the four meals was replaced. This resulted in a daily energy intake of 950-1000 kcal.

In the control group (CG), the daily protein intake was provided solely by the consumption of the OPTIFAST® products. The amount of protein was therefore predetermined by the nutritional content of these formula products and equal in all patients, regardless of their height, weight, or normalized body weight. The formula of the OPTIFAST® products changed several times over the study period. On average, protein intake was around 60 g per day until 2016. Between 2016 and 2019, the daily protein intake ranged between 60 g and 80 g, depending on the combination of flavors that the patient chose from. From 2019 onwards, a new formula was introduced, providing exactly 20 g of protein across all flavors and resulting in a daily protein intake of 80 g via the OPTIFAST® products. On average, the protein intake in the CG was 61.04 g, or 1.00 g per kilogram of normalized body weight at a BMI of 22 kg/m² (see results).

In the protein group (PG), protein intake was individually adjusted using OPTIFAST® products and additional protein powder in order to achieve a relative intake of 1.5 g per kilogram of normalized weight (see the calculation sketch after this subsection). Normalized weight was calculated based on a BMI of 22 kg/m² for each patient. The corresponding total amount of protein was then compared with the amount provided by the formula products, and all patients were advised to substitute the remaining difference by ingesting a tasteless, sugar-free protein powder with at least 80% protein content. Patients were asked to add one to six tablespoons of protein powder to their meals throughout the day. On average, the protein intake in the PG was 92.4 g, or 1.5 g per kilogram of normalized body weight at a BMI of 22 kg/m² (see results).

Exercise Intervention

During the first 12 weeks of our weight loss program, the sports intervention consisted of weekly instructed exercise (20 min gymnastics and 60 min endurance training). The sports therapist also prescribed a daily home exercise program (10-15 min gymnastics) and encouraged a gradual increase in daily physical activity (tracked by using pedometers).
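The protein-dosing rule described above reduces to a short computation. The sketch below implements it; the function name and the example height are illustrative, not part of the study protocol.

# Sketch of the dosing rule: target intake is 1.5 g protein per kg of
# "normalized" body weight, where normalized weight corresponds to BMI 22 kg/m^2.
def protein_plan(height_m, formula_protein_g, target_g_per_kg=1.5, ref_bmi=22.0):
    normalized_weight = ref_bmi * height_m ** 2          # kg at BMI 22
    target = target_g_per_kg * normalized_weight         # g protein per day
    supplement = max(0.0, target - formula_protein_g)    # grams to add as powder
    return normalized_weight, target, supplement

# Example: a 1.74 m patient on a formula diet providing 80 g protein/day
nw, target, extra = protein_plan(1.74, 80.0)
print(f"normalized weight {nw:.1f} kg, target {target:.1f} g/d, supplement {extra:.1f} g/d")

For this hypothetical patient the target is roughly 100 g/d, consistent with the reported PG average lying well above the 80 g provided by the formula products alone.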
Bioelectrical Impedance Analysis

Bioelectrical impedance analysis (Nutriguard-M and NutriPlus Software Version 5.3, Data Input, Pöcking, Germany) was performed according to standard protocols with Bianostic AT® electrodes (Data Input, Pöcking, Germany) at the beginning and at 12 weeks of the program [21]. Assessments were scheduled in the morning and patients were asked to attend in a fasting state, to wear only light clothing and to urinate beforehand. The measurement was carried out after patients had lain on their backs for about 10 min with their legs apart at an angle of approximately 45 degrees and their arms at an angle of approximately 30 degrees. Two electrodes were attached to the right hand and two to the right foot. An alternating current of 800 µA and 50 kHz was passed through the body via these electrodes. The following parameters of body composition were computed: body fat, total body water, lean body mass (LBM), extracellular mass (ECM), body cell mass (BCM), ECM/BCM, basal metabolic rate, and phase angle.

Additional Parameters

Body weight was assessed in the treatment center using a scale from Kern & Sohn (Kern MTS 300K100M standing scale, minimum 2 kg, maximum 300 kg, e = 0.1 kg; Kern & Sohn GmbH, Balingen, Germany). Patients were weighed in the morning and asked to wear only light clothing and no shoes.

The waist/hip ratio was assessed prior to treatment initiation by trained staff adhering to standard protocols and using a Seca 201 ergonomic circumference measuring tape (Seca, Hamburg, Germany).

In patients diagnosed with type 2 diabetes mellitus (T2DM) prior to inclusion, glycated hemoglobin (HbA1c) was assessed during an in-patient stay preceding the initiation of the multimodal treatment.

Statistical Evaluation

The data were analyzed in SPSS (IBM SPSS Statistics, Version 27, IBM Deutschland GmbH, Ehningen, Germany) using the two-sample t-test and the Mann-Whitney test (continuous variables) as well as Fisher's exact test (categorical variables). Two-factor ANOVAs with repeated measures were conducted to examine changes over time between both groups; an equivalent analysis is sketched further below. Statistical significance was assumed at p < 0.05.

Baseline Characteristics

Both groups were comparable with respect to age, sex, education, implantation of a gastric balloon, height, body weight and BMI. BIA-based body composition indicators, including lean body mass (LBM), also did not differ significantly between the control and protein groups at the beginning of the program (Table 1). The average body weight was 136 kg in the control and 132 kg in the protein group, which corresponded to a BMI of 48 kg/m² and 46 kg/m², respectively. The number of female participants was higher in both groups. A gastric balloon was implanted in 92 (62.2%) patients of the CG and 65 (54.6%) patients of the PG.

Protein Intake

In the control group, the prescribed formula-based diet provided an average daily protein intake of 61.0 g/d (SD ± 7.25, range 55-80). This corresponded to an intake of 1.0 g per kg normalized body weight at BMI 22 kg/m² (SD ± 0.11, range 0.65-1.25).

In the protein group, each patient was prescribed a daily protein intake of 1.5 g per kg normalized body weight. The average total intake was 94.2 g/d (SD ± 11.0, range 64.7-127).

The difference in protein intake was significant between the groups (p < 0.001).

Changes in Body Weight, BMI and Lean Body Mass after 12 Weeks

After 12 weeks, body weight as well as BMI had decreased in both groups by 16% (Table 2). This difference was not significant between CG and PG.
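For readers who want to reproduce the group-by-time analysis outside SPSS, the sketch below runs an equivalent mixed (between x within) ANOVA with the pingouin package. The data frame is a fabricated toy example, and pingouin is merely one of several libraries implementing this design.

import pandas as pd
import pingouin as pg

# Toy long-format data: each subject measured at t0 and t1, nested in one group.
df = pd.DataFrame({
    "id":    [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group": ["CG"] * 6 + ["PG"] * 6,
    "time":  ["t0", "t1"] * 6,
    "lbm":   [78.0, 72.1, 69.5, 64.0, 81.2, 74.9, 66.3, 61.5, 74.8, 68.9, 70.2, 65.0],
})
# Mixed ANOVA: 'group' is the between-subject factor, 'time' the repeated factor;
# the group x time interaction tests whether LBM loss differs between CG and PG.
aov = pg.mixed_anova(data=df, dv="lbm", within="time", subject="id", between="group")
print(aov[["Source", "F", "p-unc"]])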
Lean body mass (LBM) decreased significantly in both groups. The magnitude of this decrease was a total of 6 kg, which corresponded to a relative loss of 8%. Statistically, the difference in LBM loss between the two groups was not significant (Table 2).

To explore whether certain groups of patients benefited from the protein supplementation, we performed several subgroup analyses with respect to sex, age, height, gastric balloon as well as baseline BMI and baseline LBM. It was found that added protein supplementation had no significant effect on LBM in men, women, older patients (i.e., age ≥ 56 years), taller patients (i.e., ≥ 1.74 m), patients with very high baseline BMI (≥ 51.6 kg/m²), high baseline lean body mass (i.e., ≥ 83.0 kg), low baseline lean body mass (i.e., ≤ 57.1 kg), patients with gastric balloon and patients without gastric balloon (see Supplementary Materials).

Discussion

Some studies showed that increasing protein intake during energy-restricted diets has beneficial effects on body composition and the preservation of lean body mass (LBM) [13,14,17,22-24]. In contrast, we found that an increased protein intake of 1.5 g as compared to 1 g per kilogram of normalized weight neither affected weight loss nor LBM loss in the course of a 12-week multimodal treatment for morbid obesity with a formula-based diet of 800-1000 kcal per day.

Indeed, there are several factors known to affect body composition during weight loss that might help explain why the beneficial effect of increased protein intake could not be replicated in the present study.

One factor could be the amount of weight loss. In our study, the weight loss was 22 kg at 12 weeks. In previous studies, it ranged between 2 kg and 11 kg [14-16,18,25-30]. One reason for the higher loss of body weight and LBM could be the very low calorie intake (i.e., 800 kcal/d in the first 6 weeks). Larsen et al. found no significant differences in LBM between a protein and a control group in the course of a 690 kcal/d diet [31]. In their meta-analysis of very-low-calorie ketogenic diets (VLCKDs), Muscogiuri et al. also concluded that VLCKDs, which are characterized by a low carbohydrate content (<50 g/day), 1-1.5 g of protein/kg of normalized body weight, 15-30 g of fat/day and a daily intake of about 500-800 calories, do not have a better effect on conserving LBM compared to other weight management interventions [32]. In contrast, Tang et al. recorded a significantly greater preservation of LBM in a high-protein group compared with a normal-protein group in the course of a 2300 kcal/d diet [14]. This suggests that a higher calorie intake might be required to maintain LBM during a weight loss intervention.

The ratio of LBM loss to total weight loss should also be considered. The amount of LBM loss was about 8% in both our study groups. In comparison, previous studies reported losses between 12 and 36% of total body weight loss [14-18,30,33,34]. This markedly reduced LBM loss in our study could be due to multimodal treatment components such as the type and intensity of the exercise intervention. During the first 12 weeks of our weight loss program, the sports intervention consisted of 20 min/week of gymnastics exercises and 60 min/week of endurance training. In combination with the prescribed home exercise of 10-15 min daily gymnastics, it approached the WHO recommendation for physical activity of 150-300 min/week [35]. Accordingly, Layman et al.
reported less LBM loss in their protein + exercise group with a minimum of 150 min/week walking plus 60 min/week resistance training [25]. In a study by Verreijen et al., LBM even increased in the protein + exercise group with 180 min of resistance training per week [29]. It is thus possible that the exercise component had an effect on LBM in both our cohorts, and this might have masked possible effects of the protein supplementation, analogous to ceiling effects in pharmacology.

Another factor is the protein intake itself. In previous studies, control groups had a protein intake of 0.8 g/kg/d [17,24,25,29], while the control group in our analysis consumed an average of 1.0 g per kg normalized body weight per day with OPTIFAST® products. Ogilvie et al. found that increasing protein intake from 0.8 g to 1.0 g already had a significant effect on maintaining LBM [24]. It is unclear whether a daily intake of 1.0 g possibly represents a threshold value for protein intake, above which a further increase does not provide any additional benefit [12].

Lastly, patient adherence could be a contributing factor. The formula products as well as the additional protein powder had to be financed by the patients themselves. This aspect could have had an influence on adherence to therapy. In this study, it could not be verified whether the patients followed the recommendations on protein supplementation. One characteristic that is often associated with obesity is "underreporting" [36]. This describes the underestimation or reporting of less than the actual amount of food and the associated calorie intake [36]. It is therefore possible that the reported dietary behavior of the patients during the therapy program differed from the actual food intake. Accordingly, the energy balance model would have predicted reduced weight loss in the protein group due to the calories added via the additional protein supplementation (in case of adherence).

Patient adherence in the protein group could have additionally been impacted by the coronavirus pandemic, as treatment had to be delivered remotely for some time [37].

A strength of our study is the real-world setting, since patients took part in an established multimodal treatment program. This allowed us to determine the effect of altering a single factor (i.e., additional protein supplementation) within a complex treatment constellation involving many other factors such as motivational support, exercise and nutritional counseling. Another strength is the large study group size.

A major limitation is the insufficient randomization due to the study design (i.e., observational cohort study). However, the introduction of additional protein supplementation was a fixed change to the treatment protocol and was established at a random point within the study period. Thus, the present study can be considered a form of quasi-experiment or natural study with quasi-randomization. Cohort effects have to be considered, nevertheless. The control cohort participated in the program between 2013 and 2017 and the protein cohort between 2018 and 2020. With time, therapeutic staff naturally changed and techniques improved. Also, the program itself might have become more popular and attracted more and different patient populations. And some of our patients took part in the program during the pandemic.
Further limitations are the lack of adherence quantification and the measurement of lean body mass via BIA. Small effects on muscle mass might not be detectable by BIA [38]. Future studies should therefore focus on other measuring tools of muscle quality and function, such as DXA, MRI and functional exercise tests, and should carry out adherence assessments.

It also remains possible that an increased protein intake has positive effects on the body other than the preservation of muscle functioning. For example, in their analysis, Weigle et al. [39] showed that increasing protein intake from 15% to 30% of daily energy intake leads to better satiety. This also resulted in a significantly lower calorie intake, significant changes in weight loss and leptin sensitivity in the central nervous system [39]. In addition, protein increases food-induced thermogenesis more than carbohydrates and fats [40], as the consumption of energy and oxygen is increased during protein metabolism, thus increasing thermogenesis [41]. These processes also increase the feeling of satiety [42]. However, Englert et al. [18] point out in their discussion that the increase in satiety and food-induced thermogenesis due to increased protein intake can contribute to a higher energy deficit and thus to a loss of LBM.

Conclusions

In conclusion, our results suggest that prescribing additional protein supplementation does not help preserve lean body mass in the course of a 12-week multimodal treatment including a formula-based VLCD that already provides a daily intake of 1.0 g of protein per kilogram of normalized body weight.

Figure 1. Flow chart of the study population selection.

Table 1. Study population characteristics at the beginning of the program.

Table 2. Changes in body weight and BMI. a: Differences between t0 at baseline and t1 at 12 weeks. b: Differences between the control group (CG) and protein group (PG) at 12 weeks. Differences were tested using two-factor ANOVA with repeated measures.
2024-03-20T15:07:07.399Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "9ce152e3ac201538185a2b9fa21bd69ec1ab913d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/16/6/864/pdf?version=1710578563", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dba85e1e1fc95fd916dd4f8c514c3753d59d757c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
18477248
pes2o/s2orc
v3-fos-license
Ceramide function in the brain: when a slight tilt is enough

Ceramide, the precursor of all complex sphingolipids, is a potent signaling molecule that mediates key events of cellular pathophysiology. In the nervous system, the sphingolipid metabolism has an important impact. Neurons are polarized cells and their normal functions, such as neuronal connectivity and synaptic transmission, rely on selective trafficking of molecules across the plasma membrane. Sphingolipids are abundant on neural cellular membranes and represent potent regulators of brain homeostasis. Ceramide intracellular levels are fine-tuned, and alteration of the sphingolipid-ceramide profile contributes to the development of age-related, neurological and neuroinflammatory diseases. The purpose of this review is to guide the reader towards a better understanding of the sphingolipid-ceramide pathway system. First, ceramide biology is presented, including structure, physical properties and metabolism. Second, we describe the function of ceramide as a lipid second messenger in cell physiology. Finally, we highlight the relevance of sphingolipids and ceramide in the progression of different neurodegenerative diseases.

Dihydroceramide is an early intermediate in the de novo ceramide biosynthesis. Considered the innocuous precursor of ceramide, dihydroceramide differs from ceramide only by reduction of the C4-5 trans-double bond in the sphingoid backbone, a difference that abolishes [3] or reduces [4] its biological activity when compared with the ceramide moiety. The introduction of a trans-double bond between C4 and C5 results in the bioactive molecule of ceramide. This reaction is catalyzed by the enzyme (dihydro)ceramide desaturase, which is localized in the cytosolic leaflet of the endoplasmic reticulum (ER) membrane [5,6]. In this way, cells can fine-tune the amount of biologically active ceramide. The presence of the double bond in the sphingosine chain determines the tilt of ceramides in the membrane and enables the lipid to interact with enzymes such as hydrolases and phosphatases [7]. Moreover, unsaturation in the sphingoid backbone augments intramolecular hydration/hydrogen bonding in the polar region. This may allow the close packing of the ceramide molecules, which exhibit a tighter intramolecular interaction than comparable lipids [8][9][10]. This higher packing density of ceramides within the lipid bilayer affects the physical properties of membranes [11].

Fig. 1 Chemical structure of sphingoid bases (sphinganine, sphingosine, phytosphingosine), ceramide species (dihydroceramide, ceramide and phytoceramide) and complex sphingolipids. Sphingomyelin, synthesized by the transfer of the phosphorylcholine moiety to the C-1 hydroxyl group of ceramides, is the only cell membrane phospholipid not derived from glycerol. Alternatively, modification of a ceramide by addition of one or more sugars directly connected at the primary alcohol group yields complex glycosphingolipids. Galactosylceramide and glucosylceramide (cerebrosides) have a single monosaccharide (galactose or glucose) as polar head group; sulfatides are the sulfuric acid esters of galactocerebrosides. Addition of a galactose to glucosylceramide gives rise to lactosylceramide, precursor of globo-, ganglio- and lactosides. Globosides contain multiple sugar moieties. Ganglio- and lactosides have complex oligosaccharide core structures with one or more sialic acids in the polar head.
Short-chain ceramide

Synthetic short-chain ceramides (N-acyl chains of 2 to 8 carbon atoms) are commonly used to mimic the mechanisms of action of naturally occurring long-chain ceramides, which are highly hydrophobic compounds. Short-chain ceramides are water-soluble and membrane-permeable and can easily be used as experimental tools within living cells [12][13][14][15][16]. Small amounts of C2-ceramide are normal components in brain (10 pmol/g) and liver (25 pmol/g) [17], although the metabolic origin and physiological activity of this short ceramide are uncertain. NMR characterization of C-2 and C-18 ceramides showed that the conformation of the polar region of the two molecules is the same [9]. Since the interaction between ceramides and their ligand molecules probably occurs through the polar head, the maintenance of the headgroup conformation irrespective of N-acyl chain length is enough for C-2 ceramides to reproduce most of the signaling effects of long-chain ceramides. However, the length of the fatty acyl chain significantly modifies the biophysical properties of the ceramide moieties [18], and in some reports long- and short-chain ceramides have been found to have different biological effects [19,20].

The major difference between short and long ceramides lies in the geometrical shapes they adopt at the membrane level, which consequently give rise to different behaviors. The hydrophobic portion of C-2 is smaller than the polar headgroup. Therefore, C-2 has a shape that favors a positive curvature in a lipid monolayer [21]. Long-chain ceramides are cone-shaped molecules with opposite geometrical properties, which induce a negative curvature of the two halves of the bilayer towards the aqueous milieu, leading to membrane trafficking via vesiculation and fusion [22,23]. Moreover, long-chain ceramides increase the order of the acyl chains in the bilayers, thus decreasing fluidity and stabilizing the membrane [24][25][26]. Conversely, short-chain ceramides perturb the structural order of the lipid bilayer. Long-chain ceramides are immiscible with phospholipids, while short-chain ceramides mix much better and are therefore able to spontaneously overcome membrane barriers [27]. Once inside the cell, since they possess the appropriate stereochemistry, short ceramides might bind target proteins normally inaccessible to the longer species. On the contrary, naturally occurring long-chain ceramides are eminently hydrophobic even compared to other lipid species, and as a consequence their concentrations in the cytosol are extremely low. This hydrophobicity of ceramides justifies the need for a ceramide transfer protein (CERT) in cells [28]. CERT localizes inside the cell, and modulation of its activity may result in significant changes in ceramide levels [62]. Therefore, since short-chain ceramides behave as soluble amphiphiles [29], they are suspected to have cellular effects that cannot be extrapolated to natural ceramide species (mainly insoluble amphiphiles), and their use might lead to confusion regarding the role of ceramide in cellular signaling.

Ceramides as precursors of sphingolipids

Free ceramides are molecules known to exert a wide range of biological functions in many of the most critical cellular events, including growth, differentiation, apoptosis and oncogenesis. Ceramides are the core structure of a class of complex lipids called sphingolipids, ubiquitous components of eukaryotic cell membranes [30]. Sphingolipids were initially described in brain tissue in the second half of the 19th century [31].
The name sphingolipids denotes their enigmatic (namely sphinx-like) nature that, despite intense research, still remains unclear. Sphingolipids have long been regarded as inactive and stable structural components of the membrane; however, they are now well recognized to be biologically active in processes of cellular biology. Sphingolipids are very heterogeneous and are classified depending on their structural combinations of long-chain (sphingoid) bases, amide-linked fatty acids [32] and hundreds of headgroup variants [33]. Sphingolipids are generated by attachment of different polar headgroups at the primary alcohol group (C1-OH) of a ceramide molecule. Depending on the type of polar group, two major classes are defined: phosphosphingolipids and glycosphingolipids (GSLs) (Fig. 1). The typical phosphosphingolipid in mammalian cells is sphingomyelin (SM), synthesized by the transfer of the phosphorylcholine moiety (from phosphatidylcholine) to the C1-OH of ceramides. Alternatively, modification of a ceramide by addition of one or more sugars yields complex GSLs. As a result of the great heterogeneity in the glycan moiety, much variation exists among GSLs. When a single monosaccharide is present, the GSL is referred to as a cerebroside (also known as monoglycosylceramides). Usually glucose or galactose is attached directly to the ceramide portion of the molecule, resulting in glucosylceramide (GlcCer; glucocerebroside) or galactosylceramide (galactocerebroside), respectively. The sulfuric acid esters of galactosylceramide are the sulfatides. Galactosylceramide and sulfatide are highly enriched in oligodendrocytes and myelin-forming cells compared to other membranes [34]. By contrast, GlcCer is not normally found in neuronal cell membranes. Additionally, a galactose can be transferred by the enzyme lactosylceramide synthase to GlcCer to form lactosylceramide (LacCer) [35,36], which plays a pivotal role as a precursor for the synthesis of complex GSLs [37]. In fact, the common LacCer structure is then elongated by different glycosyltransferases, thereby defining the classes of GSLs that are identified as ganglio-, globo-, lacto- and (neo)-lacto-subtypes according to their specific saccharide core structures. Globosides represent cerebrosides that contain additional carbohydrates, predominantly galactose, glucose or N-acetylgalactosamine (GalNAc). Gangliosides are very similar to globosides except that they also contain one or more sialic acid residues on their carbohydrate chains. Gangliosides comprise approximately 5% of brain lipids and are mainly present in astroglia, followed by neurons and oligodendrocytes. Lacto- and (neo)-lacto-series are GSLs classified on the basis of the core oligosaccharide structures present in their molecules, whose synthesis is initiated by the transfer of N-acetylglucosamine (GlcNAc) onto LacCer [35]. Polar carbohydrate chains of GSLs extend toward the extracellular milieu, forming specific patterns on the surface of cells and contributing to cell recognition during differentiation, development and immune reactions [38]. These different types of sphingolipids can be converted back to ceramide by the removal of the polar headgroup by specific enzymes.

Ceramide generation

Ceramides can be produced in cells either via de novo synthesis or via hydrolysis of complex sphingolipids [39]. The activation of different catabolic enzymes yields ceramide within a few minutes, whereas de novo synthesis produces ceramide over several hours [40].
Different extra- and intracellular stimuli dictate the pathway used for ceramide generation, resulting in distinct subcellular localizations of ceramide and different biochemical and cellular responses.

De novo synthesis of ceramide takes place in the ER

In animal cells, ceramide is synthesized de novo on the cytoplasmic face of the smooth endoplasmic reticulum (ER) [5,41] and in mitochondria [42,43]. The de novo synthesis of ceramides in eukaryotes begins with the condensation of serine and palmitoyl-CoA to form 3-ketosphinganine, through the action of serine palmitoyltransferase (SPT) (Fig. 2). This enzyme is composed of two subunits: Lcb1 and Lcb2. Mutations in the human Lcb1 gene underlie hereditary sensory and autonomic neuropathy, a neurodegenerative disorder of the peripheral nervous system [44]. 3-Ketosphinganine is subsequently reduced to sphinganine, which is N-acylated by ceramide synthase (CerS) to yield dihydroceramide.

CerS represents a key enzyme in the pathway for de novo sphingolipid biosynthesis. Interestingly, these highly conserved transmembrane proteins are also known as human homologues of the yeast longevity assurance gene (LASS1-6). Six different CerSs (CerS1-6) have been identified in vertebrates and plants [46], whereas most of the other enzymes involved in sphingolipid metabolism exist in only one or two isoforms [46]. Each CerS regulates the de novo synthesis of endogenous ceramides with a high degree of fatty acid specificity. In line with the presence of multiple CerSs, ceramides occur with a broad fatty acid chain-length distribution inside the cell. Although some CerSs are ubiquitously expressed, other isoforms present a very specific distribution among tissues, according to the need of each tissue for specific ceramide species [47,48]. CerS1 specifically generates C18 ceramide and is highly expressed in the brain and skeletal muscles but is almost undetectable in other tissues. CerS2 mainly generates C20-26 ceramides and has been found to have the highest expression of all CerSs in oligodendrocytes and Schwann cells, especially during myelination. The selectivity of different CerS isoforms for synthesizing different ceramide species is important, since ceramides with specific acyl chain lengths might mediate different responses within cells [46].

Fumonisins are toxic mycotoxins with a structure very similar to sphingosine or sphinganine, which is a substrate for CerS. Since these fungal metabolites are able to inhibit the CerS reaction, they are extensively used to study the role of ceramide generated through the de novo pathway in the ER [49]. On the contrary, the mitochondrial CerS is not affected by fumonisins, suggesting that its activity is distinct from that of the ER-resident enzyme [42,43]. Newly synthesized ceramides subsequently traffic from the luminal face of the ER to the Golgi compartment, where different polar heads are incorporated into the ceramide molecule to form complex sphingolipids [50].

Ceramide transport from the ER to the Golgi

The high hydrophobicity and low polarity of the ceramide moiety limit the ability of free ceramide to circulate inside the cell or, more generally, in solution. This may explain the occurrence of several isoenzymes of ceramide biosynthesis at different subcellular sites and supports the view that the site of ceramide formation might determine its function. On the other hand, the cell needs to transport ceramide from the ER to the Golgi compartment for the synthesis of GSLs and SM. Ceramides destined for conversion to GSLs appear to reach the Golgi only via the classical vesicular route [28].
The step-wise addition of sugar groups to ceramides is catalyzed by membrane-bound glycosyltransferases and is restricted to the ER-Golgi complex [51]. The synthesis of most GSLs begins with glucosylation of ceramide to form GlcCer at the cytosolic surface of the Golgi [52]. The direction in which GlcCer is trafficked is controversial. GlcCer normally localizes to the trans-Golgi and trans-Golgi network, whereas it remains in the cis-Golgi upon knockdown of FAPP2. Two inhibitors of intra-Golgi membrane trafficking did not affect the synthesis of GSLs. These observations suggest that GlcCer is transported from the cis side of the Golgi to the trans side by FAPP2 in a nonvesicular manner [53]. On the other hand, it has been suggested that GlcCer synthesized at the Golgi is retrogradely transported to the ER, where it is translocated to the lumen, and then transported to the Golgi again [54] for the subsequent synthesis of LacCer and more complex GSLs [55]. Ceramide destined for SM synthesis, in contrast, is delivered from the ER to the Golgi in a nonvesicular manner by the ceramide transfer protein CERT. CERT mediates the transfer of ceramides containing C14-C20 fatty acids but not longer-chain ceramides [59]. This correlates with the presence of C14-20 acyl chain SM in many tissues and cell lines, whereas GSLs are formed from longer ceramides. CERT works as a mediator of sphingolipid homeostasis. Loss of functional CERT in Drosophila affects plasma membrane fluidity and increases oxidative stress [60], and CERT is critical for mitochondrial and ER integrity [61]. Interestingly, CERT has an alternatively spliced isoform characterized by the presence of an additional 26-amino-acid domain responsible for its localization at the plasma membrane and consequent secretion to the extracellular milieu, named CERT L or Goodpasture antigen binding protein (GPBP) [62]. These two isoforms are differentially expressed during development. CERT L is more abundant at early stages of embryonic maturation, and its knockdown leads to severe developmental deficits in muscle and brain because of increased apoptosis [63]. As development progresses, the initially very low levels of CERT gradually increase. Both isoforms can be detected in adult brain [64]. Other reports showed elevated CERT L expression levels to be associated with several autoimmune disorders, e.g., lupus erythematosus, multiple sclerosis, myasthenia gravis and Addison disease [65].

Fig. 2 Overview of the metabolic pathways involved in the synthesis of endogenous ceramide. Ceramide can be formed by de novo synthesis, by degradation of complex SLs or by re-acylation of sphingoid long-chain bases (salvage pathway). The de novo pathway involves several enzymatic steps. Through catabolic pathways ceramide is generated either by hydrolysis of the membrane lipid SM by the SMase enzymes or by lysosomal breakdown of complex GSLs. Ceramide itself is degraded by ceramidase to regenerate sphingoid bases. The sphingosine formed is then phosphorylated and finally degraded to phosphoethanolamine and C16-fatty aldehyde by the action of S1P lyase. A salvage pathway uses the enzyme ceramide synthase to produce ceramide from sphingosine. Once generated, ceramide can serve as a substrate for the synthesis of SM and GSLs or be converted into various metabolites such as sphingosine or Cer1P.
An efficient execution of apoptotic signaling is important to inhibit inflammation and autoimmune responses against intracellular antigens [66]; modulation of CERT/CERT L levels has a direct influence on ceramide levels and could be responsible for balancing cell death during embryogenesis and under pathophysiological conditions. Once delivered to the Golgi apparatus, ceramide spontaneously translocates from the cytosolic to the luminal leaflet for SM synthesis. Formation of SM from ceramide is catalyzed by sphingomyelin synthase (SMS) [67], which transfers the phosphocholine headgroup from phosphatidylcholine onto ceramide, yielding SM as a final product and diacylglycerol (DAG) as a side product [68]. If ceramide is a key metabolic intermediate for sphingolipids with an amide backbone, DAG is the precursor for glycerol-derived phospholipids and, like ceramide, it plays important roles in many signaling pathways. Whether the DAG generated by SMS regulates cellular processes remains unclear. SMS exists in two isoforms: SMS1 faces the lumen of the cis/medial Golgi [69,70] and is responsible for the de novo synthesis of SM [70]; SMS2, which resides in the plasma membrane [68,71], could instead play a more specific role in signal transduction events. In neural cells, de novo SM is mostly synthesized at the plasma membrane, and production at the cis/medial Golgi is less prominent [72,73]. This indicates that the subcellular localization of SM formation is cell-type specific and that SMS activities may be involved in different biological processes.

Catabolic pathways for ceramide production

Besides the de novo pathway, a significant contribution to intracellular ceramide levels also occurs through hydrolysis of complex sphingolipids by activation of different hydrolases [74] (Fig. 2). Ceramides derived from SM catabolism require the activation of sphingomyelinases (SMases) [75], specific forms of phospholipase C, which hydrolyze the phosphodiester bond of SM, yielding water-soluble phosphorylcholine and ceramide [76]. Several SMases have been characterized and classified by their pH optimum, subcellular distribution and regulation. The best studied of these SMases is the acid sphingomyelinase (aSMase), which exhibits optimal enzymatic activity at pH 4.5-5 [77]. This lipase is localized in lysosomes and is required for the turnover of cellular membranes [78]. ASMase is deficient in patients with the neurovisceral form (type A) of Niemann-Pick disease, with consequent abnormal accumulation of SM in many tissues of the body [79]. Besides this lysosomal/endosomal aSMase, a secreted zinc-activated form of aSMase was first identified in serum [80] and found to be secreted by many cell types [81,82]. These two aSMases are differentially glycosylated and processed at the NH2-terminus [72], but they are products of the same gene [81]. Neutral SMases (nSMases) are membrane-bound enzymes with optimal activity at neutral pH. Several isoforms have been characterized. NSMase 1 is localized in the membranes of the ER [83,84]; it is ubiquitously expressed and highly enriched in kidney [85]. NSMase 2 has a different domain structure than nSMase 1 and is specifically highly expressed in brain [86-88]. A third nSMase (nSMase 3) is ubiquitously present in all cell types and distributed mainly in the ER and Golgi membranes [89]. NSMases are further classified as Mg2+/Mn2+ dependent or independent. An alkaline SMase exists only in intestinal cells and is activated by bile salts [90].
The function of these multiple isoforms is still elusive; however, their membrane localization has led to speculation that they may contribute to the modification of local microdomains in the membrane organization during vesicle formation, transport and fusion [91,92].

Salvage pathway

Ceramides can be generated by an alternative acyl-CoA-dependent route (Fig. 2). This pathway relies upon the reverse activity of the enzyme ceramidase (CDase) and is called the "salvage pathway" since catabolic fragments are recycled for biosynthetic purposes [93,94]. As the name suggests, CDase catalyzes the hydrolysis of ceramide to generate free sphingosine and fatty acid. Together with ceramide production, CDase also regulates sphingosine levels. In fact, it is important to note that whereas sphinganine is generated by de novo sphingolipid biosynthesis (Fig. 2), free sphingosine seems to be derived only via turnover of complex sphingolipids, more specifically by hydrolysis of ceramide [5]. The catabolism of ceramide takes place in lysosomes, from where sphingosine can be released [95], in contrast to ceramide, which does not appear to leave the lysosome [96]. Free sphingosine is probably trapped at the ER-associated membranes, where it undergoes re-acylation (condensation with a fatty-acyl-CoA) to again generate ceramide. This "reverse" activity is carried out by the same CDase [96,97]. As with SMases, different CDases have been identified associated with different cellular compartments according to the pH at which they achieve optimal activity (acid, neutral and alkaline). Acid CDases (aCDase) are lysosomal [98-100], whereas neutral/alkaline CDases (nCDase and alCDase) have been purified from mitochondria [42,101] and nuclear membranes [102]. CDases have been isolated from soluble fractions of rat brain [103], mouse liver and human kidney. A purely alkaline CDase has been localized to the Golgi apparatus and ER [104,105]. This variability in CDase subcellular localization and tissue distribution suggests that these enzymes may have diverse functions in the biology of the cell. Neutral/alkaline CDases have been shown to catalyze the reverse reaction to generate ceramide from sphingosine and fatty acids [97,104,106,107], whereas the acid isoform resides in the lysosome. Mitochondria are also capable of generating ceramide via the action of reverse CDase [42,101,108].

Sphingosine-1-phosphate and ceramide-1-phosphate

Phosphorylation/dephosphorylation reactions represent a mechanism through which cells respond to specific changes: the phosphorylated state of a molecule often exhibits effects that are diametrically different from those of the unphosphorylated state. Besides being used to resynthesize ceramide, sphingosine can be converted into sphingosine-1-phosphate (S1P) via sphingosine kinase, an enzyme that exists in the cytosol and ER [109,110] (Fig. 2). The terminal catabolism of sphingosine involves the action of S1P lyase, which degrades S1P to form ethanolamine phosphate and a fatty aldehyde [111]. Sphingosine is associated with growth arrest [112], whereas its phosphorylated form, S1P, is able to promote cell proliferation and prevent programmed cell death [110] (for a review, see [113]). Ceramide and S1P thus exert effects of opposite nature in their regulation of apoptosis, differentiation, proliferation and cell migration [114,115].
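The generation routes described in this section (de novo synthesis, SMase-mediated catabolism, the salvage pathway, and the phosphorylation branches summarized in the Fig. 2 caption) can be sketched as a small reaction graph. The sketch below is a didactic summary only: enzyme and metabolite names follow the text, the reductase and desaturase steps are standard sphingolipid biochemistry included for completeness, and the code itself is not from the source.

```python
# Minimal reaction graph of ceramide metabolism as described in the text
# (cf. Fig. 2); illustrative only. Edges: (substrate, product, enzyme).
REACTIONS = [
    ("serine + palmitoyl-CoA", "3-ketosphinganine", "SPT"),
    ("3-ketosphinganine", "sphinganine", "3-ketosphinganine reductase"),
    ("sphinganine", "dihydroceramide", "CerS"),
    ("dihydroceramide", "ceramide", "desaturase"),
    ("SM", "ceramide", "SMase"),                             # catabolic route
    ("complex GSLs", "ceramide", "lysosomal exohydrolases"),
    ("ceramide", "sphingosine", "CDase"),
    ("sphingosine", "ceramide", "CDase (reverse) / CerS"),   # salvage pathway
    ("sphingosine", "S1P", "sphingosine kinase"),
    ("S1P", "phosphoethanolamine + fatty aldehyde", "S1P lyase"),
    ("ceramide", "SM", "SMS"),
    ("ceramide", "Cer1P", "CERK"),
]

def routes_to(metabolite: str):
    """All one-step reactions yielding the given metabolite."""
    return [(s, e) for s, p, e in REACTIONS if p == metabolite]

# Ceramide as the hub: three independent routes feed the ceramide pool.
for substrate, enzyme in routes_to("ceramide"):
    print(f"{substrate} --[{enzyme}]--> ceramide")
```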
The concentrations of ceramide and S1P are counter-balanced by enzymes that convert one lipid into the other, and their levels are believed to set the balance between cell viability and cell death. However, this is not the only balance the cell can use to ensure tissue homeostasis. Ceramides can also be phosphorylated by the enzyme ceramide kinase (CERK) to form ceramide-1-phosphate (Cer1P) [116-119]. As expected, phosphorylation of ceramide to Cer1P switches the properties of ceramide: comprehensive studies indicate that Cer1P inhibits apoptosis and can induce cell survival [120-122]. CERK was first observed in brain synaptic vesicles [117] and found to be highly expressed in brain, heart, skeletal muscles and liver [116]. It appears that at least two different CERK isoforms exist in neural tissue: a calcium-dependent enzyme at the plasma membrane level and a second cytosolic enzyme [123,124]. The former enzyme localizes at synaptic vesicles, suggesting a possible role for CERK in neurotransmitter release [116,117,125]. CERK specifically utilizes ceramide transported to the Golgi apparatus by CERT [126]. Stable downregulation of CERT by RNA interference results in a strong decrease in Cer1P levels, suggesting that Cer1P formation mostly relies on ceramide de novo synthesis [126]. Together with CERK and Cer1P phosphatases, CERT could modulate an appropriate balance between the intracellular levels of ceramide and Cer1P. However, it is important to mention that short-term pharmacological inhibition of CERT appears to slow down SM synthesis without decreasing Cer1P synthesis [127], suggesting either an alternative route for delivery of ceramide to CERK at the Golgi complex or a process that is dependent on long-term responses. Maintenance of the equilibrium between ceramide and Cer1P seems to be crucial for cell and tissue homeostasis, and accumulation of one or the other results in metabolic dysfunction and disease. Recently, S1P was reported to function not only as an intracellular but also as an extracellular mediator of cell growth through endothelial-differentiation gene family receptors [128]. Cer1P could exert similar functions at the plasma membrane level. Further research is necessary to determine whether ceramide could reach the plasma membrane transported by CERT L, allowing plasma membrane CERK to form Cer1P.

Plasma membrane, not just a lipid bilayer

Structural organization of the membrane

The plasma membrane is the densest structure of eukaryotic cells and defines the outer limit of the cell with respect to its environment. Far from being a passive skin around a cell, plasma membranes are highly dynamic structures with a central role in a vast array of cellular processes [129,130]. The plasma membrane of eukaryotic cells comprises three major classes of lipids: glycerophospholipids, sphingolipids and sterols, principally cholesterol [131]. Glycerophospholipids are the main building blocks of eukaryotic membranes and differ from sphingolipids (ceramide-based lipids) in that they are built on a glycerol backbone [132]. Sphingolipid acyl chains are characteristically highly saturated; this allows them to pack tightly in the lipid bilayer and results in a liquid-ordered phase with little opportunity for lateral movement or diffusion. This characteristic makes sphingolipids well suited to contribute heavily to the structure of the outer leaflet [30].
Conversely, glycerophospholipids are rich in unsaturated acyl chains that are typically kinked; this means they pack loosely, thus increasing the fluidity of the lipid bilayer. The inner leaflet has a higher content of unsaturated phospholipids. This lipid asymmetry in membranes accounts for the greater fluidity of the inner layer relative to the outer layer (Fig. 3). The molar ratio of sphingolipids relative to glycerophospholipids and cholesterol varies between cell types. For instance, GSLs are a very minor component in certain cell types such as erythrocytes, but they have been shown to be particularly abundant in neurons and oligodendrocytes, where they make up 30 % of total lipids in myelin sheaths [133,134]. Cholesterol affects the consistency of the plasma membrane, making the outer surface firm and decreasing its permeability [135]. With its rigid ring structure, cholesterol fills interstitial spaces between fatty acid chains of the nearest phospholipids, restricting their movement. At the same time, cholesterol helps the plasma membrane maintain its fluidity, separating the long saturated fatty acid tails of phospholipids and preventing their condensation. Despite the significance of ceramide metabolism in the synthesis and degradation of sphingolipids, ceramide content is normally very low in cell membranes, and increases in ceramide concentration are highly localized and temporally regulated. The occurrence of ceramide in the lipid bilayer directly affects both the structural organization and the dynamic properties of the cell membrane [11,136].

Lipid rafts

Many cellular processes such as endocytosis, exocytosis and membrane budding involve changes in membrane topology. While biological membranes are typically in a fluid or liquid-disordered state at physiological temperatures, combinatorial interactions between specific lipids drive the formation of dense, liquid-ordered domains, or 'lipid rafts', within membranes [13,130,137,138] (Fig. 3). The characteristics of these microdomains differ from those of the whole membrane. They are generally enriched in lipids with saturated acyl chains, especially SM and cholesterol, which pack tightly within the lipid bilayer [139,140]. These separated regions seem to exist as preformed entities in the membrane of resting cells [141] and are present in different parts of the lipid bilayer [142]. The straight saturated acyl chains of sphingolipids in rafts are more extended than the unsaturated chains of surrounding phospholipids, and as a result lipid rafts extend 1 nm beyond the phospholipid background [143]. The isolation of biologically relevant lipid rafts is problematic. In the past, highly saturated lipid rafts were isolated based on their detergent resistance [144]. More recently, it has been shown that these detergent-resistant membranes (DRMs) are in fact a product of the extraction method and do not reflect any specific membrane structure. Therefore, it is important to recognize that rafts are not equivalent to DRMs [145]. The majority of studies have investigated lipid rafts mainly at the plasma membrane, due to their accessibility from the outside of the cell [146-148]. However, many intracellular organelles contain raft-like domains [144,149-152]. Membranes of the Golgi are rich in cholesterol/SM [153-155], and it has been suggested that rafts function in the sorting of lipids and proteins in the secretory and endocytic pathways.
In particular, raft-like domains are thought to be abundant in the trans-Golgi [152,156] and in late endosomes [151]. Lipid rafts are dynamic structures without any characteristic morphology [157]: during the steady state, rafts may be very small, nanometers in diameter [139,158,159], but upon proper stimuli they can coalesce into large domains, making even micrometer-size rafts [159].

Fig. 3 (caption fragment) ... (4) and phosphatidylethanolamine (3). By contrast, the choline-containing lipids SM (6) and phosphatidylcholine (5) and a variety of glycolipids (7,8) are significant components of the exofacial leaflet of plasma membranes [45]. SM (6), together with cholesterol and different GSLs (7,8), forms highly organized microdomains called lipid rafts on the plasma membrane. Since these microstructures are formed by lipid species with long saturated acyl chains, rafts are rigid platforms which float in the more fluid surrounding membrane that consists of phospholipids with saturated (1) and unsaturated (2) fatty acyl chains and less cholesterol. Lipid rafts are enriched in glycosylphosphatidylinositol (GPI)-anchored proteins (8) at their external surface and studded with transmembrane integral proteins.

The fundamental principle by which lipid rafts exert their functions is the segregation or concentration of specific membrane proteins and lipids into distinct microdomains [147] that represent specialized signaling organelles within the plasma membrane [160]. These dynamic membrane sites have been implicated in mechanisms of cell polarity [161], membrane trafficking including endocytosis [149,162] and exocytosis [163-165], and in intracellular signaling [160,166-168].

Ceramide-enriched platforms

As a highly hydrophobic second messenger, ceramide presumably acts at the level of lipid rafts in transducing external signals. Rafts are the primary site of action of the enzyme SMase, which releases ceramide from SM [172] (Fig. 4). The tight interaction between SM and cholesterol serves as the basis for raft formation. Ceramides, on the other hand, mix poorly with cholesterol and have a tendency to self-associate and segregate into highly ordered microdomains [13,173]. The nature of ceramide has a strong impact on membrane structure. In fact, long-chain saturated ceramide molecules are intermolecularly stabilized by hydrogen bonding and van der Waals forces [25,174] and form liquid-ordered domains that induce lateral phase separation of fluid phospholipid bilayers into regions of liquid-crystalline (fluid) phases. Moreover, the small size of the ceramide polar headgroup results in low hydration and allows ceramide molecules to pack tightly, avoiding any interference with surrounding lipids [175]. In fact, it has been shown that as little as 5 mol% ceramide is sufficient to induce ceramide partitioning in the lipid bilayer and to drive the fusion of small inactive rafts into one (or more) larger active ceramide-enriched membrane platforms [174]. Among lipids, DAG is structurally similar to ceramide. DAG is produced in the cell membrane by hydrolysis of phosphatidylinositol 4,5-bisphosphate [176] and phosphatidylcholine [177]. Both are very minor components of the membrane, being formed and removed rapidly at specific locations in response to signaling. Like ceramides, DAGs also give rise to phenomena of lateral phase separation in small domains within phospholipid bilayers.
Both ceramide [178] and DAG [179] have a small polar head and a large hydrophobic region; they tend to bend the bilayer and to facilitate the formation of non-bilayer (non-lamellar) phases, which are important for cellular processes such as pore formation, vesicle fusion and budding, as well as membrane protein function. Also, both lipids act as second messengers that play important roles in many signaling pathways. DAG is able to induce structural changes in the membrane more efficiently than ceramide, requiring as little as 2 mol% [180]. This difference in efficiency is likely due to the different physical properties of these lipids. It is thought that the different proficiencies of ceramide and DAG for inducing membrane structural change through transient destabilization of the lamellar structures provide an opportunity for fine control of membrane properties.

Fig. 4 Scheme of lipid raft reorganization upon ceramide formation by SMase activity. Hydrolysis of SM through the enzyme SMase generates ceramide in the outer leaflet of the cell membrane. Owing to its biochemical features, ceramide mixes poorly with the other raft components and shows self-assembling capability in the membranous environment, forming large distinct ceramide-enriched membrane platforms which serve to reorganize the cell membrane, resulting in clustering of activated receptor molecules.

The ceramide-enriched membrane platforms serve as clustering components to achieve a critical density of receptors involved in signaling. In fact, individual rafts are too small to engage in membrane processes [160,181]. This high density of receptors seems to be required for effective transmission of the signal into cells. For example, CD95 signaling is amplified a hundred-fold by the formation of ceramide-enriched membrane platforms [182]. The neuronal plasma membrane is particularly enriched in lipid rafts [183]. More than 1 % of total brain protein is recovered in the lipid raft fraction, whereas less than 0.1 % of total protein is associated with lipid rafts isolated from non-neuronal tissues [184]. In cultured neurons, lipid rafts are distributed throughout the cell surface, including the soma and dendrites. As well as across cell types, lipid and protein raft composition differs according to neuronal developmental stage. Mature neuron lipid raft content is higher than that of immature neurons and astrocytes [185]. Synaptic proteins such as synaptophysin or synaptotagmin localize in lipid rafts [186,187], and lipid rafts are critical for maintaining the stability of synapses and dendritic spines [188]. Neurotransmitter signaling seems to occur through a clustering of receptors and receptor-activated signaling molecules within lipid rafts. Several lipid raft-associated neurotransmitter receptors have been isolated from brain tissues; examples include nicotinic acetylcholine receptors [189], gamma-aminobutyric acid type B receptors [190], α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors and N-methyl-D-aspartate receptors [188,191,192]. Aberrant organization of SM and cholesterol in rafts has been linked to loss of synapses and changes in nerve conduction [188]. Depletion of sphingolipids or cholesterol leads to a gradual loss of inhibitory and excitatory synapses and dendritic spines [188]. Rafts also play an important role in neuronal cell adhesion [193], localization of neuronal ion channels [194,195] and axon guidance [196].
In oligodendrocytes, rafts mediate the interaction between myelin-associated glycoprotein on myelin and its receptor on neurons [197].

Ceramide signaling in apoptosis

Apoptosis is an essential process for normal embryonic development and for maintaining cellular homeostasis within mature tissues. A proper balance between the regulation of normal cell growth and cell death is the basis of life. Deregulated apoptosis is a feature of most pathological conditions, such as neurodegeneration, autoimmune disorders and cancer. In neurodegenerative diseases such as Alzheimer's, Parkinson's, Huntington's and prion diseases, aggregated misfolded proteins contribute to the neuronal pathogenesis; in multiple sclerosis, autoimmune mechanisms accompany the demyelination; in HIV-associated dementia, viral products are crucial for neuronal demise. The factors driving neurodegeneration can differ, but these devastating disorders are all characterized by a massive loss of specific populations of neurons or by damage to neuronal transmission. Premature death of terminally differentiated cells such as neurons and oligodendrocytes results in progressive and irreversible functional deficits, since these post-mitotic cells cannot be easily replaced [198]. The role of ceramide in apoptosis is extensive and complex and, despite intense investigation, remains controversial [199]. An increase in ceramide levels leads to cell death [200,201]; in contrast, depletion of ceramide can reduce the progression of apoptosis [202-204]. However, ceramide is indispensable for the proper function of the central nervous system (CNS) [205-207]. Ceramide levels inside the cell determine its dual role: protection and cell sustenance at low concentrations, but death and threat when overproduced. This outlines the importance for cells of maintaining a strict ceramide balance through tight regulation of sphingolipid-based signaling networks. Ceramide can induce apoptosis via different routes, and different intracellular organelles are the targets of its action. SM hydrolysis by neutral and/or acid SMases is known to be a very important pathway for the production of pro-apoptotic ceramides [208]. However, the de novo synthesis pathway has also been reported to be relevant in the generation of a signaling pool of ceramide leading to cellular apoptosis [209-211]. These two pathways can induce apoptosis independently or jointly (Fig. 5). SM hydrolysis generates a rapid and transient increase of ceramide and results in the formation of ceramide-enriched membrane platforms. In contrast, the ceramide de novo pathway requires multiple enzymatic steps and is responsible for a slow but robust accumulation of ceramide over a period of several hours. SMase activation occurs in response to stimulation of cell surface receptors of the tumor necrosis factor (TNF) receptor family upon binding of specific ligands such as TNF alpha, TNF-related apoptosis-inducing ligand (TRAIL) and Fas ligand. SM hydrolysis in response to TNF signals involves both nSMase and aSMase, but their activation occurs through different mechanisms [212,213]. The cytoplasmic tail of TNFR1 contains two distinct portions that differentially associate with nSMase or aSMase [214,215]. Activation of aSMase requires the C-terminal region of TNFR1 identified as the death domain (DD) [216]. The DD associates with the adaptor protein TRADD (TNF receptor 1-associated death domain), which together with another cytoplasmic protein, termed FADD/MORT-1 [217], induces activation of aSMase [218].
ASMase is normally present in the endosomal/lysosomal compartment. However, upon phosphorylation by protein kinase C, aSMase translocates from its intracellular locations to the plasma membrane, where it reaches SM [219]. ASMase is reported to be functional at physiological pH after translocation to the plasma membrane [220]. The ceramide produced by aSMase activates the aspartyl protease cathepsin D [221], which can subsequently cleave the pro-apoptotic Bcl-2 family member Bid. Activation of Bid induces cytochrome c release from mitochondria [222] and activation of caspase-9 and -3, leading to apoptotic cell death by the intrinsic pathway [223]. Conversely, activation of nSMase requires a short motif adjacent to the DD of TNFR1, called the neutral sphingomyelinase domain (NSD). The NSD binds an adaptor protein, FAN (factor associated with nSMase), which couples nSMase to TNFR1 [224]. The ceramide generated by nSMase leads to the activation of ceramide-activated protein kinase (CAPK) [14] and ceramide-activated protein phosphatases (CAPPs) [225], direct downstream targets of ceramide. CAPK, a Ser/Thr protein kinase, is involved in the mitogen-activated protein kinase (MAPK) cascades that induce extracellular signal-regulated kinase (ERK) activation. The ERK cascade leads to cell cycle arrest and cell death. CAPPs, which comprise the serine/threonine protein phosphatases PP1 and PP2A [226], mediate the effects of ceramide through dephosphorylation and inactivation of several substrates, such as the retinoblastoma gene product (RB) [227], Bcl-2 and Akt [228], and through downregulation of the transcription factors c-Myc [229] and c-Jun [230]. Although aSMase and nSMase seem to induce death receptor-dependent and -independent apoptosis through apparently separate mechanisms, both enzymes are activated by the same stimuli, i.e. UV light [231], hypoxia [232,233], radiation [204,234], TNF-related apoptosis-inducing ligands [235] and the DNA-damaging drug doxorubicin [236]. Disruption of rafts or prevention of ceramide generation by inactivation of aSMase renders cells resistant to receptor clustering and apoptosis, indicating that aSMase plays an important role in death receptor-mediated apoptosis [2,237,238]. Accordingly, aSMase-deficient mice are resistant to the induction of apoptosis by CD95 [239] and TNF alpha signaling [240]. In contrast, exposure to the chemotherapeutic agent etoposide [211] or cannabinoids [248], retinoic acid [249] and B cell receptor (BcR)-induced apoptosis [250] all involve a large increase in ceramide levels formed specifically through the de novo pathway. However, the downstream targets of de novo ceramide-dependent cell death are largely unknown. In conclusion, evidence suggests that ceramide acts either by changing the physical state and organization of cellular membranes or by direct binding and activation of target proteins. The spatial reorganization of the plasma membrane driven by generation of ceramide may serve to cluster signaling molecules and to amplify death signaling. However, rather than a specific mechanism for apoptosis induction, this process appears to represent a generic mechanism for transmembrane signaling. In fact, receptors that are not involved in apoptosis (IL5, LFA-1, CD28, CD20) [251] can activate the SMase signaling pathway with subsequent raft clustering into microdomains.
Besides its effects at the level of cellular membranes, ceramide is capable of directly binding components that lead to death, such as CAPP, CAPK, protein kinase C-ζ and cathepsin D [252], and mediates the induction of signaling cascades that lead to apoptosis, growth arrest and inflammation.

Aging

Sphingolipids hold a major role in regulating development and lifespan [253], and deregulation of sphingolipid metabolism increases the risk and progression of age-related neurodegenerative disease [254,255]. Since ceramide is the core of sphingolipids, its contribution to cellular pathophysiology is the object of intense study. A close connection between ceramide levels and aging comes from studies carried out on Saccharomyces cerevisiae, where a gene involved in ceramide synthesis has been identified as a regulator of yeast longevity. This gene, called longevity assurance homolog 1 (LAG1), together with LAC1, functions as a key component of CerS in vivo and in vitro [256], and its loss correlates with a marked increase in yeast lifespan [257]. The human homolog LAG1Hs (CerS1) is highly expressed in the brain, testis and skeletal muscles and specifically generates C18-ceramide [46]. This conclusion seems to be supported by cell culture studies in which overexpression of CerS1, with increased C18-ceramide generation, resulted in apoptosis [258]. Interestingly, C18-ceramide generated by CerS1 was found to downregulate the expression of the enzyme telomerase [259]. Telomerase functions by elongating the ends of existing chromosomes and thus preventing cellular senescence. Since cellular aging is dependent on cell division, these enzymes play a critical role in the long-term viability of highly proliferative organ systems [260]. Specifically, C18-ceramide is able to mediate a negative regulation of the human telomerase reverse transcriptase (hTERT) promoter, whereas different ceramides generated by other ceramide synthases do not have such a function. Telomerase is expressed in neurons in the brains of rodents during embryonic and early postnatal development and is subsequently downregulated [261]. Terminally differentiated neurons are postmitotic; therefore there is no need to maintain telomere length [262]. However, telomerase is constitutively expressed in restricted regions of the hippocampus and the olfactory bulbs, which are continuously supplied with neural stem and progenitor cells [263]. These cells are required for adult neurogenesis throughout life because they produce new neurons and support brain cells. Therefore, besides its telomeric roles, telomerase was found to protect postmitotic neuronal cells from stress-induced apoptosis and may serve a neuron survival-promoting function in the developing brain and be important for regulating normal brain functions. Thus, the regulation that C18-ceramide seems to exert on telomerase expression may contribute to the increased neuronal vulnerability of the adult brain in various age-related neurodegenerative disorders. Several studies support the role of ceramide in inducing senescence and in activating genetic/biochemical pathways involved with aging. Accumulation of ceramide occurs normally during development and aging in single cells [264], and young cells treated with exogenous ceramide exhibit a senescent-like phenotype [265]. In addition, significant changes in ceramide metabolic enzyme activities seem to occur in specific organs, or even in specific cell types, with aging [264,266].
The activities of the sphingolipid catabolic enzymes (SMase and CDase) seem to change more robustly than those of the anabolic enzymes (SMS and CerS). ASMase and nSMase activities significantly increase in rat brain during aging [267], demonstrating that aging is accompanied by an increase in SM turnover. NSMase was also reported to be dramatically activated in senescent fibroblasts [264]. ACDase, nCDase and alCDase activities are increased specifically in brain tissue from aging rats, and among the isoforms of CDases, alCDase shows the highest activity [267]. The increase in CDase activity in kidney and brain indicates an increase in the production of sphingosine and its contribution toward aging in these tissues. In contrast, CerS shows a lower activity, suggesting a minor contribution of ceramide de novo synthesis to ceramide accumulation [267].

Ceramide and neurodegeneration

Lipid storage disorders

Ceramide is a central element in the metabolic pathways of sphingolipids. All sphingolipids are synthesized from ceramides and are hydrolyzed to ceramides. In addition to CDase and SMase, there are other hydrolytic enzymes which hydrolyze complex sphingolipids, producing ceramides as the product. More than ten specific acid exohydrolases are responsible for intracellular GSL digestion in a stepwise action that takes place within the lysosome. Deficiency or malfunctioning of one of these enzymes results in accumulation of the corresponding lipid substrate in the lysosomal compartment, leading to cellular enlargement, dysfunction and death. Due to its high synthesis of lipids, the brain is the organ mainly affected by accumulation of lipid products. Their abnormal storage and slow turnover result in severe dementia and mental retardation. Inherited metabolic disorders which have been linked to lysosomal dysfunction belong to a family of diseases identified as lysosomal storage disorders (LSDs).

Farber's disease

Farber's disease is an inherited disorder characterized by high levels of ceramides due to deficient activity of lysosomal aCDase [268]. The rate of ceramide synthesis is normal, but ceramide resulting from degradation of complex sphingolipids cannot be hydrolyzed and is confined to the lysosomal compartment [269]. There is a significant correlation between the ceramide accumulated in situ and the severity of Farber disease [270]. The abnormal ceramide storage in the brain results in neuronal dysfunction, leading to progressive neurologic deterioration. The inflammatory component of this disease consists of chronic granulomatous formations [271]. Granulomas are small areas characterized by the presence of lymphocytes, monocytes and plasma cells [272] and appear to result from a dysregulation of leukocyte functions. However, the sequence of molecular mechanisms leading from the defect in ceramide metabolism to leukocyte dysregulation is still unknown.

Krabbe's disease and Gaucher's disease

Krabbe's disease is a disorder involving the white matter of the central and peripheral nervous systems. It is characterized by a deficiency in the lysosomal enzyme galactosylceramidase, which removes galactose from galactosylceramide derivatives. Galactosylceramidase is necessary to digest galactosylceramide, a major lipid in myelin-forming oligodendrocytes and Schwann cells [273]. Abnormal storage of galactosylceramide due to the lack of this enzyme leads to apoptosis of myelin-forming cells, with a complete arrest of myelin formation and consequent axonal degeneration.
This accounts for the severe degeneration of motor skills observed in the disease. Another GSL, called psychosine (the deacylated form of galactosylceramide, also known as galactosylsphingosine), is normally broken down by galactosylceramidase. Psychosine is present in normal brain tissue at very low concentrations, owing to its rapid breakdown to sphingosine and galactose by galactosylceramidase. In the absence of this enzyme, psychosine accumulates in the brain, acting as a cytotoxic metabolite [274] and thereby contributing to oligodendroglial cell death. Psychosine was also found to cause axonal degeneration in both the central and peripheral nervous systems by disrupting lipid rafts [275]. Myelin and/or oligodendrocyte debris produced by oligodendrocyte death in Krabbe's disease activates microglial cells, the resident macrophages of the brain, which are the primary mediators of neuroinflammation [276]. Because a pathological hallmark of this rapidly progressive demyelinating disease is the presence of multinucleated macrophages (globoid cells) in the nervous system [277], the disease is also known as globoid cell leukodystrophy. However, the function of these cells is unclear. Gaucher's disease is characterized by the lysosomal accumulation of GlcCer due to defects in the gene encoding the lysosomal hydrolase glucosylceramidase [278]. In the brain, GlcCer accumulates due to the turnover of complex lipids during brain development [279]. The cells most severely affected are neurons, because they process large amounts of gangliosides, which are components of their membranes and synapses. Demyelination or disruption of the membrane structure may be the major consequence of these diseases, and it is conceivable that a change in ceramide at the plasma membrane level may contribute to these disorders. Enzymes involved in ganglioside degradation are highly expressed in brain tissue and are of particular importance in the first few years of life, when axons elongate, dendrites branch and synapses develop [279]. Deficiency of these enzymes causes neuronal storage of gangliosides, leading to loss of neurons and their axons and resulting in cortical atrophy and white matter degeneration. Cells and organs that do not process large amounts of gangliosides are either normal or show mild storage without cell damage.

Niemann-Pick's disease

Defects in SM degradation result in a neurodegenerative condition known as Niemann-Pick (NP) disease. This disorder exists in three major forms. Both NP type A and type B are caused by defects in lysosomal aSMase activity. Affected individuals cannot convert SM to ceramide [280], and alteration of the ceramide-SM ratio, rather than SM accumulation, is likely responsible for the onset of the disease. The importance of SM as a source of ceramide is indicated by the fact that activation of aSMase occurs in response to numerous signals within the cell, and the production of ceramide is critical for an appropriate signaling cascade. NP type C disease is caused by defects in a protein, the NPC1 protein, which is located in membranes inside the cell and is involved in the movement of cholesterol and lipids within cells [281]. A deficiency of this protein leads to the abnormal accumulation of cholesterol and glycolipids in lysosomes and to a relative deficiency of cholesterol for steroid hormone synthesis.

Neurodegenerative dementia: Alzheimer's, Parkinson's and prion diseases

Neural cells are very complex morphologically.
The large plasma membrane surfaces of neurons are important for neuronal trafficking, neuron-neuron communication and signal transduction. During aging and neurodegeneration, membrane dysregulation and dysfunction are often found. These alterations in the membrane microenvironment occur very early in the CNS [282,283]. Heightened oxidative stress has a profound impact upon membrane lipid-protein organization and signal transduction [284]. These changes might be at the basis of diseases such as Alzheimer's disease (AD), Parkinson's disease (PD), synucleinopathies, prion diseases and other dementias. Lipid rafts have been shown to be involved in the regulation of APP processing and in Aβ peptide formation [285], and represent the principal sites within the membrane where β-secretase and γ-secretase generate the pathological amyloid β peptide [286-290]. Other lipid raft components, such as the gangliosides GM1 and GM2, have been associated with induction of the Aβ transition from an α-helix-rich structure to a β-sheet-rich conformation [291,292]. Ganglioside binding to Aβ accelerates Aβ fibril formation [293], which gradually causes membrane raft disruption and thereby has profound consequences for signal transduction and neurotransmission. Prion protein (PrPc) is a GPI-anchored protein [294] and, together with its pathological variant, associates with lipid rafts [295]. Moreover, the conversion of PrPc into PrPSc has been shown to occur in these membrane domains [296]. Alpha-synuclein associates specifically with lipid rafts [297], and abnormalities of lipid rafts in the frontal cortex occur during the development of PD pathology [298]. Massive modification of fatty acid content gives rise to more viscous and liquid-ordered rafts in PD brains than in age-matched controls [298]. Also, lipid rafts from AD brains exhibit aberrant lipid profiles compared to healthy brains [299]. Similar lipid changes are also observed in epilepsy and ischemia/stroke [300,301]. Elevations of intracellular ceramide levels, which may in turn be associated with induction of apoptotic cell death, have been reported in brain tissue and CSF of AD patients [302], together with reduced SM [303] and altered ganglioside levels [304]. In line with this, an increase in aCDase [305] and aSMase activity [306] has been detected in the brains of AD patients. The key enzyme in ceramide de novo synthesis, SPT, is regulated by APP processing [307], suggesting that this could be one of probably many mechanisms responsible for the alterations in lipid metabolism at the plasma membrane.

Conclusions

Ceramide is an important signaling molecule involved in the regulation of cell development, growth and apoptosis. In healthy cells, ceramide metabolism is finely tuned and precisely coordinated, and the level of ceramide generated can dictate whether development is stimulated or whether apoptosis is induced. Ceramide is beneficial for the early growth and development of neuronal cells [308,309], and at low levels it has trophic effects, promoting cell survival and division. Initial abnormal formation of ceramide can potently induce further ceramide accumulation in a self-sustaining way [200,310] that turns out to be toxic and supports pro-apoptotic actions in many cell types [311]. This leads to drastic consequences, causing tissue damage and organ failure [312].
The mechanisms by which ceramide induces these disparate effects are not known, but may involve its effects on membrane structure and/or the activation of different downstream signaling pathways. These apparently contradictory roles can be understood only when we consider ceramide formation as a balanced and vulnerable system. This is, however, a fine line to tread, and deviation in either direction can have drastic consequences. Where ceramide is concerned, growth arrest or apoptosis are only a slight tilt away.
Simultaneous inference for monotone and smoothly time varying functions under complex temporal dynamics

We propose a new framework for the simultaneous inference of monotone and smoothly time-varying functions under complex temporal dynamics, utilizing monotone rearrangement and nonparametric estimation. We capitalize on the Gaussian approximation for the nonparametric monotone estimator and construct asymptotically correct simultaneous confidence bands (SCBs) by carefully designed bootstrap methods. We investigate two general and practical scenarios which have received limited attention. The first is the simultaneous inference of monotone smooth trends from moderately high-dimensional time series, and the proposed algorithm has been employed for the joint inference of temperature curves from multiple areas. Notably, most existing methods are designed for a single monotone smooth trend; in that case, our proposed SCB empirically exhibits the narrowest width among existing approaches while maintaining the nominal confidence level. The second scenario involves simultaneous inference of monotone smooth regression coefficient functions in time-varying linear models. The proposed algorithm has been utilized for testing the impact of sunshine duration on temperature, which is believed to be increasing under the greenhouse effect hypothesis. The validity of the proposed methods has been justified theoretically as well as by extensive simulations.

Meanwhile, smooth increasing trends have been widely identified in climate change. A prominent example is Xu et al. (2018), which demonstrates via climate models that global temperature has a smooth increasing trend. Xu et al. (2018) further warned of accelerated global warming according to this trend. In our real data analysis, we also identify smooth increasing trends as well as locally stationary patterns of temperature data; see Figure 1.

[Figure 1 caption fragment: "... Wu (2022). The time-varying pattern implies the nonstationarity."]

This empirical evidence indicates that new statistical tools for shape-constrained analysis, especially for the estimation of monotone regression functions, are needed for investigating contemporary temporally dependent real-world data. In this paper, we aim at the estimation and inference of monotone and smoothly time-varying functions under complex temporal dynamics. In particular, we consider two very important scenarios where our methodology demonstrates its effectiveness. One is the estimation and joint simultaneous inference of monotone trends from moderately high-dimensional nonstationary processes. This capability proves especially valuable when dealing with multiple time series that exhibit monotonic behavior, such as temperature data collected from various weather stations within a geographic region. The other is the estimation and inference of monotone coefficient functions, or more generally, monotone linear combinations of regression coefficient functions in time-varying coefficient linear models. This enables us to assess the monotonically changing relationship between the response variable and the predictor variables as it evolves over time. Time-varying coefficient models with monotone coefficients are useful in climate science; see for example Dhar & Wu (2023). Our proposed SCBs are asymptotically correct and are asymptotically centered around a good monotone estimate of the underlying monotone time-varying functions. While some recent studies have considered dependent observations in the context of monotone regression (e.g.
Anevski & Hössjer (2006), Zhao & Woodroofe (2012)), they primarily focus on pointwise limit distributions. Recently, Chernozhukov et al. (2009) and Bagchi et al. (2016) proposed methods for constructing SCBs for monotone signals; however, their resulting SCBs are conservative. Bagchi et al. (2016) focuses on inference under minimal smoothness assumptions, hence their confidence band, along with their estimates, produces flat spots, which seems not optimal for the analysis of data sets with slowly changing mechanisms. Besides Chernozhukov et al. (2009), the above-mentioned literature and most current inference methods for monotone regression functions applicable to time series require the data to be strictly stationary. Chernozhukov et al. (2009) produce monotone SCBs via rearranging the original SCB, but their resulting band is not necessarily centered around their proposed monotone estimate of the regression function. In fact, inference for the entire monotone regression function is a fundamental and challenging problem in the literature attracting enormous effort, including but not limited to the construction of SCBs in special monotone models (e.g. Sampson et al. (2009); Huang (2017); Gu et al. (2021)) or under the i.i.d. assumption (Westling et al. (2020)). In contrast to the existing literature, our results significantly expand the application scope of inference methods for monotone regression functions to a broader range of real-world scenarios by allowing general time series nonstationarity as well as multivariate and high dimensionality. We compare our methods with several mainstream methods for statistical analysis of the monotone regression function and summarize the results in Table 1.

Table 1 (excerpt):
- Monotone rearrangement-based: smooth function; produces a strictly monotone estimator.
- Isotonic regression-based: discontinuous step function; almost inevitable flat points. Brunk (1969), Mukerjee (1988), Mammen (1991).

Our simultaneous inference is based on monotone estimates for possibly high-dimensional monotone vector functions and for regression coefficients, combining the strengths of nonparametric estimation and monotone rearrangement. Such an estimator belongs to the class of two-step estimators that combine isotonization or rearrangement with smoothing. This estimator has been extensively discussed in the single-trend setting with stationary noise; see, for example, Zhao & Woodroofe (2012), Bagchi et al. (2016). Our chosen rearranged estimates can be obtained by an unconstrained optimization algorithm, which enables us to calculate the corresponding stochastic expansion. We then apply the state-of-the-art Gaussian approximation technique of Mies & Steland (2023) to approximate the distribution of the maximum deviation of the monotone estimates by that of certain Gaussian random variables or vectors. Using this fact, we design two bootstrap algorithms to construct the (joint) SCBs for both scenarios of high-dimensional trends and time-varying coefficient linear models. It is worth noting that our method is applicable to so-called piecewise locally stationary noise (see Zhou (2013) for details), which allows both smooth changes and abrupt changes in the underlying data-generating mechanisms. The validity of our proposed methods is proved mathematically, and the approximation error is controlled with the help of Nazarov's inequality introduced in Nazarov (2003). The remainder of this paper is structured as follows. In Section 3, we introduce the monotone rearrangement-based monotone estimator.
Section 4 presents our main results for the high-dimensional case, including model assumptions, the recovery procedure for the inverse estimator, the key Gaussian approximation theorem, and the bootstrap procedure, which mimics the limiting distribution in a feasible manner. Another significant scenario, involving the analysis of monotone estimation in time-varying coefficient regression models, is presented in Section 5. In Section 6, we detail the selection scheme for smoothing parameters in all local linear estimations and monotone rearrangements. Furthermore, Section 7 reports our simulation results, while Section 8 presents the application of our method to the analysis of historical temperature data in the UK. Finally, in Section 9, we discuss potential directions for future work. Additional simulation results and detailed proofs can be found in the supplementary material.

Notation

Before stating our results formally, we list the notations that will be used throughout the paper. For a vector $v = (v_1, \cdots, v_p)^\top \in \mathbb{R}^p$, let $|v| = (\sum_{j=1}^p v_j^2)^{1/2}$. For a random vector $V$ and $q > 0$, let $\|V\|_q = (\mathbb{E}|V|^q)^{1/q}$ denote the $L^q$-norm of the random vector $V$, and write $V \in L^q$ if $(\mathbb{E}|V|^q)^{1/q} < \infty$. The notation $\|\cdot\|$ refers to $\|\cdot\|_2$ if there is no extra clarification. Denote by $\mathrm{diag}\{a_k\}_{k=1}^p$ a diagonal matrix with diagonal elements $a_1, \ldots, a_p$. Let $\lfloor x \rfloor$ represent the largest integer smaller than or equal to $x$. For any two real sequences $a_n$ and $b_n$, write $a_n \asymp b_n$ if there exist $0 < c < C < \infty$ such that $c \leq |\lim_{n\to\infty} a_n / b_n| \leq C$. Let $\mathcal{C}^l(I)$, $l \in \mathbb{N}$, be the collection of functions that have $l$th-order continuous derivatives on the interval $I \subset \mathbb{R}$.

3 Monotone estimator via monotone rearrangement

For simplicity, we start by illustrating our methodology for univariate series in this section. Consider the classic nonparametric model with monotone constraints
$$Y(t_i) = m(t_i) + e(t_i), \quad i = 1, \ldots, n, \qquad (3.1)$$
where $m(t)$ is a smooth function monotone in $t$ and $e(t)$ is the error process. In this paper, we allow $e(t)$ to be nonstationary, and all our results are established in a strictly increasing context; they apply equally to the strictly decreasing case. Therefore, we focus on a mean function $m : [0, 1] \to \mathbb{R}$ that is increasing in $t$, for the sake of brevity. At each time point $t_i = i/n$, we assume that only one realization $y(t_i)$ is available, i.e., there is no repeated measurement. Further writing $y_i = Y(t_i)$, $e_i = e(t_i)$, the model (3.1) can then be written as
$$y_i = m(t_i) + e_i, \quad i = 1, \ldots, n. \qquad (3.2)$$
A plethora of estimators have been derived for model (3.2) to obtain constrained estimators of $m(\cdot)$ that satisfy the continuity and monotonicity constraints. A prevalent method for the inference of monotone shape functions is the fundamental isotonic regression, which yields a discontinuous 'step' function with flat segments. To bridge the gap between step fitting and the continuous nature of the data, isotonization mixed with kernel smoothing has been studied in a series of works, e.g. Mammen (1991), Van Der Vaart & Van Der Laan (2003), Durot & Lopuhaä (2014), which almost inevitably produce flat areas in estimated curves even when the true function contains no flat part. Additionally, spline-based methods, e.g. Ramsay (1988), Meyer (2008), can estimate a smooth and strictly increasing function. However, the approaches mentioned above, including isotonization and splines, rely on constrained optimization techniques. Dette et al.
(2006) introduced a smooth and strictly monotone estimator via monotone rearrangement instead of constrained optimization, so the statistical properties of the estimator are easy to analyze. Monotone rearrangement techniques have been applied widely in solving statistical problems with monotone constraints, for example Chernozhukov et al. (2009), Dette & Volgushev (2008), Dhar & Wu (2023). The key idea of monotone rearrangement is the use of the following fact. For any function $f$ defined on $[0, 1]$ and a kernel density function $K_d(\cdot)$, define $g_{h_d} \circ f$ on $\mathbb{R}$ as
$$g_{h_d} \circ f(t) = \frac{1}{h_d} \int_0^1 \int_{-\infty}^{t} K_d\Big(\frac{f(x) - u}{h_d}\Big)\, du\, dx,$$
which is a smooth and monotone approximation of $f^{-1}$ when $f$ is strictly increasing; see Ryff (1970). Moreover, $g_{h_d} \circ f$ is always smooth and monotone, even if $f$ is non-monotone. Thus, a natural smooth and monotone estimator of $m^{-1}$ can be defined through the Riemann sum
$$\hat{m}_I^{-1}(t) = \frac{1}{N h_d} \sum_{i=1}^{N} \int_{-\infty}^{t} K_d\Big(\frac{\hat{m}(i/N) - u}{h_d}\Big)\, du,$$
where $\hat{m}(\cdot)$ is a local linear estimator using kernel function $K_r(\cdot)$ and bandwidth $h_r$, i.e.,
$$(\hat{m}(t), \hat{m}'(t)) := \arg\min_{b_0, b_1} \sum_{i=1}^n \big\{ y_i - b_0 - b_1 (t_i - t) \big\}^2 K_r\Big(\frac{t_i - t}{h_r}\Big).$$
The construction of $\hat{m}_I^{-1}$ is adopted from Dette et al. (2006), which primarily investigated pointwise asymptotic behavior based on independent observations. However, our research focus, which revolves around simultaneous inference, necessitates the examination of a time span rather than a single time point. Therefore, we introduce the index set $\tilde{\mathcal{T}}$, defined in (3.5), on which all our simultaneous results are built; the Lebesgue measure of the complement of $\tilde{\mathcal{T}}$ converges to zero in probability, as supported by the following proposition. In Section 4 and Section 5, we further consider a high-dimensional version and a time-varying regression extension of the monotone estimator $\hat{m}_I(t)$ and the corresponding simultaneous inference methods. The simultaneous inference of $\hat{m}_I(t)$ on $\tilde{\mathcal{T}}$ in this section can be performed via the methods in Section 4 and Section 5, as it is their special case, and will not be discussed separately for the sake of brevity.
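To make the two-step estimator concrete, the following is a minimal numerical sketch, assuming a Gaussian kernel $K_d$ (so that the inner integral reduces to a normal CDF) and an Epanechnikov kernel $K_r$; the bandwidth values and the toy trend are placeholders, not the data-driven selection scheme of Section 6.

```python
import numpy as np
from scipy.stats import norm

def local_linear(y, t_grid, h_r):
    """Local linear estimate of m(t) from y_i = m(i/n) + e_i (Epanechnikov K_r)."""
    n = len(y)
    t_obs = np.arange(1, n + 1) / n
    m_hat = np.empty(len(t_grid))
    for j, t in enumerate(t_grid):
        u = (t_obs - t) / h_r
        w = np.maximum(0.75 * (1.0 - u**2), 0.0)      # kernel weights K_r
        X = np.column_stack([np.ones(n), t_obs - t])
        coef = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        m_hat[j] = coef[0]                            # intercept b0 = m_hat(t)
    return m_hat

def rearranged_inverse(m_hat, t, h_d):
    """Riemann-sum monotone rearrangement: for Gaussian K_d the inner integral
    (1/h_d) int_{-inf}^t K_d((m_hat(i/N) - u)/h_d) du equals Phi((t - m_hat_i)/h_d)."""
    return norm.cdf((t - m_hat) / h_d).mean()

# toy data: strictly increasing trend plus noise
rng = np.random.default_rng(0)
n = N = 500
t_i = np.arange(1, n + 1) / n
y = np.sin(np.pi * t_i / 2) + 0.1 * rng.standard_normal(n)
m_hat = local_linear(y, np.arange(1, N + 1) / N, h_r=0.1)

# evaluate m_I^{-1} on a grid of levels and invert numerically to recover m_I
levels = np.linspace(m_hat.min(), m_hat.max(), 400)
inv = np.array([rearranged_inverse(m_hat, u, h_d=0.05) for u in levels])
s_grid = np.linspace(0.05, 0.95, 50)
m_I = np.interp(s_grid, inv, levels)   # strictly monotone estimate on s_grid
```

Because the rearranged inverse is smooth and strictly increasing by construction, the final inversion is a simple one-dimensional interpolation; no constrained optimization is involved.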
(A2) For the vector function m(t) = (m_1(t), ..., m_p(t))^⊤, there exists a universal constant c_m > 0 such that inf_{t∈[0,1]} m'_k(t) ≥ c_m for all k = 1, ..., p.

Let ε_i, i ∈ Z, be i.i.d. random elements and F_i = (..., ε_{i−1}, ε_i). We assume that the high-dimensional nonstationary error e_i is generated by the following causal representation:
$$ e_i = G_i(\mathcal{F}_i), \tag{4.2} $$
where G_i = (G_{i,1}, ..., G_{i,p})^⊤ is a measurable function. Moreover, we introduce the physical dependence measure of Wu (2005) for the p-dimensional G_i(F_i) to facilitate our asymptotic study of the monotone estimators.

Definition 4.1. Let (ε'_i)_{i∈Z} be an i.i.d. copy of (ε_i)_{i∈Z} and, for k ≥ 0, let F_{i,k} denote F_i with ε_{i−k} replaced by ε'_{i−k}, i.e. F_{i,k} = (..., ε_{i−k−1}, ε'_{i−k}, ε_{i−k+1}, ..., ε_i). Define the physical dependence measure for G in L^q as
$$ \delta_q(G, k) = \sup_{i} \| G_i(\mathcal{F}_i) - G_i(\mathcal{F}_{i,k}) \|_q. \tag{4.3} $$

The physical dependence measure defined in (4.3) is an input-output-based dependence measure quantifying the influence of the input ε_{i−k} on the output G_i(F_i), which is different from classic mixing conditions. Alternative definitions of dependence measures for high-dimensional time series can be found, for example, in Zhang & Cheng (2018), where the dependence measure is specified for each dimension, i.e. δ_q(G_j, k), j = 1, ..., p, and a universal summation decay is required, i.e. Σ_{k=l}^∞ sup_{1≤j≤p} δ_q(G_j, k) < ∞. Our assumptions on the dependence measure are related to their framework in the sense that Σ_{k=l}^∞ δ_q(G, k) ≤ √p Σ_{k=l}^∞ sup_{1≤j≤p} δ_q(G_j, k). Furthermore, we define the long-run covariance matrix function for the nonstationary process e_i = G_i(F_i).

Definition 4.2. For the process G_i(F_i), define the long-run covariance matrix function
$$ \Sigma_G(i) = \sum_{k=-\infty}^{\infty} \mathrm{Cov}\big( G_i(\mathcal{F}_0), G_i(\mathcal{F}_k) \big). $$

The long-run variance is important for quantifying the stochastic variation of sums of the nonstationary process. We now introduce the assumptions on the physical dependence measure, the long-run covariance, and other properties of the considered high-dimensional nonstationary process, including:

(B1) There exist a constant χ ∈ (0, 1) and a factor Θ(p), possibly growing with p, such that δ_q(G, k) ≤ Θ(p) χ^k for all k ≥ 0.

(B4) There exist constants Λ̄ ≥ Λ > 0 such that the eigenvalues of the long-run covariance matrix Σ_G(i) are bounded below by Λ and above by Λ̄ for any i = 1, ..., n.

Remark 4.1. The dimension p is contained in the factor Θ(p). For example, if δ_q(G_j, k) ≤ Cχ^k for all j = 1, ..., p with a constant C > 0, then Θ(p) would be of the order √p. In fact, Θ(p) determines the allowed nontrivial rate of the dimension p for the Gaussian approximation. Assumptions (B1), (B2), and (B4) are in line with many kernel-based nonparametric analyses of nonstationary time series, such as Zhao (2015) and Dette & Wu (2022). While this paper adopts a geometric decay in (B1), it is worth noting that a polynomial decay rate can yield similar results; however, this may introduce more complex bounds and theorem conditions, and for the sake of brevity we maintain the geometric decay assumption. We refer to Mies & Steland (2023) to help understand the generality of our assumptions.

High-dimensional Gaussian approximations

To estimate the monotone high-dimensional trend m(t), we apply monotone rearrangement to each dimension. The monotone estimator vector is written as m̂_I(t) = (m̂_{I,1}(t), ..., m̂_{I,p}(t))^⊤, where m̂_{I,k} is the inverse of m̂^{-1}_{I,k} constructed as in (3.3), and m̂_k(t) is the local linear estimator of the k-th component of m(t), using a kernel with bounded second-order derivatives.

Remark 4.2. Our bandwidths come from two procedures: h_r is from the local linear estimation, constrained by the sample size n, while h_d is from the rearrangement, constrained by N. Intuitively, we can let N be rather large so that the rearrangement is more accurate. However, our condition R_n/h_d = o(1) requires that h_d is constrained not only by N but also by the local linear estimation; therefore, N cannot be arbitrarily large. Compared with the corresponding condition in Dette et al. (2006), our assumption R_n/h_d = o(1) allows for a more flexible choice of h_d.
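To make the construction in (3.3)-(3.4) concrete, the following minimal R sketch implements the univariate pipeline: a local linear fit m̂, the rearrangement m̂_I^{-1} evaluated as an average of kernel CDFs (using the identity ∫_{-∞}^t K_d((v − u)/h_d) du = h_d F_K((t − v)/h_d) for a symmetric kernel), and numerical inversion by root finding. The Epanechnikov kernels, the grid, and the bandwidth values are illustrative assumptions, not the authors' exact implementation.

# Minimal sketch of the monotone rearrangement estimator (3.3)-(3.4).
K <- function(u) 0.75 * (1 - u^2) * (abs(u) <= 1)   # Epanechnikov kernel

# Local linear estimator \hat m(t0) with kernel K_r and bandwidth hr
loclin <- function(t, y, tt, hr) {
  sapply(t, function(t0) {
    w <- K((tt - t0) / hr)
    X <- cbind(1, tt - t0)
    beta <- solve(t(X) %*% (w * X), t(X) %*% (w * y))
    beta[1]                                  # intercept = \hat m(t0)
  })
}

# Closed-form CDF of the Epanechnikov kernel, clamped outside [-1, 1]
K_cdf <- function(u) pmin(1, pmax(0, 0.5 + 0.75 * u - 0.25 * u^3))

# \hat m_I^{-1}(t) in (3.3): average of kernel CDF evaluations
m_inv <- function(t, mhat_grid, hd) mean(K_cdf((t - mhat_grid) / hd))

# Invert \hat m_I^{-1} numerically to obtain \hat m_I(x), x in (0, 1)
m_I <- function(x, mhat_grid, hd) {
  uniroot(function(s) m_inv(s, mhat_grid, hd) - x,
          interval = range(mhat_grid) + c(-1, 1) * hd)$root
}

set.seed(1)
n <- 1000; N <- 4000
tt <- (1:n) / n
y  <- exp(tt) + 0.3 * rnorm(n)               # model (3.2) with m(t) = exp(t)
mhat_grid <- loclin((1:N) / N, y, tt, hr = 0.3)
m_I_hat <- sapply(seq(0.1, 0.9, 0.1), m_I, mhat_grid = mhat_grid, hd = 0.05)

Because each summand of m̂_I^{-1} is a CDF evaluation, the function is automatically nondecreasing and smooth in t, which is exactly why the subsequent inversion is well defined even when the local linear fit itself is not monotone.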
Furthermore, the time span of our interest, T̃, is defined as T̃ = ∩_{k=1}^p T̃_k, where each T̃_k is defined in the same way as (3.5) with m̂_k, so that m̂_{I,k} is well defined for each k = 1, ..., p. To construct a joint SCB for m̂_I − m on the time span T̃, the key step is to study the maximal deviation, i.e. max_{1≤k≤p} sup_{t∈T̃} |m̂_{I,k}(t) − m_k(t)|. The following proposition provides a Gaussian approximation for the maximal deviation and serves as the basis for our simultaneous inference study and the subsequent bootstrap procedure.

Proposition 4.1. There exists a p-dimensional Gaussian vector process V(t) = (V_1(t), ..., V_p(t))^⊤ such that the approximation bound (4.7) holds. Moreover, V(t) satisfying (4.7) can be constructed as in (4.8), where V_j = (V_{j,1}, ..., V_{j,p})^⊤ ∼ N_p(0, Σ_G(j)), j = 1, ..., n, are independent.

Remark 4.3. In the ideal situation where q → ∞, with the typical scaling Θ(p) = √p, the rate on the right-hand side of (4.7) simplifies accordingly. Moreover, V(t) is a valid nondegenerate Gaussian approximation: it is shown in the proof of Theorem 4.1 that there exists a constant σ > 0 such that min_{1≤k≤p} Var(√(nh_r) V_k(t)) ≥ σ².

Bootstrap-assisted joint SCB

Following Proposition 4.1, we can investigate max_{1≤k≤p} sup_{t∈T̃} √(nh_r)(m_k(t) − m̂_{I,k}(t)) by studying the maximum of the Gaussian vectors √(nh_r) V_k(t). However, the limiting distribution of max_{1≤k≤p} sup_{t∈T̃} V_k(t) is sophisticated due to the high dimensionality and the complicated time-varying covariance structure of V_k(t). One direct approach is to generate copies of an estimated V_k(t) and obtain empirical quantiles of their maxima. In this case, the estimated V_k(t) is obtained via a Gaussian variable analogous to V_k(t), with the unknown quantities m_k(·), m'_k(·), and Σ_G(j) replaced by appropriate estimators. However, it is widely recognized that accurately estimating m'_k(·) can be challenging. Moreover, the estimation of Σ_G(j) can also be difficult if G_i(F_i) is piecewise locally stationary, since the breakpoints are difficult to identify, yielding inconsistency around the abrupt changes of the long-run covariance for the usual parametric estimators; see Zhou (2013), Zhang & Wu (2015), Bai & Wu (2023).

Algorithm 1 Bootstrap for joint SCB on m̂_I − m
Initialization: Choose bandwidths h_{r,k}, h_{d,k} and window size L.
Step 1: Obtain the local linear estimator m̃_k, modified by the jackknife, for each dimension.
Step 2: Apply monotone rearrangement to each dimension to obtain the monotone estimator m̂_I.

To overcome this difficulty, we consider V*(t) in (4.9) of Algorithm 1. The formula (4.9) does not involve m'_k(·), so estimation of this quantity is not needed. Moreover, to approximate V(t) well via V*(t), there is no need to estimate the long-run covariance Σ_G(j) well at each j. Instead, it only requires the estimation error for the cumulative long-run covariance Q(k) := Σ_{i=1}^k Σ_G(i) to be relatively small with respect to Q(k). In this paper, we estimate Q(k) by the estimator Q̂(k) in (4.10), where ε̂(t_i) = (ε̂_1(t_i), ..., ε̂_p(t_i))^⊤ are the nonparametric residuals, i.e. ε̂_k(t_i) = y_{i,k} − m̂_k(t_i). A similar estimator for the cumulative long-run variance has been studied by Mies & Steland (2023), where the original series instead of the residuals is used, since Mies & Steland (2023) assume the data have zero mean when estimating Q(k). We address the theoretical properties of Q̂(k) in Lemma A.2, available in the supplement. With the use of Q̂(k) in Algorithm 1, we introduce the following theorem to compare the Gaussian approximation V(t) in (4.7) with the bootstrap sample V*(t) in (4.9).
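Since the display (4.10) depends on the residuals and the window size L, the following R sketch shows one common block-sum construction of a cumulative long-run covariance estimator of this type: local block sums S_i of residuals capture the local autocovariance, and (1/L) S_i S_i^⊤ acts as a proxy for Σ_G(i). The exact windowing and normalization of (4.10) may differ in detail, so treat this as an assumption-laden illustration rather than the paper's formula.

# Sketch: cumulative long-run covariance estimate Q(k) from residuals.
# eps: n x p matrix of nonparametric residuals; L: window size.
cum_lrc <- function(eps, L) {
  n <- nrow(eps); p <- ncol(eps)
  Q <- vector("list", n - L + 1)
  acc <- matrix(0, p, p)
  for (i in seq_len(n - L + 1)) {
    S <- colSums(eps[i:(i + L - 1), , drop = FALSE])
    acc <- acc + tcrossprod(S) / L   # E[(1/L) S S^T] is close to Sigma_G(i)
    Q[[i]] <- acc                    # running sum approximates Q(i)
  }
  Q
}

# Usage (eps = y - fitted values from the local linear step):
# Qhat <- cum_lrc(eps, L = 30)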
Given the data, Theorem 4.1 states that the empirical distribution obtained from our bootstrap sample uniformly estimates the distribution of the maximal deviation of V(t) as defined in (4.7). Combined with Proposition 4.1, this justifies the validity of our bootstrap algorithm in replicating the uncertainty associated with the maximal deviation of m̂_I − m, and provides a reliable basis for simultaneous inference, including the construction of joint SCBs.

5 Time-varying coefficient regression

Besides underlying monotone trends, monotonicity also arises in time-varying relationships between variables. For example, greenhouse gases, such as carbon dioxide, methane, and water vapor, help regulate Earth's temperature by trapping heat from the sun that would otherwise be radiated back into space, creating a natural greenhouse effect; see Anderson et al. (2016). However, since the Industrial Revolution, human activities have significantly increased the concentration of greenhouse gases, so an identical sunshine duration can contribute a larger temperature increase than before, because part of the heat that would previously have been radiated back is now trapped; see Kweku et al. (2018). Based on this greenhouse effect hypothesis, we can posit a smoothly increasing relationship between the response variable, temperature, and the predictor, sunshine duration. This motivates us to consider the following time-varying linear model,
$$ y_i = x_i^\top m(t_i) + e_i, \quad i = 1, \ldots, n, \tag{5.1} $$
where y_i, x_i, and e_i represent the response process (e.g. temperature), the p-dimensional covariate process (e.g. sunshine duration, rainfall, etc.), and the error process, respectively, and m(t) = (m_1(t), ..., m_p(t))^⊤ denotes the p-dimensional time-varying coefficient. In this section, we consider the time-varying linear model with fixed dimension p; the scenario of diverging p is more complicated and is left as rewarding future work. To reflect the increasing relationship, in the time-varying model we further assume the following monotonicity constraint: each coordinate of m_C(t) := Cm(t), for a known s × p matrix C, is smoothly increasing. For example, taking model (8.1) into account, considering C = (0, 1)^⊤ signifies the smoothly increasing relationship between sunshine duration and temperature. Unlike the high-dimensional trend setting in Section 4, if there exist break points in model (5.1), it becomes challenging to distinguish whether the abrupt changes result from the nonstationary process (x_i^⊤, e_i) or from the time-varying coefficient. In this regression context, allowing break points in the nonstationary process can raise doubts about the smoothness of m(t). For this technical reason, we assume both (x_i) and (e_i) are locally stationary and generated from causal representations analogous to (4.2). Similarly to Section 3, we define our monotone estimator based on monotone rearrangement: m̂^{-1}_{C,I,k} is constructed as in (3.3), where m̂_{C,k}(·) is the k-th coordinate of the local linear estimator m̂_C(·) = Cm̂_{h_r}(·). Define m̂_{C,I,k} as the inverse of m̂^{-1}_{C,I,k}; then m̂_{C,I} = (m̂_{C,I,1}(·), ..., m̂_{C,I,s}(·))^⊤ serves as our monotone estimator of m_C(t). To provide a theorem for the construction of the SCB, we make the following assumptions on both (x_i) and (e_i), including:

(B5') The smallest eigenvalue of the long-run covariance function Λ(i) := Σ_U(i) is bounded away from 0 for all i = 1, ..., n.

Remark 5.1. The above conditions are standard in the literature on analysing time-varying linear models and have also been used in Zhou & Wu (2010); they can be easily verified for a large class of locally stationary processes.
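As a concrete illustration of the estimation step for (5.1), here is a minimal R sketch of the kernel-weighted local linear estimator of the time-varying coefficient vector m(t), from which m̂_C(t) = Cm̂(t) follows by a matrix product. The plain local least squares design, the kernel, and the bandwidth are illustrative choices rather than the authors' exact implementation.

# Sketch: local linear estimation of time-varying coefficients in
# y_i = x_i' m(t_i) + e_i, returning \hat m(t0) at a given time t0.
# X: n x p covariate matrix; y: response; tt: time points in [0, 1].
K <- function(u) 0.75 * (1 - u^2) * (abs(u) <= 1)

tv_loclin <- function(t0, X, y, tt, hr) {
  w <- K((tt - t0) / hr)
  # Local design: each covariate enters with a level and a slope term
  Z <- cbind(X, X * (tt - t0))
  beta <- solve(t(Z) %*% (w * Z), t(Z) %*% (w * y))
  beta[seq_len(ncol(X))]          # first p entries are \hat m(t0)
}

# Example: p = 2 with intercept and one regressor (cf. C = (0, 1)')
set.seed(2)
n <- 500; tt <- (1:n) / n
x <- cbind(1, rnorm(n))
y <- x[, 1] * 1 + x[, 2] * exp(tt) + 0.2 * rnorm(n)
m_hat <- t(sapply(seq(0.1, 0.9, 0.05), tv_loclin,
                  X = x, y = y, tt = tt, hr = 0.2))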
Assumption (B5') guarantees the non-degeneracy of the long-run covariance matrix of the process U_i(F_i). The following theorem presents a Gaussian approximation for the maximal deviation of m̂_{C,I} − m_C and an indirect way of learning the distribution of the complex Gaussian approximation. It follows the framework formed by the combination of Proposition 4.1 and Theorem 4.1, which avoids estimating m'_C(t), and provides an appropriate long-run covariance estimator Σ̂_C(j) for the new long-run covariance Σ_C(j).

Theorem 5.1. Let conditions (B1')-(B5') and (C1)-(C2) hold and let m_C(t) satisfy conditions (A1) and (A2). Then the following two results hold: (i) there exist independent Gaussian random vectors V_j such that the Gaussian approximation (5.5) holds, where ρ_n := R_n²/h_d + n^{3/10} log n/√(nh_r) and V_{C,k}(t) is the Gaussian process defined analogously to (4.8); (ii) given the data, the empirical distribution of the bootstrap sample built from Σ̂_C(j) uniformly approximates that of the maximal deviation of the Gaussian process, analogously to Theorem 4.1.

There are multiple choices of Σ̂_C(j) satisfying Theorem 5.1. Generally, we obtain Σ̂_C(j) by plugging in an estimator M̂(j) of the corresponding second-moment matrix. It is worth noting that our cumulative long-run covariance estimator in (4.10) can be used to estimate Λ(j) by replacing ε̂(t_i) in (4.10) with x_i ε̂_i; in this way, all the rate conditions in Theorem 5.1 can be checked similarly to Theorem 4.1. In fact, for such a fixed dimension there are many methods of estimating the long-run covariance; the difference-based estimator proposed by Bai & Wu (2023) can also be used, which enjoys robustness to both smooth and abrupt structural breaks in the nonstationary process. Combining (i) and (ii) in Theorem 5.1, a similar bootstrap procedure, Algorithm 2, can be conducted to construct the SCB in practice. In Algorithm 2, the estimator Λ̂(j) can be chosen from multiple methods, as mentioned before. For illustration, we state the algorithm using the cumulative long-run covariance estimator in (4.10), with ε̂(t_i) in (4.10) replaced by x_i ε̂_i.

Algorithm 2 Bootstrap for SCB on m̂_{C,I} − m_C
Data: Y_i, i = 1, ..., n and the known linear combination C.
Initialization: Choose bandwidths h_{r,k}, h_{d,k} and window size L.
Step 1: Obtain the local linear estimator m̃_C modified by the jackknife.
Step 2: Apply monotone rearrangement to each dimension to obtain the monotone estimator m̂_{C,I}.
Step 5: Repeat Step 4 B times and obtain the estimated (1 − α)th quantile q̂_{1−α} of the bootstrap maxima.
Step 6: Construct the (1 − α)th SCB of m_C(t) as m̂_{C,I}(t) ± q̂_{1−α}.

6 Bandwidth selection

To implement the method, one needs to select h_r and h_d in (3.4) and (3.3). The first bandwidth, h_r, is introduced for the local linear estimation. When the dimension p is fixed, the selection strategy is basically guided by the classic asymptotic theory for local linear estimation, which generally indicates that h_r should attain the minimum asymptotic mean integrated squared error by balancing a bias term of order h_r² and a variance term of order (nh_r)^{-1}. Apart from the theoretically optimal bandwidth, the Generalized Cross Validation (GCV) selector proposed by Craven & Wahba (1978) is adopted in our simulation and empirical study. For estimating m(·), we can write Ŷ = Q(h)Y for some square matrix Q(h), where Y and Ŷ denote the vectors of observed and fitted values respectively, and h is the bandwidth. We can then choose the bandwidth minimizing
$$ \mathrm{GCV}(h) = \frac{n^{-1} |Y - \hat{Y}|^2}{\{1 - \mathrm{tr}(Q(h))/n\}^2}. $$
However, in the high-dimensional case where p diverges to infinity, to mimic the auto-correlation structure we find that h_r ≍ n^{-1/5} does not guarantee convergence of the bootstrap procedure in the proof; in fact, condition (4.11) requires nh_r^6 → ∞. In light of this, we recommend a pragmatic approach: apply the GCV method independently to each dimension and then select an average bandwidth level among the optimal candidates from each dimension.
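A minimal R sketch of the GCV selection just described follows, assuming the smoother matrix Q(h) of the local linear fit is formed explicitly (feasible for moderate n); the candidate bandwidth grid is an illustrative choice.

# Sketch: GCV bandwidth selection for the local linear smoother.
K <- function(u) 0.75 * (1 - u^2) * (abs(u) <= 1)

hat_matrix <- function(tt, h) {
  n <- length(tt)
  Q <- matrix(0, n, n)
  for (j in 1:n) {
    w <- K((tt - tt[j]) / h)
    X <- cbind(1, tt - tt[j])
    # Row j of Q maps y to the local linear fit at tt[j]
    Q[j, ] <- (solve(t(X) %*% (w * X)) %*% t(X * w))[1, ]
  }
  Q
}

gcv_select <- function(y, tt, grid = seq(0.1, 0.5, by = 0.05)) {
  scores <- sapply(grid, function(h) {
    Q <- hat_matrix(tt, h)
    resid <- y - Q %*% y
    mean(resid^2) / (1 - sum(diag(Q)) / length(y))^2   # GCV(h)
  })
  grid[which.min(scores)]
}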
This strategy works well in our simulation and data analysis. The second bandwidth, h_d, is introduced to apply the monotone rearrangement to the local linear estimator; in practice it is usually chosen to be small, as long as the assumption R_n/h_d = o(1) remains plausible. As for the window size L in (4.10), we recommend the minimum volatility (MV) method, as discussed in Politis et al. (1999). Denote by Q̂(k)^{(L)} the estimator Q̂(k) in (4.10) using window size L; the fundamental idea behind the MV method is that the estimator becomes stable when the window size L is in an appropriate range. Specifically, one first sets a series of candidate window sizes L_1 < L_2 < ... < L_M, measures the volatility MV(j) of the estimates in a neighborhood of each candidate L_j, and selects the candidate which minimizes MV(j). As mentioned in Zhou (2013), the MV method does not depend on the specific form of the underlying time series dependence structure, and hence is robust to our complex temporal dynamics.

7 Simulation study

In this section we perform a simulation study of the coverage probabilities of our SCBs for high-dimensional monotone trends and time-varying coefficients, from Section 4 and Section 5 respectively. We consider three mean functions as building blocks for the high-dimensional and time-varying regression simulation scenarios:
m_1(t) = 0.5t² + t, m_2(t) = exp(t), m_3(t) = 2 ln(t + 1).
In the high-dimensional case with p ≥ 3, we define our p-dimensional regression function matrix with a_i = 1 + 0.2(i/p)^{0.5}, where A_1, A_2, A_3 are the sub-matrices of A which respectively consist of the first ⌊p/3⌋ rows, the (⌊p/3⌋ + 1)-th to ⌊2p/3⌋-th rows, and the (⌊2p/3⌋ + 1)-th to p-th rows of A. The error terms are generated according to two cases, including (a) locally stationary errors with Σ_{j,l} = 4(−0.95)^{|j−l|} for j and l at most ⌊p/2⌋ and zero elsewhere. Based on our assumptions on kernel functions, we choose the popular Epanechnikov kernel. After multiple exploratory simulations, we found that the optimal values of h_{r,k} tend to vary around 0.3 according to the GCV guidelines. Therefore, based on our findings, we set the value of h_{r,k} for the final simulation between 0.25 and 0.35 and take the same bandwidth for each dimension, i.e. h_{r,k} = h_r. For each simulation pattern and bandwidth choice, we generate 480 samples of size n = 1000 and apply our simultaneous inference procedure, presented as Algorithm 1 in Section 4, to obtain 90% and 95% SCBs on the support set T̃ with B = 5000 bootstrap samples. Table 2 shows our simulation results for different dimension settings and bandwidth choices. As for the regression extension, we consider a linear time-varying coefficient case; the coverage for (m_1(t), m_2(t))^⊤ is shown in Table 3. Note that the parameter N in the monotone rearrangement step should exceed n and, ideally, be set as large as possible. However, due to limited computational memory, we fix N at 4000 in scenarios where n is 1000; this value is a practical compromise between maximizing the effectiveness of the rearrangement step and avoiding memory issues. An analysis of the simulation results in Table 2 and Table 3 shows that the simulated coverage probabilities for these diverse aspects of complex temporal dynamics within a nonstationary process, including high-dimensional trends and time-varying coefficient regression, are close to their respective nominal levels up to a tolerable error.
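For case (a), the cross-sectionally dependent innovations can be generated directly from the stated covariance; the R sketch below draws Gaussian innovations with Σ_{j,l} = 4(−0.95)^{|j−l|} for j, l ≤ ⌊p/2⌋ and adds them to an illustrative trend matrix. Since the text does not specify the remaining diagonal, the sketch sets it to 4 to keep Σ positive definite, and it treats the innovations as serially independent; both are assumptions for illustration.

# Sketch: draw innovations with Sigma[j, l] = 4 * (-0.95)^|j - l|
# on the leading floor(p/2) block (remaining diagonal set to 4 here).
rgauss_sigma <- function(n, p) {
  Sigma <- diag(4, p)
  h <- floor(p / 2); idx <- 1:h
  Sigma[idx, idx] <- 4 * (-0.95)^abs(outer(idx, idx, "-"))
  R <- chol(Sigma)                     # Sigma = t(R) %*% R
  matrix(rnorm(n * p), n, p) %*% R     # rows ~ N(0, Sigma)
}

set.seed(4)
n <- 1000; p <- 12
tt <- (1:n) / n
m <- cbind(0.5 * tt^2 + tt, exp(tt), 2 * log(tt + 1))   # m_1, m_2, m_3
M <- m[, rep(1:3, length.out = p)]     # illustrative p-dimensional trend
Y <- M + rgauss_sigma(n, p)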
Additionally, the simulation results in high dimensions suggest that the conditions on the divergence rate of the dimension p may be relaxed with the help of sharper inequalities in the future.

8 Climate data analysis

In this section, we study the historical monthly highest temperatures collected from various stations over the United Kingdom. This dataset can be obtained from the UK Met Office website (https://www.metoffice.gov.uk/research/climate/maps-and-data). The maximal temperature serves as a valuable resource for discerning temperature anomalies and extremes within the context of global warming, and we study the joint SCBs for this dataset in Section 8.1. In the context of investigating the relationship between sunshine duration and temperature, prior research has indicated that maximal temperature exhibits the highest degree of correlation with sunshine duration when compared to minimum and mean temperatures; see Van den Besselaar et al. (2015). We investigate sunshine duration and maximal temperature in Section 8.2.

8.1 UK temperature trends analysis

We investigate the monthly highest temperature series over 27 stations from 1979-2022; in other words, p = 27 and T = 520. The names of the stations are listed in the supplemental material; note that for each station considered there are only a few missing observations, which we interpolate using 'approx' in R via observations from the same months in nearby years. Our goal is to conduct inference for all the deseasonalized temperature trends simultaneously, where we implement 'stl' in R for the deseasonalization. Taking global warming into account, the inference is performed under the constraint that all the temperature trends are monotone. We apply Algorithm 1 to generate the joint SCBs for the 27 monotone trends via B = 5000 bootstrap replications. The tuning parameters h_r, h_d and L are selected according to the methods in Section 6. We display the estimated temperature trends of the 27 stations in Figure 2, in which the left panel shows the original local linear fits and the right panel draws the monotone and smooth estimators via monotone rearrangement, i.e., m̂_{I,k}(t). In Figure 3 we present the resulting joint SCBs ordered by decreasing latitude of the stations, which shows that the maximum temperature level decreases as latitude increases. Our results suggest a faster warming rate in the southern part of the UK compared with the northern part, exemplified by stations like Heathrow and Oxford in the south in contrast with Lerwick in the north. This phenomenon may be attributed to the fact that stations in southern UK areas are often situated near cities or densely populated regions and consequently experience more significant warming effects caused by increased energy consumption, greenhouse gas emissions, the urban heat island effect and so on.

8.2 Temperature-Sunshine analysis

In this section we investigate the relationship between sunshine duration and atmospheric temperature and how the former impacts the latter. Specifically, we build a linear time-varying coefficient model for the maximum temperature and sunshine duration for the stations Heathrow and Lerwick in the UK historic data:
$$ \mathrm{Temp}_i = m_1(t_i) + m_2(t_i)\,\mathrm{Sunshine}_i + e_i. \tag{8.1} $$
In Figure 4, we compare the 90% SCB from our Algorithm 2, the SCB of Zhou & Wu (2010), and the SCB of Chernozhukov et al. (2009). The SCB of Zhou & Wu (2010) does not admit monotone constraints, so when the underlying coefficient function is monotone its width is too wide to be admissible; see the discussion on Chernozhukov et al.
(2009), which modifies general unconstrained SCBs but yields conservative coverage. Therefore, the testing procedure using these two SCBs for H_0 might lack power when the underlying time-varying coefficient is monotone, as in our scenario. In contrast, the SCB based on the monotone rearranged estimator, generated by Algorithm 2, suggests a significant positive relationship between sunshine duration and temperature at the 90% confidence level in Heathrow. The difference in the SCB of the sunshine coefficient between Heathrow and Lerwick is also visible in Figure 4.

SUPPLEMENTARY MATERIAL

This supplementary material gives detailed proofs in Section A. Additional simulation results and the names of the stations used in the empirical study can be found in Section B. In Section A, essential notation is introduced first, and then we give the proofs of our main results; the rest consists of useful lemmas and auxiliary theoretical results.

A Proofs

For k ∈ Z, define the projection operator P_k(·) = E(·|F_k) − E(·|F_{k−1}). We write a_n ≲ b_n (a_n ≳ b_n) to mean that there exists a universal constant C > 0 such that a_n ≤ Cb_n (Ca_n ≥ b_n) for all n. ∥·∥_tr refers to ∥·∥_{tr,1}, known as the trace norm, when no extra clarification is given. When no confusion is caused, we use Σ_i to represent the long-run covariance function Σ_G(i). Define m_N(t) as an auxiliary function for m(t), given by the inverse of the function displayed in (A.1). If m is decreasing, we first reverse the observed data and follow the same procedure as in the increasing situation to obtain m̂_I. It should be mentioned that m̂_I is defined from its inverse m̂^{-1}, where m̂_− is the local linear estimator from the sample −y_i, i = 1, ..., n. Therefore, |m̂_I − (−m)| satisfies all the theorems of the increasing situation. In this way, we take −m̂_I as the decreasing estimator of the true function m, and the simultaneous results for |−m̂_I − m| are equivalent to those of the increasing situation |m̂_I − (−m)|. From Lemma A.9, we know that the jackknife bias correction yields the bound displayed there. Recall ν_{l,k}(t) defined in (4.8); for κ_l := ∫ |x|^l K_r(x) dx, using the fact that κ_2 − κ_1² > 0, we obtain the stated infimum bound. Then, using the fact that max_{1≤k≤p} max_{1≤i≤N} #{j : |j/n − i/N| ≤ h_{r,k}} = O(nh_r) and summation by parts, we obtain the stated bound. Then, by Lemma A.8, V^{(0)}(t) can be taken as the Gaussian process satisfying (4.7). Thus, with summation by parts, given the data, the last line is obtained by using (A.31) and the fact that Σ_{i=1}^N W(i/N, t) = I_p. By Assumption (B4), pΛ ≤ ∥Σ(t)∥_tr ≤ pΛ̄ for any t ∈ [0, 1], thus Φ ≍ p. Recall that the proof of Proposition 4.1 showed that V^{(0)}(t) defined in (A.22) satisfies (4.7). Therefore, Theorem 4.1 holds if the remaining terms vanish. Note that I_3 converges to zero in probability by (A.33); then we only need to bound I_1 and I_2, in two separate steps.
By similar arguments to those in the proofs of Lemma A.1 and Proposition 4.1, we can show the stated bound. By (A.18), for some η_{i,N,k} between m̂_k(i/N) and m_k(i/N), together with (A.43), Lemma A.7 and Lemma A.10, we obtain the displayed expansion. By the definition of Ŵ*_{i,k,I}(t) in the algorithm, (A.44) can be rewritten accordingly, and then (A.46) can be rewritten in turn. Replacing m̂ in (A.47) with the jackknife-corrected local linear estimator m̃ yields the corresponding quantity, and obviously the resulting sum satisfies the same bound, where ρ_n = R_n²/h_d + n^{3/10} log(n)/√(nh_r). By similar arguments to (A.19), replacing the local linear estimator with the jackknife-corrected version m̃_C yields the analogous result. Note that s is fixed; then, by similar arguments to (A.23), (A.74) follows. By Theorem 2 in Zhou & Wu (2010), there exist independent V_j ∼ N(0, Σ_C(j)), j = 1, ..., n, on a richer probability space such that max_{i≤n} |Σ_{j=1}^i x_j e_j − Σ_{j=1}^i V_j| = o_p(n^{3/10} log(n)). Summation by parts then yields the stated bound, where the universal factor C > 0 depends on q only.

Proof of Lemma A.2. Denote the quantities as in (4.10).

A.3 High-dimensional Gaussian approximation

Lemma A.3. Let X_t = G(t, F_t) with E(X_t) = 0, where G satisfies (B1) and (B2) for some q > 2, and suppose p ≤ cn for some c > 0. If (B3) is satisfied, then there exist random vectors (X̃_t)_{t=1}^n equal in distribution to (X_t)_{t=1}^n and independent, mean-zero Gaussian random vectors satisfying the stated coupling bound with a universal constant.

Proof of Lemma A.3. See the proof of Theorem 3.1 in Mies & Steland (2023).

Lemma A.5. Let Σ_t, Σ̃_t ∈ R^{p×p} be symmetric, positive semidefinite matrices, for t = 1, ..., n, and consider independent random vectors Y_t ∼ N(0, Σ_t). On a potentially larger probability space, there exist independent random vectors Ỹ_t ∼ N(0, Σ̃_t) such that the stated coupling bound holds for a universal constant C > 0. Then the ξ_l are independent Gaussian random vectors, ξ_l ∼ N(0, S_l). Denoting Δ_l = S̃_l − S_l, and |Δ_l| as in Lemma A.4, we find Gaussian random vectors ζ_l ∼ N(0, |Δ_l|) such that ξ̃_l = ξ_l + ζ_l ∼ N(0, S̃_l). We may also split ζ̃_l into independent terms, i.e. we find independent Gaussian random vectors Ỹ_t ∼ N(0, Σ̃_t) such that ξ̃_l equals the corresponding block sum over t = (l−1)B + 1, ..., (lB) ∧ n. This construction yields that (Ỹ_t)_{t=1}^n and (Y_t)_{t=1}^n are sequences of independent random vectors, while Ỹ_t and Y_{t+1} are not necessarily independent. We also introduce the corresponding block notation. Since the random vectors ζ_s are Gaussian, the random variable ∥ζ_s∥² is sub-exponential with sub-exponential norm bounded by C tr(Cov(ζ_s)) ≤ C tr(S̃_l), for some universal factor C, for s = (l−1)B + 1, ..., (lB) ∧ n. To see this, express the sub-exponential norm in terms of σ_j², the eigenvalues of Cov(ζ_s), and δ_j ∼ N(0, 1). A consequence of this sub-exponential bound is that, for a potentially larger C, E max_{s=1,...,n} ∥ζ_s∥² ≤ C log(n) max_l ∥S̃_l∥_tr ≲ log(n)BΦ. Analogous bounds hold for the remaining terms. If φ > nΦ, then nφ/B + BΦ is minimized by setting B = n, which yields the stated rate. To sum up, the claim holds for some universal constant C > 0.

A.4 Lemmas on monotone rearrangement

Lemma A.6. Assume the assumptions of Lemma A.1 hold; then the stated bounds follow.

Proof of Lemma A.6. By Lemma A.10, the local linear estimator satisfies the uniform bound with R_n = h_r² + log^{5/2} n/√(nh_r). Recall the definitions of m̂_I^{-1} and m_N^{-1} in (3.3) and (A.1), respectively. For any t ∈ R, by Taylor expansion, the displayed identity holds for some η_{i,N,k} between m̂_k(i/N) and m_k(i/N). For any sequence, the last line of (A.94) is obtained by (A.107), max_{1≤k≤p} h_{d,k} ≍ h_d ≍ min_{1≤k≤p} h_{d,k}, and R_n/h_d = o(1) from Assumption (C2), for any i = 1, ..., N. Case 4. If t ∈ (m_k(1), m_k(1) + h_{d,k}], then |m_k(1) − t| ≤ h_{d,k}.
By similar arguments, summarizing Cases 1-4 and noting that the big-O terms can be bounded uniformly in k under the uniformly bounded Lipschitz condition (A1) and Assumption (A2), we obtain the stated bound, and then (A.89) holds. Moreover, by Lagrange's mean value theorem again, the displayed expansion holds for some η_{i,N,k} between m̂_k(i/N) and m_k(i/N). By similar arguments to (A.96), (A.90) thus holds. Similarly, the analogous expansion holds for some η_{i,N,k} between m̂_k(i/N) and m_k(i/N). Moreover, by Lagrange's mean value theorem, for some η_{s,k} between m̂_k(s_k) and m_k(s_k), (A.100) holds. The claim then follows by similar arguments to the proof of Lemma A.6, noting the uniformity over s ∈ [0, 1].

Proof of Lemma A.8. For the sake of brevity, since k is fixed in the subsequent analysis, we omit it in the subscripts; that is, we omit the dependence on k when no confusion arises. By simple calculation, the displayed identity holds for any t ∈ (m(0), m(1)). Proof of (i). Note that the bound holds uniformly with respect to t ∈ T_{n,k} and k = 1, ..., p. Proof of (ii). For any t ∈ [0, h_{r,k}) ∪ (1 − h_{r,k}, 1], using (A.112) and a Taylor expansion, we obtain (A.111).

B Appendix

B.1 Discussion on minimal volume

As mentioned before, we now have two approaches to construct an SCB for the monotone estimator m̂_I. The former, conservative, approach is to apply the monotone rearrangement to the upper and lower bounds of the local linear estimator's SCB. This improving procedure admits a narrower SCB that contains the monotone estimator m̂_I at a conservative significance level. The new approach, proposed in this paper, directly constructs the SCB around the monotone rearranged estimator.

B.2 Additional simulation results

To compare with existing SCB methods for monotone regression, we design the following simulation scenarios in the univariate case. We consider three versions of (3.2), respectively composed of the regression functions m_k, k = 1, 2, 3, and the locally stationary error process e_i = G(t_i, F_i), with
$$ G(t, \mathcal{F}_i) = \frac{1}{4} \sum_{j=0}^{\infty} a(t)^j \xi_{i-j}, \quad a(t) = 0.5 - (t - 0.5)^2, $$
where ξ_l, l ∈ Z, are i.i.d. standard normal variables. For each simulation pattern and bandwidth choice, we generate 480 samples of size n = 1000 and apply our simultaneous inference procedure, presented as Algorithm 2 in Section 4.3, to obtain 90% and 95% SCBs on the support set T̃ with B = 5000 bootstrap samples. Table 4 shows the simulated coverage probabilities. Under such a locally stationary setting, several SCBs can be compared with our monotone SCBs. A fundamental SCB is obtained directly from the local linear estimator without imposing any monotone condition, as discussed in Zhou & Wu (2010). Furthermore, following the approach outlined in Chernozhukov et al. (2009), under the monotone constraint the fundamental SCB can be enhanced through an order-preserving procedure, resulting in a narrower but conservative SCB. In our analysis, we apply two order-preserving procedures: one applies monotone rearrangement to the fundamental SCB, resulting in a 'rearranged SCB', while the other applies isotonic regression, yielding an 'isotonized SCB'. In our simulation results, we observed that our monotone simultaneous confidence bands (SCBs) exhibit several advantages compared to other SCBs. Specifically, our monotone SCBs have a wider support time span, allowing for more comprehensive coverage of the underlying function. Moreover, after adjusting the support range of our SCBs to match the classic SCB support [h_r, 1 − h_r], we observed that our monotone SCBs can have a narrower width. The following tables show the lengths of our monotone SCB and the classic SCB without the monotone condition with respect to the simulation study in Table 4.
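The locally stationary error process used in this univariate comparison admits a direct simulation; below is a minimal R sketch of e_i = G(t_i, F_i) with G(t, F_i) = 4^{-1} Σ_{j≥0} a(t)^j ξ_{i−j} and a(t) = 0.5 − (t − 0.5)², truncating the infinite moving-average sum at a lag where a(t)^j is negligible (an assumption made for computability; since |a(t)| ≤ 0.5, the truncation error is tiny).

# Sketch: simulate the locally stationary error e_i = G(i/n, F_i),
# G(t, F_i) = (1/4) * sum_{j >= 0} a(t)^j xi_{i-j}, a(t) = 0.5 - (t - 0.5)^2.
sim_ls_error <- function(n, J = 200) {
  xi <- rnorm(n + J)                    # innovations xi_{1-J}, ..., xi_n
  a  <- function(t) 0.5 - (t - 0.5)^2   # |a(t)| <= 0.5, geometric decay
  sapply(1:n, function(i) {
    t <- i / n
    # xi is stored so that xi[k] = xi_{k - J}; hence xi_{i - j} = xi[i - j + J]
    sum(a(t)^(0:J) * xi[(i + J):i]) / 4
  })
}

set.seed(3)
e <- sim_ls_error(1000)
y <- exp((1:1000) / 1000) + e           # model (3.2) with m_2(t) = exp(t)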
The monotone SCB is the one obtained in this paper; Zhou's SCB is the fundamental SCB without the monotone condition, obtained from Zhou & Wu (2010). The rearranged SCB and the isotonized SCB are the two conservative SCBs following Chernozhukov et al. (2009), obtained by applying rearrangement and isotonization to Zhou's SCB, respectively. The length of an SCB [m_L(t), m_U(t)] is defined by the L^∞ distance between its upper and lower bounds, i.e. sup_{t∈[h_r, 1−h_r]} |m_U(t) − m_L(t)|.
2023-10-04T06:42:14.236Z
2023-10-03T00:00:00.000
{ "year": 2023, "sha1": "82db0c9dda3308bc2f0ce75cd8475d5e05ab42fe", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "1fffb434df926a262bb04d311730690ac85d94ab", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
249628817
pes2o/s2orc
v3-fos-license
Patterns of multidrug resistant organism acquisition in an adult specialist burns service: a retrospective review

Background Multidrug resistant organisms (MDROs) occur more commonly in burns patients than in other hospital patients and are an increasingly frequent cause of burn-related mortality. We examined the incidence, trends and risk factors for MDRO acquisition in a specialist burns service housed in an open general surgical ward, and general intensive care unit. Methods We performed a retrospective study of adult patients admitted with an acute burn injury to our specialist statewide tertiary burns service between July 2014 and October 2020. We linked patient demographics, injury, treatment, and outcome details from our prospective burns service registry to microbiology and antimicrobial prescribing data. The outcome of interest was first MDRO detection, stratified into the following groups of interest: methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant Enterococcus (VRE), two groups of Pseudomonas (carbapenem resistant, and piperacillin-tazobactam or cefepime resistant), carbapenem-resistant Acinetobacter species, Stenotrophomonas maltophilia, carbapenem-resistant Enterobacteriaceae (CRE), and extended-spectrum beta-lactamase producing Enterobacteriaceae (ESBL-PE). We used a Cox proportional hazards model to evaluate the association between antibiotic exposure and MDRO acquisition. Results The investigation showed contextual factors aside, there were 2,036 acute admissions, of which 230 (11.3%) had at least one MDRO isolated from clinical specimens, most frequently wound swabs. While acquisition rates of individual MDRO groups varied over the study period, the acquisition rate of any MDRO was reasonably stable over time. Carbapenem-resistant Pseudomonas was acquired at the highest rate over the study period (3.5/1000 patient days). The 12.8% (29/226) of MDROs isolated within 48 h were predominantly MRSA and Stenotrophomonas. Median (IQR) time from admission to MDRO detection was 10.9 (5.6–20.5) days, ranging from 9.8 (2.7–24.2) for MRSA to 23.6 (15.7–36.0) for carbapenem-resistant P. aeruginosa. Patients with MDROs were older, had more extensive burns, longer length of stay, and were more likely to have operative burn management. We were unable to detect a relationship between antibiotic exposure and emergence of MDROs. Conclusions MDROs are a common and consistent presence in our burns unit. The pattern of acquisition suggests various causes, including introduction from the community and nosocomial spread. More regular surveillance of incidence and targeted interventions may decrease their prevalence, and limit the development of invasive infection. Supplementary Information The online version contains supplementary material available at 10.1186/s13756-022-01123-w.

Introduction

Risk of death after burn injury has decreased in high income countries in recent decades, but infection remains a major cause of morbidity and is the major cause of in-hospital mortality [1]. In keeping with other health care settings and conditions, the emergence of antimicrobial resistance poses increasing challenges in the management of burns patients [2].
Bacteria with clinically important multidrug resistant phenotypes such as Staphylococcus aureus and various gram-negative infectious agents, in particular Pseudomonas aeruginosa, Acinetobacter baumannii, Escherichia coli, Klebsiella pneumoniae, and Enterobacter cloacae, are common in burns services, which house patients with extensive skin loss and open wounds, decreased immune function, prolonged antibiotic use, invasive treatments, and long length of stay. These patient characteristics increase the risk of colonisation by, and infection with, multidrug resistant organisms (MDROs), and contribute to the poor outcomes associated with difficult to treat infections due to MDROs [3,4]. MDROs occur more commonly in burns patients than in other hospital patients [5] and are an increasingly frequent cause of burn-related mortality [6]. A recent review of infection control measures to manage MDRO outbreaks in burns units, including removing patients and closing down the unit, showed that even the most comprehensive measures to eradicate MDROs may not be successful [7]. Thus, infection prevention and antibiotic stewardship initiatives designed to minimize the development and acquisition of MDROs are fundamental to best practice burns care. A systematic review of potentially modifiable risk factors for MDRO acquisition has identified antibiotic use, as well as hospital interventions more generally associated with increased risk of infection (urinary or intravascular catheters, mechanical ventilation, and hydrotherapy) as targets for prevention efforts. Strategies minimising the risk of MDRO acquisition in burns also include early wound excision and closure, meticulous wound management, and environmental control [4]. Other general aspects of infection prevention and control also have specific implications for burn care, including infrastructure design, models of care, isolation precautions, and cleaning regimens [8]. However, consensus on these issues is lacking, with the relative value of many basic practices, technologies, and design features in burns units undetermined [9,10]. In contrast, the value of antibiotic stewardship in ensuring appropriate treatment of infection and managing de-escalation is well established, especially in combination with consistently applied infection control practices [11]. In order to ensure infection prevention and management efforts are well targeted and patients treated appropriately for clinical infection, it is necessary to have an understanding of patterns of infection and colonisation that are specific to individual settings. Additionally, the incidence and associations of acquisition of MDROs can act as indicators of quality of care and support quality improvement initiatives. In order to better understand the occurrence of bacterial MDROs and potential strategies for their prevention and management in our specialist statewide tertiary referral burns service, we aimed to examine incidence, trends, and risk factors for MDRO acquisition. We also examined the impact of antibiotic use and timing on MDRO acquisition.

Study setting and population

The Victorian Adult Burns Service (VABS) is a specialist adult burns service providing the statewide service for adult patients (≥ 16 years) in the Australian state of Victoria. The population of Victoria was 6,462,019 in 2017 [12]. Victoria has a regionalised, hierarchical trauma system, which ensures transfer of patients with severe burns to the specialist service.
Previous research has shown that 98% of adult patients with severe burn injury are managed at the VABS [13]. In addition, many patients with less severe burns are cared for in this service. The VABS manages patients who require critical care in a general open intensive care unit (ICU), and ward patients are housed in an open general surgical ward that also accommodates plastic surgery patients. The service has a policy of routine surveillance swabbing of wounds on admission and at dressing changes at least weekly until wounds have healed or the patient is discharged. All adult patients admitted with an acute burn injury to the VABS between July 2014 and October 2020 and entered into the VABS database were included in this study.

Data sources and data management

Admission, demographic (age and gender), injury event (cause and intent), injury severity (i.e., the percentage of total body surface area [%TBSA] burned), management, and in-hospital outcome (discharge disposition and hospital length of stay [LOS]) data were extracted from the VABS database. This database routinely captures epidemiological, quality of care, treatment and outcome data for all patients admitted to the service. The %TBSA burned was reported as a continuous variable (i.e., 0-100) and categorised into two groups: 0-19.9%, and ≥ 20% TBSA, with the latter group defined as having a major burn injury. The primary cause of burn injury was dichotomised to identify patients who sustained a flame burn, the most common cause of burn injury in adult patients in Australia and New Zealand. Injury intent was dichotomised to identify patients who sustained an unintentional injury. Discharge disposition was dichotomised to identify patients who were discharged to another hospital or healthcare facility as an additional indicator of injury severity. Hospital LOS (reported in days) was calculated from date and time of admission and discharge. The hospital microbiology database was searched for specific organisms isolated from these patients during their inpatient stay. Data on the timing of the swab, where the specimen was collected (i.e., in theatre, on the ward, etc.), specimen type, and the organism(s) identified in the specimen were extracted. The MDRO groups of interest were: methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant Enterococcus (VRE), two groups of Pseudomonas (carbapenem resistant [Group 1] and piperacillin-tazobactam or cefepime resistant [Group 2]), carbapenem-resistant Acinetobacter species, Stenotrophomonas maltophilia, carbapenem-resistant Enterobacteriaceae (CRE), and extended-spectrum beta-lactamase producing Enterobacteriaceae (ESBL-PE). Rectal screening swabs were excluded. Specimens were grouped based on the location from which they were collected: wound, respiratory (including sputum and bronchoalveolar lavage), blood (including catheter tip cultures), or urine. Only the first isolate of each species of an organism was recorded. The number of unique MDRO organisms and organism groups for each patient was calculated. Time to isolation was calculated from date and time of admission and specimen collection data. The time to isolation was reported as a continuous variable (in days) and was also dichotomised according to whether the specimen was isolated within 48 h of admission. Antibiotic exposure data was available for the subgroup of patients admitted between October 2018 and October 2020. Their hospital electronic medical records were searched for non-topical antibacterial drugs.
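As an illustration of these derivations, a minimal R sketch follows; the data frame and column names (linked_data, admit_dt, specimen_dt, los_days, mdro) are hypothetical stand-ins for the linked registry and microbiology data, not the study's actual schema.

# Sketch: derive time to isolation, the 48-hour flag, and an overall
# acquisition rate per 1000 bed days. Column names are hypothetical.
library(dplyr)

derived <- linked_data %>%
  mutate(
    days_to_isolation = as.numeric(difftime(specimen_dt, admit_dt,
                                            units = "days")),
    within_48h = days_to_isolation <= 2
  )

# Rate per 1000 bed days with an exact Poisson confidence interval
n_acq    <- sum(derived$mdro, na.rm = TRUE)
bed_days <- sum(derived$los_days, na.rm = TRUE)
rate     <- 1000 * n_acq / bed_days
ci       <- 1000 * poisson.test(n_acq, T = bed_days)$conf.int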
The name and date of first administration for each antibiotic was extracted from the hospital's electronic prescribing record system. Antibiotic administration was examined in all patients for whom data were available. Further analysis of antibiotic exposure in patients who had MDROs isolated was also conducted. Exposure to antibiotics which were active against most or all isolates of an organism other than the resistant phenotype of interest (dubbed 'Standard' antibiotics) was determined for patients with each of the MDRO groups of interest (Additional file 2: Table S1). Time to first exposure for each unique antibiotic was calculated using date and time of admission and order data.

Statistical analysis

Data from the VABS, microbiology, and pharmacy databases were linked using patient name, birth date, and medical record number. Summary statistics were used to describe the profile of patients who did and did not develop an MDRO. Frequencies and percentages were used for categorical variables, while mean and standard deviation or median and interquartile range (IQR) were used for continuous variables depending on the skewness of the data. Differences between patients who did and did not develop an MDRO were assessed using chi-squared or Mann-Whitney U tests, as appropriate. A p-value < 0.05 was considered statistically significant. The number of MDRO-containing specimens was calculated for each MDRO group of interest and overall and reported using frequencies. The rate of MDRO acquisition per 1000 bed days and 95% confidence intervals (CIs) were calculated for the overall sample and for each MDRO group of interest individually. The association between antibiotic exposure and MDRO acquisition was evaluated using a Cox proportional hazards model, where antibiotic exposure was considered as a time-dependent covariate. The resulting hazard ratio (HR) and 95% CI was reported. Data handling and statistical analysis was performed using Stata Version 14.0 (StataCorp, College Station, Texas, USA) and in the R statistical environment version 4.0.3 [14]. Figures were produced in Excel 2016 (Microsoft, Redmond, Washington, USA) and in the R statistical environment version 4.0.3 [14] using the tidyverse [15], ggdist [16], gghalves [17], survival [18,19], and survminer [20] packages.

Ethics approval

The Alfred Human Research Ethics Committee granted ethics approval for this study (Project Number 154/20).

Results

There were 2,036 acute admissions to the unit between July 2014 and October 2020, 230 (11.3%) of whom had at least one MDRO isolated from a clinical specimen. Of these, 160 acquired one MDRO, 43 acquired two MDROs, and 17 acquired three MDROs; the remaining patients acquired four or more MDROs. Patients with MDROs were older, with more extensive burns. Patients with a major burn injury accounted for 10.1% of the total patient population, but 38.6% of patients with an MDRO. There was a positive relationship between length of hospital stay and MDRO identification. A greater proportion of patients with an MDRO underwent a burn wound management procedure in the operating theatre, while a smaller proportion of patients with an MDRO were discharged to home (Table 1). MDROs were most frequently isolated from wound swabs. There were 323 wound swabs which were positive for an MDRO. MDROs were isolated from 13 blood cultures, 21 respiratory samples, and 12 urine specimens (Table 2). Some characteristics of patients with MDROs varied by the specific organism group they acquired.
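A minimal R sketch of the time-dependent Cox model described above follows, using the survival package: tmerge() splits each admission at the time of first antibiotic exposure so that exposure enters as a time-varying covariate. The data frames and variable names (base, abx, futime, abx_day) are hypothetical stand-ins, not the study's actual dataset.

# Sketch: Cox model with antibiotic exposure as a time-dependent covariate.
# base: one row per patient (id, follow-up time 'futime', event 'mdro');
# abx: one row per patient with first antibiotic exposure day 'abx_day'.
library(survival)

td <- tmerge(base, base, id = id, mdro = event(futime, mdro))
td <- tmerge(td, abx, id = id, on_abx = tdc(abx_day))

fit <- coxph(Surv(tstart, tstop, mdro) ~ on_abx, data = td)
summary(fit)   # hazard ratio and 95% CI for antibiotic exposure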
However, increasing size of burn, a wound management procedure in the operating theatre, discharge to another hospital (typically for rehabilitation) and increasing LOS were associated with all MDROs. Rates per 1000 patient days for each organism varied by year (Fig. 1). In the 2017/18 year, the rate (95% CI) of carbapenem-resistant Pseudomonas was 6.7 (4.5-9.7)/1000 patient days. In the 2016/17 period, the VRE rate was 3.2 (1.7-5.3)/1000 patient days; in subsequent years this decreased to 0.5/1000 days. In 2019-2020, carbapenem-resistant Acinetobacter species, previously an uncommon occurrence in the burns unit, had a rate of 3.3 (1.8-5.7)/1000 days (13 cases), whereas preceding and subsequent years had zero or one case. In this study, Acinetobacter species isolates were all A. baumannii, except for one, which was A. haemolyticus, from a blood culture. The total rates of MDROs showed no change over time (Additional file 1: Fig. S1). The MDRO with the highest rate over the study period was carbapenem-resistant Pseudomonas, at 3.5 (2.8-4.3)/1000 patient days (Additional file 3: Table S2). Figure 2 shows the number of MDROs per year by type in patients with burns ≥ 20% TBSA. There were 203 patients with major burns admitted over the study period, an average of 2.7 patients per month. Resistant P. aeruginosa and S. maltophilia were isolated in every year of the study in patients with major burns, whereas other organisms (Acinetobacter species, CRE, ESBLs) were less consistently isolated. MRSA and VRE were absent from this group of patients for one 12-month period each.

Time to isolation of MDRO

Twenty-nine (12.8%) MDROs were isolated from specimens collected within 48 h of admission. These were predominantly MRSA (n = 15) and Stenotrophomonas (n = 10). No multi-resistant specimens of P. aeruginosa, Acinetobacter species or VRE were isolated within 48 h after admission (Additional file 4: Table S3). Median time to first positive clinical specimen varied according to organism type, with MRSA, ESBL-PEs and Stenotrophomonas less than 10 days, and carbapenem-resistant P. aeruginosa and CRE more than 3 weeks post admission (Fig. 3). Antibiotic exposure data was available for 730 patients over a two-year period to October 2020. Ninety of these patients (12.3%) had an MDRO isolated from a clinical specimen. Three hundred and thirty-seven of the 730 (46.2%) patients received antibiotics. Twenty-five patients who did not have antibiotics had an MDRO isolated, 16 of whom had either a P. aeruginosa or

Discussion

In this study, 11.3% of burns patients had an MDRO isolated from a clinical specimen during their admission. Comparisons with other units are difficult, due to different populations and whether or not the study focussed on infections only or included colonisation. Apart from excluding rectal swabs, our study made no attempt to distinguish the two. MDROs were most commonly isolated from wounds, with isolates from respiratory samples, blood, and urine being far less common. This largely reflects the wound surveillance swabbing policy in our unit, rather than the relative incidence of wound infections [3,4,21]. Factors associated with acquisition of MDROs in our unit are increasing age, size of burn, increasing LOS and operating theatre procedures, in keeping with other reports [3,21-23].

Incidence and time of isolation for different organisms

In our study, the commonest MDRO isolates were P. aeruginosa, followed by MRSA. Rates for different organisms vary between reports [1].
In comparison to other studies, we recorded few Acinetobacter species [24-26]. A systematic review of risk factors for Gram negative MDROs reflects the clinical significance of Acinetobacter in burn patients, with seven of 11 studies focussed on Acinetobacter [22]. Following a previous outbreak of infection and colonization with gram-negative pathogens carrying a metallo-β-lactamase gene in our hospital, prescription of meropenem is restricted [27]. Acinetobacter has consistently been a less common organism; however, carbapenem-resistant P. aeruginosa comprises the largest single group of MDROs in our service. We also presented the rate of S. maltophilia isolation, which is rarely reported on in the burns literature. S. maltophilia was one of the three commonest organisms isolated in our study: 15% were isolated in the first 48 h, possibly indicating environmental pre-hospital acquisition. Reports of Stenotrophomonas in burns patients are few, but a study from Taiwan reported 14 burns patients with Stenotrophomonas bacteraemia and a higher incidence in burns patients than non-burns patients in their hospital. They reported four deaths in association with polymicrobial sepsis [28]. In our study, of 73 isolates, fewer than five isolates each were detected from respiratory samples or blood. Despite a reputation as an opportunistic pathogen, usually infecting immunocompromised hosts, some strains have the potential to develop enhanced virulence in humans, and clusters in hospital populations suggest a capacity for spread in healthcare settings [29,30]. The frequent isolation of Stenotrophomonas in our patients is a cause for some concern, indicating as it does possible hospital transmission, and an association with prolonged antibiotic use, with the potential to cause invasive infection.

Time to isolation

Bacterial colonisation and infection of burn wounds typically occur early after injury and initially more commonly with gram-positive organisms. With increasing LOS, gram-negatives come to predominate in wounds and hospital treatment related infections, along with increasing antibacterial resistance patterns [4,21]. The pattern of isolation of different species in our study reflects this usual pattern. The median time (IQR) from admission to isolation in our cohort was 10.9 (5.6-20.5) days; 12.8% of these were isolated within 48 h after admission, and are more likely to have been acquired in the community. In their review of healthcare associated infections after burn injuries, van Duin et al. reported a median time of 38 (17-77) days from admission to first MDRO isolation. Median time to first isolation of MRSA was 11.5 days (3-33), compared with 9.8 (2.7-24.2) in our cohort [21]. Although the bulk of acquisitions were identified within the first three weeks after admission in our cohort, they occurred throughout the hospital stay. The wound swabbing policy has provided unique information which identifies the extent of MDRO colonisation of patients in our service. As most MDROs were isolated from wounds which were not systemically treated in the absence of clinical signs of infection, they potentially persist for prolonged periods, especially in patients with extensive injuries, resulting in a high prevalence rate, with the ongoing risk of nosocomial transmission.

Antibiotic use

We examined the association between exposure to antibiotics which were active against a specific organism other than the MDRO phenotype of interest (Standard antibiotics).
Our analysis did not show an association between these antibiotics and MDRO acquisition, possibly due to low numbers, nor did it show an association between any antibiotic exposure and MDRO isolation. This finding is in contrast with other studies: a recent systematic review of risk factors for acquisition of gram-negative MDROs showed an increased pooled odds ratio of 7.00 (2.77-17.67) associated with exposure to extended spectrum cephalosporins, and 6.65 (3.49-12.69) for carbapenem exposure [31]. Another study showed > 10% incidence of bowel colonisation with ESBL gram negative organisms in hospital in-patients exposed to cephalosporin monotherapy and concluded that antibiotic resistance is an inescapable effect of antibiotic therapy [32]. Non-lactose-fermenting gram-negative organisms such as P. aeruginosa frequently possess intrinsic resistance mechanisms, including low membrane permeability, and multiple genetic resistance determinants. Resistant strains are selected for during antibiotic use through removal of 'competing' organisms and sensitive strains, but acquisition of resistance mechanisms and nosocomial transmission are other ways patients acquire these organisms [33]. In our study, exposure to Standard antibiotics was not a prerequisite for development of an MDRO: carbapenem-resistant Pseudomonas was isolated in the absence of exposure to selective antibiotic pressure in 27% of isolates. It is possible that the lack of association of MDROs with prior antibiotic exposure in our cohort is due, at least in part, to acquisition driven by nosocomial spread rather than de novo generation of resistance.

Significance

Our study of unique clinical specimens positive for MDROs in acute burns patients indicates a consistent presence of these organisms within the service. The burns ward is located on an open general surgical ward shared with other services, and has common wound management and bathing areas. Despite periods of increased prevalence, no 'outbreaks' have been declared, although increased cleaning and isolation protocols are put in place when highly resistant organisms are identified. In addition to consistent infection control protocols, active and consistent antibiotic stewardship is particularly needed in burns units, given that antibiotic exposure is an established and potentially modifiable risk factor for MDRO acquisition in critically ill patients [31]. An antibiotic stewardship team is part of our burns unit and provides direction for antibiotic prescribing, especially in more complex cases. However, recent unpublished data from our unit indicates a high level of prolonged peri-operative 1st generation cephalosporin prescriptions that lack specific indications, and requires more oversight, especially in view of a recent systematic review finding that evidence for peri-operative antibiotic prophylaxis is lacking in burns patients [34]. Additionally, dosing regimens in complex burns patients require expert determination to achieve adequate treatment levels and minimize the risk of developing bacterial drug resistance [35]. A recent study of ICU antibiotic stewardship in Toronto, Canada, indicated that burns ICU staff were both more likely to receive suggestions from the antibiotic stewardship team, and more likely to reject those suggestions [36].
Given the specific complexities of diagnosing and treating multiple infections in burns patients, this finding suggests the need for consistent and senior staffing in stewardship and burn teams to develop shared understanding of clinical issues and decision making in this patient group. While broad principles may be applicable to many environments, specific infection control practices and antibiotic prescribing are commonly hospital and unit specific, dictated by patient population, model of care, infrastructure, and microbiological resistance profiles.

Limitations

This study did not identify isolates associated with invasive wound infections, the incidence of which will be lower than positive swabs. However, the demonstration of risk factors for acquisition of MDROs and their prevalence in the unit provides information to target at-risk patients and a basis for measuring improvements in prevalence associated with infection control measures. No molecular typing was done to investigate possible transmission. Antibiotic prescription data was available for a small subset of patients, and limited analysis was undertaken.

Conclusion

MDROs are a common and consistent presence in our burns unit. The pattern of acquisition suggests various causes, including introduction from the community and nosocomial spread. More regular surveillance of incidence and targeted interventions may decrease their prevalence, and limit the development of invasive infection. Current infrastructure does not support best infection control measures.
Identification of potential therapeutic targets in prostate cancer through a cross-species approach

Abstract

Genetically engineered mouse models of cancer can be used to filter genome-wide expression datasets generated from human tumours and to identify gene expression alterations that are functionally important to cancer development and progression. In this study, we have generated RNAseq data from tumours arising in two established mouse models of prostate cancer, PB-Cre/Pten(loxP/loxP) and p53(loxP/loxP);Rb(loxP/loxP), and integrated this with published human prostate cancer expression data to pinpoint cancer-associated gene expression changes that are conserved between the two species. To identify potential therapeutic targets, we then filtered this information for genes that are either known or predicted to be druggable. Using this approach, we revealed a functional role for the kinase MELK as a driver and potential therapeutic target in prostate cancer. We found that MELK expression was required for cell survival, affected the expression of genes associated with prostate cancer progression and was associated with biochemical recurrence.

2. It is not clear why the authors chose to focus on the RNAseq from the Pten-/- mice and not the p53-/-;Rb-/- mice in Figure 2. A better justification for this in the text is needed. a. Note: labeling of AdT-specific genes is unclear in Fig. 2E.
3. The two distinct mouse models are not clearly labeled in Supplemental Fig. 4, and an image of PIN lesions for the p53-/-;Rb-/- model as described in the text is lacking.
4. Could the authors comment on the age-to-X phenotype (PIN, MedTumor, AdTumor) for each of the mouse models? Although these are well-studied models, this information would allow the reader to better place this study in the context of the field.
5. The labeling of Supplemental Fig. 6E is unclear. It is difficult to tell which line represents MELKi 4, and there are unlabeled data points as well.

Referee #3 (Comments on Novelty/Model System for Author): The authors do a good job of describing the differences between the mouse and human prostate and the two different genetically engineered mouse models they use in their studies.

Referee #3 (Remarks for Author): The manuscript by Ramos-Montoya, et al., is well written, with a sound experimental approach to identify novel therapeutic targets in prostate cancer (PCa). Their approach started with rational comparisons between two transgenic mouse models of PCa (PB-Cre/p53;Rb and PB-Cre/PTEN), including differences in tumor grade and location within the mouse prostate where tumors form. The authors also nicely describe anatomical differences between mouse and human prostates as a rationale for including tumors from multiple regions of the mouse prostate in each of the model systems for further analysis. Using RNAseq expression profiling and a set of bioinformatics-based filtering steps to reduce the complexity of the differentially expressed genes and to help identify changes that are likely to be therapeutically targetable, the authors found MELK to be significantly associated with more aggressive tumors and poor outcome in human patients. The authors then go on to show, in several human cell line models of PCa, that abrogating MELK expression using siRNA or adding a putative inhibitor of MELK (MELKi) activity inhibits cellular proliferation and induces apoptosis, and that treating mice harboring cell line xenografts with the MELKi slowed tumor growth and increased apoptosis within the tumors.
The work identifying MELK as a potential therapeutic target in PCa is very thorough, and the results and interpretations are sound. However, the studies conducted to validate MELK as a therapeutic target in PCa rely heavily on the activity and specificity of the chosen MELKi (OTSSP167). Although many of the results obtained using the MELKi were similar to the results obtained with MELK-targeting siRNA, this correlation is insufficient support that the putative MELKi is, in fact, inhibiting MELK activity. Moreover, the authors do not describe the source of the MELKi, nor do they provide any supporting evidence of its specificity toward MELK. In fact, the authors themselves acknowledge the recent report that seems to invalidate the presumed dependency on MELK of several cell line models (https://elifesciences.org/articles/24179) and suggests the antiproliferative activity of OTSSP167 has substantial effects on targets other than MELK. Thus, the inference that MELK regulates mitotic spindle formation is not sufficiently proven, since OTSSP167-treated cells were used to draw this conclusion. Due to the uncertain specificity of OTSSP167 toward MELK, several questions should be considered in order to bolster the authors' findings of the importance of MELK in PCa aggressiveness and its utility as a therapeutic target.
1) Does the converse experiment (overexpression of MELK in a low-expressing cell line vs abrogation of expression in a high-expressing cell line) result in the upregulation of the same genes and increased aggressiveness?
2) Does the expression of genes identified as MELK-regulated get modulated similarly in response to OTSSP167 treatment of cells that do not express MELK?
3) Does decreasing cellular proliferation by another means, e.g. androgen deprivation of androgen-dependent CaP cells, result in similar expression modulation of the same genes?
4) Are there any CaP cell lines that do not respond to MELKi or siMELK?

A new paragraph has been included in the discussion section discussing these recent publications and commenting on the possible role that MELK inhibition could play in the context of prostate cancer progression towards antiandrogen therapy resistance due to cellular lineage plasticity and the acquisition of neuroendocrine and stem cell phenotypes.

Reviewer's comment: It is not clear why the authors chose to focus on the RNAseq from the Pten-/- mice and not the p53-/-;Rb-/- mice in Figure 2. A better justification for this in the text is needed.

Authors' response: The reason that Figure panels 2C to 2E focus on the Pten-/- data is that the analyses conducted would not have been feasible with the data obtained from p53-/-;Rb-/- mice. Only a very small number of differentially expressed genes were identified in PIN lesions from p53-/-;Rb-/- mice compared to normal prostate lobes (between 25 and 63 genes depending on the lobe, at a significance level of 0.01). We believe that this is because the p53-/-;Rb-/- mice developed low-grade PIN lesions, which did not accumulate many gene expression alterations. The analyses described in Figure panels 2C to 2E, e.g. pathway analysis, would not have been informative with such a small number of differentially expressed genes. Furthermore, we were unable to differentiate similarly distinct stages of tumour progression in the p53-/-;Rb-/- mice as we did in the Pten-/- mice (e.g. medium- and advanced-stage tumours).
In our hands, the Pten-/- model was thus better suited to exploring how gene expression patterns differ between different prostate lobes and stages of prostate cancer progression, and we have now revised the manuscript to explain this. It is worth noting that the subsequent analyses, including the cross-species analyses aimed at identifying potential therapeutic targets, considered both mouse models to an equal extent, and we found that this was valuable as it improved the overlap between the mouse and human data (Figure EV1B).

Reviewer's comment: Note: labeling of AdT-specific genes is unclear in Fig. 2E.

Authors' response: We thank the reviewer for bringing this to our attention and have corrected the labelling to be consistent with the other figures.

Reviewer's comment: The two distinct mouse models are not clearly labeled in Supplemental Fig. 4, and an image of PIN lesions for the p53-/-;Rb-/- model as described in the text is lacking.

Authors' response: We thank the reviewer for bringing this to our attention. Supplemental Figure 4 (now referred to as Figure EV3 in this resubmission) has been revised to include labels indicating the two distinct mouse models. The image showing PIN lesions for the p53-/-;Rb-/- model that was inadvertently omitted in the first version of this figure has also been added.

Reviewer's comment: Could the authors comment on the age-to-X phenotype (PIN, MedTumor, AdTumor) for each of the mouse models? Although these are well-studied models, this information would allow the reader to better place this study in the context of the field.

Authors' response: This information has been added to the beginning of the results section.

Reviewer's comment: The labeling of Supplemental Fig. 6E is unclear. It is difficult to tell which line represents MELKi 4, and there are unlabeled data points as well.

Authors' response: This figure (referred to as Figure EV5 in this resubmission) has been edited for clarity.

Reviewer's comment: The work identifying MELK as a potential therapeutic target in PCa is very thorough and the results and interpretations are sound. However, the studies conducted to validate MELK as a therapeutic target in PCa rely heavily on the activity and specificity of the chosen MELKi (OTSSP167). Although many of the results obtained using the MELKi were similar to the results obtained with MELK-targeting siRNA, this correlation is insufficient support that the putative MELKi is, in fact, inhibiting MELK activity. Moreover, the authors do not describe the source of the MELKi nor do they provide any supporting evidence of its specificity toward MELK. In fact, the authors themselves acknowledge the recent report that seems to invalidate the presumed dependency on MELK of several cell line models (https://elifesciences.org/articles/24179) and suggests the antiproliferative activity of OTSSP167 has substantial effects on targets other than MELK. Thus, the inference that MELK regulates mitotic spindle formation is not sufficiently proven since OTSSP167-treated cells were used to draw this conclusion.

Authors' response: We thank the reviewer for these important comments and suggestions, and we agree that the specificity of the compound used to inhibit MELK is a key consideration, given that off-target effects are commonly observed with kinase inhibitors. In the revised manuscript, as well as in this response, we are thus presenting additional data and context to support the conclusions drawn in our study.
The MELK inhibitor used in this study, OTS167, was first described by Chung and colleagues (Chung et al, 2012). In their study, the ability of OTS167 to inhibit MELK was demonstrated using in vitro kinase assays. Chung and colleagues also tested the growth-inhibitory effect of OTS167 in several different cancer cell lines and found that cells with low MELK expression are much less sensitive to growth inhibition by OTS167 than cells with high MELK expression. We have revised Figure EV4 to include data supporting that OTS167 also inhibits MELK at the concentrations and in the experimental system used in our study. Treatment of C4-2b cells with OTS167 reduced the phosphorylation of ACC at Ser79 (revised Figure EV4A), a known MELK substrate site (Beullens et al, 2005), in a dose-dependent manner. Furthermore, OTS167 treatment also reduced MELK protein levels (revised Figure S5A), which has been previously observed and attributed to decreased MELK stability due to inhibition of autophosphorylation (Lizcano et al, 2004; Badouel et al, 2010; Chung et al, 2016). Taken together, these results support the conclusion that OTS167 does indeed inhibit MELK activity under the experimental conditions used in this study. We have also added the source of OTS167 used in our experiments to the methods section, which was inadvertently omitted in the original version of this manuscript.

We fully agree with the reviewer that studying the effects of MELK overexpression in a low-expressing prostate cancer cell line would be a worthwhile experimental approach, and indeed increased aggressiveness following overexpression of MELK has been previously demonstrated in breast cancer (Wang et al, 2014). However, despite our best efforts, we were unable to identify a prostate cancer cell line that could serve as a suitable model system. We initially investigated five prostate cancer cell lines (LNCaP, C4-2, C4-2b, PC-3, DU145) and one non-transformed prostate cell line (PNT1a) that are regularly used in our laboratory. All six of these cell lines exhibited robust expression of MELK, with only relatively minor differences between cell lines (Response Figure 1); MELK expression levels as assessed by qPCR were less than 2-fold higher in the highest-expressing cell line (DU145) than in the lowest-expressing cell line (PNT1a). In an effort to identify a more suitable MELK low-expressing cell line, we retrieved data on MELK expression from the Cancer Cell Line Encyclopedia (https://portals.broadinstitute.org/ccle). MELK expression data was available for eight prostate cancer cell lines (NCIH660, VCaP, MDAPCA2B, DU145, LNCaP, 22RV1, PC3, PRECLH), five of which were not included in our own cell line panel. A comparison of MELK expression in prostate cancer cell lines with other cell lines for which data is available in the Cancer Cell Line Encyclopedia illustrates that MELK expression in all eight prostate cancer cell lines is comparatively high overall and displays relatively little variation between cell lines (Response Figure 2). In contrast to many other cancer types, e.g. breast, stomach and melanoma, there are no clear "outliers" with low MELK expression among prostate cancer cell lines that are likely to be promising models to test the effect of MELK overexpression.

The reviewer also raises the question of whether there are any prostate cancer cell lines that do not respond to siMELK or treatment with OTSSP167.
In our laboratory, we have so far tested the growth-inhibitory effect of siMELK in LNCaP, C4-2, C4-2b and PNT1a cells, and have observed reduced proliferation and decreased cell viability in all cases (Figure 5D, Figure EV4E, Response Figure 3). Consistent with this, all prostate cell lines tested to date (LNCaP, C4-2, C4-2b, PC-3, DU145, PNT1a) were sensitive to treatment with OTS167. These results are not surprising, considering that all of these cell lines exhibit robust MELK expression as outlined above. Interestingly, despite the relatively modest differences in MELK expression, we did observe a statistically significant correlation between MELK expression levels and sensitivity to OTS167 in the six cell lines, which would be consistent with the interpretation that the growth-inhibitory effects of OTS167 may be mediated by MELK. These results have now been incorporated into Figure 5F.

Response Figure 3: Effect of siMELK on growth of C4-2 and PNT1a cells. C4-2 cells (left) or PNT1a cells (right) were transfected with siRNAs directed against MELK or a non-targeting control, and viable cells were counted at the time points indicated. Preliminary data: n = 2 for C4-2, n = 1 for PNT1a, with three technical replicates per biological replicate.

In order to address the reviewer's question of whether inhibition of cell proliferation by other means results in expression modulation of the same genes as treatment with OTS167, we used microarray and RNA-seq data of LNCaP and C4-2b cells treated with established growth-inhibitory compounds: the androgen inhibitors enzalutamide (Wang et al, 2016) or bicalutamide (unpublished data from our laboratory) and the AMPK activators AICAR and metformin for 24 h (Jurmeister et al, 2014). To facilitate cross-comparability, we only used genes covered in all datasets (11,210 genes), averaged microarray data across all probes of the same gene after inter-quartile normalisation, and used a moderated log2 fold-change estimate for RNA-seq data. We then used principal component analysis (PCA) to discern systemic differences between treatments. As shown in Response Figure 4, the gene expression profile of cells transfected with siMELK #2 most closely resembled that of cells treated with OTS167 for 24 h, and the gene expression profile of cells transfected with siMELK #3 most closely resembled that of cells treated with OTS167 for 8 h. By contrast, there was greater variance between MELK knock-down or OTS167 treatment and the other growth-inhibitory stimuli. This suggests that silencing of MELK and treatment with OTS167 result in relatively similar gene expression profiles compared to unrelated treatment conditions.

Response Figure 4 legend: To facilitate cross-comparability, only genes covered in all conditions (11,210 genes) were selected. Microarray data were averaged across all probes of the same gene after inter-quartile normalisation. A moderated log2 fold-change estimate was used for RNA-seq data. PCA was used to discern systemic differences between treatments.
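To make the comparison strategy concrete, the following is a minimal Python sketch of this kind of PCA-based treatment comparison, assuming each treatment has already been summarised as a per-gene log2 fold-change vector over the shared genes. The random numbers below merely stand in for real fold-change data; only the treatment labels and gene count are taken from the text.

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: one row per treatment, one column per shared gene.
treatments = ["siMELK_2", "siMELK_3", "OTS167_8h", "OTS167_24h",
              "enzalutamide", "bicalutamide", "AICAR", "metformin"]
log2fc = np.random.default_rng(0).normal(size=(len(treatments), 11210))

# Project the treatments into two principal components; treatments that
# induce globally similar expression changes end up close together.
pca = PCA(n_components=2)
coords = pca.fit_transform(log2fc)

for name, (pc1, pc2) in zip(treatments, coords):
    print(f"{name}: PC1 = {pc1:.2f}, PC2 = {pc2:.2f}")
print("Explained variance ratios:", pca.explained_variance_ratio_)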
The totality of data presented above and in our revised manuscript continues to support the conclusion that the growth-inhibitory effects of OTS167 in prostate cancer cells are at least in part mediated through MELK:
• OTS167 inhibits MELK under the experimental conditions used in the study, as evidenced by reduced phosphorylation of a MELK substrate and decreased MELK protein levels.
• MELK expression positively correlates with sensitivity to OTS167 in a panel of prostate cell lines.
• Treatment with OTS167 and silencing of MELK both result in similar changes in the expression of cancer-relevant genes, and the resulting gene expression profile is distinct from that induced by unrelated growth-inhibitory compounds.
• Growth inhibition and induction of apoptosis are not only observed following treatment with OTS167, but also following siRNA-mediated knock-down of MELK.

Nonetheless, we have revised the discussion section of the manuscript in order to accurately reflect recent literature indicating that, like most kinase inhibitors, OTS167 inhibits more than one kinase (Ji et al, 2016), and to discuss the potential implications for our study. To avoid giving the impression of complete specificity for MELK in the absence of data to this effect, we have also changed all references to the inhibitor in the text and figures from "MELKi" to "OTS167". Finally, we acknowledge the reviewer's point that further experiments will be required in order to determine whether the effect of OTS167 on mitotic spindle formation is mediated through MELK or through another target of the inhibitor, and we have now revised the text and figures of our manuscript to reflect this. Nevertheless, we feel that this does not significantly impact the main conclusions of the study, namely that the cross-species approach described in the manuscript is able to identify potential therapeutic targets in prostate cancer, of which MELK serves as one example.

Reviewer's comment: A graphical overview of the approach used to identify MELK as a target for PCa would be useful.

Authors' response: We agree with the reviewer and have revised Figure 3C to show a graphical overview of the steps used to derive potential therapeutic target genes for prostate cancer and identify MELK.

Reviewer's comment: The experimental methods section (and other places in the main text) should refer to the Supplementary Text for relevant information.

Authors' response: We have revised the main text in line with the reviewer's suggestion.

Reviewer's comment: White areas in the legends of Fig 4D & E are not represented in the graph, challenging rapid interpretation.

Authors' response: We have revised the legend of Figure 4D and E and hope that this will aid interpretation of the figure.

Thank you for the submission of your revised manuscript to EMBO Molecular Medicine. We have now received the enclosed reports from the referees that were asked to re-assess it. As you will see, the reviewers are now supportive, and I am pleased to inform you that we will be able to accept your manuscript pending a few final editorial amendments.

Referee #3 (Comments on Novelty/Model System for Author): As the title indicates, this approach identifies *potential* therapeutic targets, making these studies largely preclinical. Medical impact will be higher when therapeutic targets identified by this approach are validated in human trials.

Referee #3 (Remarks for Author): The authors have addressed the reviewers' concerns very thoughtfully and thoroughly.

See Materials and Methods section, "Data Analysis and Graphical Representation". No power analysis was done a priori of the study design, since the effect size of the changes was unknown. Generally, a minimum of n = 3 biological replicates was used. See Appendix Supplementary Methods, "In vivo studies". We did not perform any statistical method to choose the group size of the in vivo studies, as we did not have enough information on the variability of the model being used.
For that reason we chose to use n = 10 for each animal group, expecting that such a size would provide enough power to the study to detect the effects induced by the treatments. See Appendix Supplementary Methods, "In vivo studies". No blinding was done in the in vivo studies. However, the calliper measurements in the xenograft study were captured by a scientist involved neither in the project nor in the analysis, to avoid bias in the collection of data. The data were obtained and processed according to the field's best practice and are presented to reflect the results of the experiments in an accurate and unbiased manner.
What are Digital Public Health Interventions? First Steps Toward a Definition and an Intervention Classification Framework

Digital public health is an emerging field in population-based research and practice. The fast development of digital technologies provides a fundamentally new understanding of improving public health by using digitalization, especially in prevention and health promotion. The first step toward a better understanding of digital public health is to conceptualize the subject of the assessment by defining what digital public health interventions are. This is important, as one cannot evaluate tools if one does not know what precisely an intervention in this field can be. Therefore, this study aims to provide the first definition of digital public health interventions. We will merge leading models for public health functions by the World Health Organization, a framework for digital health technologies by the National Institute for Health and Care Excellence, and a user-centered approach to intervention development. Together, they provide an overview of the functions and areas of use for digital public health interventions. Nevertheless, one must keep in mind that public health functions can differ among health care systems, limiting our new framework's universal validity. We conclude that a digital public health intervention should address essential public health functions through digital means. Furthermore, it should include members of the target group in the development process to improve social acceptance and achieve a population health impact.

Background

As digitization plays a large role in an increasing number of health systems, digital public health is an emerging field for population-based research and practice. The fast development of both hardware- and software-based digital technologies provides a fundamentally new understanding of improving public health, which can be achieved through digitalization, especially in prevention and health promotion. For example, digital technologies may improve physical activity levels, dietary intake, posture, and mental well-being via sensors and apps [1]. Technological innovations in apps for tracking health-related behavior, monitoring potential health risks, and communication and interaction have rapidly changed many aspects of public health [2]. However, not all of these interventions may achieve a health impact at the population level by displaying effectiveness in randomized clinical trials and efficacy under quasi-experimental real-world circumstances. Although there is a need for evaluation methods that address the many challenges that arise with digitization (eg, fast-paced development), it is challenging to assess digital public health interventions, as these may span from population health surveillance to the prevention of specific diseases, and they develop faster than analog interventions [3]. Moreover, companies and institutions often develop digital tools based on market evaluations, expected profits, and technological possibilities, but not based on the public's needs and preferences. To improve the effectiveness and efficacy of digital public health interventions, we first need to understand what they entail and how they are defined. However, to our knowledge, no definition for digital public health interventions exists to date. Only by establishing such a definition will we gather meaningful, valid, and reliable results on their effectiveness and efficiency.
Thus, the aim of this viewpoint is to offer a definition for digital public health interventions. Before defining digital public health interventions, we need to explain the differences between eHealth, mobile health (mHealth), digital health, and digital public health. This is necessary to highlight the differences between digital health interventions (DHIs) and digital public health interventions. Following this, we will build a framework that might help to identify, structure, and classify digital public health interventions. Our definition and framework will rely on existing approaches from public health [4], digital health [5], and user-centered design [6]. The first section of building the framework will explain why we chose the selected models; we decided to use the Essential Public Health Functions (EPHFs) by the World Health Organization (WHO) for the public health level [4], the updated version of the Evidence Standards Framework for Digital Health Technologies by the National Institute for Health and Care Excellence (NICE) [5], and the Participatory Health Research approach by Wright [6]. After clarifying the reasons for choosing the named approaches, we will explain each model and how it relates to digital public health and may be used for digital public health purposes. After setting the theoretical background for our definition, we will illustrate our findings using the German Corona App as an example to validate our digital public health intervention criteria. We will conclude with a definition for digital public health interventions and use this to propose a digital public health intervention classification framework (Multimedia Appendix 1).

Differences Among eHealth, mHealth, Digital Health, and Digital Public Health

Terms such as eHealth, mHealth, or digital health are used in the context of the digitization of public health. Since 2019, a few papers have also referred to the term digital public health. Given the multitude of terms and definitions in digital health, it is essential to understand the considerable heterogeneity of how such terms interrelate with each other and where digital public health might find its place in the terminological canon of digital health. Therefore, the following section will define the named terms and summarize their core fields of action and the target group's level, as seen in Figure 1.

Figure 1. Core field of action and target group level of mHealth, eHealth, digital health, and digital public health [7-15]. mHealth: mobile health.

An article on eHealth concepts based on an extensive literature search [7] confirms the lack of consensus on the meaning of eHealth as possibly the first word in this field. A 2005 study found 51 different definitions of eHealth [8]. This lack of consensus highlights the importance of a shared understanding of terms. More recent studies emphasize that because of its immense dynamics, the field of eHealth is challenging to define [9]. Most definitions share the use of information and communication technologies (eg, the internet) for health topics. Their focus mostly lies on delivering health services rather than health promotion and disease prevention [10-12]. Some definitions also highlight the importance of user-centered approaches for facilitating health services in eHealth [10,12]. The word mHealth aims more directly at a particular technology, namely smartphones and mobile sensors, in their health significance.
Thus, mHealth is defined more precisely overall, although different technologies are used here [13]. As a part of eHealth, the focus of mHealth lies on wireless and mobile technologies and their use in enhancing health-related science, treatment, and ultimately health status [9]. Fatehi et al [14] stated that in 2020, there were >90 different definitions for digital health. They concluded that digital health includes eHealth, mHealth, self-tracking, wearable devices, artificial intelligence, and information systems in health care, focusing on health and not technology. Digital health focuses on the health of individuals (eg, patients) to improve health care with technology [14]. This is where digital health and digital public health differ, as digital public health aims to improve health and well-being at the population level. Nevertheless, digital public health uses the same technologies that are also used to improve individual health care; however, its purposes change. A recent publication by Zeeb et al [15] provides the first overview of what digital public health might be. It serves as a starting point for developing a better understanding of digital public health interventions by providing a short introduction to central terms. Thus, following the article by Zeeb et al [15], the authors propose the following definition in distinction to the other abovementioned fields and outline for which fields they see it relevant (own translation from German):

DiPH [...] focuses on the development, application, and knowledge interest on Public Health and thus on prevention, health promotion, and the related basic sciences such as epidemiology. Primary clinical and individual patient-related aspects are not in the foreground, unlike, for example, telemedicine with its concrete application in an individual treatment and care context. However, it should be noted that the term DiPH has not yet prevailed over others such as eHealth and mHealth. Also, this is hard to expect given the diversity and dynamics of the terms used to date. Where, however, the focus of digitization and health is on population, prevention, and health promotion, including a conscious analysis of health inequalities, DiPH can offer a clearer classification than some other terms in this field.

Although the definition by Zeeb et al [15] serves as a good starting point for the discussion, it mainly focuses on the primary level of prevention (ie, preventing a disease or injury before it occurs). Although public health, in its essence, comprises 3 levels of prevention, according to this definition, secondary prevention (eg, reducing the impact of a disease or injury after it occurred) and tertiary prevention (eg, rehabilitation) are not explicitly mentioned by Zeeb et al [15] as part of digital public health. The central challenge of defining digital public health is the integration of digital development and technologies into public health concepts, using them to achieve public health goals rather than redefining and reconceptualizing public health in the face of technological advancements [5,15]. To develop a first working definition and classification framework for digital public health interventions, established models for both digital health [5] and public health [4] were assessed in order to combine aspects of these models into a more holistic definition of digital public health interventions. Here we suggest that digital public health, as a complex intervention, should be viewed from different perspectives.
As such, our proposed classification is a combination of the elements of already existing models. Specifically, the EPHF by the WHO provided us with an overview of the necessary core functions of public health, which might also be addressed by digital means. Our definition will explicitly include the area of health promotion (ie, focusing on health resources) and all three levels of prevention (ie, primary prevention to reduce the risk of disease development, secondary prevention as screening and early diagnosis, and tertiary prevention for rehabilitation), as all these levels are included in the EPHF by the WHO [4]. We will then link our definition to the participation approach: a user-centered model for the development of digital interventions to increase the acceptance of digital public health interventions among target groups [6]. This will create a framework that follows the concepts (goals) of public health.

Choice of Included Models

Following a narrative approach and based on the authors' expertise and experience in the field of public health, three layers were identified: (1) overview, a larger operational layer where the central functions of digital public health are mapped; (2) structure, a layer that focuses on structuring digital public health activities (eg, by functions); and (3) improvement, a layer that specifically includes the individual perspective in the development and use of digital public health interventions. For each layer, a framework was identified based on the authors' expertise and previous experience with the frameworks and a nonsystematic literature search for alternative frameworks. The frameworks included are as follows: the EPHF by the WHO, which offers a macro view of public health topics [4]; the Evidence Standards Framework for Digital Health Technologies by NICE, which categorizes DHIs for the UK setting [5]; and the Steps of Participation Approach suggested by Wright [6]. Together, the EPHF and the NICE framework build the base for mapping and structuring digital public health interventions. We then use the Steps of Participation Approach suggested by Wright [6]. This is a user-centered approach for intervention development that aims to increase acceptance within the target group. The approach provides well-described and clear-cut categories for target group involvement: participation and nonparticipation alike. With all 3 models aligned, a conceptual pyramid for digital public health intervention classification is formed (Figure 2).

EPHFs by the WHO

A way of addressing public health goals to affect population health is using the EPHF [4]. Following the WHO report EPHFs, health systems and health security: developing conceptual clarity and a WHO road map for action, these functions can be separated into cross-cutting, horizontal functions, roughly based on the building blocks approach to health systems, and service-based, vertical functions comprising the traditional public health services provided by modern health systems [4]. Although there is no precise definition for each part, as they depend on each health care system or region, the WHO report identified some significant categories that most EPHF share (Textbox 1).

Textbox 1. Essential public health functions according to the World Health Organization [4].
Horizontal functions
• Governance (eg, public health management, policy, and planning, or quality assurance in health services)

These are just a few examples of fields of action in both (analog) public health interventions and digital public health interventions. However, we could not apply all EPHF to every setting. They depend strongly on specific health care systems, which differ among countries. In general, the WHO regards public health interventions primarily as an effort or policy that attempts to improve mental, social, and physical health at the population level by including and addressing EPHF [4]. Analogous to public health interventions, digital public health interventions have the potential to include and address horizontal as well as vertical functions. The governmental regulation of mHealth apps as medical devices with the possibility of reimbursement, as started in Germany in December 2020 [16], may be one way of applying the horizontal EPHF of governance to digital public health. Various countries have also developed proximity-tracing apps as tools for population health surveillance and monitoring during the SARS-CoV-2 pandemic [17,18]. The last example of applying the horizontal EPHF to digital public health is the digitization of health care systems in total. This leads to a redesign around people's needs and expectations in, for instance, web-based consultation services or telemedicine for people in rural areas who do not have access to health care professionals [19]. As for vertical public health functions, apps and wearables for self-monitoring, step counting, and fitness tracking can serve as examples of vertical digital public health functions. Their goal is to promote health and a healthy lifestyle [20,21].

Level of Interaction: NICE Framework

Applying EPHF as a cornerstone for the identification and mapping of digital public health interventions provides an initial overview of the field of digital public health. The next step is to further structure such interventions based on the functional classification proposed by NICE's Evidence Standards Framework for Digital Health Technologies [5]. This applied framework describes the types and levels of evidence needed to show the effectiveness and expected economic impact of a DHI. Various publications have used this framework for their digital health technology assessment, which confirmed our view that this framework is both well known and widely used in the scientific field of digital health [22-24]. The NICE framework aims to establish standardized criteria that can assess DHIs by providing a functional classification and stratification into evidence tiers. This separation illustrates the main functions of the types of interventions that we expect to be the most widely developed (Textbox 2).

Textbox 2. Evidence tier C: interventions
• Preventive behavior change: address public health issues (eg, smoking, eating, alcohol, sexual health, sleeping, and exercise)
• Self-manage: allows people to self-manage a specific condition; may include behavior change techniques

Such apps usually display level 2 functions (ie, informing, simple monitoring, and communication), which mirror the underlying EPHF, such as disease prevention and health information systems. The first 3 functions within evidence tier C serve as a digital example of the vertical functions of health promotion as well as disease prevention (such as mobile apps on prescription [16]).
Finally, the last 3 functions in tier C, although focusing more on the medical and individual level than the other tiers and functions, can be seen as a part of health promotion and disease prevention. Unlike the first 3 functions in tier C, which focus more on the primary prevention area, the last 3 functions are more closely linked to secondary and tertiary prevention. Specifically, the functions of DHIs in tier C include the early diagnosis of specific conditions and rehabilitation and healing, which improves the user's health (for instance, a national telemedicine service [19]). As seen, there is an interrelation between the NICE framework and EPHF, supporting the argument that digital public health interventions can address EPHF. The critical part here is that the NICE framework, unlike the WHO EPHF, provides a structure for the degree of complexity (ie, level of interaction) based on the user's risk. Following the understanding of complex and multicomponent interventions that act and interact on different levels, the benefits and acceptance of digital public health interventions depend on the users of such interventions and their specific perspectives. Any digital public health intervention can ultimately fail if the population does not accept or use it. Thus, it is essential to involve target groups in the development of these interventions. We propose a participatory and user-centered approach for intervention development as the third cornerstone of digital public health interventions.

User-Centered Approach in Intervention Development

Hochmuth et al [25] advocated that complex and multicomponent interventions require a user-oriented intervention design because of the varying intricacies of such interventions. This intricacy can be based on the following [3,25]:
• Interactions between technological components (eg, sensors for data acquisition)
• Different requirements for users in the implementation of the intervention (eg, knowledge of data security)
• Involvement of other groups or organizational levels (eg, patients or researchers)
• Degree of adaptation or flexibility of the intervention (eg, further agile development through software updates)

To follow a user-centered approach, developers must integrate the users (ie, the target group) in the development process. A way of structuring the involvement of users is the Steps of Participation Approach suggested by Wright [6]. This model describes the user's nonparticipation and involvement in the research process. It further differentiates among 9 stages, ranging from instrumentalization to self-organization. The 9 stages provide a hierarchical order not only for participation but also for the nonparticipation of target groups in the development of public health interventions. Although it includes 9 stages, only the last 4 include real participation, according to Wright [6], as the first 2 have no target group members involved in the development process. Steps 3 to 5 are the precursors for participation. As stated by this approach, one can speak of participation only in those areas where people have the power to participate in the decision-making processes [26]. The 9 significant steps based on Wright [6] are described in the following sections. As shown in Table 1, the distinguishing feature of the nonparticipation group is that the first 2 steps completely exclude the target group.
Although grades 3 to 5 recognize the target group as advisers, they do not include them in the decision-making process, which occurs in steps 6 to 9 (Table 1). The chance to successfully roll out and implement a digital public health intervention increases as the development process includes the target group [27]. Therefore, a user-oriented way of developing digital public health interventions to increase acceptance and use of such interventions should be a goal of digital public health. A way of including target groups in the development of new digital public health interventions could be to apply user-centered approaches to the development process [28]. The aim here is to look at issues from various stakeholder perspectives and create new ideas in an interdisciplinary team to solve potential problems and challenges throughout the development of a digital public health intervention. Ideally, this approach also includes the target group (eg, for an app) to increase acceptance and use. Generally, participatory development processes are iterative and may be designed in various forms depending on the goal. In principle, the following four steps can shape the process: (1) concept generation and ideation; (2) prototype design and system development; (3) evaluation; and (4) deployment, including various feedback loops (Figure 3). After an initial analysis of the user's needs, the developers collect the criteria for functions and design. Then, they convert these recommendations into the functional specifications of a user-centered design. Using walkthroughs and usability testing, prototypes are tested and perfected before deployment, which helps to expose latent practical and interface design weaknesses. The developing team can achieve this by analyzing remotely collected data using automatic data transmission or video use. As usability testing is a pillar of the best practices for medical system architecture [28], production teams should also apply this to the development of digital public health interventions.

Table 1. Steps of participation based on Wright [6] (excerpt).
Precursors for participation
- Step 3: information
- Step 5: involvement (the decision-makers are advised by selected persons from the target group)
Participation
- Step 6: co-determination (6.1 the decision-makers consult with the target group; 6.2 negotiations between target group representatives and decision-makers; 6.3 the target group members have a say)
- Step 7: partial transfer of decision-making authority (7.1 a right of participation in the decision-making process; 7.2 decision-making authority is limited to certain aspects)
- Step 8: decision-making power
- Step 9: self-organization (9.1 the responsibility for a measure or a project is entirely in the hands of the target group)

Figure 3. Schematic representation of the user-centered design process [28].

Principal Findings

The aim of this viewpoint paper was to define digital public health interventions and provide an exemplary classification framework for digital public health interventions. Such an approach may help identify core areas of digital public health interventions, which in turn might be helpful during the development and evaluation phases of digital public health interventions. We argue that it is crucial to examine digital public health interventions from 3 different perspectives. The first one should be the WHO framework for EPHF [4].
This is important as it provides an overview of which activities that strengthen and maintain health at the population level belong to public health as a discipline and, therefore, what a public health intervention may be. The second perspective focuses on the digital aspects of an intervention. A suitable framework is the Evidence Standards Framework for Digital Health Technologies by NICE, as it classifies digital interventions based on their functions and defines corresponding evidence standards [5]. Both frameworks combined enable us to categorize digital public health interventions according to the area of public health and the level of interaction between the user and the digital tool. The last perspective focuses on user involvement in the development of such interventions, as proposed by Wright [6]. This is of great importance, as studies suggest that the acceptance of target users increases with more involvement in the process of development, testing, and implementation. Therefore, acknowledging the 9 levels of user participation (and focusing on levels 6 to 9) may enable developers to create even more significant and meaningful digital public health interventions for their target group. Our current approach relates to a single and, to the best of our knowledge, the only definition of digital public health. As it is natural for such definitions to evolve over time as the field evolves, our suggestion for a definition of digital public health interventions might also evolve, as one cannot talk about a definition for digital public health interventions without defining the borders of digital public health. Although the EPHF suggested in this perspective piece refer to a summary by the WHO, some readers of this paper might find it hard to apply them to their specific context. This may very much depend on the health care system in which the digital public health intervention is developed. Therefore, the EPHF listed in this study should not be seen as a final list of public health functions or classification frameworks but rather as examples of core functions and goals. Similarly, the NICE framework might not be applied directly in other countries with different health systems and contexts; however, it might provide a helpful starting point for identifying relevant frameworks for such systems or developing their own frameworks that focus on interaction and functional classifications. Possible steps for participation to include user perspectives and methods (eg, user stories) might differ depending on the format and content of a specific intervention. For example, one cannot expect an app that facilitates communication between physicians and patients to unfold its full potential when the development team does not consider both perspectives regarding design, functions, and content [29,30]. Some effects might be more visible on a public health scale than others, depending on the population's size and the health system for which the intervention was initially developed. More importantly, digital public health interventions should display their effectiveness beyond the laboratory in the real world. They should do so by providing study results with high internal validity and results with high external validity. This well-known approach within empirical social research ensures that measurable effects transpire from the laboratory to the real world. The following example aims to display the connections among the 3 analyzed frameworks and models.
Since the beginning of the SARS-CoV-2 pandemic in early 2020, various countries have developed contact-tracing apps for monitoring and surveillance [31]. The primary function of such apps is to notify users after contact with someone who was (later) tested positive for the SARS-CoV-2 virus [32]. As previously mentioned, contact-tracing apps serve the horizontal EPHF of health information systems. Most countries set the bar for data security in contact-tracing apps high to improve users' trust. Conversely, a high level of data protection prevents the collection and analysis of epidemiologically relevant data, making it more difficult to assess their effectiveness from a public health perspective. When developing a contact-tracing app, it is necessary to weigh the protection of privacy and the potential public health benefits against each other [33]. This constraint on data availability for public health (research) limits contact-tracing apps to evidence tiers 1 and 2 within the NICE framework for DHIs. Although simple monitoring (as level 2 demands) is possible, the apps do not aim to calculate diagnoses, as tier 3 would require; instead, they provide recommendations, such as the different colors for warning levels in the German app. As previously mentioned, participation in the development process is key to a successful intervention.
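To illustrate the privacy constraint described above, the following is a minimal, deliberately simplified Python sketch of decentralised exposure matching of the kind used by such apps. The identifier scheme and numbers are illustrative assumptions, not the implementation of the German Corona App or any other specific system.

import secrets

def new_ephemeral_id() -> bytes:
    # Rotating random identifier broadcast over Bluetooth; it carries
    # no identity or location information.
    return secrets.token_bytes(16)

# Each phone stores only the ephemeral IDs it has observed nearby.
observed_ids = {new_ephemeral_id() for _ in range(100)}

# A user who tests positive publishes the IDs their own phone broadcast.
published_positive_ids = {new_ephemeral_id() for _ in range(50)}
# Simulate that one observed ID belongs to a later-confirmed case.
published_positive_ids.add(next(iter(observed_ids)))

# Matching happens locally on each device; only the warning is surfaced,
# which is why little epidemiological data reaches public health research.
if observed_ids & published_positive_ids:
    print("Possible exposure: raise warning level")
else:
    print("No exposure recorded")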
Conclusions

This study aimed to provide the first definition and classification framework for digital public health interventions. Here, we suggest that digital public health, as a complex intervention, should be viewed from different perspectives. As such, our proposed classification is a combination of the elements of already existing models: specifically, the EPHF by the WHO, which provided us with an overview of the necessary core functions of public health that might also be addressed by digital means. The NICE framework gave us an overview of different areas for digital technologies and potential evaluation requirements. Both models together form a framework for describing digital public health interventions. However, without the inclusion of target groups in user-centered processes during development, these interventions may lack efficiency and the acceptance of potential users. Therefore, we propose that an established user-centered design process be included in the development of digital public health interventions. Nevertheless, users of our definition and framework must check the validity of our criteria within their setting (eg, population structure, understanding of public health, and health care system). Taking together the different strains of research that might provide a better understanding of the term digital public health intervention, a first definition might be as follows:

A Digital Public Health Intervention addresses at least one essential Public Health function through digital means. Applying a framework for functional classification and stratification categorizes its interaction level with the user. The developmental process of a digital public health intervention includes the user perspective by applying participatory methods to support its effectiveness and implementation, with the goal of achieving a population health impact.

The first step toward a potential intervention classification framework was developed based on this definition and its underlying frameworks (Multimedia Appendix 1). The aim of this framework is three-fold: (1) to support the future reporting of digital public health intervention functions and effectiveness by providing a framework for classification; (2) to identify future requirements (eg, for evaluation) of such interventions; and (3) to support the implementation processes of digital public health interventions by linking implementation needs and characteristics with the classification framework (ie, a digital public health intervention addressing active monitoring in health care with high levels of user involvement might have different implementation needs than one addressing simple monitoring in health care with low levels of user involvement) [36]. We view a combination of all 3 models as a chance to set up a first definition and classification for digital public health interventions and hope that our approach will encourage the uptake and further development of our idea.
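As a closing illustration, the following is a minimal Python sketch of how the proposed three-perspective classification could be encoded as a data structure. The enum values paraphrase the cited frameworks, and the field names and the example values chosen for the contact-tracing app are our own illustrative assumptions, not part of the original models.

from dataclasses import dataclass
from enum import Enum

class EPHF(Enum):               # WHO essential public health functions (examples only)
    GOVERNANCE = "governance"
    SURVEILLANCE = "health information systems / surveillance"
    HEALTH_PROMOTION = "health promotion"
    DISEASE_PREVENTION = "disease prevention"

class NICETier(Enum):           # evidence tiers after the NICE framework, as used in the text
    TIER_1 = 1                  # system services
    TIER_2 = 2                  # informing, simple monitoring, communication
    TIER_3 = 3                  # diagnosis, treatment, active monitoring

class ParticipationStep(Enum):  # steps of participation after Wright [6] (names paraphrased)
    INSTRUMENTALIZATION = 1
    INSTRUCTION = 2
    INFORMATION = 3
    CONSULTATION = 4
    INVOLVEMENT = 5
    CO_DETERMINATION = 6
    PARTIAL_DECISION_AUTHORITY = 7
    DECISION_MAKING_POWER = 8
    SELF_ORGANIZATION = 9

@dataclass
class DigitalPublicHealthIntervention:
    name: str
    ephfs: list[EPHF]                 # at least one essential public health function
    nice_tier: NICETier               # level of interaction / evidence tier
    participation: ParticipationStep  # target-group involvement in development

# Hypothetical classification of a contact-tracing app as discussed above.
corona_app = DigitalPublicHealthIntervention(
    name="contact-tracing app",
    ephfs=[EPHF.SURVEILLANCE],
    nice_tier=NICETier.TIER_2,
    participation=ParticipationStep.CONSULTATION,
)
print(corona_app)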
DMFpred: Predicting protein disorder molecular functions based on protein cubic language model
Intrinsically disordered proteins and regions (IDP/IDRs) are widespread in living organisms and perform various essential molecular functions. These functions are summarized as six general categories, including entropic chain, assembler, scavenger, effector, display site, and chaperone. The alteration of IDP functions is responsible for many human diseases. Therefore, identifying the functions of disordered proteins is helpful for studies of drug target discovery and rational drug design. Experimental identification of the molecular functions of IDPs in the wet lab is an expensive and laborious procedure that is not applicable on a large scale. Some computational methods have been proposed, but they mainly focus on predicting the entropic chain function of IDRs, while computational predictive methods for the remaining five important categories of disordered molecular functions are still lacking. Motivated by the growing number of experimentally annotated functional sequences and the need to expand the coverage of disordered protein function predictors, we propose DMFpred for disordered molecular function prediction, covering the disordered assembler, scavenger, effector, display site, and chaperone functions. DMFpred employs the Protein Cubic Language Model (PCLM), which incorporates three protein language models for characterizing the sequence, structural, and functional features of proteins, and attention-based alignment for understanding the relationship among the three captured features and generating a joint representation of proteins. The PCLM was pre-trained with a large set of IDR sequences and fine-tuned with functionally annotated sequences for molecular function prediction. The predictive performance evaluation on five categories of functional and multi-functional residues suggested that DMFpred provides high-quality predictions. The web server of DMFpred can be freely accessed from http://bliulab.net/DMFpred/.

Introduction
Proteins or regions that lack stable 3D structures under native physiological conditions are known as intrinsically disordered proteins and regions (IDP/IDRs). Recent studies have suggested that IDP/IDRs are common in nature, with more than 30% of proteins in eukaryotes being disordered [1,2]. The widespread occurrence of IDP/IDRs alters the classical protein structure-function paradigm [3][4][5]. IDP/IDRs play essential roles in living organisms; the alteration of their functions is responsible for many human diseases such as cancer [6], Alzheimer's [7], and Parkinson's [8]. Exploring the molecular functional mechanisms of IDP/IDRs will be helpful for a complete understanding of protein structures and functions, and will also help guide wet-lab experiments and inform studies of rational drug design [9,10]. The functions of protein disordered regions arise from their native structural flexibility or from their ability to bind to partner molecules [4]. These disorder functions can be summarized as six categories: entropic chain, assembler, scavenger, effector, display site, and chaperone [4,11]. The disordered entropic chain benefits directly from its intrinsically disordered conformation without becoming structured, serving as the connector between domains and between the structural elements making up domains [12]. Disordered assemblers bring together multiple binding partners and promote the formation of large protein complexes [4,5,13].
Scavenger disordered regions in proteins store and neutralize small ligands, such as chromogranin, salivary glycoproteins, and calcium-binding phosphoproteins [11,14,15]. Effectors interact with other partner proteins and modify their activity [16]. Some disordered regions serve as display sites, facilitating easy access to and recognition of post-translational modifications (PTMs) in proteins [17]. The disordered chaperone function enables IDRs to assist RNA and protein molecules in reaching their functionally folded states [18]. Intrinsic disorder is encoded in the protein sequence, motivating the development of computational sequence-based disorder predictors [19]. Currently, about 200 million disordered proteins have been identified experimentally or by prediction [20]. In contrast, only thousands of disordered proteins have functional annotations [21,22]. This gap suggests that it is important to develop computational predictors to fill the deepening divide between annotated and unannotated disordered sequences. In this regard, several sequence-based computational predictors have been proposed for predicting specific functions of disordered proteins. For example, DFLpred [23] and APOD [24] are computational methods developed for predicting disordered linkers that fulfill the entropic chain function in proteins. Besides, there are predictors for identifying disordered regions binding to specific types of molecular partners, including protein binding predictors [25][26][27][28][29][30][31][32], DNA and RNA binding predictors [33,34], and lipid binding predictors [35]. However, methods for predicting the other five classes (assembler, scavenger, effector, display site, and chaperone) of molecular functions of IDRs are still required. Protein representation is critical for the construction of computational predictors. Protein sequence defines structure, which in turn dictates function [4]. Intrinsically disordered proteins have forced a reassessment of the classical sequence-structure-function paradigm [36]; the complex sequence, structural, and functional properties of IDP/IDRs should therefore all be explored to fully represent disordered proteins. By modelling a language's generative rules, language models in natural language processing (NLP) comprehensively understand the language and capture the semantic features of text, making them an indispensable technology in NLP. Protein sequences can be viewed as the language of genetics, sharing high similarities with natural language sentences [37]. For example, natural language sentences composed of words express their semantics, while proteins composed of residues perform various functions. Inspired by these similarities, proteins can be represented and modelled by language models. In this paper, we propose the DMFpred predictor, which predicts five molecular functions of IDRs: assembler, scavenger, effector, display site, and chaperone. DMFpred employs the Protein Cubic Language Model (PCLM) to learn protein representations, consisting of three types of protein language models and an attention-based language model alignment (ALAN) module. The three protein language models are used to capture protein sequence, structural, and functional features, respectively. The ALAN module extracts the relationships among the three captured features and encodes their complementary information. The key challenge in functional prediction is that the number of disordered sequences with functional annotations is relatively small.
Transfer learning can transfer knowledge from tasks with plentiful training data to improve the performance of similar tasks, which is especially useful for tasks with limited training data [38]. Therefore, we first pre-trained PCLM with a large set of IDR sequences to capture the general disordered features of proteins. These general disordered features were then transferred separately to the five disorder-function prediction tasks via model fine-tuning. Benefiting from the pre-training and function-specific fine-tuning of PCLM, DMFpred captures features that are more relevant to disordered molecular functions. The ablation experiments demonstrated that each module of PCLM contributes to the predictive performance improvements, and further evaluation suggested that DMFpred provides high-quality predictions on all five categories of functional residues as well as on multi-functional residues, i.e., residues that carry more than one category of molecular function. The corresponding web server of DMFpred was established and can be freely accessed from http://bliulab.net/DMFpred/.

Benchmark datasets
The datasets used in this study were collected from DisProt [22], the major repository of manually curated functional annotations of intrinsically disordered proteins from the literature. All sequences in the database are functionally annotated at the amino acid level. In this study, we focused on five general categories of disordered molecular functions (DMFs): assembler, scavenger, effector, display site, and chaperone. Following the Intrinsically Disordered Proteins Ontology (IDPO) schema in DisProt, each of the five categories of function terms has one or two leaf terms (see S1 Fig). Here, we treat all leaf terms as belonging to the same functional class as their root terms. The sequences in the database are functionally annotated with amino acids as the basic unit, and we collected a total of 590 sequences containing residues assigned to at least one class of DMFs. For each functional class, we treat the residues annotated with that functional term in the database as functional residues and the others as non-functional residues. We then assign all functional residues in the sequences the label '1' and non-functional residues the label '0', leading to five tracks of labels corresponding to the five categories of DMF annotations. To avoid data redundancy, we performed similarity clustering on the 590 sequences using PSI-BLAST [39] with a threshold of 25% and filtered out sequences with pairwise sequence similarity >25%, ensuring that the sequence similarity between any two sequences in the collection was lower than 25%. The remaining 541 proteins were randomly divided into training, validation, and test sets in a ratio of 6:2:2. Finally, 324 sequences were used as the training set for model training, 106 sequences as the validation set for model selection, and 111 sequences as the independent test set (TEST-1) for evaluating predictive performance (S1 Data). The number of functional residues for the five categories of disordered molecular functions in the DMF benchmark datasets is given in Table 1.
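As a toy illustration of the residue-labelling scheme just described, the following Python sketch turns per-region annotations into one binary label track per DMF category. The input format here is a simplification for illustration, not DisProt's actual schema.

DMF_CLASSES = ["assembler", "scavenger", "effector", "display_site", "chaperone"]

def label_sequence(length, annotations):
    """annotations maps a DMF class name to a list of 1-based inclusive
    (start, end) functional regions; returns one 0/1 track per class."""
    labels = {cls: [0] * length for cls in DMF_CLASSES}
    for cls, regions in annotations.items():
        for start, end in regions:
            for i in range(start - 1, end):  # convert to 0-based indices
                labels[cls][i] = 1
    return labels

# Example: a 10-residue protein whose residues 3-6 act as an effector.
print(label_sequence(10, {"effector": [(3, 6)]})["effector"])
# -> [0, 0, 1, 1, 1, 1, 0, 0, 0, 0]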
Architecture of protein cubic language model
Sequence, structure and function language models. Sequence, structure, and function are three important aspects of proteins, and a single language model cannot fully characterize all three. In this paper, we therefore employed three types of language models to capture the sequence, structural, and functional features of proteins, respectively.

Sequence language model. The amino acid sequence contains the evolutionary information of the protein. Here, bidirectional long short-term memory (Bi-LSTM) networks were employed as the sequence language model to capture the global correlation features of the evolutionary information (see Fig 1A). Using the protein PSSM profile and HMM profile as the inputs of the sequence language model, the sequence features Seq can be calculated by [40]:

Seq = Concat(LSTM_f(X), LSTM_b(X)),

where X ∈ R^(L×40) is the combination of the PSSM and HMM matrices generated by PSI-BLAST [39] and HH-suite [41], respectively, and L is the length of the sequence. LSTM_f and LSTM_b indicate the forward and backward recurrent neural units, respectively, and Concat represents the concatenation of vectors.

Structure language model. Protein structure reflects the result of local interactions among residues. The structure language model aims to capture structural features of the protein, and a convolution-based model is used to capture structural local pattern features from the residue-residue contact map (CCM) (see Fig 1B). Taking CCMs as inputs, the structure features Stc can be calculated by [42]:

Stc = relu(Conv(Y, Filter_stc) + b_stc),

where Y ∈ R^(L×L) is the CCM profile generated by CCMpred [43,44], Filter_stc and b_stc are trainable variables, Conv represents the convolution operator, and relu is the Rectified Linear Unit activation function [45].

Function language model. Functionally conserved sequence segments, also known as functional motifs, hold particular functionality information of proteins. Previous studies [46][47][48] have shown that motif-based convolution (MotifConv), which embeds particular motifs into the convolution kernels, can learn prior biological features. Inspired by MotifConv, a functional motif-based convolution was employed as the function language model to capture the functional features of proteins (see Fig 1C). The 164 motifs used in this study were extracted from the Eukaryotic Linear Motif (ELM) database [49]. The letter-probability matrix of each motif is used to build a convolution kernel,

M_motif = (a_ij) ∈ R^(l×20),

where l is the length of the motif and a_ij represents the frequency of the j-th standard amino acid at position i. The function features Func can then be calculated by

Func = relu(Conv(Z, M) + b_func),

where Z ∈ R^(L×20) is the one-hot encoding matrix of the protein sequence, M is the combination of the 164 motif convolution kernel matrices, and b_func is a trainable variable.

Fig 1. The architecture of PCLM: three protein language models (A. sequence, B. structure, and C. function language model), the attention-based language model alignment module (D. ALAN), and the fusion and output layer (E). The input protein sequence is converted to sequence profile X, structure profile Y, and function profile Z, which are then fed into the three protein language models to capture the sequence features Seq, the structure features Stc, and the function features Func. Next, the three captured features are incorporated into the alignment features (F_stc-func, F_seq-stc, and F_seq-func) by the ALAN modules. Finally, the fusion and output layers merge the outputs of ALAN to calculate the propensity score P_i of disordered molecular function for each residue.
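For concreteness, a minimal PyTorch sketch of the three feature extractors follows. All layer sizes are illustrative assumptions (the paper's actual hyper-parameters are in its S1 Table), the motif kernels are random placeholders rather than ELM-derived letter-probability matrices, and treating each contact-map row as an input channel is one plausible reading of the convolution over the CCM.

import torch
import torch.nn as nn

class SequenceLM(nn.Module):          # Bi-LSTM over the L x 40 PSSM+HMM profile
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(40, hidden, batch_first=True, bidirectional=True)
    def forward(self, x):             # x: (B, L, 40)
        out, _ = self.lstm(x)         # concatenated forward/backward states
        return out                    # (B, L, 2*hidden)

class StructureLM(nn.Module):         # convolution over the contact map; fixed L
    def __init__(self, L, channels=64):
        super().__init__()
        self.conv = nn.Conv1d(L, channels, kernel_size=3, padding=1)
    def forward(self, y):             # y: (B, L, L) contact map from CCMpred
        return torch.relu(self.conv(y.transpose(1, 2))).transpose(1, 2)

class FunctionLM(nn.Module):          # motif-based convolution over one-hot input
    def __init__(self, n_motifs=164, motif_len=7):
        super().__init__()
        # Random placeholder kernels; the paper builds them from ELM
        # letter-probability matrices.
        self.conv = nn.Conv1d(20, n_motifs, kernel_size=motif_len,
                              padding=motif_len // 2)
    def forward(self, z):             # z: (B, L, 20) one-hot sequence
        return torch.relu(self.conv(z.transpose(1, 2))).transpose(1, 2)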
Attention based language model alignment. The primary sequence encodes the disordered states of IDP/IDRs, which in turn determine their functions. The potential correlations among sequence, structure, and function are therefore essential information for the protein representations. In this study, attention alignment models the correlations between protein features by calculating attention-alignment weights on pairs of features (see Fig 1D). Given the sequence features Seq, structure features Stc, and function features Func, the attention-alignment weights α_seq-stc, α_seq-func, and α_stc-func are calculated with a softmax activation from the transformed feature matrices Seq′, Stc′, and Func′, which are obtained with the trainable weight variables H¹_seq, H¹_stc, and H¹_func. The attention-alignment weights between two kinds of features reflect matching patterns between different property aspects of the proteins. Weighted by the attention-alignment weights, the sequence features Seq, structure features Stc, and function features Func captured by the three language models are enhanced and fused into the complementary features F_seq-stc, F_seq-func, and F_stc-func, where H²_seq, H²_stc, and H²_func are further trainable variables. The complementary features F_seq-stc, F_seq-func, and F_stc-func learn the correlations among the sequence, structural, and functional properties of proteins, and these features are fed into the cubic fusion and output layers for calculating the predictive propensity score.

Cubic fusion and output layer. The cubic fusion module of PCLM merges the three alignment complementary features into a latent cubic space and obtains a joint representation matrix F_seq-stc-func ∈ R^(L×n) of the protein sequence, where L denotes the length of the input sequence, n denotes the dimension of the features, and W_x, W_y, and W_z are the trainable weight variables of the fusion. Each vector F_i in the representation matrix represents the features of one residue in the sequence. A fully connected (FC) layer captures the global and local correlations between residues in the sequence so as to calculate the propensity score P_i for each residue:

P_i = sigmoid(W_f F_i + b_f),

where W_f and b_f represent the weight and bias variables, respectively.

Pre-training of protein cubic language model
Transfer learning is a model training strategy that transfers the knowledge learned from a source domain to a new and different target domain. It is especially effective when the target domain has insufficient training data [38]. In this study, although we have a relatively limited number of disorder functional annotations for PCLM model training, the number of intrinsically disordered regions (IDRs) is large. The large number of IDRs overcomes the problem that the model cannot be fully trained with insufficient disorder functional data, and the generic disordered features learned from the IDR dataset can be transferred to facilitate disordered molecular function prediction. Therefore, we employed the widely used IDP/IDR prediction benchmark dataset [40] as the pre-training dataset to pre-train the PCLM model for predicting disordered regions in proteins. To avoid data redundancy, we excluded sequences with >25% sequence similarity to the disordered functional benchmark datasets, and obtained 2639 sequences with 38134 IDRs and 1079 sequences with 16403 IDRs for model pre-training and validation, respectively (S2 Data). The binary cross-entropy loss function was used to calculate the loss score for optimizing the model parameters [50]:

Loss = −Σ_i [ y_i log(p_i) + (1 − y_i) log(1 − p_i) ],

where p_i denotes the predictive score for residue R_i being disordered, calculated by Eq 14, and y_i represents the actual disorder label of the residue. The Adam optimizer [51] with a learning rate of 0.001 was employed to optimize the model parameters, and the model with the minimum loss score on the IDR validation set was saved as the pre-trained model.
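A compact sketch of this output layer and pre-training objective, under an assumed fused-feature dimension, might look as follows; the sigmoid output is an assumption consistent with the binary cross-entropy loss rather than a detail stated in the text.

import torch
import torch.nn as nn

class OutputLayer(nn.Module):
    def __init__(self, feat_dim=128):           # feat_dim is assumed
        super().__init__()
        self.fc = nn.Linear(feat_dim, 1)         # W_f, b_f
    def forward(self, fused):                    # fused: (B, L, feat_dim)
        return torch.sigmoid(self.fc(fused)).squeeze(-1)  # P_i in (0, 1)

model = OutputLayer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, lr = 0.001
criterion = nn.BCELoss()                         # binary cross-entropy

def pretrain_step(fused_features, disorder_labels):
    """One optimisation step on per-residue 0/1 disorder labels."""
    optimizer.zero_grad()
    p = model(fused_features)
    loss = criterion(p, disorder_labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()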
Fine-tuning PCLM for predicting disordered molecular functions
In the fine-tuning stage, the pre-trained PCLM model was fine-tuned with function-specific data for predicting the disordered molecular functions in proteins. Because of the differences between the five molecular functions, we separately fine-tuned PCLM with the assembler, chaperone, display site, effector, and scavenger functional annotations in the DMF benchmark dataset, leading to five independent PCLM prediction models (see Fig 2). In the DMFpred predictor, the five function-specific fine-tuned PCLM models work in parallel to produce five disordered molecular function predictions for each residue of the input protein. We used the same loss function and optimizer as in the pre-training stage, but different learning rates to fine-tune the model parameters for each function. The parameters of all layers in PCLM were fine-tuned to achieve better performance, a strategy that has been adopted by many transfer learning based studies [52,53]. More detailed hyper-parameters for DMFpred are given in S1 Table.

Evaluation criteria
DMFpred generates two forms of output: a real-valued propensity score (the likelihood of a residue having the given function) and binary results (residue with or without the given function). Binary predictions were converted from the propensities: a residue is predicted as functional if its propensity score is greater than a given threshold; otherwise, it is predicted as non-functional. The receiver operating characteristic (ROC) curve and the AUC value (area under the ROC curve) were utilized to evaluate the predictive performance of the real-valued propensity predictions. Sensitivity (Sn), specificity (Sp), and accuracy (ACC) were used for the evaluation of the binary results. Since the dataset is imbalanced (there are many more non-functional residues than functional residues), two further metrics, balanced accuracy (BACC) and the Matthews correlation coefficient (MCC), were used to measure the predictive performance. Disordered residues that interact with multiple partners and carry more than one function are called multi-functional residues. The residue-level functional prediction of these multi-functional residues can be treated as a multi-label learning task, and five example-based metrics (Hamming loss, accuracy, precision, recall, and F1) were utilized to evaluate the performance of DMFpred on multi-functional residues [54], where p indicates the total number of samples, q the number of labels, h(x_i) the predicted label set, Y_i the true label set, and Δ the symmetric difference between two sets.
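Under the standard example-based definitions these symbols suggest (a sketch, not the authors' code), the five metrics can be computed as:

def example_based_metrics(pred_sets, true_sets, q):
    """pred_sets/true_sets: per-sample label sets h(x_i) and Y_i; q: #labels."""
    p = len(pred_sets)
    hamming = acc = prec = rec = f1 = 0.0
    for h, y in zip(pred_sets, true_sets):
        h, y = set(h), set(y)
        hamming += len(h ^ y) / q                     # |h(x_i) Δ Y_i| / q
        union, inter = len(h | y), len(h & y)
        acc += inter / union if union else 1.0
        prec += inter / len(h) if h else 0.0
        rec += inter / len(y) if y else 0.0
        f1 += 2 * inter / (len(h) + len(y)) if (h or y) else 0.0
    return {name: value / p for name, value in
            {"hamming_loss": hamming, "accuracy": acc,
             "precision": prec, "recall": rec, "f1": f1}.items()}

# Example with q = 5 DMF labels:
print(example_based_metrics([{"assembler", "effector"}], [{"assembler"}], q=5))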
Functional specific fine-tuning achieves better performance
In order to investigate the differences among the five categories of molecular functions, we performed cross-functional validation on the benchmark datasets. To avoid the overestimation caused by multi-functional residues, only sequences belonging to a single functional class in the training and validation sets were used to fine-tune and test the PCLM model. The AUC evaluation results are shown in Fig 3, from which we can see that models fine-tuned and tested on the same function achieve the best performance, while cross-functional predictors achieve lower performance. These predictive results suggest that specialized predictors are required for each functional category, and that function-specific fine-tuning is the key to achieving better predictive performance for each disordered molecular function.

Ablation analysis of protein cubic language models
To verify the contribution of the three language models to DMFpred, we performed an ablation analysis. PCLM models with different combinations of the three language models were individually fine-tuned on the five molecular function training sets, and the corresponding AUC values for each function, evaluated on the validation dataset, are shown in Fig 4. We can see that (i) predictors combining all three language models consistently achieve the best performance for all five functions; and (ii) the prediction performance decreased when the structural language model was dropped, and predictors with only the sequence language model performed the worst. These results are not surprising because the three language models capture the sequence, structural, and functional features of proteins; these three features are complementary and all contribute to the functional prediction. As a result, predictors incorporating the three protein language models achieve the best performance.

Attention based language model alignment learns the correlation patterns
To investigate the performance improvement contributed by the attention-based language model alignment (ALAN) module, we compared the performance of predictors for the five disordered molecular functions using the PCLM model with and without the ALAN module. The PCLM model without ALAN directly feeds the features captured by the three language models to the fusion and output layers (see Fig 1) to calculate the prediction results. The two types of models were independently fine-tuned on the five different functions, and the results evaluated on the validation dataset are shown in Fig 5. From this figure, we can see that predictors with ALAN consistently outperform the predictors without ALAN on all five classes of functions, demonstrating the effectiveness of the ALAN module. Furthermore, we note that the predictor for the scavenger function with ALAN achieves markedly better performance in terms of AUC value. This may be because the complementary features captured by the ALAN module supplement the inadequate sequence, structural, and functional features learned from a limited number of annotated sequences; the improvement is especially manifest for the scavenger function, which has a relatively small number of annotated sequences. Benefiting from the features captured by ALAN, the predictor can make more accurate predictions, leading to better performance. To further analyse the information learned by the ALAN module, we visualized the attention-alignment weights between the sequence and structure features. Two protein examples (DisProt IDs DP02925 and DP00284) selected from the independent test set (TEST-1) are visualized in Fig 6, from which we can see that specific segments in the sequences map to the highest attention weights, and that these sequence segments correspond to experimentally determined functional motifs retrieved from the ELM database [49] with the FIMO tool (https://meme-suite.org/meme/tools/fimo). These results indicate that ALAN can capture critical correlation patterns by modelling the relationships between different protein features.
This prior biological knowledge captured by ALAN complements the original sequence, structural, and functional attributes of proteins, providing a powerful protein representation.

Model pre-training facilitates feature correlation
To explore the contribution of pre-training the model with disordered proteins, we compared the predictive power of the features extracted by models trained directly on molecular functional sequences (DT in Fig 7) with that of the fine-tuned model based on pre-training with IDRs (PT in Fig 7). Following previous studies [23,24], the absolute point-biserial correlation (PBC) score is used to quantify the predictive quality of the features, reflecting the correlation between a numeric and a binary variable:

|PBC| = |(M_1 − M_0) / s| × sqrt(n_1 n_0 / n²),

where M_1 and M_0 are the means of the feature over functional and non-functional residues, s is the standard deviation of the feature, n_1 and n_0 are the sizes of the two groups, and n is the total number of residues. From Fig 7, the features extracted by the pre-trained model outperformed those of the directly trained model on all five functions. This is because the model pre-trained with IDR sequences captures more disorder-related features than one trained directly on the limited functional sequences. As the functional residues are a subset of the disordered regions, the common disordered features captured by the pre-trained model facilitate distinguishing disordered functional residues from ordered residues, leading to a robust predictive quality.
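In its standard form, the absolute point-biserial correlation can be computed directly; a small self-contained sketch with a worked example:

import math

def abs_point_biserial(values, labels):
    """values: numeric feature per residue; labels: 0/1 functional labels."""
    n = len(values)
    g1 = [v for v, l in zip(values, labels) if l == 1]
    g0 = [v for v, l in zip(values, labels) if l == 0]
    n1, n0 = len(g1), len(g0)
    mean = sum(values) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in values) / n)  # population std
    m1, m0 = sum(g1) / n1, sum(g0) / n0
    return abs((m1 - m0) / s * math.sqrt(n1 * n0 / n ** 2))

print(abs_point_biserial([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # ~0.99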
Overall results
To the best of our knowledge, DMFpred is currently the only predictor for the five general molecular functions of disordered proteins. There are two forms of output from DMFpred: real-valued propensity results and binary results. We used the ROC curve and AUC value for evaluating the real-valued predictive results; Sn, Sp, ACC, and two metrics for imbalanced datasets (BACC and MCC) were used to assess the binary results. The evaluation results on the TEST-1 independent test set are shown in Table 2 (for the ROC curves and threshold settings, see S2 Fig and S2 Table). From Table 2, we can see that DMFpred provides accurate predictive performance for all five functional categories in terms of AUC values, and the Sn, Sp, and ACC results show the ability of DMFpred to correctly predict functional and non-functional residues. To further evaluate the predictive performance of the predictor, we constructed a new independent test set (TEST-2) from the sequences newly added to the DisProt database between July 2021 and June 2022, following the same dataset collection protocols. TEST-2 contains 47 proteins with 5780 functional residues, including 3753 assembler, 218 chaperone, 855 display site, 682 effector, and 272 scavenger residues. The prediction results of DMFpred on TEST-2 are shown in S3 Table. The predictive results achieved by DMFpred on TEST-2 are highly comparable with those on the independent test set TEST-1, indicating that the performance of the DMFpred predictor is stable.

Predictive results on the multi-functional residues
Disordered residues interacting with multiple partners and carrying more than one function are called multi-functional residues. To investigate the performance of the DMFpred predictor on these multi-functional residues, we collected all residues with at least two functional annotations from the TEST-1 dataset, obtaining a total of 1352 multi-functional residues for performance evaluation. We compared DMFpred with a random baseline predictor that generates each multi-functional label for each residue with a probability of 0.5, and the evaluation results are shown in Table 3. From this table, we can see the following: (i) compared with the baseline predictor, DMFpred achieves a lower Hamming loss but higher accuracy, indicating that DMFpred can accurately predict more multi-functional residues than the baseline; and (ii) DMFpred achieves higher performance than the baseline method in terms of precision, recall, and F1 value. These results are not surprising because DMFpred was fine-tuned with function-specific labels on the benchmark dataset so as to learn the discriminative features of each function. Benefiting from its accurate prediction of the five functions, DMFpred achieves better performance for predicting multi-functional residues.

Conclusion
Intrinsically disordered proteins/regions perform various molecular functions in living organisms. These functions of IDP/IDRs can be summarized as six general categories: entropic chain, assembler, scavenger, effector, display site, and chaperone. Motivated by the growing number of annotated disordered sequences and the need to expand the coverage of disordered protein function predictors, we introduced the disordered molecular function predictor DMFpred, covering five important categories: disordered assembler, scavenger, effector, display site, and chaperone. It has the following advantages: 1) DMFpred employs the protein cubic language model (PCLM), which incorporates three protein language models for characterizing the sequence, structural, and functional attributes of proteins; PCLM uses attention-based language model alignment to capture the sequence-structure-function correlations and learn a joint representation of proteins. 2) Benefiting from the pre-training and function-specific fine-tuning of PCLM, DMFpred captures discriminative features for predicting the five functional categories. 3) The evaluation results on five categories of functional and multi-functional residues suggest that DMFpred provides high-quality predictions. 4) The web server of DMFpred has been established and can be freely accessed from http://bliulab.net/DMFpred/, which will be helpful to researchers working in related fields.
Control of the Organization of 4,4′-bis(carbazole)-1,1′-biphenyl (CBP) Molecular Materials through Siloxane Functionalization
We show that, through the introduction of short dimethylsiloxane chains, it is possible to suppress the crystalline state of CBP in favor of various types of organization, transitioning from a soft crystal to a fluid liquid crystal mesophase, and then to a liquid state. Characterized by X-ray scattering, all organizations reveal a similar layered configuration in which layers of edge-on lying CBP cores alternate with siloxane layers. The difference between the CBP organizations essentially lies in the regularity of the molecular packing, which modulates the interactions of neighboring conjugated cores. As a result, the materials show quite different thin-film absorption and emission properties, which could be correlated with the features of the chemical architectures and the molecular organizations.

Introduction
The control of molecular organization is the object of constant and intense research activity, since it determines most properties of solid-state materials. So far, a considerable amount of work has been published describing the role of molecular parameters acting on various molecular forces and on steric effects, in particular to try to control the solid-state molecular packing [1][2][3][4]. As an example, the substitution of organic conjugated molecules with flexible peripheral chains has become a convenient tool to control, to some extent, the molecular organization via the tuning of van der Waals interactions and steric constraints [5][6][7]. Until now, most studies on side-chain functionalization have focused on alkyl chains. This is due to the wide variety of commercially available alkyl chains, including ramified ones. In comparison, very few studies have been performed with other types of chains, such as ethylene-oxide [8] or siloxane chains [9], which may induce different organization properties due to their distinctive features. Thus, oligodimethylsiloxane (ODMS) chains, -(SiMe2-O)x-, are primarily characterized by an exceptional flexibility, which accounts for most of their unique properties [10]. This peculiar feature arises from the nearly free torsional motion along the Si-O backbone [11]. Hence, ODMS hardly crystallizes but exhibits a strong amorphous character, as illustrated by the very low glass transition of its parent polydimethylsiloxane (PDMS) (Tg = −120 °C) [12]. The siloxane backbone also contains weak dipole moments, but the latter can easily be masked or uncovered (by an easy umbrella-type motion of the pending methyl groups), enabling ODMS to readily adapt to a polar or apolar environment [11]. Overall, ODMS possesses weak intermolecular forces entailing, in particular, a very low surface tension, solubility parameter, and dielectric constant [13]. In addition, ODMS is endowed with a propensity for microphase separation, easily leading to the self-organization of siloxane-containing molecular systems [14][15][16]. Finally, with regard to geometrical parameters, siloxane chains are rather bulky, as illustrated by the larger molecular cross-section of the linear ODMS chain (σ = 41 Å²) as compared to that of linear alkyl chains (σ = 21.5 Å²) [17]. The specific features of siloxanes have stimulated the development of many siloxane-containing organic compounds.
Thus, siloxane functionalization has been applied to control the organization and tune the properties of many molecular and macromolecular systems for a wide range of application domains, including optoelectronics [9,18,19]. For example, functionalization with ODMS has been used to obtain liquid crystalline organizations with favorable π-molecular interactions for charge transport [20]. A complex donor-acceptor (D-A) nanostructured smectic phase could even be stabilized by incorporating ODMS segments at both extremities of a D-A-D molecular triad [21,22]. A number of siloxane-hybrid side-chain conjugated polymers have also been reported to exhibit lamellar mesomorphic organization with enhanced charge transport properties (e.g., with mobilities around 1 cm² V⁻¹ s⁻¹) [23][24][25]. This effect was recently attributed to the fluid and nanosegregating character of the siloxane chains, which imposes a better facing of the polymer backbones with improved π-stacking overlap [17]. In other respects, the bulky and flexible character of ODMS has also been used to deliberately hinder π-intermolecular interactions to obtain solvent-free π-functional molecular liquids at room temperature. Thus, multiple siloxane functionalization of different arylamine and fluorene derivatives led to room-temperature liquid materials with significant charge transport and emission properties in their neat liquid phase [26,27], anticipating promising future applications of siloxane-functionalized materials in liquid (opto)electronics [28][29][30]. The study presented herein aims at using siloxane functionalization to tune the molecular organization of a representative π-conjugated molecule used in optoelectronics. The objective was to scrutinize how variations in the location and proportion of the siloxane chains are able to modify and control the molecular organization of the initial system. For this study, we selected 4,4′-bis(carbazol-9-yl)biphenyl (also called CBP), a well-known carbazole-based material used as a host in organic light-emitting devices [31][32][33]. CBP crystallizes in a herringbone-type packing and exhibits a high melting point (282 °C) [34,35]. In devices, CBP is used in its metastable glassy state obtained by chilling, but it strongly suffers from its tendency to return to its natural crystalline state [35]. Thus, rendering CBP a molecular liquid or a liquid crystal through siloxane functionalization constitutes a route toward morphologically stable guest-host devices and/or fluidic devices [29,30,36,37]. The synthetic procedure to prepare the siloxane-functionalized CBP derivatives has recently been reported elsewhere [38]. That report focuses on the methodology we followed to introduce one or two short heptamethylsiloxane segments (via a propylene linker) at each of the carbazole end-units of CBP, to yield CBP-2Si3 and CBP-4Si3, respectively (Figure 1). As a preliminary result, CBP-2Si3 showed a considerable drop of the melting point (Tm = 87 °C) as compared to native CBP, and CBP-4Si3 exhibited a stable and fluid liquid state at room temperature, for which the only thermal event observed was a glass transition at Tg = −62 °C. These first observations already point to the considerable impact of siloxane functionalization on the molecular organization of CBP derivatives.
The aim of the present study is to undertake an extensive structural characterization by means of X-ray scattering to unravel the role of the siloxane chains in the fine molecular organization of the siloxane-functionalized CBP derivatives. In this paper, we will show in particular that the minimal insertion of siloxane in CBP-2Si3 is able to stabilize a lamellar organization, which can evolve up to the formation of a fluid liquid crystalline smectic phase after lengthening the siloxane chain in CBP-2Sin (with n ≈ 10), a new compound reported herein (Figure 1). Finally, the liquid state of CBP-4Si3 reveals the presence of structuration at the local range, which will be detailed and discussed. All these changes in molecular organization naturally impact the neat-film absorption and emission properties, which will also be analyzed herein in relation to the fine molecular packing.

Figure 1. Structures of CBP and siloxane-functionalized CBP derivatives investigated in this study.

Results and Discussion
The siloxane-based CBP derivatives investigated in this study are presented in Figure 1. Molecules CBP-2Si3 and CBP-4Si3 are substituted by two and four short heptamethyltrisiloxane chains, respectively, while CBP-2Sin is substituted by two longer oligo(dimethylsiloxane) chains (polydispersity Đ = 1.2, with an average number of dimethylsiloxane units of 10; see Supplementary Materials).

Synthesis
The different siloxane-functionalized CBP derivatives investigated in this study were synthesized using optimized catalytic methodologies recently reported elsewhere (Scheme 1) [38]. Briefly, monobromocarbazole 1 was used to prepare the two CBP derivatives functionalized with two siloxane segments (short: CBP-2Si3; longer: CBP-2Sin). A propylene linker was introduced to 1 via Stille cross-coupling, leading to intermediate 3. Then, 3 was reacted under Ullmann coupling conditions with dibromobiphenyl 5 to generate adduct 6, which was further hydrosilylated with siloxane chains of different lengths to produce the final molecules CBP-2Si3 and CBP-2Sin. The CBP-4Si3 derivative was prepared by the same sequence of reactions using 3,6-dibromocarbazole 2 instead of monobromocarbazole. The latter synthetic route involved doubling the catalytic loadings and stoichiometry of reagents in the first and third steps, leading to intermediates 4, 7, and the final CBP derivative (CBP-4Si3) functionalized with four short siloxane segments instead of two. The detailed synthetic protocols can be found in Ref. [38] or in the Supplementary Materials.

Scheme 1. Synthesis of siloxane-based CBP derivatives.

Thermal and Structural Properties
The functionalization of the CBP core by siloxane chains is found to drastically impact the organization of the materials, as CBP-2Si3, CBP-2Sin, and CBP-4Si3 exhibit at room temperature a solid state, a liquid crystalline phase, and a liquid state, respectively. Table 1 summarizes the transition temperatures and the enthalpy changes, issued from differential scanning calorimetry (DSC) and thermogravimetry (TGA) analyses, shown in Figures 2 and S5 (Supplementary Materials), respectively.

Table 1. Phase behavior with transition temperatures and associated enthalpies in the CBP materials series. Tg, glass transition temperature; TLC→Iso, transition temperature from the liquid crystal phase to the isotropic liquid; Tm, melting point; ΔH→Iso, enthalpy change associated with the transition toward the isotropic liquid phase; Tdeg, degradation temperature corresponding to a 5% weight loss, under air.

To start with thermal stability, the siloxane chain functionalization lowers the stability of the materials, as Tdeg is decreased by about 100 °C for all siloxane-based CBP derivatives with respect to reference CBP. Next, we examine the effect of siloxane chain substitution on the organization of the CBP materials (see Table 1). The introduction of two short siloxane chains is sufficient to strongly destabilize the crystal state, as Tm is decreased from 282 to 87 °C when going from CBP to CBP-2Si3. The destabilization is further aggravated by lengthening the siloxane chain, the crystalline state being replaced by a room-temperature liquid crystalline phase (clearing temperature: TLC→Iso = 27 °C) in CBP-2Sin.
Finally, the substitution of four short siloxane chains totally suppresses any long-range order in the packing of the CBP core, leading to a room-temperature liquid state with a sub-ambient glass transition temperature (Tg ≈ −60 °C) in CBP-4Si3. The destabilizing effect of the siloxane functionalization is well illustrated by the strong decrease of the enthalpy change, ΔH→Iso (associated with the transition toward the isotropic liquid phase), observed in the CBP materials series. This value drops stepwise when going from crystalline CBP (40 kJ mol⁻¹), to the solid-state CBP-2Si3 (16 kJ mol⁻¹), to the liquid crystalline CBP-2Sin (11 kJ mol⁻¹), down to the glassy CBP-4Si3 (0 kJ mol⁻¹).

The states of the CBP derivatives were primarily assigned after polarized optical microscopy (POM) observations. For CBP-2Sin, the liquid crystal phase is clearly evidenced by the POM texture obtained under crossed polarizers shown in Figure 3. The fluid focal-conic domain texture observed gives an indication of the presence of a uniaxial lamellar mesophase.

The fine structural organization of the siloxane-functionalized CBP derivatives has been extensively investigated by small- and wide-angle X-ray scattering (SWAXS) and grazing incidence wide-angle X-ray scattering (GIWAXS). Representative SWAXS patterns recorded for all materials at room temperature are presented in Figure 4a-d. The GIWAXS patterns specifically recorded for CBP-2Si3 are shown in Figure 5 (room temperature) and Figure S6 (100 °C), respectively. A compilation of the structural parameter data can be found in the Supplementary Materials.

Examining the SWAXS pattern of CBP-2Si3 (at 20 °C), shown in Figure 4a, this pattern displays many sharp reflections in the whole angular range together with a broad scattering signal from lateral interactions of molten alkyl chains (hch). This indicates a soft-crystalline organization, in which crystal-like, three-dimensional long-range ordering of conjugated segments coexists with molten chain zones [39][40][41]. Structural features could be specified through the combination with an oriented GIWAXS pattern obtained for a spin-coated thin film of CBP-2Si3 (see Figure 5). Thereby, the structure and morphology of the film were lamellar, with the layers oriented parallel to the substrate.

Figure 5. GIWAXS pattern of a CBP-2Si3 layer spin-coated onto a Si wafer, recorded at room temperature. This pattern displays an orthorhombic structure equivalent to the one recorded by SWAXS for the bulk material (blue labels). Here, some weak satellite spots become visible (green labels) and reveal the coexistence of a monoclinic modification, in an amount of approximately 20% (as deduced from spot integration). Except for the non-orthogonal angle, the lattice parameters are nearly unchanged with respect to the orthorhombic cell (a = 20.

The direction of the layer normal was identified with the c-axis of an orthorhombic structure, defined by the in-plane arrangement and the superposition of layers. The lattice parameters are reported in Table S1. The cell periodicity involves two lamellae with a staggered superposition causing the extinction of (00l) reflections with odd l value; the lamellar periodicity is hence d_lam = d002 = 17.25 Å.
The in-plane arrangement follows an a × b sublattice of p2mg or p2gg symmetry that involves Z2D = Z/2 = 2 molecular stacks covering an area A/Z2D = ab/2 = 91.0 Å² (cf. Table S1). In addition to that of the odd (00l), one notices the extinction of reflections (h00) and (h0l) with odd h value. Three space groups are then compatible with this composition of patterns: Pca2₁, Pna2₁, and Pbcm. Figure 4b presents the SWAXS pattern of the liquid CBP-4Si3 recorded at 20 °C. A similar pattern was obtained for CBP-2Si3 in its high-temperature (100 °C) liquid state (see Figure S6). Both patterns show the same distinct scattering signals for siloxane segments (hdMS = 6.5-7 Å) and for aliphatic or CBP segments (hch + har = 4.5-5 Å), demonstrating the persistence of nanosegregated strata in the liquid state. The periodicity of the strata alternation leads to a further scattering signal in the low-angle region (Dlay = 29 and 27 Å for CBP-2Si3 and CBP-4Si3, respectively, with a similar correlation length ξ ≈ 60 Å obtained from the full width at half-maximum (FWHM) and the Scherrer formula ξ = 2πK/Δq, with Δq = (FWHM² − FWHM₀²)^(1/2), beam width FWHM₀ = 0.006 Å⁻¹, and shape factor K = 0.9). Figure 4c,d show the SWAXS patterns of CBP-2Sin after different thermal treatments. They correspond to a record of the material at 10 °C in the mesophase (Figure 4c), and after heating to the isotropic liquid, down to the supercooled isotropic liquid at 20 °C (Figure 4d). First, the pattern in Figure 4c indicates a mesophase with a smectic E-analogue structure, with lamellae constituted by the alternation of molten chain layers and layers of mesogens arranged in a long-range correlated two-dimensional rectangular lattice (lamellar periodicity: d_lam = d001 = 48.1 Å). The presence of the molten chain sublayers is indeed demonstrated by their characteristic signatures hdMS and hch. The lattice geometry and reflection intensity ratios of CBP-2Sin are comparable to the p2mg a × b sub-lattice of the soft crystal phase of CBP-2Si3, indicating the similarity of the molecular organizations, notwithstanding the loss of the three-dimensional superstructure as a result of the thick siloxane layer intercalation. Due to its low clearing temperature (24 °C), CBP-2Sin stays for days in the isotropic liquid phase at room temperature once melted. The SWAXS pattern recorded in its supercooled liquid state at 20 °C (see Figure 4d) is then very similar to those of CBP-2Si3 and CBP-4Si3 in their isotropic liquid phase, as shown in Figures S6 and 4b, respectively. Because of the high proportion of siloxane chains in CBP-2Sin, its liquid state gives a higher layer periodicity (Dlay = 42 Å) and correlation length (ξ ≈ 100 Å) than CBP-2Si3 and CBP-4Si3 in their liquid phase (compare with Dlay = 27-29 Å and ξ ≈ 60 Å). The combination of the results obtained by SWAXS measurements and by geometrical calculations gave access to a number of structural parameters (see Tables S1 and S2 in the Supplementary Materials), allowing us to fully describe the molecular organization of the CBP derivatives in the different phases.
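The correlation lengths quoted above follow directly from the Scherrer relation; a small numeric check in Python is given below. The input peak width of 0.095 Å⁻¹ is a back-calculated, illustrative value, not one reported in the text.

import math

def correlation_length(fwhm, fwhm0=0.006, K=0.9):
    """Scherrer correlation length xi = 2*pi*K / dq (in Angstroms), with the
    instrument-corrected peak width dq = sqrt(FWHM^2 - FWHM_0^2)."""
    dq = math.sqrt(fwhm ** 2 - fwhm0 ** 2)
    return K * 2 * math.pi / dq

# A peak width of ~0.095 1/A reproduces the xi ~ 60 A quoted for the liquid layers.
print(round(correlation_length(0.095)))  # -> 60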
The nanosegregation of both moieties (i.e., aromatic core and siloxane chains) into lamellae implies that the molecular area of the sequence of layers fits the individual space requirements, which is realized by the interdigitation of the siloxane end-segments. The different types of self-arrangements (namely, rows aligned on the a-axis and spaced by b/2 = 4.3 Å for CBP, and close-packed cylinders for siloxane chains), are however mutually constrained by the interconnecting propylene spacer. Therefore, the successive rows constituting the CBP layers are longitudinally shifted along the a-axis to allow the close-packing of siloxane chains, which determines the in-plane periodicity of two rows along the b-axis. Additionally, the interdigitation of the side-chains imposes a staggered superposition of successive CBP layers and thus the periodicity of two molecular layers along the c-axis. These constraints result in an original p2mg arrangement of CBP cores and a cohesive three-dimensional structure of Pca2 1 symmetry (Figure 6), which is consistent with the selected space groups. The molecular geometry of the CBP segment was extracted from the single-crystal structure CSD-WETFOS [42]. The organization of CBP derivatives evolves significantly with siloxane chain content. The molecular self-organization can thus be controlled through the siloxane chain functionalization, as illustrated in Figure 7. Due to the lower cross-sectional area of the siloxane chain (about two-fold smaller) as compared to the CBP core, the two trisiloxane chains in CBP-2Si3 form intercalated monolayers that alternate with CBP monolayers. This intercalation strongly constrains the respective positions of segments from different layers, which explains the evolution of the lamellar structure into a cohesive three-dimensional soft-crystal. Conversely, these constraints are removed with the four chains of CBP-4Si3 and the disentanglement of the siloxane segments, resulting in a single liquid phase. On the other hand, the use of longer siloxane chains for CBP-2Sin blurs the three-dimensional structure interconnecting CBP segments, while reinforcing the nanosegregation into layers. This results in the substitution of the lamellar soft-crystal by a smectic-like mesophase. As a side-effect, however, a lateral shrinking of the siloxane layers (and thus of the entire lamellae) is observed, in relation to the polydispersity of the long siloxane chains, that mimic a partial bilayer configuration. More detailed structural information can be found in the ESI (Tables S1 and S2, and Figure S7). The organization of CBP derivatives evolves significantly with siloxane chain content. The molecular self-organization can thus be controlled through the siloxane chain functionalization, as illustrated in Figure 7. Due to the lower cross-sectional area of the siloxane chain (about two-fold smaller) as compared to the CBP core, the two trisiloxane chains in CBP-2Si 3 form intercalated monolayers that alternate with CBP monolayers. This intercalation strongly constrains the respective positions of segments from different layers, which explains the evolution of the lamellar structure into a cohesive three-dimensional soft-crystal. Conversely, these constraints are removed with the four chains of CBP-4Si 3 and the disentanglement of the siloxane segments, resulting in a single liquid phase. 
On the other hand, the use of longer siloxane chains for CBP-2Si n blurs the three-dimensional structure interconnecting CBP segments, while reinforcing the nanosegregation into layers. This results in the substitution of the lamellar soft-crystal by a smectic-like mesophase. As a side-effect, however, a lateral shrinking of the siloxane layers (and thus of the entire lamellae) is observed, in relation to the polydispersity of the long siloxane chains, that mimic a partial bilayer configuration. More detailed structural information can be found in the Supplementary Materials (Tables S1 and S2 and Figure S7). Figure 7. Evolution of the self-organized structures of the CBP derivatives, as a function of the siloxane chains content, as obtained from structural modeling based on pattern information and molecular segment geometry [43][44][45][46]. It is worth noting that the lamellar structure observed for all siloxane-functionalized CBP molecules differs quite significantly from the unsubstituted CBP crystal structure. Actually, neat CBP molecules self-assemble into herringbone rows along which the carbazole rings stack into columns [47]. Successive herringbone rows then fit one into the other with close-packing of the carbazole and biphenyl units, as illustrated in Figure 8. The whole results clearly demonstrate the strong microsegregation ability of siloxane chains which is able to impose, not only the lamellar organization of the molecules but also the lateral packing of the CBP cores, by forcing molecular interactions through carbazole units. Siloxane chain functionalization then constitutes a powerful tool to control molecular arrangement and molecular packing. Depending on the location, number, and length of the siloxane chain, it is possible to tune the organization of the molecules and their molecular interactions.
Lastly, the spontaneous alignment observed for the siloxane-functionalized solid-state CBP derivatives should be addressed. Actually, when CBP-2Si 3 was deposited as a thin film by spin coating, the siloxane layer planes were found to spontaneously align parallel to the substrate. This effect is most likely driven by the very low surface energy of ODMS (around 20-22 mN m −1 ) [13]. Siloxane-containing molecular systems should minimize their energy by preferably orienting the siloxane chains at the interface with air (and probably with the glass substrate also), thereby imposing the orientation of the lamellar organization parallel to the substrate on the whole film thickness [48,49]. Siloxane-functionalization then turns out to be a valuable tool for controlling the morphology of functional organic materials. Photophysical Properties in Solution The photophysical properties of the dyes were inspected in a dichloromethane solution (Figure S9). The absorption peaks observed in CBP at 340 and 293 nm are associated with transitions localized on the carbazole units while the additional absorption band at 317 nm is attributed to transitions involving the central benzidine group [35]. As stated in another study [50], the presence of both carbazole and benzidine characteristics in the absorption spectrum of CBP indicates that the electron density in the ground state is delocalized over the whole chromophore.
As can be seen in Figure S9, the UV-visible absorption spectra of the different CBP derivatives functionalized with siloxane chains are rather similar to that of CBP. This implies that the functionalization of the siloxane side chains on the carbazole units does not significantly affect the delocalization of the electron density in the ground state. The only difference is a gradual redshift of the lowest-energy absorption bands when going from CBP to CBP-2Si 3 (or CBP-2Si n ) to CBP-4Si 3 . As shown in Figure S9, a gradual red-shift of the emission spectra is also observed when transitioning from CBP to CBP-2Si 3 , CBP-2Si n , and CBP-4Si 3 . The fluorescence properties of CBP involve predominantly the central benzidine part of the molecule and exhibit some charge transfer character that can make it sensitive to the polarity of the environment [50]. However, previous studies have shown that the siloxane chains present a low polarity, similar to that of alkane chains [51], and we thus exclude changes in the polarity of the local environment as a reason for the different photophysical properties of the siloxane-based compounds. To gain further insights, quantum chemistry calculations were carried out to estimate the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) distributions, oscillator strengths, and the first excited-state singlet energies in CBP and in CBP functionalized with 2 and 4 propyl side chains (CBP-2prop and CBP-4prop, respectively) in the gas phase, using time-dependent density functional theory (TD-DFT) with the B3LYP functional and the 6-31G basis set. Siloxane chains were removed for these calculations to reduce the computing time. In good consistency with a previous study devoted to CBP derivatives [52], it can be seen in Figure 9 that the HOMOs of CBP, CBP-2prop, and CBP-4prop are delocalized over the whole molecules while their LUMOs are localized mainly onto the central biphenyl. Substituting propyl side chains onto the carbazole units of the CBP core is also found to hardly affect the dihedral angle at the ground state between the phenyl rings (36.4 • for CBP, 36.3 • for CBP-2prop and 36.2 • for CBP-4prop). More noticeably, this substitution leads to some changes in HOMO/LUMO energy levels and to a gradual decrease in the singlet energy together with a slight increase in the oscillator strength (see Table 2). Overall, these calculations suggest that the small redshift of both absorption and steady-state emission spectra in Figure S9 should not be attributed to a change in planarization of the molecules, but rather to the variation in their electronic properties induced by the electron-donating character of the grafted chains. * These values are in good agreement with the experimental optical bandgaps determined from the solution UV-Vis spectra of CBP (3.52 eV), CBP-2Si 3 (3.45 eV) and CBP-4Si 3 (3.41 eV) (see Figure S9).
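For reference, the optical gaps quoted in the footnote above follow from E = hc/λ applied to the absorption onset; the onset wavelengths in this short sketch are hypothetical values chosen to reproduce the quoted gaps:

def optical_gap_eV(onset_nm):
    # E = h*c / lambda, with h*c = 1239.84 eV nm
    return 1239.84 / onset_nm

# Hypothetical onset wavelengths consistent with the quoted gaps:
for name, onset_nm in [("CBP", 352.2), ("CBP-2Si3", 359.4), ("CBP-4Si3", 363.6)]:
    print(f"{name}: {optical_gap_eV(onset_nm):.2f} eV")  # 3.52, 3.45, 3.41 eV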
Photophysical and Charge Transport Properties in Thin Films The functionalization of the CBP molecule by the siloxane hybrid side-chains has been found to strongly modify the structural properties in thin films, which, in turn, should have a significant impact on their photophysical and charge transport properties. Figure 10 displays the thin film UV/visible absorption spectra of the siloxane-functionalized CBP derivatives with their different phases, as well as of the neat CBP in its glassy amorphous and crystal states. The glassy amorphous CBP film was obtained immediately after spin-coating deposition of the CBP solution in 1 wt.% chloroform, while the crystalline state was formed after allowing the same film to stand for more than one day at room temperature. All data in solution and solid state are summarized in Table 3. Unlike the results obtained in solution (Figure S9), the absorption and photoluminescence spectra in thin films that are displayed in Figures 10 and 11 show significant differences depending on the nature of their condensed phase. The absorption spectrum of the crystalline CBP thin film exhibits a substantial redshift compared to its glassy film with a tail at longer wavelengths, which can be explained by the particular CBP molecular packing and the possible contribution of scattering by small crystallites [53]. Noticeably, the absorption spectrum of the soft crystal CBP-2Si 3 film shows different features compared to that of crystalline CBP.
In contrast to what is observed in dichloromethane solution, the relative intensity of the absorption bands in CBP-2Si 3 film is modified with, in particular, a decrease in the absorption of the peak associated with the benzidine unit together with a hypsochromic shift. When compared to the spectra of the two other siloxane-containing CBPs, soft-crystal CBP-2Si 3 also shows a different absorption spectrum with more intense bands at 300 and 347 nm, which are characteristic of the carbazole unit. By contrast, CBP-4Si 3 and CBP-2Si n spectra are very similar indeed and strongly resemble the spectra of all CBP derivatives measured in solution (Figure S9). This is presumably due to the rather weak intermolecular interactions between aromatic cores taking place in the liquid state and the fluid mesophase (Figure 7). Finally, the absorption spectrum of the non-functionalized CBP in its glassy amorphous state strongly resembles those of the "fluid" siloxane-functionalized CBP derivatives CBP-4Si 3 and CBP-2Si n . Figure 11. Normalized steady-state emission spectra of CBP derivatives in thin films measured using an excitation wavelength of 310 nm. Figure 11 displays the steady-state photoluminescence spectra of the CBP derivatives in thin films. Compared to the CBP glassy film, the emission spectrum of the CBP crystal exhibits a substantial redshift together with a well-resolved vibronic structure exhibiting two vibronic peaks and one shoulder in the range between 370 and 410 nm. Noticeably, the fluorescence spectra of both CBP-2Si n and CBP-4Si 3 show similar features (see Table 3) and the slight red-shift of the emission of CBP-4Si 3 is presumably due to the effects of the two additional electron-donating side chains in CBP-4Si 3 as compared to CBP-2Si n on the electronic properties, similarly to what is observed in solution. The most intriguing result in Figure 11 comes from the emission spectrum of CBP-2Si 3 . While its emission spectrum shows the same vibronic structure as the other siloxane-containing CBP derivatives, the emission of CBP-2Si 3 is blue-shifted by more than 20 nm. Carbazole derivatives are known for their tendency to form excimers in which interacting carbazole units are stacked in an overlapping sandwich-like configuration [54]. However, the fluorescence of CBP is dominated by the properties of the central biphenyl part of the molecule and it has been shown that this compound does not exhibit excimer emission in thin films [52].
In addition, the fluorescence of highly twisted CBP derivatives was found to be dominated by the individual properties of the N-phenylcarbazole units and to show a significantly blueshifted emission as compared with the spectrum of CBP films [52]. In this context, the most plausible explanation for the blue-shift of the emission is that CBP-2Si 3 molecules adopt in the condensed phase a more twisted geometry with a larger torsion angle between the two phenyl rings of the benzidine core. This would also be consistent with the observed decrease in the absorption band of the benzidine moiety in the absorption spectrum of CBP-2Si 3 film and with the fact that the thin film shows a blue shifted emission as compared to the solution [55]. It should also be emphasized that the X-ray scattering results indicated that the structure and morphology of the CBP-2Si 3 film is lamellar with layers oriented in the direction parallel to the substrate. The interactions within the layers of aggregated CBP rings affect the emission with respect to isolated molecules in solution [56]. Additionally, there might also be an impact of molecular architecture and packing on the dihedral angle between the two N-phenylcarbazole units. To gain further insights, we looked at the molecular geometry at the DFT level in order to examine the potential energy landscape of the CBP-2Si 3 molecule ( Figure S10). The most stable molecular geometry is obtained for an angle around 30 • , which is in good consistency with a previous report devoted to CBP [52]. However, the calculations show that there is another energetically favorable minimum for an angle around 50 • and it is, therefore, possible that the average dihedral angle in CBP-2Si 3 films deviates from the average value in solution, which could explain a part of the frequency shift. The photoluminescence quantum yield (PLQY) of the CBP derivatives was then measured in thin films. As displayed in Table 3, the siloxane-functionalized CBP series show a substantial decrease in PLQY as the degree of order increases, from 0.58 (Liquid), 0.35 (Liquid crystal) to 0.16 (Soft crystal). However, soft crystal CBP-2Si 3 gives a lower PLQY than classical crystal CBP (0.36), which means that molecular organizations and conformations in both systems have different efficiencies upon luminescence quenching. If we compare the molecular organization in CBP and CBP-2Si 3 , which are schematically represented in Figures 7 and 8, the herringbone organization in the CBP crystal might involve a lower aromatic core overlap and thus a weaker quenching of the emission due to intermolecular interactions. In addition, the twisting of the benzidine core suggested by the absorption and steady-state fluorescence spectra of CBP-2Si 3 would presumably lead to a reduction in the oscillator strength and a substantially lower PLQY value. Regarding the liquid crystalline CBP-2Si n , the CBP cores in this system self-arrange in a similar way as CBP-2Si 3 but the efficiency of the benzidine core interactions and ultimately the quenching is altered by the less regular smectic in-plane order. Finally, these interactions are further reduced in the liquid phase, leading to the highest PLQY for CBP-4Si 3 . The luminescence of this liquid material even overcomes that of glassy amorphous CBP, for which PLQY is also enhanced by structural disorder, relative to crystalline CBP. 
The CBP molecule has been intensively used as a host material in organic light-emitting diodes [57] and its neat film shows hole and electron mobilities on the order of 10 −3 -10 −4 cm 2 V −1 s −1 [58,59]. A previous work has characterized the charge carrier mobilities of siloxane-containing oligofluorene derivatives using the time-of-flight technique (ToF). The electron and hole mobilities in liquid oligofluorene were found to be on the order of 10 −4 cm 2 V −1 s −1 and comparable with the values measured in solid thin films of other fluorene derivatives [27]. In this context, it was relevant to characterize the charge transport properties of the siloxane-containing CBPs but, in contrast with what was observed in liquid fluorene derivatives, their investigation by ToF turned out to be unsuccessful (see Supplementary Materials). The measurements carried out on commercial ITO-covered liquid crystal cells filled by capillarity with materials in their liquid state only led to a poor response indicating a low charge carrier mobility with estimated values well below 10 −6 cm 2 V −1 s −1 for all materials. Regarding the soft crystalline CBP-2Si 3 , the film shows no domain orientation when implemented as the semiconducting layer of the measuring device. Charge transport of CBP-2Si 3 was therefore jeopardized by the insulating siloxane layers interrupting the conduction pathways towards electrodes. In the case of CBP-4Si 3 , it can be assumed that the significantly reduced intermolecular interactions between CBP units, as confirmed by its high PLQY value, lead to poor conductive pathways. The same effect presumably occurs for CBP-2Si n in conjunction with the dilution of the inefficient conduction pathways in the high-volume fraction of insulating siloxane. The results obtained in this study indicate that these CBP derivatives exhibit promising photophysical properties for organic optoelectronics, but their potential use is still strongly limited by their charge transport properties. One aspect potentially detrimental to charge transport in the liquid and liquid crystal states of these systems is the location of the siloxane chains onto the carbazole end units. By considering the molecular organization as depicted in Figure 7, it is possible that the insertion of voluminous side chains directly onto the carbazole end units alters the efficiency of conduction pathways by reducing the π-orbital overlap between carbazole units. Consequently, there is still room to improve the molecular design and obtain liquid or liquid crystalline CBPs with enhanced charge transport performance. Conclusions Siloxane substituents can be seen as an alternative to alkyl chains for the control of the molecular organization in organic thin films. Thus, by functionalizing π-conjugated molecules with siloxane chains, mesomorphic organizations are readily obtained by innate segregation between the siloxane chains and the conjugated units. At the same time, crystallization is drastically hindered, usually in favor of a glass transition at a very low temperature, leading to the formation of soft and fluid material in an ambient environment. In a previous work, we demonstrated that CBP, a conjugated molecule of interest for organic optoelectronics and well-known in the area of OLEDs, could become liquid at room temperature through the introduction of siloxane-terminated side-chains, whereas the unsubstituted material is crystalline and melts at 270 • C.
In the present study, the design of the siloxane side-chains allowed us to vary the self-organization of the materials between a three-dimensional lamellar soft-crystal with two short siloxane chains, a nanosegregated liquid freezing only at −62 • C with four chains, and a room-temperature smectic liquid-crystal with two longer chains. The photophysical properties of the films were then investigated in the different materials and could be correlated to their molecular organizations. In particular, the liquid CBP functionalized with four siloxane chains was found to exhibit a PLQY of 58%, higher than that of glassy CBP, due to a reduction in the intermolecular interactions between neighboring conjugated cores. However, the results also indicate that those siloxane-containing CBP derivatives exhibit poor charge transport properties, which for now seriously limit their potential use in organic optoelectronics. While the outcome of this work is highly relevant for rationalizing the role of siloxane chain functionalization in the control of the molecular organization, further efforts are still required in terms of molecular design to obtain high-performance functional siloxane-containing optoelectronic materials based on a CBP core. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules28052038/s1, Figure S1: Size exclusion chromatography (SEC) of the starting oligo(dimethylsiloxane) chains (MCRH11 from Gelest) used in the synthesis of CBP-2Si n . Elution in toluene with PDMS standards; the peak at 35 mL indicates the octamethylcyclotetrasiloxane reference (D4); Figure S2: 1 H NMR spectrum of CBP-2Si n recorded in CDCl 3 ; Figure S3: 13 C NMR spectrum of CBP-2Si n recorded in CDCl 3 ; Figure S4: Maldi-ToF MS spectrum of CBP-2Si n ; Figure S5: TGA thermograms of CBP and CBP derivatives recorded at 5 • C min −1 ; Figure S6: SWAXS patterns recorded in the liquid phase of CBP-2Si 3 (at 100 • C). This pattern is similar to that of CBP-4Si 3 recorded in its liquid state at 20 • C. Both patterns show distinct scattering signals for siloxane segments (h dMS = 6.5-7 Å), and aliphatic or CBP segments (h ch + h ar = 4.5-5 Å), demonstrating the persistence of nanosegregated strata in the liquid state. The periodicity of the strata alternation leads to a further scattering signal in the low-angle region (D lay = 29 and 27 Å, for CBP-2Si 3 and CBP-4Si 3 , respectively, with similar correlation length ξ ≈ 60 Å, determined from the Scherrer formula with shape factor K = 0.9); Figure S7: Illustration of the evolution of the self-organized structures of the CBP derivatives as a function of the siloxane chains content, including molecular parameters issued from Table S5; Figure S8: Left: SWAXS patterns of the CBP-2Si 3 in the pristine state at 20 • C (black curve) and in the supercooled liquid phase at 20 • C (blue curve). Right: SWAXS pattern of CBP-2Si 3 in the mesophase at 20 • C, after subtraction of the 43% liquid phase amount present in the pristine state; Figure S9: Absorption and emission (under excitation at 310 nm) spectra of the CBP derivatives in solution (10 −5 M in dichloromethane); Figure S10: Potential energy landscape of the CBP-2Si 3 molecule calculated at the DFT level by varying the torsion angle between the two N-phenylcarbazole units; Table S1: Structural parameters for the CBP layers in the lamellar phases; Table S2: Siloxane layer configurations in the nanosegregated phases. Refs. [17,27,38] are cited in the Supplementary Materials.
2023-02-24T16:49:08.502Z
2023-02-21T00:00:00.000
{ "year": 2023, "sha1": "1923abbdf7aa4293dfd2cee5943b3ff9c0654341", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/28/5/2038/pdf?version=1676994301", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ec83ea9f2bffdfc01241f170d30e410bbf7f191f", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
204800348
pes2o/s2orc
v3-fos-license
Revised astrometric calibration of the Gemini Planet Imager Abstract. We present a revision to the astrometric calibration of the Gemini Planet Imager (GPI), an instrument designed to achieve the high contrast at small angular separations necessary to image substellar and planetary-mass companions around nearby, young stars. We identified several issues with the GPI data reduction pipeline (DRP) that significantly affected the determination of the angle of north in reduced GPI images. As well as introducing a small error in position angle measurements for targets observed at small zenith distances, this error led to a significant error in the previous astrometric calibration that has affected all subsequent astrometric measurements. We present a detailed description of these issues and how they were corrected. We reduced GPI observations of calibration binaries taken periodically since the instrument was commissioned in 2014 using an updated version of the DRP. These measurements were compared to observations obtained with the NIRC2 instrument on Keck II, an instrument with an excellent astrometric calibration, allowing us to derive an updated plate scale and north offset angle for GPI. This revised astrometric calibration should be used to calibrate all measurements obtained with GPI for the purposes of precision astrometry. Introduction The Gemini Planet Imager 1,2 (GPI) is an instrument, currently at the Gemini South telescope, Chile, that was designed to achieve high contrast at small angular separations to resolve planetary-mass companions around nearby, young stars. Many high-contrast imaging observations also require highly precise and accurate astrometry. One of the objectives of the large Gemini Planet Imager Exoplanet Survey 3 (GPIES) was to characterize via relative astrometry the orbits of the brown dwarfs and exoplanets imaged as a part of the campaign. 4 These measurements have been used to investigate the dynamical stability of the multiplanet HR 8799 system, 5 the interactions between substellar companions and circumstellar debris disks, 6,7 and to directly measure the mass of β Pictoris b. 8 Improved astrometric accuracy and precision can reveal systematic discrepancies between instruments that need to be considered when performing orbital fits using astrometric records from multiple instruments. Accurate, precise astrometry can also help with common proper motion confirmation or rejection of detected candidate companions. Previous work has demonstrated that the location of a faint substellar companion relative to the host star can be measured within a reduced and postprocessed GPI image to a precision of ∼1/700th of a pixel. 9 Since GPI's science camera is an integral field spectrograph (IFS)/polarimeter, "pixel" in this context means the spatial pixel sampling set by the IFS lenslet array rather than that of the subsequent Hawaii-2RG detector. Converting these precise measurements of the relative position of the companion from pixels into an on-sky separation and position angle (PA) requires a precise and accurate astrometric calibration of the instrument. The plate scale of the instrument is required to convert from pixels in the reconstructed data cubes into arcseconds, and the north offset angle is required to determine the angle of north on an image that has been derotated to put north up based on the astrometric information within the header.
The previous astrometric calibration (a plate scale of 14.166 ± 0.007 mas px −1 and a north offset angle of −0.10 ± 0.13 deg) was based on observations of calibration binaries and multiple systems obtained during the first two years of operations of the instrument. 4,10 In the course of several investigations using GPI that relied upon astrometric measurements, over time it became apparent that there were potentially remaining systematic biases after that calibration, particularly in regard to the north angle correction. This motivated a careful, thorough calibration effort into GPI astrometry, an effort that eventually grew to include cross checks of the GPI data processing pipeline, the performance of several Gemini observatory systems, and a complete reanalysis of all astrometric calibration targets observed with GPI. This paper presents the findings of those efforts and the resulting improved knowledge of GPI's astrometric calibration. After introducing some background information regarding GPI and the Gemini architecture (Sec. 2), we describe two issues that we identified and fixed in the data reduction pipeline (DRP) (Sec. 3), a retroactive calibration of clock biases affecting some GPI observations (Sec. 4), and a model to calibrate for small apparent PA changes in some observations, at small zenith distances (Sec. 5). With those issues corrected, we revisit the astrometric calibration of GPI based on observations of several calibration binaries and multiple systems (Secs. 6 and 7). Compared to the prior calibration values, we find no significant difference in the plate scale. However, we find a true north correction that differs by +0.36 deg, along with tentative low-significance evidence for small gradual drifts in that correction over time. Finally, we discuss the effect of the revised astrometric calibration on the astrometric measurements of several substellar companions (Sec. 8). GPI Optical Assemblies The GPI 1,2 combines three major optical assemblies (Fig. 1). The adaptive optics (AO) system is mounted on a single thick custom optical bench. The Cassegrain focus of the telescope is located within the AO assembly. On that bench, the beam encounters a linear thin-plate atmospheric dispersion corrector, a steerable pupil-alignment fold mirror, an off-axis parabolic (OAP) relay to the first deformable mirror, and an OAP relay to the second deformable mirror. After that, the beam is refocused to f/64. The last optic on the AO bench is a wheel containing microdot-patterned coronagraphic apodizer masks. 11,12 These apodizer masks also include a square grid pattern that induces a regular pattern of diffracted copies of the stellar point spread function (PSF). 13,14 The second optical assembly is an infrared wavefront sensor known as the calibration (CAL) system. 15 It contains the focal plane mask component of the coronagraph (a flat mirror with a central hole) and collimating and steering optics. The third assembly is the IFS. 16,17 The input collimated beam is refocused onto a grid of lenslets that serve as the image focal plane of the system. After this, the spectrograph optics relay and disperse the lenslet images, but since the beam has been segmented, these can no longer introduce astrometric effects. The lenslet array samples the focal plane and produces a grid of "spots" or micropupils, each of which is an image of the telescope pupil. The only aberrations affecting the image quality of the field are from elements in front of the lenslet array. 17
Each of these three assemblies is independently mounted by three bipods. The bipods are supported by a steel truss structure that attaches to a square front mounting plate. The mounting plate attaches to the Gemini Instrument Support Structure (ISS) with large fixed kinematic pins. The ISS is a rotating cube located just above the Cassegrain focus of the telescope. In typical Gemini operations, the ISS rotator operates to keep the sky PA fixed on the science focal plane. High-contrast imaging typically instead tries to fix the telescope pupil on the science instrument to allow angular differential imaging 18 (ADI). In GPI's case, this is always done at a single orientation (corresponding to GPI's vertical axis parallel to the telescope vertical axis). In the simplest case, this would involve stopping all rotator motion. However, as discussed in Sec. 5, in some but not all observations, the observatory software instead tries to maintain the absolute (sky) vertical angle (VA) stationary on the science focal plane, which must be accounted for in astrometric observations. Software Interface and IFS Operation The software architecture for GPI and the Gemini South telescope is complex, as is typical for a major observatory. Simple operations often require interactions between several different computers. For example, taking an image with the IFS is a process that involves four separate computer systems: the main Gemini environment that runs the observatory's control software, GPI's top level computer (TLC) that is interfaced with each component of the instrument, the IFS "host" computer that acts as an interface between the UNIX-based TLC and the Windows-based detector software, and the IFS "brick" that interfaces directly with the Hawaii-2RG detector. 17 Three of these four computer systems are responsible for populating the flexible image transport system (FITS) 19 image header keywords appended to each image. The Gemini environment handles telescope-specific quantities such as the telescope mount position, the TLC handles keywords associated with other parts of the instrument such as the AO system, and the IFS brick records detector-specific quantities. Each of these computer systems also maintains its own clock, although only the clocks of the Gemini environment and the IFS brick are relevant for the purposes of this study. These clocks are used when appending various timestamps to FITS headers during the process of obtaining an image. In theory, these clocks should all be synchronized periodically with Gemini's Network Time Protocol (NTP) server. The IFS camera is controlled by the IFS brick, a computer used to interface with the Teledyne JADE2 electronics and Hawaii-2RG detector. This computer is responsible for commanding the camera, calculating count rates for each pixel based on raw up-the-ramp (UTR) reads, 20 sending completed images back to the observatory computers, and providing ancillary metadata including the start and end time of the exposure (UTSTART and UTEND) that are stored in the FITS header. The detector is operated almost exclusively in UTR mode; correlated double sampling (CDS) mode 21 images have been taken in the laboratory, but this mode is not available for a standard observing sequence. The IFS runs at a fixed pixel clocking rate of 1.45479 s for a full read or reset of the detector. The IFS software allows for multiple exposures to be coadded together prior to writing an FITS file.
This mode has lower operational overheads and greater operational efficiency compared to individual exposures, and therefore, is frequently used for short exposures (from 1.5 to 10 s per coadd) but not generally used for long exposures (60 s per coadd) due to field rotation. Improvements in the GPI Data Reduction Pipeline The GPI DRP 22,23 is an open-source pipeline that performs basic reduction steps on data obtained with GPI's IFS to remove a variety of instrumental systematics and produce science-ready spectrophotometrically and astrometrically calibrated data cubes. The DRP corrects for detector dark current, identifies and corrects bad pixels and cosmic ray events, extracts the microspectra in the two-dimensional (2-D) image to construct a three-dimensional (3-D) (x, y, λ) data cube (or x, y, Stokes in polarimetry mode), and corrects for the small geometric distortion measured in the laboratory during the integration of the instrument. 4 Critically, the DRP calculates the average parallactic angle between the start and end of an exposure, an angle that is used to rotate the reduced data cubes so that the vector toward celestial north is almost aligned with the columns of the image. We have identified and corrected in the latest data pipeline version two issues with the calculation of average parallactic angle that affect a subset of GPI measurements. These issues are most pronounced for observations taken at a very small zenith distance, where the parallactic angle is changing very rapidly. An example dataset showing the combined effect of these issues, and those described in Secs. 4 and 5, on observations of a calibration binary is shown in Fig. 2. Fig. 2 caption: Each image has been rotated such that north is up based on the value of AVPARANG in the header of the reduced image (white compass). We use the prime symbol to denote the fact that the old reduction does not correctly rotate north up. The original detector coordinate axes are also shown (yellow compass). Note the flip of the x axis due to the odd number of reflections within the instrument. A significant change in the sky PA of the companion is seen between the two images in (a), (c), due to a combination of the errors described in Sec. 3. The PA of the companion is stable after the revisions to the pipeline. Calculation of Average Parallactic Angle from Precise Exposure Start and End Times Calculating the time-averaged parallactic angle during the course of an exposure requires accurate and precise knowledge of the exact start and end times of that exposure. We found that the GPI DRP was not originally using a sufficiently precise value for the start time in the case of an exposure with more than one coadd. Doing this correctly requires an understanding of the low-level details of the UTR readout of the Hawaii-2RG detector and the surrounding GPI and Gemini software. The header of a raw GPI FITS file contains four timestamps saved at various times during the acquisition of an image with the IFS: UT, MJD-OBS, UTSTART, and UTEND. The keywords UT and MJD-OBS contain the time at the moment the header keyword values were queried by the Gemini master process prior to the start of the exposure. UT is reported in the coordinated universal time (UTC) scale, whereas MJD-OBS is reported in the terrestrial time scale, a scale linked to the International Atomic Time that is running ∼65 s ahead of UTC.
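This scale difference can be checked directly with astropy; a minimal sketch (the date is arbitrary):

from astropy.time import Time

t = Time("2014-03-01T00:00:00", scale="utc")
# The same instant expressed in terrestrial time (TT): TT = TAI + 32.184 s,
# and TAI led UTC by 35 leap seconds in early 2014.
dt = (t.tt.jd - t.utc.jd) * 86400.0
print(f"TT - UTC = {dt:.3f} s")  # 67.184 s in early 2014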
Because these keywords are written during exposure setup by a different computer system, neither is a highly precise metric for the exact exposure time start. The other keywords (UTSTART and UTEND) are generated by the IFS brick upon receipt of the command to execute an exposure and after the final read of the last coadd has completed. These two timestamps are reported in the UTC scale. Because they are written by the same computer that directly controls the readout, these are more accurate values for exposure timing. UTSTART is written when the IFS software receives the command to start an exposure, but since the Hawaii-2RG will be in continuous reset mode between exposures, it must wait some fraction of a read time to complete the current reset before the requested exposure can begin. Thus, the true exposure start time will be some unknown fraction of a read time after UTSTART. The final keyword UTEND is written with negligible delay immediately at the moment the last read of the last pixel is concluded. A schematic diagram of the reads and resets of the Hawaii-2RG is shown for two example exposures in Fig. 3. The pipeline was, therefore, written under the assumption that the UTEND keyword provides the most accurate way to determine the true start and end time of each exposure, which, in turn, is used to calculate the average parallactic angle during the exposure. The effective end time of the exposure can be calculated as occurring half a read time prior to UTEND, i.e., the time at which half of the detector pixels have been read. The effective start time of the exposure, i.e., the time at which half of the detector pixels have been read for the first time, can be calculated by working backward from UTEND toward UTSTART. We do so based on the read time (t_read), the number of reads per coadd (n_read, where n_read − 1 multiplied by t_read yields the integration time per coadd), and the number of coadds (n_coadd). The pipeline writes two additional keywords to the science extension of the reduced FITS file that store the calculated effective start (EXPSTART) and end (EXPEND) times of the exposure calculated using UTEND, t_read, n_read, and n_coadd. EXPSTART and EXPEND are then used to calculate the average parallactic angle over the course of the exposure, which is written as keyword AVPARANG. Inadvertently, versions 1.4 and prior of the GPI pipeline contained an error in this calculation by not correctly accounting for the number of coadds. The total exposure time including overheads was calculated as t_exp = t_read × (n_read − 3/2), where n_read is the number of reads per coadd. Instead, the exposure time is more correctly calculated as

t_exp = t_read × [n_coadd (n_read + 1) − 5/2], (1)

where the additional terms account for the extra resets that occur between each coadd. The effect of this error was negligible for single-coadd exposures, the most common type of exposures taken with GPI; 89% of on-sky observations were taken with a single coadd. For images with multiple coadds, the effect can be very significant, with the error on the estimated time elapsed during the complete observation being

Δt = t_read × (n_coadd − 1)(n_read + 1). (2)

To demonstrate how large this error can get for exposures with multiple coadds, an exposure with an integration time of 1.45 s with 10 coadds has a Δt of 40 s, an error equivalent to 98% of the actual time spent exposing (see Fig. 4).
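A minimal numerical sketch of this bookkeeping, reproducing the ∼40 s example above (t_read = 1.45479 s, n_read = 2 for a 1.45 s integration per coadd, and n_coadd = 10); the function names are illustrative and not those of the pipeline:

T_READ = 1.45479  # full read or reset time of the Hawaii-2RG, in seconds

def t_exp_old(n_read, n_coadd):
    # Versions <= 1.4: number of coadds ignored in the elapsed-time estimate.
    return T_READ * (n_read - 1.5)

def t_exp_corrected(n_read, n_coadd):
    # Eq. (1): n_coadd blocks of n_read reads, plus the (n_coadd - 1)
    # resets interleaved between successive coadds.
    return T_READ * (n_coadd * (n_read + 1) - 2.5)

n_read, n_coadd = 2, 10
dt = t_exp_corrected(n_read, n_coadd) - t_exp_old(n_read, n_coadd)
print(f"Delta t = {dt:.1f} s")  # ~39.3 s, i.e. ~98% of the 40.0 s elapsed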
A large Δt can cause a significant and systematic error in the parallactic angle used to rotate the reduced data cubes north up, as the EXPSTART and EXPEND header keywords are converted into the hour angle at the start and end of the exposure, from which the parallactic angle is calculated. Fig. 4 caption: Combinations with more than 100 images are shown as red circles (size scaled by the number), whereas combinations with fewer than 100 are shown as small gray circles. The vast majority of GPI exposures are taken with a single coadd, but for some frames with multiple coadds Δt exceeded 120 s. This is most pronounced for targets observed at a small zenith distance where the parallactic angle is changing most rapidly. This error not only affects the astrometry of substellar companions, but also the measurement of binaries observed with other instruments that were used to calibrate GPI's true north offset angle. After this inaccuracy was discovered, the GPI pipeline was updated to perform the correct calculation as of version 1.5. Average Parallactic Angle During Transits A second issue affecting a small number of observations is related to time-averaging during exposures that span transit. The pipeline computes the average parallactic angle between the start and end of an exposure via Romberg's method. For northern targets that transit during an exposure, the function contains a discontinuity at an hour angle (H) of H = 0 rad where the parallactic angle jumps from −π to +π. This discontinuity can easily be avoided by performing the integration between H = H_0 and H = 0 rad, and between H = 0 rad and H = H_1, where H_0 and H_1 are the hour angles at the start and end of the exposure. Versions 1.4 of the pipeline and earlier contained an error in how this calculation was performed: for an exposure with |H_0| < H_1, the two partial integrals were combined incorrectly, rather than as

p_avg = (1 / (H_1 − H_0)) [ \int_{H_0}^{0} p(H) dH + \int_{0}^{H_1} p(H) dH ]. (4)

This error only affects sequences where the target star transited the meridian between the start and end of an exposure. The magnitude of this error depends on exactly when transit occurred relative to the start and the end, and on the declination of the target. The net effect of the error on companion astrometry is small as it will only affect one of ∼40 images taken in a typical GPI observing sequence. This issue has also been corrected as of the latest version of the GPI pipeline. Inaccuracies in Some FITS Header Time Information The pipeline necessarily relies on the accuracy of the FITS header keywords in the data it is processing. However, it has been proven that the FITS header keyword time information is not always as reliable as we would like. A review of FITS header timing information allowed us to uncover several periods in which misconfiguration or malfunction of time server software resulted in systematic errors in header keyword information. We were able to reconstruct the past history of such timing drifts sufficiently well as to be able to retroactively calibrate it out when reprocessing older data. As a reminder, the UTSTART keyword is written by the IFS brick computer. The clock on the IFS brick is, at least in theory, configured to automatically synchronize once per week with Gemini's NTP server. This server provides a master time reference signal to maintain the accurate timings necessary for telescope pointing and control.
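For illustration, the split integration described above for exposures spanning transit can be sketched as follows, with scipy's general-purpose quadrature standing in for the pipeline's Romberg implementation and the standard hour-angle form of the parallactic angle:

import numpy as np
from scipy.integrate import quad

LAT = np.radians(-30.24)  # latitude of Gemini South

def parallactic_angle(H, dec):
    # Standard expression: tan p = sin H / (tan(lat) cos(dec) - sin(dec) cos H),
    # with H the hour angle and dec the declination, both in radians.
    return np.arctan2(np.sin(H),
                      np.tan(LAT) * np.cos(dec) - np.sin(dec) * np.cos(H))

def avg_parallactic_angle(H0, H1, dec):
    # Split the integral at H = 0 to avoid the -pi/+pi jump at transit.
    if H0 < 0.0 < H1:
        i0, _ = quad(parallactic_angle, H0, 0.0, args=(dec,))
        i1, _ = quad(parallactic_angle, 0.0, H1, args=(dec,))
        return (i0 + i1) / (H1 - H0)
    i, _ = quad(parallactic_angle, H0, H1, args=(dec,))
    return i / (H1 - H0)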
In order to cause a noticeable error in the average parallactic angle, the IFS brick time stamps would have to be between a few and a few tens of seconds out of sync, depending on the declination of the star (Fig. 5). The regular synchronization of the clock on the IFS brick was intended to be sufficient to prevent it from drifting at such an amplitude relative to the time maintained by Gemini's NTP server. However, it was eventually discovered that this time synchronization has not always operated as intended, resulting in significant clock offsets for some periods. The history of the offset between the IFS brick clock and UTC cannot be recovered directly from the various logs and headers generated by the IFS. Instead, we can use the difference between the UT and UTSTART header keywords as a proxy. The first timestamp is generated when the command to execute an observation is issued by Gemini's Sequence Executor (SeqExec) and is assumed to be accurate; a significant offset in the observatory's clock would quickly become apparent when attempting to guide the telescope. The second timestamp is generated when the IFS brick receives the command to start an exposure from the GPI TLC. The difference between these two timestamps, UTSTART − UT, should be small and relatively stable, as there have not been any significant changes to these software components since the instrument was commissioned in 2014, and we show below that this time difference does prove to be stable for the majority of GPI data. We, therefore, data mined all available GPI data to determine the time evolution of the offset between UT and UTSTART during the entire time GPI has been operational. We queried the GPIES Structured Query Language database, 24,25 which contains the header information for all images obtained in the GPIES Campaign programs, selected guest observer (GO) programs whose principal investigators have contributed their data into this database, and all public calibration programs. We augmented this with all GO programs that were publicly accessible in the Gemini Observatory Science Archive when this analysis was performed. We excluded engineering frames (images that are obtained via GPI's interactive data language interface), as the UT keyword is populated via a different process for these types of frames. A total of 99,695 measurements of the UT to UTSTART offset spanning the previous six years were obtained, including 93,575 from the GPIES database and 6120 from other GO programs not included within the database. The evolution of this offset between the installation of the instrument at Gemini South and now is shown in Fig. 6. We identified several periods of time, two quite extended, where the IFS clock was not correctly synchronized with the Gemini NTP server. From the initial commissioning of the instrument until the end of 2014, the offset varied significantly, from about 8 s slow to up to 30 s fast. The causes of these variations are not fully known, but we point out that during this first year, GPI was still in commissioning and shared-risk science verification, and its software was still significantly in flux. In several instances, negative shifts in the offset are correlated with dates on which the IFS brick was used after having been restarted but before the periodic time synchronization occurred. The gradual negative drifts in offset observed at several points imply that the IFS clock was running too fast, gaining time at a rate of ∼1 s per day over this period.
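The mining of these header keywords can be sketched with pandas; the data-frame layout below (columns date, ut, and utstart holding per-frame timestamps) is hypothetical, and the 12-h rolling median anticipates the smoothing described in the next paragraph:

import pandas as pd

def build_offset_model(df):
    # Per-frame proxy for the IFS brick clock error: UTSTART - UT, in seconds.
    offset = (pd.to_timedelta(df["utstart"])
              - pd.to_timedelta(df["ut"])).dt.total_seconds()
    series = pd.Series(offset.values,
                       index=pd.to_datetime(df["date"]).values).sort_index()
    # 12-h rolling median, evaluated on a 1-h grid, as a lookup table.
    return series.rolling("12h").median().resample("1h").last()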
Later, other small excursions in April 2016, August 2018, and August 2019 were also apparently caused by the IFS brick being used after an extended time powered off but prior to the scheduled weekly time synchronization. It would, of course, have been better had the time synchronization occurred automatically immediately after each reboot, but that was not the case. A second long period with a significant offset, between June 2015 and March 2016, was caused by the IFS brick being synchronized to the wrong time server; it was tracking the Global Positioning System (GPS) time scale rather than UTC, and therefore, ran 18 s ahead of UTC. Improved systems administration can prevent such drifts in the future, but in order to properly calibrate the available data, we must model out the drifts that occurred in the past. The offset between UT and UTSTART remained relatively stable from mid-2016 through mid-2018 and was independent of the observing mode. We measured the median offset value between 2016.5 and 2018.5 as −3.38 s and defined this as the nominal UT to UTSTART offset (Fig. 7). We used a rolling median with a width of 12 h to calculate the value of the offset at a resolution of 1 h between late 2013 and 2019. A lookup table was created that the pipeline queries when reducing an IFS image so that it can apply a correction to UTSTART and UTEND if the observation was taken during a period identified as having a significant offset (Fig. 6). Modeling Apparent Image Rotation at Gemini's Cassegrain Port Recall from Sec. 2.1 that GPI always operates in ADI mode, with its pupil fixed or nearly fixed relative to the telescope pupil. GPI is attached to Gemini's ISS, which itself is mounted on the Cassegrain port of the telescope. A Cassegrain instrument rotator is used to maintain a fixed PA between the columns on an instrument's detector and either celestial north or the zenith. For an ideal altitude-azimuth telescope, with its azimuth axis perfectly aligned with local vertical and its elevation axis perpendicular to it, an instrument mounted on the Cassegrain port would observe the north angle changing with the parallactic angle as the telescope tracked a star through the meridian. The angle between the columns on the instrument detector and the direction of vertical would remain fixed (Fig. 8). Differences between true vertical and the vertical axis of the telescope cause this angle to vary slightly, an effect most pronounced for stars observed near the meridian with a small zenith distance (≲5 deg). When enabled, Gemini South's instrument rotator compensates for this motion, keeping the VA fixed on the detector (Fig. 9). Due to difficulties maintaining the AO guide loops for targets with a very small zenith distance, it became common for some operators to keep the instrument rotator drive disabled while GPI was in operation, regardless of the target elevation. However, this practice was inconsistently applied. The drive was disabled and the rotator was kept at a nominal home position for 99 of the 317 nights on which GPI was used over the last 6 years. For data taken on these nights, a small correction needs to be applied to the parallactic angle in the header to compensate for this small motion of the VA as a star is tracked through the meridian. Such a correction relies on precise knowledge of the telescope mount alignment. Sufficiently precise information on the Gemini South telescope mount is not publicly available.
We, therefore, derived post facto knowledge of the Gemini South telescope mount based on the behavior of the Cassegrain rotator on nights when it was activated. Fig. 9 caption: Angle of the instrument rotator as a function of hour angle for GPI observations where the rotator drive was enabled. The color of the symbol denotes the declination of the target. The instrument rotator angle has a different behavior for northern and southern targets due to the nonperpendicularity of the Gemini South telescope. Fig. 8 caption: The angle of the vertical vector (green) remains fixed relative to the image coordinate system for an ideal altitude-azimuth telescope, here at an angle of ∼23.5 deg from the x axis within a reduced GPI data cube. Any offset between true vertical and the vertical axis of the telescope will cause the vertical vector within a reduced image to move slightly as the target crosses the meridian; the magnitude of this motion would be imperceptible in this diagram for a small offset, as is the case for Gemini South, but is significant relative to the precision of astrometric measurements made with GPI. We constructed a simple model to predict the correction to the parallactic angle caused by the nonperpendicular nature of the telescope. 26 For a perfect telescope, the parallactic angle p of a source is calculated as (Fig. 10)

tan p = −cos φ sin A / (sin φ cos E − cos φ sin E cos A), (5)

where A and E are the topocentric horizontal coordinates of the target, i.e., azimuth and elevation. If the telescope's azimuth platform is tilted at an angle of θ with an azimuth of Ω, the difference Δp between the true p and apparent p′ parallactic angles can be derived from θ, Ω, and the telescope pointing, 27 while a tilt θ_E of the elevation axis introduces

Δp = −arcsin(sin θ_E / cos E). (6)

These tilts also lead to a slight difference between the elevation and azimuth (E′, A′) of the telescope mount and the topocentric elevation and azimuth (E, A) of the target; the mount coordinates follow from rotating the topocentric coordinates about the tilted azimuth and elevation axes. To construct a model of the tilt of the azimuth and elevation axes of the Gemini South telescope, we assumed that the instrument rotator was only compensating for the change in parallactic angle induced by these tilts. We collected measurements of the telescope elevation and azimuth and instrument rotator position on the 207 nights where GPI observations were taken with the rotator drive enabled. As the header stores the mechanical position of the telescope, we inverted the previous equations to compute the topocentric elevation and azimuth. Using these, we predicted the change in parallactic angle, and thus the position that the instrument rotator would need to be at to compensate for nonperpendicularity for a given set of tilt parameters (θ, Ω, and θ_E). We performed a least squares minimization to determine the set of tilt parameters that best reproduce the instrument rotator position for 10 roughly 6-month periods over the last 5 years. The break points were chosen arbitrarily to be at the start and midpoint of each year except for years in which a major earthquake occurred near Cerro Pachon (September 17, 2015, and January 19, 2019), and when a break point coincided with a period in which GPI was being used. The tilt model parameters that best fit the measured instrument rotator positions are given in Table 1.
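A minimal sketch of Eq. (5) and the elevation-axis term of Eq. (6); the tilt value used in the example is a placeholder, not one of the fitted parameters of Table 1:

import numpy as np

PHI = np.radians(-30.24)  # geodetic latitude of Gemini South

def parallactic_angle_topo(A, E):
    # Eq. (5), with A and E the topocentric azimuth and elevation (radians).
    num = -np.cos(PHI) * np.sin(A)
    den = np.sin(PHI) * np.cos(E) - np.cos(PHI) * np.sin(E) * np.cos(A)
    return np.arctan2(num, den)

def dp_elevation_tilt(E, theta_E):
    # Eq. (6): apparent parallactic-angle change from an elevation-axis tilt.
    return -np.arcsin(np.sin(theta_E) / np.cos(E))

# Placeholder: a 10 arcsec elevation-axis tilt for a target at 85 deg elevation.
E = np.radians(85.0)
print(np.degrees(dp_elevation_tilt(E, np.radians(10.0 / 3600.0))))  # ~ -0.03 deg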
A comparison between the model and data on the night of May 6, 2015, UT is shown in Fig. 11. The model is able to reproduce the commanded rotator positions with residuals smaller than the north calibration uncertainty (discussed below) in all but a handful of the images, specifically those taken at elevations ≳88 deg (Fig. 12). We identified all GPI images to which we had access that were taken with the instrument rotator drive disabled. We used the tilt model parameters in Table 1 and the telescope elevation and azimuth within the header to calculate the correction to apply to the parallactic angle to compensate for the slight change in the angle of vertical on the detector. We created a lookup table with these corrections, using the DATALAB header keyword to uniquely assign a correction to a specific GPI observation taken with the rotator drive disabled. Files with DATALAB values not in the lookup table do not have a correction applied. This lookup table contains all GPI observations taken with the drive disabled that were accessible at the time of this study, including GPIES campaign data, GO program data that are ingested into the GPIES database, and GO program data that were public at the time of the analysis.

North Angle Calibration

The corrections to the GPI DRP described in Secs. 3, 4, and 5 necessitated a revision of GPI's astrometric calibration, specifically the true north angle. The north angle offset is defined as the angle between IFS pixel columns and north in an image that has been rotated to put north up based on the average parallactic angle during the exposure. Here, we define the direction of the north angle offset as θ_true − θ_observed, a correction that would need to be added to a PA measured in images reduced with the GPI DRP (after correcting for the x axis flip) to recover the true PA of a companion. We calibrate true north in GPI data based on observations of astrometric reference targets on sky. The small field of view (2.8 arcsec × 2.8 arcsec) and relatively bright limiting magnitude (I < 10) of GPI exclude many of the typical astrometric calibration fields used by other instruments (e.g., M15 and M92). Instead, we rely on periodic observations of a set of calibration binaries that have near-contemporaneous measurements with the well-calibrated NIRC2 camera on the Keck II telescope.28,29

Gemini South/GPI Observations

We have observed nine binary or multiple star systems since the start of routine operations in 2014. A summary of all these observations is given in Table 2 (λ_eff = 2.06 μm); note that since the spectral filter in the GPI IFS is after the spatial pixellation at the lenslet array, a change of filter cannot affect the astrometric calibration. The majority of the observations were obtained in GPI's "direct" mode, a configuration where the various coronagraphic components are removed from the optical path. Some were obtained in "unblocked" mode, which includes the Lyot mask and pupil plane apodizer in the optical path to reduce instrument throughput, preventing saturation from brighter stars. The addition of a neutral density filter in 2017 allowed us to observe calibrator binaries that were significantly brighter than the nominal H-band saturation limit of the IFS in either direct or unblocked mode.
Observations of the θ1 Ori B multiple system were taken in the coronagraphic mode, the typical mode for planet search observations, allowing for a high signal-to-noise ratio (SNR) detection of the fainter stellar components B2, B3, and B4 that all lie within an arcsecond of the primary star. We do not expect the coronagraph optics to have a significant effect on astrometric measurements, except for those made for objects extremely close to the edge of the focal plane mask, which is not relevant here. The three coronagraph optics are in pupil and focal planes only, so they cannot individually introduce distortions. By effectively weighting the beam profile across the pupil, they could, in principle, cause the beam to sample a different portion of any intermediate optics; if those optics had polishing errors, this could cause a slight field-dependent photocenter shift. These optics (Fig. 1) are small, located in a slow beam, and superpolished to ∼1 nm rms wavefront error. Measured distortions are ∼3 mas across the field of view4 and completely dominated by the geometric effects of the telephoto relay inside the spectrograph, with no evidence for a polishing-error component.

These observations were processed using version 1.5 (revision e0ea9f5) of the GPI DRP, incorporating the changes described in Secs. 3, 4, and 5. The data were all processed using the same DRP recipe with standard processing steps. The raw images were dark subtracted and corrected for bad pixels using both a static bad pixel map and outlier identification. The individual microspectra in each 2-D image were reassembled into a 3-D data cube (x, y, λ) using a wavelength solution derived from observations of a calibration argon arc lamp. An additional outlier identification and rejection step was performed on the individual slices of the data cubes. A distortion correction was then applied to each slice based on measurements of a pinhole mask taken during the commissioning of the instrument.4

Keck II/NIRC2 Observations

The same nine multiple systems have been observed with the NIRC2 instrument in conjunction with the facility AO system on the Keck II telescope. The isolated calibration binaries have between one and six NIRC2 epochs between 2014 and 2019. The Trapezium cluster that contains θ1 Ori B has been observed periodically with NIRC2 as an astrometric calibrator field by multiple different teams, with archival measurements extending as far back as December 2001. The observations were taken in a variety of instrument configurations and filters. A summary of these observations is given in Table 3. Datasets were taken in either PA mode, where north remains fixed at a given angle on the detector, or VA mode, where the VA remains fixed and north varies with the parallactic angle of the target. We reduced these data using a typical near-infrared imaging DRP: correction for nonlinearity,30 dark subtraction, flat fielding, and bad pixel identification and correction. Reduced images were corrected for geometric distortion using the appropriate distortion map.28,29 For observations taken using a subarray of the NIRC2 detector, we zero-padded the images prior to applying the distortion correction, as the distortion correction script is hard-coded for 1024 × 1024 px images.31
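A minimal sketch of the zero-padding step mentioned above; the corner-origin convention and the function name are illustrative assumptions on our part, not the interface of the actual distortion-correction script.

```python
import numpy as np

def pad_subarray(subarray, x0, y0, full_size=1024):
    """Embed a NIRC2 subarray readout into a zero-filled 1024 x 1024 frame at
    the (x0, y0) corner where it was read from the detector, so that the
    full-frame distortion solution can be applied afterwards."""
    frame = np.zeros((full_size, full_size), dtype=float)
    ny, nx = subarray.shape
    frame[y0:y0 + ny, x0:x0 + nx] = subarray
    return frame

# e.g., a 512 x 512 subarray centered on the full frame
padded = pad_subarray(np.ones((512, 512)), x0=256, y0=256)
print(padded.shape, padded.sum())  # (1024, 1024) 262144.0
```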
The astrometric calibration of NIRC2 was derived from analyses of globular cluster observations and has been validated with measurements of the locations of SiO masers in the galactic center that were determined precisely using very long baseline radio interferometry measurements.28,29 We used a plate scale of 9.952 ± 0.002 mas px⁻¹ and a north angle offset of −0.252 ± 0.009 deg for data taken prior to April 13, 2015,28 and a plate scale of 9.971 ± 0.005 mas px⁻¹ and a north angle offset of −0.262 ± 0.020 deg for data taken after.29

Relative Astrometry

We used PSF fitting to measure the position of the companion relative to the primary. For the calibration binaries other than θ1 Ori B, we estimated the location of the primary star within each image (or wavelength slice) by fitting a 2-D Gaussian to a small 7 × 7 pixel stamp centered on an initial estimate of the primary star position. The five parameters (x, y, σx, σy, and amplitude A) were allowed to vary, except for the NIRC2 data obtained on 2019-04-25 (HIP 80628) and 2019-05-23 (HIP 44804), where σx and σy were fixed due to a strongly asymmetric PSF and the proximity of the companion. This process was repeated using the output of the first iteration as the initial guess for the second. We extracted a 15 × 15 px stamp centered on the fitted position of the primary to use as a template to fit the location of the secondary. We used the Nelder-Mead downhill simplex algorithm to determine the pixel offset and flux ratio between the primary and secondary stars by minimizing the squared residuals within a 2λ/D radius aperture surrounding the secondary. We estimated the uncertainty in the centroid of each fit as the full width at half maximum divided by the SNR, measured as the peak pixel value divided by the standard deviation of pixel values within an annulus 15λ/D from the star.

We corrected for differential atmospheric refraction caused by the different zenith angles of the two stars using the model described in Ref. 32. We used the simplifying assumption that the observations were monochromatic at the central wavelength of the filter, negating any stellar color dependence on the effective wavelength. This effect causes a reduction in the separation of a binary star along the elevation axis and was typically very small, at most 0.3 mas for the NIRC2 observations of HIP 80628 taken at an elevation of ∼35 deg. PAs measured in datasets taken in VA mode were corrected by the parallactic angle at the middle of the exposure such that they were effectively measured relative to north. The small angular separation between the two components of the θ1 Ori B2-B3 binary required us to use either θ1 Ori B1 for the NIRC2 observations or θ1 Ori B4 for the GPI observations as a reference PSF. We used this template PSF to simultaneously fit the locations and fluxes of the two components of the B2-B3 binary following a similar procedure. We used a Fourier high-pass filter to subtract the seeing halo from B1, which was introducing a background signal for both B4 and the B2-B3 binary. The relative astrometry is listed in Table 2 for GPI and in Table 3 for NIRC2. We did not apply any correction for differential atmospheric refraction for these observations given the extremely small difference in zenith angle between the two stars. We did not use the relative astrometry of B1-B2, B1-B3, or B1-B4, as B1 was obscured by GPI's focal plane mask; nor did we use B2-B4 or B3-B4, as the relative motion of these three stars cannot be described using a simple Keplerian model.

As a verification of the relative astrometry presented here, we performed an independent analysis of a subset of both the GPI and NIRC2 observations using the procedure described in Ref. 4. The GPI data were reduced with the same version of the DRP, whereas the NIRC2 data were reduced with a separate pipeline that performed the same functions as described in Sec. 6.2. Once the data were reduced, relative astrometry was performed using StarFinder.33 For this subset of observations, we measured separations and PAs consistent with the values reported in Tables 2 and 3.
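The template-fitting step described above can be sketched as follows. This is a simplified, self-contained stand-in, not the actual analysis code: it uses a fixed pixel aperture instead of 2λ/D, omits the refraction correction, and all names are ours.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def fit_offset_flux(cutout, template, r_fit=6.0):
    """Fit the sub-pixel shift and flux ratio that best match `template` (a
    stamp of the primary) to `cutout` (a same-size stamp around the secondary),
    minimizing squared residuals within r_fit px of the stamp center using the
    Nelder-Mead downhill simplex algorithm."""
    yy, xx = np.indices(cutout.shape)
    cy, cx = (np.array(cutout.shape) - 1) / 2.0
    mask = np.hypot(xx - cx, yy - cy) <= r_fit

    def cost(p):
        dy, dx, f = p
        model = f * nd_shift(template, (dy, dx), order=3, mode="constant")
        return np.sum((cutout - model)[mask] ** 2)

    res = minimize(cost, x0=[0.0, 0.0, 1.0], method="Nelder-Mead")
    return res.x  # (dy, dx, flux ratio)

# Synthetic check: a shifted, scaled copy of the template is recovered.
yy, xx = np.indices((15, 15))
template = np.exp(-0.5 * ((yy - 7) ** 2 + (xx - 7) ** 2) / 1.5 ** 2)
cutout = 0.3 * nd_shift(template, (0.4, -0.7), order=3, mode="constant")
print(fit_offset_flux(cutout, template))  # approximately [0.4, -0.7, 0.3]
```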
Accounting for Orbital Motion

Orbital motion of the calibration binaries between the NIRC2 and GPI epochs can introduce a significant bias in the north angle offset measurement. We fit Keplerian orbits to each of the calibration binaries using the NIRC2 astrometry presented in Table 3. These fits allowed us to simulate NIRC2 measurements at the same epochs as the GPI observations listed in Table 2, mitigating the bias induced by orbital motion. We use the parallel-tempered affine-invariant Markov chain Monte Carlo (MCMC) package emcee34 to sample the posterior distributions of the Campbell elements describing the visual orbit and of the system parallax. A complete description of the fitting procedure as applied to the 51 Eridani system can be found in Ref. 35. We used prior distributions for the system mass based on the blended spectral type and flux ratios of the components, and for the system parallax using measurements from either Hipparcos36 or Gaia.37 We used a parallax of 2.41 ± 0.03 mas for θ1 Ori B2-B3.38 We also fitted the radial velocity measurements of both components of the HD 158614 binary39 to help further constrain its orbital parameters. We purposely excluded astrometric measurements from other instruments and assumed that the NIRC2 astrometric calibration was stable before and after the realignment procedure in mid-2015. The PA of the visual orbit and corresponding residuals are shown in Fig. 13 for the nine calibration binaries.

We simulated NIRC2 measurements at the epoch of the GPI observations by drawing 10,000 orbits at random from the MCMC chains and converting the orbital elements into separations and PAs at the desired epoch. We used the median of the resulting distribution of separations and PAs as the simulated measurement and the standard deviation as the uncertainty. These simulated measurements are reported in Table 4. The small semimajor axis of the HIP 43947 binary led to a significant uncertainty on the simulated NIRC2 observation despite the short 50-day baseline between the NIRC2 and GPI observations, precluding a measurement of the north offset angle with this binary. This was also the case for all but one epoch of both the HD 1620 and HD 6307 systems. Additional observations of these systems with NIRC2 to reduce the orbital uncertainties will be required for more precise predictions at these epochs. The remaining binaries (HD 157516, HD 158614, HIP 44804, HIP 80628, HR 7668, and θ1 Ori B2-B3) either had enough NIRC2 measurements to sufficiently constrain the orbit at the GPI epochs or were observed close enough in time that the orbital motion between the NIRC2 and GPI epochs was smaller than the measurement uncertainties.
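A minimal sketch of how simulated measurements can be drawn from an orbit posterior: each posterior sample is converted to a separation and PA at the GPI epoch, and the median and standard deviation of the resulting distributions serve as the simulated measurement and its uncertainty. The element conventions and the placeholder "posterior" arrays below are our assumptions; in practice the samples would come from the emcee chains.

```python
import numpy as np

def kepler_E(M, e, n_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration."""
    M, e = np.broadcast_arrays(np.asarray(M, float), np.asarray(e, float))
    E = M.copy()
    for _ in range(n_iter):
        E = E - (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def sep_pa(t, a, e, inc, omega, Omega, tau, P):
    """Projected separation (in units of a) and PA (deg) of a visual orbit at
    epoch t, for Campbell elements in one common convention (radians)."""
    M = 2.0 * np.pi * (((t - tau) / P) % 1.0)
    E = kepler_E(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2.0),
                          np.sqrt(1 - e) * np.cos(E / 2.0))
    r = a * (1.0 - e * np.cos(E))
    u = nu + omega                                   # argument of latitude
    pa = Omega + np.arctan2(np.sin(u) * np.cos(inc), np.cos(u))
    rho = r * np.sqrt(np.cos(u) ** 2 + (np.sin(u) * np.cos(inc)) ** 2)
    return rho, np.degrees(pa) % 360.0

# Placeholder posterior draws standing in for the 10,000 MCMC samples.
samples = {"a": np.random.normal(100.0, 2.0, 10000),
           "e": np.random.uniform(0.0, 0.3, 10000)}
rho, pa = sep_pa(2016.2, samples["a"], samples["e"], inc=0.6, omega=1.0,
                 Omega=2.0, tau=2010.0, P=80.0)
print(np.median(rho), np.std(rho), np.median(pa), np.std(pa))
```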
GPI Plate Scale

The plate scale for GPI was measured using the predicted separations in angular units from the orbit fit to the NIRC2 measurements and the pixel separations measured in the reduced GPI images (Table 4). We saw no evidence of a variation in the plate scale with time (Fig. 14) and adopted a single value of 14.161 ± 0.021 mas px⁻¹. This measurement is consistent with the previous plate scale of 14.166 ± 0.007 mas px⁻¹,4,10 but with a larger uncertainty. The pipeline changes described in Secs. 3, 4, and 5 have no impact on the separation of two stars within a reduced GPI image. The slight difference in the inferred plate scale can instead be ascribed to the revised reference separations predicted from the orbit fits to the NIRC2 measurements.

Fig. 13 (caption): (a) PA and (b) residuals of the orbits (blue lines) consistent with the NIRC2 astrometry in Table 3 (squares). The dates of GPI observations are highlighted; green dashed lines denote epochs that were used for the astrometric calibration, and red dotted lines denote epochs where the orbital motion is significant relative to the GPI measurement uncertainties. In a subset of the plots in (b), the date range has been restricted to focus on the dates of the GPI observations.

We measured a trend of increasing north offset angle over the course of 6 years when comparing the calibration binary measurements in early-2014 and mid-2019. One plausible cause of a rotation of the instrument with respect to the telescope is the annual shutdown of the telescope, when both the instrument and the ISS are removed to perform maintenance. We fit a variable north offset angle that remains static between the dates of telescope shutdowns. A series of weighted means was calculated using the measurements between each pair of shutdowns, as listed in Table 4 and plotted in Fig. 15. This model reproduces the trend of increasing north offset angle during the previous 6 years and is an improved fit (χ²_ν = 0.4, ν = 31) relative to the single-valued model. We opted to use this variable north offset angle model for the final astrometric calibration of the instrument.

Fig. 14 (caption): Measurements of the plate scale of GPI derived from calibration binaries (red circles) and the θ1 Ori B2-B3 binary (black squares). The mean and standard deviation (blue solid line and shaded region) were calculated using a weighted mean and assuming that the measurements were not independent. The previous astrometric calibration is overplotted for reference (gray dashed line and shaded region).

Fig. 15 (caption): Measurements of the north offset angle of GPI derived from calibration binaries (red circles) and the θ1 Ori B2-B3 binary (black squares). We fit the north angle assuming either (a) a constant calibration for the entire date range or (b) a value that varies between telescope shutdowns. The mean and standard deviation (blue solid line and shaded region) are calculated as in Fig. 14.
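The piecewise-constant north-angle model amounts to an inverse-variance weighted mean within each inter-shutdown interval, as in this sketch. Unlike the plate-scale average quoted above, the per-interval uncertainty here assumes independent measurements; names and interfaces are ours.

```python
import numpy as np

def weighted_means_between_breaks(dates, theta, sigma, breaks):
    """North offset angle modeled as piecewise constant between telescope
    shutdown dates: an inverse-variance weighted mean within each interval.
    Returns (start, end, mean, uncertainty) for every populated interval."""
    edges = np.concatenate(([-np.inf], np.sort(breaks), [np.inf]))
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (dates >= lo) & (dates < hi)
        if sel.any():
            w = 1.0 / sigma[sel] ** 2
            out.append((lo, hi, np.sum(w * theta[sel]) / np.sum(w),
                        np.sqrt(1.0 / np.sum(w))))
    return out

# Usage with decimal-year dates and hypothetical shutdown epochs:
dates = np.array([2014.2, 2015.1, 2016.3, 2017.4, 2018.6, 2019.2])
theta = np.array([0.17, 0.20, 0.25, 0.30, 0.40, 0.45])
sigma = np.full(6, 0.12)
print(weighted_means_between_breaks(dates, theta, sigma,
                                    breaks=[2015.6, 2017.0, 2018.9]))
```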
Instrument Stability

The cause of the change of the north offset angle over time is not known. In principle, a movement of the IFS or the CAL system on their bipod mounts could produce a clocking of the focal plane with respect to the telescope, although a movement of 5 mm would be required. We excluded rotations internal to the instrument by measuring the angle between two of the satellite spots within a postalignment image taken routinely before instrument operation. These satellite spots are generated by a periodic wire grid on the pupil plane apodizer,13,14 located on the AO bench (Fig. 1). A physical rotation of the IFS relative to the apodizer would manifest itself as a rotation of the satellite spots within the focal plane as recorded by the IFS. We measured the angle between the bottom left and top right satellite spots in 406 postalignment images taken between late-2014 and mid-2019 using the satellite spot finding algorithm that is part of the GPI DRP. We find no significant trend in this angle over the past 5 years (Fig. 16), although a significant offset of ∼0.1 deg is seen for a few months at the start of 2016 that coincides with mechanical difficulties with the wheel containing the pupil plane apodizers. Excluding this period, we find an angle between these two satellite spots of 335.96 ± 0.02 deg. The stability of this angle implies that the change in the north offset angle seen in Fig. 15 is caused by a mechanical rotation upstream of the pupil plane mechanism containing the apodizer. The GPI optics upstream of this are all rigidly mounted in a single plane onto a thick optical bench and are extremely unlikely to produce such a rotation. In principle, a rotation of the outer truss structure holding all three assemblies with respect to the mounting plate could rotate the focal plane, but again that would have to be on the order of 5 mm, which is essentially impossible. GPI has an extremely rigid truss structure supporting its various subcomponents. Integrated finite element analysis/optical modeling shows that flexure motions of any component relative to the optical axis are <25 μm over the operating range of gravity vectors.40 Although we did not explicitly model rotation, if any hypothetical rotation component involves displacements on the same scale, the angular rotation would be on the order of 0.01 deg. The pins that locate GPI onto the ISS face have much more precise tolerances than that as well (<0.23 mm).

Revised Astrometry for Substellar Companions

The changes to the pipeline described in Secs. 3, 4, and 5 and the revised astrometric calibration of the instrument described in Sec. 7 both necessitate a revision of previously published relative astrometry of substellar companions measured using GPI observations. We present revisions for β Pictoris b,8 for the exoplanets in the HR 8799 5 and HD 95086 7 systems, and for the brown dwarfs HR 2562 B 42 and HD 984 B,43 that correct for the changes to the pipeline and the revised astrometric calibration of the instrument.

Fig. 16 (caption): (a) One wavelength slice of a reduced GPI data cube for a postalignment image taken using GPI's internal source on November 12, 2014. The four satellite spots generated by the grid on the pupil plane apodizer are clearly visible. (b) The PA between the bottom left (S1) and top right (S2) satellite spots, measured from S1 to S2 counter-clockwise from vertical, plotted as a function of date for each postalignment image taken since the instrument was commissioned.

We reduced the same images used in the previous studies with the latest version of the GPI DRP. The revisions described in Secs. 3, 4, and 5 all affect the AVPARANG header keyword. The change in this value is plotted as a function of frame number for each observing sequence in Fig. 17. ΔAVPARANG is typically small and static, only changing by at most ∼0.05 deg between the start and end of the J-band sequence on HD 984 taken on August 30, 2015. The effect of the parallactic angle integration error described in Sec. 3.2 is apparent in several epochs. The median ΔAVPARANG was used in conjunction with the revised north offset angle described in Sec. 7 to revise the previously published astrometry. We assumed that a single offset to the measured PA of a companion accurately describes the effect of the change to the parallactic angle for each frame within a sequence. As the maximum change in ΔAVPARANG over a sequence was 0.05 deg, the effect on the companion astrometry is likely of this order or smaller.
For the majority of cases, ΔAVPARANG changes by less than a hundredth of a degree over the course of a full observing sequence. The previous and revised astrometry for each published epoch are given in Table 5. We find small but not significant changes in the measured separations, and significant changes in the measured PAs due to the significant change in the north offset angle described in Sec. 7.

Discussion/Conclusion

We have identified and corrected several issues with the GPI DRP that affected astrometric measurements of both calibration binaries and substellar objects whose orbital motion was being monitored. We reprocessed the calibration data after implementing these fixes in the pipeline and revised the astrometric calibration of the instrument. The most significant change was to the north offset angle, which changed from −0.10 ± 0.13 deg to between 0.17 ± 0.14 deg and 0.45 ± 0.11 deg, depending on the date. The plate scale of the instrument was also remeasured as 14.161 ± 0.021 mas px⁻¹, consistent with the previous calibration albeit with a larger uncertainty. Although the change to the astrometric calibration of the instrument is significant relative to the stated uncertainties, the impact should be limited to studies that combine GPI astrometry with that from instruments of similar precision. The revised calibration should not have a significant impact on the results and interpretation of studies that used GPI astrometry either solely or in conjunction with astrometry from instruments with significantly worse astrometric precision;6,9,10 an offset in the north angle will simply change the PA of the orbit on the sky (Ω). A more significant effect might be seen for orbit fits that combined astrometry from GPI with astrometry of a similar precision from other instruments.7,44 The magnitude of the effect on the derived orbital parameters is likely small. All but one of the substellar companions studied with GPI have only a small fraction of their complete orbits measured, and so the change in the shape of the posterior distributions describing the orbital elements is likely not statistically significant. The precision of astrometric measurements made with GPI is currently limited by measurement uncertainties, except for widely separated companions such as HR 8799 bcd and the highest SNR measurements of β Pic b made in 2013, when the projected separation was ∼430 mas, for which the north angle uncertainty dominates the PA error budget. Lower SNR measurements of faint companions such as 51 Eri b are less affected, with the north angle uncertainty being between a factor of two and five smaller than the measurement uncertainty. Future studies using archival GPI data will need to account for both the changes to the pipeline and the revision to the astrometric calibration. The updated pipeline is publicly available on the GPI instrument website45 and on GitHub.46 All users wishing to perform precision astrometry will have to reduce their data using the latest version of the pipeline, especially for data obtained on the highlighted dates in Fig. 6, and apply the revised astrometric calibration presented in Sec. 7. The measurements presented here demonstrate the importance of continued astrometric calibration, especially for instruments on the Cassegrain mount of a telescope.
Improvements to the limiting magnitude of GPI's AO system as it is moved to Gemini North will allow us to use globular clusters as astrometric calibrators instead of isolated binaries, allowing for a more precise determination of the north angle via a comparison to both archival Hubble Space Telescope and contemporaneous Keck/NIRC2 observations. This study also demonstrates the importance of precise and accurate astrometric calibration of instruments designed for high-contrast imaging of extrasolar planets. Instruments equipped with an IFS necessarily have a small field of view, which is challenging for astrometric calibration that typically relies on images of globular clusters extending over several to tens of arcseconds. These results also demonstrate the importance of accounting for orbital motion, either between the two components of a calibration binary and/or the photocenter motion of one of the components if it is itself a tight binary. A similar problem arises with the use of SiO masers near the Galactic Center;28 the location of the infrared source is not necessarily coincident with that of the radio emission to which the infrared astrometric reference frame is tied.47 Precise and accurate astrometric calibration of future instruments with very narrow fields of view, such as the Coronagraphic Instrument on the Wide Field Infrared Survey Telescope,48 will require a careful calibration strategy to mitigate the effects of these and other biases.
Ordering in granular rod monolayers driven far from thermodynamic equilibrium

The orientational order in vertically agitated granular rod monolayers is investigated experimentally and compared quantitatively with equilibrium Monte Carlo simulations and density functional theory. At sufficiently high number density, short rods form a tetratic state and long rods form a uniaxial nematic state. The length-to-width ratio at which the order changes from tetratic to uniaxial is around $7.3$ in both experiments and simulations. This agreement illustrates the universal aspects of the ordering of rod-shaped particles across equilibrium and nonequilibrium systems. Moreover, the assembly of granular rods into ordered states is found to be independent of the agitation frequency and strength, suggesting that the detailed nature of energy injection into such a nonequilibrium system does not play a crucial role.

I. INTRODUCTION

The ordering of anisotropic particles is a universal phenomenon appearing widely in nature, ranging from thermally driven molecules or colloids [1][2][3][4] to active particles such as bacteria colonies [5], actin filaments [6,7], animal groups [8][9][10], and living liquid crystals [11]. In equilibrium lyotropic systems, such as hard rods interacting only through excluded volume interactions, the transition of sufficiently anisotropic particles into various ordered states is entropy driven. The loss in rotational degrees of freedom in the ordered state is compensated by the gain in the translational ones [3,4,12]. Taking a two-dimensional system of hard rectangles as an example, a tetratic state with four-fold rotational symmetry has recently been discovered in Monte Carlo (MC) simulations [13,14] and studied theoretically with density functional theory (DFT) [15][16][17]. The number density and the length-to-width ratio (aspect ratio) of the particles are found to be the key parameters determining the ordered states of hard rectangles with only excluded volume interactions [15]. Given the ubiquity of ordering transitions in nature, it is important to ask how well the existing knowledge on such transitions in equilibrium (thermal) systems can be extended to nonequilibrium (athermal) systems. Due to the dissipative interactions between particles, agitated granular matter has been frequently used as a nonequilibrium model system for phase transitions [18][19][20][21][22][23][24]. Rich and often counterintuitive dynamical behaviors [25] have been discovered for granular rods, including vortex patterns [26], collective swirling motions [27], giant number fluctuations [28,29], violation of the equipartition theorem [30], and an enhanced ordering transition in an effective 'thermal' bath of spherical particles [31]. Reminiscent of equilibrium systems, ordering transitions of vertically agitated granular rods have been investigated in three-dimensional (3D) and quasi-two-dimensional systems. In 3D, the aspect ratio of the rods was found to influence the ordered state of cylindrically shaped rods [36]. In quasi-two-dimensional systems, a bulk isotropic-uniaxial nematic (I-U) transition was observed for cylindrical rods with large aspect ratio [32] and an effective elastic constant was characterized quantitatively [33].
Particularly in strict monolayer systems, the shape of the rods was found to play an important role in determining the ordered states: tetratic, nematic, or smectic ordering was found for cylindrical rods, tapered rods, or rice particles, respectively [34]. Moreover, tetratic ordering was also found for tubular shaped particles, and the influence of the container shape was discussed in [35]. Despite this progress, it is still unclear to which extent one can draw quantitative connections between systems in and out of thermodynamic equilibrium. More specifically, a quantitative comparison between the state diagram of dissipative granular rods and that of the corresponding equilibrium model system is still lacking. This quantitative comparison is the purpose of the present work. Here, we investigate experimentally the orientational ordering of cylindrically shaped granular rod monolayers driven far from thermodynamic equilibrium, and compare the results to MC simulations as well as DFT of the analogous equilibrium system. Focusing on the bulk region of the system, we detect both tetratic and uniaxial nematic states by varying the aspect ratio of the rods. We demonstrate that the aspect ratio and the number density of rods are the key parameters determining the state diagram in both systems. In the state diagram, we find a common aspect ratio that separates tetratic and uniaxial nematic states in both experiments and MC simulations. Such an agreement illustrates the universal aspects of the ordering of rod-shaped particles.

A. Experiments

A sketch of the experimental set-up is shown in Fig. 1. Monodisperse polyvinyl chloride (PVC) rods of diameter D and length L, cut from welding wires of D = 3 mm (aspect ratio L/D ≤ 5) or 1.5 mm (L/D ≥ 5), are confined in a cylindrical container of height H and radius R = 10 cm. The ratio H/D = 4/3 is chosen for both diameters to ensure a monolayer of particles; that is, no rods can cross or jump over each other. The inner surface of the container is covered with antistatic spray (Kontakt Chemie, Antistatik 100) to minimize electrostatic forces. An electromagnetic shaker (Tira TV50350) is employed to drive the sample sinusoidally against gravity with frequency f = 50 Hz and peak acceleration Γ = 4π²f²z₀/g, where z₀ is the peak vibrational amplitude and g is the gravitational acceleration. The acceleration is monitored with an accelerometer (Dytran 3035B2). We capture high-contrast images of the rods using backlight LED illumination and a camera (IDT MotionScope M3) mounted above the container. The camera is synchronized with the shaker so as to capture images at a fixed phase of each vibration cycle. The images are subjected to an analysis algorithm that determines the center of mass P_i = (x_i, y_i) and the orientation θ_i ∈ [0, π[ of the i-th rod, with i ∈ [1, N]. θ_i is the angle of the main rod axis with respect to a fixed laboratory axis, and N is the total number of rods in the container. The detection rate is 100% for D = 3 mm and 95% for D = 1.5 mm. To systematically study the collective behavior of the rods, we vary the global area fraction Φ_g = NLD/(πR²) between ∼0.3 and ∼0.9, and the aspect ratio L/D between 2.0 and 13.3. For each Φ_g and L/D, we vary the peak acceleration Γ in steps of 1 from 2 to 20 and back. The waiting time between each step is fixed at ∼1.5 minutes. We repeat the whole cycle at least 3 times.
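For reference, the driving strength and amplitude are linked by the stated relation Γ = 4π²f²z₀/g. A two-line helper (ours, not part of the experimental control software) makes the numbers concrete:

```python
import numpy as np

g = 9.81  # m/s^2

def peak_amplitude(Gamma, f):
    """Peak vibration amplitude z0 implied by Gamma = 4*pi^2*f^2*z0/g."""
    return Gamma * g / (4.0 * np.pi ** 2 * f ** 2)

# e.g., Gamma = 4 at f = 50 Hz corresponds to z0 of about 0.4 mm
print(peak_amplitude(4, 50) * 1e3, "mm")
```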
B. Simulations and theory

Correspondingly, we model the particles as two-dimensional hard rectangles of length L and width D that interact through excluded volume interactions. N such particles are placed in a box with dimensions L_x and L_y along the x- and y-axes, respectively. We use periodic boundary conditions along both axes and study the equilibrium bulk configurations by means of standard MC simulations [37] in the canonical ensemble. That is, we fix the number of particles N and the system area A = L_x L_y (the temperature is irrelevant in hard models). The number of particles is similar to that in the experiments, N ∼ 10³. We use simulation boxes with rectangular and square shapes. No difference has been found between both geometries. The simulation method is as follows. In order to equilibrate the system we start at very high area fractions, φ ≈ 0.95, placing the particles, with their main axes pointing in the same direction, in a rectangular lattice. Next we run 10⁷ Monte Carlo steps (MCSs). Each MCS is an attempt to move and rotate all the particles in the system. The maximum displacement Δr_max and maximum rotation Δθ_max that each particle is allowed to perform in a MCS are determined such that the acceptance probability is 0.2. Then we remove a few randomly chosen particles, recalculate Δr_max and Δθ_max, and start a new simulation. The number of removed particles is such that the change in area fraction is Δφ ≲ 0.01. In order to rule out metastable configurations related to the preparation of the initial state, we discard simulations with φ ≳ 0.8. When the area fraction is below that limit we start the proper simulation. For each simulation we first run 10⁶ MCSs to equilibrate the system and then accumulate data over 10⁷ MCSs. For selected L/D we have also simulated the system by increasing the number of particles, i.e., by adding particles instead of removing them. We have found no differences between both methods. In addition, an Onsager-like DFT is employed to study the equilibrium bulk phase behavior. Details on the implementation of the DFT are provided in appendix A.
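Viewed as code, the hard-core interaction driving these simulations reduces to an overlap test between rectangles: a trial move or rotation is rejected whenever it creates any overlap. The sketch below (our illustration, not the authors' code) implements this test with the separating-axis theorem.

```python
import numpy as np

def rectangles_overlap(c1, th1, c2, th2, L, D):
    """Separating-axis test for two hard rectangles of length L and width D
    with centers c1, c2 and orientations th1, th2 (radians). Returns True if
    the rectangles overlap."""
    u1 = np.array([np.cos(th1), np.sin(th1)]); v1 = np.array([-u1[1], u1[0]])
    u2 = np.array([np.cos(th2), np.sin(th2)]); v2 = np.array([-u2[1], u2[0]])
    d = np.asarray(c2, float) - np.asarray(c1, float)
    for ax in (u1, v1, u2, v2):
        # projected half-extent of each rectangle on this candidate axis
        r1 = 0.5 * L * abs(ax @ u1) + 0.5 * D * abs(ax @ v1)
        r2 = 0.5 * L * abs(ax @ u2) + 0.5 * D * abs(ax @ v2)
        if abs(ax @ d) > r1 + r2:   # gap found: this axis separates them
            return False
    return True

print(rectangles_overlap((0, 0), 0.0, (1.0, 0.0), np.pi / 2, L=3.0, D=1.0))  # True
print(rectangles_overlap((0, 0), 0.0, (4.0, 0.0), 0.0, L=3.0, D=1.0))        # False
```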
III. RESULTS AND DISCUSSION

This section is organized as follows: We first introduce the ordered states observed in experiments and MC simulations in section III A. In section III B, we characterize experimentally the influence of the container walls and the driving conditions. Finally, in section III C, we quantify the ordering transition threshold for various aspect ratios and compare the state diagrams obtained experimentally, in simulations, and with DFT.

A. Ordered states

Figure 2 shows typical snapshots of the ordered states obtained experimentally. Short rods (a) tend to develop tetratic ordering with two alignment directions perpendicular to each other. Long rods (b) form uniaxial nematic ordering with only one preferred alignment direction. In both cases, the container promotes either homeotropic (perpendicular) or planar (parallel) anchoring of the rods close to the boundaries. To minimize the influence of boundary effects, we consider only those particles located in the central region of the container, as marked in Fig. 2. A quantitative justification of this region of interest (ROI) will be given in section III B. Sometimes during the experiments, especially at low global area fractions, we observe regions with very low number density of rods (almost empty regions). As we are interested in the bulk behavior, we discard those configurations in which the "empty regions" and the ROI overlap.

Figure 3 shows a direct comparison of the ordered states obtained in both experiments and MC simulations. The color-coded rod configurations are reconstructed from agitated granular rods in the ROI of the container (upper panels) and from MC simulations (middle panels) with periodic boundary conditions. In the tetratic state with fourfold rotational symmetry (left column), the orientational distribution function h(γ), where γ is the angle with respect to the director n̂, has two peaks, at γ = 0 and γ = π/2 (c). In contrast, in the uniaxial nematic state (right column), the elongated particles are oriented on average along the director, yielding only one peak at γ = 0 (f). The director n̂ is calculated as the eigenvector of the largest eigenvalue of the tensorial order parameter Q_αβ = ⟨2 w_α,i w_β,i − δ_αβ⟩. Here w_α,i is the α-th Cartesian coordinate of the unit vector ŵ_i = (cos θ_i, sin θ_i), δ_αβ is the Kronecker delta, and ⟨· · ·⟩ denotes an average over the rods [38,39]. To quantify the orientational order we measure

q₂ = ⟨cos(2γ)⟩ and q₄ = ⟨cos(4γ)⟩,

where q₂ and q₄ are the uniaxial and tetratic order parameters, respectively. In an isotropic state (no orientational order) q₂ and q₄ vanish. In a uniaxial nematic state q₂ > 0 and q₄ > 0. Finally, in a tetratic state q₂ = 0 and q₄ > 0. The states in Fig. 3 are classified accordingly.

B. Experiments: The influence of boundary and driving

Former experiments [32,34,40,41] and MC simulations [39] show that the container induces a preferential alignment of the particles close to the wall. In order to facilitate the investigation in the bulk, we first need to characterize this influence quantitatively. Following the ideas in [32], we calculate the wall-rod angular correlation function g₄(s) = ⟨cos[4(θ_t,i − θ_i(s))]⟩, where s is the shortest distance from the rod center to the container wall, θ_t,i is the tangential direction of the corresponding point on the wall (see inset in Fig. 4), and ⟨· · ·⟩ denotes an average over all the particles at a distance s. Either homeotropic or planar alignment of the particles with respect to the wall results in g₄ ∼ 1. In Fig. 4, g₄ is presented as a function of the rescaled distance to the wall s/R with a binning width of 0.03R. For all aspect ratios investigated, g₄ decays exponentially with s/R. To minimize the influence of the wall, we consider only those particles with s/R > 0.5 to be in the ROI. In this region, g₄ is always smaller than 0.06 and remains in a range comparable to the experimental uncertainties. We characterize the state of the system by measuring the area fraction Φ and h(γ) in circular regions with radius 3L inscribed in the ROI. Subsequently, we calculate q_k(Φ) from h(γ) accumulated over all the regions that share the same Φ.
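The director and order-parameter computation described above can be sketched in a few lines of numpy; this is our minimal illustration, not the analysis code used in the paper. Note that in a perfectly tetratic sample Q is degenerate, so the director from Q simply picks one of the two equivalent axes.

```python
import numpy as np

def order_parameters(theta):
    """Director and order parameters from rod orientations theta in [0, pi).
    Q_ab = <2 w_a w_b - delta_ab>; q2 = <cos 2g>, q4 = <cos 4g>, with g
    measured relative to the director."""
    w = np.column_stack([np.cos(theta), np.sin(theta)])
    Q = 2.0 * (w[:, :, None] * w[:, None, :]).mean(axis=0) - np.eye(2)
    evals, evecs = np.linalg.eigh(Q)
    n = evecs[:, np.argmax(evals)]                 # director
    gamma = theta - np.arctan2(n[1], n[0])         # angle w.r.t. director
    return n, np.cos(2 * gamma).mean(), np.cos(4 * gamma).mean()

# Perfect tetratic example: half the rods along x, half along y -> q2 ~ 0, q4 ~ 1
theta = np.concatenate([np.zeros(500), np.full(500, np.pi / 2)])
print(order_parameters(theta))
```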
Such agreements indicate that the details of how the rods are effectively 'thermalized' in our nonequilibrium system are not essential in determining the ordering transitions, providing us the opportunity to draw connections to the corresponding equilibrium systems. Accordingly, we accumulate the data over all Γ at f = 50 Hz for a more accurate characterization of the transition threshold Φ c . By fitting q 4 with a constant value in the isotropic region and with a straight line in the ordered state, we obtain Φ c as the intersection point which minimizes the standard error. Only data with sufficient statistics (i.e. error bar < 0.02) and q 4 < 0.3 are chosen for the fits. Moreover, the height of the container is found to play a minor role in determining the ordering transition: A variation of H/D from 4/3 to 2 leads to the same behavior of q k . Experiments with H/D = 2 for L/D = 3.3 and L/D = 10.0 give rise to slightly lower transition thresholds Φ c . More specifically, we find a decrease of 12 % for short rods and of 5 % for long rods, which is in both cases within the uncertainty of the fit. In addition, for a specific aspect ratio of L/D = 5.0, the same experiments have been performed for two different rod diameters. The results agree with each other within the error bar, suggesting that the mass of the rods does not play a dominating role in the ordering transition. C. Experiments vs. simulations and DFT Based on the above characterizations of the boundary influence, we compare the ordering transitions of granular rods in the ROI to the corresponding thermal system. Figure 6 shows the averaged order parameters obtained in both experiments and MC simulations (insets) for rods with L/D = 3.3 (a) and L/D = 10.0 (b). As discussed above, tetratic ordering occurs in the system of short rods. For long rods, both order parameters start to grow above Φ c , suggesting a gradual I-U transition. Qualitatively, the agreement between experiments and MC simulations on the behavior of both tetratic q 4 and uniaxial q 2 order parameters is remarkable for both aspect ratios. Such similarities indicate that the ordering of granular rods is governed by the geometric constrain of non-overlapping rods, which is the only interaction considered in the simulations. Quantitatively, the threshold Φ c = 0.66 ± 0.11 obtained experimentally for the I-T transition for rods with L/D = 3.3 agrees with the one 0.65 ± 0.02 obtained from MC simulations within the error. However, the experimentally obtained threshold Φ c = 0.79 ± 0.04 of the I-U transition for rods with L/D = 10.0 is larger than the one obtained for the corresponding thermal system, 0.44 ± 0.03. As L/D and Φ are the key parameters determining the state of the system, we compare the experimental (nonequilibrium) results with the MC (equilibrium) simulations in a state diagram shown in Fig. 7. In both systems short rods form a tetratic state and long rods an uniaxial state at sufficiently high area fractions. The aspect ratio at which the ordered state changes from tetratic to uniaxial nematic agrees quantitatively. It is found to be (L/D) T−U ≈ 7.3 ± 0.7 in both simulation and experiment [42]. This result agrees with previous simulations in which a tetratic phase was found for L/D = 7 and some evidence of uniaxial ordering for L/D = 9 [45]. The quantitative agreement of (L/D) T−U across systems in and out of thermodynamic equilibrium illustrates the universal aspect of the ordering transitions. 
On the other hand, the threshold Φ_c for agitated rods differs from that in MC simulations, indicating the nonuniversal aspects of the ordering transitions. First, the experimentally determined Φ_c exhibits a peak around (L/D)_T−U. In contrast, MC simulations show a monotonic decay with L/D. Second, there exists a systematic deviation of Φ_c in experiments compared to MC simulations as L/D grows. For the largest aspect ratio investigated experimentally, L/D = 13.3, a much higher area fraction is required for the uniaxial state to develop. This difference might be attributed to the following mechanisms. (i) The strong fluctuations in the nonequilibrium steady states of granular rods may lead to temporal disorder in a system that could in principle relax into an ordered state. (ii) Due to the dissipative rod-rod interactions, the tendency of clustering for granular rods is larger in comparison to MC simulations, especially for large L/D (compare panels (d) and (e) of Fig. 3). (iii) Finally, the container wall may frustrate the orientational order of the agitated rods in the entire cavity. Further experiments using containers with different sizes and shapes might shed light on this discrepancy. Concerning the fluctuations, it is known that the velocity distributions of agitated granular spheres are non-Gaussian and exhibit exponential tails, no matter whether the particles form clusters [43] or not [44]. As the dissipative nature does not depend on the shape of the particles, we expect similar behavior in our system. This feature sets agitated granular rods apart from thermally driven liquid crystals, and raises the question of how to define an effective 'thermal' energy scale for an athermal system. Experiments monitoring the mobility of individual granular rods with high-speed photography could help to shed light on the difference between thermal and athermal systems found here. In the inset of Fig. 7 we show the state diagram according to DFT together with the thresholds obtained from MC simulations in an extended region of L/D. It is similar to the one predicted by scaled particle theory [15]. DFT also predicts I-T transitions for small L/D and I-U transitions for large L/D. However, the tetratic state is stable only for L/D ≲ 2.2, most likely because only two-body correlations are considered in the theory [16,45]. Concerning the ordering transition threshold Φ_c, there is good agreement between DFT and MC simulations for L/D ≳ 7. For low aspect ratios, the deviations between both approaches are due to the mean-field character of the theory. In systems with L/D < (L/D)_T−U, DFT predicts a T-U transition at very high area fractions. Due to the limitations in both experiments and MC simulations, the region of very high area fractions, where the T-U transition may arise, has not been explored.

IV. CONCLUSIONS

To summarize, the ordering of agitated monodisperse granular rod monolayers is found to be determined predominantly by the aspect ratio of the rods and the area fraction, while the frequency and the strength of the agitation are not essential. This suggests that the detailed nature of energy injection into such a nonequilibrium system is not important, analogous to the role that temperature plays in equilibrium hard-rod models. In comparison to former experimental investigations on monolayer systems, we have focused on the bulk region of the container and found both tetratic and uniaxial nematic ordering for cylindrical rods.
This enables a direct comparison to the phase diagram of the corresponding equilibrium system. We find that, depending on whether the aspect ratio is smaller or larger than ≈7.3, a gradual isotropic-tetratic or isotropic-uniaxial nematic transition arises in both systems as the area fraction grows. This agreement with the predictions from equilibrium MC simulations considering only excluded volume interactions suggests some degree of universality for the ordering of rod-shaped particles across systems in and out of thermodynamic equilibrium. Nevertheless, we have also found a qualitative difference between both systems, namely the trend of the area fraction threshold at the ordering transitions. Further investigations will focus on the characterization of the area fraction and velocity fluctuations of the system, in order to find an effective 'thermal' energy scale for such an athermal system. Moreover, a comparison to molecular dynamics simulations [46] with tunable rod-rod dissipation energy could help to elucidate how fluctuations influence the ordering transition threshold.

Appendix A: Density functional theory

We use an Onsager-like DFT with Parsons-Lee rescaling. A similar DFT was previously used to analyze the state diagram of two-dimensional rods confined in a circular cavity [47]. We are interested in the behavior of fluid states in which the density is spatially homogeneous. Hence we can write, without loss of generality, the one-body density distribution as ρ(r⃗, γ) = ρ h(γ), where ρ is the number density and h(γ) is the orientational distribution function. Here γ is the angle with respect to the director. h(γ) is normalized such that ∫₀^π dγ h(γ) = 1. We split the free energy into two parts,

F = F_id + F_ex,

where F_id is the ideal gas part and F_ex is the excess part accounting for the excluded volume interactions. The ideal free energy per unit area A is given exactly by

βF_id/A = ρ ∫₀^π dγ h(γ) { ln[Λ ρ h(γ)] − 1 },

where β = 1/k_B T, with k_B the Boltzmann constant and T the absolute temperature. Λ is the (irrelevant) thermal volume that we set to one. The excess part is approximated by the Parsons-Lee-rescaled Onsager form

βF_ex/A = ρ βψ_ex(φ)/(4LD) ∫₀^π ∫₀^π dγ₁ dγ₂ h(γ₁) h(γ₂) v_exc(γ₁ − γ₂). (A5)

Here v_exc(γ₁₂) is the excluded area between two rectangles with length L, width D, and relative orientation γ₁₂:

v_exc(γ₁₂) = (L² + D²)|sin γ₁₂| + 2LD(1 + |cos γ₁₂|), (A6)

and ψ_ex(φ) is the excess free energy per particle of a reference system of hard disks at the same area fraction φ = ρLD as our system of hard rectangles. The diameter of the disks is selected such that both disks and rectangles have the same area. Following Baus and Colot [48] we approximate ψ_ex by

βψ_ex(φ) = (1 + c₂) φ/(1 − φ) + (c₂ − 1) ln(1 − φ), (A7)

with c₂ = 7/3 − 4√3/π ≈ 0.1280. Eq. (A5) recovers the Onsager approximation in the low density limit. Finally, the grand potential is given by

Ω[ρ, h] = F[ρ, h] − μN,

with μ the chemical potential. We minimize Ω with respect to ρ and h(γ) in order to find the equilibrium states. We use a standard conjugate gradient method to minimize the functional. We use a truncated Fourier expansion to describe h(γ). We truncate the expansion such that the absolute value of the last coefficient in the expansion is smaller than 10⁻⁷.
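To make the appendix concrete, the sketch below solves for h(γ) in the plain Onsager limit, i.e., without the Parsons-Lee rescaling of Eq. (A5), by Picard iteration of the stationarity condition h(γ) ∝ exp[−ρ ∫ dγ′ v_exc(γ − γ′) h(γ′)]. The paper instead minimizes the full functional with a conjugate-gradient method and a Fourier representation of h; all names and parameter values here are ours.

```python
import numpy as np

def v_exc(g, L, D):
    """Excluded area of two hard rectangles at relative angle g [Eq. (A6)]."""
    return (L**2 + D**2) * np.abs(np.sin(g)) + 2.0 * L * D * (1.0 + np.abs(np.cos(g)))

def solve_h(rho, L, D, n=180, n_iter=3000, mix=0.05):
    """Picard iteration for the orientational distribution h(gamma) of the
    plain Onsager functional (no Parsons-Lee rescaling)."""
    g = (np.arange(n) + 0.5) * np.pi / n
    dg = np.pi / n
    K = v_exc(g[:, None] - g[None, :], L, D)       # interaction kernel
    h = 1.0 / np.pi + 0.05 * np.cos(2.0 * g)       # weakly anisotropic seed
    for _ in range(n_iter):
        h_new = np.exp(-rho * (K @ h) * dg)
        h_new /= h_new.sum() * dg                  # enforce normalization
        h = (1.0 - mix) * h + mix * h_new
    return g, h, dg

g, h, dg = solve_h(rho=0.08, L=10.0, D=1.0)
print("q2 =", np.sum(np.cos(2 * g) * h) * dg)      # > 0 in the nematic state
```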
Factorization of two-particle distributions in AMPT simulations of Pb-Pb collisions at $\mathbf{\sqrt{s_{\text{NN}}}} $ = 5.02 TeV

The flow ansatz states that the single-particle distribution of a given event can be described in terms of the complex flow coefficients $V_n$. Multi-particle distributions can therefore be expressed as products of these single-particle coefficients; a property commonly referred to as factorization. The amplitudes and phases of the coefficients fluctuate from event to event, possibly breaking the factorization assumption for event-sample averaged multi-particle distributions. Furthermore, non-flow effects such as di-jets may also break the factorization assumption. The factorization breaking with respect to pseudorapidity $\eta$ provides insights into the fluctuations of the initial conditions of heavy ion collisions and can simultaneously be used to identify regions of the phase space which exhibit non-flow effects. These proceedings present a method to perform a factorization of the two-particle Fourier coefficients $V_{n\Delta}(\eta_a, \eta_b)$ which is largely independent of detector effects. AMPT model calculations of Pb-Pb collisions at $\sqrt{s_{\text{NN}}} = 5.02$ TeV are used to identify the smallest $|\Delta\eta|$-gap necessary for the factorization assumption to hold. Furthermore, a possible $\Delta\eta$-dependent decorrelation effect in the simulated data is quantified using the empirical parameter $F_2^\eta$. The decorrelation effect observed in the AMPT calculations is compared to results by the CMS collaboration for Pb-Pb collisions at $\sqrt{s_{\text{NN}}} = 2.76$ TeV.

Introduction

The Fourier coefficients of an event-sample averaged two-particle distribution are commonly described as

$V_{n\Delta}(\eta_a, \eta_b) = \langle v_n(\eta_a)\, v_n(\eta_b)\, e^{in(\psi_n(\eta_a) - \psi_n(\eta_b))} \rangle$, (2)

where $v_n$ are the flow coefficients and $\psi_n$ are the symmetry planes. Either of these two quantities may fluctuate from event to event due to varying initial conditions, thereby breaking the factorization of the sample average even for simulations of ideal hydrodynamics [1]. By studying the factorization behavior of $V_{n\Delta}(\eta_a, \eta_b)$ one can therefore infer the properties of such fluctuations. Flow-related analyses commonly assume that non-flow contributions decrease with an increasing $\eta$-separation of the particles. In order to minimize the impact of non-flow effects on the measurement, a minimal longitudinal separation between particles, referred to as a $|\Delta\eta|$-gap, is therefore often applied. Under the assumption that non-flow effects do not factorize identically to anisotropic flow, it is possible to identify regions of the phase space where non-flow effects become negligible [2,3]. Whether Eq. (2) may be written in a factorized form depends on the correlations between the four quantities $v_n(\eta_a)$, $v_n(\eta_b)$, $\psi_n(\eta_a)$, and $\psi_n(\eta_b)$. These proceedings focus on the effect of symmetry-plane decorrelations. The phases at $\eta_a$ and $\eta_b$ are commonly assumed to be correlated with each other through a common symmetry plane angle $\Psi_n$, but may fluctuate from event to event. The fluctuations are equally likely to occur in either direction, which ensures that $V_{n\Delta}(\eta_a, \eta_b)$ is a real quantity. The observed average is attenuated due to these fluctuations, which are therefore also referred to as decorrelation effects. If the fluctuations of $\psi_n(\eta_a)$ and $\psi_n(\eta_b)$ in Eq. (2) exhibit a dependence on $\Delta\eta = \eta_a - \eta_b$, they may cause a factorization breaking of $V_{n\Delta}(\eta_a, \eta_b)$.
Observable definition

The results presented here are based exclusively on Monte-Carlo (MC) simulations and would therefore not require considering detector effects on the observables. However, in order to present a generally applicable method, the analysis presented here is constructed around an observable which is largely independent of uncorrelated detector deficiencies. At its core, this analysis is based on the single- and two-particle distributions averaged over many events. The single-particle distribution $\hat{\rho}_1$ is given by

$\hat{\rho}_1(\eta, \varphi) = \mathrm{d}^2 N / (\mathrm{d}\eta\, \mathrm{d}\varphi)$,

where $N$ is the number of observed charged particles and $\varphi$ is the azimuthal coordinate. In experimental measurements, any azimuthal anisotropies in $\hat{\rho}_1$ are caused exclusively by detector effects. The distribution of particle pairs $\hat{\rho}_2$ is given by

$\hat{\rho}_2(\eta_a, \eta_b, \varphi_a, \varphi_b) = \mathrm{d}^4 N_{\text{pairs}} / (\mathrm{d}\eta_a\, \mathrm{d}\eta_b\, \mathrm{d}\varphi_a\, \mathrm{d}\varphi_b)$,

where $N_{\text{pairs}}$ denotes the number of particle pairs observed at $\eta_a, \eta_b, \varphi_a, \varphi_b$. The two distributions $\hat{\rho}_1$ and $\hat{\rho}_2$ can be used to construct the normalized two-particle density

$r_2 = \hat{\rho}_2(\eta_a, \eta_b, \varphi_a, \varphi_b) / [\hat{\rho}_1(\eta_a, \varphi_a)\, \hat{\rho}_1(\eta_b, \varphi_b)]$,

which is largely independent of uncorrelated detector effects [4,5]. The non-zero two-particle Fourier coefficients can then be computed from the decomposition of $r_2$ with respect to $\Delta\varphi = \varphi_a - \varphi_b$,

$V_{n\Delta}(\eta_a, \eta_b) = \langle \cos(n\Delta\varphi) \rangle_{r_2}$.

Factorization

The functional form of $V_{n\Delta}(\eta_a, \eta_b)$ in the $(\eta_a, \eta_b)$-plane is assessed with two different models. Both models are based on the flow ansatz, i.e., individual events can be described in terms of single-particle distributions. Neither model attempts to describe any non-flow processes such as di-jets or weak decays.

Figure 1 (caption): Schematic representation of Model A. Each element of $\bar{v}_n(\eta)$ affects several elements of $V_{n\Delta}(\eta_a, \eta_b)$.

Purely factorizing model (Model A)

This model assumes that the averaged two-particle coefficients may be described by a product,

$V_{n\Delta}(\eta_a, \eta_b) = \bar{v}_n^A(\eta_a)\, \bar{v}_n^A(\eta_b)$. (9)

If $\psi_n(\eta_a)$ is always equal to $\psi_n(\eta_b)$ within each event and if the fluctuations of $v_n$ are uncorrelated along $\eta$, Eq. (9) holds and $\bar{v}_n^A(\eta)$ is the mean value of the event-by-event flow coefficients $v_n(\eta)$. The degree to which the measured $V_{n\Delta}(\eta_a, \eta_b)$ is compatible with Model A places a limit on the size of factorization-breaking fluctuations of the flow coefficients, event-plane decorrelations, and non-flow effects. The flow coefficients $\bar{v}_n^A(\eta)$ are fit to the observed $V_{n\Delta}(\eta_a, \eta_b)$. The latter is computed as a histogram of finite bin size in $\eta_a$ and $\eta_b$. Eq. (9) can thus be seen as a non-linear equation system,

$V_{n\Delta}(\eta_a^i, \eta_b^j) = \bar{v}_n^A(\eta^i)\, \bar{v}_n^A(\eta^j)$, (10)

where $i$ and $j$ are the bin indices along $\eta_a$ and $\eta_b$, respectively. Graphically, Eq. (10) can be represented as shown in Fig. 1, where $V_{n\Delta}(\eta_a, \eta_b)$ is a two-dimensional matrix and $\bar{v}_n^A(\eta)$ a one-dimensional vector. Solving Eq. (10) for all points in $V_{n\Delta}(\eta_a, \eta_b)$ yields the "vector" $\bar{v}_n^A(\eta)$ which best describes the observed data. Each value in $\bar{v}_n^A$ affects several points in the $(\eta_a, \eta_b)$-plane.

Long-range decorrelating model (Model B)

The second model presented here was suggested by the CMS collaboration, albeit based on a vastly different analysis method [6]. The model is given by

$V_{n\Delta}(\eta_a, \eta_b) = \bar{v}_n^B(\eta_a)\, \bar{v}_n^B(\eta_b)\, e^{-F_n^\eta |\eta_a - \eta_b|}$, (11)

where the parameter $F_n^\eta$ is a measure of a $\Delta\eta = \eta_a - \eta_b$ dependent factorization breaking. Despite being empirical, $F_n^\eta$ provides insights into longitudinal fluctuations during the early stages of the collision [7,8]. It should be noted that $\bar{v}_n^A(\eta) \neq \bar{v}_n^B(\eta)$ unless $F_n^\eta = 0$. Analogously to the previous model, the flow coefficients $\bar{v}_n^B(\eta)$ and the constant $F_n^\eta$ are found by solving

$V_{n\Delta}(\eta_a^i, \eta_b^j) = \bar{v}_n^B(\eta^i)\, \bar{v}_n^B(\eta^j)\, e^{-F_n^\eta |\eta^i - \eta^j|}$. (12)

The graphical representation of Eq. (12) is depicted in Fig. 2. The exponential factor causes an attenuation of $V_{n\Delta}(\eta_a, \eta_b)$ along $|\Delta\eta|$ and is constant along $\eta_a + \eta_b$.
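As a concrete illustration of the two-particle coefficients defined above, the sketch below computes $V_{n\Delta}(\eta_a, \eta_b)$ as a direct pair average of $\cos(n\Delta\varphi)$ over one event's tracks. The full $r_2$-based construction additionally divides out the single-particle distributions to suppress detector effects, which is omitted here; all names and binning choices are ours.

```python
import numpy as np

def vn_delta(eta, phi, n=2, eta_edges=None):
    """V_nDelta(eta_a, eta_b) = <cos n(phi_a - phi_b)> over all particle
    pairs, binned in (eta_a, eta_b)."""
    if eta_edges is None:
        eta_edges = np.linspace(-2.4, 2.4, 13)
    nb = len(eta_edges) - 1
    idx = np.digitize(eta, eta_edges) - 1
    V = np.zeros((nb, nb)); Np = np.zeros((nb, nb))
    for a in range(len(eta)):
        for b in range(a + 1, len(eta)):
            i, j = idx[a], idx[b]
            if 0 <= i < nb and 0 <= j < nb:
                c = np.cos(n * (phi[a] - phi[b]))
                V[i, j] += c; Np[i, j] += 1
                V[j, i] += c; Np[j, i] += 1   # keep the matrix symmetric
    return np.divide(V, Np, out=np.zeros_like(V), where=Np > 0)

# Usage on random (flow-free) tracks: the coefficients fluctuate around zero.
eta = np.random.uniform(-2.4, 2.4, 300)
phi = np.random.uniform(0.0, 2.0 * np.pi, 300)
print(vn_delta(eta, phi).shape)  # (12, 12)
```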
Particle pairs with a small separation in ∆η are commonly excluded from flow analyses as this region of the phases space is known to exhibit large non-flow contributions. Furthermore, an experimentally measured V n∆ (η a , η b ) may exhibit acceptance gaps if various detector systems are combined in order to maximize the η coverage. Therefore, the procedure to numerically solve the equation systems in Eq. (10) and Eq. (12) needs to be able to be performed on arbitrary subsets of the (η a , η b )-plane. A minimization of a weighted sum of squares fulfills this requirement. The weighted sum S is given by where N bin a (N bin b ) represents the number of bins in the η a (η b ) and M represent either Model A or Model B as defined in Eq. (10) or Eq. (12) respectively. The uncertainty associated with each point of V n∆ (η i a , η j b ) is given by σ n∆ (η i a , η j b ). Factorization ratio The agreement of V n∆ (η a , η b ) with either of the two models is assessed by means of a factorization ratio f n (η a , η b ) defined by where M (η a , η b ) represents either model with the parameters fitted to the observed V n∆ (η a , η b ). Note that f n (η a , η b ) can be computed for the entire (η a , η b )-plane even if S was only minimized for a subset of it. Figure 3 presents V 2∆ (η a , η b ) obtained for collisions of 20-40% centrality for AMPT calculations of Pb-Pb collisions at √ s NN = 5.02 TeV with string melting enabled. Every two-particle Fourier coefficient in the (η a , η b )-plane is computed independently with no a priori assumption about the event-by-event fluctuations. Figure 4 (left) presentsv A n (η) obtained from a fit to V 2∆ (η a , η b ) without the requirement of a |∆η|-gap. Therefore, the factorization procedure included the short-range ∆η region of the (η a , η b )-plane which exhibits non-negligible non-flow contributions. Figure 4 (right) shows the factorization ratio for the flow coefficients from the left panel. The short-range non-flow does not exhibit the same factorization behavior as the long-range regions. This caused the fitted solution to neither accurately describe the short-range nor the long range regions of the (η a , η b )-plane highlighting the need of a |∆η|-gap. Figure 5 (left) presents the factorization ratio for Model A if only particle pairs with |∆η| > 3 are taken into account. By excluding the short-range pairs, good agreement to Model A is observed in the long-range region. However, the solution found for long-range pairs is not able to describe the short-range region of V 2∆ (η a , η b ) further corroborating that non-flow effects in this region of the phase space do not factorize identically to the long-range anisotropic flow. In order to determine the minimal |∆η|-gap necessary to exclude pairs originating from non-flow processes from the fitting procedure, a projection of the factorization ratios for Model A onto 1.20 the ∆η-axis is performed. The results for all analyzed centralities and for a |∆η|-gap of 3 are depicted in Fig. 6. A centrality dependence for the factorization breaking in the short-range region is observed. The deviation from unity is most pronounced for the most central events, decreases to a minimum for the 20-40% centrality class and increases for more peripheral events thereafter. This centrality dependence may originate from the centrality dependence ofv A 2 (η). Events of all centralities exhibit good agreement to Model A in the long-range region for ∆η > 3. 
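A minimal sketch of the gap-masked minimization of the weighted sum S is given below, here for Model B. Since the explicit form of Eq. (11) is not reproduced above, the code assumes an exponential attenuation exp(−F|η_a−η_b|), which matches the stated properties (attenuation along |∆η|, constant along η_a+η_b); the exact convention in the proceedings may differ by constant factors. The synthetic input and uncertainties are again placeholders.

```python
# Gap-masked weighted least squares for Model B and the factorization ratio f_n.
import numpy as np
from scipy.optimize import least_squares

eta = np.linspace(-4, 4, 16)
EA, EB = np.meshgrid(eta, eta, indexing="ij")
v_true, F_true = 0.08 * np.exp(-eta**2 / 30), 0.02
rng = np.random.default_rng(1)
V_meas = np.outer(v_true, v_true) * np.exp(-F_true * np.abs(EA - EB))
V_meas += rng.normal(0, 1e-6, V_meas.shape)
sigma = np.full_like(V_meas, 1e-6)
gap = np.abs(EA - EB) > 3.0                 # only long-range pairs enter the sum S

def model_b(p):
    v, F = p[:-1], p[-1]
    return np.outer(v, v) * np.exp(-F * np.abs(EA - EB))

def weighted_residuals(p):                  # Eq.-(13)-style weighted residuals
    return (model_b(p) - V_meas)[gap] / sigma[gap]

fit = least_squares(weighted_residuals, x0=np.append(np.full(16, 0.05), 0.0))
v_hat, F_hat = fit.x[:-1], fit.x[-1]
f_ratio = V_meas / model_b(fit.x)           # f_n over the whole (eta_a, eta_b)-plane
print(round(float(F_hat), 4), float(np.abs(f_ratio - 1).max()).round(4))
```

The mask restricts the fit to the long-range region exactly as described above, while the ratio f_n can still be evaluated everywhere, including the excluded short-range bins.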
A further increase of the |∆η|-gap does not significantly alter the extracted flow coefficients. A decrease of the |∆η|-gap includes short-range regions of the phase space which are incompatible with the solution found in the long-range region. Factorization procedures with a smaller |∆η|-gap therefore decrease the fit quality in the long-range region. Analyses which implicitly rely on the factorization assumption should thus apply a minimal longitudinal separation of ∆η min ≈ 3 for the kinematic region studied here. The method presented here can be used to improve the precision of similar previously published results, which found that the factorization assumption holds to within 10% for Pb-Pb collisions at √ s NN = 2.76 TeV for ∆η > 2 [9]. Performing the factorization using Model B allows for the measurement of the decorrelation parameter F η 2 . The decorrelation parameter computed with the method presented here is shown in Fig. 7 for various |∆η|-gaps and centralities. The data point for the 0-5% centrality bin is removed from the figure due to poor statistical precision. Results published by the CMS collaboration for Pb-Pb collisions at √ s NN = 2.76 TeV are included for comparison. The analysis used by the CMS collaboration corresponds to a |∆η|-gap of approximately 2.9 in this analysis. The centrality dependence observed for Pb-Pb collisions is reproduced by the AMPT simulations at 5.02 TeV, supporting previous model comparisons [8] and studies of the energy dependence of F η 2 [10]. The centrality dependence of the decorrelation parameter is also found to reflect the centrality dependence of the short-range factorization breaking in Fig. 5. However, a quantification of possible non-flow contributions to the centrality dependence of F η 2 requires further research. Summary Depending on the nature of the event-by-event fluctuations, the two-particle Fourier coefficients V n∆ (η a , η b ) may assume different functional shapes. In these proceedings two distinct models were studied: the first model is a purely factorizing one, as is implicitly assumed in most flow analyses; the second model is an extension of the former and allows for a ∆η-dependent attenuation of V n∆ (η a , η b ). AMPT calculations of Pb-Pb collisions at √ s NN = 5.02 TeV were used as the basis for the presented results. The first model was used to estimate the longitudinal extent of short-range non-flow correlations under the assumption that such effects are not well described by the factorized solution found from long-range particle pairs. For the studied kinematic region a minimal ∆η-separation of ∆η min ≈ 3 is required for the factorization assumption to hold in the long-range region. The second model was used to determine ∆η-dependent decorrelation effects, as expected from event-plane decorrelations. The empirical decorrelation parameter F η 2 is qualitatively compatible with measurements by the CMS collaboration at √ s NN = 2.76 TeV. This confirms previous studies suggesting that AMPT is able to reproduce the observed decorrelation effects and that these effects exhibit only a weak dependence on the center-of-mass energy [8,10]. The method presented here offers a new way to investigate possible non-flow contributions to the observed decorrelation effects and will help to better understand the three-dimensional initial conditions of heavy ion collisions in the future.
2019-02-18T00:05:38.407Z
2018-07-13T00:00:00.000
{ "year": 2018, "sha1": "9975d6dca501b70f8949bf7e5db7a1a242cc4f1e", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1070/1/012027", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "ad8b8159bd4b42f478c113b3cc6ea290058d30b2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
203951346
pes2o/s2orc
v3-fos-license
Green function for linearized Navier-Stokes around a boundary shear layer profile for long wavelengths This paper is the continuation of a program, initiated in Grenier-Nguyen [8,9], to derive pointwise estimates on the Green function of Orr Sommerfeld equations. In this paper we focus on long wavelength perturbations, more precisely horizontal wavenumbers $\alpha$ of order $\nu^{1/4}$, which correspond to the lower boundary of the instability area for monotonic profiles. Introduction We are interested in the study of linearized Navier Stokes around a given fixed profile U s = (U (z), 0) in the inviscid limit ν → 0. Namely, we consider the following set of equations ∂ t v + U s · ∇v + v · ∇U s + ∇p − ν∆v = 0, (1.1) where 0 < ν ≪ 1, posed on the half plane x ∈ R, z > 0, with the no-slip boundary conditions v = 0 on z = 0. (1. 3) The linear problem (1.1)-(1.3) is a very classical problem that has led to a huge physical and mathematical literature, focussing in particular on the linear stability, on the dispersion relation, on the study of eigenvalues and eigenmodes, and on the onset of nonlinear instabilities and turbulence [1,15]. We also mention several efforts in proving linear to nonlinear stability and instability around shear flows in the small viscosity limit [2,3,4,5,10]. Throughout this paper, we will assume that U (z) is holomorphic near z = 0, that U (0) = 0, that U ′ (0) > 0, that U (z) > 0 for any z > 0, and that U converges exponentially fast at ∞, to some positive constant U + as well as all its derivatives (which converge to 0). Note in particular that this class of profiles includes for instance the exponential profile where β > 0. As such a profile has no inflection point, according to Rayleigh's inflection criterium, it is stable with respect to linearized Euler equations. However, strikingly, a small viscosity has a destabilizing effect. That is, all such shear profiles are unstable for large enough Reynolds numbers ν −1 [6,7]. More precisely, for such shear flows there exist lower and upper marginal stability branches α low (ν) ∼ ν 1/4 and α up (ν) ∼ ν 1/6 , so that whenever the horizontal wave number α belongs to [α low (ν), α up (ν)], the linearized Navier-Stokes equations about this shear profile have an eigenfunction and a corresponding eigenvalue λ ν with (1.4) Heisenberg [11,12], then Tollmien and C. C. Lin [13,14] were among the first physicists to use asymptotic expansions to study this spectral instability. We refer to Drazin and Reid [1] and Schlichting [15] for a complete account of the physical literature on the subject, and to [6,7] for a complete mathematical proof of this instability. We then take the Fourier transform in the tangential variables with Fourier variable α and the Laplace transform in time with dual variable −iαc, following the traditional notations. In other words we study solutions of linearized Navier Stokes equations which are of the form This leads to the classical Orr-Sommerfeld equation, together with the boundary conditions and where ∆ α = ∂ 2 z − α 2 . The aim of this paper is to give bounds on the Green function of the Orr Sommerfeld equation when α is of order ν 1/4 and c is of the same order, which corresponds to one of the boundaries of the instability area. This restricted study appears to be sufficient to construct linear and nonlinear instabilities for the full nonlinear Navier Stokes equations [8,10]. 
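Since the whole construction pivots on the critical layer z_c with U(z_c) = c, it may help to see how easily it can be located numerically. The exponential profile formula is elided in the text above, so the sketch below assumes the representative choice U(z) = U₊(1 − e^{−βz}), which satisfies all the stated hypotheses (U(0) = 0, U′(0) = U₊β > 0, exponential convergence to U₊); the value of c is illustrative, of order ν^{1/4}.

```python
# Complex Newton iteration for the critical layer U(z_c) = c (assumed profile).
import numpy as np

U_plus, beta = 1.0, 2.0
nu = 1e-6
c = 0.3 * nu**0.25 * (1 + 0.5j)          # c of order nu^{1/4}, illustrative

U = lambda z: U_plus * (1 - np.exp(-beta * z))
dU = lambda z: U_plus * beta * np.exp(-beta * z)

z = c / dU(0.0)                          # first-order guess, z_c = O(nu^{1/4})
for _ in range(50):
    z -= (U(z) - c) / dU(z)              # Newton step on the holomorphic map
print(z, abs(U(z) - c))                  # z_c is complex and close to 0
```

For this particular profile z_c = −log(1 − c/U₊)/β in closed form, which provides a direct check of the iteration.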
To construct the Green function we first construct two approximate solutions φ app s,± with a "slow behavior", and two approximate solutions φ app f,± with a "fast behavior" (the "-" solutions going to 0 as z goes to +∞). These approximate solutions (and in fact exact solutions) have already been constructed in [7]. In this paper we propose a much simplified and much shorter construction of these approximate solutions, making the current paper self contained. The slow approximate solutions will be solutions of the Rayleigh equation with boundary condition φ(0) = 0. They will be constructed by perturbation of the case α = 0 where the Rayleigh equation degenerates in The main observation is that φ 1,0 = U − c is a particular of (1.9). Let φ 2,0 be the other solution of this equation such that the Wronskian W [φ 1,0 , φ 2,0 ] equals 1. We will construct approximate solutions to the Orr Sommerfeld equation which satisfy The "fast approximate solutions" will emerge in the balance between −ε∆ 2 α φ and (U − c)∆ α φ. Keeping in mind that α is small, they will be constructed starting from solutions of the simplified equation As c is small, and as U ′ (0) = 0, there exists a unique z c ∈ C near 0 such that Such a z c is called a "critical layer" in the physics literature. It turns out that all the instability is driven by what happens near this critical layer. Near z c , equation (1.12) is a perturbation of the Airy equation The fast approximate solutions are thus constructed as perturbations of second primitives of classical Airy functions. This construction will be detailed in Section 2, where we will construct two approximate solutions φ app f,± to Orr Sommerfeld equation, with a fast behavior and with and where Ai(1, .) and Ai(2, .) are the first and the second primitives of the classical Airy function Ai. We now introduce the Tietjens function, defined by Tietjens function is a classical special function in physics, precisely known and tabulated. Then In this paper we will bound the Green function of Orr Sommerfeld equations. More precisely, for each fixed α ∈ R + and c ∈ C, we let G α,c (x, z) be the corresponding Green kernel of the Orr Sommerfeld problem. By definition, for each x ∈ R and c ∈ C, G α,c (x, z) solves on z ≥ 0, together with the boundary conditions: That is, for z = x, the Green function G α,c (x, z) solves the homogenous Orr-Sommerfeld equations, together with the following jump conditions across z = x: Here, the jump [f (z)] |z=x across z = x is defined to be the value of the right limit subtracted by that of the left limit as z → x. Let taking the square root with a positive real part. Note that The main result in this paper is as follows. Let G α,c (x, z) be the Green function of the Orr-Sommerfeld problem. Then, there exists a smooth function P (x) and there are universal positive constants θ 0 , C 0 so that uniformly for all x, z ≥ 0. Similarly, Let us comment (1.19). We have Note that both terms under the brackets are of order O(1), since γc is of order O(1). The Wronskian vanishes if there exists a linear combination of φ app s,− and φ app f,− which satisfies the boundary conditions, namely if there exists an approximate eigenmode of Orr α,c (recalling that φ app s,− and φ app f,− are only approximate solutions of Orr α,c ). We have to remain away from such approximate modes, since nearby there exists true eigenmodes where Orr α,c is no longer invertible. Note that σ 1 may be taken arbitrarily small. 
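The fast approximate solutions above are built from the first and second primitives Ai(1, ·) and Ai(2, ·) of the classical Airy function, so a quick numerical realisation of these primitives is useful. The sketch below assumes the normalization in which both primitives vanish as the argument tends to +∞ (a common convention; the paper's normalization may differ by additive constants).

```python
# Numerical primitives of the Airy function Ai (assumed normalization).
import numpy as np
from scipy.special import airy
from scipy.integrate import quad

Ai = lambda t: airy(t)[0]
Ai1 = lambda x: -quad(Ai, x, 40.0)[0]              # d/dx Ai(1, x) = Ai(x)
Ai2 = lambda x: -quad(lambda t: Ai1(t), x, 40.0)[0]  # d/dx Ai(2, x) = Ai(1, x)

for x in (0.0, 1.0, 2.0):
    print(x, Ai(x), Ai1(x), Ai2(x))
# Consistency check: Ai(1, 0) = -1/3 in this normalization,
# since the integral of Ai over (0, inf) equals 1/3.
```

The upper limit 40 stands in for +∞, which is harmless because Ai decays super-exponentially for positive arguments.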
Note that in this Theorem we are at a distance O(ν 1/4 ) from a simple eigenmode ψ 0 . It is therefore expected that Orr α,c is of order O(ν −1/4 ) and that The Airy operator In this section, we construct two approximate solutions of Orr Sommerfeld equation, called φ f,± = φ app f,± , with fast increasing or decreasing behaviors. For these approximate solutions, it turns out that the zeroth order term U ′′ φ f,± may be neglected. Moreover, as α is small, α 2 terms may also be neglected. This simplifies the Orr Sommerfeld operator in the so called modified Airy operator defined by Note that The main difficulty lies in the fact that the "phase" U (z) − c almost vanishes when z is close to ℜz c , hence we have to distinguish between two cases: z ≤ σ 1 and z ≥ σ 1 for some small σ 1 . The first case is handled through a Langer transformation, which reduces (2.1) to the classical Airy equation. The second case may be treated using a classical WKB expansion. We will prove the following proposition. To prove this proposition we construct ψ app for z < z c in Section 2.2 using the Langer's transformation introduced in (2.1) and for z > z c in Section 2.3 using the classical WKB method. We then match these two constructions in Section 2.4, integrate them twice in Section 2.5 and detail the Green function of Airy operator in Section 2.7. A primer on Langer's transformation The first step is to construct approximate solutions to Aψ = 0, starting from solutions of the genuine Airy equation εψ ′′ = yψ, thanks to the so called Langer's transformation that we will now detail. Let B(x) and C(x) be two smooth functions. In 1931, Langer introduced the following method to build approximate solutions to the varying coefficient Airy type equation starting from solutions to the similar Airy type equation We assume that both B and C vanish at some point x 0 , and that their derivatives at x 0 does not vanish. Let ψ be any solution to (2.9). Let f and g be two smooth functions, to be chosen later. Then Note that f may be seen as a modulation of amplitude and g as a change of phase. If we choose g such that and f such that Hence φ may be considered as an approximate solution to −εφ ′′ +C(x)φ = 0. Note that (2.11) may be solved, yielding Now let B 1 be the primitive of √ B which vanishes at x 0 and let C 1 be the primitive of √ C which vanishes at x 0 . Then (2.10) may be rewritten as Note that both B 1 and C 1 behave like C 0 (x − x 0 ) 3/2 near x 0 . Hence (2.13) may be solved for x near x 0 . This defines a smooth function g which satisfies Airy critical points In this section we use Langer's transformation to construct approximate solutions to Aψ = 0 starting from solutions of the genuine Airy equation. Let c be of order ν 1/4 . Then there exists an unique z c ∈ C near 0 such that U (z c ) = c. Note that z c is also of order ν 1/4 since U ′ (0) = 0. Expanding U near z c at first order we get the approximate equation which is the classical Airy equation. Let us assume that ℜU ′ (z c ) > 0, the opposite case being similar. A first solution is given by where Ai is the classical Airy function, solution of Ai ′′ = xAi, and where Note that since α is of order ν 1/4 , γ is of order ν −1/4 and that arg(γ) = +π/6 + O(ν −1/4 ). Moreover, as x goes to ±∞, with argument iπ/6, In particular, Ai ′ (x)/Ai(x) ∼ −x 1/2 for large x. Hence, as γ(z − z c ) goes to infinity, A(z) goes to 0 and More precisely, we get An independent solution is given by with Bi(·) being the other classical Airy function. 
In this case |Ci(γ(z −z c ))| goes to +∞ as z − z c goes to +∞, with a plus instead of the minus in the corresponding formula (2.15). We now use Langer's transformation introduced in the previous section. As U (z) and U ′ (z c )(z −z c ) vanish at the same point with the same derivative at that point, we use Langer's transformation with Then, g(z) is locally well defined, for 0 ≤ z ≤ σ 1 for some positive σ 1 . Moreover g(z c ) = z c and g ′ (z c ) = 1. Now are two approximate solutions of Aφ = 0 in the sense that AÃi = −εf ′′ Ai(γg(z)), ACi = −εf ′′ Ci(γg(z)). Away form the critical layer If z − z c is small then g is well defined, precisely on [0, σ 1 ] for some small σ 1 as in the previous section. However, if z > σ 1 , then Langer's transformation is no longer useful, and we may directly use a WKB expansion. We look for solutions ψ of the form Hence we look for θ such that As we are only interested in approximate solutions, we solve (2.18) in an approximate way, and look for θ of the form for some arbitrarily large M . The θ i may be constructed by iteration, starting from θ ′ 0 = ± U (z) − c. If we keep the positive real part to the square root, the − choice leads to a solution going to 0 at +∞ and the + choice to a solution going to +∞ at +∞. This construction gives a solution ψ app f,± such that where N can be chosen arbitrarily large provided M is sufficiently large. Note that More generally, for any z ≥ σ 1 and any k, and similarly for ψ app f,+ . Matching at z = z c It remains to match at z = z c the solutions constructed with the WKB method for z ≥ σ 1 with the solutions construct thanks to Langer's transformation for z ≤ σ 1 . We look for constants a and b such that and ψ app f,− /ψ app f,− (σ 1 ) and their first derivatives match at z = σ 1 , which leads to (2.15) and (2.19) to get a ∼ 1 and b = O(µ f (σ 1 ) −1 ). We then multiply a and b by ψ app f,− (σ 1 ) to get an extension of ψ app f,− from z > σ 1 to the whole line. The construction is similar to extend ψ app f,+ . From A to Airy We have now constructed global approximate solutions, that we again call ψ app f,± . It remains to solve Let us focus on the − case, the other being similar. For z ≥ σ 1 , we look for solutions φ app f,± of the form Hence h may be expanded as a series in ε 1/2 ; namely, for some arbitrarily large M . The leading term h 0 (x) is defined by while the other terms are computed similarly. We may thus write a complete WKB expansion for φ app f,± . In particular For z < σ 1 , we integrate once (2.21) which gives Now ψ app f,− is a combination ofÃi andCi for z < σ 1 . Let us focus on theÃi term. We have to study Ai(γg(t))dt. Let s = γg(t). Then ds = γg ′ (t)dt, hence As γ is large, the integral term is equivalent to where we introduced the primitive Ai(1, x) of Ai. This leads to ( Green function for Airy We will now construct an approximate Green function for the Airy operator. We first construct an approximate Green function for A. Let where W Ai is the Wronskian of ψ app ± (x). Note that this Wronskian is independent of x and of order In particular, we have therefore G Ai is rapidly decreasing in y on both sides of x, within scales of order ν 1/4 . By construction, We then integrate twice G Ai in y to get an approximate Green function for the Airy operator. More precisely, let and similarly for G Airy = G Ai,2 , the primitive of G Ai,1 , so that ∂ 2 y G Ai,2 (x, y) = G Ai (x, y). 
We have Note that, taking into account the fast decay of G Ai near x, (2.26) We define the AirySolve operator by (2.27) and the associated error term the Airy operator acting on the y variable. These operators will be used in Section 3.5. Rayleigh solutions near critical layers In this section, we construct two approximate solutions φ app s,± to the Orr Sommerfeld equation, whose modules respectively go to +∞ and 0 as z → +∞. More precisely, we prove the following Lemma with The construction of approximate solutions for Orr Sommerfeld equation starts with the construction of approximate solutions for the Rayleigh operator. For small α, the construction of solutions to the Rayleigh equation is a perturbation of the construction for α = 0, which is explicit. We will now detail the construction of an inverse of Ray 0 and then of an approximate inverse of Ray α for small α Function spaces In the next sections we will denote The highest derivative of the Rayleigh equation vanishes at z = z c , since U (z c ) = c. To handle functions which have large derivatives when z is close to ℜz c , we introduce the space Y η defined as follows. Note that in our analysis, z c is never real, so z − z c never vanishes. We are close to a singularity but never reach it. We say that a function f lies in Y η if for any z ≥ 1, and . The best constant C in the previous bounds defines the norm f Y η . Rayleigh equation when α = 0 In this section, we study the Rayleigh operator Ray 0 . More precisely, we solve The main observation is that This leads to the following Lemma whose proof is given in [7, Lemma 3.2] Lemma 3.2 ( [6,7]). Assume that ℑc = 0. There exist two independent solutions φ 1,0 = U − c and φ 2,0 of Ray 0 (φ) = 0 with unit Wronskian determinant Furthermore, there exist smooth functions P (z) and Q(z) with P (z c ) = 0 and Q(z c ) = 0, so that, near z = z c , for some η 1 > 0. Let φ 1,0 , φ 2,0 be constructed as in Lemma 3.2. Then the Green function G R,0 (x, z) of the Ray 0 operator can be explicitly defined by The inverse of Ray 0 is explicitly given by Note that the Green kernel G R,0 is singular at z c . The following lemma asserts that the operator RaySolver 0 (·) is in fact well-defined from X η to Y 0 , which in particular shows that RaySolver 0 (·) gains two derivatives, but losses the fast decay at infinity. It transforms a bounded function into a function which behaves like (z − z c ) log(z − z c ) near z c . Lemma 3.3. Assume that ℑc = 0. For any f ∈ X η , RaySolver 0 (f ) is a solution to the Rayleigh problem (3.1). In addition, RaySolver 0 (f ) ∈ Y 0 , and there holds for some constant C. Proof. Using (3.3), it is clear that φ 1,0 (z) and φ 2,0 (z)/(1 + z) are uniformly bounded. Thus, considering the cases x < 1 and x > 1, we obtain That is, G R,0 (x, z) grows linearly in x for large x and has a singularity of order |x − z c | −1 when x is near z c . As |f (z)| ≤ e −ηz f X η , the integral (3.4) is well-defined and we have in which we used the fact that ℑz c ≈ ℑc. To bound the derivatives, we need to check the order of the singularity for z near z c . We note that Thus, ∂ z RaySolver 0 (f )(z) behaves as 1+| log(z −z c )| near the critical layer. In addition, from the Ray 0 equation, we have This proves that RaySolver 0 (f ) ∈ Y 0 and gives the desired bound. Approximate Green function when α ≪ 1 Let φ 1,0 and φ 2,0 be the two solutions of Ray 0 (φ) = 0 that are constructed above, in Lemma 3.2. We now construct an approximate Green function to the Rayleigh equation for α > 0. 
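Before moving to α > 0, it is worth seeing the Ray₀ ingredients of Lemma 3.2 concretely. With φ₁,₀ = U − c known, the second solution with unit Wronskian follows by reduction of order, φ₂,₀(z) = φ₁,₀(z) ∫ dt/φ₁,₀(t)², and since ℑc ≠ 0 the integrand is never singular on the real line. The sketch below uses the same assumed exponential profile as before and illustrative constants; the paper's normalization point for the integral is not specified, so one is chosen arbitrarily.

```python
# Reduction of order for Ray_0: phi_2 = phi_1 * int_a^z dt / phi_1(t)^2.
import numpy as np
from scipy.integrate import quad

U_plus, beta = 1.0, 2.0
c = 0.05 + 0.02j                                   # Im(c) != 0 avoids the singularity
phi1 = lambda z: U_plus * (1 - np.exp(-beta * z)) - c
dphi1 = lambda z: U_plus * beta * np.exp(-beta * z)  # = U'(z)

def I(z, a=1.0):
    # complex integral split into real and imaginary quadratures
    re = quad(lambda t: (1 / phi1(t) ** 2).real, a, z)[0]
    im = quad(lambda t: (1 / phi1(t) ** 2).imag, a, z)[0]
    return re + 1j * im

phi2 = lambda z: phi1(z) * I(z)
dphi2 = lambda z: dphi1(z) * I(z) + 1 / phi1(z)    # product rule

z0 = 0.7
W = phi1(z0) * dphi2(z0) - dphi1(z0) * phi2(z0)
print(W)   # = 1 exactly: unit Wronskian by construction, independent of z0
```

These two solutions are precisely the building blocks from which the Green kernel G_{R,0}(x, z) of the Ray₀ operator is assembled.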
To proceed, let us introduce A direct computation shows that their Wronskian determinant equals Note that the Wronskian vanishes at infinity since both functions have the same behavior at infinity. In addition, We are then led to introduce an approximate Green function G R,α (x, z), defined by Again, like G R,0 (x, z), the Green function G R,α (x, z) is "singular" near z c . By a view of (3.8), for each fixed x, where the error kernel E R,α (x, z) is defined by We then introduce an approximate inverse of the operator Ray α defined by (3.10) and the related error operator Lemma 3.4. Assume that ℑc > 0. For any f ∈ X η , with α < η, the function RaySolver α (f ) is well-defined in Y α , and satisfies Ray α (RaySolver α (f )) = f + Err R,α (f ). Furthermore, there hold RaySolver and for some universal constant C. Proof. The proof follows that of Lemma 3.3. Indeed, since the behavior near the critical layer z = z c is the same for these two Green functions, and hence the proof of (3.12) and (3.13) near the critical layer identically follows from that of Lemma 3.3. Let us check the behavior at infinity. Consider the case p = 0 and assume f X η = 1. Using (3.5), we get Hence, by definition, which is bounded by C(1 + | log ℑc|)e −αz , upon recalling that α < η. This proves the right exponential decay of RaySolver α (f )(z) at infinity, for all f ∈ X η . The estimates on Err R,α are the same, once we notice that (U (z) − c)∂ z φ 2,0 has the same bound as that for φ 2,0 , and similarly for φ 1,0 . Remark 3.5. For f (z) = (U − c)g(z) with g ∈ X η , the same proof as done for Lemma 3.4 yields which are slightly better estimates as compared to (3.12) and (3.13). Construction of φ app s,− Let us start with the decaying solution φ s,− . We note that is only a O(α) smooth approximate solution to Rayleigh equation since Similarly, a direct computation shows that This is not sufficient for our purposes, and we have to go to the next order. We therefore introduce Note that ψ 1 is of order O(α) in Y η , and behaves like α(z − z c ) log(z − z c ) near z c . It particular ψ 1 is not a smooth function near z c . Its fourth order derivative behaves like α/(z − z c ) 3 in the critical layer. We have hence Orr α,c (ψ 0 + ψ 1 ) = ε(∂ 2 z − α 2 ) 2 ψ 1 + Err R,α (e 0 ). (3.15) Note that Moreover, using Rayleigh equation, In view of Remark 3.5, Ray α (ψ 1 ) and U ′′ ψ 1 are of order O(α) in X η . We thus have Next we expand ∂ 2 z in (3.17) which gives three terms. The first one is As Ray α (ψ 1 ) and ψ 1 are of order O(α) in Y η , this quantity is bounded by (3.18) The third term in the expansion of (3.17) is which is bounded by O(α). Thus, we can write the error term as This error term E 2 is therefore too large for our purposes. However, it is located near z = z c , namely in the critical layer. We therefore correct ψ 0 + ψ 1 by ψ 2 by approximately inverting the Airy operator in this layer. More precisely, let which will create an error term E 3 = Orr α,c (ψ 2 ) + E 2 = Airy(ψ 2 ) + OrrAiry(ψ 2 ) + E 2 = OrrAiry(ψ 2 ) + ErrorAiry(E 2 ). We will prove that C is small, that D is invertible and that A is related to Rayleigh equations. This will allow the construction of an explicit approximate inverse, and by iteration, of the inverse of M . Let us detail these points. Let us first study D. Following (2.6), for z ≫ ν 1/4 , hence D is invertible and For z of order ν 1/4 , we note that F + and F − are of order O(1), Ai(γg(x)) Ai(2, γg(x)) + O(γ) and similarly for ∂ y φ f,− and ∂ y φ f,+ . 
Note that γ 2 /µ 2 f , γ 3 /µ 3 f , Ai(2, γg(x)) and Ci(2, γg(x)) are of order O(1). As g ′ (z c ) = 1, up to normalization of lines and columns, D is close to Ai Ci Ai ′ Ci ′ which is invertible by definition of the special Airy functions Ai and Ci. Let us turn to C. The worst term in C is those involving φ s,+ because of its logarithmic singularity. More precisely, ∂ k y φ s,+ behaves like (z − z c ) k−1 and is bounded by |ℑc| k−1 ∼ ν (1−k)/4 for k = 2, 3. Hence, as µ −1 f = O(ν 1/4 ), Note that A = A 1 A 2 with The determinant A 2 is the Wronskian of φ app s,± and hence a perturbation of the Wronskian of φ 1,α and φ 2,α which equals to e −αx . We distinguish between x < α 1/2 and x > α 1/2 . In the second case, Orr c,α is a small perturbation of a constant coefficient fourth order operator. The Green function may therefore be explicitly computed. We will not detail the computations here and focus on the case where x < α 1/2 . In this case the Wronskian is of order O(1). As a consequence and We now observe that the matrix M has an approximate inverse As D −1 is bounded and A −1 BD −1 is of order O(µ f ), we obtain that a ± and b ± are respectively bounded by C/νµ 2 f and C/νµ 3 f . Exact Green function Once we have an approximate Green function, we obtain the exact Green function by iteration, following the strategy developped in [8].
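The explicit approximate inverse of the connection matrix M is elided above, but given that the block C is shown to be small and A and D invertible, the standard block-triangular approximation M⁻¹ ≈ [[A⁻¹, −A⁻¹BD⁻¹], [0, D⁻¹]] inverts M up to an error of order C. The sketch below verifies this on placeholder matrices; the actual entries of M (Wronskian-type pairings of the slow and fast solutions) are replaced by random well-conditioned blocks.

```python
# Block-triangular approximate inverse of M = [[A, B], [C, D]] for small C.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(2, 2)) + 3 * np.eye(2)
B = rng.normal(size=(2, 2))
D = rng.normal(size=(2, 2)) + 3 * np.eye(2)
C = 1e-4 * rng.normal(size=(2, 2))           # "small" coupling block

M = np.block([[A, B], [C, D]])
Ai_, Di_ = np.linalg.inv(A), np.linalg.inv(D)
M_inv_approx = np.block([[Ai_, -Ai_ @ B @ Di_],
                         [np.zeros((2, 2)), Di_]])

err = np.linalg.norm(M @ M_inv_approx - np.eye(4))
print(f"|| M @ Minv_approx - I || = {err:.2e}")  # error of order |C|
```

Iterating this approximate inverse against the residual is exactly the kind of fixed-point argument that upgrades the approximate Green function to the exact one.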
2019-10-09T13:44:46.000Z
2019-10-09T00:00:00.000
{ "year": 2022, "sha1": "5f3d2fcb04c64afc10e68a90e21e1ecf422bb47a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "5f3d2fcb04c64afc10e68a90e21e1ecf422bb47a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
248730913
pes2o/s2orc
v3-fos-license
Splint efficacy in chronic post-stroke spasticity: a pilot study Introduction. Hand spasticity after stroke is a serious issue and may lead to hygiene problems, range of motion limitations, or contractures. Hand splints are often used to reduce spasticity and prevent movement limitations; however, there is little research available on the efficacy of splints in spasticity. The study aimed to investigate the efficacy of a reflex inhibitory splint (RiS) for upper extremity spasticity in stroke patients by using clinical and electrophysiological studies. Methods. Stroke patients with elbow and hand spasticity were allocated into 2 groups. The splint group ( n = 16) wore RiS. The control group ( n = 13) did not wear any upper extremity splint. Both groups received the same rehabilitation program during this period. They were evaluated for motion in the upper extremity with the Brunnstrom scale and Fugl-Meyer upper extremity scale. Electrophysiological measurements showing motor neuron excitability such as the ratio between the maximum amplitude of H-reflex and the maximum amplitude of M-response (H max /M max ratio), H-reflex latency, and F-wave persistence and latency were also studied. All clinical and electrophysiological measurements were performed in both groups on days 0 and 15. Results. At the end of the treatment, elbow and finger flexion tonus decreased and active wrist extension angle increased in the splint treatment group compared with both baseline and the control group. Compared with the pre-treatment status, a correlation was detected between the H max /M max ratio and the wrist flexion tonus in the splint group. Conclusions. RiS may be useful for the management of post-stroke upper-limb spasticity. Introduction Spasticity accounts for functional impairment in 17-41% of patients with stroke [1]. it is characterized by an increased velocity-dependent resistance to passive stretch. in hemiplegia, spasticity is severe in patients with more motor weakness who developed hemihypoesthesia and with a history of stroke [2][3][4]. The aim of spasticity treatment programs is to reduce or normalize the muscle tone to prevent secondary complications. if spasticity is not treated, shortening, fibrosis, calcification, and contracture develop in the muscles [5]. Treatment options include stretching, splinting, strengthening the agonist muscle, oral medications, or local injections (phenol or botulinum toxin) [6]. As the muscles remain in a long position because of splinting or stretching, motor neuron excitability may decrease and the biomechanical properties of the muscle fascicles may change [7]. decreased spasticity may lead to increased motor function, decreased pain, and improved patient and caregiver quality of life [5,8]. Hand spasticity can be a major complication that increases disability after stroke. it may cause muscle shortening and contractures, pain from muscle spasms, oedema, poor hygiene, loss of function, and depression [9]. Hand-wrist splints are commonly used to prevent these complications. Splints provide a biomechanical effect by stretching the muscle and connective tissue. it also reduces the reflex stretch of the muscles and reduces spasticity with neurophysiological effects [10]. Although the reflex inhibitory splint (RiS) is one of these splints, the few studies in the literature have provided contradicting results on the efficiency of RiS for spasticity. These heterogeneous studies are small in number, with a short follow-up (2-8 weeks) [11,12]. 
in a review, Steultjens et al. [13] concluded that splint usage reduced spasticity. However, Lannin and Ada [14] reported that splint usage at night neither reduced spasticity nor prevented contractures. The aim of this study was to investigate the effectiveness of a RiS for upper extremity spasticity in stroke patients with clinical and electrophysiological studies. Subjects and methods Adult spastic hemiplegic patients with upper extremity involvement and who had a stroke for the first time were evaluated for inclusion in the study. inclusion criteria were age > 18 years, stroke duration > 1 month, spasticity in wrist or finger flexors with Ashworth scale score 2, and being treatment-naive for spasticity (botulinum toxin injection, previous splinting, or anti-spasticity medications). Excluded were patients whose H-reflex could not be demonstrated by electrophysiological studies, as well as those with polyneuropathy or radiculopathy of the upper extremity, upper motor neuron lesion to the non-hemiplegic upper extremity, complex regional pain syndrome or upper extremity contractures, or severe cognitive problems. Patients were alternatively allocated to the splint or control group one by one. demographic and clinical characteristics of the patients, findings on neurologic examination of upper extremities, and range of motion of the affected upper extremities were recorded. Clinical and instrumental outcome measures were obtained at baseline and on the 15 th day. The patients were evaluated with the Ashworth spasticity scale, Brunnstrom scale, Fugl-Meyer upper extremity motor function scale, and electrophysiological studies. Electrophysiological studies included H-reflex and F-wave studies in both upper extremities. The electrophysiological assessment occurred when the patient was lying in a supine position in a warm, quiet room and performed by using Medtronic Keypoint 4-channel electromyography. Bilateral median and ulnar motor and sensory conduction studies and unilateral tibial and peroneal motor and sural sensory conduction studies were carried out to rule out polyneuropathy. H-reflex and F-wave response were measured to evaluate motor neuron excitability. The H-reflex of the patients was recorded from the flexor carpi radialis (FCR) muscle. Active surface electrodes were placed over the FCR muscle belly. A stimulator was placed at the antecubital fossa to stimulate the median nerve. The maximum amplitude of H-reflex, H-reflex latency, and compound muscle action potential of the FCR were recorded [10]. The H max /M max ratio was calculated for both sides. The F-wave responses were measured from the abductor pollicis brevis muscle. Persistence of the F-wave response was calculated by using 20 consecutive stimulations and included in the statistics as percentages. Minimum values of F-wave latency (ms) were recorded. The study and control groups received a standard conventional rehabilitation program (range of motion exercises, stretching exercises, posture exercises, and neurophysiological exercises -Brunnstrom approach) 2 hours a day, 5 days a week, for 2 weeks. The splint group wore the RiS 8 hours a day, except while sleeping, for 15 days. The control group did not wear any upper extremity splint. The patients treated with a RiS were asked about pain or discomfort to determine tolerability, evaluated with visual analogue scale. The RiS devices were made of thermoplastic material. 
They were placed on the palmar surface of the hand, and auto-adhesive straps were located on the dorsal face of the hand, wrist, and forearm. The patient's joint positioning was as follows: wrist in 15° extension, metacarpophalangeal and proximal and distal interphalangeal joints in neutral position, and the fingers in abduction (Figure 1). Statistics Statistical analyses were performed with the MedCalc program, version 11.5.0. Descriptive statistics were shown as mean ± standard deviation for continuous variables, and nominal variables were presented as the number of cases and percentages. Inter- and intra-group comparisons were performed by using Student's t-test, the chi-square test, the Mann-Whitney U test, and the Wilcoxon test where appropriate. Spearman's method was applied to calculate the correlation rho. The value of p < 0.05 was accepted as significant for the results. Ethical approval The research related to human use has complied with all the relevant national regulations and institutional policies, has followed the tenets of the Declaration of Helsinki, and has been approved by the Ankara Physical Medicine and Rehabilitation Training and Research Hospital ethical committee (approval number: 09-3852). Informed consent Informed consent has been obtained from all individuals included in this study. Results Overall, 39 patients were involved in the study, but 4 were removed for not being treatment-naive. The remaining 35 patients were allocated either to the splint (n = 18) or to the control group (n = 17). After electromyographic evaluation, 6 participants were excluded from the study for having additional neurologic deficits (n = 4) or for not having a demonstrable H-reflex (n = 2). The remaining 29 patients (16 in the splint group and 13 in the control group) continued the study. The demographic and clinical data of the study population are shown in Table 1. Both groups were similar in age, gender distribution, side of hemiplegia, and duration and aetiology of stroke. At baseline, there was no statistically significant difference in the scores for the upper extremity Brunnstrom scale, hand Brunnstrom scale, elbow, wrist, or finger flexor Ashworth scale, Fugl-Meyer upper extremity motor function scale, H-reflex amplitude, H-reflex latency, M amplitude, Hmax/Mmax ratio, F-wave latency, or F-wave persistence (%) between the groups (p > 0.05). In the splint treatment group, the 15th day elbow and finger flexion tonus were decreased (Table 2). There was no significant difference in spasticity in the control group (Table 3). H-reflex amplitude, Hmax/Mmax ratio, H-reflex latency, F-wave latency, and F-wave persistence of the groups were similar between baseline and the 15th day. The active wrist extension was significantly higher in the splint treatment group on the 15th day (Table 4). In the splint treatment group, on the 15th day, wrist flexion tonus decreased in 5/16 patients and none had increased tonus, whereas in the control group it increased in 6/13 patients and none had improvement. All of the patients in the splint group were able to tolerate the RiS. The mean visual analogue scale score of patients wearing splints was 2.74 ± 1.8. No participant complained of pain associated with splinting.
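For readers who want to reproduce the arithmetic behind the measures reported above, the following minimal sketch computes the Hmax/Mmax ratio and F-wave persistence described in the Methods and applies the stated nonparametric tests. All amplitude values are synthetic placeholders, not data from this study.

```python
# Illustrative electrophysiology arithmetic and nonparametric comparisons.
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

# Hmax/Mmax: maximal H-reflex amplitude over maximal M-response amplitude
h_max = np.array([1.2, 0.9, 1.5, 1.1, 0.8])     # mV, hypothetical
m_max = np.array([6.0, 5.1, 7.2, 6.4, 5.8])     # mV, hypothetical
ratio_day0 = h_max / m_max

# F-wave persistence: responses observed out of 20 consecutive stimulations
persistence_pct = 100 * np.array([17, 20, 14, 18, 19]) / 20

# between-group comparison (e.g. splint vs control) -> Mann-Whitney U test
control = np.array([0.10, 0.12, 0.09, 0.15, 0.11])
print(mannwhitneyu(ratio_day0, control, alternative="two-sided"))

# within-group day 0 vs day 15 comparison -> Wilcoxon signed-rank test
ratio_day15 = ratio_day0 * np.array([0.90, 0.95, 0.85, 1.00, 0.92])
print(wilcoxon(ratio_day0, ratio_day15))
```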
Discussion This study investigated the impact of the RiS on upper extremity spasticity in hemiplegic patients with clinical and electrophysiological assessments. In addition to the conventional rehabilitation program provided to both groups, the patients in the splint group were also administered a RiS. Relative to the pre-treatment status, the splint group exhibited a reduction in the tonus of the elbow and finger flexors and a difference in active wrist extension angle after treatment. No significant difference was detected in the electrophysiological parameters. Spasticity and contracture are actually intertwined conditions. Two mechanisms have been proposed to explain the effect of contracture on spasticity development. In the first mechanism, if a muscle is shortened, the joint angle changes, so the muscle fibres are stretched more than normally and the reflex response increases. In the second mechanism, a muscle held in a shortened position develops greater reflex tension. Thus, a spasticity-contracture-spasticity cycle develops. One of the methods used to prevent this mechanism is splinting [15]. The RiS is suggested to reduce spasticity by stretching the wrist dorsiflexors and finger extensors [16]. Some authors believe that dorsal splints are more effective in reducing spasticity because palmar splints are thought to increase spasticity by stimulating the flexor muscles. However, there is no evidence to support this idea. Even when a dorsal splint is used, tapes will still be in the palmar region [10]. Although with the RiS changing clothing was a little more difficult than with other hand splints in our study, patient compliance with the splint was good. Pizzi et al. [11] followed spastic hemiplegic patients who wore a RiS for 3 months. They found increased wrist range of motion, reduced tonus of elbow flexion, and a reduced FCR Hmax/Mmax ratio. In that study, the splint group exhibited a reduction in the tonus of the wrist and finger flexors, an increase in upper extremity Functional Independence Measure values, and differences in active wrist extension angle after treatment. Similar to our results, Basaran et al. [10] reported that H-reflex latency and the Hmax/Mmax ratio were not statistically significantly different after 5 weeks of splint usage in hemiplegic patients. In the evaluation of spasticity, there are clinical methods, as well as biomechanical and electrophysiological methods. Recently, elastography and myotonometry methods have been used. Although the H-reflex is the most commonly applied method, there is no correlation with spasticity in most studies [17,18]. We used electrophysiological measurements for an objective and sensitive assessment. In the literature, there are many studies on the H-reflex in the lower extremities, but studies in the upper extremities are limited. Phadke et al. [19] found that the FCR H-reflex could be a reliable and sensitive indicator in hemiplegia for both paretic and non-paretic hands. In our study, we detected a statistically significantly higher Hmax/Mmax ratio on the spastic side compared with the non-hemiplegic side. The difference persisted after the treatment period. Although mechanical and electrophysiological measurements are attractive and parametrical, they are generally not correlated with clinical changes (as in our study). Even though these clinical scales are inadequate, they still remain the most used assessment methods. In the splint group, we detected a significantly higher increase in the active wrist extension angle on day 15. This was attributed to the extension in the muscle due to positioning or a reduction in the flexor spasticity. The absence of an increase in the other joints' range of motion could result from the short follow-up time. Relative to the pre-treatment status, the splint group exhibited a reduction in the tonus of the wrist, elbow, and finger flexors. A statistically significant improvement was observed in the splint group in the elbow flexion tonus, wrist flexion tonus, and finger flexion tonus. Two studies implied a reduction in elbow flexion tonus [11,20]. Like us, Pizzi et al. [11] hypothesized that biceps brachii hyperactivity was inhibited by wrist flexor group II afferents from the stretched FCR and/or other wrist flexors.
Regarding the wrist flexion tonus, 5 of the 16 patients in the splint group showed a reduction, whereas 6 of the 13 patients in the control group had an increased tonus; however, the difference was not statistically significant. This may be due to the short follow-up. In the study by Pizzi et al. [11], one patient could not tolerate the RiS, whereas all participants in our study tolerated the splint. Limitations Our study limitations are the short post-treatment assessment period, the non-randomized design, and the small number of patients. In addition, we did not evaluate sensation or hemi-neglect in the participants. We assessed the patients after 2 weeks of treatment. In the literature, similar studies present treatment and follow-up periods of 2-12 weeks [11,21]. Conclusions The RiS appears to be effective in reducing spasticity. Longer follow-up studies are needed to evaluate the long-term effect. Disclosure statement No author has any financial interest or received any financial benefit from this research.
2022-05-13T15:13:05.385Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "b35d6153167240e3267b8ec4fe6788cd6a50be23", "oa_license": "CCBYNCSA", "oa_url": "https://www.termedia.pl/Journal/-128/pdf-44998-10?filename=PQ_30(2)_20_23.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ec51e1912d1c1679203b0ac829f792a88cb90191", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
52018099
pes2o/s2orc
v3-fos-license
Distinct branches of the N-end rule pathway modulate the plant immune response (cid:1) The N-end rule pathway is a highly conserved constituent of the ubiquitin proteasome system, yet little is known about its biological roles. (cid:1) Here we explored the role of the N-end rule pathway in the plant immune response. We investigated the genetic influences of components of the pathway and known protein substrates on physiological, biochemical and metabolic responses to pathogen infection. (cid:1) We show that the glutamine (Gln) deamidation and cysteine (Cys) oxidation branches are both components of the plant immune system, through the E3 ligase PROTEOLYSIS (PRT)6. In Arabidopsis thaliana Gln-specific amino-terminal (Nt)-amidase (NTAQ1) controls the expression of specific defence-response genes, activates the synthesis pathway for the phytoalexin camalexin and influences basal resistance to the hemibiotroph pathogen Pseudomonas syringae pv tomato ( Pst ). The Nt-Cys ETHYLENE RESPONSE FACTOR VII transcription factor substrates enhance pathogen-induced stomatal closure. Transgenic barley with reduced HvPRT6 expression showed enhanced resistance to Ps. japonica and Blumeria graminis f. sp. hordei , indicating a conserved role of the pathway. (cid:1) We propose that that separate branches of the N-end rule pathway act as distinct components of the plant immune response in flowering plants. Introduction The regulation of protein stability through the ubiquitin proteasome system (UPS) is a central component of cellular homeostasis, environment interactions and developmental programmes (Varshavsky, 2012), and an important component of the plant immune system (Zhou & Zeng, 2017). Plants have evolved to recognize the presence of a pathogen in two main ways. Basal (primary) defence is characterised by the recognition of pathogen elicitors called pathogen associated molecular patterns (PAMPs) by protein receptors known as pattern recognition receptors (PRR), activating PAMP-triggered immunity (PTI) (Boller & Felix, 2009). When this response is effective, pathogens can deliver effector molecules into the host cells to weaken PTI and facilitate infection triggering a second layer of defence (effector triggered immunity; ETI). ETI is typically a qualitative response based on interference with pathogen effector activity by plant resistance (R) gene products, localized inside the cell (Dangl & Jones, 2001). Both PTI and ETI induce similar immune responses but of different amplitude, with ETI often resulting in a hypersensitive response (HR). The specific set of mechanisms activated also depend to a large extent on the life strategy of the pathogen and how adapted they are to the host. Typically, the plant hormones jasmonic acid (JA) and ethylene (ET) mediate responses to nonadapted necrotrophs that cause host cell death to acquire nutrients from dead or senescent tissues (Grant & Jones, 2009;Pieterse et al., 2009) whilst salicylic acid (SA) plays a crucial role in activating defence against adapted biotrophs and hemibiotrophs. Recently, regulation of protein stability by the Arg/N-end rule pathway of ubiquitin-mediated proteolysis has been demonstrated to play a role in plant responses to biotic stress. The pathway is associated with increased development of clubroot caused by the obligate biotroph Plasmodiophora brassicae (Gravot et al., 2016). 
Induction of components of the hypoxia response, controlled by Group VII ETHYLENE RESPONSE FACTOR (ERFVII) transcription factor substrates (ERFVIIs), enhanced clubroot development, indicating that the protist hijacks the N-end rule ERFVII regulation system to enhance infection. In another study, inactivation of different components of the Arg/N-end rule pathway was shown to result in greater susceptibility of Arabidopsis to necrotrophic pathogens and altered timing and amplitude of response to the hemibiotroph Pseudomonas syringae pathovar tomato (Pst) AvrRpm1 (de Marchi et al., 2016). A correlation between Nt-Acetylation and the stability of a Nod-like receptor, Suppressor of NPR1, Constitutive 1 (SNC1) was also reported (Xu et al., 2015). Whilst these reports provide evidence that the N-end rule pathway is involved in the regulation of plant defence responses, the mechanisms, substrates or their function in resistance have not been investigated previously (Gibbs et al., 2014a). The N-end rule pathway of ubiquitin-mediated proteolysis is an ancient and conserved branch of the UPS (Gibbs et al., 2014a). This pathway relates the half-life of substrates to the amino-terminal (Nt-) residue, which forms part of an N-degron (Gibbs et al., 2014a). Destabilizing residues of the Arg/N-end rule are produced following endo-peptidase cleavage and may be primary, secondary or tertiary (Fig. 1a). Basic and hydrophobic primary destabilizing residues are recognized directly by N-recognin E3 ligases, in plants represented by two proteins, PROTEOLYSIS(PRT)6 and PRT1 (Gibbs et al., 2014a). Secondary destabilizing residues (Glu, Asp and oxidized Cys) can be N-terminally arginylated by arginyl-transferases (ATEs), and tertiary destabilizing residues (Gln, Asn and Cys) can undergo modifications to form secondary destabilizing residues (Gibbs et al., 2014a). Oxidation of Cys was shown in vitro to occur both nonenzymically (Hu et al., 2005) or enzymatically (Weits et al., 2014;White et al., 2017), whereas in higher eukaryotes deamidations of Gln and Asn are carried out by residue-specific N-terminal amidases (NTAQ1 (Wang et al., 2009) and NTAN1 (Grigoryev et al., 1996), respectively). This hierarchical structure is conserved in eukaryotes, and physiological substrates with N-terminal residues representing these destabilizing classes have been identified (Piatkov et al., 2014). The Usp1 deubiquitylase is targeted for degradation through the deamidation branch of the Arg/N-end rule via NTAQ1 as a consequence of auto-cleavage, that reveals N-terminal Gln (Piatkov et al., 2012). Proteins with similarities to mouse NTAN1 and NTAQ1 are encoded in higher plant genomes, in Arabidopsis by AT2G44420 (putative NTAN1) and AT2G41760 (putative NTAQ1). Expression of these in a deamidation deficient nta1 mutant of Saccharomyces cerevisiae could functionally restore degradation of the N-end rule reporters Asn-b-galactosidase (b-Gal) and Gln-b-Gal, respectively. ATE activity was required for this destabilization in yeast (Graciet et al., 2010). Although the Arg/N-end rule pathway is evolutionarily highly conserved in eukaryotes, few substrates or functions for different branches have been shown. In plants the Cys branch of the Arg/N-end rule pathway controls homeostatic response to hypoxia (low oxygen) and NO sensing through the Met-Cys initiating ERFVII transcription factor substrates (Gibbs et al., 2011(Gibbs et al., , 2014bLicausi et al., 2011). 
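The hierarchical routing of N-terminal residues described above (Fig. 1a) can be summarized schematically in code. The sketch below is a simplification for illustration only: the residue sets follow the text's description of primary (basic and hydrophobic), secondary (Glu, Asp, oxidized Cys) and tertiary (Gln, Asn, Cys) destabilizing classes, with "C*" standing for oxidized Cys.

```python
# Schematic Arg/N-end rule routing of a destabilizing N-terminal residue.
PRIMARY = set("RKHFWYLI")          # basic + bulky hydrophobic Nt-residues
SECONDARY = {"E", "D", "C*"}       # Glu, Asp, oxidized Cys -> ATE arginylation
TERTIARY = {"Q": "NTAQ1 deamidation",
            "N": "NTAN1 deamidation",
            "C": "oxidation to C*"}

def n_degron_route(nt_residue: str) -> str:
    if nt_residue in PRIMARY:
        return "primary: direct recognition by an N-recognin (PRT6/PRT1)"
    if nt_residue in SECONDARY:
        return "secondary: Nt-arginylation by ATE1/ATE2, then PRT6"
    if nt_residue in TERTIARY:
        return f"tertiary: {TERTIARY[nt_residue]}, then arginylation and PRT6"
    return "stabilizing / not an Arg/N-end rule substrate"

for r in ("C", "Q", "E", "M"):
    print(r, "->", n_degron_route(r))
```

The prt6 mutant used throughout the study removes the final recognition step, which is why it stabilizes the substrates of all three classes simultaneously.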
In this paper, we investigated the role of distinct branches of the Arg/N-end rule pathway in the immune response in Arabidopsis and barley (Hordeum vulgare). We demonstrate that two branches of the pathway, Gln-deamidation and Cys-oxidation, regulate resistance to the hemibiotroph Pst and the biotroph Blumeria graminis f. sp. hordei (Bgh). We also show a significant role for Nt-Gln amidase NTAQ1 in the regulation of molecular components associated with basal responses to infection, and a role for both NTAQ1 and the known Nt-Cys ERFVII substrates in resistance related to stomatal function. Construction of transgenic Arabidopsis lines ectopically expressing NTAQ1 To generate Arabidopsis NTAQ1 overexpressing lines, the full-length cDNA sequence (with and without the STOP codon) was amplified from 7-d-old seedling cDNA and recombined into pDONR221. The constructs were mobilized into pH7m34G and pH7m24GW2, with the GSrhino tag in C-terminal or N-terminal position of the NTAQ1, respectively (Karimi et al., 2007). Then the constructs were transformed into Agrobacterium tumefaciens (strain GV3101 pMP90) and Arabidopsis ntaq1-3 using standard protocols (Clough & Bent, 1998). In vitro assay for NTAQ1 activity The Arabidopsis NTAQ1 coding sequence was cloned from cDNA and flanked by an N-terminal tobacco etch virus (TEV) protease recognition sequence (ENLYFQ-X) using primers ss_ntaq1_tev and as_ntaq1_gw, followed by a second PCR with as_ntaq1_gw and adapter tev attaching a Gateway attB1 site for sub-cloning into pDONR201 (Invitrogen). An LR reaction into pVP16 (Thao et al., 2005) leads to an N-terminal 8xHis:MBP double affinity tag. An assay for NTAQ activity was performed as described previously (Wang et al., 2009). Analysis of pathogen growth in plant material The bacterial suspension was injected with a needleless syringe into the abaxial side of leaves or sprayed on the surface of the leaves of 3.5-wk-old plants. Pst DC3000 avrRpm1, Pst DC3000 and Pst DC3000 hrpA− were grown overnight at 28°C in Petri dishes on King's B medium. For analysis of bacterial growth, three leaves per plant of at least seven plants were injected with a bacterial suspension of 10 6 CFU ml −1 (OD 600 nm 0.1 = 10 8 CFU ml −1 ) or sprayed with a suspension of 10 8 CFU ml −1 . A disc of 0.28 cm 2 from each infected leaf was excised at 96 h, pooled in triplicate, homogenized, diluted and plated for counting. Inoculation of Botrytis cinerea was performed by pipetting a drop of 10 µl of a suspension of 5 × 10 5 spores ml −1 onto the surface of the leaves. The response was analyzed by measuring the diameter of the symptoms produced in three leaves of at least 20 independent plants. Barley plants were infected with Fusarium spp. and Blumeria graminis f. sp. hordei as previously described (Ajigboye et al., 2016). Leaf material of 25-d-old barley plants grown under controlled conditions (20°C:15°C; 16-h photoperiod; 80% RH, 500 µmol m −2 s −1 metal halide lamps (HQI) supplemented with tungsten bulbs) was syringe-infiltrated with 0.1 OD Ps. pv japonica obtained from the National Collection of Plant Pathogenic Bacteria (NCPPB), UK. Leaf material was collected before treatment and 4 d after inoculation for conductivity assays and RNA extraction. Production of H 2 O 2 was visualized by staining with 3,3′-diaminobenzidine tetrachloride as described (Thordal-Christensen et al., 1997; Moreno et al., 2005). 
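The dilution-plating quantification described above reduces to simple arithmetic, sketched below. The colony counts, dilution factor and homogenate volume are hypothetical placeholders (the extraction volume is not stated in the text); only the disc area of 0.28 cm², pooled in triplicate, is taken from the Methods.

```python
# Back-of-envelope conversion of colony counts to CFU per cm^2 of leaf tissue.
import numpy as np

disc_area_cm2 = 0.28 * 3          # three pooled 0.28 cm^2 discs per replicate
homogenate_ml = 1.0               # extraction volume (assumed)
plated_ml = 0.01                  # 10 ul plated per dilution (assumed)

colonies = np.array([38, 45, 41])          # counts at the countable dilution
dilution = 1e-4                            # 10^-4 serial dilution (assumed)

cfu_per_ml = colonies / (plated_ml * dilution)
cfu_per_cm2 = cfu_per_ml * homogenate_ml / disc_area_cm2
log_cfu = np.log10(cfu_per_cm2)            # growth is usually compared on log10
print(log_cfu.round(2), log_cfu.mean().round(2))
```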
Stomatal aperture analyses For stomatal aperture in response to Pst assays, leaves from 3.5wk-old plants were used. In the morning after 2 h the lights were switched on and peels from the abaxial side of the leaves were placed in Petri dishes containing 10 mM MES/KOH pH 6.1, 50 mM KCl and 0.1 mM CaCl 2 for 2 h in continuous light. Then the buffer was replaced with a solution of Pst DC3000 (OD 0.2: 2 9 10 8 CFU ml À1 ). Stomatal aperture was measured after 0, 1 and 3 h of incubation with the bacteria. Stomatal aperture measurements for ABA sensitivity assays were carried out on detached leaf epidermis as described previously (McAinsh et al., 1991;Chater et al., 2011). Protein extraction and Immunoblotting Protein extractions and immunoblotting were carried out as described previously (Gibbs et al., 2011). Gene expression analysis RNA extraction, cDNA synthesis, semiquantitative and quantitative RT-PCR were performed as previously described for Arabidopsis (Gibbs et al., 2011(Gibbs et al., , 2014b and barley (Mendiondo et al., 2016). For primers used see Supporting Information Table S1. Analysis of nitrate reductase activity Nitrate reductase was assayed as previously (Vicente et al., 2017) with modifications described elsewhere (Kaiser & Lewis, 1984). Experimental statistical analyses All experiments were performed at least in triplicate. Statistical comparisons were conducted using GraphPad PRISM 7.0 software. Horizontal lines represent standard error of the mean values in all graphs. For statistical comparisons we used Student's t-test, where statistically significant differences are reported as: ***, P < 0.001; **, P < 0.01; *, P < 0.05; and one-way analysis of variance (ANOVA) with Tukey's multiple comparisons test, where significant differences (a < 0.05) are denoted with different letters. Results Nt-Gln amidase and Cys oxidation branches of the Arg/ N-end rule pathway increase basal resistance against Pst DC3000 The role for the Arg/N-end rule pathway in the plant immune response was assessed using the model bacterial pathogen P. syringae pv tomato DC3000 and T-DNA insertion null mutants of the putative Gln-specific amino-terminal amidase NTAQ1 (AT2G41760) (Fig. S1a-d) and N-recognin E3 ligase PRT6 (AT5G02310) genes, and a premature termination allele of the putative Asn-specific amino-terminal amidase NTAN1 (AT2G44420) (Q202*) (Fig. 1a). The entire effect of NTAQ1, NTAN1 and Cys branches of the Arg/N-end rule pathway on response to pathogen challenge can be assessed by analysis of the prt6 mutant, as this removes E3 ligase activity, thus stabilizing all substrates of NTAQ1, NTAN1 and substrates with Nt-Cys (Fig. 1a). Bacterial growth in leaves of prt6 was significantly lower by 4 d post-infiltration with virulent (Pst DC3000) or avirulent (Pst DC3000 avrRmp1) strains, indicating that substrates destabilized by PRT6 action contribute to the immune response (Figs 1b, S2a). In comparison, ntaq1 alleles also showed significantly lower bacterial growth (comparable with that of prt6 ) compared with both the ntan1-1 mutant or the wild type (WT) Col-0 for plants grown from seed in soil under neutral days (12 h : 12 h, light : dark). These results are opposite to those obtained by de Marchi et al. (2016), who found enhanced sensitivity to Pst DC3000 of N-end rule mutants prt6 and ate1 ate2 (which removes ATE Nt-arginylation activity, Fig. 1a). To investigate this difference, we assayed bacterial growth under conditions used by de Marchi et al. for plant growth and infection. 
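The statistical procedure named in the Methods above (one-way ANOVA with Tukey's multiple comparisons, α < 0.05, significance groups denoted by letters) can be reproduced with standard tools. The sketch below runs it on hypothetical log₁₀ bacterial-load values for the four genotypes; the group means and replicate numbers are invented for illustration and do not reflect the study's data.

```python
# One-way ANOVA followed by Tukey's HSD on hypothetical per-genotype values.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(7)
means = {"Col-0": 6.8, "prt6-1": 6.1, "ntaq1-3": 6.0, "ntan1-1": 6.7}
values, labels = [], []
for name, mu in means.items():
    values.extend(rng.normal(mu, 0.2, 6))   # six replicates per genotype
    labels.extend([name] * 6)
values, labels = np.array(values), np.array(labels)

print(f_oneway(*[values[labels == g] for g in means]))
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
# Letters marking significance groups would then be assigned from the
# pairwise reject/accept pattern in the Tukey output.
```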
In their case, germination and initial 7 d growth of seedlings was carried out on agar containing MS medium and 0.5% sucrose before transfer to soil and, following transfer, plants were grown under short-day conditions (9 h : 15 h, light : dark). We grew Col-0, prt6-1 and ate1 ate2 under these conditions and assayed bacterial growth at 2 d and 4 d post-infiltration. For plants grown under neutral days, we found that by 4 d post-infection, bacterial growth was significantly lower in N-end rule mutants than in the WT (Fig. S2b). All subsequent reported experiments were carried out using plants grown from seed under neutral-day conditions. Tissue cellular leakage measured 4 d following infection was significantly lower in prt6 and ntaq1 mutants (Figs 1c, S1d). Expression in WT of NTAQ1 and PRT6 was not strongly affected by infection with either bacterial strain (Fig. S2c). Inoculation with the PTI inducer Pst DC3000 hrpA- (with a compromised type-three secretion system) resulted in reduced susceptibility of prt6 and ntaq1 mutants compared with WT or ntan1 (Fig. 1d). Ectopic expression of either Nt- or C-terminally tagged NTAQ1 removed the enhanced resistance of ntaq1-3 (Fig. 1e), and the double mutant prt6-1 ntaq1-3 did not show a significant difference compared with the single mutants prt6-1 or ntaq1-3 (Fig. 1f). It was previously suggested that formation of N-terminal pyroglutamate by glutaminyl cyclase (GC) might compete with NTAQ1 for Nt-Gln substrates (Wang et al., 2009), implying that a lack of GC activity could lead to enhanced susceptibility. We observed a similar response to Pst DC3000 of WT and a mutant of GLUTAMINYL CYCLASE1 (GC1) (Schilling et al., 2007) (Fig. S2d), indicating that competition for Nt-Gln substrates between NTAQ1 and GC1 is not relevant for the regulation of bacterial growth following infection. To define the biochemical action of NTAQ1, we analysed the Nt-deamidation capacity of recombinant Arabidopsis NTAQ1, which showed high specificity for Nt-Gln in comparison with Nt-Asn, -Gly and -Lys (Fig. 1g). Using mutants in which ERFVII activity was removed (Abbas et al., 2015) (rap2.12 rap2.2 rap2.3 hre1 hre2 pentuple mutant, hereafter erfVII, and the prt6 erfVII sextuple mutant), analysis of infections of Pst DC3000 following infiltration showed no significant influence of ERFVIIs in affecting apoplastic growth of either virulent or avirulent Pst strains (Figs 2a, S3a). Bacterial growth 4 d following foliar spray application of Pst DC3000 revealed greater resistance of both prt6-1 and ntaq1-3 mutants compared with WT or ntan1-1 (Figs 2b, S3b), which for both foliar spray and injection required SA, analysed in double mutant combinations of prt6-1 or ntaq1-3 with sid2-1. SID2 is an isochorismate synthase required for SA synthesis (Nawrath & Metraux, 1999) (Fig. S3c). Stomatal closure is a key component of the early defence response following pathogen attack (Arnaud & Hwang, 2015). We found that, in response to Pst, WT initially closed and then, induced by the pathogen, reopened its stomata, as did prt6-1 and ntaq1-3. The erfVII and prt6 erfVII mutants failed to close stomata at any point (Fig. 2c). ERFVIIs have previously been shown to regulate stomatal ABA sensitivity via the N-end rule pathway (Vicente et al., 2017), and we also found ntaq1-3 stomata were hypersensitive to ABA (Fig. S3d).
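As an aside for readers reproducing this kind of stomatal-aperture comparison, the sketch below shows one way a genotype comparison at a single time point could be run with the one-way ANOVA plus Tukey test named in the statistical analyses section above. The column names and aperture values are invented for illustration and do not reproduce the study's measurements.

```python
# Hypothetical illustration of the ANOVA + Tukey comparison described in the
# Methods, applied to stomatal aperture measurements at one time point.
# The data below are invented placeholders, not measurements from the study.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

data = pd.DataFrame({
    "genotype": ["Col-0"] * 5 + ["prt6-1"] * 5 + ["erfVII"] * 5,
    "aperture_um": [2.1, 1.9, 2.3, 2.0, 2.2,      # example apertures (µm)
                    2.0, 1.8, 2.1, 1.9, 2.2,
                    3.1, 3.3, 2.9, 3.2, 3.0],
})

# Overall test for any difference between genotypes
groups = [g["aperture_um"].values for _, g in data.groupby("genotype")]
f_stat, p_value = f_oneway(*groups)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

# Pairwise comparisons with Tukey's HSD (alpha = 0.05, as in the Methods)
tukey = pairwise_tukeyhsd(endog=data["aperture_um"], groups=data["genotype"], alpha=0.05)
print(tukey.summary())
```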
In response to Pst DC3000 infection following foliar spray application, resistance was significantly lower in the absence of ERFVII transcription factors (either erfVII or prt6 erfVII) compared with WT or prt6 (Fig. 2d), respectively. Response to the foliar spray application of Pst DC3000 was associated with a large decrease in activity and expression of NITRATE REDUCTASE (NR) (Fig. 2e,f). This reduction has been previously linked with increased basal resistance against Pst (Park et al., 2011), whereas expression of ADH1, a marker for hypoxia, was only increased immediately following pathogen challenge (Fig. S3e). Infection with Pst DC3000 was associated by 24 h with increased stabilization of an artificial Cys-Arg/N-end rule substrate derived from the construct 35S:MC-HA GUS, that following constitutive MetAP activity is expressed as C-HA GUS (Gibbs et al., 2014b;Vicente et al., 2017) (Fig. 2g). To clarify whether plant-derived factors were solely responsible for the control of the stability of C-HA GUS, we injected the PAMP peptide flg22, and showed that injection of flg22 was able to stabilize C-HA GUS (Fig. 2h). The Arg/N-end end rule pathway has a conserved function in the immune response To determine the conservation of Arg/N-end rule pathway role in the immune response, we tested responses to pathogens in barley, a monocot species distantly related to Arabidopsis, in which the expression of the PRT6 orthologue gene HvPRT6 was reduced by RNAi (Mendiondo et al., 2016). Following inoculation with a strain of P. syringae pv japonica with known pathogenicity to barley (Dey et al., 2014), significantly lower bacterial load was observed in HvPRT6 RNAi leaves compared with the WT (Fig. 3a). Similarly, HvPRT6 RNAi plants exhibited reduced development and severity of mildew caused by Bgh (Fig. 3b,c). By contrast, susceptibility of HvPRT6 RNAi to Fusarium graminearum or F. culmorum, tested on detached leaves was increased compared with the WT (Fig. 3d). To assess the response of prt6-1 in Arabidopsis to a necrotroph we inoculated the mutant and WT with the fungal pathogen B. cinerea but we failed to observe any significant differences in disease severity, measured as diameter of necrotic lesions (Fig. S3f). Infection of barley with Ps pv japonica or Bgh also resulted in accumulation of the artificial Nt-Cys substrate CGGAIL-GUS (from pUBI: MCGGAIL-GUS, containing the first highly conserved seven residues of ERFVIIs; Gibbs et al., 2014b;Mendiondo et al., 2016;Vicente et al., 2017), therefore Nt-Cys stabilization in response to infection is conserved in flowering plants (Fig. 3e). NTAQ1 regulates expression of the camalexin biosynthesis pathway A shotgun proteomic analysis of total proteins from untreated ntaq1-3 and WT adult leaves revealed 13 proteins which were significantly differentially regulated, 12 exhibited increased and one decreased abundance in ntaq1-3 (Table S2). The functions of most ntaq1 upregulated proteins are related to oxidative, biotic and abiotic stresses, including a 2-OXOGLUTARATE OXYGENASE (AT3G19010) potentially involved in quercetin biosynthesis and targeted by bacterial effectors (Truman et al., 2006) and DJ-1 protein homolog E (DJ1E) involved in response to PAMPs (Lehmeyer et al., 2016). Not all ntaq1 upregulated proteins were also upregulated at the level of RNA (Fig. S4). Several ntaq1 over-accumulated proteins are involved in the regulation of reactive oxygen species (ROS). 
However, analysis of gene expression of a ROS accumulation marker, the antioxidant enzyme CATALASE1 (CAT1), and histochemical analysis of the accumulation of the ROS hydrogen peroxide (H 2 O 2 ) during infections with Pst failed to reveal significant differences between the mutants ntaq1 and prt6 and WT (Fig. S5). Increased tolerance of the mutants which was associated with less cellular damage required SID2, an isochorismate synthase required for SA synthesis (Nawrath & Metraux, 1999), as double mutant combinations of prt6-1 or ntaq1-3 with sid2-1 showed susceptibility similar to the sid2 single mutant (Fig. S3c). Analysis of phytohormone levels indicated that there were no differences between ntaq1-3, prt6-1 or WT in untreated or infected leaves for SA, JA or IAA (Figs 4, S6). These results together suggest a functional redundancy of ntaq1 upregulated proteins with other antioxidant mechanisms, already documented in the case of the GLUTATHIONE S-TRANSFERASEs (GSTs) (Sappl et al., 2009), or alternative roles for ntaq1 upregulated proteins in plant defence. One of the identified proteins upregulated in ntaq1, the phi class GSTF6, functions in secondary metabolism related to the synthesis of the major Arabidopsis phytoalexin, camalexin (Su et al., 2011), as do the upregulated proteins PUTATIVE ANTHRANILATE PHOSPHORIBOSYLTRANSFERASE (involved in the synthesis of the camalexin precursor tryptophan; Zhao & Last, 1996) and IAA-AMINO ACID HYDROLASE (ILL4), that generates indole-3-acetic acid (IAA) from its conjugated form (Davies et al., 1999). Another upregulated protein, GSTF7 was hypothesized to play a role in camalexin synthesis based on its induction in the constitutively active MKK9 mutant (Su et al., 2011). Our analysis of previously published transcriptome data (de Marchi et al., 2016) comparing gene expression in ate1 ate2 with WT, and comparing gene expression during Pst infection in Col-0 and ate1 ate2 also showed increased expression of RNAs encoding camalexin synthesis genes (Tables S3, S4). Analysis of transcript expression indicated greater accumulation for most genes of camalexin synthesis in mature uninfected leaves of ntaq1 and prt6 compared to WT (Figs 4, S7), including PAD3 (CYP71B15), that catalyzes the final two steps of camalexin synthesis. Interestingly, during a time course following infiltration with Pst DC3000, levels of camalexin-associated transcripts, including GSTF6 and PAD3, as well as GSTF7 increased in WT but to a lesser extent in mutant leaves (Figs 4, S7). Whilst basal levels of camalexin in uninfected leaves were similar in mutants and WT they increased to a greater degree in mutants than WT in response to infection (Fig. 4). Mutant plants showed greater basal levels of indole-3-carboxylic acid (I3CA), a compound synthesized during the defence response and a potential precursor of camalexin through the action of GH3.5 (Forcat et al., 2010;Wang et al., 2012) that was also upregulated at the RNA level in untreated leaves of ntaq1-3 (Fig. 4). Camalexin synthesis is highly interconnected with other pathways of secondary metabolism, for example it has been reported that vte2 and cyp83a1, mutants of key steps of tocopherol and aliphatic glucosinolate synthesis pathways respectively, show increased levels of camalexin (Sattler et al., 2006;Liu et al., 2016). VTE2 and CYP83A1 showed decreased expression in ntaq1-3 and prt6-1 in both basal and infected conditions (Figs 4, S8). 
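For readers less familiar with how relative transcript abundance of the kind reported above is usually computed, the sketch below shows the standard 2^-ΔΔCt (Livak) calculation. Whether the cited RT-PCR protocols used exactly this normalization is an assumption; the gene names, Ct values and reference gene are illustrative only.

```python
# Minimal sketch of the 2^-ΔΔCt calculation often used to report relative
# transcript abundance from quantitative RT-PCR. The Ct values, reference gene
# and sample labels below are illustrative placeholders, not study data.

def relative_expression(ct_target: float, ct_reference: float,
                        ct_target_cal: float, ct_reference_cal: float) -> float:
    """Fold change of a target gene in a sample relative to a calibrator (e.g. WT)."""
    delta_ct_sample = ct_target - ct_reference            # normalize to reference gene
    delta_ct_calibrator = ct_target_cal - ct_reference_cal
    delta_delta_ct = delta_ct_sample - delta_ct_calibrator
    return 2.0 ** (-delta_delta_ct)

# e.g. PAD3 in ntaq1-3 versus Col-0, normalized to a housekeeping gene (hypothetical values)
fold = relative_expression(ct_target=24.1, ct_reference=18.0,
                           ct_target_cal=26.5, ct_reference_cal=18.2)
print(f"PAD3 fold change (ntaq1-3 / Col-0): {fold:.2f}")
```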
Combination of a null pad3 allele with prt6-1 resulted in a loss of the prt6 enhanced resistance to injected Pst DC3000 (Fig. 5).

The Arg/N-end rule pathway regulates an age-dependent primed state in uninfected plants

Previous work showed that hypoxia-associated genes are ectopically upregulated in prt6 and ate1 ate2 mutant seedlings (Gibbs et al., 2011; Licausi, 2013). However, it was recently shown that this is age-dependent, and that in mature mutant plants these genes are no longer upregulated (Giuntoli et al., 2017). We also observe a large reduction in expression of hypoxia genes in older prt6 plants and saw a similar trend in WT for some genes (Fig. S9a). No age-related differences were found in NTAQ1 expression in either WT or prt6 backgrounds (Fig. S9b); however, GSTF6/7 and PAD3 showed increased expression with age in prt6-1 and ntaq1-3 plants compared with WT (Fig. 6a). In N-end rule mutants, compared to WT, we found age-related increases for the SA-responsive PATHOGENESIS RELATED (PR) protein genes PR1 and PR5, whilst the JA- and ET-responsive PR3 and PR4 showed no differences (Fig. 6b). In barley, a constitutive increase in expression of the SA-responsive genes HvPR1 and Hvβ1-3 glucanase (Horvath et al., 2003; Rostoks et al., 2003) was found in leaves of HvPRT6 RNAi plants, and infection with Bgh did not result in an increase in expression in HvPRT6 RNAi plants that was observed in WT plants (Fig. 6c).

Discussion

We show here that a role for Arg/N-end rule pathway-mediated immunity is conserved in flowering plants. In Arabidopsis we demonstrate physiological, biochemical and molecular roles for the N-end rule component NTAQ1 in influencing basal defence by enhancing expression of defence proteins and synthesis of camalexin, and a role for the known ERFVII substrates in influencing the stomatal response, against the hemibiotroph Pst. We show a role in barley of the Arg/N-end rule in response to the biotroph Bgh and the hemibiotroph Ps japonica. We suggest that benefits of increased immunity may not be realized against necrotrophic pathogens (as shown in the interaction between Fusarium spp. and barley). It has been documented that camalexin is part of the defence response against the necrotrophic fungus B. cinerea, inhibiting its growth in a dose-dependent manner (Ferrari et al., 2003). In our experiments, there were no differences in responses of WT and prt6 to B. cinerea, suggesting that, independently of other mechanisms activated, an increase in camalexin in prt6 may not reach a level necessary for reduction in fungal growth. A recent report showed N-end rule mutants, including alleles of prt6, ate1 ate2 and ntaq1, to be in general equally or more sensitive than WT Arabidopsis to a wide range of bacterial and fungal pathogens with diverse infection strategies and lifestyles (de Marchi et al., 2016). Our experiments, in which plants were grown under either neutral days or the short-day conditions used by de Marchi et al., showed the opposite result (of increased resistance). Our results provide a consistent pattern across different levels of expression (including enhanced defence gene transcripts and increased levels of camalexin synthesis proteins in untreated plants, and consistent phenotypes between Arabidopsis and barley) that indicates a role for NTAQ1 substrates and ERFVIIs as components of the immune response that enhance resistance. Therefore, differences in observed phenotypes of N-end rule mutants in response to infection between our studies remain to be resolved.
A specific effect for ERFVIIs was observed in the stomatal response to Pst. ABA is an important component of the stomatal response to pathogens (McLachlan et al., 2014) and stabilized ERFVIIs enhance ABA sensitivity of stomata (Vicente et al., 2017). We observed a large increase in stability of artificial Nt-Cys reporters in both Arabidopsis and barley. Stabilisation could be caused by shielding of the Nt, or a reduction of either NO or oxygen. We did not observe an increase in hypoxia-related gene expression (of ADH1) at the same time as GUS stabilization; however, we did observe a decline in NR activity. Seemingly contradictory to this assertion is the well-known burst of NO in response to Pst infection (Delledonne et al., 1998). However, this burst occurs early following infection, well before the reduction in NR activity and stabilization of artificial Nt-Cys reporters in both Arabidopsis and barley. It has previously been shown that in the NR null mutant nia1 nia2, which produces very low NO levels, the NO burst in response to infection is highly reduced (Modolo et al., 2006; Chen et al., 2014). Further experiments would be required to determine a causative role of reduced NR activity leading to enhanced stabilization. Regardless of the mechanism of stabilization, the observation of increased stability of Nt-Cys substrates following infection in both Arabidopsis and barley indicates a conserved role for modulation of the Cys-Arg/N-end rule pathway, and a function for Nt-Cys substrates, in response to pathogen infection that deserves further investigation. Enhanced ABA sensitivity and stomatal response to Pst of the ntaq1 mutant also suggests that Nt-Gln substrate(s) contribute to the stomatal ABA response to pathogens, and explains why erfVII is more sensitive to Pst than prt6 erfVII (where NTAQ1 substrates are still stabilized). An opposite effect of ERFVIIs was shown for interactions of Arabidopsis with the biotroph P. brassicae, as ERFVIIs enhanced infection indirectly by influencing fermentation (Gravot et al., 2016). These observations and others indicate an important role for ERFVIIs in the plant immune response. Analysis of the response to Pst DC3000 hrpA-, together with increased expression of SA-associated defence genes and increased camalexin synthesis, suggests a role for NTAQ1 in the onset of general and inducible PTI defence. An age-related increase in SA-related defence gene expression in N-end rule mutants was not matched by increased SA levels. This suggests a possible role for the immune-related MAPK cascade activating MPK3/6, which are sufficient for SA-independent induction of most SA-responsive genes, including PR1 (Asai et al., 2002). Concomitantly, it has been demonstrated that both MPK3 and MPK6 activation trigger GSTF6, 7 (and DJ1E) protein accumulation, which produces an increase in camalexin (Xu et al., 2008; Su et al., 2011). The observed increased accumulation of camalexin in ntaq1 and prt6 provides one explanation for the increased resistance of these mutants. Although expression of camalexin synthesis genes was ectopically upregulated in uninfected mature leaves of mutants, enhanced camalexin accumulation was only observed in response to infection. This may be the result of shunting of intermediate(s) to other secondary metabolism pathways. In line with this, unchallenged ntaq1 and prt6 plants show greater levels of I3CA.
The observation that mutation of pad3 reverts the enhanced resistance of prt6 highlights the role of N-end rule regulated camalexin synthesis in enhancing the immune response. How might NTAQ1 function during development and in response to pathogen attack? NTAQ1 and PRT6 expression do not change in response to pathogen attack. NTAQ1 function influences defence gene expression and camalexin synthesis. We demonstrate that downstream responses to NTAQ1, measured as responsive gene expression, are modified during development (although the expression of NTAQ1 (and PRT6 ) transcripts were not affected by ageing), suggesting that NTAQ1 substrate(s) may show an age-dependent increase in abundance. Following protease cleavage their activity would be revealed in the ntaq1 mutant, where they would remain ectopically stabilized. Following protease cleavage to reveal Nt-Gln, NTAQ1 substrates should be degraded in WT plants. In this case, in mature WT leaves down-regulation of NTAQ1-linked protease activity (or NTAQ1 activity) in response to pathogen attack could result in substrate stabilization. Stabilized NTAQ1 substrate(s) (or uncleaved protease targets that provide substrates) may then function to enhance gene expression associated with defence genes and camalexin synthesis, both resulting in an enhanced basal immune response. Our data support a conserved role of the Arg/N-end rule pathway in influencing plant immune responses. Barley contains one NTAQ1 gene (MLOC_70886) (Mayer et al., 2012). Manipulation of expression or activity of this gene will be required to understand whether NTAQ1 activity is also required for defence in barley. An important goal of future work will be to identify Nt-Gln substrates that influence the immune response. Although NTAQ1-related genes are present in all major groups of eukaryotes, only a single example exists of a biochemical role for this enzyme and its associated substrate (Usp1) (Piatkov et al., 2012). There is already evidence for Nt-Gln-bearing peptide fragments derived from proteins of diverse functions present in the plant METACASPASE-9 degradome (Tsiatsiani et al., 2013), suggesting that substrates for NTAQ1 exist. Our results establish new components of the plant immune response, and offer new targets to enhance resistance against plant pathogens. Supporting Information Additional Supporting Information may be found online in the Supporting Information section at the end of the article:
2018-08-18T21:15:57.275Z
2018-08-17T00:00:00.000
{ "year": 2019, "sha1": "28f6f32e7df32fcd00c8e80e353a04137e833066", "oa_license": "CCBY", "oa_url": "https://nph.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/nph.15387", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "1b5152e5b3234345fbafbfa00e2c09d2558f99f2", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
268059478
pes2o/s2orc
v3-fos-license
Selective laser trabeculoplasty: An updated narrative review Selective laser trabeculoplasty (SLT) has experienced a resurgence in interest, primarily driven by promising findings from the Laser in Glaucoma and Ocular Hypertension Trial. By offering SLT as an initial drug-free treatment option, we may be able to thwart issues such as adherence and persistence that plague our current medical management protocols. In this comprehensive narrative review, we delve into the current body of literature that explores the utility of SLT across a wide spectrum of scenarios and glaucoma subtypes. We present evidence that provides valuable insight into the efficacy and benefits of SLT, positioning it as a viable option in the management of glaucoma. Careful consideration of the associated risks and challenges is also necessary for successful adoption into clinical practice. Despite the ample evidence supporting SLT’s efficacy, some questions remain regarding its long-term effects and the potential need for retreatment. This review aims to shed light on these aspects to guide clinicians in making informed decisions and tailoring treatment plans to individual patient needs. This review also provides the readers with a bird’s eye view of the potential impact of SLT and adds clarity to the various therapeutic protocols that one can follow to ensure optimal clinical outcomes for our patients. Glaucoma management underwent a seismic shift when prostaglandin analogs (PGAs) were approved two decades ago.[3] Though superior in comparison to alternative therapeutic options, persistence and adherence rates for PGAs hover around 25%-45%, which are insufficient to reduce disease morbidity. [4,5]2] The abovementioned reasons led to the re-evaluation of selective laser trabeculoplasty (SLT), which always had the potential but was not widely adopted as a therapeutic choice in glaucoma management due to the lack of robust evidence. [13,14]This approach has changed dramatically after the landmark Laser in Glaucoma and Ocular Hypertension Trial (LIGHT). [15]The LIGHT trial filled in this lacuna and provided 6-year follow-up data, allowing for a prominent shift in glaucoma practice patterns. [16]The UKNICE guidelines now recommend that patients with ocular hypertension (OHT) and primary open-angle glaucoma (POAG) should be initially offered SLT as first-line treatment. [17]This evidence potentiates a scenario wherein we are likely to see a wider acceptance of SLT as a first-line therapeutic choice for POAG, OHT, and some specific secondary glaucomas.With this situation unraveling, an updated review of SLT would be immensely useful for ophthalmologists embarking on shifting glaucoma practice patterns.This review aims to encompass and highlight existing literature that impact intraocular pressure (IOP) outcomes post SLT and provide the readers with a bird's eye view to help make the appropriate therapeutic choice for their patients. SLT and primary open-angle glaucoma The role of SLT in the management of POAG has been extensively studied.Multiple peer-reviewed reports along with seven meta-analyses vouch for the efficacy of SLT for IOP control in patients with POAG [18][19][20][21][22][23][24] [Table 1].SLT results in an average 6.9%-35.9%IOP reduction in patients with OAG. [21]hen compared with argon laser trabeculoplasty, SLT has better efficacy with fewer medications required for IOP control at 12 months. [24]The adverse events and side effects are rare and comparable across different types of laser trabeculoplasties. 
[21,24] A recent Cochrane review has also recommended the use of SLT for controlling IOP at a lower cost than conventional medications. [14] Long-term outcomes of primary SLT in treatment-naive early POAG indicate a sustained control of IOP over time. The reported treatment success rate (IOP reduction by 20% from baseline and IOP <19 mmHg) was 98.0% at year 1, 89.0% at year 5, and 72.0% at year 10. However, it is imperative to note that 60% of patients required re-treatments. [25] Moreover, initial and repeat SLTs have been reported to lead to comparable long-term IOP reduction. [28] Khawaja et al., [29] using EMR records of 831 SLT-treated eyes, reported the probability of success as 70% at 6 months, 45% at year 1, and only 27% at year 2. The variation in results may be due to different definitions of success and failure adopted by different studies.

SLT has been recommended as primary treatment for POAG and OHT, primarily due to the encouraging results from the LIGHT trial. [15,16] This was a prospective randomized controlled trial (RCT) designed to compare health-related quality of life (HRQoL) in newly diagnosed, treatment-naive patients with POAG or OHT, treated with either topical IOP-lowering medication from the outset (Medicine 1st) or primary SLT followed by topical medications as required (Laser 1st). Secondary outcomes were cost-effectiveness, disease-specific HRQoL, clinical effectiveness, and safety. The trial recruited 718 subjects (356 Laser and 362 Medication arm), and at 3 years, SLT provided stable, drop-free IOP control to 69.0% of POAG patients, with a reduced need for surgery, lower cost, and comparable HRQoL. At all time points, drop-free disease control was achieved in a higher percentage of OHT and mild POAG eyes compared with moderate and severe POAG eyes. This was further substantiated by the authors publishing the 6-year outcomes recently. [16] They noted that, in eyes that received SLT, 69.8% remained at or below the target IOP without the need for medical or surgical treatment. Eyes with POAG that had received SLT showed a lower rate of progression compared to eyes in the medication arm (92 vs. 125; P < 0.01). They also reported a good safety outcome post SLT, with no serious laser-related adverse events over 6 years.

SLT as a first-line therapeutic option for POAG has gained traction in several healthcare systems. The logistics and costs may not work for every healthcare system, but a gradual change appears inevitable. [30] Long-term progression of glaucoma often requires additional interventions post SLT, and this has been well documented. [28,29] SLT is not a magic bullet, and practitioners have to educate patients and the community about the need for continued monitoring of glaucoma status with periodic follow-ups.

(Table 1 fragment: Wang et al. [18], 6 studies — SLT similar to ALT for reduction in the number of medications, success rate, adverse events, or side effects.)
SLT and ocular hypertension

The Ocular Hypertension Treatment Study (OHTS) has demonstrated that early medical treatment of OHT patients reduces the 5-year incidence of POAG by 60%. [31] The LIGHT trial showed that IOP reduction with SLT was similar in OHT and POAG eyes. They reported a mean initial IOP (mmHg) lowering at two months of 8 ± 4.0 in OHT eyes and 6.5 ± 4.3 in POAG eyes. The mean percentage IOP reduction was 29.7 ± 13.1% in OHT eyes and 26.1 ± 14.7% in POAG eyes. The authors noted a clear trend toward increasing absolute IOP reduction with higher baseline IOP in both OHT and POAG eyes. Drop-free disease control was achieved in 88.6% (140/195 eyes) of OHT eyes after one or two SLT procedures at 3 years, which was higher than in eyes with mild (76.6%), moderate (56.1%), or severe glaucoma (42.3%). [32] These results indicate that SLT can be considered an effective and safe first-line treatment for OHT patients at risk of progression.

SLT and normal tension glaucoma

IOP reduction remains the cornerstone of managing normal tension glaucoma (NTG). SLT has been evaluated as a therapeutic option in patients with NTG. [1,33,34] Lee et al. [34] demonstrated that a single session of 360° SLT in NTG patients can produce an additional 15.0% IOP reduction while using 27.0% less medication at 1 year. Over the course of 2 years following SLT, significant reductions in IOP and medication use were observed, with 11.0% of the eyes no longer requiring medication. [35] There was, however, a gradual reduction in the absolute success rate, from 61.0% at 6 months to 22.0% at 12 months and 11.0% at 24 months. They also demonstrated that a higher IOP before SLT (coefficient = 1.1, OR = 3.1, P = 0.05) and a lower IOP at 1 week post SLT (coefficient = −0.8, OR = 0.5, P = 0.04) were associated with treatment success. [36] Although the impact of SLT on NTG might not be as remarkable as in individuals with POAG, it remains a valuable tool, particularly in cases of poor adherence to glaucoma medications or in patients intolerant to medications. However, SLT primarily addresses the pressure-related aspect of the disease and does not directly target the hypoperfusion associated with NTG, which can play a crucial role in disease progression. Thus, with appropriate case selection, we may be able to pass on the therapeutic benefits of SLT to NTG patients.

Impact of demographics on outcomes

Conventional patient factors such as age, gender, TM pigmentation, lens status, and central corneal thickness have been investigated and not found to be predictive of SLT success. [37,38] Moreover, the safety and efficacy of SLT have been demonstrated across different ethnic groups. [15,24] This includes Caucasian, [39] Hispanic, [40] Brazilian, [28] Asian, [41] Indian, [42] and African [43][44][45] patients. In studies with mixed ethnicity, Africans have been shown to have a sustained IOP reduction (20.0%) in 90.0% of eyes as compared to 50.0% of Indian eyes at 12 months of follow-up. [46] Similarly, the mean decrease in medications after secondary SLT for Blacks and Whites has been reported to be significantly different, with Blacks having a larger mean decrease in medications than Whites (P < 0.01). [47] Peripheral anterior synechiae (PAS) development post SLT appears to be more common in Chinese patients, with a reported 4-year incidence of 13.6%.
[48]Thus, there are subtle differences between different ethnicities and SLT outcomes, which may be driven by factors such as angle configuration, angle pigmentation, and differences in baseline IOP. SLT and angle closure disease Management of angle closure disease (ACD) requires a laser peripheral iridotomy (LPI) followed by IOP control. [49]Typical practice protocols would include a drug regimen similar to the management of POAG leading to drug-induced side effects and poor adherence. [1]2][53] One of the earliest studies done by Ho et al. [50] reported an IOP reduction of 4 mmHg in 72% of eyes at 6 months without a change in the number of medications at the end of the study.In another prospective RCT that compared the efficacy of SLT (49 eyes) against PGA (47 eyes) in post-iridotomized treatment-naive eyes with primary angle closure (PAC) and PACG at 6 months, the authors reported an IOP reduction of 4.0 mmHg (95% CI = 3.2-4.8) in the SLT group (P < 0.001) and by 4.2 mmHg (95%CI = 3.5-4.9) in the PGA group (P < 0.001). [51]They reported no differences between the SLT and PGA groups in the absolute mean reduction of IOP (4.0 vs. 4.2 mmHg, respectively; P = 0.78).The complete success rates (IOP ≤21 mmHg without medications) were significantly higher in the PGA group (P = 0.008).The mean endothelial cell count showed a significant decrease from baseline in the SLT arm (4.8% decrease; P = 0.001).No other events such as persistent uveitis or increase in PAS were noted in eyes that underwent SLT.The authors concluded that though SLT has a hypotensive effect in eyes with ACD, the therapeutic effectiveness of PGA is superior.In a case-control study involving 59 eyes diagnosed with ACD, subjects were treated with SLT after an iridotomy and compared with POAG eyes.The authors reported an IOP reduction of 20% or more from baseline, or discontinuation of one or more glaucoma medications to be 84.7% in the PAC/PACG group and 79.6% in the POAG group (P = 0.47). [52]Overall, the efficacy of SLT was equivocal in both PACD and OAG.Further data from a long-term observational study spanning over 6 years reflected similar findings of efficacy in both diagnostic subgroups, with an efficacy of 84.0% at 1 year and dropping to 6.0% at 6 years. [53]hus, available evidence indicates that SLT can be a reasonable alternative to medical therapy in PAC and PACG with the potential to harness its effects with appropriate case selection and systematic follow-up. SLT in secondary glaucoma Secondary glaucomas represent a wide spectrum of causes that can elevate IOP.Typically, the primary outflow obstruction is at the TM, and SLT could be an acceptable therapeutic modality except in the presence of active trabeculitis.Shazly et al. compared the efficacy of SLT in a small group of eyes with PXFG and POAG. [54]At 30 months of follow-up and after 180° of SLT therapy, the POAG group showed a mean IOP of 17.6 ± 2.8 mmHg and a mean IOP reduction of 5.7 ± 2.1 mmHg, and the PXFG group showed a mean IOP of 18.3 ± 4.7 and a mean IOP reduction of 5.3 ± 3.0 mmHg.They reported the cumulative probability of success for patients with either POAG or PXFG to remain off medications for 2.5 years following SLT to be approximately 75.0%.In another prospective study, Ayala et al. evaluated IOP reduction and inflammation following SLT in a group of 60 patients diagnosed with POAG/PXFG and reported similar IOP reduction (6.0 mmHg in both groups; P = 0.27) and the inflammation parameters between the subgroups at various time points. 
[55] Subjects with pigmentary glaucoma (PG) have been reported to respond favorably to SLT over the short term. [56] There are concerns about complications such as IOP spikes due to the excessive absorption of laser energy by the pigmented TM, and the guidelines recommend titrating laser energy and the area of treatment appropriately in this subset of patients. [57] Koucheki and Hashemi, [58] in a prospective, nonrandomized, interventional study of patients with OAG unresponsive to maximum tolerable antiglaucoma medication, assessed the efficacy of 360° SLT in a cohort of patients with PG, POAG, and PXFG. They reported a mean IOP reduction of 14.5% in the PG group, with a similar profile of IOP reduction in the POAG (16.7%) and PXFG (16.6%) subgroups. Complications such as IOP spikes and inflammation were recorded to be more common in the PG group, and as discussed earlier, this could be explained by the excessive laser energy absorbed by the pigmented TM. Long-term outcomes have been reported by Ayala et al. from their retrospective analysis. [56] They reported a cumulative drop in success rates from 85.0% (first year) to 14.0% over four years post 180° SLT, with an average time to failure documented as 27.4 months post SLT.

Patients with uveitis may have raised IOP due to the inflammatory process as well as to steroid-induced mechanisms. Overall, the disease pathology is complex, and the inflammatory component could lead to worsened outcomes with SLT. Zhou et al. [59] reported similar IOP reduction profiles post SLT in their uveitic and POAG patients at 18 months. Though the mean number of medications increased from 2.7 at baseline to 3.5 at 18 months, they did not note any difference in terms of complication rates in uveitic eyes compared to POAG or PXFG eyes.

Steroid-induced glaucoma is typically encountered during the course of management of uveitic eyes and following intravitreal steroid injections, wherein steroid withdrawal is tricky or not possible. Rubin et al. reported the efficacy of SLT from a retrospective review of eyes with steroid-induced glaucoma following intravitreal injections. [60] In their series, the IOP decreased to 23.9 ± 10.6 at 3 months (P < 0.006) and 15.7 ± 2.2 at 6 months (P < 0.001) from a baseline of 38.4 ± 7.3 mmHg. Maleki et al. [61] also reported the efficacy of SLT in uveitic eyes that had received intravitreal steroids. Mean IOP in their series lowered to 13.42 mmHg (55.7% reduction) at 6 months and 15.14 mmHg (50.4% reduction) at 12 months from a baseline of 30.57 mmHg. To summarize, secondary glaucoma is often amenable to SLT, and with appropriate case selection, invasive surgery can be avoided in a significant proportion of cases.
41.0%).Recently, compelling evidence for the efficacy of primary SLT in POAG and OHT was provided by the LIGHT trial, wherein 75% of participants in the SLT arm maintained drop-free disease control at 3 years. [15]t was further supported by subsequent 6-year follow-up data, which reported medication-free status and reduced need for incisional glaucoma and cataract surgery in 70% of participants from the SLT arm. [16]sari et al. [33] in their retrospective study involving 54 subjects (108 treatment-naïve eyes) with POAG evaluated the long-term efficacy of SLT.They defined success of treatment as achieving at least a 20% reduction in IOP and IOP <19 mmHg, and with that cutoff reported a success rate of 98.0% at year 1, 89.0% at year 5, and 72.0% at year 10.Failure is most common after the third year.The median time to re-treatment was 81 months (CI: 60-100 months), with 60% needing re-treatment by 10 years.Higher baseline IOP was associated with an increased risk of re-treatment in their study.The correlation between any previous antiglaucoma medication and SLT outcome has been evaluated in some studies.While some suggest reduced efficacy, [63,64] others reported no correlation between them. [29,65]The varying duration of antiglaucoma medication and the inadequate washout duration might explain the different outcomes from these studies.In particular, topical therapy with PGA has been associated with decreased IOP response due to a similar mechanism of action, [66] and carbonic anhydrase inhibitors are associated with better response. [67]tent of treatment and energy SLT can be administered in 1-4 quadrants of the eye (90°, 180°, 270°, and 360°), and multiple studies have compared the variability in IOP outcomes based on the extent of treatment.360° of SLT has been associated with better efficacy in comparison to 180° at 6 months, [68] 12 months, [69] 18 months, [70] and 24 months [71] of follow-up.Prasad et al. [72] also demonstrated a lower range of IOP fluctuations with 360° SLT in comparison to 180° at 2 years of follow-up.Some reports contradict the above and have reported a lack of statistically significant benefit of 360° over 90° at 2 years [73] or 180° at 6 months. [74]ula et al. [75] suggested that 180° SLT performed initially in the nasal sector was associated with better outcomes in comparison to temporal at 6 months.This finding could be attributed to the distribution and density of collector channels. [76]hkonen and Välimäki [77] showed that the average IOP reduction with 270° SLT at 6 months was similar to that of 180°.Wong et al. [78] demonstrated that increasing the application of spots from 120 to 160 over 360° resulted in better IOP outcomes at 1 year with no increase in IOP spikes.Chen et al. [79] demonstrated that 90° SLT with 25 spots had similar outcomes in comparison to 180° with 50 spots. The optimal energy dose response for SLT has not been established. [80]Overlapping of SLT spots in the same location in repeat applications appears to decrease IOP response. [81]This could be due to a ceiling effect of TM modification achieved by repeat procedures beyond which damage from laser and disease progression prevent further therapeutic response.Tang et al. 
[82] demonstrated that low-energy (1/2 of conventional energy) 360° SLT reduced IOP with lesser complications in comparison to standard energy protocol.Danielson et al., [83] reported that fixed high-energy (1.2 mJ) has similar results to standard titrated energy (starting at 0.8 mJ and titrating using champagne bubble effect) at 36 months post 360° SLT.The Clarify the Optimal Application of SLT (COAST) trials are underway to further evaluate the role of low-energy SLT, repeated annually, in reducing/delaying the need for ocular hypotensive agents, surgical intervention, and improving quality of life. [84]e LIGHT trial presented robust IOP control data with minimal IOP spikes over 6 years. [16]They used a 360° SLT with 100 non-overlapping shots (25 per quadrant, 3-ns duration, 400-μm size).The energy delivered varied from 0.3 to 1.4 mJ.The desired endpoint was the production of a few fine champagne bubbles.Reaction to laser with large gas bubbles and TM blanching were not acceptable, and if noted, the power was titrated in steps of 0.1 mJ.Pigmented TM was expected to require lower energy (0.3-1.2 mJ) than non-pigmented, and it was advisable to start treatment at 0.4 mJ. [30]st-SLT steroid vs. NSAIDs Conventionally, topical anti-inflammatory medications were given post SLT to reduce inflammation in the immediate post-SLT period.However, this inflammation is usually transient and self-resolving in nature.In addition, it is hypothesized that such topical therapy may interfere with the immune-mediated mechanism of IOP reduction post SLT. [85]ealini et al. [43] reported good IOP outcomes in Afro-Caribbean POAG participants without the administration of topical anti-inflammatories, and the potential concern of enhanced inflammation in these eyes with dark irides and excessively pigmented TM was reported to be low.Gracner suggested that short-term use of 0.1% dexamethasone had no influence on post-SLT outcomes in eyes with POAG at 24 months. [86]ebenitsch et al. [87] reported that loteprednol did not appear to have an effect on IOP outcomes at 1 year post SLT.The SALT trial, published in 2019, compared the role of postoperative NSAIDs (ketorolac 0.5%), steroids (prednisolone 1%), and saline in IOP outcomes at 12 weeks post SLT and concluded that short-term steroids or NSAIDs may improve post-SLT IOP outcomes. [85]During the LIGHT trial, SLT treatment did not include routine use of anti-inflammatory drops post laser, but participants were provided NSAIDs for topical use in case of significant discomfort, despite also providing routine oral analgesics (e.g., paracetamol).This has been reported to be a common practice in most centers worldwide. [30]T and outflow facility Different types of imaging techniques have been used to evaluate the impact of SLT on the outflow facility.[88][89][90][91] These methods include horizontal enhanced depth imaging (EDI) optical coherence tomography (OCT) B-scans, pneumatonography, fluorophotometry, and electronic Schiøtz tonography. Post-SLT expansion of the Schlemm canal has been demonstrated using horizontal enhanced depth imaging (EDI) optical coherence tomography (OCT) B-scans. [89]Gulati et al. used pneumatonography and fluorophotometry to report that baseline higher aqueous flow and lower outflow facility may be predictive of better response to SLT. [90] Goyal et al. evaluated the effect of 180° versus 360° primary SLT on the outflow facility by using electronic Schiøtz tonography. 
[88] They reported that while SLT significantly increased the outflow facility in both the 180° group (37.5%) and the 360° group (41%), the difference between the two was not statistically significant (P = 0.23). Recent advances such as hemoglobin video imaging (HVI) offer the noninvasive potential to quantify human aqueous outflow in real time. [91] Khatib TZ et al. were able to demonstrate a significant increase in the aqueous column after the administration of SLT and found that it correlated with the degree of IOP reduction. Emerging techniques such as HVI hold promise for enhancing our understanding of aqueous outflow dynamics in glaucoma management and may allow us to screen and treat the patients who may benefit the most.

Direct SLT

Direct SLT (DSLT) is a new method of laser trabeculoplasty in which laser energy is delivered directly to the TM through the perilimbal ocular area, thereby eliminating the need for gonioscopic laser delivery. [92] It uses the same Q-switched, frequency-doubled Nd:YAG laser but aims to address three major limitations of SLT: the need for surface contact with a gonioscope, the skilled expertise required in gonioscopy, and the long duration of the procedure. The average duration of the DSLT procedure is 2-3 s, which is significantly shorter than SLT. The first prospective clinical trial involving DSLT was conducted in 15 eyes: ten with POAG, four with OHT, and one with PXFG. The mean baseline IOP (mmHg) in all eyes was 26.7 ± 2.3. At 1, 3, and 6 months, this value significantly reduced to 21.7 ± 4.2 (by 18.1%), 20.8 ± 2.5 (by 21.4%), and 21.5 ± 4.1 (by 18.8%), respectively, thereby indicating the efficacy of the procedure. [93] The technique offers immense advantages that could allow wider use of the technology, with potential administration by general ophthalmologists and advanced therapeutic practitioners. This could then prove to be a promising tool to manage the immense burden of glaucoma, which is estimated to hit 111.8 million by 2040. [94] Currently, the GLAUrious study, a large-scale multicenter RCT, is underway to study the safety and efficacy of DSLT in reducing IOP in OAG patients. [92]

SLT and long-term safety

SLT is a minimally invasive intervention and typically does not cause significant inflammation. [24] Most evidence for this is provided by studies conducted in eyes with open angles. [15,16] However, it has been observed that SLT can stimulate the release of prostaglandins, cytokines, and free oxygen radicals in the anterior segment. [95,96] These inflammatory agents can result in increased permeability of the corneal endothelium. [97] In addition, the formation of excessive and large bubbles during the procedure may lead to endothelial cell damage. [97,98] However, long-term studies such as the LIGHT trial have reported no sight-threatening complications of SLT as well as no clinically identifiable corneal changes at the end of a 6-year follow-up period. [16] There is some evidence to show that eyes with POAG may behave differently with SLT application compared to eyes with PACG. [51,98] Kurysheva et al. reported that lower baseline endothelial cell count and older age correlated with increased corneal endothelial damage. The endothelium, however, recovered in the POAG patients at 1 month, whereas in PACG patients, the endothelium showed a sustained loss with no recovery noted at 6 months. [98] The biological plausibility of endothelial cell loss being progressive over the years after SLT is minuscule; however, one should exercise caution in eyes with shallow ACD that would need repeat therapy.
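To make the success definitions cited throughout this review concrete, the sketch below classifies a follow-up IOP against the commonly used criterion of at least a 20% reduction from baseline plus an absolute ceiling. The exact ceiling (e.g. <19 or ≤21 mmHg) and the medication rule differ between the studies reviewed, so both are exposed as parameters, and the example numbers are illustrative rather than taken from any trial dataset.

```python
# Hypothetical helper for applying an SLT "treatment success" definition of the
# kind cited in this review (>=20% IOP reduction from baseline plus an absolute
# IOP ceiling). Thresholds vary between studies, so they are parameters here.

def slt_success(baseline_iop: float, followup_iop: float,
                min_reduction: float = 0.20, iop_ceiling: float = 19.0,
                on_medication: bool = False) -> bool:
    """Return True if the follow-up IOP meets the chosen success criterion."""
    if on_medication:
        return False  # many definitions require drop-free control
    reduction = (baseline_iop - followup_iop) / baseline_iop
    return reduction >= min_reduction and followup_iop < iop_ceiling

# Example: baseline 25 mmHg, follow-up 18 mmHg, no drops
print(slt_success(25.0, 18.0))   # True: 28% reduction and below the ceiling
print(slt_success(25.0, 21.0))   # False: only a 16% reduction
```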
Summary SLT has demonstrated immense potential to keep a patient drug-free.The 6-year data from the LIGHT trial reported that 70.0% of the subjects with POAG and OHT remained drop-free.The laser arm subjects also demonstrated a lower rate of disease progression and a reduced need for cataract and glaucoma procedures.The safety profile has been reported to be robust, and it is now recommended as the first-line treatment for OAG and OHT by the UKNICE. [17]The ease of effective laser delivery through the DSLT has opened up an exciting era for glaucoma management, and there exists a potential of blunting the disease morbidity significantly without the dependence on daily dosed drugs.The costs and logistics of including SLT as a therapeutic option may not be feasible for every healthcare system, but this is likely to change, and acceptance on a wider scale should soon be a reality with lowered equipment costs and appropriate training strategies. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
2024-03-01T06:18:25.666Z
2024-02-28T00:00:00.000
{ "year": 2024, "sha1": "66caa4e57cda0983b1432ae53acc096b08afa31f", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/ijo.ijo_2104_23", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "545c39659c8f5cbd8fe57a38aa73c2663e4cc877", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
239045776
pes2o/s2orc
v3-fos-license
Emergence and Characterization of a Ceftriaxone-Resistant Neisseria gonorrhoeae FC428 Clone Evolving Moderate-Level Resistance to Azithromycin in Shenzhen, China Abstract We here described a ceftriaxone-resistant Neisseria gonorrhoeae FC428 clone (YL201) with moderate-level resistance to azithromycin in Shenzhen, South China in 2020. The NG-STAR type of YL201 is ST2238, containing a mosaic penA-60.001 allele, which is a typical characteristic of FC428 clone. YL201 harbours four copies of the 23S rRNA C2611T mutation, conferring moderate-level resistance to azithromycin. The MLST type is ST1600, identical with two N. gonorrhoeae FC428 clones identified in Hangzhou. Genome-wide phylogeny analysis demonstrates that YL201 is clustered with other FC428 clones from Hangzhou (South-east China) and Chengdu (South-west China). Isolates within this cluster have relatively higher MIC for ceftriaxone and display closely related MLST STs (ST1600 and ST7363) but are different from the ST of typical FC428 clone (ST1903). As ST1600 and ST7363 are common STs in Shenzhen, the further spread of FC428 clones may increase the severity of gonococcal resistance. In summary, identifying a multidrug-resistant (MDR) N. gonorrhoeae isolate in Shenzhen showed FC428 clones have undergone further transmission in China and presented more extensive and concerning antimicrobial resistance (AMR) characteristics during the spread. Introduction With the emerging resistance of N. gonorrhoeae to nearly all antibiotics, effective antimicrobials for gonorrhoea have become increasingly scarce, including first-line dual therapy with ceftriaxone (CRO) and azithromycin (AZM) recommended by WHO. 1 To date, the MDR N. gonorrhoeae isolates have been reported in Ireland, 2 Denmark, 3 UK 4 and Australia. 5 In China, N. gonorrhoeae isolates with decreased susceptibility or resistance to both CRO and AZM have been reported, [6][7][8] and here in Guangdong Province (South China), we describe a ceftriaxone-resistant N. gonorrhoeae FC428 clone with a higher level of macrolide resistance than previously reported. The patient was a heterosexual male in his late twenties. He visited the sexually transmitted diseases clinic in Shenzhen Center for Chronic Disease Control in August, 2020 with urethritis symptoms. He reported this was his third infection, and all infections were due to sexual intercourse with commercial sex workers. N. gonorrhoeae (isolate YL201) was cultured from urethral secretions. The minimal inhibitory concentrations (MICs) of the isolate were determined using E-TEST method, and the results were interpreted in accordance with the European Committee on Antimicrobial Susceptibility Testing (EUCAST) (www.eucast.org) interpretative criteria. YL201 showed resistance to CRO (MIC: 0.75 mg/L) and AZM (MIC: 12 mg/L), but was susceptible to spectinomycin (MIC: 12 mg/L) ( Table 1). Whole genome sequencing of YL201 was performed using Illumina HiSeq X Ten and Oxford Nanopore MinION sequencer. N. gonorrhoeae multiantigen sequence typing (NG-MAST), multilocus sequence typing (MLST) and N. gonorrhoeae Sequence Typing for Antimicrobial Resistance (NG-STAR) were confirmed using Sanger sequencing. The NG-MAST type was novel with porB-3101 and tbpB-752. The MLST type was ST1600, identical with SRRSH214 and SRRSH229 identified in Hangzhou. 7 Results of antimicrobial susceptibility testing showed that the three isolates with MLST ST1600 have higher MIC for ceftriaxone than most strains with MLST ST1903 (Table 1). 
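As a purely illustrative aside, the snippet below shows how E-test MIC values such as those reported above can be categorized once the relevant breakpoints are known. The breakpoint numbers passed in the example are placeholders, not authoritative values; the current EUCAST tables should be consulted for the actual interpretive criteria.

```python
# Hypothetical sketch of categorizing E-test MIC values against breakpoints.
# The breakpoint values below are illustrative placeholders only; the
# authoritative numbers must be taken from the current EUCAST tables.

def categorize(mic_mg_per_l: float, susceptible_max: float, resistant_min: float) -> str:
    """Classify an MIC as S, I or R against the supplied illustrative breakpoints."""
    if mic_mg_per_l <= susceptible_max:
        return "S"
    if mic_mg_per_l >= resistant_min:
        return "R"
    return "I"

yl201_mics = {"ceftriaxone": 0.75, "azithromycin": 12.0, "spectinomycin": 12.0}

# Placeholder breakpoints for demonstration: (susceptible_max, resistant_min)
demo_breakpoints = {
    "ceftriaxone": (0.125, 0.25),
    "azithromycin": (1.0, 2.0),
    "spectinomycin": (64.0, 128.0),
}

for drug, mic in yl201_mics.items():
    s_max, r_min = demo_breakpoints[drug]
    print(f"{drug}: MIC {mic} mg/L -> {categorize(mic, s_max, r_min)}")
```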
This comparison indicates that although these isolates harbour an identical penA mosaic allele, their MIC values may differ. Such variation can be explained by the penA-60.001 allele having recombined into isolates with certain MLST types associated with decreased CRO susceptibility; in this case, recombination events occurring in MLST ST1600 isolates may contribute to a higher MIC value. According to our previous study, 8 MLST ST7363 is associated with decreased ceftriaxone susceptibility. Moreover, phylogenetic analysis showed that MLST ST7363 isolates (SC18-68) were clustered with MLST ST1600 isolates, and that they share 6 identical loci with each other. Therefore, considering the genomic similarity between isolates with the two MLST STs, and the fact that MLST ST7363 is a common ST in Shenzhen, the expansion of the penA-60.001 allele to MLST ST7363 isolates may have already happened and resulted in elevated MIC values. YL201 had the NG-STAR type ST2238, containing a mosaic penA-60.001 allele with the key resistance-mediating amino acid substitutions A311V and T483S, as well as G545S, I312M and V316T, which is a typical characteristic of the FC428 clone. YL201 has a different NG-STAR type from SRRSH214 and SRRSH229 (ST2238 versus ST2208). The reason for this difference is that YL201 harbours four copies of the 23S rRNA C2611T mutation, whereas SRRSH214 and SRRSH229 carry the wild-type 23S rRNA allele. Compared with the wild type, four copies of the 23S rRNA C2611T mutation increase MICs 40-120-fold, 7 conferring moderate-level resistance to azithromycin. Raw short reads or draft genome assemblies of worldwide FC428-related strains were analysed to infer the phylogeny of YL201. A concatenated superset of SNPs relative to NCCP11945 was generated as previously described. 9 Based on the genome-wide SNP sites, a maximum likelihood tree was built using PhyML 3.0 10 and the substitution model was automatically selected using SMS (http://www.atgcmontpellier.fr/phyml/). 11 According to the phylogeny, YL201, SC18-68, SRRSH214 and SRRSH229 formed a clade (Figure 1), indicating that FC428 clones originating from distinct regions have undergone further transmission in China. To date, all isolates within this clade have MLST STs different from ST1903, which may confer a higher MIC for ceftriaxone. In the future, newly identified isolates belonging to this clade may present similar features. Additionally, including YL201, the genomes of FC428-related strains were compared using BLAST Ring Image Generator (BRIG) and showed high similarities in genome structure, without large insertions or deletions (Figure 2). Illumina and Nanopore sequencing data of YL201 have been stored in the NCBI short read archive under BioProject PRJNA560592. In conclusion, we have identified an MDR N. gonorrhoeae isolate in Shenzhen, China with resistance to CRO and moderate-level resistance to AZM. The findings demonstrate that FC428 clones have undergone further transmission in China and, during their spread, have presented more extensive and concerning AMR characteristics. More importantly, given that Shenzhen is a major port city with a large floating population, and in light of our previous baseline data, we consider that Shenzhen possesses the conditions for further transmission of FC428 clones, which would increase the severity of gonococcal resistance. Therefore, regional surveillance should be emphasized to understand the transmission of emerging gonococcal drug-resistant clones.
Ethics Approval and Consent to Participate This study was conducted in accordance with the Declaration of Helsinki and obtained approval from Medical Ethics Committee at the Shenzhen Center for Chronic Disease Control (approval number SZCCC-2021-008-01-PJ). Written informed consent was provided by the patient to allow the case details to be published. Funding This study was supported by CAMS Innovation Disclosure The authors report no conflicts of interest in this work.
"Clickase" Single-Chain Nanoparticles: Effect of Intra-Chain Distribution of Catalytic Sites on Catalytic Activity

"Clickase" single-chain nanoparticles (Ck-SCNPs) are folded, enzyme-mimetic unimolecular polymeric nano-objects containing copper (Cu) ions that are able to catalyze the azide-alkyne Huisgen cycloaddition reaction in water and/or selected organic solvents, often in the presence of a reductant. Herein, we investigate the effect of morphology on the catalytic activity of Ck-SCNPs synthesized by means of two different routes. An amphiphilic random copolymer composed of oligo(ethylene glycol) methyl ether methacrylate (OEGMA) and 2-acetoacetoxy ethyl methacrylate (AEMA) units was used as the precursor of these Ck-SCNPs. Folding was promoted through metal complexation between Cu(II) ions and beta-ketoester-containing AEMA moieties. The first route resulted in Ck-SCNPs1 containing Cu ions homogeneously distributed within each nanoparticle, whereas the second one promoted intra-chain clustering of Cu ions inside Ck-SCNPs2. A model fluorogenic "click" reaction between 9-(azidomethyl)anthracene and phenylacetylene, catalyzed either by Ck-SCNPs1 or Ck-SCNPs2, was used to unravel the effect of morphology on catalytic activity. This work paves the way to improving the catalytic activity of metallo-folded SCNPs through control of the intra-chain distribution of catalytic sites.

In spite of these advances, investigations of how the intra-chain distribution of catalytic sites affects the catalytic activity of SCNPs are very scarce. We envisioned that controlling the spatial distribution of catalytic sites in metallo-folded SCNPs should be critical for the rational design of improved catalytic soft nano-objects. Based on the use of amphiphilic random copolymers and two different SCNP synthesis procedures involving selective or non-selective solvents, we previously reported a pathway for tuning the internal structure of metallo-folded SCNPs [64]. The first SCNP synthesis procedure involved conventional synthesis in a good solvent (method 1). The second one was based on transfer, after SCNP formation, from selective to good solvent conditions (method 2). By combining size exclusion chromatography (SEC) with triple detection, small-angle X-ray scattering (SAXS) and molecular dynamics (MD) computer simulations, we unraveled the SCNP size, the sparse morphology in good solvent, and the spatial distribution of catalytic sites for Cu-containing SCNPs ("clickase" SCNPs, Ck-SCNPs) synthesized by method 1 (Ck-SCNPs1) and method 2 (Ck-SCNPs2). Interestingly, we observed a homogeneous distribution of catalytic sites in the case of Ck-SCNPs1 but the presence of clusters of catalytic sites in the case of Ck-SCNPs2 in good solvent [64]. However, at that time we did not investigate the effect of these very different distributions of active sites on the catalytic activity of Ck-SCNPs1 and Ck-SCNPs2. Herein we report the results obtained from a model fluorogenic "click" reaction between 9-(azidomethyl)anthracene and phenylacetylene, catalyzed either by Ck-SCNPs1 or Ck-SCNPs2. Additionally, we investigate the effect of the nature of the solvent in which this model "click" reaction is carried out. The results obtained are of great interest for further advancing the field of enzyme-mimetic SCNPs.

1H Nuclear Magnetic Resonance (1H NMR)

1H NMR spectra were recorded at room temperature on a Bruker spectrometer operating at 400 MHz using CDCl3 as solvent.
Dynamic Light Scattering (DLS)

DLS measurements were carried out at room temperature on a Malvern Zetasizer Nano ZS apparatus.

Differential Scanning Calorimetry (DSC)

DSC measurements were carried out on 5-10 mg of sample using a Q2000 TA Instruments apparatus. A liquid nitrogen cooling system (LNCS) was used with a 25 mL/min helium flow rate. Measurements were performed using hermetic aluminum pans from -150 °C to 100 °C, at a scanning rate of 10 °C/min.

Thermal Gravimetric Analysis (TGA)

TGA measurements were performed on a Q500 TA Instruments apparatus at a heating rate of 10 °C/min under a nitrogen atmosphere, from room temperature to 800 °C.

Fluorescence Spectroscopy (FS)

Photoluminescence spectra were recorded at room temperature on an Agilent Cary Eclipse spectrometer at an excitation wavelength of 370 nm.

Synthesis of "Clickase" SCNPs by Method 1 (Ck-SCNPs1)

In a typical reaction, P1 (100 mg, 0.06 mmol) was dissolved in THF (90 mL) at room temperature. Then, a solution of Cu(OAc)2 (6 mg, 0.03 mmol Cu) in 10 mL of THF was added, and the mixture was kept under stirring for 24 h. After completion of the reaction to give Ck-SCNPs1, the system was concentrated and precipitated in hexane (twice). Finally, Ck-SCNPs1 were dried in a vacuum oven at room temperature under dynamic vacuum [64].

Results and Discussion

The aim of this work is to determine how the intra-chain distribution of catalytic sites affects the catalytic activity of metallo-folded SCNPs. As precursor of the SCNPs, we synthesized an amphiphilic random copolymer, denoted P1, decorated with β-ketoester units via RAFT copolymerization of OEGMA and AEMA, following a procedure previously optimized by our group [64]. SEC measurements with triple detection revealed for P1 a weight-average molecular weight (Mw) of 72.1 kDa and a very narrow dispersity of Ð = 1.02. P1 contained 17 mol% of β-ketoester moieties, as determined by 1H NMR spectroscopy. It is well known that β-ketoester groups are efficient ligands for Cu(II) ions, giving Cu(β-ketoester)2 complexes. When complexation takes place within an individual polymer chain decorated with β-ketoester units at high dilution in a good solvent, metallo-folded SCNPs are obtained (see Figure 1, method 1) [31]. As reported by Zimmerman and coworkers [54], Cu(II)-containing SCNPs under reducing conditions can function as highly efficient catalysts of the azide-alkyne Huisgen cycloaddition reaction (i.e., "clickase" SCNPs, Ck-SCNPs). Herein we denote as Ck-SCNPs1 the Cu-containing SCNPs obtained from P1 via method 1 (see Figure 1 and Section 2.3). From our previous study combining SEC, SAXS and MD simulations [64], a uniform distribution of Cu catalytic sites is expected for metallo-folded SCNPs obtained using method 1. Conversely, the presence of clusters of Cu catalytic sites is expected in metallo-folded SCNPs synthesized via method 2 when dissolved in a non-selective, good solvent (e.g., THF) (see Figure 1, method 2). We denote the Cu-containing SCNPs obtained from P1 via method 2 as Ck-SCNPs2 (see Figure 1 and Section 2.3). Characterization of Ck-SCNPs1 and Ck-SCNPs2 by means of SEC measurements in THF revealed an increase in retention time (i.e., a reduction of hydrodynamic size) when compared to precursor P1, as illustrated in Figure 2A. The amount of Cu incorporated into Ck-SCNPs1 and Ck-SCNPs2 was very similar, although, as stated previously, the distribution of catalytic sites is expected to be rather different in Ck-SCNPs2 when compared to Ck-SCNPs1, due to the presence of clusters in the former.
TGA results obtained following the method reported in ref. [31] showed almost complete formation of the theoretical amount of Cu(β-ketoester)2 complexes in both Ck-SCNPs1 (>99%) and Ck-SCNPs2 (>99%). DSC measurements (Figure 3C,D) revealed a similar increase in glass transition temperature (Tg) for Ck-SCNPs1 (Tg = -41.3 °C) and Ck-SCNPs2 (Tg = -41.5 °C) when compared to that of P1 (Tg = -47.6 °C). The increase in Tg is also a signature of the efficient formation of intra-chain Cu(β-ketoester)2 complexes, which restrict the mobility of the AEMA chain segments involved and, probably, also that of near-neighbor segments. Consequently, the hydrodynamic size, content of Cu(β-ketoester)2 complexes and thermal behavior of Ck-SCNPs1 and Ck-SCNPs2 were very similar, even though the two have different internal distributions of Cu catalytic sites.

Subsequently, we investigated the catalytic activity of Ck-SCNPs1 and Ck-SCNPs2 using a model fluorogenic "click" reaction between non-fluorescent phenylacetylene (1) and 9-(azidomethyl)anthracene (3) to give the fluorescent compound 3-(anthracen-9-ylmethyl)-5-phenyltriazole (4) (see Scheme 2). Two different reaction media were selected to guarantee the solubility of reagents 1 and 3 as well as of the product 4: aqueous THF (a mixture of THF and H2O at a volume ratio of 3:1) and neat DMSO. It is worth mentioning that Ck-SCNPs1 and Ck-SCNPs2 are completely soluble in both aqueous THF and DMSO, without the presence of aggregates, as determined by DLS measurements (see Figure 4). Control reactions for comparison were performed by replacing Ck-SCNPs1 and Ck-SCNPs2 with CuSO4 as catalyst.

Figure 5A illustrates the fluorescence observed from the reaction product 4 after 1 h of reaction time using Ck-SCNPs1 (blue trace), Ck-SCNPs2 (green trace) or CuSO4 (red trace) as catalyst of the fluorogenic "click" reaction between 1 and 3 in aqueous THF. Figure 5B illustrates the results obtained in DMSO at longer reaction times. The fluorogenic "click" reaction was found to be faster in aqueous THF than in DMSO, with Ck-SCNPs2 being more efficient than Ck-SCNPs1 and CuSO4 in both solvents. The high reproducibility of the results and the high sensitivity of fluorescence spectroscopy allowed us to perform a reliable comparison between the different catalytic systems. Remarkably, the difference in catalytic activity (Ck-SCNPs2 > Ck-SCNPs1 > CuSO4) was more pronounced in DMSO than in aqueous THF. The slower reaction rate in DMSO can be attributed, to a large extent, to the oxidizing power of this solvent [67], which oxidizes part of the Cu(I) ions generated by NaAsc back to Cu(II) ions, the latter being inactive for the "click" reaction. Interestingly, the presence of clusters of Cu ions in the case of Ck-SCNPs2 was found to be beneficial for improving the stability and effectiveness of the "clickase" SCNPs under the deactivating, oxidative conditions imposed by DMSO. The above experiments with a model fluorogenic "click" reaction and different reaction media provide solid evidence that the intra-chain distribution of catalytic sites has an important effect on the catalytic activity of "clickase" SCNPs, especially in solvents with significant oxidative character such as DMSO [67].
Conclusions

We have investigated the effect of the intra-chain distribution of catalytic sites on the catalytic activity of Ck-SCNPs, folded, enzyme-mimetic unimolecular polymeric nano-objects containing Cu ions that are able to catalyze the azide-alkyne Huisgen cycloaddition reaction under appropriate reaction conditions. Using an amphiphilic random copolymer composed of 83 mol% OEGMA units and 17 mol% β-ketoester-containing AEMA units, we synthesized Ck-SCNPs by two different methods. The first one results in Ck-SCNPs1 containing Cu ions homogeneously distributed within each nanoparticle in the form of individual Cu(β-ketoester)2 complexes, whereas the second method promotes intra-chain clustering of Cu(β-ketoester)2 complexes inside Ck-SCNPs2. To unravel the effect of morphology on catalytic activity, we evaluated the efficiency of Ck-SCNPs1 and Ck-SCNPs2 in a model fluorogenic "click" reaction between non-fluorescent 9-(azidomethyl)anthracene and phenylacetylene to give the fluorescent compound 3-(anthracen-9-ylmethyl)-5-phenyltriazole. Fluorescence spectroscopy experiments allowed us to determine the catalytic efficiency of Ck-SCNPs1 and Ck-SCNPs2 when compared to a classical catalyst such as CuSO4. Using aqueous THF or neat DMSO in the presence of NaAsc as reducing agent, the catalytic activity was found to follow the order Ck-SCNPs2 > Ck-SCNPs1 > CuSO4. The fluorogenic "click" reaction was faster in aqueous THF than in DMSO, a solvent that promotes the oxidation of Cu(I) to Cu(II), with Ck-SCNPs2 being more efficient than Ck-SCNPs1 (or CuSO4) in both solvents. The presence of clusters of Cu ions in the case of Ck-SCNPs2 was beneficial for improving the stability and effectiveness of the "clickase" SCNPs in DMSO. In this sense, control of the intra-chain distribution of catalytic sites could also be a useful strategy to improve the catalytic activity of other metallo-folded SCNPs containing metal ions other than Cu.
The 1m3 Semidigital Hadronic Prototype

A high-granularity hadronic 1 m3 calorimeter prototype with semi-digital readout has been designed and built. This calorimeter uses stainless steel as absorber and Glass Resistive Plate Chambers (GRPC) as active medium, read out through 1x1 cm2 pads. The prototype aims to demonstrate that this technology fulfills the physics requirements for future linear collider experiments, and also to test the feasibility of building a realistic detector, taking into account design aspects such as fully embedded front-end electronics based on a power-pulsing system, a compact and self-supporting mechanical structure, and one-side services.

Introduction

Many interesting physics processes at a future lepton collider will involve multi-jet final states, often accompanied by charged leptons or missing energy. In order to exploit the rich physics potential of such a collider, the jet resolution must improve by roughly a factor of two compared with what has been achieved in previous experiments. To achieve this, one should consider new approaches beyond traditional calorimetry, and one of the most promising techniques is based on Particle Flow Algorithms (PFA) [1]. This concept, in contrast with a purely calorimetric measurement, requires the reconstruction of all the particles in the event, both charged and neutral. Particles are tracked in different sub-detectors and their energy is estimated from the most precise measurement available (charged particles are measured by the tracker, photons by the electromagnetic calorimeter, and neutral hadrons by combining the information of the hadronic and electromagnetic calorimeters). The jet resolution obtained is a combination of the detector information and the reconstruction software. This represents a new approach to calorimetry: calorimeters must not only measure the particle energy but should also have a strong tracking capability, in order to separate the contributions of the individual particles belonging to a jet. High granularity, both in the longitudinal and transverse directions, becomes even more important than the energy resolution for a correct assignment of calorimeter hits to charged particles and for an efficient discrimination of nearby showers. To improve the tracking, these calorimeters should be placed within a magnetic field, which adds constraints on materials and on the available space. Due to the huge number of electronics channels, the readout electronics must be embedded in the detector and, to reduce the power consumption, it should operate in power-pulsing mode matching the LC beam time cycles.

Figure 1: Cross section of a GRPC and its electronics readout.

The Semi-Digital Hadronic Calorimeter concept

The CALICE collaboration [2] is developing highly segmented electromagnetic and hadronic calorimeters using different technologies. One of them is a semi-digital hadron calorimeter (SDHCAL) using stainless steel as absorber and a gas detector, read out by pads of 1x1 cm2, as active medium. Two options are considered: Glass Resistive Plate Chambers (GRPCs) and MICROMEGAS. Currently the GRPC is the baseline. The SDHCAL incorporates a new concept: instead of recording the energy deposited in the calorimeter, it registers in how many pads, and in which ones, an energy larger than a certain threshold has been deposited.
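To make the semi-digital idea concrete, the following sketch implements toy hit counting at three thresholds (the front-end chip described below provides three adjustable thresholds) together with a weighted energy estimator of the kind commonly used for semi-digital calorimeters. The thresholds, the weights alpha/beta/gamma, and the toy charge map are hypothetical placeholders, not the calibrated values of this prototype.

```python
# Toy semi-digital hit counting for one calorimeter layer.
# Thresholds and weights are hypothetical, for illustration only.
import numpy as np

def semidigital_hits(pad_charges, thresholds=(0.1, 1.0, 5.0)):
    """Count pads whose induced charge falls in each threshold band.

    pad_charges: 2D array of per-pad charges (arbitrary units).
    Returns (n1, n2, n3): exclusive hit counts per threshold band.
    """
    q = np.asarray(pad_charges)
    t1, t2, t3 = thresholds
    n1 = int(np.sum((q >= t1) & (q < t2)))
    n2 = int(np.sum((q >= t2) & (q < t3)))
    n3 = int(np.sum(q >= t3))
    return n1, n2, n3

def energy_estimate(n1, n2, n3, alpha=0.03, beta=0.1, gamma=0.4):
    # Weighted sum of hit counts: the extra thresholds mitigate the
    # saturation that affects a purely digital (single-count) readout
    # in dense shower cores.
    return alpha * n1 + beta * n2 + gamma * n3

rng = np.random.default_rng(7)
pads = rng.exponential(0.4, size=(96, 96))  # toy charge map of one layer
print(energy_estimate(*semidigital_hits(pads)))
```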
A semi-digital readout, with two different thresholds, is used to improve the linearity and resolution at high energies with respect to a purely digital option, thanks to a mitigation of the saturation effects. The semi-digital approach also reduces the complexity and cost of the electronics. To validate the SDHCAL concept, a 1 m3 prototype has been built. The prototype is intended to come as close as possible to a hadron calorimeter of the future ILC experiments. It aims to demonstrate not only that this technology fulfills the physics requirements, but also the feasibility of building a realistic detector, taking into account electronics and mechanical design aspects such as fully embedded front-end electronics based on a power-pulsing system, a compact and robust self-supporting mechanical structure, and one-side services. The construction of this prototype allows one to gain experience with the procedures and possible problems in view of developing the final design. The next sections describe the design and construction of this prototype and show the first events collected during the start-up of commissioning with beam particles at CERN.

The Glass Resistive Plate Chambers

The gas detectors used in this prototype are Glass Resistive Plate Chambers (GRPC). The GRPC is a simple and robust gaseous detector consisting of two highly resistive glass plates separated by a few millimeters with thin spacers and filled with gas (a mixture of TFE / i-C4H10 (or CO2) / SF6 in the proportions (93-94.5) / 5 / (2-0.5)). Figure 1 shows the cross section of the GRPCs used in this prototype and the associated electronics. The thinner glass (0.7 mm) is used to build the anode while the thicker one (1.1 mm) forms the cathode. The anode thickness has been reduced with the purpose of reducing the multiplicity of the signal, i.e., the number of readout pads fired by a particle. The plates are kept 1.2 mm apart by ceramic balls (diameter 1.2 mm) and cylindrical buttons (diameter 4 mm). The advantage of these spacers with respect to using fishing lines is the reduction of the dead zones (0.1% versus a few percent) and of the noise. The gas volume is closed by a glass-fiber frame, 1.2 mm thick and 3 mm wide, glued on both glass plates. The outer sides of the glass plates are covered by a resistive coating which is used to apply the high voltage. A special effort has been made to evaluate the best coating for the electrodes, taking into account the pad multiplicity, the homogeneity, and the painting procedure. It was found that using the silk-screen method it is possible to obtain a very uniform surface quality with stable resistivity in the range 0.5-2 MΩ/square. A 50 micron thick Mylar layer separates the anode from the 1x1 cm2 copper pads of the electronics board that will be described in the next section. The pads pick up the signal induced by the charge of the avalanche electrons caused by the ionization in the gas.

The readout electronics

A chip called HARDROC (HAdronic Rpc Detector ReadOut Chip) [3] is used for the signal readout. Each chip contains 64 channels, has three adjustable thresholds, and includes a digital memory to store up to 127 events. The gain of each channel can be adjusted separately by a factor between 0 and 4 with 6-bit precision. Each channel of the ASIC has a test capacitor of 2 ± 0.02 pF which can be used to calibrate its response. This is very useful for obtaining a uniform response across all channels.
The HARDROC is also equipped with a power-pulsing system to reduce the power dissipation by a factor 100 for the proposed ILC duty cycle (the consumption is lower than 10 µW/channel). Due to the high granularity of the SDHCAL, a real calorimeter made with this technique will have more than 50 million readout channels. A single detector plate of 1 m2 needs about ten thousand channels. The very front-end part of the readout electronics is integrated in the detector itself. A printed circuit board hosts the 1x1 cm2 copper pads for the GRPC readout on one face and the HARDROC chips on the opposite face. The pads are connected to the HARDROC channels through the board structure. This board also provides the connections between adjacent chips and links the first chip of the chain to the readout system. Due to difficulties in producing and handling a 1 m2 board, it was decided to use 6 smaller boards. Each of these boards (called ASU, for Active Sensor Unit) hosts 24 readout ASIC chips. The HARDROC chips are controlled by dedicated Detector InterFace (DIF) cards. Every two PCBs are connected to each other and to one DIF. The DIF powers the ASUs, distributes the DAQ commands to the chips and transmits the collected data. Figure 2 shows a picture of the electronics used to read out a square-meter GRPC, made of 6 ASUs hosting the 6x24 HARDROCs connected to the 3 DIFs.

During the assembly of the SDHCAL prototype all the different electronic components have been verified before and after being assembled. About ten thousand HARDROC chips have been tested on a dedicated test bench using a LabVIEW-based application. The system was automated using a robot arm to pick up each chip and place it in the test area. Different tests were performed to check the DC levels and power consumption, the slow-control loading, linearity, trigger efficiency and memory.

The Design

Figure 3 (left) shows the design layout for the barrel part of the HCAL proposed in the framework of the International Large Detector (ILD) [4]. The design has been optimized to reduce cracks. The barrel consists of 5 wheels, each made of 8 identical modules. Each module is made of 48 stainless steel absorber plates interleaved with detector cassettes of different sizes, as shown in Figure 3 (right). Since the geometry of these modules is not appropriate for studying the performance of this calorimeter in a test beam, a simpler geometry has been adopted for the prototype: a cubic design with all plates and detectors having the same dimensions of ~1x1 m2. This allows a much better coverage of the hadronic shower profile. Figure 4 shows the prototype design. It consists of a mechanical structure, made of the absorber plates, that hosts the detector cassettes. The design allows easy insertion and later extraction of the cassettes. The mechanical structure is made of 51 stainless steel plates assembled together using lateral spacers fixed to the absorbers through staggered bolts, as can be seen in Figure 5. The dead spaces have been minimized as much as possible, taking into account the mechanical tolerances (lateral dimensions and planarity) of absorbers and cassettes, to ensure a safe insertion/extraction of the cassettes. The plate dimensions are 1011x1054x15 mm3. The thickness tolerance is 0.05 mm and a surface planarity below ~500 microns was required. The spacers are 13 mm thick with 0.05 mm accuracy. The excellent accuracy of the plate planarity and spacer thickness allowed reducing the tolerances needed for the safe insertion of the detectors.
This is important to minimize the dead spaces and to reduce the longitudinal size, in view of a future real detector. The thickness and flatness of the plates used for the prototype have been verified using a laser interferometer system in order to certify that they were within tolerances. Figure 6 shows, as an example, the planarity distribution for one face of one of the plates. For this particular plate the maximum deviation from planarity is lower than 150 microns. For most plates the maximum does not exceed the required 500 microns. Figure 7 shows the maximum deviations from planarity for the first 44 measured plates.

Assembly of the mechanical structure

For the assembly of the mechanical structure a special table has been designed and built at CIEMAT (see Figure 8). This table must support a weight of about 6 tons. The table has vertical guides attached to it and horizontal guide lines machined in 6 supports for the positioning of the first spacers. Plates and spacers are piled up and screwed together. Figure 8 shows a detail of one corner of the assembly of the first plates. Figure 10 shows a picture of the mechanical structure almost finished. Once the structure was completed, it was placed (Figure 11) in a specially designed rotation tool and rotated (Figure 12) to the vertical position. This rotation tool serves not only to rotate the mechanical structure but also the full prototype once it is equipped with the detectors, electronics and cables. This is useful for changing the orientation between beam tests (vertical) and cosmic-ray tests (horizontal). The structure deformation has been checked during the assembly and after rotation using a laser interferometer and a 3D articulated arm.

The Detector mechanical structure: The cassette

The GRPC detector together with its associated electronics is hosted in a special cassette which protects the chamber, ensures a good contact of the readout board with the anode and simplifies the handling of the detector. The cassette (see Figure 13) is a box made of 2 stainless steel plates, 2.5 mm thick, closed by 6 x 6 mm2 stainless steel spacers machined with high precision. One of the two plates is 20 cm larger than the other; this allows fixing the three DIFs and the detector cables and connectors (HV, LV, signal cables). A polycarbonate spacer, cut with a water jet, is used as support for the electronics; it fills the gaps between the HARDROC chips, improving the rigidity of the detector. A Mylar foil (175 µm thick) isolates the detector from the box. The total width is 11 mm, 6 mm of which correspond to the GRPC and electronics, the rest being absorber.

Integration of the GRPC in the structure

A total of 48 GRPC cassette detectors have been built and inserted in the mechanical structure. The insertion has been made from the top with the help of a small crane, as illustrated in Figure 14. Vertical insertion minimizes the deformation of the cassettes, making the procedure easier. Each cassette is connected to 5 cables: three of them correspond to the HDMI readout connections, and the others carry the high and low voltage. The gas is distributed individually to each GRPC. Figure 15 shows a detail of the external cabling distribution on the cassettes. After the final assembly several cassettes have been extracted and replaced by others; the operation went smoothly, without problems.

Figure 15: Detail of the cassette cabling. The third DIF is not seen in this picture.
Prototype Commissioning

The prototype has been exposed to muon and pion beams at CERN. Figure 16 shows a picture of the final prototype at the SPS. This was the first attempt to show that both detectors and electronics perform well, but the new CALICE DAQ generation needed to operate this prototype was not yet completely ready to work with such a large number of channels. Most of the data-taking period was invested in debugging and improving the DAQ system, and more work is ongoing now with cosmic rays. Nevertheless, the first results look very promising. Figure 17 shows a typical event display of a muon crossing the SDHCAL prototype. Figure 18 shows the shower development in the prototype for a single-pion event (left) and for two pions crossing the calorimeter at the same time (right). Each color corresponds to a different threshold, and the displays correspond to raw data. The results show the details of the shower with a low noise level.

Summary

A self-supporting 1 m3 SDHCAL prototype using GRPCs as active medium has been designed and built. It is made of a mechanical structure containing 51 stainless steel absorber plates and, at present, it is instrumented with 47 detectors. These detectors are equipped with readout electronics embedded in the detectors, including a power-pulsing system. The assembly methods have been tested and no major problems have been found. The very preliminary results are rather promising. The prototype will be exposed to new test-beam campaigns during 2012 in order to study its performance in enough detail to conclude whether or not this may be considered a valid technology for the hadronic calorimeter of a future linear collider.
Calorie changes among food items sold in U.S. convenience stores and pizza restaurant chains from 2013 to 2017

The aim of this study was to describe trends in calories among food items sold in U.S. convenience stores and pizza restaurant chains from 2013 to 2017, a period leading up to the implementation of the federal menu labeling mandate. Using data from the MenuStat project, we conducted quantile regression analyses in 2018 to estimate the predicted median per-item calories among menu items available at convenience stores (n = 1522) and pizza restaurant chains (n = 2085), two retailers that have been openly resistant to implementing menu labeling, and assessed whether core food items were reformulated during the study period. We also compared calories in food items available for sale on convenience store and pizza restaurant menus to calories in items that were newly added or dropped. We found that leading up to the national menu labeling implementation date, convenience stores showed a significant decreasing trend in median calories of overall menu items (390 kcals in 2013 vs. 334 kcals in 2017, p-value for trend <0.01) and among appetizers and sides (367 kcals in 2013 vs. 137 kcals in 2017, p-value for trend = 0.02). Pizza restaurants introduced lower-calorie pizza options in 2017, but no other significant changes in calories were observed. Going forward, it will be important to track calorie changes in convenience stores and pizza restaurant chains, as both food establishments represent significant sources of calories for Americans.

Introduction

To address concerns that consumers lack nutrition information, the 2010 Affordable Care Act included a provision requiring chain restaurants and similar food establishments with twenty or more locations nationwide to post calorie information on menus and menu boards alongside price. This rule, which was delayed on several occasions, was implemented in May 2018 (Overview of FDA Labeling Requirements for Restaurants, Similar Retail Food Establishments and Vending Machines, 2017). Following the implementation date, the Food and Drug Administration announced it would work with affected establishments to comply with the menu labeling requirements (FDA, 2019). Pizza restaurants and convenience stores have been particularly resistant to the menu labeling rule, arguing that they should be excluded and citing unfair burden (Domino's CEO Pleads for Menu-Labeling Flexibility, 2011; Federal Menu Labeling Requirements Are Back on the Table, NACS, 2017). Further, the National Association of Convenience Stores (Lancaster, 2018) and the American Pizza Community (Black, 2017), industry trade groups representing convenience stores and pizza restaurants, respectively, have advocated for the Common Sense Nutrition Disclosure Act, which aims to weaken the federal menu labeling rule by limiting the authority of state and local governments to enforce the legislation. The act would allow restaurants to determine the amount of food in one serving (e.g., half of a hamburger) and exempt restaurants in which most orders are placed online (i.e., pizza restaurants) from posting calorie information in their stores (H.R. 772, 115th Congress: Common Sense Nutrition Disclosure Act of 2017, 2018). The Common Sense Nutrition Disclosure Act was introduced in Congress and passed by the U.S. House of Representatives; however, it was never voted on in the Senate (H.R. 772, Common Sense Nutrition Disclosure Act of 2017, 2018).
The effects of menu labeling on consumer purchases and restaurant sales have been mixed (Bleich et al., 2017a, 2017b; VanEpps et al., 2016). For example, a 2017 review of 53 studies found some evidence that menu labeling may lower calories purchased at certain types of restaurants and in cafeteria settings (Bleich et al., 2017a). Others have found that while menu labeling increased knowledge of nutrition information, it did not decrease the amount of calories purchased (Tandon et al., 2011). Results from a meta-analysis involving 6 controlled studies in restaurants did not find menu labeling to be associated with a significant reduction in calories ordered (Long et al., 2015). Further, there is evidence that menu labeling could influence population health by encouraging restaurants to reformulate their menu items to be lower in calories. Prior studies of large chain fast food, fast casual, and full service restaurants have shown that restaurants introduced lower calorie menu items and removed higher calorie items in the years leading up to the federal menu labeling mandate (Bleich et al., 2018; Bleich et al., 2015a; Bleich et al., 2015b). Additionally, chain restaurants implementing menu labeling voluntarily have lower calorie counts than restaurants without calorie labels (Bleich et al., 2015b).

Few prior studies have examined the potential impact of the federal menu labeling rule in pizza restaurants and convenience stores. These venues are important given that they make up a growing share of prepared food purchases and are an important source of calories for Americans. In 2016, convenience stores earned $233 billion in sales, with prepared food accounting for nearly 22% of total sales (Convenience Stores Hit Record In-Store Sales in 2016, 2017). Convenience stores are generally common in low-income neighborhoods (Hilmers et al., 2012), and high accessibility to convenience stores is associated with lower quality diets (Lind et al., 2016; Rummo et al., 2015); they have also been associated with obesity and chronic disease (Powell et al., 2007; Wang and Beydoun, 2007). Pizza restaurants have also experienced growth in sales. In 2017, Pizza Hut and Domino's Pizza experienced increases in gross sales compared to 2016; the two popular chains had gross sales of over 14 and 10.5 billion dollars, respectively (Top 100 Pizza Companies, 2017). Furthermore, according to national data, 13% of the United States (U.S.) population aged 2 years and older consumes pizza on any given day, with pizza accounting for 27% of caloric intake on days it is consumed (McGuire, 2014). Convenience stores and fast-food restaurants are key locations in which youth consume pizza and other foods high in saturated fats (Crepinsek et al., 2009; Poti et al., 2014).

The anticipatory response of pizza restaurants and convenience stores to menu labeling may differ from that of chain restaurants, which were subject to menu labeling under a patchwork of prior local laws and, thus, supported the federal legislation (Block, 2018). In particular, the persistent efforts of pizza restaurants and convenience stores to seek an exemption to the rule may have muted the menu reformulation that has been consistently observed in large chain restaurants since 2012. The purpose of this paper is to describe trends in calories among food items sold in U.S. convenience stores and pizza restaurant chains from 2013 to 2017, a period leading up to the highly anticipated implementation of the federal menu labeling mandate.
Sample

Our data come from the MenuStat project (2017), a publicly available database containing nutrient information for foods and beverages sold in the nation's 66 largest restaurant and convenience store chains ranked by annual sales (New York City Department of Health and Mental Hygiene, 2017). To populate the MenuStat database, item descriptions, nutrients and serving sizes are collected each January from all products appearing on restaurant websites; nutrients are entered per item. Only restaurants that post nutrition information online are available in MenuStat. From 2013 to 2017, websites were used as the primary source of product data, supplemented with data in other formats when available (e.g., PDFs of nutrition information). Items were assigned a unique identifier and matched over time using the item description. Additional methods of the MenuStat project, including its data collection procedures, are described elsewhere (New York City Department of Health and Mental Hygiene, 2017).

For the present study, we analyzed food items available for sale from four of the nation's largest convenience store chains (7-Eleven, Casey's General Stores, Wawa, Sheetz) and eight pizza restaurant chains (Domino's, Papa John's International, Little Caesars, Pizza Hut, Papa Murphy's International, CiCi's Pizza, California Pizza Kitchen, Chuck E. Cheese's). These specific convenience store and pizza restaurant chains were selected since they sell restaurant-type foods and have nutrition information available in MenuStat. MenuStat includes nutrient data available from 2013 to 2017 (2013 was the first year in which data were available for convenience stores). In 2017, the four convenience stores examined in this study ranked among the top 20 by number of stores nationwide; all eight pizza restaurant chains were ranked in the top 10 (see Supplemental Table 1). Analyses were limited to prepared food categories that would be affected by the menu labeling rule, which included appetizers and side dishes, main course items, and desserts, all of which are mutually exclusive categories assigned by MenuStat. We also examined sub-categories of main courses, assigned by the MenuStat team (e.g., pizza, sandwiches, salads, soups, entrees). We designated main courses as being central to the restaurant's business if the item category accounted for the majority of main course items on the menu (sandwiches for convenience stores and pizza for pizza chains). Because most beverages sold in these settings were packaged beverages with a Nutrition Facts Label, they were excluded. Data on portion sizes were incomplete in MenuStat and were not analyzed. Menu items missing calorie information in any year, most of which were from convenience stores, were excluded (n = 1101, 23%). Our final dataset included 3607 items available in convenience stores (n = 1522) and pizza restaurants (n = 2085).
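For concreteness, the item classification underlying the analyses below (core items, newly introduced items, and dropped items) could be constructed along the following lines. This is a minimal sketch under assumed column names; the file, item_id, and other columns are hypothetical placeholders, not MenuStat's actual schema.

```python
# Sketch of the menu-item classification; inputs are hypothetical.
import pandas as pd

df = pd.read_csv("menustat_items.csv")       # one row per item-year
df = df.dropna(subset=["calories"])          # drop items missing calories

all_years = set(range(2013, 2018))
item_years = df.groupby("item_id")["year"].agg(set)

# "Core" items: on the menu in every year from 2013 through 2017
core = item_years[item_years.apply(lambda y: y == all_years)].index

# Items on the menu only in 2013 vs. items newly introduced later
only_2013 = item_years[item_years.apply(lambda y: y == {2013})].index
newly_added = item_years[item_years.apply(lambda y: 2013 not in y)].index

# Items that stayed on the menu through 2017 vs. items dropped earlier
stayed = item_years[item_years.apply(lambda y: 2017 in y)].index
dropped = item_years[item_years.apply(lambda y: 2017 not in y)].index
```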
Statistical analysis

We used quantile regression models (Koenker and Hallock, 2001) to estimate the following: 1) the predicted median per-item calories among all menu items available at convenience stores and pizza restaurant chains in each year during the study period (2013, 2014, 2015, 2016, and 2017); 2) the predicted median per-item calorie changes from 2013 to 2017 among popular items available on menus of convenience stores and pizza restaurant chains across all years (reformulation of "core" menu items); 3) the predicted median per-item calories among items available only on the menu in 2013 compared to newly introduced items in 2014, 2015, 2016, and 2017; and 4) the predicted median per-item calories among items on the menu in 2013, 2014, 2015, or 2016 that stayed on the menu through 2017 compared to items dropped from the menu in any subsequent year. Quantile regressions were selected to estimate median values to reduce the influence of a small number of outliers (mainly large, shareable items). Our primary independent variables in each model were a year indicator, with 2013 as the reference group (models 1 and 2); an indicator for whether a menu item was on the menu only in 2013 (reference) or newly introduced in a subsequent year (model 3); and an indicator for whether the item was on the menu through 2017 (reference) or was dropped from the menu in any year prior to 2017 (model 4). To make inferences about whether certain characteristics of restaurants (e.g., regional vs. national chain status) are associated with changes in calories over time, we did not include restaurant chain as a covariate. In all models, covariates included restaurant type (indicators for whether a restaurant chain was fast food, fast casual, or full service), an indicator for whether the restaurant was national (sold in all nine U.S. census regions) or not, and an indicator for children's menu item status. Children's items were those with "kid," "child," or "children" appearing in the menu item or its description. We estimated cluster-robust standard errors to account for the similarity of menu items within restaurants. Statistical significance was established at p < 0.05. All analyses were conducted using Stata 13 (StataCorp, 2013).

Results

Of the 1522 food items on the menu in convenience stores, most were main courses (n = 1150, 76%), 16% (n = 245) were desserts, and 8% (n = 127) were appetizers and sides (Supplementary Table 2). At pizza restaurant chains, the majority of the 2085 items were main courses (n = 1754, 84%), followed by appetizers and sides (n = 211, 10%) and desserts (n = 120, 6%). Table 1 shows median calories among all items on the menu in convenience stores and pizza restaurant chains in each year between 2013 and 2017, overall and by menu item category. Between 2013 and 2017, convenience stores showed a significant decreasing trend in median calories of overall menu items (390 kcals in 2013 vs. 334 kcals in 2017, p-value for trend < 0.01) and among appetizers and sides (367 kcals in 2013 vs. 137 kcals in 2017, p-value for trend = 0.02). There were no changes in pizza restaurants during this period. Items on the menu in each year between 2013 and 2017 ("core" menu items) made up a smaller percentage of menu items in convenience stores (n = 112; 7%) compared to pizza restaurants (n = 417; 20%), and there were no changes in calories in these items over time in either restaurant type (Table 2).
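The models above were run in Stata 13; for illustration, a rough Python analogue of the median regression, with a cluster bootstrap standing in for clustered standard errors (statsmodels' QuantReg has no built-in cluster-robust variance), might look as follows. The data file and column names are hypothetical.

```python
# Rough Python analogue of the median (0.5 quantile) regressions;
# the data file and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("menustat_items.csv")
formula = "calories ~ C(year) + fast_food + fast_casual + national + kids_item"

fit = smf.quantreg(formula, df).fit(q=0.5)   # q=0.5 -> median regression
print(fit.params)

# Approximate cluster-robust inference via a bootstrap over restaurant
# chains, mimicking clustered standard errors for quantile regression.
rng = np.random.default_rng(0)
chains = df["chain"].unique()
draws = []
for _ in range(200):
    sample = pd.concat(
        [df[df["chain"] == c] for c in rng.choice(chains, size=len(chains))]
    )
    draws.append(smf.quantreg(formula, sample).fit(q=0.5).params)
print(pd.DataFrame(draws).std())             # bootstrap standard errors
```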
The low percentage of core items at convenience stores may be indicative of many items coming on and off the menu each year. Table 3 shows the predicted median calories for food items that appeared on the menu only in 2013 compared to those newly introduced in subsequent years (2014, 2015, 2016, or 2017). At convenience stores, appetizers and sides newly introduced in 2017 con-

Table 4 shows the median calories of food items that were consistently on the menu in 2013 or newly introduced and remained on the menu through 2017, compared to those that were on the menu in 2013 or newly introduced and removed from the menu in any subsequent year. In convenience stores, food items dropped from the menu were significantly higher in calories compared to those that remained. There were no significant differences between calories in items that stayed on the menu and items dropped from the menu in pizza restaurants, overall or for any menu item category.

Discussion

Our study findings suggest that in the period leading up to the national menu labeling implementation date, convenience stores and pizza restaurant chains reduced calories in menu items, but the magnitude of the reduction varied by menu item category and restaurant type. We saw more changes among convenience stores, which showed a trend towards reducing calories overall, primarily by introducing new, lower-calorie entrees and removing higher-calorie appetizers and sides, entrees, and desserts. We saw no changes in calories among core menu items (available in all years) in either restaurant type. These findings are consistent with prior literature among the nation's large chain restaurants, which have reduced calories in menu items in the years leading up to the anticipated implementation of the federal menu labeling mandate (Bleich et al., 2017b). In one study, which examined 19,391 items from 44 chain restaurants, calories per item declined from 2008 to 2015 (Bleich et al., 2017b). Another study, which also used MenuStat data to compare calories among items at national chain restaurants, found that restaurants that voluntarily implemented menu labeling offered lower calorie menu items than those that did not (Bleich et al., 2015b). These results suggest restaurants may be responding to the increased transparency by offering lower calorie items to consumers, although these changes may also be part of a larger secular trend that pre-dates the passage of the menu labeling rule in 2010.

In our study, only convenience stores demonstrated a slight decreasing trend in calories from 2013 through 2017. We did not observe changes in calories in pizza restaurant chains, possibly because they expected further delays of the federal rule, expected not to be included in the federal rule, or expected the federal rule not to be implemented at all (CSD Whitehead, 2018). With American households spending more on food away from home over the past three decades (Saksena et al., 2018), these findings reinforce the importance of menu labeling standards as one promising mechanism to encourage industry reformulation in both types of food establishments. A growing number of consumers are spending more of their food budget on food purchased at convenience stores and less on food from large restaurant chains (Maze, 2017). Efforts aimed at combating diet-related disease should consider the shifting role of convenience stores, which are not only a source of unhealthy snack foods and beverages, but offer an increasing number of prepared foods at low prices (Larson et al., 2009; Morland et al.,
2006; Rose et al., 2009). Furthermore, food items sold at convenience stores may be disproportionately consumed by populations at high risk of obesity and related chronic disease. An estimated 4.1 million adolescents in the U.S. visit convenience stores at least once a week (Sanders-Jackson et al., 2015), with African American adolescents more likely to visit convenience stores on a weekly basis compared to their peers belonging to other racial/ethnic groups (Sanders-Jackson et al., 2015). Lastly, consumers often cite unhealthy options and food quality as their top concerns at convenience stores (GasBuddy, 2019). Therefore, in addition to responding to the increasing demand for prepared food items that are quicker and more convenient, convenience store chains may also be working towards reformulating their food offerings to address consumer concerns over nutrition and food quality.

Pizza is also a significant source of both total and excess calories among adolescents and children (Poti et al., 2014). In a cross-sectional study of >3000 U.S. children aged 2 to 18 years, pizza purchased at fast-food restaurants contributed more solid fat (e.g., saturated and trans-fatty acids) to the diet than sandwiches, hamburgers, and many other food categories (Poti et al., 2014). Among children and adolescents, respectively, pizza has been found to comprise 5% and 7% of total energy intake (Powell et al., 2015), suggesting that even small changes to menus may potentially reduce energy intake at the population level. In addition, pizza comprises an estimated 4% of the total energy intake for all adults in the U.S.

Future research should continue to track voluntary reformulation within large chain restaurants. It will be especially important to compare changes to restaurant menus pre- and post-implementation of the final federal menu labeling rule. Should the observed trend towards lower calorie items in convenience chains be amplified in response to implementation, the potential for improving population health is greater than these results suggest.

Limitations

The MenuStat database is limited to the nation's largest convenience store and pizza restaurant chains and may not be generalizable to smaller chains. However, the food establishments in our study include four of the country's top 20 convenience store chains by number of stores (Top 202 Convenience Stores, 2018) and eight of the leading 10 pizza chains by annual gross sales (Top 100 Pizza Companies, 2017). Second, median calories per item are based on the portion sizes provided on restaurant websites and may not reflect actual consumption. For example, most calorie information for pizza is listed per slice, though the average adult consumes 2-3 slices per serving (McGuire, 2014). Therefore, listing caloric content for portions smaller than what is generally consumed may be confusing to consumers. Research findings suggest that consumers often feel less guilt and consume more when food is presented in smaller serving sizes, relative to larger sizes (Mohr et al., 2012). In addition, restaurants often serve food in portion sizes that exceed those generally recommended (Cohen et al., 2016; Nestle, 2003). The larger-sized portions may be confusing to individuals and encourage them to consume more food than necessary (Hollands et al., 2015). Third, our analyses do not account for customized pizzas in which additional toppings and other ingredients are added to pre-established menu items.
Fourth, given that caloric information was obtained from establishment websites, the translation of such data to MenuStat is subject to human error. Results from prior research, however, suggest nutrition data provided by restaurants are generally accurate (Reports, 2013). Lastly, our analyses do not reflect individual sales of items but highlight those available for purchase.

Conclusion

Convenience stores and pizza restaurant chains represent a growing share of prepared food purchases in the U.S. These results suggest that, like other large chain restaurants, convenience stores reduced calories from 2013 to 2017 and added lower calorie items to the menu. These changes may have been due to the anticipation of the May 2018 federal menu labeling rule or a response to shifts in consumer demand for lower calorie options. Regardless, the observed changes in convenience stores have the potential to improve population health. By contrast, we observed few changes in the calories of menu items at pizza restaurants. Further research is needed to explore both the pre/post and long-term impact of the federal menu labeling mandate on the calories of food items offered at both convenience stores and pizza restaurant chains, and on calories purchased and consumed in these venues.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Dr. Moran was supported by training grant T32DK007703 from the National Institutes of Health.

Human subjects

No protocol approval was needed as this study did not include human subjects.

Declaration of Competing Interest

Conflicts of interest: none.
Resistivity, Hall effect, and anisotropic superconducting coherence lengths of HgBa$_2$CaCu$_2$O$_{6+\delta}$ thin films with different morphology

Thin films of the high-temperature superconductor HgBa$_2$CaCu$_2$O$_{6+\delta}$ have been prepared on SrTiO$_3$ substrates by pulsed-laser deposition of precursor films and subsequent annealing in a mercury-vapor atmosphere. The microstructural properties of such films can vary considerably and have been analyzed by x-ray diffraction and atomic force microscopy. Whereas the resistivity is significantly enhanced in samples with coarse-grained structure, the Hall effect shows little variation. This disparity is discussed based on models for transport properties in granular materials. We find that, despite the morphological variation, all samples have similar superconducting properties. The critical temperatures $T_c \sim 121.2$ K $\dots 122.0$ K, resistivity, and Hall data indicate that the samples are optimally doped. The analyses of superconducting order parameter fluctuations in zero and finite magnetic fields yield the in-plane $\xi_{ab}(0) \sim 2.3$ nm $\dots 2.8$ nm and out-of-plane $\xi_{c}(0) \sim 0.17$ nm $\dots 0.24$ nm Ginzburg-Landau coherence lengths at zero temperature. Hall measurements provide estimates of carrier scattering defects in the normal state and vortex pinning properties in the superconducting state inside the grains.

I. INTRODUCTION

The mercury cuprates of the Hg-Ba-Ca-Cu-O family form a homologous series with the chemical formula HgBa$_2$Ca$_{n-1}$Cu$_n$O$_{2n+2+\delta}$ (HBCCO). The discovery of high-temperature superconductivity in the n = 1 compound [1] and the even higher transition temperatures $T_c = 120$ K in the n = 2 [2] and $T_c = 135$ K in the n = 3 material [3], respectively, have triggered enormous research interest. The latter compound still holds the record for the highest critical temperature $T_c$ of any superconductor at ambient pressure and, for a cuprate superconductor, with $T_{c,\mathrm{onset}} = 164$ K under a quasi-hydrostatic pressure of 31 GPa [4]. Mercury cuprates have been synthesized from n = 1 to 7, with $T_c$ rising with the number n of neighboring CuO$_2$ layers up to a maximum at n = 3 and then decreasing for n > 3 [5]. In contrast to their very promising superconducting properties, the mercury cuprates are hard to synthesize and handle due to the highly volatile and toxic nature of Hg and Hg-based compounds. For possible applications, but also for measurements of the basic intrinsic properties, the fabrication of high-quality thin films is required. Several groups have succeeded in this task, mainly by using pulsed-laser deposition (PLD) of a precursor film with subsequent annealing in an Hg- and O$_2$-containing atmosphere [6][7][8][9]. Later it was demonstrated that HBCCO thin films can be grown on vicinal substrates in a well-oriented manner and form a roof-tile-like structure that allows for measurements of in-plane and out-of-plane properties on the very same sample [10,11]. As a consequence of the delicate preparation conditions, the properties of HBCCO samples reported by various groups vary significantly. Investigations of the electrical transport properties in samples with different structural properties have been rare. It remains ambiguous whether such diversity stems from slightly different preparation conditions in individual laboratories or can also occur under supposedly identical fabrication procedures.
For instance, in polycrystalline samples of HgBa$_2$CaCu$_2$O$_{6+\delta}$ (Hg-1212) a room-temperature variation of the electrical resistivity by a factor larger than two was observed between individual samples that were cut from the same ceramics but annealed for different time intervals [12]. Surprisingly, the Hall effect of these samples was remarkably similar. On the other hand, both the resistivity and the Hall effect changed significantly in a Hg-1212 thin film after several annealing steps in different mercury and oxygen atmospheres [13]. Several authors have investigated the electrical transport properties, such as the resistivity and the Hall effect, in HBCCO in detail, but without putting emphasis on possible variations between individual samples; mostly, results of only a single sample were presented. In the mixed-state Hall effect, a double sign reversal, similar to that observed in other cuprate superconductors with high anisotropy, was reported [14]. In addition, it was found that after introducing strong pinning by high-energy Xe$^+$ ion irradiation, a triple sign change evolves [15]. Further investigations concerned the resistivity in magnetic fields oriented perpendicular and parallel to the CuO$_2$ planes, the critical current density, the angular dependence of the depinning field [16], and the normal and mixed-state Hall effect [17] in partially Re-substituted Hg$_{0.9}$Re$_{0.1}$Ba$_2$CaCu$_2$O$_{6+\delta}$ (HgRe-1212) thin films. The Hall effect's dependence on the angle between the magnetic field and the ab planes in Hg-1212 thin films could be explained by the common behavior of HTSCs in the normal state and a renormalized superconducting fluctuation model for the temperature region close to $T_c$, where the Hall effect exhibits its first sign change [18]. Recently, revived interest in the n = 1 HBCCO compound has emerged, as it is a model system for a single-CuO$_2$-layer cuprate. Investigations on underdoped samples of this compound [19,20] have shed light on the nature of the ubiquitous pseudogap in underdoped cuprate superconductors.

In this paper we investigate the structural and the electrical transport properties of three Hg-1212 thin films fabricated by pulsed-laser deposition on SrTiO$_3$ substrates. Although we intentionally included a "bad" sample with much higher room-temperature resistivity, we find that fundamental superconducting properties, like the critical temperature and the anisotropic Ginzburg-Landau coherence lengths, show only little variation in samples with significantly different granularity.

II. SAMPLE PREPARATION AND CHARACTERIZATION

Thin films of Hg-1212 were fabricated in three individual runs (each about four months apart) under the same preparation conditions. The labeling of the samples corresponds to the time sequence. The fabrication is a two-step process using polished 5 × 5 mm large (001) SrTiO$_3$ crystal substrates with similar surface roughness. First, amorphous precursor films were deposited on the substrates using pulsed-laser deposition (PLD) [21] with 25 ns KrF excimer laser pulses ($\lambda$ = 248 nm) at a 10 Hz repetition rate. In a second step, the films were annealed in a mercury-vapor atmosphere employing the sealed quartz tube technique [8][9][10]. Typically, sintered targets of nominal composition Ba:Ca:Cu = 2:2:3 are employed for laser ablation, and precursor films are deposited at room temperature.
For Hg-1212 phase formation and for crystallization of films with c-axis orientation, annealing at high temperature (800-830 °C) and high vapor pressure (35 bar) is required. HBCCO thin films usually reveal reduced surface quality, phase purity, and crystallinity as compared to other HTSC thin films that are grown in a single-step process, like those of YBa_2Cu_3O_7 (YBCO) and Bi_2Sr_2Ca_{n−1}Cu_nO_{2(n+2)+δ} (BSCCO). However, phase-pure epitaxial Hg-1212 films with improved surface morphology are achieved by using mercury-doped targets (Hg:Ba:Ca:Cu ≈ 0.8:2:2:3) for laser ablation and by deposition of the precursor films at an elevated substrate temperature T_S = 350 °C. Figure 1 shows the x-ray diffraction (XRD) data of the three samples. The (00l) indices of Hg-1212 are clearly visible. The full width at half maximum (FWHM) of the (005) rocking curve is about 7° for sample A, 6° for sample B, and 1° for sample C (data not shown). Besides the well-pronounced peaks resulting from the SrTiO_3 substrate, only small traces of unknown spurious phases are visible; they are marked by asterisks. In sample B, however, the Hg-1212 XRD peaks have lower heights compared to those of SrTiO_3 and the spurious phases, indicating a smaller amount of the Hg-1212 phase and a larger portion of spurious phases and voids. The surface textures of the three samples were measured by atomic force microscopy (AFM) and are displayed in Fig. 2. The annealed films reveal a dense and homogeneous structure, an even surface without a-axis-oriented grains, and no regions of unreacted material. Bright spots in the AFM scans indicate particulates of 3...6 µm diameter and 0.5...1 µm height, which are typically found in HTSC thin films fabricated by PLD and which, for instance in YBCO, can be removed by mechanical and chemical polishing [22]. Sample B exhibits a coarser grain morphology, and the larger angular spread of its x-ray rocking curve indicates an enhanced misorientation of the grains. III. EXPERIMENTAL SETUP FOR THE ELECTRICAL MEASUREMENTS For the electrical transport measurements, the Hg-1212 films were patterned by standard photolithography and wet-chemical etching into strips with two pairs of adjacent side arms. Electrical contacts were established by Au wire and silver paste on Au pads that were previously evaporated onto the sample's side arms. The thicknesses of the films were determined by AFM. The main parameters of the three samples are summarized in Table I. Resistivity and Hall effect measurements were performed in a closed-cycle cryocooler mounted between the pole pieces of an electromagnet. DC currents were provided by a Keithley 2400-LV constant current source, and the longitudinal and transverse voltages were recorded simultaneously with the two channels of a Keithley 2182 nanovoltmeter. The directions of both the current and the magnetic field were reversed multiple times for every data point to cancel spurious thermoelectric signals and transverse voltages stemming from contact misalignment, and to enhance the signal-to-noise ratio. The temperature stability at individual setpoints is better than ±0.01 K. IV. RESULTS AND DISCUSSION The temperature dependencies of the longitudinal resistivity of the three samples are compared in Fig. 3. They show a linear behavior of the normal-state resistivity ρ_xx, typical for optimally doped cuprate HTSCs, and a reduction of ρ_xx below the linear trend above T_c stemming from superconducting fluctuations [23].
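The polarity-reversal scheme described above can be summarized in a short sketch; the function below and its dictionary-based call signature are illustrative assumptions, not part of the original measurement software.

```python
def hall_voltage(v_xy):
    """Average over the four (current, field) polarity combinations.
    The Hall signal is odd in both I and B; constant thermoelectric
    offsets (even in I and B) and misalignment voltages (odd in I,
    even in B) cancel in this combination.

    v_xy: dict mapping (sign_I, sign_B) -> transverse voltage in volts.
    """
    return 0.25 * (v_xy[(+1, +1)] - v_xy[(-1, +1)]
                   - v_xy[(+1, -1)] + v_xy[(-1, -1)])

# Example: a 1 uV Hall signal riding on a 5 uV misalignment offset
print(hall_voltage({(+1, +1): 6e-6, (-1, +1): -6e-6,
                    (+1, -1): 4e-6, (-1, -1): -4e-6}))   # -> 1e-6
```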
The inset of Fig. 3 demonstrates that, despite the significantly different absolute values of ρ_xx, all three samples exhibit a similar qualitative behavior when a scaling according to ρ_xx(T)/ρ_xx(300 K) is applied. This contrasts with the typical observation in HTSCs with point defects like disordered oxygen atoms [24,25], where such a scaling is violated and the intercept ρ_xx(0 K) from an extrapolation of the normal state is shifted to higher values when the concentration of point defects is increased. In fact, ρ_xx(0 K) is slightly negative and similar for all three samples, indicating a minor influence of point defects on the resistivity. Remarkably, the critical temperatures of all samples are similar, too, as listed in Table I. These observations indicate that the intragrain resistivities of the samples are similar, but that different granularity and different amounts of voids and poorly conducting spurious phases lead to a large variation of the macroscopic resistivity. An analysis of thermodynamic superconducting order parameter fluctuations (SCOPF) [26] allows one to determine the anisotropic Ginzburg-Landau coherence lengths in superconductors. This method has been applied to many HTSCs but only rarely to Hg-1212, and it is based on an evaluation of the paraconductivity Δσ_xx(0) in zero and the paraconductivity Δσ_xx(B_z) in a moderate magnetic field B_z, applied perpendicular to the crystallographic ab plane. The total measured conductivity is σ_xx(0) = σ^N_xx(0) + Δσ_xx(0) and σ_xx(B_z) = σ^N_xx(B_z) + Δσ_xx(B_z), respectively. Commonly, the normal-state conductivities σ^N_xx(0) and σ^N_xx(B_z) are determined by extrapolating the linear temperature dependence of the resistivity in the normal state towards lower temperatures. In our samples no deviations from a linear trend are noticeable at temperatures between 200 K and 300 K; hence, it is assumed that the SCOPF are negligible above 200 K and, furthermore, no influence of pseudogap behavior is expected [27]. In a magnetic field, however, such a procedure requires the assumption that the normal-state magnetoresistance is negligible, i.e., σ^N_xx(B_z) ≈ σ^N_xx(0) in the temperature range under investigation [28]. In fact, the parameters of the linear fits to σ_xx(B_z) and σ_xx(0) above 200 K are the same, which indicates a negligible normal-state magnetoresistance in all samples. Several different processes contribute to SCOPF [26], but under the conditions explored in this work the Aslamazov-Larkin (AL) [29] process dominates by far. The Lawrence-Doniach (LD) model [30] is an appropriate extension for two-dimensional layered superconductors, with the out-of-plane Ginzburg-Landau coherence length at T = 0, ξ_c(0), as the sole fit parameter. The paraconductivity is given by Δσ^{LD}_{xx} = e²/(16ℏdε) · (1 + 2α)^{−1/2} (1), where e is the electron charge, ℏ the reduced Planck constant, d = 1.2665 nm the distance between adjacent CuO_2 double layers [31], and ε = ln(T/T_c) ≈ (T − T_c)/T_c a reduced temperature. The dimensionless coupling parameter between the superconducting layers is α = 2ξ_c²(0) d^{−2} ε^{−1}. A magnetic field oriented perpendicular to the CuO_2 layers leads to a reduction of the SCOPF by orbital and Zeeman pair breaking, which is also reflected by a decrease of the mean-field T_c. The Zeeman interaction is important only for an orientation of the magnetic field parallel to the CuO_2 layers [32] and can be neglected in the present analysis.
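As a minimal sketch of how Eq. (1) can be evaluated and fitted for ξ_c(0) in the window 0.01 < ε < 0.1, assuming hypothetical data arrays T_data and ds_data and a free scale factor like the 0.70/0.09 used for samples A and B:

```python
import numpy as np
from scipy.optimize import curve_fit

E, HBAR = 1.602176634e-19, 1.054571817e-34     # C, J s
D = 1.2665e-9                                  # m, CuO2 double-layer spacing

def delta_sigma_ld(T, xi_c0, Tc, scale):
    """LD paraconductivity of Eq. (1) in S/m; 'scale' mimics the reduction
    factor applied to samples with non-superconducting volume fractions."""
    eps = np.log(T / Tc)
    alpha = 2.0 * xi_c0**2 / (D**2 * eps)
    return scale * E**2 / (16.0 * HBAR * D * eps) * (1.0 + 2.0 * alpha) ** -0.5

# Placeholder fit over the window 0.01 < eps < 0.1 quoted in the text:
# popt, _ = curve_fit(delta_sigma_ld, T_data, ds_data, p0=(2e-10, 121.5, 1.0))
```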
The paraconductivity in a finite magnetic field that takes the orbital interaction within the AL process (ALO) into account is given by Eq. (2) of Ref. [33]; it involves the digamma function ψ evaluated at arguments containing ε_k/(2h), where ε_k = ε[1 + α(1 − cos kd)], k is the momentum parallel to B_z, and h = ln[T_c(0)/T_c(B_z)] = 2eξ_ab²(0)B_z/ℏ reflects the reduction of T_c in the magnetic field. Figure 4(a) shows the superconducting transitions in zero and finite magnetic fields, and Figs. 4(b-d) the resulting paraconductivities Δσ_xx(0) and Δσ_xx(B_z) as a function of the reduced temperature, together with fits to the LD (Eq. 1) and ALO (Eq. 2) processes. To account for the higher resistivities observed in samples A and B due to a larger fraction of non-superconducting voids and spurious phases, the theoretical paraconductivity curves are scaled by a factor of 0.70 (0.09) for sample A (sample B) that is determined during the fit procedure. Of course, this introduces an additional uncertainty in the evaluation of ξ_c(0) for samples A and B, but it has only a minor impact on ξ_ab(0). In paraconductivity studies of HTSCs three temperature regions can be distinguished. Very close to T_c the fluctuating superconducting domains start to overlap and the paraconductivity falls below the predictions of Eqs. (1) and (2). This situation is accounted for by renormalized fluctuation theories [34,35], which do not contribute to the determination of the coherence lengths and hence are outside the scope of the present analysis. A counteracting effect can be evoked by an inhomogeneous T_c [36]. Both corrections, as well as the exact value of the mean-field T_c used for the calculation of the reduced temperature, are relevant only for ε < 0.01. On the other hand, a high-energy cutoff of the fluctuation spectrum [37] leads to a smaller paraconductivity than predicted by theory for ε > 0.1, which can be modeled by a heuristic function resulting in a very similar ξ_c(0) [38]. In the intermediate temperature region 0.01 < ε < 0.1 the fit parameters can be determined with good precision. The resulting values of the coherence lengths are listed in Table I and are quite similar. Still, a systematic trend can be discerned: with degradation of the morphology and increase of the resistivity, ξ_c(0) increases while ξ_ab(0) decreases, leading to a reduction of the anisotropy in the superconducting state. An increase of crystallographic misorientation between individual grains naturally leads to a reduction of the anisotropy, which is here determined as an average over the entire film, and it is also evidenced by the broader rocking curves of samples A and B. Compared to other studies on grain-aligned polycrystalline Hg-1212 samples we find about half as long out-of-plane coherence lengths [39] and larger [39,40] or similar [41] in-plane coherence lengths, indicating a higher anisotropy γ = ξ_ab(0)/ξ_c(0) ∼ 9.6...16.5 in our samples. In a HgRe-1212 thin film γ ∼ 7.7 was estimated [16], while higher values, γ ∼ 29 [42] and γ ∼ 52 [43], were reported for HgBa_2CuO_{4+δ} and HgBa_2Ca_2Cu_3O_{8+δ} single crystals, respectively. These findings point to a correlation between sample morphology and measured anisotropy.
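The field parameter h defined above also offers a quick way to estimate ξ_ab(0) from the field-induced shift of the mean-field T_c; the numbers in this sketch are illustrative and are not taken from Table I.

```python
import numpy as np

E, HBAR = 1.602176634e-19, 1.054571817e-34   # C, J s

def xi_ab_from_tc_shift(Tc0, TcB, Bz):
    """Invert h = ln[Tc(0)/Tc(Bz)] = 2 e xi_ab(0)^2 Bz / hbar for xi_ab(0)."""
    h = np.log(Tc0 / TcB)
    return np.sqrt(h * HBAR / (2.0 * E * Bz))

# Illustrative only: a 2.3 K shift of a 121.5 K transition at Bz = 1 T
print(xi_ab_from_tc_shift(121.5, 119.2, 1.0))   # ~2.5e-9 m, i.e. ~2.5 nm
```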
The Hall coefficient is R_H = E_y/(j_x B_z), where E_y is the transverse electric field measured between adjacent side arms of the sample, j_x the current density along the strip-shaped Hg-1212 film, and B_z the magnetic field perpendicular to the film surface. Remarkably, R_H is similar in all samples, as can be noticed in Fig. 5 (left panel). In the normal state R_H is positive (hole-like) and increases towards lower temperatures, followed by a sharp drop around T_c and a subsequent change to negative values. Furthermore, a second sign reversal back to positive R_H is noticeable, though it is differently pronounced in the samples, as displayed in the inset of Fig. 5 (left panel). Qualitatively, our observations are in line with previous investigations of Hg-1212 [14,17] and Bi-2212 films [44,45]. The origin of the peculiar sign change of R_H from positive in the normal state to negative in the vortex-liquid regime is still not settled [46] and is outside the scope of the present work; renormalized superconducting fluctuations [47], collective vortex effects [48], and pinning centers [49] are some possible explanations. While the domain of negative R_H looks similar in all samples, the positive R_H data at the low-temperature tail show more variation. In YBCO such a double sign reversal is rarely observed and is then attributed to vortex-lattice melting [50] or pinning at twin boundaries [51]. In Hg-1212 it is considered an intrinsic property [14], as in Bi-2212, where it becomes more prominent when pinning is reduced at enhanced current densities [45]. Vortex pinning within the grains, however, leads to a vanishing Hall signal and can cut the signal off before the second sign change develops. One might then speculate that different intragrain pinning properties cause the differences in the positive low-temperature peaks of R_H. The Hall effect in the normal state is displayed in the right panel of Fig. 5 in an appropriate scaling to demonstrate that, for all three samples, it follows Anderson's law [52] cot Θ_H = αT² + C, where C is proportional to the density of carrier-scattering defects and α is a measure of the carrier density. The linear trend can be observed in the temperature range from ∼135 K to 300 K, i.e., up to higher temperatures than in HgRe-1212 thin films [16]. The upturn close to T_c is due to the onset of SCOPF. At first sight, α and C appear to be quite different. Since cot Θ_H = ρ_xx/(R_H B_z), the voids in the sample that lead to an enhanced ρ_xx, as seen in Fig. 3, affect cot Θ_H in a similar manner. Hence, if the same scaling as in the inset of Fig. 3 is applied to the curves of samples A and B, taking sample C as the reference, a comparison can be made. It turns out that α is similar in all samples, reflecting a similar carrier density inside the grains, while the intercept C is largest in sample B and smallest in sample A. Although only a rough estimate, this indicates the smallest density of carrier-scattering defects inside the grains of sample A and is consistent with the previous observation that sample A exhibits the weakest intragrain vortex pinning.
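A brief sketch of the Anderson-law analysis just described, assuming hypothetical measured arrays for T, ρ_xx, and R_H; the fit range follows the ∼135 K to 300 K window quoted above.

```python
import numpy as np

def anderson_fit(T, rho_xx, R_H, Bz):
    """Fit cot(Theta_H) = rho_xx / (R_H * Bz) = alpha * T^2 + C over the
    linear range reported above (~135 K to 300 K)."""
    cot_theta_h = rho_xx / (R_H * Bz)
    mask = (T > 135.0) & (T < 300.0)
    alpha, C = np.polyfit(T[mask] ** 2, cot_theta_h[mask], 1)
    return alpha, C   # alpha tracks the carrier density, C the defect density
```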
Finally, we contrast the minor variation of R_H with sample morphology observed in Fig. 5 (left panel) with the large spread of the resistivities (see Fig. 3). Volger [53] has theoretically considered a material consisting of well-conducting grains separated by thin layers of lower conductivity, which in our samples can tentatively be attributed to grain boundaries, voids, and poorly conducting spurious phases. In this scenario, the experimentally determined average resistivity can be dominated by the high-resistance domains and can thus be much higher than the intragrain resistivity, whereas the experimentally found R_H will not be very different from its intragrain value. Note that the narrow sample-to-sample spread of R_H at temperatures slightly above T_c is comparable to the relative variation of the superconducting coherence lengths, which also represent intragrain properties. Alternatively, one could consider that the voids in the material reduce the cross section of the current path. Then, using the macroscopic dimensions of the sample for the calculation, the resistivity will be overestimated. The local current density in the grains, however, is larger than its average value, giving rise to an enhanced transverse Hall voltage. Since the Hall voltage is probed quasi-electrostatically, intergranular resistances are negligible and the Hall voltages of individual grains add up in a series connection across the width of the thin film. Intuitively, this is also reflected by the fact that only the film thickness enters the calculation of R_H, whereas the evaluation of the resistivity involves the sample's thickness, width, and the probe distance. V. CONCLUSIONS In summary, we have investigated the resistivity and the Hall effect in three Hg-1212 thin films of different morphology, which were characterized by x-ray diffraction and AFM scans. Despite a large variation of the absolute values, the resistivity of all samples is linear in the normal state, as observed in optimally doped HTSCs. The critical temperatures T_c ∼ 121.2 K...122.0 K are similar in all samples, too, and the deviations from the linear resistivity trend due to SCOPF allow for the determination of the in-plane ξ_ab(0) ∼ 2.3 nm...2.8 nm and out-of-plane ξ_c(0) ∼ 0.17 nm...0.24 nm Ginzburg-Landau coherence lengths. In sharp contrast to the resistivity, the normal-state Hall effect is similar in the three samples and is dominated by their intragranular properties. It allows us to conclude that inside the grains the carrier density is almost the same in all samples, but the density of carrier-scattering defects differs. The Hall effect in the superconducting state exhibits two sign changes, of which the one at lower temperatures is sample dependent and may indicate different vortex-pinning properties due to different defect densities inside the grains. Finally, our analyses of various transport measurements on different samples indicate that the intragranular intrinsic properties of Hg-1212 films can be estimated adequately despite their diverse macroscopic resistivities.
Neutrino Mixing and Quark-Lepton Complementarity As a result of the identification of the solution to the solar neutrino problem, a rather precise relation theta_{sun} + theta_C = pi/4 between the leptonic 1-2 mixing angle theta_{sun} and the Cabibbo angle theta_C has emerged. It would mean that the lepton and quark mixing angles add up to the maximal mixing angle, suggesting a deep structure by which quarks and leptons are interrelated. We refer to this relation as ``quark-lepton complementarity'' (QLC) in this paper. We formulate general conditions under which the QLC relation is realized. We then present several scenarios which lead to the relation and elaborate on phenomenological consequences which can be tested by future experiments. We also discuss implications of the QLC relation for the quark-lepton symmetry and the mechanism of neutrino mass generation. Introduction The most distinct feature of the lepton flavor mixing is the existence of two large mixing angles in the Maki-Nakagawa-Sakata (MNS) matrix [1], in sharp contrast to the CKM quark mixing [2]. One of the large angles comes from the atmospheric neutrino experiments [3], which have discovered neutrino oscillations [1,4], whereas the other comes from the solar [5] and reactor neutrino observations [6]. The atmospheric mixing is suspected to be maximal or close to maximal, though the experiment gives only a mild constraint, 36° ≤ θ_23 ≤ 54° [7]. On the other hand, the solar angle θ_12 is known to be away from the maximal mixing value [8,9]. It was noted long ago that the large mixing angle required for a solution of the solar neutrino problem may appear as the difference between the maximal mixing angle π/4 and the Cabibbo angle θ_C, so that θ_sun + θ_C = π/4, (1) or tan 2θ_sun = 1/tan 2θ_C [10]. The equality holds with rather high accuracy, as became clear with the accumulating data of the solar neutrino experiments [11]. Indeed, the global fit of the solar neutrino and KamLAND results gives [8,9,12,13] θ_sun = 32.3° ± 2.4° (1σ). The deviation of the central value is well within the present experimental errors at 1σ CL. Notice that the best-fit values of the solar angle from the analyses of different groups have a very small spread, θ_sun = 32.0°−33.2°. This shows the stability of the result and may indicate that the true value of θ_sun is indeed in this narrow interval, unless some systematic shift in the experimental data is found. With this interval, and with sin θ_C = 0.2225 (i.e., θ_C ≈ 12.9°), we obtain for the sum of the best-fit angles θ_sun + θ_C ≈ 44.9°−46.1°. The equality (1) relates the 1-2 mixing angles in the quark and lepton sectors and, if not accidental, implies a certain relation between quarks and leptons. It is very suggestive of a bigger structure in which quarks and leptons are complementary. The equality probably means a quark-lepton symmetry or quark-lepton unification [14] in some form. It may be considered as evidence of grand unification and/or a certain flavor symmetry [15]. If not accidental, it can give a clue to understanding the fermion masses in a general context. In what follows we will call the equality (1) the quark-lepton complementarity (QLC) relation. In this paper, we try to answer the following questions: Can the QLC relation be non-accidental? What are the general conditions for the QLC relation? What is the underlying physical structure and what are the resultant scenarios that satisfy the conditions? What are the experimental predictions of these scenarios and how can they be tested? As a whole, we explore the experimental consequences and theoretical implications of the QLC relation.
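A quick arithmetic check of the quoted sum, using sin θ_C = 0.2225 (the value cited in sec. 2 below) and the best-fit interval of θ_sun:

```python
import math

theta_C = math.degrees(math.asin(0.2225))    # ~12.9 deg
for theta_sun in (32.0, 32.3, 33.2):         # best-fit interval quoted above
    print(round(theta_sun + theta_C, 1))     # 44.9, 45.2, 46.1 -> clusters at 45
```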
The paper is organized as follows. In sec. 2 we formulate general conditions for the QLC relation. In secs. 3 and 4 we elaborate on various scenarios which realize the relation (1). In sec. 3 the possibility of "bi-maximal minus CKM mixing" is studied. In sec. 4 we consider single maximal mixing scenarios. In sec. 5 the predictions of the various scenarios are summarized. In sec. 6 we give a summary with brief comments on how to test them experimentally; some theoretical implications of the QLC relation and heuristic remarks are also presented. In secs. 3 and 4 we give a detailed and comprehensive description of possible phenomenological scenarios, providing for each case comments on the implications for the neutrino mass matrix and the quark-lepton symmetry. For those who want to avoid these details we recommend, after reading sec. 2, to go directly to sec. 5, in which an overview of the phenomenological aspects of our results is given, in particular in Table 1; one can then go back to secs. 3 and 4 for the details of particular scenarios. General conditions for the quark-lepton complementarity relation The lepton mixing matrix U_MNS is defined as U_MNS = U_e† U_ν, (8) where U_e and U_ν are the transformations of the left-handed components which diagonalize the mass matrices of the charged leptons and neutrinos, respectively. In the standard parameterization [16] the MNS matrix reads U_MNS = R_23(θ_23) Γ_δ R_13(θ_13) Γ_δ† R_12(θ_12), (9) where R_ij is the matrix of rotation in the ij-plane and Γ_δ carries the CP-violating phase (see below). In this form, the angle of the 1-2 rotation is identified with the solar angle, θ_12 = θ_sun; the angle of the 2-3 rotation with the atmospheric angle, θ_23 = θ_atm; and θ_13 with the angle restricted by the CHOOZ experiment [18]. To identify the mixing angles with those measured in experiments one should reduce a given mixing matrix to the form (9). Let us formulate general conditions which lead to the QLC relation. Single maximal or bi-maximal In principle, it is enough to have a single maximal mixing, that is R^m_12 ≡ R_12(π/4), to realize relation (1). However, the existence of maximal or near-maximal 2-3 leptonic mixing hints that the whole pattern of fermion mixings may be generated as a combination of no mixing, maximal mixing, and the CKM mixings. Namely, we can speak of a scenario characterized by "bi-maximal minus CKM mixing", U_MNS = V_CKM† U_bm, U_bm ≡ R^m_23 R^m_12. (10) Because it is very predictive and the easiest to test experimentally, it deserves a separate description from the more general cases. The possibility of lepton mixing as a small deviation from bi-maximal mixing [19] has been extensively discussed recently [20], but without identification of the small deviation with the quark mixing; see, however, the first reference in [20]. Relation (1) allows one to restore the bi-maximal mixing [19] as an element of the underlying theory [15]. It should be stressed [21] that the present data do not yet give a strong bound on the deviation of the 2-3 mixing from the maximal one, which can be characterized by D_23 ≡ 1/2 − sin²θ_23. It is constrained by |D_23| ≤ 0.16, or |D_23|/sin²θ_23 ≤ 0.47, at 90% CL [7]. Furthermore, the latest analysis (without renormalization of the original fluxes) shows some excess of e-like events at sub-GeV energies and the absence of an excess in the multi-GeV sample, thus giving a hint of non-zero D_23 [22]. In the scenario (10), one expects the deviation to be small: π/4 − θ_23 ≲ θ^CKM_23, or |D_23| ≲ sin θ^CKM_23 ≈ 0.04. (12) For specific scenarios see sec. 3. The next-generation long-baseline experiments, in particular JPARC-SK, will be sensitive to |D_23| ∼ 0.05 [23,24,25].
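As a consistency check of the two bounds just quoted, one can evaluate D_23 over the allowed atmospheric range (pure arithmetic on numbers given in the text):

```python
import math

# D_23 = 1/2 - sin^2(theta_23) over the allowed range 36..54 degrees.
for deg in (36, 45, 54):
    s2 = math.sin(math.radians(deg)) ** 2
    d23 = 0.5 - s2
    print(deg, round(d23, 3), round(abs(d23) / s2, 3))
# |D_23| reaches ~0.155 at the edges of the range, and |D_23|/sin^2(theta_23)
# reaches ~0.45 at the lower edge -- close to the quoted 0.16 and 0.47.
```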
Also, it would be a challenge for future atmospheric neutrino experiments to achieve the required sensitivity. Establishing a deviation from maximal mixing more significant than the one in (12) will exclude the scenario (10). If the bi-maximal scenario is not realized and D_23 is large, an additional 1-3 rotation (apart from the 1-3 CKM rotation) should be considered. Indeed, generically, the same symmetry (e.g., Z_2) leads to the maximal 2-3 mixing and simultaneously to vanishing 1-3 mixing [26]. Therefore, the deviation from the maximal 2-3 angle, D_23, which implies a violation of the symmetry, should also be accompanied by a non-zero 1-3 mixing. In this case, predictability will be lost unless one imposes the condition that such an additional 1-3 rotation is very small. Order of rotations To reproduce the equality (1) exactly one needs to have the following order of rotations: U_MNS = · · · R^m_23 · · · R^CKM†_12 R^m_12 · · ·. (13) That is, the maximal and the CKM 1-2 rotations must be attached to each other. Here, R^CKM_ij ≡ R_ij(θ^CKM_ij) describes the CKM rotation in the ij-plane, and R^m_ij denotes the maximal mixing rotations, R^m_ij ≡ R_ij(π/4). In (13), "· · ·" denotes possible insertions of the CKM rotations R^CKM_23 and R^CKM_13. (A similar structure holds also in the case that R_23 is not maximal.) The complete CKM matrix is parametrized, analogously to (9), as a product of the rotations R^CKM_23, R^CKM_13, and R^CKM_12. The reversed ordering of the maximal mixing rotations in (13), namely R^m_12 · · · R^m_23, would lead to an unacceptably large 1-3 mixing, sin θ_13 = 0.5, and an incorrect 1-2 mixing, θ_sun ∼ π/6 ± θ_C, after reducing the mixing matrix to the form (9). The two other CKM rotations, R^CKM_23 and R^CKM_13, can be located in any place indicated by the dots. Their effect on the relation (1) is negligible even if they are situated on the right-hand side of the combinations in (13) or between the two 1-2 rotations. The largest possible deviation appears for the case R^m_12 R^CKM†_12 R^CKM_23, which, however, reduces to a small unobservable correction controlled by sin θ^CKM_23 ≡ |V_cb|. In what follows we will neglect this type of correction to the 1-2 mixing. However, the position of the small CKM rotations can become important for other observables such as U_e3 or the deviation of the 2-3 mixing from the maximal one. We will also consider a combination, denoted (16) below, which is not excluded experimentally, though it leads to the QLC relation (1) only in an approximate way. CKM matrix and the quark-lepton symmetry The natural framework in which the CKM angles appear in the lepton mixing is the quark-lepton symmetry [14], according to which in a certain basis V_l = V_d or V_ν = V_u. (17) Then, according to the definition (8), in both cases the CKM matrix will appear in the leptonic mixing matrix as the hermitian conjugate, V_CKM†. Therefore, some permutations of R^CKM†_12 and other matrices are necessary, which lead to a violation of the exact relation (1). The smallest corrections are produced when only R^m_12 appears right next to V_CKM† on the RHS of the mixing matrix (13); in this case ∆sin²θ_12 ∼ sin θ_C V²_cb. It is possible that the quark-lepton connection is not realized in a straightforward way as in (17). The Cabibbo angle could be the universal parameter which controls the whole structure of fermion masses and therefore appears in many places, such as mass ratios and mixing parameters (see sec. 6). Naturalness In underlying models one expects that some deviation from the exact QLC relation always exists. It can be parametrized as ∆θ_12 = ∆θ_12(X_i), where X_i denote the parameters of a model. Note that ∆sin²θ_12 = sin 2θ_sun ∆θ_12.
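The ordering argument above (sec. 2.2) can be verified numerically; the sketch below builds the rotation products with an assumed sign convention and reproduces sin θ_13 = 0.5 for the reversed ordering:

```python
import numpy as np

def R(i, j, th):
    """Rotation by angle th in the (i, j) plane (0-indexed), one sign choice."""
    m = np.eye(3)
    m[i, i] = m[j, j] = np.cos(th)
    m[i, j], m[j, i] = np.sin(th), -np.sin(th)
    return m

tc, mx = np.arcsin(0.2225), np.pi / 4            # theta_C and pi/4

good = R(1, 2, mx) @ R(0, 1, -tc) @ R(0, 1, mx)  # ordering as in (13)
bad = R(0, 1, mx) @ R(1, 2, mx)                  # reversed ordering

print(abs(good[0, 2]))                           # 0: no induced 1-3 mixing
print(np.degrees(np.arcsin(good[0, 1])))         # ~32.1 deg = 45 - theta_C
print(abs(bad[0, 2]))                            # 0.5: unacceptably large
```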
One should then require that ∆θ_12(X_i) be very small over the whole allowed ranges of the parameters X_i. Otherwise, the QLC relation appears as a result of fine tuning of several parameters and in this sense turns out to be unnatural or accidental. This leads to immediate and non-trivial conditions: ∆θ_12(X_i) should not depend on the masses of quarks and leptons, or the dependence must be weak. Indeed, the masses of the down quarks and charged leptons of the first and second generations (which are relevant here) are substantially different. Therefore, one would not expect the appearance of the same mixing angle θ_C in the quark and lepton sectors. The quark-lepton symmetry should be realized in terms of mixings and not masses. Effect of CP Violation Diagonalization of the neutrino and charged lepton mass matrices can lead to CP-violating phases in U_l and U_ν (which eventually will be reduced to the unique phase δ_l in U_MNS). This can be described by phase matrices which appear in various places of the products (13). To keep the equality (1), the matrices Γ_δ,δ′ should not be between R^CKM_12 and R^m_12, or the corresponding phases should be small enough. With the additional phase δ′, the QLC relation (1) appears as a result of fine tuning of the parameters and therefore is not natural. Hence, we restrict ourselves to the choice Γ_δ ≡ diag(1, 1, e^{iδ}) in the rest of the paper. Then the place where we can insert the phase matrix is unique: it can easily be checked that all other possible insertions either reduce to this possibility or lead to zero CP violation. Furthermore, the δ dependence enters the expressions for the various mixing matrix elements and the Jarlskog invariant only together with |V_cb| ≃ 0.04. Indeed, in the limit of zero rotations R^CKM_23 or R^CKM_13 the mixing matrix contains 1-2 and 2-3 rotations only, and in both cases any insertion of the phase matrices Γ_δ will not lead to a physical CP-violating phase. Therefore, in the limit V_ub = 0 the CP-violation effects (Jarlskog invariant) are proportional to V_cb. We note in passing that if V_CKM is the only origin of CP violation, namely if δ = 0, we obtain generically a leptonic phase of order (V_ub/U_e3) δ_q, where δ_q is the phase in the CKM matrix. Since U_e3 can be larger than V_ub due to the contribution induced by "permutations", the leptonic CP-violating phase is strongly suppressed in this case. The induced CP violation associated with δ can be much larger. Renormalization group effect The QLC relation (1) holds at low energies. However, the quark-lepton symmetry (unification) which leads to (1) is realized most probably at some high energy scale, e.g., the grand unification scale. To guarantee the QLC relation at high energies one should require that the renormalization group effects on the equality, from this high scale down to the low energy scale, are small. In the Standard Model (SM) or the Minimal Supersymmetric Standard Model (MSSM) the renormalization of the Cabibbo angle is indeed small. For instance, in the MSSM with tan β = 50 the parameter sin θ_C decreases from 0.2225 at m_Z down to 0.2224 at 10^16 GeV [27]. The renormalization effect on the leptonic θ_12 depends on the type of mass spectrum of the light neutrinos. For the spectrum with normal mass hierarchy, m_1 < m_2 ≪ m_3, the effect is negligible. In contrast, in the case of a quasi-degenerate spectrum, m_1 ≈ m_2 ≈ m_3 = m_0, or of the spectrum with inverted mass hierarchy, the effects can be large [28].
In the limit of small 1-3 mixing, θ_13 ≪ 10°, the running is determined by [29] dθ_12/dt ≈ −(C y²_τ/32π²) sin 2θ_12 sin²θ_23 |m_1 e^{iφ_1} + m_2 e^{iφ_2}|²/∆m²_21, (24) where t ≡ ln(µ/µ_0), µ is the renormalization scale, C = 1 in the MSSM and C = −3/2 in the SM; y_τ is the Yukawa coupling of the tau lepton, with y²_τ ∝ (1 + tan²β) m²_τ in the MSSM, and tan β is the usual ratio of the VEVs. In Eq. (24) φ_1 and φ_2 are the Majorana phases of the eigenstates ν_1 and ν_2. According to (24), the running effect is proportional to the absolute mass scale squared and depends on the relative phase difference: θ̇_12 ∝ m²_0 cos²[(φ_2 − φ_1)/2]. In the SM and in the MSSM with tan β < 10 the corrections are small even for a quasi-degenerate mass spectrum. In the MSSM with large tan β (tan β = 50) one finds that ∆θ_12 ∼ θ_12 even for a common scale m_0 ∼ 0.1 eV [29], as a result of running from the scale of the RH neutrinos (10^10−10^12 GeV) or the GUT scale. Clearly, such a large correction destroys the QLC relation, which leads us to the following conclusions: 1). The QLC relation is not violated by the renormalization effect in the SM and in the MSSM with small tan β, even for the quasi-degenerate mass spectrum of neutrinos. 2). In the MSSM with large tan β and the quasi-degenerate mass spectrum the corrections are in general large. Furthermore, the corrections depend on other continuous (and presently unknown) parameters, φ_i and m_0 (and also θ_13), so that the QLC relation would require fine tuning of several parameters. Therefore, the QLC relation, once it is established with good accuracy, testifies against such models, unless the required tuning is a natural outcome of an additional symmetry. Notice that, according to (24), the corrections can be strongly suppressed if the quasi-degenerate mass eigenstates ν_1 and ν_2 have opposite CP parities, φ_2 − φ_1 ≈ π [28]. 3). In some cases the renormalization effect can help to reproduce the QLC relation (see sec. 3.1).
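An order-of-magnitude illustration of Eq. (24) for the large-tan β MSSM case discussed above; the normalization of y_τ, the value ∆m²_21 ≈ 7.9 × 10⁻⁵ eV², and the exactly degenerate limit m_1 = m_2 = m_0 are assumptions of this sketch, not inputs from the text:

```python
import numpy as np

m_tau, v, tan_beta, C = 1.777, 246.0, 50.0, 1.0   # GeV, GeV, --, MSSM
y_tau = np.sqrt(2) * m_tau * tan_beta / v          # assumed large-tan(beta) norm.
th12, th23 = np.radians(32.3), np.pi / 4
m0, dm2_21 = 0.1, 7.9e-5                           # eV, eV^2 (assumed)

def dtheta12_dt(dphi):
    """d(theta_12)/d ln(mu) from Eq. (24) in the quasi-degenerate limit."""
    mass_factor = 4 * m0**2 * np.cos(dphi / 2) ** 2 / dm2_21
    return -(C * y_tau**2 / (32 * np.pi**2)) * np.sin(2 * th12) \
           * np.sin(th23) ** 2 * mass_factor

print(dtheta12_dt(0.0))     # ~ -0.19 per e-fold: huge, Delta(theta_12) ~ theta_12
print(dtheta12_dt(np.pi))   # ~ 0: opposite CP parities suppress the running
```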
Basis dependence The form of the mass matrices and of the diagonalizing rotations depends on the basis of the quark and lepton states. Let us introduce a basis, called the symmetry basis, in which the symmetry that determines the structure of the mass matrices is defined. (In some publications this basis is called the Lagrangian basis.) In the symmetry basis, both the neutrino and the charged fermion mass matrices are, in general, not diagonal, and therefore both produce rotations which make up the MNS matrix. In what follows we will consider several realizations of the structure of the lepton mixing matrix, (13) and (16). They differ by the origin of the large (maximal) angle rotations: the neutrino or the charged lepton sector. These different realizations have different theoretical and experimental implications. Bi-maximal minus CKM mixing In this section we consider different realizations of the possibility (10), in which only maximal mixings and the CKM rotations are involved in the formation of the fermion mixing matrices. Bi-maximal mixing from neutrinos Let us assume that in the symmetry basis the bi-maximal mixing originates from the neutrino mass matrix, whereas the charged lepton mixing matrix coincides with the CKM matrix: U_ν = R^m_23 R^m_12, U_l = V_CKM. (26) Then the lepton mixing matrix equals U_MNS = V_CKM† Γ_δ R^m_23 R^m_12, (27) where we have introduced the phase matrix Γ_δ following our general prescription described in sec. 2. In the quark sector we have V_u = I and V_d = V_CKM, so that the second equality in (26) implies the quark-lepton symmetry relation V_l = V_d. We also assume that the neutrino Dirac mass matrix is diagonal, due to the equality m^D_ν = m_u. (29) Then the bi-maximal rotation of the neutrinos follows from the seesaw mechanism [30] and the specific structure of the mass matrix of the right-handed (RH) neutrinos. Notice that the bi-maximal mixing can be related to the quasi-degenerate type of neutrino mass spectrum. Such a possibility for the bi-maximal neutrino mixing with a general matrix U_l, not necessarily related to V_CKM, has been discussed recently in [20]. The problem in this scenario is that, in spite of the equality V_d = V_l, the mass eigenvalues are different: m^diag_d ≠ m^diag_l, where m^diag_l ≡ diag(m_e, m_µ, m_τ). Therefore, the mass matrices are also different. Some special conditions have to be met for the matrices such that they produce the same mixing despite the different eigenvalues. A possibility is singular mass matrices, for which different (strong) mass hierarchies can be reconciled with approximate equality of the mixing matrices [31]. Let us discuss the phenomenological consequences of this scenario. 1). The mixing matrix (27) does not satisfy the conditions (13), and therefore the relation (1) receives corrections. Numerically, we obtain a shifted value of θ_sun (31), and for the deviation parameter ∆sin²θ_12 ≈ sin θ_sun sin θ_C (⋯), (32) where the intervals indicate the uncertainty due to the unknown phase δ. The deviation in (32) is 15−20%. It corresponds to θ_sun + θ_C − π/4 ≃ 2.9°−3.6°. Therefore, one needs to measure sin²θ_sun with better than 10% accuracy to establish this difference. According to the estimates given in [32], the future solar neutrino and KamLAND experiments may have a sensitivity of ≃ 4% to sin²θ_sun, provided that θ_13 is measured or severely restricted. The sensitivity of a dedicated reactor θ_12 experiment can reach ≃ 3% [33]. The errors quoted are at the confidence level of 1σ. With such an accuracy the equality (30) can be established at about (4−5)σ. 2). For the 1-3 mixing we obtain sin θ_13 ≈ sin θ_C sin θ_23 (33) up to small corrections, where the first, dominant term is induced by the permutation of the Cabibbo rotation R^CKM_12 with the nearly maximal 2-3 rotation. The two elements of U_MNS, |U_e3| and |U_µ3|, are connected by the simple relation |U_e3| = tan θ_C |U_µ3|, (34) which does not depend on δ and θ^ν_23 (the latter is taken to be π/4 in this section) and represents a characteristic feature of the scenario of bi-large mixing from neutrinos (see sec. 4). Using the Super-Kamiokande bound [7] 0.34 ≤ |U_µ3|² ≤ 0.66, we obtain the prediction for |U_e3|²: sin²θ_13 = 0.026 ± 0.008, (35) which is just below the CHOOZ bound and falls into the region of sensitivity of the next-generation accelerator [23,34,35,36,37] and reactor experiments [38,39]. 3). The deviation of the 2-3 mixing from the maximal one can be written as a sum of two terms of the same order (36). Numerically it gives a small interval (37), the spread being due to the unknown CP-violating phase. The maximal possible value of D_23 is at the level of sensitivity of the J-PARC experiment [23]. 4). For the leptonic Jarlskog invariant we obtain a value (38) that is a factor of ≃ 30 smaller than the maximal value of J_lep allowed by the CHOOZ constraint (39). We note that J_lep vanishes in the two-flavor limit θ_13 → 0, as it should, because the limit implies θ_C → 0 (ignoring V_ub), as one can see from (33). The smallness of J_lep in (38), despite the relatively large sin θ_13, means that the way of introducing the CP-violating phase δ in (27) is not quite general.
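A quick check that the relation (34) reproduces the prediction (35) over the quoted Super-Kamiokande range:

```python
import math

tan2_tc = math.tan(math.asin(0.2225)) ** 2       # tan^2(theta_C)
for s23sq in (0.34, 0.5, 0.66):                  # range for |U_mu3|^2
    print(round(tan2_tc * s23sq, 3))             # 0.018, 0.026, 0.034 -> 0.026 +- 0.008
```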
As we have shown in sec. 2.4, the induced part is proportional to V_cb, and if the CKM matrix is the only source of CP violation the resultant leptonic CP violation is extremely small. Let us consider the possibility that the value of θ_12 given in (31) is realized at a high-energy scale and diminishes when running from high to low energy scales, so that better agreement with the QLC relation is achieved at the electroweak scale. As we have discussed in sec. 2.5, a substantial renormalization effect can be obtained in the MSSM with large tan β and a quasi-degenerate neutrino mass spectrum. In this case, however, running toward low energies leads to an increase of θ_12, as follows from (24) for negligible sin θ_13. Therefore, to diminish θ_12, one needs (i) to suppress the main term given in (24), and (ii) to take into account the effect due to non-zero 1-3 mixing. The former can be achieved in the case of opposite CP parities of ν_1 and ν_2. As far as the latter is concerned, it was shown in [29] that if φ_2 − φ_1 ≈ π a decrease of θ_12 by 3°−5° can easily be achieved by running down from (10^10−10^13) GeV for θ_13 = 5°−10°. Bi-maximal mixing from charged leptons Let us assume that the bi-maximal mixing appears from the diagonalization of the charged lepton mass matrix, whereas the CKM rotation originates from the neutrino sector, Eq. (40). This possibility has been suggested in [15]; our predictions, however, differ from those obtained in [15]. Notice that in U_l the 1-2 and 2-3 rotations need to be permuted in comparison with the standard definition of the bi-maximal matrix, to produce the correct order of rotations in U_MNS. The lepton mixing matrix with the CP phase δ is given in (41). In the quark sector we assume the left rotations (42). The former relations in (40) and (42) imply the quark-lepton symmetry V_ν = V_u. This in turn can originate from the equality of the up-quark and neutrino Dirac mass matrices, m_u = m^D_ν as in (29), under the assumption (in the seesaw context) that the Majorana mass matrix of the right-handed neutrinos does not produce any additional rotations [15]. However, the latter equalities in (40) and (42) require a departure from the simple quark-lepton symmetry. They can easily be accommodated in the "lopsided" schemes [42] of the SU(5) GUT. However, the relation (29) is not explained in SU(5). In SO(10) models, which naturally lead to (29), on the other hand, the lopsided scenario requires further complications. The scenario does not appear to follow naturally from the grand unified models. Notice that the problem of equal mixings but different masses outlined in sec. 3.1 exists here as well: in the basis where m_d and m_l are diagonal, that is V_d = V_l = I, the eigenvalues of the mass matrices are different. In other words, the question is why m_d and m_l are diagonal in the same basis. Let us spell out the consequences of the lepton bi-maximal scenario. 3). The 2-3 mixing angle is determined, ignoring terms of the order |V_cb|², by Eq. (46). The second term on the RHS of (46) is small, and the relation θ_23 = π/4 − θ^CKM_23 is satisfied with good accuracy, though it is not as precise as claimed in [15]. We find 0.995 ≤ sin²2θ_23 ≤ 1.0. The deviation from maximal mixing (47) is relatively large at δ ≃ 0. 4). The Jarlskog invariant (48) is larger in absolute value than that in the neutrino scenario of sec. 3.1, but it is an order of magnitude smaller than J^max_lep (39). Hybrid scenario The maximal 1-2 and 2-3 mixings may come from different mass matrices.
To keep the correct order of these rotations in the MNS matrix (13), we have to assume that in the symmetry basis the maximal 1-2 mixing originates from the neutrino mass matrix, whereas the maximal 2-3 mixing is generated by the charged lepton mass matrix. The CKM rotation can come from the neutrinos or the charged leptons, and a mixed version is also possible; we discuss only the former two cases. In the first case, we have the CKM mixing from the neutrino mass matrix, Eq. (49). For the quarks we take the equalities (42), as in the "charged lepton" scenario. This possibility looks more appealing than the second one. A realization can be as follows. In the symmetry basis, due to the quark-lepton symmetry, we have (29), m_u = m^D_ν. This leads to the rotation which diagonalizes the neutrino Dirac mass matrix. The maximal 1-2 rotation, R^m_12, is the outcome of the seesaw mechanism; it can be generated by a pseudo-Dirac (off-diagonal) 1-2 structure of the Majorana mass matrix of the RH neutrinos [10]. As a result, the rotation matrix (49) is reproduced. For the charged leptons and down quarks one should assume the lopsided scenario with a single maximal mixing; here the quark-lepton symmetry is broken. In the second case, the CKM mixing comes from the charged leptons. Both scenarios lead to the identical MNS matrix, in which we have ignored the R^CKM_13 rotation. Below we summarize the predictions of the hybrid scenario. The QLC relation (1) is satisfied to good accuracy. The 1-3 mixing angle is very small, sin θ_13 ≈ sin θ_C |V_cb| ≈ 0.009, which corresponds to sin²2θ_13 = 3.3 × 10⁻⁴. The prediction for D_23 is almost identical to the one in the lepton bi-maximal scenario (47), but with cos θ_sun replaced by cos θ_C. The Jarlskog invariant is likewise strongly suppressed (see sec. 5). Single maximal mixing To reproduce the QLC relation (1) it is sufficient to have a single maximal mixing in the 1-2 rotation (sec. 2). In this section we discuss the three scenarios which differ by the origin of the large but not maximal atmospheric mixing. Large 2-3 mixing from neutrinos Here we relax the assumption of maximal 2-3 mixing in the neutrino scenario considered in sec. 3.1. The lepton mixing matrix is given by (27) with the replacement R^m_23 → R_23(θ^ν_23), Eq. (58). Such a possibility can be realized in the following way. Suppose that in the symmetry basis (i) the up-quark mass matrix and the neutrino Dirac matrix are diagonal, (ii) the down-quark mass matrix generates the CKM mixing, and (iii) the Majorana mass matrix of the right-handed neutrinos has the form (60), with M_12/M_33 ≥ m²_c/m²_t. Then the seesaw mechanism leads to the maximal 1-2 mixing and to an enhancement of the 2-3 mixing [43] when non-zero but small 2-3 entries are also introduced in (60). Typically the 1-3 mixing turns out to be very small, and an additional 1-3 rotation in the neutrino mixing matrix (58) can be neglected. Because of the non-maximal 2-3 mixing, the QLC relation is satisfied with slightly better accuracy than in the bi-maximal neutrino scenario of sec. 3.1. The correction to this relation reads ∆sin²θ_12 = sin 2θ_C sin²θ^ν_23 (⋯). (61) Neglecting the small δ-dependent term in (61) and using the bound on θ^ν_23, we obtain 0.034 ≤ ∆sin²θ_12 ≤ 0.079, (62) which corresponds to 2.2° ≤ θ_sun + θ_C − π/4 ≤ 5.0°. Since the scenario can accommodate the whole region of |U_µ3|² allowed by the present data, the deviation from maximal θ_23 can be large, |D_23| ≤ 0.16, which gives the opportunity for verification in the next-generation experiments.
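A check of the hybrid-scenario numbers just quoted, assuming the estimate sin θ_13 ≈ sin θ_C |V_cb| with the values sin θ_C = 0.2225 and |V_cb| ≃ 0.04 used elsewhere in the text:

```python
s13 = 0.2225 * 0.04                  # sin(theta_13) ~ sin(theta_C) |V_cb|
print(round(s13, 4))                 # 0.0089
print(4 * s13**2 * (1 - s13**2))     # sin^2(2 theta_13) ~ 3.2e-4, cf. 3.3e-4
```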
The Jarlskog invariant is enhanced by a factor of ≃ 4.6 in comparison with the bi-maximal case. Numerically, keeping the same numbers as above, we obtain J_lep = 9.1 × 10⁻³, which is the largest among the predictions of all the scenarios in this paper. This is because some of the small angles in the elements of the MNS matrix (72) are "absorbed" into the large angles, as in (74) and (75). Summary of the predictions by various scenarios We now compare the predictions of the different scenarios and discuss the prospects for disentangling them. In Table 1 we summarize the predictions for the observables obtained in the last two sections. One can see some typical features of the predictions of the various scenarios. The lepton and the hybrid scenarios are characterized by an extremely small deviation from the QLC relation, which may be unobservable experimentally. They also share the prediction of a small θ_13, which probably requires facilities beyond the superbeam experiments. These statements apply not only to the bi-maximal scenarios but also to their variations with a single maximal mixing angle. On the other hand, the predictions of the "neutrino" scenarios are markedly different. Both the bi-maximal and the single maximal cases predict a relatively large deviation from the exact QLC relation, ∆sin²θ_12/sin²θ_12 ∼ 17%. They lead to a relatively large θ_13, just below the CHOOZ limit, which will be detectable by the next-generation long-baseline and reactor experiments. The neutrino, lepton, and hybrid bi-maximal scenarios predict a deviation from the maximal 2-3 mixing of 5-7%. This prediction is lost when we modify the scenarios by allowing the 2-3 mixing to be non-maximal. There exists a relation characteristic of the neutrino scenario, |U_e3| = tan θ_C |U_µ3|, which holds independently of δ and of whether the neutrino-origin 2-3 angle is maximal or not. Similarly, in the lepton scenario there exists an analogous relation, |U_e3| = tan θ_C |U_e2|, which is again independent of whether the lepton-origin 2-3 angle is maximal or not. These represent general consequences of the neutrino- and lepton-origin bi-large mixing scenarios, and they can be tested by future measurements of θ_13 as well as by more precise determinations of θ_23 and θ_12. Throughout all scenarios, the leptonic CP violation is small: the Jarlskog invariant is smaller than the presently allowed value by a factor of ∼10. There exist simple relations between the predictions of the lepton and the hybrid scenarios, both for the deviation from the exact QLC equality and between sin θ_13 and D_23. However, it will be extremely difficult to measure the small values of θ_13 and D_23 and, consequently, to check these relations. Therefore, distinguishing between these scenarios is an open question.
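The Jarlskog statements above can be made concrete with a small numpy sketch that builds the MNS matrix in the form (9) and evaluates J_lep; the δ = π/2 evaluation gives the CHOOZ-scale maximum against which the scenario predictions are suppressed by an order of magnitude or more. The parameterization convention here is an assumption consistent with (9).

```python
import numpy as np

def pmns(th12, th23, th13, delta):
    """Standard-parameterization MNS matrix, U = R23 . U13(delta) . R12."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s23, c23 = np.sin(th23), np.cos(th23)
    s13, c13 = np.sin(th13), np.cos(th13)
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], complex)
    U13 = np.array([[c13, 0, s13 * np.exp(-1j * delta)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * delta), 0, c13]], complex)
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], complex)
    return R23 @ U13 @ R12

def jarlskog(U):
    """J_lep = Im(U_e1 U_mu2 U_e2* U_mu1*)."""
    return float(np.imag(U[0, 0] * U[1, 1] * np.conj(U[0, 1]) * np.conj(U[1, 0])))

U = pmns(np.radians(32.3), np.pi / 4, np.arcsin(np.sqrt(0.026)), np.pi / 2)
print(jarlskog(U))   # ~3.5e-2: the delta = pi/2 maximum at sin^2(theta_13) = 0.026
```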
Discussion and Conclusions To summarize, the current solar neutrino data show a precise relation between the leptonic and quark 1-2 mixing angles. The measured values of these angles sum up to π/4 accurately, such that the deviation of the central value is smaller than the experimental error at 1σ CL. The relation, referred to as the QLC (quark-lepton complementarity) relation in this paper, seems indicative of a deeper connection between quarks and leptons, the most fundamental matter to date. We have formulated general conditions under which the QLC relation is satisfied. They include: (1) the correct order of the large rotations, which imposes certain restrictions on the neutrino and charged lepton mass matrices; (2) certain restrictions on the CP-violating phases in the mass matrices; and (3) the absence of large renormalization group effects. We require that no other free parameter enter the relation between these angles; otherwise the relation implies a tuning of parameters. We explored, first, the possibility that the lepton mixing appears as a combination of maximal mixing and the CKM rotations. This led to the "bi-maximal minus CKM mixing" scenario, which has several different realizations. These realizations differ in how the maximal mixings are generated. The generic prediction of all these realizations is a very small deviation of the 2-3 mixing from the maximal one, so that if a large deviation is observed the scenario will be excluded. A natural possibility would be the neutrino origin of the bi-maximal structure. It leads to the QLC relation only at an approximate level, which is consistent with the current experimental data. This scenario can be identified by a relatively large 1-3 mixing, close to the present upper bound. In the (charged) lepton-origin and hybrid bi-maximal scenarios, the deviation from the QLC relation, the 1-3 mixing angle, and the deviation of the 2-3 mixing angle from the maximal one are all predicted to be very small. The former two features are shared by their bi-large extensions, but the last one is not. Let us make several theoretical and heuristic remarks: 1). We have considered the origin of the lepton mixing as "maximal mixing minus Cabibbo mixing". There are two problems in this context: • the origin of the maximal (or bi-maximal) mixing, • the propagation of the Cabibbo (or CKM) mixing to the leptonic sector. The latter is rather non-trivial, especially for the first- and second-generation fermions, in view of the large difference in mass hierarchies, m_e/m_µ = 0.0047 and m_d/m_s = 0.04−0.06, as well as the difference in the masses of the s-quark and the muon. The precise quark-lepton symmetry should show up in the mixings and not in the mass eigenvalues. This can be arranged rather easily in the two-generation context but is difficult to implement for the first and second families in the three-generation case [44]. So, the main problem is the propagation of the Cabibbo (or CKM) mixing from the quark sector to the lepton sector. Since the quark-lepton symmetry is broken by the masses of the quarks and leptons, one does not expect the quark mixing to be "transmitted" to the lepton sector exactly. On general grounds one would expect corrections to the mixing angle of the order given in (80), which, however, is below the present 1σ accuracy. For illustration, let us outline one possible scenario of such a propagation of mixing in the case of the neutrino origin of the maximal 1-2 mixing. (i). The first and second generations of fermions form a doublet of the flavor group and acquire masses independently of the third generation (a singlet of the group). This is required to reconcile the propagation of the Cabibbo mixing with b−τ unification. (ii). The quark-lepton symmetry leads to the approximate equality of the matrices of the Yukawa couplings for the first and second generations. To explain the difference of the muon and s-quark masses at the GUT scale one needs to introduce two different Higgs doublets, with different VEVs for the quarks and for the leptons. Notice that m_s ≈ m_µ at the electroweak (EW) scale, so that if the flavor symmetry is realized at the EW scale one Higgs doublet is sufficient.
In this case, however, the problem of flavor-changing neutral currents in both the lepton and quark sectors becomes very severe. (iii). In the basis where the Dirac mass matrices of the up-quarks and neutrinos are diagonal, the matrices of the Yukawa couplings of the down quarks and charged leptons should be nearly equal and singular, to reconcile equal mixings with the different mass hierarchies of the quarks and leptons. The singularity and the quark-lepton symmetry are broken by terms of the order m_d/m_s, and this leads to the correction given in (80). We emphasize that what is really needed for the QLC relation to hold is a single maximal mixing in the 1-2 rotation, either from the neutrino or from the lepton sector. Theoretically, a single maximal mixing can be realized much more easily. The mass matrix of the RH neutrinos can be the origin of the maximal mixing for the first and second generations, and it can lead to an enhancement of the 2-3 mixing. 2). It is not excluded that the quark-lepton connection which leads to the relation between the angles is not so direct. It may work for the Cabibbo angle only, since sin θ_C may turn out to be a generic parameter of the whole theory of fermion masses. Therefore, it may appear in various places, such as mass ratios and mixing angles. An empirical relation of this kind is in favor of this point of view. 3). One can consider some variations of the QLC equality (1). Noting that the 2-3 leptonic mixing angle measured with the atmospheric neutrinos is nearly maximal, θ_atm ≡ θ_23 ≃ π/4, we may write instead of (1) θ_sun + θ_C = θ_atm, allowing a possible extension to the case of non-maximal θ_atm. 4). Still, the QLC relation can be accidental. There is also another non-trivial coincidence, Eq. (82), where the angle θ_µτ is determined by the equality tan θ_µτ ≈ √(m_µ/m_τ). (83) Apparently, the equalities (82) and (83) have different interpretations from the QLC relation; in particular, (83) is a purely leptonic relation. 5). The most important future measurements turn out to be: (i) Precise measurements of the 1-2 leptonic mixing and further checks of the QLC relation. The accuracy of the sin²θ_sun determination must be better than 10% to discriminate the neutrino version of the scenario. (ii) Searches for a deviation of the 2-3 mixing from the maximal one, which can discriminate the whole "bi-maximal minus CKM" approach. (iii) Measurements of the 1-3 mixing angle. In conclusion, it is possible that the equality (1) is not accidental, thus testifying to a certain quark-lepton relation. The implementation of the equality naturally involves the idea that the lepton mixing appears as maximal mixing minus the Cabibbo mixing; in this sense, the quark and lepton mixings are complementary. The approach leads to a number of interesting relations between the lepton and quark mixing parameters which can be tested in future precision measurements. Acknowledgments One of us (A. Yu. S.) is grateful to M. Frigerio for fruitful discussions. This work was supported by the FY2004 JSPS Invitation Fellowship Program for Research in Japan, S04046, and by the Grant-in-Aid for Scientific Research, No. 16340078, Japan Society for the Promotion of Science.
Synergism of Antifungal Activity between Mitochondrial Respiration Inhibitors and Kojic Acid Co-application of certain types of compounds with conventional antimicrobial drugs can enhance the efficacy of the drugs through a process termed chemosensitization. We show that kojic acid (KA), a natural pyrone, is a potent chemosensitizing agent for complex III inhibitors disrupting the mitochondrial respiratory chain in fungi. Addition of KA greatly lowered the minimum inhibitory concentrations of the complex III inhibitors tested against certain filamentous fungi. The efficacy of KA synergism, in decreasing order, was pyraclostrobin > kresoxim-methyl > antimycin A. KA was also found to chemosensitize cells to hydrogen peroxide (H2O2), tested as a mimic of the reactive oxygen species involved in host defense during infection, in several human fungal pathogens and in Penicillium strains infecting crops. In comparison, KA-mediated chemosensitization to complex III inhibitors/H2O2 was undetectable in other types of fungi, including Aspergillus flavus, A. parasiticus, and P. griseofulvum, among others. Of note, KA was found to function as an antioxidant, but not as an antifungal chemosensitizer, in yeasts. In summary, KA could serve as an antifungal chemosensitizer to complex III inhibitors or H2O2 against selected human pathogens or Penicillium species, while KA-mediated chemosensitization to H2O2 seemed specific for filamentous fungi. Thus, the results indicate that strain- and/or drug-specificity exists in KA chemosensitization. Introduction The mitochondrial respiratory chain (MRC) can serve as a valuable molecular target for the control of fungal pathogens (Figure 1a). Chemical inhibitors of the MRC, such as antimycin A (AntA) or strobilurins (e.g., pyraclostrobin (PCS), kresoxim-methyl (Kre-Me), mucidin, etc.), interfere with cellular energy (e.g., ATP) production in fungi [1,2], weakening fungal viability. Coinciding with this interference is an abnormal leakage of electrons from the MRC. The escaped electrons can cause oxidative damage to vital components of fungal cells, such as chromosomes, lipid membranes, and proteins, resulting in apoptosis or necrosis [1,2] (see Figure 1b for a scheme). The antioxidant system of fungi, e.g., glutaredoxins, cytosolic or mitochondrial superoxide dismutases (Cu,Zn- or Mn-SOD), and glutathione reductase, plays a protective role in such cases, maintaining cellular homeostasis/integrity against toxic oxidative species [3,4]. Fungi can also overcome the toxicity of MRC inhibitors by expressing alternative oxidase (AOX) (Figure 1a), allowing the completion of electron flow via the MRC [5,6]; AOX is insensitive to MRC inhibitors [5,6]. Compared with other targets of conventional antifungal drugs already identified (e.g., the cell wall/membrane integrity pathway, cell division, signal transduction, and macromolecular synthesis, etc.) [8], the MRC is a relatively unexploited target in human fungal pathogens. However, the MRC has been actively used as a drug target for the control of malarial parasites, e.g., Plasmodium. For example, the antimalarial drug atovaquone disrupts the mitochondrial electron transport as well as the inner mitochondrial membrane potential (ΔΨ_m) in parasites [9]. Atovaquone is also used to treat infections by fungi such as Pneumocystis jirovecii (pneumonia) [10]. Co-application of certain types of compounds with commercial antimicrobial drugs can increase the effectiveness of the drugs through a mechanism termed "chemosensitization" [11][12][13][14].
For example, a prior study showed that the 4-methoxy-2,3,6-trimethylbenzensulfonyl-substituted D-octapeptide chemosensitized cells to the antifungal drug fluconazole (FLC), countering FLC resistance of clinical isolates of Candida pathogens, and of strains of the model yeast Saccharomyces cerevisiae overexpressing multidrug efflux pumps/drug transporter or a lanosterol 14α-demethylase (Erg11p, molecular target of FLC) [11]. Similarly, in bacterial pathogens, application of sub-inhibitory concentrations of squalamine enhanced the antibiotic susceptibility of various Gram-negative bacteria, in both antibiotic-resistant and susceptible strains [12]. Squalamine is thought to modify membrane integrity by increasing permeability of drugs [12]. Meanwhile, co-application of proguanil, which modulates mitochondria in protozoan parasites, resulted in an increased antimalarial activity of atovaquone [15]. Of note is that proguanil-based chemosensitization was specific for atovaquone, i.e., proguanil did not enhance the activities of other MRC inhibitors, such as myxothiazole or AntA [15]. Results indicate "drug-chemosensitizer specificity" exists in the process. Collectively, these studies showed that chemosensitization could ultimately lead to lowering dosages of conventional drugs necessary for effective control of pathogens. It would also lead to preventing development of pathogen resistance to conventional drugs [16]. Kojic acid (KA, Figure 2a) is a natural product of some filamentous fungi, mainly certain species of Aspergillus or Penicillium. KA is widely used as a depigmenting agent due to its ability to inhibit the activity of tyrosinase, a key enzyme responsible for melanogenesis in melanoma and melanocytes [17][18][19][20]. From a clinical perspective, KA can potentially inhibit pathogen infection since: (1) it enhances host immunity by stimulating phagocytosis, generating reactive oxygen species (ROS) in macrophages, and potentiating phytohemagglutinin-based proliferation of lymphocytes [21,22]; (2) KA or its structural derivatives directly exert antimicrobial activity against fungal/bacterial pathogens [23]. For instance, KA functions as an antifungal agent against Cryptococcus neoformans (cryptococcosis), where KA also inhibits melanin synthesis necessary for fungal infectivity [24]. We previously showed that KA could act as a chemosensitizing agent when co-applied with the polyene antifungal drug amphotericin B (AMB) or hydrogen peroxide (H 2 O 2 ) against various filamentous fungal or yeast pathogens [25]. The mechanism of antifungal chemosensitization appeared to be modulation of the function of the antioxidant system in the fungus. Noteworthy is that the degree/efficacy of KA-mediated antifungal chemosensitization was related to the kinds of fungal strain and/or drug examined [25]. This tendency is similar to the "drug-chemosensitizer specificity" found in atovaquone-mediated chemosensitization (see above). In this study, we further investigated if KA, as a chemosensitizer, could improve the activities of complex III inhibitors of MRC (i.e., AntA, Kre-Me, PCS; see Figure 2b-d for structures and 2e for scheme), and thus, possess potential as an active pharmaceutical/agroceutical ingredient, against various filamentous fungi. We included a number of human and plant pathogens, as well as model fungal strains, in our tests (see Table 1; Figure 2e). We observed that human fungal pathogens, i.e., Aspergillus fumigatus, A. 
terreus, Acremonium sp., and Scedosporium sp., were the most sensitive strains to KA-mediated chemosensitization to complex III inhibitors. Enhancing Antifungal Activity of H 2 O 2 or Complex III Inhibitors with KA against Aspergillus or Penicillium Strains: Agar Plate Bioassay Hydrogen peroxide acts similarly to host-derived ROS, as a host defense response against infecting pathogens. For example, patients with chronic granulomatous disease (CGD) experience high susceptibility to invasive infections by Aspergillus [28]. The phagocytic immune cells of CGD patients cannot induce an oxidative burst because they lack NADPH oxidase, necessary to generate superoxides, the precursor to the antimicrobial ROS H 2 O 2 [28]. Although the infecting fungi rely on their cellular antioxidant system for protection from host ROS, application of KA further enhances host immunity by stimulating phagocytosis and generation of ROS in macrophages (see Introduction) [21,22]. We previously examined KA-mediated chemosensitization to H 2 O 2 and AMB [25]. Besides disrupting fungal plasma membranes, AMB also induces fungal oxidative damage [29][30][31][32] by stimulating ROS production [33]. Thus, we surmised that the effect of KA + AMB would be similar to KA + H 2 O 2 . However, unlike with KA + AMB, chemosensitization did not occur with KA + H 2 O 2 in any of the yeast pathogens tested. We concluded that the effectiveness of KA-mediated chemosensitization was fungal strain- and/or drug-specific [25]. Since complex III inhibitors, like AMB, also trigger cellular oxidative stress in fungi (see Introduction), we also compared the effect of KA + complex III inhibitors with that of KA + H 2 O 2 in this study. Filamentous Fungi Our initial agar bioassays were performed with the human pathogenic fungi. Co-application of KA with H 2 O 2 enhanced antifungal activity against the human pathogens tested (see Table 2). For example, co-application of H 2 O 2 and KA at 5 mM, each, completely inhibited the growth of A. fumigatus AF10 (i.e., no visible germination on plates), whereas independent application of either H 2 O 2 or KA, alone, did not achieve this level of antifungal activity. A similar level of chemosensitization was also observed in other fungi tested, i.e., A. terreus, Acremonium, and Scedosporium, by KA + H 2 O 2 (Figure 3a; see Table 2 for summary). Next we found that KA-mediated chemosensitization could also be achieved with complex III inhibitors in most of the human pathogens tested (Figure 3b; Table 2). Co-application of KA with PCS markedly enhanced antifungal activity, whereas independent application of KA or PCS, alone, did not result in such a level of antifungal activity. Levels of enhancement of antifungal activity also depended upon the types of complex III inhibitors co-applied. PCS exerted the highest activity, followed by Kre-Me and AntA. Similar trends were also observed in other pathogens, such as A. terreus, Acremonium and Scedosporium (Figure 3a; see Table 2 for summary). The only exceptions were A. terreus UAB698 (no enhancement of sensitivity by KA + any of the complex III inhibitors) and A. terreus UAB673/680 (no enhancement of sensitivity by KA + AntA), respectively. Therefore, sensitivity of fungal strains to KA-mediated chemosensitization with complex III inhibitors ranged, from highest to lowest, as follows: Acremonium, Scedosporium > A. fumigatus > A. terreus. Of note is that, although human pathogens were also sensitive to KA + H 2 O 2 , the levels/degrees of their sensitivity were generally not parallel to those of KA + complex III inhibitors (see Table 2).
In Table 2, "+" denotes enhancement of antifungal activity after co-application (reduced radial growth of fungi); "++" denotes enhancement of antifungal activity after co-application (no germination of fungi); "-" denotes no enhancement of antifungal activity after co-application; footnote b marks data reported previously [25]; and "n/t" denotes not tested, due to no growth of the fungus with PCS or Kre-Me (25 μM) alone (i.e., hypersensitivity to the inhibitor alone). Agar bioassays performed on Penicillium strains, mostly plant pathogens, showed that co-application of KA with H 2 O 2 resulted in enhancement of antifungal activities of both compounds (KA and H 2 O 2 ), except P. griseofulvum 2300, P. italicum and P. glabrum, which were insensitive to this chemosensitization (Table 2; figure data not shown). KA-mediated chemosensitization was also performed using the complex III inhibitors on the Penicillium strains. Unlike the human pathogens tested, chemosensitization was more limited with the Penicillium strains, being effective only in the strains P. expansum FR2 and FR3 (both fludioxonil (FLUD)-resistant strains), P. digitatum, P. italicum and P. glabrum with KA + PCS (Table 2; PCS was the most effective complex III inhibitor in this test). Levels of strain sensitivity in decreasing order with KA + PCS were: P. digitatum > P. italicum, P. expansum FR2 > P. glabrum, P. expansum FR3. P. digitatum, P. italicum, and P. glabrum were also sensitive to KA + Kre-Me or AntA. However, Penicillium strains were generally not as sensitive to KA-mediated chemosensitization with complex III inhibitors as the human pathogens. As observed in the human pathogens, levels/degrees of fungal sensitivity to KA + H 2 O 2 were not parallel to those of KA + complex III inhibitors (see Table 2). Agar bioassays were performed on six other strains of Aspergillus, mainly plant pathogens or model strains (A. flavus: pathogenic to both plants and humans). These assays showed that co-application of KA with H 2 O 2 or complex III inhibitors resulted in no enhancement of antifungal activity of any compound tested (KA, H 2 O 2 or complex III inhibitors), except A. nidulans, which showed sensitivity to KA + PCS or Kre-Me (Table 2). In our previous study, KA-mediated chemosensitization with H 2 O 2 was not effective in any of the yeast pathogens tested [25]. Therefore, in the present study, we attempted to examine how the treatment of KA + H 2 O 2 was related to various functions of the antioxidant system of yeasts using S. cerevisiae as a model. For this study, we used a yeast dilution bioassay (see Experimental Section) and tested a wild type and four antioxidant mutant (gene knock-out) strains of S. cerevisiae as follows: (1) yap1 (Yap1p, a transcription factor, regulates the expression of four downstream genes within the antioxidant system, i.e., GLR1 (glutathione reductase), YCF1 (a glutathione S-conjugate pump), TRX2 (thioredoxin), and GSH1 (γ-glutamylcysteine synthetase) [34,35]); (2) sod1 (Cu,Zn-SOD); (3) sod2 (Mn-SOD); and (4) glr1 (Glr1p, glutathione reductase; see Saccharomyces Genome Database [27]). The corresponding gene products play key roles in maintaining cellular redox homeostasis in both enzymatic (e.g., ROS radical-scavenging) and non-enzymatic (e.g., glutathione homeostasis) aspects. Worth noting is that S. cerevisiae has also been developed as a model system for studying atovaquone resistance [36].
To our surprise, in these yeast strains, KA mainly acted as an antioxidant, but not as an antifungal chemosensitizer ( Figure 4). For example, when wild type or mutants were treated with 1 mM of H 2 O 2 alone, all yeast strains showed sensitive responses, as reflected in no growth of cells at 10 −2 to 10 −5 dilution spots ( Figure 4). As expected, yap1, which regulates the expression of four downstream genes in the antioxidant system, was more sensitive to H 2 O 2 (i.e., no growth at 10 −1 dilution spot) than any other yeast strains. However, as shown in Figure 4, co-application of KA with H 2 O 2 ameliorated the H 2 O 2 -triggered oxidative stress, resulting in enhancement of the growth of all yeasts tested. For example, the wild type showed growth recovery up to 100,000-fold dilution (the 10 −5 dilution spot), revealing this strain fully recovered from oxidative stress induced by H 2 O 2 when KA was co-applied. Additionally, the sod1, sod2 and glr1 mutants grew up to 10 −3 to 10 −4 dilution spots and yap1 grew up to 10 −1 dilution spot when H 2 O 2 was co-applied with 5 mM KA. The antioxidant capacity of KA was also commensurate with KA concentrations. Although yeast strains showed increased sensitivity to 2 mM of H 2 O 2 , similar trends of antioxidation activity by KA were also observed ( Figure 4). Thus, overall, the results indicate KA has a different effect, depending on types of fungi examined. That is, KA functions as an antioxidant in S. cerevisiae, while it acts as an antifungal chemosensitizer in certain of the filamentous fungi tested. KA may induce different transcriptional programs in S. cerevisiae than in filamentous fungi. Further studies, such as genome-wide gene expression profiling, are warranted to determine the precise mechanism of antioxidation in and/or insensitivity of yeast to KA + H 2 O 2 chemosensitization. Co-Application of KA and PCS Synergistic Fractional Inhibitory Concentration Indices (FICIs; see Experimental Section for calculations) were found between KA and PCS for most human pathogens (A. fumigatus, A. terreus, Acremonium sp., Scedosporium sp.) and A. nidulans (Table 3). Despite the absence of calculated "synergism", as determined by "indifferent" interactions [38] (Table 3), there was enhanced antifungal activity of KA and PCS (i.e., chemosensitization) in Acremonium, which was reflected in lowered Minimum Inhibitory Concentrations (MICs) of each compound when combined. However, synergistic Fractional Fungicidal Concentration Indices (FFCIs) (at the level of ≥ 99.9% fungal death) between KA and PCS occurred only in Acremonium (Table 3), indicating the KA-mediated chemosensitization with PCS is fungistatic, not fungicidal, in most strains tested. Synergistic FICIs between KA and PCS also occurred in four Penicillium strains (Table 3). Despite the absence of calculated "synergism" [38] (Table 3), there was enhanced antifungal activity of KA and PCS (i.e., chemosensitization) also in P. expansum FR3 (FLUD resistant strain), which was reflected in lowered MICs of each compound when combined. However, synergistic FFCI (at the level of ≥99.9% fungal death) between KA and PCS was not achieved in any of the Penicillium strains examined (Table 3), indicating that, as in the human pathogens/A. nidulans (see above), the KA-mediated chemosensitization with PCS is mostly fungistatic, not fungicidal, in Penicillium strains (Lowered Minimum Fungicidal Concentrations (MFCs), although not "synergistic" level, were observed in P. glabrum and P. 
italicum at the level of ≥99.8% fungal death; see Table 3). Strains Hypersensitive to Complex III Inhibitors: Testing Acremonium, Scedosporium, P. digitatum with Kre-Me KA + Kre-Me was also examined in Acremonium, Scedosporium and P. digitatum, which were the strains most sensitive to complex III inhibitors (see Table 2). We tried to determine the level of sensitivity of these strains to Kre-Me, which is less potent than PCS (see Figure 3 and Table 2). Consistently, synergistic FICIs between KA and Kre-Me occurred in all strains tested (Table 4). However, synergistic FFCIs (at the level of ≥99.9% fungal death) between KA and Kre-Me were not achieved in any of the strains examined (Table 4), while lowered MFCs for both KA and Kre-Me were observed in Acremonium (FFCI = 0.6). Acremonium sp. is the only strain with low FFCI values for both PCS and Kre-Me, i.e., 0.5 (PCS) and 0.6 (Kre-Me), respectively (see Tables 3 and 4). Results further confirmed the sensitive responses of Acremonium, Scedosporium and P. digitatum to complex III inhibitors (both PCS and Kre-Me). The results of all CLSI-based checkerboard (chemosensitization) tests (i.e., KA + PCS or Kre-Me in filamentous fungi) are summarized in Table 5. As shown in the Table, the FICIs for thirteen strains (out of fifteen strains) with PCS and for three fungi (the strains most sensitive to complex III inhibitors) with Kre-Me were synergistic. In contrast, only the FFCI for Acremonium sp. was synergistic, indicating that the KA-mediated chemosensitization with complex III inhibitors exerted mostly fungistatic (but not fungicidal) effects. Chemicals The antifungal chemosensitizing agent [kojic acid (KA)], the antifungal drugs [antimycin A (AntA), kresoxim methyl (Kre-Me), pyraclostrobin (PCS)] and the oxidizing agent [hydrogen peroxide (H 2 O 2 )] were procured from Sigma Co. Each compound was dissolved in dimethyl sulfoxide (DMSO; absolute DMSO amount: <1% in media), except H 2 O 2 , which was dissolved in water, before incorporation into culture media. In all tests, control plates (i.e., "No treatment") contained DMSO at levels equivalent to that of cohorts receiving antifungal agents, within the same set of experiments. Agar Plate Bioassay: Filamentous Fungi In the agar plate bioassay, measurement of sensitivities of filamentous fungi to the antifungal agents was based on percent (%) radial growth of treated compared to control ("No treatment") fungal colonies (see text for test concentrations) [40]. Minimum inhibitory concentration (MIC) values on agar plates were determined based on triplicate bioassays, and defined as the lowest concentration of agents where no fungal growth was visible on the plate. For the above assays, fungal conidia (5 × 10^4 CFU/mL) were diluted in phosphate-buffered saline (PBS) and applied as a drop onto the center of PDA plates with or without antifungal compounds. Growth was observed for three to seven days to determine cellular sensitivities to compounds. Interactions were defined as: "synergistic" (FICI or FFCI ≤ 0.5) or "indifferent" (FICI or FFCI > 0.5-4) [38]. Statistical analysis was based on [39]. Agar Plate Bioassay: S. cerevisiae Petri plate-based yeast dilution bioassays were performed on the wild type and antioxidant mutants (yap1Δ, sod1Δ, sod2Δ, glr1Δ) to assess effects of KA + H 2 O 2 on the antioxidant system. Yeast strains were exposed to 1 to 5 mM of KA, without or with H 2 O 2 (1 or 2 mM), on SG agar for 5 to 7 days. These assays were performed in duplicate on SG agar following previously described protocols [41].
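The checkerboard indices referenced above follow the conventional fractional-index definition; the authors' exact calculations are given in the original Experimental Section, so the snippet below is only a minimal sketch of that conventional formula and of the interpretation cutoffs quoted in the text. The function names and example MIC values are hypothetical and are not data from this study.

def fractional_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    # FICI (computed from MICs) or FFCI (computed from MFCs):
    # the sum of the fractional concentrations of the two agents in combination.
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def classify(index):
    # Cutoffs quoted in the text: synergistic if <= 0.5, indifferent if > 0.5 to 4.
    if index <= 0.5:
        return "synergistic"
    if index <= 4:
        return "indifferent"
    return "antagonistic"  # > 4 by the usual convention; not discussed in the text above

# Hypothetical example: kojic acid (agent A) and PCS (agent B), alone vs. in combination.
fici = fractional_index(mic_a_combo=2.0, mic_a_alone=16.0,
                        mic_b_combo=0.05, mic_b_alone=0.4)
print(round(fici, 2), classify(fici))  # prints: 0.25 synergistic

The same helper applies to FFCIs by substituting minimum fungicidal concentration (MFC) values for the MICs.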
Conclusions In this study, KA enhanced antifungal activities of MRC inhibitor(s) or H 2 O 2 as follows: (1) Most human pathogens tested (i.e., A. fumigatus, A. terreus, Acremonium sp., Scedosporium sp.) were sensitive to both KA + complex III inhibitors and KA + H 2 O 2 , except A. terreus UAB698 (no chemosensitization w/all complex III inhibitors tested) and A. terreus UAB673/680 (no chemosensitization w/AntA); (2) Most of the plant pathogenic Penicillium species were sensitive to KA + H 2 O 2 , except P. griseofulvum 2300, P. italicum and P. glabrum (no chemosensitization); (3) Some Penicillium species (i.e., P. digitatum, P. italicum, P. glabrum, and FLUD-resistant P. expansum FR2/FR3) were sensitive to KA + at least one of the complex III inhibitors. However, all other Penicillium species were insensitive to KA + complex III inhibitors (no chemosensitization); (4) All other Aspergillus species (i.e., A. flavus, A. parasiticus, A. oryzae, A. niger, A. ochraceous, A. nidulans) were insensitive to KA + complex III inhibitors and/or KA + H 2 O 2 (no chemosensitization), except A. nidulans, which was sensitive to KA + PCS or Kre-Me (chemosensitization). Further studies are required to determine the mechanism(s) governing the variability of these Aspergillus strains to KA-mediated chemosensitization; (5) Most compound interactions at MIC level (i.e., FICI) between KA and PCS or Kre-Me, determined by CLSI method, resulted in synergism, except Acremonium sp. (KA + PCS) and P. expansum FR3 (KA + Kre-Me), which resulted in a certain level of positive interaction between compounds, but not synergism; (6) The antifungal chemosensitizing capacity of KA appears to be fungal strain-specific (i.e., specific for certain human pathogens or Penicillium species only) as well as fungal isolate-dependent (i.e., A. terreus). KA mainly functions as an antioxidant in yeasts; and (7) Strain sensitivity to KA + complex III inhibitors or H 2 O 2 varied as follows (in decreasing order): human fungal pathogens > Penicillium species > all other Aspergillus species. In conclusion, KA, which is a relatively safe, natural compound to humans [42], shows some potential to serve as an antifungal chemosensitizing agent in combination with complex III inhibitors. This potential appears to be greatest with those filamentous fungi tested that are mainly pathogenic to humans. Chemosensitization can lower dosage levels of antifungal drugs necessary for effective control of fungi. Thus, use of safe chemosensitizing agents that selectively debilitate the fungal pathogen may be a viable approach to circumvent potential side-effects commonly associated with antimycotic therapy.
2014-10-01T00:00:00.000Z
2013-01-25T00:00:00.000
{ "year": 2013, "sha1": "7c4e4b6b9e4e96c3cd6af43fa7f96f019509f163", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/18/2/1564/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b7b50b04c048bf32dabfb44a632872cc14ee4d0a", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
16448623
pes2o/s2orc
v3-fos-license
Productivity losses associated with tuberculosis deaths in the World Health Organization African region Background In 2014, almost half of the global tuberculosis deaths occurred in the World Health Organization (WHO) African Region. Approximately 21.5 % of the 6 060 742 TB cases (new and relapse) reported to the WHO in 2014 were in the African Region. The specific objective of this study was to estimate future gross domestic product (GDP) losses associated with TB deaths in the African Region for use in advocating for better strategies to prevent and control tuberculosis. Methods The cost-of-illness method was used to estimate non-health GDP losses associated with TB deaths. Future non-health GDP losses were discounted at 3 %. The analysis was conducted for three income groups of countries. One-way sensitivity analysis at 5 and 10 % discount rates was undertaken to assess the impact on the expected non-health GDP loss. Results The 0.753 million tuberculosis deaths that occurred in the African Region in 2014 would be expected to decrease the future non-health GDP by International Dollars (Int$) 50.4 billion. Nearly 40.8 %, 46.7 % and 12.5 % of that loss would come from high- and upper-middle-income countries, lower-middle-income countries and low-income countries, respectively. The average total non-health GDP loss would be Int$66 872 per tuberculosis death. The average non-health GDP loss per TB death was Int$167 592 for Group 1, Int$69 808 for Group 2 and Int$21 513 for Group 3. Conclusion Tuberculosis exerts a sizeable economic burden on the economies of the WHO AFR countries. This implies the need to strongly advocate for better strategies to prevent and control tuberculosis and to help countries end the epidemic of tuberculosis by 2030, as envisioned in the United Nations General Assembly resolution on Sustainable Development Goals (SDGs). Electronic supplementary material The online version of this article (doi:10.1186/s40249-016-0138-5) contains supplementary material, which is available to authorized users. Background The World Health Organization (WHO) estimates that the total number of deaths from tuberculosis (TB) worldwide was 1.514 million in 2014 [1]. Almost half of those deaths were from the WHO African Region. Approximately 21.5 % of the 6 060 742 TB cases (new and relapse) reported to the WHO in 2014 were in the African Region [1]. According to the WHO, TB is intimately linked to poverty, and the control of TB is ultimately a question of justice and human rights [2]. Failure to control TB (and other poverty-related diseases) is a consequence of the significant inequities in the distribution of wealth and health care both within and between countries [3][4][5][6]. In the African Region, the situation is exacerbated by the relatively high incidence and prevalence of coinfection of HIV/AIDS and TB and the growing problem of mycobacterial drug resistance [7][8][9][10]. The majority of TB deaths could have been prevented if the available preventive and treatment interventions were universally accessible to all those in need. Unfortunately, the coverage of those interventions is suboptimal in the African Region. For example, the BCG (Bacillus Calmette-Guérin) immunisation coverage among infants (aged 1 year) is between 50 and 70 % in 4 countries, 71-90 % in 17 countries, and 91 % and above in 26 countries [11]. The case detection rate for all forms of TB was 52 %, and the treatment success rate for new tuberculosis cases was 81 % [12].
In the absence of an effective vaccine for older ages, efforts to control the spread of TB will continue to rely on early diagnosis, directly observed therapy (DOTS) and public health infection control measures. The prevention and control of TB is hampered by poor living conditions for vulnerable population groups and weak national health systems [12]. The national health systems lack capacities to assure universal access to TB prevention and control services for all those in need [13][14][15][16]. The situation calls for strong evidence-based advocacy for increased domestic and external investments into the fight against TB. One such piece of evidence is the economic burden of TB. A retrospective cost-of-illness study in the United States estimated the 1991 direct expenditures for TB-related diagnosis and treatment to range from $515.7 million to $934.5 million [17]. Miller et al. estimated that in 2002, the 108 confirmed TB cases in Tarrant County (Texas, USA) cost a total of US$40 574 953 [18]. Rajbhandary et al. estimated the mean direct cost of treating a multi-drug-resistant (MDR-TB) patient in the United States to be US$45,000 [19]. Atun et al. estimated the mean cost of managing TB in Russia over 12 months to be US$572 per case [20]. Fløe et al. estimated the direct cost per TB patient to be €10 509 in Denmark [21]. Kik et al. also reported cost estimates for TB patients [30]. The Foster et al. study in South Africa estimated the mean total pre-treatment and treatment direct plus indirect costs incurred by respondents in accessing health care during TB diagnosis and treatment to be US$324.07 [31]. Except for Peabody et al. [27], none of the other studies included the economic losses due to premature tuberculosis-related mortality. In addition, to the best of our knowledge, no study has attempted to estimate the combined economic losses due to premature tuberculosis-related mortality for all 47 countries of the WHO African Region. Therefore, there is a dearth of evidence in the African Region on the economic burden of TB for use in advocacy for increased domestic and external investments in strengthening the national and local health systems to combat the spread of TB. This paper attempts to answer the following question: What is the impact of TB deaths on the future non-health gross domestic product (GDP) in the WHO African Region? The specific objective of this study was to estimate the future GDP losses associated with TB deaths in the African Region to advocate for better strategies to prevent and control tuberculosis. Cost-of-illness framework Tuberculosis deaths result in future losses in the macroeconomic outputs of the countries concerned, through attrition of future labour and productivity as well as erosion of investments in human and physical capital formation [32]. In this paper, we employ a cost-of-illness model to estimate the non-health GDP losses attributable to TB-related deaths in the African Region. Nattrass et al. [33] define GDP as the value of the aggregate spending on all final goods and services. GDP is the sum of private household consumption spending on final consumer goods (e.g., food, cloth, books, detergents) and services (e.g., health, education, tourism); central, regional and local government consumption spending on salaries and wages of civil servants and goods; private and public sector producers' investment spending on additional physical stock of capital (e.g., machinery, construction, vehicles) plus changes in the total value of inventories (unsold stocks); and net exports (i.e., exports minus imports).
Private consumption is the use of goods and services to directly satisfy an individual's personal needs and wants [33]. Private consumption spending is funded from incomes earned by employees and self-employed people (e.g., farmers, entrepreneurs), and thus premature death of workers or self-employed people from tuberculosis (or any other cause) depletes household income and consumption. Death of those aged 0-14 years diminishes the size of the future labour force and hence future household income and consumption. Government consumption spending is financed largely through revenues from various forms of taxes, such as personal income tax, value-added tax, social security taxes, corporate taxes, and taxes on international trade and transactions [34]. Premature mortality due to tuberculosis (or any other cause) reduces the number of current and future tax payers and hence the tax revenues available for government consumption spending and investment. Investment spending is financed by savings, i.e., loanable funds [33,34]. Once again, premature death from TB erodes a household's current and future income and savings needed by investors. At times, the bereaved are forced by circumstances to sell assets and spend savings to pay for funeral expenses. Agriculture is the main source of income and employment for the 62 % of the African Region population that lives in rural areas. In 2013, agriculture (including crops, forestry, hunting and fishing, and livestock production) contributed 14.7 % to the GDP of Sub-Saharan Africa. However, the contribution varies from 2.3 % in South Africa to 58.2 % in the Central African Republic. Of 41 countries reporting, agriculture contributed less than 10 % in 12 countries; 10-30 % in 16 countries; and 31-60 % in 13 countries [35]. Premature TB deaths would be expected to impact negatively on agricultural and other sectors' productivity. According to WHO [32], the key ways through which tuberculosis deaths impact macroeconomic output include increased health expenditure, losses in labour and productivity, and reduced investment in human and physical capital formation. This study uses a macroeconomic, or societal, perspective. The study's scope is limited to market economy losses (GDP), its quantity of interest is the impact of tuberculosis deaths on non-health components of GDP, and its estimation method is the cost-of-illness model capturing the effects across all sectors of the economy [36]. The non-health GDP loss (NHGDPLoss) associated with tuberculosis deaths in a country is the sum of the potential non-health GDP losses due to tuberculosis deaths among those aged 0-14 (NHGDPLoss 0-14), those aged 15-59 (NHGDPLoss 15-59) and those aged 60 years and above (NHGDPLoss 60+). Economic losses among the three age brackets were estimated to facilitate comparisons and to provide information for use in advocacy for increased investments against tuberculosis and the growing challenge of antimicrobial resistance in the region. The non-health GDP loss associated with tuberculosis deaths among persons of a specific age group is the product of the total discounted years of life lost, the per capita non-health GDP in purchasing power parity (PPP) and the total number of tuberculosis deaths. Each country's discounted total non-health GDP loss attributable to tuberculosis deaths was estimated using eqs. (1), (2), (3) and (4) presented below [37].
where 1/(1 + r)^t is the discount factor; r is the rate of discount of future losses; Σ from t = 1 to n is the summation from the first to the final year of life lost; t is the first year of life lost, and n is the final year of the total number of years of life lost per tuberculosis death, which is obtained by subtracting the average age at death (AAD) for tuberculosis-related causes from each country's average life expectancy at birth. NHGDPPC Int$ is the per capita non-health gross domestic product in purchasing power parity (PPP), which is obtained by subtracting the per capita total health expenditure (PCTHE) from the per capita GDP (GDPPC Int$). TBD 0-14 is the total tuberculosis deaths between the ages of 0-14 years in country k in 2013; TBD 15-59 is the total tuberculosis deaths between the ages of 15-59 years in country k in 2013; and TBD 60+ is the total tuberculosis deaths for ages 60 years and above in country k in 2013. We used 2013 as the base year to which losses occurring in future years were discounted. As explained by Kirigia [38], Drummond et al. [39] and Curry and Weiss [40], the discount factor applied to the GDP losses of different years then depends on both the discount rate (r) and the number of years (t) over which the discounting is conducted. The non-health GDP per capita in purchasing power parity for each of the 47 countries in the WHO African Region was calculated by subtracting the per capita total health expenditure from the per capita GDP. Illustration of calculation of loss in total non-health GDP The example below presents a calculation of the tuberculosis death-related loss in non-health GDP using the actual information on Nigeria: Sensitivity analysis A discount rate of 3 % was used because it is commonly used in cost-of-illness studies [41,42], burden of disease studies [43,44] and WHO health systems' performance assessment [45]. However, a one-way sensitivity analysis was conducted at 5 and 10 % discount rates to test the effect of the discount rate on the overall total expected non-health GDP loss estimate. The study used 7 years (a simple average) as the average age at death for the 0-14 age bracket; 37 years for the 15-59 age bracket; and 60 years for the 60 years and above bracket. Because the legal minimum working age limit is 15 years [46], we considered only the years above 14 years when calculating the productive years of life lost for the 0-14 age bracket. A sensitivity analysis was conducted to determine the effect of age on the overall total non-health GDP loss estimate. The model was re-estimated assuming an average age at death of 0 years for the 0-14 age bracket; an average age at death of 15 years for the 15-59 age bracket; and each country's average life expectancy as the average age at death for the age bracket of 60 years and above, while simultaneously assuming the African Region's maximum life expectancy of 75 years (i.e., the life expectancy for Cape Verde).
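Because equations (1), (2), (3) and (4) do not reproduce cleanly here, the following minimal sketch restates, under stated assumptions, the calculation described above: for each age bracket, the number of TB deaths is multiplied by the per capita non-health GDP (per capita GDP minus per capita total health expenditure) and by the discounted years of life lost between the average age at death and the life expectancy at birth. The function names and all input values are hypothetical; they are not the Nigeria illustration or any country's actual data, and the study's additional restriction of the 0-14 bracket to years above age 14 is noted but not implemented.

def discounted_years(years_lost, rate=0.03):
    # Sum of the discount factors 1/(1 + r)**t for t = 1..n,
    # where n is the number of years of life lost per death.
    n = int(round(years_lost))
    return sum(1.0 / (1.0 + rate) ** t for t in range(1, n + 1))

def age_group_loss(deaths, avg_age_at_death, life_expectancy, nhgdp_per_capita, rate=0.03):
    # Loss for one age bracket: deaths x per capita non-health GDP x discounted years lost.
    # (The study additionally counts only years above age 14 for the 0-14 bracket.)
    years_lost = max(life_expectancy - avg_age_at_death, 0)
    return deaths * nhgdp_per_capita * discounted_years(years_lost, rate)

# Hypothetical country: per capita GDP Int$5 000, per capita total health expenditure Int$300,
# life expectancy at birth 62 years; (deaths, average age at death) for each bracket.
nhgdp_pc = 5000 - 300
brackets = {"0-14": (2000, 7), "15-59": (10000, 37), "60+": (1000, 60)}

for rate in (0.03, 0.05, 0.10):  # the base-case rate plus the sensitivity-analysis rates
    total = sum(age_group_loss(d, age, 62, nhgdp_pc, rate) for d, age in brackets.values())
    print(f"discount rate {rate:.0%}: Int${total:,.0f}")

As in the sensitivity analysis above, raising the discount rate lowers the estimated loss because future years of forgone output are weighted less heavily.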
Data sources and analysis The data used to estimate eqs. (1), (2), (3) and (4) were obtained from the following sources: the life expectancy at birth data were taken from WHO World Health Statistics 2015 [12]; the proportions of deaths occurring in the three age groups were from the WHO mortality and burden of disease estimates for WHO member states in 2008 [43]; the total tuberculosis deaths were taken from the WHO World Tuberculosis Report 2015 [1]; the per capita gross domestic product in purchasing power parity (PPP) values were from the International Monetary Fund database [47]; and the per capita total health expenditure data were from the World Health Statistics 2015 [12]. The formulas in eqs. (1), (2), (3) and (4) were used to estimate the non-health GDP losses and were built in an Excel spreadsheet. For the analysis, the countries were organised into three economic groups, as shown in Table 1, with high- and upper-middle-income countries in Group 1, lower-middle-income countries in Group 2 and low-income countries in Group 3. The calculation for the countries by income group was meant to facilitate comparisons. Ethical clearance The study did not require WHO/AFRO Ethics Review Committee approval because it did not involve human subjects. It relied entirely on data from published sources. Table 2 presents the WHO African Region's population and tuberculosis deaths by economic group in 2014. Of the total of 753 423 tuberculosis deaths that occurred, 16.26 % belonged to the high- and upper-middle-income countries (Group 1), 44.73 % to the lower-middle-income countries (Group 2) and 39.01 % to the low-income countries (Group 3). Non-health GDP loss attributable to tuberculosis deaths The 0.753 million tuberculosis deaths that occurred in the African Region in 2014 would be expected to decrease future non-health GDP by Int$50,382,574,953 (Table 3). Nearly 40.8 % of the loss would be represented by Group 1 countries, 46.7 % by Group 2 and 12.5 % by Group 3. The interquartile range of the country-level GDP losses is Int$440,387,653. The potential loss of future discounted non-health GDP would vary widely, from Int$0 in Seychelles to Int$18.84 billion in Nigeria. Non-health GDP loss in Group 1 countries The 122 526 TB deaths in Group 1 countries are expected to result in a total loss of Int$20 534 328 490 in non-health GDP in 2013, which is equivalent to 1.36 % of the group's total GDP. The total productivity loss varied widely, from Int$0 in Seychelles to Int$16.6 billion in South Africa. Figure 1 displays the distribution of the total non-health GDP loss across the nine high- and upper-middle-income countries in Group 1. Approximately 81 % of the expected loss in Group 1 was represented by South Africa. Figure 2 presents the distribution of the total non-health GDP loss across the 13 lower-middle-income countries in Group 2. Approximately 80.1 % of the loss in Group 2 was represented by Nigeria. Non-health GDP loss in Group 3 countries The 293 888 TB deaths that occurred among Group 3 countries in 2013 resulted in a total expected loss in non-health GDP of Int$6 322 410 528, which is equivalent to 0.91 % of the group's total GDP. The expected loss varied from Int$1.5 million in Comoros to Int$2.54 billion in Tanzania. Figure 3 shows the distribution of the total non-health GDP loss across the 25 low-income countries in Group 3. The Democratic Republic of the Congo (DRC), Ethiopia, Madagascar, Mozambique and Tanzania collectively incurred 76.8 % of the expected loss in this group.
In spite of the fact that Group 3 TB deaths were 2.4 times those of Group 1, the non-health GDP loss of Group 1 was 3.2 times that of Group 3 because Group 1 had a higher per capita GDP. The average non-health GDP loss per TB death in Group 1 was slightly more than two times that for Group 2 and about eight times that for Group 3. Sensitivity analysis Employing a 5 % discount rate reduced the total expected non-health GDP loss by Int$9.703 billion. The use of an average age at death of 0 years for the 0-14 age bracket; 15 years for the 15-59 age bracket; and each country's average life expectancy as the average age at death for the age bracket of 60 years and above, while simultaneously assuming the region's maximum life expectancy of 75 years, raised the total non-health GDP loss by Int$28.4 billion, which is a 56.3 % increase. This also increased the average non-health GDP loss per TB death by Int$37,649. Because the non-health GDP loss also seems to partially depend on the average age used for the onset of TB deaths, there is a need for epidemiological research into the age distribution of TB deaths. Discussion The estimated total expected non-health GDP loss ascribed to TB deaths of Int$50.4 billion is approximately 1.37 % of the collective GDP of the 47 WHO African Region member states. This estimate signifies the expected loss in potential GDP in the future from the 753,423 TB deaths, revalued relative to the base year of 2013. The sensitivity analysis revealed that the size of the total non-health GDP loss partially depends on the discount rate used and the average age used for the onset of TB deaths. The latter implies that there is a need for epidemiological research into the age distribution of TB deaths. The Group 3 (low-income) countries are home to 51.4 % of the African Region population, incurred 39 % of TB deaths, and bore only 12.5 % of the non-health GDP losses associated with TB deaths in the region. On the other hand, even though Group 1 (high-income and upper-middle-income) countries have only 13.1 % of the regional population and incurred only 16.3 % of TB deaths (probably due to better living and working conditions), they bore 40.8 % of the non-health GDP losses associated with TB in the region. This is attributed to the fact that the Group 1 per capita income of Int$9,257 is about eight times that of Group 3 countries (Int$1,131). This implies that even though the TB disease burden is lower in Group 1 vis-a-vis Group 3, this should not be a reason for complacency, because the negative impact on Group 1 economies is quite sizeable. As mentioned in the Background, there is a worldwide paucity of studies that estimate the economic losses due to premature mortality from TB. Peabody et al. estimated the combined annual income loss due to TB morbidity and premature mortality to be US$145 million in the Philippines in 1997, of which US$32 million (22.1 %) was attributed to premature mortality [27]. Hickson estimated the magnitude of the decline in the mortality and morbidity burden of TB in England and Wales at 104,425 life years, valued at US$127 billion. Out of the latter loss, $71 billion (55.9 %) was attributed to TB mortality [48]. The median GDP loss per country in the African Region was Int$140.4 million in 2013, which confirms that premature mortality from TB lowers a country's GDP.
Cognizant of the correlation between health and economic development, the UN General Assembly in 2015 adopted a development agenda whose sustainable development goal (SDG) 3 focuses on ensuring healthy lives and promoting well-being for all people at all ages [49]. Target 3.3 focuses on ending the epidemics of AIDS, tuberculosis, malaria and neglected tropical diseases and combating hepatitis, water-borne diseases and other communicable diseases by 2030. The Sixty-Seventh World Health Assembly resolution, WHA67.1, adopted the global strategy and targets for TB prevention, care and control after 2015 [50]. The strategy provides detailed guidance to member states on key interventions for eliminating TB by 2035, some of which include the following: early diagnosis and treatment using DOTS; treatment of all people with multi-drug-resistant TB; and antiretroviral therapy for HIV-positive TB patients, together with tuberculosis/HIV collaborative activities [51]. One may ask whether those interventions are economically viable. Korenromp et al. [52] projected that in the African Region, the cost of diagnosing and treating one TB patient under DOTS would be US$503, with an additional US$4 315 incurred if the patient has multi-drug-resistant (MDR) TB. A further US$236 (in 2010 prices) would be incurred if the patient is HIV-positive and receives antiretroviral therapy (ART) for the duration of a 6-month DOTS course. As shown in Table 5, if we inflate those 2010 costs by 3 % per year over a period of 3 years (to 2013) and sum them, then we obtain a total cost of US$1 975 738 381, which when discounted at 3 % comes to $1 918 192 602. Dividing the GDP loss (which is potential savings) of $50.4 billion by the cost of TB interventions ($1.92 billion) yields a benefit-cost ratio (BCR) of 26.2. This means that policymakers can expect $26.2 in benefits for every $1 invested in the three TB interventions. Therefore, since the BCR is greater than 1, the benefits outweigh the costs and the investment in the three interventions for TB patients should be considered worthwhile. The sizeable economic losses attributable to premature TB-related mortality imply an urgent need for governments (in collaboration with the Regional Economic Communities, the private sector, the civil society, Global Health Initiatives and development partners) to fully implement the global End TB Strategy to eliminate premature mortality from TB. The full implementation of the strategy to curb the TB disease burden and attenuate the related economic losses has high-level political support contained in the decisions and resolutions on TB from the Organization of African Unity/African Union [53][54][55][56], the WHO Regional Committee for Africa [57][58][59][60], the World Health Assembly [50,[61][62][63][64] and the United Nations General Assembly [65,66]. Limitations of the study This study has a number of limitations. First, it focuses only on the effects that TB-related premature mortality has on the economy. It does not include the cost of absence from work and reduced labour performance/productivity due to prolonged periods of sickness. We omitted direct costs, including the health care costs of treating ordinary TB cases and those resulting from longer hospital stays for individuals with resistant infections, e.g., MDR-TB and XDR-TB.
Second, the GDP per capita gives no indication about how available resources are distributed across people and households. For instance, the average income per capita might remain unchanged while the distribution of income changes, which has implications for the typical household [67]. Third, the GDP only captures economic activities associated with market transactions. Its calculation omits the value of full-time homemakers (domestic labour). For example, the value of labour of women who choose to stay at home doing house work and raising children is omitted [68]. Fourth, GDP does not include the cost of production or consumption processes externalities such as pollution, environmental degradation and costs of substance abuse (e.g., alcohol, smoking) [68]. Finally, loss of human life due to tuberculosis has an effect on the well-being of the bereaved family members that goes well beyond the loss of incomes to which it gives rise [69]. Some of that effect includes psychological pain of losing a loved one; the stress and anxiety of losing a caretaker or a breadwinner; and negative impact on the children's nutrition status and education when a parent dies. Conclusion This paper sought to contribute to the literature on the economic burden of TB. The 47 WHO African Region Member States lost 1.37 % of their combined GDP due to the 753 423 TB deaths in 2014. That is a sizeable loss in a Region where 47 % of the population lives on less than one international dollar per day [12]. Approximately 75.86 % of the loss was represented by those aged 15-59 years, which is the most productive age bracket. The fact that a premature mortality resulting from TB lowers the GDP implies that the governments of African countries in collaboration with the Regional Economic Communities, private sector, the civil society, Global Health Initiatives and development partners ought to support full implementation of the Global End TB Strategy. The economic evidence contained in this paper is only one argument for the universal coverage of public health interventions to end morbidity and premature mortality from TB. The literature is replete with other arguments such as the contagious nature of TB and its threat to global health security [70], the growing burden of MDR-TB and XDR-TB [1,71], comorbidity of HIV and TB [72], sub-optimal performance of national TB programmes [73] and human rights (social justice) considerations [74]. Additional file Additional file 1: Multilingual abstract in the six official working languages of the United Nations (PDF 330 kb)
2018-04-03T06:11:49.038Z
2016-06-01T00:00:00.000
{ "year": 2016, "sha1": "c4a671b2ba010f7a85bdf70216032877ea3dc0c4", "oa_license": "CCBY", "oa_url": "https://idpjournal.biomedcentral.com/track/pdf/10.1186/s40249-016-0138-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c4a671b2ba010f7a85bdf70216032877ea3dc0c4", "s2fieldsofstudy": [ "Medicine", "Economics" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
235889214
pes2o/s2orc
v3-fos-license
Drawing Normal Curves: A Visual Analysis of Feedback in Writing-To-Learn Assignments in an Introductory Statistics Course for Community College Students Writing-to-learn benefits students by polishing their communication skills and cultivating a deeper understanding of statistical concepts. A series of writing-to-learn activities was given to introductory statistics students at a community college in the Rocky Mountain region of the United States. Historically, research on the teaching and learning of statistics has been performed on undergraduates while overlooking the experiences of community college students in learning statistics. A total of 79 students completed the feedback instrument over the course of three semesters (Summer 2017, Fall 2017, and Spring 2018). The feedback instrument included three Likert scale questions, two open-ended questions and a prompt to draw their feelings about the writing assignments and the statistics course. Research suggests that drawings are a creative and novel form of collecting student feedback. Data were analyzed using descriptive statistics where appropriate, thematic analysis was used to evaluate written responses, and visual thematic analysis was performed on the drawings. Findings are useful to introductory statistics instructors and statistics education researchers in understanding students' experience with writing-to-learn assignments, as the responses provide insight into and feedback on the assignments, including their drawbacks. Introduction In order to improve writing and critical thinking among students in the 1970s, the Writing Across the Curriculum movement was implemented by colleges and universities (Bazerman et al., 2005). By the 1980s, the movement had reached disciplines such as mathematics and statistics (Woodward et al., 2019). Hayden (1989) wrote about the usefulness of utilizing writing to teach statistics. The author concluded that the students in the statistics course were simply "tossing a coin" when responding to statistical problems; thus, teaching them computational skills was not a good use of instructional time. Instead, Hayden focused on evaluation and interpretation as part of this new teaching approach, which included the introduction of writing assignments to assess the comprehension of statistical concepts. Nowadays, focusing on the evaluation and interpretation of statistics is common practice in statistics courses, particularly those taught within disciplines outside of mathematics, such as sociology, psychology, and nursing, among other fields. Statistical literacy is an important component of statistics education, as a basic understanding of data and figures is necessary to make sense of everyday life in the form of public health figures, educational statistics, and budget predictions. A number of papers exist focused on developing statistical literacy through writing (Delcham & Sezer, 2010; Goenner & Snaith, 2003; Johnson, 2016; Parke, 2008; Smith et al., 1992; Woodward et al., 2019). Yet, few of them have assessed the students' perspective of completing these activities (Smith et al., 1992). In this study, I implemented a writing-to-learn assignment among community college students in an introductory statistics course, focusing specifically on students' feedback on the activity through student-generated drawings along with open-ended responses to gain the students' perspective on writing-to-learn activities.
Statistical Literacy As the interpretation of data and statistics continues to shape the world, cultivating statistical literacy remains an important goal in quantitative courses. For example, the most recent Guidelines for Assessment and Instruction in Statistics Education report (Guidelines for Assessment and Instruction in Statistics Education, 2016) states that "interpretation of results should be emphasized in statistics education for statistical literacy" (p. 9). Gal (2002) defined statistical literacy broadly as a two-fold concept: (a) the ability to interpret and evaluate statistical concepts and (b) learning to communicate the results of a statistical process. Likewise, Ziegler and Garfield (2018) define statistical literacy as "the ability to read, understand, and communicate statistical information" (p. 162). Ziegler and Garfield's definition of statistical literacy does not apply only to students; such reasoning and understanding can be used by anyone. Engel (2017) stated that any individual "empowered to study evidence-based facts and that has the capacity to manage, analyze and think critically about data is the best remedy for a world that is guided by fake news or oblivious towards facts" (p. 45). For example, the comprehension and interpretation of unemployment records, public health figures, and education statistics all rest on the assumption that good statistical literacy has been instilled during high school and college. The "enlightenment" of individuals begins with statistical and quantitative reasoning (Engel, 2017, p. 45). Benefits and Guidelines for Writing in Statistics A number of organizations support this instructional method, and scholars have focused on creating multiple activities to implement writing in mathematics and statistics (Johnson, 2016; Woodward et al., 2019). The use of written assignments in an introductory course in statistics targets many of the goals set by GAISE (2016), including statistical literacy. Briefly, the goals of GAISE (2016) are that (a) students should become critical consumers of statistical information in the media, (b) students should recognize the appropriate statistical procedure for a particular question, (c) students should be able to produce graphical descriptive information, (d) students should be able to understand and explain variability, (e) students should understand the use of statistical models, (f) students should understand the concept of statistical inference, (g) students should gain experience with technology used in statistics, and (h) students should be aware of ethical issues in statistics. Likewise, the Association for Psychological Science (APS) published guidelines on how to incorporate writing into the teaching of statistics in its Teaching Tips feature, encouraging faculty to integrate the technique in their teaching and highlighting three major aspects: (a) writing to minimize anxiety, (b) writing to deepen conceptual understanding, and (c) writing to develop statistical thinking and reasoning skills (Holmes, 2012). Similarly, the Principles and Standards for School Mathematics recommended the use of written assignments to assess students (National Council of Teachers of Mathematics, 2000). The interest of these organizations in incorporating writing into statistics shows that it is an important interdisciplinary goal for instilling statistical literacy.
Within the statistics education literature, multiple researchers have shared the numerous benefits to incorporating writing in statistics to both student and instructors. Researchers and educators have studied the writing-to-learn method in the past which consists of assigning either a prompt related to statistical (or mathematical) concepts to which students can explain the significance of a concept, the reasoning on how to solve a problem, or the use of a certain technique (Johnson, 2016;Radke-Sharpe, 1991;Smith et al., 1992). Supporting Hayden's (1989) argument, Shibli (1992) stated that the use of writing prevents students from falling into the trap of memorizing the formulas and forces students to articulate their thought process which results in "better internalization" (p. 126). The benefits of the writing-to-learn method are numerous and include (a) improving writing skills, (b) internalization and conceptualization of the statistics material, (c) encouraging creativity, and (d) improving communication skills regarding methodology and drawing conclusions (Johnson, 2016;Radke-Sharpe, 1991). Furthermore, the writing-to-learn assignments are also useful to instructors allowing them to glimpse the thought process of students. For example, instructors were able to follow the decision-making process of students when checking for statistical assumptions of a test (Woodward et al., 2019). While the writing-to-learn process encourages creativity, it may also cause difficulties for the instructor when it comes to creating a rubric and grading. However, the existing literature of writing-to-learn assignments primarily focuses on applications of writing-to-learn with little emphasis on students' perspectives, in addition to the sharing of activities for other instructors to implement. Next, I will review the available writing-to-learn focusing specifically on the field of statistics. Research on Writing-to-Learn in Statistics Researchers and educators have executed a variety of action plans to implement the writing-to-learn method in statistics courses as a low-stakes assignment in a variety of populations (i.e., undergraduate, graduate) as well as content areas (i.e., mathematics, psychology statistics, business statistics). For example, Smith et al. (1992) conducted a survey research study in undergraduate business statistics courses to examine if the writing exercises improved students' understanding of the content and their attitudes toward the writing-to-learn activities. The authors assigned prompts to the students throughout the course of the semester which were graded for completion. The author examined the descriptives (i.e., means and standard deviation) of a feedback survey, along with open-ended responses from the students regarding their attitudes toward the writing-to-learn exercises. The authors found no correlation between GPA and the students' perception of the value of the assignments. In a different observational study Stromberg and Ramanathan (1996) studied the use of peer evaluation. Students were responsible for reviewing the statistical content of an article from a newspaper or magazine and then engaged in peer evaluation of their written work during class. The authors concluded that the activity addressed one of the key points of why students did not perform well in written assignments such as failing to read the assignments' instructions correctly. 
Additionally, the authors empirically compared the grades of students with and without the peer evaluation activity, finding that the students who engaged in the peer evaluation activity had higher grades on the written assignments. Next, Parke (2008) incorporated a similar activity but focused on graduate students, who engaged in student-guided discussion of the statistical content and reporting of journal articles. The students' own reports were compared to those of students who did not engage in such activities. Parke developed a list of elements that the students should be able to describe in their own writings. This list included items such as "mentioned the independent variable" and "included the t-value and the associated degrees of freedom." As anticipated, when comparing the groups, students who did engage in the instructional approach had higher percentages of correctly including the elements in the list. A common approach to writing-to-learn assignments is to create multiple small-scale assignments throughout the semester. Goenner and Snaith (2003) applied small-scale writing-to-learn activities in a business statistics course, focusing on data analysis and the development of business memos. Goenner and Snaith's paper primarily focuses on sharing the activity so that other instructors can utilize it in their own courses; additionally, the authors stress how it can help incorporate writing into statistics, though it can leave the instructor overburdened by the amount of grading. Lastly, Delcham and Sezer (2010) utilized staged writing assignments throughout an introductory course, leading to a final paper. The writing assignments implemented focused on a variety of topics such as critical thinking and comparing and contrasting. Anecdotally, the authors concluded that the staged written assignments gave the instructor a "critical insight into student learning and allow[ed] them to make timely instructional additions and adjustments" before students completed the final paper (p. 512). Most recently, Woodward et al. (2019) reviewed a four-step process of implementing writing in statistics. The idea was to have students answer a prompt in the context of the statistics course, state the relevant facts and implications, and finally explain how these lead to the statistical conclusion. The authors believe that the four-step process allowed the instructor to assess all the processes that lead to statistical literacy. Like previous authors, Woodward et al. shared the activities used in the course so that instructors can make use of them. Though not tested empirically, the authors believe the assignments were useful tools for the instructor to gain insight into the students' thought processes. The majority of the available literature focused on a variety of populations in a university setting (e.g., undergraduate, graduate) in addition to different fields (e.g., business, psychology, mathematics). Researchers focused on having students examine their own writing of statistical content, having students critique or review the statistical content of available articles or news pieces, or giving students a writing prompt (Smith et al., 1992; Woodward et al., 2019). In many instances, the researchers made the writing-to-learn activities low-stakes, with students receiving credit for participating in the activity (Smith et al., 1992; Stromberg & Ramanathan, 1996).
Finally, it is important to note that there is a wide range of approaches within the available literature. In general, the available literature focuses on sharing the activity, comparing students' improvement in writing to a control, identifying pre-post writing improvement, and viewing the success of writing-to-learn through the lens of the instructor (Hayden, 1989; Woodward et al., 2019). While participants in these studies showed improvement in their understanding and learning of statistics, few of the studies focused on the participants' perspective of the activity. Thus, I emphasized this in the present research by utilizing a qualitative approach, which is well suited to giving voice to the participants. Similar to research by Pitt (2017), who successfully assessed students' perspectives on class assignments utilizing a visual method such as drawing, I set out to collect student feedback on the writing-to-learn activities through a combination of drawings, open-ended questions, and three Likert items.

Visual Research

Visual research in the social sciences has become a popular research tool within the last decade (Emmison et al., 2012; Forrester & Sullivan, 2018). Emmison et al. (2012) proposed a participant-centered approach focused on actively involving research subjects when conducting visual research. Further, Pitt (2017) suggested that drawing as a method can be used for qualitative research as well as a teaching tool, advocating for drawing as a data collection technique within teaching. Pitt states that drawing gives understanding to "lived experience and opportunity to articulate the minutiae and nuances of everyday life in a mutually supportive and constructive environment" (p. 42). One benefit of visual methods is that they are not restrained by language (Literat, 2013). Pitt (2017) adds that the method can be fun for the participant. Another advantage is that the method requires very little equipment: paper and pencil. Pitt (2017) states:

Participants can utilize a method such as drawing to represent concepts, emotions and information, which is not always possible through writing or oral diction, which by definition are bound by temporal logic. The participant-generated images act as a graphical metaphor, which represents the often-unseen experience of the individual. (p. 87)

Participants' drawings allow researchers a glimpse of the participants' thoughts in a manner that oral or written communication cannot.

Researcher Stance

I started teaching introduction to statistics for a mathematics department as a graduate student. As is very common for graduate students, I soon found myself teaching part-time at the local community college to support myself through my graduate degree. In a community college setting, it is common to have returning students experiencing mathematical anxiety; more than once, I was told at the beginning of the semester "I have not been in school in 34 years" or "I haven't done math in 10 years." After a couple of semesters, I understood this was how students let me know about their anxiety toward the class. Student anxiety made me rethink my teaching strategies. Many times, students are able to solve statistical problems "mechanically" (e.g., hypothesis testing) when guided by a sequence of steps; they can solve a hypothesis-testing problem to a degree of correctness, write a hypothesis, find a critical value, and calculate a test statistic, but less often can they write a conclusion for the test.
Certainly, it is more difficult for students to communicate the results of their test than to calculate them; however, it is also true that communicating results will be the most useful skill students gain from the course (assuming students can always be aided by technology when in need of a statistical calculation). Having attended multiple professional development workshops, I became certain that having a variety of assignments helped students understand the content material better. In other words, I was encouraged to have a class that was not always focused on exams and homework. I was advised on many different techniques, many of them quite unorthodox: for example, having students bring a song to class and analyze it (I was never able to figure out how to do this in the context of statistics); having them use software (unfortunately, not all students have the resources or computer literacy, and as an adjunct instructor one is rarely paid for the office hours needed to support students with their technology issues); or incorporating writing (bingo!). Incorporating writing into my statistics course was the most cost-effective solution for me as an adjunct instructor; moreover, I believed it perfectly matched my social efficiency ideology as an instructor. Social efficiency focuses on the instructor developing essential societal skills in their students (Alanazi, 2016). I believe that developing students' writing aided them both in interpreting statistical content and in gaining experience writing about statistical content, which may be asked of them in the workplace or in everyday life. Recall that statistical literacy shapes students' critical skills so they can objectively interpret everyday data (Engel, 2017). Thus, I wanted to focus on developing my students' statistical literacy through the writing-to-learn assignments so that they could apply concepts to the real world rather than focusing on problem-solving and receiving a passing grade. As an additional benefit, I hoped the writing-to-learn assignments would work to the advantage of those students who had difficulty with the mathematical side of statistics but considered writing their stronger skill.

Purpose and Rationale

The following gaps emerged in the statistics education literature. First, while there is plenty of literature on teaching introductory statistics courses, none has focused on community college students' experiences in learning statistics. Second, while drawings are commonly utilized in social research, particularly in arts-based research, research with children (Literat, 2013), and research on patients' perceptions of illness and treatment (Cheung et al., 2016), few instances have used the method within teaching or for assignment feedback from students (Pitt, 2017). Third, though research has been performed on the writing-to-learn method, these studies have mainly focused on transferring knowledge from instructor to instructor. In other words, these publications focused on why the method is beneficial and how to use it within one's own class. Few studies have focused on the students' experiences with the writing-to-learn assignment (Smith et al., 1992). Thus, the purpose of this study was to implement the use of writing assignments in an introductory statistics course and gather feedback and understanding from the students' perspective on the writing-to-learn activity. The research questions of this study are the following:
RQ1. What were the experiences of introductory statistics community college students when completing writing-to-learn assignments?
RQ2. How does a group of introductory statistics community college students depict their experience with writing-to-learn assignments?

Methodology

I obtained Institutional Review Board (IRB) approval for this study before data collection began. For the present study, I chose a qualitative methodological framework based on thematic analysis of participant-generated drawings and written responses. Additionally, I examined three survey items through descriptive statistics. To keep responses confidential, I did not ask for demographic information from the participants. I focused the questions of my survey on the writing-to-learn assignments. Thus, the setting and participants are described in aggregate form within the context of the institution. During the summer of 2017 and the academic year of 2017-2018, as an adjunct faculty member at a community college in the Rocky Mountain region of the United States, I implemented writing-to-learn assignments in an introductory statistics course. The most recent demographic information available on the community college's website, from Fall 2015, gives a headcount of N = 5,298 students. The distribution of ethnicities is as follows: 32.69% identified as Hispanic or Latino, 0.42% as Native American, 1.25% as Asian, 1.85% as African American, 0.21% as Native Hawaiian or Pacific Islander, 59.53% as White, 2.10% as two or more races, 1.66% did not disclose their ethnicity, and 0.30% identified as Non-Resident Alien instead of selecting an ethnicity. Additionally, gender was distributed as 58.51% female and 41.49% male. The types of credits taken at the institution were 53.07% transfer credits, 33.91% vocational credits, and 13.02% developmental credits.

Setting

The Business Statistics course was cross-listed with the Introduction to Statistics course; thus, students from both courses completed the assignments and the feedback on the writing-to-learn assignments. Students self-selected into the course once they had completed the prerequisites (a score of 21 on the Math ACT or a grade of C or better in Intermediate Algebra). Because the course was part of the common core, there was a variety of majors in the class: pre-nursing, human services, psychology, and criminal justice, among others. A number of students were enrolled at the local university and were planning on transferring the course. A total of three writing assignments were assigned throughout the semester. The first provided a "news" piece for students to examine the research design. The news articles chosen were Musulin's (2014) article on textbook prices and Science Daily's (2014) article on a live theater experiment. Students also had the option to find their own news article. The second writing assignment was based on Smith et al.'s (1992) writing assignments for an introductory statistics course and focused on the standard deviation. Details on this assignment can be found in the Smith et al. (1992) paper. Finally, students were provided an article for the topic of correlation and regression, namely Messerli's (2012) article on chocolate, cognitive function, and Nobel laureates. This article was chosen because its use of correlation and regression is clear and simple enough for introductory students.
These assignments asked students to write about their understanding of statistics rather than provide a computational answer. At the end of the semester, students were asked to provide feedback on the usefulness of the assignments and their perceptions of them. The instructions for the assignments can be found in Appendix A and the instrument can be found in Appendix B.

Participants

Participants were 18 and older and enrolled in the statistics sections I taught over the course of three semesters. Verbal consent was obtained when students were asked to complete the feedback on the writing-to-learn assignments; additionally, the students were reminded that they could opt out of providing feedback. The students were told their thoughts on the assignments were needed in order to improve them or eliminate them. Thus, purposeful sampling was used in this study to select participants who could provide the most insight regarding the topic of interest (Merriam & Tisdell, 2016). A total of N = 79 participants provided feedback on the writing-to-learn assignments over the course of three semesters.

Data Collection

While I implemented the writing assignments throughout the semester, data collection on the student feedback of the activity was done once per semester, usually once the third assignment was completed (two or three weeks before the end of the semester). Shortly after the third assignment was completed, I asked the students to complete feedback on the assignments. The idea was to gather feedback on what worked and what did not. For this purpose, I allowed 20 minutes at the end of class to complete the feedback on the writing-to-learn assignments. I distributed the data collection instrument (see Appendix A) and explained that the purpose of the feedback was to help me improve the assignments and their understanding of class concepts. I also asked them not to provide any identifiable information in their feedback. The instrument of data collection I used for this study was a one-page form with three sections: three Likert-scale items, open-ended questions, and a drawing. The purpose of combining multiple means of collecting data, such as Likert-scale items, open-ended questions, and drawings, was to yield more robust and rich findings (Snyder, 2012). Moreover, participants were reminded that they could choose not to participate and leave the feedback blank, in addition to not disclosing their name on the feedback paper. Additionally, I conveyed to the participants that I would not look at the feedback until the semester had finished. My reasoning was that I wanted to avoid inadvertently recognizing a student's writing. The students were asked three Likert-type questions. These questions were based on Smith et al.'s (1992) work, one of the first exploratory studies on using writing in statistics; thus, it seemed reasonable to emulate their questions to gain feedback from students. Smith et al.'s (1992) questions focused on the helpfulness of writing assignments and how these helped communicate statistical concepts. My first two questions targeted helpfulness and the ability to communicate statistics concepts, as Smith et al.'s (1992) had, whereas the third question, which I created, focused on communicating about research.
The questions could be answered on a Likert-type scale on which the participants could rate the assignments from 1 = "Not helpful" to 5 = "Very helpful." In addition to these three questions, the students were asked to share a brief sentence on the usefulness (or lack thereof) of the assignments in the following open-ended questions:

1. Please share your overall thoughts on the written assignments (positive, negative, neutral feelings are all welcome). Typical responses to this question were simple phrases of "neutral" or "I think they are ok." However, when students felt the assignments were useful, they would mention how the assignments "ensures class wide understanding" and were "helpful in understanding concepts."

2. Please share any suggestions you have on improving the written assignments. Typical answers ranged from students requesting to have writing assignments in class, where I could give immediate feedback and examples, to students describing the assignment as "too easy." A number of students also suggested loosening up the word limit requirement.

The idea behind the open-ended questions was to triangulate responses later in the analysis. Collecting similar information in different formats allows one collection form to complement the other; for example, a student may decide not to draw but may be inclined to answer the survey items, or a student may draw but not respond to the open-ended questions. Finally, the students were asked to draw their experience or feelings when completing the writing assignments. In the instructions, students were asked to be creative in the drawing section, which read "Draw your experience/feelings completing the written assignments. You can use any color pencils or markers (Hint: be creative)." The participants were responsible for drawing and providing context to their answers; thus, they retained control in the power relationship with the instructor (Pitt, 2017). In terms of the representativeness of the sample, recall that purposeful sampling was utilized; thus, data collection focused exclusively on students who completed the writing-to-learn assignments while allowing participants to decline participation if they so desired.

Data Analysis

The method I chose to analyze my data was thematic analysis. Thematic analysis is a popular approach for analyzing qualitative data across a variety of content areas and has been used successfully to analyze student reflections and to research visual methods; for this reason, I thought it appropriate for the data I collected (Davies & Bourke, 2017; Freeman & Sullivan, 2018; Rookwood, 2017). The process of thematic analysis identifies emergent themes and provides substantial and detailed information about the data (Braun & Clarke, 2006; Taylor et al., 2015). Thematic analysis is useful for summarizing information from large datasets, such as the one created by the student-generated drawings in combination with the open-ended responses (Nowell et al., 2017). I took an inductive approach, meaning I did not analyze the data until I had completed data collection over the course of the three semesters. This allowed me to code the data as a whole, as opposed to an iterative process in which I could have inadvertently changed my approach, since a considerable amount of time (at least one semester) passed between data collection periods. Thus, the data were coded once it was possible to examine them in the context of the complete dataset (Basit, 2003).
Part of the organization process included scanning the student feedback drawings, entering the data for the Likert items, and transcribing the open-ended questions.

Data Management and Coding

The first step in conducting thematic analysis was to familiarize myself with the data (Freeman & Sullivan, 2018). I familiarized myself with the data by reading the student feedback multiple times before I entered the data in a spreadsheet. The process of data entry also helped me become familiar with student responses while making color-coded notes in the spreadsheet of where I would code certain responses. I reconstructed the data in a spreadsheet with the Likert items and open-ended questions. I created a random ID so that it was possible to connect the paper version of the feedback to the data file. Next, I scanned the paper versions of the feedback, took screenshots of the student-generated drawings, and added each drawing to the corresponding participant in the spreadsheet. This allowed me to easily view the responses as a whole, as opposed to individual sections, thus helping me create initial themes for the analysis. This reconstruction of the data in digital form allowed me to add my own notes to the participant-generated responses in addition to color coding the themes. The second step in conducting thematic analysis was to generate initial coding (Freeman & Sullivan, 2018). Initial coding was relatively fast; drawing on my teaching experience, I expected students to lean heavily toward "neutral" (Given, 2008). I attached codes to the open-ended questions as well as the drawings by adding a column containing a written code and a distinct color for the code. For instance, many of the student-generated drawings described a process going from a confused face to understanding, or a "lightbulb" moment (see Table 5 and Figure 7); next to these drawings I would add the note "student process." When coding, memos can help the researcher move from coding to relating concepts and establishing relationships (Weaver-Hightower, 2018). Thus, I also included memos on how a student's feedback recurred in another student's feedback (by making a note of their IDs; for example, a note would read "ID 23 similar thoughts to ID 54"). This data management work facilitated the organization, search, and retrieval of codes (Given, 2008). The third step focused on generating the themes (Freeman & Sullivan, 2018). I considered the relevance of each code to the research questions I wanted to answer and how each code related to the data as a whole (Given, 2008; Weaver-Hightower, 2018). I wanted to see what the students' experiences were while completing the writing-to-learn assignments. This process condensed the data into simple inductive themes of "student liked the assignment," "student didn't like it," or "neutral." Coding the neutral category was a complex task, as responses could range from simple indifference, indicated by the student simply writing "neutral" along with a smiley face, to responses providing more context as to why the student felt neutral. In the cases where the student did provide context, I created subthemes within the neutral category and an additional memo for that piece of data (Weaver-Hightower, 2018). For example, student responses such as "do more examples of them in class" and "maybe extend the writing assignments to in class" would get a memo similar to this: "They want the assignment discussed in class/More context in class/More hand-holding."
Another example of a subtheme within the neutral category was when the student seemed focused on completing the assignment but did not feel they gained from it; for example, "I believe they may be helpful to some but I personally don't feel I gain much from them. I don't mind the assignments though" and "I think they are ok, but I don't feel like they help me to learn or understand." In this same category, another participant simply left the written response blank and drew a completed check mark in the drawing section of the data collection tool (see Table 3). For these types of responses, I would create a note: "No personal gain/Just wants to complete the assignment." In the fourth step, I reviewed the codes and began extracting data. I looked for the extracted data to have a coherent pattern, focusing on reviewing themes and extracting the data of the open-ended responses and drawings, aggregating them for easy access when I began writing (Freeman & Sullivan, 2018). Once I was satisfied with the themes, I labeled the final three themes as Not Helpful, Neutral, and Very Helpful. As described earlier, subthemes exist within these themes and are discussed further in the findings.

Trustworthiness

I used triangulation, in the form of multiple sources, to enhance trustworthiness and decrease researcher bias (Denzin, 2017). When utilizing drawing as a research method, it is recommended that participants either discuss or write about the content of their drawings. This is an important part of the research because a drawing by itself can be neutral, so it was important to give context to the drawing by asking students open-ended questions. Next, I focused on methodological triangulation, that is, corroborating findings across the different data collection methods: open-ended questions, drawings, and Likert items (Weaver-Hightower, 2018). For the methodological triangulation, the survey items, along with the written responses and drawings, were used to explore the students' feedback on the writing-to-learn activity. Initially, I focused on the open-ended questions, since students were more likely to offer a written response than a drawing; I would then add a memo to the spreadsheet describing the relationship. Next, I would examine the relationship between the open-ended responses and the survey responses, for example, whether survey ratings aligned with the comments. For instance, the open-ended response "I don't care to do them, it's well meaning [sic], but needs development" was corroborated by low ratings in the survey. Finally, I proceeded to analyze the responses to the short survey items.

Findings

The findings are presented in the following order: descriptive information from the Likert items, followed by a qualitative thematic analysis of both the open-ended questions and the participant-generated drawings. The data collection instrument can be found in Appendix A.

Survey Items

Descriptive statistics were calculated utilizing the R statistical package, while graphics were obtained through the cowplot package (R Core Team, 2013; Wilke, 2019). Table 1 shows the descriptive information for the three Likert items. Note that there were no missing data for the Likert items and the sample size was N = 79.
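As an illustration only, descriptives and distribution charts of this kind could be produced with a short script along the following lines. This is a minimal sketch rather than the actual analysis code used in the study; the data file and column names (feedback.csv, q1-q3) are hypothetical placeholders.

```r
# Minimal sketch of the Likert-item descriptives and charts; the file
# name and the columns q1-q3 are hypothetical placeholders.
library(ggplot2)
library(cowplot)

feedback <- read.csv("feedback.csv")  # one row per participant, N = 79

# Descriptive statistics for the three Likert items (cf. Table 1)
sapply(feedback[c("q1", "q2", "q3")],
       function(x) c(mean = mean(x), sd = sd(x)))

# Bar chart of the response distribution for a single item
plot_item <- function(item, title) {
  ggplot(feedback, aes(x = factor(.data[[item]], levels = 1:5))) +
    geom_bar() +
    scale_x_discrete(drop = FALSE) +  # keep unused rating levels visible
    labs(x = "1 = Not helpful ... 5 = Very helpful", y = "Count",
         title = title)
}

# Arrange the three charts side by side, as in a multi-panel figure
plot_grid(plot_item("q1", "Learning concepts"),
          plot_item("q2", "Talking about statistics"),
          plot_item("q3", "Talking about research"),
          nrow = 1)
```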
The distribution of item responses was as follows. For "Rate how helpful are the writing exercises in learning statistical concepts?", 2.5% of the students found it "Not helpful," 48.10% found it "Somewhat helpful," and 11.39% found it "Very helpful." Next, participants were asked "How helpful are the writing exercises in developing your ability in writing and talking about statistics?"; the distribution of responses was 2.53% "Not helpful" and 36.70% "Somewhat helpful," while 15.18% found it "Very helpful." Finally, in response to "How helpful are the writing exercises in developing your ability in writing and talking about research?", the largest share of students (35.44%) found it "Somewhat helpful" and 13.92% found it "Very helpful," while only 3.79% found it "Not helpful." Figure 1 shows the distribution of responses for the Likert items. Examining the three charts, it is clear that few students selected the "Not helpful" option.

Visual Thematic Analysis

Examining the drawings simultaneously with the written responses, the following themes emerged: the students either disliked or liked the assignments, with the only other category being neutral toward the assignments. However, to be consistent with the survey responses, the major themes were classified as Not Helpful, Neutral, or Very Helpful.

Theme #1: Not Helpful

Students who disliked the assignments had a theme in common: they believed the assignments needed more development or were not challenging enough. When examining the student-generated drawings, it is clear they expressed these thoughts through the use of question marks (see Figures 2 and 3).

"They're too easy and require too little thought."
"I think the writing assignment would be more helpful if they were more challenging and if they were more focused on the stats part rather than the writing part."
"I don't care to do them, its well-meaning but need development."
"The assignment seems like it's more about filling a standardized expectation that about getting the concepts."
"I didn't like them but I'm lazy. If you want to practice the skills with this kind of exercise, then make more written assignments."
"Maybe have a few more?"

Figure 2. "I didn't like them but I'm lazy. If you want to practice the skills with this kind of exercise, then make more written assignments."

While students rated the assignments low, few declared they disliked the assignments because they were difficult. Most found the writing-to-learn assignments not challenging, as the earlier quotes show. One student simply wrote "Not useful." Finally, only one student declared his or her dislike based on struggling with the material; this theme was more common among students who rated the assignments as neutral. As shown in Figure 3, the student drew out his or her frustration with the content of the writing-to-learn assignments: "I personally don't like them because I don't understand most of the material on them. I can barely keep up in class + the written assignment confuse me."

Figure 3. Dislike for the writing-to-learn assignments based on difficulty with the material

Theme #2: Neutral

Students who rated the writing-to-learn assignments as neutral can be divided into two categories: those focused on completing the assignment (most likely so that their overall grade in the class would not be affected) and those struggling to make connections between the assignment and class material. The participants in Table 2 rated the assignments neutral though they struggled to find the connection between class material and the writing-to-learn assignment, as the following participant states: "I'm not a huge fan of the writing assignments because I get confused . . . or coming up with."
Note that "emoticons" are often used by students in their drawings. In this theme, many of the faces have a neutral, almost emotionless face (see Table 2). More importantly, the struggle of these students to connect the writing-to-learn assignments and statistical concepts covered in the course supports Shibli's (1992) argument that writing requires more insight into the statistical concept compared to mechanical calculations. "I am a visual learner, therefore trying to visualize something in writing is difficult for me." There were also students who simply completed the assignment and the feedback instrument for the sake of completion. In Table 3 there are examples of how students thought of the assignments as simply something that must be completed in order to advance in the course. See Table 3. Similarly, a number of students had positive reflections on the assignments though they did not feel the assignments helped them personally to develop their statistical knowledge. Table 4 displays the participants' quotes alongside their respective drawings for the students with these opinions. Table 4 Quotes and drawings on the writing-to-learn assignments not being personally helpful Participant quote Participant drawing "I believe they may be helpful to some but I personally don't feel I gain much from them. I don't mind the assignments though." "Neutral, they are helpful but can be time consuming" The participants also discussed how to make the assignments better. One of the suggestions was to include the writing-to-learn assignments in class rather than assign them as homework. This could indicate the students do not actually feel comfortable with the writingto-learn assignments and would like to have the instructor present to have more guidance though the participants do not explicitly say so. Among the suggestions was to add the writingto-learn assignments to the class session: "Do more examples of them in class." "Maybe extend the writing assignments to in class." "Basically, add the assignments in class." Figure 4 Note how this drawing utilizes the Greek letter (upper case M) which symbolized the population mean for this statistics course Several students mentioned forgetting to complete the written assignments or having completed only one or two out of the three assignments. Finally, a student suggested to increase the word limit he or she found it difficult to keep it under 500 words. A student declared, "I kind of like the writing assignments because it is not opinion-based writing, you just explain something that is already there" (see Figure 5). However, another student agreed that the use of the writing-to-learn method was a "creative method of applied learning." Figure 5 "I kind of like the writing assignments because it is not opinion-based writing, you just explain something that is already there." Theme #3: Very Helpful Finally, a number of students stated they found the writing-to-learn assignments useful and had positive comments regarding the assignment. Table 5 has a compilation of these responses alongside their respective drawing. There were positive reactions with short and simple statements as "I think the written assignments are great!" Among the drawings notice the increased use of smiling faces. "Assignments are helpful to talk about statistical informationbroadens the knowledge gained from just equations and numbers to analytical exercise." 
More importantly, there was a clear pattern of drawing a lightbulb going off; in fact, one participant actually wrote the words in one of the drawings (see Figures 6 to 8). This indicates how the use of the writing assignments helped students think through the class content until finally the "lightbulb went off." This is supported by the following participant quotes and drawings: "The writing assignments helped me interpret what I was thinking, it was a good way for me to put my work into words and talk myself through problem(s)."

Figure 7. Student's thought process

"I enjoy the writing assignments. It applies statistics to a real world concept that helps understanding the applications of stats and makes it easier to learn"

Figure 8. Student mood process

Discussion

Drawing as a method of data collection is certainly not without detractors, yet in combination with other data collection elements it was a useful form of collecting student feedback on assignments. Gauntlett (2005) discussed limitations, such as drawings by themselves being too ambiguous, and thus recommended supplementing them with participant interviews. However, to preserve anonymity and due to the number of participants, individual interviews were not possible. Instead, written responses were used. While the majority of the students completed all portions of the data collection instrument (Likert items, open-ended questions, and drawing), missing data occurred only for the open-ended questions and/or drawings. With drawings, resistance can come from the participants themselves; for example, Table 3 shows a student who left the written responses blank but provided a pictorial response. It is important to note that a number of students decided not to draw anything when submitting their feedback and thus were not included in the qualitative analysis. On a positive note, this indicates a level of trust with the researcher and instructor; in other words, participants felt comfortable not participating. The three elements of the data collection instrument not only served triangulation purposes but also helped avoid missing data, which is a common issue for student feedback. The use of drawings furthered my understanding of the students' perspectives. If I had simply used Likert items to explore student perceptions of the writing-to-learn assignments, I could have concluded that many students liked them, given the high ratings and the skewness of the data. In Figure 1, it is easy to see that the three distributions are left skewed, indicating that most students leaned toward rating the assignments as helpful. However, with a more careful look at the written responses alongside the student-generated drawings, I gained more insight into the students' perceptions, showing that participant drawings can allow researchers a glimpse of the participants' thoughts in a manner that oral or written communication cannot. I learned about the students' desire to complete the assignments during class, their need for more guidance, and the need of a subset of students for more challenging assignments, thus aligning my findings with the ideas postulated by Hayden (1989) and Shibli (1992). Also, I would like to draw attention to the student quote "creative method of applied learning," which resonated with Radke-Sharpe's (1991) point that writing in statistics encourages creativity. Finally, the majority of the studies published on the writing-to-learn method in statistics did not represent the students' direct perspective on the assignments.
Many studies focused on the development of the activity (Johnson, 2016; Radke-Sharpe, 1991; Woodward et al., 2019), few of them actually incorporated student ratings via a survey (Smith et al., 1992), and a handful focused on the instructor's perspective of the assignment (Goenner & Snaith, 2003; Woodward et al., 2019). Thus, this study is unique in incorporating the students' perspective and feedback on the writing-to-learn method. For this reason, I believe that the findings of this study may be specific to the samples available to me and not generalizable. However, the teaching tools and feedback process can be replicated by educators and statistics education researchers by examining the appendices of this paper. More importantly, the majority of research regarding the teaching of introductory statistics is performed using samples of university undergraduates. In this sense, this article makes an important contribution to the literature, as our distinct sample of community college students is rarely researched. Further, Pitt (2017) suggested that drawing as a method can be used for qualitative research as well as within a teaching framework, advocating for drawing as a data collection technique within teaching. Drawing as a method of data collection offered insight into the assignments that was not always reflected in the quantitative responses; however, many participants relied on written sentences to communicate their responses. In conclusion, the use of drawing as a method served as a novel approach to receiving feedback from students in an introductory statistics course.
Study on the Compound Path of Living Environment Renovation Under the Background of "Renovation and Restoration" to Old Communities

As a product of a certain era, old communities now face many serious problems that cannot be completely changed, such as the simple functional structure of their space, the lack of community vitality, the loss of cultural characteristics, and the rupture of the historical context, which make it difficult for them to adapt to today's diversified and complex residential needs. Based on the concept of "renovation and restoration" and combined with practical investigation and research, this article makes a detailed analysis of the renewal of the residential environment of old residential areas in Hangzhou from sociological, cultural, economic, and other perspectives, systematically sorts out the complex and multi-dimensional research paths, and draws up a work frame for the "renovation of old residential areas." Furthermore, through the analysis and calculation of mixed-function indexes such as Space Syntax, Spacematrix, and MXI, this paper studies the technical path of the renewal of old communities and puts forward feasible reconstruction strategies.

Introduction

"Restoration and Repair of the City" refers to an ecological, refined, and sustainable urban renewal method that guides the transformation and development of cities to solve problems such as insufficient urban development space, a poor ecological environment, and a lack of social culture in the era of rapid urbanization [1]. It emphasizes attention to both function and aesthetics, as well as the provision of design guidelines for the governance of buildings and public spaces in the process of urban development, so as to create a high-quality and sustainable physical environment for urban residents [2]. Over the years, scholars and experts at home and abroad have approached urban renewal and transformation from different paths. In the West, represented mainly by Britain and the United States, there have been "urban renewal/reconstruction," "urban redevelopment," "urban renewal/renaissance," and "neighborhood renewal" [3]. In China, well-known scholars such as Liangyong Wu, Yisan Ruan, and Gengli Zhang, from the perspective of "organic urban renewal," have made beneficial explorations of various aspects, including the "protection and inheritance of cultural veins and the harmonious development of society," most of which focus on the physical spatial environment [4][5][6]. Since further constraints on the urban ecological red line and growth boundary were issued in 2014, more scholars, such as Baoxing Qiu and Shaoqin Zhuang, have begun to introduce advanced foreign concepts such as "sustainability, ecology, and humanity" to conduct more extensive explorations [1]. From scholars' studies and explorations of real-world transformation, we can see that under the "new normal," urban renewal has become a main method of urban development in China [7]. As for the current situation of renewal design in specific areas like old communities, much attention is paid to the fundamental renewal of residential buildings and the physical environment, but little to synergetic research from the perspectives of sociology, culturology, and economics. Living environment renovation in old communities is an integrated research field covering different disciplines. It requires a compound-path exploration from various dimensions so as to achieve the goal of sustainable and coordinated development.
Existing Problems and Connotation Expansion of Living Environment Renovation to Old Communities

Existing Problems in the Living Environment of Old Communities in an Incremental Era

Based on a study of a series of policies issued by the Hangzhou city government, such as the Implementation Plan of Comprehensive Renovation and Upgrading of Old Communities in Hangzhou, the Four-year Action Plan for Comprehensive Renovation and Upgrading of Old Residential Districts in Hangzhou (2019-2022), and the Technical Guide of Comprehensive Renovation and Upgrading of Old Communities in Hangzhou (trial), the author visited and investigated about 10 typical old communities in the first batch of pilot projects published in 2019, including Xingong community in the upper urban area; Zhugan Lane, Xiaotianzhu community, and Zhizunong community in the lower urban area; Dujia new village in Gongshu District; and Jingtan community in Jianggan District. Based on the field survey of old residential districts in different regions of Hangzhou, the author distributed questionnaires, conducted field surveys, and paid visits to local residents of different ages, occupations, and communities to collect their opinions on the current situation of old communities and their suggestions for renovation. After collecting the data, with the residents' cognition as a variable, the author made a systematic analysis and summarized the specific problems as follows:

(1) Space places have a single function and offer a poor experience: during the incremental era of the early stage of socialism, when the demand for housing and other items was growing rapidly, residential area planning took meeting the largest demand for housing as the design and construction goal, inevitably ignoring the quality of human settlements to some extent. As a result, the public space environment of the residential districts has a relatively high floor area ratio but insufficient open space, which inevitably leads to problems such as a deficient green space ecosystem, traffic congestion, and insufficient public service facilities, further resulting in environmental problems including rigid space, single function, and vehicles parked at will; the function of space places needs to be improved.

(2) Lack of community vitality, with a prominent problem of isolated living: with the rapid development of urbanization, the functional spatial layout of different areas in Hangzhou has also changed accordingly. The springing up of new residential districts leads to juxtaposed new and old residential areas, which disrupts residents' normal daily life and living habits, cutting off the scope of social communication and alienating neighborhood relations. Gradually, a phenomenon of isolated living appears among people in the same community and among people in different communities in the same area. As far as the current social situation is concerned, due to a long-term tendency toward material infrastructure construction in the renovation of old communities in Hangzhou, and a neglect of the protection of the community's social network, in which interpersonal relationships are the main body, and of the important role of the community's public space system in residents' communication, some communities lack vitality and have a prominent problem of residential isolation.

(3) Loss of urban features, with increasingly assimilated designs.
Hangzhou, known as "heaven on earth," boasts a long history with profound cultural content stretching back to the "Liangzhu Culture." As one of the most typical spatial areas that can reflect the historical style and cultural characteristics of Hangzhou, old communities are characterized by the regionalism of local life and the diversity of interpersonal communication, standing as an important constituent of urban organization. As a product of the continuity and preservation of history, old communities reflect not only the quality of local residents' life and living conditions, but also the historical and cultural evolution of a city in the process of spatial development. However, as far as their current situation is concerned, old communities are too deficient in regional characteristics to leave an indelible impression on their residents; there are few public landscape nodes and landmark buildings.

The Connotation Expansion of the Environmental Reform of Old Communities in the Age of Stock

In recent years, with the development of big data and urban diversity, the traditional spatial environment model of old communities has gradually had difficulty adapting to the diversified and complex needs of human settlements. To solve the various problems brought about by single space functions and deficient interpersonal communication in the traditional old communities of Hangzhou, the residential districts are bound to experience a renovation from a single living mode to a multi-dimensional, ecological, and comprehensive one. With regard to policies, from the opinions of the Central Committee of the CPC and the State Council on further strengthening the management of urban planning and construction issued in 2016, and the guidance on strengthening ecological restoration and urban repair officially released by the Ministry of Housing and Urban-Rural Development of the People's Republic of China in 2017, to the new path of urbanization with people at its core put forward at the 18th National Congress of the Communist Party of China, a path with Chinese characteristics featuring humanism; the harmonized development of the four modernizations, namely industrialization, informatization, urbanization, and agricultural modernization; optimized layout; ecological civilization; and cultural heritage, all deeply emphasize the key tasks of "double repair," such as "filling up infrastructure debts, increasing public space, improving travel conditions, transforming old communities, protecting history and culture, and shaping the style of city time" [8], which are in line with the optimization and updating of old residential areas in recent years. In recent years, Zhejiang Province has also put forward various relevant policies, such as the experimental scheme for future community construction in Zhejiang Province and the implementation plan for Hangzhou to carry out the province-wide construction action. Based on various exploratory research on the sharing of overall living space, the cultivation of neighborhood emotion, traffic satisfaction, and the quality of life in blocks, we found that despite many problems in their internal and external space, old communities, with a strong flavor of life, enjoy large renovation flexibility and intervention space, thanks to the particularity of their formation.
In light of reflection on previous demolition and reconstruction, as well as the severe challenges faced by communities in their future development and the realistic problem of stock land, the design of old communities' upgrading needs to break through the material space level and be studied from the perspective of a compound environmental complex, covering the "adaptive renovation" of composite space, the "cultivation of vitality" in communities' shared space, and the "activation of local cultural capital" in public space.

Exploration of the Research Path of the Renovation of Old Communities

As China's cities gradually change from external expansion to connotative development, old urban areas in the stock age have begun to consciously promote urban upgrading [9]. Based on analysis with CiteSpace, a literature mapping tool, research on material renewal is still dominant in China's urban renewal theory, without forming a strongly connected network; in foreign countries, co-author networks around David Harvey, with representatives of non-traditional architectural schools such as Richard Florida, intersect more or less with political economy, sociology, and other professional fields. This indicates that the consensus foundation for urban renewal abroad has changed from the original "material space determinism" to a more realistic or humanistic perspective. Besides, the professional field of urban renewal has expanded from connotation to extension and has formed a consensus basis featuring a clear structural system and relatively dense internal relations [10]. Based on the above discussion of the existing problems in the function, spatial vitality, and regional features of old communities, the author, combining the survey data and the analysis of citizens' satisfaction, concludes that the renewal of old communities in Hangzhou requires not only a simple remedy of design and planning; it also requires rising from the level of material repair to a multi-level, comprehensive study from the perspectives of sociology, culture, and economics, so that the conclusions of the comprehensive analysis can in turn inform the subsequent renewal planning and design. The specific analysis is as follows.

Sociological Perspective: The "Adaptive Renovation" of Complex Space and the "Vitality Creating" of Public Space in Communities

Historical experience shows that urban renewal focused on renovating the material environment has destroyed the social fabric of cities due to the lack of attention paid to social problems, thus bringing about many social problems. Old communities, characterized by the dual connotation of sociality and public space, are confronted with complex and diversified issues involving both public and private space. Our renewal and renovation should, combining the concept of humanism, emphasize the "adaptive renovation" of complex space in the renovation process, that is, expanding the single dimension of the material environment to the multi-dimensional level of society, economy, environment, and culture, respecting the wills of residents of different classes, studying the complexity of public participation, and paying attention to the "vitality creating" of public space in communities. We will transform our renewal design from image construction to exploratory research on the mechanism of internal action.
For example, the general urban plan of Shenzhen (2010-2020) took the lead in proposing that spatial development should change from "incremental expansion" to "stock optimization." The plan focuses on key contents under new development values that emphasize "overall," "endogenous," "integrated," and "culture-oriented" development, which is well worth our reference [11]. With a detailed interpretation of the "Notice on declaration work for the construction of pilot future communities in Zhejiang Province," issued by the Zhejiang Provincial Development and Reform Commission in 2019, we can consider how, in the construction of future community neighborhoods, to combine the scene with designs such as system architecture, spatial carriers, visual design, scale standards, and mechanism guarantees in order to study modification schemes and implementation methods for the opening and sharing of neighborhood space. From this, it can be seen that the renovation of old communities is not a single entity but a complex sociological problem. According to the above analysis, the author, taking into consideration residents' various needs in society, economy, culture, and life, draws the SWOT analysis chart (Figure 1) to guide the subsequent planning and design.

From the Perspective of Culturology: Improving Cultural Cognition and Perceptual Experience Under the Background of Shared Communities

As for the importance of culture, Pierre Bourdieu, the French sociologist, pointed out that in contemporary society, culture has penetrated all fields and overtaken traditional factors such as politics and economy to take the top spot in social life. Based on the concept of cultural capital, urban renewal can be understood as a process of continuously activating, creating, and accumulating urban cultural capital [12]. From the perspective of culturology, old communities, as an important carrier of urban spatial structure, not only record the urban spatial pattern and construction mode, but also embody the connotations of the relationship between city and society, as well as between city and humanity. Under the planning background of "building a city with culture," only when the spatial environment around old communities becomes the living place and space for daily culture, stimulating people to develop the depth and thickness of culture there [13], can old communities in Hangzhou really arouse people's resonance and sense of belonging to their living space and become essentially different from those of other cities. To allow residents to perceive the due cultural atmosphere, historical characteristics, and nostalgia of the city and its communities, it is of great significance to implant cultural elements synchronously in the process of renovation. Through investigation and visits, the author found that the human settlements of old communities in different areas of Hangzhou are generally plain, the design of public facilities is not systematic enough, and cultural inheritance has not been given enough attention. The author took Dujia new village in Gongshu District of Hangzhou as the research case and drew up the following design for its public facilities.
Considering that the new village lies close to the Beijing–Hangzhou Grand Canal and carries the spatial meaning of "canal culture" and the water town, the author, by combing through the elements of "canal culture", designed boat-shaped bus stops outside the residential area, and boat-shaped rest seats, planters, street lamps and landmarks inside it (Figure 2). Meanwhile, taking into account the aesthetic needs of modern people, the author also used stainless steel and other materials to design more fashionable sailboat-shaped bus platforms and rest seats (Figure 3). Both schemes use the cultural element of the "boat" and the horse-head walls typical of water-town architecture to modify the infrastructure, allowing the "water town" and "canal culture" to be better inherited around the residential areas of Hangzhou.

From the Perspective of Economics: Combining "Standardized Regions" with "Organic Renewal", a Study of Universality and Individuality

From the perspective of economics, transforming old communities on the basis of the "reuse of stock land" is more difficult than building new ones. This requires scientific and rational renovation grounded in an integrated analysis of the current situation. Taking economics into consideration, and after an in-depth study of the current human settlements of old communities in Hangzhou that systematically classified their public-space environments, we put forward renewal ideas that combine the "gradual renovation of standardized regions" with the "organic renewal of personalized space", coordinating both the universal and the individual needs of residential areas in the process of renewal and renovation. (1) "Gradual renovation of standardized regions". First of all, residential areas, as public spaces with systematic planning and design, necessarily share common features in their designs. In the author's investigations of old communities, data analysis at the "material" level showed that many items can be renovated to standard dimensions. For example, in the old community of Nanbanxiang in the upper urban area, the building facades, outdoor cabinets, canopies and catch-basin covers can all be upgraded in this way (Figure 4). Moreover, because such renovations share common standards with other old residential areas of the same period, the method can be extended by analogy to the renovation of other old communities. In response to these common problems, we propose the concept of "gradual renovation of standardized regions". Under its guidance, we plan to start from the small, visible parts of existing space that need repair, such as the reconstruction of old building facades, the road system, the greening system and supporting infrastructure, carrying out standardized and systematic renovation so as to save human, material and financial resources to the greatest extent. Furthermore, through the "gradual renovation of the standardized residential landscape environment", we plan a renovation and management program with an innovative mechanism and long-term management, building a systematic management mechanism of "one-time renovation, long-term maintenance". (2) "Organic renewal of personalized space".
Interpreted through the connotation of "urban double repair", old communities achieve spatial regeneration through ecological restoration; it is a personalized intervention targeted at a specific area. This requires us, in the renovation process, to re-examine the development history and current situation of city, street and community, and to analyze the historical changes of the spatial environment and the differences among residents, so as to select upgrading methods in a reasonable and prudent way. This is why we propose the concept of "organic renewal of personalized space" for the renewal of landscape nodes, infrastructure and public space in different communities. In line with the requirements of "protecting the foundation, promoting the upgrading, expanding the space and increasing the facilities", we optimize the use of space resources in and around residential areas, clarify the contents and basic requirements of personalized space renovation, and strengthen design guidance, so that "every residential area has a unique scheme" (Figure 5). Take the special reconstruction of old communities such as Zhugan Lane in the upper urban area of Hangzhou as an example. To address social problems such as low participation, a weak sense of belonging and a lack of community building, we carefully considered the spatial layout, the residents and their social needs, and carried out targeted, "customized" expansion designs for landscape nodes serving public communication and for multi-functional public facilities. In addition, personalized schemes can be devised for different groups, as in the renovation of Xiaotianzhu community. After field investigation, and focusing on residents who are mostly elderly, we unified the color and style of the greening and landscape facilities in public rest areas and, combining them with elements of traditional Chinese buildings, designed a small garden of ancient charm carrying the cultural flavor of the Southern Song Dynasty, thus creating a comfortable, livable garden-style community full of Zen and the old charm of the water town (Figure 6). As for its practical significance, the "organic renewal of personalized space" not only maintains the basic functions of communities and expands public space and supporting services, but also activates the spatial and living experience of old communities, helping neighborhood spaces of close communication to endure [14]. At the same time, the renewal of each residential area should proceed in stages and with clear emphases, reserving time and space for reflection within the "urban double repair", which is important for striking an efficient balance between "individuality and universality" and for realizing the multi-dimensional development of old communities.

Analysis of the Work Frame and the Technical Path

From the macro perspective, our renewal design needs to combine a harmonious and inclusive social concept, a pluralistic and interconnected cultural concept, and a dialectical and rational economic concept, coordinating renewal through the above research on reconstruction paths that respond to the demands and needs of old communities.
On this basis, the author combines big data, Space Syntax, urban morphology and other technical-path analysis methods to draw the work frame and the analysis diagram of the technical path, providing a reference for the subsequent renovation.

Analysis of the Work Frame of the "Renovation of Old Residential Areas"

Since urban renewal involves many complex interests, successful renewal and reconstruction must be ensured by a truly effective urban renewal governance system, an open and inclusive decision-making mechanism, and a cooperative, coordinated implementation process [15]. The author therefore refers to relevant policies such as "The Experimental Scheme for Future Community Construction in Zhejiang Province". Following the principle of "overall control at the macro level, systematic combination at the meso level, and detailed implementation at the micro level", the author draws a "Frame Diagram of Renovation and Restoration" (Figure 8) detailing all stages from survey to implementation, raises the problems that need to be solved in "urban restoration" and "ecological restoration", and considers the existing problems from point to surface, so as to provide an effective framework for the sustainable development of communities and to guide the subsequent work.

Analysis of the Compound Technology Path

In the age of big data, urban design has moved from the earlier era of space construction, which relied only on designers' intuition and experience, to a stage of quantitative, scientifically rational analysis supported by big data such as urban morphology and Space Syntax. Therefore, on the basis of field surveys, the author combines a series of quantitative urban-morphology analysis tools such as Space Syntax with community planning and design theory to analyze the patterns of spatial vitality of old communities in Hangzhou, making quantitative analyses of spatial vitality at multiple stages of residential renewal design. Building on the above analysis of the problems of old communities, we propose to study the technical path of renewal and reconstruction through experimental design, divided roughly into two parts. On the one hand, based on GIS, we combine the qualitative traditional theory of morphology with quantitative methods (Space Syntax, Spacematrix and MXI) to analyze the urban morphological characteristics of the spatial vitality of communities. On the other hand, we test the effectiveness of this analysis from the perspective of the intensity of residents' selective activities. We then take Jingtan Community in Jianggan District as an example for the technical demonstration. First, we use Space Syntax to abstract the spatial connection relationships and analyze the fabric of Jingtan Community's traffic system; the resulting data images reflect, to a certain extent, the traffic accessibility of the community. Second, we use Spacematrix, based on data on plot ratio, construction intensity and building height, to characterize the spatial form and layout of the community; simply put, this method defines the spatial characteristics of the community through its development intensity. Third, based on the calculation of the MXI (Mixed-use Index), we can define the degree of functional mixing of the residential area by the ratio of the building floor areas of three main functions: living, working (public space) and facilities (Figure 9). From the MXI calculation (the index expresses the living, working and facility floor areas as percentages of the total gross floor area), we can predict the analysis chart of the functional mixing degree of the residential environment, and the chart can accurately reflect the spatial situation of Jingtan Community to guide the subsequent reconstruction.
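As a minimal illustration of the MXI calculation just described, the following Python sketch computes the index for a single block from hypothetical floor-area figures; the numbers are invented for illustration and are not survey data from Jingtan Community.

```python
def mxi(living_gfa: float, working_gfa: float, facility_gfa: float):
    """Mixed-use Index: the share of gross floor area (GFA) taken by each
    of the three main functions, expressed as percentages."""
    total = living_gfa + working_gfa + facility_gfa
    if total <= 0:
        raise ValueError("total gross floor area must be positive")
    return tuple(100.0 * gfa / total
                 for gfa in (living_gfa, working_gfa, facility_gfa))

# Hypothetical block: 42,000 m2 housing, 9,000 m2 work/public space,
# 6,000 m2 supporting facilities (illustrative figures only).
live, work, fac = mxi(42_000, 9_000, 6_000)
print(f"MXI = {live:.0f}% living / {work:.0f}% working / {fac:.0f}% facilities")
# ~74% living indicates a predominantly mono-functional residential block;
# shares closer to an even split indicate a genuinely mixed-use fabric.
```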
Through the combined data analysis of the three methods of Space Syntax, Spacematrix and MXI over the community area (Figure 10), we can respectively describe the accessibility of the residential space, the construction intensity of the plots, and the compound conditions of architectural form and plot function [16] for Jingtan Community. Starting from these entry points of data-based, scientific analysis and reconstruction, and from factors such as crowd activity, we study the renewal design of the spatial form of the community, including traffic, building height, public landscape and the layout of public activity space, so as to collect baseline data for the renewal design of old communities in different regions. The above research shows that the cultivation of urban spatial vitality is not a vague concept lacking practical measurement. To ensure a better design and renovation outcome, data analysis can guide the future renewal design, providing a solid foundation for good traffic accessibility in the community, a suitable layout of building space, and an appropriate degree of functional mixing in public space. Meanwhile, the effectiveness of the design and reconstruction can be tested through data analysis of non-spatial elements, such as the intensity of residents' selective activities after the renovation. These quantitative data, recorded on maps, help move urban design from the original "traditional intention-based design" to "multi-dimensional compound design research based on big data", offering a reference for our renewal design and a new perspective for the construction of urban spatial vitality.

Conclusions

The reconstruction of a community is a process of sustainable regeneration and activation, conducive to comprehensive improvements, both recessive and dominant, in the context of urban development: the beautification of the urban environment, the improvement of residents' living conditions, and more. In the process of reconstruction, we should fully combine the inner artistic conception of social and cultural attributes to reconstruct and integrate the external space, and give comprehensive consideration to the multi-level renovation of economic benefits. As the above step-by-step analysis shows, we should further clarify that the basic responsibility of urban renewal is to cultivate the living environment, and that its expanded responsibility is to coordinate the reopening of the city [17].
We should reconstruct the places we know well and equip our communities to face the future, developing sustainable and resilient future communities, so that the regeneration and activation of settlements can form a virtuous circle and interaction under the premise of "stock planning" and the requirements of improving urban services, improving the ecological pattern, and highlighting the cultural background.
Simultaneous stimulation of sedoheptulose 1,7-bisphosphatase, fructose 1,6-bisphosphate aldolase and the photorespiratory glycine decarboxylase-H protein increases CO2 assimilation, vegetative biomass and seed yield in Arabidopsis

Summary

In this article, we have altered the levels of three different enzymes involved in the Calvin–Benson cycle and the photorespiratory pathway. We have generated transgenic Arabidopsis plants with altered combinations of sedoheptulose 1,7-bisphosphatase (SBPase), fructose 1,6-bisphosphate aldolase (FBPA) and the glycine decarboxylase-H protein (GDC-H), genes identified as targets to improve photosynthesis in previous studies. Here, we show that increasing the levels of the three corresponding proteins, either independently or in combination, significantly increases the quantum efficiency of PSII. Furthermore, photosynthetic measurements demonstrated an increase in the maximum efficiency of CO2 fixation in lines over-expressing SBPase and FBPA. Moreover, the co-expression of GDC-H with SBPase and FBPA resulted in a cumulative positive impact on leaf area and biomass. Finally, further analysis of transgenic lines revealed a cumulative increase in seed yield in SFH lines grown in high light. These results demonstrate the potential of multigene stacking for improving the productivity of food and energy crops.

Introduction

The accumulated photosynthate produced over the season determines the yield of a crop, but improvements in photosynthesis have not been used in traditional breeding approaches to identify high-yielding varieties. The reasons for this are twofold: (i) methodologies to make accurate field measurements have only become available in the last 10–20 years, and (ii) there is a lack of evidence to determine whether the rate of photosynthesis on a leaf-area basis correlates with the final yield of the crop (Evans, 2013; Fischer et al., 1998; Gifford and Evans, 1981). There is now an urgent need to increase crop productivity and yields to meet the nutritional demands of a growing world population, and there is growing evidence that this may be achieved through improvement of photosynthetic energy conversion to biomass (von Caemmerer and Evans, 2010; Ding et al., 2016; Lefebvre et al., 2005; Long et al., 2006, 2015; Simkin et al., 2015). A combination of theoretical studies and transgenic approaches has provided compelling evidence that manipulation of the Calvin–Benson (CB) cycle can improve energy conversion efficiency and lead to an increase in yield potential (Long et al., 2006; Poolman et al., 2000; Raines, 2003, 2006, 2011; Zhu et al., 2007, 2010). Previous studies have demonstrated that even small reductions in individual CB cycle enzymes such as sedoheptulose 1,7-bisphosphatase (SBPase) and fructose 1,6-bisphosphate aldolase (FBPA) negatively impact carbon assimilation and growth, indicating that these enzymes exert significant control over photosynthetic efficiency (Ding et al., 2016; Haake et al., 1998, 1999; Harrison et al., 1998, 2001; Lawson et al., 2006; Raines, 2003; Raines and Paul, 2006; Raines et al., 1999). Furthermore, disruption of the chloroplastic fructose-1,6-bisphosphatase (FBPase) gene was also shown to impact negatively on carbon fixation (Kossmann et al., 1994; Rojas-González et al., 2015; Sahrawy et al., 2004).
These results strongly suggested that improvements in photosynthetic carbon fixation may be achieved by increasing the activity of individual CB cycle enzymes. Evidence supporting this hypothesis came from transgenic tobacco plants over-expressing SBPase (Lefebvre et al., 2005), the cyanobacterial bifunctional SBPase/FBPase (Miyagawa et al., 2001) or FBPA (Uematsu et al., 2012). These single manipulations resulted in increases in photosynthetic carbon assimilation, enhanced growth and an increase in biomass. More recently, Simkin et al. (2015) demonstrated that the combined over-expression of SBPase and FBPA in tobacco resulted in a cumulative increase in biomass, and that these increases could be further enhanced by the over-expression of the cyanobacterial inorganic carbon transporter B (ictB), proposed to be involved in CO2 transport, although its function was not established in these plants (Simkin et al., 2015). These results demonstrate the potential for the manipulation of photosynthesis, using multigene stacking approaches, to increase biomass yield (Simkin et al., 2015). The efficiency of CO2 fixation by the CB cycle is compromised by the oxygenase activity of ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco), which directly competes with CO2 fixation at the active site, resulting in the formation of 2-phosphoglycolate (2PG) and subsequently in significant energy costs and CO2 losses in the photorespiratory pathway, with significant losses in yield (Bowes et al., 1971; Tolbert, 1997; Walker et al., 2015, 2016). A major target to improve photosynthesis has therefore been to reduce photorespiration, either through protein engineering to improve Rubisco catalysis or by limiting the flux through this pathway, neither of which has yet yielded positive results, due to the complexity of the Rubisco catalytic and assembly processes (Cai et al., 2014; Carmo-Silva et al., 2015; Lin et al., 2014a; Orr et al., 2016; Sharwood et al., 2016; Whitney et al., 2011). More ambitious approaches to this problem are now being taken, including the introduction of cyanobacterial or algal CO2-concentrating mechanisms, novel synthetic metabolic pathways, and the introduction of the C4 pathway into C3 crops (Betti et al., 2016; Lin et al., 2014b; McGrath and Long, 2014; Meyer et al., 2016; Montgomery et al., 2016). However, to date the only successful approach to limiting photorespiration that has resulted in an improvement in photosynthesis has been the introduction of alternative routes to metabolize 2PG and return CO2 for use in the CB cycle (Dalal et al., 2015; Kebeish et al., 2007; Maier et al., 2012; Nölke et al., 2014; Peterhänsel et al., 2013; Xin et al., 2015). Reductions in the flux through the photorespiratory cycle by targeted knock-down of GDC-P in potato and GDC-H in rice have been shown to lead to reductions in photosynthesis and growth rates (Engel et al., 2007; Heineke et al., 2001; Lin et al., 2016). The opposite approach, namely over-expression of the glycine decarboxylase H protein (GDC-H) and the glycine decarboxylase L protein (GDC-L) in Arabidopsis thaliana (Arabidopsis), resulted in an improvement of photosynthesis and increased vegetative biomass when compared to wild-type plants (Timm et al., 2012, 2015, 2016).
Although the underlying mechanism responsible for this effect has not been fully elucidated, these authors proposed that stimulation of the CB cycle is brought about by the increase in GDC activity, which reduces the steady-state levels of photorespiratory metabolites that may negatively impact the function of the CB cycle (e.g., 2PG, glycolate, glyoxylate or glycine) (Anderson, 1971; Kelly and Latzko, 1976; Eisenhut et al., 2007; Lu et al., 2014; Timm et al., 2015, 2016). In the light of the results from Timm et al. (2012, 2015), the aim of this study was to explore the possibility that a simultaneous increase in the activity of enzymes of both the CB cycle and the photorespiratory pathway could lead to a cumulative positive impact on photosynthetic carbon assimilation and yield. To test this, we have taken a proof-of-concept approach using the model plant Arabidopsis in which we have over-expressed SBPase, FBPA and GDC-H either alone or in combination. We have shown that the simultaneous manipulation of multiple targets can lead to a cumulative impact on biomass yield under both low- and high-light growing conditions. Interestingly, we have also shown that manipulation of the photorespiratory pathway alone resulted in an increase in vegetative biomass but not in seed yield. In contrast, manipulation of both the CB cycle and the photorespiratory pathway increased both biomass and seed yield.

Results

Further analysis was carried out on T3 plants grown at 130 µmol/m²/s in an 8-h/16-h light/dark cycle, and total extractable SBPase and FBPA activity was determined in extracts from newly fully expanded leaves. The results are represented as a percentage (%) of the total activity of SBPase (6.7 µmol/m²/s) and FBP aldolase (22 µmol/m²/s) determined in wild type (WT). This analysis showed that these plants had increased levels of SBPase (137%–185%) and FBPA (146%–180%) activity (Figure 1) compared to WT and non-transformed azygous (A) controls (the azygous control plants used in this study were recovered from the segregating population and verified by PCR). Interestingly, a small increase in endogenous FBPA activity (125%–136%) was also observed in SBPase over-expressing lines (Figure 1a), but no significant increase in SBPase activity was observed in lines over-expressing FBPA. Plants over-expressing SBPase (S), FBPA (F) and the GDC-H protein (H) were generated by crossing two SBPase + FBPA (SF) lines (SF6 and SF12) with two Flaveria pringlei GDC-H protein (Kopriva and Bauwe, 1995) over-expressing lines (FpHL17 and FpHL18) originally generated by Timm et al. (2012) under the control of the leaf-specific and light-regulated Solanum tuberosum ST-LS1 promoter (Stockhaus et al., 1989). Four independent lines (SFH4, SFH20, SFH23 and SFH31) over-expressing SBPase, FBPA and GDC-H (SFH) were identified by PCR and by SBPase and FBPA enzyme activities. SBPase and FBPA protein levels were found to be similar to those observed in SF lines (Figure 1b). No significant difference in SBPase or FBPA activities was observed in lines over-expressing GDC-H alone compared to WT/A controls (C). The full set of assays showing the variation between plants for both SBPase and FBPA activities can be seen in Figure S2. In addition to total extractable enzyme activity, immunoblot analysis of the T3 progenies of S, F, SF, H and SFH lines was carried out using WT/A as controls (C).
This analysis identified a number of plants over-expressing SBPase or FBPA and others with increased levels of both SBPase and FBPA (Figures 1a,b and S3). Interestingly, the over-expression of SBPase in Arabidopsis led to an increase in endogenous FBPA protein levels (Figure 1a), in agreement with the observed increase in enzyme activity. The original H lines and the newly generated SFH plants were shown to accumulate GDC-H when compared to both non-transformed control plants and the other transgenic lines (Figure 1a,b). Given the change in FBPA protein levels in the SBPase over-expressing line, we used immunoblot analysis to determine whether there were any changes in other CB cycle enzymes. No detectable changes in the levels of transketolase (TK), phosphoribulokinase (PRK), fructose-1,6-bisphosphatase (FBPase), Rubisco or the ADP-glucose pyrophosphorylase small protein (ssAGPase) were observed when compared to levels in C plants (Figure 2).

Chlorophyll fluorescence imaging reveals increased photosynthetic efficiency in young over-expressing seedlings

To explore the impact of increased levels of SBPase, FBPA and the GDC-H protein on photosynthesis, plants were grown at 130 µmol/m²/s in an 8-h/16-h light/dark cycle and the quantum efficiency of PSII photochemistry (Fq'/Fm') was analysed using chlorophyll a fluorescence imaging (Baker, 2008; Murchie and Lawson, 2013). Plants over-expressing SBPase and FBPA, either independently or in combination (including with GDC-H), had a significantly higher Fq'/Fm' at an irradiance of 200 µmol/m²/s when compared to C plants (Figure 3a,b). Plants over-expressing GDC-H alone showed a small increase in the average levels of Fq'/Fm' compared to C (P = 0.061). When measurements were made at a higher light level (600 µmol/m²/s), all lines analysed, with the exception of SFH, showed a significant increase in Fq'/Fm' compared to C plants (Figure S4a). From images taken as part of the chlorophyll fluorescence analysis, leaf area was determined and shown to be significantly larger for all transgenic lines compared with WT and azygous (A) controls (Figure 3c). Interestingly, SFH plants showed the greatest leaf area in all experiments. No significant differences in leaf area were observed between WT and A. From this point on, the C plants used were the combined data from WT and A plants.

Photosynthetic CO2 assimilation rates are increased in mature plants grown in low light

To explore the impact of changes in the levels of enzymes in both the CB cycle and the photorespiratory pathway, CO2 assimilation rates were determined as a function of light intensity (Figure 4a,b). From these light response curves, the maximum light-saturated rate of photosynthesis (Asat) was shown to be significantly higher in all transgenic plants when compared to C plants (Figure 4c). Small differences in CO2 assimilation rates (A) were also observed in the S, F, SF and SFH plants, even at light intensities as low as 150 µmol/m²/s, which is close to that of the growth conditions (Figure S5). We also determined A as a function of internal CO2 concentration (Ci) in the same plants (Figure 4d,e). In all transgenic plants, except those over-expressing GDC-H alone, A was significantly greater at Ci concentrations above 400 µmol/mol than in C plants (Figure 4d,e). Although A in SFH plants was higher than in the control plants at 400 µmol/mol, it was lower than that observed in the S, F or SF plants.
The maximum rate of CO2 assimilation (Amax) was significantly higher in lines S, F, SF and SFH compared to C; however, no significant differences were observed between these lines (Figure 4f).

[Figure 1 caption: S, F, SF and SFH lines used in this study compared to the non-transformed control (C). Enzyme assays represent data from 12 to 24 independent plants per group compared to 12–16 C plants. The results are represented as a percentage (%) of the total activity of SBPase (6.7 µmol/m²/s) and FBP aldolase (22 µmol/m²/s) determined in wild type (WT). Enzyme activities per plant can be seen in Figure S2. Columns represent mean values, and standard errors are displayed. Lines that are significantly different from C are indicated (*P < 0.05).]

Interestingly, the H plants showed no increase in Amax when compared to C plants. Further analysis of the A/Ci curves using the equations published by von Caemmerer and Farquhar (1981) illustrated that the maximum rate of carboxylation by Rubisco (Vcmax; Figure S4b) in lines S, SF and SFH was significantly greater than in C, and Vcmax in these lines was also significantly greater than in lines expressing GDC-H alone. Maximum electron transport rates (Jmax; Figure S4c) were also elevated in lines S, F, SF and SFH compared to C, and were also significantly elevated compared to H. To further assess the effect of the manipulation of the CB cycle and/or the GDC-H protein, the rates of photosynthetic carbon assimilation and electron transport were determined in mature plants as a function of light intensity at 2% [O2] to eliminate photorespiration (Figure 5a). Electron transport rates through PSII in H and SFH over-expression plants were significantly greater than in the C and SF plants at light levels above 300 µmol/m²/s (Figure 5b). Asat was also significantly higher, by 12%–19%, in all lines compared to C, although no significant differences were observed between the different transgenic lines (Figure 5c).

Increased SBPase and FBPA activity and over-expression of the glycine decarboxylase-H protein stimulate growth in low light

The growth of the different transgenic and C plants was determined using image analysis of total leaf area over a period of 38 days from planting (Figure 6a), which showed that all transgenic lines had a significantly greater leaf area than C as early as 16 days after planting (Figure 6b). Furthermore, plants over-expressing all three transgenes (SFH) were shown to have a significantly larger leaf area when compared to all other transgenic lines, including H and SF, indicating a cumulative advantage from combining these transgenes at this stage in development. This growth trend continued through 15 days post-planting (Figure S6a). By 20 days after planting (Figure S6b), plants over-expressing the glycine decarboxylase-H protein (H) were shown to be significantly bigger than S, F and SF at the same time point, and the triple over-expressing lines SFH remained significantly bigger than all other lines studied (Figure 6b). Plants were allowed to continue growing until harvest at 38 days (Figure S7). At this stage of development, no significant difference in leaf area or dry weight could be observed between the S, F, H or SF lines when compared to each other (Figure 6c). However, all lines attained a significantly larger leaf area and dry weight when compared to C. Notably, at this stage, the triple over-expressing SFH lines were significantly larger, with a higher dry weight (+70%), than all other transgenic and C plants.
Furthermore, lines SF and SFH both showed a significant increase in leaf number after 38 days (Figure S8).

Increased SBPase and FBPA activity and expression of the glycine decarboxylase-H protein impact the carbohydrate profile of selected lines

To determine how the over-expression of these key proteins impacts downstream processes, leaf tissue was harvested and starch and sugar content were evaluated. No significant difference in starch levels was detected at the end of the day in any of the transgenic lines compared to C (Figure 7). Interestingly, slightly higher starch levels were detected 1 h before sunrise (dark) in transgenic lines F, H and SFH compared to C. All transgenic lines were shown to have consistently higher levels of sucrose, with these levels being significantly higher than C in the F and SF lines. SF lines were also shown to have a significantly higher amount of glucose compared to C (Figure 7), while the other lines were consistently elevated but not significantly so.

Impact of light intensity on biomass and seed yield

A subset of plants was allowed to set seed in either low or high light, and final vegetative biomass and seed yield were determined per plant. In low-light-grown plants, the final vegetative biomass was higher in all of the transgenic lines compared to C; however, no significant differences were observed between the different transgenic lines (Figure 8a). Furthermore, seed yield was increased by 35%–53% in transgenic lines S, SF and SFH (Figure 8b). Interestingly, no increase in seed yield was observed in the H plants. We next examined growth in high light to explore further the potential positive impact of these transgenic manipulations. In high-light-grown plants, an increase in vegetative biomass of 14% to 51% was observed (Figures 8c and S9). Notably, the H and SFH plants produced significantly more vegetative biomass than the S, F, SF or C plants. Furthermore, seed yield in high-light-grown plants was increased by 39%–62% in transgenic lines S, F, SF and SFH when compared to C (Figure 8d). Although the highest increase in seed yield was observed in the SFH lines in high light, no increase in seed yield was observed in the H plants grown in high light. The seed yield for individual plants can be seen in Figure S10.

Discussion

In this study, we have shown that simultaneously increasing the levels of two enzymes of the CB cycle, SBPase and FBPA, and the H protein of the glycine decarboxylase enzyme of the photorespiratory pathway in the same plant resulted in a substantial and significant increase in both vegetative biomass and seed yield of Arabidopsis grown under controlled environment conditions. An increase in both biomass and yield was also observed in plants over-expressing SBPase or FBPA alone or in combination. However, although over-expression of GDC-H alone resulted in an increase in vegetative biomass, no increase in seed yield was evident in these plants, grown in either low- or high-light conditions. The reasons for this differential effect on seed yield have not yet been elucidated but may be due to changes in carbon status brought about by altered source/sink allocation, which is supported by changes to starch and sucrose levels at the end of the night period in some of these lines.
Higher levels of sucrose (and fructose and maltose) have also been observed in GDC-L over-expressers (Timm et al., 2015), and the over-expression of GDC-L enhances the metabolic capacity of photorespiration and is believed to alter the carbon flow through the TCA cycle (Timm et al., 2015). It was shown in earlier studies that over-expression of FBPA or SBPase alone in tobacco results in a stimulation of photosynthesis and biomass, with the greatest effect seen in plants grown under elevated CO2 (Lefebvre et al., 2005; Rosenthal et al., 2011; Uematsu et al., 2012). Furthermore, when FBPA was over-expressed in combination with SBPase in tobacco, this led to a cumulative increase in biomass in plants grown in ambient CO2 under greenhouse conditions (Simkin et al., 2015). Interestingly, in the current study we have shown that in Arabidopsis the over-expression of FBPA alone, under current atmospheric CO2 levels, results in a stimulation of photosynthesis and an increase in biomass similar to that observed with over-expression of SBPase alone. However, contrary to the results obtained in tobacco, the co-expression of SBPase and FBPA in Arabidopsis did not lead to a further significant increase in either leaf area or biomass when compared to plants independently expressing SBPase (which also had higher endogenous FBPA activity) or FBPA. This lack of a differential effect of the co-over-expression of SBPase and FBPA in this study can likely be explained by the fact that over-expression of SBPase in Arabidopsis also led to a small but significant increase in endogenous FBPA protein levels and activity (25%–36%). Given that no increase in SBPase was present in the FBPA plants, this suggests that in Arabidopsis the stimulation seen in the SBPase, FBPA and SF over-expression lines may be due to increased FBPA activity. This is in contrast to tobacco, where over-expression of SBPase alone led to an increase in biomass without an increase in endogenous FBPA activity, highlighting the differences between species (Lefebvre et al., 2005; Rosenthal et al., 2011; Simkin et al., 2015). Detailed analysis of a range of photosynthesis parameters revealed a similar increase in Asat at low [O2] for all of the transgenic lines studied. The most significant increase was observed in the SF lines, which showed a 44% increase over control plants, with the lowest increase of 19% observed in the H plants. An evaluation of the electron transport rates at low [O2] in a subset of these plants showed that lines over-expressing GDC-H (both H and SFH) displayed higher photosynthetic electron transport rates compared to C and to plants over-expressing SBPase and FBPA (SF). These results are in keeping with the previous study by Timm et al. (2012). All of the transgenic lines analysed here showed an increase in photosynthesis under high light and ambient CO2 conditions. However, under high light and saturating levels of CO2, the rate of assimilation in the H plants was similar to C, in contrast to all other transgenic lines. This observation is in keeping with the proposal that over-expression of the H protein stimulates the flow of carbon through the photorespiratory pathway, thereby reducing steady-state levels of inhibitory photorespiratory metabolites, which in turn stimulates flux through the CB cycle.
Whilst this hypothesis is supported by metabolite data and by the observation that growth of GDC-H plants is not stimulated when these plants are grown in elevated CO2 conditions (Timm et al., 2012), the exact mechanism of such feedback from photorespiration to the CB cycle is not yet known. The effect of these manipulations on photosynthesis was also determined at the growth light intensity, where small differences in A are observed even at light levels as low as 150 µmol/m²/s. This, together with the increased leaf area observed at early stages in development, provides evidence that small differences in photosynthesis lead to an increase in leaf area; the cumulative impact of this over time results in increased biomass and yield.

Conclusion

In this proof-of-concept study in Arabidopsis, we have demonstrated that the simultaneous over-expression of two CB cycle enzymes leads to an increase in photosynthesis and an increase in overall biomass and seed yield. We also show that when the transgenic SF lines were crossed with GDC-H over-expressing plants (Timm et al., 2012), the combined effects of these three transgenes (SFH) resulted in a cumulative impact on biomass (+71%) which was significantly higher than H (+50%) and SF (+41%) under low light. Importantly, the work here also allowed a parallel comparative analysis between the different manipulations under different environmental conditions. Although it is still necessary to address the importance of these manipulations in crop species and under field conditions, this study provides additional evidence that multigene manipulation of photosynthesis and photorespiration can form an important tool to improve crop yield. These results also provide new information indicating that it will be necessary to tailor the targets for manipulation to different crops and to either biomass or seed yield.

Experimental procedures

Constructs were generated using Gateway cloning technology and vector pGWPTS1. All transgenes were under the control of the rbcS2B (1150 bp; At5g38420) promoter. Full details of PTS1-SB, PTS1-FB and PTS1-SBFB construct assembly can be found in the supplementary data. Construct maps are shown in Figure S1b–d.

Generation of transgenic plants

The recombinant plasmids PTS1-SB, PTS1-FB and PTS1-SBFB were introduced into wild-type Arabidopsis by floral dipping (Clough and Bent, 1998) using Agrobacterium tumefaciens GV3101. Positive transformants were regenerated on MS medium containing kanamycin (50 mg/L) and hygromycin (20 mg/L). Kanamycin-/hygromycin-resistant primary transformants (T1 generation) with established root systems were transferred to soil and allowed to self-fertilize. Plants over-expressing SBPase, FBPA and the GDC-H protein were generated by floral inoculation of two SBPase + FBPA lines (SF6 and SF12) with pollen from two GDC-H protein over-expressing lines (FpH17 and FpH18) provided by Timm et al. (2012). Lines FpH17 and FpH18 were originally generated by floral dipping and over-express the Flaveria pringlei GDC-H protein (Kopriva and Bauwe, 1995) under the control of the leaf-specific and light-regulated Solanum tuberosum ST-LS1 promoter (Stockhaus et al., 1989). Following initial characterization of the generated lines, three lines each for SBPase (S3, S8, S12), FBPA (F6, F9, F11) and SF (SF6, SF7, SF12) were selected for further study from all lines generated.
Plant growth conditions

Wild-type and T2 Arabidopsis plants resulting from self-fertilization of transgenic plants were germinated in sterile agar medium containing Murashige and Skoog salts (plus kanamycin, 50 mg/L, for the transformants) and grown to seed in soil (Levington F2, Fisons, Ipswich, UK). Lines of interest were identified by immunoblot and qPCR. For experimental study, T3 progeny seeds from selected lines were germinated on soil in controlled environment chambers at an irradiance of 130 µmol photons/m²/s, 22°C, relative humidity of 60%, in an 8-h/16-h square-wave photoperiod. Plants were sown randomly, and trays were rotated daily. Four leaf discs (0.6 cm diameter) from two individual leaves were taken for the analysis of SBPase and FBPA activities, immediately plunged into liquid N2, and stored at −80°C. Leaf areas were calculated using standard photography and ImageJ software (imagej.nih.gov/ij). Wild-type plants and null segregants (azygous) used in this study were initially evaluated independently. However, once it was determined that no significant differences were observed between these two groups (see figures and supplementary figures), wild-type plants and null segregants (verified by PCR for non-integration of the transgene) were combined and used as a single 'control' group (C).

Determination of SBPase activity by phosphate release

SBPase activity was determined by phosphate release. Immediately after photosynthesis measurements, leaf discs were isolated from the same leaves and frozen in liquid nitrogen. For analysis, leaf discs were ground to a fine powder in liquid nitrogen in extraction buffer (50 mM HEPES, pH 8.2; 5 mM MgCl2; 1 mM EDTA; 1 mM EGTA; 10% glycerol; 0.1% Triton X-100; 2 mM benzamidine; 2 mM aminocaproic acid; 0.5 mM phenylmethylsulphonyl fluoride; 10 mM dithiothreitol), and the resulting solution was centrifuged for 1 min at 14 000 g, 4°C. The resulting supernatant was desalted through an NAP-10 column (Amersham), and the eluate was aliquoted and stored in liquid nitrogen. For the assay, the reaction was started by adding 20 µL of extract to 80 µL of assay buffer (50 mM Tris, pH 8.2; 15 mM MgCl2; 1.5 mM EDTA; 10 mM dithiothreitol; 2 mM SBP) and incubated at 25°C for 30 min as described previously (Simkin et al., 2015). The reaction was stopped by the addition of 50 µL of 1 M perchloric acid and centrifuged for 10 min at 14 000 g, 4°C. Samples (30 µL) and standards (30 µL, 0.125–4 nmol PO₄³⁻) were incubated in triplicate for 30 min at room temperature following the addition of 300 µL of Biomol Green (Affiniti Research Products, Exeter, UK), and the A620 was measured using a microplate reader (VERSAmax, Molecular Devices, Sunnyvale, CA).

Determination of FBPA activity

Desalted protein extracts, prepared as described above, were evaluated for FBPA activity as described previously (Haake et al., 1998).

Chlorophyll fluorescence imaging

Measurements were performed on 2-week-old Arabidopsis seedlings that had been grown in a controlled environment chamber providing 130 µmol/m²/s PPFD and ambient CO2. Chlorophyll fluorescence parameters were obtained using a chlorophyll fluorescence (CF) imaging system (Technologica, Colchester, UK; Barbagallo et al., 2003; Baker and Rosenqvist, 2004). The operating efficiency of photosystem II (PSII) photochemistry, Fq'/Fm', was calculated from measurements of steady-state fluorescence in the light (F') and maximum fluorescence in the light (Fm'), since Fq'/Fm' = (Fm' − F')/Fm'.
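As an aside for readers reproducing such analyses, the pixel-wise computation implied by this formula is straightforward; the following minimal NumPy sketch, operating on randomly generated stand-in images rather than real fluorescence data, illustrates it.

```python
import numpy as np

def psii_operating_efficiency(f_prime: np.ndarray, fm_prime: np.ndarray) -> np.ndarray:
    """Pixel-wise Fq'/Fm' = (Fm' - F')/Fm' from a steady-state image (F')
    and a saturating-pulse image (Fm'); dark pixels are masked as NaN."""
    f = f_prime.astype(float)
    fm = fm_prime.astype(float)
    safe_fm = np.where(fm > 0, fm, 1.0)        # avoid division by zero
    return np.where(fm > 0, (fm - f) / safe_fm, np.nan)

# Stand-in 8-bit "images" of a seedling rosette (random, for illustration)
rng = np.random.default_rng(0)
f_img = rng.integers(40, 120, size=(480, 640))
fm_img = f_img + rng.integers(60, 140, size=(480, 640))
fq_fm = psii_operating_efficiency(f_img, fm_img)
print(f"mean Fq'/Fm' over the imaged area: {np.nanmean(fq_fm):.3f}")
```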
Images of F' were taken when fluorescence was stable at 130 µmol/m²/s PPFD, whilst images of maximum fluorescence were obtained after a saturating 600 ms pulse of 6200 µmol/m²/s PPFD (Baker et al., 2001; Oxborough and Baker, 1997). Parallel measurements of plants grown in high light (390 µmol/m²/s PPFD and ambient CO2) were also performed on 2-week-old Arabidopsis (Supporting Information).

Gas exchange measurements

The response of net photosynthesis (A) to intracellular CO2 (Ci) was measured using a portable gas exchange system (CIRAS-1, PP Systems Ltd, Ayrshire, UK). Leaves were illuminated with an integral red-blue LED light source (PP Systems Ltd) attached to the gas exchange system, and light levels were maintained at a saturating photosynthetic photon flux density (PPFD) of 1000 µmol/m²/s for the duration of the A/Ci response curve. Measurements of A were made at an ambient CO2 concentration (Ca) of 400 µmol/mol, after which Ca was decreased stepwise to 300, 200, 150, 100 and 50 µmol/mol, returned to the initial value, and then increased to 500, 600, 700, 800, 900, 1000, 1100 and 1200 µmol/mol. Leaf temperature and vapour pressure deficit (VPD) were maintained at 22°C and 1 ± 0.2 kPa, respectively. The maximum rate of Rubisco carboxylation (Vcmax) and the maximum rate of electron transport for RuBP regeneration (Jmax) were determined and standardized to a leaf temperature of 25°C based on equations from Bernacchi et al. (2001) and McMurtrie and Wang (1993), respectively.

Photosynthetic light response curves

A/Q response curves were measured using the same portable gas exchange system (CIRAS-1, PP Systems Ltd). Cuvette conditions were maintained at a leaf temperature of 22°C, relative humidity of 50%–60%, and the ambient growth CO2 concentration (400 µmol/mol for plants grown in ambient conditions). Leaves were initially stabilized at a saturating irradiance of 1000 µmol/m²/s, after which A and gs were measured at the following PPFD levels: 0, 50, 100, 150, 200, 250, 300, 350, 400, 500, 600, 800 and 1000 µmol/m²/s. Measurements were recorded after A reached a new steady state (1–2 min) and before stomatal conductance (gs) adjusted to the new light level. A/Q analyses were performed at 21% and 2% O2.

Determination of sucrose and starch

Carbohydrates and starch were extracted from 20 mg of leaf tissue; samples were collected at two time points, 1 h before dawn (15 h into the dark period) and 1 h before sunset (7 h into the light period). Four leaf discs collected from two different leaves were ground in liquid nitrogen, and 20 mg FW of tissue was incubated in 80% (v/v) ethanol for 20 min at 80°C; the extraction was repeated three times with 80% (v/v) ethanol at 80°C. The resulting solid pellet and pooled ethanol extracts were freeze-dried. Sucrose was measured from the ethanol extracts using an enzyme-based protocol (Stitt et al., 1989), and the starch content was estimated from the ethanol-insoluble pellet according to Stitt et al. (1978), with the exception that the samples were boiled for 1 h rather than autoclaved.

Statistical analysis

All statistical analyses were performed using ANOVA (Sys-stat, University of Essex, UK). Differences between means were tested using the post hoc Tukey test (SPSS, Chicago, IL).
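For readers working outside SPSS, an equivalent ANOVA-plus-Tukey workflow can be sketched in Python with SciPy and statsmodels; the group labels and yield values below are fabricated purely to show the mechanics and are not data from this study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Fabricated seed yields (g/plant) for a control and two transgenic groups
groups = {
    "C":   rng.normal(0.20, 0.03, 12),
    "SF":  rng.normal(0.27, 0.03, 12),
    "SFH": rng.normal(0.30, 0.03, 12),
}

# One-way ANOVA across all groups
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

# Post hoc pairwise comparisons with the Tukey HSD test
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```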
Supporting information

Additional Supporting Information may be found online in the supporting information tab for this article:

Figure S1 Schematic representation of (a) the vector pGWPTS1, (b) the A. thaliana SBPase (PTS1-SB) over-expression construct, (c) the A. thaliana FBPA (PTS1-FB) over-expression construct, and (d) the structure of the dual construct for the expression of both SBPase and FBPA (PTS1-SBFB).
Figure S2 (a) Complete data set for SBPase enzyme assays in the plants analysed. (b) Complete data set for FBP aldolase enzyme assays in the plants analysed.
Figure S3 Molecular and biochemical analysis of the transgenic plants over-expressing SBPase (S), FBPA (F) or both (SF).
Figure S4 (a) The operating efficiency of PSII photochemistry of C and transgenic plants at 600 µmol/m²/s light, determined using chlorophyll fluorescence imaging. (b) The maximum carboxylation activity of Rubisco and (c) Jmax, derived from A/Ci response curves (Figure 4).
Figure S5 Photosynthetic carbon fixation rates determined as a function of light intensity in developing leaves.
Figure S6 Complete data set for all transgenic lines evaluated: (a) leaf area at 15 days, (b) leaf area at 20 days, (c) leaf area at 25 days.
Figure S7 Growth analysis of the transgenic and control plants grown in low light.
Figure S8 Leaf number in control and transgenic lines.
Figure S9 Complete data set for leaf area of all transgenic lines evaluated in high light (390 µmol/m²/s).
Figure S10 Complete data set for seed yield (g) of all transgenic lines evaluated in (a) low light (130 µmol/m²/s) and (b) high light (390 µmol/m²/s).
Scale Accuracy Evaluation of Image-Based 3D Reconstruction Strategies Using Laser Photogrammetry

Rapid developments in the field of underwater photogrammetry have given scientists the ability to produce accurate 3-dimensional (3D) models, which are now increasingly used in the representation and study of local areas of interest. This paper addresses the lack of systematic analysis of 3D reconstruction and navigation fusion strategies, as well as the associated error evaluation of models produced at larger scales in GPS-denied environments using a monocular camera (often in deep sea scenarios). Based on our prior work on automatic scale estimation of Structure from Motion (SfM)-based 3D models using laser scalers, an automatic scale accuracy framework is presented. The confidence level for each of the scale error estimates is independently assessed through the propagation of the uncertainties associated with image features and laser spot detections using a Monte Carlo simulation. The number of iterations used in the simulation was validated through analysis of the behavior of the final estimate. To facilitate the detection and uncertainty estimation of even greatly attenuated laser beams, an automatic laser spot detection method was developed, whose main novelty is the estimation of uncertainties based on the recovered characteristic shapes of laser spots with radially decreasing intensities. The effects of four different reconstruction strategies, resulting from the combinations of incremental/global SfM with the a priori or a posteriori use of navigation data, were analyzed using two distinct survey scenarios captured during the SUBSAINTES 2017 cruise (doi: 10.17600/17001000). The study demonstrates that surveys with multiple overlaps of nonsequential images result in a nearly identical solution regardless of the strategy (SfM or navigation fusion), while surveys with weakly connected, sequentially acquired images are prone to broad-scale deformation (doming effect) when navigation is not included in the optimization. Scenarios with complex survey patterns thus benefit substantially from multiobjective bundle adjustment (BA) navigation fusion. The errors in the models produced by the most appropriate strategy were estimated at around 1% in the central parts and always below 5% at the extremities. The effects of combining data from multiple surveys were also evaluated. The introduction of additional vectors in the optimization of multisurvey problems successfully accounted for the offset changes present in the underwater USBL-based navigation data, and thus minimized the effect of contradicting navigation priors. Our results also illustrate the importance of collecting a multitude of evaluation data at different locations and moments during the survey.

Introduction

Accurate and detailed 3D models of the environment are now an essential tool in different scientific and applied fields, such as geology, biology, engineering, and archaeology. With advancements in photographic equipment and improvements in image processing and in the computational capabilities of computers, optical cameras are now widely used due to their low cost, ease of use, and the sufficient accuracy of the resulting models for their scientific exploitation. The application of traditional aerial and terrestrial photogrammetry has greatly expanded in recent years, with commercial and custom-built camera systems and software solutions enabling a nearly black-box type of data processing (e.g., [1–4]).
These rapid developments have also significantly benefited the field of underwater photogrammetry. The ability to produce accurate 3D models from monocular cameras under the unfavorable properties of the water medium (light attenuation and scattering, among other effects) [5], together with advancements in unmanned underwater vehicles, has given scientists unprecedented access to image the seafloor and its ecosystems, from shallow waters to the deep ocean [6–9]. Optical seafloor imagery is now routinely acquired with deep sea vehicles, often in association with other geophysical data (acoustic backscatter and multibeam bathymetry) and water column measurements (temperature, salinity, and chemical composition). High-resolution 3D models with associated textures are thus increasingly used in the representation and study of local areas of interest. However, most remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs) currently used in science missions have limited optical sensing capabilities, commonly comprising a main camera used by the ROV pilot, while larger work-class ROVs carry additional cameras for maneuvering. Due to the nature of projective geometry, performing 3D reconstruction using only optical imagery acquired by monocular cameras results in a 3D model which is defined only up to scale, meaning that a unit in the model does not necessarily correspond to a standard unit such as the meter [10]. In order to correctly disambiguate the scale, it is essential to use additional information in the process of model building. Solutions in subaerial applications are predominantly based on the fusion of image measurements with robust and dependable satellite references, such as Global Navigation Satellite Systems (GNSS) [11–13] or ground control points (GCPs) [14–16], due to their accuracy and ease of integration. On the contrary, the water medium not only hinders the possibility of accurately establishing control points, but also prevents the use of the global positioning system (GPS) due to the absorption of electromagnetic waves. Hence, the scale is normally disambiguated either using a combination of acoustic positioning (e.g., Ultra-Short BaseLine (USBL)) and an inertial navigation system (INS) [17–19], or through the introduction of known distances between points in the scene [20]. In shallow water environments, i.e., those accessible by divers, researchers have often placed auxiliary objects (such as a scaling cube [21], locknuts [22], graduated bars [23], etc.) in the scene, and used the knowledge of their dimensions to scale the model a posteriori. Such approaches, while applicable in certain scenarios, are limited to small-scale reconstructions (e.g., a few tens of square meters) and to shallow water environments, due to the challenges of transporting and placing objects in the deep sea. Similarly, laser scalers have been used since the late 1980s, projecting parallel laser beams onto the scene to estimate the scale of the observed area given the known geometric setup of the lasers. Until recently, lasers have mostly been used in image-scaling methods, for measurements within individual images (e.g., Pilgrim et al. [24] and Davis and Tusting [25]). To provide proper scaling, we have recently proposed two novel approaches [26], namely a fully unconstrained method (FUM) and a partially constrained method (PCM), to automatically estimate 3D model scale using a single optical image with identifiable laser projections.
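To make the contrast with the earlier image-scaling methods concrete, the sketch below implements the classic parallel-laser calculation in Python; the spot coordinates and beam separation are hypothetical. Its single mm-per-pixel factor is only valid on a locally planar patch roughly perpendicular to the beams, which is precisely the kind of restriction the FUM and PCM approaches remove.

```python
import math

def image_scale_mm_per_px(spot_a, spot_b, beam_separation_mm):
    """Classic image scaling with a parallel-laser pair: the beams are a
    known distance apart, so the pixel distance between their projected
    spots gives a local mm-per-pixel factor (valid only on a locally
    planar patch roughly perpendicular to the beams)."""
    d_px = math.hypot(spot_b[0] - spot_a[0], spot_b[1] - spot_a[1])
    return beam_separation_mm / d_px

# Hypothetical detections: spots ~163 px apart, beams mounted 100 mm apart
scale = image_scale_mm_per_px((512.3, 388.1), (674.8, 391.4), 100.0)
feature_px = 240.0  # pixel length of, e.g., a fissure crossing the image
print(f"{scale:.3f} mm/px -> feature length ~ {scale * feature_px:.0f} mm")
# On sloped or rough terrain the parallax error of this single factor
# grows quickly, motivating model-based scaling instead.
```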
The proposed methods alleviate numerous restrictions imposed by earlier laser photogrammetry methods (e.g., laser alignment with the optical axis of the camera, perpendicularity of lasers with the scene), and remove the need for manual identification of identical points on the image and 3D model. The main drawback of these methods is the need for purposeful acquisition of images with laser projections, with the associated additional acquisition time. Alternatively, the model scaling can be disambiguated with known metric vehicle displacements (i.e., position and orientation from acoustic positioning, Doppler systems, and depth sensors [19,27,28]). As this information is recorded throughout the mission, such data are normally available for arbitrary segments even if they have not been identified as interesting beforehand. The classic range-and-bearing position estimates from acoustic-based navigation, such as USBL, have an uncertainty that increases with increasing range (i.e., depth), in addition to possible losses of communication (navigation gaps). Consequently, the scale information is inferred from data which is often noisy, poorly resolved, or both. Hence, the quality of the final dataset is contingent on the strategy used in the fusion of image and navigation information. Depending on the approach, the relative ambiguity can cause scale drift, i.e., a variation of scale along the model, causing distortions [29]. Furthermore, the building of large 3D models may require the fusion of imagery acquired in multiple surveys. This merging often results in conflicting information from different dives, and preferentially affects areas of overlap between surveys, negatively impacting the measurements on the model (distances, areas, angles). The need to validate the accuracy of image-based 3D models has soared as the development of both hardware and techniques has enabled the use of standard imaging systems as a viable alternative to more complex and dedicated reconstruction techniques (e.g., structured light). Numerous evaluations of this accuracy are available for aerial and terrestrial 3D models (e.g., the works by the authors of [2,[30][31][32]). Environmental conditions and the limitations of underwater image acquisition preclude the direct transposition of these evaluations to the underwater domain and, to date, most underwater accuracy studies use known 3D models to provide reference measurements. Marine scientists are therefore routinely faced with the dilemma of selecting which analyses can reliably be performed on the data derived from reconstructed 3D models. Early studies [21,[33][34][35][36][37][38] evaluated the accuracy of small-scale reconstructions (mainly on coral colonies), comparing model-based and laboratory-based volume and surface areas for specific corals. More recently, auxiliary objects (e.g., locknuts [22], graduated bars [23], special frames [39,40], and diver weights [41]) have been used to avoid the removal of objects from the environment. Reported inaccuracies range from 0.85% to 17%, while more recent methods achieve errors as low as 2-3% [22,41]. Diver-based measurements and the placement of multiple objects on the seafloor restrict these methods to shallow water or experimental environments, and hinder such approaches in deep sea settings (e.g., scientific cruises), where reference-less evaluation is needed instead; such evaluation has been performed in only a few experiments. Ferrari et al.
[38] evaluated their reconstruction method on a medium-size reef area (400 m) and a 2 km long reef transect. Maximum heights of several quadrants within the model were compared to in situ measurements, coupled with an estimation of structural complexity (rugosity). The stated inaccuracies in reef height were 18 ± 2%. This study split larger transects into approximately 10 m long sections to reduce potential drift, and hence model distortion. Similarly, Gonzales et al. [42] reported a 15% error in rugosity estimates from stereo imaging, compared with results from a standard chain-tape method, along a 2 km long transect. To the best of our knowledge, no other scale accuracy estimate of submarine large-area models has been published. Furthermore, although laser scalers are often used for qualitative visual scaling, they have never been used to evaluate the accuracy of underwater 3D models. Objectives Although a growing body of literature supports the belief that underwater image-based 3D reconstruction is a highly efficient and accurate method at small spatial extents, there is a clear absence of scale accuracy analyses of models produced at larger scales (often in deep sea scenarios). Validation of 3D reconstruction methods and associated error evaluation are thus required for large underwater scenes, to allow the quantitative measurements (distances, volumes, orientations, etc.) required for scientific and technical studies. The main goal of this paper is to present and use an automatic scale accuracy estimation framework, applicable to models reconstructed from optical imagery and associated navigation data. We evaluate various reconstruction strategies often used in research and industrial ROV deep sea surveys. First, we present several methods of 3D reconstruction using underwater vehicle navigation, to provide both scaling and an absolute geographic reference. Most commonly, SfM uses either an incremental or a global strategy, while the vehicle navigation may be considered a priori as part of the optimization process, or a posteriori after full 3D model construction. Here, we compare four different strategies resulting from combinations of Incremental/Global SfM and the a priori and a posteriori use of navigation data. We discuss the impact of each strategy on the final 3D model accuracy. Second, the four methods are evaluated to identify which one is best suited to generate 3D models that combine data from multiple surveys, as this is often required under certain surveying scenarios. Navigation data from different surveys may have significant offsets at the same location (x, y, z, rotation), show noise differences, or both. The changes between different acquisitions of a single scene are taken into account differently by each 3D reconstruction strategy. Third, the approaches recently presented by Istenič et al. [26] for estimating model scale using laser scalers, namely the FUM and PCM methods, are augmented with Monte Carlo simulations to evaluate the uncertainty of the obtained scale estimates. Furthermore, the results are compared with commonly used estimates that suffer from parallax error. Fourth, an automatic laser detection and uncertainty estimation method is presented. Accurate analyses require a multitude of reliable measurements spread across the 3D model, whose manual annotation is extremely labor-intensive, error-prone, and time-consuming, if not nearly impossible.
Unlike previous detection methods, our method detects the centers of laser beams by considering the texture of the scene, and then determines their uncertainty, which, to the best of our knowledge, has not been presented in the literature hitherto. With the data from the SUBSAINTES 2017 cruise (doi: 10.17600/17001000; [43]), we evaluate the advantages and drawbacks of the different strategies to construct underwater 3D models, while providing quantitative error estimates. As indicated above, these methods are universal, as they are not linked to data acquired using specific sensors (e.g., laser systems and stereo cameras), and can be applied to standard imagery acquired with underwater ROVs. Hence, it is possible to process legacy data from prior cruises and with different vehicles and/or imaging systems. Finally, we discuss the best practices for conducting optical surveys, based on the nature of targets and the characteristics of the underwater vehicle and sensors. Image-Based Underwater 3D Reconstruction Textured 3D models result from a set of sequential processing steps (Figure 1). As scene geometry is computed entirely from the optical imagery, the end result directly depends on image quality and an adequate survey strategy. Compared to subaerial imagery, the unfavorable properties of the water medium (i.e., light attenuation and scattering effects) [5] cause blurring of details, low image contrast, and distance-dependent color shifts [44]. As such, acquisition is conducted at close range, thus limiting the observation area of any single image, while significantly increasing the amount of data collected and processed. The distance from the camera to the scene is often defined by a combination of several factors, such as the visibility, the amount of available light, terrain roughness, and the maneuverability of the imaging platform. As some of these factors may change from place to place and over time, it is common to adjust the distance during acquisition. The survey speed is also affected by several factors. Commonly, the survey speed is adjusted as a function of the distance to the scene and of the image exposure time, in order to keep motion blur to minimum levels (often less than 2 pixels). Typically, the survey speed is on the order of one quarter of the distance to the scene per second. Preprocessing Keyframe selection is hence an important preprocessing step, used to remove unnecessary redundancy (i.e., images taken from very similar poses). Selecting too many images may represent an unnecessary increase in processing time, whereas selecting too few images may result in missing observations and prevent the reconstruction of a single complete model. The commonly used strategy of image selection based on constant time intervals (e.g., selecting a frame every n-th second) is often not suitable; surveys with significant changes in speed and/or distance to the scene can lead to over- or under-selection of images. Instead, we use an approach with implicit detection of frames with similar vantage points [45] through estimates of feature displacements between consecutive frames (e.g., the Lucas-Kanade tracking algorithm [46]). For sufficiently dense image sets (e.g., video acquisitions), sharpness may be used for further selection (e.g., variance of the Laplacian [47]). Additionally, color correction can be used to counteract the degradation effects of water.
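To make this preprocessing step concrete, a minimal sketch of one such correction is given below, assuming a simple per-channel gray-world balance; this is a crude stand-in for the enhancement method of Bianco et al. [49] used in our tests, and the function name and interface are illustrative rather than the actual implementation.

```python
import cv2  # used only for image I/O; the balance itself is pure NumPy
import numpy as np

def gray_world(img_bgr):
    """Equalize per-channel means (gray-world assumption) on a uint8 BGR image."""
    img = img_bgr.astype(np.float32)
    gains = img.mean() / img.reshape(-1, 3).mean(axis=0)  # one gain per channel
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# usage: corrected = gray_world(cv2.imread("frame.png"))
```

Equalizing the per-channel means compensates for the strong blue-green cast typical of underwater imagery, although, unlike dedicated methods, it does not model the distance-dependent attenuation.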
Minimizing the effects of the water medium not only benefits human perception and interpretation of the scene, but also improves the quality and quantity of successful feature matches between image pairs [48], thus increasing the quality of the final model. Accurate color information recovery depends on knowledge of the physical image formation model, which is rarely available in full. Alternatively, color enhancing methods (e.g., Bianco et al. [49], used in our tests) can remove the attenuation effects, as well as the color cast introduced by an unknown illuminant (Figure 2). Sparse Reconstruction A concise set of preselected images is then used to jointly estimate the sparse 3D geometry of the scene (a set of 3D points) and the motion of the camera (trajectory) through a technique called Structure from Motion (SfM). The inherent scale ambiguity of the reconstructed 3D structure and camera motion from a set of images taken by a monocular camera is addressed by either using the vehicle navigation a priori as part of the optimization process (multiobjective BA), or a posteriori through an alignment with the reconstructed camera path using a similarity transformation. As the structure and motion parameters are inferred entirely from multiple projections of the same 3D point in overlapping images, the robust detection and matching of feature points across the image set is important. In the evaluated approach, features are detected with accelerated KAZE (AKAZE) [50], and described using Modified-SURF descriptors [51]. To avoid an empirical selection of the inlier/outlier threshold in the geometric filtering procedure (e.g., fundamental/essential matrix [10]), a parameter-free A Contrario RANSAC (AC-RANSAC) [52], implemented in the openMVG library [53], is used. The approach automatically determines the threshold and model meaningfulness by statistically balancing the tightness of the data fit and the number of inliers. Due to the nonlinearity of the projection process, a nonlinear optimization, Bundle Adjustment (BA), is required, with the final solution obtained by formulating a nonlinear least squares (NLS) problem. The cost function to be minimized is normally an image-based error, consisting of the sum of squared re-projection errors, defined as the distances between the 2-dimensional (2D) feature observations of a 3D point and its corresponding projections onto the images. Intrinsic camera parameters, if known a priori, can be excluded from the optimization, lowering the complexity of the problem and thus improving the results. The problem can be efficiently solved using iterative methods such as Levenberg-Marquardt (LM) [54], which, however, only guarantees finding a local minimum of the objective function. This makes it extremely sensitive to the initial parameter estimate, and the strategies proposed for parameter initialization are broadly classified as either incremental or global. Incremental SfM expands the model reconstruction one image at a time, allowing for a gradual estimation of parameters for the newly added points and cameras. After each addition, an intermediate BA can be performed to propagate and minimize the error of intermediate reconstructions. Incremental approaches are widely used, given that the intermediate partial reconstructions enable a more robust detection of outliers, and thus decrease the chance of convergence to a wrong local minimum.
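As an illustration of this feature front-end, the following sketch detects and matches AKAZE features and applies epipolar geometric filtering; OpenCV's default binary AKAZE descriptors and its fixed-threshold RANSAC are assumptions standing in for the Modified-SURF descriptors and the parameter-free AC-RANSAC used in our pipeline.

```python
import cv2
import numpy as np

def match_pair(img1, img2, ratio=0.8):
    """AKAZE detection, ratio-test matching, and epipolar RANSAC filtering."""
    akaze = cv2.AKAZE_create()
    k1, d1 = akaze.detectAndCompute(img1, None)
    k2, d2 = akaze.detectAndCompute(img2, None)
    raw = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(d1, d2, k=2)
    good = [pair[0] for pair in raw
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    p1 = np.float32([k1[m.queryIdx].pt for m in good])
    p2 = np.float32([k2[m.trainIdx].pt for m in good])
    # Geometric filtering: keep only matches consistent with a fundamental matrix
    F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.999)
    inl = mask.ravel().astype(bool)
    return p1[inl], p2[inl], F
```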
However, when no prior information about the scene is available, the initialization step of decomposing the fundamental/essential matrix is critical for incremental SfM, as a poor selection of the seed pair of images can quickly force the optimization into a nonrecoverable state. Furthermore, as the method inherently gives disproportionate weight to images used at the beginning of the process, it can result in error accumulation. This may produce significant drift and fail to reconstruct the scene in the form of a single connected model. In our tests, the method of Moulon et al. [53,55] was used, with a contrario model estimation. Global SfM instead considers the entire problem at once, with a full BA performed only at the end. To compensate for the lack of partial reconstructions, which would help identify possible outliers, the parameter initialization is split into two sequential steps (i.e., rotation and translation estimation), the first being more robust to a small number of outliers. This mitigates the need for intermediate nonlinear optimizations, as cameras and scene points are estimated simultaneously in a single iteration. It also ensures an equal treatment of all the images and, consequently, an equal distribution of errors. These methods rely on averaging relative rotations and translations, thus requiring images to overlap with multiple other images, to ensure meaningful constraints and mutual information. As a consequence, the reconstruction from a sparsely connected set of images will result in distorted, or even multiple disconnected, components. In our tests, the method of Moulon et al. [53,56] is used. Navigation Fusion The result of the sparse reconstruction process is thus expressed as a set of 3D points X = {X_k ∈ R³ | k = 1, ..., L}, a set of projection matrices P = {P_i ∈ SE(3) | i = 1, ..., N}, where P_i defines the projection from world to camera frame, and a set of intrinsic camera parameters K = {K_i | i = 1, ..., N}. As the joint parameter estimation is an inherently ill-conditioned problem when estimated from a set of images acquired by a single camera, the solution is determined only up to an unknown scale [10]. The estimated parameters can be multiplied by an arbitrary factor, resulting in an equal projection of the structure onto the images. A metric solution thus requires known measurements [20] or metric vehicle displacements (navigation/inertial priors) [19,27,28]. Depending on the availability of synchronization between the camera and the navigation, priors C = {C_i | i = 1, ..., N} extracted from the ROV/AUV navigation can either be used in a multisensor fusion approach or to align the reconstructed camera path via a similarity transformation. Multiobjective BA When navigation priors are available for a significant proportion of the images, this information can be incorporated into the optimization through a multisensor fusion approach. When the measurement noises are not known, the problem of appropriately weighting the different objectives arises, as the sensors' mean squared errors (MSE) share neither the same units nor the same significance [57]. Most commonly, there is no unique solution that simultaneously optimizes both the re-projection and navigation objectives; instead, there exists a hypersurface of Pareto optimal solutions, where one of the objective functions can only be improved by degrading the other [58]. Such a solution space can be defined as a weighted compound function of the two objectives [57].
Assuming that both re-projection and navigation fit errors are independent and Gaussian, it is statistically optimal to weight the errors by their variance [59,60]: E = (1/(M σ_v²)) Σ_{m=1}^{M} ‖e_v^(m)‖² + (1/(N σ_n²)) Σ_{n=1}^{N} ‖e_n^(n)‖², which can be rewritten as E ∝ (1/M) Σ_{m=1}^{M} ‖e_v^(m)‖² + (λ²/N) Σ_{n=1}^{N} ‖e_n^(n)‖², where λ = σ_v/σ_n indicates the ratio between the two standard deviations, σ_v and σ_n, representing the measurement noise of each sensor, and M and N are the number of re-projection and navigation prior terms, respectively. In such cases, the weighting can be selected empirically or through automatic weight-determining methods. For bi-objective optimizations, Michot et al. [57] have shown that the L-Curve criterion is the preferred selection method. This criterion is based on plotting the trade-off between the costs of the objectives using different weights, represented in log-log space. This plot has a typical L-curve shape, with two prominent segments. Each segment (the flat and the vertical part) is dominated by one of the terms, and the "corner" separating the two essentially identifies the point of neutral objective dominance. The associated weight is considered to be optimal, and representative of the ratio between the covariances of the sensors. Lying between two nearly flat segments, it can easily be identified as the point of maximum curvature. Similarity Transformation Alternatively, the navigation data can be used in an a posteriori step of rescaling and georeferencing. A similarity transformation, which minimizes the sum of differences between the reconstructed camera poses and their navigation priors, is applied to the reconstructed model. Depending on the survey pattern, this method can be used even in cases where the camera is not synchronized with the navigation data. If the reconstructed path can be unambiguously matched to the path given by the navigation data, then the associations between the cameras and navigation poses can be determined by finding the closest points between the paths. Dense, Surface and Texture Reconstruction To better describe the scene geometry, a denser point cloud representation is computed using the method of Shen [61]. For each image reconstructed in SfM, a depth map is computed, and subsequently refined to enforce consistency over neighboring views. These depth maps are merged into a single (dense) set of 3D points, where points with high photometric inconsistencies are removed to enforce the visibility constraints. The final steps towards obtaining a photo-realistic 3D model require estimating both a surface and a high-quality texture to be pasted upon that surface. As underwater reconstructions are inevitably affected by noise and outliers [5], a method [62] is used to compute the most probable surface by modeling the surface as an interface between free and full space, as opposed to directly using the input points. The reconstruction is completed by estimating the texture with a two-step method [63]. The method prioritizes near, well-focused, and orthogonal high-resolution views, as well as similar adjacent patches. Texture inconsistencies are mitigated by an additional photo-consistency check. Finally, any significant color discontinuities between neighboring regions are addressed by a per-vertex-based globally optimal luminance correction, as well as with Poisson image editing [64]. Model Evaluation Framework Estimating the scale accuracy of 3D models reconstructed from underwater optical imagery and robot navigation data is of paramount importance, since the input data is often noisy and erroneous.
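Before detailing the framework, the a posteriori similarity alignment described above can be sketched with the classical closed-form (Umeyama) solution; the interface is illustrative, and a robust variant would be needed for navigation data with outliers.

```python
import numpy as np

def umeyama(src, dst):
    """Closed-form similarity: dst_i ~ s * R @ src_i + t (src, dst: (N, 3))."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    X, Y = src - mu_s, dst - mu_d
    U, D, Vt = np.linalg.svd(Y.T @ X / len(src))   # SVD of the cross-covariance
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                               # enforce a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) * len(src) / (X ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# usage: s, R, t = umeyama(camera_centres, nav_priors)
#        georeferenced = s * model_points @ R.T + t
```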
Such noisy data commonly leads to inaccurate scale estimates and noticeable variations of scale within the model itself, which preclude the use of such models for their intended research applications. Real underwater scenarios usually lack elements of known size that could readily be used as size references to evaluate the accuracy of 3D models. However, laser scalers are frequently used during underwater image collection to project laser beams onto the scene, and can provide such a size reference. The framework we describe builds upon two recently introduced methods [26] for scale estimation of SfM-based 3D models using laser scalers. We extend the scale estimation process by embedding it in a Monte Carlo (MC) simulation, in which we propagate the uncertainties associated with the image features and laser spot detections through the estimation process. As the evaluated models are built with metric information (e.g., the vehicle navigation data, dimensions of auxiliary objects), their scale is expected to be consistent with the scale provided by the laser scaler (s_L). Therefore, any deviation from the expected scale value (s = 1.0) can be regarded as an inaccuracy of the scale of the model (ε_s). The error represents the percentage by which any spatial measurement using the model will be affected: ε_s = (m̂ − m)/m, where m and m̂ represent a known metric quantity and its model-based estimate, respectively. For instance, ε_s = 0.05 implies that distances measured on the model are 5% longer than their true values. Scale Estimation The two methods, namely the fully-unconstrained method (FUM) and the partially-constrained method (PCM), are suited to different laser scaler configurations. FUM permits an arbitrary position and orientation for each of the lasers in the laser scaler, at the expense of requiring full a priori knowledge of their geometry relative to the camera (Figure 3a). On the other hand, the laser-camera constraints are significantly reduced when using the PCM method: the laser origins have to be equidistant from the camera center, and laser pairs have to be parallel (Figure 3b). However, in contrast to prior image-scaling methods [24,25], the lasers do not have to be aligned with the optical axis of the camera. The initial camera-extrinsic values (and optionally also camera-intrinsics) are obtained by solving a Perspective-n-Point (PnP) problem [65] using 3D-2D feature pairs. Each pair connects an individual image feature and a feature associated with the sparse set of points representing the model. As these observations and matches are expected to be noisy and can contain outliers, the process is performed in conjunction with a robust estimation method, A-Contrario RANSAC (AC-RANSAC) [52]. The estimate is further refined through a nonlinear optimization (BA), minimizing the re-projection error of known (and fixed) 3D points and their 2D observations on the image. The camera pose and the locations of the laser spots are lastly used either to estimate the position of the laser origin, so as to reproduce the result recorded on the image (FUM), or else to estimate the perpendicular distance between the two parallel laser beams (PCM). As these predictions are based on the 3D model, they are directly affected by its scale, and can therefore be used to determine it through a comparison with a priori known values. As shown through an extensive evaluation in our previous work, both FUM and PCM can be used to estimate model scale regardless of the camera view angle, camera-scene distance, or terrain roughness [26].
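A minimal sketch of this localization step is given below, using OpenCV's RANSAC-based PnP solver and Levenberg-Marquardt refinement as stand-ins for the AC-RANSAC estimation and the BA refinement described above; the parameter values are illustrative.

```python
import cv2
import numpy as np

def localize(pts3d, pts2d, K, dist=None):
    """Robust PnP on 3D-2D pairs, followed by Levenberg-Marquardt refinement.
    pts3d: (N, 3) sparse model points matched to pts2d: (N, 2) image features."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, dist,
        reprojectionError=2.0, confidence=0.999, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("camera could not be localized")
    idx = inliers.ravel()
    # nonlinear refinement minimizing the re-projection error on the inlier set
    rvec, tvec = cv2.solvePnPRefineLM(pts3d[idx], pts2d[idx], K, dist, rvec, tvec)
    return rvec, tvec, idx
```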
After the application of a maximum likelihood estimator (BA) and a robust estimation method (AC-RANSAC), the final scale estimation is minimally affected by noise in the detection of feature positions and by the presence of outlier matches. In the fully-unconstrained method (Figure 5a), knowledge of the complete laser geometry (origins O_L and directions v_L) is used to determine the position of laser emission Ô_L that would produce the result observed on the image (Equation (4)), where P is defined as the projection from world to camera frame and c_z represents the optical axis of the camera. The laser origins Ô_L are predicted by projecting the 3D points X_L, which represent the locations of the laser beam intersections with the model, along the known beam direction v_L. As the points X_L must be visible to the camera, i.e., be in the line-of-sight of the camera, their positions can be deduced by a ray-casting procedure using a ray starting at the camera center and passing through the laser spot x_L detected in the image. The final scale estimate can then be determined by comparing the estimated origin Ô_L with its a priori known value O_L. Alternatively, the partially-constrained method (Figure 5b) can be used when laser pairs are parallel but their relation to the camera is unknown. As opposed to other image-scaling methods, laser alignment with the optical axis of the camera is not required, allowing its application to numerous scenarios in which strict rigidity between camera and lasers is undetermined or not maintained (e.g., legacy data). To overcome the lack of information on the direction of the laser beams with respect to the camera, the equidistance between the laser origins and the camera center is exploited. The laser beam direction is thus approximated by the direction of the vector v_CM connecting the camera center and the midpoint between the two laser intersections with the model. As we have shown in our previous work [26], this approximation can lead to small scaling errors only in the most extreme cases, where the distance discrepancy between two points on the model is disproportionately large compared to the camera-scene distance. As underwater surveys are always conducted at sufficiently large safety distances, this scenario is absent in underwater reconstructions. Uncertainty Estimation Uncertainty characterization of each scale estimate is crucial for quantitative studies (precise measurement of distances, volumes, orientations, etc.), as required in marine science disciplines where accurate metrology is essential (such as geology, biology, engineering, and archaeology). The effect of uncertainties in the input values on the final estimate is evaluated using an MC simulation. The propagation of errors through the process is modeled by repeated computations of the same quantities, while statistically sampling the input values based on their probability distributions. The final uncertainty estimate of the scale is derived from the independently computed values. Figure 6 depicts the complete MC simulation designed to compute the probability distribution of an estimated scale error, computed from multiple laser observations in an image. We assume that the sparse 3D model points, associated with the 2D features in the localization process, are constant, and thus noise free. On the other hand, uncertainty in the imaging process and feature detection is characterized using the re-projection error obtained by the localization process.
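The sampling logic of this MC simulation can be sketched as follows; here, estimate_scale_error is a hypothetical placeholder for the full per-image pipeline (localization, ray-casting, origin prediction, and scale comparison), and the noise models are simplified to isotropic feature noise and per-laser Gaussian detections.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_scale_error(features_2d, sigma_px, spot_means, spot_covs,
                   estimate_scale_error, n_iter=5000):
    """Propagate feature and laser-spot uncertainties to the scale error."""
    samples = np.empty(n_iter)
    for i in range(n_iter):
        feats = features_2d + rng.normal(0.0, sigma_px, features_2d.shape)
        spots = [rng.multivariate_normal(mu, cov)      # per-laser 2D detection noise
                 for mu, cov in zip(spot_means, spot_covs)]
        samples[i] = estimate_scale_error(feats, spots)
    return samples.mean(), samples.std()               # estimate and its uncertainty
```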
Uncertainty in the laser calibration and laser spot detection is likewise accounted for, with each laser considered independently. Laser Spot Detection The accurate quantification of scale errors affecting 3D models derived from imagery requires numerous reliable measurements distributed throughout the model. As scale estimates are obtained by exploiting the knowledge of laser spot positions on the images, the quantity and quality of such detections directly determine the number of useful scale estimates. Furthermore, to properly estimate the confidence levels of the estimated scales, the uncertainty of the laser spot detections needs to be known. The laser beam center is commonly considered to be the point with the highest intensity in the laser spot, as the luminosity of laser spots normally overpowers the texture of the scene. However, due to the properties of the water medium, the laser light can be significantly attenuated on its path to the surface before being reflected back to the camera. In such cases, the final intensity of the beam reaching the camera might be overly influenced by the texture at the point of impact (Figure 7). As such, accurate manual annotation of laser spots tends to be extremely challenging and labor-intensive, and even impossible in certain cases. Considerable attention has been given to the development of the image processing components of laser scanners, namely laser line detection [66,67], while the automatic detection of laser dots from underwater laser scalers has only been addressed in a few studies. Rzhanov et al. [68] developed a toolbox (the Underwater Video Spot Detector, UVSD) with a semiautomatic algorithm based on a Support Vector Machine (SVM) classifier. Training of this classifier requires user-provided detections. Although the algorithm can provide a segmented area of the laser dot, this information is not used for uncertainty evaluation. More recently, the authors of [69] presented a web-based, adaptive-learning laser point detection method for benthic images. The process comprises a training step using k-means clustering on color features, followed by a detection step based on a k-nearest-neighbor (kNN) classifier. Once trained on laser point patterns, the algorithm can deal with a wide range of input data, such as lasers of different wavelengths or acquisitions under different visibility conditions. However, neither the uncertainty in laser point detection nor the laser line calibration is addressed by this method. To overcome the lack of tools capable of estimating the uncertainty in laser spot detection while still producing robust and accurate detections, we propose a new automatic laser detection method. To mitigate the effect of laser attenuation on the detection accuracy, the scene texture is considered while estimating the laser beam center. A Monte Carlo simulation is used to estimate the uncertainty of the detections, considering the uncertainty of image intensities. Detection To determine the laser spot positions on any image, the first step is a restriction of the search area to a patch where visible laser spots are expected (Figure 8a). Although not compulsory, this restriction minimizes false detections and reduces computational complexity and cost. The predicted area may be determined from the general pose of the lasers with respect to the camera, and from the range of distances to the scene.
An auxiliary image is used to obtain a pixel-wise aligned description of the texture in the patch. This additional image is assumed to be acquired at a similar distance and with laser spots either absent or in different positions. This ensures visually similar texture information at the positions of the laser spots. This requirement is easily achievable for video acquisitions, as minor changes in camera pose sufficiently change the positions of the lasers. Alternatively, if still images are acquired, then in addition to each image with visible laser spots, an additional image taken from a slightly different pose or in the absence of laser projections has to be recorded. The appropriate auxiliary patch is determined using normalized cross-correlation in the Fourier domain [70] between the original patch and the auxiliary image. The patch is further refined using a homography transformation estimated by enhanced correlation coefficient maximization [71] (Figure 8b). Potential discrepancies caused by changes in the environment between the acquisitions of the two images are further reduced using histogram matching. Once estimated, the texture is removed from the original patch to reduce its impact on the laser beam spots. A low-pass filter further reduces noise and the effect of other artifacts (e.g., image compression), before detection using color thresholding (e.g., red color) in the HSV (Hue, Saturation, and Value) color space (Figure 8d). Pixels with low saturation values are discarded, as their hue cannot be reliably computed. The remaining pixels are further filtered using mathematical morphology (an opening operation). The final laser spots are selected by connected-component analysis (Figure 8e). Once the effects of the scene texture have been eliminated, the highest-intensity point may be assigned to the laser beam center. In our procedure, the beam luminosity is characterized by the V channel of the HSV image representation. Figure 8f,g depicts the estimate of the laser beam luminosity with and without texture removal. Our proposed texture removal step clearly recovers the characteristic shape of the beam, with radially decreasing intensity from the center. Fitting a 2D Gaussian distribution to each laser spot allows us to estimate the center of the beam, assuming a 95% probability that the center falls within the top 20% of the luminance values (Figure 8h). Uncertainty Given that the estimation of the laser center is based on color information, it is important to consider the effect of image noise. Depending on the particularities of the image set, image noise is the result of the combined effects of sensor noise, image compression, and motion blur, among others. In our approach, the image noise is characterized by comparing the same area in two images taken within a short time interval (e.g., the time between two consecutive frames), where the sensed difference can safely be attributed to image noise rather than an actual change in the environment. For a randomly selected image from dataset FPA, the relation between the assumed image noise (pixel-wise difference of intensities) and pixel intensities per color channel is depicted in Figure 9a, with the histogram of differences shown in Figure 9b. The results clearly illustrate a lack of correlation between image noise and pixel intensity levels or color channels, as well as the fact that the noise can be well described by a Gaussian distribution.
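A minimal sketch of this noise characterization, assuming two consecutive video frames over an effectively static scene, is:

```python
import numpy as np

def frame_noise_sigma(frame_a, frame_b):
    """Per-channel noise standard deviation from two near-simultaneous uint8 BGR
    frames; scene change over such a short interval is assumed negligible."""
    diff = frame_a.astype(np.int16) - frame_b.astype(np.int16)
    return diff.reshape(-1, 3).std(axis=0)
```

The per-channel standard deviations obtained this way are the values used when perturbing pixel intensities in the MC-based detection.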
Furthermore, an analysis of 52 images from both datasets (FPA and AUTT28), acquired at a wide range of camera-scene distances and locations, in which the image noise was approximated by a Gaussian distribution, indicates that the distribution of noise remains bounded regardless of the dataset or camera-scene distance (Figure 9c). Although motion blur is increasingly noticeable in images acquired at closer ranges, the analyzed range of distances is representative of those used for the evaluation presented in the following sections. To obtain a final estimate of the confidence levels of detection, the uncertainty of image intensities is propagated through the laser detection process using an MC simulation. At each iteration, the noise is added independently to each pixel before the laser spot detection described above. The iterations yield a set of independent detections, each characterized by a Gaussian distribution (Figure 8h). The final laser spot detection is subsequently obtained by determining an equivalent Gaussian using the Unscented Transform [72], to make this process less computationally expensive. If the laser is not detected in more than 80% of the iterations, the detection is considered unstable and discarded. A set of laser spot detections obtained by an MC simulation is shown in Figure 10, together with the final joint estimation. Red and green ellipses represent 66% and 95% confidence levels for the independent detections, while blue and cyan indicate the final (combined) uncertainty. Dataset During the SUBSAINTES 2017 cruise (doi: 10.17600/17001000) [43], extensive seafloor imagery was acquired with the ROV VICTOR 6000 (IFREMER) [73]. The cruise targeted tectonic and volcanic features off Les Saintes Islands (French Antilles), at the same location as that of the model published in an earlier study [9], which was derived from imagery of the ODEMAR cruise (doi: 10.17600/13030070). One of the main goals of this cruise was to study geological features associated with a recent earthquake (measuring the associated displacement along a fault rupture), while expanding a preliminary study that presented a first 3D model where this kind of measurement was performed [9]. To achieve this, the imagery was acquired at more than 30 different sites along ∼20 km, at the base of a submarine fault scarp. This is, therefore, one of the largest sets of image-derived underwater 3D models acquired with deep sea vehicles to date. The ROV recorded HD video with a monocular camera (Sony FCB-H11 camera with corrective optics and dome port) at 30 Hz, with a resolution of 1920 × 1080 px (Figure 11). Intrinsic camera parameters were determined using a standard calibration procedure [74], assuming a pinhole model with a third-degree radial distortion model. The calibration data was collected underwater using a checkerboard of 12 × 8 squares, with identical optics and camera parameters as those later used throughout the entire acquisition process. The final root mean square (RMS) re-projection error of the calibration was (0.34 px, 0.30 px). Although small changes due to vibrations, temperature variation, etc. could occur, these changes are considered too small to significantly affect the final result. Onboard navigation systems included a Doppler velocity log (Teledyne Marine Workhorse Navigator), a fibre-optic gyrocompass (iXblue Octans), a depth sensor (Paroscientific Digiquartz), and a long-range USBL acoustic positioning system (iXblue Posidonia) with a nominal accuracy of about 1% of the depth.
As the camera was positioned on a pan-and-tilt module lacking synchronization with the navigation data, only the ROV position can be reliably exploited. To date, 3D models have been built at more than 30 geological outcrops throughout the SUBSAINTES study area. Models vary in length between ∼10 m and ∼300 m horizontally, and extend vertically up to 30 m. Here, we select two of the 30 models (FPA and AUTT28), representative of different survey patterns, spatial extents, and complexity. Concurrently, evaluation data were collected with the same optical camera, centered within a laser scaler consisting of four laser beams. For both selected datasets, numerous laser observations were collected, ensuring data spanning the whole area. This enabled us to properly quantify potential scale drift within the models. FPA The first model (named FPA) extends 33 m laterally and 10 m vertically, and corresponds to a subvertical fault outcrop at a water depth of 1075 m. The associated imagery was acquired in a 10 min 51 s video recording during a single ROV dive (VICTOR dive 654). To fully survey the outcrop, the ROV conducted multiple passes over the same area. In total, 218 images were selected and successfully processed to obtain the final model shown in Figure 12. The final RMS re-projection errors of BA using the different strategies are reported in Table 1. As expected, the optimizations using solely visual information and an incremental approach achieve lower re-projection errors; this, however, is not sufficient proof of an accurate reconstruction. AUTT28 The second model (named AUTT28), shown in Figure 13, is larger and required a more complex surveying scenario, as is often encountered in real oceanographic cruises. Initially, the planned area of interest was recorded during VICTOR dive 654. Following a preliminary onboard analysis of the data, a vertical extension of the model was required, which was subsequently surveyed during VICTOR dive 658. This second survey also partially overlapped with the prior dive, with the overlapping images acquired at a closer range and thus providing higher textural detail. The survey also included a long ROV pass with the camera nearly parallel to the vertical fault outcrop, an extremely undesirable imaging setup. This second 3D model is the largest constructed in this area, covering a subvertical fault scarp spanning over 300 m laterally and 10 m vertically, with an additional section of approximately 30 m in height from a vertical ROV travel. This model is thus well suited to evaluate scaling errors associated with drift, as it includes several complexities (survey strategy and geometry, multiple dives, extensive length and size of the outcrop). After keyframe selection, 821 images were used out of a combined 1 h 28 min 19 s of video imagery to obtain reconstructions with the RMS re-projection errors reported in Table 1. Multiobjective BA Weight Selection Models built with a priori navigation fusion through the multiobjective BA strategy require the selection of a weight representing the ratio between the re-projection and navigation fit errors. As the uncertainties of the two quantities are in different units and, more importantly, not precisely known, this selection must be done either empirically or automatically. Due to the tedious and potentially ambiguous trial-and-error approach of empirical selection, the weight was determined using L-Curve analysis.
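A sketch of the corner detection underlying this analysis is shown below; run_ba is a hypothetical placeholder that runs the multiobjective BA for a given weight and returns the two final costs, and the finite-difference curvature is one of several possible corner estimators.

```python
import numpy as np

def l_curve_weight(run_ba, weights):
    """Pick the weight at the maximum-curvature 'corner' of the L-curve.
    run_ba(w) returns (reprojection_cost, navigation_cost) after optimization."""
    costs = np.array([run_ba(w) for w in weights])         # shape (n, 2)
    x, y = np.log10(costs[:, 0]), np.log10(costs[:, 1])    # log-log space
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    return weights[int(np.nanargmax(kappa))]
```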
The curve, shown in Figure 14a, uses the FPA dataset and 100 BA repetitions with weights λ ranging from 0.18 to 18. As predicted, the shape of the curve resembles an "L", with two dominant parts. The point of maximum curvature is determined to identify the weight with which neither objective has dominance (Figure 14b). As the noise levels of the camera and navigation sensors do not significantly change between the acquisition of different datasets, the same optimal weight λ = 2.325 was used in all our multiobjective optimizations. Given the heuristic nature of the optimal weight determination, it is important to evaluate the effects of the selection uncertainty on the final results. Figure 15 depicts the maximum difference in scale between the reconstructions computed using different weights and the reconstruction obtained with the optimal weight. The maximum expected scale differences were determined by comparing the Euclidean distances between the cameras of the various reconstructions. The scale of the model is not expected to change if the ratios between the camera positions do not change. The results show that the scale difference increases by approximately 1% when λ is incremented or decremented by one. Given that the optimal λ can be determined with an uncertainty of less than 0.1, as illustrated in Figure 14b, it can be assumed that the uncertainty in the determination of the optimal weight has no significant effect on the final result. Multisurvey Data As is often the case in real cruise scenarios, the data for the AUTT28 model was acquired in multiple dives (Figure 16). When combining the data, it is important to consider the consequences of the merger. Optical imagery can simply be combined, given the short period of time between the two dives, in which no significant changes are expected to occur in the scene. In contrast, the merging of navigation data may be challenging; ROV navigation is computed using smoothed USBL and pressure sensor data, with expected errors in acoustic positioning being ~1% of depth. As the data was collected at roughly 1000 m depth, the expected nominal errors are ∼10 m, or more in areas of poor acoustic conditions (e.g., close to vertical scarps casting acoustic shadows or reverberating acoustic pings). These errors, however, do not represent the relative uncertainty between nearby poses, but rather a general bias of the collected data for a given dive. Although constant within each dive, the errors can differ between dives over the same area, and are problematic when data from multiple dives are fused. Models built with data from a single dive will only be affected by a small error in georeferencing, while a multisurvey optimization may have to deal with contradicting navigation priors; images taken from identical positions would have different acoustic positions, with offsets on the order of several meters or more. This is overcome by introducing an additional parameter to be estimated, in the form of a 3D vector for each additional dive, representing the difference between the USBL-induced offsets. Each vector is estimated simultaneously with the rest of the parameters in the SfM. For the case of AUTT28, the offset between dives 654 and 658 was estimated to be (−2.53 m, 1.64 m, and −0.02 m) in the x (E-W), y (N-S), and z (depth) directions, respectively. The disproportionately smaller z offset is due to the fact that the pressure sensor yields inter-dive discrepancies that are orders of magnitude smaller than those of the USBL positions.
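A minimal sketch of how such per-dive offset vectors enter the optimization is given below, written as a residual function for a generic least-squares solver (e.g., scipy.optimize.least_squares); all names are illustrative, and in the actual multiobjective BA these residuals are combined with the re-projection terms.

```python
import numpy as np

def navigation_residuals(cam_centres, nav_priors, dive_ids, offsets):
    """Residuals between camera centres and per-dive-shifted navigation priors.
    cam_centres, nav_priors: (N, 3); dive_ids: (N,) ints with 0 = reference dive;
    offsets: (n_dives - 1, 3) unknown USBL offset vectors being optimized."""
    shift = np.zeros_like(nav_priors)
    nonref = dive_ids > 0
    shift[nonref] = offsets[dive_ids[nonref] - 1]   # shift priors of later dives
    return (cam_centres - (nav_priors + shift)).ravel()
```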
Laser Calibration Normally, the calibration process consists of the initial acquisition of images containing clearly visible laser beam intersections with a surface at a range of known or easily determined distances (e.g., using a checkerboard pattern). The 3D positions of the intersections, expressed relative to the camera, can subsequently be computed by exploiting the known 2D image positions and the aforementioned camera-scene distances. Given a set of such 3D positions spread over a sufficient range of distances, the direction of the laser can be computed through a line-fitting procedure. Finally, the origin is determined as the point where the laser line intersects the image plane. Given the significant refraction at the air-acrylic-water interfaces of the laser housing, the images used in the calibration process must be collected under water. In our case, the evaluation data was collected during multiple dives separated by several days, with the camera and lasers being mounted and dismounted several times. While the laser-scaler mounting brackets ensured that the laser origins remained constant, the laser directions with respect to the camera changed slightly with each installation. Due to operational constraints on the vessel, it was not possible to collect dedicated calibration data before each dive. However, given that the origins of the lasers are known a priori and remained fixed throughout the cruise, the only unknown in our setup is the inter-dive variation of the laser directions (relative to the camera and with respect to each other). The fact that laser directions alone do not encapsulate scale information enables us to overcome the lack of dedicated calibration images and instead determine the set of points lying on each laser beam using images collected over the same area for which a 3D model has been constructed. As our interest is only in the laser directions, the points used in the calibration can be affected by an arbitrary unknown scale factor, as long as this factor is constant for all of the points. Therefore, it is important to avoid models with scale drift, as well as the use of data from multiple models with different scales. For each of the images used in the calibration, the camera was localized with respect to the model by solving a PnP problem [65], as in the FUM and PCM, and additionally refined through BA. Each of the individual laser points was then determined by a ray-casting process and expressed in the camera coordinate system, before the direction of each of the lasers was determined by line-fitting. To maximize the conditioning of the line-fitting, the selection of a model with the widest distance range of such intersection points and the smallest scale drift is important. This is the case for the AUTT28 model built using Global SfM and multiobjective BA, selected here. The global nature of the SfM and the internal fusion of navigation data are expected to most effectively reduce potential scale drift. As noisy laser detections are used to obtain the 3D points utilized in the calibration, the laser spot uncertainties were propagated to obtain the associated uncertainty of the estimated laser direction. An MC simulation with 1000 repetitions was used. Together with the a priori known origin of the laser, this calibration provides us with all the information needed to perform scale estimation using the fully-unconstrained method. The evaluation data were collected on dives 653, 654, and 658.
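The line-fitting step at the core of this calibration can be sketched as a total least squares fit, assuming the ray-cast intersection points are already expressed in camera coordinates; the interface is illustrative.

```python
import numpy as np

def fit_laser_line(points_3d):
    """Total least squares line fit: the laser direction is the dominant
    singular vector of the centred (N, 3) intersection points."""
    centroid = points_3d.mean(axis=0)
    _, _, vt = np.linalg.svd(points_3d - centroid)
    direction = vt[0] if vt[0, 2] > 0 else -vt[0]  # orient along +z, away from camera
    return centroid, direction
```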
As no camera and laser dismounting/mounting occurred between dives 653 and 654, there are two distinct laser setups: one for dives 653 and 654, and one for dive 658. Figure 17a depicts all laser intersections with the scene (for both the AUTT28 and FPA models), as well as the calibration results, projected onto an image plane. Intersections detected in 3D model AUTT28 are depicted in black, and those from 3D model FPA are shown in orange. Similarly, the squares and circles represent dives 653/654 and dive 658, respectively. The projections of the final laser beam estimations are presented as solid and dotted lines. The figure shows a good fit of the estimated laser beams with the projections of the intersections, both in the AUTT28 and FPA models. The distributions of calibration errors, measured as perpendicular distances between the calibrated laser beams and the projected laser intersections with the scene, are depicted in Figure 17b, and the RMS errors are listed in Table 2. The adequate fit to the vast majority of AUTT28 points (RMS error < 0.6 px) shows that the model used in the calibration had no significant scale drift. Furthermore, the fitting of the FPA-related points (RMS error < 0.8 px), which were not used in the calibration and are affected by a different scale factor, confirms that the calibration of laser directions is independent of the 3D model used, as well as of different scalings. The broader spread of the black points relative to the orange ones also confirms that the choice of the AUTT28 over the FPA model was adequate for this analysis. Lastly, it is worth reiterating that the data from all the models cannot be combined for calibration, as they are affected by different scale factors. Table 2. RMS calibration errors (in pixels) measured as perpendicular distances between the projected laser beams and laser intersections with the scene for the two datasets and dives. Results As the accuracy of the measurements performed on 3D models for the purposes of quantitative studies (precise measurement of distances and volumes, etc.) depends on the strategy used for image-based 3D reconstruction, in addition to the data quality itself, four of the most widely used approaches were evaluated:
(A) Incremental SfM with a posteriori navigation fusion;
(B) Global SfM with a posteriori navigation fusion;
(C) Incremental SfM with multiobjective BA navigation fusion;
(D) Global SfM with multiobjective BA navigation fusion.
The models for each of the two datasets (FPA and AUTT28) were built using each of the four strategies, and subsequently evaluated on multiple segments spread across the observed area. Using the model evaluation framework and the laser spot detection method presented above, the scale accuracy and its associated uncertainties were automatically estimated using more than 550 images. To minimize the effects of possible false laser spot detections, only images with at least two confidently detected laser points were used. Furthermore, any images exhibiting excessive variation of the estimated scale between the individual lasers were discarded, as the scale can be assumed to be locally constant. Scale Accuracy Estimation During accuracy evaluation, the scale error ε_s is estimated for each image independently. The final per-image scale error and its uncertainty are estimated through an MC simulation, with the input variables (features, laser spot locations, and laser calibration) sampled according to their probability distributions.
The repeated computation with noisy data thus results in an equal number of final scale error estimates per laser. Figure 18 shows one example of such an estimation, together with selected intermediate results of the evaluation process. As each MC iteration encapsulates the complete evaluation process (image localization, ray-casting, origin estimation, and scale error evaluation), the intermediate distributions presented in Figure 18 are only shown for illustration, and are not used as distribution assumptions in the process itself. To satisfactorily represent the complexity of the process, 5000 iterations were used for each estimation. Figure 19 shows the evolution of the estimated scale error and its associated uncertainty with an increasing number of samples. After 500 iterations, the errors exhibit only minor fluctuations, and after 1500 iterations there is no noticeable difference. Hence, our selection of 5000 iterations is more than adequate to encapsulate the distribution of noise. To demonstrate the advantages of our fully-unconstrained approach compared to previously available methods or our partially-constrained method, the scale estimates obtained for each laser/laser pair are compared. Given the nonalignment of the lasers with the optical axis of the camera, the majority of previous image-scaling methods (e.g., [24,25]) are not applicable. The only available option is thus a simplified approach where the Euclidean distance between a pair of 3D points (laser intersection points with the scene) is assumed to be the actual distance between the laser pair. Results using different lasers (Figure 20) show that the FUM method produces the most consistent results. This is expected, as the estimation process considers both the individual laser directions and the geometry of the scene. The effect of scene geometry is clear when Figure 20a,b are compared. The slightly slanted angle, together with the uneven geometry of the scene, causes a large variation in the scale error estimates of the individual laser pairs. Similarly, the comparison of Figure 20b,c shows the effect of an inaccurate assumption of laser parallelism. This error depends on the camera-scene distance, as shown in Figure 21. It is likely that the overestimation of laser pair 3-4 and the underestimation of the other laser pairs can be explained by the use of an oversimplified laser geometry. To validate this assumption, the results of the partially-constrained method were corrected by the expected errors (at d = 2 m) induced by disregarding the nonparallelism of the laser beams (Figure 20d). While the result is nearly identical to that of the FUM method (Figure 20c), we note that the scale error in Figure 20c is computed for each laser individually, while the partially-constrained method considers laser pairs instead, and hence there are minor discrepancies. FPA The accuracy of the FPA model was analyzed using 148 images (432 lasers). To represent the results concisely, the measurements are grouped into 7 segments based on their model position (Figure 22 and Table 3). To ensure that the scale of the model did not vary within each segment, the maximum distance of any laser detection to the assigned segment center was set to 1 m. FPA covers a relatively small area, and is imaged with multiple passes, thus providing redundancy that promotes model accuracy. Hence, it is expected to have only minor variations in scale error between areas. Figure 23 depicts the distribution of estimated scale errors for all four methods of 3D model construction.
The comparison of results (Table 4) shows that the accuracy does not significantly differ between them. The scale error varies between −1% and −5%, with estimated uncertainties of approximately ±3%. The highest errors occur at the borders of the model. As expected, the uncertainty is closely related to the camera-scene distance, as small uncertainties in the laser direction translate to larger discrepancies at larger distances. AUTT28 For model AUTT28, the evaluation data (images containing projected laser spots) were gathered during VICTOR dives 654 and 658, after the video acquisition of the data used for 3D model creation. A total of 432 images with 1378 laser measurements were selected and grouped into 6 distinct sections throughout the 3D model, as shown in Table 5 and Figure 24. Dive 654 covered a longer vertical path (blue dots), while dive 658 (red dots) surveyed an additional horizontal segment together with parts of the area already viewed during dive 654. The higher density of red points indicates that the ROV observed the scene at a closer range during dive 658, requiring a higher number of images to obtain the necessary overlap compared to dive 654. The comparison of results (Table 6) shows that the models built using a posteriori navigation fusion (Figure 25a,b) are significantly impacted by scale drift (∼15%), and that this impact is nearly identical regardless of the use of global or incremental SfM approaches. The gradual scale drift observed is caused by the inherent scale ambiguity of two-view image pair geometry when BA depends solely on visual information. While this might not have been as obvious in the previous case, the long single pass of the camera, as performed in dive 654, introduces numerous consecutive two-view image pairs in this particular model, magnifying the scale drift. As shown in Figure 25c,d, additional constraints in the BA (e.g., navigation data) reduce the ambiguity and, ultimately, nearly eliminate the scale drift. Overall, the scale error of the model built with global SfM using multiobjective BA is less than 1%, with nearly zero scale drift, while the model built with the incremental SfM approach showed a 2% scale drift along its 300 m length. It should be noted that the observed differences in scale estimates are within the uncertainty levels of the estimations, and are therefore inconclusive. The effects of different navigation fusion strategies are further demonstrated through the comparison of two reconstructions obtained using Global SfM with multiobjective BA and with similarity transformation (Figure 26). The reconstructions diverge on the outer parts of the model, consistent with a "doming" effect: a broad-scale systematic deformation in which a flat surface appears as a rounded vault. This effect is a result of applying a rigorous re-projection error minimization to a loosely interconnected longer sequence of images taken from a nearly parallel direction, combined with slight inaccuracies in the modeling of the radial distortion of the camera [14]. Multiobjective BA is able to reduce the effect by introducing additional non-vision-related constraints, while the similarity transformation, which preserves the angles between any three points and hence the overall shape, cannot correct such model deformations. Multisurvey Data Fusion As explained in Section 5.2.2, multimission data fusion can cause contradictory navigation priors during optimization.
Multisurvey Data Fusion

As explained in Section 5.2.2, multimission data fusion can cause contradictory navigation priors during optimization. We address this by expanding the optimization problem with an additional 3D vector representing the possible USBL offset between the recorded navigation data of the two dives. To examine the effects of this offset compensation on model construction, an additional model was constructed using the raw navigation data (i.e., without offset compensation). Figure 27 depicts the errors in the camera pose estimates with respect to their navigation priors and shows a concentration of errors in the areas imaged during both dives (Figure 16), where the navigation priors of the two dives are incoherent. The errors decrease dramatically with the introduction of the offset, yielding a better-fitting solution. Alternatively, the incoherence can cause model distortions that compensate for the contradicting priors, as shown by abrupt changes of scale (area D in Figure 28).

Scale Error Estimation Methods

To recover high-resolution and precise information (lengths, areas, and volumes) from 3D models, it is important to use the most accurate method. As the nonalignment of the lasers with the optical axis of the camera prevents the use of previous image-scaling methods (e.g., Pilgrim et al. [24]; Davis and Tusting [25]), two other methods could be used instead. Minor misalignments of laser scalers may be disregarded for simplicity or for lack of sufficiently distributed calibration data. In such cases, both our partially-constrained approach and the simplified direct 3D method, which assumes that the Euclidean distance between the laser intersection points equals the known separation between the beams, can be used for the evaluation. For this comparison, the model with the least scale drift (global SfM with multiobjective BA navigation fusion) was selected to emphasize the effects of the different methods on the results. Furthermore, as both the simplistic direct 3D method and the partially-constrained method assume laser-pair parallelism, the analysis of these two methods was performed on data consisting only of the laser pairs that were closest to parallel (Figures 29a and 30a), as well as on the complete dataset (Figures 29b and 30b), to show the effect that nonparallelism of the laser beams may have on the different methods.
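For reference, the simplistic direct 3D method amounts to a one-line computation; this sketch (with hypothetical inputs) returns the relative scale error as the ratio between the distance of the two laser intersection points measured on the model and the nominal laser-pair separation, which is only unbiased for parallel beams hitting a locally flat surface perpendicularly:

```python
import numpy as np

def direct_3d_scale_error(p1, p2, laser_separation):
    """Simplistic direct 3D scale check (assumes parallel laser beams).

    p1, p2           -> 3D intersection points of the two lasers with the
                        reconstructed model (hypothetical inputs)
    laser_separation -> nominal distance between the parallel beams

    Deviations of the camera-surface angle from perpendicularity, or a
    rough surface, bias the measured distance and hence the estimate.
    """
    measured = np.linalg.norm(np.asarray(p2) - np.asarray(p1))
    return measured / laser_separation - 1.0
```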
As expected, in comparison with the simplistic approach (orange) (Figure 29a), our method (green) is significantly less affected by the discrepancies in camera-scene distances at the two laser intersections caused by deviations of the camera-scene angle from perpendicularity. The spread of the estimated values within each segment for the direct approach is directly correlated with the span of camera-scene distances (most notable in area E in Figure 29). Although varying distances themselves do not play a role, they do increase the probability both of having deviating camera-surface angles and of violating the surface flatness requirement, which both result in discrepancies in camera-scene distances between the two laser points. In contrast, the analysis of the results of the partially-constrained approach (Figure 30a) confirms that this method is unaffected by changes of camera angle and scene roughness. As expected, the results in sections D, E, and F are nearly identical, with discrepancies in sections A, B, and C. Sections A, B, and C were evaluated using data collected during dive 658, while D, E, and F were evaluated using data from dive 654; we attribute this discrepancy to the marginally larger nonparallelism error of the laser configuration used during dive 658 compared to that of dive 654. This is clearly shown when the results are computed on the data from all laser pairs (Figure 30b), as the nonparallelism of the different laser pairs causes significant variation in the results. Segments acquired at closer ranges (A, B, and C), and therefore less affected by the errors in parallelism, have smaller errors than segments D and F, which were evaluated at larger distances. While similar multimodal distributions appear in the results of the simple direct 3D method, the clear multimodal peaks are suppressed by the effects of camera-surface angles and the roughness of the surface model.

Discussion

The comparison between the results shows that the fully-unconstrained method is more consistent and accurate than the other two approaches, i.e., the partially-constrained and the simplistic direct 3D methods. We note that when the data are limited to parallel laser pairs (dive 654), the partially-constrained method produced similar results. Therefore, the PCM approach can be used when the relation between the parallel lasers and the camera is not known, opening up its use to numerous scenarios where strict rigidity between the camera and the lasers is not maintained or determined (e.g., legacy data).

The effects of different reconstruction strategies were analyzed using two distinct survey scenarios. The first model (FPA dataset) was acquired with multiple passes over the same areas. The overlap of nonsequential images restricted the potential solutions of the optimization problem to a nearly identical solution regardless of the strategy (SfM or navigation fusion). In the second model (AUTT28 dataset), data were acquired during two separate surveys and include a long single pass with the camera oriented nearly parallel to a vertical wall. The results demonstrate that surveys in which sequential images are weakly connected are prone to produce broad-scale deformations (doming effect) in the final model. Rigorous minimization of the re-projection error, combined with the projective scale ambiguity, distorts the model and can lead to further drift in the scale estimate. While the navigation fusion strategy did not play a role in the first model (FPA), the results of the second model (AUTT28) demonstrate the advantage of using multiobjective BA navigation fusion to process data with more complex survey patterns. Furthermore, the introduction of additional vectors in the optimization of multisurvey problems successfully accounted for the offset changes present in the underwater USBL-based navigation data, and thus minimized the effect of contradicting navigation priors.

Conclusions

This study presented a comprehensive scale error evaluation of four of the most commonly used image-based 3D reconstruction strategies for underwater scenes. The evaluation seeks to determine the advantages and limitations of the different methods and to provide a quantitative estimate of model scaling, which is required for obtaining precise measurements in quantitative studies (of distances, areas, volumes, and others). The analysis was performed on two data sets acquired during a scientific cruise (SUBSAINTES 2017) with a scientific ROV (VICTOR6000), and therefore under realistic deep-sea fieldwork conditions. For the models built using the multiobjective BA navigation fusion strategy, an L-curve analysis was performed to determine the optimal weight between the competing objectives of the optimization. Furthermore, the potential offset in navigation when using USBL-based positioning from different dives was addressed in a representative experiment.
Building upon our previous work, the lack of readily available measurements of objects of known size in large-scale models was overcome with the fully-unconstrained method, which exploits laser scaler projections onto the scene. The confidence level of each scale error estimate was independently assessed by propagating the uncertainties associated with image features and laser spot detections using a Monte Carlo simulation. The number of iterations used in the simulation to satisfactorily represent the complexity of the process was validated through an analysis of the behavior of the final estimates. As each scale error estimate characterizes the error at a specific area of the model, independent evaluations across the models enable efficient detection of potential scale drift. To obtain a sufficient number of accurate laser measurements, an automatic laser spot detector was also developed. By mitigating the effects of scene texture using an auxiliary image, a much larger number of accurate detections was possible, even with greatly attenuated laser beams. The requirement of having the laser spots either not present or at a different position in the auxiliary image is easily satisfied in video acquisitions, while an additional image has to be recorded if still images are collected. Furthermore, the recovery of the characteristic shapes of laser spots with radially decreasing intensities enabled the additional determination of the uncertainty of the laser spot detections. In total, the scale errors were evaluated on a large set of measurements spread across both models (432/1378). Finally, the comparison of results obtained using different reconstruction strategies was performed using two distinct survey scenarios. In surveys comprising a single dive and multiple overlapping regions, the choice of reconstruction strategy is not critical, since all strategies perform adequately well. However, in more complex scenarios there is a significant benefit in using an optimization that includes the navigation data. In all cases, the best reconstruction strategies produced models with scale errors below 5%, with errors over the majority of each model area being around 1%. Acquisition of calibration data (points collected over a large range of distances) is indeed critical. Depending on the laser setup, a modification of the laser geometry is possible (e.g., during the dive, due to pressure changes). As minor discrepancies in parallelism can cause significant offsets at the evaluation distance, performing a calibration in the field is desirable (e.g., during an approach to the scene with the laser beams illuminated). Furthermore, our results also indicate and justify the importance of collecting a multitude of evaluation data at different locations and moments during the survey.
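As a closing illustration of the uncertainty propagation summarized above, a minimal Monte Carlo sketch might look as follows; `evaluate_scale_error` is a hypothetical stand-in for the full pipeline (image localization, ray-casting, origin estimation, and scale error evaluation), and the spread of the resulting distribution is the reported uncertainty:

```python
import numpy as np

def run_monte_carlo(features, laser_spots, sigma_feat, sigma_laser,
                    evaluate_scale_error, n_iter=5000, seed=0):
    """Propagate detection noise to a scale error estimate and its spread."""
    rng = np.random.default_rng(seed)
    errors = np.empty(n_iter)
    for i in range(n_iter):
        # Perturb 2D image features and laser spot detections with
        # zero-mean Gaussian noise matching their estimated uncertainties.
        noisy_feat = features + rng.normal(0.0, sigma_feat, features.shape)
        noisy_spot = laser_spots + rng.normal(0.0, sigma_laser, laser_spots.shape)
        errors[i] = evaluate_scale_error(noisy_feat, noisy_spot)
    return errors.mean(), errors.std(ddof=1)
```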
Vectorized Discrete Gaussian Sampling with SIMD Support

Discrete Gaussian sampling is a fundamental building block of lattice-based cryptography. Sampling from a Gaussian distribution D_{ℤ,σ,c} over the integers ℤ, where σ > 0 is a parameter and c ∈ ℝ the center, is an important sub-problem of discrete Gaussian sampling. In this paper, we show that two common sampling algorithms for the discrete Gaussian distribution over the integers can be implemented more efficiently by using vectorization with SIMD (Single Instruction Multiple Data) support. Specifically, we use the VCL (C++ vector class library) by Agner Fog, which offers optimized vector operations for integers and floating-point numbers with the support of SIMD. The VCL is also a simple tool for constant-time implementations, which helps prevent the information leakage caused by timing attacks on sampling operations.

Introduction

Discrete Gaussian sampling, which is to sample from a discrete Gaussian distribution D_{Λ,σ,c} with parameter σ > 0 and center c over an n-dimensional lattice Λ, plays a fundamental role in lattice-based cryptography [1,2]. An important sub-problem of discrete Gaussian sampling, denoted by Sampleℤ, is to sample from a discrete Gaussian distribution D_{ℤ,σ,c} over the integers ℤ with parameter σ > 0 and center c ∈ ℝ; it is usually one of the subroutines in discrete Gaussian sampling algorithms for distributions over a general n-dimensional lattice [3]. Furthermore, Sampleℤ is much more efficient and simpler than discrete Gaussian sampling over a general lattice; in some lattice-based cryptosystems, such as the ones shown in [4,5], the operations involving discrete Gaussian sampling are nothing but Sampleℤ. The methods of sampling from a continuous Gaussian distribution are not trivially applicable to the discrete case. Therefore, how to design and implement good sampling algorithms, especially for Sampleℤ, has received attention in recent years.

Sampleℤ usually involves floating-point arithmetic because of the exponential in the Gaussian function. This implies that online high-precision floating-point computation may be the biggest implementation bottleneck [6]. We can see that most improved Sampleℤ algorithms manage to use high-precision floating-point arithmetic only at offline time [3,7]. Fortunately, later work on discrete Gaussian sampling precision suggested that the significand precision of 53 bits provided by double-precision floating-point arithmetic is sufficient for most security applications [2,8]. Thus, it is feasible to design Sampleℤ algorithms with standard double-precision floating-point arithmetic.

The side-channel leakage of discrete Gaussian sampling algorithms has also been recognized as an important problem. Bruinderink et al. and Espitau et al. presented (timing) side-channel attacks on the sampling algorithms used in the BLISS signature scheme [9,10]. In order to resist timing attacks, all the operations involving secret information in a discrete Gaussian sampling algorithm should be executed in a time that is independent of the secret data. This property of discrete Gaussian sampling algorithms is called time independence, which can be achieved by ensuring constant execution time [11] or by randomly shuffling the secret values [12]. In this paper, we show that two common sampling algorithms for the discrete Gaussian distribution over the integers can be implemented more efficiently by using vectorization with SIMD (Single Instruction Multiple Data) support.
Specifically, we use the VCL (C++ vector class library) by Agner Fog, which offers optimized vector operations for integers and floating-point numbers with the support of SIMD [13]. The two sampling algorithms are the GPV algorithm given by Gentry et al. in [1] and the inversion sampling algorithm (using a cumulative distribution table), whose use was suggested by Peikert in [3]. We selected these two algorithms because the GPV algorithm supports varying parameters (including parameter σ > 0 and center c ∈ ℝ), while inversion sampling is widely used as a base sampler with a fixed and relatively small σ [2,3]. Furthermore, the VCL is a simple tool for constant-time implementations [14]. One can see that the two algorithms we implemented in this paper are constant-time, or at least time-independent, helping mitigate the information leakage caused by timing attacks on sampling operations.

Preliminaries

We denote the set of real numbers by ℝ and the set of integers by ℤ. We extend any real function ρ(·) to a countable set A by defining ρ(A) = Σ_{x∈A} ρ(x). The Gaussian function on ℝ with parameter σ > 0 and center c ∈ ℝ, evaluated at x ∈ ℝ, is defined by ρ_{σ,c}(x) = exp(−(x−c)²/(2σ²)). For real σ > 0 and c ∈ ℝ, the discrete Gaussian distribution over the integers ℤ is defined by D_{ℤ,σ,c}(x) = ρ_{σ,c}(x)/ρ_{σ,c}(ℤ) for x ∈ ℤ. By convention, the subscript c is omitted when it is taken to be 0. For σ > 0 and c ∈ ℝ, sampling an integer x from D_{ℤ,σ,c} is equivalent to sampling the integer x − ⌊c⌋ according to D_{ℤ,σ,{c}}, where {c} is the fractional part of c such that 0 ≤ {c} < 1; it therefore suffices to discuss sampling from a discrete Gaussian distribution D_{ℤ,σ,c} with 0 ≤ c < 1.

The GPV Sampling Algorithm

The GPV sampling algorithm was given by Gentry et al. in [1]. It uses rejection sampling from the uniform distribution over [c − τσ, c + τσ], outputting a uniform integer x with probability ρ_{σ,c}(x) = exp(−(x−c)²/(2σ²)). It needs high-precision floating-point arithmetic to compute the value of ρ_{σ,c}(x) online for every integer x uniformly taken from [c − τσ, c + τσ]. By the principle of rejection sampling, one can see that the GPV algorithm requires about 2τ/√(2π) trials on average. A few years ago, it was believed that τ should be no less than 12, which implies that the GPV sampling algorithm cannot be very efficient, as it would require 2τ/√(2π) ≈ 10 trials on average. Later work on discrete Gaussian sampling precision suggested that the number of trials could be decreased by using more cryptographically efficient measures, such as the Rényi divergence [8] and the max-log distance [2], and that the significand precision of 53 bits provided by double-precision floating-point arithmetic is sufficient for most security applications. So, as in [2], one can take τ = 6 (i.e., 2τ/√(2π) ≈ 5) and use standard double-precision floating-point arithmetic to compute ρ_{σ,c}(x), which certainly gives a performance boost for the GPV sampling algorithm.
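As a language-neutral reference, the GPV sampler just described can be summarized in a scalar Python sketch (double precision, τ = 6 by default); the implementation evaluated in this paper is vectorized C++ using the VCL, described below:

```python
import math
import random

def sample_gpv(sigma, c, tau=6.0):
    """Scalar GPV rejection sampler for D_{Z,sigma,c} (illustrative).

    Draws a uniform integer x in [c - tau*sigma, c + tau*sigma] and
    accepts it with probability exp(-(x - c)^2 / (2*sigma^2)); about
    2*tau/sqrt(2*pi) trials are needed on average.
    """
    lo = math.ceil(c - tau * sigma)
    hi = math.floor(c + tau * sigma)
    while True:
        x = random.randint(lo, hi)  # uniform candidate
        rho = math.exp(-((x - c) ** 2) / (2.0 * sigma * sigma))
        if random.random() < rho:   # accept-reject step
            return x
```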
The Inversion Sampling Algorithm Based on Cumulative Distribution Tables

By using a pre-computed cumulative distribution table (CDT), inversion sampling can generate sample numbers at random from any probability distribution given its cumulative distribution function. We take the discrete (and finite) case as an instance. Let x₁, x₂, ⋯, xₙ be all the possible values of the random variate X. The probability of X = xᵢ is denoted by Pr(X = xᵢ) = pᵢ, where i = 1, 2, ⋯, n and Σ_{i=1}^{n} pᵢ = 1. Thus, the CDT can be represented by {F(x₁), F(x₂), ⋯, F(xₙ) = 1} with F(xᵢ) = Σ_{j=1}^{i} pⱼ. When generating a sample number from X, inversion sampling takes a uniform deviate u ∈ [0, 1) and then returns X = xᵢ such that F(xᵢ₋₁) < u ≤ F(xᵢ), i.e., Σ_{j=1}^{i−1} pⱼ < u ≤ Σ_{j=1}^{i} pⱼ.

SIMD and the VCL Vector Class Library

Modern CPUs have "Single Instruction Multiple Data" (SIMD) instructions for handling vectors of multiple data elements in parallel. The VCL vector class library is a tool that allows C++ programmers to speed up their code by handling multiple data in parallel. The compiler may be able to use SIMD instructions automatically in simple cases, but a human programmer is more likely to do better by organizing data into vectors that fit the SIMD instructions. With the VCL library, it is easier for programmers to write vector code than it is to use assembly language or intrinsic functions [13].

Implementing the GPV Algorithm with VCL

In this section, we implement the GPV algorithm with the VCL. Since most modern CPUs support the AVX (AVX2) instruction set but not AVX512, we use the Vec4d class, which handles vectors of 4 double-precision floating-point elements in parallel with the support of AVX. Moreover, the GPV algorithm needs a large number of uniformly (pseudo-)random numbers. We generate these random numbers using AES256-CTR with AES-NI. We divide the GPV algorithm into two sub-functions, PopGPVPool() and SamplerGPV() (see the pseudocode in the Appendix). The for loop in PopGPVPool() is a time-independent implementation of sampling uniformly from [c − τσ, c + τσ] ∩ ℤ: it samples a uniformly random integer from a power-of-two range and accepts it only if it falls within the target range, thereby generating a uniformly random number from [c − τσ, c + τσ] ∩ ℤ. Next, SamplerGPV() carries out the acceptance-rejection operation for each integer in vx until all 4 prospective outputs in vx are exhausted, and then calls PopGPVPool() again. In particular, the integer w in SamplerGPV() can be seen as the binary representation of the acceptance probability. If the random integer uw ≤ w, the corresponding x can be returned as a sample from D_{ℤ,σ,c}. According to the implementation details of the VCL, most arithmetic operators and mathematical functions in the VCL have constant runtime, including multiplication, division, squaring, and the exponential function, i.e., there is no timing difference dependent on the input [14]. This makes the VCL ideal for constant-time implementations, and one can see that PopGPVPool() is time-independent. SamplerGPV() is clearly not constant-time because of its acceptance-rejection operations. Fortunately, the timing difference caused by the acceptance-rejection operations is independent of the final outputs. Therefore, we say that SamplerGPV() is time-independent, i.e., an adversary cannot obtain any additional information about the samples generated by SamplerGPV() through timing differences.

Implementing the Inversion Sampling Algorithm with VCL

The support of a discrete Gaussian distribution over the integers is infinite, which leads to a CDT of infinite size. In practice, one has to truncate the distribution table while ensuring adequate precision at the same time. Following the idea used for setting the parameter τ in the GPV sampling algorithm, we can also take τ = 6. Specifically, by using the MPFR library, we compute the probability density of D_{ℤ,σ,c} offline, which is ρ_σ(x) = exp(−(x−c)²/(2σ²)) / Σ_{y=−⌈τσ⌉}^{⌈τσ⌉} exp(−(y−c)²/(2σ²)) for x = 0, ±1, ±2, ±3, ⋯, ±⌊τσ⌋; we then use 2⌊τσ⌋ + 1 uint64_t integers to store the binary expansions of these probabilities and obtain the CDT {F(x₁), F(x₂), ⋯, F(x_{2⌊τσ⌋+1}) = 1}.
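As a language-neutral illustration of the table construction just described and of the linear traversal discussed next (the paper's implementation uses C++ with the VCL and stores 64-bit binary expansions computed offline with MPFR; this double-precision NumPy sketch processes four deviates at once only to echo the 4-lane vectorization):

```python
import numpy as np

def build_cdt(sigma, c=0.0, tau=6.0):
    """Truncated CDT of D_{Z,sigma,c} on x = -floor(tau*sigma), ..., floor(tau*sigma)."""
    bound = int(np.floor(tau * sigma))
    xs = np.arange(-bound, bound + 1)
    rho = np.exp(-((xs - c) ** 2) / (2.0 * sigma * sigma))
    return xs, np.cumsum(rho / rho.sum())  # last entry is 1.0

def sample_cdt(xs, cdt, rng, n=4):
    """Inversion sampling: linear scan of the CDT for n uniform deviates."""
    u = rng.random(n)  # uniform deviates in [0, 1)
    # For each u, count entries with F(x_i) < u; that count is the index i
    # such that F(x_{i-1}) < u <= F(x_i).
    idx = (cdt[None, :] < u[:, None]).sum(axis=1)
    return xs[idx]

# Example usage
rng = np.random.default_rng(1)
xs, cdt = build_cdt(sigma=10.0)
print(sample_cdt(xs, cdt, rng))
```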
After establishing the CDT for a given parameter σ and center c, the inversion sampling algorithm simply traverses the whole CDT linearly, and implementing it with the VCL takes only a few lines of code (see the pseudocode in the Appendix). The variables vu and voutput are both declared as Vec4q. The variable vu contains 4 random 64-bit integers, corresponding to the 4 uniform deviates required for inversion sampling. The variable voutput, which is initialized with the vector (−⌈τσ⌉, −⌈τσ⌉, −⌈τσ⌉, −⌈τσ⌉), handles vectors of 4 uint64_t integers in parallel with the support of AVX. We use the 'select(*, Vec4q, Vec4q)' statement provided by the VCL so that each component of voutput is updated independently. Finally, SamplerCDT() generates 4 separate sample numbers according to the CDT.

Experimental Results

On a laptop computer (Intel i7-8550U, 16 GB RAM), using the g++ compiler with the -O3 optimization option enabled, we tested the performance of the GPV sampling algorithm for the discrete Gaussian distribution D_{ℤ,σ,c} with σ = 10, 50, 100, 200, 500, 1000 and c picked uniformly from [0, 1). As shown in Table 1, one can get about 3.4 × 10⁶ samples per second using the (standard) GPV algorithm. The performance gain from the VCL is substantial, and one can get about 9.6 × 10⁶ samples per second. In the same implementation environment, Table 2 shows the performance of the inversion sampling algorithm for the discrete Gaussian distribution D_{ℤ,σ,c} with σ = 5, 10, 15, 20, 25, 30 and c = 0. We can see that the performance gain from the VCL is also substantial, though the sampling speed is directly bound up with the value of σ.

Conclusion

There have been several theoretical and practical attempts to replace discrete Gaussian distributions in lattice-based cryptography with more implementation-friendly ones. However, they only show for some specific lattice-based cryptosystems (such as the BLISS signature [5]) that discrete Gaussians could be replaced by more easily samplable distributions with almost no security penalty. The impact on the whole of lattice-based cryptography, especially on some advanced cryptographic applications of lattices, still needs to be assessed. For example, it has been suggested to replace discrete Gaussians with rounded Gaussians, which can be sampled very efficiently by using the Box-Muller transform and rounding to the nearest integer, but the security analysis of rounded Gaussians is confined to the BLISS signature [14]. Moreover, the VCL has also been suggested for use with rounded Gaussians to obtain a constant-time and efficient implementation. The results presented in this paper mean that one may not need to replace discrete Gaussian distributions in lattice-based cryptography. Sampling algorithms for the discrete Gaussian distribution over the integers can be implemented simply and more efficiently by using the VCL (C++ vector class library) with SIMD (Single Instruction Multiple Data) support. The VCL can also help give constant-time (or at least time-independent) implementations of sampling algorithms for the Gaussian distribution over the integers.
Tunable Magnetocaloric Properties of Gd-Based Alloys by Adding Tb and Doping Fe Elements

In this paper, the magnetocaloric properties of Gd1−xTbx alloys were studied and the optimum composition was determined to be Gd0.73Tb0.27. On the basis of Gd0.73Tb0.27, the influence of different Fe-doping contents was discussed, and the effect of heat treatment was also investigated. The adiabatic temperature change (ΔTad) obtained by the direct measurement method (under a low magnetic field of 1.2 T) and by the specific heat capacity calculation method (indirect measurement) was used to characterize the magnetocaloric properties of Gd1−xTbx (x = 0~0.4) and (Gd0.73Tb0.27)1−yFey (y = 0~0.15), and the isothermal magnetic entropy (ΔSM) was also used, together with ΔTad, as a reference parameter for evaluating the magnetocaloric properties of the samples. In the Gd1−xTbx alloys, the Curie temperature (Tc) decreased from 293 K (x = 0) to 257 K (x = 0.4) with increasing Tb content, and the Gd0.73Tb0.27 alloy showed the best adiabatic temperature change, ~3.5 K in a magnetic field of up to 1.2 T (Tc = 276 K). When the doping content of Fe increased from y = 0 to y = 0.15, the Tc of the (Gd0.73Tb0.27)1−yFey (y = 0~0.15) alloys increased significantly from 276 K (y = 0) to 281 K (y = 0.15), and a good magnetocaloric effect was maintained. Annealing the (Gd0.73Tb0.27)1−yFey (y = 0~0.15) alloys at 1073 K for 10 h resulted in an average increase of 0.3 K in the maximum adiabatic temperature change and a slight increase in Tc. This study is of great significance for the study of magnetic refrigeration materials with adjustable Curie temperature in a low magnetic field.

Introduction

Magnetic refrigeration technology based on the magnetocaloric effect (MCE) has gained wide attention because of its high efficiency and low carbon dioxide emissions. As a new refrigeration technology, its environmentally friendly properties can largely reduce the global greenhouse effect and excessive energy consumption. The MCE is the endothermic and exothermic behavior of a material under a change in the applied magnetic field, and it is evaluated by the adiabatic temperature change (ΔTad) and the isothermal magnetic entropy (ΔSM). The ideal operating temperature of a magnetic refrigeration material is near its Curie temperature, because the adiabatic temperature change and the isothermal magnetic entropy reach their peaks in this temperature range. Room-temperature magnetic refrigeration technology is expected to replace traditional gas compression refrigeration technology in the future; therefore, magnetic materials with excellent magnetocaloric properties at room temperature have been extensively studied around the world [1][2][3][4]. After decades of research, many excellent room temperature magnetic refrigeration materials have been discovered. In 1968, Brown [5] found the large MCE of Gd (Tc = 293 K). In 1997, Pecharsky and Gschneidner [6] observed that Gd5Si2Ge2, with a first-order phase transition, has an MCE of about 18 J/(kg·K) under a magnetic field change of 0-5 T, which is larger than that of pure Gd (~10 J/(kg·K)) under the same conditions. Fengxia Hu et al. [7] found that LaFe13−xSix has a large MCE of about 19.7 J/(kg·K) at 208 K for a field change of 0-5 T. In addition to the discoveries mentioned above, materials such as MnAs1−xSbx [8,9], Ni-Mn-Sn [10][11][12], and La1−xCaxMnO3 [13][14][15] have also been found to have good room temperature magnetic refrigeration performance.
These achievements are sufficient to show that room temperature magnetic refrigeration has good development prospects, especially with pure Gd, which is now used as a benchmark for magnetic refrigeration materials. However, the isothermal magnetic entropy (ΔSM) obtained by applying Maxwell's equation (Equation (1)) to the isothermal magnetization curves has limitations. Results obtained by indirect measurement show that the magnetic entropy change calculated by the Maxwell equation should not have a huge peak value. Further research has shown that the calculation of the magnetic entropy change by Maxwell's equation is not applicable near the Curie temperature, because paramagnetism (PM) and ferromagnetism (FM) coexist there, so that spuriously large entropy peaks are obtained [16]. Therefore, it is not appropriate to use the isothermal magnetic entropy alone to evaluate the magnetocaloric properties of a material; the results for ΔSM can be used as a reference for the magnetocaloric properties. In order to evaluate the magnetocaloric properties of magnetic materials correctly, it is necessary to study the performance of the material under cyclic conditions [17]. The adiabatic temperature change (ΔTad) measured by the direct measurement method (i.e., ΔTad is the difference between the temperatures of the sample measured directly at Hi and Hf, where Hf and Hi are the final and initial magnetic fields, respectively) and by the specific heat capacity calculation method (indirect measurement) is also suitable for practical applications [18]. The adiabatic temperature change is the driving force behind the heat transfer efficiency of the heat transfer fluid in the refrigerator, and it is a key and direct parameter for assessing the magnetocaloric properties of a material. It is thus more direct and accurate to characterize the magnetocaloric properties of materials through the adiabatic temperature change. The direct measurement method is more suitable for the testing of commercial products because of its intuitiveness and convenience. The indirect measurement method is applicable in the equilibrium or near-equilibrium state; however, most magnetic refrigeration processes are dynamic, so the ΔTad obtained by the indirect measurement method was used as supplementary data and as a reference for the direct measurement method in this paper [2]. These two methods can be applied to both first-order and second-order phase transition magnetic materials. The excellent MCE of the currently available magnetic refrigerant materials is only realized in high-cost superconducting magnetic fields (usually from 0 T to 5 T/10 T), which entails high costs in practical applications. Therefore, it is very important to develop advanced magnetic refrigeration materials with a high adiabatic temperature change under the low applied magnetic fields provided by permanent magnets [19,20]. The adiabatic temperature changes directly measured in this paper were obtained with a 1.2 T low magnetic field provided by an NdFeB permanent magnet. As a typical magnetic refrigeration material, pure Gd has excellent application prospects in the field of magnetic refrigeration. However, because the Curie temperature of pure Gd is fixed and not adjustable, its application scope is limited. Therefore, it is of great significance to study alloys with variable Curie temperature [21][22][23][24].
In this work, Gd and Tb alloys with different atomic ratios were studied, and the effect of adding Tb on the Tc and MCE of Gd-based alloys was determined. After the Gd1−xTbx alloy system was established, the effects of doping a small amount of Fe and of adding a heat treatment on the MCE were also revealed.

Experimental Details

The alloys Gd1−xTbx and (Gd0.73Tb0.27)1−yFey (x = 0~0.4, y = 0~0.15, at.%) were obtained by arc melting Gd (99.9%), Tb (99.9%), and Fe (99.9%) in an argon atmosphere. Each ingot was smelted five times to ensure uniformity of composition. Heat treatments of (Gd0.73Tb0.27)1−yFey (y = 0~0.15) were carried out at 1073 K for 10 h. The phase structure was characterized on a D/max-rB X-ray diffractometer. The adiabatic temperature change (ΔTad) of all samples was measured by the direct measurement method under an applied magnetic field of 1.2 T; the magnetocaloric direct measuring instrument is shown in Figure 1. Test procedure: (1) first, the sample was attached to the temperature sensor in an adiabatic thermostat, and the initial temperature, end temperature, and heating rate of the test were set; (2) second, the sample was pushed into an applied magnetic field of 1.2 T, and the temperature controller was operated to raise the temperature; (3) third, the temperature of the test chamber rose slowly, and a test was performed every 4 K during the heating process: the instrument pulled the sample out of the magnetic field, the temperature of the sample dropped sharply until it was stable, and the T and ΔTad at this time were recorded; (4) fourth, the sample was pushed back into the magnetic field until the next test temperature point.

The physical property measurement system (PPMS-9) was used to measure the samples' isothermal magnetization curves in a 2 T magnetic field (temperature increment of 4 K). The magnetic entropy change (ΔSM) was calculated by the Maxwell relation (1):

ΔSM(T, H) = ∫_0^H (∂M(T, H′)/∂T)_{H′} dH′    (1)

The PPMS system was also used to measure the specific heat capacity in the temperature range of 2-400 K under zero applied magnetic field. The adiabatic temperature change can also be calculated by applying Formula (2) shown below:

ΔTad(T) = −T · ΔSM(T) / Cp(T)    (2)
(The parameters in Formula (2) are ΔTad: adiabatic temperature change; T: temperature; Cp: specific heat capacity; ΔSM: magnetic entropy change.)

Gd-Tb Alloys

The adiabatic temperature change (ΔTad) (obtained by direct measurement) and the Curie temperature (Tc) of Gd1−xTbx (x from 0 to 0.4 in steps of 0.1) are shown in Figure 2a. As the Tb content increased, the Curie temperature decreased monotonically, in accordance with the linear fitting equation shown in Figure 2b. To consider the Curie temperature and the adiabatic temperature change together, further study was needed between x = 0.1 and x = 0.3 (x from 0.1 to 0.3 in steps of 0.01). It can be concluded from Figure 2a that when x = 0.27, the Gd-Tb system obtained the largest adiabatic temperature change under a 1.2 T applied magnetic field (ΔTad = 3.5 K, Tc = 276 K). The X-ray diffraction results for the Gd1−xTbx alloys (x = 0, 0.1, 0.2, 0.27, 0.3, 0.4) are shown in Figure 2c. These samples had similar XRD curves, and only the Gd phase could be labeled, indicating that the Tb atoms dissolve in the Gd lattice. The crystal structures of Gd and Tb are both hexagonal close-packed, and the atomic radius difference of the elements (Δr) is 1.2% (rGd = 2.54 Å, rTb = 2.51 Å), which favors the formation of a substitutional solid solution.

The isothermal magnetic entropy change calculated by the Maxwell relation (Equation (1)) from the isothermal magnetization curves M(μ0H)T can be used together with the adiabatic temperature change to evaluate the magnetocaloric properties of alloys, which provides a more accurate result. Figure 3d indicates that as the magnetic field increases, the isothermal magnetic entropy increases significantly, reaching its maximum near the Curie temperature; the values were 3.1 J·kg⁻¹·K⁻¹, 3.7 J·kg⁻¹·K⁻¹, and 5.4 J·kg⁻¹·K⁻¹ for applied magnetic field changes of 0-1 T, 0-1.2 T, and 0-2 T, respectively. The adiabatic temperature change obtained by indirect measurement is also an important parameter for assessing the magnetocaloric properties of materials; in this paper, we used it as a supplement to and reference for the results of the direct measurement. The parameters used in Equation (2) are the isothermal magnetic entropy change (ΔSM) and the specific heat capacity under zero field (Cp). Figure 4a shows the specific heat capacity obtained in zero applied field, Figure 3d shows the isothermal magnetic entropy change, and the adiabatic temperature change calculated by combining the data of these two figures is shown in Figure 4b. Comparing the adiabatic temperature changes obtained by the indirect and direct measurement methods under a 1.2 T magnetic field, we found that the peak values were the same, about 3.5 K, which shows that the direct and indirect measurement methods are in good agreement. In addition, when the magnetic fields were 1 T and 2 T, the adiabatic temperature changes obtained by indirect measurement were 2.9 K and 5.1 K, respectively.
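To make the indirect evaluation concrete, a minimal NumPy sketch (with hypothetical M(H, T) and Cp(T) arrays; not the authors' analysis code) implements Equations (1) and (2) by differentiating the magnetization grid with respect to temperature and integrating over the field:

```python
import numpy as np

def delta_S_M(T, H, M):
    """Maxwell relation, Eq. (1): dS_M(T) = integral_0^H (dM/dT)_H' dH'.

    T: temperatures (n_T,); H: fields (n_H,); M: magnetization grid
    (n_T, n_H) measured at each (T, H) pair (hypothetical inputs).
    """
    dM_dT = np.gradient(M, T, axis=0)  # (dM/dT) at constant H
    return np.trapz(dM_dT, H, axis=1)  # integrate over the field

def delta_T_ad(T, dS, Cp):
    """Indirect estimate, Eq. (2): dT_ad(T) = -T * dS_M(T) / Cp(T)."""
    return -T * dS / Cp
```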
Gd-Tb-Fe Alloys

According to the research on Gd1−xTbx, Gd0.73Tb0.27 has the best magnetocaloric effect. On the basis of Gd0.73Tb0.27, the influence of Fe doping needed to be studied further. The curves of the maximal adiabatic temperature change and of the Curie temperature as functions of the Fe content (y value) can be clearly seen in Figure 5a,b. With increasing Fe content, the maximum adiabatic temperature change of (Gd0.73Tb0.27)1−yFey decreases from 3.5 K (y = 0) to 2.6 K (y = 0.15) under a 1.2 T magnetic field, and the Curie temperature of (Gd0.73Tb0.27)1−yFey increases from 276 K (y = 0) to 281 K (y = 0.15). The XRD patterns of (Gd0.73Tb0.27)1−yFey shown in Figure 5c reveal the effect of Fe doping on the phase structure. When the Fe content y = 0.05, the Fe2Gd phase begins to appear, and with increasing Fe doping, the cubic Fe2Gd phase content gradually increases. Because the Curie temperature of the Fe2Gd phase is very high (Tc = 795 K), the Fe2Gd phase remains ferromagnetic while the ferromagnetic-paramagnetic transition takes place in the main phase at 200-300 K, which weakens the overall change of the magnetic moment and thus reduces the magnetocaloric effect.
The magnetic moment of Fe was 5.8, and the spin magnetic moment of Fe was anti-parallel to the spin magnetic moments of Gd and Tb, so that the saturation magnetization was reduced, also resulting in a decrease in the adiabatic temperature change.
The transition element Fe has a high Curie temperature (1043 K) due to the strong interaction among its 3d electrons. The addition of Fe enhanced the indirect interaction of the 4f-4f electrons between the Gd and Tb atoms, and the Fe-Fe interaction was stronger than the R-Fe and R-R interactions (R = Gd, Tb). As a result, the Curie temperature increased with increasing Fe content. Although the addition of Fe reduced the maximum adiabatic temperature change of the alloy, its ΔTad was not lower than that of pure Gd (ΔTad = 3.1 K) under the same applied magnetic field change of 0-1.2 T; the alloy is still an excellent room temperature magnetic refrigeration material, and the addition of Fe can adjust the Curie temperature while reducing costs.

The effect of Fe doping on the magnetocaloric properties was obtained by studying the (Gd0.73Tb0.27)1−yFey alloy. The heat treatment of (Gd0.73Tb0.27)1−yFey was also very meaningful and worth further investigation. According to the binary phase diagrams of Gd-Fe and Tb-Fe, the solidus temperatures of the two systems are 1118 K and 1120 K, respectively; a heat treatment temperature of 1073 K and a heat treatment time of 10 h were therefore chosen in order to homogenize the structure, remove residual stress, and reduce lattice defects. As one can see in Figure 6a-d, comparing the adiabatic temperature change curves before and after heat treatment, the maximum adiabatic temperature change of Gd0.73Tb0.27 did not change, while those of the other alloys obviously increased. The increase in the maximum adiabatic temperature change can be observed more intuitively in Figure 6e, with an average rise of 0.3 K. The change in Tc can be derived from Figure 6f; within the resolution of the temperature measurement, the Curie temperature increased only slightly.

X-ray diffraction (XRD) experiments were performed on the (Gd0.73Tb0.27)1−yFey alloys after heat treatment, as shown in Figure 7; the results showed that no new phase formed in the (Gd0.73Tb0.27)1−yFey alloys compared with those before heat treatment. This means that annealing at 1073 K for 10 h does not change the phase structure of the alloy but only homogenizes the microstructure. Therefore, the improvement in magnetocaloric properties is due to the homogeneous composition of the sample after heat treatment and the attainment of the equilibrium phase, which is beneficial to the interaction among the atomic magnetic moments.
Conclusions

In this work, the magnetocaloric properties of Gd1−xTbx and (Gd0.73Tb0.27)1−yFey alloys were systematically studied.

• In Gd1−xTbx, the Curie temperature decreased monotonically and linearly with increasing Tb content, while the adiabatic temperature change first rose and then decreased. Considering the magnetocaloric properties and the Curie temperature together, x = 0.27 was the most suitable choice.
• In (Gd0.73Tb0.27)1−yFey, Fe doping reduced the adiabatic temperature change of the alloy while increasing the Curie temperature.
• Heat treatment of (Gd0.73Tb0.27)1−yFey at 1073 K for 10 h resulted in an average increase in the adiabatic temperature change of 0.3 K and a slight increase in the Curie temperature.
• The adiabatic temperature change obtained by the direct measurement method is widely used in the characterization of magnetocaloric effects. The results obtained by the direct measurement method correlated well with the results of the isothermal magnetic entropy change and of the indirect measurement method, which demonstrates the accuracy of the direct measurement.

The alloys studied in this paper have a high magnetocaloric effect, and doping with Fe can effectively reduce the cost and adjust the Curie temperature. This series of alloys are potential magnetic refrigeration materials.

Conflicts of Interest: The authors declare no conflicts of interest.
Carbonic Anhydrase 1-Mediated Calcification Is Associated With Atherosclerosis, and Methazolamide Alleviates Its Pathogenesis

Vascular calcification is an important pathogenic process in atherosclerosis (AS); however, its immediate cause is unknown. Our previous study demonstrated that carbonic anhydrase 1 (CA1) stimulates ossification and calcification in ankylosing spondylitis and breast cancer. The current study investigated whether CA1 plays an important role in AS calcification and whether the CA inhibitor methazolamide (MTZ) has a therapeutic effect on AS. We successfully established an AS model by administration of a high-fat diet to apolipoprotein E (ApoE−/−) mice. The treated animals had significantly increased serum levels of high-density lipoprotein cholesterol (HDL-c) and nitric oxide (NO) and decreased serum concentrations of total cholesterol (TC), triglycerides (TG), low-density lipoprotein cholesterol (LDL-c), interleukin 6 (IL-6), interferon (IFN)-γ, granulocyte-macrophage colony-stimulating factor (GM-CSF), tumor necrosis factor-α (TNF-α), chemokine (C-X-C motif) ligand 1/keratinocyte-derived chemokine (CXCL1/KC), and C-C motif chemokine ligand 2 (CCL2)/monocyte chemoattractant protein 1 (MCP-1). The treated mice also had reduced AS plaque areas and fat accumulation, with no clear calcium deposition in the intima of the blood vessels. CA1 expression was significantly increased in the aortic lesions, particularly in calcified regions, but the expression was dramatically lower in the mice that received MTZ treatment or MTZ preventive treatment. CA1 was also highly expressed in human AS tissues and in rat vascular smooth muscle cells (VSMCs) with β-glycerophosphate (β-GP)-induced calcification. Acetazolamide (AZ), a CA inhibitor with a chemical structure similar to MTZ, markedly suppressed calcification and reduced CA1, IL-6, IFN-γ, GM-CSF, and TNF-α expression in cultured VSMCs. Anti-CA1 small interfering ribonucleic acid (siRNA) significantly suppressed calcification, cell proliferation, and migration, promoted apoptosis, and reduced IL-6, IFN-γ, GM-CSF, and TNF-α secretion in cultured VSMCs. These results demonstrated that CA1 expression and CA1-mediated calcification are significantly associated with AS progression. MTZ significantly alleviated AS and suppressed CA1 expression and proinflammatory cytokine secretion, indicating the potential use of this drug for AS treatment.
Keywords: carbonic anhydrase 1, calcification, atherosclerosis, acetazolamide, methazolamide

INTRODUCTION

Vascular calcification is involved in the plaque formation of atherosclerosis (AS) (Ross, 1999; Gamble, 2006; Cianciolo et al., 2010). Arterial calcification is an active, cell-regulated process occurring during osteogenesis and includes the transition of vascular smooth muscle cells (VSMCs) to osteoblasts and the crystallization and precipitation of hydroxyapatite salt (Adeva-Andany et al., 2015). Runt-related transcription factor 2 (Runx2), bone morphogenetic protein 2 (BMP2), and alkaline phosphatase (ALP) interact and influence each other, leading to a process similar to osteoblast-like differentiation (Aikawa et al., 2007). However, the immediate cause of calcium salt deposition is unknown.

Carbonic anhydrase 1 (CA1) is a member of the carbonic anhydrase (CA) family that reversibly catalyzes the hydration of CO2 to form HCO3−, which then rapidly binds to calcium ions to form calcium carbonate (Supuran, 2008). We have found that CA1 expression is specifically upregulated in the synovial membrane of patients with ankylosing spondylitis (Chang et al., 2010). The most distinctive pathological manifestations of ankylosing spondylitis are inflammation of the hip and spinal joints, hyperosteogeny, joint fusion, and fibrosis (Zhang et al., 2003; Landewe and van der Heijde, 2009). We then found that CA1 could promote joint calcification, ossification, and joint fusion by accelerating calcium carbonate deposition (Zheng et al., 2012). Additionally, our group found that CA1 was highly expressed in breast carcinoma tissues and in blood from patients with breast cancer, leading to calcification of the tumor tissue, inhibition of apoptosis, and promotion of tumor cell migration (Zheng et al., 2015). Thus, CA1 plays important roles in promoting biocalcification.

Acetazolamide (AZ) and methazolamide (MTZ) are CA inhibitors (Masini et al., 2013; Solesio et al., 2018). MTZ and AZ are sulfonamide derivatives and are clinical drugs used in the treatment of glaucoma. MTZ is an improved version of AZ with a relatively low adverse reaction rate (Mincione et al., 2008).
As shown in our previous studies, the induction of calcification in human osteosarcoma Saos-2 cells and murine mammary adenocarcinoma 4T1 cells upregulated CA1 expression. Treatment of these cells with AZ not only reduced CA1 expression but also suppressed cell calcification (Zheng et al., 2015). Our clinical study showed that MTZ was an effective treatment for patients with active ankylosing spondylitis by suppressing CA1 expression and joint fusion (Chang et al., 2011). CA may play important roles in AS. As shown in the study by Oksala et al., CA2 and CA12, members of the CA family, were highly expressed in human atherosclerotic plaques, might be associated with osteoclast-like cells of a mononuclear cell lineage in patients with advanced AS, and were involved in plaque remodeling (Oksala et al., 2010). Ando et al. examined abdominal aortic aneurysm using proteomics and detected an abundant CA1 autoantigen, suggesting an important role for CA1 in the formation of this lesion (Ando et al., 2013). Based on their findings and the results reported by our group, we hypothesize that CA1 plays an important role in the AS process by stimulating tissue calcification and that the CA inhibitors AZ and MTZ can treat AS by inhibiting CA1 expression. This study investigated the function of CA1 in AS and its underlying mechanism. We first examined the expression of CA1 in human AS tissues and in the aortic tissue of an AS mouse model to determine the relationship between CA1 expression levels and AS pathogenesis. Next, we treated the AS animal model with MTZ to determine the therapeutic effect of the CA inhibitor on AS and AS plaque formation in vivo. Furthermore, we cultured rat VSMCs to observe CA1 expression after calcification induction. We then treated VSMCs with anti-CA1 small interfering ribonucleic acid (siRNA) or AZ to confirm the role of CA1 in VSMC calcification in vitro. MTZ is a clinical drug, but no sterile MTZ for cell culture is available; sterile AZ for cell culture experiments is commercially available, but clinical AZ is not. We thus treated the mouse model with MTZ and cultured the VSMCs with AZ. This study used aortic aneurysm and aortic dissection tissue samples to investigate the calcification mechanism in AS. Our clinical imaging data showed calcification in these samples. Many studies have demonstrated that most aortic aneurysms and aortic dissections are caused by AS and are related to vascular calcification (Lavall et al., 2012; Ladich et al., 2016).

Acquisition of Human AS Tissue
The tissue specimens were acquired from patients undergoing heart surgery at Shandong Provincial Qianfoshan Hospital Affiliated with Shandong University (Jinan, China). The control specimens were acquired from volunteers who served as heart transplant donors (n = 7), and information on the volunteers was kept confidential. AS tissue specimens were acquired from patients with aortic aneurysm or aortic dissection accompanied by AS symptoms (n = 7). The patients had hypertension and aortic calcification, as shown by CT (Discovery 750, GE Healthcare, USA) imaging examination. Detailed information about the patients is shown in Supplementary Table 1. All aortic tissues were obtained from the ascending aortas. The study protocol was approved by the Medical Ethics Committee of Shandong Provincial Qianfoshan Hospital at Jinan (approval number: 20170607).

Effects of AZ on Rat VSMC Calcification
The CA inhibitor AZ (Sigma, USA) was dissolved in 0.05% dimethylsulfoxide (DMSO).
Rat VSMCs were plated at a density of 1 × 10⁵ cells per well, and 100 µmol/L AZ was added to the cells undergoing calcification induction. The experiments were performed as previously described (Hall and Kenny, 1985a; Hall and Kenny, 1985b). In the control culture, 0.05% DMSO was added to the calcification induction medium.

Analysis of Alizarin Red S Staining
On day 14 after the induction of rat VSMC calcification, the cells were washed with phosphate buffer saline (PBS) (Solabio, China), fixed with 95% ethanol for 10 min, and stained with 0.5% (w/v) alizarin red S (AR-S, Solabio) (pH = 4.2) for 30 min at room temperature. The formation of calcified nodules in the VSMCs was examined under a microscope. Rat VSMCs treated with anti-CA1 siRNA were analyzed by the same protocol.

Quantification of Calcification Using Cetylpyridinium Chloride
Following AR-S staining, VSMCs that underwent calcification induction were treated with 10% (w/v) cetylpyridinium chloride and incubated at 37°C for 1 h. The optical density (OD) was measured at a wavelength of 562 nm using a microplate reader (Molecular Devices, USA). VSMCs treated with anti-CA1 siRNA were analyzed by the same protocol.

Examination of CA, Runx2, ALP, and BMP2 Expression Using Real-Time, Fluorescence-Based Quantitative PCR
Total RNA was extracted from the human tissues, cultured VSMCs, and mouse aortic tissues according to the RNApure Tissue & Cell kit instruction manual (CWbiotech, China). The total RNA was reverse transcribed into complementary deoxyribonucleic acid (cDNA) according to the instructions of a reverse transcription kit (Toyobo, Japan); messenger ribonucleic acid (mRNA) expression was then determined using real-time fluorescence-based quantitative PCR (StepOnePlus, Life Technology, USA). Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA was used as an internal control to quantify expression of the target genes. Each sample was examined in triplicate. The mRNA expression relative to GAPDH was measured using the comparative 2^−ΔΔCt method (Livak and Schmittgen, 2001). The specificity of the primers was determined using melting curve analysis. The primer sequences are listed in Supplementary Table 2.

Examination of CA1 Expression Using Western Blotting (WB)
The human tissue specimens, mouse aorta tissue specimens, and cultured VSMCs were homogenized in Radio-Immunoprecipitation Assay (RIPA) lysis buffer (Beyotime, China) on ice and centrifuged at 12,000 rpm. Proteins were separated using 12% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to a PVDF membrane. The membrane was incubated with an anti-CA1 antibody (Abcam, USA, catalog number: 108367) at 4°C overnight. The membrane was then incubated with a horseradish peroxidase (HRP)-conjugated goat anti-rabbit secondary antibody at 37°C for 1 h. The membrane was developed using Western Chemiluminescent HRP Substrate (ECL, Millipore). The expression level of GAPDH was used as an internal control. The grayscale value of the target protein was quantified using ImageJ software (National Institutes of Health, USA).

Transfection of Anti-CA1 siRNA
Anti-CA1 siRNA was synthesized by GenePharma (China), and the sequence was 5'-GGAUGCCCUAAGCUCAGUUTT-3'. The transfection was performed with PepMute™ siRNA Transfection Reagent (SignaGen, USA). Following transfection, the VSMCs were harvested, and proteins were extracted. An Allstars siRNA that does not suppress the expression of any gene was used as the control.
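As a concrete illustration of the relative-quantification step described above, the following is a minimal sketch of the comparative 2^−ΔΔCt calculation; the Ct values are hypothetical and serve only to show the arithmetic, not measured data from this study.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Comparative 2^-ddCt method (Livak and Schmittgen, 2001).

    Each ct_* argument is the mean threshold cycle of triplicate wells.
    Returns target-gene expression relative to the control group,
    normalized to the internal control (e.g., GAPDH, the ct_ref* values).
    """
    d_ct_sample = ct_target - ct_ref              # normalize sample to GAPDH
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize control to GAPDH
    dd_ct = d_ct_sample - d_ct_control            # calibrate to the control group
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: CA1 in calcified vs. untreated VSMCs
fold = relative_expression(ct_target=24.1, ct_ref=18.0,
                           ct_target_ctrl=27.3, ct_ref_ctrl=18.2)
print(f"CA1 expression relative to control: {fold:.2f}-fold")
```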
Examination of VSMC Proliferation Using the Cell Counting Kit-8 (CCK-8) Assay
Rat VSMCs were harvested after transfection with anti-CA1 siRNA. Cell Counting Kit-8 solution (Dojindo, Japan) was added, and the cells were cultured for 2 h. The OD was measured at 450 nm using a microplate reader.

Examination of VSMC Migration Using Transwell Chambers
After transfection with anti-CA1 siRNA, Transwell chambers (Corning, USA) were placed in 24-well plates. Dulbecco's modified Eagle medium (DMEM) containing 10% fetal bovine serum (FBS) was added to the bottom chamber, and serum-free cell suspension (containing approximately 8 × 10⁴ cells) was added to the top chamber. The cells were cultured at 37°C for 24 h. The cells in the upper well of the chambers were completely removed with a cotton swab to ensure that they would not affect subsequent experiments. The chambers were fixed with methanol for 15 min, and the upper wells were wiped again after the chambers were dry. The chambers were then stained with 0.01% crystal violet, and the cells were observed and counted under a microscope.

Flow Cytometry Analysis of VSMC Apoptosis Using Annexin V-FITC/Propidium Iodide (PI) Staining
VSMCs were transfected with anti-CA1 siRNA for 48 h, and 5-10 × 10⁴ cells from each group were centrifuged, washed with PBS, resuspended in binding buffer, and stained with Annexin V-FITC and PI (Dakewei, China) in the dark for 15 min. The cells were examined using flow cytometry (NovoCyte D2040R, USA).

Establishment of an AS Mouse Model
Eight-week-old healthy male C57BL/6J-ApoE−/− mice weighing 22 ± 2 g were purchased from Vital River Laboratory Animal Technology in Beijing, China. The mice were randomly divided into four groups: 1) Control group (n = 40): ApoE−/− mice were fed a normal diet for 21 weeks; from weeks 1 to 21, the mice were administered normal saline at a dose of 0.1 ml/10 g body weight by oral gavage every other day; 2) AS model group (n = 40): ApoE−/− mice were fed a high-fat diet (1% cholesterol and 10% fat in the regular diet) for 21 weeks; from weeks 1 to 21, the mice were administered normal saline at a dose of 0.1 ml/10 g body weight by oral gavage every other day; 3) MTZ treatment group (n = 40): ApoE−/− mice were fed a high-fat diet for 21 weeks; from weeks 1 to 12, the mice were administered normal saline at a dose of 0.1 ml/10 g body weight by oral gavage every other day, and from weeks 12 to 21, the mice were administered MTZ at a dose of 25 mg/kg body weight/day by oral gavage every other day (Wang et al., 2009; Konstantopoulos et al., 2012); and 4) MTZ preventive treatment group (n = 40): ApoE−/− mice were fed a high-fat diet for 21 weeks, and from weeks 1 to 21, the mice were administered MTZ at a dose of 25 mg/kg body weight/day by oral gavage every other day. At week 21, all mice were humanely euthanized by a lethal dose of ketamine and xylazine, and the aorta was isolated. The study protocol was approved by the Medical Ethics Committee of Shandong Provincial Qianfoshan Hospital at Jinan (approval number: 20170607). The breeding and handling of the experimental animals were carried out in accordance with the Helsinki Convention on Animal Protection and the Regulations of the People's Republic of China on the Administration of Experimental Animals.
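The apoptosis readout from the Annexin V-FITC/PI assay reduces to a quadrant count on two fluorescence channels. The following is a hedged sketch of that classification; the gating thresholds (normally set from unstained controls) and any input data are hypothetical, and real analyses use the cytometer's own software.

```python
import numpy as np

def apoptosis_fractions(annexin, pi, annexin_thr, pi_thr):
    """Classify events into the four standard Annexin V-FITC/PI quadrants.

    annexin, pi : 1-D arrays of per-event fluorescence intensities
    *_thr       : gating thresholds for positivity on each channel
    Returns the fraction of events falling in each quadrant.
    """
    annexin, pi = np.asarray(annexin), np.asarray(pi)
    a_pos, p_pos = annexin > annexin_thr, pi > pi_thr
    n = annexin.size
    return {
        "viable (A-/PI-)":          np.sum(~a_pos & ~p_pos) / n,
        "early apoptotic (A+/PI-)": np.sum(a_pos & ~p_pos) / n,
        "late apoptotic (A+/PI+)":  np.sum(a_pos & p_pos) / n,
        "necrotic (A-/PI+)":        np.sum(~a_pos & p_pos) / n,
    }
```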
Measurement of Mouse Blood Lipid Levels
Blood was collected and centrifuged at 3,000 rpm for 10 min. The mouse serum levels of total cholesterol (TC), triglycerides (TG), low-density lipoprotein cholesterol (LDL-c), and high-density lipoprotein cholesterol (HDL-c) were measured using a kit from Nanjing Jiancheng Bioengineering Institute (China), and the atherogenic index (AI) was calculated as follows: AI = [TC − (HDL-c)]/HDL-c.

Measurement of Mouse Serum Nitric Oxide (NO) Level
Blood serum was collected from mice in the four groups, and the NO level was measured using a kit from Nanjing Jiancheng Bioengineering Institute (China).

Measurement of Inflammatory Cytokines in VSMC Culture and Mouse Serum
The VSMC culture medium was collected after culturing, and the serum from mice was collected at week 21. The levels of inflammatory cytokines were examined by flow cytometry (NovoCyte D2040R, USA) using an inflammatory cytokine kit. The VSMC culture medium was examined using the Rat Th1/Th2 Panel with filter plate (BioLegend, USA); the serum from mice was examined using the Mouse Inflammation Panel with filter plate (BioLegend, USA). The capture bead mixture, fluorescence reagent, and the sample were added to the detection plate wells. The plate was then placed on a shaker and incubated in the dark at room temperature for 2 h. After two washes, the mixture was resuspended in wash buffer. The changes in the levels of IL-1α, IL-1β, IL-2, IL-4, IL-5, IL-6, IL-10, IL-12p70, IL-13, IL-17A, IL-18, IL-33, interferon (IFN)-γ, granulocyte-macrophage colony-stimulating factor (GM-CSF), tumor necrosis factor-α (TNF-α), chemokine (C-X-C motif) ligand 1/keratinocyte-derived chemokine (CXCL1/KC), and C-C motif chemokine ligand 2 (CCL2)/monocyte chemoattractant protein 1 (MCP-1) were analyzed using FCAP_Array_v3 software (BD Biosciences, USA).

Observation of Mouse Atherosclerotic Plaques Using Sudan IV Staining
The thoracoabdominal aorta was carefully isolated up to the iliac artery branch. The aorta was placed in Sudan IV solution for 20 min and differentiated in 80% ethanol for 20 min. The aorta was observed and imaged under a microscope. The extent of the atherosclerotic plaques was semiquantified by calculating the ratio of the red-stained area in the aorta to the total aortic area using Image-Pro Plus 6.0 (Media Cybernetics, USA) (Zeng et al., 2014).

Observation of Mouse AS Plaques Using Oil Red O Staining
Cryostat sections of the aortas from each mouse group were prepared, incubated with 70% ethanol, and stained with freshly prepared Oil Red O working solution for 10 min. The sections were washed in 70% ethanol followed by distilled water and then counterstained with hematoxylin and placed under running water for bluing. The sections were then mounted with glycerol jelly mounting medium. The atherosclerotic plaques were observed and imaged under a light microscope. The extent of the aortic root atherosclerotic plaques was semiquantified by calculating the Oil Red O-positive staining area in the sections using ImageJ software (Kulathunga et al., 2018; Li et al., 2019).

Hematoxylin and Eosin (HE) Staining of Mouse Aorta Tissue
Mouse aorta tissue samples were fixed with 4% paraformaldehyde, embedded in paraffin, and sectioned continuously. After deparaffinization and dehydration, the sections were stained with HE.
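The two semiquantitative measures just described are simple formulas, sketched below with hypothetical numbers; the crude red-pixel criterion stands in for the interactive thresholding actually done in Image-Pro Plus/ImageJ and is an assumption for illustration.

```python
import numpy as np

def atherogenic_index(tc, hdl_c):
    """AI = [TC - (HDL-c)] / HDL-c, with both lipids in the same unit."""
    return (tc - hdl_c) / hdl_c

def plaque_area_fraction(rgb, aorta_mask):
    """Ratio of red-stained (Sudan IV-positive) area to total aortic area.

    rgb        : H x W x 3 uint8 image of the stained, opened aorta
    aorta_mask : boolean H x W mask delimiting the total aortic area
    The red criterion below is a simple stand-in; real analyses tune
    thresholds per staining batch.
    """
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    red = (r > 120) & (r > 1.4 * g) & (r > 1.4 * b)
    return np.sum(red & aorta_mask) / np.sum(aorta_mask)

# Hypothetical serum values (mmol/L): AI = (12.0 - 1.5) / 1.5 = 7.0
print(atherogenic_index(tc=12.0, hdl_c=1.5))
```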
Observation of Calcium Deposition in Human and Mouse Aortas Using von Kossa Staining
A tissue array containing AS human aortic tissue samples (n = 8) and normal human aortic tissue samples (n = 8) commercially obtained from Alenabio (China) was used for von Kossa staining. Detailed information on the tissue array is provided in Supplementary Table 3. The mouse aortic tissues were dehydrated, embedded, and sliced into 5 µm thick sections. After routine deparaffinization and dehydration, the sections were incubated with a von Kossa (Solabio, China) silver solution under bright light for 15 min and then treated with a sodium thiosulfate solution for 2 min. The nuclei were counterstained with HE. Sections were then dehydrated, cleared, and mounted with neutral balsam. The status of calcium deposition in the mouse aorta was observed and imaged under a light microscope.

Immunohistochemistry of CA1 Expression
Paraffin sections of mouse aorta and human aorta were deparaffinized. The tissue array (Alenabio, China) used for immunohistochemistry of human tissue contained tissue sections serial to those used for von Kossa staining. The sections were incubated with an anti-CA1 polyclonal antibody (Cusabio, China) overnight at 4°C. The sections were then incubated with goat anti-rabbit IgG (Zhongshan Golden Bridge Biotechnology, China). Sections were treated with diaminobenzidine (DAB, Zhongshan Golden Bridge Biotechnology, China) and counterstained with hematoxylin.

Statistical Analysis
Normality and homogeneity of variance tests were performed using SPSS 17.0 software (IBM, USA). Data that met the test criteria were represented as the means ± SEM. The significance of differences among multiple groups was analyzed using one-way analysis of variance (ANOVA). Comparisons between groups were analyzed using Fisher's least significant difference (LSD) method. Paired and/or unpaired Student's t-tests were used to evaluate the statistical significance of differences between two groups. P < 0.05 was considered to be statistically significant.
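For readers who want a concrete picture of this group-comparison workflow, the sketch below runs an omnibus one-way ANOVA followed by a pairwise comparison. The measurements are invented, and Fisher's LSD is approximated here with a pairwise t-test after a significant ANOVA; the exact LSD test uses the pooled ANOVA error term rather than per-pair variances.

```python
import numpy as np
from scipy import stats

# Hypothetical serum measurements (e.g., TC in mmol/L) for the four groups
groups = {
    "control":        np.array([2.1, 2.3, 2.0, 2.2, 2.4]),
    "AS model":       np.array([11.8, 12.5, 13.1, 12.0, 12.7]),
    "MTZ treatment":  np.array([8.9, 9.4, 8.5, 9.1, 9.8]),
    "MTZ preventive": np.array([8.1, 8.6, 7.9, 8.4, 8.8]),
}

# One-way ANOVA across all four groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4g}")

# Pairwise comparison once the omnibus test is significant
if p_anova < 0.05:
    t, p = stats.ttest_ind(groups["AS model"], groups["MTZ treatment"])
    print(f"AS model vs MTZ treatment: t = {t:.2f}, P = {p:.4g}")
```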
CA1 Expression in Human Aortic Tissue
The expression of CA1 in human ascending aortic tissues was evaluated. Western blot analysis showed significantly increased expression of CA1 protein at a molecular weight of 29 kD in the human AS tissues compared with that in the healthy aortic tissues (P = 0.0204, Figures 1A, B). Immunohistochemistry also revealed CA1 expression in human aortic AS tissues. Many cells in the vascular wall were immunostained with the anti-CA1 antibody in the diseased tissues. However, little CA1 immunosignal was observed in the healthy samples (P < 0.001, Figures 1C, D). Additionally, von Kossa staining revealed obvious dark brown calcium deposition in continuous tissue sections of the human aortic AS samples. No identifiable calcium deposition was found in the healthy samples (Figure 1E). mRNA levels of CA1, CA2, CA3, CA4, CA5a, CA6, CA7, CA8, CA9, and CA10 were examined in the aortic aneurysm tissues, aortic dissection tissues, and healthy aortic tissues using real-time PCR. Only CA1 exhibited significantly increased expression in the AS aortic samples (P < 0.001, Figure 1F). The above results demonstrated that the expression of CA1 and calcium salt deposition was increased in human aortic AS tissues.

The Effects of MTZ on Mouse AS
The AS mouse model was established by feeding the ApoE−/− mice a high-fat diet. Compared to the control group, the body weights of the mice in all three other groups increased more rapidly. However, no significant difference was observed among the three groups that were fed the high-fat diet (Supplementary Figure 1). After 21 weeks, blood was collected from the mice in all four groups for serum lipid analysis. Compared to the healthy control group, the AS model mice exhibited significantly higher serum TC, TG, and LDL-c levels (P < 0.001, < 0.001, and 0.001, respectively) and significantly lower HDL-c levels (P < 0.001), indicating successful establishment of the AS animal model. Compared to the AS model group, the MTZ treatment significantly reduced serum TC, TG, and LDL-c levels (P = 0.0036, 0.0039, and 0.0056, respectively) and increased HDL-c levels (P = 0.012) in the mice; the mice in the MTZ preventive treatment group also displayed significantly reduced serum TC, TG, and LDL-c levels (P = 0.0014, 0.0013, and <0.001, respectively) and increased HDL-c levels (P = 0.004). The difference in the levels of TC, TG, and HDL-c between the MTZ treatment group and the MTZ preventive treatment group was not statistically significant (P = 0.502, 0.158, and 0.506, respectively), but the difference in the LDL-c levels was statistically significant (P = 0.009) (Figure 2A). The AI of the AS model group was significantly elevated (P < 0.001). Compared to the AS model group, the AIs of the MTZ treatment and preventive treatment groups were significantly reduced (both P values were <0.001, Figure 2B). Measurement of the serum NO levels in the mice from each group showed that the serum NO level in the AS model was significantly reduced compared to that of the healthy controls (P < 0.001). Compared to the AS model group, the serum NO levels in both the MTZ treatment group and the preventive treatment group were significantly increased (P = 0.002 and P < 0.001, respectively). Though the serum NO levels in the preventive treatment group were increased, there was no significant difference between the MTZ treatment group and the MTZ preventive treatment group (Figure 2C). The above results demonstrated that MTZ had a significant therapeutic effect on AS in the animal model. The levels of different cytokines were assessed using flow cytometry. Compared to the control group, the levels of IL-6, IFN-γ, GM-CSF, TNF-α, CXCL1/KC, and CCL2/MCP-1 were significantly increased in mice from the AS model group (P = 0.0026, 0.015, 0.037, 0.014, 0.004, and <0.001, respectively), but the levels of the other analyzed cytokines (IL-1α, IL-1β, IL-2, IL-4, IL-5, IL-10, IL-12p70, IL-13, IL-17A, IL-18, and IL-33) were not significantly different between the two groups. Compared to the AS model group, the levels of IL-6, IFN-γ, TNF-α, CXCL1/KC, and CCL2/MCP-1 were significantly reduced in mice from the MTZ treatment group (P = 0.0015, 0.036, 0.02, 0.00105, and 0.005, respectively). Meanwhile, the levels of IL-6, IFN-γ, TNF-α, GM-CSF, CXCL1/KC, and CCL2/MCP-1 were also significantly reduced in mice from the MTZ preventive treatment group (P = 0.0027, 0.037, 0.046, 0.019, 0.0296, and 0.007, respectively). Compared to the MTZ treatment group, the levels of these inflammatory cytokines in the MTZ preventive treatment group did not change significantly (P = 0.208, 0.076, 0.215, 0.850, 0.170, and 0.996, respectively; Figure 2D). The above results demonstrated that MTZ suppressed the production of some inflammatory cytokines in the AS model. CA1 expression levels in aortic tissue samples from the AS mouse model were examined using Western blot and real-time PCR. Compared with the control group, CA1 protein with a molecular weight of 29 kD showed significantly higher expression levels in the AS mice (P < 0.001).
Compared with the AS model group, the CA1 protein level was significantly reduced in the treatment group and in the preventive treatment group (P < 0.001 and P < 0.001, respectively; Figures 3A, B). The CA1 mRNA level was also significantly increased in the AS model aortic tissue and was decreased in the MTZ treatment group and MTZ preventive treatment group (P < 0.001 and P < 0.001, respectively; Figure 3C). The levels of other CA members in the aortic tissue samples from the AS animal model were also examined using real-time PCR. Compared with the expression profile of the healthy controls, mRNA levels of CA9 and CA10 in addition to CA1 were increased in the AS group (P = 0.029 and P = 0.025, respectively), but the transcription levels of other CA members from CA2 to CA8 did not significantly change in the animals. Furthermore, all CA members except CA1 had no significant change in their mRNA levels in the MTZ treatment and MTZ preventive treatment groups (Supplementary Figure 2). The above results demonstrated that CA1 had increased expression in aortic AS tissues and that MTZ treatment principally reduced CA1 expression in the tissues. At week 21 of feeding, mouse aorta samples were stained with Sudan IV. Compared to the control group, the area of the AS plaques in the aorta of the AS model group was significantly increased (P < 0.001), while compared to the AS model group, this value was decreased in the MTZ treatment group (P < 0.001); the AS plaque area was even smaller in the aortas of the MTZ preventive treatment group (P < 0.001, Figures 3D, E). The mouse aorta samples were also examined using Oil Red O staining. The aorta lumen in the control group was smooth, and few AS plaques were observed. The most striking red staining was observed in the aorta intima of the AS model group, and massive fat dots, stripes, and typical AS plaques protruded from the aorta intima. AS plaques were also observed in the MTZ treatment and preventive treatment groups but to a lesser extent than those in the AS model group (P = 0.0015 and P = 0.00102, respectively; Figures 3F, G). HE staining did not show plaques in the aortas of mice from the control group. The internal vascular wall was thin and small, the endothelial cells and VSMCs were aligned in order, and the media thickness was normal in the healthy controls. In the aortas of the AS model group, clear concentric AS plaques, accompanied by massive foam cells that accumulated inside the plaques, were observed. Additionally, large amounts of cholesterol crystals were present. The smooth muscle layer was relaxed and structurally disrupted, and inflammatory cells had accumulated. The intima was thickened and protruded into the lumen, leading to stenosis of the lumen. However, those pathological changes were obviously alleviated in the MTZ treatment and preventive treatment groups (Figure 3H). Thus, the administration of MTZ to ApoE−/− mice by oral gavage significantly alleviated AS pathogenesis. Von Kossa staining was used to detect calcium deposition in the mouse aorta. The assay showed a large amount of dark brown calcium deposition in the intima of the AS model mice.
Compared to the AS model group, identifiable calcium deposition was not observed in the MTZ treatment and preventive treatment groups, indicating that the MTZ treatment significantly suppressed calcification in AS mice (Figure 3I). CA1 expression in the mouse aortic tissue sections was examined using immunohistochemistry. Compared to the control group, a large amount of brown immunoreactive signal was present in the AS model group, indicating that CA1 was expressed at high levels in the AS mice. Furthermore, the CA1 expression was localized in the calcified regions by comparing the successive sections with von Kossa staining. Compared to the AS model group, the immunoreactivity in the MTZ treatment group was significantly reduced, suggesting that the alleviation of AS was accompanied by reduced CA1 expression. Similarly, the level of immunoreactivity in the preventive treatment group was also significantly less than that in the AS model group, indicating a reduction in CA1 expression (Figure 3J). The above observations demonstrated that CA1 was extensively expressed in the AS model and that MTZ suppressed the expression. Furthermore, the CA1 expression was colocated with calcium deposition, and MTZ inhibited calcification in the aortic tissue of AS mice.

Effect of CA1 Expression and AZ on Rat VSMC Calcification
Rat VSMC calcification was induced with β-GP, and the cells were simultaneously treated with the CA1 inhibitor AZ. Because AZ was dissolved in DMSO, we also added DMSO to the culture as a control. AR-S staining showed a large number of orange calcified nodules in the rat VSMCs. No calcified nodules were observed in cells that did not receive β-GP treatment, and few calcified nodules were observed in cells treated with AZ (Figure 4A). Quantitative analysis with cetylpyridinium chloride showed a similar increase in cellular calcification upon β-GP induction (P < 0.001). The AZ treatment drastically reduced calcification (P < 0.001) to a level similar to that of cells without calcification induction (P > 0.05, Figure 4B). The mRNA expression of the ossification markers Runx2, ALP, and BMP2 in the rat VSMCs was analyzed before and after calcification using real-time PCR. Compared to the control group, the induction of calcification significantly increased the expression of Runx2, ALP, and BMP2 (P = 0.006, 0.008, and 0.004, respectively), indicating that the rat VSMCs underwent a biomineralization process upon calcification induction. The AZ treatment significantly suppressed Runx2, ALP, and BMP2 expression in VSMCs (P = 0.005, 0.008, and 0.017, respectively; Figures 4C-E). The above results demonstrated that AZ inhibited the calcification of VSMCs. The CA1 protein level in rat VSMCs was analyzed using WB. The induction of calcification significantly increased CA1 protein levels in rat VSMCs (P < 0.001), and the AZ treatment significantly suppressed CA1 expression (P = 0.003, Figures 5A, B). These results confirmed that CA1 expression was elevated during cellular calcification. Compared to the control group, the levels of IL-6, IFN-γ, GM-CSF, and TNF-α in rat VSMC suspension were significantly increased after induction of calcification (P = 0.023, 0.002, 0.003, and <0.001, respectively). The AZ treatment significantly reduced the levels of IL-6, IFN-γ, GM-CSF, and TNF-α (P = 0.014, 0.026, 0.011, and <0.001, respectively; Figure 5C).
The levels of the other analyzed cytokines (IL-2, IL-4, IL-5, IL-10, and IL-13) were not significantly changed in the culture that received AZ treatment. The above results demonstrated that CA1 expression in VSMCs was accompanied by calcification and AZ suppressed this expression. AZ also inhibited inflammatory cytokine production during VSMC calcification. Rat VSMCs were transfected with anti-CA1 siRNA. Transfection with Allstars siRNA was used as a control. Compared with the cells with Allstars siRNA, the anti-CA1 siRNA treatment significantly reduced the CA1 expression level by 53%, indicating effective inhibition of CA1 expression by anti-CA1 siRNA in cultured VSMCs (P = 0.0046, Figures 6A, B). Following anti-CA1 siRNA transfection, the cell proliferation of rat VSMCs was significantly decreased compared with that of the cells transfected with Allstars siRNA (P = 0.0021 and 0.0067, respectively; Figure 6C), and the cell migration of VSMCs with anti-CA1 siRNA transfection was also significantly inhibited (P < 0.001, Figures 6D, E). Anti-CA1 siRNA induced a significant increase in the apoptosis of rat VSMCs compared with that of the cells with Allstars siRNA transfection (P = 0.0353, Figures 6F, G). Rat VSMCs were transfected with anti-CA1 siRNA in the presence of β-GP. The CA1 protein levels in rat VSMCs with siRNA transfection were measured using WB analysis. Compared with the cells with Allstars siRNA transfection, anti-CA1 siRNA transfection significantly reduced the CA1 protein expression, indicating siRNA inhibition of CA1 expression (P = 0.0013, Figures 7A, B). Compared with VSMCs treated with β-GP and Allstars siRNA, the AR-S staining revealed fewer calcified nodules in the cells that were transfected with anti-CA1 siRNA in the presence of β-GP (Figure 7C). Quantitative analysis with cetylpyridinium chloride also showed a significant decrease in VSMC cellular calcification upon β-GP induction when the cells were transfected with anti-CA1 siRNA (P < 0.001, Figure 7D). Following anti-CA1 siRNA treatment, the Runx2, ALP, and BMP2 transcription in rat VSMCs was significantly lower than that of cells that received Allstars siRNA and β-GP treatment (P < 0.001, < 0.001, and 0.002, respectively; Figures 7E-G). These results demonstrated that anti-CA1 siRNA inhibited the calcification of VSMCs and that CA1 expression mediated cell calcification. The change in cytokine production in VSMC supernatant was also measured using flow cytometry. Compared with the control cells transfected with Allstars siRNA in the presence of β-GP, the cells transfected with anti-CA1 siRNA had significantly reduced levels of IL-6, IFN-γ, GM-CSF, and TNF-α (P = 0.003, 0.002, < 0.001, and 0.007, respectively; Figure 7H). Meanwhile, the levels of the other analyzed cytokines (IL-2, IL-4, IL-5, IL-10, and IL-13) were not significantly changed following the treatment.

DISCUSSION
Calcification is a marker of AS and is used to predict AS severity (Rodondi et al., 2007). In the present study, CA1 protein was expressed at high levels in AS tissues of both aortic aneurysm and aortic dissection, with extensive calcification. This study also detected high CA1 expression levels in mouse AS aortas,
accompanied by rich calcium deposits. The formation of AS plaques results from the influences of several types of cells in the vascular wall, including vascular endothelial cells, lymphocytes, monocytes/macrophages, and VSMCs (Bennett et al., 2016). VSMCs account for approximately 70% of all AS cells (Gabunia et al., 2017). In this study, the induction of rat VSMCs with β-GP led to massive calcium deposition, a significantly increased expression of ossification-related genes, including ALP, Runx2, and BMP2, and increased CA1 expression levels. Furthermore, treatment of rat VSMCs with the CA inhibitor AZ significantly suppressed CA1, ALP, Runx2, and BMP2 expression and inhibited cellular calcification. Additionally, anti-CA1 siRNA treatment decreased VSMC calcification (induced with β-GP) and suppressed CA1, ALP, Runx2, and BMP2 expression. We also examined the expression levels of CA1 through CA10 in human and mouse AS aortic tissues. Only CA1, CA9, and CA10 had significantly increased expression levels in the animal AS tissues, and only CA1 had increased expression in human AS tissues. Furthermore, MTZ treatment inhibited only CA1 expression and did not significantly suppress CA9 or CA10 expression in the animal model. Thus, these results suggest that the increased expression of CA1 is related to vascular calcification and osteoblastic transformation of VSMCs. MTZ downregulates CA1 expression and inhibits aortic calcification. AZ is an inhibitor of CA1 activity. Many studies have measured the efficiency of AZ in terms of activity inhibition (Mincione et al., 2008; Masini et al., 2013; Solesio et al., 2018). In our study, we found that MTZ and AZ inhibited CA1 expression but did not inhibit other CA members in the AS animal model. In addition, CA1, rather than other CA members, had significantly increased expression in human AS tissues. We obtained similar results in our previous studies (Chang et al., 2011; Chang et al., 2012; Zheng et al., 2015). In those studies, AZ inhibited CA1 expression in 4T1 cells (originating from mouse breast tumors) and Saos-2 cells (originating from human osteosarcoma). This means that AZ not only inhibits CA activity but also decreases CA1 expression. AS plaques form as a result of smooth muscle cell proliferation and the deposition of cholesterol and other lipids, hydroxyapatite, and fibrous connective tissue; calcium deposition is a critical step in AS (Gamble, 2006). Normally, in adult humans, VSMCs mature and differentiate, with restricted proliferation and migration capabilities. Based on the results of a cellular function examination, upon inhibition of CA1 expression using anti-CA1 siRNA, the proliferation and migration as well as the calcification of rat VSMCs were reduced, while apoptosis was found to be increased. Thus, CA1 may also stimulate smooth muscle cell calcification, proliferation, and migration and suppress apoptosis to accelerate AS pathogenesis. We successfully established an AS mouse model by feeding ApoE−/− mice a high-fat diet. Sudan IV and Oil Red O staining revealed significantly less plaque formation in the MTZ treatment and preventive treatment groups than in the AS model group. Consistent with these findings, significantly higher serum HDL-c and NO levels were detected in the treatment and preventive treatment groups than in the AS model group. Therefore, MTZ has a therapeutic effect on AS in mice.
Additionally, von Kossa staining showed the presence of substantial atherosclerotic calcification in the aortas of mice from the AS model group, while markedly less calcification was observed in the other three groups. Meanwhile, Western blot analysis demonstrated that the level of CA1 protein was significantly higher in the aortas from the AS model group than in those of the control group, and CA1 expression levels were significantly lower in the MTZ treatment and preventive treatment groups than in the AS model group. CA1 immunohistochemistry also revealed high CA1 expression in the AS model group compared to the control group, and the regions with calcification had high levels of CA1 expression. Compared to the AS model group, CA1 immunoreactivity in the MTZ treatment and preventive treatment groups was significantly reduced. Von Kossa staining and immunohistochemistry also demonstrated colocation of CA1 expression and calcification in human aortic AS tissues. Calcification occurs very early in the process of atherosclerosis; however, it can be detected with imaging modalities only after it has increased in quantity. Cellular microvesicle release contributes to the development and calcification of atherosclerotic plaques (Alique et al., 2018; Andrews et al., 2018; Shioi and Ikari, 2018). Our results not only show that CA1 plays a role in AS by regulating calcification but also support the hypothesis that calcification plays important roles in AS progression. NO has an effect on AS and is a useful index in the AS animal model. Endothelial dysfunction is an important pathogenic mechanism in AS. As an important endothelium-derived relaxation factor, NO plays a role in cardiovascular protection and anti-AS function. Endothelial nitric oxide synthase (eNOS) disorder causes an abnormal production of NO, which may damage endothelial function and trigger AS (Hong et al., 2019). The present study revealed decreased NO levels in the AS animals and increased levels upon treatment with MTZ, which is in accordance with previous findings. MTZ could have an effect on organs such as the liver to regulate blood parameters. MTZ is a hepatic insulin sensitizer that lowers blood glucose to treat type 2 diabetes (Konstantopoulos et al., 2012; Simpson et al., 2014). Therefore, the effects of MTZ on blood parameters and plaque histopathology may be independent of the effects of MTZ on plaque calcification. However, another possibility is that the inhibition of plaque calcification by MTZ subsequently alleviates AS and thereby affects blood parameters. As shown in the present study, serum IL-6, IFN-γ, GM-CSF, TNF-α, CXCL1/KC, and CCL2/MCP-1 levels were dramatically increased in the AS mouse model. Compared to those in the AS model group, the serum levels of IL-6, IFN-γ, GM-CSF, TNF-α, CXCL1/KC, and CCL2/MCP-1 were significantly reduced in the MTZ treatment and preventive treatment groups, suggesting that MTZ not only suppresses calcification in AS progression but also alleviates the pathogenesis by downregulating the levels of these inflammatory cytokines. Furthermore, VSMCs with AZ treatment or anti-CA1 siRNA transfection also had decreased secretion of IL-6, IFN-γ, GM-CSF, and TNF-α in the culture medium. During AS, multiple inflammatory mediators stimulate VSMC proliferation and migration toward the intima (Bonomini et al., 2016). TNF-α is expressed in the endothelial cells, smooth muscle cells, and macrophages of AS tissues (Ridker et al., 2000).
TNF-α promotes the occurrence and development of AS by inducing endothelial cell damage, inhibiting fibrinolysis, promoting coagulation, increasing smooth muscle cell proliferation, and upregulating matrix metalloproteinase expression (Daugherty and Rateri, 2002; Kleinbongard et al., 2010). IFN-γ is an immune-activating factor and mainly affects the inflammatory response and cellular components in plaques in the AS pathogenic process (Koga et al., 2007). GM-CSF is secreted by macrophages, smooth muscle cells, and endothelial cells within the AS plaques. This cytokine is involved in angiogenesis within AS plaques and is closely associated with AS plaque stability and disease progression (Falk et al., 1995; Chen et al., 1999). IL-6 is secreted by vascular endothelial cells, macrophages, and VSMCs and is involved in the formation and stability of AS plaques (Verma et al., 2002; Schuett et al., 2012; Azancot et al., 2015). The chemoattractant effect of MCP-1 leads to the accumulation of numerous macrophages on artery walls and promotes plaque generation and development (Zeng et al., 2003; Wang et al., 2014). MCP-1 accelerates AS progression and is expressed at high levels in macrophages located in AS plaques in ApoE−/− mice (Aiello et al., 1999). CXCL1 plays critical roles in the recruitment of monocytes that leads to AS development (Weber et al., 2008). CXCL1 expressed in the blood vessel promotes macrophage accumulation and induces the capture of monocytes during early AS (Boisvert et al., 2006; Hartmann et al., 2015). Thus, the present results are in accordance with the findings of others and suggest the possibility that the high CA1 expression in AS increases IL-6, IFN-γ, GM-CSF, TNF-α, CXCL1/KC, and CCL2/MCP-1 production to stimulate AS progression. Plaque calcification develops via inflammation-dependent mechanisms in AS. Macrophages can undergo two distinct polarization states. Predominantly proinflammatory M1 macrophages promote the initial calcium deposition within the necrotic core of the lesions, which is termed microcalcification. Anti-inflammatory M2 macrophages may facilitate macroscopic calcium deposition, called macrocalcification. Macrocalcification leads to plaque stability, while microcalcification is more likely to be associated with plaque rupture (Shioi and Ikari, 2018). Microcalcifications appear to derive from matrix vesicles enriched in calcium-binding proteins that are released by cells within the plaque (Hutcheson et al., 2014). The present study demonstrated that IL-6, IFN-γ, GM-CSF, and TNF-α had increased production in AS animals and in cultured VSMCs with induced calcification. MTZ, AZ, and anti-CA1 siRNA decreased the levels of these proinflammatory cytokines. It is possible that CA1 expression plays a role in AS by stimulating calcification and elevating proinflammatory cytokine levels. However, we do not have a sufficient amount of data to demonstrate the involvement of CA1 in microcalcification formation. In summary, this study found that CA1 was expressed at high levels in calcified human and mouse aortic AS tissues. CA1 expression induced calcification of VSMCs and affected the cell proliferation, apoptosis, migration, and cytokine production. CA1 expression and CA1-mediated calcification were significantly associated with AS progression. MTZ treatment alleviated pathogenic progression in the AS model. By inhibiting CA1 expression with MTZ, AZ, or siRNA, IL-6, IFN-γ, GM-CSF, and TNF-α secretion was significantly decreased in cultured VSMCs and the AS mice.
These findings demonstrate the calcification mechanism in AS and suggest that the inhibition of calcification is key in treating AS. MTZ represents a potential treatment for AS.

DATA AVAILABILITY
All datasets generated for this study are included in the manuscript and the supplementary files.
Application of simple kinematic model from flexion movement of upper-limb with RGB-D camera perspective

This study describes the use of an RGB-D camera for the assessment of upper-limb movement in stroke rehabilitation patients. The assessment is carried out by comparing patient movements with simulated movements. The motion simulation is modelled by the kinematic model of a 6 DoF arm with extended flexion motion. Tests were carried out on 13 normal subjects with movement schemes that often appear in the rehabilitation process. The results show that using the 6 DoF model yields better accuracy and calculation time than using the 8 DoF model.

Introduction

A telerehabilitation system for partial stroke paralysis is important equipment in the current pandemic situation. Based on the data in [1], only 39.4% of patients undergo therapy routinely until they recover; the others do not return for treatment, or undergo treatment and therapy on an irregular schedule. The uneven distribution of health facilities is one cause; patients' awareness of stroke and economic factors also play a role. In designing a telerehabilitation system, a guide robot is needed, as previously designed in [2] [3]. These robots are divided into two groups, namely exoskeleton robots and end-effector robots. The end-effector robot allows freer movement, with a greater degree of freedom than the exoskeleton robot. The end-effector robot system that has been built [2] is not equipped with a feedback system, so the movement of the robot cannot adjust to the subject's movement. The required monitoring system must at least be able to represent all movements of the upper limb along three axes. During its development, such monitoring has been carried out by various methods. Du et al. (2018) used an IMU sensor to track upper-limb movements [4]. Meanwhile, Ref. [5] uses an EMG sensor to assess the strength of the arm and takes action if the subject is fatigued and unable to perform a movement. EMG-based systems are generally applied to single-axis setups, where it is quite easy to represent muscle strength along one axis. A monitoring system using a camera has also been used [6]; it relies on a positioned camera to assess hand movements on a table. Monitoring systems using RGB-D cameras have likewise been employed [7] [8]. In one implementation, [9] compares the motion captured by the RGB-D camera with a kinematic model of the upper limb. The problem that has not been resolved in the system of [9] is that several factors have not been modelled, such as the tilt of the camera towards the subject, the curvature of the elbow in a straight position, and the configuration in which the wrist, elbow, and shoulder lie in a straight line parallel to the camera. These factors cause errors in the camera readings in those cases, so this study adds compensation to overcome the problems of the previous paper. The model in the previous research also spans from the shoulder to the fingertips, which increases the time needed to process the assessment. This paper discusses the simplification of the model to six degrees of freedom, with the wrist as the end-effector. This simplification is intended to reduce the calculation time required for the assessment.
This paper is composed of an introduction containing the background, problems, and objectives to be achieved, followed by a method section containing the system design and the model simplification. For testing, a comparison with the previous model is discussed with respect to the accuracy obtained after adding some compensation, together with a comparison of the calculation times of the two designed models. The paper ends with conclusions and possible future research to be developed.

Methods

The designed system is a development of Ref. [9]. This system still uses an RGB-D camera of the Kinect v2 type to measure the coordinates of the subject's upper-limb motion when performing flexion-extension. Flexion-extension is a movement of the upper and lower arm in the sagittal plane.

Design System
The system design consists of an RGB-D camera and a computer as a signal processor. The camera used is the Kinect v2, the latest version of the Kinect. In the system design, the model is used as a reference. The flowchart of the proposed system can be seen in Figure 1. It starts with a camera set-up that calibrates the system to the patient's physical location and condition. From these two parameters, the camera is placed at the optimal observation point. After obtaining the optimal distance, the camera reads the coordinates of each joint to obtain the length parameter of each segment as input to the model. In the assessment process, the camera only reads the coordinate position of each joint during the flexion motion, which is used to describe the trajectory of the subject's upper-limb motion. The resulting path is then compared with the path from the simulation. The assessment is based on the deviation between the actual and simulated paths: the greater the deviation, the smaller the score obtained at each joint.

Upper-limb Kinematic Modelling
The purpose of the modelling is to assess the flexion motion of the subject. An illustration of the flexion movement and the paths resulting from reading this movement can be seen in Figure 2. There are three paths that must be observed to assess whether the flexion movements performed are correct, namely the wrist, elbow, and shoulder paths. These paths are represented on three axes (X, Y, Z) to accommodate the subject's movement in three dimensions. The green path belongs to the wrist, the red one to the elbow, and the blue one to the shoulder. To describe this flexion motion, it would actually be enough to use one degree of freedom located at the shoulder, whose angle is incremented to obtain the coordinates of the end-effector. However, such a model cannot accommodate the differences in arm structure between subjects. These structural differences are caused by the history and condition of each subject, which means that each subject's arm is not completely straight. Here, h1 is the height of the chest coordinate, which is level with the camera height; a1 is the distance between the camera and the center of the chest; a2 is the shoulder length, i.e., the distance between the center of the chest and the shoulder; a3 is the length of the upper arm, i.e., the distance between the shoulder and the elbow; and a4 is the length of the lower arm, i.e., the distance between the elbow and the wrist. From the model in Figure 3, the Denavit-Hartenberg (DH) parameters are obtained, as shown in Table 1.
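As a concrete illustration of how DH parameters map joint angles to the wrist position, the following is a minimal forward-kinematics sketch in Python using the standard DH convention; the DH values in the example are placeholders for illustration, not the entries of the paper's Table 1.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint in the standard DH convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def wrist_position(joint_angles, dh_rows):
    """Chain the per-joint transforms and return the end-effector (wrist)
    position in the base/camera frame.

    dh_rows holds one (d, a, alpha) tuple per joint.
    """
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_rows):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

# Placeholder 6 DoF chain; segment lengths (e.g., a2, a3, a4) in metres
dh_rows = [(0.0, 0.0, np.pi / 2), (0.0, 0.20, 0.0), (0.0, 0.0, np.pi / 2),
           (0.0, 0.30, 0.0), (0.0, 0.0, np.pi / 2), (0.0, 0.25, 0.0)]
print(wrist_position(np.zeros(6), dh_rows))
```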
The parameters of the six joints are used to obtain the coordinates of the wrist, elbow, and shoulder by applying the forward kinematics transformation method. The transformation converts changes in the angles θ1-θ6 into changes in the coordinates of the end-effector. The coordinate position of the wrist can be calculated by multiplying the transformation matrices of the individual joints, where θi is the joint angle formed from the xi−1 axis to the xi axis following the right-hand rule, di is the offset of the coordinate frame along the zi axis, ai is the distance between the zi and zi+1 axes along the xi axis, and αi is the angle between the zi and zi+1 axes about the xi axis. The first joint is used to tolerate reading errors due to the angle formed between the camera and the chest plane. This angle arises because, to facilitate system implementation, the camera is only positioned at an optimal distance from the subject. This optimal distance is sought by minimizing the error between the RGB-D camera measurements and the actual values. The camera height is set level with the center of the chest using a tripod. The system set-up is completed by displaying the measurement results on the monitor according to predetermined parameters. The calculation of the score is based on the deviation between the simulation and the values that were read: the greater the deviation value, the smaller the final score. The deviation is calculated using a point-to-line distance measurement, as in equation (1). FS is the final score, which represents the quality of the movement made by the subject. The value of FS is in the range 0-100; greater values indicate that the movement approaches normal upper-limb movement.
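Equation (1) itself is not reproduced in this extraction, so the following is a hedged sketch of how a point-to-line deviation can be turned into a 0-100 joint score; the linear mapping and the d_max scale are assumptions for illustration, not the paper's exact formula.

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """Shortest distance from point p to segment a-b (all 3-D arrays)."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def joint_score(measured, simulated, d_max=0.10):
    """Map the mean deviation of a measured joint trajectory from the
    simulated path to a 0-100 score.

    measured, simulated : sequences of 3-D points along each trajectory
    d_max               : deviation (metres) at which the score reaches 0;
                          an assumed scale, not the paper's equation (1)
    """
    devs = [min(point_to_segment_distance(p, simulated[i], simulated[i + 1])
                for i in range(len(simulated) - 1))
            for p in measured]
    return 100.0 * max(0.0, 1.0 - float(np.mean(devs)) / d_max)
```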
Result and Discussion

Several tests were carried out on the proposed model. Tests were performed on 13 healthy subjects, consisting of 2 women and 11 men, aged 17 to 24 years and in ideal physical condition. Figure 4 shows the result of running the program on a subject with normal movement. It can be seen that for normal movement, the overall score is 72.22, the shoulder score is 58.15, the elbow score is 79.98, and the wrist score is 78.54 on a scale of 0-100. From these values, the minimum threshold of a normal upper-limb movement is determined: if the score obtained is lower than the threshold, the movement is treated as abnormal; conversely, movements that score higher than the threshold can be classified as appropriate (back to normal). In Figure 4, it can also be seen that the farthest deviation from the model occurs at the position where the end-effector is marked with a red node. From this node, it is hoped that the therapist will know which part of the movement needs to be improved. The second test compares the two kinematic models, namely the model with 8 degrees of freedom and the model with 6 degrees of freedom. The test results can be seen in Table 2. On average, the 6 DoF model requires relatively less time than the 8 DoF model, at 2.35 s, although some data show that the time required for 6 DoF is greater, such as for the 5th subject. This is because the number of points in those data tends to be larger than in the others; these extra points are recorded because the subject moves the hand too slowly. Table 3 compares the scores obtained from the same 13 datasets with the two models. It can be seen that, overall, with the addition of this compensation, the average score of the 6 DoF model, 74.41, is better than that of the 8 DoF model.

Conclusion

From several tests, it can be seen that the proposed model achieves better accuracy and calculation times than the previous model. Although this does not hold for every dataset, the overall results represent the data as a whole. The model can therefore be used for the telerehabilitation system to be built, and it may also be combined with a wireless IMU sensor to obtain a better assessment.
Managing autonomy in university–industry research: a case of collaborative Ph.D. projects in the Netherlands

Research partnerships between university researchers and industry partners are becoming increasingly prevalent. For university researchers, maintaining autonomy is crucial. We explore how researchers strategically manage autonomy in collaborative research partnerships, using a framework to distinguish strategically planned and opportunity-driven behaviour in the process of selecting partners and executing research in partnerships. We then focus on the management of autonomy in setting research directions and managing the research process. We draw on insights from 14 management scholars engaged in collaborative Ph.D. research projects. Based on our analysis, we show that researcher autonomy has two facets: operational and scientific. Researchers are willing to compromise their operational autonomy as a price for industry collaboration. They have a strong need for scientific autonomy when deciding on research direction and research execution. Although they need funding, entering a specific relationship with industry and accepting restrictions on their operational autonomy is a choice. We conclude that researchers' orientations towards practice and theory affect their choices in partnerships as well as modes of governance.

Introduction

In collaboration with industry, a key goal of researchers is to produce scientifically credible knowledge (Merton 1957). Thus, autonomy in deciding on scientific aspects of research is crucial to researchers (Zalewska-Kurek et al.). Threats to scientific credibility are perceived as a significant barrier to starting a university-industry (U-I) partnership (Ramos-Vielba et al. 2016). Even in collaborations where partners are carefully selected (Steinmo and Rasmussen 2015), U-I partnerships may threaten autonomy (Estrada et al. 2016). If it is true that autonomy is under threat in U-I partnerships, then a relevant research question is: How do researchers strategically manage autonomy in U-I partnerships? It has been argued that autonomy is a factor that influences researchers' performance (Trevelyan 2001). Job characteristics theory (Hackman and Oldham 1975) also underlines that autonomy is a key driver of motivation, satisfaction and performance. In U-I partnerships, strategic positioning theory is particularly salient, because it highlights the interplay between autonomy and resource dependence between researchers and industry partners (Zalewska-Kurek et al. 2007; Zalewska-Kurek et al. 2010). To answer our research question, we augment strategic positioning theory (Zalewska-Kurek et al. 2007) by adopting an entrepreneurial process model that distinguishes between project selection and project execution (Bingham et al. 2014). We analyse how autonomy is managed in the context of resource interdependencies in the project selection and project execution phases. This research context is relevant, because social scientists' industry engagement has reached similar levels to natural scientists' and engineering scientists' engagement (Olmos-Penuela et al. 2014). Research into how researchers manage autonomy may lower the barriers to U-I partnerships for social scientists. For practitioners, our research seeks to make U-I partnerships more effective by providing suggestions on how researchers can address their key need for autonomy. Our primary theoretical contribution is the enrichment of strategic positioning theory by addressing its application in two phases of the U-I research process.
U-I partnership types and their implications for research productivity U-I partnerships include collaborative research, contract research and consulting (Perkmann and Walsh 2007). While collaborative research emphasises knowledge generation, contract research covers commercially relevant subjects. The consulting channel is mainly transactional knowledge transfer initiated by a firm. These are short-term projects that accentuate both research and commercialisation (D'Este and Perkmann 2010). Academic consulting takes many forms, such as research-driven consulting (when researchers want to learn and validate scientific assumptions in collaboration with industry), commercialisation-driven consulting (aimed at sharing a researcher's knowledge on developing technology) and opportunity-driven consulting (driven by monetary compensation) (Perkmann and Walsh 2008). Research shows that engaging in knowledge transfer via patenting, scientific excellence (high research productivity) and entrepreneurial performance (high research budget) reinforce one another (Van Looy et al. 2004). Further, evidence suggests that applied research does not necessarily compromise basic research (Van Looy et al. 2004). Autonomy and strategic interdependence in U-I partnerships Autonomy refers to the freedom to decide on research subjects, research goals and research execution (Zalewska-Kurek et al. 2007). High autonomy means that a researcher is able to conduct their own research without external pressures (Zalewska-Kurek et al.). However, autonomy decreases when an industry partner directs and is strongly involved in a research project (Trevelyan 2001). This reduction may be counterbalanced when a partnership involves shared goals and both partners are committed and agree on the research direction from the outset of a project. Autonomy also decreases for those lower in the scientific hierarchy (Zalewska-Kurek et al.). Ph.D. researchers tend to be less autonomous, because they are usually appointed to execute a project designed by a senior researcher or a firm. Ph.D. researchers have limited scope to change the research direction. However, since they are training to become independent researchers, they should develop their competencies in research (Lee and Miozzo 2015). Ph.D. researchers seek to influence their supervisors and other project stakeholders. Strategic interdependence is defined as the need to share heterogeneously distributed assets (resources and competences). Examples include knowledge, experience, judgment, skills, social capital, access to networks, funds, research facilities or means to publish research results (Haspeslagh and Jemison 1991; Zalewska-Kurek et al. 2016). When a partner falls short in at least one of these assets, a collaboration may be sought to fill this gap. Researchers seek external funding (Wilts 2000), access to resources such as facilities (D'Este and Patel 2007) and knowledge (D'Este and Perkmann 2010). Firms seek access to state-of-the-art technologies and applicable research results (Perkmann et al. 2011). Sharing heterogeneously distributed resources is a necessary condition for any partnership (Kale and Singh 2009). Researchers and industry partners need one another's resources to accelerate innovative knowledge production (Perkmann et al. 2013).
Drawing on resource dependency theory (Pfeffer and Salancik 1978), we argue that a successful research partnership depends on the alignment of the need for organisational autonomy and the need for strategic interdependence. While resource dependency theory addresses interdependence, which includes power (Pfeffer and Salancik 1978) or mutual dependence and power imbalance (Casciaro and Piskorski 2005), we use autonomy as a central concept in academia. The degree of autonomy indicates the extent to which an industry partner influences a researcher's research activities. Combinations of the need for autonomy and the requirements of strategic interdependence result in four archetypes of researchers' behaviour concerning U-I partnerships (see Fig. 1: researcher behaviour modes, adapted from Zalewska-Kurek et al.). Mode 1 (ivory tower) researchers have a strong need for autonomy (Gibbons et al. 1994). They have a low need to access others' resources, and do not engage with industry. Instead, they remain focussed on purely academic interests. Mode 2 researchers have a strong need for external resources, but little need for autonomy. Thus, they allow industry to influence their research. Mode 2 researchers comply with an industry partner's demands rather than exerting a strong influence on research projects. Mode 3 researchers have a strong position in U-I partnerships, influencing decisions while also considering industry partners' perspectives. To this framework, we add the dimensions of focus and flexibility. Researchers can act in a focussed or in a flexible way when selecting opportunities, i.e. research projects and partners. They can also act in a focussed or in a flexible way when they execute opportunities, i.e. carry out research (see Sect. 2.3). Since the need for strategic interdependence is high in the context of U-I partnerships (otherwise, U-I partnerships would not emerge), we focus on Modes 2 and 3 (Fig. 2). The behaviour modes provide insights into researchers' behaviours in different phases of a research partnership. By analysing the interplay between interdependence and autonomy, we can arrive at conclusions about how researchers manage their autonomy in these partnerships. Selection and execution in U-I partnerships A U-I partnership involves a dynamic process between researchers and industry partners (Estrada et al. 2016). To capture these dynamics, we used Bingham et al.'s (2014) division of the entrepreneurial process into the phases of opportunity selection and opportunity execution. According to Bingham, a firm can operate in a focussed (strategically planned) or flexible (opportunity-driven) way when selecting opportunities to enter new markets and when executing its strategy in these new markets. We will translate the concepts of opportunity selection/execution and focus/flexibility from the business domain into the realm of researchers. In the research context, opportunity selection refers to a researcher's choice about which partner/project to cooperate with. This can either be focussed or flexible. In focussed opportunity selection, a researcher selects projects or partners that fit their own strategy (based on their long-term research interests). Flexibility means that a researcher is willing to compromise research interests as long as a collaboration appears to be promising.
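To make the two-dimensional framework concrete, the following minimal sketch maps the two needs onto the archetypes of Fig. 1. It is purely illustrative: the boolean representation, class name and function name are our own assumptions, not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass
class Researcher:
    """Illustrative profile; both attributes are assumptions for this sketch."""
    need_for_autonomy: bool          # strong need to control direction/execution
    need_for_interdependence: bool   # strong need for partners' resources

def behaviour_mode(r: Researcher) -> str:
    """Map the two needs onto the archetypes of Fig. 1.

    Mode 1 (ivory tower): high autonomy, low interdependence.
    Mode 2: low autonomy, high interdependence (complies with industry demands).
    Mode 3: high autonomy, high interdependence (strong position in U-I work).
    The low/low cell is not elaborated in the text, so it is labelled generically.
    """
    if r.need_for_autonomy and not r.need_for_interdependence:
        return "Mode 1 (ivory tower)"
    if not r.need_for_autonomy and r.need_for_interdependence:
        return "Mode 2 (industry-led)"
    if r.need_for_autonomy and r.need_for_interdependence:
        return "Mode 3 (strong position)"
    return "low autonomy / low interdependence (not discussed)"

# U-I partnerships presuppose high interdependence, so Modes 2 and 3 are the
# relevant cases, matching the paper's focus.
print(behaviour_mode(Researcher(True, True)))   # Mode 3 (strong position)
```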
In the research context, opportunity execution refers to operational issues that occur in a research project, such as choices of theory, method, organisational aspects and publishing strategy. In focussed opportunity execution, the researcher initially planned most details and the execution follows this plan. Flexibility means that a researcher can react flexibly to opportunities and risks that emerge during a research project, for instance by adapting the theory, method, organisation and publishing strategy. We will now focus on how autonomy and strategic interdependence are managed by researchers in the opportunity selection and the opportunity execution phases. We investigate relationships between autonomy/interdependence and focus/flexibility in the two phases of a research project. We do not focus on the performance implications of these combinations for a sequence of research projects. Figure 2 illustrates our research model. Case selection and data collection We collected data from a range of Dutch universities. We conducted 14 interviews with professors (lead scientists) and Ph.D. researchers as key agents in executing research. We analysed 11 cases (projects) from four universities (Janßen 2016). We selected the academics based on their involvement in industry-related projects. Here, industry is understood in its broader context, and includes societal partners such as commercial firms, associations of firms and professionals as well as large public national and European organisations that are organised like firms (excluding funding agencies). We used the context of management research as an example of the social sciences. We aimed for heterogeneity in the range of projects, because U-I ventures vary from being purely sponsored by industry to consortia drawing on both public and private funding. We interviewed researchers who had established different partnership types with industrial partners at different engagement levels. Some firms had a well-defined managerial problem and wanted the researcher to solve this by delivering an applicable, science-based solution. Some firms asked for a solution but were not interested in a scientific outcome. Others sought a generalised response derived from a problem at their firm that they wished to understand, seeking to learn from the research insights. In some projects, researchers proposed an academically driven question that could be related to the firm. Researchers were involved in at least one U-I project at the time of the interviews. The interviewees answered open-ended questions about one of their most recent and most representative collaborative research projects. All but one reported on a project in progress. Thus, retrospective bias was small. All projects were long-term Ph.D. projects or short-term projects embedded in Ph.D. projects. We gathered the data in semi-structured face-to-face or Skype interviews. Before embarking on the interviews, we checked the activity profiles of academics and their websites, to become familiar with the interviewees and to prepare questions that drilled down into specific aspects of their research behaviours. We collected additional data from publication records and social media profiles (LinkedIn) to augment information about their career orientations (for instance, we checked whether they had been or were currently engaged in consulting). We chose a researcher-centred perspective and did not include the industry partner, since our research question focusses on researchers' strategies (Table 1).
Data analysis We applied a mainly deductive approach (Fereday and Muir-Cochrane 2006; Mayring 2000) by drawing on established theories and concepts. We used the codes (or categories) (e.g. Gläser and Laudel 2013) for the thematic analysis, as outlined in Table 2. We deduced most of the codes from our theoretical framework. This framework was then exposed to data from the semi-structured interviews and revised on the basis of the interview results. Autonomy in the opportunity selection phase is high when the researcher determines partner selection and has a strong influence on the choice of the research topic. Autonomy in the opportunity execution phase is high when the researcher determines all aspects of executing the research, such as the choice of theory, method and results dissemination.
Table 2 Codes used in the thematic analysis
Code 1. Label: Strategically planned behaviour (Bingham et al. 2015). Definition: behaviour that focusses on the long-term planning and long-sighted decision-making of one partner with the goal of mutually enhancing their own resource base and achieving specific goals. Description: a focus chosen by a partner that is characterised by a long-term perspective on the part of the researcher and goal attainment; formalisation supports the strategic focus. Indicators: the reason to choose the partner was the good fit with the researcher's own research programme; low willingness to compromise in terms of research direction and outline; disagreements that indicate distinct goals and strategies followed by partners.
Code 2. Label: Opportunity-driven behaviour (Bingham et al. 2015). Definition: behaviour that is driven more by the short-term capturing of emerging opportunities and that focusses, besides mutual value creation, on more direct valorisation of project deliverables for both sides of the partnership. Description: opportunity potential as a driver makes a partner act with greater flexibility and adaptive response; less formalisation allows for manoeuvrability in the execution phase. Indicators: the reason to choose the partner was not only the fit with own expertise; great willingness to compromise in terms of research direction and outline.
Code 3. Label: Strategic interdependence. Definition: each partner's dependence on the counterpart's resources, assets and capabilities. Description: high degree: a reciprocal relationship with mutual dependencies, sharing many resources; the industrial partner sponsors research and provides access to data. Low degree: a unilateral relationship, with the greatest benefit for one partner; the researcher does not depend greatly on the firm's resources to undertake the research. Indicators: the need for strategic interdependence is indicated by the need to access resources, assets and capabilities without which researchers could not perform their research; examples of resources and capabilities are internal data access, financial resources, access to organisational facilities, access to contacts, social networks, skills and knowledge of both the organisation and the researcher.
Code 4. Label: Autonomy (Zalewska-Kurek 2016). Definition: the researcher's freedom to decide on the research direction and to conduct the research, but with the (continuous) support of the firm and the environment. Description: high degree: having full power and influence over the decisions concerning research direction and the execution of the research. Low degree: the industry organisation influences the context and the research directions by making decisions; the research takes place within highly formalised boundaries. Indicators: who makes decisions on the research direction and project outline; who proposes changes; time spent on activities not directly related to the joint project but required by the partner; confidentiality clauses and other influences on an intended publication; frequency and content of progress meetings on the project; the researcher's independence in conducting the research (e.g. deciding on the method); the extent of practitioner-oriented deliverables offered by the researcher.
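For readers who organise such qualitative analyses computationally, the codebook in Table 2 can be represented as a simple lookup structure. The sketch below is purely illustrative: the dictionary layout, field names and abbreviated indicator strings are our own choices, not part of the paper's method.

```python
# A minimal codebook sketch for the deductive thematic analysis of Table 2.
# Labels and definitions paraphrase the table above; everything else is an
# illustrative assumption.
codebook = {
    1: {
        "label": "Strategically planned behaviour (Bingham et al. 2015)",
        "definition": "Long-term planning aimed at specific goals",
        "indicators": ["partner fits own research programme",
                       "low willingness to compromise on research direction"],
    },
    2: {
        "label": "Opportunity-driven behaviour (Bingham et al. 2015)",
        "definition": "Short-term capturing of emerging opportunities",
        "indicators": ["partner chosen beyond fit with own expertise",
                       "great willingness to compromise on research direction"],
    },
    3: {
        "label": "Strategic interdependence",
        "definition": "Dependence on the counterpart's resources and capabilities",
        "indicators": ["need for data access, funding, facilities, networks"],
    },
    4: {
        "label": "Autonomy (Zalewska-Kurek 2016)",
        "definition": "Freedom to decide on research direction and execution",
        "indicators": ["who decides on direction and outline",
                       "confidentiality clauses affecting publication"],
    },
}

def indicators_for(code: int) -> list[str]:
    """Look up the observable indicators used to assign a code to a segment."""
    return codebook[code]["indicators"]

print(indicators_for(4))
```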
Strategic interdependence of the researcher is high in the opportunity selection phase when there is a promise of a reciprocal exchange of research resources. In the opportunity execution phase, strategic interdependence is high if there is a de facto exchange of resources. We then translated the concepts of strategically planned and opportunity-driven behaviours (Bingham et al. 2014) to the research project level. Strategically planned behaviour in the opportunity selection phase means that researchers approach a firm to join or initiate a project that fits their specific long-term research programme. Opportunity-driven behaviour in the opportunity selection phase is indicated by acceptance or initiation of a project that falls within the researcher's competencies but may not reflect long-term research interests. Strategically planned behaviour in the opportunity execution phase is indicated by a research process that deviates little from what was initially planned. Opportunity-driven behaviour in the opportunity execution phase refers to a research project that exhibits flexibility. We also allowed for new categories to emerge (inductive element) (Fereday and Muir-Cochrane 2006; Mayring 2000). An interesting category emerged from the interviews: time spent on activities not directly related to the joint project but required by the partner. It came to our attention when hearing about Ph.D. researchers who were embedded in companies and were required to spend time working for the firm, to the detriment of their research. We expanded our analysis with this category. Autonomy and opportunity-driven project selection Autonomy in choosing collaborations appeared to drive the researchers' project selection. For instance, researchers who seemed more practice-oriented often engaged in consultancy projects or integrated (short-term) consultancy projects into their research. These quotations exemplify such behaviour. Asked how he selects research projects, a senior researcher replied: I am always very open-minded. And sometimes people trigger me, and people are triggered by me and then something might happen. So as an academic to increase fortune, you should be a very active networker. You should go to meetings, you should go to conferences (R7: 61-64).
In ten percent of the cases, I am the one who goes out and invites people [to join a research project]. In eighty to ninety percent of the cases, I am being invited. And then it depends on your capacity and your real interest, and on the energy that you feel with a person, whether you engage with that person on that project or not. (R7: 74-77). So, these younger colleagues say to me '(…) is this going to lead to any publication?' and I said: I don't know; I know we are going to do a survey which has a practitioner relevance; it isn't very theory-driven, but it creates a lot of contacts (R7: 89-91). It's not always the case that you can do research that leads to publications. Sometimes you have to do other research which leads to money, to income, but doesn't necessarily satisfy the basic needs of an academic in terms of publication (R7: 106-109). This researcher (R7) is fairly opportunity-driven when deciding on which projects to accept. Although he describes a project's appeal along with the other partners' engagement as a catalyst for collaboration, his long-term planning was not defined. He intentionally keeps his long-term perspective open and wide in scope. He also sees contract research as an opportunity to generate further contacts for prospective research (R7: 89-91). Opportunity-driven researchers often had a broad perspective on what constitutes an opportunity. They were willing to accept proposals for research that fitted their loosely defined research interests rather than a specifically designed research programme. For instance: He explained some of the things that they were working on and that fits to some extent pretty well with what I'm doing. So, we went by that organization to discuss what they were doing and what their future development were etcetera. And then along the way we found that, or we basically asked: what can we do together? (R12: 8-12). Autonomy and strategically planned project selection Theory-driven researchers more often engaged in projects that fitted their long-term research programmes. They showed strategically planned behaviour. For instance: There are many, many research opportunities out there. So, if you really like sort of going after the opportunity, you probably end up with all kinds of research projects that are not really in line of what you actually want to do. So, I am always very careful in what I do, in which projects I actually accept for companies. And if they don't fit my own research interest, my research lines, I am not going to do them (R8: 392-396). The cases differ in the decision-making process on whether to engage with a firm on a certain project, as well as in strategic orientation. The abovementioned researcher (R8) showed strategically planned and autonomous behaviour in choosing only research projects that suited his area of expertise and core research. He had clear expectations on how a project should generally be outlined. He anticipated that his contribution would remain theoretical without designing managerial interventions to be executed within the firm. The interventions were the responsibility of the firm or a consulting business mandated by the firm. In addition, R8 made a project conditional on the research being published, with no obligation on him to engage in contract research.
This requirement can be connected to this researcher's belief that universities and their research programmes are increasingly influenced by firms, which can have a detrimental effect on researcher integrity. Autonomy and mixed behaviour in project selection Strategically planned and opportunity-driven behaviours form two ends of a continuum. The two researcher types we presented demonstrate clear-cut behaviours. However, most respondents fell somewhere between these two ends. A few researchers agreed to specific projects that were close to their research area because they offered money for a new Ph.D. researcher and, in the end, would provide scientific output. In sum, new projects and new publications would lead to new knowledge. In a competitive research funding landscape, researchers may be opportunity-driven. On the basis of our observations and knowledge of the science system, rather than explicit statements from respondents, we identified career stage as a factor that affects researcher behaviours. The boost Ph.D. researchers can give to scientific production is an additional resource for more senior researchers. Ph.D. researchers are seldom employed full-time by a university in joint projects. In U-I projects, an agreement is often made that a Ph.D. researcher will work part-time for a firm. For instance: And then we said that maybe it would be beneficial if they would have someone working there, who is also, next to working there, is also an advisor also does PhD research. So, two days a week for PhD research, three days a week just doing actual work. But of course, there is an overlap between the two (R12: 12-15). Ph.D. researchers also display either strategically planned or opportunity-driven behaviours when entering their Ph.D. trajectory. Their role in the process should not be overlooked. All interviewed Ph.D. researchers were interested in their research's practical relevance. However, the respondents with an academic career orientation struggled with the work they were required to perform for a firm. Those with a practical career orientation accepted practically oriented projects, and there seemed to be no friction. In particular, they accepted a firm's operational work conditions. Such work includes part-time work for a firm to solve its problems and work on company projects not necessarily related to their own Ph.D. project. For instance, a Ph.D. researcher said: I have little academic ambition in fact. So, publishability is not a big matter for me, specifically. My supervisors like it, but I think by now they also understood that it's better to let me do what I want to do than try to put something out of it. And what I want to do is I want to help this organization which really has a problem, or rather I would say has an opportunity actually. (PhD6: 12-17). Focus and flexibility The researchers were fairly flexible and opportunity-driven concerning research execution. One reason was that their projects were often not clearly defined ex ante. If there was no clear agreement on the research direction, there was scope for ambiguity and friction, which intensified in the project's execution phase. Researchers could be focussed (strategy-oriented) in this phase in terms of operations such as formalised deliverables, but remained flexible (opportunity-driven) in terms of conducting research (e.g. data collection). This suggests that their autonomy has two facets: operations and academic freedom.
Our data also show that strategically planned behaviour (research goals set in the opportunity selection phase) remained unchanged (strategy-driven opportunity selection) but could be executed flexibly (research subgoals and conduct), notwithstanding the high formalisation required by a firm. We will now elaborate on flexibility and provide examples when analysing researcher autonomy. The need for autonomy In the opportunity execution phase, researchers maintained significant autonomy. However, some projects were more formalised than others, and researchers had to comply with the partnership agreements. Formalisation is usually related to process rather than content. Researchers were fairly flexible concerning content, but were nonetheless required to report their findings to a firm. For instance, some projects had a clear plan with objectives, deliverables and prearranged evaluation meetings, while others had a structure that was less strict and less clearly defined. Particularly in relation to the content: We had a generic idea and a good hunch that we could setup an interesting project. But we didn't set it in stone, so it was more like an organic way. Because there were quite some risks in the project. So, there was the risk that we could not develop the tool or had no tool. It all went well; we also got access via another firm to a huge database with customers. So, actually, it was a risky project in a sense that the outcome dependent on the input of at least four firms. (R4: 110-115) In relation to the process, researchers tended to be in charge of setting milestones and leading the process: It's our project, completely ours, because you saw it already in the first meeting we had. We were setting the meeting, and they 'okay, just come', we made the presentations, we said these are the three studies, and it was all fine. And the student thought 'what will they think about, what are the requirements'. We set the requirements (R9: 123-127). Formalisation concerned a general project outline, its feasibility and matters of practical implementation. A researcher pointed out that if formalisation was too strict, this could harm the research process: formalisation tends to be time-consuming and limits the ability to react flexibly to contingencies. Because, in all research projects, you get deviations from what you actually set out to do, and the more you formalise in the beginning, the more you get into this kind of 'This is not what we supposed to do', 'no, because we couldn't do this, because you didn't have the data' or 'we changed this'. So, it's like a new product development project; the more you formalize in the beginning the less degrees of freedom you have and the more friction you get at some point during the process (R8: 250-255). Further, it is often not possible to formulate a specific research outcome ex ante. Instead, there should be consent on the research subject and how the research is to be conducted, so that data sources can be identified. To access data, a project partner's hierarchical position is key. A high hierarchical position fosters data collection via easier approval mechanisms and the need to involve fewer employees. An interviewee stated that discussions with firms during research projects tend to be confined to questions concerning necessities and firms' desire to expedite a project. Also, it is often difficult for company partners to offer input on subjects with high abstraction and high complexity.
In response to these constraints, regular feedback meetings were held to ensure coordination. However, informal meetings between a researcher and a company representative were much more frequent, and served as a mechanism for socialising within the firm and project coordination. Such coordination is necessary for the effective deployment of complementary resources. It is key to reach agreement in the initial stage of a partnership. Our interviewees indicated that they were able to achieve a shared understanding with their partner early on. Most firms realised that the openness of research requires identification and resolution of issues during the execution of a research project. The outcomes of research cannot be predicted, and research direction may change owing to better theories or simply the availability of data. You should be very careful that the industry is not dictating what you research and how you do this, because then they will also get a say or an impact on what you are actually allowed to report and not to report. You should always maintain your academic integrity in this instance (R8: 450-453). In (only) two cases, the research direction and the firm's expectations were too vaguely defined, or a firm implicitly expected consulting services. This led to ambiguity or friction. In a case without clear research directions, the researcher reported a high level of interference by the firm in the research, describing the firm as very bureaucratic and hierarchical, with very formal procedures. The firm also had no experience with research publications. This led to more control from the firm. We will now consider autonomy in relation to scientific integrity. Formalisation of a partnership was not seen as destructive but as facilitating if the research methodology was not compromised. The interviewees specified the need for a shared understanding of the project outline, a clear focus that eliminated distractions and generated commitment, and consent among partners on how to conduct research with high autonomy levels assured. Autonomy may be restricted by an industrial partner in the selection phase. This was the case in two short-term projects. Here, each firm wanted a solution to its problems (practice-oriented projects), yet the researchers enjoyed autonomy in the execution phase, since the firms did not interfere with the research methodology. However, this was not the case for all Ph.D. researchers. They were embedded in projects that were already designed, and depended on their supervisors and on their industry partners for financial support and data to conduct their research. If they were embedded in a firm, their autonomy was more restricted than that of their supervisors in relation to the industrial partners. Ph.D. researchers, in return for receiving finance for a Ph.D. trajectory, are often required to work part-time for a firm, or to maintain a physical presence at the firm. They often engage in operational tasks or other projects that are not always related to their research interests. This means less time for research and publishing, infringing on a researcher's autonomy: What I know from my PhD student, in the beginning she really liked to be there. So, she liked to have this both doing more practical things and doing the research. But after time, she got fed up with it, because she had this urgency of 'okay now, I have to do a couple of things'… But it was also for example that she really wants to publish, and the entrepreneur is not really interested in that… and wants more practical things.
And, of course, the pressure increases of getting her scientific deliverables at a certain moment. That kind of diverted. But in the beginning, in fact, the entrepreneur was her objective study, so, for her, it was wonderful to be there. It is not only doing an interview; she was there and she could observe what happened, etcetera. So, she really liked that part for that reason, but at a certain moment, she knew how it was going. And there was always more practical work, which also kept her from a couple of other things (R11: 133-143) One Ph.D. researcher struggled to manage the expectations of both the academic and the industry partners (in this case, several firms and organisations) in the project. This negatively impacted his research and restricted the time he could spend on delivering academic output. In one case, a supervisor and a Ph.D. researcher had different perceptions concerning part-time work. The senior researcher said "there is an overlap between the two" (R12: 15), while the Ph.D. researcher claimed that the work was simple operational work that was entirely unrelated to his Ph.D. Apparently, the senior researcher's expectations were not in line with what had been discussed in the opportunity selection phase. The Ph.D. researcher, whose ambitions were academic rather than applied, became dissatisfied. Sharing time between a university and industry caused delays in Ph.D. projects. Delays in Ph.D. processes also resulted when the industry partners did not deliver what they had promised or when they asked for additional tasks to be undertaken. On the other hand, embedding Ph.D. researchers in collaborative projects was a key consideration, since Ph.D. researchers have competences that are valuable for all partners in a partnership. Another factor that delays research projects is the requirement that papers based on a firm's data are sent to the firm for approval prior to submission to a conference or journal. If we write a paper, we always send it to the firm… 'This is what we are going to submit. Do you agree? Yes or no?' Not content-wise, because that is out of the discussion, but in the way that for example, the business setting or the firm setting is described (R8: 355-362). While firms did not ask for information to be deleted from papers, it took them a long time to process these papers. This requirement stems from the confidentiality the firm demands: the data has to be treated anonymously. Confidentiality was an issue in only two projects. One example was a project with a multinational corporation that had clear guidelines to postpone research publication for a certain period. The need for strategic interdependence All the relationships that the interviewees entered into with industry were driven by the need for resources. Access to data (i.e. a firm's database, client contacts and data collection via interviews) was regarded as the key contribution offered by a firm. One researcher noted that data quality and availability are even more important than funding. Data access is the most important asset they gave us. Even in this case, that if they didn't finance the research position, the PhD position, then we would still go ahead with financing for the PhD in any other way, because, you know, getting financing is less difficult than finding good data access (R8: 82-85). In (only) two cases did a firm's employees create difficulties by delaying, restricting or blocking access to data, with negative consequences for the researchers.
In seven projects, firms shared not only their data, but also their network and contact information. This was very valuable to researchers. Most projects were financed or co-financed by firms. While firms in most cases contributed financially to a project (not necessarily financing the whole project), further contributions were mostly based on data provision and access to further contacts that were valuable to the researchers. Although companies that hosted Ph.D. researchers were expected to be supportive and amenable in providing help, for instance in understanding their (research) problems, this was not the case in all circumstances. A Ph.D. researcher, when asked about the firm's contribution to the Ph.D. process, said: Yeah, well, that's the difficult thing… I am the only academic working there. That makes it very difficult, I think, because they don't understand really the university and the academic world and how research is conducted. I wanted to explain it to them, but it's very hard. I mean, they don't understand. And I am the first PhD there (PhD2: 105-108). Firms that engage in a U-I project are expected to provide not only financial resources and data but also time, support and commitment. Researchers who received such commitments were more enthusiastic, not only about their own research, but also their work for their firms. The interviewees generally indicated that firms that sponsor research projects usually engage and commit to them as researchers. This is demonstrated not only through the existence of communication channels and evaluation meetings, but also through the interest and involvement of top managers in both the selection and execution phases. On the other hand, we observed lower commitment from firms in cases when they were asked to join a consortium to fulfil National Science Foundation requirements. And I think still today we notice that… I think there is one particular firm that we work together the most. But for the other ones it remains a little bit hard to keep them on board, to keep them interested. We had another network meeting, two months ago; again, not all parties were presented, but most of them were present; also, the big corporations were there. Especially the multinational corporation leading in building wind farms is for us important party, not only because it is a big and important firm, but they are basically the firm making the decisions that we are actually focusing on in our research (R13: 255-262). These partners showed less interest in the research process, even though they were personally committed (and committed through the allotted work hours) to the project, and tended to wait for the valorisation of results. The researchers' contributions ranged from conceptual thinking and theory development to determining organisational factors relating to a firm's problem. For some researchers, the project outcome marks the point at which terminating negotiations or the contract is considered, since elucidation of a research problem coupled with a scientific publication is the primary motivation, and this has by then been fulfilled. Other researchers extended their contributions to the more practical aspects of implementation, sometimes leading to consulting services. In many cases, researchers sought to comply with the demands of both their academic goals and a firm's interests.
For instance, one researcher combined his involvement in resolving a practical problem with his academic interest in advancing knowledge in innovation management. Well, they've developed a product, they don't have that much knowledge about how to interact with the customer and let them design their own home, but it could be useful in that setting as well. So, I had a research question, I thought it could [help] them in maturing their concept. So, we just contacted them, and they were interested (R4: 26-30). To simultaneously deal with the expectations of companies in delivering practical value for a firm and scientific output for the research community, researchers engaged Master's and Bachelor's students working on their final assignments. These additional resources allow researchers to focus on the academic aspect of research. Thus, short-term deliverables can be provided to a firm, as underscored by an interviewee: (…) if you have a good student, it catches some low-hanging fruits, yeah, companies are happy. Not everybody sees the difference between more consultancy-like work of a Master's student and real scientific stuff. You know that most companies are short term thinking, so if they get some short-term results, they're happy. So, if you make some kind of combination of Bachelor or Master's students and a PhD student, who also take care that papers are written, that works pretty well so far (R14: 97-102). This approach can be seen as a strategy aimed at gaining more time and reducing the pressure to deliver specific solutions to industry, which can facilitate greater autonomy for researchers. Discussion Our research question asked how researchers strategically manage autonomy in U-I partnerships. We analysed the interplay between autonomy and interdependence, developing a framework that distinguishes between strategically planned and opportunity-driven behaviour in the partner selection process (opportunity selection phase) and executing research in partnerships (opportunity execution phase). In both the opportunity selection and opportunity execution phases, we observed systematic patterns in researcher behaviours (see Table 3). Very few researchers seemed to be purely opportunity-driven. Most researchers had a strategy to advance their research in a specific field and occasionally accepted an offer from industry, as long as it did not divert them from their strategy. We assume that this is due to their goals and the way researchers are assessed. In the science system, promotion for researchers depends on the quantity and quality of their publications. Thus, it is in their best interest to engage only in partnerships that add to their career goals. Our data suggest that strategically planned and opportunity-driven behaviours relate to a researcher's vision and strategies. Purely practice-oriented researchers engaged enthusiastically in projects driven by company-specific problems. They were motivated to deliver quick solutions to these problems, even if this delayed their research agendas. These researchers saw such projects as opportunities that could result in publishable results, yet publishing was not the primary goal at this point. On the other hand, theory-driven researchers engaged only in projects that guaranteed academic freedom and that would lead to publishable output. They accepted projects that would contribute to their strategic goals.
Therefore, we postulate: Proposition 1: Practice-oriented researchers are more likely to select projects in an opportunity-driven way, while theory development-oriented researchers will select projects in a strategically planned way. This proposition, based on the pattern reflected in our cases, is consistent with Ramos-Vielba et al. (2016), who found that researchers who seek to apply knowledge collaborate with firms, while researchers who want to advance knowledge will more likely collaborate with government agencies. Nonetheless, we argue that practice-oriented researchers not only strive to apply knowledge but may also seek to advance it. Thus, depending on the project, they could be placed in both the Pasteur quadrant (quest for fundamental understanding and high consideration for use) and the Edison quadrant (low quest for fundamental understanding and high consideration for use), according to Stokes (1997). Further research should validate this proposition, since it somewhat contradicts a proposition by Perkmann and Walsh (2008) that engaging in opportunity-driven and commercialisation-driven consulting does not affect a researcher's choice of more applied research. We assert that researchers only engage in such projects when they already have an established interest in applied research. Researcher orientation may also have consequences for their careers. As the research shows, physics and engineering Ph.D. researchers' engagement in industry projects negatively affected their careers in academia; however, it increased their chances of a career in industry (Lee and Miozzo 2015).
Table 3 Summary of observed patterns
Autonomy in choosing collaborations that first meet the research interest seemed to drive the researchers' project selection. Project selection was mostly strategically planned or at that end of the continuum. Purely opportunity-driven strategic behaviour was less present; some research subjects or fields involved practitioner-oriented deliverables. The contact was mostly established owing to a researcher's network rather than a formal selection process. In most cases, there was an agreement between project partners concerning the research direction and project deliverables, which set a project focus and minimised ambiguity or friction. Researcher autonomy increased in the project execution phase, particularly owing to the trust earned from the firm. We distinguish between two autonomy types: operational (concerning formalisation and operational management of projects) and academic (scientific integrity, methods, etc.). Researcher operational autonomy was medium or high. Academic autonomy was fairly high, since decision-making and seeking agreement were seen as an organic path through which the mutual outcome focus of partners would resolve dissimilarities along the way. Strategic interdependence among all researchers can be seen as medium to high; it was maintained owing to agreement on a focused research direction, flexible research conduct, and the partners' general commitment to the project deliverables. Ph.D. researcher autonomy was constrained by the project boundaries; their autonomy was more limited than that of their supervisors in relation to the industrial partners if they were embedded in the partner organisation. Valuable outputs for all partners can be generated via both academically accepted and practitioner-oriented project deliverables within the scope of the particular research collaboration and through supporting Bachelor's and Master's level projects.
In the opportunity execution phase, researchers behaved mostly in Mode 3 (strong needs for both sharing resources and autonomy). Researchers may have to relinquish some autonomy when they accept the terms of collaboration with industry, but they have a strong need for autonomy when they decide on the research direction and research execution. Although they need to obtain external funding, it is their choice to enter a specific relationship with industry and accept resulting restrictions on their autonomy. Restrictions on scientific credibility are seen as the major barrier to collaborative projects (Ramos-Vielba et al. 2016). Thus, we conclude that there are two facets of researcher autonomy: operational and scientific. Operational autonomy concerns all the issues related to planning, communication with the industrial partner, and setting and executing milestones. Scientific autonomy relates to matters such as methodology, theory and uses of results. We note that researchers give up some autonomy, in this case operational autonomy, in exchange for heterogeneous resources. This observation gives weight to Salimi et al.'s (2015) finding that those who control critical resources tend to centralise governance of collaborative Ph.D. projects. We argue that giving up some operational autonomy may help in managing both partners' expectations and in preserving the envisioned focus. An example of formalisation would be setting goals, milestones and frequent meeting schedules in conjunction with executing such a plan. This could limit delays and could lead to successful project completion. Contrary to Bingham et al. (2014), we conclude that researchers who are not diverted from their main research direction in the execution phase by environmental factors (that is, not flexible but focused in terms of operational autonomy) are more likely to successfully complete a project. Proposition 2a: Researchers will give up some operational autonomy, i.e. accept some formalisation in setting clear goals and in instituting a delivery plan in the selection phase, and they will manage both partners' expectations in the execution phase, if the perceived benefits of formalisation are high. At the same time, in line with Bingham et al. (2014), we argue that researchers who enjoy scientific autonomy and have the flexibility to choose how to execute research in the execution phase will perform better, since there is no pressure (Zalewska-Kurek et al. 2010). Bingham (2009) argues that firms are, over multiple market entries, more successful when they use a focussed approach to opportunity selection and a flexible approach to opportunity execution. Greater focus in selection leads to improved learning by linking (comparable) experiences over time. Greater flexibility in execution allows firms to adapt to market conditions that remain unique, even when the opportunities have been selected in a focussed way. We argue that the mechanisms of learning and the benefits of flexible execution are fundamentally the same in research projects concerning scientific autonomy. Thus: Proposition 2b: Researchers with high scientific autonomy and, thus, flexibility will perform better and are more likely to complete collaborative projects on time. Accepting more influence on research, and Mode 2 behaviour, could be a strategic choice, and not only for researchers who want to translate scientific knowledge into practice and solve practical problems.
Some researchers indicated that they were willing to accept certain projects and greater restrictions on research in the short term to secure resources that would advance their research in the future. This would be an indication of their long-term strategic thinking. Based on these findings, therefore: Proposition 3a: Researchers are willing to accept more influence and restrictions on their autonomy from industrial partners to gain resources for their future research. Proposition 3b: Researchers are willing to accept more influence and restrictions on their autonomy from industrial partners to build trust with the industrial partner and have greater autonomy in the future. Joint research with industry tends to increase research output and to generate research with greater impact (Agrawal and Henderson 2002; Louis et al. 1989; Van Looy et al. 2004), because researchers obtain access to normally inaccessible resources and use industry as a means to knowledge production (Zalewska-Kurek 2016). However, not all engagement types with industry are seen as enhancing productivity. For instance, Perkmann et al. (2011) argue that opportunity-driven consulting leads to a decrease in research productivity. We observed yet another aspect of consulting research that concerns the situation of Ph.D. researchers embedded in such projects. More senior interviewees who engaged in consulting/opportunity-driven projects were fairly enthusiastic about new opportunities, while Ph.D. researchers who had to perform this research type were less eager. Industry projects give Ph.D. researchers lower autonomy, since Ph.D.s are tied to a specific project and must often deal with the expectations of industry and academia, a ready source of tension. Ph.D. researchers were also under greater restrictions if they were deeply embedded in the funding firm. This finding has practical implications for those who wish to pursue a Ph.D. trajectory. Based on our observations, restrictions on autonomy in projects seeking to solve practical problems can lead to delays and can therefore jeopardise the success of otherwise fruitful partnerships (e.g. not reaching initial goals). We argue that: Proposition 4: Opportunity-driven behaviour leads to consulting-driven projects and delays research (less successful research partnerships). The execution phase makes clear whether or not the industrial partner is committed to the project. Our data showed that firms that were interested in research results from the start of the project spent more time communicating with the researchers. These firms showed greater involvement and commitment, and guaranteed academic autonomy. As seen in the strategic alliances literature, the company leaders' commitment is a necessary condition for success in an alliance (i.e. for attaining all the alliance's goals and objectives) (Kale and Singh 2009). Mora-Valentin et al. (2004) showed that commitment as well as involvement, trust, communication and clear objectives are key to U-I partnership success. These results allow us to advance a proposition linking industry partners' commitment and involvement to their need for academic knowledge and certain research outcomes (as well as other resources held by researchers): Proposition 5: A stronger need for strategic interdependence (resources) by the industry partner leads to greater involvement and commitment by this partner, and thus to more successful U-I research partnerships.
Implications The main conclusion of this study on the strategic management of autonomy in U-I partnerships is that the choice of collaborative U-I projects is primarily driven by researchers' autonomy and their strategic orientation. They may be willing to give up certain aspects of their autonomy. To understand which aspects, we distinguished between operational autonomy and academic autonomy. While operational autonomy can be surrendered if the researchers perceive the benefits as great, academic autonomy is not easily given up, especially in the case of senior researchers. Ph.D. researchers face different choices, and firms can negatively influence their autonomy when their research is delayed, for instance, by requirements to perform operational tasks for the industry partner. Academic autonomy is perceived as a dimension of scientific integrity. Partnership formalisation was not seen as destructive; indeed, it was considered to be fairly beneficial if the research methodology was not compromised. Managing operational and academic autonomy may prove to be the key to managing U-I research partnerships. This insight may help firms to understand how researchers work. A further means to secure autonomy in U-I relationships may be the use of open data partnerships (Perkmann and Schildt 2015). Here, a boundary organisation acts as a bridge between firms and university researchers. This facilitates the pursuit of purely academic issues, addressing researchers' need for autonomy. We are aware of open data partnerships in science, technology, engineering and mathematics research, but none as yet in the social sciences. Initiatives such as the Twitter data grant (Twitter 2014) may be a move in such a direction. Limitations and future research This study has limitations, which can be addressed in future research. The study calls for more longitudinal data. We collected cross-sectional data and were retrospectively able to capture the research process from the early stage, when research questions and research partners were determined, through to research execution. Nonetheless, longitudinal data on the different stages would add value to the analysis. Also, the number of cases does not allow us to generalise the results. More researchers from different contexts should be analysed. Since we investigated individual research projects rather than sequences of research projects, we are not yet in a position to make full use of Bingham's (2009) ideas on the performance implications of combinations of strategically oriented and opportunity-driven behaviour across a temporal sequence of research projects. We can say that most of the projects were performing well, according to the interviewees. Thus, we can conclude with some assurance that researchers should behave in Mode 3 (in a strategically planned way) in project selection to gain as much as possible from industry-sponsored projects. Whether these positive effects of strategic project selection hold, or indeed increase, over a sequence of projects remains a question for future research. Further, conflicts over resource interdependence did not appear in the cases we investigated. For instance, none of the corporate partners claimed exclusive IP on the research results, nor did corporate supervisors claim unwarranted authorship of the academic output. These cases are known in other research fields (Murray 2010) and may also exist in the social sciences.
If this is the case, an investigation of U-I exchange strategies can further corroborate Murray's (2010) results and set out implications for the management of such cooperation. Finally, we could not corroborate our results from the firms' perspective. To do so, it would be necessary to test whether the behaviour modes and behaviours in the opportunity selection and execution phases affect U-I partnerships' performance. With such results to hand, we would be able to formulate practical recommendations regarding research management and policy. To further develop this framework of researcher strategic behaviours, a measure of alliance performance based on the extent to which goals are attained (Bamford et al. 2004; Kale and Singh 2009) would be helpful.
Effect of locally applied simvastatin on clinical attachment level and alveolar bone in periodontal maintenance patients: A randomized clinical trial Abstract Background The purpose of this double‐masked, randomized, controlled trial was to determine if the local application of simvastatin (SIM), combined with minimally invasive papilla reflection and root planing (PR/RP), is effective in improving clinical attachment level (CAL), reducing probing depth (PD), and increasing interproximal bone height (IBH) in persistent 6–9 mm periodontal pockets in patients receiving periodontal maintenance therapy (PMT). Methods Fifty patients with Stage III, Grade B periodontitis presenting with a 6–9 mm interproximal PD with a history of bleeding on probing (BOP) were included in the study. Experimental [PR/RP+SIM/methylcellulose (MCL); n = 27] and control (PR/RP+MCL; n = 23) therapies were randomly assigned. Root surfaces were accessed via reflection of interproximal papillae, followed by RP assisted with endoscope evaluation, acid etching, and SIM/MCL or MCL application. CAL, PD, BOP, plaque presence, and IBH (using standardized vertical bitewing radiographs) were evaluated at baseline and 12 months. Measurements were compared by group and time using Chi‐square, Wilcoxon rank‐sum, and t‐tests. Results Both PR/RP+SIM/MCL and PR/RP+MCL, respectively, resulted in improvements in clinical outcomes (CAL: ‐1.9 ± 0.3 mm, p < 0.0001; ‐1.0 ± 0.3 mm, p < 0.003; PD: ‐2.3 mm ± 0.3, p < 0.0001; ‐1.3 mm ± 0.3, p < 0.0001; BOP: ‐58.7%; ‐41.7%, p < 0.05) and stable IBH (‐0.2 ± 0.12, ‐0.4 ± 0.2, p = 0.22) from baseline to 12 months post‐therapy. PR/RP+SIM/MCL had more improvement in CAL (p = 0.03), PD (p = 0.007), and BOP (p = 0.047). Conclusions The addition of SIM/MCL to PR/RP improved CAL, PD, and BOP compared with PR/RP alone in periodontal maintenance patients. INTRODUCTION Traditional protocols for periodontal therapy are centered on subgingival debridement by means of scaling and root planing (SRP) to control the subgingival microflora and contaminated root surfaces known to drive destruction of the periodontium. 1 To ensure more calculus removal and better access at the depth of the pocket, other treatment approaches, such as open flap debridement or minimally invasive flap access to root surfaces, may need to be performed in combination with or following nonsurgical therapy. 2 Combining open flap debridement with SRP is more effective than closed flap SRP regardless of clinician experience level. 3 However, there can be some disadvantages to open flap debridement, such as recession or sensitivity. Furthermore, not all patients are accepting of invasive periodontal surgical procedures. A papilla reflection (PR) approach differs from open flap debridement in that it is more conservative and has been shown to improve calculus removal. 4 In efforts to eliminate residual pockets, surgical access via a minimally invasive flap combined with the use of an endoscope is another approach that can help increase the thoroughness of instrumentation, potentially improving outcomes of therapy and probing depth (PD) reduction. It has been shown that the use of an endoscope with SRP removes significantly more calculus than SRP alone. 5 Periodontal maintenance therapy (PMT) includes removal of bacterial plaque and calculus from supragingival and subgingival regions via mechanical instrumentation and selective RP. 6 Good compliance with periodontal maintenance recall is important for long-term tooth retention.
Studies have shown that patients undergoing regular PMT have a lower incidence of periodontal breakdown and keep their teeth longer than those who are erratic or non-compliant. 7,8 To address residual inflamed or progressing pockets during PMT, adjunctive therapies have been developed to further reduce bacterial loads, inflammation (bleeding on probing [BOP]), and PD, and to improve clinical attachment levels (CAL). Interproximal bone height (IBH) can be measured to determine the effect of bone anabolic medicaments on horizontal bone loss, while the presence of dental plaque can indicate the patient's ability to locally control bacterial biofilm and thus affect site-specific inflammation. Adjuncts to traditional PMT could include delivery of anti-inflammatory or growth factors, but no specific therapy is widely used. 9 Simvastatin (SIM) is a specific competitive inhibitor of 3-hydroxy-3-methylglutaryl coenzyme A reductase and was originally developed to reduce serum cholesterol, yet has been shown to have anti-inflammatory and bone anabolic properties. 10 Local application of statins has been shown to reduce periodontal pocket PD, reduce clinical attachment loss, and reduce inflammation in human clinical trials during initial therapy. 11 Further research is needed to explore the effects of SIM when applied to residual pockets in patients who have already undergone initial therapy and are receiving PMT. The hypothesis of this study was that local application of SIM in a methylcellulose (MCL) carrier (SIM/MCL) following surgical PR, RP, and endoscopic evaluation is effective in improving CAL (primary outcome), as well as reducing PD and BOP and increasing bone height, compared with local MCL in patients on PMT. Study population and research design This 12-month, randomized, double-masked, parallel interventional clinical trial included 50 patients who were undergoing PMT at the University of Nebraska Medical Center (UNMC) College of Dentistry in Lincoln, Nebraska or in private practices in Grand Island and Lincoln, Nebraska. The flow of the study design is shown in Figure 1 (study design flowchart). The following inclusion criteria were used: (1) 40-85 years of age; (2) diagnosis of Stage III, Grade B periodontitis 12 ; (3) could contribute one site with an interproximal 6-9 mm periodontal PD, a free gingival margin at or apical to the cemento-enamel junction (CEJ), a history of BOP, and no vertical bony defect ≥1.5 mm or circumferential pocket; (4) overall good systemic health; (5) history of routine PMT; and (6) signed consent to participate in this 12-month study. Exclusion criteria were: (1) systemic diseases which significantly impact periodontal inflammation and bone turnover (e.g., rheumatoid arthritis); (2) taking drugs which significantly impact periodontal inflammation and bone turnover (e.g., chronic use of steroids or non-steroidal anti-inflammatory drugs [>325 mg/d], estrogen, bisphosphonates, calcitonin, methotrexate); (3) full quadrant SRP or periodontal surgery within the past year; or (4) pregnant or breastfeeding females. Patients taking SIM or other HMG-CoA reductase inhibitors systemically were not excluded from the study due to the local application of the drug in this protocol. Seven patients reported taking systemic statins in the SIM/MCL group and nine in the MCL group. The protocol was approved by the UNMC Institutional Review Board, Omaha, Nebraska (IRB protocol #217-18-FB) and was in accordance with the Declaration of Helsinki of 1975, as revised in 2013.
The clinical study was performed from January 2019 to September 2020 and was registered with ClinicalTrials.gov as NCT03452891. Periodontal maintenance patients who were screened and met inclusion criteria were randomized into two groups for treatment of a 6-9 mm interproximal pocket: (1) PR and RP with endoscopic visual verification of a clean surface, root etching with ethylenediaminetetraacetic acid (EDTA) for 2 min, and injection of 2.2 mg SIM in 0.15 ml MCL gel (PR/RP+SIM/MCL); (2) control with the same treatment sequence except injection of MCL gel without SIM (PR/RP+MCL). Clinical measurements Three examiners were calibrated for reproducibility using 36 maxillary and mandibular posterior sites in a Stage III, Grade B patient with deep pockets, with PD and CAL measurements within ±1 mm (RH-RR = 91%, 83%; RH-AK = 83%, 83%; AK-RR = 89%, 100%). These masked examiners obtained baseline and 12-month measurements for supragingival plaque, gingival recession (REC), and PD of the experimental and adjacent teeth, and CAL. Supragingival plaque (PL) was recorded as either present or absent upon explorer removal at six sites (mesial-facial, mid-facial, distal-facial, mesial-lingual, mid-lingual, and distal-lingual) on both the experimental tooth and the adjacent tooth. Gingival recession from the CEJ (or restoration margin if it covered the CEJ) to the free gingival margin and PD were measured with a periodontal probe (UNC 15, Hu-Friedy, Chicago, IL) at the six sites on the experimental and adjacent teeth. BOP within 30 s was recorded. CAL was determined via the addition of REC and PD. Vertical bitewings were taken of each experimental site at baseline and at the conclusion of the study with the use of a position-indicating device cone which locked into a modified radiographic sensor holder (Rinn XCP Dental Holder) to position the sensor with a standardized beam geometry. Baseline and 12-month IBH measurements were taken, using imaging software (Mi PACS Dental Enterprise Viewer, Medicor Imaging, Charlotte, NC), for the two proximal tooth surfaces including the experimental site and the adjacent tooth, from the CEJ to the most coronal aspect of the alveolar crest where a uniform periodontal ligament was visualized. If the anatomical CEJ was obstructed by a restoration, the apical extent of the restorative margin was used as the coronal reference point. Treatment Treatment protocols on the 50 enrolled patients were completed by one of two periodontal residents (MB, LK) as previously described. 13 Local anesthetic was administered via infiltration of the surgical site. Incisions limited to the interproximal area of interest were made on the buccal and lingual from the interproximal line angle of the experimental tooth to the adjacent tooth interproximal line angle (Figure 2A). Across the papilla, an inverse bevel technique was used to spare the papilla and allow for partial removal of proximal col tissue for root access. The proximal col tissue was removed with a universal curette following PR with a periosteal elevator. The experimental and adjacent tooth proximal surfaces were scaled and root planed with a universal curette (Hu-Friedy #4R/4L, Chicago, IL) and an ultrasonic instrument (Cavitron, Ontario, ON, Canada) for removal of supragingival and subgingival calculus, plaque, and contaminated cementum. Verification of thorough debridement was completed using an endoscope (Eagle Science-Based Solutions, Houston, TX) with fiberoptic visualization (Figure 2B) and a universal explorer (Hu-Friedy #4R/4L, Chicago, IL). Repeated RP was performed until the proximal surfaces were free of tactile and visual calculus.
Normal saline irrigation was performed after instrumentation, followed by application of EDTA (Emdogain, Straumann, Andover, MI) for 2 min and a final saline irrigation to biochemically debride the smear layer and root surface. Following irrigation, the operator accessed the randomization chart to determine if the patient was in the test or control group. SIM and MCL were formulated by a local compounding pharmacy (Pharmacy Solutions, Lincoln, NE) with approval of the UNMC Pharmacy & Therapeutics Committee and the IRB. SIM powder and MCL gel were prepared in separate syringes in a certified sterile room. Lots were tested at the UNMC Microbiology Clinical Laboratory and were shown to have no bacterial contamination (no growth of aerobes or anaerobes), and potency was 96.9% at 37 days after formulation. Preparations were reformulated monthly. Following randomization, two 3 ml syringes, one with SIM and one with MCL, were joined via a luer-locking connector and exchanged 50 times to yield a homogeneous mixture of 2.2 mg SIM/0.15 ml MCL (Figure 2C). The control group used MCL alone. The loaded syringe was attached to a 19-gauge blunt-end needle, and a dose of 0.15 ml of SIM/MCL or 0.15 ml of MCL was deposited at the proximal experimental site, beginning at the osseous level and extending coronally onto both proximal surfaces (Figure 2D). Light pressure was applied with damp gauze to re-approximate the papillae and remove any excess medicament. Cyanoacrylate tissue adhesive (Periacryl, Glu-Stitch, Delta, BC, Canada) was applied to the buccal and lingual papillae for stabilization and set with damp gauze. A registered dental hygienist (LA or MC) then completed full-mouth periodontal maintenance instrumentation, excluding the surgical site as defined by the records and cyanoacrylate adhesive. Postoperative care Following the procedure, patients were instructed to avoid brushing and interproximal cleaning of the experimental site for 2 weeks. Twice-daily rinsing with a phenolic compound (Listerine, Johnson & Johnson, USA) for 2 weeks was advised. Patients returned for 2- and 6-week post-operative appointments. At both intervals, patients were asked to report any post-operative adverse events. At the 2-week post-operative appointment, homecare was reviewed with the instruction to resume normal brushing of the experimental site. At the 6-week post-operative appointment, patients were instructed to begin daily interproximal brush use at the experimental site from both a buccal and lingual approach to standardize oral hygiene techniques. Patients returned for routine periodontal maintenance with a registered dental hygienist at 3-, 6-, 9-, and 12-month intervals. The COVID-19 pandemic disrupted the recall schedule of this study. Six subjects were delayed on their 6-, 9-, or 12-month recall maintenance appointments. The baseline clinical measurements were repeated at the 12-month visit (AK, RR, RH). Statistical analysis A sample size of 22 patients per group was needed to achieve at least 80% power to detect a difference of 1.0 mm in CAL between groups with a common estimated group standard deviation of 1.1 mm and a significance level of 0.05 using a two-sided two-sample t-test. 14,15
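As an illustrative cross-check of the sample-size statement above (the study's own analyses were run in SAS; this sketch and the R call below are ours, not part of the original protocol), the same calculation can be reproduced with base R's power.t.test:

# Two-sided, two-sample t-test: detect a 1.0 mm CAL difference, SD = 1.1 mm, alpha = 0.05, power = 0.80
power.t.test(delta = 1.0, sd = 1.1, sig.level = 0.05, power = 0.80,
             type = "two.sample", alternative = "two.sided")
# Returns roughly 20 patients per group; the 22 per group reported above presumably
# adds a margin for attrition (our assumption, not stated in the text).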
For PD, CAL, and BOP measurements, the side with the deepest pocket (i.e., buccal or lingual) at baseline was identified and only measurements from that side of the experimental and adjacent teeth were analyzed; if the buccal and lingual sides had equally deep pockets at baseline, then both measurements were averaged for each measurement of interest. Associations between categorical variables and treatment group (i.e., SIM or MCL) were assessed using Chi-square or Fisher exact tests. Means and standard errors (SE) were calculated for age and baseline measurements, and differences between groups at baseline for these variables were assessed using t-tests. Linear models were used to assess the association between the outcome change in measurement (12 months - baseline) and group, while adjusting for the initial measurement and the side of worst PD. Model-adjusted mean change estimates were calculated from these models. For the change in BOP outcome, BOP was dichotomized such that patients with no or reduced BOP at 12 months were considered to have a good outcome; otherwise, they were considered to have a poor outcome. Logistic regression models were used to analyze changes in BOP, which included group and adjustment for worst side. All analyses were performed using statistical software (SAS Software version 9.4, SAS Institute, Cary, NC). Patient characteristics The 50 patients included in this study were eligible and consented to participate. Demographic data are summarized in Table 1. There were no significant differences between groups in age or sex. There was a significant difference in the number of smokers between groups (p = 0.03), with the test group having significantly more smokers (29.6%) than the control group (4.3%). Two patients did not complete the study in its entirety (4% dropout rate); neither patient's reason for dropout was believed to be related to any dental treatment provided throughout the study. Reported adverse reactions included transient temperature sensitivity (18.7%), pain (12.5%), and swelling (<0.1%). Twenty-five maxillary teeth and 25 mandibular teeth were included in this study. Clinical outcomes The mean baseline and 12-month change from baseline results for clinical outcomes between groups are reported in Table 2. There were no significant differences between groups for baseline PD or CAL. Both the PR/RP+SIM/MCL and the PR/RP+MCL groups saw a significant reduction in PD and gain in CAL from baseline to 12 months after treatment, with a significantly greater improvement for SIM/MCL in both PD (p = 0.007) and CAL (p = 0.03). Interproximal sites adjacent to the treatment sites experienced significant improvements from baseline to 12 months in PD, but no significant difference in change between groups was found. A small but significant reduction in PD on the mid-lingual of the SIM/MCL teeth was noted, but no difference between groups. Recession values averaged <0.4 mm, indicating minimal facial/lingual recession as part of PR/RP. The change in BOP from baseline to 12 months post-therapy of experimental teeth was statistically significant for the test group (p = 0.047) (Table 3). Explorer-detectable plaque (Table 3) was determined by calculating the mean of six sites from the treatment tooth and six sites from the adjacent tooth, for a total of 12 sites.
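To make the modeling approach described in the statistical analysis section above concrete, a minimal sketch in R follows (the study itself used SAS 9.4; the data frame dat and the variable names group, baseline_cal, change_cal, worst_side, and bop_good are hypothetical placeholders, and the Wald confidence interval shown is illustrative rather than the exact interval estimation used in the study):

# Change-score linear model: 12-month change in CAL by treatment group,
# adjusted for the baseline value and the side with the worst probing depth.
fit_cal <- lm(change_cal ~ group + baseline_cal + worst_side, data = dat)
summary(fit_cal)  # the 'group' coefficient is the adjusted between-group difference

# Logistic regression for the dichotomized BOP outcome (good = no or reduced BOP at 12 months),
# adjusted for worst side; exponentiating gives an adjusted odds ratio with a Wald 95% CI.
fit_bop <- glm(bop_good ~ group + worst_side, data = dat, family = binomial)
exp(cbind(OR = coef(fit_bop), confint.default(fit_bop)))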
Neither the PR/RP+SIM/MCL nor the PR/RP+MCL group showed a statistically significant reduction in plaque percentage (PR/RP+SIM/MCL: -11.3 ± 6.4, p = 0.08; PR/RP+MCL: -9.1 ± 6.6, p = 0.17), and there was no significant difference in change between groups (p = 0.80). IBH measurements, displayed in Table 4, showed a significant gain in IBH from baseline to 12 months in the control group (PR/RP+MCL: -0.4 ± 0.2 mm, i.e., a decrease in the CEJ-to-crest distance, p = 0.01), but not in the test group (PR/RP+SIM/MCL: -0.17 ± 0.15 mm, p = 0.28). Although the improvement in IBH measurements in the control group was significant, it was <0.5 mm and was not statistically significantly different from the change in the test group. DISCUSSION Change in CAL was the primary outcome measured in this study, with changes in PD, BOP, and IBH also being measured. The current study demonstrated that treatment of an inflamed, 6-9 mm periodontal pocket in periodontal maintenance patients with PR/RP+SIM/MCL resulted in a statistically significant gain in CAL, reduction in PD, and reduction in BOP when compared with PR/RP+MCL at 12 months post-therapy. To our knowledge, no other studies have compared PR/RP+SIM/MCL to PR/RP+MCL as part of PMT. Previous studies have measured change in clinical parameters (PD, CAL, BOP, IBH) during initial SRP therapy, which is the first line of treatment performed in those with generalized inflammation of periodontal tissues. 11,16 Conversely, periodontal maintenance is performed on those who have already undergone RP and whose overall periodontal inflammation has reached stability or been reduced. It is intuitive, then, that any adjunctive treatment administered at the time of initial therapy, when inflammation is uncontrolled, would result in a greater improvement of clinical parameters than adjunctive therapies added to residual pockets in periodontal maintenance patients, in whom most of the inflammation and attachment loss has already been monitored and reduced by instrumentation and homecare. The improvements in PD and CAL seen in previous studies of locally applied SIM were greater than those found in the current study. Pradeep et al. 11 reported a decrease in PD of 4.26 ± 1.59 mm in the test group compared with a PD reduction of 1.20 ± 1.24 mm in the control group. The same study found a CAL gain of 4.36 ± 1.92 mm in the treatment group that received SRP in 5 to ≥7 mm pockets plus 1.2 mg local SIM application versus a CAL gain in the placebo group of 1.63 ± 1.99 mm. Another study that assessed the effectiveness of SIM local delivery as an adjunct to SRP reported a PD decrease of 3.78 ± 0.62 mm in the test group and 1.14 ± 0.04 mm in the control. 16 A significant reduction in BOP was seen in the current study in the test group when compared with the control, which may be attributed to the anti-inflammatory effects of SIM. When assessing change in BOP over time from baseline to 12 months, the current study showed that patients treated with PR/RP+SIM/MCL had 4.17 times the odds (adjusted odds ratio; 95% CI: 1.02-17.04) of having a good BOP outcome (i.e., showing improvement or maintaining no BOP) compared with patients treated with PR/RP+MCL (control) (p = 0.047), as reported in Table 3. Another study by Pradeep et al. 17 reported a statistically significant decrease in bleeding index at 6 months when comparing the placebo group (1.61 ± 0.43) with the test group (0.80 ± 0.18) that received local application of SIM combined with SRP (p = 0.001).
Previous studies have shown significant radiographic bone fill in the test group receiving local SIM application versus a placebo. 18 However, vertical or intrabony defects were part of these studies' inclusion criteria. This may play a role in the amount of bone regeneration which occurred, as defects with a depth of >3 mm and a radiographic defect angle of 25° were reported to be the most amenable to regenerative procedures. 18 The current study did not include vertical defects, but instead used residual pockets with horizontal bone loss (no defects ≥1.5 mm). The difference in defect morphology likely contributes to the bone fill seen in other studies, compared with the lack of significant gain in IBH in the test group of the current study. This study had several limitations. First, this study had an unbalanced distribution of smokers versus non-smokers. The improvements with SIM/MCL occurred despite there being more smokers in the test group. It is well established that smokers tend to have less periodontal therapeutic success than non-smokers. 19 Second, the combination of SIM and MCL is not commercially available in the United States and, therefore, required the aid of a compounding pharmacy for use in this trial. Because MCL is a non-toxic, non-allergenic, and non-irritating material, it is commonly used as a delivery-release vehicle for therapeutic drug applications. 20 MCL is made from cellulose pulp, which is found in the plant cell wall and commonly used in various oral and topical pharmaceutical formulations. 21 Ideally, SIM and MCL would be readily available and FDA-approved for local application in periodontal pockets to make their clinical use more practical in periodontal practice. Third, patient selection was challenging, as most of the patients screened did not qualify due to a lack of pockets with a history of BOP. While this is a credit to the success of PMT, it made study subject accrual challenging. Similarly, analysis of BOP was difficult since the inclusion criteria specified a history of BOP at the study site. For multiple patients, no BOP occurred at the baseline appointment, but since probing took place after gingival crevicular fluid collection and plaque presence determination, the patient was still included in the study. Finally, this study was conducted during the COVID-19 pandemic, which delayed some of the later PMT appointments. This may have reduced the efficacy of the treatments in some patients due to inflammatory rebound. CONCLUSIONS Based on the results of this clinical trial, the potential benefit of locally applied SIM in residual, inflamed periodontal pockets in periodontal maintenance patients has been demonstrated. The significant improvement in periodontal clinical parameters combined with the efficiency of the procedure makes its use practical in a clinical setting. SRP with PR in inflamed, residual, deep periodontal pockets during PMT, with or without the application of SIM, resulted in improvements in PD, CAL, and BOP with stability of IBH after 12 months of PMT, and added <30 min of additional time to PMT. The addition of SIM significantly enhanced the clinical benefits of PR/RP in the treatment of periodontal maintenance patients with inflamed 6-9 mm interproximal PD. Further research should be conducted investigating the anti-inflammatory and antimicrobial effects of SIM, as well as long-term clinical effects. ACKNOWLEDGMENTS The authors would like to acknowledge the assistance of Ms. Deb Dalton and Mrs.
Emily Gish (University of Nebraska Medical Center College of Dentistry) in manuscript production, as well as Drs. Van Sanderfer, Timothy Calkins, and Pioneer Periodontics and Implant Dentistry for their contribution of private practice patients to this study. This study is funded by the Windsweep Farm Fund (Lincoln, NE), and none of the authors report any conflicts of interest related to this study.
2022-05-28T06:22:57.252Z
2022-05-27T00:00:00.000
{ "year": 2022, "sha1": "3aba6432cebc1f7c231af8caebebb32dc1ea8401", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/JPER.21-0708", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "6ff3640375f4c8b47a55e42c2a2d4365254b74af", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246783624
pes2o/s2orc
v3-fos-license
A single-cell atlas of the cycling murine ovary The estrous cycle is regulated by rhythmic endocrine interactions of the nervous and reproductive systems, which coordinate the hormonal and ovulatory functions of the ovary. Folliculogenesis and follicle progression require the orchestrated response of a variety of cell types to allow the maturation of the follicle and its sequela, ovulation, corpus luteum formation, and ovulatory wound repair. Little is known about the cell state dynamics of the ovary during the estrous cycle and the paracrine factors that help coordinate this process. Herein, we used single-cell RNA sequencing to evaluate the transcriptome of >34,000 cells of the adult mouse ovary and describe the transcriptional changes that occur across the normal estrous cycle and other reproductive states to build a comprehensive dynamic atlas of murine ovarian cell types and states. Introduction The ovary is composed of a variety of cell types that govern its dynamic functions as both an endocrine organ capable of producing hormones such as sex steroids and a reproductive organ orchestrating the development of follicles, a structure defined by an oocyte surrounded by supporting somatic cells such as granulosa cells and theca cells. Most follicles in the ovary are quiescent primordial follicles, representing the ovarian reserve. Once activated, a primordial follicle grows in size and complexity as it progresses to primary, preantral, and antral stages, adding layers of granulosa and theca cells and forming an antral cavity, until it ultimately ejects the oocyte-cumulus complex at ovulation while the follicular remnants undergo terminal differentiation to form the corpus luteum (CL) (Dunlop and Anderson, 2014). This process necessitates precise coordination of germ cells and several somatic cell types, including granulosa cells, thecal cells, vascular cells, and other stromal cells of the ovary to support the growth of the oocyte until its ovulation or, as is most often the case, undergo follicular atresia. In addition to supporting germ cells, ovarian somatic cells must produce the necessary hormonal cues, as well as coordinate the profound tissue remodeling, necessary to accommodate these dynamic developing structures. For reproductive success to occur, the state of each of these cells must change in a coordinated fashion over the course of the estrous cycle; this allows waves of follicles to grow and mature, ovulation to be triggered precisely, and provides the hormonal support necessary for pregnancy. Single-cell RNA sequencing (scRNAseq) has been used in a variety of tissues to obtain an in-depth understanding of gene expression and cellular diversity. In the ovary, this technique has allowed us, and others, to explore various physiological processes during early ovarian development and ovarian aging (Zhao et al., 2020;Stévant et al., 2019;Wagner et al., 2020;Niu and Spradling, 2020;Jevitt et al., 2020;Man et al., 2020;Meinsohn et al., 2021;Fan et al., 2019;Wang et al., 2020). For example, Fan et al. cataloged the transcriptomic changes that occur during follicular development and regression and mapped the cell types of the human ovary using surgical specimens (Fan et al., 2019). A primate model has been used to investigate changes in cell types and states that occur in the ovary with aging . Zhao et al. 
looked at the formation of the follicle during early embryonic ovarian development to discern the relationship of oocytes to their support cells in formation of follicles (Zhao et al., 2020). We have used scRNAseq to identify inhibitory pathways regulated by anti-Müllerian hormone (AMH) during the first wave of follicular growth in the murine ovary (Meinsohn et al., 2021). While all these studies have helped establish a static framework to understand the major cell types in the ovary, they fail to describe the dynamic nature of cell states across the reproductive cycle, known as the estrous cycle. The estrous cycle in mice is analogous to the human menstrual cycle, which both reflect follicle development in the ovary. In mice, this cycle lasts 4-5 days and is composed of four different phases known as proestrus, estrus, metestrus, and diestrus. The murine proestrus is analogous to the human follicular stage and leads to ovulation at estrus. Metestrus and diestrus are analogous to early and late secretory stages of the reproductive cycle in humans, which are orchestrated by production of progesterone by the CL (Ajayi and Akhigbe, 2020). To understand more fully the dynamic effects of cyclic endocrine, autocrine, and paracrine signals on ovarian cell states, we performed high-throughput scRNAseq of ovaries from adult mice across a physiological spectrum of reproductive states. Ovaries were harvested from mice in the four phases of the normal estrous cycle: proestrus, estrus, metestrus, and diestrus. Additionally, ovaries were evaluated from mice that were either lactating or non-lactating 10 days post-partum, and from randomly cycling adult mice to increase the diversity of cell states represented in the dataset. Herein, we (1) describe the previously unrecognized complexity in the ovarian cellular subtypes and their cyclic expression states during the estrous cycle, and (2) identify secreted factors that cycle and thus could represent potential biomarkers for staging. scRNA-seq of adult mouse ovaries across reproductive states To survey the dynamic transcriptional landscape of ovaries at the single-cell level across a range of physiological reproductive states in sexually mature female mice, we isolated the ovaries (four mice per group) at each stage of estrous cycling (proestrus, estrus, metestrus, and diestrus), post-partum non-lactating (PPNL) (day 10 post-partum, with pups removed on the day they were born), postpartum lactating (day 10 post-partum, actively lactating with pups), and non-monitored adult mice to increase sample diversity and cell counts. Following enzymatic digestion of the ovaries, we generated single-cell suspensions and sorted them by microfluidics using the inDROP methodology (Klein et al., 2015), targeting 1500 cells per animal. Resulting libraries were indexed and combined for sequencing ( Figure 1A). Following dimensionality reduction and clustering using the Seurat algorithm, we identified multiple clusters which could be combined to represent the major cell categories of the ovary ( Figure 1B). To assign cell type identity, we used cluster-specific markers which were previously described in other studies or newly identified makers later validated by RNA in situ (Supplementary file 2). The largest groups of clusters consisted of granulosa cells (N=17627 cells) and mesenchymal cells of the ovarian stroma (N=10825 cells). 
Other minor cell types were identified including endothelial cells (N=3501 cells), ovarian surface epithelial cells (N=1088 cells), immune cells (N=1649 cells), and oocytes (N=22 cells), altogether recapitulating all the major cell types of the ovary (Figure 1-figure supplement 1A). Oocytes were poorly represented in the dataset due to cell size limitations of inDrop, likely restricting our sampling to small oocytes of primordial follicles (Figure 1B). To characterize more fully the transcriptional signatures of the identified cell types, we evaluated a heatmap of marker gene expression across the major categories of cell types and states (Figure 1C). Cells were also classified depending on the stage of the estrous cycle or lactating states in which the ovaries were collected (Figure 1B). Morphological differences between the stages of proestrus, estrus, metestrus, diestrus, and also post-partum lactating and non-lactating, were documented in Figure 1-figure supplement 1B. The granulosa, mesenchyme, and epithelium clusters were isolated and reanalyzed to identify subclusters. Single-cell sequencing reveals heterogeneity within granulosa and mesenchymal cell clusters Cellular diversity of mesenchymal cells The mesenchymal cluster was the second largest cluster identified in our analysis. Based on prior studies and conserved marker expression (Fan et al., 2019; Wang et al., 2020), we were able to identify subclusters within mesenchymal cells and their relative abundance (percentage) as follows: early theca (16.8%), which formed the theca interna of preantral follicles; steroidogenic theca (13.2%), which formed the theca interna of antral follicles; smooth muscle cells (10.2%), which were part of the theca externa of both antral and preantral follicles; pericytes (6.2%), which surrounded the vasculature; and two interstitial stromal cell clusters, one composed of steroidogenic cells (28.7%) and the other of fibroblast-like cells (24.9%), which together constituted the bulk of the ovarian volume (outside of follicles). These subclusters can be seen in Figure 2A, with the top five expressed markers of each subcluster described in the Figure 2B heatmap and the top 10 listed in Supplementary file 3. Distinct transcriptional signatures were identified in each of these mesenchymal subclusters (Figure 2B); to confirm the presumed identity and histology of these cell types (detailed in Figure 1-figure supplement 1A), we validated markers prioritized by highest fold-change expression, highest differential percent expression, and lowest p value (Figure 2C). For the theca interna, the two clusters identified reflected the stage of development of the follicle: early thecal cells could be defined by their expression of hedgehog-interacting protein (Hhip) and were histologically associated with preantral follicles. Meanwhile, the steroidogenic theca cells were identified by their expression of cytochrome P450 family 17 subfamily A member 1 (Cyp17a1), an essential enzyme for androgen biosynthesis (Richards et al., 2018); they were found in antral follicles (Figure 2C). The theca externa is a connective tissue rich in extracellular matrix situated on the outermost layer of the follicle (Figure 1-figure supplement 1A), containing fibroblasts, macrophages, blood vessels, and abundant smooth muscle cells, which we identified based on expression of microfibril-associated protein 5 (Mfap5) by RNA in situ hybridization (Figure 2C).
To validate the identity and histology of these smooth muscle cells, we performed RNAish/IHC colocalization of Mfap5 and actin alpha 2 (Acta2), another marker of smooth muscle, which confirmed their position within the theca externa. In contrast, Hhip, which was expressed in theca interna (both immature and steroidogenic), did not colocalize with Acta2 (Figure 2-figure supplement 1A-C). These results suggest Mfap5 labels smooth muscle cells of the theca externa more specifically than Acta2; these cells are thought to perform a contractile function during ovulation (Young and McNeilly, 2010). Lastly, the bulk of the ovarian interstitial stromal space was made up of two closely related cell types which could not be differentiated by specific dichotomous markers but rather were distinguished based on relative expression of ectonucleotide pyrophosphatase/phosphodiesterase 2 (Enpp2) (Figure 2C). While Enpp2+ cells represented fibroblast-like stromal cells, Enpp2− interstitial cells were enriched for expression of genes such as Patched 1 (Ptch1), a member of the hedgehog-signaling pathway, an important regulator of ovarian steroidogenesis (Spicer et al., 2009), suggesting these represented steroidogenic stromal cells. Indeed, the steroidogenic activity of this stromal cell cluster was further confirmed by its high relative expression of other genes associated with steroidogenesis including cytochrome P450 family 11 subfamily A member 1 (Cyp11a1), hydroxy-delta-5-steroid dehydrogenase, 3 beta- and steroid delta-isomerase 1 (Hsd3b1), cytochrome P450 family 17 subfamily A member 1 (Cyp17a1), and steroid 5 alpha-reductase 1 (Srd5a1), along with other markers such as potassium two pore domain channel subfamily K member 2 (Kcnk2) (Figure 2-figure supplement 1E, F). In contrast, the fibroblast-like stromal cluster had enriched expression of many extracellular matrix genes such as collagen type I alpha 1 chain (Col1a1), collagen type V alpha 1 chain (Col5a1), and lumican (Lum). Cellular diversity of granulosa cells To explore further the cellular heterogeneity within developing follicles (listed in Figure 1-figure supplement 1A), we investigated the subclustering of granulosa cells based on their transcriptional profile. Consistent with previous reports, we could distinguish discrete granulosa cell states in follicles based on their stage of development (Zhao et al., 2020; Fan et al., 2019; Gallardo et al., 2007). Granulosa cells could be subdivided into seven main categories: preantral-cumulus (27.3%), antral-mural (21.8%), luteinizing mural (4.8%), atretic (22.6%), mitotic (14.4%), regressing CL (3.7%), and active CL (5.4%) (Figure 3A). Supplementary file 4 lists the top 10 markers for each of these clusters. Distinctive gene expression programs were identified in the granulosa cell subclusters, as visualized in the heatmap (Figure 3B), from which we selected potential markers for validation. Early preantral granulosa cells, and those constituting the cumulus oophorus of antral follicles, could be identified by their shared expression of markers such as potassium channel tetramerization domain containing 14 (Kctd14) (Figure 3C), which we had previously shown to be expressed by preantral follicles (Meinsohn et al., 2021). In contrast, mural granulosa cells of antral follicles expressed distinct markers (Supplementary file 4) such as male-specific transcription in the developing reproductive organs (Mro) (Figure 3C).
Luteinizing mural granulosa cells could be identified by the expression of previously established markers (Supplementary file 4) and the oxytocin receptor gene (Oxtr), which we propose as a highly specific marker for this cell type, a likely target of the surge in oxytocin during estrus (Ho and Lee, 1992; Figure 3C). Furthermore, we identified two different clusters that we hypothesize represent active and regressing cell states of the CL, both of which expressed nuclear paraspeckle assembly transcript 1 (Neat1), a known marker of CLs (Nakagawa et al., 2014). To confirm the active and regressing CL cell states, we investigated the expression of Top2a, a mitotic marker (Donadeu et al., 2014), which was enriched in the active CL cluster, and Cdkn1a, a cell cycle exit and senescence marker (Ock et al., 2020), which was enriched in the regressing cluster (Figure 3-figure supplement 1B, C). Moreover, when examining the composition of clusters depending on the reproductive stage, the regressing CL cluster was found to be composed mostly of cells derived from the post-partum non-lactating (PPNL) samples (Figure 3-figure supplement 1E), which overexpressed markers related to CL regression (Talbott et al., 2017; Figure 3-figure supplement 1F), consistent with a post-partum effect of prolactin. Finally, two relatively abundant granulosa cell states could be identified based on marker expression: mitotic granulosa cells could be found in both preantral and antral follicles and were defined by their expression of Top2a, and atretic granulosa cells, which expressed markers consistent with follicular atresia and apoptosis such as phosphoinositide-3-kinase-interacting protein 1 (Pik3ip1), nuclear protein 1, transcriptional regulator (Nupr1), growth arrest and DNA damage inducible alpha (Gadd45a), vesicle amine transport 1 (Vat1), transgelin (Tagln), and melanocyte-inducing transcription factor (Mitf) (Terenina et al., 2017; Figure 3C, Figure 3-figure supplement 1A, Supplementary file 4). Furthermore, we propose growth hormone receptor (Ghr), which was highly specific to this cluster, as a specific marker of atretic follicles, which warrants further investigation of the role of growth hormone in this process (Figure 3C). Cellular states in the ovarian surface epithelium The epithelial cluster was composed of 1088 ovarian surface epithelium (OSE) cells, which could be further subdivided into two clusters (Figure 4A): the larger one composed of non-dividing epithelial cells (96%), and a smaller cluster (4%) composed of mitotic epithelium, the latter characterized by expression of proliferation markers. Granulosa cell transcriptome is most dynamic during the proestrus/estrus transition To identify changes in cell states associated with the stages of the estrous cycle, we focused on the granulosa cell subclusters, given the importance of follicular maturation in coordinating this process (illustrated in Figure 1-figure supplement 1B). When comparing the composition of granulosa cell subclusters by estrous stage, we found that some clusters were dominated by cells from either the proestrous or estrous samples, particularly the clusters corresponding to 'antral/mural' and 'periovulatory' clusters, respectively (Figure 5A and B). A volcano plot analysis confirmed that the transition between these two stages was characterized by 24 significantly upregulated and 10 significantly downregulated markers (Figure 5C), which together with the transition from estrus to metestrus represents the largest change in gene expression.
In contrast, few genes were found to change significantly during the transition from metestrus to diestrus, or diestrus to proestrus ( Figure 5-figure supplement 1A). Gene ontology analysis revealed that the most significantly differentially regulated pathways between the proestrous and estrous phases were related to ovarian matrix remodeling and steroidogenesis and hormones production ( Figure 5-figure supplement 1B). To validate the genes with significant changes in expression identified within the single-cell sequencing dataset, we performed quantitative PCR (qPCR) on whole-ovary samples at the proestrus to estrus transition, including the steroid biosynthesis markers cytochrome P450 family 19 subfamily A member 1 (Cyp19a1, p=0.0029, proestrus to estrus), Star protein (p=0.0187, proestrus to estrus), serum-and glucocorticoid-inducible kinase-1 (Sgk1, p=0.0056, proestrus to metestrus), as well as matrix remodeling genes such as regulator of cell cycle (Rgcc, p=0.0441, proestrus to estrus), tribbles pseudokinase 2 (Trib2, p=0.0023, proestrus to estrus) ( Figure 5D and E), and immediate early genes, fos protooncogene (Fos), jun proto-oncogene (Jun, p=0.0022, proestrus to estrus), jun proto-oncogene B (Junb, p=0.0069, proestrus to diestrus), and early growth response 1 (Egr1, p=0.0504 estrus to diestrus), which represent a family of genes thought to be involved in wound repair, a sequela of ovulation (Florin et al., 2006;Wu et al., 2009;Martin and Nobes, 1992;Yue et al., 2020;Figure 5-figure supplement 1C). Transcriptional gene expression changes were found to be concordant between the scRNAseq data and whole-ovary transcripts quantified by qPCR. Identification and validation of secreted biomarkers varying throughout the estrous cycle To identify new biomarkers that vary as a function of the estrous cycle and that could be used for staging in reproductive medicine, we screened for differentially expressed secreted factors (DAVID Bioinformatics Resources) (Sherman et al., 2022;Huang et al., 2009), which would therefore be potentially measurable in the blood. Furthermore, to ensure specificity, we prioritized genes expressed specifically in the granulosa or ovarian mesenchymal clusters and not highly expressed in other tissues based on their GTEX profile (GTEx Consortium, 2013;Supplementary file 5). As a primary screen, we first validated our ability to detect gene expression changes by estrous stage using whole-ovary qPCR analysis in a separate set of staged mice (N=4 per group). Whole-ovary qPCR successfully detected expression changes of estrous cycle markers such as luteinizing hormone/choriogonadotropin receptor (Toms et al., 2017) (Lhcgr, p=0.0281 estrus to metestrus) and progesterone receptor (Pgr, p=0.0096, proestrus to estrus) (Kubota et al., 2016; Figure 6B). Using this method, we validated a set of significantly upregulated secreted markers in the proestrous to estrous transition, most prominent of which were natriuretic peptide C (Nppc, p=0.0022 proestrus to estrus) and inhibin subunit beta-A (Inhba, p=0.0067, proestrus to estrus) ( Figure 6A and B). Similarly, tubulointerstitial nephritis antigen like 1 (Tinagl1) and serine protease 35 (Prss35) were secreted markers significantly upregulated in estrus compared to their level of transcription in proestrus in the scRNAseq dataset ( Figure 6A) and by qPCR (Tinagl1, p=0.0081, proestrus to estrus; Prss35, p=0.0008, proestrus to estrus) ( Figure 6B). 
In situ RNA hybridization showed that, as expected, these markers were mostly expressed in mural granulosa cells of antral follicles, while Nppc was expressed in both mural and cumulus cells ( Figure 6C). To evaluate the feasibility of measuring the secreted PRSS35, NPPC, TINAGL1, and activin A proteins in the serum for staging, we performed ELISAs in mice at each stage of the estrous cycle ( Figure 6D). We found that the activin A concentration in the serum was significantly increased between the diestrous and proestrous stages (p=0.0312) and peaked at the proestrous stage ( Figure 6D). The Inhba transcript, which encodes for the activin and inhibin beta-A subunit, had a similar temporal expression profile ( Figure 6B). Circulating PRSS35 levels were lowest at the metestrous stage and were significantly increased during the transition to diestrus (p=0.0009) and remained significantly elevated until the proestrus ( Figure 6D). In contrast, the Prss35 transcript was significantly induced earlier at estrus ( Figure 6B). The serum concentrations of TINAGL1, which was lowest at the diestrous and proestrous stages, was significantly increased during the transition between proestrus and metestrus, peaking in estrus (p=0.0142) ( Figure 6D). This temporal pattern of expression was recapitulated at the transcriptional level by qPCR and scRNAseq ( Figure 6A and B). Finally, we observed a trend for serum protein concentrations of NPPC to be lowest at the proestrous and estrous stages and increase during the metestrous and diestrous stages ( Figure 6D), although the differences were not statistically significant (p=0.0889, estrus to metestrus). Importantly, these data provide a proof of concept that four markers could be used to monitor estrous cycle progression when measured in conjunction in the blood ( Figure 6E). Discussion scRNAseq has been used to catalog the transcriptomes of a variety of tissues in several species, across different physiological states (Hwang et al., 2018). Herein, we used scRNAseq to survey the cellular diversity and the dynamic cell states of the mouse ovary across the estrous cycle and other reproductive states such as post-partum lactating (PPL) and post-partum non-lactating (PPNL). The most significant changes in composition and cell states were identified in granulosa cells, particularly as they cycled through the estrous stages, reflective of their important role in cyclic follicular maturation and hormone production. Early preantral follicle numbers are thought to be relatively stable across the estrous cycle (Deb et al., 2013), given that they are largely unresponsive to gonadotropins (Richards, 1980), in contrast to antral follicles, whose numbers and size are more variable (Deb et al., 2013). Indeed, while subclusters such as 'preantral granulosa cells' were equally represented in samples from proestrus, metestrus, and diestrus, others, such as the 'luteinizing mural' cluster, were dominated by cells derived from one stage (in this case 'estrus'). Genes enriched in this cluster had been previously reported to be involved in the ovulatory process and regulated by the luteinizing hormone (LH) surge, including markers of terminal differentiation and steroidogenesis such as Smarca1 (Lazzaro et al., 2006), Cyp11a1 (Irving-Rodgers et al., 2009), metallothionein 1 (Mt1), and metallothionein 2 (Mt2) (Wang et al., 2018;Supplementary file 4). 
Other genes enriched in this subcluster include Prss35 (Wahlberg et al., 2008) and Adamts1 (Lussier et al., 2017;Sayasith et al., 2013), which had previously been identified as playing a role in the follicular rupture necessary for ovulation. Interestingly, we found that granulosa cells of preantral follicles and cumulus cells of antral follicles clustered together and shared markers that distinguished them from mural granulosa cells. For example, Kctd14, a member of the potassium channel tetramerisation domain-containing family, was expressed in granulosa cells during the initial growth of early preantral follicles, but also specifically expressed only in cumulus cells, but not mural granulosa cells, of larger antral follicles. Intriguingly, we have previously shown that AMH (anti-Müllerian hormone, a.k.a Müllerian Inhibiting Substance), which is specifically expressed by cumulus cells (Diaz et al., 2007) in antral follicles, regulates KCTD14 expression in preantral follicles (Meinsohn et al., 2021). This conservation of cellular state and marker expression from preantral granulosa cells to cumulus cells of antral follicles suggests a continuous lineage, potentially defined and maintained by the close interaction with the oocyte (Diaz et al., 2007). This interpretation is consistent with the presence of a differentiation fork in the granulosa cell lineage during antrum formation, which would give rise to a distinct mural granulosa cell fate poised to respond to the LH surge. Indeed, the periovulatory granulosa cell state was identified based on its expression of genes regulated by LH such as Smarca1 (Lazzaro et al., 2006), and we propose Oxtr as a specific marker for these cells (Figure 3-figure supplement 1D). Oxtr expression was found only in the mural cells of large Graafian follicles, suggesting it indeed corresponds to an LH-stimulated mural granulosa cell state. After ovulation, these LH-stimulated mural granulosa cells, along with the steroidogenic theca cells, terminally differentiate into luteal cells and form the corpus luteum (CL). The CL is a transient structure with highly active steroid biosynthesis, providing the progesterone to maintain pregnancy (Duncan, 2021). In absence of implantation, the CL degenerates (Noguchi et al., 2017). We found this progression of the CL to be recapitulated at the transcriptional level, leading to two luteal subclusters: active CL and regressing CL. While the active CL was characterized by expression of proliferation markers (Top2a) in addition to steroidogenic enzymes, the regressing CL expressed the cell cycle inhibitor and senescence maker Cdkn1a, along with luteolysis markers such as syndecan 4 (Sdc4), claudin domain containing 1 (Cldnd1), and BTG anti-proliferation factor 1 (Btg1) (Talbott et al., 2017;Zhu et al., 2013). The distinct expression signatures observed in these two clusters may provide insights into the molecular basis of luteolysis and warrant further investigation. The mesenchymal cluster was also surprisingly complex and variable across the estrous cycle, reflecting stromal remodeling and other physiological functions supporting follicle growth, steroid hormone production, and ovulation. For example, during follicle maturation, the ovarian stromal cells adjacent to the developing follicle differentiates into the theca, which is ultimately responsible for steroid hormone biosynthesis and therefore underlies the cyclic hormone production of the ovary (Ryan and Petro, 1966). 
Herein, we identified two thecal clusters, designated as early and steroidogenic theca. Early theca was defined by markers such as Hhip (Richards et al., 2018; Hummitzsch et al., 2019), mesoderm-specific transcript (Mest) (Fan et al., 2019), and patched 1 (Ptch1) (Fan et al., 2019; Richards et al., 2018) and, given its association with small follicles, was presumed to be immature and the precursor to steroidogenic theca. As the follicle matures and the antrum forms, this layer becomes more vascularized and differentiates into theca interna, which is steroidogenic. This steroidogenic theca cluster was readily identifiable through its expression of steroidogenic enzymes such as Hsd3b1, Cyp17a1, and Cyp11a1, and also well-established markers such as ferredoxin-1 (Fdx1) and prolactin receptor (Prlr) (Fan et al., 2019; Grosdemouge et al., 2003). Interestingly, we confirmed the presence of steroidogenic interstitial stromal cells, which also likely contribute to sex steroid production in the ovary. Indeed, such cells likely represent the precursors of the theca interna (Sheng et al., 2022; Kinnear et al., 2020). Smooth muscle cells, which are part of the theca externa, were identified by their expression of structural proteins such as Mfap5, myosin heavy chain 11 (Myh11), Tagln, and smooth muscle actin (Sma or Acta2) (Zhao et al., 2020). In contrast to mice, human smooth muscle cells are thought to express high levels of collagen (Fan et al., 2019), which we did not observe here. Another species difference between mice and humans was the expression of aldehyde dehydrogenase 1 family member A1 (Aldh1a1), which we found primarily in the steroidogenic theca cluster, while it is presumably enriched in the theca externa in humans (Fan et al., 2019). Ovulation is associated with a dramatic remodeling of the ovary, including the subsequent ovulatory wound repair. We identified fibroblast-like cells in the ovarian stroma expressing many of the extracellular matrix proteins known to play a role in these processes (Mara et al., 2020; Duffy et al., 2019). Another important player in ovulatory wound repair is the ovarian surface epithelium (OSE), a simple mesothelial cell layer that covers the surface of the ovary and must dynamically expand to cover the wound (Hartanti et al., 2019; Xu et al., 2011). The OSE cluster could be identified based on well-established markers such as keratin (Krt) 7, 8, and 18 (Kenngott et al., 2014) and was represented by only 3% of all cells in our dataset, which could be further subdivided into proliferative and non-proliferative states. As expected from their function in ovulatory wound repair, dividing OSE cells were enriched during estrus. Furthermore, genes associated with wound healing such as galectin 1 (Lgals1) were also significantly upregulated in estrus. Similarly, the expression of the immediate-early genes Fos, Jun, Junb, and Egr1 was variable during the estrous cycle, following a common pattern of strong downregulation at estrus compared to the other stages, consistent with their temporal expression during the repair of other tissues such as the cornea (Okada et al., 1996). Finally, to take advantage of this rich dataset, we sought to identify secreted markers which varied in abundance during the estrous cycle and could thus be used as staging biomarkers in assisted reproduction.
We identified and prioritized four secreted biomarkers, expressed in both mouse and human ovaries, which varied significantly during different transitions of the estrous cycle, namely Inhba (Wijayarathna and de Kretser, 2016), Prss35 (Wahlberg et al., 2008; Li et al., 2015), Nppc (Zhang et al., 2010; Xi et al., 2021), and Tinagl1 (Akaiwa et al., 2020; Kim et al., 2010). Activin A is a secreted protein homodimer translated from the Inhba transcript that is a crucial modulator of diverse ovarian functions including pituitary feedback, and whose expression level depends highly on the stage of the estrous cycle (Chang and Leung, 2018). Quantification of activin A protein in the serum by ELISA revealed elevated levels in the blood during both proestrus and estrus, which is consistent with studies of other species such as ewes (O'Connell et al., 2016). Importantly, the protein product of Inhba, the inhibin beta-A subunit, can be incorporated into other protein dimers, such as activin AB and inhibin A, which were not measured in this study and may also represent cycling biomarkers. The serine protease 35 transcript was expressed in the theca layers of preantral follicles and induced in granulosa cells of preovulatory follicles and all stages of the corpora lutea, peaking at the estrous stage according to qPCR, leading us to speculate that it may be involved in tissue remodeling during ovulation and CL formation (Wahlberg et al., 2008). In contrast, the PRSS35 protein levels were highest in the diestrus and proestrus stages as determined by ELISA, suggesting other tissue sources of PRSS35 or an offset in peak protein levels due to delays in accumulation of the protein in the circulation. The natriuretic peptide precursor C (NPPC) protein is a peptide hormone encoded by the Nppc gene. Nppc has been reported to be expressed by mural granulosa cells, while its receptor Npr2 is expressed by cumulus cells (Zhang et al., 2010). The pair acts on developing follicles by increasing the production of intracellular cyclic guanosine monophosphate and maintains oocyte meiotic arrest during maturation. Upon downregulation of this pathway, the oocyte can escape meiotic arrest and ovulate (Celik et al., 2019). This close relationship with the ovulatory process makes Nppc an attractive marker to predict ovulation. Herein, qPCR analysis revealed that Nppc was highest in the ovary at proestrus and was quickly and significantly downregulated at estrus, probably in response to the increased levels of LH, which in turn inhibit the Nppc/Npr2 system (Celik et al., 2015). In contrast, there was a trend for the circulating NPPC peptide to be highest in metestrus and diestrus, albeit not in a statistically significant way. Finally, we evaluated the level of transcription and protein expression of the matricellular factor Tinagl1. We found both the Tinagl1 transcript and the circulating TINAGL1 protein in the blood to be highest during estrus, thus coinciding with ovulation, with a pattern of expression consistent with expression by mural granulosa cells of antral follicles. While the role of TINAGL1 in the ovary has not been extensively investigated, it has been associated with delayed ovarian collagen deposition and increased ovulation in aging Tinagl1 knock-out mice (Akaiwa et al., 2020).
Those four potential cyclic biomarkers, activin A, PRSS35, NPPC, and TINAGL1, provide a proof of concept that a deeper understanding of transcriptional changes at the single-cell level may translate into useful applications in assisted reproduction. It will be of interest to follow up the findings of cyclic expression of these four markers, particularly in combination as an index, for the purpose of staging and predicting ovulation timing in humans and other species . In summary, this study outlines the dynamic transcriptome of murine ovaries at the single-cell level and across the estrous cycle and other reproductive states, and extends our understanding of the diversity of cell types in the adult ovary. We identified herein novel biomarkers of the estrous cycle that can be readily measured in the blood and may have utility in predicting staging for assisted reproduction. This rich dataset and extensive validation of new molecular markers of cell types of the ovary will provide a hypothesis-generating framework of dynamic cell states across the cycle with which to elucidate the complex cellular interactions that are required for ovarian homeostasis. For the analysis of transcriptional changes in ovaries of cycling mice, animals were housed in standard conditions (12/12 hr light/dark non-inverting cycle with food and water ad libitum) in groups of five females with added bedding from a cage that previously housed an adult male mouse to encourage cycling. Estrous stage was determined by observation of the vaginal opening and by vaginal swabs done at the same time daily, as previously described (Kano et al., 2017). Each mouse was monitored for a minimum of 2 weeks to ensure its cyclicity. Four mice were sacrificed in each of the four phases of the estrous cycle and labeled as being from experimental batch 'cycling'. An additional eight mice were included in the analysis and labeled as being from experimental batch 'lactating'. Four of these mice were lactating at day 10 post-partum, and four were 10 days post-partum with pups removed at delivery. Four additional mice were not monitored for cycling and included to increase sample size and diversity. Key resources table Additional mice were monitored throughout the estrous cycle to collect ovaries at each stage (groups of N=5 for proestrus, estrus, metestrus, and diestrus) for gene validation. Paired ovaries were collected from each staged mouse: one was used to extract mRNA for qPCR, while the other was fixed in 4% paraformaldehyde for RNAish (RNAscope) or immunohistochemistry to validate gene expression. Staging of estrous cycle by vaginal cytology As previously described (Kano et al., 2017;Byers et al., 2012), staging of mice was performed using a wet cotton swab, introduced into the vaginal orifice then smeared onto a glass slide which was air-dried, stained with Giemsa, and scored for cytology by two independent observers. Briefly, proestrus was determined if the smear showed a preponderance of nucleated epithelial cells as well as leukocytes. Estrous was marked by an abundance of cornified epithelial cells, while metestrous smears contained a mixture of cornified epithelial cells and leukocytes. Finally, diestrus was characterized by abundant leukocytes with low numbers of cornified epithelium or nucleated epithelial cells. Generation of single-cell suspension Single-cell suspension from mouse ovaries was obtained as previously described with uterine enzymatic dissociation (Saatcioglu et al., 2019). 
Briefly, ovaries were incubated for 30 min at 34°C in dissociation medium (82 mM Na2SO4, 30 mM K2SO4, 10 mM glucose, 10 mM HEPES, and 5 mM MgCl2, pH 7.4) containing 15 mg of Protease XXIII (Worthington), 100 U papain with 5 mM L-cysteine and 2.5 mM EDTA (Worthington), and 1333 U of DNase 1 (Worthington). The reaction was then stopped in cold medium, and samples were mechanically dissociated, filtered, and spun down three times before being resuspended to a concentration of 150,000 cells/mL in 20% Optiprep (Sigma) for inDrop sorting.

Single-cell RNA sequencing (inDrop)
Fluidic sorting was performed using the inDrop platform at the Single-Cell Core facility at Harvard Medical School as previously described (Klein et al., 2015; Macosko et al., 2015). We generated libraries of approximately 1500 cells per animal, which were sequenced on the NextSeq500 (Illumina) platform. Transcripts were processed according to a previously published pipeline (Klein et al., 2015), which was used to build a custom transcriptome from the Ensembl GRCm38 genome and GRCm38.84 annotation using Bowtie 1.1.1. Unique molecular identifiers (UMIs) were used to reference sequence reads back to individual captured molecules, referred to as UMIFM counts. All steps of the pipeline were run using default parameters unless explicitly specified.

scRNAseq data analysis
Data processing
The initial Seurat object was created using thresholds to identify putative cells (unique cell barcodes) with the following parameters: 1000-20,000 UMIs, 500-5000 genes, and less than 15% mitochondrial genes. The final merged dataset contained ~70,000 cells, which were clustered based on expression of marker genes. These were further processed in several ways to exclude low-quality data and potential doublets. Visualization of single-cell data was performed using a non-linear dimensionality-reduction technique, uniform manifold approximation and projection (UMAP). Markers for each cluster level were identified using MAST in Seurat (R version 4.1.3, Seurat version 4.1.0). Following identification of the main clusters (granulosa, mesenchyme, endothelium, immune, epithelium, and oocyte), we reanalyzed each cluster population to perform subclustering. Briefly, the granulosa, mesenchyme, and epithelium clusters were extracted from the integrated dataset with the subset function. Each isolated cluster was then divided into several subclusters following normalization, scaling, principal component analysis (PCA), and dimensionality reduction as previously described (Niu and Spradling, 2020).

Volcano plots
Highly differentially expressed genes between estrous cycle stages were identified using the FindMarkers function in Seurat. Volcano plots were generated using the ggplot2 package in R.

Pathway enrichment analysis
Differentially expressed genes with at least twofold changes between contiguous estrous stages were used as input for gene ontology enrichment analysis with clusterProfiler. The enrichplot package was used for visualization. The biological process subontology was chosen for this analysis.

Principal component analysis
PCA was used to identify common patterns of gene expression across stages of the cycle. For each Level 0 cluster object, cycling cells were extracted, and genes expressed in more than 5% of cells were identified. The expression of these genes in the cycling cells was scaled (set to mean zero, SD 1) and averaged across each of the four cycle stages. PCA was run (prcomp) on the average scaled expression data.
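The QC filtering and the stage-averaged PCA just described are compact enough to sketch in code. The following Python snippet is our own illustration with hypothetical inputs (the actual pipeline used Seurat 4.1.0 and R's prcomp; this is an equivalent, not the authors' code):

    import numpy as np
    import pandas as pd

    def qc_filter(umis, n_genes, pct_mito):
        # Boolean mask of barcodes passing the thresholds quoted above
        return ((umis >= 1000) & (umis <= 20000) &
                (n_genes >= 500) & (n_genes <= 5000) &
                (pct_mito < 0.15))

    def stage_pca(expr, stages, min_frac=0.05):
        # expr: cells x genes matrix (log-normalized); stages: per-cell labels
        keep = (expr > 0).mean(axis=0) > min_frac            # genes in >5% of cells
        x = expr[:, keep]
        x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)    # scale to mean 0, SD 1
        avg = pd.DataFrame(x).groupby(pd.Series(stages)).mean()  # average per stage
        centered = avg.values - avg.values.mean(axis=0)
        u, s, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
        return u * s                                          # stage scores in PC space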
In situ hybridization and immunohistochemistry
In situ hybridizations were performed using ACDBio kits as per the manufacturer's protocol, as previously described (Saatcioglu et al., 2019). Briefly, RNAish was developed using the RNAscope 2.5 HD Reagent Kit (RED and Duplex, ACD Bio). Following deparaffinization in xylene, dehydration, peroxidase blocking, and heat-induced epitope retrieval with the target retrieval and protease plus reagents (ACD Bio), tissue sections were hybridized with probes for the target genes (see Key resources table for the accession number and catalog number of each gene) in the HybEZ hybridization oven (ACD Bio) for 2 hr at 40°C. The slides were then processed through standard signal amplification steps and chromogen development. Slides were counterstained in 50% hematoxylin (Dako), air-dried, and coverslipped with EcoMount. In addition to cycling and non-cycling mice, superovulated mice were used to validate markers associated with the LH-surge response in ovulatory follicles at the estrus stage, allowing more precise timing of collection. For colocalization of RNAish staining with immunohistochemistry, we first processed the tissue section for RNAscope as described above, including deparaffinization, antigen retrieval, hybridization, and chromogen development. Sections were then blocked in 3% bovine serum albumin in Tris-buffered saline (TBS) for 1 hr. Following three washes with TBS, the sections were incubated with the primary antibody (smooth muscle actin primary antibody; 1:300, Abcam) overnight at 4°C and developed with the Dako EnVision+ System horseradish peroxidase (HRP). Labeled polymer anti-rabbit was used as the secondary antibody, and the HRP signal was detected using the Dako detection system. Slides were then counterstained in hematoxylin and mounted as described above.

Reverse transcription-quantitative polymerase chain reaction
Mice were monitored through the estrous cycle and sacrificed at specific stages/timepoints as described above. Ovaries were dissected, and total RNA was extracted using the Qiagen RNA extraction kit (Qiagen). A cDNA library was synthesized from 500 ng total RNA using the SuperScript III First-Strand Synthesis System for RT-PCR according to the manufacturer's instructions with random hexamers (Invitrogen). The primers used for this study are described in Supplementary file 1. Expression levels were normalized to the Gapdh transcript using cycle threshold (Ct) values logarithmically transformed by the 2^(-ΔCt) function.

ELISA
Blood was collected from mice by facial vein puncture, incubated at room temperature (RT) until spontaneously clotted, centrifuged at 8000 rpm for 5 min to collect the serum layer, and diluted 1/10 for each ELISA kit according to the manufacturer's protocol: Mouse CNP/NPPC ELISA kit; Mouse serine protease inactive 35 (PRSS35) ELISA kit; Mouse TINAGL1/Lipocalin 7 ELISA kit; and Human/Mouse/Rat Activin A Quantikine ELISA Kit (see Key resources table).

Additional information
• Supplementary file 2. Top 10 markers expressed in each ovary cluster.
• Supplementary file 3. Top 10 markers from each mesenchyme subcluster.
• Supplementary file 5. Secreted markers expressed in granulosa cells varying with the estrous cycle.
• Transparent reporting form
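As a minimal illustration of the 2^(-ΔCt) normalization described in the RT-qPCR section above (our own sketch; the Ct values shown are hypothetical):

    def relative_expression(ct_target: float, ct_gapdh: float) -> float:
        # Relative expression of a target transcript normalized to Gapdh
        delta_ct = ct_target - ct_gapdh
        return 2.0 ** (-delta_ct)

    # e.g., Ct(target) = 24.1, Ct(Gapdh) = 18.3 -> 2^(-5.8) ≈ 0.018
    print(relative_expression(24.1, 18.3))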
Federated Learning Using Variance Reduced Stochastic Gradient for Probabilistically Activated Agents

This paper proposes an algorithm for Federated Learning (FL) with a two-layer structure that achieves both variance reduction and a faster convergence rate to an optimal solution in the setting where each agent has an arbitrary probability of selection in each iteration. FL is a practical tool for distributed machine learning when privacy matters. When FL is deployed in an environment with irregular connections between agents (devices), reaching a trained model both economically and quickly can be demanding. The first layer of our algorithm corresponds to the model parameter propagation across agents done by the server. In the second layer, each agent does its local update with a stochastic and variance-reduced technique called Stochastic Variance Reduced Gradient (SVRG). We leverage the concept of variance reduction from stochastic optimization in the agents' local update step to reduce the variance caused by stochastic gradient descent (SGD). We provide a convergence bound for our algorithm which improves the rate from $O(\frac{1}{\sqrt{K}})$ to $O(\frac{1}{K})$ by using a constant step-size. We demonstrate the performance of our algorithm using numerical examples.

I. INTRODUCTION
In recent years, with the technological advances in modern smart devices, each phone, tablet, or smart home system generates and stores an abundance of data which, if harvested collaboratively with other users' data, can lead to learning models that support many intelligent applications such as smart health and image classification [1], [2]. Standard traditional machine learning approaches require centralizing the training data on one machine, cloud, or data center. However, the data collected on modern smart devices are often of a sensitive nature that discourages users from relying on centralized solutions. Federated Learning (FL) [3], [4] has been proposed to decouple the ability to do machine learning from the need to store the data in a centralized location. The idea of Federated Learning is to enable smart devices to collaboratively learn a shared prediction model while keeping all the training data on the device. Figure 1 shows a schematic representation of an FL architecture. In FL, collaborative learning without data sharing is accomplished by each agent receiving the current model weights from the server. Then, each participating learner separately updates the model by implementing stochastic gradient descent (SGD) [5] on its own locally collected dataset. The participating agents then send their locally calculated model weights to a server/aggregator, which often combines the models through simple averaging, as in FedAvg [4], before sending them back to the agents. The process repeats until a satisfactory model is obtained. Federated learning relies heavily on communication between learner agents (clients) and a moderating server. Engaging all the clients in the learning procedure at each time step of the algorithm results in huge communication costs. On the other hand, poor channel quality and intermittent connectivity can completely derail training. For resource management, in the original popular FL algorithms such as FedAvg [4], at each round of the algorithm a batch of agents is selected uniformly at random to receive the updated model weights and perform local learning.
FedAvg and similar FL algorithms come with convergence guarantees [6]-[9] under the assumption of availability of the randomly selected agents at each round. However, in practice, due to factors such as energy and time constraints, agents' availability is not ubiquitous at all times. Thus, some work has been done to address this problem via device scheduling [10]-[14]. Nevertheless, an agent's availability can be a function of unforeseen factors such as communication channel quality, and thus is not deterministic and known in advance. To understand the effect of an agent's stochastic availability on FL, recent work such as [15] proposed to move from random batch selection to an FL model where the agents' availability and participation at each round are probabilistic, see Fig. 1. In this paper, we adopt this newly proposed framework and contribute an algorithm that achieves faster convergence and lower error covariance. Our focus is on incorporating a variance reduction procedure into the local SGD procedure of participating learner agents at each round. The randomness in SGD algorithms induces variance in the gradient, which necessitates a decaying learning rate and leads to a sub-linear convergence rate. Thus, there has been an effort to reduce the variance of the stochastic gradient, which resulted in the so-called Stochastic Variance Reduced Gradient (SVRG) methods. It is shown that SVRG allows using a constant learning rate and results in linear convergence in expectation. In this paper, we incorporate an SVRG approach in an FL algorithm where agents' participation in the update process in each round is probabilistic and non-uniform. Through rigorous analysis, we show that the proposed algorithm has a faster convergence rate. In particular, we show that our algorithm results in a practical convergence in expectation with a rate of O(1/K), which is an improvement over the sublinear rate of O(1/√K) in [15]. We demonstrate the effectiveness of our proposed algorithm through a set of numerical studies and by comparing the rate of convergence, the covariance, and the circular error probable (CEP) measure. Our results show that our algorithm drastically improves the convergence guarantees, thanks to the decrease in the variance, which results in faster convergence.

Organization: Section II introduces our basic notation and presents some of the properties of smooth functions. Section III presents the problem formulation and the structure behind it. Section IV includes the proposed algorithm and its scheme. Section V contains our convergence analysis for the proposed algorithm and provides its convergence rate. Section VI presents simulations, and Section VII gathers our conclusions and ideas for future work. For the convenience of the reader, we provide some of the proofs in the Appendix.

II. PRELIMINARIES
In this section, we introduce the notation and terminology used throughout the paper. We let R, R_{>0}, and R_{≥0} denote the sets of real, positive real, and non-negative real numbers, respectively. For x ∈ R, |x| is its absolute value; for x ∈ R^d, ||x|| denotes the standard Euclidean norm. We let ⟨x, y⟩ denote the inner product of two vectors x, y ∈ R^d. A differentiable function f : C → R is L-smooth if its gradient is L-Lipschitz, i.e., ||∇f(x) − ∇f(y)|| ≤ L||x − y|| for all x, y ∈ C [16]. Lastly, we recall Jensen's inequality, which states that f(E[X]) ≤ E[f(X)] for any convex function f and random variable X; in particular, ||(1/n) Σ_{i=1}^n x_i||² ≤ (1/n) Σ_{i=1}^n ||x_i||² [17].

III. PROBLEM STATEMENT
This section formalizes the problem of interest. Consider a set of N agents (clients) that communicate with a server to learn the parameters of a model that they want to fit to their collective data set.
Each agent has its own local data, which can be distributed either uniformly or non-uniformly. The learning objective is to obtain the learning function weights θ ∈ R^d from

min_{θ ∈ R^d} f(θ) = (1/N) Σ_{n=1}^N f_n(θ), with f_n(θ) = (1/L_n) Σ_{i=1}^{L_n} f_{n,i}(θ),   (2)

where f_n is a possibly convex or non-convex local learning loss function. At each agent n ∈ {1, · · · , N}, f_n depends on the training data set {(q_{n,i}, ŷ_{n,i})}_{i=1}^{L_n} ⊂ R^{1×d} × R (supervised learning). Examples include [18]
• square loss f_{n,i}(θ) = (ŷ_{n,i} − q_{n,i} θ)²,
• log loss f_{n,i}(θ) = log(1 + e^{−ŷ_{n,i} q_{n,i} θ}).

Assumption 1 (Assumption on L-smoothness of local cost functions). The local loss functions have L-Lipschitz gradients, i.e., for any agent n ∈ {1, · · · , N} we have ||∇f_n(θ) − ∇f_n(θ̃)|| ≤ L||θ − θ̃|| for any θ, θ̃ ∈ R^d and L ∈ R_{>0}.

This assumption is technical and common in the literature. Problem (2) should be solved in the framework of FL, in which agents maintain their local data and only interact with the server to update their local learning parameter vector based on global feedback provided by the server. The server generates this global feedback from the local weights it receives from the agents. In our setting, at each round k of the FL algorithm, each agent n ∈ {1, · · · , N} becomes active to perform local computations and connect to the server with probability p_n^k. We denote the active state by 1_n^k ∈ {0, 1}; thus, p_n^k = Prob(1_n^k = 1). The activation probability at each round can be different.

IV. FEDERATED LEARNING WITH VARIANCE REDUCTION
To solve (2), we design the FedAvg-SVRG Algorithm 1, which has a two-layer structure. In this algorithm, each agent has its own probability of being active or passive in each round, denoted by p_n^k for agent n at iteration k. Algorithm 1 is initialized with θ^0 by the server. We denote the number of FL iterations by K. At each round k ∈ {1, · · · , K}, the set of active agents, denoted by A_k (line 5), is the set of agents for which 1_n^k = 1. Then, each active agent receives a copy of the learning parameter θ^k from the server. Afterward, active agents perform their local updates according to steps 7 to 18. For resource management, the local update in FL algorithms, e.g., [15], follows an SGD update. However, the SGD update suffers from high variance because of the randomized search of the algorithm, so instead of the SGD update step, which results in a decaying step size and slow convergence, we use the SVRG update step stated in lines 7 to 18. In the SVRG update, we calculate the full batch gradient of each agent at certain points, referred to as snapshots. Then, between every two snapshots, each agent performs its local updates. A schematic of the SVRG update steps is shown in Fig. 2. We denote the number of snapshots in our SVRG method by S. We let M be the number of local SVRG updates between two snapshots for each active agent before aggregation. Line 10 of Algorithm 1 corresponds to computing the full batch gradient of each agent at the snapshot points; then, in line 12, each agent performs its local update with the substituted gradient term v_{n,s,m}^k = ∇f_#(w_{n,s,m}^k) − ∇f_#(w̃) + μ̃.

Algorithm 1 FedAvg-SVRG with non-uniform agent sampling
1: Input: δ, K, θ^0, {p_n^k}, S, M
2: Output: θ^K
3: for k ← 0, ..., K − 1 do
4:   Determine the active agents (sample 1_n^k ∼ p_n^k)
5:   A_k ← set of active agents
6:   for n ∈ A_k do
7-15: [local SVRG updates with v_{n,s,m}^k = ∇f_#(w_{n,s,m}^k) − ∇f_#(w̃) + μ̃, as described in the text]
16:  end for
[lines 17-20: snapshot and server updates, as described in the text]

Note that the substituted gradient term in the SVRG update is an unbiased estimator.
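Since the interior lines of the listing were lost in extraction, the following Python sketch (our own reconstruction from the description above, not the authors' code; the quadratic local losses and all names are hypothetical) illustrates the full two-layer update, including the 1_n^k/p_n^k weighting that keeps the aggregated update unbiased:

    import numpy as np

    def full_grad(Q, y, theta):
        # gradient of f_n(theta) = mean_i (q_i theta - y_i)^2
        return 2 * Q.T @ (Q @ theta - y) / len(y)

    def sample_grad(Q, y, theta, i):
        # gradient of a single summand f_{n,i}
        return 2 * (Q[i] @ theta - y[i]) * Q[i]

    def fedavg_svrg(agents, theta0, p, K, S, M, delta, rng):
        """agents: list of (Q, y) pairs; p: per-agent activation probabilities."""
        theta = theta0.copy()
        N = len(agents)
        for k in range(K):
            active = rng.random(N) < p                 # sample 1_n^k ~ p_n^k
            update = np.zeros_like(theta)
            for n in range(N):
                if not active[n]:
                    continue
                Q, y = agents[n]
                w_snap = theta.copy()                  # snapshot w~
                for s in range(S):
                    mu = full_grad(Q, y, w_snap)       # full local gradient at snapshot
                    w = w_snap.copy()
                    for m in range(M):
                        i = rng.integers(len(y))
                        v = (sample_grad(Q, y, w, i)
                             - sample_grad(Q, y, w_snap, i) + mu)
                        w = w - delta * v              # constant step-size, no decay
                    w_snap = w                         # update the snapshot
                # inverse-probability weighting keeps the update unbiased
                update += (w_snap - theta) / (p[n] * N)
            theta = theta + update
        return theta

    # Example with synthetic agents (18 agents x 50 local quadratic costs):
    # rng = np.random.default_rng(0)
    # agents = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(18)]
    # theta = fedavg_svrg(agents, np.full(5, 0.5), np.full(18, 0.3),
    #                     K=100, S=5, M=2, delta=0.1, rng=rng)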
After completing the SVRG update, each agent updates its snapshot, as stated in line 17 [19], [5]. In the end, in line 20, the model parameter is updated. It should be noted that the weight 1_n^k/p_n^k used in updating the model parameter makes the aggregated gradient unbiased: through this fraction, agents with a low probability of being selected at each iteration still have an adequate expected impact on the model parameter when they do participate. Unlike SGD, the stepsize δ for the SVRG update in line 14 does not have to decay. Hence, it gives rise to faster convergence, as one can choose a large stepsize.

V. CONVERGENCE ANALYSIS
In this section, we study the convergence bound for the proposed algorithm, which is applicable to both convex and non-convex cost functions.

Assumption 2 (Unbiased stochastic gradients). The index # is sampled uniformly at random, so that E_#[∇f_#(w)] = ∇f_n(w) for any w and # ∈ {1, ..., L_n}. As a result, our substituted gradient term v_{n,s,m}^k is an unbiased estimator of the local gradient.

Assumption 3 (Bound on the substituted gradient term). The second moment of the substituted gradient term is uniformly bounded, i.e., E||v_{n,s,m}^k||² ≤ G for some G ∈ R_{>0}.

Assumption 4 (Assumption on µ-strongly convex local cost functions). The local cost functions are strongly convex with parameter µ, i.e., f_n(θ̃) ≥ f_n(θ) + ⟨∇f_n(θ), θ̃ − θ⟩ + (µ/2)||θ̃ − θ||² for all θ, θ̃ ∈ R^d.

Also, we point out that 1_n^k and 1_{n′}^k are independent for n ≠ n′, and the agent activation at each iteration is independent of the random function selection. In other words, 1_n^k and ∇f_#(w) are completely independent.

Theorem V.1 (Convergence bound for the proposed algorithm for both convex and non-convex cost functions). Let Assumptions 1 and 2 hold. Then Algorithm 1 results in the bound (7). Furthermore, if assumption (5) holds, we can write the bound (8) on the right-hand side, where f* is the optimal value of (2).

Proof of Theorem V.1 is given in the appendix.

Remark: According to (8), the rate of convergence of the algorithm is determined by min[1/(δK), δ², δ/K]. We can derive the convergence rate by choosing δ = 1/K^ε. Then, the rate of convergence is chosen from min[1/K^{1−ε}, 1/K^{2ε}, 1/K^{1+ε}]. By selecting ε = 1/3, which balances 1 − ε = 2ε, the best convergence rate is obtained, of order O(1/K^{2/3}). Thus, using the decaying step-size δ = 1/K^{1/3}, the algorithm converges to the optimal point for both convex and non-convex cost functions.

Lemma V.1. If assumption (4) holds, then Algorithm 1 converges to the optimal point at a rate no less than O(1/K).

Proof. Without loss of generality, let us take S = 1, i.e., no intermediate full gradient calculation for each agent. Then we have the following upper bound for the substituted gradient term: Taking expectation with respect to #, and using the facts that ||x + y||² ≤ 2||x||² + 2||y||², μ̃ = ∇f_n(w̃), and E||x − E[x]||² ≤ E||x||², we get the following inequalities, where in the last inequality we use Theorem 1 in [19]. By summing both sides over the M inner iterations and selecting the snapshot w̃ = w_{n,m̂}^k for a randomly chosen index m̂, which is a valid update scheme in SVRG, we get (10), where α = h(µ, L, δ, M) < 1 and D is an upper bound on E[f_n(w_n^k) − f_n(w_n*)]; hence we have geometric convergence in expectation for SVRG. Using (10), we can upper bound the last two expectations in (9) and get (12); note that because α < 1, we have (13). By using (12) and (13) in (7), we establish the O(1/K) convergence rate for the proposed algorithm.

By incorporating an SVRG approach in our FL algorithm, Theorem V.1 guarantees that we can use a fixed step-size δ and achieve a convergence rate of O(1/K).
The improvement is due to the fact that the SVRG update step does not need a decaying stepsize throughout the learning process. Thus, using a constant and larger stepsize leads the algorithm to faster convergence to the stationary point. This is an improvement over the existing algorithm [15], which guarantees O(1/√K) as the convergence rate by using the SGD method for the local update step.

VI. NUMERICAL SIMULATIONS
In this section, we analyze and demonstrate the performance of Algorithm 1 by solving a regression problem (quadratic loss function). In this study, we compare the performance of our algorithm to that of the FedAvg in [15]. We used a real medical insurance data set of 900 persons in the form of (ŷ, q) ∈ R × R^{1×5}. Then, to observe the effect of stochastic optimization, we distributed the data among 18 agents, so each agent owns 50 quadratic costs. In other words, we seek to solve the following convex optimization problem:

min_θ (1/N) Σ_{n=1}^N (1/L_n) Σ_{i=1}^{L_n} (ŷ_{n,i} − q_{n,i} θ)²,   (14)

where in our problem N and L_n are 18 and 50, respectively. Here, ŷ_{n,i} ∈ R, and θ is the learning parameter (weight), a column vector with 5 elements. We conducted 20 Monte Carlo simulations, in all of which we initialized our algorithm at θ^0 = [0.5, ..., 0.5]^T, and we used the fixed step-size δ = 1/√100 in all rounds. We also simulated the FedAvg algorithm of [15] with the same initialization but using the decaying stepsize 1/√K, as mentioned in [15]. For our algorithm we consider two cases: (1) (S = 5, M = 2) and (2) (S = 10, M = 5). The simulation results for the first case are shown in Fig. 3-Fig. 5, while the results for the second case are shown in Fig. 6-Fig. 8. Figures 3 and 6 show that in both cases our algorithm converges faster to the optimal cost (whose value is 0.01). Figures 4 and 7 show the variance produced by the two algorithms. The variance of our algorithm is significantly lower than that of the algorithm of [15]. To display the variance of our algorithm, we use a logarithmic scale on the y-axis. Moreover, the variance of our algorithm decreases as the number of iterations increases, as opposed to the algorithm of [15], which suffers from high variance. Figures 5 and 8 show the circular error probable (CEP), which we use to observe the variance at the last iteration (K = 100) for our algorithm and the FedAvg algorithm in [15]. CEP is a measure used in navigation filters. It is defined as the radius of a circle, centered on the mean, whose perimeter is expected to include the landing points of 50% of the rounds; in other words, it is the median error radius [20]. Here, then, CEP indicates how far 50% of the Monte Carlo runs land from the mean of the runs; a smaller radius means less spread around the Monte Carlo mean. This plot shows that not only does our algorithm reach a closer neighborhood of the optimal cost, but it also has a smaller CEP radius than the algorithm of [15]; this is another indication that our algorithm has lower variance than the FedAvg algorithm in [15]. For our algorithm, the CEP radius in the first and second cases is 0.0029 and 0.0077, respectively, while the corresponding values for the algorithm of [15] are 0.0059 and 0.0201. To complete our simulation study, we also compare the convergence performance of our algorithm to that of the FedAvg of [4], which uses uniform agent selection.
Figure 9 demonstrates the results when we use a batch size of 5 for the FedAvg of [4] and use the parameters corresponding to the first case for our algorithm. As we can see, our algorithm outperforms the FedAvg of [4] in both mean and variance.

VII. CONCLUSIONS
We have proposed an algorithm in the FL framework for the setting where each agent can have a non-uniform probability of becoming active (getting selected) in each FL round. The algorithm possesses a doubly-layered structure, as in the original FL algorithms. The first layer corresponds to distributing the server parameter to the agents. At the second layer, each agent updates its copy of the server parameter through an SVRG update. After each agent sends back its update, the server parameter is updated. By leveraging the SVRG technique from stochastic optimization, we constructed a local updating rule that allows the agents to use a fixed stepsize. We characterized an upper bound on the gradient of the expected value of the cost function, which showed that our algorithm converges to the optimal solution at a rate no worse than O(1/K). This is an improvement over the O(1/√K) rate of [15].

APPENDIX
This appendix gives the proof of Theorem V.1. Before giving that proof, we state some auxiliary lemmas that we will invoke in the proof. This concludes the proof.

Lemma A.2. Consider Algorithm 1. We can establish that

Proof. We start by noting that

This concludes the proof. In the third inequality, we use the smoothness property of the functions; for the rest of the inequalities, we use Jensen's inequality. We are now ready to present the proof of Theorem V.1.

Proof of Theorem V.1. Our proof is based on the smoothness of the cost function. From smoothness, we can write the following inequality: From the proposed algorithm, we can write the following: Dividing both sides by K, then summing from k = 0 to k = K − 1, we have the following: This concludes the proof.
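Referring back to the CEP measure used in Section VI, the following is a minimal sketch (our own illustration, not the authors' code) of how the median error radius can be computed from Monte Carlo endpoints:

    import numpy as np

    def cep_radius(endpoints: np.ndarray) -> float:
        # endpoints: (runs, dim) array of final learning-parameter estimates.
        # Returns the radius of the mean-centered ball containing 50% of runs.
        center = endpoints.mean(axis=0)
        radii = np.linalg.norm(endpoints - center, axis=1)
        return float(np.median(radii))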
Comparison of nalbuphine and sufentanil for colonoscopy: A randomized controlled trial

Objectives
Nalbuphine is as effective as morphine as a perioperative analgesic but has not been compared directly with sufentanil in clinical trials. The aims of this study were to compare the efficacy and safety of nalbuphine with those of sufentanil in patients undergoing colonoscopy and to determine the optimal doses of nalbuphine in this indication.

Methods
Two hundred and forty consecutive eligible patients aged 18-65 years with an American Society of Anesthesiologists classification of I-II and scheduled for colonoscopy were randomly allocated to receive sufentanil 0.1 μg/kg (group S), nalbuphine 0.1 mg/kg (group N1), nalbuphine 0.15 mg/kg (group N2), or nalbuphine 0.2 mg/kg (group N3). Baseline vital signs were recorded before the procedure. The four groups were monitored for propofol sedation using the bispectral index, and pain relief was assessed using the Visual Analog Scale and the modified Behavioral Pain Scale for non-intubated patients. The incidences of respiratory depression during endoscopy, nausea, vomiting, drowsiness, and abdominal distention were recorded in the post anesthesia care unit and in the first and second 24-hour periods after colonoscopy.

Results
There was no significant difference in analgesia between the sufentanil group and the nalbuphine groups (p>0.05). Respiratory depression was significantly more common in group S than in groups N1 and N2 (p<0.05). The incidence of nausea was significantly higher in the nalbuphine groups than in the sufentanil group in the first 24 hours after colonoscopy (p<0.05).

Conclusions
Nalbuphine can be considered a reasonable alternative to sufentanil in patients undergoing colonoscopy. Doses in the range of 0.1-0.2 mg/kg are recommended. The decreased risks of respiratory depression and apnea make nalbuphine suitable for patients with respiratory problems.

Introduction
Colonoscopy is now considered the "gold standard" for diagnosing pathologies of the colon and rectum, and is the primary modality used to screen for colorectal cancer [1]. However, colonoscopy involves air insufflation and insertion of instruments, is generally perceived by patients as painful, and is poorly tolerated without sedation [2]. Therefore, sedation and analgesia are widely accepted by patients and considered by many gastroenterologists an integral component of the endoscopic examination. An analgesic agent that provides effective perioperative pain control without producing significant respiratory depression (RD) would be useful during colonoscopy. Nalbuphine hydrochloride is a mixed agonist-antagonist opioid with a duration of action of approximately 3-6 hours. It is chemically related to both the agonist analgesic oxymorphone and the antagonist naloxone, and acts as an antagonist at the μ receptor and as an agonist at the κ receptor, resulting in analgesia and sedation with minimal cardiovascular effects [3]. Any slight RD that occurs would be restricted by a ceiling effect [4]. Other proposed advantages of nalbuphine as an agonist-antagonist opioid include a lower incidence of side effects (e.g., nausea and vomiting) than other opioid analgesics [5]. Further, nalbuphine is superior in treating opioid-induced pruritus without affecting analgesia [6]. Sufentanil, in contrast, is a highly lipophilic fentanyl-analog opioid that is commonly used for surgical analgesia.
However, sufentanil is associated with an increased risk of hypoxemia and apnea [3], which is particularly undesirable for patients and anesthesiologists in the outpatient setting. Nalbuphine is as effective as morphine for perioperative analgesia [7] but has not been compared directly with sufentanil in clinical trials. The aims of this study were to compare the analgesic efficacy and safety of nalbuphine with those of sufentanil in patients undergoing colonoscopy and to determine the optimal doses of nalbuphine for colonoscopy.

Materials and methods
This prospective, randomized, double-blind clinical trial was approved by the ethics committee at West China Hospital, Sichuan University, in February 2016 and was registered with the Chinese Clinical Trial Registry (ChiCTR-IPR-16009184; http://www.chictr.org.cn) before its initiation in September 2016. The trial included 240 inpatients and outpatients who underwent colonoscopy at our institution from September 2016 to November 2016. Details of the study protocol can be found at http://dx.doi.org/10.17504/protocols.io.iq4cdyw.

The inclusion criteria were as follows: age 18-65 years; body mass index 18.5-30 kg/m²; American Society of Anesthesiologists classification I-II; and duration of colonoscopy <30 minutes, to ensure a standardized duration and avoid excessive dosing and a high risk of side effects. Patients were excluded if they had a history of abnormal recovery from anesthesia, a heart rate on electrocardiography of <60 beats/min, systolic blood pressure (SBP) >180 mmHg or <90 mmHg, acute airway inflammation in the previous 2 weeks, neuromuscular disease, a possible or confirmed difficult airway, a suspected history of abuse of narcotic analgesics or sedatives, a history of allergy to propofol or opioids, or inability to communicate. After obtaining written informed consent, the patients were randomly assigned to one of four groups (by opening a sealed allocation envelope containing the group randomization number produced with the blockrand package in R 3.1.1 [R Foundation for Statistical Computing, Vienna, Austria], with a block size of 8), without stratification, and received either sufentanil 0.1 μg/kg (group S), nalbuphine 0.1 mg/kg (group N1), nalbuphine 0.15 mg/kg (group N2), or nalbuphine 0.2 mg/kg (group N3). To blind the anesthesiologist to study group allocation, the doses used in the four groups were prepared by a nurse in identical 10-mL syringes at concentrations of 1 μg/mL, 1 mg/mL, 1.5 mg/mL, and 2 mg/mL. In this way, all patients, the anesthesiologist, and the gastroenterologists were blinded to group information. Patients in all groups received propofol for sedation and were monitored for sedation depth using the bispectral index (BIS). Propofol was initially administered at a rate of 1 mL (10 mg)/5 seconds to a maximum dose of 4 mL (40 mg) if body weight was <60 kg or 5 mL (50 mg) if body weight was >60 kg. After each bolus infusion, a waiting period, typically 30-60 seconds, was used to observe and assess whether the drug had fully taken effect, judged by the BIS value falling below 80 and absence of the eyelash reflex. Additional doses (20-30 mg) of propofol were administered if the patient started to move or if the BIS value rose toward 80. The targeted sedation depth was moderate-to-deep, i.e., a stable BIS score between 60 and 80 during the procedure. Baseline vital signs were recorded immediately before the procedure.
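A toy rendering of the titration logic just described (our own sketch for illustration only, not a clinical tool; the thresholds are those quoted in the text):

    def initial_bolus_mg(weight_kg: float) -> int:
        # 4 mL (40 mg) if body weight < 60 kg, else 5 mL (50 mg) of 10 mg/mL propofol
        return 40 if weight_kg < 60 else 50

    def needs_top_up(bis: float, patient_moving: bool) -> bool:
        # Additional 20-30 mg dose if the patient moves or BIS rises toward 80
        return patient_moving or bis >= 80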
All patients received supplemental oxygen intranasally (5 L/min) and underwent continuous monitoring of heart rate (three-lead electrocardiography), oxygen saturation (pulse oximetry), blood pressure (automated blood pressure cuff, serial measurements every 3 minutes), and BIS (BIS VISTA™ monitoring system, Aspect Medical Systems Inc., Norwood, MA, USA) at 1-minute intervals for the first 3 minutes after induction and every 3 minutes thereafter. The respiratory rate and end-tidal CO2 were recorded using a nasal cannula (Microstream end-tidal CO2 circuit, Medtronic, Dublin, Republic of Ireland). Pain intensity was evaluated using the modified Behavioral Pain Scale (BPS) for non-intubated patients (BPS-NI) [8] during endoscopy as the primary outcome variable. The modified BPS-NI is based on a summed score of three items: facial expression, movements of the upper and lower limbs, and vocalizations. A total score on the modified BPS-NI of >5 meant that the patient experienced intolerable pain during the colonoscopy procedure (Table 1). Pain relief was also measured using the Visual Analog Scale (VAS), which consists of a 10-cm horizontal line, the left end representing "no pain" (0 cm) and the right end representing "the worst imaginable pain" (10 cm). The patients were instructed to draw a vertical mark on the line to indicate the intensity of pain as the baseline before the procedure. After the procedure, we reassessed the pain level; the assessments were performed when the patients, who were not informed of their previous VAS score, awoke in the post anesthesia care unit (PACU). The total propofol dose was also documented. After the procedure, the patients were taken to a recovery room, where blood pressure, blood oxygen saturation level (SpO2), and electrocardiographic parameters were monitored until discharge.

[Table 1 fragment, vocalization item of the modified BPS-NI: moaning not frequent (≤3/min) or not prolonged (≤3 s), score 2; moaning frequent (>3/min) and prolonged (≥3 s), score 3; howling or verbal complaint including "ow, ouch"]

After discharge, each patient received a follow-up telephone call during the first and second 24-hour periods after colonoscopy. Common side effects, including drowsiness, nausea, vomiting, and abdominal distension, were recorded. Data for patients who could not be contacted by follow-up telephone call were still included in the final analysis as part of the full analysis set, with the exception of side effects reported in the first and second 24-hour periods after colonoscopy. Thus, patients missing data on adverse events in the first and second post-discharge 24-hour periods were considered lost to follow-up. All patients (except those excluded on the basis of colonoscopy duration) had data for the main outcomes, modified BPS-NI and VAS, before and after the procedure. All data were recorded by the anesthesiologist. Respiratory depression was considered significant when SpO2 was ≤90%, end-tidal CO2 was >50 mmHg at any time, the respiratory rate was <6 breaths/minute, or airway obstruction with cessation of gas exchange was observed at any time (noted by an absent end-tidal CO2 waveform) [9]. Airway maneuvers (i.e., jaw thrust and chin lift) were performed in the event of RD. A decrease in SBP to <90 mmHg was considered to indicate hypotension. Ephedrine 3-5 mg was administered to treat arterial hypotension, defined as an SBP of ≤80 mmHg or a reduction in SBP of >30% compared with baseline.
Bradycardia was defined as a reduction in heart rate to ≤60 beats/minute; intravenous atropine 0.3-0.5 mg was administered if the heart rate decreased to <50 beats/minute.

Statistical analysis
The primary outcome variable was the modified BPS-NI score. Before the trial, we conducted a preliminary study using the same protocol that included 77 patients (20 in group S, 18 in group N1, 19 in group N2, and 20 in group N3). According to the preliminary results, the proportion of patients with a modified BPS-NI score <5 in group S was 94.1%, and the proportions in groups N1, N2, and N3 were 88.24%, 68.75%, and 75%, respectively. Assuming a 1-β value of 0.9 and an α test level of 0.05, we needed a sample size of 207 according to the PASS 11 (NCSS, LLC., Kaysville, UT, USA) method for multiple sets of sample rates with an effect size (W) of 0.2620 (W = √(χ²/N), N = 77) [10], based on the differences among the four groups. Allowing for a dropout rate of 10%, we calculated that 240 cases would need to be enrolled. The statistical calculations were performed using R 3.2.1. The distribution of the data was checked for normality using the Shapiro-Wilk test. Depending on the data distribution, analysis of variance or the Kruskal-Wallis test was used for all independent continuous variables. Multiple comparisons were made using Tukey's honest significant difference test or the Nemenyi test, and the data are presented at the 95% family-wise confidence level. Data are presented as the mean and standard deviation when the distribution was normal. Categorical variables are presented as proportions (%) and were compared using Fisher's exact test or the chi-squared test. Trends across the nalbuphine doses were evaluated using the chi-square test for trend. A p-value of <0.05 was considered statistically significant.

Results
Two hundred and forty patients were assessed for eligibility. Six were excluded, leaving 234 eligible patients who were randomly allocated to group S (n = 59), group N1 (n = 57), group N2 (n = 58), or group N3 (n = 60). Eleven patients could not be contacted by follow-up telephone call and were therefore considered lost to follow-up, and in three patients the duration of colonoscopy was more than 30 minutes (Fig 1). The patient demographic data for the study groups are presented in Table 2. No statistically significant demographic differences were noted between the four groups. The intensity of pain during colonoscopy, as evaluated by the modified BPS-NI, was not significantly different between the groups (Table 3). The analgesic effects of nalbuphine at these doses did not differ significantly from those of sufentanil (p>0.05). A test for trend was also performed but did not show a dose-response relationship in terms of pain in the nalbuphine groups (p>0.05). There were no significant differences between the pre-intervention and post-intervention VAS scores within groups or between the pre-examination and post-examination VAS scores among the groups. Further, there were no significant differences in the pre- to post-examination changes in scores among the four groups (p>0.05; Table 4). Using our measurement criteria, RD was detected as noted in Table 5 and Fig 2. The incidences of absent end-tidal CO2, SpO2 <90%, and respiratory rate <6 breaths/min were significantly lower in the nalbuphine groups than in the sufentanil group. Only patients in groups N1 and N2 showed significantly less RD than group S (p<0.05).
When a test for trend was performed, there was a dose-response relationship in terms of RD in the nalbuphine groups (p<0.05): the incidence of RD increased as the dose of nalbuphine increased. The mean (± standard deviation) total propofol dose administered was 130.56 ± 41.05 mg in group S, 146.09 ± 37.56 mg in group N1, 126.28 ± 37.28 mg in group N2, and 123.35 ± 38.15 mg in group N3 (N1 vs. N2, p<0.05 and N1 vs. N3, p<0.01, Tukey's honest significant difference test). There were no statistically significant differences between any other two groups. The total dose of propofol decreased as the nalbuphine dose was increased. Fig 3 shows the most common side effects, including drowsiness, nausea, vomiting, and abdominal distension, encountered in the PACU. There were no statistically significant differences between the groups (p>0.05). Table 6 shows these common side effects in the first and second 24-hour periods after colonoscopy.
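Before turning to the discussion, a hedged sketch of the effect-size computation described in the Statistical analysis section (W = √(χ²/N)); the counts below are approximate reconstructions from the reported pilot proportions and are shown for illustration only:

    import numpy as np
    from scipy.stats import chi2_contingency

    # rows: groups S, N1, N2, N3; cols: BPS-NI < 5, BPS-NI >= 5 (hypothetical counts)
    table = np.array([[16, 1], [15, 2], [11, 5], [15, 5]])
    chi2, p, dof, expected = chi2_contingency(table)
    n = table.sum()
    w = np.sqrt(chi2 / n)
    print(f"chi2={chi2:.3f}, N={n}, effect size W={w:.4f}")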
We observed that nalbuphine dosages of 0.1 mg/kg and 0.15 mg/kg were associated with significantly less RD when compared with sufentanil. This may due to the μselective opioid sufentanil showing high affinity for its binding sites, which would moderate their main action in the brainstem. Therefore, marked RD would be induced because of their close vicinity to the respiratory regulating centers in the brainstem. This is the reason why opioid-induced RD has always been considered to be related to agonism at the μ receptor. In contrast, because κ receptors are distributed mainly within the cortex [18], a weaker RD effect would be induced by nalbuphine. Previous research by Gal et al indicated that nalbuphine produced a ceiling effect for RD at doses above 0.15 mg/kg [19]. Another study showed that the dose-effect curve for RD was flatter for nalbuphine than for morphine, and the maximum RD [20]. It is possible that the mild RD effect of nalbuphine reflects the activity of this drug as a pure antagonist at the μ receptor and as an agonist at the κ receptor [21]. Another outcome observed in the present study was the lower mean dose of propofol in group N3 than in N2. This suggests that nalbuphine induces a sedative effect because of its action on the κ receptor, which could result in a reduction in the propofol dose required. Previous work has also shown that the total propofol dose were significantly lower in a group that received nalbuphine and propofol when compared with a group that received propofol alone [22]. We also found that the incidence of nausea in the 24 hours after colonoscopy was significantly higher in all the nalbuphine groups than in the sufentanil group. There were several factors that influenced the incidence of postoperative emetic episodes [23]. Several risk factors have been validated, including female sex, age <50 years, a history of postoperative nausea and vomiting, an opioid used in the PACU, and nausea in the PACU [24]. Garfield et al demonstrated that patients receiving nalbuphine 300 μg/kg or 500 μg/kg had a significantly higher incidence of nausea than patients receiving fentanyl and there was a suggestion of a dose-effect relationship [25]. However, a meta-analysis of randomized controlled trials showed that the incidences of nausea and vomiting were significantly lower in patients who received nalbuphine than in those who received morphine [5]. In our study, almost all the patients who developed nausea responded well to anti-emetic agents. Thus, preventive anti-emetic agents or low-dose nalbuphine could be given, especially in the subset of high-risk patients. In conclusion, the results of this study confirm that nalbuphine is a reasonable alternative to sufentanil for intravenous analgesia in patients undergoing colonoscopy. Nalbuphine also produces less RD and has a decreased risk of apnea during colonoscopy procedures. However, in this study, incidence of nausea was significantly higher in the nalbuphine group in the first post-discharge 24-hour period.
General Phase Matching Condition for Quantum Searching

We present a general phase matching condition for the quantum search algorithm with arbitrary unitary transformation and arbitrary phase rotations. We show by an explicit expression that the phase matching condition depends both on the unitary transformation U and on the initial state. Assuming that the initial amplitude distribution is an arbitrary superposition sin θ_0 |1⟩ + cos θ_0 e^{iδ} |2⟩ with |1⟩ = (1/sin β) Σ_k |τ_k⟩⟨τ_k|U|0⟩ and |2⟩ = (1/cos β) Σ_{i≠τ} |i⟩⟨i|U|0⟩, where |τ_k⟩ is a marked state and sin β = √(Σ_k |U_{τ_k 0}|²) is determined by the matrix elements of the unitary transformation U between |τ_k⟩ and the |0⟩ state, the general phase matching condition is

tan(θ/2) [cos 2β + tan θ_0 cos δ sin 2β] = tan(φ/2) [1 − tan θ_0 sin δ sin 2β tan(θ/2)],

where θ and φ are the phase rotation angles for |0⟩ and |τ_k⟩, respectively. This generalizes previous conclusions in which the dependence of the phase matching condition on U and the initial state was disguised. We show that several phase conditions previously discussed in the literature are special cases of this general one, which clarifies the question of which condition should be regarded as exact.

I. INTRODUCTION
Grover's quantum search algorithm [1] is one of the most important developments in quantum computation. For searching a marked state in an unordered list, it achieves a quadratic speedup over classical search algorithms. In Grover's original paper [1], each search step consists of two phase inversions and two Hadamard-Walsh transformations, and the initial state is an even distribution of the basis states. There have been several generalizations of the Grover algorithm. For instance, people have studied the cases with (1) more than one marked item [2]; (2) an arbitrary unitary transformation instead of the Hadamard-Walsh transformation [2]; (3) arbitrary initial distributions [3,4]; (4) arbitrary phase rotations [5,6]; and (5) an arbitrarily entangled initial distribution [7]. Arbitrary phase quantum searching has been extensively studied by our group. It was found that arbitrary phase rotation of the marked state alone cannot be used for a quantum search [5].
It was later demonstrated [6], by an approximate treatment, that a useful quantum search algorithm can be constructed only if the two phase rotations are equal, i.e., θ = φ (θ and φ are the phase rotation angles for the |0⟩ state and the marked state, respectively). It is important that this phase matching condition be satisfied during a searching process, because the systematic error induced by phase mismatching is the dominant gate imperfection in the Grover algorithm [8], and the error tolerance in phase mismatching is of the order O(1/√N). Through the homomorphism between the SU(2) and SO(3) groups, an SO(3) picture of the quantum search algorithm has been established [9]. The advantage of this picture is that one can use simple geometrical methods to treat quantum searching problems, even in cases where an analytical method is difficult to apply. In this picture, a quantum search is described as a series of rotations in a 3-dimensional space. The state vector is represented by a polarization vector. The marked item corresponds to the point (x, y, z) = (0, 0, 1) on the z-axis. The task of a quantum search is to rotate the polarization vector, initially lying near (0, 0, −1), to the target point (0, 0, 1). During the searching process, the 3-dimensional state vector (polarization vector) spans a cone in space, and the tip of the polarization vector draws a circle on this cone. If the target point lies on this circle, the searching process can find the marked state. Using this SO(3) picture, it was proven that the phase matching requirement θ = φ, which was obtained earlier through an approximation [6], is an exact condition. Recently, this phase matching condition has been demonstrated in a 2-qubit system by the liquid NMR technique [10]. Arbitrary phases have recently received much attention. Two papers have been published in Physical Review A addressing particularly this issue [11,4]. In Ref. [11], Høyer discussed arbitrary phase rotations in quantum amplitude amplification, a generalization of Grover's quantum search algorithm. He obtained a phase condition tan(φ/2) = tan(θ/2)(1 − 2a), where a is the success probability of the search algorithm. Using this phase condition, Høyer constructed a quantum algorithm that finds a marked state with certainty. He also confirmed that the phase error tolerance is of the order O(1/√N). By considering θ = φ as an approximation to his phase condition, he can obtain our main results in Refs. [5,6,8,9]. Since a is of the order of 1/N, the difference between Høyer's condition and our condition θ = φ is very small. However, Høyer claimed [11] that tan(φ/2) = tan(θ/2)(1 − 2a) is an exact phase condition and that θ = φ is only an approximate one. In another development, Biham et al. [4] studied a quantum search algorithm that allows arbitrary phase rotations and an arbitrary initial distribution, using recursion relations. They found that in order for the algorithm to apply, the two rotation angles must be equal. The phase error tolerance in Ref. [4] was also found to be of the order O(1/√N). Although the main conclusions of these papers are similar, there is an apparent contradiction regarding the exact phase matching condition with arbitrary phases in a quantum search algorithm. In this paper, we resolve this paradox.
More importantly, we have found a general phase matching condition for arbitrary phase rotations, with an arbitrary unitary transformation and an initial distribution that is an arbitrary superposition of |1⟩ and |2⟩. We shall show that the paradox mentioned above is resolved by recognizing a difference in the initial state distributions of the previous works. The two phase matching conditions are special cases of this general phase matching requirement. The phase matching requirement θ = φ is obtained for a quantum search algorithm with an arbitrary unitary transformation U and the initial distribution U|0⟩. The initial distribution of Grover's original algorithm and most generalizations of the quantum search algorithm use this initial state. Although Høyer's initial state [11] also takes this form, the actual initial state for the searching, i.e., for the process of repeated application of the rotations, is not, because he makes some preparations to the initial state, which leave it slightly different from U|0⟩. This makes Høyer's phase condition slightly different from ours. We shall also point out that other phase conditions are special cases of the general phase matching condition derived in this paper.

The paper is organized as follows. After this introduction, we briefly review the structure of a quantum search problem in Section II. Here we divide a quantum search algorithm into two parts: the quantum searching engine and the quantum database (the initial state). In this way, one can see clearly the dependence of the phase matching condition on the unitary transformation U and on the initial distribution. This detailed dependence was ignored in previous discussions because the initial state had been taken as U|0⟩. In Section III, we give the general phase matching condition using the SO(3) quantum searching picture. The advantage of this SO(3) picture is the ease of treating quantum search problems in a simple geometrical picture; it is particularly useful in solving this problem. In Section IV, we demonstrate our general phase matching conditions through several known examples. In Section V, we discuss the influence of the phases on the computational complexity of the searching problem. Finally, a summary is given in Section VI.

II. STRUCTURE OF A QUANTUM SEARCH ALGORITHM
Let us review two basic aspects of a quantum search problem. First, one must have a searching operation (we call it a search engine hereafter). Combining the various generalizations, we can write a general quantum searching engine as the following operator:

Q = −U I_γ U^{−1} I_τ,   (1)

where I_γ = I − (1 − e^{iθ}) |γ⟩⟨γ| and I_τ = I − (1 − e^{iφ}) Σ_k |τ_k⟩⟨τ_k|. Usually |γ⟩ is chosen as |0⟩ ≡ |0 · · · 0⟩. Here |τ_k⟩ is a marked state, and the summation runs over all the marked states. Thus, this quantum search engine can deal with cases with more than one marked state. We see that a quantum search engine is determined by the following factors: a unitary transformation U, two phase rotations, and the marked states. Secondly, there must be a quantum database: the initial distribution |ψ_0⟩. This part is independent of the searching engine: for a given searching engine, the initial state may be prepared in various ways. However, a special form of the initial state makes the search problem simple. It was found in Refs. [12,2] that the space spanned by |1⟩ and |2⟩ is invariant under the action of the quantum searching operator Q. If the initial state is a superposition of these two state vectors, then the quantum search problem can be dealt with in a 2-dimensional space [2,13].
In the literature, nearly all the initial distribution is chosen as where For instance, in the Grover algorithm, the evenly distributed initial state takes the form Of course, the form of the initial state may take a more general form. For instance, using the standard Grover searching engine where the unitary transformation is chosen as the Hadmard-Walsh transformation, the quantum search problem with arbitrary initial state was studied in Ref. [3]. This was generalized to a quantum search engine with arbitrary phases and arbitrary unitary transformation in Ref. [4]. In that case, the amplitudes of the marked states and unmarked states are not tied together during a searching process, and one no longer has a 2-dimensional rotation structure. In this paper, we restrict ourselves to the case where the initial state is an arbitrary superposition of |1 and |2 (We refer this case as quasi-arbitrary initial distribution, to distinguish this from that in Refs. [3,4]). The action of operator Q on the two basis states are [6,11,14] Within this U (2)-formalism, after dropping a global phase factor, the initial state can be written most generally as During a searching process, the state of a quantum computer in general is where a ′ , b, c and d are real numbers, satisfying the normalization condition a ′2 + b 2 + c 2 + d 2 = 1. This state vector in the 2-dimensional space is represented by the polarization vector in the 3-dimensional space as [9] where σ are the Pauli matrices. The probability of finding the marked state is These expressions make the understanding of the searching process very easy. For example, when the state vector is the marked state, its polarization vector is (0, 0, 1) and the probability, according to Eq. (8), is 1. For the initial state, the polarization is about (0, 0, −1) and the probability for finding the marked state is nearly zero. Each searching iteration is a rotation of the polarization vector through angle α. After j iterations, the total angle rotated is ω = jα, and the polarization vector is rotated to r j = r 0 cos ω + l n ( l n · r 0 )(1 − cos ω) + ( l n ⊗ r 0 ) sin ω, where · and ⊗ are the ordinary scalar product and vector product operations. The vector l n is the axis vector (5) normalized to unity. Using Eqs. (9) and (8), the probability for finding the marked state can be easily calculated. During a searching process, the trajectory of the polarization vector (7) forms a cone whose rotational axis is given by (5). Starting from an initial position r 0 , the displacement vector r − r 0 is always perpendicular to the rotational axis. If the quantum searching process can find the marked state, then the vector r f = (0, 0, 1) T (T means transpose) must be on the trajectory, thus ( r f − r 0 ) · l = 0. By putting the initial state (3) into this equation, we obtain the following phase matching condition tan θ 2 [cos 2β + tan θ 0 cos δ sin 2β] = tan φ 2 1 − tan θ 0 sin δ sin 2β tan θ 2 . This is the general phase matching condition for a successful quantum search of marked states. This phase matching condition tells us that the rotational angles depend on both the unitary transformation through β and on the initial distribution through θ 0 and δ. In previous discussions, the dependence of the phase matching condition on the initial state was ignored because the initial state was taken as U |0 = sin β|1 + cos β|2 . 
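To make the dependence explicit, the condition can be inverted for φ once θ, β, θ_0 and δ are given. The sketch below is an illustration of that inversion; the grouping of terms on the right-hand side follows our reading of the condition as printed above and should be treated as a hedged assumption. For the standard initial state (θ_0 = β, δ = 0) it returns φ = θ, recovering the earlier matching requirement.

```python
import numpy as np

def phi_from_condition(theta, beta, theta0, delta):
    """Solve the general phase matching condition, read as
       tan(theta/2) [cos(2*beta) + tan(theta0) cos(delta) sin(2*beta)]
         = tan(phi/2) [1 - tan(theta0) sin(delta) sin(2*beta) tan(theta/2)],
    for phi given the other parameters."""
    num = np.tan(theta / 2) * (np.cos(2 * beta)
                               + np.tan(theta0) * np.cos(delta) * np.sin(2 * beta))
    den = 1 - np.tan(theta0) * np.sin(delta) * np.sin(2 * beta) * np.tan(theta / 2)
    return 2 * np.arctan(num / den)

# Sanity check: with the standard initial state |psi_0> = U|0> (theta0 = beta, delta = 0)
# the condition collapses to phi = theta.
beta = np.arcsin(1 / np.sqrt(200.0))
for theta in (0.3, 1.0, np.pi / 2):
    print(theta, phi_from_condition(theta, beta, theta0=beta, delta=0.0))
```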
In Høyer's work [11], the initial state is modified before the search, and this makes the initial state different from U |0 , implicating the dependence on the initial state. It should be pointed out that this condition is a necessary condition for searching with certainty, but not a sufficient one. Even if this condition is met, the probability of finding marked states is not guaranteed to be 1. The standard Grover algorithm is one example. In the Grover algorithm [1], the probability of finding the marked state with optimal iterations is sin 2 [(2j op + 1)β]. As β = arcsin 1 √ N is fixed, (2j op + 1)β may not be exactly π 2 . IV. EXAMPLES OF PHASE MATCHING CONDITIONS It has been seen that the phase matching condition depends both on the structure of the quantum searching engine and on the initial state. Now we discuss four examples, and show that the general phase matching condition (10) is satisfied in all these cases. The differences among these four examples are in their initial states. A. |ψ0 = sin(θinit)|1 + cos(θinit)e iu |2 In Ref. [11], although the starting state is U |0 = sin β|1 + cos β|2 , some preparations have to be made before searching. First, the following state is obtained through 8 steps [14]: Before the searching iteration starts, a phase rotation e iu is made for the unmarked state. This leaves the initial state for the quantum search engine of the form, In Eq. (11), θ init = π 2 − mϑ, where ϑ = arcsin(| sin θ 2 sin 2β|), and m is an integer Here, INT[ ] means taking the nearest integer part. Since θ init depends numerically on the quantities involved, an analytic proof is difficult. It has been carefully checked that, by using the initial state of (11), φ determined by tan φ 2 = tan θ 2 (1 − 2 sin 2 β) fulfills the general phase matching condition (10). A numerical example is given in the appendix. This is the initial state that the most quantum search algorithms have taken. In Ref. [6], the initial state is |ψ 0 = U |0 = sin β|1 + cos β|2 . C. |ψ0 used by Brassard et al. [15] In Ref. [15], a procedure was proposed for obtaining the marked state with certainty. The strategy is to run the search algorithm m ′ = m − 1 (m is given in (12)) number of iterations with θ = φ = π. At this stage, the state vector of the quantum computer is just one step short of the marked state: |ψ 0 = sin((2m ′ + 1)β)|1 + cos((2m ′ + 1)β)|2 . Afterwards, one does one more search with θ and φ determined from the following equation We now show that the θ and φ determined in this way satisfy the general phase matching condition (10). Eq. (13) is equivalent to two equations, which are the real and the imaginary part, respectively, cos φ tan θ 0 sin 2β = − cos 2β, sin φ tan θ 0 sin 2β = cot θ 2 . Here we have introduced the notation θ 0 = (2m ′ + 1)β. It is then straightforward to show This is exactly the general phase matching condition (10) with δ = 0. It should be pointed out that Eq. (13) is a necessary and sufficient condition for finding the marked state with certainty. It determines the two angles uniquely. D. "Difficult search problem limit" of arbitrary initial distribution by Biham et al. [4] We see from the above examples that the phase matching condition strongly depends on the initial state. Recently, using an arbitrary initial distribution, Biham et al. have studied the general quantum search algorithm with arbitrary phase rotations [4]. In particular, they obtained the phase matching condition θ = φ which is the same as the case with |ψ 0 = U |0 . 
It seems contradicting that the apparent initial state dependence is missing here. The reason for this is that the phase condition of Biham et al. is obtained by using the "difficult search problem limit": N ≫ N τ ≥ 1 [4], which gives the weighted averages |k ′ (0)| = O(W −1/2 k ) and |l ′ (0)| = O(1). This is equivalent to the case of |ψ 0 = U |0 . Thus it gives the same phase matching condition θ = φ. If this limit is not taken, then the phase matching condition can be varied greatly. V. THE COMPUTATIONAL COMPLEXITY Starting from the standard initial state U |0 and the standard Grover's quantum search engine, the number of iterations is O( √ N ). If an quasi-arbitrary initial state is used instead of the standard initial state, the number of iterations will be different from O( √ N ). For instance, if the initial state is just the marked state, there is no need for search at all. If the initial state is the one after m ′ iterations using the standard Grover as given in Ref. [15], then one needs only one iteration. Using the SO(3) picture of the quantum search, it is easy to study the computational complexity of the quantum search algorithm with arbitrary phases. Here, we present the results which can be proven through simple geometrical argument similar to the derivations given in Ref. [16]: 1) Given an initial state in Eq. (3) and an angle θ, determining φ by solving Eq. (10). (If the coefficient of the marked state is not real in the initial state, drop out a global phase factor in the initial state so that the coefficient of the marked state |1 is real); 2) Calculating the angle ω tot between the initial state and the marked state in the SO(3) picture by the following equation where cos α = 1 4 (cos(4β) + 3) cos θ cos φ + sin 2 (2β)( 1 2 cos φ − sin 2 ( θ 2 ) + cos 2β sin θ sin φ)); 3) Calculating the angle α, which is the angle rotated by the quantum search engine in each iteration in the SO(3) picture for given θ and the φ obtained through the phase matching condition. The number of iterations required to reach maximum probability in finding the marked state is given by Then maximum probability of finding the marked state is achieved by measuring the quantum computer at j op or j op + 1 step. To find the marked state with certainty, one has to modify the above procedure a little. If one wants to construct an quantum search engine that searches the marked state with certainty near a given θ, one first uses the above procedure to obtain j op . However, this quantum search engine does not guarantee to find the marked state with certainty. One has to use slightly different angles θ and φ. They are determined by letting θ and φ as unknowns and solving simultaneously the phase matching condition (10) and the equation ω/α = J with J > j op . Then the search algorithm with the angles so defined can find the marked state exactly when measured at the Jth iteration. J can be any number equal to or greater than J op . A quantum search engine for finding the marked state with certainty with the standard initial state was recently given by Long in Ref. [16]. VI. SUMMARY We have presented a general phase matching condition with arbitrary unitary transformations and an arbitrary initial state superposed by |1 and |2 . It has been shown that several phase conditions previously discussed in the literature are its special cases. Thus, there is a consistency between the results of [11] and [6] which have seemingly different expressions. 
The results in [15] and [4] also satisfy this general phase matching condition. The probability for obtaining the marked state has been given. APPENDIX Here we give the numerical example, referred to in Sec. IV, for the initial state of Ref. [11]: |ψ_0⟩ = sin(θ_init)|1⟩ + cos(θ_init)e^{iu}|2⟩, where θ_init = π/2 − mϑ, ϑ = arcsin(|sin(θ/2) sin(2β)|), sin β = √a, and m = INT[(π/2 − β)/ϑ] + 1. Here u is the difference of the arguments of Q_22 and Q_12, and φ = 2 arctan[tan(θ/2)(1 − 2a)]. Taking a = 2/400 and θ = π/2, and putting the quantities θ, φ, β, δ = u, θ_0 = θ_init into Eq. (10), we perform the calculation in Mathematica. With the number of digits up to 150, the result for the left-hand side of Eq. (10) is 0.987234528786745048789300921936170716227415177731788487075910815468702797224769631377305189666308976465471553486461587104040457292323594964054244391216, and the one for the right-hand side is exactly the same.
2019-04-14T03:17:58.115Z
2001-07-03T00:00:00.000
{ "year": 2001, "sha1": "2c49fe790fb45610a07af1bf018b7a13ba09dcd9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2c49fe790fb45610a07af1bf018b7a13ba09dcd9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
119222869
pes2o/s2orc
v3-fos-license
An Analysis of the Quantum Penny Flip Game using Geometric Algebra We analyze the quantum penny flip game using geometric algebra and so determine all possible unitary transformations which enable the player Q to implement a winning strategy. Geometric algebra provides a clear visual picture of the quantum game and its strategies, as well as providing a simple and direct derivation of the winning transformation, which we demonstrate can be parametrized by two angles. For comparison we derive the same general winning strategy by conventional means using density matrices. Introduction In 1999 Meyer 1 introduced the quantum version of the penny flip game, a seminal paper for quantum game theory. [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17] In the classical form of this game a coin is placed heads up inside a box so that the state of the coin is hidden from the players. The first player Q then either flips the coin or leaves it unchanged, following which the second player P also either flips the coin or not, and finally Q flips the coin or not, after which the coin is inspected. If the coin is heads up Q wins, otherwise P wins. Classically each player has an equal chance of winning and the optimal strategy, in order to prevent each player predicting the other's behaviour, is to randomly flip the coin or not, corresponding to a mixed strategy Nash equilibrium. 18 In the quantum version of the game P is restricted to classic strategies whereas Q adopts quantum strategies and so is able to apply unitary transformations to the possible states of the coin, which behaves like a spin half particle with a general state |ψ = α|0 + β|1 , where |0 and |1 are orthonormal states representing heads and tails respectively, and α, β are complex numbers. Meyer identifies a winning strategy for Q as the application of the Hadamard transform, in which case the operation by P has no effect: as placing the coin "on its edge", which is why the flip operation of the following player has no effect. Our aim in this paper is firstly to find the most general unitary transformations which lead to a winning strategy for Q, and secondly to demonstrate that geometric algebra provides a convenient formalism with which to find the general solution, which is parametrized by angles θ, φ. Our motivation in using geometric algebra is ultimately to investigate quantum mechanical correlations in strategic interactions between two or more players of quantum games, and more generally to exploit the analytical tools of game theory to better understand quantum correlations. We demonstrate in Section 4, however, that for the quantum penny flip game conventional methods of analysis using density matrices 19 are also effective in analyzing this game, and run parallel to the geometric algebra approach, but we believe that for n-player games the concise formalism of geometric algebra is advantageous. Geometric algebra 20-23 is a unified mathematical formalism which simplifies the treatment of points, lines, planes in quantum mechanical spin half systems. 24 In general, given a linear vector space V with elements u, v, . . . we may form 25 the tensor product U ⊗V of vector spaces U, V containing elements (bivectors) u ⊗ v. The vector space may be extended to a vector space Λ(V ) of elements consisting of multivectors which can be multiplied by means of the exterior (wedge) product u ∧ v. 
The noncommutative geometric product uv of two vectors u, v is defined by uv = u · v + u ∧ v, which is the sum of the scalar inner product and the bivector wedge product, and may be extended to the geometric product of any two multivectors. Properties of the Pauli algebra have previously been developed 24 in the context of geometric algebra. Denote by {σ i } an orthonormal basis in R 3 , then σ i · σ j = δ ij . We also have σ i ∧ σ i = 0 for each i = 1, 2, 3 and so in terms of the geometric product we have σ 2 i = σ i σ i = 1, and σ i σ j = σ i ∧ σ j = −σ j σ i for each i = j. Hence the basis vectors anticommute with respect to the geometric product. Denote by ι the trivector where the associative geometric product σ 1 σ 2 σ 3 of a bivector σ 1 ∧ σ 2 and an orthogonal vector σ 3 is defined by We have σ 1 σ 2 = σ 1 σ 2 σ 3 σ 3 = ισ 3 and so σ i σ j = ισ k for cyclic i, j, k. We also find by using anticommutativity, associativity, and σ 2 i = 1 that ι 2 = σ 1 σ 2 σ 3 σ 1 σ 2 σ 3 = −1 and, furthermore, that ι commutes with each vector σ i . We may summarize the algebra of the basis vectors {σ i } by the relations which is isomorphic to the algebra of the Pauli matrices. We also require the following well known result in geometric algebra. For any unit vector u we can rotate a vector v by an angle θ in the plane perpendicular to u by applying a rotor R defined by which acts according to v The Quantum Penny Flip Game using Geometric Algebra The state of the quantum coin for heads up is |0 which is depicted on the Bloch sphere by the polarization vector pointing up on the σ 3 axis, corresponding to the initial vector ψ 0 = σ 3 as shown in Figure 1. Following operations performed by Q, P, Q in turn, in which Q always wins, the final wavefunction ψ 3 also corresponds to the unit vector σ 3 . Suppose Q first applies a general unitary transformation, represented by a rotor (4), namely U 1 = e ιθu/2 to obtain the state ψ 1 = U 1 ψ 0 U † 1 . P now applies the optimal classical 3/8 strategy of applying a coin flip operation F with probability p and no flip operation N with probability 1 − p, to obtain the mixed state The coin flip F is equivalent to the action on the spinor (|0 , |1 ) of the Pauli matrix σ 1 which is isomorphic to σ 1 in geometric algebra, so we have simply F = σ 1 and also N = 1. Q now applies a final unitary transformation U 3 which is independent of p to obtain Since we assume that Q always wins, i.e. ψ 3 = σ 3 for any p, the terms in this expression must equal pσ 3 and (1 − p)σ 3 respectively. For the second term this requires U 3 U 1 σ 3 U † 1 U † 3 = σ 3 and so U 3 U 1 must commute with σ 3 . Hence U 3 U 1 = e ιφσ 3 /2 for some angle φ, i.e. U 1 = U † 3 e ιφσ 3 /2 . On substituting into (5) we find which has no explicit dependence on the angle φ which therefore remains arbitrary. Evidently it is not necessary that U 3 be inverse to the initial rotation U 1 , i.e. the final rotation need not be about the same axis as the initial rotation. In order for the first term in (6) to equal pσ 3 and so U 3 σ 1 U † 3 commutes with σ 3 . This implies that U 3 σ 1 U † 3 is a multiple of σ 3 , since the rotated vector U 3 σ 1 U † 3 is a linear combination of the basis elements, i.e. U 3 σ 1 U † 3 = c 1 σ 1 + c 2 σ 2 + c 3 σ 3 for some scalars c i , and the Pauli algebra (3) then implies that (7) is satisfied only if c 1 = c 2 = 0. Since U 3 σ 1 U † 3 is a unit vector we also have c 3 = ±1. 
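The strategy just derived can be checked directly in the isomorphic Pauli-matrix representation. The short sketch below is only an illustration (not code from the paper) and uses Meyer's original choice, the Hadamard transform, for which U_1 σ_3 U_1^† = σ_1, leaving the coin "on its edge"; the final state is then heads up for every flip probability p, exactly the p-independence the derivation above guarantees.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_1 (coin flip F)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)  # sigma_3
I2 = np.eye(2, dtype=complex)
H = (s1 + s3) / np.sqrt(2)                       # Hadamard: H s3 H = s1

rho0 = (I2 + s3) / 2                             # heads up, |0><0|
rho1 = H @ rho0 @ H.conj().T                     # Q's first move: the coin "on its edge"
for p in (0.0, 0.3, 0.5, 1.0):                   # P flips with probability p
    rho2 = p * (s1 @ rho1 @ s1) + (1 - p) * rho1
    rho3 = H @ rho2 @ H.conj().T                 # Q's final move (here U3 = U1^dag = H, phi = 0)
    print(p, np.allclose(rho3, rho0))            # True for every p
```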
The final state ψ_3 is therefore equal to σ_3, namely heads up independent of p, provided U_3 = e^{ιφσ_3/2} U_1^† and U_1 σ_3 U_1^† = ±σ_1. Hence Q's strategy is clear: by rotating the starting vector σ_3 to ±σ_1, P's coin flip operation has no effect because F σ_1 F^† = σ_1 σ_1 σ_1 = σ_1, and so Q simply then applies U_3 = e^{ιφσ_3/2} U_1^† to turn the coin back to heads where it started. Solution for Q's Winning Strategy By substituting for the general rotor U_1 = R as given in (4), and by writing the unit vector as u = aσ_1 + bσ_2 + cσ_3 where the scalars a, b, c satisfy a² + b² + c² = 1, we find that the winning requirement U_1 σ_3 U_1^† = c_3 σ_1 reduces to Eq. (8), where c_3² = 1. This equation is satisfied if and only if a sin(θ/2) = c_3 c sin(θ/2) and b sin(θ/2) = c_3 cos(θ/2), which implies sin(θ/2) ≠ 0. Hence a = c_3 c and b = c_3 cot(θ/2). Since u is a unit vector we have 2a² + cot²(θ/2) = 1, which implies |cot(θ/2)| ≤ 1, and hence θ can take any value such that π ≤ |θ| ≤ 3π/2. Then a = ±[(1 − cot²(θ/2))/2]^{1/2} (Eq. (9)), together with c = c_3 a and b = c_3 cot(θ/2), where c_3 = ±1, fixes the components of u. Thus we have the general expression (Eq. (10)) with which Q rotates σ_3 to ±σ_1 about the axis defined by u through the angle θ, for any θ in the specified range. The unit vector u = aσ_1 + bσ_2 + cσ_3 lies in one of the two intersecting planes defined by |a| = |c|, as shown in Figure 1. Denoting the angle between u and σ_3 by ψ, u is tilted with respect to σ_3 at an angle ψ in the range π/4 ≤ |ψ| ≤ 3π/4. The choice of sign for a in (9) can in effect be altered by replacing θ → −θ in (10), and the sign c_3 = ±1 can be reversed by replacing σ_2 → −σ_2, σ_3 → −σ_3, which leaves the Pauli algebra unchanged. Analysis using Density Matrices The analysis of the quantum penny flip game using geometric algebra can be reproduced by means of density matrices and unitary transformations. 19 We may write any 2 × 2 unitary matrix in the form U = e^{iA}, where the Hermitean matrix A can be expanded in terms of the Pauli matrices σ_i and the identity matrix I_2 according to A = α(aσ_1 + bσ_2 + cσ_3) + βI_2, where the scalars a, b, c are normalized such that a² + b² + c² = 1, and where α, β are fixed angles. If we define θ = 2α and also the 2 × 2 matrix u = aσ_1 + bσ_2 + cσ_3 (which satisfies u² = I_2), then U = e^{iuθ/2} e^{iβ} = [I_2 cos(θ/2) + i u sin(θ/2)] e^{iβ}, which compares with the expression (4) for the rotor R. We emphasize, however, that in (4) the element ι is a tri-vector and u denotes a unit vector which is a linear combination of basis vectors σ_i. If we denote the starting state by |0⟩ = (1, 0)^T, then the first move by Q is to apply a general unitary transformation U_1 on the starting density matrix ρ_0 = |0⟩⟨0|, which therefore evolves to ρ_1 = U_1 ρ_0 U_1^†. P now applies the optimal classical strategy of applying a coin flip operation F = σ_1 with probability p and the no flip operation N = I_2 with probability 1 − p, producing the mixed state ρ_2 = p F ρ_1 F^† + (1 − p) ρ_1. Q applies a final unitary transformation U_3 which is independent of p to obtain ρ_3 = U_3 ρ_2 U_3^† = p U_3 F ρ_1 F^† U_3^† + (1 − p) U_3 ρ_1 U_3^† (Eq. (12)), which is a matrix equation which can be compared with the geometric algebraic expression (5), in which the unit vector σ_3 replaces the initial density matrix ρ_0 and U_3 implements a quaternion rotation of σ_3. Since we assume that Q always wins, i.e. that ρ_3 = |0⟩⟨0| for any p, the terms in the expression (12) must equal p|0⟩⟨0| and (1 − p)|0⟩⟨0| respectively, which for the second term requires U_3 U_1 ρ_0 U_1^† U_3^† = ρ_0, so that U_3 U_1 commutes with σ_3. Hence we have U_3 U_1 = e^{iβ} e^{iσ_3 φ/2}, and on substituting this into Eq. (12) we obtain an expression (Eq. (13)) which has no explicit dependence on the angle φ, which therefore remains arbitrary. In order that the first term in Eq. (13) equal p|0⟩⟨0| we require U ρ_0 U^† = ρ_0, where U = U_3 σ_1 U_3^† is unitary.
As discussed above, this matrix equation is equivalent to [U, σ_3] = 0, which implies that U is a linear combination of I_2 and σ_3. We also have U² = I_2 which implies, since U ≠ ±I_2, that U = ±σ_3 = U_3 σ_1 U_3^†. Thus the final state is heads up independent of p, provided U_3 = e^{iβ} e^{iσ_3 φ/2} U_1^† and U_1 σ_3 U_1^† = ±σ_1. The phase angle β can be set to zero without loss of generality. By substituting for the general unitary transformation U_1 = U as given by Eq. (11), we require U^† σ_1 = ±σ_3 U^†, which compares with the isomorphic Eq. (8) derived using geometric algebra, and which therefore has the solution Eq. (10), in which σ_1, σ_2, σ_3 now refer to Pauli matrices instead of unit vectors. Evidently this derivation of the general solution closely parallels that using geometric algebra, which uses quaternion rotations of vectors in real 3-space, with the formalism defined in terms of unit vectors σ_1, σ_2, σ_3, whereas the density matrix formalism uses Dirac's bra-ket notation, density matrices and complex matrices for SU(2) rotations. Geometric algebra has the advantage of avoiding global phase factors e^{iβ} and also permits a geometric picture as shown in Figure 1, which is hidden in the density matrix formalism. Conclusion We have determined unitary transformations, parametrized by angles θ, φ, which enable Q to implement a foolproof winning strategy for the quantum penny flip game. These transformations are derived using both the formalism of geometric algebra, which facilitates a geometric approach, and also density matrices. The matrix condition given by Meyer 1 for the general solution is in effect parametrized and solved by this means. Geometric algebra in general has the significant benefit of an intuitive understanding and offers better insight into quantum games and, for the quantum penny flip game, allows an analysis using operations in 3-space with real coordinates, thus permitting a visualization that is helpful in determining Q's winning strategy. A natural extension of the present work (in progress) is to apply geometric algebra to n-player quantum games, in which all players perform local quantum mechanical actions on entangled states, with the outcome determined by measurement of the final state.
2009-02-25T08:45:35.000Z
2009-02-25T00:00:00.000
{ "year": 2009, "sha1": "1c3f17856e5fdaa90a40b6d0247e703f6026aa8e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0902.4296", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1c3f17856e5fdaa90a40b6d0247e703f6026aa8e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
263930075
pes2o/s2orc
v3-fos-license
Hypoxia and Ischemia Promote a Maladaptive Platelet Phenotype Supplemental Digital Content is available in the text. P latelets are activated in ischemic diseases such as myocardial infarction (MI), stroke, and peripheral artery disease (PAD). [1][2][3][4] Antiplatelet agents, including aspirin and clopidogrel, are recommended as part of the disease treatment. The expected antithrombotic benefits of antiplatelet agents are not observed in all patients [5][6][7] ; some develop unexpected thrombosis, 8 whereas others have bleeding complications. 9,10 Explanations for such treatment failure includes gene polymorphisms in enzymes responsible for antiplatelet drug metabolism or in their receptors, which was reported for the P2Y 12 receptor antagonist clopidogrel. [11][12][13][14] The rationale behind testing for differences in metabolism of antiplatelet agents on an individual basis is that the drug type and dose may be personalized, providing a more favorable clinical outcome. 15 However, reports have indicated that a personalized genetic approach to antiplatelet therapy failed to alter clinical outcomes or the progression of ischemic disease. 16,17 Preclinical platelet inhibitor studies typically use platelets isolated from normal volunteers. 18 There are risks associated with oversimplifying preclinical platelet studies and extrapolating findings using healthy donor platelets to studies with platelets from a diseased population. 19 A prudent research approach may include determining whether the signaling processes in platelets from diseased patients are similar to healthy persons. If platelets from a diseased population have different agonist signaling properties, the design and implementation of antiplatelet agents should be tailored to adjust for these changes. For example, Jurk et al 20 demonstrated that circulating platelets in patients after stroke are refractory to ex vivo stimulation, engendering an exhausted platelet phenotype, suggesting that central ischemic vascular disease may lead to the development dysfunctional platelets. Using a murine MI model, we recently demonstrated that the circulating platelet phenotype is changed in the postinfarct environment, with a similar observation noted in patients in the peri-MI period. 1, 21 We now report that platelet protein expression is altered both in vitro by exposure to hypoxia and in vivo in ischemic disease, demonstrating that the altered platelet phenotype is regulated at least in part at the platelet level. Using both a murine model of critical limb ischemia and Subjects Healthy volunteers, patients with diabetes mellitus as indicated by blood hemoglobin A1c concentration >6.5% with or without peripheral arterial disease determined objectively by the ankle brachial index were enrolled in this study. Patients with PAD were consented on the day of revascularization either by surgical bypass or by percutaneous intervention for critical limb ischemia. Venous blood was used to isolated platelet-rich plasma (PRP). Washed platelets were used for platelet stimulation studies using agonists against the P2Y 12 receptor (2-methyl-ADP), PAR1 (protease-activated receptor-1; TRAP6 [thrombin receptor-activating peptide-6]), or the thromboxane receptor (U46619), with flow cytometry used to detected activated platelets by surface p-selectin expression as described previously by our group. 21 This study had the approval of the Research Subjects Review Board of the University of Rochester. 
Mouse Colony All animal protocols were approved by the University Committee on Animal Resources. Eight-week-old male wild-type (WT) C57BL6/J were used in this study unless indicated. To interrogate the role of platelet ERK5 (extracellular regulated protein kinase 5) in some studies, we used ERK5flox/PF4cre(+; platelet-specific ERK5 −/− ) mice on a C57BL/6 background previously validated and shown to be deficient only in platelet ERK5. These platelet-specific ERK5 −/− mice were matched with ERK5 flox/flox mice as a control. 1,22 Critical Limb Ischemia Model Mice were anesthetized with 3% isoflurane. A skin incision was made with leg ligations made proximally and distally to the femoris profunda muscle, with 6.0 suture followed by left femoral artery dissection. The skin was closed using 4.0 coated vicryl in a subcuticular fashion. Mice were allowed to recover and returned to housing for up to 28 days. At various time points over this 28 days, mice will also be imaged under isoflurane using a laser Doppler imaging system. Expanded Methods in the online-only Data Supplement. Mouse Hemostasis and Thrombosis Models The tail bleeding method was used to assess the time to hemostasis. The ferric chloride-induced platelet activation and mesenteric arterial occlusion model was used to assess thrombosis. Both are described by us previously. 1 Mouse Pneumonectomy Model We performed a left pneumonectomy as described 23 to create another model in which the mouse was hypoxic. Expanded Methods in the online-only Data Supplement. Quantification of Blood Vessels A volume of 2×250 μL ice cold Matrigel which contained all the necessary growth factors to promote angiogenesis was drawn up into prechilled 1 mL syringes and injected into the ventral surface of the mouse subcutaneously around the hindlimb area using a 27 g needle. After 7 days, the mouse was euthanized, and the solidified matrix was removed at which point blood vessels were apparent, and so hemoglobin was extracted and quantified according to the instructions using a hemoglobin colorimetric assay (Cayman Chemicals). The other injected solidified matrix was removed and fixed with 10% formalin, then sectioned for H and E staining. Human Platelet Isolation For human platelet function studies and for biochemical analysis, venous blood was collected into citrate plasma tubes and mixed, then isolated according to our protocol published previously. 21 Mouse Platelet Isolation Mouse platelets were collected by 2 to 3 drops of retro-orbital blood into heparinized Tyrode as described by us previously. 1 Expanded Methods in the online-only Data Supplement. Biochemistry and Protein Studies Cell lysis and cell protein extraction, SDS PAGE, and Western blotting were conducted using buffers and techniques as described previously. 1 Expanded Methods in the online-only Data Supplement. Platelet Proteomics Whole blood was collected into citrate plasma tubes and thoroughly mixed. The sample was centrifuged at 1100 rpm for 15 minutes using a bench top centrifuge. The supernatant was then added in a 1:1 (vol/vol) mix of supernatant/Tyrode solution with final concentration 10 μmol\L prostaglandin I 2 (PG I 2 , Cayman Chemical) and centrifuged at 2600 rpm for 5 minutes using a bench top centrifuge. 
In an attempt to reduce further contaminating plasma proteins, the supernatant was discarded, and the washed platelet pellet was carefully resuspended in 1 mL fresh Tyrode solution with 10 μmol\L prostaglandin I 2 and centrifuged at 2600 rpm for 5 minutes using a bench top centrifuge. The final platelet pellet was then carefully resuspended in 1 mL fresh Tyrode solution with 10 μmol\L prostaglandin I 2 and centrifuged at 2600 rpm for 5 minutes using a bench top centrifuge one final time. We used CD45 and CD41 antibodies to identify leukocytes and platelets, respectively, and then gated and quantified each individual population as a proportion of all cells, not only those double positive cells. This second quality control study revealed even less leukocyte contaminants of PRP (0.12%). The resulting platelet pellet was then used for platelet protein extraction by reducing with 10 mmol/L tris (2-carboxyethyl) phosphine for 1 hour at 37°C and subsequently alkylated with 12 mmol/L iodoacetamide for 1 hour at room temperature in the dark. Samples were then diluted 1:4 with deionized water and digested with sequencing grade modified trypsin at 1:50 enzyme-to-protein ratio. After 12 hours at 37°C to promote digestion, another aliquot of the same amount of trypsin was added to the samples and further incubated at 37°C overnight. The digested samples were then acidified, cleaned up (SCX and C18) and dried as described above. An LC-MS/MS (liquid chromatography tandem mass spectrometry)-based method for quantitative proteomics using the iTRAQ (isobaric tags for relative and absolute quantitation) system reporter ion intensities as we have used previously to study the human platelet proteome and described elsewhere. 24 Statistical Analyses Clinical variables that are dichotomous are presented as frequencies and those that are continuous as mean with SEM unless otherwise stated. The distribution of each data set was interrogated for normality using the Shapiro-Wilk test before comparison between groups. For non-Gaussian distributed data between 2 comparative groups, data are graphically represented as median and the Mann-Whitney U test was used to assess for a difference between groups. For 3 or more groups comparisons, the Kruskal-Wallis test followed by Dunn posttest was used. For Gaussian-distributed data between 2 comparative groups, the t test was used to assess for a difference between groups. For 3 or more groups, 1-way ANOVA then the Bonferroni multiple comparisons test was used. Significance was accepted as a P value <0.05. All data were analyzed with GraphPad Prism 7 (GraphPad Software, Inc, La Jolla, CA). Results To test whether the platelet phenotype is altered in human vascular and metabolic disease, we isolated platelets from patients with several cardiovascular comorbidities including PAD, diabetes mellitus, and hypertension (referred to as patients with the vascular and metabolic disease). We compared platelet function in 30 individuals: either patients or relatively healthy control subjects ( Figure I in the online-only Data Supplement). We stimulated isolated platelets from healthy control subjects, healthy control subjects taking 81 mg aspirin daily, patients with vascular and metabolic comorbidities with PAD, and from patients with vascular and metabolic comorbidities without PAD (all patients were taking at least 1 antiplatelet agent). 
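(As an aside on the statistical workflow described in the Methods above: the decision logic, a normality screen followed by parametric or non-parametric group comparisons with the appropriate post tests, can be sketched as follows. The study itself used GraphPad Prism 7; the snippet below is a hedged Python/SciPy illustration with made-up numbers, not the study's data or analysis code.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical activation readouts (e.g. P-selectin MFI) for illustration only.
healthy = rng.normal(100, 15, 12)
aspirin = rng.normal(80, 15, 12)
pad     = rng.normal(130, 25, 12)

# 1. Screen each group for normality (Shapiro-Wilk).
gaussian = all(stats.shapiro(g).pvalue > 0.05 for g in (healthy, aspirin, pad))

# 2. Two-group comparison: t test if Gaussian, Mann-Whitney U otherwise.
print(stats.ttest_ind(healthy, pad) if gaussian else stats.mannwhitneyu(healthy, pad))

# 3. Three or more groups: one-way ANOVA if Gaussian (then Bonferroni-corrected
#    pairwise tests), otherwise Kruskal-Wallis (then Dunn's post test).
print(stats.f_oneway(healthy, aspirin, pad) if gaussian else stats.kruskal(healthy, aspirin, pad))
```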
Control subjects taking 81 mg aspirin daily all showed suppression of platelet activity after surface receptor agonist stimulation compared with control subjects without aspirin therapy. However, platelet activation in response to PAR1 and thromboxane receptor stimulation (TRAP6 and U46619, respectively) was not inhibited in patients with vascular and metabolic comorbidities without PAD as we anticipated and, in fact, platelet function was enhanced in response to P2Y 12 receptor stimulation (2-me-ADP) in spite of taking aspirin and clopidogrel. Platelet function in patients with vascular and metabolic comorbidities with PAD was not inhibited by antiplatelet agents in response to receptor agonists as we anticipated compared with control volunteers taking 81 mg aspirin daily ( Figure 1A through 1C). These data indicate that platelets from patients with the metabolic and vascular disease have altered agonist sensitivity and apparent resistance to inhibition by antiplatelet agents compared with platelets from healthy subjects. To demonstrate that these observations may in part be because of changes in the platelets themselves, platelet proteomic profiles were assessed by liquid chromatography/tandem mass spectrometry. Protein expression data in patients with cardiovascular comorbidities, including PAD, was grouped by function, showing platelet protein expression differences in processes involved in inflammation, RNA processing, protein folding and trafficking, vesicular transport, protease activity, and platelet adhesion (Figures II and III in the online-only Data Supplement). Less than one half percent leukocyte contamination was seen in PRP isolates ( Figure IV in the online-only Data Supplement). These data indicate that changes in the platelet phenotype may contribute to antiplatelet drug resistance in patients with vascular and metabolic diseases. Our prior study demonstrated that platelet protein expression is altered by ischemic disease. 1 We, therefore, considered whether changes in the platelet proteome may in part be because of low tissue oxygen conditions and reactive oxygen species generated in those conditions associated with ischemic vascular diseases. Platelets were isolated from healthy subjects and incubated under normoxic (21% O 2 ) or hypoxic (5% O 2 ) conditions for 2 hours before stimulation with TRAP6, U46619, or 2-me-ADP. Platelet activation in response to agonist stimulation was enhanced after 2 hours in a hypoxic environment using both surface P-selectin and fibrinogen binding (activated GPIIb/IIIa) as platelet activation markers (Figure 2A through 2D (4) and healthy controls taking daily 81 mg aspirin (4) were compared with patients with metabolic and vascular comorbidities including diabetes mellitus and PAD (peripheral artery disease) taking platelet inhibitors (8), and patients with diabetes mellitus without PAD taking both platelet aspirin and clopidogrel (4). Platelets were stimulated with (A) a PAR1 (protease-activated receptor-1) agonist TRAP (thrombin receptor-activating peptide), (B) a thromboxane receptor agonist U46619, or (C) a P2Y 12 agonist 2-me-ADP for 15 min and activation was assessed by FACS (P-selectin expression, mean±SEM, *P<0.05 healthy vs diabetes mellitus+platelet inhibitors. **P<0.05 healthy vs vascular disease+platelet inhibitor(s). ***P<0.05 healthy vs healthy+aspirin, all by 1-way ANOVA. MFI indicates mean fluorescence intensity. Data Supplement). 
We also show that activation of human platelet ERK5, which is a redox sensor, seems to be sustained after 2 hours in a hypoxic environment in vitro ( Figure 2E). These data imply that hypoxia may prime platelets toward an activated state. Our prior study using a mouse MI model demonstrated altered platelet protein expression in the post-MI environment, including ERK5, P70S6K, and RAC1. 1 To assess whether platelet activation by agonists or hypoxia/ischemia alters the platelet phenotype independent of the megakaryocyte, we isolated mouse platelets and either agonist-stimulated platelets or incubated platelets in normoxic (21% O 2 ) or reduced (5% O 2 ) oxygen tension environments in vitro. P70S6K expression was increased by thrombin, though other agonists such as U46619, and ADP did not demonstrate the same effect ( Figure 3A and 3B). Hypoxia alone, however, significantly increased the expression of ERK5, P70S6K, and RAC1 in a time-dependent manner in mouse platelets ( Figure 3C). In vitro hypoxia also augmented thrombin-induced murine platelet activation ( Figure 3D). Mice with either sham operation or unilateral pneumonectomy develop a chronic hypoxic state after 3 weeks as indicated by increased blood hemoglobin concentration with a coincident increased in activation of platelet redox sensor ERK5 in vivo ( Figure 3E), and in vivo hypoxia enhanced thrombosis in a mouse mesenteric injury model ( Figure 3F), and shortened tail bleeding time ( Figure 3G). These data demonstrate that platelet function and protein expression are altered in hypoxia in a manner that may in part be because of platelet protein expression changes or changes in platelet ERK5 activation. Receptor agonist and hypoxia-induced changes in platelet protein expression were also determined using human platelets. We observed that human platelets also exhibited a similar though not identical agonist-specific protein expression change ( Figure 4A consistent with inhibitors of P70S6K and RAC1 having little impact on human platelet posthypoxia activation (Figures XI and XII in the online-only Data Supplement), but redox-sensitive ERK5, when pharmacologically inhibited after prolonged in vitro hypoxia, demonstrated a significant attenuation of human platelet activation after hypoxia which was not as profoundly obvious in platelets in a normoxic condition ( Figure XIII in the online-only Data Supplement; Figure 4B). Finally, human platelets show changes in agonist-induced activation in a hypoxic environment, possibly through additional mechanisms, including increased platelet surface receptor expression available for certain agonists after activation ( Figure XIV in the online-only Data Supplement). These experiments demonstrate that platelets alter the expression of key signaling proteins independent of the bone marrow-derived precursor megakaryocyte, particularly in response to hypoxic/ischemic stress. However, the downstream mediators of increased platelet activation may be fundamentally different between murine and human platelets. Platelet activation in response to some agonists in a hypoxic environment may be secondary to platelet activation of redoxsensitive protein kinases like ERK5. 1,22,26 Murine and human platelets both increase endogenous reactive oxygen species generation in a hypoxic environment ( Figure 5A) through the basal and maximal quantity of reactive oxygen species produced between species also differs. 
Furthermore, ERK5 activation (P-ERK5, but not total protein expression) was significantly greater in human platelets from patients with PAD ( Figure 5B). ERK5 and other platelet-activating proteins including RAC1 as well as proteins well-known to promote ribosome biogenesis and support translation such as mTOR were all elevated in murine platelets in hindlimb ischemia (HLI) model ( Figure 5D). This change in murine platelet protein expression coincided with dysregulated platelet activation which was not observed in shamoperated mice and coincident with lower extremity tissue remodeling and angiogenesis secondary to ischemic injury ( Figure 5D; Figures XV through XVII in the online-only Data Supplement). To evaluate a potential functional role for platelet ERK5 in platelet a phenotype alteration associated with ischemia, we performed HLI on WT mice and platelet-specific ERK5 −/− mice which was previously characterized and shows complete absence of ERK5 protein in isolated PRP which is obviously devoid of ERK5 contaminants from other cell types ( Figure XVIII in the online-only Data Supplement). Platelets were then isolated from mice on days 3, 7, and 14 post-HLI to assess activation. Platelets from platelet-specific ERK5 −/− mice had attenuated platelet activation at each of the time points compared with platelets from WT mice ( Figure 6A through 6C). Thermal laser Doppler imaging of the ischemic limb was also performed weekly to assess for reconstitution of blood flow as is often seen in human patients with advanced PAD. Platelet ERK5 −/− mice showed more rapid recovery of limb blood flow compared with WT mice (Figure 6D and 6E), with similar platelet counts throughout the ischemic period ( Figure XIX in the online-only Data Supplement). To determine whether ERK5 −/− mice have improved angiogenesis in general, an in vivo Matrigel assay was used to quantify blood vessel growth in vivo in WT and platelet ERK5 −/− mice. Vascular content was surprisingly similar in WT and platelet ERK5 −/− mice (extracted hemoglobin concentration 1.53±0.48 g/dL versus 1.18±0.15 g/dL) implying that the more rapid reconstitution of hindlimb blood flow in platelet ERK5 −/− mice is because of mechanisms other than enhanced angiogenesis, and potentially may include alterations in microvascular thrombosis. Together these data demonstrate that ischemic disease leads to a platelet phenotype that is more sensitive to agonist stimulation, and activation of platelet ERK5 may have a central role in this response (Figure 7). Discussion These data demonstrate that in both humans and mouse models, metabolic and vascular disease alters the platelet phenotype. Human diseases and extreme experimental conditions of ischemia and hypoxia revealed differences in platelet surface receptor expression, agonist sensitivity, postreceptor signal transduction, and proteomic profiles which could alter the response of the platelet to environmental and pharmacological stimuli. This may provide mechanistic insight into the unpredictable patient responses to antiplatelet agents in hypoxic and ischemic diseases. 21,27,28 The expectation that the platelet phenotype in a diseased state closely resembles healthy conditions may be incorrect. Preclinical studies evaluating antiplatelet agents, therefore, ought to include both healthy donors and donors with the metabolic and vascular disease because platelet antagonists presently available do not account for an evolving, disease-dependent platelet phenotype. 
The platelet phenotypic switch observed in diseased conditions may in part explain unpredictable platelet responses previously ascribed to changes in antiplatelet drug metabolism or platelet receptor variants. 17,29,30 Resistance to antiplatelet therapeutic agents has been described in diabetics, in MI, and in patients with PAD. 31 Explanations for such treatment failure may include metabolic comorbidities which alter the inflammatory environment, the metabolism of antiplatelet agents, and interactions of antiplatelet agents with other drugs. [32][33][34][35][36] Comparing differences in the platelet phenotype between clinical groups is challenging given the difficulty in exactly matching control human populations in a complex disease group such as PAD, where the clinical pathology leading to the vascular injury is multifactorial. A few studies showed that platelets from diseased populations might have altered surface receptor expression, implying a change in the mature platelet proteome. [37][38][39] Our data offer some mechanistic explanations for these observations because we demonstrate an adaptive platelet phenotype in models of ischemia. This includes changes in platelet receptor agonist sensitivity and the expression and activation of postreceptor signal transduction proteins. Although we, in fact, observe some important differences in human compared with murine responses to hypoxia, platelet ERK5 seems to be a common mediator of dysregulated platelet activation in both species. In as little as a few hours, we found changes in the expression of platelet proteins in vitro in a hypoxic environment or after a few days in vivo after limb ischemia that coincide with enhanced sensitivity to multiple platelet surface receptor agonists. We also show that the platelet proteome in patients with advanced vascular disease is not the same as in relatively healthy subjects, with the former demonstrating more signaling proteins involved in protein synthesis, inflammation, and thrombosis (Figures II and III in the online-only Data Supplement) These data extend prior observations in experimental models of extreme hypoxia in vitro and in models of venous thrombosis and sickle cell disease which all show increased platelet activation. [40][41][42] The change in platelet function observed in just a few hours in vitro may be sufficient to tip the balance of platelet protein synthesis and degradation toward an activated phenotype. 43,44 Previous studies indicate that P70S6K and RAC1 are involved in platelet cytoskeletal rearrangement and activation. [45][46][47] We previously showed that platelet ERK5 is a regulator of protein stability and platelet function in the inflammatory post-MI environment and that platelet-specific ERK5 −/− mice have attenuated platelet activation post-MI with reduced expression of P70S6K and RAC1. 1 Our current study supports these prior observations by demonstrating markedly increased murine platelet P70S6K protein expression after in vitro hypoxia, independent of the megakaryocyte. Ribosomal protein S6 promotes protein translation efficiency may be especially important in ischemic disease. 48,49 There is also the tantalizing possibility that platelet mRNA stability is markedly altered in different diseases Figure 5. ERK5 (extracellular regulated protein kinase 5) promotes dysregulated platelet activity in critical limb ischemia. 
A, Platelets isolated from wild-type (WT) mice (left) or healthy humans (right) were incubated for 2 h under normoxic conditions (21% O 2 ) or after hypoxia exposure (5% O 2 ) and then loaded with DCFDA (2',7'-dichlorodihydrofluorescein diacetate) to indicate reactive oxygen species (ROS) production, quantified by FACS analysis (mean±SD) *P<0.05 vs 21% O 2 by t test, n=3 in each group. B, Platelets isolated from humans with peripheral artery disease (PAD) or from mice after 4 d of unilateral left leg femoral artery ligation (hindlimb ischemia [HLI]) or sham surgery were assessed for ERK5 activation using a phospho-specific antibody (p-ERK5). Actin was used as an additional loading control for ERK5 because ERK5 protein content was increased in mice with HLI. ERK5 activation was quantified by densitometry and reported as mean pERK5/ERK5±SEM. *P=0.025 for control vs PAD, N=3 to 4, and P=0.14 for sham vs HLI mice by t test, N=4. (Continued ) with consequences on the final platelet translatome and subsequent proteome. It is tempting to speculate P70S6K is an ERK5 downstream mediator of dysregulated platelet activity in murine platelets under hypoxic conditions. An important observation in our investigation is that human and murine platelets, although similar in many ways, do in fact show clear differences in their responses to environmental cues that drive postreceptor signaling pathways and translation efficiency in vitro. These observations serve as a gentle reminder to investigators that experimental models, even when conducted rigorously, sometimes lack important features of human pathophysiology, which limits their ability to reveal a therapeutic solution for diseases. Patients with vascular diseases such as PAD have more on-treatment thrombotic events compared with other thrombotic diseases. 50,51 A recent report indicates diabetic platelets have dysregulated P2Y 12 receptor signaling which was independently replicated in the present study of isolated platelets from humans with metabolic and vascular comorbidities including diabetes mellitus and advanced PAD. 52 PAD is common in diabetic patients, and two-thirds of the patients in our study had diabetes mellitus. In support of other studies, despite taking daily platelet inhibitors, platelets from patients with PAD could be activated through several platelet receptor signaling pathways, and especially through the P2Y 12 receptor. In the analogous murine HLI model, we show that platelets are markedly more activated by agonists compared with sham-operated animals and this phenotype is partly reversed in platelet ERK5 −/− mice. We were quite surprised to observe that platelet ERK5 −/− mice also showed enhanced blood flow in the weeks after HLI compared with WT mice. These findings imply ERK5 inhibition may decrease small vessel thrombotic burden in the early ischemic limb, promoting blood flow. Our investigation has some limitations. Although we show that patients with the metabolic and vascular disease Figure 5 Continued. C, Platelets isolated from mice after 4 d of HLI or sham surgery were assessed for expression of proteins known to affect platelet activation. Protein expression was assessed by immunoblotting (IB), then quantified by densitometry and reported as mean±SEM *P=0.046 vs 0 (ERK5), P=0.034 vs 0 (mTOR) and P=0.022 vs 0 (RAC1) by t test, n=3 to 4 in each group. 
D, WT mice were subjected to HLI or sham surgery and platelets isolated 7 d later were stimulated with thrombin for 15 min and activation assessed by FACS (P-selectin expression, mean±SEM, n=4 in each group by 1-way ANOVA, *P<0.05 between groups). have a different platelet phenotype and seem to be somewhat resistant to antiplatelet medications, our control subjects had fewer comorbidities than those patients with diabetes mellitus and PAD, and there was an imbalance in the representation of sex among some groups. These present as potential confounding variables in data interpretation. A better and more direct correlation between these human and mouse data will require a larger cohort, ideally compared with control subjects with exactly matched comorbidities. However, these human disease-based data are only intended to highlight the basic conclusion of our study that the platelet phenotype is changed in vascular disease processes and that this may in part be because of changes in the platelet itself as well as in the vascular compartment. In summary, the present study confirms that metabolic and ischemic stressors alter platelet signal transduction pathways and subsequent agonist responsiveness. This may in part be regulated at the level of the platelet itself, independent of the megakaryocyte. A concerted effort should be made to personalize antiplatelet therapy, not only with respect to race and sex but also to the thrombotic disease in question. Figure 6. Platelet ERK5 (extracellular regulated protein kinase 5) inhibition improved blood flow in critical limb ischemia. A-C, Wild-type (WT) or ERK5 −/− platelets from mice (A) 3, (B) 7, or (C) 14 d after hindlimb ischemia (HLI) were isolated and stimulated with thrombin for 15 min and activation assessed by FACS. WT platelets had more post-HLI activation compared with ERK5 −/− (P-selectin expression, mean±SEM, n=4, *P<0.01 between groups by 1-way ANOVA). D and E, Thermal Doppler color imaging showed more rapid return of blood flow in ERK5 −/− mouse limbs. D, Representative images, (E) quantification (mean ratio in the ischemic:nonischemic limb±SEM *P<0.001 Sham WT vs WT HLI, **P=0.013 WT HLI vs ERK5 −/− HLI, ***P=0.003 WT HLI vs ERK5 −/− HLI all by 1-way ANOVA. The group population size is indicated in parentheses.
2018-05-09T00:43:46.005Z
2018-04-05T00:00:00.000
{ "year": 2018, "sha1": "5570dad1e71659282fdcd3f88170893f33c5df02", "oa_license": null, "oa_url": "https://www.ahajournals.org/doi/pdf/10.1161/ATVBAHA.118.311186", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "5570dad1e71659282fdcd3f88170893f33c5df02", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256354941
pes2o/s2orc
v3-fos-license
Cats just want to have fun: Associations between play and welfare in domestic cats Play is often considered an indicator and promotor of animal welfare and may facilitate closer cat-human relationships. However, few studies have empirically investigated these associations. The current study aimed to investigate play-related factors associated with four welfare outcome measures in cats (Felis catus) including: cat quality of life; cat-guardian relationship quality; problem behaviour prevalence; and behavioural changes. An online survey was developed using demographic information, questions related to play and resources, free text sections and the following validated measures: cat quality of life (QOL), the cat owner relationship scale, and the adult playfulness trait scale. Responses were completed by 1,591 cat guardians from 55 countries. Higher cat playfulness scores and a greater number of games played were significantly associated with higher cat QOL scores while longer amounts of daily play, greater number of games, both cat and guardian initiating play and higher guardian playfulness scores were all significantly associated with higher cat-guardian relationship scores. Exclusively indoor housing was significantly associated with both higher cat QOL and higher cat-guardian relationships scores compared to cats with outdoor access. Behavioural changes associated with distress in cats were reported when play was absent. Play may be an important factor in assessing and maintaining cat welfare. Further research into the mechanisms of how play impacts welfare and cat-guardian relationships is needed. Introduction The potential of play as a behavioural tool to both identify and increase animal welfare has been a recent topic of interest for applied animal behaviour researchers (Held & Špinka 2011;Ahloy-Dallaire et al. 2018).This link between play and welfare is most often observed when animals are experiencing good health, have adequate resources and are free from fitness threats such as predation (Fagen 1978;Oliveira et al. 2010).In addition, research links play to juvenile development and long-term maintenance of neurological/physiological, cognitive-behavioural, and emotional skills (Burghardt 2005;Graham & Burghardt 2010;Vanderschuren & Trezza 2013;Pellis et al. 2015). Play may be especially useful in managing the welfare of animals in human care such as cats (Felis catus), an increasingly popular pet choice within human homes (PDSA Animal Wellbeing Report 2022).Previous studies into play in cats have found associations between welfare issues such as social isolation, inconsistent husbandry, space availability, problem behaviours and changes in play behaviour (Seitz 1959;West 1974;Guyot et al. 1980;Carlstead et al. 1993;Strickler & Shull 2014;Loberg & Lundmark 2016).Of the previous studies that assessed play and welfare associations in cats, only a few used a specific welfare metric within their study.For instance, measures of cat physical condition (Arhant et al. 2015), occurrence of abnormal, repetitive behaviours (Kogan & Grigg 2021) or physiological and behavioural signs of stress (Carlstead et al. 1993).In a recent review of the current literature, we highlighted the lack of specific welfare measures used in studies of play in cats (Henning et al. 2022a). Cat welfare encompasses many elements, from the cats' physical state and the quality of their environment as well as the resources available to them, to their mental state and social relationships (Ellis et al. 
2013;Foreman-Worsley & Farnworth 2019;Henning et al. 2022a).For cats in human homes, all these elements rely heavily on their guardian and their relationship with that guardian.It is therefore important to consider guardian perceptions about their cat, their cat's behaviour and their relationship with their cat when assessing cat welfare within a human home.Human-animal interactions, such as play, are likely to impact the dynamic and quality of relationships between cat and guardian.Considering play may be integral to forming and maintaining social skills and communication intra-specifically in animals (Guyot et al. 1980, Bekoff & Allen 2011;Vanderschuren & Trezza 2013;Palagi et al. 2016), play may also be capable of assisting in establishing and maintaining healthy cat-guardian relationships (Henning et al. 2022b). Guardians are responsible for making homing and medical choices for their cats which may have serious welfare outcomes, such as surrender or euthanasia.How guardians perceive their cats may impact how they treat them and even what decisions they make concerning their lives.What guardians perceive as 'problem behaviours', such as scratching furniture or inappropriate urination are often the result of species-typical behaviours that are natural and functional within their usual environment but become problematic when viewed through the lens of a guardian who wishes to have a neat and undamaged home.Regardless of whether the behaviours are problematic for the cat or not, problem behaviours are the leading cat-related factor for surrender to shelters (Patronek et al. 1996;Jensen et al. 2020) and the foremost cause of euthanasia in otherwise healthy pet cats (Carney et al. 2014).Studies suggest that play may have a role in mitigating the occurrence of problem behaviours in cats, with a lack of play associated with greater occurrence of problem behaviours (Strickler & Shull 2014;Foreman-Worsley & Farnworth 2019). While previous studies show promise for some association between play and welfare in cats, there is still much to be understood about whether play is an adequate indicator of welfare, whether play can be used to promote welfare and how much or what kind of play is best suited to achieving these aims.A previous study by Henning et al. (2022b) investigated the factors associated with how much play occurs within human-cat dyads.Here, the present study follows on from this paper and aims to identify and assess whether play itself, its quantity or quality or its role in catguardian relationships, is associated with welfare outcomes in cats.To achieve this, four different potential welfare measures were used within a global online survey.These were: cat quality of life (QOL) scores; quality of the cat-guardian relationship scores; reported behavioural problems; and guardian observations of behavioural changes in times when play was absent.It was expected that higher play times would be positively associated with welfare measures. Materials and methods The survey development, validated measures, data management and study population have been described in more detail in Henning et al. (2022b).A short summary is included here as well as details relevant to the areas of the questionnaire analysed within this paper.The protocol for this study was conducted with approval from the Human Research Ethics Committee at the University of Adelaide, approval code: H-2021-091. 
Survey
The survey was developed in consultation with veterinarians, animal behaviourists, and cat guardians. Participants were required to be over 18 years of age and the primary caregiver of a cat over one year of age. If the participant had multiple cats, they were asked to think of the cat they spent the most time with and answer questions as if they had been asked about that cat specifically, so that the survey was only answered for one cat per respondent. The resulting open survey comprised 105 questions and was hosted on Qualtrics XM®. The survey included demographic information, questions regarding duration and time of play, play type, guardian experiences of play, cat medical history, guardian perceptions of changes in behaviour, and three validated measures: the Feline Quality of Life Measure (QOL) (Tatlock et al. 2017); the Cat Owner Relationship Scale (CORS) (Howell et al. 2017); and the Adult Playfulness Trait Scale (APTS) (Shen et al. 2014). All included measures had previously been validated (Shen et al. 2014; Howell et al. 2017; Tatlock et al. 2017). Participants were asked to rate whether their cat exhibited certain problem behaviours on a 5-point Likert scale from 'never' to 'most of the time.' Only participants who completed every question of the cat QOL and CORS validated measures, and who completed at least 98% of the survey overall, were included in the final analysis (Henning et al. 2022b). The survey was open for participation between the 22nd of June and the 17th of July 2021.

Analysis
A Kolmogorov-Smirnov test was undertaken to assess normality. The data were found not to be normally distributed; however, because the data set was large, and therefore robust enough to be tested with standard parametric inferential statistics, these were used (Ghasemi & Zahediasl 2012; Hector 2021). Eighteen reported problem behaviours were reduced to seven components using a Principal Component Analysis (PCA). After viewing the matrix, one component and one item were removed due to low loading, leaving six components and seventeen items. Components were then included in analyses, including ANOVAs, regression, and general linear models. Two general linear models were created with cat QOL and CORS scores as dependent variables. All variables were tested through two-way ANOVA against QOL and CORS scores, respectively, and all variables from regressions or ANOVAs with a P < 0.2 were included in subsequent modelling. Non-significant factors were removed using backwards elimination. Relevant interactions were checked for significance and included in the model where found. Statistical analyses were completed using SPSS® Statistics 27. A P-value of < 0.05 was considered significant. Descriptive and qualitative analyses of behaviour changes during times when play was withheld were undertaken. Free-text responses were collected and grouped into types of behaviour. Where quotes have been shortened, a '[…]' is used; quotes were only shortened where necessary while maintaining and respecting the original meaning.
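The analysis just described was run in SPSS Statistics 27; as a rough, non-authoritative illustration of the same pipeline (normality screening, PCA with eigenvalue-based retention of components, and a backwards-elimination general linear model), a Python sketch is given below. The file and column names (survey_responses.csv, qol_score, behaviour_1 to behaviour_18, indoor_only, n_games, cors_score) are hypothetical placeholders, not the variable names used in the study.

```python
# A rough sketch of the reported workflow; the original analysis was run in SPSS Statistics 27.
# File and column names below are hypothetical placeholders.
import pandas as pd
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical export of the survey data

# 1. Normality screening: Kolmogorov-Smirnov test against a normal distribution
#    fitted with the sample mean and standard deviation.
qol = df["qol_score"].dropna()
ks_stat, ks_p = stats.kstest(qol, "norm", args=(qol.mean(), qol.std()))
print(f"KS statistic = {ks_stat:.3f}, p = {ks_p:.3f}")

# 2. PCA on the 18 problem-behaviour items, retaining components with eigenvalue > 1
#    (items are standardised so eigenvalues refer to the correlation structure).
items = [f"behaviour_{i}" for i in range(1, 19)]
scaled = StandardScaler().fit_transform(df[items].dropna())
pca = PCA().fit(scaled)
kept = int((pca.explained_variance_ > 1).sum())
print(f"{kept} components with eigenvalue > 1, "
      f"{pca.explained_variance_ratio_[:kept].sum():.1%} of variance explained")

# 3. General linear model with backwards elimination of non-significant terms.
#    Assumes predictors are numeric or already dummy-coded (e.g. indoor_only as 0/1).
def backwards_eliminate(formula, data, alpha=0.05):
    """Refit the model, dropping the least significant predictor until all p <= alpha."""
    model = smf.ols(formula, data=data).fit()
    while True:
        pvals = model.pvalues.drop("Intercept")
        if pvals.empty or pvals.max() <= alpha:
            return model
        worst = pvals.idxmax()
        lhs, rhs = formula.split("~")
        terms = [t.strip() for t in rhs.split("+") if t.strip() != worst]
        if not terms:
            return model
        formula = f"{lhs.strip()} ~ {' + '.join(terms)}"
        model = smf.ols(formula, data=data).fit()

final_model = backwards_eliminate("qol_score ~ indoor_only + n_games + cors_score", df)
print(final_model.summary())
```

This is only a sketch of the stated procedure under the assumptions noted in the comments; the published estimates come from the authors' SPSS models, not from this code.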
Results A total of 1,591 completed responses were recorded from 55 different countries.Most participants were currently living in Australia (49.3%), identified as female (90.3%) and reported living in a twoperson household at the time of the survey (47.8%).Most cats in the study were of mixed breed (76%) and almost equally reported to be male or female.Most cats were reported to be housed exclusively indoors (67%), between 3-5 years of age (28.8%), within a single cat home (38.9%), and had been living with their current guardian for 2-5 years at the time of the survey (42%).For further information on demographics of this study see Henning et al. (2022b). Factors associated with problem behaviours A list of problem behaviours and a free text option were presented to respondents along with a five-point Likert scale with options from 'never' to 'most of the time.'Answers above a three on the Likert scale were included in the following tallies and percentages.The most reported problem behaviour by participants was scratching furniture (47.5%), followed by aggression during play (40.2%), excessive vocalisation (37.6%), aggression towards unfamiliar cats (36.6%) and being overly active at night (35.8%) (Table 1). Eighteen items of reported problem behaviours were reduced using a PCA.Six components presented with an eigenvalue exceeding 1, explaining 51.6% of variance.Sucking on material did not present with a high enough value in any component and was removed from the analysis, leaving seventeen items.These summary indices included the following components: inappropriate excretion subset, aggression towards unknown animals' subset, annoyance behaviour subset, aggression towards people subset, stress behaviour subset, and aggression towards known animals' subset (Table 2). Analysis of the PCA components showed no significant results between play and problem behaviour components.Analysis of PCA components and cat QOL scores showed only the component for stress behaviours (e.g., overgrooming, pica, anxiety) as significantly negatively associated with cat QOL scores (see Tables 3 and 4). Factors associated with cat quality of life One-way ANOVAs showed that cat QOL scores were higher in cats who were younger, had no health issues, were housed exclusively indoors, were more playful, had access to a greater number of games, had lower stress behaviour PCA component scores, and higher CORS scores (Table 3). A univariate linear regression showed a significant association between cat QOL and CORS scores (P < 0.001).The correlation coefficient for CORS and cat QOL scores was R = 0.375, indicating a moderate correlation (standard error = 0.015).A linear regression also showed a significant association between cat QOL scores and stress behaviour PCA component scores (P < 0.001).The correlation coefficient for stress behaviours and cat QOL scores was R = 0.154 indicating a weak correlation (standard error = 0.08).A general linear model analysis of factors associated with cat QOL scores showed higher cat QOL scores where cats were housed exclusively indoors, were more playful, had access to a greater number of games, and where the guardian reported higher CORS scores (Table 4). An interaction was found between reported health issues and cat age (P = 0.005).QOL scores reduced at a greater rate where the cat was older and had health issues. 
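The correlation coefficients and standard errors reported above come from the authors' SPSS regressions; as a small illustrative aside, the sketch below shows one common way to compute a Pearson r and an approximate standard error for it. The approximation need not match the regression-based standard errors reported in the paper, and the file and column names are hypothetical.

```python
# Illustrative only: Pearson correlation between CORS and cat QOL scores,
# with a common large-sample approximation for the standard error of r.
# File and column names are hypothetical; the paper's values come from SPSS regressions.
import math
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")
pair = df[["cors_score", "qol_score"]].dropna()

r, p = stats.pearsonr(pair["cors_score"], pair["qol_score"])
n = len(pair)
se_r = math.sqrt((1 - r**2) / (n - 2))  # approximate SE of r
print(f"r = {r:.3f}, p = {p:.3g}, n = {n}, SE ~ {se_r:.3f}")
```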
Factors associated with cat-guardian relationships
One-way ANOVAs showed that CORS scores were higher in human-cat dyads where the cat was housed exclusively indoors, the number of games played within the dyad was greater, the cat had access to a higher number of resources, guardians did not report avoiding play, and where both cat and guardian initiated play (Table 5). A univariate linear regression showed significant associations between CORS scores and guardian playfulness (APTS) scores (P < 0.001), total guardian-cat daily play time (P < 0.001), cat QOL scores (P < 0.001), and the length of guardianship. The correlation coefficient for CORS and APTS was R = 0.120, indicating a weak correlation (standard error = 0.017). The correlation coefficient for CORS and total guardian-cat daily play time was R = 0.271, indicating a moderate correlation (standard error = 0.004). The correlation coefficient for CORS and cat QOL scores was R = 0.375, indicating a moderate correlation (standard error = 0.037). The correlation coefficient for CORS and length of guardianship was R = 0.058, indicating a weak correlation (standard error = 0.06). A general linear model analysis of factors associated with CORS scores showed higher scores where cats were housed exclusively indoors, the number of games played within the human-cat dyad was greater, the cat had access to a larger number of resources, guardian age was younger, guardians identified as non-binary, guardians did not report avoiding play, both cat and guardian initiated play, the guardian had higher playfulness scores, was single, and where the cat was left home for less than 40 h (Table 6).

Guardian-perceived behaviour changes when play is withheld
Guardians were presented with the following question: "Does your cat's behaviour change if you haven't played with them for a while?" The most common reported behaviour changes when play was withheld (n = 468) were that the cat exhibited more attention-seeking behaviour, followed by increased vocalising, an increase in destructive behaviour, an increase in reclusive behaviour, and increased aggressive behaviour (Table 7). Examples of quotes relating to these reported behaviours are given in Table 8.

Discussion
The objective of the present study was to investigate associations between play and welfare in cats by assessing factors associated with cat QOL scores and closeness of cat-guardian relationships. Results indicated that cat playfulness and the number of games played by the cat and guardian were associated with QOL scores, while the amount of daily play, the number of games played, who initiated play, whether the guardian avoided play, and guardian playfulness were associated with cat-guardian relationship scores. Cat behaviour, as perceived by their guardian, was reported to change during times when play was withheld. No significant associations were found between play and problem behaviours.

Play is most often observed when an animal's other needs are being met (Fagen 1978). In a previous study into the effects of stress on cats, animals assigned to a stress condition of irregular or poor caretaking showed reduced or absent play behaviours (Carlstead et al.
1993).Interestingly, contrary to our expectations, daily play times were not significantly associated with cat QOL scores.However, the higher the number of games a cat was reported to regularly engage in was significantly correlated with higher cat QOL scores.This discrepancy may indicate that it is the quality or variety of play available and not simply the quantity of play that may be associated with positive welfare outcomes for cats (Carlstead et al. 1991;Ahloy-Dallaire et al. 2018;Fernandez 2022). The following distinction was made in this study between toys and games: toys were defined as stimuli that the cat can interact with while games were defined as actions that a cat (and/or guardian) can take.A game may include a toy.Cats have previously been observed to become habituated to toys quickly, showing decreases in play intensity with increased exposure to a particular toy followed by regaining interest and intensity of play when presented with a new toy (Hall et al. 2002).Cats may also become habituated to the types of games available to them.Where cats only had a few games available to them with their guardian, their guardian also reported lower QOL scores.If cats are given access to a greater number or regular variation of games and toys, this may help to minimise habituation and boredom with play, potentially resulting in higher playfulness, QOL scores and overall welfare. Play factors associated with cat-guardian relationships: Number of games and daily play As well as being associated with cat QOL scores, a higher number of games regularly engaged with was also associated with higher catguardian relationship scores.It is possible that guardians who report having a low CORS score were less likely to interact with their cat, or that regular engagement in a variety of play is beneficial for improving cat-guardian relationships.Similarly for humans, only engaging in a small number of games with their cats may lead guardians to experience boredom and dissatisfaction with play, potentially resulting in guardians playing less with their cats.It has previously been found that routine behaviours, which a person feels obligated to perform, undermine their sense of self and freedom (Iso-Ahola 2022).However, these negative aspects can be mitigated by increasing the variety in, and of, these behaviours, such as undertaking a variety of different activities that achieve the same purpose (Iso-Ahola 2022).Tasks that require too little cognitive engagement (such as repeating the same wand toy game with a cat) also can become boring and aversive, leading people to either disengage with the task or seek to enhance variety and novelty (Shenhav et al. 2016).Therefore, engaging with a variety of games is likely to increase both cat and guardian feelings of satisfaction and engagement with their play sessions, enhancing the human-cat bond and making future play sessions more likely.Henning et al. (2022b) previously reported that the number of games engaged with by human and cat was also associated with longer play session lengths and total daily play times.Total daily play times were significantly associated with cat-guardian relationship scores.This may be because more play offers more opportunities for human-cat bonding, or it may also be that guardians who are more invested in their cat are more willing to play with them. 
Play factors associated with cat-guardian relationships: Who initiates and guardian playfulness Cat-guardian relationship scores were highest where both cat and guardian were reported to initiate play sessions.Being able to both initiate play and recognise initiation of play requires both cat and guardian to be observant and capable of comprehending each other's communication signals.Communication is key to the health of many relationships; therefore, it is possible that better understanding of human and cat signals, be they vocalisations or body language, is likely to lead to a closer or more enriching cat-guardian relationship.Guardians who are able to understand their cat's signals are also more likely to identify their cat's needs (Heath 2007).Self-determination, including the freedom to choose to initiate and engage in an activity, is also a critical component to enjoyment in activities (Dattilo et al. 1998).Therefore, when a cat or guardian is choosing to participate in an activity, instead of being forced to do so, they may be more likely to derive benefit from the activity.Cats have been observed to respond well to being given choice and control within human-cat interactions (Mertens & Turner 1988;Haywood et al. 2021).Allowing cats to have choice in their interactions is an important behaviour to encourage within cat guardians.Further, cats have been shown to engage more enthusiastically with humans who are more responsive when the cat is seeking attention (Turner 1991).If a cat is regularly initiating play sessions, it indicates that they are applying, and able to utilise, their own self-determination. Guardian playfulness was also found to be significantly associated with higher cat-guardian relationship scores.Guardians who are more playful may be more willing to engage in play with their cat and may enjoy playing with their cat more than guardians who are not playful.Henning et al. 
(2022b), which utilised data from the same respondents as the current study, previously found that APTS scores were significantly associated with longer play session times between cats and guardians. A more engaged guardian who participates in longer play sessions may be better positioned to build social connections with their cat through increased interaction. In a previous study by Odendaal and Meintjes (2003) into dog-guardian relationships, interactions of between 5 and 24 min were associated (for both dog and human) with: decreases in blood pressure; significant increases in oxytocin, prolactin and phenylethylamine (neurochemicals known to be associated with bonding); plasma dopamine concentrations associated with pleasurable experiences; and β-endorphin, which is involved in learning, memory, analgesia, and euphoric states. Humans in the study also experienced a decrease in plasma cortisol, indicating stress relief (Odendaal & Meintjes 2003). Increased interactions due to guardian playfulness may therefore benefit both cat and guardian. However, it is important to note that this measure of cat-guardian relationship is solely guardian-reported and is not a direct measure of how the cat experiences the relationship. A recent study by Finka et al. (2022) found that cats preferred humans who interacted only when the cat showed interest in interacting, and that people who self-reported as having a long history of experience with cats were more likely to force interaction, hold cats against their will and touch places that cats do not typically like to be touched (Finka et al. 2022). Direct observational measures of cat experiences of cat-guardian relationships are needed.

Play factors associated with cat quality of life and cat-guardian relationships: Indoor or outdoor housing
Exclusively indoor housing was significantly associated with both higher guardian-reported cat QOL scores and higher cat-guardian relationship scores. To our knowledge, this is the first study to compare QOL scores of indoor- and outdoor-housed cats. Some previous studies have focused on cat adaptation to confinement in cattery, shelter, or laboratory settings. However, these are limited in their ability to assess the needs and welfare of cats within a long-term, indoor-only home (Ottway & Hawkins 2003; Kry & Casey 2007; Stella et al. 2014; Rehnberg et al.
2015;Foreman-Worsley & Farnworth 2019).It has been suggested that outdoor cats may benefit from higher quality of life generally than indoors cats due to their ability to find their own amusement, express their natural behaviours, and choose whether to be inside or out (Rochlitz 2005).For some, this potential for a higher quality of life outweighs the increased risks of outdoor access, such as infectious disease, road accidents, trauma, and risk to native wildlife (Yeates & Yates 2017).While for others, the inherent dangers of the outdoors and their potential to limit the length of their cat's life outweigh any small difference in potential quality of life (Rochlitz 2005).However, contrary to the suggestion that outdoor cats may have a higher quality of life, the results of the current study showed that cats housed exclusively indoors recorded higher QOL scores than cats with outdoor access.While this is not a definitive result regarding the difference in QOL scores between indoor and outdoor cats, there are several reasons this result may have been observed in our study.Firstly, cat-guardian relationships are central to domestic cat welfare.Cats rely heavily on their guardian to care for their nutrition, shelter and both their physical and mental health.A good relationship with their guardian, therefore, is likely to increase their QOL score.Our results found a significant association with catguardian relationship scores and whether the cat was housed exclusively indoors or allowed outdoor access, with indoor cat guardians reporting higher cat-guardian relationship scores.Secondly, guardians who house their cats indoors are better able to observe their cat's day-to-day health.Any changes in behaviour, mobility, or important health markers, such as urination and defaecation regularity and consistency, are more likely to be noticed by a guardian who is observing their cat more often.Conversely, cats with outdoor access may be harder to observe and may eliminate in outdoor areas where their excrement cannot be examined for changes.If indoor-cat guardians are more able to observe changes in their cat's health, they are better placed to make any environmental changes needed, or seek veterinary attention, resulting in increased cat QOL scores.Finally, guardians who keep their cats indoors may have more opportunities to participate in enriching interactions such as play.A previous study by Pyari et al. (2021) reported that indoor cats showed stronger interest in play stimuli than outdoor cats.It is possible then, that indoor cat play preferences and needs may differ from outdoor cats.Pyari et al. (2021) discussed that this difference may be due to an absence of the ability to express hunting behaviour in indoor cats, leading to what they describe as an increase in predatory play behaviour to compensate.However, predatory play as a term and as a behaviour is not yet fully understood and may in fact be a misnomer.A previous study by Pellis et al. (1988) analysed 'predatory play' movements or playful movements during predation and found that these movement patterns were adaptive movements that functioned to protect the cat from injury during predation and in essence were not playful at all (Pellis et al. 1988).The trope of cats being 'psychopathic' playful hunters (Evans et al. 
2021) is a potentially dangerous one for cat welfare, considering the current attitudes to cats and their frequent use as a scapegoat for the more major pressures imposed by people on biodiversity and climate (Palmer 2014;Wald & Peterson 2020). It is important that we acknowledge the play needs of cats and that these needs may differ depending on that cat's background and housing, while not anthropomorphising and casting moral judgments that do not apply to non-human animals and may not be an accurate assessment of their behaviour anyway, as observed here by Pellis et al. (1988). While there are strong arguments for both camps of thought as to whether cats should be housed indoors or out, it is generally agreed that indoor cats require environmental enrichments to support their welfare in an indoor only home (Foreman-Worsley & Farnworth 2019).It is also possible that guardians who are more closely bonded with their cat and therefore may be more worried for their cat's safety may choose to house their cats indoors (Crowley et al. 2019(Crowley et al. , 2020;;Foreman-Worsley et al. 2021) (as indicated here by the higher cat-guardian relationship scores), and that they may also be more willing to fulfill the play and enrichment needs of their cat.This, in turn, may have resulted in the higher cat QOL scores observed within this study.Further, it may not be that indoor or outdoor housing is strictly better for cats, but instead, how their guardians perceive, monitor, and treat them that has more of an impact. Behaviour changes during lowered play times Guardians reported noticing certain behaviour changes when they had not played with their cat for some time.While many guardians reported their cat exhibiting an increase in attention-seeking behaviours, several reported that their cat became more reclusive when play had been withheld.This showcases an interesting dichotomy of reactions to lack of play, with both an increase and decrease in cat engagement with guardian observed depending on the individual cat.This may be due to individual differences between cats and/or their guardians, familial dynamics within the household, the specific environment, or the housing situation of the cat (indoor or outdoor).Unfortunately, no clear pattens of differences were able to be deduced between behaviour changes in indoor versus outdoor cats within this study, however it is likely that this may impact behaviours and future studies could investigate this further.For many of the welfare measures we have investigated previously, the emphasis has been on whether play is indicative of welfare (Henning et al. 
2022a).Observations of behaviour change when play is withheld may also offer us an insight into whether play is a promotor of welfare.Often it is difficult to separate the two concepts from each other, not knowing if more play means more welfare or if greater welfare means more play, or both.Within this study, guardians reported that when play is withheld, observable behaviour changes occur that may indicate a decrease in welfare or well-being.Increased attention-seeking behaviour, especially vocalisations and destructive behaviour, may indicate that the cat is experiencing frustration, a negative affective state that may impact the cat's well-being.Increased reclusive behaviour may also be detrimental to cat welfare.Cats in human homes must share their space with the humans of the household, who utilise most of the space in a house.A cat who becomes reclusive is limited in what areas of the house they feel comfortable accessing and this may limit their physical activity as well as their access to important resources such as food, water, and litter (Carlstead et al. 1993).Reclusive behaviour may also indicate that there has been a deterioration in the social relationship between cat and guardian.Within cat social groups, signs of affiliation include spending time around and interacting positively with another cat while avoidance of spaces where another cat is and aversion to interacting may indicate that the cats are not affiliated (Vitale 2022).Put in terms of cat and guardian, if a cat is avoiding spaces where their guardian is and is not interacting as usual, it may indicate that the cat is uncomfortable.Humans within relationships are often acutely attuned to the behaviour of those close to them, and how this behaviour may affect or be indicative of the health of their relationship.It is possible that cats are similarly aware of their guardian's behaviour towards them and what this may indicate about their safety within the relationship.Recent research shows that cats regularly keep a mental tab on where their guardian is within the house (Takagi et al. 2021).It is entirely possible that cats are also aware of what constitutes their guardian's regular behaviour and therefore notice any changes to this behaviour.Previous studies have shown that cats are highly sensitive to changes in routine (Stella et al. 2011).It is possible that changes the cat perceives in their regular interactions with their guardians may similarly make them uncomfortable.Many of the behaviour changes listed, such as destructive or aggressive behaviour, may also constitute a problem or annoyance for the guardian which may impact their perception of their relationship with their cat.As cat welfare within human homes relies so heavily on human perceptions, this may also constitute an impact to cat welfare. 
Limitations and future research
This study has several potential limitations. Since they are not a direct measure, guardian-reported surveys are inherently limited in their ability to accurately capture animals' behavioural data. Further, due to the self-reporting nature of surveys, an individual's responses may be prone to respondent and recall bias and limited in their ability to predict or assess behaviour (Schwarz & Oyserman 2001; Paulhus & Vazire 2007; Kormos & Gifford 2014). In addition, the survey population sampled may also be biased. Participants were recruited during a global pandemic and overwhelmingly identified themselves as female; the survey also took approximately 20-30 min to complete, and guardians prepared to volunteer for a survey of this length may be more invested in their cat's care than the average cat guardian. Therefore, survey responses may not be an accurate representation of the general population. Due to some overlap between the validated scales and questions such as the frequency of stress and other problem behaviours, comparisons between the two may be of limited meaning. Finally, as this was a cross-sectional study, it does not establish a causal link between the factors that were associated. Future studies should further investigate the role of play in cat welfare, with special attention paid to indoor and outdoor requirements, the mechanisms of how play impacts cat-guardian relationships, communication, socialisation, and play preferences in both cats and guardians.

Animal welfare implications and conclusion
Play has potential to be used as both an indicator and promotor of cat welfare. The present study aimed to assess the play factors associated with welfare in cats. Within this study, multiple welfare-related measures were used, including cat QOL score, the quality of the relationship between cat and guardian, the prevalence of problem behaviours and observations of behavioural changes. Results showed significant associations between cat playfulness and the number of games played with cat QOL scores. Cat-guardian relationship scores were significantly associated with the amount of daily play and number of games engaged in, whether the cat and guardian both initiated play sessions, and guardian playfulness. Problem behaviours that clustered on a PCA into a stress component were significantly associated with QOL scores, but no other problem behaviours were found to have significant associations with play or welfare measures. Behavioural changes that indicated stress, frustration, or unease were reported when play had been withheld. This study shows support for the association between play and welfare and provides several avenues for further research. Future investigations should focus on the role of play in assessments of cat welfare, the use of play as an intervention to promote welfare, and the mechanisms of how play impacts cat welfare and cat-guardian relationships.

Table 1. Problem behaviours reported by cat guardians (n = 1,591). Guardians were able to report multiple behaviours. Based on online survey responses of guardians between June 22nd and July 17th, 2021.

Table 2. Loadings for 18 problem behaviours reported by cat guardians, generated by means of a Principal Component Analysis. (*) Not included due to low loading.

Table 3. Results of a one-way analysis of variance (ANOVA) for each factor associated with quality of life in cats (n = 1,590), except for cat age (n = 1,584) and cat playfulness (n = 1,589). Based on online survey responses of cat guardians between June 22nd and July 17th, 2021.

Table 4. General Linear Model parameter estimates of factors associated with cat quality of life scores (n = 1,583), based on online survey cat guardian responses between June 22nd and July 17th, 2021. (**) Reference category. (a) Total 'games' played related to games the guardian regularly played with their cat and included: fetch, playing with catnip toys, playing with noisy toys, playing with boxes, playing with hands, playing with digital devices, playing with wand toys, playing with laser pointers, playing with food, playing with motorised toys, chasing each other and training. (b) Housing: outdoor access was defined as any regular unsupervised access to the outdoors without a harness or lead and not within a fully enclosed cat enclosure; exclusively indoors was defined as a cat with no access to the outdoors except on a harness or within a fully enclosed cat enclosure. (c) Stress behaviour score relates to guardian-reported frequencies for anxiety, pica and overgrooming respectively, reported on a five-point Likert scale and coded into values (1 being never and 5 being most of the time), with values summed to create a composite score.

Table 5. (Continued) (b) Housing: outdoor access was defined as any regular unsupervised access to the outdoors without a harness or lead and not within a fully enclosed cat enclosure; exclusively indoors was defined as a cat with no access to the outdoors except on a harness or within a fully enclosed cat enclosure. (c) Total resources available included: scratching post, cat bed, hiding place, water fountain, wand toy, food puzzle, food treats, motorised toy, wall or window perch, harness walks, outdoor cat enclosure, cat grass, self-cleaning litter tray, Feliway™ and automatic feeder.

Table 6. General Linear Model parameter estimates of factors associated with cat-owner relationship scores (n = 1,372), based on online survey responses of cat guardians between June 22nd and July 17th, 2021. (c) Total resources available included: scratching post, cat bed, hiding place, water fountain, wand toy, food puzzle, food treats, motorised toy, wall or window perch, harness walks, outdoor cat enclosure, cat grass, self-cleaning litter tray, Feliway™ and automatic feeder.

Table 7. Guardian-reported cat behaviour changes when play was withheld, reported by guardians (n = 468). Guardians could report more than one behaviour change. Based on online survey responses between June 22nd and July 17th, 2021.

Table 8. Examples of guardian-reported cat behaviour changes when play was withheld (n = 468). Based on online survey responses between June 22nd and July 17th, 2021.
Attention-seeking behaviour: "He behaves to get my attention; does things he knows he's not supposed to, e.g., jumping on top of cabinets." [participant 729]
Increased vocalisation: "He will start screaming at night while carrying toys up the stairs to my bedroom." [participant 864]
Destructive behaviour: "They are frustrated and destroy property (curtains, couch etc)." [participant 178]
Reclusive behaviour: "My cat tends to not engage with us as much. She'll spend time around us, but she's more aloof. As soon as we take her out for a walk or play with her, her behaviour changes the following day and she's very playful and affectionate." [participant 683]; "She becomes more distant from me…doesn't sleep near me as much…is less accepting of affection." [participant 677]
Increased aggressive behaviour: "He gets frustrated and takes it out on myself or the other cat (aggressive, violent play)." [participant 1459]
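Footnote (c) to Table 4 above describes how the composite stress behaviour score is built from three Likert-scored items (anxiety, pica and overgrooming). A minimal, hypothetical illustration of that coding is sketched below; the intermediate Likert labels are assumptions, since only the scale endpoints are given in the text.

```python
# Minimal illustration of the composite stress behaviour score in Table 4, footnote (c).
# The intermediate Likert labels and the example responses are hypothetical.
LIKERT = {"never": 1, "rarely": 2, "sometimes": 3, "often": 4, "most of the time": 5}

def stress_composite(anxiety: str, pica: str, overgrooming: str) -> int:
    """Code each five-point Likert response to 1-5 and sum the three items (range 3-15)."""
    return LIKERT[anxiety] + LIKERT[pica] + LIKERT[overgrooming]

print(stress_composite("sometimes", "never", "often"))  # -> 8
```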
2023-01-29T16:11:56.862Z
2023-01-27T00:00:00.000
{ "year": 2023, "sha1": "1533bfaccbc966d56b9f73dea9cc89b5a26b3e9e", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/03F9B841EFF468344BC1B0D01D37CBC6/S0962728623000027a.pdf/div-class-title-cats-just-want-to-have-fun-associations-between-play-and-welfare-in-domestic-cats-div.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3c156d90938fd968584e46718e687cdf71be5c7f", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
233087105
pes2o/s2orc
v3-fos-license
Research on the Application of Problem-Oriented Teaching Model in the Course of Accounting Information System The purpose of this study is to apply the problem-oriented teaching Model in the practice of Accounting Information System course, to improve students' ability to understand and apply knowledge, and to form a unique competitive advantage for accounting professionals. The research significance lies in that through the application of the problem-oriented teaching Model, students can truly understand the process and data relationship of the integration of finance and business, master the theory, practice and frontier development of the Accounting Information System course, learn to use the system Financial analysis of accounting data and business data in the program, and effectively improve students' ability to analyze and solve problems. Through the questionnaire survey of teaching evaluation, this article shows that the introduction of problem-oriented teaching methods into the course of Accounting Information System is an effective method to enhance students' motivation and effectiveness. I. INTRODUCTION The Accounting Information System (AIS) course is developed from the traditional Accounting Computerization course. So far, there are three main directions in its curriculum design: The first is information technology orientation, which focuses on IT technology. This Model allows students to open the "black box" inside the information system and understand the technical issues. The content of the course is based on the use of a programming language to design an accounting system. It is not difficult to see that this kind of guidance basically exists in universities of science and engineering, especially the AIS teaching team mainly based on the background of computer science. The difficulty lies in how to balance IT and accounting theory in the teaching content, so as to avoid the misunderstanding of non-accounting in teaching. The second is to simulate accounting orientation, that is, the traditional teaching Model of accounting computerization. This Model is usually paired with an AIS or ERP product with a high market share as practical teaching software. The software basically retain the traditional accounting workflow, from the entry of accounting documents to the output of books and reports. At the same time, students can intuitively feel the real-time processing of accounting business and are impressed by the advantages of computerized accounting. However, the disadvantage of this kind of guidance is also obvious. Because the accounting workflow of the general AIS software is basically automated, the operation interface is simple and easy to understand. In this learning environment, AIS teaching has become a software operation training course, and the software operating principles and technical details are often overlooked. More importantly, this guidance cannot reflect the changes and development of accounting in the IT environment, and in particular cannot reflect the new characteristics of internal control in the IT environment. Therefore, this is a teaching Model suitable for practical experience of financial accounting courses. The third is business process orientation, that is, arranging the content of AIS courses from the perspective of financial and business integration is a breakthrough in the simulation of accounting guidance, because it emphasizes the role of IT in reengineering accounting business processes. 
The advantage of this Model is that students can experience the whole process of accounting information generation, processing and utilization from the actual operation of the enterprise. More importantly, the organization of AIS course content through business processes makes it easier to embed business process control points and audit content. At present, the international mainstream ERP software has downplayed the traditional accounting operation interface. Instead, the process is the center, collecting original information in various business departments, and at the same time strictly controlling the entire information collection, storage, processing and reporting process. Under this guidance, the original information generated by the business department can automatically generate accounting vouchers according to business processing rules while entering the AIS, and subsequent books and reports can be automatically generated by the program. It can be seen that the status of the accounting function has gradually decreased in the process of informatization. The focus of the accountant's work has also begun to turn to control AIS and participate in management decision-making. With the deepening of accounting informatization, the design style of China's high-end ERP software is gradually in line with international standards, and the teaching materials used in the course of "Accounting Information System" are basically business process oriented, which is consistent with the development trend of financial and business integration. The course content of Accounting Information System under the guidance of business process is organized around various business processes of the enterprise, including financial processing, purchasing and payable, sales and receivable, production and cost, human resources, financial reporting system. As the core course of the accounting profession, AIS's mission is to deliver a compound talent who is familiar with the accounting standard system, internal control standards and accounting informatization through the organic combination of IT and accounting knowledge. The above three guiding goals are the same, with different emphasis. Information technology orientation focuses on IT, simulated accounting orientation focuses on accounting, and business process orientation is based on ERP system. It is not difficult to find that business process orientation is the current mainstream orientation. However, the ERP systembased experiment has many experimental contents, heavy tasks, and cumbersome operation steps. As a result, students are exhausted in the experiment, neglecting theoretical learning, and unable to consider thinking, analyzing, and solving problems. The problem-oriented teaching Model refers to students' learning activities that take the problem as the core and actively analyze and solve the problem by questioning and criticizing under the guidance of the teacher, and creatively acquire knowledge and experience. This study applies this Model to the teaching practice of Accounting Information System, hoping to improve students' ability to understand and apply knowledge and form a unique competitive advantage for accounting professionals. ORIENTED TEACHING Accounting Information System is a comprehensive and practical course, which is usually set up as an undergraduate major course in accounting, financial management and auditing. 
The course requires students to correctly understand and evaluate the accounting information system on the basis of mastering the flow and methods of accounting information processing in the computer, and to lay a sound theoretical foundation for the continuous improvement and innovation of accounting computation and management methods. Before taking this course, students need basic knowledge of accounting, financial management, computing and databases. Because the phenomenon of "emphasising operation and neglecting theory" is common in the course, students tend to lose their footing: they are exhausted by the experiments, neglect theoretical study, and have little capacity left for thinking, analysis and problem solving, so the course goal is difficult to achieve. In teaching, the effective introduction of problem-oriented methods can play a valuable role in stimulating learning interest, deepening students' knowledge and improving analytical skills. The benefits of the problem-oriented teaching method mainly include the following four points:

First, it helps students build on what they have already learned. Teachers' questions prompt students to review and reflect on prior knowledge in a timely manner, helping them transfer that knowledge before learning new content. Listening to other students' answers can repair forgotten knowledge points and lay the foundation for understanding subsequent course content. For example, when explaining the voucher entry process, the constituent elements of the accounting voucher must be reviewed. Students can recall the bookkeeping voucher formats they know and explain the specific requirements of each information element, which helps them review what they have learned and further understand the error-checking mechanism of the system at voucher entry, such as the rule that every debit must have a corresponding credit and total debits must equal total credits (a minimal illustration of this check is sketched below).

Second, it cultivates pioneering thinking. Repeated questioning, answering and discussion can cultivate students' pioneering thinking and a spirit of digging into problems and pursuing knowledge and truth, respecting history and prior thought while following the truth rather than the textbook alone. When debating with teachers, students should adopt the attitude of "I love my teacher, but I love the truth more" in discussing issues of common concern. For example: why is the starting point of accounting information system processing the accounting voucher rather than the original voucher? With the use of scanners, character recognition and other equipment, the original voucher will also be stored in the system; will the starting point of system processing in the future become the original voucher? Such a question may not receive a standard answer in the classroom, but it can inspire students to collect information and continue the discussion after class, cultivating their pioneering thinking and stimulating their curiosity.

Third, it deepens understanding of the problem. Being able to ask and answer questions requires thinking deeply about the problem and expressing it in one's own language; this is essentially a process of decomposing and reprocessing knowledge. The decomposition and aggregation of ideas may lead to new insights, and the collision of ideas may also generate new sparks.
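As a purely illustrative aside to the voucher-entry error check mentioned under the first point above (every debit must have a corresponding credit, and total debits must equal total credits), a minimal validation routine is sketched below. The data structure and function are hypothetical and do not represent the behaviour of any particular AIS or ERP product.

```python
# Minimal sketch of a debit/credit balance check at voucher entry.
# The Voucher structure and amounts are hypothetical, not drawn from a real AIS product.
from dataclasses import dataclass
from decimal import Decimal

@dataclass
class VoucherLine:
    account: str
    debit: Decimal = Decimal("0")
    credit: Decimal = Decimal("0")

def validate_voucher(lines: list[VoucherLine]) -> list[str]:
    """Return a list of error messages; an empty list means the voucher can be saved."""
    errors = []
    total_debit = sum(l.debit for l in lines)
    total_credit = sum(l.credit for l in lines)
    if not any(l.debit > 0 for l in lines) or not any(l.credit > 0 for l in lines):
        errors.append("Every voucher needs at least one debit line and one credit line.")
    if total_debit != total_credit:
        errors.append(f"Debits ({total_debit}) and credits ({total_credit}) must be equal.")
    return errors

# Example: a simple cash sale entry that balances.
print(validate_voucher([
    VoucherLine("Cash", debit=Decimal("1000")),
    VoucherLine("Sales revenue", credit=Decimal("1000")),
]))  # -> []
```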
Everyone has different perspectives on problem understanding and analysis. Therefore, mutual communication will deepen the depth of the questioner and questioner's understanding of the problem. Fourth, it is necessary to strengthen the cultivation of systematic ability to master knowledge. Since knowledge exists objectively and disciplines are artificially set classifications, the interdisciplinary characteristics of "Accounting Information System" can be brought into full play by asking relevant knowledge. The problem setting that moves the whole body will make the students cross the barriers brought by the discipline setting, master the knowledge learned more systematically, and exercise the ability to think comprehensively. III. MAIN RESEARCHING ASPECTS The main content of this study focuses on the following three aspects: One is the combination of theoretical teaching and practical teaching to carry out problem design. The focus is on how to set up problems, which can not only cultivate students 'professional skills, but also stimulate students' subjective initiative, so that most students can participate actively and have challenges. The second is the application of problem-oriented teaching methods in teaching practice. This requires integrating the course content with a complete business case. The content of the existing AIS courses is arranged according to their functional modules, and there is a lack of concatenation between knowledge points. Therefore, an integrated case can be used for teaching, and systematic teaching content integration can be carried out in limited class hours. The third is feedback and evaluation of teaching effects. According to the teaching status of the "Accounting Information System" course, evaluate the effect of the problem-oriented teaching Model on improving teaching effectiveness. Both the theoretical teaching and practical teaching of Accounting Information System can adopt the "problem-oriented" teaching Model. Therefore, in the initial teaching design, it should be supplemented with corresponding problem design, these problems are derived from the key control points in the accounting information system. Teachers need to set up questions based on the experiment content in advance. In the specific teaching practice, we should design and guide the teaching practice according to the basic links of the "problem-oriented" teaching Model focusing on grasping the setting of the teaching situation and the reflection of the problem. Since the problem-oriented teaching method uses the problems that students may encounter in the future as the starting point of learning, this has obvious advantages for stimulating students' consciousness to learn and guiding students to improve their problemsolving skills; Comprehensive courses like Information System are more practical. IV. APPLICATION OF PROBLEM-ORIENTED TEACHING METHOD IN AIS COURSE The problem-oriented teaching method is mainly reflected by the introduction of thinking questions before class, the discussion of questions in the classroom, and the arrangement of thinking questions after class. The thinking questions before the class play three roles. One is to review old knowledge points, the other is to investigate the knowledge base of students, and the third is to stimulate students' interest in learning. For example, in the chapter of the cashier and bank reconciliation system, it is necessary to discuss: why bank reconciliation, when and how to check. 
The textbook's editor assumes that readers already have this knowledge, but the true state of students' knowledge is often otherwise, and it needs to be verified by the instructor through questions before class: "How does bank reconciliation work in manual mode, and what happens if something goes wrong?" By thinking about this problem, students are guided to deepen their understanding of the manual process covered in the accounting courses they have previously taken and, after further study, to understand how it differs from the computerised process. The pre-class thinking question is thus a process in which the teacher asks predetermined questions, the students answer, and the teacher corrects and evaluates the students' answers.

The discussion of questions in the classroom should not be restricted to one form. It can be a teacher's question and a student's answer, or a student's question about what they have learned, answered by other classmates and supplemented by the teacher with further enquiry. Class discussions can clarify vague concepts and organise learning ideas. Of course, there may be questions to which teachers and students can hardly reach a satisfactory answer; such questions can be set as topics for discussion after class and revisited in later class time. The role of after-class thinking questions is to reinforce the memory of the key and difficult points of the lesson. By answering the thinking questions after class, students can check their understanding of the lesson content, which is also a process of systematically organising knowledge from isolated points into a connected whole. For example, at the end of the chapter on the accounting processing system, "improving the accounting processing flow" can be set as a post-class thinking question, so that students continue to collect material after class and compare the theory they have learned with the operation of an actual system, which deepens their understanding of the problem and enables them to apply the theory flexibly.

Problem-oriented teaching uses the question as the trigger for the joint thinking of teachers and students. Teachers therefore need to design the pre-class, in-class and after-class questions carefully in order to guide students effectively to build on what they have learned; at the same time, teaching and learning reinforce each other, and the process also sharpens the teacher's own thinking. There are certain rules to follow in setting up questions, which can be summarised as follows. First, questions must stay close to the teaching content. As mentioned earlier, the Accounting Information System course involves many knowledge points, but not all of them need to be questioned. Questions should be closely connected with the knowledge taught in the lesson; irrelevant questions waste valuable classroom time. Second, the difficulty of a question should be moderate. If the question set by the teacher is too simple, it will bore students and fail to stimulate their interest in thinking; if it is too difficult, it will intimidate them, the session will turn into the teacher's monologue, and the question will not achieve its expected effect.
The ideal question should be difficult to Modelrate, and the question is full of controversy will also bring more collision of ideas. Third, it is necessary to grasp the logic and frequency of questions. Judging from the logical order of asking questions, it is advisable to put the questions first and then near, first in common and then in special cases, first in depth and then in extension, which conforms to the general law of people's thinking. For dozens of minutes in the class, it is better to narrate and discuss. Teachers need to flexibly grasp the frequency of questions. Asking questions can strengthen the interaction between teachers and students, but the classroom also needs to be dynamic and static, so that students have time to precipitate and digest knowledge. In addition, it is necessary to avoid that questions and answers only occur among a limited number of students, and others are in a state of thinking rest during Q & A. The problem-oriented teaching method is a breakthrough to the traditional teaching Model characterized by the unidirectional transmission of knowledge information and examination-oriented. The reform of this teaching Model puts forward higher basic quality requirements for teachers. First of all, problemoriented teaching requires teachers to establish an equal concept and awareness of teachers and students. Comply with the ancient motto "Wen Dao has succession and skill has specialization", "Disciples do not have to be inferior to teachers, and teachers do not have to be good at disciples." Instructed in the original intention of progressing with students and classmates. Secondly, teachers need to have a wealth of basic knowledge accumulation, and be well versed in the connection between various courses and the ins and outs of each knowledge point. Third, teachers must have a keen insight and be able to find problems. Because valuable research results all originate from a creative question. Fourth, have a clearer understanding of students' knowledge base. Only by knowing the basic situation of the students can we design the problems more objectively and organize class discussions. Finally, teachers must have a sense of knowledge renewal that advances with the times. The limited experimental data in the course Accounting Information System is not enough to satisfy students' curiosity. The discovery of new materials from the currently operating enterprise examples (such as the computer processing of group financial consolidated statements, the realization of financial sharing Models, etc.) can enable the theoretical knowledge of the course to find a more specific foothold in real life, Students' interest in learning will be stronger. DESIGN AND RESULTS ANALYSIS In this study, the online questionnaire was adopted during the computer process, and the self-assessment survey (pre-and post-test) and learning satisfaction survey made by students were set as the measurement indicators of learning effectiveness; the curriculum and Advances in Social Science, Education and Humanities Research, volume 493 motivation questionnaire were set as the measurement indicators of learning motivation. Among them, the survey of motivation amount refers to the Motivated Strategies for Learning Questionnaire (MSLQ) of the University of Michigan, and the specific evaluation results are statistically analyzed. The setting of specific topics is shown in " Table I". Learning Satisfaction 1. The problem has practical application 2. I can fully understand the issues raised 3. 
3. After finishing the homework, my ability has improved.
4. I think the difficulty of the topics is set appropriately.
5. I think the teacher's explanation is clear and vivid.
6. I think the course content is arranged in the right order.
7. I think the knowledge learned can be easily applied.
8. I can easily apply the learning content to my future work.
9. I can see the focus of the theoretical explanation reflected in the hands-on practice.
10. The goals and planning of the task are in line with my ability.
For the analysis of teaching evaluation, the returned questionnaires were summarized and imported into Origin 8 for analysis. First, Cronbach's α test was applied to the questionnaire items; the results showed that the questionnaire reached a high level of reliability. Pearson correlation coefficients were then used to measure the strength of association between learning motivation and learning satisfaction, with learning motivation divided into three aspects: intrinsic goal orientation, learning control beliefs, and self-efficacy. The results show that, at the 1% significance level, intrinsic goal orientation, learning control beliefs, and self-efficacy are all positively correlated with learning satisfaction. Therefore, under problem-oriented curriculum design, the stronger a student's intrinsic goal orientation, learning control beliefs, and self-efficacy, the higher their satisfaction with the learning outcome. To determine whether the difference between the pre-test and post-test means is significant, a t-test was used to compare the paired pre- and post-test data. At the 5% significance level the p-value is significant, the pre- and post-test scores differ significantly, and the post-test scores are higher than the pre-test scores. The t-test results therefore indicate that students generally consider the problem-oriented curriculum design to deliver better learning effectiveness.
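The analysis pipeline described above (a reliability check, correlation analysis, and a paired-samples t-test on pre-/post-test scores) can be reproduced with standard statistical tooling. The following is a minimal Python sketch on hypothetical questionnaire data; the sample sizes, variable names, and data values are illustrative assumptions, not the study's actual dataset.

```python
# Minimal sketch of the analysis described above (hypothetical data, not the study's dataset).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 35 * 6  # six groups of 35, as an illustrative sample size

# Hypothetical 10-item learning-satisfaction responses on a 1-6 Likert scale.
satisfaction_items = rng.integers(3, 7, size=(n_students, 10)).astype(float)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print("Cronbach's alpha:", round(cronbach_alpha(satisfaction_items), 3))

# Hypothetical motivation sub-scale scores and an overall satisfaction score.
satisfaction = satisfaction_items.mean(axis=1)
intrinsic_goal = satisfaction + rng.normal(0, 0.5, n_students)
control_beliefs = satisfaction + rng.normal(0, 0.6, n_students)
self_efficacy = satisfaction + rng.normal(0, 0.7, n_students)

for name, scores in [("intrinsic goal", intrinsic_goal),
                     ("control beliefs", control_beliefs),
                     ("self-efficacy", self_efficacy)]:
    r, p = stats.pearsonr(scores, satisfaction)
    print(f"Pearson r ({name} vs satisfaction): {r:.2f}, p = {p:.4f}")

# Paired-samples t-test on hypothetical pre- and post-test self-assessment scores.
pre = rng.normal(70, 8, n_students)
post = pre + rng.normal(5, 6, n_students)  # post-test assumed higher on average
t, p = stats.ttest_rel(post, pre)
print(f"Paired t-test: t = {t:.2f}, p = {p:.4f}")
```

This sketch only mirrors the sequence of tests reported in the paper; it does not reproduce the reported coefficients, since the underlying survey data are not available.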
VI. CONCLUSION
The problem-oriented teaching method remains somewhat controversial in the teaching field, but through teaching practice we have found that problem-oriented curriculum design can not only stimulate students' learning motivation, but also connect students, during the learning process, with realistic corporate working environments, producing good learning results and the ability to apply what has been learned. Topics that deserve further study in the future are mainly how to discover new material from currently operating enterprises, connect theory with practice, and stimulate students' curiosity. It can be said that the effective use of problem-oriented teaching methods requires teachers to have a rich accumulation of knowledge and keen insight on the one hand, and the active cooperation of students on the other. The application of problem-oriented teaching methods in the Accounting Information System course has a good effect in stimulating learning interest, deepening students' knowledge, and strengthening analytical skills. The questions should focus on the core content of the course; their difficulty should be moderate; and they should follow a sensible logical order and be asked at an appropriate frequency. Investigation and analysis show that introducing problem-oriented teaching methods into the course Accounting Information System is an effective way to enhance students' motivation and learning effectiveness.
2020-12-10T09:06:46.801Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "059225d01141e4dab6fbad0a2e18f442cf533578", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2991/assehr.k.201128.047", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "d8f792d94a8b9cbb5d444ce076282ebc80a764fa", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
144779684
pes2o/s2orc
v3-fos-license
How Parental Encouragement and Evaluation of the English Teacher Reflect Pupils' Attitudes to Learning the English Language This study aimed to identify how pupils evaluate parental encouragement and their English teacher while studying English, a major priority in Albanian schools. In order to discover the attitudes towards, and the evaluation of, the English teacher, the AMTB (Attitude and Motivation Test Battery) was used with a sample of 200 secondary-school pupils in Tirana. The test was administered for two of its scales, parental encouragement and English teacher evaluation, and the data were then processed in SPSS 18. The study is based on quantitative research and reports the statistical values of the respondents' answers. Answers ranged from absolute disagreement to absolute agreement with the statements of each scale, and each of the selected scales had a maximum score of 70. The measurement yielded a score of 60 out of 70 for the English teacher evaluation scale, corresponding to a slightly higher than moderate degree of evaluation of the teacher, and a score of 56 for parental support for learning the language. In other words, this study concludes that pupils in the secondary cycle of studies rated their English teacher highly, while they were only moderately encouraged by their parents to learn the language. General Overview Educational reforms in Albania have significantly affected the curricula of the pre-university school system. Major national strategies promote the value and importance of the English language in Albanian schools as a necessity for the integration and cultural development of pupils. The National Education Strategy 2005-2015 devotes a special section to the priority treatment of the English language. Among other things, the Albanian state began addressing English as a priority so that the language would be mastered better. For this purpose it was decided that the teaching of English start from the third grade of primary school (MASH, 2005, Case Approval Program English Language, Primary Cycle, CF, p.3), and that an English test be taken at the Matura with a high coefficient of points for university admission. The market of English-language publishing has also expanded, and English departments have been established in universities (MASH, SCAP, 2007, p.10). The pace of these changes points towards a new approach of aligning with analogous programmes and school systems in Europe; it is worth mentioning the Bologna Declaration on Higher Education and the introduction of a new full ICT programme.
The syllabus for English in the third grade of primary school states in its introduction: "The integration of our country into the family of Euro-Atlantic structures and the preparation of citizens of a united Europe necessarily require an effective understanding and mastery of a foreign language, which should begin to be learned at an early age. At the primary school level, which is the foundation for everything that society holds valuable to learn, there is also space for learning English as a foreign language. Introducing small children to English makes a positive contribution to the overall formation of the student and his identity, and extends his ability to distinguish differences between cultures" (MASH, 2005, Case Approval Program English Language, Primary Cycle, CF, p.3). Problem of Research The school programmes require that young children start learning English in the pre-school system and continue through elementary school up to high school, when they take the Matura exam; the English test is optional, with a very high coefficient for university admission (1.3-1.6) (MASH, 2010, Udhëzues për zhvillimin e Kurrikulës së re të Gjimnazit, f.11). Given the ongoing reforms of the Albanian education system concerning the English language taught in schools, in terms of curricula, programmes, tests, teacher training, certification of English-language knowledge against international standards (now required for the issuing of post-university degrees), and the continuous efforts to make the English test obligatory for the A levels, this study aims to find out how pupils evaluate their English teachers and how much encouragement this age group in secondary education receives from their parents. Research Questions: To what degree are students in Albania encouraged by their parents to learn the English language? How highly do students in secondary schools evaluate their English teacher? Literature Review The English language is now a must for students' academic achievement, for example for university degrees in various higher-education programmes, which oblige students to take an international exam certifying their knowledge of English. The affective dimension of language learning has received great attention in recent research. Studies with high-school pupils in the late 1990s revealed a strong correlation between the affective filter and achievement in language learning (Gardner, Tremblay and Masgoret, 1997: 344). The big challenge in an era of educational development is to preserve the balance between the new and the old, to bridge the gap between them, and to adapt the new approaches to the environments in which they will be applied. In university education this is handled with care by educational policy-making institutions such as the Institute of Curriculum Development (ICD) and the Regional Education Directorates (DARs) when designing programmes and curricula, and there has been significant awareness-raising training for the working groups that deal with curricula and school programmes.
In the case of English-language programmes for the pre-university cycle, there have been significant differences both in content and in form. The aim is to adapt educational strategies locally, at both sectoral and central level, and, most importantly for English-language programmes, to base their design on European language standards. Specifically, English-language programmes in Albanian schools are closely aligned with the European Framework of Reference for Languages and the Foreign Language Portfolio. The new educational programmes also provide for the preparation of teachers, who must reach a certain level not only of education but also of vocational and cultural competence, aiming at: "Increasing the accountability of teachers for knowing and implementing education legislation, and in particular the latest innovations of the educational reform; increasing the skills and professional competence of the teaching staff, with a direct impact on enhancing the effectiveness of the learning process towards successful teaching; increasing the accountability of teachers for the necessity of knowing basic concepts and scientific cases and implementing them in practice, in accordance with the age of the students and the class where they teach; and practical implementation through concrete demonstration of the skills acquired, in particular by assessing achievement through testing" (IED, 2011, program of professional development for getting the qualification rates for English, p.2). Harter (1981) surveyed over 3000 pupils in Connecticut, New York, Colorado, and California with a five-scale test measuring challenge, curiosity, and professionalism, criteria which he defined as a "tendency towards challenging tasks rather than easy ones, curiosity and interest versus teacher approval, independent attempts at professionalism versus dependence on the teacher, independent judgment versus reliance on the teacher's opinion, and external versus internal criteria for success or failure" (p. 300). He found that students' responses on the scales of challenge, curiosity and professionalism varied with age. Masgoret et al. (2001), in a test applied to measure several factors of language learning, did not find "distinct indicators for the grouping which reflected integration, students' attitudes, motivation or anxiety for the language" (p. 291). Nikolov (1999), in his study in Pécs, Hungary, reported that the 11-14 age group stated that they studied mainly for utilitarian reasons, compared with the answers of younger children. The Longman handbook The Practice of English Language Teaching states that "there are two types of motivation, intrinsic and extrinsic motivation to learn".
Methodology This study researched the attitudes of 200 pupils towards language learning in a non-public secondary school in Tirana, the capital of Albania. This site was selected because of the migration of the past decade, during which much of the population moved to the capital, bringing an increase in population and a greater concentration of young people in the pre-university system. The study also focused on the degree to which pupils evaluated their teacher of English. To carry out the research, two scales of the AMTB (Attitude Motivation Test Battery) were adapted for the Albanian context. The test was translated into Albanian by two experienced translators and reviewed by a peer group of pedagogy and psychology test raters from the University of Tirana. After it proved reliable with a coefficient of 0.7 in a sample of 100 pupils, it was administered to the whole sample, which was selected for convenience reasons. Test completion was monitored by two teachers from another secondary school, so that the pupils would not be influenced; the monitors had previously been trained to fulfil the requirements of the test manual for effective test use. Procedures The study is fully based on a quantitative research method, relying on statistical data processing and analysis in SPSS 18. The test was given to six groups of students of the tenth, eleventh and twelfth grades, each group consisting of 35 pupils. It was applied to 210 students and took two months to administer. This was a probability sample, drawn from the name records in the school database. For ethical reasons the scope of the study was introduced to the students, and they were free to participate or not. The pupils were willing to complete the test for the scales of teacher evaluation and attitudes to language learning. After test completion, the respondents' answers were carefully processed in the statistical program over two weeks. Data Analysis and Results for the Scale 'Evaluation of the Teacher of English' The maximum score for each scale was 70, as instructed in the test manual, so pupils' scores could range from a null value (0) up to the most extreme value (70). To determine the values of the answers, frequencies were computed, together with descriptive statistics for the means and the modes of the most frequent answers. The response alternatives ranged over six degrees, from strong disagreement (coded Absolutely Disagree) to Absolutely Agree. For the evaluation of the teacher, the answers cluster mostly around code 5, Moderately Agree. The calculated mean value is 5.1, which corresponds to a score of 60 on this scale. The following graph shows the statements of the scale and the respective values gathered from the respondents' answers for each item; the values, compared with the maximum score of the scale (70), represent the intensity of the pupils' feelings. Data Analysis and Results for Parental Encouragement The response alternatives ranged over three degrees, from Slightly Disagree (coded 3) to Moderately Agree (coded 5).
For parental encouragement, the answers again cluster mostly around code 5, Moderately Agree. This means that most of the pupils, 179 of them, answered that they moderately agree with the statements about being encouraged by their parents to learn the language. The calculated mean value is 4.73, which corresponds to a score of 56 on this scale. The statements and the respective values for each item of this scale are shown in the graph below, which illustrates the intensity of the pupils' feelings about their parents' support for learning the language; the individual values are compared with the top value of the scale (70) (see Figure 2). Conclusions This study found that pupils in a non-public secondary school in Albania were moderately encouraged by their parents to learn the English language and evaluated their teacher of English at a moderately high degree. After data processing, the average score for parents' encouragement to learn the language was 56 out of 70, which indicates that parents support and encourage their children to learn English to a moderate degree. The two highest values on this scale were for the items about encouraging the children to study the language more and about parents believing that English would help their children towards a better future (65). The lowest value was for parents showing only little interest in the activities their children did in the English class. Regarding the evaluation of the teacher of English, respondents showed a higher regard than for parental support, evaluating their teacher at a moderately high degree with a score of 60 out of 70. They reported being considerably inspired by the teacher and looking forward to going to the English class; however, they showed little belief that their teacher of English was the best one. These results indicate that the English language is very important in the secondary-school curricula and that pupils have a high interest in learning the language. They expect better support from their parents, who should encourage them more. The teachers of English are also crucial to language learning, mainly because of the new programmes offered by the Ministry of Education and the continuous changes in the Albanian educational system. The study suggests that teachers' teaching behaviour has changed progressively and positively, given how highly it is evaluated by the target group, and that teachers are more aware of the importance of their subject in the Albanian school system. Pupils attending secondary schools also appear to have a higher awareness of the importance of English, since the expectations of those concerned, such as parents and teachers, are quite high.
Figure 1: Evaluation of the teacher of English.
Table 5: Data for the evaluation of the teacher of English.
Table 2: Data for the frequencies of the scale 'Parental encouragement'.
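The scale scoring described above (item frequencies, modes, and mean response codes reported against a 70-point maximum) is straightforward to reproduce. The following Python sketch works through one plausible version of the computation on hypothetical Likert responses; the number of items, the coding, the rescaling to the 70-point total, and the data are all illustrative assumptions rather than the study's actual SPSS dataset.

```python
# Minimal sketch of the Likert-scale scoring described above (hypothetical data).
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
n_pupils = 200          # sample size reported in the study
n_items = 10            # assumed number of items per AMTB scale (illustrative)
max_scale_score = 70    # maximum score per scale, as in the test manual

# Hypothetical teacher-evaluation responses coded 1..6
# (Absolutely Disagree .. Absolutely Agree).
teacher_eval = rng.integers(3, 7, size=(n_pupils, n_items))

item_means = teacher_eval.mean(axis=0)            # per-item means
overall_mean = teacher_eval.mean()                # overall mean response code
modes = [Counter(col).most_common(1)[0][0]        # most frequent answer per item
         for col in teacher_eval.T]

# One plausible rescaling of the mean response code (out of 6) to the 70-point total.
scale_score = overall_mean / 6 * max_scale_score

print("Per-item means:", np.round(item_means, 2))
print("Per-item modes:", modes)
print(f"Overall mean response: {overall_mean:.2f}")
print(f"Scale score (out of {max_scale_score}): {scale_score:.1f}")
```

Under this rescaling, a mean response of 5.1 on a six-point scale would map to roughly 60 out of 70, which is consistent with the figures reported in the paper, but the exact scoring rule used in the original analysis is an assumption here.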
2017-09-07T14:08:44.104Z
2014-08-05T00:00:00.000
{ "year": 2014, "sha1": "7e9e03244fdb29591d69869d320e7f170e42d8e6", "oa_license": "CCBYNC", "oa_url": "https://www.richtmann.org/journal/index.php/jesr/article/download/3506/3447", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "7e9e03244fdb29591d69869d320e7f170e42d8e6", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
233176223
pes2o/s2orc
v3-fos-license
Everlasting UC Commitments from Fully Malicious PUFs Everlasting security models the setting where hardness assumptions hold during the execution of a protocol but may get broken in the future. Due to the strength of this adversarial model, achieving any meaningful security guarantees for composable protocols is impossible without relying on hardware assumptions (Müller-Quade and Unruh, JoC’10). For this reason, a rich line of research has tried to leverage physical assumptions to construct well-known everlasting cryptographic primitives, such as commitment schemes. The only known everlastingly UC secure commitment scheme, due to Müller-Quade and Unruh (JoC’10), assumes honestly generated hardware tokens. The authors leave the possibility of constructing everlastingly UC secure commitments from malicious hardware tokens as an open problem. Goyal et al. (Crypto’10) constructs unconditionally UC-secure commitments and secure computation from malicious hardware tokens, with the caveat that the honest tokens must encapsulate other tokens. This extra restriction rules out interesting classes of hardware tokens, such as physically uncloneable functions (PUFs). In this work, we present the first construction of an everlastingly UC-secure commitment scheme in the fully malicious token model without requiring honest token encapsulation. Our scheme assumes the existence of PUFs and is secure in the common reference string model. We also show that our results are tight by giving an impossibility proof for everlasting UC-secure computation from non-erasable tokens (such as PUFs), even with trusted setup. Introduction The security of almost all cryptographic schemes relies on certain hardness assumptions. These assumptions are believed to hold right now, and researchers are even fairly certain that they will not be broken in the near future. It is widely believed, for example, that the computational Diffie-Hellmann and the RSA assumptions hold in certain groups. But what about the security of these assumptions in 10, 20, or 100 years? Can we give any formal security guarantees for current constructions that remain valid in the distant future? This is certainly possible for information-theoretic schemes and properties. However, given that many interesting functionalities are impossible to realize in an information theoretic sense, this leaves us in a very unsatisfactory situation. To overcome this problem, Müller-Quade and Unruh suggested a novel security notion widely known as everlasting universal composability security [26] (building on the work of Rabin on virtual satellites [34]). The basic idea of this security notion is to bound the running time of the attacker only during the protocol execution. After the protocol run is over, the attacker may run in super-polynomial time. This models the intuition that computational assumptions are believed to hold right now, and therefore, during the protocol run. However, at some point in the future, known computational assumptions may no longer hold. Everlasting UC security 1 refers to a composable protocol that remains secure in these settings. The everlasting UC security model has also been considered for quantum protocols [35]. Everlasting UC security is clearly a very desirable security notion, and since it is strictly weaker than statistical UC security, one may hope that it is easier to achieve. 
However, Müller-Quade and Unruh showed that everlasting UC commitments cannot be realized, not even in the common reference string (CRS), or with a public-key infrastructure (PKI) [27]. Everlasting UC Security From Hardware Assumptions The stark impossibility result of Müller-Quade and Unruh raises the question whether the notion is achievable at all. The authors answered this question affirmatively by presenting two constructions based on hardware assumptions. The first construction is based on a tailored-made hardware token that embeds a random oracle. The second construction relies on signature cards [27]. However, both constructions assume that the hardware token is honestly generated. The authors left open the question whether it is possible to achieve everlasting security in the setting of maliciously generated hardware tokens. Goyal et al. [18] constructs unconditionally UC-secure commitments and secure computation (as opposed to everlasting) from malicious hardware tokens. However, the construction of [18] requires honest tokens to encapsulate other tokens, ruling out some classes of hardware tokens such as physically uncloneable functions (PUFs). Physically Uncloneable Functions (PUFs) In this work, we present an everlastingly UC secure commitment scheme assuming the existence of PUFs. Loosely speaking, PUFs are physical objects that can be queried by mapping an input to a specific stimulus and mapping an observable behaviour to an output set. The crucial properties for a PUF are (i) that it should be hard (if not impossible) to clone and (ii) that it should be hard to predict the output on any input without first querying the PUF on a close enough input. Our Contributions We initiate the study of everlasting UC security in the setting of maliciously generated hardware tokens, such as PUFs. Our model extends the frameworks of [4,8] by introducing fully malicious hardware tokens, whose state is not a-priori bounded, the generator of a token can install arbitrary code inside of it, and it can encapsulate (and decapsulate) other (possibly fully malicious) tokens within itself. Our contributions can be summarized as follows: • Aiming at bridging the gap between hardware tokens and PUFs, we propose a unified ideal functionality for fully malicious tokens that is general enough to capture hardware devices with arbitrary functionalities such as PUFs and signature cards. • We put forward a novel definition for unpredictability of PUFs. We argue that the formalization from prior works [3,24,30] is not sufficient for our setting because it does not exclude adversaries that may indeed predict the PUF responses for values never queried to the PUF. We demonstrate this fact in Sect. 4.1.1 by giving a concrete counterexample. • We show with an impossibility result that one cannot hope to achieve an everlastingly secure oblivious transfer (OT) (therefore, secure computation) in the malicious token setting by using non-erasable (honestly generated) tokens; non-erasable tokens can keep a state but are not allowed to erase previous states. • Finally, we present an everlastingly UC secure commitment scheme in the fully malicious token model. Our protocol assumes the existence of PUFs and allows for the PUF to be reused for polynomially many runs of the protocol. Our cryptographic building blocks can be instantiated from standard computational assumptions, such as the learning with errors (LWE) problem. 
Related Work Everlasting and Memory Bound Adversaries Everlasting security was first considered in the setting of memory-bounded adversaries [6,10], and later extended to the UC setting by Müller-Quade and Unruh [27]. Rabin [34] suggested a construction using distributed servers of randomness, called virtual satellites, to achieve everlasting security. The resulting scheme remains secure if the attacker that accesses the communication between the parties and the distributed servers is polynomially bounded during the key exchange. Dziembowski and Maurer [15] showed that protocols in the bounded storage model do not necessarily stay secure when composed with other protocols. Damgård [11] presented a statistical zero-knowledge protocol secure under concurrent composition. Although counterintuitive, statistical zero-knowledge may lose its everlasting property under composition. This was illustrated in [27] for statistically hiding UC commitments [16] which were shown to leak secrets under (even sequential) composition; they are composable and statistically hiding, but not at the same time (i.e. the composability only holds for the computational hiding property, intuitively). Technically, the reason for this is that the common reference string used by the simulator is not statistically indistinguishable. For the same reason, the protocol of Damgård [11] does not directly translate into an everlasting commitment scheme: for this specific case, the gap consists in extracting the witness from adversarial proofs using a common reference string that is statistically close to the honestly sampled one. (Malicious) Hardware Tokens A model proposed in [22] allows parties to build hardware tokens to compute functions of their choice, such that an adversary, given a hardware token T for a function F, can only observe the input and output behaviour of T . The motivation is that the existence of a tamper-proof hardware can be viewed as a physical assumption, rather than a trust assumption. The authors show how to implement UC-secure two-party computation using stateful tokens, under the DDH assumption. Shortly after, Moran and Segev [28] showed that in the hardware token model of [22] even unconditionally secure UC commitments are possible using stateful tokens. This result was later extended by [19] for unconditionally UC-secure computation, also using stateful tokens. One limitation of the model of [22] is the assumption that all parties (including the adversary) know the code running inside the hardware token it produces; this assumption gives extra power to the simulator, allowing it to rewind the hardware token in the proofs of [19,22,28]. However, this assumption rules out real scenarios where the adversary can create a new hardware token that simply "encapsulates" a hardware token it receives from some party and that the adversary does not know the code running inside of it. In this direction, Chandran et al. [8] extended the model of [22] to allow for the hardware tokens produced by the adversary to be stateful, to encapsulate other tokens inside of it and to be passed on to other parties. They constructed a computationally secure UC commitment protocol without setup, assuming the existence of stateless hardware tokens (signature cards). Unfortunately, the construction of [8] cannot fulfil the notion of unconditional (or everlasting) security since it requires perfectly binding, and therefore only computationally hiding, commitments as a building block. Goyal et al. 
[18], following the model of [8], prove that statistically secure OT from stateless tokens is possible if (honest) tokens can encapsulate other tokens. However, honest token encapsulation is highly undesirable in practice, and in particular not even compatible with PUFs, as they are physical objects. Interestingly, the authors also show that statistically secure OT (and therefore secure computation) is impossible to achieve when one considers only stateless tokens that cannot be encapsulated. To circumvent this impossibility result, Döttling et al. [13,14] studied the feasibility of secure computation in the stateful token model, where the adversary is not allowed to rewind the token arbitrarily. Although this model has practical significance, it does not cover certain classes of hardware tokens, such as PUFs. Later, a rich line of research investigated the round complexity of secure computation using stateless hardware tokens [20,25] in the computational setting. Unfortunately, the security guarantees of these constructions do not carry over beyond the computational setting. The work of [17] made partial progress on this question, presenting a commitment scheme with unconditional security based on PUFs. However, as shown by [4] in the form of an attack, the construction of [17] completely breaks when the adversary is allowed to generate encapsulated PUFs. Dachman-Soled, Fleischhacker, Katz, Lysyanskaya, and Schröder [12] investigated the possibility of secure two-party computation based on malicious PUFs. Badrinarayanan, Khurana, Ostrovsky, and Visconti [4] introduced a model where the adversary is allowed to generate malicious PUFs that encapsulate other PUFs inside of them; the outer PUF has oracle access to all its inner PUFs. The security of their scheme assumes a bound on the memory of adversarially generated PUFs. In Table 1, we show a comparison of UC schemes based on malicious hardware tokens (including PUFs). Technical Overview In the following, we give an informal overview of our everlasting UC commitment scheme construction, and we introduce the main ideas behind our proof strategy. Besides PUFs, our protocol assumes the existence of the following cryptographic building blocks: • A non-interactive statistically hiding (NI-SH) UC-secure commitment (Com). • A strong randomness extractor H. The message flow of our protocol is shown in Fig. 1. The protocol is executed by a committer (Alice) and a recipient (Bob). We assume that both parties have access to a uniformly sampled common reference string that contains a random image of a one-way permutation y = f(x). Protocol Overview At the beginning of a commitment execution, Bob prepares a series of random string pairs (p_i^0, p_i^1) and queries them to the PUF to obtain the corresponding pairs (q_i^0, q_i^1); the PUF is then transferred to Alice. Here, we make the simplifying assumption that a PUF is used only for a single run of the commitment. Note, however, that one can reuse the same PUF by having Bob compute as many tuples (p_i^0, p_i^1) as needed and query the PUF on all of these values before passing it to Alice. Alice samples a random string k ∈ {0,1}^{ℓ(λ)} and engages Bob in many parallel OT instances, where Alice receives p_i^{k_i}, and where k_i denotes the i-th bit of k. Alice queries the strings p_i^{k_i} to the PUF and sends to Bob: • a set of NI-SH commitments (com_1, ..., com_{ℓ(λ)}) to the outputs of the PUF, • an (NI-SH) commitment com to m, and • the string ω := H(seed, k) ⊕ (m‖decom).
Alice then produces a SWIAoK that certifies that either (i) all of her messages were honestly generated, or (ii) she knows a pre-image x such that f(x) = y. The idea here is that, if an algorithm recovers k, then it can also recompute H(seed, k) and extract the message m. Note that the value of k is "encoded" in Alice's OT choice bits for the p_i^{k_i}, and those values are queried by Alice to the PUF. Therefore, an extractor that sees Alice's queries can easily recover the message m. What is not clear at this point is how to force Alice to query the PUF on the correct p_i^{k_i} and not on some other random string. For this reason, we introduce an additional authentication step where Bob publishes all the pairs (q_i^0, q_i^1). In the opening phase, Alice proves to Bob (with a SWIAoK) that the vector of commitments sent in the previous interaction indeed opens to q_1^{k_1}, ..., q_{ℓ(λ)}^{k_{ℓ(λ)}}, up to small errors (or that she knows the pre-image of y). Intuitively, Alice cannot convince Bob without querying all the p_i^{k_i}, since she would need to guess some q_i^{k_i} without knowing the pre-image p_i^{k_i} (due to the security of the OT). In the proof, the extractor can recover k by just looking at the queries Alice made to the PUF. To see why the commitment is hiding, it is sufficient to observe that k hides the message in an information-theoretic sense, under the assumption that the OT and SWIAoK protocols are secure. One subtlety that we need to address is that some bits of k might be revealed by Alice's aborts. For this reason, we one-time-pad the message m with H(seed, k): the strong randomness extractor guarantees that the value H(seed, k) is still uniformly distributed even if some bits of k are leaked.
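To make the commit computation and the extraction idea above more concrete, the following Python sketch walks through them with toy stand-ins: the PUF is modelled as a random lookup table with a query log, the randomness extractor H is replaced by a hash-based mask, and the OT, the commitments, and the SWIAoK are omitted entirely. All names, parameters, and primitives here are illustrative assumptions for exposition, not the protocol's actual building blocks.

```python
# Toy sketch of the commit computation and of the extractor's view (illustrative only).
import hashlib
import os

ELL = 16  # toy value of ℓ(λ); the real protocol uses a polynomial length

def H(seed: bytes, k: bytes) -> bytes:
    """Stand-in for the strong randomness extractor H(seed, k)."""
    return hashlib.sha256(seed + k).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class ToyPUF:
    """A PUF modelled as a random function that logs its queries (the extractor's view)."""
    def __init__(self):
        self.table, self.queries = {}, []
    def query(self, challenge: bytes) -> bytes:
        self.queries.append(challenge)
        if challenge not in self.table:
            self.table[challenge] = os.urandom(32)
        return self.table[challenge]

# Bob prepares pairs (p_i^0, p_i^1) and records the PUF responses (q_i^0, q_i^1).
puf = ToyPUF()
pairs = [(os.urandom(16), os.urandom(16)) for _ in range(ELL)]
_ = [(puf.query(p0), puf.query(p1)) for p0, p1 in pairs]

# Alice picks k, obtains p_i^{k_i} (via OT in the real protocol) and queries the PUF.
k = os.urandom(ELL // 8)
k_bits = [(k[i // 8] >> (i % 8)) & 1 for i in range(ELL)]
puf.queries.clear()                       # the extractor only sees Alice's queries
responses = [puf.query(pairs[i][b]) for i, b in enumerate(k_bits)]

seed = os.urandom(16)
message = b"padded message+decom information"   # stand-in for m||decom (32 bytes)
omega = xor(H(seed, k), message)                 # toy version of ω := H(seed, k) ⊕ (m||decom)

# Extractor: reconstruct k from Alice's PUF queries, then unmask ω.
recovered_bits = [0 if p0 in puf.queries else 1 for p0, p1 in pairs]
recovered_k = bytes(
    sum(recovered_bits[i * 8 + j] << j for j in range(8)) for i in range(ELL // 8)
)
assert recovered_k == k
assert xor(H(seed, recovered_k), omega) == message
```

In the real protocol, of course, the p_i^{k_i} are obtained through statistically receiver-private OT, the PUF responses are committed to with the NI-SH commitment, and consistency is enforced with the SWIAoK; the sketch only illustrates why seeing Alice's PUF queries suffices to recover k, and hence the committed message.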
Proof Sketch (Hiding) We show that our commitment scheme is hiding through a series of hybrids, where in the last step Alice can equivocate the commitment to any message of her choice. Note that every step is information-theoretic.
H_1: Alice uses x, the pre-image of y, as a witness to compute the SWIAoK. Since the AoK is statistically witness indistinguishable, this hybrid is statistically close to the original protocol.
H_2: Alice uses the simulator for the OT protocols and extracts both values (p_i^0, p_i^1). Since the OT is statistically receiver-private, this hybrid is statistically close to the previous one. In the full proof, this is shown via a hybrid argument.
H_3: Alice computes com_i as a commitment to a random string. A hybrid argument can be used to bound the distance between this hybrid and the previous one by the statistically hiding property of the commitment scheme.
H_4: Alice chooses the value of k for all sessions upfront. Here, the change is only syntactical.
H_5: Alice no longer queries the PUF token but instead checks that the output pairs (q_i^0, q_i^1) sent by Bob correspond to the correct outputs of the PUF on input (p_i^0, p_i^1). Note that the state of the PUF is fixed once the PUF is sent to Alice, and therefore the consistency of all pairs (q_i^0, q_i^1) is well defined. The relation is not efficiently computable by Alice, but for information-theoretic security the fact that it is defined is enough. Since Alice retains ownership of the PUF, this hybrid is identical to the previous one.
H_6: Alice samples ω uniformly at random. Note that in the previous hybrid the leakage of k is bounded by whether Alice aborts or not. Since Alice aborts at most once and since there are at most polynomially many sessions, we can bound the leakage of k to O(log λ) bits. Leveraging the randomness extractor H, we can argue that H_5 and H_6 are statistically indistinguishable.
H_7: Alice opens the commitment to a message of her choice. Note that in H_6 the original message m is information-theoretically hidden.
Proof Sketch (Binding) To argue that the scheme is binding, we define the following extractor: the algorithm examines the list of queries made by Alice to the PUF and, for each i, it checks whether some query q is equal to p_i^b (for b ∈ {0, 1}); if this is the case, then it sets k_i = b. Once the full k is reconstructed, the extractor computes ω ⊕ H(seed, k) = m‖decom and outputs m. To show that the extractor always succeeds, we need to argue that: 1. The value of k is always well defined: if some query q = p_i^0 and some other query q' = p_i^1, then the bit k_i is not well defined. However, this means that Alice learned both p_i^0 and p_i^1 from the OT protocol, which is computationally infeasible. 2. The string k is always fully reconstructed: if no query q is equal to p_i^0 or p_i^1, then the i-th bit of k is not defined. This implies that Alice never queried p_i^0 or p_i^1 to the PUF. However, note that Alice must produce a commitment com_i to either PUF(p_i^0) or PUF(p_i^1) and prove consistency. This is clearly not possible without querying the PUF, unless Alice breaks the binding of the commitment or proves a false statement in the SWIAoK. To establish the latter, we also need to rule out the case where Alice computes the SWIAoK using the knowledge of x, the pre-image of y. In the full proof, we show this via a reduction against the one-wayness of the one-way permutation f. We are now in the position to show that the extracted message m is identical to the one that Alice decommits to. Recall that Alice proves that she committed to the values PUF(p_i^{k_i}) such that ω ⊕ H(seed, k) = m‖decom. It follows that, if k is uniquely defined, then the extractor always returns the correct m, unless Alice can break the soundness of the SWIAoK (or invert the one-way permutation). By the above conditions, this happens with all but negligible probability. On the Common Reference String Our protocol needs to assume the existence of a common reference string in order to equivocate commitments in the security proof: having access to the generation of the crs, the simulator can craft proofs for false statements, simulate the OT, and extract the commitments. Note that the simulation has to be "straight-line", since we cannot rewind the adversary in the UC framework. A previous work [29] circumvented this issue by leveraging some computationally hard problem. Unfortunately, this class of techniques does not seem to apply to the everlasting setting, since the environment can distinguish a simulated trace once it becomes unbounded. The work of [17] builds unconditionally secure commitments from PUFs without a CRS, but as shown by [4], the construction breaks down in our model where the adversary is allowed to generate encapsulated PUFs. It is not clear if the techniques of [17] can be adapted to our setting. We leave the question of removing the necessity of a common reference string from our protocol as a fascinating open problem. Preliminaries In the following, we introduce the notation and the building blocks necessary for our results.
Notations An algorithm A is probabilistic polynomial time (PPT) if A is randomized and for any input x, r ∈ {0, 1} * the computation of A(x; r ) terminates in at most poly(|x|) steps. We denote with λ ∈ N the security parameter. A function negl is negligible, if for any positive polynomial p and sufficiently large k, negl(k) < 1/ p(k). A relation R ∈ {0, 1} * × {0, 1} * is an N P relation if there is a polynomial-time algorithm that decides (x, w) ∈ R. If (x, w) ∈ R, then we call x the statement and x witness for x. We denote by hd(x, x ) the Hamming distance between two bitstrings x and x . Given two ensembles X = {X λ } λ∈N and Y = {Y λ } λ∈N , we write X ≈ Y to denote that the two ensembles are statistically indistinguishable, and X ≈ c Y to denote that they are computationally indistinguishable. We denote the set {1, . . . , n} by [n]. We recall the definition of statistical distance. Definition 1. (Statistical Distance) Let X and Y be two random variables over a finite set U. The statistical distance between X and Y is defined as Cryptographic Building Blocks One Way Function A one-way function is a function that is easy to compute and hard to invert. It is the building block of almost all known cryptographic primitives. where the probability is taken over the random choice of x. Moreover, we say that f is a one-way permutation whenever the domain and range of f are of the same size. For convenience, we assume that the verification is deterministic and canonical (i.e. it takes as input the random coins used in the commitment phase and checks whether the commitment was correctly computed). Non-interactive Commitment Scheme We require commitments to be (stand-alone) statistically hiding. Let A be a nonuniform adversary against C and define its hiding-advantage as Furthermore, we require the commitments to be UC-secure: roughly speaking, an equivocator (with the help of a trapdoor in the CRS) can open the commitments arbitrarily. On the other hand, we require the existence of a computationally indistinguishable CRS (in extraction mode) where commitments are statistically binding and can be efficiently extracted via the knowledge of a trapdoor. Such commitments can be constructed in the CRS model from a variety of assumptions [32], including the learning with errors (LWE) problem. For a precise functionality, we refer the reader to Sect. 3.3. Oblivious Transfer A 2 1 -Oblivious transfer (OT) is a protocol executed between two parties called sender S (i.e. Alice) with input bits (s 0 , s 1 ) and receiver R (i.e. Bob) with input bit b. Bob wishes to retrieve s b from Alice in such a way that Alice does not learn anything about Bob's choice b and Bob learns nothing about Alice's remaining input s 1−b . In this work, we require a 2-round protocol (Sender OT , Receiver OT ) secure in the CRS model, which satisfies (stand-alone) statistical receiver privacy. We define the sender Alice's advantage of breaking the security of Bob to be Definition 4. (Statistical Receiver Privacy) (Sender OT , Receiver OT ) is statistically receiver-private if the advantage function Adv OT S is a negligible function for all unbounded adversaries A. In addition, we require our OT to be UC-secure: for a well-formed CRS, there exists an efficient equivocator that can (non-interactively) recover both messages of the sender. 
Furthermore, there exists an alternative CRS distribution (which is computationally indistinguishable from the original one) and an efficient non-interactive extractor that is able to uniquely recover the message of the receiver using the knowledge of a trapdoor. Such 2-round OT can be constructed from a variety of assumptions [31], including LWE [33]. For a precise description of the ideal functionality, we refer the reader to Sect. 3.3. Statistical Witness-Indistinguishable Argument of Knowledge (SWIAoK) A witnessindistinguishable argument is a proof system for languages in N P that does not leak any information about which witness the prover used, not even to a malicious verifier. If the prover is a PPT algorithm, then we call such a system an argument system, and if it is unbounded, we call it a proof system. For witness-indistinguishable arguments of knowledge, we formally introduce the following notation to represent interactive executions between algorithms P and V. By P(y), V(z) (x), we denote the view (i.e. inputs, internal coin tosses, incoming messages) of V when interacting with P on common input x, when P has auxiliary input y and V has auxiliary input z. Some of the following definitions are based on [29]. Definition 5. (Witness Relation) A witness relation for a N P language L is a binary relation R that is polynomially bounded, polynomial time recognizable, and characterizes L by L = {x : ∃w s.t. (x, w) ∈ R}. We say that w is a witness for x ∈ L if (x, w) ∈ R. Definition 6. (Interactive Argument System) A two-party game P, V is called an Interactive Argument System for a language L if P, V are PPT algorithms and the following two conditions hold: • Soundness: For every x / ∈ L and every PPT algorithm P * , there exists a negligible function negl(·), such that, Pr P * , V (x) = 1 ≤ negl(|x|). Definition 7. (Witness Indistinguishability) Let L ∈ N P and (P, V) be an interactive argument system for L with perfect completeness. The proof system (P, V) is witness indistinguishable (WI) if for every PPT algorithm V * , and every two sequences {w 1 x } x∈L and {w 2 x } x∈L such that w 1 x , w 2 x ∈ R, the following sequences are witness indistinguishable: Next, we define the notion of extractability for SWIAoKs. Definition 8. (Argument of Knowledge) Let L ∈ N P and (P, V) be an interactive argument system for L with perfect completeness. The proof system (P, V) is an argument of knowledge (AoK) if there exists a PPT algorithm Ext, called the extractor, a polynomial p, and a constant c such that, for every PPT machine P * , every x ∈ L, auxiliary input z, and random coins r , there exists a negligible function negl such that Strong Randomness Extractor A strong randomness extractor is a function that, applied to some input with high min-entropy, returns some uniformly distributed element in the range. we have that, and L = t − c is called the entropy loss of H . Universal Composability Framework In this section, we recall the basics of the Universal Composability (UC) framework of Canetti [5], and later we discuss the Everlasting Universal Composability framework 2 following [27] closely. We refer the reader to [5,27] for a more comprehensive description. Basics of the UC Framework Our description of the UC framework follows [27] closely. The composition of two provably secure protocols does not necessarily preserve the security of each protocol and the result may also be no longer secure. 
A framework that analyses the security of composed protocols and which is able to provide security guarantees is the Universal Composability framework (UC) due to Canetti [5]. The main idea of this security notion is to compare a real protocol π with some ideal protocol ρ. In most cases, this ideal protocol ρ will consist of a single machine, a socalled ideal functionality. Such a functionality can be seen as a trusted machine that implements the intended behaviour of the protocol. For example, a functionality F for commitment would expect a value m from a party C. Upon receipt of that value, the recipient R would be notified by F that C has committed to some value (but F would not reveal that value). When C sends an unveil request to F, the value m will be sent to R (but F will not allow C to unveil a different value). Given a real protocol π and an ideal protocol ρ, we say that π realizes ρ (also called "implements", "emulates", or "is as secure as") if for any adversary A attacking the protocol π there is a simulator S performing an attack on the ideal protocol ρ such that no environment Z can distinguish between π running with A and ρ running with Z. Here, Z may choose the protocol inputs and read the protocol outputs and may communicate with the adversary or simulator (but Z is, of course, not informed whether it communicates with the adversary or the simulator). First, the environment may communicate with the adversary during the protocol execution, and second, the environment does not need to choose the inputs at the beginning of the protocol execution; it may adaptively send inputs to the protocol parties at any time, and it may choose these inputs depending upon the outputs and the communication with the adversary. These modifications are the reason for the very strong composability properties of the UC model. Network Execution In the UC framework, all protocol machines and functionalities, as well as the adversary, the simulator and the environment are modelled as interactive Turing machines (ITM). Throughout a protocol execution, an integer k called the security parameter is accessible to all parties. At the beginning of the execution of a network consisting of π , A, and Z, the environment Z is invoked with an initial input z. From then on, every machine M that is activated can send a message m to a single other machine M . Then that machine M is activated and given the message m and the id of the originator M . If in some activation a machine does not send a message, the environment Z is activated again. Additionally the environment may issue corruption requests for some party P. From then on, the machines corresponding to the party P are controlled by the adversary (i.e. it can send and receive messages in the name of that machine, and it can read the internal state of that machine). Finally, at some point the environment Z gives some output m which can be an arbitrary string. By EXC π,A,Z (k, z) we denote the distribution of that output m on security parameter k and initial input z. Analogously, we define EXC ρ,S,Z (k, z) for an execution involving the protocol ρ, the simulator S, and the environment Z. We distinguish two different flavours of corruption. We speak of static corruption if the environment Z may only send corruption requests before the begin of the protocol, and of adaptive corruption if Z may send corruption requests at any time in the protocol, even depending on messages learned during the execution. 
In this paper, we will restrict our attention to the less strict security model using static corruption. We leave the case of adaptive corruptions, in which the environment may corrupt any party adaptively during the execution of the protocol as an interesting open problem. UC Definitions If the ideal protocol ρ consists of an ideal functionality F, for technical reasons we assume the presence of so-called dummy parties that forward messages between the environment Z and the functionality F. For example, assume that F is a commitment functionality. In an ideal execution, Z would send a value m to the party C (since it does not know of F and therefore will not send to F directly). Then, C would forward m to F. Then, F notifies R that a commitment has been performed. This notification is then forwarded to Z. With these dummy parties we have, at least syntactically, the same messages as in the real execution: Z sends m to C and receives a commit notification from R. Second, the dummy parties allow a meaningful corruption in the ideal model. If Z corrupts some party P, in the ideal model the effect would be that the simulator controls the corresponding dummy party P and thus can read and modify messages to and from the functionality F in the name of P. Thus, if we write EXC F ,S,Z , this is essentially an abbreviation for EXC ρ,S,Z where the ideal protocol ρ consists of the functionality F and the dummy parties. Having defined the families of random variables EXC π,A,Z (k, z) and EXC ρ,S,Z (k, z) we can now define security via indistinguishability. Definition 10. (Universal Composability [5]) A protocol π UC realizes a protocol ρ, if for any polynomial-time adversary A there exists a polynomial-time simulator S, such that for any polynomial-time environment Z, Note that in this definition, it is also possible to only consider environments Z that give a single bit of output. As demonstrated in [5], this gives rise to an equivalent definition. However, in the case of everlasting UC below, this will not be the case, so we stress the fact that we allow Z to output arbitrary strings. In particular an environment machine can output its complete view. Natural variants of this definition are statistical UC, where all machines (environment, adversary, simulator) are computationally unbounded and the families of random variables are required to be statistically indistinguishable, and perfect UC, where all machines are computationally unbounded and the families of random variables are required to have the same distribution. In these cases, one often additionally requires that if the adversary is polynomial time, so is the simulator. Composition For some protocol σ , and some protocol π , by σ π we denote the protocol where σ invokes (up to polynomially many) instances of π . 3 That is, in σ π the machines from σ and from π run together in one network, and the machines from σ access the inputs and outputs of π . (In particular, Z then talks only to σ and not to the subprotocol π directly.) A typical situation would be that σ F is some protocol that makes use of some ideal functionality F (say, a commitment) and then σ π would be the protocol resulting from implementing that functionality by some protocol π (say, a commitment protocol). One would hope that such an implementation results in a secure protocol σ π . That is, if π realizes F and σ F realizes G, then σ π realizes G. Fortunately, this is the case: Theorem 11. (Universal Composition Theorem [5]) Let π , ρ, and σ be polynomialtime protocols. 
Assume that π UC realizes ρ. Then, σ π UC realizes σ ρ . The intuitive reason for this theorem is that σ can be considered as an environment for π or ρ, respectively. Since Definition 10 guarantees that π and ρ are indistinguishable by any environment, security follows. In a typical application of this theorem, one would first show that π realizes F and that σ F realizes G. Then using the composition theorem, one gets that σ π realizes σ F which in turn realizes G. Since the realizes relation is transitive (as can be easily seen from Definition 10), it follows that σ π realizes G. This composition theorem is the main feature of the UC framework. It allows us to build up protocols from elementary building blocks. This greatly increases the manageability of security proofs for large protocols. Furthermore, it guarantees that the protocol can be used in arbitrary contexts. Analogous theorems also hold for statistical and perfect UC. Dummy adversary When proving the security of a given protocol in the UC setting, a useful tool is the so-called dummy adversary. The dummy adversaryà is the adversary that simply forwards messages between the environment Z and the protocol (i.e. it is a puppet of the environment that does whatever Z instructs it to do). In [5], it is shown that UC security with respect to the dummy adversary implies UC security. The intuitive reason is that sinceà does whatever Z instructs it to do, it can perform arbitrary attacks and is therefore the worst-case adversary given the right environment (remember that we quantify over all environments). We very roughly sketch the proof idea. Let protocols π and ρ and some adversary A be given. Assume that π UC realizes ρ with respect to the dummy adversaryÃ. We want to show that π UC realizes ρ with respect to A. Given an environment Z, we construct an environment Z A which simulates Z and A. Note that an execution of EXC π,Ã,Z A is essentially the same as EXC π,A,Z (up to a regrouping of machines). Then there is a simulatorS such that EXC π,Ã,Z A and EXC ρ,S,Z A are indistinguishable. Let S be the simulator that internally simulates the machines A andS and forwards all actions performed by A as instructions toS (remember thatS simulatesÃ, so it expects such instructions). Then, EXC ρ,S,Z A is again the same as EXC ρ,S,Z up to a regrouping of machines. Summarizing, we have that EXC π,A,Z and EXC ρ,S,Z are indistinguishable. A nice property of this technique is that it is quite robust with respect to changes in the definition of UC security. For example, it also holds with respect to statistical and perfect UC security, as well as with respect to the notion of Everlasting UC from [27]. Everlasting UC Security In this section, we present our definitions of everlasting UC security. Our formalization builds on Canetti's Universal Composability framework [5] and extends the notion of everlasting/long-term security due to Müller-Quade and Unruh [27]. Loosely speaking, everlasting security guarantees the "standard" notion of UC security during the execution of the protocol. This means that the security is guaranteed against polynomially bounded adversaries. Therefore, standard computational assumptions, such as the hardness of the decisional Diffie-Hellman problem and the existence of one-way functions can be used as hardness assumptions. However, after the execution of the protocol, we no longer assume that these assumptions hold, because they may be broken in the future. 
Müller-Quade and Unruh model this by letting the distinguisher become unbounded after the execution of the protocol. Everlasting security guarantees security and confidentiality in this setting. They showed in [27] that everlasting UC commitments cannot be realized, not even in the common reference string (CRS) or the public-key infrastructure (PKI) model. 4 The fact that everlasting UC commitments cannot be constructed in the CRS model shows a strong separation between the everlasting UC and the computational UC security notion, because commitments schemes do exist (under standard assumptions) in the computational UC security model [7]. The stark impossibility result of Müller-Quade and Unruh motivated the use of other trust assumptions, such as trusted pseudorandom functions (TDF) and signature cards [27]. It is not hard to see that everlasting UC security is strictly stronger than computational UC security, since the adversary is allowed to become unbounded after the execution of the protocol, and it is strictly weaker than statistical UC security, since the adversary is polynomially bounded during the run of the protocol. Defining Everlasting UC Security. The formalization of [27] is surprisingly simple and only extends the original UC definition by the requirement that the execution of the real protocol and of the functionality cannot be distinguished by an unbounded entity after the execution of the protocol is over (that is run by efficient adversaries and environments). Formally, this means that the output of the environment in the real and ideal worlds is statistically close. A comprehensive discussion is given in [27], and we briefly recall the definitions. In [27], the authors show that the composition theorem from [5] also holds with respect to Definition 12.A shortcoming of Definition 12, when applied to the token model, is that the distinguisher has no access to the hardware token after it becomes unbounded. Another issue is that Definition 12 does not model the case that the hardware assumption may be broken in the long-term. Everlasting UC Security with Hardware Assumptions We define a notion of everlasting security which allows the participants in a protocol to leak information in the long term. With the exception of the environment Z and the adversary A, we give each instance of a Turing machine (ITI for short) in the protocol an additional output tape, that we call long-term output tape. We modify the execution model to handle the long-term tapes as follows. At the end of the execution of the protocol (i.e. when the environment Z produces its output m), adversary A is invoked once again, this time with all long-term tapes, and produces an output a. We define the new execution model to be EXC := (m, a). A formal definition follows. Definition 13. (Everlasting UC with Long-term Tapes) A protocol π everlastingly UC realizes an ideal protocol ρ if, for any PPT adversary A, there exists a PPT simulator S such that, for any PPT environment Z, In Definition 13, the distinguisher does not get the long-term tapes directly, instead, the tapes go through the adversary. The real adversary A can, wlog, let the tapes go unchanged to the distinguisher (i.e. dummy adversary). The simulator S can replace the long-term tapes by any simulated a of its choice. We point out that Definition 13 is equivalent to Definition 12 when none of the ITIs in π or ρ have long-term output tapes. 
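The displayed conditions of Definitions 12 and 13 are likewise missing from this rendering. The following is our reconstruction from the surrounding prose, where the pair (m, a) collects the environment's output and the adversary's post-execution output read from the long-term tapes; the tilde notation is ours.

```latex
% Definition 12 (everlasting UC, reconstruction): for every PPT A there is a
% PPT S such that, for every PPT Z, the outputs are statistically close:
\[
\{\mathrm{EXC}_{\pi,\mathcal{A},\mathcal{Z}}(k,z)\}
\;\stackrel{s}{\approx}\;
\{\mathrm{EXC}_{\rho,\mathcal{S},\mathcal{Z}}(k,z)\}.
\]
% Definition 13 (everlasting UC with long-term tapes, reconstruction):
% with EXC-tilde := (m, a) as described in the text,
\[
\{\widetilde{\mathrm{EXC}}_{\pi,\mathcal{A},\mathcal{Z}}(k,z)\}
\;\stackrel{s}{\approx}\;
\{\widetilde{\mathrm{EXC}}_{\rho,\mathcal{S},\mathcal{Z}}(k,z)\}.
\]
```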
It is easy to show that the composition theorem from [27] carries over to our settings: the long-term tapes of the honest parties are also given to the adversary/simulator at the end of the protocol execution; however, the simulator (when communicating with the environment) can replace them with values of his choice. Formally, this means that the long-term tapes are just a message sent from protocol to adversary (in the same way as, e.g. the state is sent in the case of adaptive corruption), and consequently, when proving the composition theorem, those messages are handled in exactly the same way as the messages resulting from adaptive corruption. Functionalities In this section, we define some commonly used functionalities that we will need for our results. CRS The first functionality is the common reference string (CRS). Intuitively, the CRS denotes a string sampled uniformly from a given distribution G by some trusted party, and that is known to all parties prior to the start of the protocol. Multiple commitment Here, we recall the functionality for a commitment scheme. Throughout the following description, we implicitly assume that the attacker is informed about each invocation and that the attacker controls the output of the functionality. We omit those messages from the description of the functionalities for readability. Note that to securely realize this functionality, a protocol must guarantee independence among different executions of the commitment protocol. command (commit, sid, x), where x ∈ {0, 1} (λ) , from S, send the message (committed, sid) to R. Upon command (unveil, sid) from S, send (unveiled, sid, x) to R (with the matching sid). Several commands (commit) or (unveil) with the same sid are ignored. Oblivious Transfer Functionality The oblivious transfer functionality allows for the receiver party to select a bit b and the sender party to send two messages m 0 and m 1 to the receiver in such a way that, the sender never learns the bit b the receiver chose, and the receiver learns only the message m b , and nothing else about m b−1 . Transfer (OT)) Let R and S be two parties. The functionality F S→R, OT behaves as follows: upon receiving the command (transfer, id, m 0 , Definition 16. (Oblivious We call S the sender, and R the receiver. Remark 17. Looking ahead, we note that we cannot define the protocol of Sect. 6 in the F OT -hybrid model or in the F MCOM -hybrid model. The former is due to the protocol of Section 6 requiring an OT with the additional property of statistical receiver privacy, which is not the case of all OT protocols that realize the F OT functionality. The latter is due to the protocol requiring a commitment scheme with the additional property of statistical hiding, which is not the case of all commitment schemes that realize the F MCOM functionality. Moreover, the protocol of Sect. 6 requires to prove statements about the contents inside of a commitment, and as shown by [9] this is not possible using a UC commitment functionality. Physical Assumptions The functionality F HToken described in this section models generic fully malicious hardware tokens, including PUFs. A fully malicious hardware token is the one that its state is not bounded a-priori, its creator can install arbitrary code inside of it, and it can encapsulate an arbitrary number of (possibly fully malicious) tokens inside of itself, called children. 
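To fix ideas before the formal description, here is a minimal Python sketch of the token record that F_HToken keeps in its list L, with the attributes named below (id, st, M, children, owner, honest) and a recursive evaluation mirroring the Tokens-inside-Token idea. The class and method names, and the calling convention for M, are ours, chosen for illustration; they are not part of the formal functionality.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Token:
    """One physical token instance tracked in the list L of F_HToken."""
    id: str                      # unique identifier of the physical instance
    st: bytes                    # internal state (not bounded for malicious tokens)
    M: Callable                  # code run on queries; assumed M(st, q, oracles) -> (st', ans)
    children: List["Token"] = field(default_factory=list)  # encapsulated tokens
    owner: Optional[str] = None  # party that currently holds the token
    honest: bool = True          # True iff created via the honest 'create' command

def query(tok: Token, q: bytes) -> bytes:
    """Recursive evaluation: the parent's code M runs with oracle access to all
    of its children, as in the HTEval algorithm of F_HToken."""
    oracles = [lambda cq, c=c: query(c, cq) for c in tok.children]
    tok.st, ans = tok.M(tok.st, q, oracles)
    return ans
```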
As far as we know, this is the first functionality to integrate tamper-proof hardware tokens with PUFs, allowing us to design protocols that are transparent about the type of hardware token used, as the functionality can be instantiated with any of the former. Moreover, in the particular case of PUFs, our model extends the PUFs-inside-PUF model of [4] to the more general case of Tokens-inside-Token. 5 We handle encapsulated tokens in the functionality by allowing the parent token (i.e. the token that contains other token(s)) to have oracle access to all its children during its evaluation; we believe that token encapsulation models a realistic capability of an adversary and we believe that it is important to include it in our model for the soundness of the security analysis. We also note that F HToken is not PPT; this is because the functionality does not impose a restriction on the efficiency of the malicious code. The functionality F HToken allows tokens to be transferred among parties by invoking handover; a token can only be queried by the party that currently owns the token by invoking query. Malicious tokens can be created by the adversary and it can contain other tokens inside of it. In contrast to [8], the adversary can "unwrap" encapsulated tokens by invoking openup and read malicious tokens' state by invoking readout. Functionality F HToken F HToken is parameterized by an algorithm HTSamp, a PPT Turing machine M honest and a polynomial p(λ) that bounds the running time of M honest . F HToken runs on input the security parameter 1 λ , with parties P = {P 1 , · · · , P n }, and adversary A. The list L contains instances of tokens with the attributes id, st, M, children, owner, honest, that can be accessed with the notation token.attribute, and where id is a string that uniquely identifies a physical instance of the hardware token, st is the internal state of the token, M is a TM that contains the code to be executed, children is a list of children (tokens) that are contained within this token (can also be empty), owner is the party that currently owns the token (can be embedded in case of children), and honest is a boolean value that is true when the token was honestly generated, and false otherwise. For simplicity we omit the polynomial p(λ), since wlog any p(λ) can be considered. We note that F HToken is not PPT, and this is due to the fact that there is no runtime bound on M. The functionality F HToken receives commands and acts as follows. • Upon command (create) from P ∈ P, create an empty token tok and do: • Upon command (handover, id, P j ) from P i ∈ P ∪ {A}, where P j ∈ P ∪ {A}: For all tok ∈ L s.t. tok.owner = P i and tok.id = id do. • Upon command (query, id, q) from P ∈ P ∪ {A}: Define the recursive algorithm HTEval as follows. − Remove tok from L, and for each tok c ∈ tok.children: * Set tok c .owner := P, for some P ∈ P. − Return ok to A. • In all other cases, enter the waiting state without sending a message. The long-term output tape a records all the information from the tokens in L such that tok.owner = A (or tokens owned by some other token that is owned by A, for any number of layers). Physically Uncloneable Functions (PUFs) In a nutshell, a PUF is a noisy source of randomness. It is a hardware device that, upon physical stimuli, called challenges, produces physical outputs (that are measured), called responses. 
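As the text explains below, a PUF behaves like a function drawn at random from a very large set, with bounded measurement noise, and can be efficiently simulated by a stateful machine using lazy sampling. The following Python sketch illustrates exactly that idea; the class and parameter names (rg, d_noise) are ours, and d_noise ≤ rg is assumed.

```python
import random

class LazyPUF:
    """Stateful lazy-sampling simulation of a noisy PUF (illustrative sketch).
    rg: response length in bits; d_noise: each measurement differs from the
    canonical response of a challenge by at most d_noise bits."""
    def __init__(self, rg: int, d_noise: int, seed: int = 0):
        self.rg, self.d_noise = rg, d_noise
        self.rnd = random.Random(seed)
        self.table = {}            # challenge -> canonical response (lazy sampling)

    def eval(self, challenge: bytes) -> list:
        if challenge not in self.table:      # fresh challenge: sample uniformly
            self.table[challenge] = [self.rnd.randrange(2) for _ in range(self.rg)]
        y = list(self.table[challenge])
        # re-noise within the bound: flip at most d_noise randomly chosen bits
        for i in self.rnd.sample(range(self.rg), self.rnd.randrange(self.d_noise + 1)):
            y[i] ^= 1
        return y
```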
The response measured for each challenge of the PUF is unpredictable, in the sense that it is hard to predict the response of the PUF on a given challenge without first measuring the response of the PUF on the same (or similar) challenge. When a PUF receives the same physical stimulus more than once, the responses produced may not be exactly equal (due to the added noise), but the Hamming distance of the responses are bounded by a parameter of the PUF. A family of PUFs is a pair of algorithms (PUFSamp, PUFEval), not necessarily PPT. PUFSamp models the manufacturing process of the PUF: on input the security parameter, it draws an index σ , that represents an instance of a PUF that satisfies the security definitions for the security parameter (that we define later). PUFEval models a physical stimulus applied to the PUF: Upon a challenge input x, it invokes the PUF with x and measures the response y, that is returned as the output. The length of a response y returned by algorithm PUFEval is a bitstring of size rg. A formal definition follows. • Sampling. Let I λ be an index set. On input the security parameter λ, the stateless and unbounded sampling algorithm PUFSamp outputs an index σ ∈ I λ . Each σ ∈ I λ corresponds to a family of distributions D σ . For each challenge x ∈ {0, 1} λ , D σ contains a distribution D σ (x) on {0, 1} rg(λ) . It is not required that PUFSamp is a PPT algorithm. • Evaluation. On input (1 λ , σ, x), where x ∈ {0, 1} λ , the evaluation algorithm PUFEval outputs a response y ∈ {0, 1} rg(λ) according to the distribution D σ (x). It is not required that PUFEval is a PPT algorithm. Additionally, we require the PUF family to satisfy a reproducibility notion that we describe next. Reproducibility informally says that, the responses produced by the PUF when queried on the same random challenge are always close. Many PUF definitions in the literature [3,4,12,30] have had problems with the superpolynomial nature of PUFs. In particular, the possibility of PUFs solving hard computational problems, such as discrete logarithms or factoring, was not excluded, or excluded in an awkward way. We take our inspiration from the idea that a PUF can be thought as a function selected at random from a very large set, and therefore cannot be succinctly described; however, it can be efficiently simulated using lazy sampling. Conceptually, we will only consider PUFs that can be efficiently simulated by a stateful machine. where st denotes the initial state of the TM M. Security of PUFs The security of PUFs has been mainly defined by the properties of unpredictability and uncloneability [1,3,4,24,30]. In Sect. 4.1.1, we introduce a novel unpredictability notion for PUFs, and we later discuss why the standard unpredictability notion is not suited for our setting. Fully adaptive PUF Unpredictability. In contrast to the standard definition of unpredictability [3], in this work we require a stronger notion of adaptive unpredictability. Loosely speaking, unpredictability should capture the fact that it is hard to learn the response of the PUF on a given challenge without first querying the PUF on a similar challenge. Note that this implies uncloneability: if one could clone the PUF, one could use the cloned PUF to predict the answers of the original PUF. We express the similarity of inputs/outputs of the PUF in terms of the Hamming distance hd, however, our results can be easily adapted to other metrics. where Q is the list of all queries made by A. 
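Both the average min-entropy expression and the winning condition of the adaptive game were lost to extraction. The LaTeX below is our reconstruction: the min-entropy formula follows the standard definition cited from [3], and the game condition follows the prose description of Definition 21, with d_noise and d_min as assumed names for the noise and closeness thresholds.

```latex
% Average min-entropy (standard form, as in [3]):
\[
\tilde{H}_\infty\!\big(\mathrm{PUFEval}(q)\mid \mathrm{PUFEval}(\mathcal{Q})\big)
= -\log \Big( \mathbb{E}_{y \leftarrow \mathrm{PUFEval}(\mathcal{Q})}
\big[ \max_{x} \Pr[\mathrm{PUFEval}(q)=x \mid y] \big] \Big).
\]
% Adaptive unpredictability (our reading of Definition 21): for a uniform
% challenge x given to A *before* oracle access, every PPT A satisfies
\[
\Pr\Big[ \mathrm{hd}\big(\mathcal{A}^{\mathrm{PUFEval}(1^{\lambda},\sigma,\cdot)}(1^{\lambda},x),\,
\mathrm{PUFEval}(1^{\lambda},\sigma,x)\big) \le d_{\mathrm{noise}}
\;\wedge\; \forall q \in Q:\ \mathrm{hd}(q,x) > d_{\mathrm{min}} \Big]
\le \mathrm{negl}(\lambda).
\]
```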
The adaptive PUF unpredictability says that the only way to learn the output of PUFEval(1 λ , σ, x) is to query the PUF on x (or something close enough to x). Our definition captures this by allowing adversary A to know the challenge x before having oracle access to PUFEval. The unsuitability of the standard PUF unpredictability of [3]. We first recall the standard unpredictability definition of [3]. As the definition itself is based on the notion of average min-entropy, for convenience, we present that first. [3]) The average min-entropy of the measurement PUFEval(q) conditioned on the measurements of challenges Q ={q 1 , · · ·, q poly(λ) } for the PUF family PUF = (PUFSamp, PUFEval) is defined bỹ Definition 23. (PUF Unpredictability We now argue why Definition 23 is not suited for our setting. We present a PUF family that satisfies Definition 23 and yet allows for an adversary to predict the response of the PUF on a challenge never queried to the PUF (and far apart from the other queried challenges). We prove the following theorem next. Proof. Let PUF = (PUFSamp, PUFEval) be a PUF family for challenges of size (n + 1)-bits and responses of size n-bits, We construct the family PUF as follows: We first show how an adversary can predict with probability 1 the output of a PUF from the family described above on a fresh input. Given some arbitrary fresh challenge input b m, the adversary can find the corresponding response PUFEval(1 λ , σ, b m), without ever querying the PUF on b m, by doing the following: Compute x * := PUFEval(1 λ , σ, 0 n+1 ) and compute y := PUFEval(1 λ , σ,b m ⊕ x * ). Note that both queries are far apart from b m, yet the adversary learns y = PUFEval(1 λ , σ, Now we show that the PUF family described above satisfies Definition 23. 6 Fix any polynomial-size challenge list Q = {q 1 , . . . , q κ−1 } and any challenge query q κ such that, for any k ∈ [κ − 1] : hd(q κ , q k ) ≥ 1, which is clearly minimal. Since f is a random function, it holds that PUFEval(1 λ , σ, q) has maximal average min-entropy, unless the PUF is queried on two inputs (q i , q j ) that form a collision for f . Note that this happens only if q i ⊕ q j = 1 x * . Thus, all we need to show is that, for any fixed set of queries {q 1 , . . . , q κ } the probability that q i ⊕ q j = 1 x * is negligible, over the random choice of x * . This holds because by applying the Bernoulli inequality. The above expression approaches 0 exponentially fast, as n grows. This concludes our proof. Contrasting our unpredictability definition with the one of [3]. The motivation behind our newly proposed adaptive unpredictability notion (Definition 21) is that the standard PUF unpredictability notion of [3] implicitly assumes that PUFs are only dependent on random physical factors (likely introduced during manufacturing), and in particular it does not capture families of PUFs that could have some programmability built in, allowing to predict the output of a PUF on an input by querying a completely different input. What our new PUF unpredictability notion explicitly captures is that a "good" PUF must solely depend on random physical factors, and in particular cannot have any form of programmability. On a more philosophical level, we believe that our new notion is what was meant to be modelled as a property for PUFs from the start. 
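The counterexample construction above is partly garbled in this rendering. One plausible reading, consistent with the described prediction attack and with the collision condition q_i ⊕ q_j = 1‖x*, is sketched below in Python: the challenge 0^{n+1} leaks a secret shift x*, and every other (n+1)-bit challenge b‖m evaluates a lazily sampled random function f on m ⊕ b·x*. All names are ours, and the negligible edge case m = x* is ignored.

```python
import secrets

def make_counterexample_puf(n: int):
    """Our reconstruction of the counterexample family: unpredictable in the
    average-min-entropy sense of [3], yet adaptively predictable."""
    x_star = secrets.randbits(n)
    f_table = {}
    def f(m: int) -> int:                  # lazy random function {0,1}^n -> {0,1}^n
        if m not in f_table:
            f_table[m] = secrets.randbits(n)
        return f_table[m]
    def puf_eval(challenge: int) -> int:   # challenge: (n+1)-bit integer b||m
        if challenge == 0:
            return x_star                  # the all-zero challenge leaks the shift
        b, m = challenge >> n, challenge & ((1 << n) - 1)
        return f(m ^ (x_star if b else 0))
    return puf_eval

def predict(puf_eval, n: int, b: int, m: int) -> int:
    """Predict PUF(b||m) using only two queries far (in Hamming distance) from b||m."""
    x_star = puf_eval(0)                            # first far query
    return puf_eval(((1 - b) << n) | (m ^ x_star))  # second far query: f(m XOR b*x_star)
```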
Since PUFs are inherently randomized devices that are specifically built to be unpredictable and uncontrollable, a PUF family such as the one described above should not be considered to be a "good" PUF family; however, the previous notion fails to capture this fact. 7 Overall, our new definition of unpredictability does not hinder in any way the progress and development of new real-world PUFs, but merely addresses a technical oversight by the previous unpredictability notion. Therefore, we conjecture that most real-world PUFs that satisfy the unpredictability notion of [3] will most likely also satisfy our unpredictability notion, since real PUFs are inherently randomized physical devices built to be unpredictable and uncontrollable. Impossibility of Everlasting OT with Malicious Hardware In this section, we prove the impossibility of realizing everlasting secure oblivious transfer (OT) in the hardware token model, even in the presence of a trusted setup. The result carries over immediately to any secure computation protocol due to the completeness of OT [23]. We consider honest tokens to be stateful but non-erasable (Definition 25) and the tokens produced by the adversary can be malicious but not encapsulate other tokens (note that this restriction on malicious tokens only makes our result stronger, as the impossibility holds even against an adversary that is more limited). The adversary A is PPT during the execution of the protocol, but A becomes unbounded after the execution is over (i.e. everlasting security). This extends the seminal result of Goyal et al. [18] that shows the impossibility of having statistically (as opposed to everlasting) UC secure oblivious transfer from stateless (as opposed to non-erasable) tokens. We stress, however, that our negative results does not contradict the work of Döttling et al. [13,14], since they assume honest tokens to be non-resettable or bounded-resettable (i.e. tokens cannot be reset to a previous state, or only reset up to an a-priori bound), whereas for our result to hold the token must be non-erasable. In the following, we show the main theorem of the section. The result holds under the assumption that the token scheduling is fixed a-priori, which captures most of the known protocols for secure computation [13,14,20]. The scheduling of the tokens determines the exchange of the tokens among parties. We stress that we do not impose any restriction on which party will hold each hardware token in the end of the execution. For a formal definition of OT, we refer the reader to Sect. 2.2. We first define "non-erasability" for hardware tokens next. Definition 25. (Non-erasable hardware token) A (stateful) hardware token is said to be non-erasable if any state ever recorded by the token can be efficiently retrieved. Note in particular that stateless tokens are trivially non-erasable, as the former cannot keep any state. Theorem 26. Let be a hardware token-based everlasting OT protocol between Alice (i.e. sender) and Bob (i.e. receiver) where the honest tokens are non-erasable and the scheduling of the tokens is fixed. Then, at least one of the following holds: • There exists an everlasting adversary S that uses malicious and stateful hardware tokens such that Adv S ≥ (λ), or • there exists an everlasting adversary R that uses malicious and stateful hardware tokens such that Adv R ≥ (λ), for some non-negligible function (λ). Proof. The proof consists of the following sequence of modified simulations. 
Let the game G 0 define an everlastingly secure OT protocol for S and R. Then, by assumption, we have that for all everlasting adversaries S and R , it holds that We define a quasi-semi-honest adversary to be an adversary that behaves semi-honestly but keeps a log of all queries ever made to the hardware token (i.e. the non-erasable token assumption). Let the game G 1 define an everlastingly secure OT protocol where S and R are quasi-semi-honest. Since we are strictly reducing the capabilities of the adversaries and the tokens are non-erasable, we can state the following lemma. Let G 2 be the same as G 1 except that whenever S (resp. R ) queries a token from R (resp. S) that will return to R (resp. S), instead of making that query to the token, S queries directly R who answers it as the token would have. Since the distribution of the answers for the queries does not change, we can state the following lemma. Lemma 28. For all quasi-semi-honest S and R , it holds that Let game G 3 be exactly the same as G 2 except that whenever S (resp. R ) sends a token to R (resp. S) that will not return to S (resp. R ), then S sends a description of the token instead. Since we consider everlasting adversaries we assume that after the execution of the protocol all tokens can be read out. Therefore, both parties will have the description of all the tokens, even the ones that are not sent to the other party. Note that at this point there are no hardware tokens involved, and only description of tokens. Therefore, a quasi-semi-honest adversary is identical to a semi-honest everlasting one. 8 We point out that a semi-honest unbounded adversary S (resp. R ) is also a semi-honest everlasting adversary, since during the execution of the protocol it performs only the honest (PPT) actions. We are now in the position of stating the final lemma. Lemma 30. For all semi-honest unbounded S and R , it holds that It was shown [2] that it is not possible to build a secure OT protocol against semihonest unbounded adversaries (even in the presence of a trusted setup), what gives us a contradiction and concludes our proof. Everlasting Commitment from Fully Malicious PUFs In this section, we build an everlastingly secure UC commitment scheme from fully malicious PUFs. Let C = (Com, Open) be a statistically hiding UC-secure commitment scheme, let (Sender OT , Receiver OT ) be a 1-out-of-2 statistically receiver- and let We denote by (P 1 , V 1 ) and (P 2 , V 2 ) the statistically witness-indistinguishable arguments of knowledge (SWIAoK) for the relations R 1 and R 2 , respectively. Our commitment scheme is described next. Everlasting Commitment scheme from PUF Setup: Let G be the distribution for a random y in the range of the one-way permutation f , let seed be a random seed for the strong randomness extractor H , and let crs , be the CRS for the non-interactive commitment and crs OT for the OT protocol. The ideal functionality F G CRS samples a random crs from the distribution of valid values, where crs := (y, seed, crs , , crs OT ) and provides Alice and Bob with crs. We denote by x ∈ {0, 1} λ the pre-image such that f (x) = y, used by F G CRS to sample y. Commitment: On input (commit, id, m), for a fresh id, Alice engages with Bob in the following interactive protocol. Bob samples a PUF token by querying to Alice and outputs (commited, id). Opening: On input (unveil, id), Alice parses (, , decom, ω, k, ) as the information generated in the commitment phase with the same id, if any, and m as the corresponding message. 
Then it interacts with Bob in the following manner. We note that many instances of the previously described protocol (with a different id) may run concurrently. • and let (P 1 , V 1 ) and (P 2 , V 2 ) be SWIAoK systems for the relations R 1 and R 2 , respectively. Then, the protocol above everlastingly UC-realizes the functionality F MCOM in the F PUFEval,PUFSamp HToken -hybrid model. Proof. We consider the cases of the two corrupted parties separately. The proof consists of the description of a series of hybrids and we argue about the indistinguishability of neighbouring experiments. Then, we describe a simulator that reproduces the real-world protocol to the corrupted party while executing the protocol in interaction with the ideal functionality. Corrupted Bob (recipient) Consider the following sequence of hybrids, with H 0 being the protocol as defined above in interaction with A and Z: H 1 : Defined exactly as in H 0 except that, for all executions of commitment and opening routines, the SWIAoK for R 1 and R 2 are computed using the knowledge of x, the preimage of y. This is possible as the F CRS functionality is simulated by the simulator that samples an f (x) = y such that it knows x. In order to avoid trivial distinguishing attack, we additionally require Alice to explicitly check that ∀i ∈ {[ (λ)]} : hd(β i , q k i i ) ≤ δ and abort (prior to computing the SWIAoK) if the condition is not satisfied. The two protocols are statistically indistinguishable due to the statistical witness indistinguishability of the SWIAoK scheme. In particular, for all unbounded distinguishers D querying the functionality polynomially many times, it holds that H 2 , . . . , H (λ)+1 : Each H 1+i for i ∈ [ (λ)] is defined exactly as H 1 except that in all of the sessions Alice uses the simulator of the statistical receiver private OT protocol (that implements the F OT functionality) to run the first i instances of the oblivious transfers. Note that the simulator (using the knowledge of the CRS trapdoor) returns both of the inputs of the sender, in this case ( p 0 i , p 1 i ). By statistical receiver privacy of the oblivious transfer, it holds that the simulated execution is statistically close to a honest run, and therefore, we have that for all unbounded distinguishers D that queries F PUFEval,PUFSamp HToken polynomially many times: with the difference that, in all of the sessions, the first i-many commitments com i are computed as Com(r i ), for some random r i in the appropriate domain. Note that the corresponding decommitments are no longer used in the computation of the SWIAoK. Therefore, the statistically hiding property of the commitment scheme guarantees that the neighbouring simulations are statistically close for all unbounded D. That is H (seed, k i ) in the former case and a random string in the latter). Note that k i is used only in the computation of H i and that the only variable that depends on k i is γ . Since γ is from a set of size n + 1, we can bound from above the entropy loss of k i to log(n + 1)-many bits. Recall that (n + 1) 2 λ , therefore we have that (λ) − c > log(n + 1), for an appropriate choice of (). Hence, by the strong randomness of H we have that Since the distance between H 2· (λ)+4,0 and H 2· (λ)+4,n is the sum of the bounds obtained by the leftover hash lemma [21], we can conclude that H 2· (λ)+6 : Defined as H 2· (λ)+5 except that in all sessions com is a commitment to a random string s. 
Note that in the execution of H 2· (λ)+5 the value of decom is masked by a random string H i and therefore it is information theoretically hidden to the eyes of the adversary. By the statistically hiding property of Com, we have that for all unbounded distinguisher A the following holds: H 2· (λ)+7 : Defined as H 2· (λ)+6 except that Alice opens the commitment to an arbitrary message m . We observe that the execution of H 2· (λ)+6 is completely independent from the message m, except when m is sent to Bob in clear in the opening phase. Therefore, we have that for all unbounded distinguishers D that query the functionality polynomially many times: S: We now define S as a simulator in the ideal world that engages the adversary in the simulation of a protocol when queried by the ideal functionality on input (committed, sid). The interaction of S with the adversary works exactly as specified in H 2· (λ)+7 , with the only difference that the message m is set to be equal to x, where (unveil, sid, x) is the message sent by the ideal functionality with the same value of sid. Since the simulation is unchanged to the eyes of the adversary we have that By transitivity, we have that H 0 is statistically indistinguishable from S to the eyes of the environment Z. We can conclude that our protocol everlastingly UC-realizes the commitment functionality F MCOM for any corrupted Bob. We stress that that we allow Bob to be computationally unbounded and we only require that the number of sessions is bounded by some polynomial in λ. Corrupted Alice (committer) Let H 0 be the execution of the protocol as described above in interaction with A and Z. We define the following sequence of hybrids: H 1 : Defined as H 0 except that the following algorithm is executed locally by Bob at the end of the commit phase of each session, in addition to Bob's normal actions. E(1 λ ): Let K be a bitstring of length (λ), the extractor parses the list of queries Q that Alice sent to F PUFEval,PUFSamp HToken before the last message of Bob in the commitment phase. Then, for all Q j ∈ Q it checks whether ∃ j ∈ [ (λ)] such that ∃z ∈ {0, 1} such that hd(Q j , p z i ) ≤ γ , where p z i is defined as in the original protocol. If this is the case the extractor sets K i = z. If the value of K i is already set to a different bit the extractor aborts. If at the end of list Q there is some i such that K i is undefined, the extractor aborts. Otherwise it parses ω ⊕ H (seed, K ) as m ||decom and it returns (m , decom). Note that Bob does not use the output of E and therefore, for all distinguishers D, we have that: We first derive a bound for the probability that the event NoUnique happens. Consider the following sequence of hybrids. The experiment H U 0 identical to H 2 except that we sample some j * from the identifiers associated to all sessions and some i * from [ (λ)]. Let n be a bound on the total number of session and let NoUnique( j * , i * ) be the event where NoUnique happens in session j * and for the i * -th bit. Since j * and i * are randomly chosen we have that H U 1 : The experiment H U 1 is defined as H U 0 except that it stops before the execution of the i * -th OT in session j * . Let st be the state of all the machines in the execution of H U 0 , the experiment does the following: • Continue the execution of H U 0 from st. • Input/output all the i * -th OT messages from session j * . • Simulate all other messages internally. The experiments sets the bit b = 1 if and only if the commitment of the j-th session succeeds. 
Let NoUnique * ( j * , i * ) be the event that K j * i * is not uniquely defined. Since the execution does not change to the eyes of Alice we have that H U 2 : Defined as H U 2 except that the CRS for the OT is sampled to be in extraction mode. By the computational indistinguishability of the CRS, it holds that H U 3 : Defined as H U 2 except that the extractor for the OT is used in the i * -th OT of the j * -th session. The experiment sets b = 1 if the simulation succeeds. Recall that the simulator outputs the choice of the receiver b i * and expects as input the value p b i * i * . Note that this implies that the value p is information theoretically hidden to the eyes of Alice. Also note that by the simulation security of the OT we can rewrite thus by Jensen's inequality we have that As we argued before the value of p is information theoretically hidden to the eyes of Alice. However, by definition of NoUnique * ( j * , i * ) Alice queries both ( p 0 i * , p 1 i * ) to the functionality F PUFEval,PUFSamp HToken . It follows that we can bound the probability of the event NoUnique * ( j * , i * ) to happen to a negligible function in the security parameter. Therefore, we have that Pr Abort : H 3 ≤ negl(λ) + Pr NoDefined : H 3 . In order to show a bound on the probability of NoDefined to happen in H 2 , we define another sequence of hybrids. The experiment H D 0 identical to H 2 except that we sample some j * from the identifiers associated to all sessions. Let n be a bound on the total number of session and let NoDefined( j * ) be the event where NoDefined happens for the session j. Since j * is randomly chosen we have that H D 1 : The experiment H B 1 is defined as H D 0 except that it stops before the execution of the SWIAoK in the commitment of session j * . Let st be the state of all the machines in the execution of H D 0 under the assumption that no machine keeps a copy of the pre-image x after generating crs. Let P * be the following algorithm: In the following analysis, we ignore the case where the two extracted witnesses are a valid trapdoor for the common reference string y, as this event can be easily ruled out with a reduction to the one-wayness of f . Let us denote by β i ← Open(com i , decom i ). Now it is now enough to observe that the successful termination of the protocol implies that for all i ∈ [ (λ)] we have that hd(q k i i , β i ) ≤ δ, for some k = k 1 || . . . ||k (λ). By definition of NoDefined * ( j * ) there exists some i * such that A never queried any p to F PUFEval,PUFSamp HToken such that neither hd( p , p 0 i * ) ≤ γ nor hd( p , p 1 i * ) ≤ γ , before seeing the last message of the commitment phase. By the unpredictability of the PUF, it follows that Pr (hd(m i * , q 0 i * ) ≤ δ) ∨ (hd(m i * , q 1 i * ) ≤ δ) ≤ negl(λ). We can conclude that there exists an i * such that β i * = m i * . Since decom i * and decom i * are valid opening information for m i * and β i * , respectively, then we can derive the following bound ; r : by the binding property of the commitment scheme. Therefore, we can conclude that Pr Abort : H 2 ≤ negl(λ). This proves our lemma. In order to conclude our proof, we need to show that the extractor always returns a valid message-decommitment pair for the same message that Alice outputs in the opening phase. More formally, let NoExt be the event such that for the output of the extractor (m , decom) ← E(1 λ ) it holds that m = Open(com, decom), where com is the variable sent by Alice in the same session. 
Additionally, let BadExt be the event such that the output of extractor (m , decom) is a valid opening for com but m = m, where m is the message sent by Alice in the opening for the same session. We are now going to argue that the probability that either NoExt or BadExt happens is bounded by a negligible function. We now observe that whenever the extractor of the SWIAoK is successful then, for all i ∈ [ (λ)] it holds that that β i ← Open(com i , decom i ) and that hd(q k i i , β i ) ≤ δ, for some k = k 1 || . . . ||k (λ) . Additionally, we have that H (k, seed) ⊕ ω is a valid decommitment information for com. By definition of NoExt, we have that m = Open(com, decom), where (m , decom) is the output of E and it is defined as ω ⊕ H (seed, K ). This implies that K = k, since the function H is deterministic. Therefore, there must exists some i * such that K i * = k i * . By Lemma 32, we know that K is uniquely defined and therefore Alice did not query F PUFEval,PUFSamp HToken for any p such that hd( p z i i , p ) ≤ γ for z i = K i , and therefore for all i ∈ [ (λ)] it holds, by the unpredictability of the PUF, that Pr hd(β i , q 1−K i i ) ≤ δ ≤ negl(λ), and in particular we have that β i * = m i * . Since decom i * and decom i * are valid openings for m i * and β i * with respect to com i * , the probability of NoExt * ( j * ) to happen in H D 4 can be bound to a negligible function by the binding property of the commitment scheme. This proves the initial lemma. Proof. The formal argument follows along the same lines as the proof of Lemma 33. The main observation here is that the argument implies that the output of E and the tuple (m, decom), where m is sent in plain by Alice and decom is the output of the extractor for the SWIAoK, must be identical with overwhelming probability. S: We can now define the simulator S that is identical to H 2 except that the output m of the algorithm E (defined as above) is used in the message (commit, sid, m ) to the ideal functionality F MCOM . The corresponding decommitment message (unveil, sid) is sent when the adversary returns a valid decommitment to some message m. Since the interaction is unchanged to the eyes of the adversary, we have that This implies that our protocol everlastingly UC-realizes the commitment functionality F MCOM for any corrupted Alice and concludes our proof. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Effect of Cellulose Nanofibers (CNF) as Reinforcement in Polyvinyl Alcohol/CNF Biocomposite

This research aimed to extract cellulose nanofibers (CNFs) from commercial microcrystalline cellulose by the planetary ball milling method and to utilize the extracted CNFs as reinforcement in polyvinyl alcohol (PVA) thin films. The effect of CNFs on the mechanical and physical properties of the PVA thin films was investigated. We found that the thin film's tensile strength is good and that the surface morphology of the CNF suspension enhances the bonding between the PVA and the reinforcement. The Tyndall effect confirmed visible-light scattering through the CNF suspension, and the CNF/PVA thin film was transparent. The mechanical and physical properties of the CNF/PVA composite are good owing to the excellent dispersion and the absence of CNF agglomeration. The prepared PVA/CNF biocomposite would be a suitable candidate for biodegradable food-packaging material.

Introduction

In recent years, there has been growing awareness of the importance of the environmentally friendly design of chemical products and processes. The concept of sustainability strongly influences the chemical community, which increasingly focuses on minimizing the use of hazardous substances and on adopting green synthetic techniques that start from renewable, affordable resources [1]. Cellulose is a natural polymer consisting of a linear homopolysaccharide of β-(1,4)-D-glucose units connected by β-1-4 linkages, carrying abundant hydroxyl groups, and is among the most abundant organic polymers [2-4]. It is used for various applications since it is the most abundantly available natural polymer [5]; it can be isolated by chemical hydrolysis [6], acid hydrolysis [7], and ball milling as a green preparation procedure. Cellulose nanofibers (CNFs) offer excellent mechanical properties [8], a large specific area, a low coefficient of thermal expansion, low cost and availability [9], good biodegradability, a high aspect ratio (L/D), biocompatibility, and renewability [10,11]. Ball milling is a mechanical procedure widely used to grind powders into fine particles and to mix materials [12]. Being environmentally friendly and cost-effective, it has found wide application in industry all over the world. Since this work mostly centres on the conditions applied for the preparation and functionalisation of nanocellulose derivatives by ball milling, rather than on the machinery itself, the various types of machines available in industry will not be described in detail; only a general description of the different types of equipment is given in this section. Depending on the application, there are different types of ball mill. The aim of this study was to apply the planetary ball milling process for CNF extraction. Planetary ball milling was chosen because the milling vessels are placed on a rotating supporting disk while also rotating around their own axes; the distance between the disk axis and the vessel axes is an essential parameter for the efficiency of the process, as a larger distance allows higher kinetic energy and therefore stronger impacts.
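As a rough illustration of the claim above (a back-of-the-envelope sketch under our own simplifying assumptions, not a model from the paper), the impact energy of a milling ball grows with the square of its velocity, which in a planetary mill scales with the rotation speed and the disk radius:

```python
import math

def impact_energy(ball_mass_kg, disk_rpm, disk_radius_m, k_geom=1.0):
    """Estimate: ball impact velocity taken proportional to the tangential
    speed of the vessel centre, v ~ k_geom * omega * R, so E = 0.5 * m * v^2.
    k_geom lumps the machine-specific geometry (an assumed fudge factor)."""
    omega = 2.0 * math.pi * disk_rpm / 60.0     # angular speed, rad/s
    v = k_geom * omega * disk_radius_m          # impact velocity, m/s
    return 0.5 * ball_mass_kg * v * v           # joules

# Illustrative numbers only: a 15 mm zirconia ball is roughly 10 g.
print(impact_energy(ball_mass_kg=0.010, disk_rpm=300, disk_radius_m=0.10))
```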
Polyvinyl alcohol (PVA) is a water-soluble synthetic polymer, widely used as a matrix to produce biodegradable polymer composites because of its biodegradability, biocompatibility, high tensile strength, excellent resistance, and adhesive properties. In this study, a CNF suspension was prepared by ball milling, and CNF/PVA films were prepared via mixing and mechanical stirring. Since well-dispersed CNFs in distilled water can act as a dispersing agent for PVA and form a rigid nano-network structure [13,14], the CNF/PVA mixture served as a template to develop thin films via the casting process. In this process, the CNF/PVA nano-networks provided the composite with both improved mechanical strength and improved physical properties due to the nano-network structures. Furthermore, the morphological features and tensile properties are discussed.

Extraction of cellulose nanofibers (CNFs)
A 1.1 g sample of microcrystalline cellulose and 20.9 g of deionized water were added to a 45 mL zirconia pot containing seven zirconia balls (15 mm). Ball milling was conducted in the planetary ball mill at 300 rpm for 0.5 to 8 hours. After ball milling, the cellulose slurry was washed repeatedly with distilled water and centrifuged at 12000 rpm for 0.5 to 3 hours to obtain a cloudy precipitate and to bring the pH value of the cellulose to between 6 and 7.

Preparation of polyvinyl alcohol thin film
PVA granules (4.4 g) were added to 100 mL of distilled water under heating and magnetically stirred to dissolve the polymer completely. The desired amount of CNF suspension (4.4 g) was then added and sonicated for uniform dispersion of the CNFs in the PVA solution. The mixture was then poured into Petri dishes to allow the water to evaporate, after which the film was demolded and stored. Finally, the thin films underwent a freeze-thaw process for 6 hours (3 hours per cycle) to improve their resistance to deterioration after repeated temperature cycling.

Characterization
The surface morphology of the prepared CNF suspension was characterized using a field emission scanning electron microscope (FE-SEM) operating at 1.5 kV. The tensile strength of the CNF and PVA thin films was investigated using a universal testing machine (UTM) at room temperature. The samples were cut to 85 mm length and 25 mm width. The average values of the tensile stress, fracture strain, and Young's modulus were calculated. The Tyndall effect of the CNF suspension was examined using a laser beam. The transparency of the obtained thin films was compared against a reference picture.

Results and discussion
The Tyndall effect was observed to identify the scattering of light by the CNF particles in comparison with distilled water, as shown in Figure 1. The laser light was directed towards a universal container with (A) distilled water and (B) CNF suspension. In distilled water, no light beam was visible passing through the liquid, whereas the CNF suspension allowed the light beam to be seen passing through the solution. This visible path of the light beam through the solution is referred to as the Tyndall effect. In distilled water, the particles are too small to obstruct the path of light as it passes through, so no visible straight line appears. The CNF suspension, however, shows the effect because it has two phases, the dispersed phase and the dispersion medium; in a colloid, the dispersed particles scatter light in different directions. The Tyndall effect was thus used to determine whether the sample is a true solution or a colloid.
The transparency of the prepared CNF/PVA thin films was assessed by comparison with a clear reference picture, in this case the UMK logo (as shown in Figure 3). The prepared thin films were placed on top of the picture to observe how clearly it could be seen through them. Figure 4 shows the pictures taken during the observation of thin-film transparency. As observed with the naked eye, the CNF/PVA thin film was more transparent than the PVA thin film. Owing to the rigid nano-network of the CNF suspension, the mechanical properties induced by the reinforcement of CNFs in PVA were further investigated through tensile testing. Figure 4 shows the stress-strain curve of the CNF/PVA thin film. The CNF/PVA thin film exhibits a rigid structure with a tensile strength of 25.76 N/mm², a Young's modulus of 5.579 MPa, and a fracture strain of 168.035 %. Notably, the CNF/PVA thin film exhibited significantly improved mechanical strength; the reinforcement of CNFs in PVA arguably provided good elastic behaviour of the composite. The tensile properties of the CNF/PVA thin films are listed in Table 1 below.

Conclusion
The study aimed to obtain CNF-reinforced PVA thin films and to examine the impact of the reinforcement on their mechanical and physical properties. Reinforcement with CNFs increases the hydrogen bonding between the fibers and the polymer, which resulted in an improvement in the mechanical and physical properties of the PVA thin films, as seen from the tensile tests and FE-SEM analysis. At the same time, the addition of a high concentration of CNFs into the PVA matrix can degrade the mechanical and physical properties because of the development of agglomeration. Ball milling promotes the interaction of CNFs with the PVA polymer matrix by exposing more hydroxyl groups on the surface.
INTERACTION OF THE LAWS OF ELECTRODYNAMICS IN THE HUBER EFFECT

A complex physical phenomenon, first discovered by engineer J. Huber in 1951, is investigated. From the perspective of an external observer, the phenomenon is as follows: an electric current is passed through the wheel pairs of a car moving from rail to rail. The current, passing through the movable contacts of the wheels and rails, creates an additional torque (on top of the moment of inertia). The research task is to explain the reason for the occurrence of this torque. Based on an analysis of the individual components of the electrodynamic phenomenon discovered by Huber, an algorithm for the successive interaction of the individual components of the effect is found on the basis of the laws of classical electrodynamics: electric, ferromagnetic, and mechanical. The identity of the effect is explained both for the wheel pair and for the bearing (Kosyrev-Milroy engine). For the first time, the cause of the appearance of the torque is revealed: the relative movement of surface charges in the region of the movable electrical contact with respect to the wheel body and the rails (or the balls and guides). The moving charges unevenly magnetize the ferromagnetic bodies according to the Biot-Savart-Laplace law. Because the gap decreases on the oncoming side of the wheel (or balls) and increases on the trailing side, the pulling force from the oncoming side, and accordingly the moment, is larger than on the trailing side. The presented theoretical explanations fully correspond to the experimental investigations of the effect carried out by different scientists at different times.

Introduction

Despite the large number of experiments on Huber and Kosyrev-Milroy engines from 1951 to the present, there has been no theoretical explanation of the effect. The complexity of the problem lay in the desire to obtain an exact mathematical description of the processes in these engines. But it is impossible to obtain a physico-mathematical model that is isomorphic to the real processes. As follows from the fundamental laws of the general interaction of elements and of the continuity of matter and motion, the following do not exist in nature: a linear relationship between individual physical phenomena, autonomous (ideally isolated) objects, stationary deterministic and stochastic processes, physical constants, etc. Therefore, the success of an explanation of one or another phenomenon sometimes depends on a reasonable compromise between the accuracy and the complexity of its mathematical description. This approach is used by the authors to explain the Huber effect. Using the laws of classical physics, at a qualitative level and without complex numerical calculations, the logical sequence of action of the corresponding laws is considered. Together, these laws explain the Huber effect in the corresponding engines.
Review of existing attempts to explain the effect

The explanations of the effect presented by various scientists, unfortunately, did not correspond to the physical essence of the phenomenon. Thus, it was considered [1,2] that the moment arises from the Ampere-law interaction of the currents in the guides and in the wheels or balls, which are located at an acute angle. This would create a moment only if the same moment of opposite sign did not arise on the second wheel of the Huber wheelset or on the other side of the bearing ball. It was believed [3] that the moment arises from a spark and an increase in air pressure on the trailing side of the contact. To confirm this [3], the bearings were placed under a vacuum hood and the air was gradually evacuated. The motion ceased, but this was due to the significantly reduced heat removal in vacuum from the balls, which heated up to 250 °C; this led to their jamming and to the stopping of the Kosyrev-Milroy engine. The author of work [4], in 1982, argued that sparks are not the cause, whereas in [3], in 1973, the same author believed that the spark was the cause. The negative effect of sparks on the motion is discussed in [5]. If the cause were pneumatic, the wheels or balls could move along non-ferromagnetic guides; however, in the absence of a ferromagnet, the motion ceased. There is an explanation [6] of the appearance of a moment from the thermal deformation of the guides, which allegedly creates a hill from which a ball or wheel rolls. This explanation does not take into account the considerable thermal inertia of the guides: it could apply only at ultra-low velocities W, but the moment does not arise at such W. In [5], propositions were put forward that do not correspond to the classical laws of physics, for example, the presence of a magnetic induction created by a current and coinciding with it in direction. Additional ambiguities were introduced in [7,8], where the Huber effect is combined with the unconfirmed effect of J. Searle. The abstract mathematical variational approach in [9] also does not reveal the physics of the phenomenon.

As observations [2] have shown, for the torque M to appear it is necessary, in addition to the presence of motion, that the wheels (balls) and the guides be ferromagnetic. Accordingly, there must be a source of magnetization of the wheels (balls) and guides. The magnetization of the bodies should be asymmetric relative to the point of contact, stronger towards the direction of motion; the ferromagnetic material must become magnetized; lubrication of the bearings, if it is not thick, also slightly improves the performance. However, the current I in the "source-consumer" circuit cannot directly create an asymmetric magnetic field: the current that would asymmetrically magnetize the ferromagnet must be perpendicular to the current I. The attraction of the ferromagnets from the oncoming side, with subsequent demagnetization, would create a torque M. Thus, the explanation of the effect remained unresolved.

Explanation of the effect on the basis of the laws of electrodynamics of moving bodies

The torque M occurs when there is a motion of the wheel pair of J.
Huber [3] or the rotation of the bearings of the Kosyrev-Milroy engine [1] and the current in the contacts of the wheels (or balls) with the guides.The rotation velocity W increases with increasing current I.It does not depend on its direction, constancy or sinusoidal current; is equal to 0 if the material is not ferromagnetic.In addition, if the voltage of the power supply is unchanged, the dependence of M on W is extremal.For W=0, M=0, then M increases to the maximum (M max ).Further, if the mechanical moment of counteraction is less than M max , the angular velocity W continues to grow, but M decreases.Further, if the counter-moment changes sign, at some synchronous velocity W s , M=0, and then, with increasing W>W s , M<0. The specificity of electrodynamics [10,11] consists in the study of a spatially inhomogeneous system with a non-uniform charge distribution q and additional degrees of motion freedom.That is, the power indicators of the system depend not only on the current I, but on the position and motion of the components of the system. Let's consider "guides − contact zone − wheel or ball" system.Wheels or balls of radius r rotate with angular velocity W counterclockwise.They roll along fixed guides to the left of the point of contact with the linear velocity V 0 : In the Huber wheel pair, the rails have a significant previous cross section (relative to the contact area of the rails with the wheel).Therefore, the voltage drop in the rail relative to the drop in the contact area is not significant, and the voltage at the contact is almost independent of the place of rail connection to the source.The cross-sectional areas of the wheel and shaft are also large and Fundamental and applied physics have an electrical resistance much less resistance of the contact spot.A similar situation occurs in bearings.Therefore, almost all of the source voltage is applied to the contacts of the wheels, or bearings.Therefore, it is necessary to analyze the processes in the contact zone. 1. Flow І of electricity q through the fixed contact zone The contact zone is not an ideal line for the wheel or a point for the ball (Fig. 1).It has a finite area "a" of non-ideal (due to the roughness of the surface) of the electromechanical contact that surrounds the "b" area of a purely electrical contact through a small air or oil gap. Fig. 1. Contact zone: a -electromechanical, b -electric Within ±x 1 in zone "a", as a result of surface roughness, there is a mechanical contact with resistive resistance R a and air with a capacitance of C a .Resistance R a depends on the zone of direct contact of the surfaces, the resistivity r к of the contact medium and the average thickness l к of the mechanical contact zone: The capacity C a in zone "a" is indicated by the non-ideal contact, that is, the presence of a micro-gap between the surfaces.It is proportional to the part S a of the area S k , the permittivity e of air and inversely proportional to the thickness ( ) δ α of the micro-gap: for contact of balls and cages, taking into account the smallness of a, for contact of wheel and rail where the coefficient k takes into account both a decrease in the total area S k and an increase in the surface of the "plates" of the capacitor due to the roughness of the surfaces; f is the wheel width. 
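Equations (2)–(5) are garbled in this rendering. The following Python sketch shows the contact model they describe, a series contact-spot resistance and a parallel-plate capacitance of the micro-gap, reconstructed from the surrounding definitions; all numeric values below are illustrative assumptions, not data from the paper.

```python
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def contact_resistance(rho_k, l_k, S_contact):
    """Eq. (2) as reconstructed: R_a = rho_k * l_k / S_contact,
    the resistive part of the electromechanical zone "a"."""
    return rho_k * l_k / S_contact

def gap_capacitance(eps_r, S_plate, delta, k_rough=1.0):
    """Eqs. (3)-(5) as reconstructed: parallel-plate capacitance of the
    micro-gap, C = k * eps_r * eps0 * S / delta, with k accounting for the
    effective area change caused by surface roughness."""
    return k_rough * eps_r * EPS0 * S_plate / delta

# Assumed illustrative values (steel-like contact, micron-scale gap):
R_a = contact_resistance(rho_k=1e-7, l_k=2e-6, S_contact=1e-6)   # ohms
C_a = gap_capacitance(eps_r=1.0, S_plate=1e-6, delta=2e-6)       # farads
print(f"R_a = {R_a:.3e} ohm, C_a = {C_a:.3e} F")
```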
The gap ( ) δ α for zone "a" is a few microns.Therefore, the capacity C а , in spite of the small C а area, can be quite appreciable, especially for the oil gap.For small a, 3), taking into account ( 4) and ( 6), for n parallel-connected balls: Capacity (3) of two wheel and rail contacts, taking into account ( 5) and (6): Voltage U k (potential difference j 1 and j 2 ) on the contact is determined by the resistance (2) and the current I: where, due to the presence of two or four (in the bearing pair) contacts connected in series between (±) poles of the constant voltage source U, the potentials j 1 and j 2 will be the same for four or j 2 will be zero for two consecutive contacts.The area S b in zone "b" (Fig. 1) for a ball: for a wheel: The area S b is much more than a S , but the gap (6) is more, because (Fig. 1) The approximate value of the capacitance C b for the mean value a av , which is equal to the half sum a min and a max : − for n ball bearings ( ) ( ) − for two contacts of wheels and rails For bearings, the total capacity will be approximately the fate, for the wheels − units of picofarads. The charges q 1 and q 2 of the same sign located on the contacting surfaces, in accordance with (3)−(9), will be thousandths of a Coulomb. 2. Flow I of electricity q through the movable contact zone In the absence of motion, the electric current І, like the flow of electricity q, are distributed in zones "a" and "b" (Fig. 1) symmetrically.A small zone "b" is limited by the coordinates ±x 2 , under which the phenomenon of gap breakdown disappears.The situation changes significantly (Fig. 2), if the wheel or ball rotates at with velocity Ω, moving with velocity 0 V . Fig. 2. The asymmetric distribution of the current density I In order to explain the appearance of asymmetry, let's imagine the flow І by the sum of the electric tubes with density k j  through the cross sections S ∆ of the total contact zone k S : ( ) Each k-th current tube І k , which is equal to the product of the density j k by the area DS, is formed at the moment t 1 of the discharge in the gap δ 1 and disappears at the moment t 3 of the discharge extinction in the gap δ 2 (Fig. 3).Having finite length δ 2 , area ΔS of cross-section and location in a ferromagnetic medium, each tube is an R k L k -circle, where R k and L k are electrical resistance and inductance.If (for simplicity) let's assume that R k and L k are constants, and the contact resistance is assumed to be zero for the presence and infinite in the absence of a discharge, then the instantaneous value i k (t) of the current I k is defined as solution of the equation: Namely: for time t in the interval between t 1 and t 2 ( ) ( ) for time t in the interval between t 2 and t 3 ( ) ( ) where ( ) ( ) -the time constant.V′ or part of it ( ) Further, after the opening of the mechanical contact, when the time t is greater than t 2 or t¢ 2 , it exponentially decreases (19) to zero. An intensity br ε at which the breakdown of the gap δ arises or disappears from the oncoming side is equal to the ratio of k U to 1 δ (Fig. 2).At the moment 1 t (regardless of velocity 0 V or 0 V′), a current (18) appears.The shaded area under the curve ( ) δ′ not will be compared to br ε .It will exponentially decrease until the moment 3 t or 3 t′ at which the ratio of the voltage to the gap δ or 2 δ′ is not compared with br ε .That is, the gap 2 δ will be more than (Fig. 2, 3).The shaded area under the current curve (19) ( ) It is much more than the charge q′ from the oncoming side (Fig. 3). 
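The transient equations (17)–(19) are also garbled, but the behaviour they describe is the textbook step response of an R-L circuit: the tube current rises exponentially while the contact conducts and decays exponentially after it opens. The sketch below integrates one current tube and compares the charge (area under the current curve) on the oncoming and trailing sides, illustrating why q on the trailing side exceeds q′ on the oncoming side; all parameter values are assumed for illustration.

```python
import math

def tube_current(t, t1, t2, U, R, L):
    """Solution of L di/dt + R i = U for one current tube: exponential rise
    for t1 <= t < t2 (gap conducting), exponential decay for t >= t2
    (discharge extinguished), cf. the reconstructed Eqs. (17)-(19)."""
    tau = L / R
    if t < t1:
        return 0.0
    if t < t2:
        return (U / R) * (1.0 - math.exp(-(t - t1) / tau))
    i2 = (U / R) * (1.0 - math.exp(-(t2 - t1) / tau))
    return i2 * math.exp(-(t - t2) / tau)

# Charge = integral of i(t), via crude Riemann sums; illustrative parameters.
U, R, L, t1, t2, t3 = 1.0, 0.5, 1e-3, 0.0, 1e-3, 10e-3
dt = 1e-6
q_front = sum(tube_current(t1 + k*dt, t1, t2, U, R, L)
              for k in range(int((t2 - t1) / dt))) * dt
q_rear = sum(tube_current(t2 + k*dt, t1, t2, U, R, L)
             for k in range(int((t3 - t2) / dt))) * dt
print(q_front, q_rear)   # trailing-side charge dominates when tau ~ (t2 - t1)
```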
Let's define the distances 1 b and 2 b .For 1 b the tension br ε at which the gap breaks 1 δ : ( ) for 2 b : ( ) ( ) Fundamental and applied physics The more 0 V , the more the module k di dt and, accordingly, the more ratio of 2 b to 1 b , that is, the non-metering. In the same way as in a circle with inductance, the current k i of the tubes does not change instantaneously, so the voltage k U on the capacitor (3) and, accordingly, the charges 1 2 q , q (15) also can't instantaneously change.Therefore, the charges 1 2 q , q move to the right of the contact zone.The current k i of the tubes is overcome with an end hour k t ∆ air gap k δ with the velocity k t ∆ that is proportional to intensity k ε : where b -the mobility coefficient of charged particles in the gap k δ .Then, according to the dimensionality (А×s= C), there is an assumption that not only on the surfaces, but also in the gap k δ of the k-th tube, an unbalanced charge k q is formed (due to the dynamism of the process) The total air charge can be represented by an point charge e q equal to q Σ and located to the right of the contact point at a distance eq x (Fig. 2), where Due to asymmetry, surface charges (15) will also be shifted to the right of the contact point. Asymmetric magnetization of the wheel (balls) and the guide by the moving charges The charges (15), (25), which are formed in the contact zone, move relative to the wheel bodies (balls) and the guide with velocity 0 V .The product of the total charge 1 2 eq q q q q = + + per velocity 0 V can be represented as an element y I x ⋅ ∆ of the conditional current c I : ( ) where 1 2 eq c 0 q q q x I , V t t According to the Biot-Savart-Laplace law, the current element c I (27) at the airspace point М forms the magnetic field of induction DВ (Fig. 4): To calculate the entire magnetic field ( ) 0 r, r Φ , it is sufficient to integrate (28) within the limits { } max 0, r ± , { } 0 max 0, r .If the ferromagnetic bodies of the wheel (balls) and the guide are brought into this space, they are locally magnetized, increasing the magnetic induction B by r µ (by 1000¸10000) and, having an air gap ( ) δ α , in order to minimize the energy M W of the magnetic Fundamental and applied physics field [9], form the forces M F acting in the case of increase of magnetic conductivity M Y , i. e., to reduce the gap ( ) As is known [10], the force M F is proportional to the square of the current c I , inversely proportional to the square of the gap ( ) δ α and acts on the decrease of ( ) δ α (increase of M Y ).x to the right with the same gaps on the left and on the right creates better conditions for magnetization and on the trailing side from the point of contact.If the moving system has zero mass, then the movement to the left would instantaneously cease.Asymmetry (Fig. 2) would be aligned to symmetry (Fig. 1). However, there is an interaction of four moments for the moment 0 t of the time t.The moment М 1 , "wants" to reduce the gap to the left of the contact point; moment М 1 "want" to reduce the gap behind the contact point.The dynamic moment М 3 , which is formed from the mass m and the velocity 0 V of motion of the movable part of the system.М 4 is the moment of loading.If the velocity 0 V is more than zero, then The more m and 0 V , the more М 3 , but the more 0 V , the more the distance eq x from the contact point to the coordinate of the equivalent charge q eq.(Fig. 
4) and, respectively, М 2 .For a specific current І there exists a maximum velocity max V at which the action of the moments averaged over time Dt and the counteraction of the moments 1 3 M M + are equalized.For the time interval t ∆ of time t at which the inequality (30) is satisfied due to the presence of motion with velocity 0 V .On the way x ∆ , the moment М 1 value averaged over time t ∆ (due to the reduction of the gap) will increase, and М 2 − vice versa (due to the increase of the gap) − will decrease.Such statement is true if the average (over time t ∆ ) values of the gap on the left l δ are less than on the right r δ (Fig. 5). Reports on research projects (2017), «EUREKA: Physics and Engineering» Number Fundamental and applied physics Fig. 5 shows the position of the wheel (balls) for the moments t 0 = (lines) and t t = ∆ (dashed lines) of time t.In order to approximately determine the average value over time t ∆ of the gaps to the left and to the right of the point (t 0 = , x 0 = ), the dependences ( ) x , t − ∆ zones A it is followed from Fig. 6, the average value δ for the time t ∆ to the left of the point (0, 0) is 1 0,5δ , on the right − 1 δ .Accordingly, the average value of the action of the moment М 1 to the left of the point (0, 0) will be 4 times greater than the right for the time Dt, since the force (29) is inversely proportional to the square of the gap.The asymmetry of forces and moments leads to an acceleration of motion.However, for almost constant parameters R k , L k , that is constant time constant t k , asymmetry grows.Center х eq (26) of the charge q eq (25) is shifted to the right of the contact point (0, 0).This leads to a decrease in the asymmetry of the action of the forces and, accordingly, of the total moment, that is, the system has the property of self-balancing.Magnetization can occur both from a constant and sinusoidal current, since the strength of F M (29) depends on its square.A necessary condition for asymmetry is motion.It ensures the condition (30).On the trailing side of contact with zone "b" (Fig. 1), demagnetization of ferromagnets and the gradual disappearance of excess surface charges occur.The effect is increased if the air gap e 0 is replaced by an oil gap with e much more than e 0 .If further artificially increase the velocity W, then the moment will decrease to zero and then change the sign, as in a single-phase asynchronous motor. Conclusions On the basis of the analysis of the existing theoretical explanations and the results of the experiments carried out by the authors of [1−7, 9, 12, 13] and own theoretical and experimental studies, it is first established that: − the direct cause of asymmetric magnetization of the sections to the left and to the right of the contact point of ferromagnetic bodies is the movement of surface and, possibly, air charges that are formed in the zone of movable electrical contact with the current; − asymmetrical magnetization together with the mechanical action of the inertia moment of the moving body create the necessary conditions for supporting the movement; − it is necessary to conduct additional experimental studies to identify the quantitative characteristics of the charge in the contact zone from the current passing through the contact and from specific geometric parameters of the facility for the numerical calculation of the component of the total torque from the magnetization of ferromagnets. Fig. 4 . Fig. 4. 
Creation of a magnetic induction ΔB by an element I_y·Δx of the conditional current.
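Equation (28) above applies the Biot-Savart-Laplace law to the current element shown in Fig. 4. A minimal numerical sketch of that law (the current, element length, and field-point position are assumed values, not quantities taken from the paper):

```python
# Magnetic induction dB produced at a field point r by a current element I*dl,
# from the Biot-Savart-Laplace law: dB = (mu0 / 4*pi) * I * (dl x r_hat) / |r|^2.
# The current, element length and field-point position are assumed values.
import numpy as np

MU0 = 4.0e-7 * np.pi    # vacuum permeability, H/m

def biot_savart_dB(I, dl, r):
    """Field contribution (tesla) of one current element I*dl observed at offset r."""
    r_norm = np.linalg.norm(r)
    return MU0 / (4.0 * np.pi) * I * np.cross(dl, r / r_norm) / r_norm**2

I_c = 100.0                          # assumed conditional current, A
dl = np.array([1.0e-3, 0.0, 0.0])    # assumed 1 mm element along the direction of motion
r = np.array([0.0, 5.0e-3, 0.0])     # field point 5 mm to the side of the element
print(biot_savart_dB(I_c, dl, r))    # ~[0, 0, 4e-4] T in air, before ferromagnetic amplification
```

The induction obtained in air is then amplified locally by the relative permeability of the ferromagnetic wheel (balls) and guide, as noted in the text.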
2018-12-12T04:02:31.441Z
2017-05-31T00:00:00.000
{ "year": 2017, "sha1": "230ff2f203988971bdb1221f96fa069a86eb1681", "oa_license": "CCBY", "oa_url": "http://eu-jr.eu/engineering/article/download/360/339", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "230ff2f203988971bdb1221f96fa069a86eb1681", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Engineering" ] }
49381227
pes2o/s2orc
v3-fos-license
Food Calls in Common Marmosets, Callithrix jacchus, and Evidence That One Is Functionally Referential Simple Summary We studied the calls that are made by common marmosets when they see food and when they consume it. They were tested with two types of fruit (banana and blueberries) and two types of insect (live mealworms and crickets). All of these foods elicited Call A, described previously as a food call. Call B was softer and was given more often to crickets. Of particular interest was our discovery of Call C, which was produced when the marmosets discovered insects but not fruit. We showed that Call C has referential meaning, by playing recordings of the call to marmosets when they were eating banana, as follows: on hearing Call C, but not Call A, they stopped eating the banana and went to look in container where they had found insects before. Call C, therefore, has specific meaning, signaling the availability of insect food. All of the calls were produced more frequently when the marmosets were tested alone than when in pairs, indicating that they advertise the availability of food to companions. Call C, at least, is not merely an automatic response reflecting the marmoset’s emotional state. A meaningful call about the discovery of insect food would be essential for survival, since marmosets often forage out of sight of each other. Knowledge of their vocal communication can improve the care of marmosets held in captivity. Abstract We studied three calls of common marmosets, Callithrix jacchus, elicited in the context of food. Call A, but not B or C, had been described previously as a food call. We presented insects (live mealworms or crickets) and fruit (banana or blueberries) and used playbacks of calls. We found that Call C was produced only in response to seeing insects, and not fruit; it consistently signaled the availability of insects (includes mealworms), and more so when this food could be seen but not consumed. Playback of Call C caused the marmosets to stop feeding on a less preferred food (banana) and, instead, go to inspect a location where mealworms had been found previously, providing evidence that it has referential meaning. No such immediate response was elicited on hearing Call A or background noise. Call A differed from C in that it was produced more frequently when the marmosets were consuming the food than when they could only see it, and call A showed no specificity between insects and fruit. Call B was emitted less frequently than the A or C calls and, by the marmosets that were tested alone, most often to crickets. An audience effect occurred, in that all three calls were emitted more often when the marmosets were tested alone than when in pairs. Recognition of the functional significance of marmoset calls can lead to improved husbandry of marmosets in captivity. Introduction Calls that alert conspecifics to a food source have an essential role in group-living species, and many primate and avian species are known to produce food-associated calls when locating or consuming food [1,2]. One type of food call that is produced by the common marmoset (Callithrix jacchus) has been described previously [3], but no previous papers have reported other food calls in this species. Since the marmosets in our colony were found to emit two other calls in addition to the call that was described by Vitale et al. 
[3], we decided to investigate their vocal responses to food presentation in detail, and to test the possibility that the calls might be produced in reference to specific food-types. We wanted to determine whether these calls are involuntary responses to food and/or whether they are consistently produced calls that are referential. Calls are considered to be functionally referential if they are emitted reliably in response to a specific external event and if those hearing the calls (the receivers) react in a way that is consistent with the external event, even in its absence [4]. For example, a food call would have to be emitted reliably by the signaler when food is present, and possibly only when a specific type of food is present, and the receivers hearing this call would have to respond by searching for the food or approaching the caller and the food source [5]. It is possible that the signaler is not calling with the intention of communicating to the receivers but merely responding reflexively to an external event, and that the receivers may learn to associate this call with that particular external event; that is, the receivers may extract information from the sender's call without the sender having any intention to inform them. This can be further investigated by determining whether the calls are produced only in the presence of group members (i.e., the 'audience effect') [6]. The audience effect indicates that individuals are not simply vocalising because of a change in their level of arousal, but rather that they are able to control their vocal behaviour depending on the presence or absence of other individuals [7], and this is particularly seen in the antiphonal calling of common marmosets [8,9]. In fact, the presence of others can increase or decrease call production. This behaviour has been shown in many different primate species, including red-bellied tamarins [10], vervet monkeys [11], brown capuchins [12], tufted capuchins [13], and bonobos [14]. In chimpanzees, increased rates of food calling occur in the presence of an audience but only when a large, sharable quantity of food is available; the rate of calling decreases if the quantity of food is too small to share [15]. In short, these signals represent selective vocal responses that are designed for communication, rather than simply being a reflection of the caller's internal state, related to level of hunger or preference for a particular type of food [16]. A study of food calls (rough grunts) in chimpanzees has shown that, when a male is feeding silently, playback of pant hoot calls, simulating the arrival of another chimpanzee, elicits food calls from the feeding male. This occurs more frequently when the pant hoot calls that are played are those of a chimpanzee of much higher rank than the chimpanzee who is feeding [4]. Hence, in this species, food calls are functionally referential signals that are produced selectively and intentionally, and they are not simply broadcast indiscriminately. Note also that the structure of food calls that are produced by chimpanzees can be modified by social influences, as demonstrated by the convergence over time of the acoustic structure of food calls that are made after integration of two different groups with, originally, different food calls [17]. This indicates that such food calls of chimpanzees have enough flexibility to be modified by social learning. 
Considering New World primates, Geoffroy's marmosets (Callithrix geoffroyi) have been shown to increase foraging behaviour after hearing playbacks of food-associated calls [18], and white-faced capuchins approach playbacks of food calls, but not other calls [19]. The latter approach the source of food calls more often when the duration of the call is longer. Conspecifics are also seen to approach males who are calling more often than they approach calling females, and males are also observed to call more than females. Since infant cotton-top tamarins are more likely to obtain food when an adult is emitting food-associated calls [20], food calls may also function to facilitate social learning. Relatively little is known about the food calls of marmosets. Therefore, we considered it important to examine the calls that had been noticed by one of us (LS) to be produced by the marmosets in our colony at feeding times. One call was that which was described previously by Vitale et al. [3], and there were two others that had not been described previously. Our aim was to determine whether these three calls were produced (1) on seeing or on consuming food; (2) when the marmoset was alone or in the presence of a familiar cage-mate; and (3) to test whether one or more might be referential signals. Subjects and Housing The 12 subjects (6 male and 6 female) that were used in this study were part of a colony of common marmosets (Callithrix jacchus) that were housed at the University of New England, NSW, Australia (holding Permit AEC05/061). The individuals that were tested, males and females, ranged in age from 24 to 162 months. All were the offspring of captive-bred marmosets. They were housed in same-sex pairs, in three separate home rooms (4 × 3 × 3 m), each containing three or four home-cages (size 1 × 2.3 × 2 m). These home-cages were connected by runways (wire-mesh enclosed structures of 0.23 × 0.23 m cross section and several metres in length) to three indoor rooms (3 × 2.9 × 2.6 m), and each of these was, in turn, connected to an outdoor cage (1.7 × 1.7 × 2.5 m) where the marmosets could receive exposure to sun light and observe activities in the external environment [21]. Each home-cage was richly furnished with wooden perches, branches of different sizes, hanging objects, tunnels, tyre swings, at least one nest box, and a tray containing a blanket and heat pad. The indoor rooms had horizontal and vertical branches, and each outdoor cage was furnished with several branches and a suspended pipe, which the marmosets could enter. Access to the indoor rooms and outdoor cages was rotated between the home-cages every three days. The experiments were conducted in the indoor rooms. The rooms were kept within a temperature range of 18 • C to 30 • C. Lights came on at 07:00 h and went off at 19:00 h, and sunlight entered the home rooms via skylights. Additional ultra violet light (350-390 nm) was provided for 30 min each day in the home rooms. Feeding occurred daily between 12:00 h and 14:00 h. The daily diet was banana cake, meatloaf, dog pellets, apples, banana, with weekly extras of boiled egg, sultanas, yoghurt, green beans, cheese, stewed apple, nutra-grain, peanuts, wholemeal bread with vitamin supplement, gum Arabic, and fresh seasonal fruits, as well as occasionally including blue berries. Water was available ad libitum. Mealworms and crickets were given approximately once a month. 
Prior to conducting these experiments, and during them, the experimenter (LS) fed the marmosets twice per week to maintain familiarity and thus help to avoid stressful situations during the tests. Individual marmosets were identified by their cage group, as well as by characteristic features, such as size, facial markings, and other individual features. General Procedure for Testing The University of New England approved the experimental procedures (AEC08/034). All of the experiments were conducted in the indoor rooms between 09:00 and 12:00 h, before the daily feeding time. The indoor rooms were accessed by the marmosets via the runways, which had a number of sliding panels that could be closed to prevent retreat to the home cage or access to the outdoor cage. One-way mirrors from an observation room permitted the experimenter to remain unseen while recording the marmosets' behaviour during testing. Four types of food were used in testing, namely, two fruits (banana and blueberries) and two insects (live mealworms and crickets). The food was presented on a wooden table (60 × 90 × 75 cm), covered with hessian, and placed in the middle of the indoor room. The recording of the vocalizations was made using a Sennheiser microphone and a Marantz digital recorder. Video footage was obtained using a JVC mini DV recorder. All of the equipment was operated remotely. The recording microphone was placed in the indoor rooms 24 h before testing so as to ensure that the marmosets were familiar with all of the equipment that was used. The behaviour of the marmosets was also videotaped. The video camera was situated in the left corner of the indoor room on a tripod. Call Analysis The calls were identified from sonograms using Raven, a bioacoustics program that was developed by the Cornell Bioacoustics Research Program, and Adobe Audition 2.0. The frequency, duration, pitch, and amplitude were assessed. Statistical Analysis Normally distributed data were analysed by one-way ANOVA using SPSS Statistics program 17.0. The data that could not be normalised by transformation were analysed using non-parametric tests. Experiment 1: Determining the Relative Preferences of Food Types Used in Subsequent Tests Before assessing the vocal responses of the marmosets to food, tests were made to determine the relative preferences of the marmosets for the four food types that were used in subsequent tests. A set of paired choice tests was given prior to conducting Experiments 2 and 3, and a second set of choice tests was given after the completion of Experiments 2 and 3. Twelve marmosets (6 males and 6 females) were tested in the indoor rooms. Petri dishes (75 mm diameter, height 5 mm) were used in this experiment for easy visibility of the food types, because it was necessary for the marmoset that was being tested to see both of the food types that were presented at the same time, in order to make a choice. These dishes were attached to the sheet of Perspex using small pieces of Velcro, so as to stop the movement of the dishes and to allow easy removal. The marmoset could take only one piece of food per trial and the dishes were removed once a choice had been made. In each choice test two semi-randomly assigned food types were presented, and in small quantities (one mealworm, one cricket, one blueberry, and a 1 cm cube of banana). 
During each presentation, the individuals were considered to have made a choice when they had observed both dishes, by looking directly at each food type, and had then approached and taken food from one dish. Once a choice had been made, the experimenter removed both of the dishes and left the room to change them, before returning for the next presentation. Each trial consisted of paired combinations of food types. One set of choices was run per day and a score of 1 was recorded for the food that was chosen in each choice test. Experiment 2: Vocal Responses of Marmosets Tested in Pairs This experiment was designed to observe the vocal response of familial pairs of marmosets to four different food types. The calls were recorded during three conditions, namely, (1) no food present; (2) food presented in a bowl with a transparent lid so that it could be observed but not eaten; and (3) the lid removed from the bowl and the food available to be eaten. The aim was to observe whether particular calls were given in the presence of food, and if different calls were elicited by different food types. The marmosets were tested using home-cage mates (N = 12 marmosets, the same as in Experiment 1; 6 same-sex pairs of which 3 were female and 3 were male). The pairs received one presentation of each type of food per day, in random order and in large quantities; half a banana (chopped), 20 blueberries, 20 mealworms, and 10 crickets (i.e., approximately the same amount of each food type). All of the food was presented in large amounts, as previous research had shown that higher rates of food calling were elicited by larger amounts of food (marmosets [3]; red-bellied tamarins [10]). The food was presented in white ceramic bowls (diameter 9.5 cm, height 7 cm) that were covered with a transparent plastic lid (diameter 9 cm). Each food type had its own bowl to avoid any olfactory cues being transferred across the different types of food. The bowl containing food was placed on the table in the indoor room. The recording sessions consisted of a six-minute period prior to testing (no food present in the indoor room), followed by a three-minute period when the food was presented in a bowl with a transparent lid, to allow the marmosets to observe, but not eat it. This was followed by a three-minute period after the lid had been removed from the bowl, so that the marmosets could access the food. Identification of Calls Three calls (A, B, and C), which were previously noted to be produced in the presence of food, could be distinguished accurately from the phee and tsik calls in structure and frequency, and from the other common and known calls of marmosets [22]. Since it was not possible to distinguish which individual made each call, the total scores were determined for each pair of marmosets. As a result of their entirely different structures (frequency, duration, and rhythm), Calls A, B, and C were easily distinguishable in sonograms (as well as by ear). All three of the calls were relatively stereotyped and could be readily identified (Figure 1).
Figure 1. Representative sonograms of Calls A, B, and C, and, for comparison, of a phee call. Frequency in kilohertz (kHz) is plotted on the y-axes and the duration, in seconds, is plotted on the x-axes. Note that the scale differs slightly for each call type. Call A consisted of short, loud, chirping sounds that were uttered consecutively with 4-7 chirps/second, with a duration ranging from a few chirps to a sequence lasting 6-8 s. The frequency of this call ranged from 4-11 kHz, usually beginning with one 'tsik' call or sometimes two (Figure 1, Calls A). Each repeated element of the call had a descending frequency and was of a duration of approximately 0.08 to 0.10 s. This call was described previously as a food call by Vitale et al. [3]. Call B was a succession of 4-6 soft whistles, each of a 0.5 s duration, given with the mouth almost closed. It had a duration of approximately 2 s (four whistles), and a frequency ranging from 6-9 kHz (Figure 1, Calls B). The amplitude of the B calls was quite low and the frequency contours were relatively flat. Although the structure of this call looked like a phee call (Figure 1, Phee), the average frequency of a phee call is 8-10 kHz, with call duration averaging 1.5 s, and it is a loud, long distance contact call [22]. The frequency of Call B was lower than that of a phee call, and the amplitude was much lower. The individual elements of phee calls also have a longer duration (1-2 s) than those of the B call (0.5 s). Call B was given in groups of 4-5 elements, whereas the phee calls usually contain 2-3 elements. In most cases, Call B had a trill in all or some of the syllables, appearing similar to a trill-phee, as described by Pistorio et al. [23]. Call C was a sequence of long, loud squeals, beginning with a single 'tsik' vocalization, peaking at 20 kHz, and often followed by a second quite similar element, peaking at 16-17 kHz. Subsequent elements ranged from approximately 8 to 13 kHz and dipped midway to about 11 kHz, so that they had an approximate m-shape (Figure 1, Calls C). Each element had a duration of about 0.35 s.
The call had a duration of 6-8 s, and the frequency ranged from 6-16 kHz, including the second element, but not the first, tsik element. Experiment 3: Vocal Responses of Marmosets Tested in Isolation This experiment measured the vocal responses of individual marmosets to the four different food types when they were tested alone in the indoor rooms. The number of each type of call could be determined and compared to the number of calls that were made in pairs. The testing procedure and apparatus that were used were the same as that which was used in Experiment 2, except that the twelve marmosets were now tested singly. Although they could hear the louder calls that were made by other members of the colony in the home rooms, these were muted. No visual contact with other marmosets was possible. 2.6. Experiment 4: Response to Playback of Calls A and C Experiment 4 was designed to test whether the A or C calls had a referential quality signalling the presence of 'insects' or 'live food'. The aim was to playback the calls to see whether they elicited a specific response in the absence of the particular food to which they might have referred. Based on the results of Experiments 2 and 3, Call B was not tested in this way. The marmosets were trained, prior to conducting the experiment, to find mealworms in a cup that was attached behind a branch in the far corner of the indoor room (1.5-1.8 m from the table). Mealworms were placed in the cup each afternoon, and when no marmoset was in the room, for at least four days prior to the commencement of the test trials. The cups were baited once per day after the marmosets had returned to their home rooms at feeding time and, since the marmosets had access to the indoor rooms via the runways, they could find the mealworms when they next entered this room. Since the mealworms were not visible from a distance, the marmosets had to inspect the cup to discover the mealworms. Once the marmosets had taken the mealworms from the cup on three consecutive days, testing began. At testing, the marmosets were assigned randomly to receive one of the three playback sequences, namely, (1) Call A; (2) Call C; and (3) background noise from the animal house. No mealworms were in the cup before or during the experiment. The marmoset that was being tested was given a bowl of banana on the table in the indoor room. The banana was presented in the bowl on the table and was mashed to ensure that the marmoset that was being tested remained at the bowl at least initially during the playbacks, and could not pick up a piece of banana and move away from the table. The playback sequences of the calls were presented once the marmoset had begun to eat the banana and had done so for a minimum of 30 s. If a call indicated that a more preferred food was available, or even more specifically, that insects were available, it was predicted that, on hearing this call, the marmoset should leave the bowl containing the banana and go to look for the mealworms in the cup at the far corner of the room. Either Call A or Call C was played back via a speaker that was placed 1 m from the bowl of banana, and approximately 4 m from the bowl of mealworms. Call A was chosen to be compared to Call C, because the previous experiments had shown that Call A was a non-specific call that was given to all of the food types, and mainly during consumption of food. It was also necessary to playback a control sound, which was a background noise that was recorded in the animal house. 
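Calls A, B, and C were separated above by their duration and frequency range in sonograms. As a minimal illustration of how such spectro-temporal parameters can be measured from a recording, a spectrogram can be computed with SciPy; the file name and the energy threshold are placeholders, and this is only a sketch, not the Raven-based workflow used in the study:

```python
# Minimal sketch: measure the peak frequency and rough duration of a recorded call
# from its spectrogram. 'call.wav' is a hypothetical file name, not study data.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("call.wav")
if audio.ndim > 1:                       # keep one channel if the file is stereo
    audio = audio[:, 0]

freqs, times, power = spectrogram(audio.astype(float), fs=rate, nperseg=1024)

peak_freq_hz = freqs[power.sum(axis=1).argmax()]        # dominant frequency overall
frame_energy = power.sum(axis=0)
voiced = frame_energy > 0.1 * frame_energy.max()        # crude energy threshold for call frames
duration_s = times[voiced][-1] - times[voiced][0] if voiced.any() else 0.0

print(f"peak frequency ~ {peak_freq_hz / 1000:.1f} kHz, duration ~ {duration_s:.2f} s")
```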
Playback Sequences All of the playback sequences were taken from recordings that were made during Experiment 3 using a Sennheiser microphone and a Marantz audio recorder. The Call A sequences were derived from recordings that were made when fruit had been presented, and the Call C sequences were taken from the recordings when either mealworms or crickets had been presented. Individuals were played sequences that had been recorded from their cage mate only. Playback of background noise from the animal house consisted largely of the sounds of an air conditioner and other noises that were familiar to the marmosets. No calls of any kind were present in this recording and it was 55 s in duration. The playback sequences of Call A were arranged using Adobe Audition, by extracting nine individual calls of type A and arranging each of them in a 55 s sequence, with 1.0, 3.0, or 10 s of silence between calls, and in that sequence of occurrence. The same procedure was used to prepare the playback sequences of Call C. These sequences were then cleaned of background noise and other sounds using Raven. All of the playback sequences were calibrated to 65 decibels at a distance of 1.0 m from the speaker. Subjects and Recording of Behavior Each marmoset was tested in isolation. The same marmosets as those that were used in the previous experiments were tested, excepting one male. The call sequences were played back through a speaker that was placed next to the table (1 m from the bowl of banana), facing towards the back of the room. The playbacks were presented via a Behringer 2-way active ribbon studio reference monitor with a Kevlar woofer Model B3030A, from an Apple iPod shuffle. Only one playback was presented per day and the order of the calls and the control sound was pseudo-randomised so that the playback sequence of any call type was not given twice in a row. The tests were repeated once for each marmoset (Test 1 and Test 2, separated by an interval of one month). The behaviour of the marmoset was video recorded. Scoring The behaviour was scored during 4 min of testing, starting from the beginning of the playback. The latency from the time of commencing playback to the first inspection of the cup was measured. As a strict criterion was used for scoring the inspection, the marmoset had to look directly into the cup and not just approach the cup. Marmosets often put their faces inside the cup when inspecting. The marmosets that did not inspect the cup during the test period were scored as a maximum latency of 240 s. Experiment 1: Determining the Relative Preferences of Food Types Used in Subsequent Tests The choice scores (i.e., preference) for the four food types were calculated (Figure 2). The scores for the first set of choice trials, which were conducted before the main experiments, were arcsine transformed using asin( √ n)x 52.298, as recommended by Zar [24], and were then analysed by one-way ANOVA with the food type as a repeated measure. A significant difference between the preferences for the different food types was found (F (3,24) = 4.231, p = 0.016). Post hoc comparisons revealed that there was a tendency towards a higher preference for banana over blueberries (Least Significant Difference (LSD): p = 0.051), a significantly higher preference for mealworms over blueberries (LSD: p = 0.024), and for crickets over blueberries (LSD: p = 0.034). The blueberries were preferred the least of all of the food types. 
There were no significant differences between the preferences for banana versus mealworms (LSD: p = 0.272), banana versus crickets (LSD: p = 0.242), or between mealworms versus crickets (LSD: p = 0.909). Since the scores for the second set of trials, which were conducted after completing Experiments 2, 3, and 4, could not be normalised by transformation, non-parametric tests were used. A significant heterogeneity was found in the preferences for the different food types (Friedman test: χ 2 = 11.830, N = 9, p = 0.008). Post hoc comparisons (Wilcoxon Signed Ranks Tests) revealed that there were significantly higher preferences for mealworms over blueberries (Z= −2.310, p = 0.021) and mealworms over banana (Z = −2.666, p = 0.008), as well as for crickets over blueberries (Z = −2.240, p = 0.025) and crickets over banana (Z = −2.240, p = 0.025). There were no significant differences between the preferences for banana and blueberries (Z = 0.0, p = 1.0), or between mealworms and crickets (Z = 0.059, p = 0.953). A comparison between the first and second data sets showed that banana was preferred more in the first testing period compared with those in the second testing period (Wilcoxon Signed Ranks, Z = −2.521, p = 0.012). No significant change occurred in the preferences for blueberries (Z = −0.674, p = 0.500), mealworms (Z = −0.338, p = 0.735), or crickets (Z = −0.350, p = 0.726). The marmosets chose insects, either crickets or mealworms, by preference over fruit, banana or blue berries. Except for banana, the preferences for the food types were consistent across the two tests. Experiment 2: Vocal Responses of Marmosets Tested in Pairs The mean numbers of calls that were produced (±SEMs) per pair in 3 min with the lid on and 3 min with the lid off are presented in Figure 3. No calls of type A, B, or C were produced during the 6 min pre-test period, although some phee calls were heard during this period. Some trills were heard when the experimenter entered the room with the food bowl, but Calls A, B, and C were produced only when the food was presented. Note that Calls A and B were made in the presence of all of the food types, but call C was produced only when the mealworms and crickets were presented. Non-parametric tests were used as the data could not be normalised by transformation. The data were first analysed for heterogeneity using Friedman's Tests for each call type, and separately in the lid-on and lid-off conditions, with the food type as the factor. If significant heterogeneity was found, post hoc tests (Wilcoxon Signed Ranks tests) were applied to locate the differences between the food types. Comparisons between the lid-on and the lid-off conditions were made using Wilcoxon Signed Ranks tests. N = 6, since 6 pairs were tested. Call A: There was no significant heterogeneity with respect to the food type in the number of A calls in the lid-on condition (Friedman Test, χ 2 = 4.143, p = 0.246: Figure 3 top graph), or in the lid-off condition (χ 2 = 4.636, p = 0.200: Figure 3 bottom graph). Hence, these scores were lumped across the food type to compare the lid-on versus lid-off scores. There were significantly more A calls made in the lid-off condition than the lid-on condition (Z = −3.291, p = 0.001). In other words, Call A was produced more often when the marmosets had access to the food than when they could see but not eat it. 
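The pattern of analysis used here, a Friedman test across the four food types followed by Wilcoxon signed-rank post hoc comparisons, can be reproduced with standard statistical libraries. A minimal sketch with invented call counts (not the study's data):

```python
# Sketch of the non-parametric analysis used above: a Friedman test across the four
# food types, followed by Wilcoxon signed-rank post hoc comparisons. The call counts
# below are invented for illustration; they are not the study's data.
from scipy.stats import friedmanchisquare, wilcoxon

# One list of call counts per food type; entries correspond to the six pairs.
banana    = [12, 9, 15, 11, 8, 14]
blueberry = [10, 7, 13, 12, 9, 11]
mealworm  = [3, 5, 2, 6, 4, 3]
cricket   = [4, 6, 3, 5, 2, 4]

stat, p = friedmanchisquare(banana, blueberry, mealworm, cricket)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}")

if p < 0.05:   # locate pairwise differences only if overall heterogeneity is found
    for name, counts in [("blueberry", blueberry), ("mealworm", mealworm), ("cricket", cricket)]:
        w_stat, w_p = wilcoxon(banana, counts)
        print(f"banana vs {name}: W = {w_stat:.1f}, p = {w_p:.3f}")
```

Note that SciPy reports the Wilcoxon W statistic rather than the Z values reported in the text.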
Call B: There was no significant effect of food type in the number of B calls in the lid-on condition (χ 2 = 7.286, p = 0.063) or in the lid-off condition (χ 2 = 4.059, p = 0.255). Hence, the number of B calls were lumped across the food type to compare the lid-on versus lid-off condition, but no significant difference was found between these two conditions (Z = −0.242, p = 0.808). Hence, although no B calls were produced unless food was present, the call was non-specific for food type or whether or not the food was accessible. Call C: During the lid-on condition, there was a significant difference between the number of C calls that were elicited by the presentation of the different foods (χ 2 = 16.22, p = 0.001), and also in the lid-off condition (χ 2 = 13.50, p = 0.004). No C calls were elicited by either blueberries or banana. Post hoc tests using Wilcoxon Signed Ranks tests revealed that, in the lid-on condition, significantly more C calls were produced when mealworms could be seen than when blueberries (Z = −2.207, p = 0.027) or banana could be seen (Z = −2.207, p = 0.027). Also, significantly more C calls were elicited by crickets than by blueberries (Z = −2.226, p = 0.026) or banana (Z = −2.226, p = 0.026). There was no significant difference between the number of C calls that were given for crickets and mealworms (Z = −1.153, p = 0.249). In the lid-off condition, post hoc tests revealed that significantly more C calls were produced when mealworms were presented than when blueberries (Z = −2.232, p = 0.026) or banana (Z = −2.232, p = 0.026) were presented (i.e., the scores in the mealworm presentations were significantly above zero, and no C calls were produced for the presentations of blueberries or banana). C calls were produced when access to crickets was possible, but the response was variable and none of the comparisons to C calls that were elicited by the other foods were significant (crickets versus mealworms, Z = −0.677, p = 0.498; crickets versus banana, Z = −1.633, p = 0.102; and crickets versus blueberries, Z = −1.633, p = 0.102). There were significantly more C calls given to mealworms in the lid-on condition than in the lid-off condition (Z = −1.992, p = 0.046). There was no significant difference between the number of C calls that were given in response to seeing crickets in either the lid-on or the lid-off conditions (Z = −1.687, p = 0.092). Hence, for mealworms, Call C was produced more frequently when the lid was on than when the lid was off, and it is note that this was opposite to the result for Call A. Experiment 3: Vocal Responses of Marmosets Tested in Isolation The results of Experiment 3 are presented in Figure 4. This figure does not include the control period prior to presentation of the food, as no A, B, or C calls were made during this period. Figure 3 but here each marmoset was tested alone and N = 12. Bars marked with (a) differ significantly from those marked with (b), those marked (c) differ significantly from those marked (d) and similarly for those marked (e) versus (f). Call A: Non-parametric tests were used, as the data could not be normalised by transformation. There were significant differences between the number of A calls that were given for different foods during the lid-on condition (Friedman Test: χ 2 = 10.481, p = 0.015) and also the lid-off condition (χ 2 = 7.907, p = 0.048). 
The post hoc Wilcoxon Signed Ranks test revealed that, during the lid-on condition, there were significantly more A calls that were produced when the marmosets could see banana than when they could see mealworms (Z = −2.521, p = 0.012) or crickets (Z = −2.310, p = 0.021). However, no significant difference was found between the banana and blueberries (Z = −1.245, Call B: Fewer B calls were given than either of the A calls or C calls. The scores of the B calls were analysed by ANOVA as they could be successfully normalised by log transformation. The factors that were used in the ANOVA were the food type as a repeated measure, and lid-on versus lid-off as a factor. There was a significant difference between the number of B calls that were given for different foods (F (3,33) Hence, more B calls were produced when the marmosets had access to crickets compared with the other three food-types. Call C: The data were analysed by ANOVA as they could be successfully normalised by log transformation. The factors that were used in the ANOVA were the food type as a repeated measure, and the lid-on versus lid-off conditions as the factor. There was a significant main effect of the lid-on versus lid-off condition (F (1,11) = 12.136, p = 0.005), and a significant interaction between the two factors (F (3,33) = 6.130, p = 0.002). There was also a significant difference between the number of C calls that were given for the different foods (F (3,33) = 56.773, p < 0.0005). Pairwise LSD comparisons revealed that significantly more C calls were produced on seeing mealworms compared with blueberries (p < 0.0005), mealworms compared with banana (p < 0.0005), crickets compared with blueberries (p < 0.0005), and crickets compared with banana (p < 0.0005). However, there was no significant difference between the number of C calls that were produced on the presentation of mealworms or crickets (p = 0.419). No C calls were elicited by banana or blueberries. Post-hoc tests showed that there were significantly more C calls given to mealworms during the lid-on than the lid-off condition (paired t-test: t = 4.369, p = 0.001), although this was not the case for crickets (paired t-test: t = 0.783, p = 0.450). In summary, C calls were elicited only by mealworms and crickets, as found in Experiment 1, and more C calls were produced when the marmosets could simply see the mealworms than when they had access to them; note that this is the opposite to the results that were found for Calls A and B. Correlation of Preference for a Food and the Number of Calls Elicited The relationship between the number of calls that were emitted by individuals to the four food types in Experiment 3 was correlated with the percent of the food preference for each food type that was determined in Experiment 1, using Pearson's correlation test. The preferences that were determined in the two periods of testing in Experiment 1 were combined. These data were continuous (i.e., not polarised). A significant and strong negative correlation was found between percent preference for mealworms and the number of A calls that were given during the lid-off condition (r = −0.818, p = 0.007). For Call B, the percent preference for blueberries correlated positively with the number of calls that were produced upon seeing blueberries with the lid on (r = 0.768, p = 0.016). 
For Call C, the number of calls that were produced correlated negatively with the preference for mealworms in the lid-on condition (r = −0.729, p = 0.026) and the lid-off condition (r = −0.794, p = 0.011). No other correlations were significant (r values ranged from −0.571 to 0.553, p-values ranged from 0.108 to 0.806). The Number Calls Given to Food by Marmosets when in Pairs and when Alone (i.e., Experiment 2 Compared to Experiment 3) Comparisons were made to see how the calling rate was affected by the presence or absence of a social companion. In order to do this, the scores for the number of calls were collapsed across the food types. The lid-on and lid-off conditions were considered separately. In each comparison, two tailed paired t-tests were applied. As the individuals could not be identified using audio recordings of their calls for the paired tests (Experiment 2), the scores of the pairs were divided by two and were compared to scores of the individuals that were tested alone ( Figure 5). Call A: In the lid-on condition, significantly more A calls were made per marmoset when tested alone, compared to in pairs (p = 0.035), and the same was the case for the lid-off condition (p = 0.016). Call B: During the lid-on condition there was no significant difference between the number of B calls that were made when the marmosets were tested in pairs, compared to when they were tested alone (p = 0.059), although the trend to make more calls when isolated was noted. In the lid-off condition, the difference was significant (p = 0.026). The marmosets produced more B calls when alone and eating food than when in pairs and eating food. Call C: Since no C calls were recorded when the blueberries or banana were presented, for statistical analysis, the scores for this call were collapsed only across the tests in which the mealworms and crickets were presented. When these insects were presented with the lid on, significantly more C calls were emitted when the marmosets were tested alone, compared with when they were tested in pairs (p = 0.002). In the lid-off condition, fewer C calls were emitted but, despite this, significantly more C calls were given when the marmosets were tested alone compared with when they were tested in pairs (p = 0.0007). Hence, significantly more A, B, and C calls were emitted when the marmosets were tested alone than when they were tested in pairs. Experiment 4: Response to Playback of Calls A and C The data of latency to inspect the cup during the three types of playback is presented in Figure 6. As these data could not be normalised by transformation, non-parametric tests were conducted. Of the 11 marmosets that were tested, 7 inspected the cup during testing, whereas 4 of the marmosets gave no response to any playback sequence during test 1 or test 2. Since these four individuals were non-responsive in this experiment, they were not included in the analysis. Test 1: Significant heterogeneity was found in the latency to inspect the cup when hearing the different playbacks during test 1 (Friedman's test: χ 2 = 13.040, N = 7, p = 0.001). Post hoc tests revealed that there were no significant differences in the latency between the playbacks of background noise and Call A (Wilcoxon Signed Ranks tests: Z = −1.826, p = 0.068: Figure 6). There was a significant difference between the latencies during the playbacks of background noise and Call C (Z = −2.366, p = 0.018), and also between the playbacks of Call A and Call C (Z = −2.366, p = 0.018). 
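The latency comparison just described, with inspection latencies censored at 240 s for marmosets that never inspected the cup and compared across the three playback types, can be sketched as follows; the latencies are hypothetical, not the study's data:

```python
# Sketch of the Experiment 4 latency comparison: latencies to inspect the cup under
# three playbacks (background noise, Call A, Call C), censored at 240 s when no
# inspection occurred. The latencies below are hypothetical, not the study's data.
from scipy.stats import friedmanchisquare, wilcoxon

MAX_LATENCY = 240.0   # score assigned when the cup was never inspected during the test

def censor(latencies):
    """Replace missing inspections (None) with the maximum latency score."""
    return [MAX_LATENCY if x is None else x for x in latencies]

noise  = censor([None, 210.0, None, None, 185.0, None, None])   # seven responsive subjects
call_a = censor([None, None, 170.0, None, None, 160.0, None])
call_c = censor([55.0, 40.0, 62.0, 35.0, 70.0, 48.0, 58.0])

stat, p = friedmanchisquare(noise, call_a, call_c)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}")

w, wp = wilcoxon(call_c, call_a)      # post hoc contrast: Call C vs Call A
print(f"Call C vs Call A: W = {w:.1f}, p = {wp:.3f}")
```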
Test 2: Significant heterogeneity was found in the latency to inspect the cup, depending on the playback that was presented (Friedman's test: χ 2 = 8.857, p = 0.012). Post hoc tests showed no significant difference in the latency to inspect the cup between tests with background noise and Call A (Wilcoxon Signed Ranks tests: Z = −0.676, p = 0.499), but there was a significant difference between the background noise and Call C (Z = −2.371, p = 0.018), and also between the Call A and Call C playbacks (Z = −2.197, p = 0.028: Figure 6). In summary, the marmosets checked the cup after a short latency on hearing playbacks of Call C, whereas they rarely inspected the cup on hearing either the background noise or Call A and, on those occasions when they did so, it was after a much longer latency than when they had heard Call C. Discussion The three calls (A, B, and C) were made only in the presence of food and not in the 6 min prior to the presentation of the food. Only one of these calls, Call A, had been reported previously to be a food call of the common marmoset [3]. Our results indicate that marmosets have more than one food call, as is the case in other primates and closely related species, such as Geoffroy's marmosets [18], cotton-top tamarins [25,26], red-bellied tamarins [10], and golden lion tamarins [27], but had not been reported previously in common marmosets. Call A was emitted in the presence of all of the food types that were tested, fruit and insects, and more frequently when the food was being consumed, confirming the previous report by Vitale et al. [3]. Previous studies showed that food-associated calls were often positively correlated with a preference for a particular food [26,28], but there was no consistent association between the number of A calls that were produced and the measured order of the food preferences in our experiments, except for significantly more A calls that were produced when eating mealworms than when eating blueberries, and only when the marmosets were tested alone. In the lid-on condition, the marmosets that were tested alone produced more A calls when looking at banana compared to looking at either mealworms or crickets. Despite these differences, as seen in Figure 4, the number of A calls that were produced did not discriminate between fruit versus insects. This led us to conclude that Call A was nonspecific for the food type. Instead, the frequency of producing A calls may have been influenced by the arousal and recognition of the food. The function of Call A may have been to solicit group members to a food source, since the marmosets made these calls more frequently when they were tested alone compared with when they were tested in pairs, as found previously by Vitale et al. [3]. However, as the correlations showed, the more the marmosets preferred the mealworms, the fewer A calls they made when they had access to them (lid-off condition). It seemed that the consumption of the preferred food suppressed the production of Call A (discussed more below). From the results that were obtained, it was clear that Call B was produced less often than Calls A and C. The only significant findings about Call B were that (1) it was produced more often when the crickets were available in the lid-off condition; and, (2) although blueberries were the least favoured food type that was presented, significantly more B calls were vocalised the stronger the marmoset's preference for blueberries was, but only in the lid-on condition. 
These significant results indicated that Call B was a food call but it was difficult to draw any other firm conclusions. The intensity of the B calls was more similar to that of a close contact call (i.e., the trill [22]) and B calls were emitted at much lower intensities than the A or C calls. Hence, Call B may have been a close contact call that was given during feeding rather than a call that solicited others to a food source. Since no significant positive (or negative) correlations were found between the preferences for the other food types and the number of B calls that were emitted, it would be premature to draw any firm conclusion about this call. Further research will be required to decide on the function of Call B, although its low intensity suggests that it would have limited, if any, capacity to act as a referential signal, especially since marmosets tend to forage in sound-attenuating natural conditions. One of the clear findings of our study was that C calls were produced only when the marmosets were presented with mealworms and crickets, and that these calls were not emitted when fruit, banana or blueberries, was presented. The C calls were also given more often when the marmosets were observing mealworms and crickets, than when they were eating them. This result was opposite to that which was obtained for Calls A and B, suggesting that C calls may be given in response to discovering insects, or in anticipation of eating them. Marmosets in the natural environment consume insects as a large part of their diet, as these are an important source of protein [29]. Call C may, therefore, attract conspecifics to the discovery of an important dietary source. To test whether Call C could be a referential signal, the responses of the marmosets were scored during the playbacks of the call. If the marmosets responded to hearing the playbacks of the C calls by searching for insects, this would show that this call signals information about the availability of insects and, therefore, functions as a referential signal. As shown in Experiment 4, marmosets responded to hearing the playbacks of C calls by looking for mealworms in a location and a container that was known to contain mealworms on previous occasions, but not during the test. The marmosets inspected the cup very rarely or not at all during the tests in which either Call A or a background noise was played (when they did so, it was after a delay of at least 160 s). On hearing the playback of Call C, all of the marmosets left the table and inspected (peered into) the cup after a short latency of around 50 s. This is evidence that Call C conveys specific information about the food source and, hence, that it functions as a referential signal, whereas Call A does not. Moreover, our results provide some evidence in support of Call C being emitted intentionally, since there was an audience effect; that is, when the marmosets were tested in pairs compared to alone. According to Evans [6], the audience effect is exemplified by fewer calls being produced when the signaller is alone, since the signaller is able to suppress calling in the absence of a receiver. Our results showed the opposite and could be an adaptation of the marmosets' feeding habits to natural conditions, which demand foraging out-of-sight of conspecifics and a need to signal to group members the discovery of an important food source. Evans' research [6] was on chicks, which stay in groups and within visual and auditory contact. 
By contrast, marmosets often forage in dense forests, separately from each other, but remaining within auditory range. An increase in food calling when out of visual contact with conspecifics has been found in red-bellied tamarins [10]. In contrast, a study of cotton-top tamarins found that the presence or absence of a conspecific did not affect the call rate, although more food calls were given to preferred foods [20]. Di Bitetti [7] found that, in tufted capuchins, it was not the presence or absence of group members that mattered, but the distance from the finder to other conspecifics. Hence, different factors affect food calling in various primate species. In common marmosets, the presence/absence of conspecifics, and the food type were the primary factors that affected food calling. Producing more food calls when conspecifics are not in close proximity may benefit the social group by sharing food and/or benefit the finder of the food by enhancing vigilance and hence protection from predators [10]. Since common marmosets are prey to a number of species when in their natural environment, the recruitment of group members and social cohesion could benefit survival, as manifested in the group mobbing of predators [30]. Marmosets are also cooperative breeders [31] and the males and females share parenting; hence, sharing information about food with group members would be adaptive. Although few species have been shown to produce functionally referential food calls [2], our results suggest that at least one food call of common marmosets (Call C) was indeed functionally referential. Other primate species that have been shown to have functionally referential food calls are capuchins [13], rhesus macaques [32,33], chimpanzees [34,35], and Geoffroy's marmosets [36]. However, as pointed out by Clay et al. [2], since some of the testing procedures used prior training with the food, it is difficult to be sure that, during the playback of the food calls, the test subjects, knowing that food 'could' be present, were not responding to the caller's level of arousal rather than to specific information about the presence of the food. This could have been the case in our study, but we obtained a different result for Call A than for Call C. By presenting banana and waiting until the test animal was actively engaged in feeding on it before the test call was presented as a playback, our marmosets were not simply responding to the level of hunger. In the very least, the feeding marmoset interpreted Call C as signaling that a more preferred food was available. Not entirely distinct from this, but with greater specificity of the call, Call C might have signaled that 'insects' were available. Furthermore, the marmoset that was responding to the playback of the C calls did not approach the speaker but went to the site that was associated with mealworms in a different direction and at a different height in the room. In other words, the C calls were interpreted as a signal about the presence of a food type (insects) in a previously learnt location. It was not possible to say whether Call C was produced solely because the presentation was an insect (mealworm or cricket), or because these were the most preferred foods. However, during the first test of food preference (Experiment 1), the preference for banana was higher than for blueberries, but Call C was never elicited when banana was presented. This supports the conclusion that Call C is given only when insects are seen. 
It is interesting to consider whether the food calls of marmosets are produced spontaneously and 'honestly'. As mentioned above, the stronger the preference for mealworms, the fewer A calls were made while the marmoset had access to the food (lid-off condition). A similar result was found for Call C: the more strongly the marmoset preferred mealworms, the fewer C calls it produced in both the lid-on and lid-off conditions. Hence, while these calls may signal to conspecifics that food is available, some degree of suppression of call production occurs when the food is highly preferred. Deceptive calling about food has been noted in other species: some species that produce food calls use them deceptively to attract conspecifics when no food is present [37], while others increase the latency to commence food calling when they have found a highly preferred food source, allowing the finder to obtain more food without competition from conspecifics [7]. These potential aspects of food calling by common marmosets deserve further research.

Conclusions

Our study has confirmed that the call described previously by Vitale et al. [3], Call A, is a food call. It signals the availability of food, regardless of the food type (fruit or insects). Although Call A is given more frequently when the marmoset is eating alone than when it is paired with a cage-mate, its production can be suppressed, perhaps intentionally, when the food is highly preferred. One of the new calls that we investigated, Call B, is emitted only in the presence of food but far less frequently than the other calls, and at lower intensity. Our results suggest that it may be elicited more often by crickets than by mealworms or fruit, but further research will be required to determine whether this call is specific to the food type and whether it is produced intentionally and referentially. Our main finding is the discovery of a new call (Call C) that is specific to signaling the presence of insects (not fruit) and is a referential signal. This call is produced more frequently when insects can be seen than when they are actually consumed, and when the marmoset is tested alone rather than with its cage-mate. Our results show that C calls are not simply emitted when a preferred food is seen, as there was a negative correlation between the number of C calls produced and the strength of preference for mealworms. Also, even in a test in which the marmosets did express a preference for banana, no C calls were emitted. In fact, no C calls were produced in any of the tests in which banana or blueberries were presented. Our findings provide evidence of the referential function of a food call in common marmosets. They also have implications for improving the captive care of the species.
Scan-to-Building Information Modelling vs. HBIM in Parametric Heritage Building Documentation

This paper introduces studies developed on the Greco-Roman Museum at Alexandria, in Egypt. The heritage building was built by Dietriche and Steinon; its first 11 halls were completed in 1895 and it was opened by Khedive Abbas Helmy II. The museum embodies many elements of the Italian Renaissance, and the front façade, including the columns, the entablature, the pediment, and a staircase of white marble, follows the Doric order. The museum was surveyed using terrestrial laser scanning to produce a model with a high level of detail, translated into a Heritage Building Information Modelling (HBIM) prototype. The Scan-to-Building Information Modelling (BIM) approach was carried out by generating the semantics and components proper to Dietriche and Steinon's architectural grammar: first defining the object, then its geometry, and then its parameterization. Modelling the Museum's main façade in this way made it possible to better understand the architectural composition of its volumes, to produce hypotheses on its form according to the architectural pattern books, and to assess the restrictions of a workflow that goes from a digital survey to the HBIM model.

Introduction

There is a great need to develop digital documentation using advanced technologies that collect information with less time, effort, and cost, focusing on how this information can be converted into formal, organized documents and shared effectively. Several studies have investigated different workflows and methods in order to fill the gap between the existing building and the creation of its digital model.

Digital modelling of heritage buildings

Modelling the existing building with the new methodology of digital survey leads to a better understanding of cultural heritage, especially in terms of the expected output [2]. Nowadays, a point cloud model generated by a laser scanner, processed through a suitable management workflow, can be used to generate two-dimensional technical drawings such as sections in dedicated reverse-engineering software (e.g., MeshLab, Geomagic Design). The model can also be sectioned in CAD environments using planes or boxes to extract point sections useful for technical graphic representations. In the same approach, a textured mesh with orthographic images can be processed at high detail and resolution, typically after careful mesh repair and decimation. The conventional workflow changes when a BIM model is generated from a heritage building, produced in the form of a polygonal model controlled by parameters. Heritage Building Information Modelling (HBIM) is a solution to the process of documenting buildings and thus preserving heritage, and it can become an effective technique for preserving architectural and cultural heritage. The objectives of the project or the goals of the researchers affect what this technique must provide in terms of the accuracy of the method [3], the ability to analyse, the time consumed, and the level of communication between the parties of the project. It should be considered that there are different workflows for applying HBIM. For example, the surveying work should be done by a specialist with interpretative and measurement skills, and with the ability to use the technique.
Many skills are needed to complete the process correctly: the process depends on the feedback resulting from experience, which may lead to changes in the model, and each step in the process must be monitored closely.

The Greco-Roman Museum: a case study

The Graeco-Roman Museum was chosen as a case study, as one of the most important archaeological buildings in the city of Alexandria. This museum was first established in 1892 in a five-room apartment inside a small house on Rosetta Street, now known as Horriya Street. The museum was later transferred to a larger building near Gamal Abdel Nasser Street, where a larger number of archaeological artefacts are displayed [4]. The discovery of additional monuments that needed to be displayed in the museum made it necessary to enlarge it by extension and redevelopment. This development required comprehensive documentation of the old museum before embarking on the work, which consumed a lot of time, effort and cost; the HBIM technique was therefore needed to facilitate documentation. The museum has a beautiful neoclassical façade of six columns and a pediment bearing the large Greek inscription "MOYΣEION". The museum contains 27 halls displaying marble statues, mummies, carpets, coins and other antiquities, offering a view of Greco-Roman civilization in contact with ancient Egypt. It also contains a patio that will be used as a space for temporary exhibitions. The building entrance was designed in the Doric style, with columns, pediment and entablature. These elements are clearly visible in the western façade of the museum, where the main entrance is located. There are also elements such as doors, stairs and the ceiling that define the entrance. As a result of archaeological activity and the increase in discoveries, the museum needed renovation to increase the number of halls displaying the discovered relics; the Municipality of Alexandria therefore increased the number of halls, which now stands at 25. The study dealt with the part of the museum that includes the western façade, as part of a larger project: the redevelopment of the museum. This project includes the development of an integrated BIM model for the entire site to inform future decisions related to the maintenance, extension and management of the museum, as well as proposals for reuse and adaptation, repairs, presentation and exhibition material. The BIM model will be a central source of information and a platform for interdisciplinary collaboration, for use in different types of study, analysis, documents and proposals.

In the field of surveying historical architecture, there is continuous progress in the application of terrestrial laser scanning technology. This methodology allows the generation of a detailed 3D model of the captured elements in the form of a point cloud, which is utilized for inspection and metric inquiry. More precisely, the output of this type of survey is a set of points that forms the shell of the object and can be measured accurately [5]. This survey technique differs from traditional surveying in the high level of detail of the 3D reproduction, in the precision of the measurements, in the time required to capture data, and in its adaptability and flexibility to different project states and needs.
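To make the point-cloud preparation steps described above concrete, the following is a minimal sketch using the open-source Open3D library. It is an illustration under stated assumptions, not the workflow the authors used (they worked with Autodesk ReCap and Geomagic Design); the file names and slice coordinates are hypothetical.

```python
import open3d as o3d

# Load a registered scan (hypothetical file name).
pcd = o3d.io.read_point_cloud("museum_facade.ply")

# Remove sparse outliers (e.g., passers-by, vegetation noise).
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Decimate to a manageable density for modelling (1 cm voxels).
pcd = pcd.voxel_down_sample(voxel_size=0.01)

# Extract a thin vertical "section" by cropping with a box,
# mimicking the plane/box cuts used to derive technical drawings.
bbox = o3d.geometry.AxisAlignedBoundingBox(
    min_bound=(-50.0, 4.95, -50.0),   # 10 cm thick slice at y ≈ 5 m
    max_bound=(50.0, 5.05, 50.0),
)
section = pcd.crop(bbox)
o3d.io.write_point_cloud("facade_section.ply", section)
```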
This terrestrial laser scanning technique has many applications, useful for heritage documentation, conservation management and intervention decision making. 3D point cloud production can help in recognizing deteriorated elements, distinguishing original areas from restored ones, and classifying historical layers and materials. BIM is known for integrating a large amount of data into a single model through the use of many specialized programs, leading to the production of a large database. This database has contributed significantly to the development of research and is still being developed. Currently, this research is advancing through the possibility of integrating the point cloud into some of the available BIM software packages, thus allowing comparison between the techniques used in documenting heritage buildings that are discussed in this research. This includes comparison between the standardized object and the captured cloud [6]. One challenge is converting the 3D point cloud into BIM objects and creating an integrated library of historic elements to be stored within a specific database for a new HBIM model [1]. However, creating all these elements based on standards, pattern books, and original sources may take a lot of time, especially as these elements have very complex forms with many parameters to be constrained precisely.

Extended surveys and architectural representation

"All classical architecture of the Greco-Roman tradition is composed, or written, in one language of forms. These elements of classical architecture include specific Mouldings and assemblages of moulding called an Order. And Order is an accepted way of assembling a column (supporting element) with an entablature (spanning element) while imparting a certain character. In short, an Order orders a design. Orders are never applied after the building is designed, as they are generative" [7]. There are several source materials that can be used for reference when determining the proportions and forms, such as "The Classical Orders of Architecture" by Robert Chitham [8]. In this book, Chitham details how to construct many traditional forms, including the five architectural orders (Tuscan, Doric, Ionic, Corinthian and Composite) as well as many common mouldings, balustrades and other forms (Figure 1). The book was first published on the cusp of the computer age and therefore focused on traditional techniques to construct the orders using hand-drafting methods. Other sources provided detailed drawings of each order, with dimensions for each component, which were useful in the parametric modelling work (Figure 2). Existing building drawings and other documents (Figure 3) were also useful for the modelling process. As an original document, this plan was examined to check differences between the original design intent and the present building [9].
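The pattern-book approach lends itself naturally to a parametric sketch: a single module (the lower column diameter) drives the other dimensions. The ratios below are illustrative placeholders, not Chitham's actual tables, and would be replaced by values from the pattern book in use.

```python
from dataclasses import dataclass

@dataclass
class DoricOrder:
    """Parametric Doric order driven by a single module.

    The module is the lower column diameter D; the ratios are
    illustrative assumptions, to be replaced with the pattern-book
    values adopted in a given project.
    """
    lower_diameter: float  # metres, e.g. measured from the point cloud

    @property
    def column_height(self) -> float:
        return 8.0 * self.lower_diameter      # shaft + capital

    @property
    def entablature_height(self) -> float:
        return 2.0 * self.lower_diameter      # architrave + frieze + cornice

    @property
    def triglyph_width(self) -> float:
        return 0.5 * self.lower_diameter

order = DoricOrder(lower_diameter=0.90)
print(f"column height  : {order.column_height:.2f} m")
print(f"entablature    : {order.entablature_height:.2f} m")
print(f"triglyph width : {order.triglyph_width:.2f} m")
```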
Historical buildings knowledge: the HBIM process

Maurice Murphy, Eugene McGovern and Sara Pavia defined Historic Building Information Modelling (HBIM) in 2009 as "the procedure of remote data capture using laser scanning and the subsequent processing required in order to identify a methodology for creating full engineering drawings (orthographic and 3D models) from laser scan and image survey data for historic structures" [10]. A comparison between accurate survey data (point cloud files) and a created library of architectural components (parametric families in the Revit environment) was adopted as the methodology. By locating each parametric family in the related portion of the point cloud, it is possible to compare the two models, simplify the complex shapes, and create the final model as a lighter three-dimensional representation. In previous years, the use of BIM was studied in the management of existing buildings, but without considering how to generate their data. Currently, a new methodology has been developed, defined as Scan-to-BIM, which relies on the generation of a point cloud model converted into "in-place" mass objects, modelled separately without being stored in libraries. This building data can be shared with the different parties of the project in the form of a 3D model [11]. The aim of both approaches is the creation of a standard 3D model that can be integrated and shared using a specific data format such as IFC or another standard data scheme. The approach followed in this research was partly derived from Scan-to-BIM, while most of the components investigated on the Graeco-Roman Museum were reproduced in the digital domain in the form of geometric libraries, authored not as general components but as parametric, dedicated elements, in order to better fit the actual geometry. The HBIM methodology adopted in this way combined accurate retracing of the starting point cloud with the production of libraries directly connected to the Doric order grammar.

This research aims to generate complete, usable data from the TLS survey of a heritage building; it examines, using different survey and measurement techniques, the evidence of the achievable Level of Accuracy, and forms a point cloud to be integrated into a BIM environment. The parametric components of classical forms are not fully available in commercial platform libraries, and a parametric 3D model of the existing condition of the building is missing, although such a model should be available for consultation in any required intervention: from documenting the building itself, to comparison with heritage buildings of a similar architectural style, to the study of the building's deterioration and pathologies, to a renovation project [1]. The research seeks to explore the possibility of reversing the construction process by converting the real building into a digital model representing a particular case study, taking into account the accuracy of the model compared to the actual building. The research adopted the following methodology to achieve the stated aims through practical and analytical work.

Planning the survey: preliminary approach to a wide and complex data collection

The Museum's main façade was surveyed using the Terrestrial Laser Scanning (TLS) technique, which granted a reliable metric model, providing architectural details and textures through added image capture, in order to document materials, damage and state of preservation. A preliminary plan was prepared to allow a successful survey campaign, taking into account the geometries of the pediment, columns and entablature, which are the defining elements of the building, and to work on them from a scientific perspective: the campaign was prepared for the terrestrial laser scanning (Figure 4).
Some proportional systems, and even the choice of a typological element such as the pediment, column, or entablature, can in fact be properly evaluated in the light of an accurate survey model. Documentary sources on the geometry of the column were investigated in the literature. The survey was therefore also performed to investigate the geometric and constructive criteria chosen by the architect. For the best outcome of the laser scanning of the pediment, columns, and entablature, an accurate targeting project was drawn. The morphology of the entablature contains elements of increasing difficulty for laser scanning restitution: the Triglyph and the Mutule along the entablature contain in-depth details, repeated at constant distances, against a white and uniform background that is difficult to isolate by software. Furthermore, clouds sometimes blocked the sunlight during scanning of the entablature, causing darkened and whitened areas in the final point cloud model; removing these scans from the file was therefore required to correct the model (Figure 5).

Terrestrial laser scanning BLK360 (Leica) of the Museum

TLS techniques were applied, registering 22 outdoor scans of the monument: a general referenced point cloud of the Museum was authored using Autodesk ReCap and Geomagic Design software, processing colour raw files collected by a Leica BLK 360. The BLK360 (Figure 6) is the latest low-cost 3D scanner commercialised by Leica. The company put considerable effort into creating a compact product with a captivating design and user-friendly interface. The BLK 360 was selected for several advantages, including Wi-Fi connectivity and integration with Autodesk ReCap Pro software to transfer the surveyed data automatically. In addition, this device doesn't require high skills or an expert user, and all work is completed through a tablet. The weight and size of the device make manoeuvring easier in the surveying of cultural heritage [12]. Table 1 reports the specifications of the BLK 360.

Getting the point cloud output

The pediment, columns and entablature of the Graeco-Roman Museum main façade were modelled following a workflow consisting of four stages: they were surveyed with high-definition techniques (data collection stage), the survey results were processed and registered in overall point clouds (data processing stage), the architectural elements were recognized and classified into categories after segmentation and mesh creation (semantic abstraction stage), and finally they were imported into a BIM software package and converted to parametric families to be managed (BIM modelling stage). After the first two stages of the process, the point cloud model has finite precision, and deformations and distortions may be present that must be repaired during processing. Furthermore, the model is a single block that does not distinguish between the different architectural elements, which must be classified before conversion to BIM components. Since a predefined architectural grammar composed from pattern books was available, an HBIM approach as meant by its original definition [13] was partially pursued. Instead of superimposing digital libraries of components onto the point clouds, the digital parametric objects were inferred by retracing geometries over the survey references.
Semantics and parametric modelling

The main façade has many complex details that were analysed and classified following the Scan-to-BIM approach, taking advantage of the generated and segmented point clouds. Even though BIM is a process mainly devoted to new constructions [14], in which information is shared among actors using digital models, the scientific literature has proved it can be extremely useful on existing cultural heritage as well, regardless of the age of the investigated context. HBIM models depend on standard geometries, whose parameters control the architectural scale, proportions, identity, and semantic interactions [15]. Thus, the HBIM model incorporates a great deal of heterogeneous data for all the architectural elements and can be an ideal database for any conservation project. The point cloud digital model, despite its precision and accuracy, cannot be used for quantity take-offs, building simulation, or extraction of the architectural drawings necessary for documentation without being introduced into the BIM environment [16]. The HBIM model, in turn, requires the prepared point cloud model for accurate retracing and for creating the desired documentation drawings of the building and its details, with the added flexibility of parameters. A parametric model, in fact, is a representation that binds the architecture of its components to numerical variables [17] that can be modified based on semantic relationships [7] (columns are always connected to the entablature they support, even if these change their initial shape), mathematical formulas [6] (in order to achieve the geometrical standards of the pattern books and the ideal proportions, through construction equations or polynomials that treat the parameters as variables), or variable constraints (defining the relation between elements under certain conditions, following explicit mathematics that excludes alternative configurations). Starting by importing the survey data into a BIM software package such as Autodesk Revit 2018, the parametric modelling was arranged grammatically to describe the architectural patterns and their details (Figure 7). Revit's parametric engine, unlike generic CAD software, manages the construction of a three-dimensional model by specialising the result to the architectural scale: it is not possible to use this modelling environment as an electronic drafting tool, since its main goal is the virtual construction of the investigated building's digital prototype. The major drawback of the HBIM approach is the high time consumption required to model correctly and to identify the parameters and equations for constraining the model so that its morphologies can be changed to fit the real building. The automation of this process is still under test and research in order to reduce the problem of time consumption. A high skill level is also required, and knowledge of the graphic engine's specifications is essential. A lower number of parameters does not bring advantages in terms of architectural reference values, while a system of overabundant or poorly distributed constraints does not allow the parameters to produce correct geometries. In general, constraints and parameters were prepared first for the main elements, such as the entablature, while subdivisions and details (such as Triglyphs or Mutules) were added to the former without introducing further dedicated variable parameters.
In this way, by making grammar choices from global parameters, the proportional relationships were highlighted through the association only of numerical rules and geometries implied by the point clouds. Considering the model as a collector of data related to the Museum, the point cloud remains available: the parametric representation can be considered an interpretation of building components as well as a geometric code for extended data contents.

Digital reconstruction

The architectural composition of the Museum is the result of the semantics proper to every single component assembled. The final HBIM model of the exterior façade incorporates the functions of a digital model with the ability to enrich historic data (photos providing data about materials and manifestations of deterioration were, for example, linked to walls and columns), to gather information on the original building techniques and on modifications to materials and structures that occurred over the years, and to perform simulations and analyses. The building's columns in the front façade, for instance, were not built according to the original elevation drawing and changed during the lifetime of the monument. The model generated could thus be used as a knowledge framework to simulate the architect's intent, with different bases, shafts and capitals of the columns, and a different entablature, instead of the actual ones (Figure 8). Furthermore, the model was already used experimentally to better understand the relation between the external stone walls and the new extension design (Figure 9). The exterior boundary and all the significant construction details needed to be coordinated with the new design, with minimal clashes between elements. Even if a complete analysis considering both exterior and interior walls would have been much more significant, this first attempt proves the versatility and the updating possibilities of the digital reconstruction.

Conclusion

This study presented a methodology for developing a semantic-aware, high-quality 3D model capable of connecting geometric-historical study with descriptive thematic databases. A centralized HBIM will thus serve as an extensive data set of information in the field of conservation, especially for documentation work. The use of laser scanning can help to record the heritage building in very high detail, and the use of accurate parametric objects can help automate the HBIM process. An HBIM parametric model is conceived to improve information quality and quantity as knowledge of the existing building grows; when this approach is applied to an existing monument, such as the Graeco-Roman Museum in Alexandria, data collection becomes a complete digital repository aimed at the knowledge of the architectural heritage. This paper showed that the methodology and its tools certainly have to be improved in terms of accuracy, reliability and automation; the theme of model accuracy is still open, and the expected precision of a BIM model is perhaps a case-by-case matter still in progress. Among its winning aspects, however, BIM modelling makes it possible to build data sets that remain accessible to subsequent studies and implementations. The approach based on mixing Scan-to-BIM with an HBIM comparison against digital libraries of elements modelled in place can be successful in gathering data even on wide and complex monuments.
In addition, this framework was applied by the researcher, as a practitioner, in the rehabilitation project in order to reduce the time, cost, and effort expended in the documentation work. Future work will focus on the survey and modelling of the whole building, since an HBIM repository has to refer to the complete investigated context in order to better understand how the building was constructed and how many issues can be solved. This research shows many of the advantages and limitations of the HBIM approach to documenting heritage buildings; further research should develop the methods and techniques to automate the process and reduce time consumption in the management of cultural heritage conservation.
Deep Interactive Region Segmentation and Captioning

With recent innovations in dense image captioning, it is now possible to describe every object of the scene with a caption while objects are determined by bounding boxes. However, interpretation of such an output is not trivial due to the existence of many overlapping bounding boxes. Furthermore, in current captioning frameworks, the user is not able to involve personal preferences to exclude out-of-interest areas. In this paper, we propose a novel hybrid deep learning architecture for interactive region segmentation and captioning where the user is able to specify an arbitrary region of the image that should be processed. To this end, a dedicated Fully Convolutional Network (FCN) named Lyncean FCN (LFCN) is trained using our special training data to isolate the User Intention Region (UIR) as the output of an efficient segmentation. In parallel, a dense image captioning model is utilized to provide a wide variety of captions for that region. Then, the UIR is explained with the caption of the best-match bounding box. To the best of our knowledge, this is the first work that provides such a comprehensive output. Our experiments show the superiority of the proposed approach over state-of-the-art interactive segmentation methods on several well-known datasets. In addition, replacement of the bounding boxes with the result of the interactive segmentation leads to a better understanding of the dense image captioning output as well as accuracy enhancement for object detection in terms of Intersection over Union (IoU).

Introduction

As one of the main sources of human knowledge, our visual system, including the eyes, optic nerves and brain, is able to easily detect, separate and describe each object of a scene. Inspired by this natural ability, interactive region segmentation and captioning is the task of parallel detection, separation and description of the visual user interests. This procedure can be exploited in several complex applications such as automatic image annotation and retrieval [25,53]. To approach the task, one needs a full understanding of the scene, which is equivalent to recognizing and also locating all the visible objects. To this end, several object recognition techniques [16,49,54] have been proposed to detect image objects at different scales. In most of the literature, detected objects are indicated by drawing bounding boxes around them. Although this notation facilitates the detection process by decreasing its computational complexity, such an output is less informative when dealing with the geometrical properties of the objects. As a more illustrative visual recognition technique, semantic segmentation [5,35,37,38] aims to assign a label to each pixel of the image, where the labels can be class-aware or instance-aware.

Figure 1: Input image including positive and negative user clicks (a), probability map of our LFCN considering user intention (b), output of the dense image captioning [26] (c), best match bounding box w.r.t. user intention (d), and the final output of the proposed model including determined user intention region (UIR) and its description (e).
While the multi-level nature of semantic segmentation increases the problem dimensionality, interactive image segmentation [3,17,48] adjusts the segmentation task to user priorities in a simpler problem space. In reality, it is reasonable to expect that human users have a more restricted area of interest than the entire scope of the scene. Thus, the multi-dimensional semantic segmentation task can be shrunk to a binary segmentation problem aiming to separate the User Intention Region (UIR) as the foreground from other parts of the scene, which requires less time and computation. Equipped with the rich semantic memory of visual data, a human observer is easily able to provide detailed explanations about different parts of an image, which is a hard task in artificial intelligence. Thanks to recent developments in language models [28,42], image captioning [6,10,13,29,39,58,60] makes it possible to produce linguistic descriptions of an image through a multimodal embedding of the visual stimuli and the word representation [43] in a joint vector space [28]. In this paper, we propose a novel hybrid deep architecture for integrated detection, segmentation and captioning of the user preferences, where the amount of user interaction is limited to one or a few clicks. To this end, we designed a heuristic technique for the efficient generation of synthetic user interactions. In addition, the new architecture of the proposed Lyncean Fully Convolutional Network (LFCN) gives a sharper view to the deep component responsible for interactive segmentation. Last but not least, as depicted in Fig. 1, our combination of the interactive segmentation and dense captioning tasks introduces a new class of outputs where user intention recognition meets linguistic interpretation and vice versa. Let us stress at this point that our main contributions are (i) to provide the first deep framework for combined interactive segmentation and captioning, and (ii) to achieve significant improvements in interactive segmentation over other methods.

More Details on Related Works

With the increasing popularity of deep learning architectures [31,55,64], both detection and captioning procedures have attracted a new wave of attention. Convolutional Neural Networks (CNNs) [31,32] have demonstrated the ability to construct numerous visual features at different levels of abstraction through supervised learning. This property leads to feature generators that are able to reach near-human performance in various computer vision tasks [20]. In addition, the structure of Fully Convolutional Networks (FCNs) [38] made it feasible to apply inputs of any size to the network and generate the associated output in the same spatial domain. In contrast to CNNs, FCNs are able to maintain spatial information, which is crucial for pixel-level predictions such as semantic segmentation, object localization [49], depth estimation [11] and interactive segmentation. Furthermore, Recurrent Neural Networks (RNNs) [21,59] have shown potential for learning long-term dependencies, which is essential for simulating the continuous space of natural languages. Recently, CNN-RNN models have been proposed to wrap the detection and captioning tasks in an end-to-end learnable platform [26]. However, up to now the results appear to be mostly an unorganized and overcrowded set of captions and bounding boxes. These results are not easily understandable, especially in the presence of several overlapping region proposals (cf. Fig. 1 (c)).
In addition, they do not involve user intentions. Before the success of CNNs in object detection, some classical techniques such as Histograms of Gradients (HoG) [8], Deformable Part Models (DPM) [14] and selective search [56] (as an explicit region proposal method) were proposed. Later, in the Region-based CNN (R-CNN) model [16], each proposed region was forwarded through a separate CNN for feature extraction. This model had some drawbacks, such as a complex multi-stage training pipeline and an expensive training process. To overcome those obstacles, the Fast R-CNN model [15] was proposed, where a combination of a CNN and the Region of Interest (RoI) pooling mechanism is used to produce better information for region proposal. In Faster R-CNN [46], the CNN architecture is used not only for feature extraction but also for region proposal itself. This led to the invention of Region Proposal Networks (RPNs), which are able to share full-image convolutional features with detection networks. The main achievement of this innovation is the parallel detection and localization of the objects in one forward pass of the network. In spite of these improvements, such models are not able to backpropagate through the bounding box information. Recently, Johnson et al. [26] proposed a localization layer based on the Faster R-CNN architecture in which the RoI pooling mechanism is replaced by bilinear interpolation [18,23], making it possible to propagate backward through all the information. The primary purpose of early image captioning was image annotation [24,52], the automatic assignment of keywords to a digital image. By replacing keywords with sentences able to describe not only the image objects but also the semantic relations between them, image captioning received more attention. The main problem in automatic image description was the scarcity of training data. The recent development of large datasets including images and their descriptions [22,30,36] has made it feasible to expand learning-based captioning techniques. Classical image captioning approaches produced image descriptions via generative grammars [44,62] or pre-defined templates operating on specific visual features [2,9]. In contrast, recently developed deep learning solutions apply an RNN-based language model that is conditioned on the output vectors of a CNN to generate descriptive sentences [6,10,28,40,58,63]. With the growing popularity of interactive devices such as smartphones and tablets, interactive image processing has attracted more attention. Interactive segmentation offers a pixel-wise classification based on user priorities. Among the traditional approaches to interactive segmentation, stroke-based techniques [34,50,57] are often based on graph cuts. In these methods, an energy function based on a region/boundary division is optimized to find the segmentation result. Alternative approaches include random walks [17] and geodesics [3,7], mostly relying on low-level features such as color, texture and shape information. These types of attributes can be difficult to apply when the image appearance is complicated by complex lighting conditions or intricate textures. Recently, deep learning models have been used for interactive segmentation, where the information of the image is considered at higher semantic levels. To this end, FCNs, as the standard frameworks for pixel-wise end-to-end learning tasks, have been applied [33,35,61,65].
Proposed Method

Our model receives an input image as well as user interactions in the form of positive/negative clicks and provides a seamless framework to generate an accurate segmentation as well as an expressive description of the UIR. In the preprocessing step, an efficient morphological technique is used to provide a huge amount of training samples in the form of synthetic user interactions. Then, each set of positive/negative seeds is transformed into a separate Voronoi diagram, as shown in Fig. 2. Next, a sequence of dedicated LFCNs with different granularities is applied as the interactive segmentation module. Afterwards, a dense captioning architecture inspired by [26] is utilized to obtain a number of region proposals along with their captions. In the fusion step, a heuristic method combines the results of the localization, segmentation and captioning procedures to acquire the highlighted borders of the UIR along with its explanatory caption. In the following, we investigate all steps of our model in detail.

User Action Imitation

During interactive segmentation, the user is asked to provide some general information about the position of the intended region. The requested information consists of positive and negative seeds, as depicted in Fig. 2, which correspond to internal and external points of the UIR, respectively. Next, each set of seeds is used to shape a Voronoi diagram. We denote each seed by s_k, k ∈ {1, ..., n}. The value of pixel v_{i,j} of the Voronoi diagram is calculated as v_{i,j} := min{D_1, D_2, ..., D_n}, where D_k is the Euclidean distance from v_{i,j} to the seed s_k. To summarize, the value of each pixel in the Voronoi diagram is the Euclidean distance of that pixel to the nearest seed. For the sake of clarity, there should be a minimum inter-cluster distance between the two sets of seeds. In addition, a minimum intra-cluster distance is also required to keep the boundary regions of the clusters as distinctive as possible. So:
• Every pair of seeds in each set should preserve a pre-defined distance from each other.
• All the seeds of each set should preserve a minimum distance from the boundary pixels of the UIR (∂(UIR)).
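The per-pixel minimum distance to the nearest seed is exactly what a Euclidean distance transform computes, so the Voronoi channels can be generated in a few lines. A minimal sketch with SciPy follows, assuming clicks are given as (row, col) pairs; the truncation cap is an assumption of this sketch, not a value from the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def voronoi_channel(shape, seeds, cap=255.0):
    """Euclidean distance of every pixel to its nearest seed.

    shape : (H, W) of the image.
    seeds : iterable of (row, col) click positions.
    cap   : truncation keeping the channel in a bounded range
            (the exact cap value is an assumption here).
    """
    mask = np.ones(shape, dtype=bool)
    for r, c in seeds:
        mask[r, c] = False   # the transform measures distance to zeros
    dist = distance_transform_edt(mask)
    return np.minimum(dist, cap)

# One channel per click polarity, to be concatenated with the RGB image.
pos = voronoi_channel((480, 640), [(100, 120), (150, 300)])
neg = voronoi_channel((480, 640), [(20, 20), (460, 620), (240, 10)])
```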
As expected, natural collection of such data is unreasonably time-consuming and expensive. Recently, Xu et al. [61] proposed strategies for the synthetic generation of user interactions. They used random generation for positive clicks inside the UIR, while three distinct sets of negative clicks were chosen as: 1) random background pixels at a certain distance from the UIR, 2) a point cloud inside the negative objects, and 3) a uniform set of points surrounding the UIR. Their implementation is not publicly available; moreover, their first and second negative strategies do not mimic natural interactions, and the third may be computationally expensive (see equation (2) in [61]).

Morphological Cortex Detection (MCD). While the inside of the UIR can be quite small, the background region is usually large enough to provide useful geometric information about the UIR. Consequently, it is beneficial to generate negative seeds that surround the UIR uniformly. To provide an efficient implementation of such an interaction, we replace the third negative strategy proposed by Xu et al. [61] with a Morphological Cortex Detection (MCD) technique that noticeably improves computational efficiency. Moreover, this method is able to simulate the UIR cortex at different scales, which enables the convolutional filters of the LFCN to track the UIR geometry in different layers. To implement this idea, a 1-pixel-wide boundary shape of the UIR is extracted by performing a dilation on the binary mask of the UIR in the training dataset. Then, the original mask is subtracted from the dilation result. In the next step, this boundary path is traversed completely using a 3 × 3 window to transfer all the boundary points' coordinates into a 1-D array, from which the requested negative seeds can be selected uniformly. As the result of the MCD process, a uniform set of negative seeds is obtained that represents the cortex of the UIR perfectly. A visual illustration of this technique is shown in Fig. 3. During our experiments, positive clicks are simulated randomly inside the UIR while negative seeds are generated by the MCD mechanism at three different levels.
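The MCD procedure maps directly onto standard morphological operations. The sketch below follows the description above under stated simplifications: the paper traverses the boundary with a 3 × 3 window to order pixels along the contour, whereas plain row-major ordering is used here for brevity, and the per-level offsets are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def mcd_negative_seeds(uir_mask, n_seeds, offset=1):
    """Sample negative seeds on the cortex of a boolean UIR mask.

    Dilate the mask `offset` times, subtract the next-smaller dilation
    to keep a 1-pixel-wide ring at that distance, then pick seeds
    uniformly from the ring coordinates.
    """
    outer = binary_dilation(uir_mask, iterations=offset)
    inner = (binary_dilation(uir_mask, iterations=offset - 1)
             if offset > 1 else uir_mask)
    ring = outer & ~inner
    coords = np.argwhere(ring)          # (row, col) pixels of the ring
    idx = np.linspace(0, len(coords) - 1, n_seeds).astype(int)
    return [tuple(p) for p in coords[idx]]

# Three cortex levels, mirroring the three-level scheme described above
# (the specific offsets 1, 5, 10 are illustrative).
# seeds = [mcd_negative_seeds(mask, 5, offset=k) for k in (1, 5, 10)]
```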
Intention Recognition

For the task of intention recognition, we make use of a dedicated version of the standard FCN [38] in which the last two fully connected layers are replaced with three convolutional layers with decreasing kernel sizes of 7, 5 and 3. The impact of this alteration is a gradual growth of the receptive field, which improves the network's recognition of the objects' geometry. Hence, we named this architecture the Lyncean Fully Convolutional Network (LFCN). Through a proper use of zero padding, all the extended convolutional layers have the same output size. At the end of the extended part, the aggregated output of the additional layers is upsampled to the size of the input, as elaborated in [38].

Fusion Approach

In order to supplement the result of the interactive segmentation with a proper linguistic commentary, we employ the dense image captioning framework [26]. The internal RPN of this architecture provides confidence scores for the existence of an object in the proposed regions. After sorting the objectness scores in descending order, the top-ranked region proposals include the most reasonable captions for the objects of the scene. By comparing the interactive segmentation result with the bounding boxes, the best-match bounding box and the corresponding caption are obtained (Fig. 4).

Experiments

Datasets. For fine-tuning of the LFCN, we used the PASCAL VOC 2012 segmentation dataset [12]. The dataset includes 1464 images for training and 1449 images for validation, distributed over 20 different classes. We used all of these samples to generate our special training pairs in the preprocessing step. For the final validation of the model, as well as its comparison with state-of-the-art interactive segmentation, we utilized several well-known segmentation benchmarks, including Alpha Matting [47], the Berkeley segmentation dataset (BSDS500) [41], the Weizmann segmentation evaluation database [1], the image object segmentation visual quality evaluation database [51] and the VOC validation subset.

Preprocessing. To generate all the necessary training pairs for the interactive segmentation process, we produced positive and negative Voronoi diagrams with respect to each object visible in the VOC dataset. The positive seeds are selected randomly inside each object, while the MCD approach is used to generate three distinct sets of negative seeds at different distances from the intended object. In the last step, each combination of positive/negative Voronoi diagrams forms a unique training pair. This leads to the production of 97,055 interaction patterns. We reserved 7,055 instances for testing and used the rest as training data.

Fine Tuning of the Proposed LFCN Architecture

To reach the best quality for the interactive segmentation, our LFCN is trained at three different levels of granularity, as proposed in [38]: LFCN32s, LFCN16s and LFCN8s. The RGB channels of the input image are concatenated with the corresponding Voronoi diagrams to form a training instance. Consequently, the first convolutional layer of our LFCN contains five channels. During network initialization, the RGB-related channels are initialized with the parameters of the original FCN [38]. For the two extra channels associated with the Voronoi diagrams, zero initialization is the best choice, as also mentioned in [61]. The learning parameters of the finer networks are initialized from the coarser ones. The global learning rates of the networks are 1e-8, 1e-10 and 1e-12, respectively, while the extended convolutional layers use learning rates one hundred times larger. The learning policy is fixed, and we used a weight decay of 5e-3.

Metrics

In order to evaluate the UIR localization accuracy of the proposed model, we calculated the well-known Intersection over Union (IoU) measure between the detected UIR and the corresponding binary label of the validation samples. For a complete comparison between our model and other interactive segmentation techniques, three performance metrics are computed: pixel accuracy, mean accuracy and mean IoU. The segmentation task of the proposed approach can be considered a binary segmentation in which the classes are limited to foreground (UIR) and background. We therefore used a binary interpretation of the semantic segmentation metrics proposed by Long et al. [38]:
• Pixel Accuracy (Pixel Acc.): the proportion of correctly classified foreground (C_f) and background (C_b) pixels (true positives) to the total number of ground-truth pixels in the foreground (F) and background (B): Pixel Acc. = (C_f + C_b)/(F + B). Unfortunately, this metric can easily be influenced by class imbalance; a high pixel accuracy does not necessarily mean the result is acceptable when one of the classes is too small or too large.
• Mean Pixel Accuracy (Mean Acc.): the mean of the separate foreground and background pixel accuracies: Mean Acc. = (C_f/F + C_b/B)/2. This metric alleviates the imbalance problem but can still be misleading; for example, when the great majority of pixels are background, a method that predicts all pixels as background can still have seemingly good performance.
• Mean Intersection over Union (Mean IoU): the intersection over union is the matching ratio between the result of the object localization and the corresponding ground-truth label. This metric is the average of the intersection over union computed for the foreground and background regions: Mean IoU = (C_f/(C_f + FP_f + FN_f) + C_b/(C_b + FP_b + FN_b))/2. Here, FP and FN denote the numbers of false positive and false negative pixels of each class, respectively. This metric solves the previously described issues.
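The three metrics reduce to a few confusion-matrix counts. A minimal NumPy sketch of their binary form, following the definitions above (not the authors' evaluation code), is:

```python
import numpy as np

def binary_segmentation_metrics(pred, gt):
    """Pixel accuracy, mean accuracy and mean IoU for a binary mask.

    pred, gt : boolean arrays of the same shape
               (True = foreground/UIR, False = background).
    """
    tp = np.sum(pred & gt)      # C_f: correctly classified foreground
    tn = np.sum(~pred & ~gt)    # C_b: correctly classified background
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)

    pixel_acc = (tp + tn) / gt.size
    mean_acc = 0.5 * (tp / (tp + fn) + tn / (tn + fp))
    iou_fg = tp / (tp + fp + fn)
    iou_bg = tn / (tn + fn + fp)   # fp and fn swap roles for background
    mean_iou = 0.5 * (iou_fg + iou_bg)
    return pixel_acc, mean_acc, mean_iou
```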
Results

Test of localization accuracy. In the first step of the evaluation, we test our model with a random subset of unseen samples from the validation datasets. The response of the model to some instances is shown in Fig. 5. It can be noticed that the output of our approach achieves a considerable level of accuracy, as judged by the similarity of the model output to the corresponding ground truth. Furthermore, the confusing output of the dense image captioning is replaced with an explicit result in which the segmented UIR and its description are easily distinguishable. Fig. 6 (left diagram) presents a comparison between the localization accuracy of the proposed method and the internal RPN of DenseCap [26] in terms of the IoU obtained for the samples presented in Fig. 5. As illustrated, our model provides a significant improvement in localization accuracy. These results also demonstrate the proficiency of our model in combining interactive segmentation, region proposal and image captioning techniques.

Sensitivity analysis. In this part, we analysed the variation of the model output quality with the number of user interactions. As shown in Fig. 7, although the IoU can be improved by applying more user interactions that facilitate boundary detection, our model still provides very good results even with a minimum number of clicks. We also provide the mean IoU accuracy of the proposed model for five different datasets in Fig. 6 (right diagram), which confirms the satisfying performance of our model in the case of little interactive information. This noticeable property of our approach makes it convenient for real-world applications. During our experiments, the proposed method clearly achieved a satisfying segmentation outcome with just one click.

Comparison of the segmentation quality. In this part, we performed an extensive evaluation of the segmentation capabilities of the proposed method versus several prevalent segmentation techniques: Geodesic Matting (GM) [3], GrowCut [57], GrabCut [48], Boykov-Jolly (BJ) interactive graph cuts [4], Geodesic Star Convexity (GSC) [19], Geodesic Star Convexity with sequential constraints (GSCSEQ), Random Walker (RW) segmentation [17], Shortest Path-based interactive segmentation (SP) [27] and Matching Attributed Relational Graphs (MARG) [45]. In all the experiments, we generated five positive and five negative clicks randomly. For the approaches where the user interactions are defined as points or scribbles, we marked click positions with five-pixel-wide circles. To observe the impact of the extended part of the LFCN on the output quality, we also report all the accuracy measures for the normal version of the FCN. Tables 1 and 2 present quantitative results that confirm the superiority of our approach over several other segmentation techniques on five different benchmarks. As a qualitative comparison, Fig. 8 shows the final segmentation output of the methods in Table 1 for two different samples. As can be seen, our approach provides the most accurate segmentation result with respect to the semantic interpretation of the scene, using the same number of interactions.

Conclusion

In this paper, we presented a novel hybrid deep learning framework capable of targeted segmentation and captioning in response to interactive user actions. A wide variety of experiments confirmed our model's superiority over various state-of-the-art interactive segmentation approaches. In addition, further experiments demonstrated our model's capability to caption an arbitrary region of the image with one or a few clicks, which is especially convenient for real-world interactive applications.
Design of a flyback high-efficiency switching power supply

Switching power supplies are widely used in automotive, aviation, and DC speed control applications. Their main requirements are stable output and high efficiency. These are usually realized with resonant converters, but their control circuits are complex and the output ripple is large. In this paper, a new type of flyback switching power supply is designed using a flyback conversion circuit and a single-tube self-excited conversion circuit. Experiments show that the designed switching power supply improves the efficiency and reliability of the power supply while producing a low-ripple, continuously adjustable, and stable DC voltage.

Introduction

With the development of production and technology, the requirements for environmental protection and energy have become higher and higher, and the application of switching power supplies has become more and more widespread. There are many types of switching power supply circuit structures, including single-ended converters and double-ended converters. This article introduces a design method that uses a flyback conversion circuit to implement a 5 V switching power supply, which improves the efficiency of the power supply system and can be used as a mobile-phone charging power supply [1].

Design index requirements

We have built the switching power supply shown in Figure 1.

Rectifier filter circuit design

According to the rectification method, rectifier circuits can be divided into full-wave and half-wave rectification. In order to improve efficiency, the rectifier circuit generally adopts a bridge rectifier. Considering that the output power of the circuit is not very large, a three-phase bridge rectifier would increase the complexity of the circuit and increase losses, so this system uses a single-phase bridge rectifier. A bridge rectifier circuit composed of four diodes, or an integrated rectifier bridge, can be selected. The diodes used are the general-purpose 1N4007, which has good stability and meets the low-current requirement; the RS310 is recommended for the rectifier bridge. The rectified unipolar voltage fluctuates greatly and is not smooth enough, so a filter circuit is needed to filter out the AC component and turn the strongly fluctuating unipolar voltage into a relatively smooth DC voltage. The filter circuit is generally composed of capacitors, inductors and resistors; besides resistors, capacitors and inductors can be used alone. For low-power supplies, capacitor filtering is generally used.
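As a quick check of the capacitor-filter sizing discussed above, the standard first-order ripple estimate for a full-wave bridge, ΔV ≈ I/(2·f·C), can be rearranged for the minimum bulk capacitance. The paper does not state its component values, so the figures below are illustrative only.

```python
# First-order sizing of the bulk filter capacitor after a full-wave
# bridge, using the textbook ripple estimate dV ≈ I / (2 * f * C).
f_line = 50.0    # mains frequency, Hz
i_bulk = 0.05    # average current drawn from the bulk rail, A
                 # (≈ 12.5 W input / ~300 V rectified mains, illustrative)
dv_max = 20.0    # acceptable peak-to-peak ripple, V

c_min = i_bulk / (2.0 * f_line * dv_max)
print(f"minimum bulk capacitance ≈ {c_min * 1e6:.0f} uF")   # -> 25 uF
```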
DC/DC topology circuit scheme

According to the design requirements, the DC/DC converter should have a step-down function, and many circuit structures meet this requirement [2][3]. There are mainly two types: single-ended converters (flyback, forward) and double-ended converters (push-pull, half-bridge, and full-bridge). This project uses a single-ended flyback converter, because the flyback circuit uses the fewest components; experience shows that at lower power levels, the total cost of the power supply is lower than with other circuit technologies. A simplified schematic of the working principle of a single-ended flyback converter is shown in Figure 2.

Figure 2. Working principle of the single-ended flyback converter and diagram of the primary and secondary current waveforms.

When the excitation pulse applied to the primary-side main power switch VT is high, VT turns on and the DC input voltage E is applied across the primary winding NP. The induced voltage on the secondary winding is negative at the top and positive at the bottom, so the rectifier VD is reverse biased and off. When the driving pulse goes low and the switch VT cuts off, the polarity of the voltage across the primary winding NP reverses, and the rectifier becomes forward biased and turns on. The magnetic energy stored in the transformer is then transferred and released to the load. The single-ended flyback converter is therefore a kind of "inductive energy-storage converter". When the single-ended flyback converter works in the discontinuous magnetizing-current mode, the output voltage is given by formula (1), where ton is the conduction time of the switch, T is the switching period, and LP is the equivalent inductance of the primary winding. When it works in the continuous magnetizing-current mode, the output voltage is given by formula (2), where the duty cycle D = ton/T. It can be seen from these formulas that the single-ended flyback circuit can stabilize the output voltage when the grid voltage or the load changes. Most single-ended flyback converters work in the continuous magnetizing-current mode [3]. During the cut-off period of the switch VT, the voltage it bears is given by formula (3). When selecting the switching transistor, not only must the maximum value of the transformer's primary current not exceed the transistor's limit, but the voltage amplitude across the transistor must also not exceed its allowable value. In particular, during an open-circuit test the load should not be disconnected, to avoid a sharp rise in the output voltage that would damage the power transistor.

Power switch tube drive circuit

The switching-tube drive circuit is the most critical part of the switching converter. Switching power converters can be divided into self-excited and separately excited conversion circuits according to the excitation method. The system uses the self-excited conversion circuit shown in Figure 3: a self-excited inverter circuit that uses transformer coupling to form positive feedback.

Figure 3. Single-transistor self-excited conversion circuit.

In this circuit, T is the switching transformer, Lf is the feedback winding, and R1 provides the initial starting current for the base of the switching tube VT. C is a coupling capacitor, and R2 provides a discharge path for capacitor C. The transformer design method of the single-ended flyback switching power supply differs considerably from that of other converter types. Its design parameters mainly include the following two items.

Primary winding inductance LP. From formula (1), formula (4) can be derived.

High-frequency transformer core. The power of a flyback converter is usually small, and a ferrite core is generally used for the transformer; it is selected according to formula (5), where K is the filling factor of the magnetic core and Sf is the switching frequency [4][5][6].
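Before turning to the core tables, a numeric sketch of these relations helps fix the orders of magnitude. The paper's numbered formulas (1)-(5) did not survive extraction, so the sketch uses the standard textbook flyback relations, which involve exactly the quantities the text defines but are not necessarily the authors' exact expressions; all component values are illustrative.

```python
# Standard textbook flyback relations (assumed, see note above).
E = 310.0                  # rectified mains input, V
n = 0.02                   # turns ratio Ns/Np (illustrative)
D = 0.45                   # duty cycle D = ton / T

# CCM output voltage (the role of formula (2)) and the switch
# stress during cut-off (the role of formula (3)).
Uo = E * n * D / (1.0 - D)              # ≈ 5.1 V
U_vt = E + Uo / n                       # ≈ 564 V across VT

# Rearranging the DCM energy balance E^2 D^2 T / (2 Lp) = P_out for
# the primary inductance (the role the text assigns to formula (4)).
P_out = 10.0                            # 5 V x 2 A
T = 1.0 / 50e3                          # 50 kHz switching period
Lp = (E * D) ** 2 * T / (2.0 * P_out)
print(f"Uo = {Uo:.2f} V, U_VT = {U_vt:.0f} V, Lp = {Lp * 1e3:.1f} mH")
```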
From the magnetic core parameter table provided by the manufacturer, select a core whose area is greater than or equal to the required value; any such core meets the requirements. System design The overall design of the system is shown in Figure 4. Conclusion Compared with a conventional converter, the switch-tube drive circuit adopts a single-transistor self-excited conversion circuit: the control circuit is simple, the voltage stress on the switch tube is lower, and the efficiency is higher. Experiments show that the power supply has a low voltage-regulation rate, a good output waveform, and reliable operation, and that it is suitable for small- and medium-power applications.
2021-05-11T00:05:39.412Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "02c33ddf78e62d39c47f1a001681d34b44c1fee8", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1748/5/052042", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "97b9e463485bb31cff63e8d40152bd131ac3a7d7", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
237452518
pes2o/s2orc
v3-fos-license
1-Bit MIMO for Terahertz Channels This paper tackles the problem of single-user multiple-input multiple-output communication with 1-bit digital-to-analog and analog-to-digital converters. With the information-theoretic capacity as benchmark, the complementary strategies of beamforming and equiprobable signaling are contrasted in the regimes of operational interest, and the ensuing spectral efficiencies are characterized. Various canonical channel types are considered, with emphasis on line-of-sight settings under both spherical and planar wavefronts, respectively representative of short and long transmission ranges at mmWave and terahertz frequencies. In all cases, a judicious combination of beamforming and equiprobable signaling is shown to operate within a modest gap from capacity. I. INTRODUCTION As they evolve, wireless systems seek to provide ever faster bit rates and lower latencies, and a key enabler for these advances in the increase in bandwidth. From 1G to 5G, the spectrum devoted to wireless communication has surged from a handful of MHz to multiple GHz, roughly three orders of magnitude, and this growth is bound to continue as new mmWave bands open up and inroads are made into the terahertz realm [2]- [6]. Besides bandwidth, another key resource is power. Leaving aside the power spent in duties unrelated to communication, the power consumed by a device can be partitioned as P t /η + P ADC + P other where P t is the power radiated by the transmitter, η is the efficiency of the corresponding power amplifiers, P ADC is the power required by the receiver's analog-to-digital (ADC) converters, and P other subsumes everything else (including oscillators, filters, the transmitter's digital-to-analog (DAC) converters, and the receiver's low-noise amplifier). With B denoting the bandwidth and b the resolution in bits, each ADC satisfies where FoM is a figure of merit and κ ranges between two and four [7], [8]. Power consumption has traditionally been dominated by P t /η and thus high resolutions (b = 8-12 bits) could be employed. In 1G and 2G, a higher η was facilitated by the adoption of (respectively analog and digital) signaling formats tolerant of nonlinear amplification, but after 2G this took a backseat to spectral efficiency. Linearity has A. Lozano is with Univ. Pompeu Fabra, 08018 Barcelona (e-mail: angel.lozano@upf.edu). His work is supported by the European Research Council under the H2020 Framework Programme/ERC grant agreement 694974, by MINECO's Projects RTI2018-102112 and RTI2018-101040, and by ICREA. Parts of this paper were presented at the 2021 Int'l ITG Workshop on Smart Antennas [1]. since reigned, despite the lower η, as P t /η was well within the power budget of devices for the desirable P t . The advent of 5G, with the move up to mmWave frequencies and the enormous bandwidths therein, is a turning point in the sense of P ADC ceasing to be secondary, and this can only accelerate moving forward [9]. Consider this progression: with b = 10 at a typical 4G bandwidth of B = 20 MHz, P ADC is only a few milliwatts; for B = 2 GHz, it is already on the order of a watt; and for B = 20 GHz, it would reach roughly 10 watts. Indeed, as B continues to grow, P ADC is bound to swallow up the entire power budget of portable devices unless FoM or b change. But FoM is approaching a fundamental limit [8]. Moreover, while holding steady up to about B = 100 MHz, FoM drops sustainedly after that mark, which is coincidentally the largest 4G bandwidth. 
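The progression quoted above can be reproduced with a back-of-the-envelope calculation. The sketch below assumes a Walden-style model P_ADC ≈ FoM · 2^b · f_s with Nyquist sampling f_s = 2B and an assumed figure of merit of 100 fJ per conversion step; the paper's own model (1) and figure-of-merit convention differ in detail, so this is only an order-of-magnitude illustration.

```python
# Illustrative back-of-the-envelope for the ADC power progression quoted above.
# Assumes P_ADC ~ FoM * 2**b * f_s with Nyquist sampling f_s = 2B and an assumed
# FoM of 100 fJ per conversion step; the paper's exact model (1) is parameterized
# differently, so treat this only as a sketch.

def adc_power(bandwidth_hz, bits, fom_j=100e-15):
    return fom_j * (2 ** bits) * 2.0 * bandwidth_hz

for b_hz, label in [(20e6, "4G, 20 MHz"), (2e9, "2 GHz"), (20e9, "20 GHz")]:
    p = adc_power(b_hz, bits=10)
    print(f"{label:>12}: P_ADC ~ {p*1e3:.1f} mW")
```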
Inevitably then, b has to decrease and, ultimately, it should reach b = 1, to drastically curb the power consumption and to further enable dispensing with automatic gain control at the receiver while simplifying the data pipeline between the ADCs and the baseband processing [10]. While 1-bit ADCs curb the spectral efficiency at 1 bit per dimension, the vast bandwidths thereby rendered possible make it exceedingly beneficial. Going from b = 10 down to b = 1 cuts the spectral efficiency by a factor of 2-3, but in exchange B can grow by as much as 1000 under the same P ADC ; the net benefits in bit rate and latency are stupendous. Spectral efficiency is then best recovered by expanding the number of antennas, which P ADC is only linear in. This naturally leads to multiple-input multipleoutput (MIMO) arrangements with 1-bit ADCs. Although 1-bit ADCs at the receiver do not necessarily entail 1-bit DACs at the transmitter, and in some cases the spectral efficiency could improve somewhat with richer DACs, it is inviting to take the opportunity and adopt 1-bit transmit signals. This not only minimizes the DAC power consumption-somewhat lower than its ADC's counterpart, yet also considerable [11]- [13]-but it enables the power amplifiers to operate in nonlinear regimes where η is higher. Altogether, 1-bit MIMO architectures might feature prominently in future wireless systems, and not only for mmWave or terahertz operation: these architectures are also a sensible way forward for lower-frequency extreme massive MIMO, with antenna counts in the hundreds or even thousands [14]. All this interest is evidenced by the extensive literature on transmission strategies and the ensuing performance with 1-bit converters at the transmitter or receiver only (see [15]- [45] and references therein), and by the smaller but growing body of work that considers 1-bit converters at both ends [46]- [59]. Chief among the difficulties in this most stringent case stand (i) computing the information-theoretic performance limits for moderate and large antenna counts, and (ii) precoding to generate signals that can approach those limits. On these fronts, and concentrating on single-user MIMO, this paper has a two-fold objective: • To provide analytical characterizations of the performance of beamforming and equiprobable signaling, two transmission strategies that are informationtheoretically motivated and complementary. • To show that a judicious combination of these strategies suffices to operate within a modest gap from the 1-bit capacity in various classes of channels of high relevance, foregoing general precoding solutions. A. Signal Model Consider a transmitter equipped with N t antennas and 1-bit DACs per complex dimension. The receiver, which features N r antennas and a 1-bit ADC per complex dimension, observes where the sign function is applied separately to the real and imaginary parts of each entry, such that y n ∈ {±1±j}, while H is the N r × N t channel matrix, z ∼ N C (0, I) is the noise, and SNR is the signal-to-noise ratio per receive antenna. Each entry of the transmit vector x also takes the values ±1 ± j. Each antenna in the foregoing formulation could actually correspond to a compact subarray, in which case the model subsumes array-of-subarrays structures for the transmitter and/or receiver [60]- [64] provided SNR is appropriately scaled. 
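A minimal sketch of the 1-bit signal model in (2) follows. The scaling applied to Hx is an assumption chosen so that, for IID inputs and the stated channel normalization, the average per-receive-antenna SNR equals the `snr` argument; the paper's exact normalization may differ.

```python
import numpy as np

def one_bit_quantize(v):
    """1-bit ADC per complex dimension: sign of real and imaginary parts."""
    return np.sign(v.real) + 1j * np.sign(v.imag)

def one_bit_mimo_channel(H, x, snr, rng):
    """One use of the 1-bit MIMO channel y = sgn(a*Hx + z), z ~ CN(0, I).

    The scaling a is an assumption: it is chosen so that, for IID inputs with
    entries +-1 +-j and tr(HH^*) = Nt*Nr, the average per-receive-antenna SNR
    is `snr`; the paper's exact normalization in (2) may differ.
    """
    nr, nt = H.shape
    a = np.sqrt(snr / (2.0 * nt))
    z = (rng.standard_normal(nr) + 1j * rng.standard_normal(nr)) / np.sqrt(2.0)
    return one_bit_quantize(a * (H @ x) + z)

rng = np.random.default_rng(0)
nt, nr = 4, 4
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
x = rng.choice([1, -1], size=nt) + 1j * rng.choice([1, -1], size=nt)   # entries +-1 +-j
print(one_bit_mimo_channel(H, x, snr=1.0, rng=rng))
```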
For each given H, the relationship in (2) embodies a discrete memoryless channel with 4 Nt × 4 Nr transition probabilities determined by where the factorization follows from the noise independence per receive antenna and complex dimension. Each such noise component has variance 1/2, hence where h n is the nth row of H (for n = 0, . . . , N r − 1) and Q(·) is the Gaussian Q-function. Similarly, From (6) and (7), and, mirroring it, finally The transition probabilities correspond to (9) evaluated for the 4 Nr possible values of y and the 4 Nt values of x. If H is known, these transition probabilities can be readily computed. Conversely, if the transition probabilities are known, H can be deduced. The 4 Nt transmit vectors x can be partitioned into 4 Nt−1 quartets, each containing four vectors and being invariant under a 90 • phase rotation of all the entries: from any vector in the quartet, the other three are obtained by repeatedly multiplying by j. Since a 90 • phase rotation of x propagates as a 90 • phase rotation of Hx, and the added noise is circularly symmetric, the four vectors making up each transmit quartet are statistically equivalent and they should thus have the same transmission probability so as to convey the maximum amount of information. Likewise, the set of 4 Nr possible vectors y can be partitioned into 4 Nr−1 quartets, and the four vectors y within each received quartet are equiprobable. B. Channel Model If the channel is stable over each codeword, then every realization of H has operational significance and SNR is well defined under the normalization tr(HH * ) = N t N r . Conversely, if the coding takes place over a sufficiently broad range of channel fluctuations, that significance is acquired in an ergodic sense with E tr(HH * ) = N t N r [65]. The following classes of channels are specifically considered. a) Line-of-Sight (LOS) with Spherical Wavefronts: LOS is the chief propagation mechanism at mmWave and terahertz frequencies, and the spherical nature of the wavefronts is relevant for large arrays and short transmission ranges. For uniform linear arrays (ULAs) [66], where D rx and D rx are diagonal matrices with entries [D rx ] n,n = e −jπ 2n λ dr sinθr cosφ+ n 2 λD d 2 r (1−sin 2 θr cos 2 φ) and with D the range, λ the wavelength, d t and d r the antenna spacings at transmitter and receiver, θ t and θ r the transmitter and receiver elevations, and φ their relative azimuth angle. In turn,H is the Vandermonde matrix is a parameter that concisely describes any LOS setting with ULAs. Uniform rectangular arrays can be expressed as the Kronecker product of ULAs, and expressions deriving from (10) emerge [67]. For more complex topologies, the entries of H continue to be of unit magnitude, but the pattern of phase variations becomes more cumbersome. b) LOS with Planar Wavefronts: For long enough transmission ranges, the planar wavefront counterpart to (10) is obtained by letting D → ∞, whereby the channel becomes rank-1 with h n,m = e −j 2π λ (ndr sin θr cos φ+mdt sin θt) . c) IID Rayleigh Fading: In this model, representing situations of rich multipath propagation, the entries of H are IID and h n,m ∼ N C (0, 1). We note that the frequency-flat representation embodied by H is congruous for the two LOS channel models, but less so for the IID model, where the scattering would go hand in hand with frequency selectivity over the envisioned bandwidths. 
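The product-of-Q-functions structure of (4)–(9) and the planar-wavefront LOS channel are straightforward to code. The sketch below uses the same assumed input scaling as the signal-model sketch above; the Q-function is computed from the complementary error function.

```python
import math
import numpy as np

def q_func(x):
    """Gaussian Q-function, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def transition_prob(H, x, y, snr):
    """p(y|x) for the 1-bit channel as a product of Q-functions per real dimension.

    Uses the same (assumed) scaling a = sqrt(snr / (2*Nt)) as the signal-model
    sketch above; each real/imaginary noise component has variance 1/2.
    """
    nt = H.shape[1]
    s = np.sqrt(snr / (2.0 * nt)) * (H @ x)
    p = 1.0
    for sn, yn in zip(s, y):
        p *= q_func(-math.sqrt(2.0) * yn.real * sn.real)   # P[sgn(real part) = yn.real]
        p *= q_func(-math.sqrt(2.0) * yn.imag * sn.imag)   # P[sgn(imag part) = yn.imag]
    return p

def los_planar_channel(nt, nr, d_t, d_r, wavelength, theta_t, theta_r, phi):
    """Rank-1 LOS channel with planar wavefronts, h_{n,m} as in the expression above."""
    n = np.arange(nr)[:, None]
    m = np.arange(nt)[None, :]
    phase = (2 * np.pi / wavelength) * (n * d_r * np.sin(theta_r) * np.cos(phi)
                                        + m * d_t * np.sin(theta_t))
    return np.exp(-1j * phase)
```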
The analysis presented for this model intends to set the stage for more refined characterizations that account for the inevitable intersymbol interference. In fact, even for the LOS channels, over a sufficiently broad bandwidth there is bound to be intersymbol interference because of spatial widening, i.e., because of the distinct propagation delays between the various transmit and receive antennas [68], [69]. III. 1-BIT CAPACITY Denote by p 1 , . . . , p 4 N t −1 the activation probabilities of the transmit quartets, such that k p k = 1 and p x (x k ) = p k /4 with x k any of the vectors in the kth quartet. Letting H(·) indicate entropy, and with all the probabilities conditioned on H, the mutual information is − H(y|x) (17) where (17) follows from the equiprobability of the vectors in each received quartet and y ℓ is any of the vectors in the ℓth such quartet while p y (y) = with p y|x depending on SNR and H as per (9). Elaborating on (18), In turn, because of the factorization of p y|x in (9), H(ℜ{y n }|x) + H(ℑ{y n }|x) (20) where H b (p) = −p log 2 p−(1−p) log 2 (1−p) is the binary entropy function. Since changing i merely flips the sign of some of the Q-funcion arguments, and Q(−ξ) = 1 − Q(ξ) such that H b (Q(−ξ)) = H b (Q(ξ)), it follows that The combination of (17), (19), and (23) gives I(SNR, H), whose evaluation involves O(4 Nt−1 4 Nr−1 ) terms. This becomes prohibitive even for modest N t and N r , hence the interest in analytical characterizations. From I(SNR, H), the 1-bit capacity is with maximization over p 1 , . . . , p 4 N t −1 . Since I(SNR, H) is concave in p 1 , . . . , p 4 N t −1 and these probabilities define a convex set, (24) can be solved with off-the-shelf convex optimization tools. Or, the Blahut-Arimoto algorithm that alternatively maximizes p x and p x|y can be applied, with converge guarantees to any desired accuracy [70], [71]. In ergodic settings, what applies is the ergodic spectral efficiency and likewise for the ergodic capacity. Alternatively, if the channel is stable over each codeword, then I(SNR, H) and C(SNR, H) themselves are meaningful for each H. The 1-bit capacity cannot exceed 2 min(N t , N r ) b/s/Hz, with three distinct regimes: • Low SNR. This is a key regime at mmWave and terahertz frequencies, given the difficulty in producing strong signals, the high propagation losses, and the noise bandwidth. • Intermediate SNR. Here, the spectral efficiency improves sustainedly with the SNR. • High SNR. This is a regime of diminishing returns, once the capacity nears 2 min(N t , N r ). A. Low SNR The low-SNR behavior is most conveniently examined with the mutual information expressed as function of the normalized energy per bit at the receiver, Beyond the minimum required value of where ε is a lower-order term, S 0 is the slope at E b N0 min in b/s/Hz/(3 dB), and z| dB = 10 log 10 z. N0 min and S 0 descend from the first and second derivatives of I(SNR) at SNR = 0, which themselves emerge from (17), (19), and (23) after a tedious derivation [29], (28). [31]. Plugging these two derivatives into the definitions of with (29) where nondiag(·) returns a matrix with its diagonal entries set to zero and · 4 denotes L4 norm. The expectations in (30) and (29) are conditioned on H when there is operational significance attached to a specific such value, and unconditioned in the ergodic case. A worthwhile exercise is to appraise the expansion in (28) against its exact counterpart, a contrast that Fig. 1 presents for N t = N r = 1 in Rayleigh fading. 
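For very small arrays, the mutual information can be evaluated by brute force exactly as described, which is useful for checking the analytical characterizations developed later. The sketch below assumes equiprobable inputs and takes the transition-probability routine from the previous sketch as an argument; its 4^Nt · 4^Nr cost is precisely the bottleneck the paper's bounds are designed to avoid.

```python
import itertools
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])

def mutual_information_equiprobable(H, snr, trans_prob):
    """Brute-force I(x;y) in bits for equiprobable 1-bit inputs (tiny Nt, Nr only).

    trans_prob(H, x, y, snr) is the per-observation likelihood, e.g. the
    transition_prob() routine from the previous sketch.
    """
    nr, nt = H.shape
    xs = [np.array(v) for v in itertools.product(QPSK, repeat=nt)]
    ys = [np.array(v) for v in itertools.product(QPSK, repeat=nr)]
    px = 1.0 / len(xs)
    # Marginal p(y) under equiprobable inputs.
    py = {i: sum(px * trans_prob(H, x, y, snr) for x in xs) for i, y in enumerate(ys)}
    mi = 0.0
    for x in xs:
        for i, y in enumerate(ys):
            pyx = trans_prob(H, x, y, snr)
            if pyx > 0 and py[i] > 0:
                mi += px * pyx * np.log2(pyx / py[i])
    return mi
```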
The characterization provided by (28) is indeed precise, a fact that extends to all other channels considered in the paper. B. Intermediate SNR In 1-bit communication, the intermediate-SNR regime steals relevance from its high-SNR counterpart, which becomes unappealing. In order to delineate the reach of this intermediate-SNR regime, it is of interest to establish the limiting capacity for SNR → ∞. Let us define and consider channels satisfying h n x = 0 with probability 1 for n = 0, . . . , N r − 1, such that the transition probabilities have a positive mass only at 0 and 1, meaning that y is fully determined by x. The vast majority of channels abide by the condition, and in particular the ones set forth in Sec. II-B. For N t = 1, a single quartet is available for transmission and, by virtue of its four equiprobable constituent vectors, C ∞ (H) = 2. For N t > 1 and N r = 1, it can be verified that (24) is maximized when a single quartet is activated, depending on H [57]. Again, C ∞ (H) = 2. For N t > 1 and N r > 1, it must hold that C ∞ ≤ 2N t , but this bound is generally not achievable because some vectors x map to the same receive vector y [46]. As the transition probabilities are either 0 or 1, every binary entropy function in (23) vanishes and H(y|x) → 0, hence the mutual information comes to equal H(y). Letting denote the set of vectors y that can be elicited for channel H, the maximization of H(y) occurs when this set is equiprobable. Then, with E[C ∞ (H)] being the limiting ergodic capacity. The evaluation of (33) is far simpler than that of C(SNR) in its full generality. IV. 1-BIT VS FULL-RESOLUTION CAPACITY A naïve comparison of the 1-bit and full-resolution capacities would indicate that the former always trails the latter. In terms of power, their gap in dB is at least the difference between their E b N0 min | dB values; in a scalar Rayleigh-faded channel, for instance, what separates the full-resolution mark of −1.59 dB [72, sec. 4.2] from its 1-bit brethren of 0.37 dB is 1.96 dB as noted in Fig. 1. As shall be seen in the sequel, this gap remains rather steady with MIMO and over a variety of channels. Such naïve comparison, however, only accounts for radiated power, disregarding any other differences in power consumption between the full-resolution and 1-bit alternatives. While appropriate when the radiated power dominates, this neglect becomes misleading when the digitalization consumes sizeable power and, since this is the chief motivation for 1-bit communication, by definition the comparison is somewhat deceptive. Indeed, whenever the excess power of a full-resolution architecture, relative to 1-bit, exceeds a 1.96-dB backoff in P t /η, there is going to be a range of SNRs over which, under a holistic accounting of power, the 1-bit capacity is actually higher. For a very conservative assessment of this phenomenon, let us assume that κ = 2 in (1) and that η, FoM, and P other , are not affected by the resolution-in actuality all of these quantities shall be markedly better in the 1-bit case-to obtain the condition 1 10 1.96/10 where we considered two ADCs (N r = 1) and b = 10 bits for full resolution. The above yields which, for the sensible values P t = 23 dBm and η = 0.4, and with a state-of-the-art FoM = 10 pJ/conversion [8], evaluates to B ≥ 8.8 GHz. 
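The limiting capacity (33) only requires counting the noiseless outputs that a channel can elicit, which the following sketch does by enumeration for small arrays (assuming, as in the text, that the reachable outputs can be made equiprobable and that h_n x is never exactly zero).

```python
import itertools
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])

def limiting_capacity(H):
    """C_inf(H) = log2 of the number of distinct noiseless outputs sgn(Hx),
    i.e. the high-SNR limit in (33), assuming the reachable set can be made
    equiprobable and h_n x != 0 for every x and n."""
    reachable = set()
    for v in itertools.product(QPSK, repeat=H.shape[1]):
        s = H @ np.array(v)
        reachable.add(tuple(np.sign(s.real) + 1j * np.sign(s.imag)))
    return np.log2(len(reachable))

rng = np.random.default_rng(1)
H = (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))) / np.sqrt(2)
print(f"C_inf ~ {limiting_capacity(H):.2f} b/s/Hz  (upper-bounded by 2*min(Nt,Nr) = 6)")
```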
This highly conservative threshold drops rapidly as the number of digitally processed antennas grows large and thus, for bandwidths well within the scope of upcoming wireless systems, 1bit MIMO can be viewed as information-theoretically optimum for at least some range of SNRs. V. TRANSMIT BEAMFORMING Transmit beamforming corresponds to Σ x being rank-1, i.e., to x being drawn from a single quartet, with such quartet generally dependent on H. We examine this strategy with an ergodic perspective; for nonergodic channels, the formulation stands without the expectations over H. A. Low SNR For vanishing SNR, transmit beamforming is not only conceptually appealing, but information-theoretically optimum. Indeed, (30) can be rewritten as which is maximized by assigning probability 1 to the quartet k ⋆ = arg max Hx k 2 for each realization of H. Therefore, it is optimum to beamform, and the optimum beamforming quartet is the one maximizing the received power. The task is then to determine k ⋆ from within the 4 Nt−1 possible quartets. For N t = 1, there is no need to optimize over k-only one quartet can be transmitted-and thus which amounts to 0.37 dB for N r = 1 [73] and improves by 3 dB with every doubling of N r thereafter. For N t > 1, it is useful to recognize that the choices for x that are bound to yield high values for Hx 2 are those that project maximally on the dimension of H that offers the largest gain, namely the maximum-eigenvalue eigenvector of H * H. This, in turn, requires that x mimic, as best as possible, the structure of that eigenvector; since the magnitude of the entries of x is fixed, this mimicking ought to be in terms of phases only. Formalizing this intuition, it is possible to circumvent the need to exhaustively search the entire field of 4 Nt−1 possibilities and conveniently identify a subset of only N t quartet candidates that is sure to contain the one best aligning with the maximum-eigenvalue eigenvector of H * H, denoted henceforth by v 0 . Precisely, as detailed in Appendix A, if we let ϕ m = ∠(v 0,m ) + ǫ for m = 0, . . . , N t − 1, the N t quartets in the subset can be determined as where ǫ is a small quantity, positive or negative. If the channel is rank-1, then this subset is sure to contain the optimum x k ⋆ ; if the rank is higher, then optimality is not guaranteed, but the best value in the above subset is bound to yield excellent performance. Turning to the E b N0 min achieved by x k ⋆ , its explicit evaluation is complicated, yet its value can be shown (see Appendix A again) to satisfy where λ 0 is the maximum eigenvalue of H * H while · 1 denotes L1 norm. For N r = 1, (39) specializes to Finally, S 0 can be obtained by plugging (29). B. Intermediate SNR The low-SNR linearity of the mutual information in the received power is the root cause of the optimality of power-based beamforming in that regime. The orientation on the complex plane of the received signals is immaterial-a rotation shifts power from the real to the imaginary part, or vice versa, but the total power is preserved. Likewise, the power split among receive antennas is immaterial to the low-SNR mutual information. At higher SNRs, the linearity breaks down and the mutual information becomes a more intricate function of Hx, such that proper signal orientations and power balances become important, to keep h n x away from the ADC quantization boundaries for n = 0, . . . , N r − 1. This has a dual consequence: • Transmit beamforming ceases to be generally optimum, even if the channel is rank-1. 
• Even within the confines of beamforming, solutions not based on maximizing power are more satisfying. As exemplified in Fig. 2 for N r = 1, a beamforming quartet with a better complex-plane disposition at the receiver may be preferable to one yielding a larger magnitude. This is because, after a 1-bit ADC, only 90 • rotations and no scalings are possible (in contrast with full-resolution receivers, where hx can subsequently be rotated and scaled). The best beamforming quartet is the one that simultaneously ensures large real and imaginary parts for h n x in a balanced fashion for n = 0, . . . , N r −1, and the task of identifying this quartet is a fitting one for learning algorithms [58], [74]. We note that, with full-resolution converters, multiple receive antennas play a role dual to that of transmit beamforming [72, sec. 5.3], and the spectral efficiency with N transmit and one receive antenna equals its brethren with one transit and N receive antennas. With 1-bit converters, in contrast, transmit beamforming optimizes h n x for n = 0, . . . , N r − 1, to mitigate the addition of noise prior to quantization, while multiple receive antennas yield a diversity of quantized observations from which better decisions can be made on which of the possible vectors was transmitted. This includes majority decisions and erasure declarations in the case of split observations. Left-hand side, for x k , which has a larger magnitude but worse orientation. Right-hand side, for x ℓ , which has a smaller magnitude but better orientation. On this channel, quartet k yields a higher mutual at low SNR while quartet ℓ yields a higher mutual information beyond the low-SNR regime. VI. EQUIPROBABLE SIGNALING The complementary strategy to beamforming is to activate multiple quartets, increasing the rank of Σ x . Ultimately, all quartets can be activated with equal probability, such that Σ x = 2I. This renders the signals IID across the transmit antennas, i.e., pure spatial multiplexing. We examine this strategy with an ergodic perspective. A. Low SNR With equiprobable signaling, (30) gives In addition [31], based on which S 0 in (29) simplifies considerably. Combining (39) and (40), the low-SNR advantage of optimum beamforming over equiprobable signaling, denoted by ∆ BF , is tightly bounded as This enables some general considerations: • The low-SNR advantage of beamforming is essentially determined by the maximum eigenvalue of H * H. The advantage is largest in rank-1 channels, and minimal if all eigenvalues are equal (on average or instantaneously, as pertains to ergodic and nonergodic settings). • If all eigenvalues are equal, beamforming may still yield a lingering advantage for N t > N r , but not otherwise. Indeed, for N t ≤ N r , if all eigenvalues all equal then E λ 0 = N r and thus ∆ BF ≤ 1. B. Intermediate SNR While beamforming is optimum at low SNR, it is decidedly suboptimum beyond, and activating multiple quartets becomes instrumental to surpass the 2-b/s/Hz mark. This is the case even in rank-1 channels, where the activation of multiple quartets allows producing richer signals; this can be seen as the 1-bit counterpart to higherorder constellations. And, given how the curse of dimensionality afflicts the computation of the optimum quartet probabilities, equiprobable signaling is a very enticing way of going about this. As will be seen, not only is it implementationally convenient, but highly effective. VII. 
CHANNELS OF INTEREST Capitalizing on the analytical tools set forth hitherto, let us now examine the performance of transmit beamforming and equiprobable signaling in various classes of channels, starting with the nonergodic LOS settings and progressing on to the ergodic IID Rayleigh-faded channel. A. LOS with Planar Wavefronts This channel is rank-1, hence the optimum E b N0 min can be achieved with equality by the best beamforming quartet in subset (38). More conveniently for our purposes here, we can rewrite (10) Irrespective of the array orientations, λ 0 = N t N r and v 0 which depends symmetrically on N t and N r . The significance of E b N0 min as the key measure of low-SNR performance can be appreciated in Fig. 3, which depicts the low-SNR capacity as a function of E b N0 for N t = N r = 1, 2, and 4 in an exemplary LOS setting. Adding antennas essentially displaces the capacity by the amount by which Shown in Fig. 4 is how E b N0 min improves with the number of antennas (N t = N r ) for the same setting. Also shown are the values for equiprobable signaling, undesirable in this case as per (42). The low-SNR advantage of beamforming accrues steadily with the numbers of antennas and the bounds in (39) tightly bracket the optimum E b N0 min . As anticipated, the gap of 1-bit beamforming to full-resolution beamforming (included in the figure) remains small. Moving up to intermediate SNRs, the beamforming and equiprobable-signaling performance on another setting is presented in Fig. 5. Also shown is the actual capacity with p 1 , . . . , p 4 N t −1 optimized via Blahut-Arimoto. Up to when the 2-b/s/Hz ceiling is approached, beamforming performs splendidly. Past that level, and no matter the rank-1 nature of the channel, equiprobable signaling is highly superior, tracking the capacity to within a roughly constant shortfall. This example represents well the intermediate-SNR performance in planar-wavefront LOS channels, a point that has be verified by contrasting the asymptotic performance of equiprobable signaling in a variety of such channels against the respective C ∞ . B. LOS with Spherical Wavefronts The scope of channels in this class is very large, depending on the array topologies and relative orientations; for the sake of specificity, we concentrate on ULAs, and draw insights whose generalization would be welcome follow-up work. A key property of ULA-spawned channels within this class is that [66] are as introduced in Sec. II-B, and N min = min(N t , N r ). Therefore, λ 0 ≈ N max /η and v 0 2 1 ≈ N t . By specializing (39), the optimum E b N0 min attained by beamforming is seen to satisfy which indicates that a smaller η is preferable at low SNR, meaning antennas as tightly spaced as possiblethis renders the wavefronts maximally planar-and array orientations as endfire as possible-this shrinks their effective widths. Indeed, wavefront curvatures trim the beamforming gains, and reducing η mitigates the extent of such curvatures. With growing η, the low-SNR performance does degrade, but beamforming retains an edge over equiprobable signaling for η < 1 or N t > N r . Alternatively, for η = 1 and N t = N r , (46) is no better than the equiprobable-signaling E b N0 min in (40). 
In fact, for this allimportant configuration whose eigenvalues are equal [67], [75]- [79], any transmission strategy achieves this same N0 min for all transmission strategies when η = 1 and N t = N r does not translate to S 0 , which is decidedly larger for equiprobable signaling, indicating that this is the optimum low-SNR technique for this configuration as illustrated in Fig. 6. Precisely, applying (29) and (41), a channel with η = 1 and N t = N r = N is seen to exhibit Be am fo rm ing Be am fo rm in g Equi prob able Equ ipro bab le on η and it improves monotomically with η up to η = 1, where capacity is achieved by IID signaling. All these insights, underpinned by the approximate equality of the ηN min nonzero eigenvalues of H * H, cease to hold in the 1-bit realm due to the transmitter's inability of accessing those singular values directly via precoding. Indeed, when the only ability is to manipulate the quartet probabilities (see Fig. 7 for an example): • The performance does not depend only on η, but further on θ t , θ r , φ, D, and d t and d r . • The optimum configuration need not correspond to η = 1. The main takeaway for our purpose, though, is that at intermediate SNRs equiprobable signaling closely tracks the capacity. C. IID Rayleigh Fading For H having IID complex Gaussian entries, we resort to the ergodic interpretation. Shown in Fig. 8 is the evolution of E b N0 min with the number of antennas for the optimum strategy (beamforming on every channel realization) as well as for equiprobable signaling. The bounds in (39) provide an effective characterization of the optimum E b N0 min . Moreover, λ 0 and v 0 are independent [80, lemma 5] and, although E[λ 0 ] does not lend itself to a general characterization, for growing N t and N r it approaches ( √ N t + √ N r ) 2 . Thus, beamforming achieves which sharpens with N t and N r . For N t = N r = 64, for instance, (48) gives E b N0 min ∈ [−21.77, −23, 71] dB, correctly placing the actual value of −22.21 dB. The term E v 0 2 1 is readily computable for given values of N t and we note, as a possible path to taming it anaytically, that v 0 is a column of a standard unitary matrix, uniformly distributed over an N t -dimensional sphere. With equiprobable signaling, E b N0 min is given by (40) and we can further characterize S 0 . Starting from (29) and using nondiag(HH * ) 2 = (HH * ) 2 − HH * diag(HH * ) − diag(HH * ) HH * in conjuntion with [80, lemma 4] E tr (HH * ) 2 = N t N r (N t + N r ) (50) we have that In turn, and, altogether, which is an increasing function of both N t and N r . At intermediate SNRs, equiprobable signaling is remarkably effective (see Fig. 9). At the same time, the complexity of computing the mutual information-for equiprobable signaling, let alone with optimized quartet probabilities-is compounded by the need to expect it over the distribution of H, to the point of becoming unwieldy even for very small antenna counts. Analytical characterizations are thus utterly necessary, and it is shown in Appendix B that where P ∩ (i, j) is given by (59). The bounds specified by (57)-(59) are readily computable even for very large numbers of antennas. For N t = N r = 64, for instance, a direct evaluation of E H I(SNR, H) would require the I(SNR, H) for many realizations of H, with each such mutual information calculation involving over 10 75 terms. 
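The large-array approximation E[λ0] → (√Nt + √Nr)² invoked above is easy to check numerically; the sketch below does so by Monte Carlo over IID Rayleigh channel realizations. The trial count is an arbitrary choice.

```python
import numpy as np

def mean_max_eigenvalue(nt, nr, trials=200, seed=0):
    """Monte Carlo estimate of E[lambda_0], the mean largest eigenvalue of H*H for
    IID Rayleigh H, to compare with the large-array value (sqrt(Nt)+sqrt(Nr))**2."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        acc += np.linalg.eigvalsh(H.conj().T @ H)[-1]
    return acc / trials

for n in (4, 16, 64):
    est = mean_max_eigenvalue(n, n)
    print(f"Nt=Nr={n:3d}:  E[lambda_0] ~ {est:7.1f}   (sqrt(Nt)+sqrt(Nr))^2 = {4*n}")
```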
In contrast, the bounds entail the single SNR-dependent integral in (57) along with (58), which does not depend on the SNR and can be precomputed; Table I provides such precomputation for a range of antenna counts. Also of interest is that the upper bound becomes exact for SNR → 0. The range specified by the bounds is illustrated in Fig. 10 for various values of N t = N r , alongside the actual spectral efficiencies (obtained via Monte-Carlo) for N t = N r = 2 and N t = N r = 4. For N t > N r , the lower bound approaches its upper counterpart and, for N t ≫ N r , Indeed, as detailed in Appendix B, this approximation becomes an exact result for N r = 1 or for N t → ∞ with N r arbitrary. Some examples for N t = 4N r , presented in Fig. 11, confirm how precisely the ergodic spectral efficiency is determined when the antenna counts are somewhat skewed. VIII. CONCLUSION A host of issues that are thoroughly understood for fullresolution settings must be tackled anew for 1-bit MIMO communication. In particular, the computation of the capacity becomes unwieldy for even very modest dimensionalities and the derivation of general precoding solutions becomes a formidable task, itself power-consuming. Fortunately, in the single-user case such general precoding can be circumvented via a judicious switching between beamforming and equiprobable signaling, with the added benefits that these transmissions strategies are much more amenable to analytical characterizations and that their requirements in terms of channel-state information at the transmitter are minimal: log 2 4 Nt−1 = 4 (N t − 1) bits for beamforming, none for equiprobable signaling. The transition from beamforming to equiprobable signaling could be finessed by progressively activating quartets as the SNR grows, but the results in this paper suggest that there is a small margin of improvement: a direct switching at some appropriate point suffices to operate within a few dB of capacity at both low and intermediate SNRs. It would be of interest to gauge this shortfall for more intricate channel models such as those in [81]- [83]. Channel estimation at the receiver is an important aspect, with the need for procedures that avoid having to painstakingly gauge all the transition probabilities between x and y to deduce H. Of much interest would be to extend existing results for channel estimation with full-resolution DACs and 1-bit ADCs [24], [25], [84]- [86] to the complete 1-bit realm. Equally pertinent would be to establish the bandwidths over which a frequencyflat representation suffices for each channel model, and to extend the respective analyses to account for intersymbol interference. This is acutely important given the impossibility of implementing OFDM with 1-bit converters. In those multiuser settings where orthogonal (time/frequency) multiple access is effective, switching between beamforming and equiprobable signaling is also enticing. In other cases, chiefly if the antenna numbers are highly asymmetric, orthogonal multiple access is decidedly suboptimum, and there is room for more general schemes. We hope that the results in this paper can serve as a stepping stone to such schemes. we have that which is the quantity to maximize. Under full-resolution transmission, (62) is maximized by x ∝ v 0 : complete projection on the dimension exhibiting the largest gain and zero projection elsewhere [72, sec. 5.3]. With 1-bit transmission, perfect alignment with v 0 is generally not possible, and the goal becomes to determine which x best aligns. 
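As a preview of the construction formalized in the remainder of this appendix (and used in (38) of the body), the sketch below searches the candidate 1-bit beamformers x = sgn(e^{jφ} v0) for the one maximizing the received power ‖Hx‖². Sweeping φ over a fine grid is a slightly redundant but robust stand-in for the Nt-element candidate subset; the grid size is an arbitrary choice.

```python
import numpy as np

def sgn_c(v):
    """Elementwise 1-bit quantization of a complex vector to entries +-1 +-j."""
    return np.sign(v.real) + 1j * np.sign(v.imag)

def best_beamforming_quartet(H, n_phases=256):
    """Pick the 1-bit transmit vector maximizing ||Hx||^2 among x = sgn(exp(j*phi) v0),
    with v0 the maximum-eigenvalue eigenvector of H^*H.

    Sweeping phi over a fine grid is a robust (slightly redundant) stand-in for the
    Nt-element candidate subset of (38)/(63); at low SNR the power-maximizing quartet
    is the information-theoretically optimum beamformer.
    """
    w, V = np.linalg.eigh(H.conj().T @ H)
    v0 = V[:, -1]                                   # top eigenvector
    best_x, best_gain = None, -1.0
    for phi in np.linspace(0.0, 2 * np.pi, n_phases, endpoint=False):
        x = sgn_c(np.exp(1j * phi) * v0)
        if np.any(x.real == 0) or np.any(x.imag == 0):
            continue                                # skip grid points on a quantization boundary
        gain = np.linalg.norm(H @ x) ** 2
        if gain > best_gain:
            best_gain, best_x = gain, x
    return best_x, best_gain
```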
If H is rank-1, then such x is sure to maximize (62). If the rank is plural, however, optimality cannot be guaranteed from best alignment with v 0 because some other x leaning further away could have a more favorable projection across the rest of dimensions. Suppose, for instance, that the rank is 3; if the x best aligned with v 0 does not further project on v 1 , but only on v 2 , there could be another x aligning slightly less with v 0 but projecting also on v 1 in a way that yields a higher metric in (62). This possibility may arise when the largest singular value is not very dominant. Even then, though, the x that projects maximally on v 0 is bound to perform well. Values of x that align well with v 0 can be obtained as x = sgn e jϕ v 0 where ϕ allows setting the absolute phase arbitrarily before quantization. Letting ϕ run from 0 to 2π, every entry of the quantized x changes four times and a subset of 4N t vectors x is obtained. These 4N t vectors actually belong to N t quartets because, if x k is in the kth subset, jx k is sure to be there too. Since identifying one representative per quartet suffices for our purposes, attention can be restricted to those values of ϕ that trigger a change in sgn e jϕ v 0 , i.e., ϕ = ∠(v 0,m ) for m = 0, . . . , N t − 1. Letting ϕ m = ∠(v 0,m ) + ǫ with ǫ a small quantity, we obtain the subset of N t quartet representatives as The sign of ǫ is irrelevant, it merely changes which representative is selected for each quartet. Confirming the intuition that the N t quartets in (63) are good choices, it is proved in [47] that the quartet that best aligns with v 0 is sure to be in this subset. Thus, searching a subset of N t candidates suffices to beamform optimally in rank-1 channels, and quasi-optimally in higher-rank channels, without having to search the entire field of 4 Nt−1 possibilities. Let us now turn to the performance. An upper bound on Hx k ⋆ 2 can be obtained by assuming that, on every channel realization, there is a value of x that aligns perfectly with v 0 . From (62), this gives which, along with (36), yields the lower bound in (39). In turn, a lower bound on Hx k ⋆ 2 is obtained for any choice of x, and in particular for x = sgn e jϕ v 0 with ϕ ∈ [−π/4, π/4], such that [47] For any θ ∈ [0, 2π], the phase of e −jθ sgn e jθ is within [−π/4, π/4] while e −jθ sgn e jθ = √ 2. Hence, letting where (70) holds because cos(θ m ) > 0 for θ m ∈ [−π/4, π/4]. Disregarding |v * m x k ⋆ | for m > 0 in (62), which, along with (36), yields the upper bound in (39). APPENDIX B In (23), for every n and k, ℜ{h n x k } ∼ N (0, N t ) and ℑ{h n x k } ∼ N (0, N t ). Thus, letting r ∼ N (0, 1), In turn, with equality for SNR → 0, when the receiver observes only noise and y is equiprobably binary on 2N r real dimensions. As the removal of noise can only decrease it, H(y) diminishes as the SNR grows, being lowerbounded by its value for SNR → ∞. The expectation of such noiseless lower bound over H can be elaborated by generalizing to our complex setting a clever derivation in [46], starting from where 1{·} is the indicator function. Since H is isotropic and x is equiprobable, no y ℓ is favored over the rest in terms of the probability of sgn(Hx) equalling such y ℓ . Hence, (78) can be evaluated for any specific y ℓ , say y 1 whose entries all equal 1 + j. 
This gives Likewise, the probability that sgn(Hx) = y 1 is common to every realization of x and thus where all entries of x 1 equal 1 + j and where it is convenient to retain the second expectation over x in order to later solve its counterpart over H. Then, E H H(y) = −4 Nr E H|sgn(Hx1)=y 1 1{sgn(Hx 1 ) = y 1 } · log 2 E x 1{sgn(Hx) = y 1 } · P[sgn(Hx 1 ) = y 1 ] and, since P[sgn(Hx 1 ) = y 1 ] = 1 4 Nr and the factor 1{sgn(Hx 1 ) = y 1 } becomes immaterial once the expectation over H has been conditioned on sgn(Hx 1 ), = −E H|sgn(Hx1)=y 1 log 2 E x 1{sgn(Hx) = y 1 } = P sgn(Hx k ) = y 1 ∩ sgn(Hx 1 ) = y 1 P sgn(Hx 1 ) = y 1 = 4 Nr P sgn(Hx k ) = y 1 ∩ sgn(Hx 1 ) = y 1 . For N r = 1, the scalar quantized signal y takes four equiprobable values-again, not only on average, but for every channel realization-and thus E H H(y) = 2. For fixed N t and N r → ∞, the key is the observation that P ∩ (i, j) achieves its largest value for i = j = 0, namely P ∩ (0, 0) = 1/4. For i > 0 and/or j > 0, P ∩ (i, j) < 1/4 because any negative sign in either the real and imaginary parts of x k reduces the probability in (87). The largest term in the summations within the logarithm in (58) equals 4 −Nr and, as N r → ∞, every other term vanishes faster and the lower bound on E H H(y) converges towards 2N t . Finally, for fixed N r and N t → ∞, the rows of H become asymptotically orthogonal [72, sec. 5.4.2] and hence, for every realization of H, y consists of IID complex components. Again, E H H(y) = 2N r . For N r = 1 and for N t → ∞ with fixed N r , the above observations reveal that the lower and upper bounds coincide, fully determining, as per (60), the ergodic spectral efficiency with equiprobable signaling and IID Rayleigh fading.
2021-09-10T01:16:26.826Z
2021-09-09T00:00:00.000
{ "year": 2021, "sha1": "2e6d43b3c34c31a9bb5f7c71e606aa1e5f99a9ac", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2e6d43b3c34c31a9bb5f7c71e606aa1e5f99a9ac", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
119664867
pes2o/s2orc
v3-fos-license
A note on Berezin-Toeplitz quantization of the Laplace operator Given a Hodge manifold, it is introduced a self-adjoint operator on the space of endomorphisms of the global holomorphic sections of the polarization line bundle. Such operator is shown to approximate the Laplace operator on functions when composed with Berezin-Toeplitz quantization map and its adjoint up to an error which tends to zero when taking higher powers of the polarization line bundle. Introduction Let M be a n-dimensional projective manifold and let g be a Hodge metric on M . This means that M is equipped with a complex structure J and with a positive Hermitian line bundle (L, h). Denoted by Θ the curvature of the Chern connection, the form ω = 2πiΘ is positive, and it holds g(u, v) = ω(u, Jv). Let ∆ : C ∞ (M ) → C ∞ (M ) be the positive Laplacian associated with the metric g (recall that it is defined by ∆(f )ω n = −n i∂∂f ∧ ω n−1 for any complex-valued smooth function f on M ). In this note it will be shown that ∆ is approximated in a suitable sense by a sequence of self-adjoint positive operators acting on finite dimensional Hermitian vector spaces V m (see definitions at Sections 2 and 4). To be a little more precise, it will be proved that there exist maps in fact adjoint to each other with respect to suitable Hermitian products, such that as m → ∞ for any given smooth function f . For any m > 0 the map T m is the well known Berezin-Toeplitz quantization map, and the operator ∆ m depends only on the projective geometry of the Kodaira embedding of M via L m . Moreover ∆ m is related to the metric g via the Fubini-Study metric induced by the L 2 -inner product on the space of global holomorphic sections of L m (see Section 4). Thanks to results available on asymptotic expansions of Bergman kernel [5] and Toeplitz operators [6], what one can prove is indeed the following result, which obviously implies (1). There is a complete asymptotic expansion where P r are self-adjoint differential operators on C ∞ (M ). More precisely, for any k, R ≥ 0 there exist constants C k,R,f such that Moreover one has P 0 (f ) = ∆f, The construction of the quantized Laplacian ∆ m was inspired by a work of J. Fine on the Hessian of the Mabuchi energy [2]. Even though in principle ∆ m is unrelated to the problem of finding canonical metrics on M , when ω is balanced in the sense of Donaldson (see definition recalled at Section 6) the relation between ∆ m and ∆ is even more evident as shown by the following Thanks to A. Ghigi and A. Loi for some useful discussion on Berezin-Toeplitz quantization. The main part of these note has been written in 2010 while the author was visiting Princeton University, whose hospitality is gratefully acknowledged. At that time the author was partially supported by a Marie Curie IOF (program CAMEGEST, proposal no. 255579). A recent pre-print of J. Keller, J. Meyer, and R. Seyyedali has a substantial overlapping with the present work [4]. The author became aware of that pre-print when it appeared on the arXiv. for all s, t ∈ H m . Thus V m is a Hermitian vector space with inner product defined by for all A, B ∈ V m . Here B * denotes the adjoint of B with respect to b m . 3 The maps T m and T * m The map T m : C ∞ (M ) → V m is the well known Berezin-Toeplitz quantization operator [5]. Given a smooth function f on M , the operator Proof. It is an easy consequence of general theory. 
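For concreteness, in a fixed orthonormal basis {s_α} of H_m the two maps take the familiar coordinate form; the expressions below are the standard ones, written up to the normalization conventions fixed above, which are not reproduced exactly here:

$$\bigl(T_m(f)\bigr)_{\alpha\beta}\;=\;\int_M f\,h^m\!\left(s_\beta,s_\alpha\right)\frac{\omega^n}{n!},\qquad T_m^*(A)(x)\;=\;\sum_{\alpha,\beta}A_{\alpha\beta}\,h^m\!\left(s_\alpha(x),s_\beta(x)\right).$$

In particular, T*_m applied to the identity endomorphism recovers the Bergman density ρ_m(x) = Σ_α |s_α(x)|²_{h^m} that enters the argument below.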
Substituting which gives the thesis by arbitrariness of f after noting that Note that the map T * m takes an endomorphisms A ∈ V m to the restriction to the diagonal of its integral kernel. More precisely, given an orthonormal basis is the metric dual of s α (x) in the fiber of L m over the point x. The restriction of the kernel to the diagonal is (naturally identified with) the smooth function T * m (A) thanks to Lemma 3.1. When A is of the form T m (f ) for some smooth function f , the integral kernel is given by For a constant function f = c ∈ R, one has The right hand side of the equation above can be related to a function on P(H m ) naturally associated to A. Indeed we claim that ν(A) is the gradient of the function µ A defined by This is quite standard, but a proof of that fact is included at the end of the proof for convenience of the reader. Now we go ahead taking the claim for grant. From (6) one gets To this end, let {s α } be an orthonormal basis of H m , so that the pull-back of µ A to M is given by where the ratio sα(x)s β (x) γ |sγ (x)| 2 is well defined and can be computed choosing an arbitrary Hermitian metric on the line bundle L m . In particular, taking h m it becomes , and the identity (7) follows by definition of ρ m and Lemma 3.1. Finally, in order to prove the claim above, let (z α ) be homogeneous coordinates on P(H m ) corresponding to the basis {s α }. The function µ A then takes the form Az t |z| 2 , where now A = (A αβ ) denotes the matrix that represents the endomorphism A with respect the chosen basis. The equality between ν(A) and the gradient of µ A can be proved in local affine coordinates, but here we consider the projection of H m \ {0} on P(H m ), and the fact that ν(A), g F S and µ A lift to C * -invariant objects (which will be denotes with the same symbols). In particular one has which proves the claim. Next lemma characterizes the kernel of ∆ m . for all A ∈ V m and z ∈ P(H m ). By computation above we proved the following Then it holds Here e ω F S is a mixed-degree form defined by the exponential series. Since ω k F S = 0 for all k ≥ dim H m , one has This implies that Ξ m has mixed degree. More interestingly it depends just on the dimension of P(H m ) (and on choice of homogeneous coordinates) and it is independent of M . whence the thesis follows since ω m is cohomologous to mω. Proof of Theorem 1.1 First of all we recall some results on asymptotic expansions in Berezin-Toeplitz quantization. Theorem 5.1. There is a sequence {b r } of self-adjoint differential operators acting on C ∞ (M ) such that for any smooth function f ∈ C ∞ (M ) one has the asymptotic expansion and for any k, R ≥ 0 there exist constants C k,R,f such that Moreover one has Proof. See Ma and Marinescu [5]. The only fact one still needs to show is self-adjointness of operator b r . It follows readily by self-adjointness of T * m • T m and expansion (9). Indeed one has as m → +∞, for all f, g ∈ C ∞ (M ).
2015-05-15T09:04:07.000Z
2015-01-28T00:00:00.000
{ "year": 2015, "sha1": "1478f3dbe122a17d5f05ef68121d9f3b2caddb82", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1515/coma-2015-0010", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "4d4b6dfb70bdf1d9a76912346b77721b3e269c84", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
212424950
pes2o/s2orc
v3-fos-license
Analytic Initial Relative Orbit Solution for Angles-Only Space Rendezvous Using Hybrid Dynamics Method A closed-form solution to the angles-only initial relative orbit determination (IROD) problem for space rendezvous with non-cooperated target is developed, where a method of hybrid dynamics with the concept of virtual formation is introduced to analytically solve the problem. Emphasis is placed on developing the solution based on hybrid dynamics (i.e., Clohessy-Wiltshire equations and two-body dynamics), obtaining formation geometries that produce relative orbit state observability, and deriving the approximate analytic error covariance for the IROD solution. A standard Monte Carlo simulation system based on two-body dynamics is used to verify the feasibility and evaluate the performance proposed algorithms. The sensitivity of the solution accuracy to the formation geometry, observation numbers is presented and discussed. Introduction Gauss' initial orbit determination problem using angles-only observations is well known. An Earth-based position-known observer gathers line-of-sight angles information (i.e., pairs of azimuth and elevation angles) of a space target over a period time. Theoretically, Gauss' method can be applied to space rendezvous mission when the chaser's positions are known. However, Gauss' method is naturally iterative and has no known closed-form solution [Battin (1987); Curtis (2010)]. Moreover, if the target and chaser are in similar orbits, Gauss' method may be ill-conditioned and suffer from numerical problems. Analytic solution to the IROD problem may be possible if the linear relative motion dynamics such as Clohessy-Wiltshire (CW) equations [Clohessy and Wiltshire (1960)] are applied. Unfortunately, the angels-only IROD problem during proximity operations suffers from a state observability problem for the short of range measurements [Woffinden and Geller (2009)]. To overcome the observability problem, Chen et al. [Chen and Xu (2011)] proposed a two-sensor scheme with double line-of-sight measurements are utilized to solve the IROD problem by using the basic theorem of triangle geometry. Newman et al. ;] successfully applied second-order relative motion models to the IROD problem. Perez et al. [Perez, Geller and Lovell (2018)] established sphere-frame-based dynamics to analyze the angles-only IROD problem, however closed-form solution is still not achieved. Grzymisch et al. [Grzymisch and Ficher (2014)] proposed the orbital maneuver method to improve the state observability. Gaabarri et al. [Gaabarri, Sabatini and Palmerini (2014)] took advantage of the reference image information of the target to solve the angles-only problem. Geller et al. [Klein and Geller (2012); Geller and Klein (2014); Geller and Perez (2015)] demonstrated an angles-only IROD solution for orbital proximity operations by taking advantage of the lever-arm-effect of the offset camera. Gong et al. [Gong, Li, Li et al. (2018)] improved the IROD performance of the lever-arm-effect algorithm. However, this scheme is limited by the fact that the leverarm cannot be too large for most spacecraft [Gong, Geller and Luo (2016)]. The objectives of this paper are to develop closed-form solution for the angles-only IROD problem during space rendezvous phase based on hybrid dynamics with the concept of virtual distributed, with which no orbital maneuver, double line-of-sight or lever-arm are required. 
Additionally, the emphasis is also focused on analyzing observable conditions of the orbital state, developing analytic expressions for the IROD state error mean and covariance, evaluating the performance of the IROD algorithm in a standard two-body environment, and determining the better formation geometries which may be potentially used for on-board applications. While contributions due to 2 J and higher-order gravity terms, atmospheric drag, solar radiation pressure are important, these effects are specific to spacecraft orbit selection, mass and geometry, which is beyond the scope of this paper. The formulation of the IROD problem is presented in Section 2 and its general solution is presented in Section 3. The state observability is analyzed in Section 4 while the linear error covariance analysis for the proposed algorithm is presented in Section 5. The results of Monte Carlo simulation and performance analysis for different formation geometries and measuring conditions are presented in Section 6. Conclusions are presented in Section 7. Fig. 1 illustrates the formation geometry and vector quantities associated with the IROD problem, where v C is the virtual chaser which does not have a camera on-board, r C is the real observer and also named chaser which has a camera mounted in the center of mass. As we can see from the figure, this kind of geometry looks like the case shown in Geller et al. [Geller and Perez (2015)], i.e., the observer r C seems to be a flying camera offset from the center of mass of chaser v C and the offset is changing with time. However, it has been deduced that if both of the chaser and offset camera satisfy the CW equations the observability of the orbital state is not possible [Geller and Klein (2014)]. But actually this hypothesis was made under the assumption of the chaser and the camera are close to each other and CW dynamics is used. Thus, in this paper the observability problem will be tried to be solved by the use of nonlinear orbit propagation of the observer r C . Problem formulation The same as previous work, the relative motion of the chaser r C with respect to the target is still governed by the analytic solution to the CW equations in the LVLH (Local Vertical Local Horizontal) reference frame. The origin of a rotating LVLH reference frame is colocated with the target center-of-mass. The axes of the LVLH frame are aligned with the chaser inertial position vector (z-axis or radial), the normal to the orbit plane (y-axis or cross-track), and the along-track direction (x-axis, in the direction of the v-bar or alongtrack, completes the orthogonal set). The position of the chaser center-of-mass relative to the target center-of-mass in LVLH coordinates is denoted by ( ) t r , and the velocity of the chaser relative to the target as observed from a rotating LVLH frame is denoted by ( ) t v . Vectors without a superscript are assumed to be coordinatized in LVLH coordinates. The motion of the chaser with respect to the target, whether on a flyby orbit, a circumnavigation/football orbit [Woffinden and Geller (2009)], or any other coasting trajectory, is governed by the analytic solution to the CW equations where ( where 1 k is some unknown scale factor of (0) i , and the baseline rv (0) r is calculated as where v (0) R and r (0) R are the inertial position vectors of the Spacecraft v C and r C , is the transformation matrix from the inertial frame to LVLH frame. 
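Since the solution is built on the closed-form CW propagation, a small sketch of the state-transition matrix is useful. The sketch uses the standard Hill-frame ordering (x radial, y along-track, z cross-track), which differs from the LVLH axis ordering adopted in this paper; the example initial state is the rendezvous case quoted later, re-ordered to this convention, and is purely illustrative.

```python
import numpy as np

def cw_state_transition(n, t):
    """Clohessy-Wiltshire state-transition matrix for a circular target orbit.

    Standard Hill-frame convention (x radial, y along-track, z cross-track),
    which differs from the paper's LVLH axis ordering; n is the target mean
    motion (rad/s) and t the propagation time (s).
    """
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3*c,      0, 0,    s/n,          2*(1 - c)/n,      0],
        [6*(s - n*t),  1, 0,    2*(c - 1)/n,  (4*s - 3*n*t)/n,  0],
        [0,            0, c,    0,            0,                s/n],
        [3*n*s,        0, 0,    c,            2*s,              0],
        [6*n*(c - 1),  0, 0,   -2*s,          4*c - 3,          0],
        [0,            0, -n*s, 0,            0,                c],
    ])

mu = 398600.4418e9                  # Earth's GM, m^3/s^2
a = 6790.15e3                       # target semi-major axis from the paper, m
n = np.sqrt(mu / a**3)              # mean motion, rad/s
# Illustrative relative state: 10 km below, 35 km behind, velocity re-ordered to this convention.
x0 = np.array([-10e3, -35e3, 0.0, -0.2, 5.9, 0.0])
print(cw_state_transition(n, 600.0) @ x0)   # relative state 10 minutes later
```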
The value of r R will be given by on-board GPS receiver, but the value of v R (or the orbit of r C ) will be propagated by absolute orbit dynamics with a given initial state where g is the acceleration due to gravity acting on the virtual spacecraft v C which is based on a point-mass gravity model [Kaplan (1976)]. Similarly, for the second and third LOS observations the solution for the initial position and velocity where 2 k and 3 k are also unknown scale factors of (1) i and (2) i , respectively. Then, when 3 N ≥ observations are available during a coasting period, the th i observation also satisfies 3 3 3 3 3 3 3 3 3 1 3 1 3 3 3 3 3 1 3 1 3 3 3 3 3 1 3 1 3 3 3 3 3 3 3 3 3 1 3 1 3 3 3 Then, the least-squares solution to this set of over-determined equation is After that, unique values for the initial position (0) r and velocity (0) v can be extracted. Thus, Eqs. (16)-(17) represents a simple algorithm that can be used to determine the solution to the angles-only IROD problem based on 3 N ≥ observations for any relative motion coasting trajectory, and for any known constant or time-varying chaser orientation. Observability analysis Conceptually, the relative state X can be uniquely determined from the measured LOS time history, the angles-only IROD problem is said to be observable. By contrast, it is said to be unobservable if more than one set of states share the same LOS time history. The goal of this section was to analytically analyze the initial relative state's observability criteria of the angles-only IROD problem based on proposed algorithm. Firstly, as shown in Eq. (5), the baseline rv r is calculated from the inertial position of the two spacecraft. But if it is propagated by CW dynamics, i.e., So when N observations can be obtained, Eq. (13) is reduced to where A  is a matrix depending on line-of-sight and transition matrix. Thus, according to the linear system theorem [Strang (2014)], the unique physical solution cannot be determined from Eq. (21) whatever A  is full rank or not, which means unobservable. On contrast, it will be observable if the baseline rv r is calculated by Eq. (5). The reason is Eq. (5) stands for the orbital propagation of high-order relative motion dynamics. And as concluded in Woffinden [Woffinden (2008)], the angles-only navigation system will has some observability if the nonlinear dynamics is used. But if spacecraft v C and r C are so close to each other, e.g., hundreds meters away, Eq. (5) will get a similar result with CW dynamics which leads to be unobservable. Therefore, it is better to have a longer distance, especially have a larger altitude difference between v C and r C in order to improve the observability. Secondly, as shown in Eq. (13), if column vector B is zero vector, i.e., rv ≡ r 0, Eq. (13) is homogeneous. Then, the initial orbit cannot be uniquely solved. Thus, the necessary condition for Eq. (13) has unique physical solution is rv ≡ / r 0 Further, if vector rv r is parallel to the line-of-sight, the column vector B is dependent with system state X . Then the initial orbit is unobservable. For example, if there have three observations and rv r where m , n and k are unknown scale factors respectively. Then the column vector B can be re-expressed as Substituting Eq. (26) into Eq. (13) produces ( ) Thus, the initial orbit cannot be solved from Eq. (28) whatever the coefficient matrix is full rank or not. 
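The sketch below shows one compact way to assemble and solve the over-determined linear system. It assumes each observation obeys k_j i_j = Φ_pos(t_j) X_0 + b_j with a known baseline term b_j, and eliminates the unknown scales k_j with the cross-product operator, which is admissible exactly when the observability condition (29) holds; the paper's Eqs. (13)-(17) instead retain the k_j as additional unknowns, so this is an equivalent-in-spirit formulation rather than a transcription of the paper's matrices.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix [v]x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def irod_least_squares(times, los_list, baseline_list, stm):
    """Sketch of the angles-only IROD least-squares solve.

    Assumes each observation obeys k_j * i_j = Phi_pos(t_j) @ X0 + b_j, with i_j the
    unit LOS, b_j the known baseline term, and Phi_pos the 3x6 position block of the
    CW transition matrix returned by stm(t).  The unknown scale k_j is eliminated with
    [i_j]x, which requires [i_j]x b_j != 0 -- the observability condition (29).  The
    paper's Eqs. (13)-(17) instead keep the k_j as extra unknowns; both routes lead to
    an over-determined linear system solved in the least-squares sense.
    """
    rows, rhs = [], []
    for t, i_j, b_j in zip(times, los_list, baseline_list):
        phi_pos = stm(t)[:3, :]             # maps X0 to relative position at time t
        rows.append(skew(i_j) @ phi_pos)
        rhs.append(-skew(i_j) @ b_j)
    A = np.vstack(rows)
    b = np.concatenate(rhs)
    x0, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0                               # estimated initial relative position/velocity
```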
Linear error covariance analysis As shown in Eqs. (13)-(15), the IROD solution requires knowledge of the line-of-sight i and the baseline r_rv. The measured values of these variables, ĩ and r̃_rv, contain errors that lead to estimation errors in the initial relative orbit. Thus, it is important to understand how the IROD estimation error and covariance propagate in terms of the measurement errors. In the following subsections, the measurement models and error models are built and used to perform the linear error covariance analysis. Measurement models Firstly, it is assumed that the measured value of the unit LOS vector i(j) from spacecraft C_r contains a camera measurement error ν_j, modeled as zero-mean Gaussian noise with standard deviation σ_cam. The measured line-of-sight is given by ĩ(j) = (I + [ν_j]×) i(j), where [·]× is the skew-symmetric cross-product matrix operator and j is a time label. Secondly, the calculation of the baseline r̃_rv requires the inertial-to-LVLH transformation matrix T_inertial^LVLH and the inertial positions of both spacecraft C_v and C_r. Given the current state of navigation technology, errors in the chaser inertial position and velocity vectors will be small if an ordinary GPS receiver is on board, which is very common on low-Earth-orbit spacecraft. Thus, the position error δR of C_r can be modeled as zero-mean Gaussian noise with standard deviation σ_gps, and the measured baseline can be modeled as r̃_rv = T_inertial^LVLH (R_v − (R_r + δR)). Because T_inertial^LVLH is calculated from the estimated inertial position and velocity of C_v and δR is small, it is assumed that T_inertial^LVLH is known perfectly and its error is negligible. Analytic error covariance First of all, the estimation error of the initial relative orbit state is given by δX = X̃ − X, where X̃ denotes the estimated value of the initial state and X is the true value of the initial state, given by X = A⁺ B, where A⁺ is the pseudo-inverse of A. Letting Ã = A + δA and B̃ = B + δB, the perturbations δA and δB can be obtained by substituting the measurement models into the definitions of A and B; they can then be expressed in terms of the LOS errors ν_j and the GPS position error δR, where T is used as a shortcut for the inertial-to-LVLH transformation matrix T_inertial^LVLH. The resulting estimation error covariance is driven by the two noise levels σ_gps and σ_cam. Monte Carlo simulation A Keplerian two-body Monte Carlo simulation system is built to verify the validity and evaluate the performance of the proposed algorithm. The truth-model state vector for reference is a 12-dimensional vector defined by the inertial states (position and velocity) of the two vehicles [Gong, Geller and Luo (2016)]. In the following, the error covariance computation models are presented, the key simulation parameters are set, and the performance of the proposed IROD algorithm is analyzed. Error covariance computation models The true estimation error statistics are generated by selecting a fixed set of initial conditions for both spacecraft's inertial states. The truth models generate the spacecraft trajectories and a LOS time history.
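A hedged sketch of these two error models (the small-rotation LOS perturbation and the Gaussian GPS position noise); the exact perturbation form is an assumption consistent with the description above, not necessarily the paper's verbatim model.

```python
import numpy as np

rng = np.random.default_rng(0)

def skew(v):
    """Skew-symmetric cross-product matrix [v]x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def measured_los(i_true, sigma_cam=1e-4):
    """Perturb the true unit LOS by a small random rotation (assumed form);
    sigma_cam in rad/axis; the result is re-normalized to unit length."""
    nu = rng.normal(0.0, sigma_cam, 3)
    i_meas = (np.eye(3) + skew(nu)) @ i_true
    return i_meas / np.linalg.norm(i_meas)

def measured_baseline(R_v, R_r, T_lvlh, sigma_gps=10.0):
    """LVLH baseline with Gaussian GPS noise on the chaser position;
    positions and sigma_gps in meters."""
    dR = rng.normal(0.0, sigma_gps, 3)
    return T_lvlh @ (R_v - (R_r + dR))
```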
Then the error models described above are used to generate a set of observations that are processed by the proposed algorithm to obtain the orbital solution x̂₀(i) for the i-th Monte Carlo run. When n Monte Carlo runs are available, the true error mean and covariance of the estimate are calculated as M_true = (1/n) Σᵢ eᵢ and P_true = (1/(n−1)) Σᵢ (eᵢ − M_true)(eᵢ − M_true)ᵀ, where the estimation error for the i-th Monte Carlo run is eᵢ = x̂₀(i) − x₀. Further, it is more intuitive to know how good the range estimate is, because of the well-known range observability problem of angles-only IROD and navigation during orbital proximity operations. Thus, a 2-norm is used to describe the accuracy of the proposed IROD algorithm, M_d = ‖(M_x, M_y, M_z)‖₂ and σ_d = sqrt(σ_x² + σ_y² + σ_z²), where σ_x, σ_y, and σ_z are the square roots of the first three diagonal elements of P_true, i.e., the standard deviations of the error in the x, y, and z directions. Since M_d and σ_d are sufficient to characterize the uncertainty of the relative range estimate, IROD performance is measured and presented by M_d and σ_d in the following section. Parameters setting First of all, the emphasis is placed on verifying the proposed IROD algorithm and testing the performance of different distributed formations rather than on the effect of dynamics, so a near-circular target orbit is used with the following initial orbital elements: semi-major axis, 6790.15 km; eccentricity, 0.001; inclination, 51.65 degrees; ascending node, 281.65 degrees; argument of perigee, 37.39 degrees; true anomaly, 322.76 degrees. The nominal initial position of the virtual chaser C_v is in the +v-bar direction and the corresponding velocity is zero, i.e., C_v is initialized as v-bar stationary with respect to the target. The initial relative orbit of C_r is set in each simulation case separately. Secondly, a summary of the other key parameters is provided in Tab. 1. The accuracy of the GPS receiver is assumed to be 10 m/axis, while the position uncertainty of the virtual spacecraft C_v is assumed to be 1 m/axis. The integration time step is 1 s, the number of Monte Carlo runs n is 500 (which roughly corresponds to more than 90% confidence), and the number of observations N varies from 3 to 21. Additionally, the line-of-sight uncertainty is set to a medium level, i.e., 0.0001 rad/axis, consistent with the current state of optical sensors. Angles-only IROD performance analysis The results of Case 1 are shown in Fig. 2. The initial position of the virtual chaser C_v is 5 km downrange of the target in the +v-bar direction. The initial relative position of the chaser C_r is 5 km altitude in the +r-bar direction, and the relative orbit is varied by setting different initial velocities in the along-track direction while the corresponding velocities in the radial and cross-track directions are zero. It can be seen that the best estimation is achieved when the initial along-track velocity is zero. Further, the range estimate accuracy can be as good as a few meters if more than 3 observations are available. The larger the initial along-track velocity, the worse the estimation: when the initial along-track velocity is ±5 m/s and 21 observations are available, M_d is about 1400 m, while σ_d is smaller.
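Before turning to the remaining cases, the Monte Carlo error statistics defined above can be sketched as follows; `errors` is assumed to be an (n, 6) array of per-run estimation errors eᵢ = x̂₀(i) − x₀.

```python
import numpy as np

def mc_statistics(errors):
    """Sample error mean/covariance over n Monte Carlo runs, and the
    range-accuracy summaries M_d and sigma_d used in this section.
    errors: (n, 6) array, position components first."""
    M_true = errors.mean(axis=0)
    P_true = np.cov(errors, rowvar=False)        # 1/(n-1) normalization
    M_d = np.linalg.norm(M_true[:3])             # 2-norm of mean position error
    sigma_d = np.sqrt(np.trace(P_true[:3, :3]))  # sqrt(sx^2 + sy^2 + sz^2)
    return M_true, P_true, M_d, sigma_d
```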
In order to make the simulation closer to a practical rendezvous circumstance, in Case 2 the chaser is initialized at a position 35 km behind and 10 km below the target, with a relative velocity of [5.9, 0, -0.2] m/s, and the IROD performance is tested and analyzed by changing the virtual spacecraft's orbit. Its nominal initial positions are 1, 5, 15, 50, and 100 km downrange of the target in the +v-bar direction, respectively. As shown in Fig. 2, M_d (the estimation error) shows almost no change when the initial virtual position varies from 1 km to 50 km: M_d is around 1600 m (about 4.4% of the initial separation) for 21 observations. For the largest downrange position, M_d becomes smaller, about 570 m for 21 observations, i.e., 1.6% of the initial separation. Further, the v-bar stationary position of the virtual spacecraft has almost no influence on the standard deviation of the estimation error, which is consistent with the conclusion of the covariance analysis, i.e., the estimation error covariance mainly depends on the levels of the absolute positioning noise and the LOS measurement noise. It can be seen that σ_d is about 60 m when more than 3 observations are available. In Case 3, the virtual spacecraft's radial position varies from 0 km to 10 km, while the other conditions are the same as in Case 2. It can be seen that changing the virtual spacecraft's radial position also has only a slight influence on σ_d. However, M_d changes considerably, to about 6 km for a 5 km radial position (16.5% of the initial separation) and 18 km for a 10 km radial position (49.5% of the initial separation), which is not acceptable. Thus, keeping the virtual spacecraft v-bar stationary is a good choice for the IROD problem. In the final case, the magnitude of the virtual spacecraft's oscillating (out-of-plane) motion is varied, and the results follow the same trend as those of Cases 2 and 3. It can be seen that the change of the oscillating magnitude has only a slight impact on the range estimation, i.e., when the oscillating magnitude increases, the estimation error increases a little. Therefore, it is better to keep the virtual spacecraft co-planar with the target. Conclusions This paper presented a hybrid dynamics scheme based on the concept of a virtual distribution to analytically determine the initial relative orbit for angles-only rendezvous missions. An observability condition and an approximately closed-form estimation error covariance were obtained through observability analysis and linear covariance analysis in the context of the Clohessy-Wiltshire equations. A detailed performance analysis of the proposed IROD algorithm was conducted based on standard two-body Monte Carlo simulations of classical rendezvous missions. The simulation results have shown the potential of the proposed algorithm to analytically solve the angles-only IROD problem for rendezvous missions.
2020-01-16T09:05:34.570Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "3de4990275503cdf30f96c74a72e03cff6a8a1fe", "oa_license": "CCBY", "oa_url": "https://doi.org/10.32604/cmes.2020.07769", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "a0d144d20b3b1b3a33060b581362bcc32d5d65f0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Computer Science" ] }
86463
pes2o/s2orc
v3-fos-license
Free radicals and antioxidant status in acute myocardial infarction patients with and without diabetes mellitus In this study we investigated oxidative stress, antioxidants, and inflammatory molecules in patients with acute myocardial infarction (AMI) with diabetes (n=50) and without diabetes (n=50). Fifty healthy subjects served as controls. The levels of plasma TBARS and ceruloplasmin were significantly higher in diabetic and non-diabetic AMI patients than in controls. On the other hand, the activities of both enzymatic and non-enzymatic antioxidants were significantly decreased in diabetic and non-diabetic AMI patients compared with controls. Inflammatory markers showed a significant rise in diabetic patients compared with controls. Our results show increased inflammation and oxidative stress in patients with AMI, and the magnitude of the imbalance is greater in diabetic AMI patients, possibly because of greater inflammation in diabetic patients. Introduction Diabetes mellitus is a major risk factor for coronary artery disease and is associated with a higher incidence of acute myocardial infarction (AMI) and sudden death. Morbidity, mortality, and reinfarction rates are higher following AMI in diabetic than in non-diabetic subjects, with one-year mortality in this population as high as 50%. Similarly, the acute and long-term efficacy of reperfusion strategies has, historically, been worse in patients with diabetes [1-3]. Atherosclerotic coronary artery disease is a major cause of AMI. It results in erosion or rupture of a plaque, causing complete, transient, or partial occlusion of the arteries. The heart cannot continue to function without adequate blood flow, and if flow is severely compromised, death is inevitable. Several risk factors for coronary heart disease have been well documented, including hypertension, hyperlipidemia, diabetes, a positive family history, smoking, obesity, and inactivity [4]. Reactive oxygen species are capable of reacting with unsaturated lipids and of initiating the self-perpetuating chain reactions of lipid peroxidation in membranes. Free radicals can initiate oxidation of sulfhydryl groups in proteins and cleavage of nucleic acid strands. Myocardial antioxidants inhibit or delay the oxidative damage to subcellular proteins, carbohydrates, lipids, and DNA.
There is evidence that antioxidants can protect against free radical attack, which is responsible for reperfusion-induced damage and lipid peroxidation, and may thereby inhibit thrombosis, myocardial damage, and arrhythmias during AMI. Antioxidant status is a critical tool for assessing redox status [5]. Antioxidant status, or the related antioxidants, may play an important role in protecting the organism from free-radical-mediated damage. The role of such compounds in AMI development is vital, as they may decrease the damage resulting from blood reactive oxygen species during reperfusion. Formation of lipid peroxides by the action of free radicals on unsaturated fatty acids has been implicated in the pathogenesis of atherosclerosis and vascular diseases [6]. The incidence of vascular disease is higher in diabetic patients and is mainly attributed to increased free radical activity. The major contributing factors to increased oxidative stress are increased non-enzymatic glycosylation, autoxidative glycosylation, and metabolic stress resulting from changes in energy metabolism, alterations in the sorbitol pathway, changes in the levels of inflammatory mediators, the status of antioxidant defense systems, and localized tissue damage resulting from hypoxia and ischemic reperfusion injury. Increased levels of the products of oxidative damage to lipids have been detected in the serum of diabetic AMI patients, and their presence correlates with the development of complications [7-9]. Evidence suggests that antioxidants can give effective protection against free radical production, which is responsible for reperfusion-induced damage and lipid peroxidation, ultimately leading to inhibition of thrombosis, myocardial damage, and arrhythmias during AMI. Materials and Methods Study population: This study was carried out on patients with AMI with diabetes (n=50) and without diabetes (n=50). Fifty age- and sex-matched healthy subjects were studied as controls. All patients had been admitted to the coronary care units of Raajam Hospital, Salem, Tamil Nadu, India, between July 2007 and March 2009. The diagnosis of AMI was based on a history of prolonged ischemic chest pain, characteristic electrocardiogram changes, and elevated creatine kinase isoenzyme MB (CK-MB) and troponin T within 12 hours after the onset of pain. Hypertension was defined as a diastolic blood pressure ≥90 mmHg, systolic blood pressure ≥140 mmHg, or self-reported use of an antihypertensive drug. Cardiovascular disease was diagnosed by angiography, electrocardiography, scintigraphy, and exercise testing, or by self-reported use of a β-blocker, angiotensin I converting enzyme inhibitor, and/or diuretic drug. Patients who had a total cholesterol level >220 mg/dL or a triglyceride concentration >200 mg/dL, or who were receiving lipid-lowering drugs, were defined as having hyperlipidemia. Diabetes mellitus was diagnosed if the fasting plasma glucose concentration was ≥120 mg/dL or if the patient was treated with insulin or oral hypoglycemic agents. Normal subjects of the same age group, free from diabetes mellitus and other chronic diseases, were selected as controls. No subject (patient or control) was taking antioxidant or vitamin supplements, probucol, allopurinol, quinidine, disopyramide, or other drugs known to affect serum lipid peroxidation and antioxidant values. Prior to the study, oral consent was procured from the patients' relatives and the normal subjects.
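For clarity, the diagnostic cut-offs stated above can be collected into a short sketch; the field names are illustrative only and are not from the study database.

```python
def classify(subject):
    """Apply the study's stated diagnostic thresholds to one subject.
    `subject` is a dict with illustrative keys; pressures in mmHg,
    lipids and glucose in mg/dL."""
    return {
        "hypertension": (subject["sbp"] >= 140 or subject["dbp"] >= 90
                         or subject["on_antihypertensive"]),
        "hyperlipidemia": (subject["total_cholesterol"] > 220
                           or subject["triglycerides"] > 200
                           or subject["on_lipid_lowering"]),
        "diabetes": (subject["fasting_glucose"] >= 120
                     or subject["on_insulin_or_oral_hypoglycemics"]),
    }
```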
Blood collection and erythrocyte lysate preparation: Blood samples were collected by venipuncture into heparinized tubes, and the plasma was separated by centrifugation at 1,000 g for 15 min. The plasma was collected with simultaneous removal of the buffy coat. The packed cells were washed three times with cold physiological saline. A known volume of erythrocytes was lysed with hypotonic phosphate buffer (pH 7.4). The hemolysate was separated by centrifugation at 2,500 g for 10 min at 2°C. Biochemical investigation: Blood glucose, HbA1c, total protein, albumin, total cholesterol, triglyceride, HDL-C, LDL-C, CK, and CK-MB were determined using a fully automated clinical chemistry analyzer (Hitachi 912, Boehringer Mannheim, Germany). Serum VLDL-C was calculated according to Friedewald et al [10]. Myoglobin and troponin T were measured on a Roche Elecsys 2010 immunoassay analyzer (USA), and the CRP level was determined by the nephelometric method on the basis of particle-bound goat anti-human CRP (Beckman Instruments, Inc., Fullerton, CA). Plasma fibrinogen was determined using a semi-automated Diagnostica Stago STart 4 coagulation instrument. Estimation of lipid peroxidation: Lipid peroxides were estimated by measurement of thiobarbituric acid reactive substances (TBARS) in plasma by the method of Yagi [11]. The pink chromogen produced by the reaction of thiobarbituric acid with malondialdehyde, a secondary product of lipid peroxidation, was measured. The absorbance of the clear supernatant was read against a reference blank at 535 nm. Ceruloplasmin was determined using its copper oxidase activity by the method of Ravin [12]. In this method, the action of ceruloplasmin on p-phenylenediamine is used to measure the amount of ceruloplasmin present in the serum. The dark lavender color was read at 530 nm using a control tube as blank. The concentration of ceruloplasmin in mg/dL equals the absorbance × 87.5. Assay of enzymatic antioxidants: Superoxide dismutase (SOD) was assayed using the technique of Kakkar et al [13], based on inhibition of the formation of nicotinamide adenine dinucleotide-phenazine methosulfate-nitroblue tetrazolium formazan. A single unit of enzyme activity was expressed as 50% inhibition of NBT (nitroblue tetrazolium) reduction/min/mg protein. Catalase (CAT) was assayed colorimetrically at 620 nm and expressed as µmol of H2O2 consumed/min/mg Hb, as described by Sinha [14]. The reaction mixture (1.5 mL) contained 1.0 mL of 0.01 M phosphate buffer (pH 7.0), 0.1 mL of hemolysate, and 0.4 mL of 2 M H2O2. The reaction was stopped by the addition of 2.0 mL of dichromate-acetic acid reagent (5% potassium dichromate and glacial acetic acid mixed 1:3). GSH content was determined by the method of Ellman [15]: 1.0 mL of plasma was treated with 0.5 mL of Ellman's reagent (19.8 mg of 5,5'-dithiobisnitrobenzoic acid (DTNB) in 100 mL of 0.1% sodium nitrate) and 3.0 mL of phosphate buffer (0.2 M, pH 8.0). The absorbance was read at 412 nm, and GSH content was expressed as mg/dL. GPx activity was measured by the method described by Rotruck et al [16]. Briefly, the reaction mixture contained 0.2 mL of 0.4 M Tris-HCl buffer (pH 7.0), 0.1 mL of 10 mM sodium azide, 0.2 mL of homogenate (homogenized in 0.4 M Tris-HCl buffer, pH 7.0), 0.2 mL of glutathione, and 0.1 mL of 0.2 mM H2O2. The contents were incubated at 37°C for 10 min, the reaction was arrested with 0.4 mL of 10% TCA, and the mixture was centrifuged. The supernatant was assayed for glutathione content using Ellman's reagent (19.8 mg of 5,5'-dithiobisnitrobenzoic acid (DTNB) in 100 mL of 0.1% sodium nitrate).
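Two of the derived quantities above have simple closed forms: the Friedewald estimate used for VLDL-C (with LDL-C then obtained by difference) and the ceruloplasmin concentration from its absorbance. A minimal sketch, assuming all lipid values in mg/dL and the usual Friedewald validity limit of triglycerides below 400 mg/dL:

```python
def friedewald(total_chol, hdl_c, triglycerides):
    """Friedewald et al.: VLDL-C = TG/5 and LDL-C = TC - HDL-C - TG/5
    (all in mg/dL; the approximation assumes TG < 400 mg/dL)."""
    vldl_c = triglycerides / 5.0
    ldl_c = total_chol - hdl_c - vldl_c
    return vldl_c, ldl_c

def ceruloplasmin_mg_dl(absorbance_530nm):
    """Ravin's method as described above: concentration = absorbance x 87.5."""
    return absorbance_530nm * 87.5
```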
Assay of non-enzymatic antioxidants: Plasma vitamin A (β-carotene) was estimated by the method of Bradle and Hombeck [17]. Ethanol was used to precipitate proteins, and the carotenes were extracted into light petroleum. The intensity of the yellow color developed by carotene was read directly at 450 nm using a violet filter. Vitamin E was measured by the method of Baker et al [18], on the basis of the reduction of ferric ions to ferrous ions by vitamin E (α-tocopherol) and the formation of a red colored complex with 2,2'-dipyridyl, read at 520 nm. Vitamin C (ascorbic acid) was estimated by the method of Roe and Kuether [19]. It involves oxidation of ascorbic acid by copper, followed by treatment with 2,4-dinitrophenylhydrazine and rearrangement; the product formed has an absorption maximum at 520 nm. Statistical analysis: All data are expressed as mean ± SD. Statistical significance was evaluated by Student's t test using the Statistical Package for the Social Sciences (SPSS Cary, NC, USA) version 10.0. Results and Discussion Table I shows the demographic characteristics of the study population in the control group and in AMI patients with and without diabetes. Body mass index was significantly higher in the diabetic group than in controls; compared with controls, there were no significant differences in non-diabetic AMI patients. Systolic blood pressure was significantly higher in both patient groups than in controls. The levels of blood glucose and HbA1c were significantly higher in the diabetic AMI group (p<0.001) than in controls (Table II); compared with controls, there were no significant differences in non-diabetic AMI patients. Serum lipids showed significantly higher concentrations (p<0.001) of total cholesterol, triglyceride, low-density lipoprotein, and very low-density lipoprotein in the diabetic AMI group. On the other hand, the levels of serum total protein, albumin, and HDL-C were significantly decreased in diabetic AMI patients compared with healthy control subjects. The levels of cardiac markers (CK, CK-MB, and troponin T) were significantly higher in both MI groups compared with control subjects.
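The group comparisons reported in Tables I-III rest on Student's t test; a minimal sketch with made-up illustrative numbers (not the study data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
controls = rng.normal(3.2, 0.5, 50)      # e.g., plasma TBARS, arbitrary units
diabetic_ami = rng.normal(5.1, 0.8, 50)

t_stat, p_value = stats.ttest_ind(diabetic_ami, controls)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")  # expect p < 0.001 here
```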
Table III presents the levels of circulatory lipid peroxidation, antioxidant status, and inflammatory markers in controls and in AMI patients with and without diabetes. The lipid peroxidation markers TBARS and ceruloplasmin were significantly higher in diabetic and non-diabetic AMI patients than in controls (p<0.001). The activities of erythrocyte antioxidants such as SOD, CAT, and GPx, and the levels of GSH and vitamins A, E, and C, were significantly decreased in the diabetic and non-diabetic AMI groups (p<0.001) compared with controls. All inflammatory parameters were significantly raised in diabetic AMI patients compared with controls; in non-diabetic patients, CRP and fibrinogen levels were significantly higher than in controls. Changes in the concentrations of plasma lipids, including cholesterol, are complications frequently observed in patients with MI and certainly contribute to the development of vascular disease. Cholesterol has been singled out as the primary factor in the development of atherosclerosis. HDL is regarded as one of the most important protective factors against arteriosclerosis; its protective function has been attributed to its active participation in the reverse transport of cholesterol. Numerous cohort studies and clinical trials have confirmed the association between a low HDL and an increased risk of coronary heart disease [21]. The concentration of LDL correlates positively, whereas HDL correlates inversely, with the development of coronary heart disease. Smokers have significantly higher serum cholesterol, triglyceride, and LDL levels, but HDL is lower in smokers than in non-smokers [22]. There is evidence for a role of oxidatively modified LDL in the pathogenesis of atherosclerosis: increased oxidative stress and the generation of free oxygen radicals can modify LDL to oxidized LDL, which can lead to atherosclerotic lesions [23]. Elevated levels of CK, CK-MB, and troponin T have been regarded as biochemical markers of myocyte necrosis [24]. Both CK and its isoenzyme CK-MB play a major role in defining myocardial infarction. These enzymes normally exist in the cellular compartment and leak into the plasma during myocardial injury due to disintegration of the contractile elements and the sarcoplasmic reticulum. Troponin T is a protein of the troponin regulatory complex involved in cardiac contractility. The cardiac troponins have very high myocardial tissue specificity and offer improved sensitivity and specificity for MI versus a combination of electrocardiogram findings and traditional biochemical markers; they are therefore the preferred markers for the diagnosis of myocardial infarction [25]. In this study, increased CK, CK-MB, and troponin T levels were found in patients with MI compared with healthy controls.
The significant rise in TBARS levels, a lipid peroxidation product, in our patients is indicative of elevated oxidative stress in diabetic AMI patients. An increased level of serum ceruloplasmin in AMI patients suggests that this molecule may act as an oxidative stress indicator, though the mechanism remains unclear. It is an inflammation-sensitive protein and an acute phase reactant [26]. It has been shown that ceruloplasmin exhibits pro-oxidant activity and causes oxidative modification of LDL, indicating that ceruloplasmin is an independent risk factor for cardiovascular diseases. A positive correlation was observed between ceruloplasmin and sialic acid in the AMI group; as sialic acid is a well-known inflammatory marker, ceruloplasmin may have a possible role in inflammation. In the AMI group, we also found positive correlations between ceruloplasmin and total cholesterol and triglycerides. All these results indicate that ceruloplasmin may be considered an inflammatory molecule [27]. Antioxidants act as the foremost defense system against free radicals, thereby limiting their toxicity. It is known that plasma antioxidant capacity decreases and the oxidative/antioxidative balance shifts to the oxidative side in patients with MI. A reason for increased lipid peroxidation in the plasma of MI patients may be a poor enzymatic and non-enzymatic antioxidant defense system. SOD, along with CAT and GPx, the preventive antioxidants, plays a very important role in protection against lipid peroxidation. In this study, SOD, CAT, and GPx activities were significantly lower in MI and IHD patients than in control subjects. Moreover, the decrease in SOD, CAT, and GPx activity was much more pronounced in smokers than in non-smokers with MI, making those individuals more vulnerable to oxidative stress [28]. Free radical-scavenging enzymes such as SOD, CAT, and GPx are the first line of cellular defense against oxidative injury, decomposing O2•− and H2O2 before they interact to form the more reactive hydroxyl radical (•OH). These enzymes protect the red cells against O2•−- and H2O2-mediated lipid peroxidation. We observed decreased activities of SOD and CAT in the erythrocytes of AMI patients. The decrease in the activity of SOD and CAT may be due to inactivation of the enzymes by cross-linking, or to exhaustion of the enzymes by increased peroxidation. Glutathione peroxidase (GPx) catalyzes peroxide reduction using GSH as the substrate, which is finally converted to GSSG. We observed a decrease in GPx activity in the erythrocytes of AMI patients. Inactivation of GPx after endogenous exposure to aldehydic by-products of lipid peroxidation or to NO has been reported. A decreased GSH concentration may in turn lead to decreased GPx activity, because GSH is one of the substrates for GPx. GSH is one of the most important endogenous antioxidants: it provides the sulfhydryl (SH) group for direct scavenging reactions, and it acts both as a substrate in the scavenging reaction catalyzed by GPx and as a scavenger of vitamin C and E radicals. In our study, the plasma and erythrocyte GSH concentrations were significantly decreased in AMI patients, possibly due to increased consumption of GSH. In AMI patients, we also found significantly lower levels of vitamins E and C compared with controls [28]. This indicates severe damage to the antioxidant system, which is unable to combat oxidative stress and inflammation.
Inflammation is a major contributing factor in the development of atherosclerosis and coronary heart disease. Elevated markers of inflammation, in particular CRP, are associated with an increased risk of future cardiovascular events in healthy subjects, in patients with stable or unstable coronary artery disease, and in AMI [29,30]. Although the prognostic value of CRP in patients with myocardial infarction has not been taken into consideration in several studies, the data suggest that CRP is an important marker of risk [31]. We observed increased CRP levels in AMI patients compared with healthy controls. Moreover, CRP concentrations in each patient group were higher than in the control group, with the highest concentrations observed in the diabetic AMI patients; elevated CRP levels were also observed in the cardiovascular disease and hypertension groups. CRP levels may rise exponentially due to hypertension, cardiovascular disease, or hyperlipidemia. Hypertension, diabetes mellitus, older age, extension of the necrosis area, previous AMI, and an anterior site of AMI are considered among the most important features leading to heart failure during AMI [32]. In AMI patients, we found significantly higher levels of fibrinogen than in controls. Fibrinogen is an acute phase reactant that is increased in inflammatory states [33]. Hepatic synthesis of fibrinogen can increase up to 4-fold in response to inflammatory or infectious triggers. Some earlier studies have identified fibrinogen as a major independent risk factor for cardiovascular diseases. Fibrinogen is directly associated with AMI and is an independent short-term predictor of mortality [34]; this also reflects its role as an acute phase reactant responding to inflammation. Thus, our study indicates an imbalance between oxidant and antioxidant molecules in AMI patients, and the magnitude of the imbalance is greater in diabetic AMI patients, possibly because of greater inflammation in diabetic patients. Table I: Demographic characteristics of the study population in controls and AMI patients with and without diabetes. Table II: Biochemical changes and cardiac markers in controls and AMI patients with and without diabetes. Values are mean ± SD from 50 subjects in each group. ¤ AMI without DM compared with control subjects (¤ p<0.05, ¤¤ p<0.01; ‡ not significant); * AMI with DM compared with control subjects (* p<0.001); † AMI with DM compared with AMI without DM († p<0.001).
2017-03-30T21:40:46.137Z
2010-02-07T00:00:00.000
{ "year": 2009, "sha1": "92d811d3f8c5b3044e7e9737c8a0c05b80df1f2f", "oa_license": "CCBY", "oa_url": "https://www.banglajol.info/index.php/BMRCB/article/download/2999/3572", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "92d811d3f8c5b3044e7e9737c8a0c05b80df1f2f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259318849
pes2o/s2orc
v3-fos-license
Evaluation of land use/land cover effect on streamflow: a case of Robigumero watershed, Abay Basin, Ethiopia Land use/land cover (LULC) change has an impact on the hydrology of the Robigumero watershed. This study focused on estimating land use change and streamflow under different land use/land cover conditions of the Robigumero watershed. Land use/land cover maps of 1996, 2006, and 2016 were collected from the Ethiopian Ministry of Water, Irrigation and Energy. The Soil and Water Assessment Tool (SWAT) model was used to simulate LULC effects on the streamflow of the Robigumero watershed. The SWAT model performance was evaluated through sensitivity analysis, calibration, and validation. During the study period, the land use/land cover changed due to population growth in the study area: agricultural land increased by 22.4%, while grassland and forest land decreased by 17.5% and 5.3%, respectively, between 1996 and 2016. The streamflow simulation results were used to assess the seasonal variability in streamflow caused by changes in land use and land cover. Both the calibration and validation results show very good agreement between observed and simulated streamflow, with an NSE value of 0.81 and an R² value of 0.83 for calibration, and an NSE value of 0.86 and an R² value of 0.87 for validation. The results of this study indicate that mean monthly streamflow increased by 44.1 m³/s in the wet season and decreased by 2.3 m³/s in the dry season over the 21-year period. In general, a reduction of agricultural land and an increase of forest land on degraded land reduce streamflow, which indicates a reduction of soil erosion. Therefore, these results can be used to encourage different users and policymakers in the planning and management of water resources in the Robigumero watershed as well as in other regions of Ethiopia. Background The land and water resources of the watershed and its ecosystem are endangered: the nature of the watershed, rapid population growth, deforestation, overgrazing, and soil erosion (detachment of soil from the surface) are serious problems in the Nile basin (Mengie et al. 2019). The land use and land cover of a basin change from one land use to another over time (Lambin et al. 2003; Welde and Gebremariam 2017a; Bewket and Woldeamlak 2002). Changes in land use and land cover are direct and indirect consequences of human activities (Hassen and Assen 2017; Tadele and Förch 2007). Land use and land cover also have an impact on the hydrology of the basin, and these impacts are strongly interlinked (Hassen and Assen 2017; Ayele et al. 2017; Getachew and Melesse 2012). In Ethiopia, where nearly 85% of the population is engaged primarily in agriculture and depends heavily on available water resources, the assessment and management of available water resources is a matter of prime importance. Surface water flow modeling is an important tool frequently used in studies of surface water systems and watershed management (Bezawit A., 2019). Land use/land cover conditions are changing dynamically, especially in developing countries such as Ethiopia, whose economy depends on agriculture. In particular, forest land, shrubs, and grassland have changed to agricultural and settlement land in most parts of the country (Ayele et al. 2017; Tadele and Förch 2007). For example, studies conducted in the Gilgel Abay watershed of the Blue Nile basin show a reduction of forest land and shrub land with an increase of agricultural land.
About 570 km² of forest and shrub land was converted to agriculture and settlement between 1973 and 2001 (Getachew and Melesse 2012). Several important hydrological models are available that consider the physical environment or land use/land cover condition to estimate streamflow or surface runoff, including HEC-HMS, MIKE SHE, SWAT, etc. (Tadele and Förch 2007). There are many hydrological models within each class of modeling; hence, choosing a particular model is one of the challenges for the model user community. Two criteria for selecting the hydrological model structure have been suggested (Lambin et al. 2003; Mohammed and Thatiparthi 2020; Jain et al. 2017; Nicótina et al. 2008; Ghonchepour et al., 2003): the model must be readily and freely available with documentation, and it should be applicable over a range of watershed sizes from large to global (Ghonchepour et al., 2003). Based on these criteria, the Soil and Water Assessment Tool (SWAT) model was selected; it has been used in many studies in Ethiopia. For example, the SWAT model was examined for its applicability to the assessment of water resources in the Upper Awash watershed by Chekol et al. (2007). In the last thirty years, land use/land cover change in the Robigumero watershed has been substantial, driven by the increase of agricultural land and the reduction of forest and grassland. Visible changes in streamflow and surface runoff have been observed in the study area in the form of flooding and soil erosion during the rainy season, and reduced streamflow in the dry season. However, the causes of these streamflow changes in the watershed were not well understood. In the study area, the major cause of altered streamflow is primarily the change in land use/land cover, including deforestation and the conversion of grassland to agricultural land. These pressures on the resource base change the hydrology of the watershed; in addition, sediments deposited in stream channels reduce the flood-carrying capacity, resulting in more frequent overflows and greater floodwater damage to adjacent properties. The main objective of this study is to evaluate the effects of land use/land cover change on streamflow in the Robigumero watershed. Moreover, this research evaluates the land use/land cover changes between 1996, 2006, and 2016 and their implications for streamflow. The outcome of this study benefits stakeholders, water resource planners, farmers, residents, decision makers, and beneficiaries by raising awareness of land use/land cover change in the watershed and by supporting adaptive measures to control and protect against the negative impacts of land use/land cover change on streamflow in the study area. Description of the study area The Jemma River is one of the biggest tributaries of the Blue Nile (Abay River) Basin and is found in the central highlands of Ethiopia, 180 km north of Addis Ababa. It includes parts of the Wollo and North Shewa Zones of the Amhara and Oromia Regions. The Jemma River is located in the east of the Blue Nile River Basin between 9° 05′ 37″-11° 10′ 07″ N latitude and 37° 12′ 07″-40° 0′ 01″ E longitude and covers an area of 15,720 km². Among the small tributaries flowing from the east of the basin into the Jemma River, the Robigumero River is one of the major gauged tributaries. It covers a catchment area of 914.7 km² between 9° 25′-9° 55′ N and 38° 54′-39° 20′ E.
The watershed's altitude varies from slightly over 2546 m above mean sea level (m a.s.l.) in the southern part to 3624 m a.s.l. The study site has two major seasons: a wet season from May to October and a dry season that extends from November to April. Based on 31 years of records at nearby meteorological stations, the annual rainfall depth ranges from 986.7 to 1266.7 mm, and more than 85% of the rain falls during the wet season. According to the FAO soil classification, the dominant soils of the Robigumero watershed are grouped as Calcic Vertisols, Eutric Leptosols, Eutric Vertisols, and Eutric Cambisols. The most common soil textures for these soil types are clay for Calcic Vertisols and Eutric Vertisols, clay-loam for Eutric Cambisols, and loam for Eutric Leptosols. According to the Ministry of Water Resources, Irrigation, and Electricity land use/land cover data used in this study, the dominant land covers of the study area are agricultural land, grassland, and deciduous forest, with less than 1% of the catchment covered by urban area. SWAT model input and data analysis The physically based Soil and Water Assessment Tool (SWAT) was used for watershed delineation, hydrologic response unit (HRU) analysis, weather data write-up, sensitivity analysis, and other watershed characteristic determinations. The watershed delineation operation uses and extends ArcGIS and Spatial Analyst extension functions (Easton et al. 2010; Tadele & Förch, 2007; Khalid et al. 2016a, b; Wheater 2007; Bekele et al. 2021). The initial stream network and sub-basin outlets were defined based on a drainage area threshold approach. Multiple hydrological response units (HRUs) of the watershed were formed using 20%/10%/20% threshold levels for land use, soil, and slope classes, respectively (Neitsch et al. 2011; Arnold et al. 2012). After creating multiple HRUs, weather write-up and simulation of the model follow (Neitsch et al. 2011; Setegn et al. 2008). The SWAT model simulates the land phase of the hydrologic cycle based on the water balance equation (Arnold et al. 2012; Githui and Mutua 2009; Neitsch et al. 2011): SW_t = SW_0 + Σ_{i=1}^{t} (R_day − Q_surf − E_a − W_seep − Q_gw) (1) where SW_t is the final soil water content (mm), SW_0 is the initial soil water content (mm), t is the time (days), R_day is the amount of precipitation on day i (mm), Q_surf is the amount of surface runoff on day i (mm), E_a is the amount of evapotranspiration on day i (mm), W_seep is the amount of water entering the vadose zone from the soil profile on day i (mm), and Q_gw is the amount of return flow on day i (mm). Runoff in the SWAT model may be estimated by either the Soil Conservation Service curve number method (Mohammed and Thatiparthi 2020) or the Green and Ampt infiltration method (Green & Ampt, 1911). For this study, the curve number method was employed because it is efficient and the most popularly used in the estimation of runoff (Bewket and Woldeamlak 2002; Mengie et al. 2019), being based mainly on the physical characteristics of the study area, including land use, soil, and slope, and on the hydrologic condition (Githui and Mutua 2009; Githui et al. 2010; Tang et al. 2012). The Soil Conservation Service curve number method estimates runoff based on Eq. 2 (Narsimlu et al. 2015; Nicótina et al. 2008; Alemu 2013; Sloan and Sayer 2015): Q_surf = (R_day − 0.2 s)² / (R_day + 0.8 s) (2) where R_day is the amount of precipitation on the day (mm), Q_surf is the amount of surface runoff on the day (mm), and s is the retention parameter on the day (mm).
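A minimal sketch of Eq. (2); relating the retention parameter to the curve number by s = 25400/CN − 254 (in mm) is the standard SCS relation and is an assumption here, since the paper does not restate it.

```python
def scs_runoff(r_day, cn):
    """Daily surface runoff Q_surf [mm] from rainfall r_day [mm] via the
    SCS curve number method (Eq. 2)."""
    s = 25400.0 / cn - 254.0          # retention parameter [mm] (assumed form)
    if r_day <= 0.2 * s:              # rainfall below the initial abstraction
        return 0.0
    return (r_day - 0.2 * s) ** 2 / (r_day + 0.8 * s)

# Example: a 40 mm storm on cultivated clay-loam (CN ~ 85, illustrative)
print(round(scs_runoff(40.0, 85.0), 1))   # runoff depth in mm
```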
Digital elevation model (DEM) The topography is defined by the DEM, which describes the elevation of any point in a given area at a specific spatial resolution and is used for watershed delineation. A 30 m by 30 m resolution DEM was collected from the Ministry of Water, Irrigation and Energy of Ethiopia. Soil data Soil data, including physical and chemical properties, are one of the major inputs for the SWAT model (WaleWorqlul et al. 2018; Barbalho 2014; Pontes et al. 2016). The soil map of the study area was also obtained from the Ministry of Water, Irrigation and Energy of Ethiopia. According to the FAO soil classification, the dominant soils of the Robigumero watershed are grouped as Calcic Vertisols, Eutric Leptosols, Eutric Vertisols, and Eutric Cambisols, as summarized in Table 1. To integrate the soil map with the SWAT model, a user soil database containing the textural and chemical properties of the soils was prepared for each soil layer and added to the SWAT user soil databases. Climatic data Meteorological data are needed by the SWAT model to simulate the hydrological conditions of the watershed. The meteorological data required for this study were collected from the National Meteorological Agency of Ethiopia: precipitation, maximum and minimum temperature, relative humidity, wind speed, and sunshine hours for four stations (Debrebirhan, Chacha, Deneba, and Lemi) for the years 1988-2018. In this study, the weather generating station was the Debrebirhan rain gauge station. The monthly statistical weather parameters needed for WGEN were prepared from daily weather data: rainfall parameters (PCPMM, PCPSTD, PCPSKW, PCP_W1, PCP_W2, PCPD, RAINHHMX), temperature parameters (TMPMX, TMPMN, TMPSTDMX, TMPSTDMN), solar radiation parameters (SOLARAV), wind parameters (WNDAV), and dew point temperature parameters (DEWPT). The rainfall parameters were calculated using pcpSTAT.exe, whereas the dew point temperature parameters were calculated using dewp02.exe (Neitsch et al. 2011), based on the vapor pressure ea: Dew = (234.18 log10(ea) − 184.2) / (8.204 − log10(ea)) (4) A. Filling missing data Data may be missing from a particular gauge site, or representative precipitation may be needed at a point of interest. There are different methods for filling missing data; of these, the station-average and normal-ratio methods were used for rainfall in this study (Rientjes et al., 2011). All of the stations' rainfall records contained missing data, in some cases exceeding 10%. Therefore, before using the data for runoff modeling, it was first essential to apply gap-filling techniques. The normal-ratio estimate is P_X = (N_X / n) Σ_{i=1}^{n} (P_i / N_i), where P_X is the missing value at station X, N_X is the normal annual rainfall at the station with missing data, N_i is the normal annual rainfall at station i, and n is the number of nearby gauges. The station-average method for estimating missing data uses n gauges from a region to estimate the missing point rainfall (Fig. 1). (A sketch of these gap-filling and areal-averaging steps is given after the input-data subsections below.) B. Consistency A consistent record is one whose characteristics have not changed with time. Adjusting for gauge consistency involves estimating the effect of such a change rather than a missing value (Pontes et al. 2016; Richards 1998; Nicótina et al. 2008). In this study, the double mass curve method was used to check the consistency of the four stations in the study area; as shown in Figs. 2 and 3 below, the station rainfall data were consistent. C. Homogeneity test Homogeneity analysis was used to identify changes in the statistical properties of the time series (Neitsch et al. 2011; Arnold et al. 2012).
The cause may be either natural or man-made. Therefore, to select representative meteorological stations for the areal rainfall estimation, checking the homogeneity of the group is essential. The RAINBOW software was used, based on the cumulative deviation from the mean (Wheater 2007; Neitsch et al. 2011). (Fig. 2: consistency test of the Debrebirhan, Chacha, Lemi, and Deneba rainfall gauging stations; Fig. 3: homogeneity test of the rainfall gauging stations of Robigumero.) D. Areal rainfall computation The average rainfall over an area may be considered the main input in the watershed modeling process, especially for models that deal with surface runoff, because rain is the only climatic variable that can explain fast-increasing flow (Anctil et al. 2006; Wheater 2007). According to Andréassian et al. (2001), Nicótina et al. (2008), and Younger (2010), the spatial variability of rainfall over the basin and its distribution pattern, as well as its interaction with the basin, have a considerable effect on the runoff response generated. There are different methods for calculating the mean annual rainfall that represents its distribution over the watershed (Tadele and Förch 2007; Chaubey et al. 2005). However, the Thiessen-polygon method is the technique that best shows convergence with increasing rain gauge density in the basin (Barbalho 2014). The average rainfall over the catchment was calculated as P_av = Σᵢ (Pᵢ Aᵢ) / Σᵢ Aᵢ, where P_av is the mean areal precipitation (mm), Pᵢ is the mean annual precipitation (mm) at the i-th station, and Aᵢ is the coverage area of the i-th station within its Thiessen polygon. Stream flow data Observed daily streamflow data are required for calibration and validation of the streamflow simulated for the watershed (Rientjes et al., 2011; Getachew and Melesse 2012; Githui et al. 2010). Streamflow in the Blue Nile basin, including the Robigumero watershed, is recorded by the Ministry of Water, Irrigation, and Energy (MoWIE). The available observed daily streamflow data recorded at the Robigumero gauging station for the years 1990-2009 were collected from the Ministry of Water, Irrigation, and Electricity. Land use land covers data Land use is also one of the most important factors affecting runoff, evapotranspiration, and surface erosion in a watershed (van Griensven et al. 2006; Pontes et al. 2016). There are many studies on land use and land cover change in the districts and catchments of the Blue Nile basin, and these studies support this study in many aspects, especially regarding the continuous expansion of farmland (WaleWorqlul et al. 2018). Land use and land cover change studies usually need the development of land cover units before the analysis is started (Nicótina et al. 2008; Sloan and Sayer 2015). The land use/land cover data for three different years were collected from the Ministry of Water, Irrigation and Energy.
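The normal-ratio gap filling and the Thiessen-weighted areal average described above can be sketched as follows; the argument names and example values are illustrative, not from the study data.

```python
import numpy as np

def normal_ratio_fill(observed, normals, normal_x):
    """Normal-ratio estimate of a missing rainfall value at station X:
    P_X = (N_X / n) * sum(P_i / N_i) over the n index stations."""
    observed, normals = np.asarray(observed, float), np.asarray(normals, float)
    return normal_x / len(normals) * np.sum(observed / normals)

def thiessen_average(p_stations, areas):
    """Areal rainfall P_av = sum(P_i * A_i) / sum(A_i) with Thiessen
    polygon areas A_i as weights."""
    p, a = np.asarray(p_stations, float), np.asarray(areas, float)
    return np.sum(p * a) / np.sum(a)

# Example with made-up values for the four stations (mm and km^2):
p_x = normal_ratio_fill([12.0, 9.5, 14.2], [1100.0, 990.0, 1250.0], 1050.0)
p_av = thiessen_average([12.0, 9.5, 14.2, 11.0], [210.0, 260.0, 180.0, 264.7])
```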
A model sensitivity analysis can be helpful in understanding which model inputs are the most important. Sensitivity analysis is a method of identifying the most sensitive parameters that significantly affect model calibration and validation (Neitsch et al. 2011; Tang et al., 2012; Abbaspour, 2013); it describes how the model output varies over a range of a given input variable (Khalid et al. 2016a, b; Welde and Gebremariam 2017b; Andualem and Gebremariam 2016). Accordingly, twenty-six flow parameters were checked for sensitivity (Garzanti et al. 2006; Khalid et al. 2016a, b). For this study, the global sensitivity analysis was employed in SWAT-CUP 2012, and the p-values were used to select the sensitive parameters (Abbaspour 2012; Arnold et al. 2012). Model calibration and validation Calibration is the process whereby model parameters are adjusted to make the model output match the observed data (Rientjes et al. 2011). The period from 1990 to 2002 was used as the calibration period, since the data for this period had little missing data and were representative. Validation is the comparison of the model outputs with an independent data set without making any further adjustment; its purpose is to check whether the model can predict flow for another period (Tang et al. 2012). The period from 2003 to 2009 was used as the validation period. Model performance evaluation Model evaluation is an essential measure to verify the robustness of the model. In this study, two model evaluation metrics were used: the Nash-Sutcliffe efficiency (NSE) and the coefficient of determination (R²) (Barbalho 2014). The NSE is computed as NSE = 1 − Σᵢ (Oᵢ − Sᵢ)² / Σᵢ (Oᵢ − Ō)², where Sᵢ and Oᵢ are the simulated and observed values at time step i, Ō is the average observed value, and n is the number of values. The coefficient of determination (R²) describes the proportion of the variance in the measured data explained by the model; it is the magnitude of the linear relationship between the observed and simulated values, R² = [Σᵢ (Oᵢ − Ō)(Sᵢ − S̄)]² / [Σᵢ (Oᵢ − Ō)² Σᵢ (Sᵢ − S̄)²], where Ō and S̄ are the average observed and simulated values. R² ranges from 0 (indicating a poor model) to 1 (indicating a good model), with higher values indicating less error variance; values greater than 0.6 are typically considered acceptable (Barbalho 2014) (Fig. 4). (A short sketch of both metrics is given just before the calibration results below.) Land use and land cover analysis The three land use/land cover maps of 1996, 2006, and 2016 were collected from the Ministry of Water, Irrigation and Energy (Fig. 5). They clearly show an increase of agricultural land and urbanization, and a decrease of forested areas and grassland, over the 21 years. In general, during the 21-year period, agricultural land increased by about 22.4%, whereas the forested area decreased by 5.3%. The individual class areas and change statistics for the three periods are summarized in Table 2. The land use/land cover map of 1996 (Fig. 5) showed that total agricultural land covered about 63.3% of the sub-basin; this increased rapidly to 85.7% of the watershed in 2016 (Tables 3, 4 and Fig. 3). The reason is mainly population growth, which increased the demand for new agricultural land and settlement and, in turn, shrank the shares of the other land use types in the watershed. On the other hand, total forest coverage was about 14.9% in 1996 and was reduced to 9.6% in 2016. This was due to deforestation for agriculture, firewood, and new settlement. Sensitivity analysis Sensitivity analysis of the simulated streamflow for the sub-basin was performed using the daily observed flow data to identify the most sensitive parameters for the subsequent calibration of the simulated streamflow (Neitsch et al. 2011; Lambin et al. 2003). Twenty-six flow parameters were checked for sensitivity, and five of them were found to be highly sensitive (Table 5).
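The two performance metrics defined above can be sketched as follows, assuming aligned monthly series of observed and simulated flow:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((O - S)^2) / sum((O - Obar)^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Coefficient of determination as the squared Pearson correlation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    num = np.sum((obs - obs.mean()) * (sim - sim.mean())) ** 2
    den = np.sum((obs - obs.mean()) ** 2) * np.sum((sim - sim.mean()) ** 2)
    return num / den
```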
Flow calibration After the sensitivity analysis, calibration of the streamflow was performed automatically. The calibration results for the average monthly streamflow showed very good agreement between observed and simulated streamflow (Fig. 6), with a Nash-Sutcliffe efficiency of 0.81 and a coefficient of determination (R²) of 0.83. Model validation After calibration had produced acceptable values of NSE and R², validation was checked using monthly observed flows. The model validation also showed very good agreement between simulated and measured monthly flow (Fig. 7), with an NSE value of 0.86 and an R² of 0.87. The calibrated and validated streamflow results show very good agreement between observed and simulated streamflow. Therefore, the streamflow results (Table 6) indicate that the SWAT model is a very good predictor of streamflow for the Robigumero watershed. Different studies conducted in the upper Blue Nile basin have shown similar results. For example, the SWAT model showed a good match between measured and simulated flow of the Gumara watershed in both the calibration and validation periods, with (NSE = 0.76 and R² = 0.87) and (NSE = 0.68 and R² = 0.83), respectively (Awlachew, 2006). This indicates that SWAT can give sufficiently reasonable results in the upper Blue Nile basin. The scatter plots of observed and simulated values for both calibration and validation (Figs. 8, 9) show a good linear correlation between observed and simulated values. Impact of LULC change on stream flow This study assessed the impact of LULC change on streamflow in the Robigumero watershed. In addition, the seasonal variability of streamflow was evaluated for the wet (July, August, and September) and dry (January, February, and March) months. The simulation results of mean monthly streamflow for the 1996, 2006, and 2016 LULC maps are shown in Fig. 10. The wet and dry mean monthly streamflow for the 1996, 2006, and 2016 LULC maps and their variability during the study period are presented in Tables 7, 8, and 9. The results indicate that mean monthly streamflow increased in the wet months (27.9%) and decreased in the dry months (1.9%) between 1996 and 2006 (Tables 7, 8, 9). This was attributed to the increase in the area under agriculture and the decrease of forest land in the Robigumero watershed: rainfall satisfies the soil moisture deficit more quickly in agricultural land than in forest, thereby generating more runoff in agricultural land. As a result, more runoff, and hence more streamflow, was generated in 2006 than in 1996 (Fig. 11). Moreover, the expansion of agricultural land decreased the rainfall infiltrating into the soil and increased surface runoff. Therefore, streamflow increased in the wet months and decreased in the dry months; the streamflow in the wet months is contributed mainly by surface runoff, while in the dry months it is contributed mainly by groundwater. However, streamflow increased in 2016 in both the wet (16.2%) and dry (0.4%) seasons compared with 2006 due to LULC change (Fig. 8). Besides, there was a slight decrease of the land under grassland, which contributed to changes of groundwater in the watershed, since grassland with less infiltration generates more surface runoff. The results indicate that mean monthly streamflow increased by 6% between 1996 and 2006 and by 2.3% between 2006 and 2016 (Table 8).
The dominant land cover in 2006 was agriculture, and there was strong agricultural expansion at the expense of other land uses from 1996 to 2016. As a result, high runoff was generated during this period, which increased the streamflow of 2006 compared with 1996. In 2016, there was a further expansion of the land under agriculture and a decrease of the grassland, with a slight increase in forest land. Therefore, for the same reason, streamflow increased in 2016 compared with 2006. Generally, during the study period, the Robigumero watershed experienced an increase of streamflow due to extensive LULC change. Discussion The expansion of cultivated land at the expense of forest and grassland in the study watershed between 1996 and 2016 is in line with many studies in the Ethiopian Highlands, which have reported the expansion of cultivated land at the expense of forest, shrub land, and grassland, for example in the Andassa watershed during the 1985-2015 period. There was also an increase of cultivated land and a decrease of shrub land in the Lake sub-basin between 1986 and 2010. The area covered by natural vegetation also decreased in the Kasiry catchment (Upper Blue Nile Basin) during the 1982-2016/17 period. Getachew and Melesse (2012) also found that urban settlement and cultivated land increased significantly in the Angereb watershed between 1985 and 2011, while forest and grassland were reduced in this period. The reduction of grassland and the increase of agricultural land in the Robigumero watershed during the 1996-2016 period are also in agreement with many other previous studies in Ethiopia. For instance, Yeshaneh et al. (2014) found an expansion of agricultural land at the expense of forest and grazing lands in the Koga watershed between 1957 and 2010. The decrease of forest cover by 5.2% in the Kasiry watershed, Fageta Lekoma District, occurred mainly through increasing agricultural land from 2010 to 2015 (Wondie and Mekuria, 2018). Nigussie et al. (2017) also indicated that the reduction of grassland in the Upper Blue Nile Basin between 2006 and 2017 was mainly attributed to farmers' growing interest in allocating more land to agriculture to increase crop productivity. The study of Shawul et al. (2019) in the Upper Awash Basin also showed that the reduction of vegetation cover in the 2000-2014 period could be due to deforestation and overgrazing practices. The change in monthly streamflow due to LULC change was assessed for the years 1996, 2006, and 2016. It was found that the mean annual surface runoff increased from 211.63 mm to 221.81 mm between 1996 and 2006 (Table 9). Thus, higher surface runoff was generated in 2006 than in 1996 due to the increase of the area under agriculture. In 2016, there was a further increase of agricultural and urban land at the expense of other land covers, which resulted in a further increase of surface runoff, from 221.81 mm to 227.17 mm (Table 9). Similar studies have been conducted in the Ethiopian region to evaluate the impact of LULC change on streamflow. The mean wet monthly streamflow increased by 39% and the dry average monthly flow decreased by 46% in 2011 compared with 1985 due to LULC change in the Angereb watershed (Rientjes et al. 2011). Also, the mean monthly streamflow for the wet months increased by 16.26 m³/s,
Therefore, changes in LULC are expected to have a great impact on watershed hydrology: LULC change alters the hydrologic cycle, with direct effects on hydrological processes such as precipitation, the evapotranspiration regime, and surface runoff. Conclusions The performance of the model was found to be very good (NSE = 0.81 and R² = 0.83 for calibration; NSE = 0.86 and R² = 0.87 for validation). This study found that the land area under agriculture increased by 12.7% at the expense of other land cover classes, while the land area under forest decreased by 7.9%, during 1996 to 2006. Between 2006 and 2016, a further increase of the land under agriculture, forest and urban at the expense of other land covers was observed in the Robigumero watershed. The impact of LULC dynamics showed that mean monthly streamflow increased by 27.9% in wet months and decreased by 1.9% in dry months between 1996 and 2006, while in 2016 it increased by 16.2% and 0.4% in the wet and dry seasons, respectively, as compared to 2006 due to LULC change. The annual surface runoff increased from 211.62 mm to 221.81 mm between 1996 and 2006, and from 221.81 mm to 227.17 mm between 2006 and 2016. This is mainly attributed to the conversion of forest cover to agricultural land, which in turn increased surface runoff during the wet and dry seasons. In 2016, a minor decrease of the land under grassland and bare land contributed to the increase of streamflow in the watershed relative to 2006. In general, a reduction of agricultural land and an increment of forest land on degraded land reduce stream flow, which indicates a reduction of soil erosion. Therefore, the results of this study can be used to encourage different users and policymakers in the planning and management of water resources and the adoption of suitable adaptation measures in the Robigumero watershed as well as in other regions of Ethiopia.
Introducing Thermodynamics-Informed Symbolic Regression -- A Tool for Thermodynamic Equations of State Development Thermodynamic equations of state (EOS) are essential for many industries as well as in academia. Even leaving aside the expensive and extensive measurement campaigns required for the data acquisition, the development of EOS is an intensely time-consuming process, which often still relies heavily on expert knowledge and iterative fine-tuning. To improve upon and accelerate the EOS development process, we introduce thermodynamics-informed symbolic regression (TiSR), a symbolic regression (SR) tool aimed at thermodynamic EOS modeling. TiSR is already a capable SR tool, which was used in the research of https://doi.org/10.1007/s10765-023-03197-z. It aims to combine an SR base with the extensions required to work with often strongly scattered experimental data, different residual pre- and post-processing options, and additional features required to consider thermodynamic EOS development. Although TiSR is not ready for end users yet, this paper is intended to report on its current state, showcase the progress, and discuss (distant and not so distant) future directions. TiSR is available at https://github.com/scoop-group/TiSR and can be cited as https://doi.org/10.5281/zenodo.8317547. Introduction Accurate knowledge of the thermodynamic properties of pure fluids and mixtures is crucial for many scientific and technical tasks. In research, thermodynamic property data are required, among other things, for the investigation of physical relationships and for the corresponding development of models. For industry, property data are an essential basis for the design of processes and equipment, whereby complex energy and process engineering plants can be developed and optimized using process simulation. However, the quality of such simulations strongly depends on the accuracy of the available thermodynamic property data. For the calculation of such data, EOS are used. The development of EOS is a sophisticated and time-consuming process. To address both the measurement and the modeling, we recently proposed a new combined measurement and modeling procedure, see Frotscher et al., 2023. There it is shown that using optimal experimental design (OED), SR, and hybrid data acquisition can significantly reduce measurement and development expenses for EOS. For this purpose, TiSR was utilized, the newly developed SR tool aimed at thermodynamic EOS modeling. It is an integral part of our proposed combined measurement and modeling procedure, as well as a post-processing tool. There are many forms of thermodynamic EOS for different purposes and with varying accuracy. However, only very little has been published on improving the functional form of thermodynamic models, e. g., of multi-parameter fundamental EOS, in recent decades. SR is well poised to change this and discover new model forms, especially for EOS much simpler than fundamental ones. As mentioned above, even with the measurement expenses left aside, the modeling process of thermodynamic EOS itself is a time-consuming process that still relies on "expert experience and intuition" and iterative fine-tuning.
To remedy this, we are working on, among other things, the combination of an SR algorithm with the additional features required for often scattered experimental data, the specific extensions for thermodynamic EOS development, as well as the formalization and enforcement of some of the "expert experience and intuition" and especially thermodynamic constraints. TiSR is work-in-progress and does not yet have all mentioned features, nor is it ready for end users. However, we are working with further measurement and modeling experts to incorporate all of the above and more. Nevertheless, TiSR is available at https://github.com/scoop-group/TiSR and can be cited as Martinek, 2023. Design Philosophy The current version of TiSR is not aiming to be a state-of-the-art general purpose SR tool. Its symbolic regression basis is relatively simple but flexible. Its goal is to introduce the physics- and thermodynamics-specific extensions to create a tool for thermodynamic EOS development. At a later stage, these adaptions and extensions may be applied to other SR algorithms. The SR library SymbolicRegression.jl by Cranmer, 2020 was used as a reference and starting point for TiSR. However, numerous adaptions were made at many levels, sacrificing general performance to gain flexibility, extensibility, and simplicity. As a rule of thumb and to give a general idea, when given the choice, the pragmatic approach of using half the code is taken, even if it reduces performance by ≈ 20%. For our work-in-progress tool TiSR, this approach allows us to do fast prototyping and extend the core functionality by several features, some of them thermodynamics-specific. Performance optimization, which is a necessity for genetic algorithms, will come at a later stage, when most or all of the features we, and others utilizing TiSR, envision and require are implemented and tested. Nevertheless, the current performance is sufficient for academic purposes. Overview and Features TiSR is written in the programming language Julia developed by Bezanson et al., 2017. It utilizes a genetic programming (see Koza, 1994) algorithm, i. e., a modified version of NSGA-II (see Deb et al., 2002) with an island model population structure (see Gorges-Schleuter, 1991). 3.1. Overview. TiSR's main loop consists of expression mutation (Section 3.4), individual instantiation (described next), and selection (Section 3.3). Every iteration of this main loop constitutes a generation of the genetic algorithm. Each individual contains one expression and a number of related attributes, which are determined in the "individual instantiation", which is divided into the following steps:
(1) unnecessary parameter removal
• for example: parameter + parameter → parameter, function(parameter) → parameter, . . .
• values of parameters are not adjusted, as parameters are identified after this step
(2) randomly trimming expressions that exceed the size limit
(3) reordering of operands of + and ·
• reorder according to the following rules (< means before): parameter < variable < unary operator < binary operator
(4) grammar checking (described below)
(5) parameter identification (Section 3.5)
(i) calculate residual-related measures (see Table 3.1)
(ii) calculate constraint violations (coming soon)
(6) singularity prevention (coming soon)
(7) determination of attributes unrelated to the residual (see Table 3.1)
At the "grammar checking", "parameter identification", and "singularity prevention" steps, individuals may be deemed invalid, resulting in their termination and removal. The use of grammar may increase the algorithm's performance by filtering out individuals before parameter identification. Currently, two grammar options are available. The user may prohibit certain operator compositions, e. g., cos(cos(x)) or exp(log(x)); these prohibitions are also enforced during the random creation of expressions. The second currently implemented grammar option prohibits parameters in exponents, i. e., (x + 1)^3 would be allowed but 3^(x+1) would not. The latter grammar is not enforced at the "grammar checking" step above, but rather throughout TiSR during individual creation and mutations. We plan to introduce more grammar options in the future.
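As an illustration of the first grammar option, the following minimal sketch (our own simplified reconstruction, not TiSR's actual implementation; the Node type and the set of banned compositions are assumptions) checks an expression tree for prohibited parent/child operator pairs such as cos(cos(x)):

```julia
# Minimal sketch of a nested-operator grammar check on an expression tree.
# The Node type and the banned compositions below are illustrative only.

struct Node
    op::Symbol          # :cos, :exp, :log, :+, :* ... ; :leaf for terminals
    children::Vector{Node}
end

Node(op::Symbol) = Node(op, Node[])

# Compositions the user wants to prohibit, as (outer, inner) pairs.
const BANNED = Set([(:cos, :cos), (:exp, :log), (:log, :exp)])

# Returns false as soon as a banned parent/child pair is found.
function grammar_ok(node::Node)
    for child in node.children
        (node.op, child.op) in BANNED && return false
        grammar_ok(child) || return false
    end
    return true
end

x = Node(:leaf)
expr = Node(:cos, [Node(:cos, [x])])   # cos(cos(x))
println(grammar_ok(expr))              # false -> individual would be removed
```

Because such a check is cheap, running it before the expensive parameter identification removes hopeless candidates early, which is exactly the performance argument made above.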
The attributes of each individual, a brief description of each, and whether they are related to the residual are shown in Table 3.1. The ability for the user to add custom attributes is not yet implemented but will be added in the future. For parallelization, we currently employ multithreading on the "individual instantiation" step, which includes parameter identification. As the parameter identification is by far the most expensive step, parallelizing the complete generational loop would offer only a small additional performance benefit. 3.2. Population Structure. The island population model we use maintains several subpopulations which evolve separately. The islands are arranged in a static ring topology (see Lin, Punch, Goodman, 1994). At a user-defined generational interval, a random emigration island and a random direct neighbor, acting as the immigration island, are chosen. The migrating individual is chosen randomly from the emigration island's population, and it is copied (not moved) to the immigration island's population. 3.3. Selection. The selection of individuals for the next generation is performed by non-dominated sort (Pareto optimal), by tournament selection (see Brindle, 1980), or by both. If both are used, the Pareto optimal selection is performed first, before tournament selection is performed on the remaining individuals. The ratio between the two selection modes can be set in the range [0, 1]. The selection objectives for both selection modes can be set individually: any of the attributes of the individuals, and any number of them, can be chosen (except for valid; see Table 3.1 for the currently implemented attributes).
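A minimal sketch of this mixed selection scheme (our own illustration; the two-objective setup, the crowding-free Pareto filter, and the ratio handling are simplifying assumptions, not TiSR's code) could look as follows:

```julia
# Minimal sketch of mixed Pareto/tournament selection over two objectives.
# Each individual is summarized by its objective vector (both minimized).

pareto_dominates(a, b) = all(a .<= b) && any(a .< b)

# Keep only non-dominated individuals (no crowding distance, for brevity).
pareto_front(pop) = [p for p in pop if !any(q -> pareto_dominates(q, p), pop)]

# Binary tournament on the first objective.
function tournament(pop, n)
    winners = similar(pop, 0)
    for _ in 1:n
        a, b = rand(pop), rand(pop)
        push!(winners, a[1] <= b[1] ? a : b)
    end
    winners
end

# ratio = share of survivors taken from the Pareto front first.
function select(pop, n_survivors, ratio)
    front = pareto_front(pop)
    n_pareto = min(length(front), round(Int, ratio * n_survivors))
    chosen = front[1:n_pareto]
    rest = [p for p in pop if !(p in chosen)]
    vcat(chosen, tournament(rest, n_survivors - n_pareto))
end

pop = [rand(2) for _ in 1:20]   # e.g., (ms_processed_e, complexity) pairs
println(select(pop, 10, 0.5))
```

Taking the Pareto-optimal individuals first preserves the best trade-offs between objectives, while the tournament stage keeps some selection pressure and diversity among the dominated remainder.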
3.4. Creating and Mutating Expressions. In the first generation, instead of mutating expressions, new ones are created at random. The user may provide starting expressions, which allows one either to incorporate domain knowledge or to resume another run. The currently implemented types of mutations are listed in Table 3.2, and include, among others:
• a mutation that selects a random part of the expression and replaces it with another part of the same expression (creating a new directed acyclic graph connection)
• subtree_mutation!: choose a random operator and exchange it with a random expression snippet
• drastic_simplify!: remove all parameters in +, −, and · operations which are smaller than a set tolerance, and simplify accordingly (x + 0.00001 → x, x + y · 0.00001 → x)
• simplify_w_symbolic_utils!: simplify the expression using the SymbolicUtils.jl package (Gowda, Ma, Protter, et al., 2023)
• crossover_mutation!: randomly combine two individuals
In some cases, simplifications may lead to less desirable expressions in terms of the selection objectives. Therefore, apart from the very basic simplifications to remove unnecessary parameters mentioned in Section 3.1, we choose to perform the more complex simplifications using the SymbolicUtils.jl package (see Gowda, Ma, Protter, et al., 2023) as mutations. The drastic_simplify! mutation removes parameters which are smaller than a set value in case they appear in an addition or subtraction; in the context of a multiplication, the complete term affected by the small parameter is removed. This simplification helps guide the expression search, but it is especially powerful in combination with LASSO (least absolute shrinkage and selection operator) regression. In LASSO regression, an ℓ1-norm of the parameter vector is added to the squared residual norm as a regularization term, i. e., the fitting objective becomes ‖y − f(X; θ)‖² + λ‖θ‖₁, which incentivizes sparse solutions in which some parameter values become exactly zero. Currently, TiSR does not support LASSO regularization (see Tibshirani, 1996) in the parameter estimation, but it will soon. 3.5. Evaluation and Parameter Identification. For the evaluation of candidate expressions, the power, logarithm, and division operators are protected to allow their direct use, rather than necessitating implicit domain restrictions, e. g., abs(x)^y. This protection is implemented by checking the operands and preventing the evaluation if they are outside the valid domains of the respective operators; such individuals are deemed invalid and removed from the population. One noticeable benefit of this approach is that it makes simplification of expressions easier in many cases. For example, x · abs(x)^2 cannot be simplified without assuming x > 0 or x < 0. The prediction of an expression may be processed before the residual is calculated. This can be used to search for parts of an expression while presupposing the remainder. Usually, SR searches for an expression f(X) which satisfies y = f(X) for the given data X and y; in this case, the residual vector is calculated as y − f(X). However, in some cases, parts of the expression may be known. If, for example, we search for an expression f(X) and presuppose that y = exp(f(X)) holds, we could rearrange the expression as log(y) = f(X) and search for the f part of the expression directly. This, however, changes the residual to log(y) − f(X) and thus the minimization objective, which may lead to inferior results. In other cases, the presupposed expression parts may not be possible to rearrange: the expression y = f(X)^2 + exp(f(X)) cannot be solved for f(X). In TiSR, it is possible to define a pre_residual_processing! function, which is applied to the output of the expression f(X) before the residual is calculated. This allows implementing the transformations of either the first or the second example above. For the second example, for each expression f(X) proposed by TiSR, the output of the expression is calculated first; the pre_residual_processing! function then computes y_pred = f(X)^2 + exp(f(X)) as the prediction of the proposed expression, and the residual is calculated as y − y_pred. All the residual-related measures listed in Table 3.1 are, of course, affected by this function.
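A minimal sketch of how such a pre-residual hook can work for the second example (our own illustration; the hook name and signature are assumptions and may differ from TiSR's actual interface, which uses a mutating pre_residual_processing! function):

```julia
# Minimal sketch of a pre-residual processing hook for y = f(X)^2 + exp(f(X)).
# The candidate sub-expression f and the data are invented stand-ins.

f(x) = 0.5 * x + 1.0   # candidate sub-expression proposed by the SR search

# Hook: maps the raw sub-expression output to the actual prediction.
pre_residual_processing(f_out) = f_out^2 + exp(f_out)

function residuals(xs, ys)
    f_out = f.(xs)                            # evaluate candidate expression
    y_pred = pre_residual_processing.(f_out)  # presupposed outer structure
    ys .- y_pred                              # residual used for fitting
end

xs = [0.0, 1.0, 2.0]
ys = [3.7, 6.7, 11.4]   # invented data
println(residuals(xs, ys))
```

The search then only has to discover the inner structure f(X), while the known (but non-invertible) outer structure is applied consistently to every candidate before its residual-based measures are computed.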
For the parameter identification step, TiSR employs a Levenberg-Marquardt algorithm (see Levenberg, 1944; Marquardt, 1963); we use a modified version of the implementation of White, Mogensen, Johnson, et al., 2022. The objective of the optimizer is to minimize the ms_processed_e measure, which stands for mean squared processed error and has several user-exposed processing options. By default, ms_processed_e is equal to the mse. One of the customization options for ms_processed_e is to provide weights for the residuals in order to, e. g., minimize the relative rather than the absolute deviation, incorporate uncertainties, or both. To process the residuals in other, possibly more complex ways, a custom function may be provided by the user. To mitigate overfitting and improve generalization performance, early stopping (see Morgan, Bourlard, 1989) is employed. In early stopping, the parameter identification is conducted on a fraction of the data, while the residual norm is also calculated for the remainder. One way to perform early stopping, as described in Prechelt, 1998, is to terminate the parameter estimation as soon as the residual norm for the remaining data increases monotonically for a number of iterations. Apart from reducing overfitting, early stopping provides two additional performance advantages. First, the parameter identification, which is currently by far the most expensive part of the algorithm, may be stopped after fewer iterations for candidate expressions which do not appear to capture the behavior underlying the data; this decreases the performance penalty of allowing a larger maximum number of iterations during fitting while retaining its benefits. Second, the Jacobian for the Levenberg-Marquardt algorithm is only calculated for a fraction of the data, which improves performance for large data sets.
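A minimal, self-contained sketch of this early-stopping rule (our own simplification; the toy linear model, the plain gradient step standing in for a Levenberg-Marquardt step, and the patience value are all assumptions):

```julia
# Minimal sketch of early stopping on a train/validation split.
# Toy model y = a*x; one gradient step stands in for one LM iteration.

predict(a, xs) = a .* xs
res_norm(a, xs, ys) = sum((ys .- predict(a, xs)) .^ 2)

function fit_early_stop(xs_t, ys_t, xs_v, ys_v; maxiter = 200, patience = 5)
    a, best, bad = 0.0, Inf, 0
    for i in 1:maxiter
        grad = -2 * sum((ys_t .- predict(a, xs_t)) .* xs_t)
        a -= 0.01 * grad                   # one parameter-update step
        v = res_norm(a, xs_v, ys_v)        # residual norm on held-out data
        if v < best
            best, bad = v, 0
        elseif (bad += 1) >= patience      # monotone increase for `patience` iters
            return a, i                    # stop early
        end
    end
    return a, maxiter
end

xs = collect(0.0:0.1:2.0)
ys = 1.7 .* xs .+ 0.05 .* randn(length(xs))          # invented noisy data
a, iters = fit_early_stop(xs[1:2:end], ys[1:2:end],  # train split
                          xs[2:2:end], ys[2:2:end])  # validation split
println("a ≈ ", round(a, digits = 2), " after ", iters, " iterations")
```

Poor candidate expressions typically trip the patience counter within a few iterations, which is what makes a large maximum iteration budget affordable in practice.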
Conclusion and Outlook We have introduced and provided a first overview of TiSR. Although it is work-in-progress, TiSR is already a capable SR tool, e. g., for EOS modeling, with a genetic programming basis and capabilities similar to comparable tools. Notable differences to other SR algorithms in this combination are, among other things, the following:
• protected evaluation of power and similar functions, avoiding the need for constructions such as abs(x)^y
▷ improved simplification possibilities and more sensible expressions
• many residual pre- and post-processing options
▷ weighting
▷ custom post-processing
▷ custom processing of the prediction before the residual calculation to, e. g., search for sub-expressions
• removal of expression parts which do not contribute much
▷ targeted simplification
• early stopping
▷ mitigates overfitting and improves generalization performance
▷ improves performance by stopping earlier
For the future, the most notable feature of TiSR is its flexibility and extensibility, which allows us to implement major changes comparably easily and quickly. Many of the crucial features for EOS development are planned for the near future or are currently being implemented. Our plans include:
• constrained fitting and constraint violation as an additional selection objective
• singularity prevention
• support for factor variables as introduced by Kommenda et al., 2015, which allow the incorporation of nominal variables
• support for the development of EOS formulated in terms of the Helmholtz energy
• LASSO regularization for better simplification
• more grammar options
• experiments with other SR algorithms at the basis
Cottus Gobio Linnaeus, 1758, Ecological Status and Management Elements in Maramureş Mountains Nature Park (Romania) Abstract Cottus gobio is considered a fish species of conservation concern within the Vişeu Watershed. The state of the habitats usually populated by Cottus gobio within the Maramureş Mountains Nature Park (Vişeu and Bistriţa Aurie watersheds) varies among reduced (34.42%), average (45.91%), and good (19.67%). An excellent conservation status is currently missing for populations of this fish in the Vișeu Basin. The human impact categories inventoried as diminishing Cottus gobio habitats and populations in the researched area, in comparison with its natural potential, are: minor riverbed morphodynamic changes, disruption of the natural liquid and solid flow, destruction of riparian tree and shrub vegetation, habitat fragmentation and isolation of fish populations, organic/mining pollution, fish being washed away during floods, and poaching. INTRODUCTION Although highland water resources are naturally of very good quality where human impact is not significant (Romanescu, 2016) and does not induce unstable ecological conditions (Schneider-Binder, 2017), the impact of human activities needs to be evaluated and monitored with respect to the locally protected species and habitats. Fish are one of the best-known taxonomic groups affected by different types of human impact (Năvodaru and Năstase, 2006; Bănăduc et al., 2011; Florea, 2017; Khoshnood, 2017). STUDY AREA AND METHODS The lotic ecosystems of the Maramureș Mountains Nature Park area belong mainly to the Vișeu River watershed (Fig. 1), and to a very limited extent to the Bistrița Aurie River watershed, in northern Romania (Fig. 2). The Vișeu River watershed is neighboured by the Maramureș Mountains in the northeast, the Maramureș Hills in the west and southwest, and the Rodna Mountains in the southeast. The lowest point of this watershed lies at 303 m above sea level, where the Vişeu River joins the Tisa River, while the highest point is at 2,303 m in the Rodna Mountains (Pietrosul Rodnei Peak). Due to the geographical variety within this watershed, the researched area has varied landscapes and is characterised by a relatively high diversity of biotopes, biocoenoses and, among others, fish species. With a length of 80 km, a watershed of 1,606 km² and a maximum multiannual average flow of 30.7 m³/s in its lowest sector, the Vişeu River is a second-degree tributary of the Danube River, flowing into the much bigger Tisa River. It springs from the Prislop Pass (1,416 m) and joins the Tisa River near the Valea Vișeului Village. In its highland section, from the springs to the Moisei locality, the Vișeu River has an appreciable average slope (20-50 m/km) and is known as the Borșa or Vișeuț. At the Moisei locality the Vișeu River enters the Maramureș Depression, where its valley is wider, with the exception of a few narrow gorge-like sectors such as the Rădeasa Oblaz and Vișeu gorges. The hydrographical characteristics of the Vișeu River belong to the Eastern-Carpathian-Moldavian type in its highland part and to the Eastern-Carpathian-Transylvanian type in its lower sector. The river's discharge is seasonally distributed, with 39.4% of the annual discharge in spring, 27% in summer, 18.6% in autumn, and a minimum of 15% in winter.
The lotic habitats of the Vişeu Watershed and their connected aquatic and semi-aquatic species of national and international conservation interest are diverse and vital from a conservation standpoint. The fish species found in this research are the same as those noted and published by various ichthyologists over the last century of specific ichthyological studies (Bănărescu, 1964; Staicu et al., 1998; Telcean and Bănărescu, 2002; Curtean-Bănăduc et al., 2008). Half of the local fish species are of important conservation significance. Cottus gobio Linnaeus, 1758, is one of the most valuable fish species, and its populations within the researched area have diminished in the last decades. The dispersion and ecological state of this threatened fish species are not exactly known, and up-to-date data are necessary for the proper management of Cottus gobio. The study of the Cottus gobio populations was performed from 2007 to 2017 and was based on 370 sampling sectors (Fig. 3). The species was found in 61 stations (Tab. 1), which were included in the mapping of populations, the assessment of the conservation status, and the identification of the anthropogenic elements which affect this species. To assess the conservation status and the ecological state of the Cottus gobio populations within the Maramureş Mountains Nature Park, quantitative samples were taken from sampling stations spaced approximately three kilometres apart on all lotic systems with suitable habitats for this fish. The position of these stations allows the assessment of the negative effects of human activities on the studied populations, including: minor riverbed morphodynamic changes, disruption of the natural liquid and solid flow, destruction of the riparian tree and shrub vegetation, habitat fragmentation and isolation of fish populations, organic/mining pollution, fish being washed away during floods, and poaching. Quantitative sampling of the fish was realised by electronarcosis, per unit of time and effort per sampling section (two hours on the Vişeu River; one hour on the Vaser, Ruscova and Frumuşeaua rivers; 30 minutes on the other rivers of the reference zone: Repedea, Novăţ, Şesuri, Fântânele, Bistra, Socolău) on five longitudinal sections of 100 m length. After species identification and the counting of individuals, the sampled fish were released into their natural habitat. The number of fish sampled per unit of time and effort can be converted into categories such as: (C) common fish species, (R) rare, or (V) very rare, according to the Natura 2000 standard data form filling guidelines: "In mammals, amphibians, reptiles and fishes, no numeric information can be indicative and then the size/density of the population is evaluated as (C) - common species, (R) - rare species, or (V) - very rare species". Different criteria were used to evaluate the status of the studied populations: size of populations, balanced distribution of individuals by age classes, size of the distribution area, and the percentage of Cottus gobio individuals in the local fish associations. According to the Natura 2000 guidelines, the standard data form is filled in based on the criterion "the conservation degree of specific habitats", which contains the sub-criteria: i) the degree of conservation of the habitat features which are important for the species; ii) the possibilities for recovery. Sub-criterion i) needs a comprehensive assessment of the characteristics of the habitat with regard to the needs of the species of interest. "The best expertise" is used to rank this sub-criterion in the following way: I. elements in excellent condition; II. well preserved elements; III. elements in average or partially degraded condition. In cases in which subclass "I: elements in excellent condition" or "II: well preserved elements" is granted, criterion B(b) should be classified entirely as "A: excellent conservation" or "B: good conservation", respectively, regardless of the other sub-criterion's classification. Sub-criterion ii) is taken into account only if the elements are partially degraded; in that case, an evaluation of the viability of the analysed population is necessary, with the resulting ranking system: I. easy recovery; II. restoration possible with average effort; III. restoration difficult or impossible. The combination used for classification is based on the two sub-criteria: A - excellent conservation = elements in excellent condition, regardless of the classification of the recovery possibility; B - very good conservation = well preserved elements, regardless of the classification of the recovery possibility; B - good conservation = elements in average or partially degraded condition and easy to restore; C - average or reduced conservation = all other combinations.
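The classification logic above can be made explicit with a small sketch (our own encoding of the stated rules; the symbolic rank names are our shorthand, not part of the Natura 2000 form):

```julia
# Encoding of the Natura 2000 conservation-class rules quoted above.
# Rank symbols are our own shorthand:
#   habitat:  :excellent, :well_preserved, :degraded
#   recovery: :easy, :average, :difficult (only decisive if habitat is :degraded)
function conservation_class(habitat::Symbol, recovery::Symbol)
    habitat == :excellent      && return "A: excellent conservation"
    habitat == :well_preserved && return "B: good conservation"
    recovery == :easy          && return "B: good conservation"
    return "C: average or reduced conservation"
end

println(conservation_class(:degraded, :easy))     # B: good conservation
println(conservation_class(:degraded, :average))  # C: average or reduced conservation
```

The asymmetry in the rules is visible in the code: the recovery sub-criterion only matters once the habitat elements are already partially degraded.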
"The best expertise" is used to rank this criterion in the following way: I. elements in excellent condition, II. well preserved elements, III. elements in average or partially degraded conditions. In the cases in which the subclass I is granted, "I: elements in excellent condition" or "II: well preserved elements," the criteria B (b) should be classified entirely as "A: excellent conservation" or "B: good conservation" regardless of the other sub-criterion classification. In the case of this sub-criterion ii) which is taken into account only if the items are partially degraded, an evaluation of the viability of the analysed population is necessary. The obtained ranking system is: I. easy recovery; II. restoration possible with average effort; III. restoration difficult or impossible. The combination used for classification is based on two subcriteria: A -excellent conservation = elements in excellent condition, regardless of classification of recovery possibility; B -very good conservation = well preserved elements, regardless of classification of recovery possibility; B -good conservation = average or partially degraded condition and easy to restore; C -average or reduced conservation = all other combinations. 18 In every sampled sector, the following were assessed: condition, pressures/threats of habitats and populations of Cottus gobio. The sampling sections to evaluate fish population and the conservation status of Cottus gobio in the study area appear in sectors where the populations are permanent, with a good conservation status and well preserved specific habitats; also as in lotic sectors situated at the edge of the distribution area for the studied species, which contain sectors under human activities impact that can put the researched populations statethe Representativity Criteria. This species is often confused in the area with Cottus poecilopus, but has distinctive identification elements. This species has an elongated and thick body. The profile is slightly convex between the tip of the snout and the eyes, the back is almost horizontal, and the head is just a little lower than the body. The head is big, dorso-ventral flattened and thicker than the body. The eyes are situated in the anterior part of the head, semi-spherical, looking upward. The superior part of the eye is often covered by a pigmented eyelid, easy to be confused with the skin. They also have two pairs of small, distanced and simple nostrils; the anterior pair is situated far in the front of the eyes. The inter-orbitary space is slightly holed. The snout is rounded. The mouth is big and terminal, its ends reach an under eye position or near this area. The teeth are small and the caudal peduncle is laterally compressed. The dorsal fins are close, the first is low with a convex edge, the second with a plain edge. The anal fin is inserted a little after the second dorsal fin insertion. The pectoral fins are big and broad, and their tips usually reach or overdraw the anus. The caudal fin has a convex edge, sometimes almost plain. The lateral line is complete, on the middle of the caudal peduncle, when it reaches the caudal fin base. The dorsal part of the body is brown with marbled-like spots. The ventral part of the body is light-yellowish or white. In the posterior part of the body there are 3-4 dark transversal lines. The dorsal, caudal and pectoral fins have brown spots distributed in longitudinal lines. The anal and ventral fins are not spotted. It can reach 13 cm length (Bănărescu and Bănăduc, 2007). 
This species lives in cold, mountainous lotic freshwater and is rare in lakes. It is usually demersal, staying under rocks in shallow sectors with relatively slow-flowing water. Sexual maturity is reached at two years of age, and reproduction occurs in March-April. Its food consists of insect larvae, amphipods, roe, and alevins (Bănărescu and Bănăduc, 2007). Results The stream and river sectors where Cottus gobio (Fig. 4) was sampled during the research are presented in table 1 (Fig. 5), together with the catch index values (number of individuals per unit of time and effort). DISCUSSIONS Based on this study's outputs, correlated with the ecological and biological needs of Cottus gobio, the following risk elements were identified: minor riverbed morphodynamic changes, disruption of the natural liquid and solid flow, destruction of riparian tree and shrub vegetation, habitat fragmentation and isolation of fish populations, organic/mining pollution, fish being washed away during floods, and poaching. Minor riverbed morphodynamic modifications. The typical habitat needs of Cottus gobio, in conformity with its life cycle, involve a natural variation of riverbed morphodynamics. Dikes, sills, dams, roads in riverbeds, modified riverbeds, and mineral exploitation in riverbeds (Fig. 6) modify the dynamics of the liquid and solid flow and induce changes in the natural morphodynamics of the major and minor riverbeds. These modifications negatively influence the habitats needed for the life cycle stages of Cottus gobio, which can lead to a decrease in the abundance of this fish species. New obstacles on the lotic systems and water resource development activities in the researched area should not be accepted by the Maramureş Mountains Nature Park Administration without relevant ichthyological research on this fish species. Figure 6. Overexploitation of minerals in the banks and terraces of the lower Vișeu River. Solid and liquid natural flow modifications. Changes in the natural flow and riverine morphology prevent the genesis of the particular microhabitats, habitats, and environmental elements essential for the permanent presence of Cottus gobio, and such riverbed morphodynamic modifications can influence the size of the Cottus gobio populations. A common example of human activities interrupting the natural balance of the solid and liquid flow is the increased water turbidity caused by negligent forestry activities in and near riverine areas. The solid and liquid natural flow can be kept close to the local natural condition if forestry practices and riverbed gravel exploitation do not considerably disrupt the self-sustaining functions of the basin. This can be achieved by harmonizing such human activities in the basin with the periods when the natural conditions are relatively similar to those that would be created (e.g., very high water turbidity induced by precipitation). Proposed in-channel artificial structures and changes, such as dams, thresholds (Fig. 7), embankments, crossings, water extractions, bank modifications, roads in the waterbed, and thalweg changes caused by the exploitation of construction materials from the riverbed, should not be admitted by the Maramureş Mountains Nature Park Administration without the ichthyologists' agreement, based on the study of the identified local stress factors and the biological and ecological needs of Cottus gobio.
In this specific study case, no crossing should be higher than 10-15 cm in shallow water sectors and during the dry season. We also suggest better monitoring of forestry activities, including a ban on dragging and storing lumber through/in the riverbeds and riverine areas, as well as the monitoring of the development works for lumber storage and exploitation terraces (Fig. 8), with an imperative requirement for rapid reforestation. In this context, the rotation of forest exploitations among the sub-basins of the Vișeu Basin is needed. Habitat fragmentation/isolation of populations regularly leads to genetic isolation, reduced gene diversity, inbreeding, and, in some cases, extinction. Unblocked movement upstream and downstream in the lotic sectors, as well as proper connectivity of the distinct sub-drainage basins of the Vișeu Watershed, is an essential element for the optimal management of Cottus gobio. In the context of future economic investments in the studied basin, the authors advise caution, as some investments can reduce or block watercourse connectivity, e.g., through various crosswise obstacles in the riverbed, or by decreasing the water flow or draining some river sectors. Pollution caused by mining activities. The long-term pollution caused by heavy-metal mining in the Țâșla River watershed negatively influences not only the Țâșla River but also the downstream habitats and species of the upper and middle Vișeu River. The effect of the rain and snow water washing the dumped mine galleries and greened refuse heaps is significant on the Țâșla River and relevant on the upstream Vișeu River. These effects can be significantly decreased by insulating the old mine galleries and the refuse heaps of the Țâșla Basin. Mixed human impacts disturb many lotic sectors in the studied area (Figs. 10 and 11) and, as a consequence, the Cottus gobio populations in comparison with their natural potential. A minimal management plan for the Maramureş Mountains Nature Park area should include: the establishment of buffer zones along the lotic systems; judicious management of water use; optimal management of sewage, wastewater, and surface water pollution; adaptation of any potential hydro-energetic use of the lotic systems to the different local situations and conditions; the imposition of integrated water management at the Vișeu Watershed level; the constitution and development of ecological networks; the rehabilitation of lotic system connectivity; properly adapted scientific quality evaluations and monitoring; and basin-integrated, management-adjusted research. Organic pollution is a continuous negative issue. Sewage and wastewater problems are connected with farm activities, mainly on the Vișeu River but also on some of its tributaries where such human activities are present (Curtean-Bănăduc, 2008), and constitute a durable stress source for fish populations. Appropriate sewage systems must be created and developed in the Vișeu Watershed, and the wastewaters of all villages and cities should be correctly treated. Displacement of fish washed away during floods in the anthropised riverbed and bank sectors. In lotic sectors uniformized by human activities, fish are more often washed away during floods. In these sectors, shelters with a maximum height of 10-15 cm should be created. Poaching. During the field study, illegal fishing with electricity and diverse substances was noted.
From interviews with numerous inhabitants of the localities of the Vişeu River watershed, poaching appears to be a permanent habit for some of the local people. Failure to control this situation can lead to a decrease in the number of Cottus gobio individuals. Cottus gobio is a fish species of significant conservation concern within the Vişeu Watershed. The state of its habitats within the Maramureş Mountains Nature Park varies among reduced (34.42%), average (45.91%), and good (19.67%). CONCLUSIONS Cottus gobio in the studied Vişeu Watershed is characterised by steady populations, but it has not reached its natural maximum potential due to the negative effects of human activities, especially in the upper and lower Vişeu River, the lower Repedea, the lower Ruscova, the lower Bistra, and the lower Frumuşeaua. The preferred habitat of Cottus gobio is sufficiently extensive within the Vişeu Watershed to conserve the present average state of the studied Cottus gobio populations. The studied fish species can at present be considered relatively rare in the studied area, but relatively good options for rehabilitation exist. Among the studied streams and rivers, the conditions of the Bistra River are, as a whole, the most degraded from this fish species' perspective and do not meet the habitat quality requirements of Cottus gobio. ACKNOWLEDGEMENTS These data were obtained in the project "Inventarierea, cartarea și evaluarea stării de conservare a speciilor de pești din Parcul Natural Munții Maramureșului (ROSCI 0124 Munții Maramureșului)" / "Inventory, mapping and assessment of the conservation status of fish species of Munții Maramureșului Nature Park (ROSCI 0124 Maramureșului Mountains)". Special thanks for the continuous support of the Munții Maramureșului Natural Park Administration and Scientific Council members, especially to: Bogdan C., Bucur C., Szabo B., Brener A. and Mărginean M.
CF4 Plasma‐Generated LiF‐Li2C2 Artificial Layers for Dendrite‐Free Lithium‐Metal Anodes Abstract Lithium metal anodes have long been considered as "holy grail" in the field of energy storage batteries, but dendrite growth and large volume changes hinder their practical applications. Herein, a facile and eco‐friendly CF4 plasma treatment is employed for the surface modification of Li anodes, and an artificial layer consisting of LiF and Li2C2 is fabricated for the first time. Experimental results and theoretical calculations reveal that the high adsorption energy of LiF and low Li+ diffusion barriers in Li2C2 induce uniform nucleation and planar growth of Li, guaranteeing a stable and dendrite‐free Li structure during the repeated plating/stripping process of cycling. Symmetric cells using CF4 plasma‐treated Li operate stably for more than 6500 h (at 2 mA cm−2 and 1 mAh cm−2) or 950 h (at 1 mA cm−2 and 10 mAh cm−2). When paired with a LiFePO4 cathode, full batteries deliver a high reversible capacity of 136 mAh g−1 (at 1 C) with considerable cycling stability (97.2% capacity retention over 200 cycles) and rate performance (116 mAh g−1 up to 5 C). This powerful application of plasma technology toward novel LiF‐Li2C2 artificial layers provides new routes for constructing environment‐friendly and high‐performance energy storage devices. Fabrication of LiF/Li2C2 Decorated Li Foil (CFP-Li-5/10/20): Commercially purchased Li foil (CEL corp., d = 15.6 mm, thickness = 60 µm) was placed into a stainless steel plasma chamber. The chamber was evacuated until the pressure reached 80 Pa, at which point CF4 gas was allowed to flow into the chamber such that the pressure was maintained at 50 Pa for 5 min to remove the excess air in the chamber. A CF4 plasma was then generated using a 13.56 MHz radio frequency emission source with a power of 200 W, and the Li foil was treated for 5, 10, or 20 min. The pressure in the chamber was maintained at 50 Pa during the reaction. The target product was obtained after cooling to room temperature (25 °C) and named CFP-Li-x (where x corresponds to the treatment time). Material Characterization: Grazing incidence XRD (GIXRD) was performed using a Rigaku Smart Lab diffractometer with Cu Kα radiation (λ = 0.1546 nm), and scanning electron microscopy (SEM) was performed using a JSM-7600F scanning electron microscope. X-ray photoelectron spectroscopy (XPS) was performed using a Thermo Scientific K-Alpha+ spectrometer with Al Kα radiation. Atomic force microscopy (AFM) was performed using a Bruker Dimension Icon instrument. In situ optical microscopy videos were recorded with a LIB-MS-II system and an LW750LJT metalloscope. Plasma emission spectra were obtained using an Ocean Optics HR+C1702 spectrometer. CFP-Li-10 (d = 15.6 mm) was used as the electrode, and an untreated Li foil electrode was used for comparison. Celgard 2400 (d = 19 mm) was used as the separator, with an electrolyte of 40 μL 1 M lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) in a mixture of 1,3-dioxolane (DOL) and 1,2-dimethoxyethane (DME) (1:1 volume ratio) with the addition of 1 wt.% lithium nitrate (LiNO3). For the full cell, lithium iron phosphate (LFP), Super P carbon, and polyvinylidene fluoride (PVDF) (8:1:1 weight ratio) were prepared in N-methyl-2-pyrrolidone (NMP), coated on carbon-coated aluminum foil, dried at 120 °C in a vacuum oven for 12 h, and cut into circular discs with a diameter of 12 mm. The active material mass loading of the resulting electrodes was 2.5 or 4 mg cm⁻².
CFP-Li-10||LFP and Li||LFP full cells were cycled between 2.5 V and 3.8 V at 1 C. The symmetric cells and full cells were tested on a battery testing system (LAND Electronic Co., China). Electrochemical impedance spectroscopy (EIS) was performed on symmetric cells using a CHI604E (Chenhua, Shanghai) electrochemistry workstation with frequencies ranging from 0.01 Hz to 10⁶ Hz. Theoretical Calculations: Theoretical calculations were performed using the Vienna ab initio simulation package (VASP) code. The electron-core interactions were treated with the projector augmented wave (PAW) pseudopotential, and the Perdew−Burke−Ernzerhof (PBE) functional of the generalized gradient approximation (GGA) scheme was used for the exchange-correlation potential. Considering the periodic structure, all slabs contain a 20 Å vacuum layer to avoid layer-to-layer interaction. The cutoff energy for the plane-wave basis set was set to 520 eV. The adsorption energy, E, was calculated according to the following equation:

E = E_sys − E_ion − E_slab,

where E_sys, E_ion, and E_slab represent the total energy after ion adsorption, the energy of the Li ion, and the energy of the slab, respectively. For the adsorption energy on the Li (1 1 0) slab, the Li-ion energy is taken equal to the average atom energy in a bulk Li volume containing 16 Li atoms. Besides, because of the poor description of the dispersion force in the PBE functional, DFT-D3 was used to correct the dispersion force when calculating adsorption energies. Gaussian smearing was used for all calculations in this study. The smearing value, self-consistent-field energy convergence, and ionic force convergence tolerance were set to 0.1 eV, 1×10⁻⁶ eV, and 0.02 eV/Å, respectively. A 2 × 2 × 1 k-mesh was used for Brillouin zone sampling. The climbing-image nudged elastic band (CI-NEB) method with the LBFGS algorithm was used to calculate the diffusion energy barriers, with the ionic force convergence tolerance set to 0.02 eV/Å.
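As a small illustration of the adsorption-energy formula above (a minimal sketch; all energy values are invented placeholders, not data from this study):

```julia
# Minimal sketch of the adsorption-energy formula E = E_sys - E_ion - E_slab.
# All energy values below are invented placeholders (eV), not study data.

adsorption_energy(e_sys, e_ion, e_slab) = e_sys - e_ion - e_slab

e_slab = -120.3   # relaxed slab without the adatom
e_ion  = -1.9     # reference Li energy (e.g., bulk average per atom)
e_sys  = -124.1   # slab with the adsorbed Li

E = adsorption_energy(e_sys, e_ion, e_slab)
println("E_ads = ", round(E, digits = 2), " eV")  # negative => favorable adsorption
```

With this sign convention, a more negative value indicates stronger binding of Li to the surface, which is the sense in which a "high adsorption energy" of LiF promotes uniform Li nucleation in the discussion above.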
AFM Topography and Young's Modulus Mapping. The surface morphology was characterized by AFM. Localized craters and raised structures were seen on the bare Li surface, while small, evenly distributed bulges were seen on the CFP-Li-10 surface. The surface roughness of the two samples was compared through their average roughness values. Electrolyte Contact Angle. The contact angle of the electrolyte on CFP-Li-10 is 12.1°, while that on bare Li is 17.3°, indicating the better wettability of CFP-Li-10. The improved wettability of CFP-Li-10 accelerated the penetration of the electrolyte, thus facilitating rapid lithium-ion transport. Figure S4. Electrolyte contact angle of a) bare Li and b) CFP-Li-10. SEM-EDS Line Scan Analysis. SEM-EDS line scan analysis showed an 18-μm-thick modified layer with a high carbon content in the outer layer and a high fluorine content in the inner layer, corresponding to the presence of Li2C2 and LiF, respectively. The high oxygen content in the outermost and innermost layers is associated with partial oxidation during the manufacturing of the lithium foils and the testing process. Electrochemical Performance. EIS spectra of parallel cells of bare Li and CFP-Li-10 are shown in Figure S7a. To extract the values of each resistance component, different equivalent circuit models were constructed depending on the electrode system. The CFP-Li-10||CFP-Li-10 symmetric cell had a lithium-metal/LiF-Li2C2-modified layer, an SEI film, and an electrode/electrolyte interface. Bare Li, however, lacked R_rl, which represents the reaction-layer resistance, in its equivalent circuit diagram, owing to the absence of LiF modification on the surface. R_s, R_SEI, and R_ct represent the solution resistance, SEI film resistance, and charge transfer resistance, respectively. Figure S8. SEM images of bare Li after a) 1 cycle, b) 10 cycles, c) 100 cycles, and SEM images of CFP-Li-10 after d) 1 cycle, e) 10 cycles, and f) 100 cycles. For the slab geometric optimizations, a k-spacing value of 0.06 was used to generate the k-mesh. When the adsorption geometry is optimized, the bottom three layers are fixed and only the adsorbed atom and the surface layer are relaxed, so as to simulate, as closely as possible, the limited influence of the adsorbed atom on the crystal structure in the actual situation. Value of the Total Energy. In addition, electron spin was considered in the calculations, especially in the calculation of the Li-ion energy. In fact, the energy obtained regardless of spin is the difference between the total energy and the atomic energy defined in the pseudopotential file. Although this has no effect on the numerical trend, it does have an effect on the energy accuracy. Figure S10. XPS spectra of CFP-Li-10 and bare Li after a, b) 1 cycle and c, d) 100 cycles. Full Cells. Full cells with a higher mass loading of LFP (4 mg cm⁻²) were assembled and tested (with 80 μL of electrolyte). Full cells using CFP-Li-10 delivered a high reversible capacity of 156.5 mAh g⁻¹ (at 1 C) with considerable cycling stability (98.0% capacity retention over 170 cycles). In contrast, the bare Li||LFP cell showed visible capacity decay over 170 cycles, with a capacity retention of only 94.6%. Figure S11. Electrochemical performance of full cells containing bare Li or CFP-Li-10 at 1 C (at a current density of 170 mA g⁻¹). In situ Optical Microscopy Videos. Video S1 (bare Li.mp4): in situ optical microscopy video of bare Li at 0.5 mA cm⁻².
Maize opaque mutants are no longer so opaque The endosperm of angiosperms is a zygotic seed organ that stores nutrient reserves to support embryogenesis and seed germination. Cereal endosperm is also a major source of human calories and an industrial feedstock. Maize opaque endosperm mutants commonly exhibit opaque, floury kernels, along with other abnormal seed and/or non-seed phenotypes. The opaque endosperm phenotype is sometimes accompanied by a soft kernel texture and increased nutritional quality, including a higher lysine content, which are valuable agronomic traits that have drawn attention of maize breeders. Recently, an increasing number of genes that underlie opaque mutants have been cloned, and their characterization has begun to shed light on the molecular basis of the opaque endosperm phenotype. These mutants are categorized by disruption of genes encoding zein or non-zein proteins localized to protein bodies, enzymes involved in endosperm metabolic processes, or transcriptional regulatory proteins associated with endosperm storage programs. Introduction Endosperm is a product of double fertilization in the female gametophyte (embryo sac), and it functions as a nutritive organ to support embryogenesis and seedling development (Olsen and Becraft 2013; Olsen 2004). In cereals, the endosperm comprises the largest portion of the seed, and is a major source of food, feed, and industrial raw materials (Becraft and Gutierrez-Marcos 2012; Lopes and Larkins 1993). In maize, the endosperm initially differentiates into four main cell types, which are termed starchy endosperm (SE), aleurone (AL), embryo-surrounding region (ESR), and basal endosperm transfer layer (BETL) (Becraft and Gutierrez-Marcos 2012; Leroux et al. 2014; Zhan et al. 2015, 2017). Each cell type has unique morphological and functional properties. For example, the SE, as the central and largest portion of the endosperm, accumulates starch and storage proteins; AL is the epidermal cell layer that synthesizes hydrolases to mobilize the starch and storage proteins to support seedling establishment during germination; and the BETL mediates transport of nutrients into the kernel (Becraft and Gutierrez-Marcos 2012; Gontarek and Becraft 2017; Zhan et al. 2017). The proper development of these cell types is critical to the overall function of the endosperm and development of the seed. In typical dent maize varieties, the mature SE has two distinct areas, a peripheral, vitreous region, and a central, starchy region (Gibbon and Larkins 2005). The latter has poorer light transmittance than the former. If the vitreous region is unable to form during endosperm maturation, it generates an opaque or floury endosperm phenotype (Gibbon and Larkins 2005; Larkins et al. 2017). Over the past century, a variety of maize mutants with an opaque or floury endosperm phenotype have been identified and are termed "opaque" [e.g., opaque1-17 (o1, o2, o5-o7, o9-o11, o13-o17), recessive], "floury" [e.g., floury1-4 (fl1-fl4), semidominant], or other assorted names [e.g., defective endosperm (De)-B30 and Mucronate (Mc), dominant; mutator-tagged opaque 140 (mto140), recessive] (Gibbon and Larkins 2005; Larkins et al. 2017). The diversity of these mutations raises the question as to what forms the basis of the opaque endosperm phenotype (referred to as opaque phenotype hereafter).
The opaque phenotype is commonly associated with altered protein bodies (PBs) and/or starch granules; that is, mutants with an opaque phenotype display defective PB structure or function to varying degrees, depending on the causal gene. As an example, many opaque-type mutants (collectively referred to as opaque mutants hereafter) are defective in the expression or accumulation of prolamins, which are the most abundant seed storage proteins in maize (> 60%) (Wu and Messing 2014). Prolamins, called zeins in maize, are encoded by single and multiple gene families to produce four distinct classes of proteins: α-, β-, γ-, and δ-zeins (Coleman and Larkins 1999; Larkins et al. 2017). Recently, a number of genes that underlie the well-known opaque mutants have been characterized (Table 1). These mutants provide novel insights into the molecular mechanisms associated with the opaque kernel phenotype and the larger question of how endosperm development is regulated. The nature of the underlying genes generally falls into three main categories: (1) zein coding sequences (e.g., FL2, FL4, and Mc) and genes encoding non-zein proteins associated with PBs (e.g., O1, O10, and FL1); (2) genes encoding enzymes involved in endosperm metabolic processes (e.g., O5, O6, O7, and MTO140); and (3) transcriptional regulatory genes (e.g., FL3, O2, and O11; Table 1). The opaque endosperm mutant phenotype is often due to changes in storage protein synthesis, deposition, and metabolism Mutations in some storage-protein genes, or in genes associated with PB development and metabolism, can produce an opaque endosperm. Mutations in zein genes produce either a dominant or semidominant opaque or floury phenotype. Three mutants, fl2, fl4, and De-B30, were shown to result from point mutations in the signal peptide cleavage sites of a 22-kD α-zein (Gillikin et al. 1997) and two 19-kD α-zein proteins (Kim et al. 2004; Wang et al. 2014a), respectively. These mutations alter zein protein deposition and generate irregularly shaped PBs (Gillikin et al. 1997; Kim et al. 2004; Lending and Larkins 1992; Wang et al. 2014a). In contrast, the Mc mutant results from a 38-bp deletion in a 16-kD γ-zein gene, creating a frame shift in the coding sequence that gives rise to misshapen PBs (Kim et al. 2006). Knockdown of one or a combination of zein genes using RNA interference (RNAi) can also generate an opaque phenotype (Guo et al. 2013; Larkins et al. 2017; Segal et al. 2003). Therefore, the dominant or semidominant opaque phenotypes are primarily associated with loss of function of zein genes that are critical for proper PB formation. Mutations in some genes, including O1, O10, and FL1, can result in an opaque phenotype via alterations to proper PB assembly. O1 encodes a myosin XI motor-like protein (Wang et al. 2012); although zein protein synthesis is not detectably affected, the o1 mutant exhibits PBs that are smaller and somewhat misshapen compared to wild type (Wang et al. 2012). A role in the proper formation of PBs, particularly in the ring-shaped distribution of 22-kD α-zeins and the 16-kD γ-zein, has also been shown for O10 (Yao et al. 2016). The single o10 mutant allele described thus far (O10 encodes a cereal-specific PB protein) has a point mutation (a G-to-A transition) at the 3′ end of intron 6 (Yao et al. 2016). This mutation produces a truncated O10 protein, due to retention of intron 6 and premature termination of its synthesis (Yao et al. 2016).
The truncated protein is able to interact with a subset of α-zeins and γ-zeins through its N-terminal amino acid residues, like its wild-type counterpart, but it lacks the ability to localize to the ER and become deposited in PBs because it lacks the requisite C-terminal transmembrane domain (Yao et al. 2016). FL1 is also linked with PBs and zeins: it encodes a membrane protein that resides in the ER surrounding the PBs and likely facilitates the localization of 22-kD α-zeins in PBs (Holding et al. 2007). Opaque phenotypes have also resulted from mutations in genes encoding enzymes involved in metabolic processes that can be linked with zein protein accumulation and/or PB development. The o5 mutant is defective in a monogalactosyldiacylglycerol synthase (MGD1), which is required for normal amyloplast and chloroplast functions and the normal morphology of starch granules (Myers et al. 2011). O6 encodes a Δ1-pyrroline-5-carboxylate synthetase that catalyzes the synthesis of proline from glutamic acid (Wang et al. 2014b). The o6 mutant [also named proline responding 1 (pro1)] has been reported to block the biogenesis of proline, resulting in a general reduction in protein synthesis, an inhibition of cell proliferation, and an associated down-regulation of cyclin gene expression (Wang et al. 2014b). As expected, the synthesis of zeins (all of which are high in proline) is dramatically reduced in o6 (Wang et al. 2014b). The o7 mutant, which has been mapped to the AAE3 gene that encodes an acyl-activating enzyme-like protein, shows a preferential reduction of 19-kD α-zeins due to an unknown mechanism (Miclaus et al. 2011; Wang et al. 2011). Finally, the mto140 mutant, defective in the arogenate dehydrogenase 1 (AroDH-1) involved in tyrosine synthesis, has been shown to affect the accumulation of all families of zeins (Holding et al. 2010). The nature of this group of genes further supports a link between dysregulated zein gene expression and PB formation, with the consequent generation of an opaque endosperm phenotype. Mutations in regulatory genes associated with endosperm storage programs The best characterized transcription factor (TF) gene whose loss-of-function mutants can produce an opaque phenotype is Opaque-2 (O2). O2 is specifically expressed in the endosperm as early as 6 days after pollination and encodes a bZIP-family TF (Fig. 1) (Li et al. 2014; Schmidt et al. 1990). Previous studies showed that O2 directly regulates many target genes associated with storage functions, including zeins, through binding to a number of conserved cis-motifs collectively known as the O2 box (Cord Neto et al. 1995; Frizzi et al. 2010; Hartings et al. 2011; Hunter et al. 2002; Jia et al. 2007, 2013; Li et al. 2015; Muth et al. 1996; Schmidt et al. 1987, 1990, 1992; Zhang et al. 2015, 2016). O2 also functions together with a number of protein partners in this regulation (Bhat et al. 2004; Hwang et al. 2004; Jin et al. 2014; Pysh et al. 1993; Pysh and Schmidt 1996; Qiao et al. 2016; Vicente-Carbajosa et al. 1997; Wang et al. 1998; Yilmaz et al. 2009; Zhang et al. 2012). Except for a few genes that are primarily expressed in the endosperm (e.g., PBF and O2 itself), most of these proteins are encoded by genes that are ubiquitously expressed throughout the plant life cycle (Fig. 1). Within the endosperm, these genes show diverse spatial patterns of expression (Fig. 2).
These data indicate that the gene expression programs associated with an opaque phenotype are regulated both by TFs programmed for specialized roles in the whole endosperm, or even in individual compartments (cell types) of the endosperm, and by TFs that may have broader roles in transcriptional regulation in different developmental contexts.

PBF is a DOF-family TF that binds the prolamin box (P box) and co-regulates a subset of target genes with O2 (Hwang et al. 2004; Vicente-Carbajosa et al. 1997; Wang et al. 1998; Zhang et al. 2015, 2016). The O2 paralogs OHP1 and OHP2 can form heterodimers with O2 and have been shown to co-activate zein genes with O2 in a partially redundant manner (Zhang et al. 2015). However, O2 is considered the major regulator of the α-zeins, while the OHPs are key regulators of the 27-kD γ-zein gene (Zhang et al. 2015). In agreement with their roles as regulators of storage-protein gene expression, RNAi lines of PBF and the OHPs (PbfRNAi and OhpRNAi, respectively) show a reduction of zein synthesis and an alleviation of the opaque phenotype, and display additive or synergistic defects in combination with o2 mutants (Zhang et al. 2015, 2016). Unlike PBF and the OHPs, MADS47 is unable to activate zein expression on its own, but can synergistically activate zein gene transcription together with O2 (Qiao et al. 2016). RNAi lines of MADS47 show a reduction of zein synthesis and a decreased size of PBs (Qiao et al. 2016). In contrast, Taxilin has been shown to interact with O2 and modulate its transcriptional regulatory role by changing its subcellular distribution (Zhang et al. 2012). O2, GCN5 (a putative histone acetyltransferase), and ADA2 (a putative transcriptional adaptor protein) have been reported to interact with one another and co-regulate expression of target genes (Bhat et al. 2003, 2004).

The nkd mutants, which were initially identified based on defects in AL development, also show an opaque phenotype (Gontarek et al. 2016; Yi et al. 2015). NKD1 and NKD2 are recently duplicated genes that encode INDETERMINATE domain-family TFs, which can directly activate a number of genes that were also shown to be regulated by O2, including a 22-kD α-zein gene (Gontarek et al. 2016). NKD1 and NKD2 can also directly activate O2 itself (Gontarek et al. 2016). Furthermore, NKD1 expression can be directly activated by DOF3 (Qi et al. 2017). Both nkd mutants and RNAi lines of DOF3 (Dof3RNAi) exhibit defects in SE and AL cell differentiation (Gontarek et al. 2016; Qi et al. 2017; Yi et al. 2015). The NKD genes, DOF3, PBF, and O2 have recently been identified as direct targets of O11, a bHLH-family TF encoded by a gene expressed specifically in the endosperm (Fig. 1) (Feng et al. 2018). Moreover, O2 and O11 have been shown to antagonistically regulate a number of common target genes, including CYTOSOLIC PYRUVATE ORTHOPHOSPHATE DIKINASE1 (cyPPDK1) and cyPPDK2 (Feng et al. 2018). In addition to an opaque phenotype, the o11 mutant also manifests an abnormal interface between the endosperm and embryo, which is consistent with the detection of several ESR-specific genes (e.g., YODA, encoding a MAPKK kinase) as direct targets of O11 (Feng et al. 2018). Together, these observations suggest that cellular differentiation in AL, ESR, and internal SE cells could be coordinately regulated through an O11-DOF3-NKD1/2-O2-PBF regulatory axis.
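For readers who find it easier to reason about the O11-DOF3-NKD1/2-O2-PBF axis as a network, the short Python sketch below encodes the regulatory relationships described in this section as a directed graph and lists everything reachable from O11. It is a conceptual aid only, not code or data from the cited studies: the edge list is our simplified reading of the text, the node and function names are our own, and features such as heterodimerization, antagonism (e.g., O2 versus O11 on cyPPDK1/2), and cell-type context are intentionally omitted.

```python
# Purely illustrative: a simplified directed-graph view of the regulatory
# relationships summarized in this section. Edge directions mean
# "activates/regulates"; strengths and combinatorial effects are ignored.
from collections import deque

REGULATORY_EDGES = {
    "O11":  ["NKD1", "NKD2", "DOF3", "O2", "PBF"],   # Feng et al. 2018
    "DOF3": ["NKD1"],                                # Qi et al. 2017
    "NKD1": ["O2", "22-kD alpha-zein"],              # Gontarek et al. 2016
    "NKD2": ["O2", "22-kD alpha-zein"],
    "O2":   ["alpha-zeins"],                         # Schmidt et al. 1990 and others
    "PBF":  ["alpha-zeins"],                         # co-regulation with O2
    "OHP1": ["27-kD gamma-zein"],                    # Zhang et al. 2015
    "OHP2": ["27-kD gamma-zein"],
}

def downstream_of(factor):
    """Return all nodes reachable from `factor` in this simplified network."""
    seen, queue = set(), deque([factor])
    while queue:
        for target in REGULATORY_EDGES.get(queue.popleft(), []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

if __name__ == "__main__":
    # Everything downstream of O11 in this simplified view of the axis.
    print(sorted(downstream_of("O11")))
```

Running the sketch simply traverses the graph breadth-first, which makes explicit the point of the preceding paragraph: in this simplified reading, the storage-protein genes sit at the bottom of a hierarchy headed by O11.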
The nature of the genes regulated by O2 suggests that the O2-regulated gene network plays important roles in controlling kernel nutritional quality and yield (Zhang et al. 2016). Recent profiling of mutants and knockdown lines of O2 and its nuclear partners (o2, PbfRNAi, and OhpRNAi) showed that O2-network genes exhibit diverse spatial and temporal patterns of expression and diverse functionalities (Frizzi et al. 2010; Hartings et al. 2011; Hunter et al. 2002; Jia et al. 2007, 2013; Li et al. 2015; Zhang et al. 2016). As mentioned above, a key subset of target genes is the zein multigene family. Mutations in, or down-regulation of, some zein genes can improve the lysine deficiency of wild-type maize and thereby increase its nutritional quality (Frizzi et al. 2010; Hunter et al. 2002). The LKR/SDH gene, encoding a lysine-ketoglutarate reductase/saccharopine dehydrogenase, is activated by O2 (Kemper et al. 1999). This enzyme functions in the lysine degradation pathway during late endosperm development (Kemper et al. 1999). Endosperm-specific knockdown of LKR/SDH using RNAi resulted in up to a 20-fold increase in free lysine content (Houmard et al. 2007). Therefore, the increased lysine content of o2 mutants can be partially explained by down-regulation of the LKR/SDH gene. With respect to the potential role of O2 in regulating starch and protein content, a number of genes encoding enzymes in the starch synthesis pathway have been shown to be either directly [e.g., STARCH SYNTHASE III (SSIII)] or indirectly [e.g., SSIIa and STARCH-BRANCHING ENZYME1 (SBE1)] activated by O2 (Zhang et al. 2016). In addition, O2 can also transcriptionally activate b-32, which encodes an RNA N-glycosidase that likely functions as a defense-related protein by inhibiting protein synthesis through its ribosome-inactivating activity (Bass et al. 1992; Lohmer et al. 1991). The role of b-32 in the context of a regulatory program primarily associated with storage protein and starch accumulation remains elusive. A number of other questions also remain to be addressed with respect to the full scope of O2's role as a regulator of these diverse functionalities, including a detailed view of the associated gene regulatory networks and the full repertoire of its molecular partners.

Interestingly, a recent analysis of the fl3 mutant identified a PLATZ-family TF as a regulator of a subset of functionalities that overlap with those of O2 (Li et al. 2015). FL3 is preferentially expressed in SE cells, based on mRNA localization, and regulates many tRNAs, 5S rRNAs, and other genes involved in translation, ribosome assembly and function, the unfolded protein response (UPR), and nutrient reservoir activity (e.g., zein genes and starch biosynthetic pathway genes). The regulatory function of FL3 likely occurs through its interaction with components of the transcriptional machinery, including transcription factor class C 1 (TFC1) and RNA polymerase III subunit 53 (RPC53), two critical factors of the RNA polymerase III (RNAPIII) complex. Interestingly, fl3 exhibits a semidominant phenotype, which is likely due to its parent-of-origin-dependent expression pattern, with the maternal allele being expressed and the paternal allele silenced specifically in the endosperm.
As part of the effort to breed maize varieties with increased lysine content but a harder endosperm (in contrast to typical o2 mutants, which have a starchy, soft endosperm and are therefore more susceptible to damage by fungi and insects), a number of genetic suppressors of o2 (o2 modifiers) have been discovered that enabled development of "quality protein maize" (QPM), which manifests a high lysine content and a vitreous endosperm (Gibbon and Larkins 2005; Larkins et al. 2017). Genetic markers linked to o2 modifiers have been identified, including the 27-kD γ-zein, which has been suggested to play an essential role in modification of the o2 phenotype (Holding et al. 2011; Liu et al. 2016; Yuan et al. 2014). However, the underlying molecular mechanisms are yet to be fully elucidated.

Future perspectives

Analysis of maize opaque mutants indicates a tight association between a starchy endosperm phenotype of the kernel and altered storage protein deposition, that is, an altered size, number, and/or structure of endosperm PBs. Several recently published transcriptome analyses of opaque mutants revealed that many of them display altered accumulation of other storage products or dysregulated expression of genes associated with their synthesis and/or metabolism. For example, carbohydrate and lipid metabolism is perturbed in the o2, o7, and o11 mutants (Feng et al. 2018; Frizzi et al. 2010; Hartings et al. 2011; Jia et al. 2007, 2013; Li et al. 2015; Wang et al. 2011). Therefore, the opaque phenotype is often associated with perturbation of primary metabolism. Up-regulation of UPR genes is another common feature of opaque mutants. Generally, the UPR is a homeostatic response that alleviates ER stress caused by interference with protein folding or by adverse environmental conditions (Howell 2013). Recently, the UPR has been shown to be more active in the central endosperm (corresponding to the starchy region of the mature endosperm) than in the peripheral endosperm (corresponding to the vitreous region of the mature endosperm) (Gayral et al. 2017). Interestingly, opaque mutants, which are defective in forming vitreous endosperm, also show up-regulation of UPR genes. These include mutants with defective zein genes (e.g., fl2, fl4, and Mc), defective transcriptional regulators (e.g., o2 and fl3), and others (e.g., o1 and o5) (Gibbon and Larkins 2005; Hunter et al. 2002; Li et al. 2017). Together, these observations suggest a mechanistic connection between the opaque phenotype and processes associated with storage product metabolism and the UPR.

In addition to seed storage-function-associated biological processes (discussed above), many opaque mutants also show defects in developmental processes of both seed and non-seed tissues. First, the coincident perturbation of SE/AL/ESR differentiation and storage compound accumulation in the nkd, Dof3RNAi, and o11 mutants suggests coordinated regulation of these developmental processes, which may occur temporally in a partially overlapping manner. Second, in addition to their respective mutant phenotypes described above, the nkd mutants show pleiotropic seed phenotypes, including a multilayered, partially differentiated AL and occasional vivipary (Gontarek and Becraft 2017; Gontarek et al. 2016; Yi et al. 2015). Moreover, the o11 mutant exhibits abnormal embryo (scutellum) morphology (Feng et al. 2018).
These observations suggest that the opaque phenotype is also linked with other key seed developmental programs, such as mitotic proliferation of endosperm cells, embryogenesis, and seed maturation. However, whether these developmental processes are also dysregulated in other opaque mutants is unclear. Third, in contrast to other opaque mutants, which generally do not display vegetative defects, o5 mutant seedlings display a pale green or albino phenotype, the mto140 mutant shows slightly retarded vegetative growth, and o6 shows reductions in seedling height and root length that can be rescued by application of L-proline (Holding et al. 2010; Myers et al. 2011; Wang et al. 2014b). Therefore, detailed analyses of mutant phenotypes in seed and non-seed tissues, including the morphology and cytology of differentiating/differentiated endosperm cell types and the associated gene expression programs in comparison with wild type, will be needed to fully understand the relationship between the opaque phenotype and its cellular and molecular basis.

Recent studies of TFs (e.g., O2, O11, the NKDs, and FL3) whose corresponding mutants show an opaque phenotype have led to identification of additional regulatory proteins that function upstream, downstream, or as partners of these TFs. Some of these proteins are implicated in regulation of storage programs and/or other key programs of seed development or the endosperm's response to abiotic stress. For example, the NKDs regulate the Viviparous-1 (VP1) gene, which encodes an ABI3/VP1-family TF required for proper seed maturation and germination (Gontarek et al. 2016; Hoecker et al. 1995); O2 and O11 directly regulate the bZIP-family G-BOX BINDING FACTOR1 (GBF1) gene, which is involved in the response to hypoxia (Feng et al. 2018; Li et al. 2015; Vetten and Ferl 1995). In addition, the nuclear proteins GCN5, ADA2, and MADS47 interact with O2 to co-regulate downstream gene expression (Bhat et al. 2003, 2004; Qiao et al. 2016). Moreover, although the molecular mechanisms associated with o2 modification are still unclear, the o2 modifiers constitute potentially useful tools for understanding the genetic processes underlying the opaque phenotype. Further understanding of the molecular basis of the opaque phenotype will likely require elucidation of the function of additional regulatory hubs, particularly TF proteins and their respective networks, as well as other genetic factors (and their respective gene networks) that can modify the opaque, starchy endosperm phenotype.

Author contribution statement

SZ and JZ made the table and figures. SZ, JZ, and RY wrote the manuscript.